An Effort Estimation Model for Agile Software
Development
By
Muhammad Ijaz Khan
7431-D-04
A thesis is submitted in partial fulfillment of the requirements for the
degree of Ph.D. in Computer Science
Institute of Computing and Information Technology
Gomal University
Dera Ismail Khan
Pakistan
July 2020
CERTIFICATE OF APPROVAL
We, the Departmental Supervisory Committee, hereby certify that the contents and
form of dissertation submitted by Muhammad Ijaz Khan, Ph.D. in Computer Science,
Institute of Computing and Information Technology were checked and found
satisfactory. As per directions of the Higher Education Commission, the thesis of the
student was checked for plagiarism in which 12% similarities were found as per
report attached hereto which is within the acceptable range. Thus, the revised thesis is
submitted for notification.
Supervisory Committee
Name Signature
a) Dr. Ziauddin Supervisor (from the major field) _______________
b) Dr. Zubair Asghar Member (from the major field) _______________
c) Dr. Fazal Mehsud Member (from the minor field) _______________
Forwarded by
Dr. Ziauddin Chairperson/Director _______________
Dean _______________
DEDICATION
I dedicate this work to my wife and children. This would not have been
possible without your great patience and encouragement. I am very
grateful to you
List of Contents
S No. Description P No
1 Student’s Declaration…………………………………………… vii
2 List of Tables……………………………………………………… viii
3 List of Figures……………………………………………………... x
4 List of Abbreviations……………………………………………… xi
5 Acknowledgement………………………………………………… xii
6 Abstract……………………………………………………………. xiii
7 Chapter 1: Introduction
1.1 Overview 01
1.2 Problem Statement 01
1.3 Research Objective 02
1.4 Significance of The Study 02
8 Chapter 2: Estimation Models and Methods
2.1 Estimation Techniques in Traditional Software Development 03
2.2 Algorithmic Based Estimation 03
2.2.1. Putnam Model/SLIM Model 04
2.2.2. Function Point Analysis 06
2.2.3. COCOMO Model 08
2.3 Expert Opinion Method 14
2.3.1. Delphi Technique 14
2.3.2. Work Breakdown Structure 15
2.3.3. Analogy 17
2.3.4. Top-Down 19
2.3.5. Bottom-Up 20
2.4 Learning-Based Methods 21
2.5 Agile Software Development 22
2.5.1 Characteristics of Agile Process 22
2.6 Effort Estimation in Agile 23
2.7 Agile Estimation Derives At 3 Scales 26
2.8 Agile estimation workflow 27
v
9 Chapter 3: Review of Literature
3.1 Survey on Basic Software Effort Estimation 29
3.2 Survey on Traditional Software Estimation Techniques 32
3.3 Survey on Agile Software Effort Estimation 36
3.4 Survey on Story Point Approach for Agile Software Effort Estimation 37
10 Chapter 4: Proposed Models and Methods
4.1 Research Question 41
4.2 Proposed Model 41
4.3 User Story Size 43
4.4 User Story Complexity 43
4.5 Team Velocity 46
4.6 Optimization of Velocity 46
4.7 Friction Forces 46
4.8 Dynamic Forces 47
4.9 Effort Estimator 48
4.10 Uncertainty 49
4.11 Evaluation Procedure and Experimental Analysis 49
11 Chapter 5: Results and Discussion
5.1 User Story Size Determination: SLR 52
5.1.1 Research Question (RQ) 52
5.1.2 Search Strategy and Query String 52
5.1.3 Study Selection Criteria 53
5.1.4 Study Selection Process 53
5.1.5 Quality Assessment (QA) 54
5.1.6 Results 54
5.1.7 User Story Size Guidelines 55
5.2 User Story Complexity Determination: SLR 56
5.2.1 Research Question (RQ) 56
5.2.2 Search Strategy 56
5.2.3 Primary and Secondary Search Strategies 57
5.2.4 Study Selection Criteria 57
5.2.5 Study Selection Process 58
5.2.6 Quality Assessment (QA) 58
5.2.7 Results 58
5.3 Regression Analysis of Survey Conducted for User Story Complexity Factors 59
5.4 Quantification of User Story Complexity Factors 60
5.5 User Story Complexity 61
5.6 User Story Effort Estimation 62
5.7 Friction Forces Determination: SLR 62
5.7.1 Research Question (RQ) 62
5.7.2 Search Strategy and Query String 62
5.7.3 Study Selection Criteria 64
5.7.4 Study Selection Process 64
5.7.5 Quality Assessment 64
5.7.6 Results 64
5.7.7 Friction Forces Quantification 66
5.8 Dynamic Forces Determination: SLR 67
5.8.1 Research Question 67
5.8.2 Search Strategy and Query String 67
5.8.3 Study Selection Criteria 68
5.8.4 Study Selection Process 68
5.8.5 Quality Assessment 69
5.8.6 Results 69
5.8.7 Dynamic Forces Quantification 70
5.9 Team Velocity 71
5.10 Optimization of Team Velocity 71
5.11 Effort Estimator 71
5.12 Development Cost 72
5.13 Uncertainty of Calculation 73
5.14 Model Summary 73
5.15 Accuracy Evaluation 75
5.16 Experimental Analysis 76
5.17 Summary of Experimental Analysis 87
5.18 Critical Analysis 88
5.19 Conclusion and Future Work 88
12 References 89
Student’s Declaration
I, Muhammad Ijaz Khan, do hereby state that my Ph.D. thesis titled “An Effort
Estimation Model for Agile Software Development” is my own work and has not been
submitted previously by me for taking any degree from Gomal University, Dera
Ismail Khan or anywhere else in the country/world.
I understand the zero-tolerance policy of the HEC and Gomal University, Dera Ismail
Khan towards plagiarism. Therefore, I declare that no portion of my thesis has been
plagiarized and any material used as reference is properly cited.
I undertake that if I am found guilty of any formal plagiarism in the above titled thesis
even after award of Ph.D. degree, the university reserves the rights to
withdraw/revoke my Ph.D. degree and that HEC has the right to publish my name on
the website on which names of students are placed who submitted plagiarized work.
Name of Student Signature_____________ Date__________
Name of Supervisor Signature_____________ Date__________
List of Tables
Table No Description Page No
2.1 Overview of Function Point Analysis (Albrecht, 1979) 6
2.2 Basic COCOMO Coefficients 8
2.3 Effort Multipliers in Intermediate COCOMO 9
2.4 Intermediate COCOMO Coefficient 10
2.5 Effort Multipliers in COCOMO-II 12
4.1 Questionnaire 45
4.2 Friction Forces 47
5.1 Keywords for User Story Size SLR 52
5.2 List of online Databases for User Story Size SLR 53
5.3 Quality Assessment Checklist adopted by [19,12, 16] 54
5.4 Papers in Study selection and QA (User Story Size SLR) 55
5.5 Story Size guidelines, their frequency and size 55
5.6 Keywords for User Story Complexity Factors (SLR) 56
5.7 Search Results for User story Complexity Factors (SLR) 57
5.8 Papers in Study Selection and QA (User Story Complexity Factors SLR) 58
5.9 User Story Complexity Factors' Weights on the Basis of Researcher Opinion 59
5.10 Coefficient and significance of Story Complexity Factors 60
5.11 Story Complexity Factors Quantification 61
5.12 User Story Complexity 62
5.13 keywords for Friction Forces (SLR) 63
5.14 Search Result for Friction Forces (SLR) 64
5.15 Papers in Study selection and QA (Friction Forces SLR) 65
5.16 Friction Forces Affecting Software Effort Estimation In ASD 65
5.17 Friction Forces weights 66
5.18 Keywords for Dynamic Forces (SLR) 67
5.19 Database Search Result Before and After Duplication (Dynamic Forces SLR) 68
5.20 Papers in Study selection and QA 69
5.21 Dynamic Forces Affecting Software Effort Estimation in ASD 70
5.22 Dynamic Forces and Their Weights 70
5.23 Agile Team Salary and other Cost Heads 72
List of Figures
Figure No Description Page No
2.1 Putnam Time-Effort Curve 5
2.2 WBS Product Hierarchy 16
2.3 WBS Activity Hierarchy 16
2.4 A Top-down Estimate (Roberts, 1997) 19
2.5 A Top Down and Bottom Up Estimate (Roberts, 1997) 20
2.6 Neural Network for Software Cost Estimation (Boehm et al., 2000b) 21
2.7 Effort Estimation in Agile Software Development 24
2.8 Agile Estimation Overview 25
2.9 Agile Estimation Workflow 27
4.1 Overview of Effort Estimation Model in ASD 42
List of Abbreviations
SEE Software Effort Estimation
ASD Agile Software Development
USP User Story Points
CPA Class Point Approach
UCP Use Case Point Approach
ISBSG International Software Benchmarking Standards Group
SPA Story Point Approach
ML Machine Learning
DT Decision Tree
MAE Mean Absolute Error
MSE Mean Square Error
MMRE Mean Magnitude of Relative Error
MMER Mean Magnitude of Error Relative to the estimate
RMSE Root Mean Square Error
PRED Prediction Accuracy
SLIM Software Life-cycle Management
FP Function Point
COCOMO Constructive Cost Model
SLOC Source Line of Code
KSLOC Kilo Source Line of Code
IFPUG International Function Point Users Group
FPA Function Point Analysis
UML Unified Modeling Language
TUCP Total Unadjusted Class Point
TCF Technical Complexity Factor
ACP Adjusted Class Point
UAW Unadjusted Actor Weight
UUCW Unadjusted Use Case Weight
EF Environmental Factor
AFP Adjusted Function Point
Acknowledgement
First of all, I am very thankful to Allah Almighty who gave me courage and helped
me to complete this task.
I am very grateful to my supervisor and teacher Dr. Zia-ud-Din, who helped me
throughout this research. Whenever I needed help, he guided me, and I learned a
great deal from him. He is a capable teacher and an excellent researcher.
I am also very grateful to Mr. Tayyab Mughal, Mr. Irfan Babar and all the
Organizations who provided me with data for this research.
I am very grateful to all of my colleagues especially Mr. Tariq Naeem, Mr. Saqib, Mr.
Fahim ullah Kundi, Mr. Abid Ali and Mr. Farhan who always supported me.
I am also very grateful to Prof. Dr. Iftikhar Ahmad, Vice Chancellor, Gomal
University, who has developed research-friendly policies at the University and
provided a conducive environment for research in this COVID-19 Pandemic.
I am also very grateful to all my friends and family members who have always been
with me, prayed for me and endured me during this research.
Abstract
Effort estimation is the process of forecasting the size, effort and schedule of a
software project. An estimate is made before any software project starts; it is
necessary to evaluate the project and obtain its approval. The process is critical
because the success or failure of a project depends heavily on the accuracy of this
estimate.
Agile is a relatively new and innovative software development approach in which
business requirements are captured in the form of user stories. Many user-story-based
models exist for estimating software projects in Agile Software Development, yet still
about seventy-five percent of projects fail due to miscalculation.
The current research explores the factors that cause estimates to be inaccurate, and
how each of these factors affects estimation. The factors include user story size,
story complexity, friction forces, dynamic forces, software quality and agile team
velocity.
For each of these factors, a systematic literature review (SLR) was conducted first.
User story size is categorized and weights are assigned; then user story complexity
factors and their impact on estimation are explored. These complexity factors are
quantified on the basis of researcher and expert opinion, using meta-analysis and
regression techniques. The impact of friction forces, dynamic forces and required
software quality is explored and quantified in the same way as story complexity, and
agile team velocity is optimized on the basis of the friction and dynamic forces.
Chapter 1: Introduction
1.1. Overview
Software estimation is the process of forecasting the size of a software product, the
required development effort, the project schedule, and the overall cost of the project.
Accurately estimating cost is among the most critical and challenging tasks in project
management: for successful software development, the required resources and schedules
need to be estimated accurately [134][20].
It is widely acknowledged that nearly 3 out of 4 projects overrun their budget, their
schedule, or both, and the CHAOS summary reports have repeatedly described declining
project success rates [16]. Predicting development cost, time and effort accurately is
among the most critical and complex issues in software development: project managers,
system analysts and developers all need such predictions to make good management
decisions, and without them a project can fail completely. Large overruns are believed
to occur mainly because of inaccurate estimation.
The overall cost estimation process for a software project is not fundamentally
different from cost estimation in any other engineering discipline, but some aspects
are peculiar to software because of the nature of the product and of software
estimating methodologies. Every estimation method uses different parameters to predict
cost, and software is invisible, intangible and intractable, which makes it harder to
understand and to forecast. Furthermore, every software system differs somewhat from
every other, leading to a different set of characteristics each time.
1.2. Problem Statement
Predicting development cost, time and effort accurately enough to support good
management decisions is the most critical and complex issue in software development
for project managers, system analysts and developers alike; inaccurate predictions can
lead to complete failure, and large overruns are believed to occur mainly because of
inaccurate estimation.
Different software cost estimation methods have been developed, including algorithmic
methods, non-algorithmic methods, estimation by analogy, the price-to-win method,
expert opinion methods, the top-down method, and the bottom-up method. Using these
methods, different cost estimation models have been developed and are successfully
used in different environments; however, some of these models suit certain
methodologies well but fail for others. Some are tool dependent, others methodology
dependent. All existing models have limitations, and no single model can be applied in
every environment.
Hence a software effort estimation model needs to be developed for the agile software
development methodology: one that takes different aspects of the product as input and
calculates estimated effort from user stories, considering their nature, complexity,
expected future changes and the required quality attributes.
1.3. Research Objective
The objective of this research is to develop a software effort estimation model for
the agile software development methodology that takes different aspects of the product
as input and calculates estimated effort from user stories, considering their nature,
complexity, expected future changes and the required quality attributes.
1.4. Significance of the Study
Agile methodology is widely practiced by the software industry, yet no credible
software effort estimation model exists for it. The existing estimation models have
many limitations and ignore basic factors that affect effort estimation in Agile
Software Development.
This research will help software industry in solving this particular problem. Furthermore, it will
provide a guideline for the future researchers to improve the existing estimation models or to
develop better and effective effort estimation models for Agile Software Development.
Chapter 2: Estimation Models and Methods
This chapter reviews software estimation models and methods. The review covers
traditional as well as agile software estimation techniques.
2.1. Estimation Techniques in Traditional Software Development
Software effort estimation is one of the biggest challenges for project managers,
customers and development teams. Even in traditional development, where requirements
are well defined and not subject to change, it is difficult for a manager to precisely
estimate the effort, time, and therefore the budget needed to develop a software
system [83, 15, 30, 84].
Different software effort estimation methods have been developed, including
algorithmic methods, non-algorithmic methods, analogy-based estimation, price-to-win,
expert opinion methods, and the top-down and bottom-up methods, as discussed in
[92, 35, 130, 103, 122, 133].
Using these methods, different cost estimation models have been developed and are
successfully used in different environments; however, some of these models suit
certain methodologies well but fail for others [114][109][102]. Some are tool
dependent, others methodology dependent. All existing models have limitations, and no
single model can be applied in every environment.
The following sections present the estimation methods most widely used by software
organizations owing to their success and simplicity [23][67][74].
2.2. Algorithmic Based Estimation
This method uses mathematical equations to perform effort estimation [110][128]. The
equations take various cost factors as inputs, such as product, machine and people
factors; the input parameters and adjustment factors are derived from previously
completed projects. The method requires calibration of the data and input parameters
for the specific development environment [25]. It is generally considered more
accurate than estimation by analogy or expert opinion. However, it is very difficult
to quantify all of the cost factors involved, and many of them are ignored in some
software projects. The major disadvantage of these methods is the inconsistency of
their estimates: the study conducted by Kemerer [91] indicates that the difference
between predicted and actual effort can be as much as 85-610%. Calibrating the model
to the specific development environment improves accuracy, but even then these
methods can produce 50-100% errors, and calibration is an additional overhead.
Various models have been developed on the basis of this method; some of the most
famous include the COCOMO models, the Putnam model, and Function Point Analysis (FPA)
[82][116].
2.2.1. Putnam Model/Slim Model
In the 1970s, Lawrence Putnam developed an empirical model known as the Putnam model
[104]. The model uses the Rayleigh curve to relate the development effort and time of
a software project. Putnam's company later developed a proprietary tool suite based on
the model, which he named SLIM.
The main equation of this model is:

S = E × (Effort)^(1/3) × td^(4/3)

where
• td = delivery time
• E = environmental factor
• S = size of product in ESLOC
• Effort = total development effort in person-years
Another significant equation of this model is:

Effort = D0 × td^3

where D0 is the manpower build-up parameter. It ranges from 8 (a new system with many
interfaces) to 27 (rebuilding an old system). A new software system takes the most
time and effort to develop, while rebuilding an old system takes the least, since a
large portion of the code and logic already exists; a new standalone system and other
combinations lie in between.
By combining the above equations, we obtain the following relations for effort and
schedule calculation:

Effort = D0^(4/7) × E^(-9/7) × S^(9/7)

and

td = D0^(-1/7) × E^(-3/7) × S^(3/7)
The Putnam model expresses effort as a function of time, producing the time-effort
curve shown in Figure 2.1. Each dot on the curve shows the assessed development effort
at a specific delivery time.
Figure 2.1: Putnam Time-Effort Curve
The Putnam model needs calibration with data from previously completed projects; if
data from similar projects is not available, a set of calibration questions is used
instead. Calibration simplicity is one of the model's major advantages: regardless of
maturity level, most software organizations can readily collect effort, scope and time
figures for past projects.
2.2.2. Function Point Analysis
In 1979 Allan Albrecht was tasked by his employer, IBM, with measuring project
productivity [6]. He devised a new way of finding the size of a software project by
linking it to the functionality the software delivers, as he strongly felt the need
for an alternative to LOC as a measure of software size. "He [Allan A.] argued that
the business output unit of the software project ought to be legitimate for all
languages and ought to be matters of anxiety to the user of software. In short, he
wished to measure the functionality of the software" [119].
After extensive research, Albrecht devised a way to measure software applications
uniformly based on five main attributes:
1. Application inputs
2. Application outputs
3. Provision for queries
4. Internal data stores
5. External interfaces
These five attributes are platform independent and can be easily identified for the
majority of software applications; all of them are clearly visible to the client and
hence tangible. In 1979 Albrecht presented a paper on his research findings at an IBM
conference, introducing an advanced estimation method for sizing software applications
called Function Point Analysis (FPA). The method became widely accepted for finding
the size, effort, productivity and defect density of software applications.
The function point (FP) is the unit used to measure software application size; size
measured in FP is proportional to the application's functionality. In FPA the
functionality of the application is identified and categorized according to the five
attributes above. Each function is then rated for complexity (simple, average, or
complex) and allocated FPs on that basis. The total FP count is then adjusted using 14
general processing characteristics, and the development cost (in hours or money) per
unit is derived from previous projects.
Level of Information Processing Functions (weights by complexity):

Type | Description | Simple | Average | Complex
IT | External Input | 3 | 4 | 6
OT | External Output | 4 | 5 | 7
FT | Logical Internal File | 7 | 10 | 15
EI | External Interface File | 5 | 7 | 10
QT | External Inquiry | 3 | 4 | 6

General Processing Information Characteristics:
C1 Data communications          C8 Online updates
C2 Distributed functions        C9 Complex processing
C3 Performance                  C10 Reusability
C4 Heavily used configuration   C11 Installation ease
C5 Transaction rate             C12 Operational ease
C6 Online data entry            C13 Multiple sites
C7 End-user efficiency          C14 Facilitate change

Degree of Influence (DI) values:
0 = Not present / no influence
1 = Insignificant influence
2 = Moderate influence
3 = Average influence
4 = Significant influence
5 = Strong influence throughout

FC (function count) = total unadjusted function points
PC (process complexity) = total degree of influence (sum of the 14 DI values)
PCA (process complexity adjustment) = 0.65 + 0.01 × PC
FP (function point measure) = FC × PCA

Table 2.1: Overview of Function Point Analysis (Albrecht, 1979)
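The calculation summarized in Table 2.1 can be sketched as follows. The function names
and dictionary layout are illustrative, but the weights, the 0.65 + 0.01 × PC
adjustment, and the FC × PCA product come straight from the table.

```python
# Weights from Table 2.1: (simple, average, complex) per function type.
FP_WEIGHTS = {
    "external_input":        (3, 4, 6),
    "external_output":       (4, 5, 7),
    "logical_internal_file": (7, 10, 15),
    "external_interface":    (5, 7, 10),
    "external_inquiry":      (3, 4, 6),
}
LEVELS = {"simple": 0, "average": 1, "complex": 2}

def function_points(counts, influence_ratings):
    """counts: {(function_type, level): number_of_functions}
    influence_ratings: the 14 degree-of-influence values, each 0..5."""
    assert len(influence_ratings) == 14
    assert all(0 <= di <= 5 for di in influence_ratings)
    fc = sum(n * FP_WEIGHTS[ftype][LEVELS[level]]
             for (ftype, level), n in counts.items())  # unadjusted FP (FC)
    pc = sum(influence_ratings)                        # total degree of influence
    pca = 0.65 + 0.01 * pc                             # process complexity adjustment
    return fc * pca                                    # adjusted function points
```

Because PC ranges from 0 to 70, the adjustment PCA stays between 0.65 and 1.35, so the
general characteristics can swing the raw count by at most ±35%.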
Each identified function is then mapped to client business functionality. Functions
estimated in FP map readily onto customer-oriented requirements, but the approach
hides internal functions such as algorithms, which also need resources to implement.
Later, several variations were made to function point analysis to make it more
effective. For instance, Feature Point Analysis extends FP by including internal
functionality: each algorithm (a routine that solves a substantial computational
problem) is assigned a weight from 1 to 10, where 1 denotes a basic and 10 a complex
algorithm. The total feature point count is the sum of the FPs and the algorithm
weights. This method is useful for systems with little input/output but high
algorithmic complexity.
2.2.3. COCOMO Model:
COCOMO is an algorithmic estimation model developed by Barry Boehm [32]. It was
developed in the 1970s at TRW Aerospace on the basis of a study of 63 software
development projects, all of which used the waterfall model and procedural languages.
In the 1990s COCOMO was improved and COCOMO II was introduced, which could estimate
modern software development projects and processes [69].
Basically, COCOMO uses mathematical equations to find the cost of a software
development project. Parameters are obtained from past project data and then adjusted
according to the current project's attributes. The original COCOMO comes in three
forms: basic, intermediate, and detailed. These three forms and COCOMO II are detailed
in the following subsections.
Modes of Software Development:
COCOMO divides software development projects into three modes on the basis of
development complexity. All three modes use the same cost-estimation relationship but
generate different estimates for projects of the same size.
Organic: the problem is well understood and familiar to the development team, much
data is available from past projects, and a small development team suffices.
Semi-detached: the development team comprises a mixture of experienced and
inexperienced staff and has less experience with the kind of project being developed.
Embedded: the software system is more complex and strongly coupled to hardware; more
creativity and high-level experience are required to develop such projects.
Types of Model:
Basic COCOMO:
Basic COCOMO is used to estimate small to medium projects. It easily gives quick,
somewhat rough estimates, but its precision is limited by its simplicity and its lack
of sufficient cost factors. Effort is calculated as a function of software size,
expressed in KDSI, using the equations below.
Effort (man-months) = a_b × (KDSI)^(b_b)
Development Time (months) = c_b × (Effort)^(d_b)
Productivity = KDSI / Effort
Average Staffing = Effort / Development Time
The coefficients a_b, b_b, c_b, d_b are selected according to the following table.
Table 2.2: Basic COCOMO Coefficients
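Since the body of Table 2.2 is not reproduced above, the sketch below uses the
standard published Basic COCOMO coefficients (Boehm, 1981) for the three development
modes, which is what the table contained.

```python
# Published Basic COCOMO coefficients: mode -> (a_b, b_b, c_b, d_b).
BASIC_COEFFS = {
    "organic":      (2.4, 1.05, 2.5, 0.38),
    "semidetached": (3.0, 1.12, 2.5, 0.35),
    "embedded":     (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(kdsi: float, mode: str) -> dict:
    """Apply the four Basic COCOMO equations for the given mode."""
    a, b, c, d = BASIC_COEFFS[mode]
    effort = a * kdsi ** b          # person-months
    dev_time = c * effort ** d      # months
    return {
        "effort_pm": effort,
        "time_months": dev_time,
        "productivity": kdsi / effort,      # KDSI per person-month
        "avg_staffing": effort / dev_time,  # people
    }
```

For the same size in KDSI, the embedded mode yields noticeably more effort than the
organic mode, reflecting the harder development environment.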
Basic COCOMO omits many important factors, including constraints, personnel capability
and skill, and other project characteristics that may significantly affect development
cost.
Intermediate COCOMO:
Intermediate COCOMO uses program size and 15 cost factors to calculate the software
development cost. These cost factors fall into four groups, given in the following
table.
Ratings: VL = very low, L = low, N = nominal, H = high, VH = very high, XH = extra high.

Product attributes
  Required software reliability: VL 0.75, L 0.88, N 1.00, H 1.15, VH 1.40
  Size of application database: L 0.94, N 1.00, H 1.08, VH 1.16
  Complexity of product: VL 0.70, L 0.85, N 1.00, H 1.15, VH 1.30, XH 1.65
Hardware attributes
  Run-time performance constraints: N 1.00, H 1.11, VH 1.30, XH 1.66
  Memory constraints: N 1.00, H 1.06, VH 1.21, XH 1.56
  Volatility of the virtual machine environment: L 0.87, N 1.00, H 1.15, VH 1.30
  Required turnaround time: L 0.87, N 1.00, H 1.07, VH 1.15
Personnel attributes
  Analyst capability: VL 1.46, L 1.19, N 1.00, H 0.86, VH 0.71
  Applications experience: VL 1.29, L 1.13, N 1.00, H 0.91, VH 0.82
  Software engineering (programmer) capability: VL 1.42, L 1.17, N 1.00, H 0.86, VH 0.70
  Virtual machine experience: VL 1.21, L 1.10, N 1.00, H 0.90
  Programming language experience: VL 1.14, L 1.07, N 1.00, H 0.95
Project attributes
  Application of software engineering methods: VL 1.24, L 1.10, N 1.00, H 0.91, VH 0.82
  Use of software tools: VL 1.24, L 1.10, N 1.00, H 0.91, VH 0.83
  Required development schedule: VL 1.23, L 1.08, N 1.00, H 1.04, VH 1.10

Table 2.3: Effort Multipliers in Intermediate COCOMO
The project manager rates each cost driver for the particular project on a scale
ranging from "very low" to "extra high", and the corresponding values are taken from
the table above. The 15 values are then multiplied together to give the effort
adjustment factor (EAF), and the basic COCOMO equation is multiplied by the EAF to
obtain the following equation.
Effort = a_i × (KDSI)^(b_i) × EAF
As in basic COCOMO, the resulting effort is also used to find development time, phase
distribution, development schedule and activity distribution. The coefficients a_i and
b_i are given in the following table.
Table 2.4: Intermediate COCOMO Coefficient
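Putting the pieces together, a minimal sketch of intermediate COCOMO follows. The
driver subset and rating labels are illustrative, and since the body of Table 2.4 is
not reproduced in this copy, the (a_i, b_i) pairs are the standard published
intermediate COCOMO coefficients.

```python
# Illustrative subset of the Table 2.3 effort multipliers.
EFFORT_MULTIPLIERS = {
    "reliability":        {"low": 0.88, "nominal": 1.00, "high": 1.15, "very_high": 1.40},
    "product_complexity": {"low": 0.85, "nominal": 1.00, "high": 1.15, "very_high": 1.30},
    "analyst_capability": {"low": 1.19, "nominal": 1.00, "high": 0.86, "very_high": 0.71},
    "schedule":           {"low": 1.08, "nominal": 1.00, "high": 1.04, "very_high": 1.10},
}
# Published intermediate COCOMO coefficients: mode -> (a_i, b_i).
INTERMEDIATE_COEFFS = {"organic": (3.2, 1.05), "semidetached": (3.0, 1.12), "embedded": (2.8, 1.20)}

def intermediate_cocomo(kdsi: float, mode: str, ratings: dict) -> float:
    """ratings: {driver: rating}; unrated drivers count as nominal (1.00)."""
    eaf = 1.0
    for driver, rating in ratings.items():  # EAF = product of rated multipliers
        eaf *= EFFORT_MULTIPLIERS[driver][rating]
    a, b = INTERMEDIATE_COEFFS[mode]
    return a * kdsi ** b * eaf              # person-months
```

Note how the multipliers pull in both directions: a very high reliability requirement
inflates the estimate, while a very capable analyst team deflates it.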
Detailed COCOMO:
Detailed COCOMO uses the same estimation equation as intermediate COCOMO, but
additionally applies the cost drivers' impact to each project phase: every phase uses
a different effort multiplier for each cost driver attribute. The software is
partitioned into modules, COCOMO is applied to each module, and the module efforts are
summed to get the total effort for the whole project.
Detailed COCOMO spans six development phases.
COCOMO-II:
This is the most advanced version of the COCOMO model, developed by Boehm and
published in 2000 [69]. The original COCOMO was very successful in traditional
software engineering, but as software engineering changed it could no longer be
applied to the newer development practices; COCOMO II was developed to cope with
modern software engineering methods.
There are three different forms of COCOMO-II:
• The application composition model.
This model is suitable for calculating the time and effort of projects built with
modern GUI-builder tools for RAD.
• The early design model.
This model can be used for early estimates of project cost and duration, before the
entire architecture has been determined and when only limited information is
available. It is based on function points (or LOC when available), seven cost factors
and five scale factors.
• The post-architecture model.
This is the most detailed COCOMO II model, used once the high-level design is
complete. It has new line-counting rules, new equations and new cost drivers. It takes
setup factors and software size as input and estimates effort in person-months. The
estimate can be made more accurate by taking further factors into consideration, such
as team qualification and experience and the development environment.
Attribute Category: Cost Drivers
Product attributes: Required Software Reliability; Database Size; Product Complexity;
Required Reusability; Documentation Match to Life-Cycle Needs
Platform attributes: Execution Time Constraints; Main Storage Constraints; Platform
Volatility
Personnel attributes: Analyst and Programmer Capability; Application Experience;
Platform Experience; Language and Tool Experience; Personnel Continuity
Project attributes: Use of Modern Programming Practices; Use of Software Tools;
Multisite Development; Required Development Schedule; Classified Security Application
Table 2.5: Effort Multipliers in COCOMO-II
COCOMO-II uses the following equation:

Effort (person-months) = A × (Size)^E × ∏ EM_i

In this equation A is a calibration factor, which can be adjusted with the
organization's past project data, and the EM_i are the effort multipliers. The
exponent E depends on five scale factors: precedentedness, development flexibility,
architecture/risk resolution, team cohesion, and process maturity.
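The post-architecture calculation can be sketched as below. The calibration constants
A = 2.94 and B = 0.91 are the published COCOMO II.2000 defaults, and the example
ratings are hypothetical.

```python
A, B = 2.94, 0.91  # COCOMO II.2000 calibration constants

def cocomo2_effort(size_ksloc, scale_factors, effort_multipliers):
    """Post-architecture effort in person-months.
    scale_factors: the five SF ratings (precedentedness, development
    flexibility, architecture/risk resolution, team cohesion, process
    maturity), each a non-negative number.
    effort_multipliers: cost-driver multipliers, nominal = 1.00."""
    assert len(scale_factors) == 5
    e = B + 0.01 * sum(scale_factors)  # scale exponent E
    pm = A * size_ksloc ** e
    for em in effort_multipliers:      # apply each effort multiplier
        pm *= em
    return pm
```

Note the diseconomy of scale: once the summed scale factors push E above 1.0, doubling
the size more than doubles the effort.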
2.3. Expert Opinion Method
In this technique, one or more experts in both software development and the
application domain use their understanding and experience of the proposed project to
arrive at an estimate of its cost [66]. The method depends entirely on the experts'
experience in similar development environments and on accurate historical data from
completed projects. However, the study carried out by Vigder and Kark [61] indicates
that most cost estimators do not refer to previous projects, as it is difficult for an
expert to see how that information improves the accuracy of an estimate. If the
opinions of more than one expert are used, a weighted average of their estimates is
taken.
Although this method is used extensively, it has a poor reputation because it is subjective and unstructured, which makes it appear weak in comparison with more structured methods [26] [51] [88].
Some popular estimation models developed on the basis of this approach include CA-ESTIMACS, SPQR/20, PMS/Bridge, Checkpoint, BYL (Before You Leap), Estimate Pro, Quest for Better Estimates (Quest FBE) [38], Delphi and Wideband Delphi [14]. Some of these methods have been used successfully for years in different types of environments. Among these methods, Checkpoint, CA-ESTIMACS and PMS/Bridge produced very good estimates with respect to estimating the extent of functionality, early estimation validity and project planning. These methods gave good results for traditional software methodologies such as waterfall, spiral, RAD and SDLC methodologies, but they could not cope with modern methodologies like Agile, RUP, XP and the Crystal methods.
Some of the most important expert-based techniques are presented in the following section.
2.3.1. Delphi Technique
Delphi is the most famous technique based on expert opinion. It is named after the ancient Greek Oracle of Delphi, which was believed to predict the future. The technique involves the collection and aggregation of expert opinion through a series of iterative questionnaires, meetings and surveys to reach group consensus. It was originally developed by the RAND Corporation in the 1950s to predict the impact of warfare [41][42]. However, it can equally be applied to many other fields, including effort estimation for software development projects (SDP).
The original Delphi technique lacked group debate, whereas Wideband Delphi adds group discussion, greater interaction and more communication between assessment rounds [32]. This technique is very useful when no empirical data is available and estimation is based purely on expert opinion. The following steps are involved in Wideband Delphi for effort estimation of a software development project.
1) Product specification forms are presented to all experts.
2) A group meeting is called by the project manager in which the goals are discussed by the experts.
3) Each expert fills out the form.
4) A summary of the estimates is prepared and distributed.
5) A group meeting is called to discuss the points where the experts' estimates vary widely.
6) The experts fill out the forms again. Steps 4 to 6 are repeated until consensus is reached among the experts' estimates.
If complete consensus proves impossible during the iteration process, an average score is obtained, which is considered a reasonably reliable estimate. Reliable results can be obtained using the Delphi method because it involves many experts. The method can be applied in situations where the project manager has little expertise.
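The consensus loop of Wideband Delphi (the repeated summarize-and-re-estimate rounds) can be sketched as follows; the "pull toward the group mean" factor is an invented model of how discussion moves experts, not part of the technique's definition:

```python
# Toy simulation of Wideband Delphi rounds: experts re-estimate after seeing
# the group summary until their estimates agree within a tolerance.
# The 0.5 "pull toward the mean" factor is a hypothetical model of discussion.

def delphi_consensus(initial_estimates, tolerance=1.0, max_rounds=20):
    estimates = list(initial_estimates)
    for round_no in range(1, max_rounds + 1):
        mean = sum(estimates) / len(estimates)
        if max(estimates) - min(estimates) <= tolerance:
            return mean, round_no          # consensus reached
        # Each expert revises halfway toward the group mean after discussion.
        estimates = [e + 0.5 * (mean - e) for e in estimates]
    return sum(estimates) / len(estimates), max_rounds  # fall back to average

# Four experts give initial estimates in person-days.
estimate, rounds = delphi_consensus([12, 20, 35, 16])
print(round(estimate, 1), rounds)
```

The fallback to the plain average mirrors the rule stated above: if complete consensus is impossible, the average score is taken as the estimate.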
2.3.2. Work Breakdown Structure:
In the work breakdown structure the whole complex system is divided into smaller, manageable elements that can be easily managed and estimated [100]. This method is mostly used by project managers to make project execution simple and easy. A large task is divided into smaller chunks of work in terms of systems, subsystems, tasks, subtasks, components and work packages. These chunks of work can be easily supervised and estimated.
These project elements are grouped in a hierarchy in which each downward level embodies a more comprehensive description of the project work. The work packages and elements at the lowest level provide a sound basis for defining activities and for assigning responsibilities to a particular individual or organization.
When a cost is associated with each individual work package in the hierarchy, the total project cost can be calculated from the bottom up. Skill is required in WBS to specify the work packages within the structure and to estimate each of them together with the probabilities associated with each piece of work.
The product WBS comprises two structures: the software product structure (representing the product itself) and the activity structure (representing the activities required to build the product) [32].
The product hierarchy (Figure 2.2) depicts the overall structure of the product, showing how the different software segments fit into the whole system, while the activity hierarchy (Figure 2.3) indicates the activities related to the relevant system components.
Figure 2.2: WBS Product Hierarchy
Figure 2.3: WBS Activity Hierarchy
The following steps are involved in the development of a WBS [39].
1) Determine the project scope/objectives.
2) Choose the best project organization.
3) Identify the main deliverables such as products, services and output results.
4) Recognize the known attributes of every activity.
A vital design principle for the work breakdown structure is known as the 100% rule [31]. According to this rule, the WBS represents 100% of the work required to complete the project. The rule applies at all levels of the WBS hierarchy: the total of the work at any level is equivalent to the aggregate work represented by its parent. The rule may also be applied to the activity hierarchy: the total work of the activities is equivalent to one hundred percent of the work needed to finish the work package [39].
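The bottom-up cost roll-up over the hierarchy, together with a check of the 100% rule, can be sketched as follows (the work packages and costs are hypothetical):

```python
# Bottom-up roll-up over a WBS: a node's cost is the sum of its children's
# costs, and under the 100% rule the children fully account for the parent.

def rollup(node):
    """Return total cost of a WBS node: a leaf cost or the sum of child roll-ups."""
    if isinstance(node, dict):
        return sum(rollup(child) for child in node.values())
    return node  # leaf work package with an estimated cost

wbs = {  # hypothetical project, costs in person-days
    "Requirements": {"Elicitation": 12, "Specification": 8},
    "Design": {"Architecture": 15, "Detailed design": 10},
    "Implementation": {"Module A": 30, "Module B": 25, "Integration": 10},
    "Testing": {"Test plan": 5, "Execution": 15},
}

print(rollup(wbs))            # total project cost
print(rollup(wbs["Design"]))  # cost of one subsystem
```

Because `rollup` of a parent is by construction the sum of its children, the 100% rule holds at every level of this structure.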
Besides supporting effort estimation, the WBS helps throughout the project life cycle in the following ways [39].
• It decomposes a complex item into its components, clarifying the relationships among the parts.
• It enables effective planning and assignment of technical and management responsibilities.
• It assists status tracking of risks, technical efforts, resource allocations, expenditures and schedule.
WBS is not limited to a particular field of application; the technique can be used for any type of project management [39][86].
2.3.3. Analogy
Another important and useful technique for estimating software projects, proposed by Shepperd [117], is called estimation by analogy. This technique first identifies all the completed projects that are similar to the current one; these are then compared and used for estimation [93].
In this technique the proposed project is characterized and then previously completed projects are selected that are similar in all aspects to the current project [11] [121]. The current project's cost is determined by examining the costs of the previously completed similar projects.
Estimating by analogy is relatively straightforward and accurate if actual project data is available. In spite of this simplicity, however, the theoretical study of estimating by analogy is quite complicated. If no completed project comparable to the current project can be found, estimation becomes impossible with this technique. The technique requires a database that records the costs of completed projects in a systematic way. Its major drawback is that it requires a lot of computation.
Some practical models have been developed on the basis of this method, including ESTOR [90], ACE [132] and ANGEL [117]. The major problem with these models is that they combine various other models with differing features to predict software estimates. Their characteristics therefore depend largely on their base models, including COCOMO and Function Point. The efficiency of these models varies from case to case, so it cannot be claimed that they are more suitable for certain methodologies.
ANGEL is a 5-step process and software tool developed by Shepperd for cost assessment by analogy [117]. The five steps are given below:
1) Features or data is identified for collection.
2) Data definition and collection mechanism is agreed upon.
3) Case base is populated.
4) Estimation method is tuned.
5) New project effort is estimated.
In ANGEL, the organization selects a new project to be assessed and attempts to find comparable completed projects. Since the development effort of a completed project is known, it can be used as a basis for estimating the current project. Similarity between the projects is defined on the basis of project features, i.e. development method, number of interfaces, application domain, etc.
In the initial phase, all the information used to compare different projects is collected. This information is used to find the similarities between the projects. Only those features are considered that are quantifiable, can be gathered easily and may affect the cost of the software development project [108][124].
In the next step the definition of the collected data is agreed upon. In reality, even within organizations there may be no common understanding of what is meant by effort. Any measurement programme will be flawed, perhaps fatally, if different projects measure similar features in different ways. It is also essential to identify who is responsible for the data gathering and when the data should be collected. In some cases it may be helpful to have the same individual gather the data across projects in order to increase the level of consistency.
Third, the case base must be populated as projects are completed and their effort data becomes available. However, there appear to be trade-offs between the size of the dataset and its homogeneity. Some experience suggests that there is merit in separating highly distinct projects into discrete datasets.
Next, the estimation method is tuned. The user should experiment with the ideal number of analogies to search for, and with whether to use a subset of the features, since some features may not usefully contribute to the process of finding effective analogies. Tuning can have a large impact on the quality of predictions; it can often yield a twofold improvement in performance. The ANGEL tool therefore provides automated support for this step.
Finally, estimation is conducted for the new project. ANGEL is used to find comparable projects, and the user can make a judgment as to the value of the analogies.
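A minimal sketch of estimation by analogy in this style — ranking completed projects by distance over quantifiable features and averaging the efforts of the nearest ones — with an invented case base:

```python
# Estimation by analogy: find the k completed projects most similar to the
# new one (by Euclidean distance over quantifiable features) and average
# their known efforts. Feature vectors here (all invented):
# (size in KLOC, number of interfaces, team size).

def analogy_estimate(new_project, case_base, k=2):
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    ranked = sorted(case_base, key=lambda case: distance(new_project, case[0]))
    nearest = ranked[:k]
    return sum(effort for _, effort in nearest) / k

# Hypothetical case base: (features, actual effort in person-months).
completed = [
    ((10, 3, 4), 24),
    ((12, 4, 5), 30),
    ((40, 10, 9), 95),
    ((8, 2, 3), 18),
]

print(analogy_estimate((11, 3, 4), completed, k=2))
```

In practice the features would be normalized and the choice of k and of the feature subset tuned, which is exactly the tuning step described above.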
2.3.4. Top-Down:
Top-down estimation is a technique that starts with the overall project cost [37]. This target cost is then split across the various segments/stages of the project [32].
Top-down estimation is often used in conjunction with other estimation techniques. Figure 2.4 below shows an example of a top-down estimate.
Figure 2.4: A Top-down Estimate (Roberts, 1997)
Top-down approaches are used in certain conditions, such as small projects, strategic planning, projects with uncertain scope or high uncertainty, or before the WBS has been developed. The top-down approach is useful because it is easy to implement and usually faster. It costs less than other approaches and focuses on system-level activities such as configuration, integration and documentation, which may be ignored by other estimation approaches.
However, this approach is less accurate than other methods of cost estimation due to the uncertainty of the project scope [52]. Its most significant drawbacks are that it often misses low-level system components and does not identify low-level technical issues. Because of these limitations, a top-down estimate cannot be used as the final cost figure and is often used only for selecting projects to pursue. After a project has been selected, other cost estimation techniques are used to evaluate its overall cost.
2.3.5. Bottom-Up:
In this approach the cost of each individual component is estimated, and these costs are then summed to calculate the overall estimate for the system [19]. Before bottom-up estimation, the overall software product should first be decomposed into a set of smaller work products or components, for instance using a work breakdown structure.
Bottom-up estimation is used in conjunction with other estimation techniques and is illustrated in figure 2.5 below.
Figure 2.5: A Top Down and Bottom Up Estimate (Roberts, 1997)
Top-down and bottom-up approaches have been used in conjunction with existing methods to refine those models, and although the estimates were fairly improved by their use, the resulting models still exhibited numerous shortcomings in handling modern software methodologies.
2.4. Learning-Based Methods:
Learning-based techniques are based on analytical comparisons and interpretations of previously completed projects rather than on any mathematical relation [115]. These methods require some information about previous projects that are similar to the project under estimation [128]. The estimation process is carried out using these historical datasets.
These methods include artificial neural network (ANN), fuzzy logic models, case-based reasoning,
evolutionary computation, combinational models etc. [12].
An ANN has the capability to learn from previous data and is therefore used in effort estimation to model the complex relationships between cost drivers and effort [44] [45]. The ANN is trained on a training dataset to produce acceptable results. These methods use soft computing techniques such as the feed-forward multilayer perceptron, the sigmoid activation function and the back-propagation algorithm to predict effort.
Figure 2.6: Neural Network for Software Cost Estimation (Boehm et al., 2000b)
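A toy illustration of such a network — a single hidden layer of sigmoid units trained with back-propagation — on invented, normalized project data; a real model would be trained on historical project datasets:

```python
# Minimal feed-forward network (one hidden layer, sigmoid units) trained with
# back-propagation on a toy, invented effort dataset.
import math, random

random.seed(1)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Toy data: (normalized size, normalized complexity) -> normalized effort.
data = [((0.1, 0.2), 0.15), ((0.4, 0.3), 0.35),
        ((0.7, 0.8), 0.80), ((0.9, 0.5), 0.70)]

H, lr = 3, 0.5                                    # hidden units, learning rate
w_hidden = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(H)]
b_hidden = [0.0] * H
w_out = [random.uniform(-1, 1) for _ in range(H)]
b_out = 0.0

def forward(x):
    hidden = [sigmoid(sum(w * xi for w, xi in zip(ws, x)) + b)
              for ws, b in zip(w_hidden, b_hidden)]
    out = sigmoid(sum(w * h for w, h in zip(w_out, hidden)) + b_out)
    return hidden, out

def mse():
    return sum((forward(x)[1] - y) ** 2 for x, y in data) / len(data)

loss_before = mse()
for _ in range(2000):                             # stochastic gradient descent
    for x, y in data:
        hidden, out = forward(x)
        d_out = (out - y) * out * (1 - out)       # output-layer delta
        for j in range(H):
            d_hid = d_out * w_out[j] * hidden[j] * (1 - hidden[j])
            w_out[j] -= lr * d_out * hidden[j]
            for i in range(2):
                w_hidden[j][i] -= lr * d_hid * x[i]
            b_hidden[j] -= lr * d_hid
        b_out -= lr * d_out

print(f"MSE before: {loss_before:.4f}  after: {mse():.4f}")
```

The training loop shrinks the mean squared error between predicted and actual effort, which is exactly how such a model "learns" the cost-driver-to-effort relationship from historical data.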
It has been observed that most learning-based techniques produce more accurate estimates than traditional algorithmic methods. However, these soft computing techniques are used in conjunction with existing algorithmic methods to refine the model, and so they reflect the same limitations as the algorithmic methods.
2.5. Agile Software Development:
Agile software development (ASD) is basically a group of software development techniques based
on iterative and incremental development in which software requirements and solutions evolve
through collaboration between cross-functional and self-organizing teams [112]. It focuses on
promoting evolutionary development, adaptive planning, early delivery, continuous improvement,
and encourages flexible and rapid response to change. The term "Agile" was introduced by the Agile Manifesto in 2001. Agile is a conceptual framework that emphasizes empowering people to collaborate and make team decisions. Various development methods have been developed on the basis of the agile methodology.
Most of them promote development, collaboration, teamwork, and process adaptability throughout
the lifecycle of the project. Early implementations of lightweight methods include Crystal Clear,
Scrum (1995), Extreme Programming (1996), Adaptive Software Development, Dynamic Systems
Development Method (DSDM) (1995), and Feature Driven Development. These are now typically
referred to as agile methodologies, after the Agile Manifesto published in 2001.
2.5.1. Characteristics of Agile Process:
Modularity: A key element of an agile process is modularity, which permits a development process to be broken into components known as activities.
Iterative: Agile processes emphasize short cycles. A certain set of activities is completed in each cycle.
Time-Bound: A time limit is set for every cycle in the schedule; such a time-boxed cycle is called a sprint.
Parsimony: Agile software processes focus on parsimony. They require the minimal number of activities necessary to mitigate risks and achieve their objectives.
Adaptive: During an iteration new risks may be discovered which require new activities. The agile process easily accommodates new activities, or modifies existing ones, during the iteration.
23
Incremental: Agile Process splits the entire nontrivial system into increments. These increments
may be developed in parallel at different times and at different rates. After the completion, each
increment is tested independently and integrated into the system.
Convergent: An agile process attempts to bring the system ever closer to the target by applying every feasible strategy to ensure success in the fastest way.
People-Oriented: In an agile process, people are valued over process and technology; they develop through adaptation in an organic way. Empowered developers raise their productivity, performance and quality, and within the organization they are the best people to know how to bring about these improvements.
Collaborative: An agile process promotes communication among team members. Communication plays a vital role because a large product is developed in pieces by individuals, so understanding how the pieces fit together is crucial to creating the finished product. As the increments are developed, they are integrated in parallel, and the integration of increments requires collaboration.
2.6. Effort Estimation in Agile
In waterfall, a team member's workload capacity is determined by the manager, who assesses how long certain tasks will take and then assigns work based on that member's total available time. Agile methodology takes a considerably different approach to determining a member's capacity. First of all, it assigns work to a whole team, not to an individual; philosophically, this places the emphasis on collective effort. Second, it declines to estimate work in terms of time, since this would undermine the self-organization essential to the success of the methodology. This is a notable break from waterfall. In an agile process, team members estimate their work in terms of effort and level of difficulty.
Agile methodology does not prescribe a single way for teams to estimate their work. It does, however, ask that teams estimate not in terms of time but with a more abstract metric that quantifies effort. Common estimating methods include t-shirt sizes, numeric sizing, the Fibonacci sequence and even dog breeds. The important thing is that the team shares an understanding of the scale it uses, so that every member of the team is comfortable with the scale's values.
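Keeping estimates on a shared scale can be sketched mechanically, e.g. by snapping raw relative-effort guesses to the nearest value of a Fibonacci-style planning poker scale (the raw guesses below are invented):

```python
# Snap raw relative-effort guesses to the nearest value on a planning poker
# scale; a Fibonacci-like sequence of allowed values is commonly used.
POKER_SCALE = [1, 2, 3, 5, 8, 13, 20, 40, 100]

def snap_to_scale(raw, scale=POKER_SCALE):
    return min(scale, key=lambda v: abs(v - raw))

print([snap_to_scale(r) for r in [2.4, 6, 11, 27]])
```

The widening gaps between scale values deliberately express that larger stories carry more uncertainty, so fine distinctions between big estimates are not attempted.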
In the sprint planning meeting, the team sits down to estimate its effort for the stories in the backlog [112]. The Product Owner needs these estimates so that he or she can effectively prioritize the items in the backlog and, accordingly, forecast releases based on the team's velocity. This means the Product Owner needs an honest appraisal of how difficult the work will be. It is therefore recommended that the Product Owner does not observe the estimation process, to avoid pressuring the team into reducing its effort estimates and taking on more work. Even when the team estimates among itself, steps should be taken to reduce influences on how the team estimates. Accordingly, it is recommended that all team members reveal their estimates simultaneously. Since individuals show their hands at once, the procedure resembles a game of poker.
Even when teams have a shared understanding of their scale, they cannot help estimating differently. Arriving at a single effort estimate that reflects the whole team's sense of a story's difficulty therefore often requires several rounds of estimation. Veteran teams that are familiar with the process, however, should reach consensus after only a couple of rounds of planning poker. Effort estimation normally happens at the start of a new cycle during release planning. An XP project is shown in Figure 2.7.
Figure 2.7: Effort Estimation in Agile Software Development
Hence the entire effort can only be finalized after conducting a number of release planning meetings, not in advance, since most existing effort estimation methods deal only with the completeness of work and not with change.
An estimate is an educated guess of the outcome that remains usable even when the input data is incomplete or uncertain. Agile estimation departs from the single-estimate logic: the project is re-estimated after every cycle. Agile estimation frameworks will not remove uncertainty from early estimates, but they do improve precision as the project proceeds. This holds because agile estimation techniques take actual completed work into account as the project advances. The project's work mix may vary, but if you measure at an aggregate level you can still identify an average that you can use for assessing your capacity [54].
Figure 2.8: Agile Estimation Overview
o Desired Features – client requirements are gathered and converted into user stories.
o Estimate Size – the size of the user stories is estimated, e.g. with planning poker.
o Derive Duration – the implementation duration of the user stories is calculated from the team's velocity, in terms of iterations.
o Schedule – the entire project execution is scheduled.
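The "estimate size, then derive duration" steps can be sketched as follows (the story sizes and the velocity are hypothetical):

```python
import math

# Derive the number of iterations needed from total story points and the
# team's velocity (story points completed per iteration).
def iterations_needed(story_points, velocity):
    return math.ceil(sum(story_points) / velocity)

backlog_points = [5, 8, 3, 13, 5, 8]   # estimated user story sizes
velocity = 12                           # story points per iteration
print(iterations_needed(backlog_points, velocity))
```

Multiplying the resulting iteration count by the iteration length then gives the schedule, the final step of the overview above.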
User stories are very useful for estimating the project schedule. One of the issues with story points is that it is extremely tempting to relate them explicitly to hours, for example stating that one point is worth four hours. The mapping between story points and hours must be checked after each cycle; it need not remain constant through all iterations. After each cycle a team's estimates therefore become more accurate in light of the experience from the previous cycle. The estimates, capacity and identified dependencies are then entered into a project plan.
At that point, the team has a schedule that they feel confident about, and they share it with the stakeholders.
Agile estimation strategies address the shortcomings of upfront estimation. In the spirit of the Agile Manifesto, you do not estimate and plan all of your features until there has been some level of prioritization and you are sure the features are required. A staged approach to estimation is used, recognizing that you can be progressively more confident as the project advances and you learn more about the features.
In broader terms, the staged procedure is as follows:
1. Estimate the features in a short, time-boxed exercise during which you estimate feature size, not duration.
2. Use feature size to assign features to cycles and create a release plan.
3. Break down the features allocated to the first cycle. Breaking down means identifying the specific tasks required to develop the feature and estimating the hours required.
4. Re-estimate each day during the iteration, assessing the time remaining on open tasks.
2.7. Agile Estimation Operates at Three Scales:
1. Iteration Plan Estimation: At the beginning of an iteration the whole team gets together, estimates each item (user story) in a planning poker session and, in light of the team's velocity, loads user stories into the iteration.
2. Release Plan Estimation: This technique is the same as iteration plan estimation, but multiple iterations are involved and the tasks are generally coarser. Since a release requires workable code for the customer, it is wise to allocate possibly a whole iteration to bug fixing in order to release "bug free" software. Release planning uses two basic approaches:
• Given a release date, keep adding content until the date is reached.
• Given the content, keep adding iterations until the content is finished.
3. Project Estimation: This method is like release plan estimation, but multiple releases are involved. The project plan is the first and last component in agile planning.
In an agile process, the accuracy of feature estimates is increased by estimating the features together as a group. Estimates are not confined to managers or leads but also involve developers, testers, analysts, DBAs and architects. The features are seen from various perspectives, and these perspectives are consolidated to create a common, agreed-upon estimate [27].
Team estimation has further advantages: team members are closer to the work, and they know the current architecture, the code, the domains and what they need to deliver. If the estimate is given by the team, they will own the estimate and feel greater responsibility to meet the deadlines they gave [101].
The most widely recognized techniques for evaluating ASD are:
o Expert opinion
o Analogy
o Disaggregation
Each of the above techniques can be used on its own, but for better results these methods should be combined.
2.8. Agile estimation workflow
There are six general steps in the agile estimation workflow.
Figure 2.9: Agile Estimation Workflow
User requirements are captured in the form of user stories.
Once all the user stories are written, the full list of stories is compiled and a high-level estimation is performed. Estimation is not time based but point based.
In order to compare features the user stories are broken down to units of relative size.
The product backlog represents all of the work items for the project. Each backlog entry has an item, its size and its priority [50].
Velocity is the number of story points finished in a single iteration. Velocity is calculated to forecast the work that can probably be done in one iteration. The first iteration generally serves as a guideline for the following iterations.
After each iteration, a re-estimation of the entire project has to be done. It is usually affected by:
• the new velocity
• new story points added or removed
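This re-estimation step can be sketched as follows — the observed velocity is recomputed after each iteration and the remaining backlog (which may have grown or shrunk) is re-projected (all numbers are invented):

```python
import math

# After each iteration, recompute velocity from the points actually completed
# and re-project the iterations remaining for the current backlog.
def reestimate(completed_per_iteration, remaining_points):
    velocity = sum(completed_per_iteration) / len(completed_per_iteration)
    return velocity, math.ceil(remaining_points / velocity)

# Three iterations done; 50 story points still in the (updated) backlog.
velocity, iterations_left = reestimate([14, 10, 12], remaining_points=50)
print(velocity, iterations_left)
```

Averaging over completed iterations is what makes the forecast improve as the project proceeds: each cycle adds one more observation of actual, not guessed, throughput.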
Chapter 3: Review of Literature
This chapter reviews related work on software estimation techniques and methods. Estimation techniques are grouped into categories, and an extensive survey is conducted on each category of effort estimation. The chapter presents a detailed review of the techniques that have been proposed for estimating software projects by various researchers, and it provides useful information about general project estimation models and agile estimation models.
3.1. Survey on Basic Software Effort Estimation
SLIM is a well-known technique for software estimation, developed by Putnam in 1978 [104]. It is a sophisticated estimation technique based on empirical calculations and is used to determine the time and effort required to complete a project of a certain size. It uses the Rayleigh curve function to find the development effort and time of a software project [68]. A proprietary suite based on the Putnam model was developed by his company and named SLIM. The model needs calibration with data from previously completed projects, but if data from similar projects is not available, a set of calibration questions is used instead. Calibration simplicity is one of the major advantages of this model: regardless of maturity level, most software organizations can effortlessly accumulate effort, scope and time data for past projects.
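Putnam's software equation relates size, effort and schedule as Size = Ck · K^(1/3) · td^(4/3), where K is the life-cycle effort in person-years and td the development time in years. Solving for K gives a sketch like the following (the productivity constant Ck and the project figures are hypothetical):

```python
# Putnam/SLIM software equation: Size = Ck * K**(1/3) * td**(4/3).
# Solving for effort K (person-years) given size, schedule and Ck.
def putnam_effort(size_loc, td_years, ck):
    return (size_loc / (ck * td_years ** (4.0 / 3.0))) ** 3

# Hypothetical values: 100,000 LOC, 2-year schedule, productivity constant 5000.
effort_person_years = putnam_effort(100_000, 2.0, 5000)
print(round(effort_person_years, 1))
```

The cubic dependence on the bracketed term shows why, in this model, compressing the schedule (smaller td) inflates the required effort so sharply.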
In 1979 Allan Albrecht was tasked by his employer, IBM, with measuring project productivity [6]. He came up with a new way of finding the size of a software project by linking it to the functionality delivered by the software, as he strongly felt the need for an alternative to LOC as a measure of software size. "He [Allan A.] argued that the business output unit of the software project ought to be valid for all languages and ought to be a matter of concern to the user of the software. In short, he wished to measure the functionality of the software" [119]. After extensive research, Albrecht devised a new way to measure software applications uniformly based on five main attributes, i.e. the application's external inputs, external outputs, external inquiries, internal logical files and external interface files. These five attributes are platform independent and can easily be identified for the majority of software applications. All of the above parameters are clearly visible to the client and hence are tangible. In 1979 Albrecht presented a paper on his research findings at an IBM conference. He thus developed an advanced estimation method for finding the size of software applications, called function point analysis (FPA). This method has been universally accepted for finding the size, effort, productivity and defect density of software applications.
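The unadjusted function point count can be sketched by weighting the counts of the five component types with the standard average complexity weights (the component counts below are invented):

```python
# Unadjusted function point count: counts of the five component types
# multiplied by their average complexity weights (IFPUG-style values).
AVERAGE_WEIGHTS = {
    "external_inputs": 4,
    "external_outputs": 5,
    "external_inquiries": 4,
    "internal_logical_files": 10,
    "external_interface_files": 7,
}

def unadjusted_fp(counts):
    return sum(AVERAGE_WEIGHTS[kind] * n for kind, n in counts.items())

# Hypothetical application.
counts = {
    "external_inputs": 20,
    "external_outputs": 15,
    "external_inquiries": 10,
    "internal_logical_files": 6,
    "external_interface_files": 2,
}
print(unadjusted_fp(counts))
```

In full FPA each component is classified as low, average or high complexity with its own weight, and the unadjusted count is then scaled by a value adjustment factor; the average weights here are a simplification.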
An important milestone in software project estimation was reached when B. Boehm introduced one of the most popular and useful techniques in the late 1970s [32]. It is an algorithmic model named the COnstructive COst MOdel, or COCOMO for short. COCOMO was developed at TRW Aerospace on the basis of a study of 63 software development projects, all of which used the waterfall model and procedural languages. In the 1990s improvements were made to COCOMO, and COCOMO II was introduced, which was able to estimate modern software development projects and processes [69]. Basically, COCOMO uses a mathematical equation to find the cost of a software development project. Parameters are obtained from the data of past projects and are then adjusted according to the attributes of the current project. The original COCOMO can be further divided into three types: basic, intermediate and detailed COCOMO.
COCOMO II is the most advanced version of the old COCOMO model, developed by Boehm and published in 2000 [69]. The original COCOMO model was very successful in traditional software engineering, but as software engineering changed, it could no longer be applied to new software development practices. The basic idea behind COCOMO II was to cope with modern software engineering methods.
Expert opinion is a technique in which the opinions of experts are collected and software projects are estimated on that basis [66]. These experts are experienced in software development and in the domain of the current project. The estimate of the software project depends entirely on the opinion of the experts, and how good the estimate is depends on how much relevant experience the experts have for the current project. The technique takes the average of the estimates given by the different experts. Although this method is used extensively, it has a poor reputation because it is subjective and unstructured, which makes it appear weak in comparison with more structured methods [26] [51] [88]. Some popular estimation models developed on the basis of this approach include CA-ESTIMACS, SPQR/20, PMS/Bridge, Checkpoint, BYL (Before You Leap), Estimate Pro, Quest for Better Estimates (Quest FBE) [38], Delphi and Wideband Delphi [14][42]. Some of these methods have been used successfully for years in different types of environments. Among these methods, Checkpoint, CA-ESTIMACS and PMS/Bridge produced very good estimates with respect to estimating the extent of functionality, early estimation validity and project planning. These methods gave good results for traditional software methodologies such as waterfall, spiral, RAD and SDLC methodologies, but they could not cope with modern methodologies like Agile, RUP, XP and the Crystal methods.
Delphi is the most famous technique based on expert opinion. It is named after the ancient Greek Oracle of Delphi, which was believed to predict the future. The technique involves the collection and aggregation of expert opinion through a series of iterative questionnaires, meetings and surveys to reach group consensus. It was originally developed by the RAND Corporation in the 1950s to predict the impact of warfare [41][42]. However, it can equally be applied to many other fields, including effort estimation for software development projects (SDP).
The original Delphi technique lacked group debate, whereas Wideband Delphi adds group discussion, greater interaction and more communication between assessment rounds [32]. This technique is very useful when no empirical data is available and estimation is based purely on expert opinion.
In the work breakdown structure the whole complex system is divided into smaller, manageable elements that can be easily managed and estimated [100]. This method is mostly used by project managers to make project execution simple and easy. A large task is divided into smaller chunks of work in terms of systems, subsystems, tasks, subtasks, components and work packages. These chunks of work can be easily supervised and estimated. The project elements are grouped in a hierarchy in which each downward level embodies a more comprehensive description of the project work. The work packages and elements at the lowest level provide a sound basis for defining activities and for assigning responsibilities to a particular individual or organization.
Another important and useful technique for estimating software projects, proposed by Shepperd
[117], is estimation by analogy. This technique first identifies completed projects that
are similar to the current one; these are then compared and used for estimation [93]. The
proposed project is characterized, and previously completed projects that are similar to it in all
relevant aspects are selected [11] [121]. The current project's cost is then determined
by looking at the cost of the similar completed projects.
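The selection step can be sketched as a nearest-neighbour lookup over a repository of completed projects. The feature set and all numbers below are invented for illustration; real analogy tools such as ANGEL use a more elaborate feature-subset search.

```python
import math

# Hypothetical repository of completed projects: feature vectors of
# (size_kloc, team_size, duration_months) paired with actual effort in
# person-months. All values are invented for illustration.
HISTORY = [
    ((12.0, 5, 6), 30.0),
    ((40.0, 10, 12), 110.0),
    ((8.0, 3, 4), 18.0),
]

def estimate_by_analogy(new_project, history=HISTORY):
    """Return the effort of the most similar completed project
    (1-nearest-neighbour by Euclidean distance over the features)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    _, effort = min(history, key=lambda rec: dist(rec[0], new_project))
    return effort

print(estimate_by_analogy((9.0, 3, 4)))  # closest to the third project -> 18.0
```

The "requires a lot of computation" drawback discussed below follows directly from this sketch: every estimate scans the whole project database.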
Estimating by analogy is relatively straightforward and accurate if actual project data is
available. In spite of this simplicity, however, the theoretical study of estimation by analogy is
quite complicated. If no completed project comparable to the current project can be found,
estimating the cost with this technique becomes impossible. The technique requires
a database that records the costs of completed projects in a systematic way, and its major drawback
is that it requires a lot of computation. Practical models developed on the basis of this method
include ESTOR [90], ACE [132] and ANGEL [117].
The main problem with these models is that they combine other models with varying
features to predict software estimates; their characteristics therefore depend heavily on their
base models, including COCOMO and Function Points. The efficiency of these models varies from
case to case, so it cannot be claimed that they are more suitable for certain methodologies.
F. Walkerden and R. Jeffery conducted a study in 1999 comparing different analogy-based effort
estimation models with each other and with a simple linear regression model [85]. They concluded
that humans are far better than tools at selecting analogues from a dataset.
Idri et al. [8] introduced a new model by modifying analogy-based effort estimation. They took a
detailed look at the model's results and compared them with the results of various other techniques.
They concluded that their model's results are considerably better and can be further
improved if the model is combined with fuzzy logic or genetic algorithms.
Idri et al. [7] also proposed a model in 2016 that they named 2FA‐kprototypes. It can be used to
estimate software cost when a software project is described by a combination of numerical and
categorical attributes. For this purpose, the well‐known fuzzy k‐prototypes algorithm
is incorporated into the analogy-based estimation procedure. To determine the model's estimation
accuracy, its results were compared with those of older analogy-based models, and the new
model showed much better results.
3.2. Survey on Traditional Software Estimation Techniques
K. Lind and R. Heldal [72] proposed a framework to estimate the size of embedded applications in
the automobile industry. It is very important to estimate code size accurately in the initial stages in
order to save a reasonable amount of cost and development effort. The COSMIC functional size
measurement method has been applied in various automotive projects. The study demonstrates a
strong relationship between functional size and code size, which is vital for obtaining more accurate
estimates.
A. B. Nassif et al. [95] conducted a study and proposed a DTF model for estimation as an
alternative method. It outperformed a plain decision tree model and numerous regression
models, and according to the authors it is more accurate than existing estimation models. They
applied a dataset to the model to check its accuracy, and the results were evaluated using several
measures, i.e. MRE, MMRE and PRED(y). However, this method is probably not appropriate
for Agile projects because of its heavyweight approach.
E. Kocaguneli et al. [65] proposed a simple active learning technique named QUICK, aimed at
reducing the complexity of data representation. One feature of this model is that it provides
guidance on which technique is best for estimating a given project. The steps involved in this
method are the following:
1. Group rows and columns based on their similarities.
2. Reject redundant columns and outlier rows based on their similarity and distinctiveness.
3. Produce an effort estimate from the remaining data using the nearest exemplar.
This approach may not be suitable for complex datasets, as the method focuses only on simple
project datasets.
F. Schnitzhofer et al. [113] presented a tool named Pocket Estimator, which uses a cloud-based
architecture to assess software development effort. Their principal objective was to build up a huge
software development project dataset. The tool uses both "expert weighted estimation" and
"learning" algorithms in order to predict effort more precisely. However, this technique does not
deal with other estimation factors such as cost, schedule and scope.
Existing estimation methods face numerous difficulties because of ongoing advances in
emerging technologies and frameworks. Zia et al. [135] presented an estimation method for
component-based 4GL development. They used the existing COCOMO-II framework for
estimation and, to cope with the modern component-based environment and make the model more
complete, included various components used in the project, e.g. tables and other database
components used in the product. The methodology introduced by Ziauddin is based on
fourth-generation languages. The results proved to be robust and stable, with an enhanced
precision level compared with other methodologies. Even today, many experts believe that the
basic input for estimating software is the size of the program [21][118][125]. To estimate any
software project, it is important to know how much time it will take to complete.
M. Tsunoda et al. [118] proposed an effort estimation model based on project size and
pre-development activities. The factors that precede the start of a software project provide
important input for estimating it. To find out how accurate the model's estimation is, its results
were compared with existing size-based estimation models. The results show that estimation can
be greatly improved if both size and pre-development activities are used as input.
A. B. Nassif [96] proposed another strategy, called the Cascade Correlation Neural Network,
which calculates estimates on the basis of use case diagrams. In this approach project complexity,
productivity, team and size are the inputs to the model. Numerous linear regression models were
created with a similar arrangement of input parameters. The model was tested for accuracy
against various regression models and proved to be more accurate.
In spite of many years of research, there is no consensus on which software effort estimation
techniques produce the most accurate models. Therefore, E. Kocaguneli et al. [64] presented a
procedure that combines multiple estimation methods into one. In this technique the best single
estimation method is picked and then applied to the dataset, and the result is validated using
seven basic estimation error measures. The study confirms that ensembles of multiple techniques
are more stable and more accurate in estimation than standalone methods.
Boehm [22] conducted a survey through investigations, interviews and reviews with expert
estimators and clients. He concluded that estimation has a direct impact on software engineering
practices.
Ren et al. [4] investigated various existing estimation techniques, studying analogy-based,
expert-opinion-based and algorithmic techniques. They concluded that no single estimation
technique is suitable for all kinds of projects or environments. Their technique helps in
choosing the most suitable estimation technique for a given environment. However, it is not
applicable to agile software development because of the frequently changing nature of agile
requirements.
N. Mittas [87] presented a model based on statistical analysis that compares various estimation
techniques on the basis of their performance, strengths and weaknesses. The technique gives
satisfying results by identifying the groups of models that have significant differences in accuracy
and clustering them into non-overlapping groups. The final decision on picking an appropriate
model rests with practitioners and their own preferences, such as familiarity with the software and
experience with the client.
M. Azzeh and A. B. Nassif [13] provide a technique based on a clustering algorithm that
categorizes similar projects. When applied to a set of projects, it groups them on the basis of
analogy. The results were compared with other existing analogy-based estimation techniques and
found to be more satisfactory.
Ekrem [63] presented another analogy-based technique. He made assumptions about analogy-
based techniques and, on the basis of these assumptions, grouped projects into a tree hierarchy.
He compared subtrees with supertrees and found similarities among the project clusters. Using
project data, the closest neighboring project can be selected dynamically.
AI techniques prove more effective at dynamically clustering projects when data is provided.
Estimation by analogy and artificial intelligence methods are widely used techniques for
estimating software projects. Bardsiri et al. introduced a hybrid technique based on analogy and
AI that was used to improve estimation accuracy; it clusters relevant projects and removes
irrelevant data to improve the accuracy rate.
Khatibi [60] proposed another technique to overcome the accuracy issues in analogy-based
estimation. Analogy-based estimation has become more widely used recently because of its
simplicity and its capability for assessing the effort required to develop a project. Earlier
methods compared two related projects without considering their internal attributes, which led to
inaccurate and biased estimates. This research centers on building a model based on AI and
analogy to make estimation more accurate. Related software development projects are grouped
into clusters by considering internal project attributes such as platform, type of organization and
skill level; these attributes are then weighted. Development effort is analyzed for each group of
projects. The results are validated by comparison with the existing models, and the method
delivers promising results with respect to accuracy and performance metrics.
M. Jorgensen [53] conducted a study to evaluate the impact of analogy-based estimation models.
The estimates depend on simple comparisons of effort-related attributes of the current project with
those of completed similar projects. Based on the study, the researcher proposes the following
guidelines that benefit developers in terms of accuracy: 1) Comparison with similar projects
should be made in terms of work hours rather than ratios. 2) Give significance to the unique
characteristics of the reference project. 3) Estimation should be made on the basis of size
ordering: for instance, estimate the smallest segment first, then proceed to the intermediate and
then the large segments. This improves estimation accuracy.
3.3. Survey on Agile Software Effort Estimation
[123] proposed a system for the estimation and scheduling of web projects suitable for Agile
development. The methodology takes a value-based perspective by consolidating various current
agile strategies. It is validated with actual case studies in order to obtain precise conclusions, and
it is highly appropriate for planning, managing and evaluating online projects.
[47] developed a method to measure functional size according to the COSMIC standard; the
method is used to estimate the size of the functionality needed by the client. However, this
approach is not suitable for agile processes because it requires the full client requirements up
front.
[120] presented an estimation technique for software projects that uses function points (FP) as
input. FP are generally used to calculate the effort and schedule needed to develop a project, and
the approach is widely used in conventional methodologies. In agile processes, the widely
accepted estimation strategy is based on user story size; this technique fuses the FP approach
with story points to achieve the highest level of precision. Project progress is tracked
continuously with the help of a Kalman filter algorithm, and validation is achieved through a
case study comparing the outcomes with the conventional approach.
The most effective estimation technique for ASD is Use Case Points (UCP). Parvez [18] built a
new layer on the existing UCP estimation technique, introducing two contributing factors,
productivity and risk, for estimating the effort required for testing. The existing UCP method
considers only project attributes; this research also centers on team attributes. The important
factors to be considered in the new layer are test team resources, duration, testing weightage,
personal skills and risk factors. Introducing these new factors into the existing UCP improves the
accuracy of the estimate.
[36] developed an estimation model for ASD. He identified highly correlated attributes and
applied Principal Component Analysis (PCA) to reduce the number of features. The
methodology is applicable even without statistical data or expert opinion. Its outcomes
demonstrate superior accuracy of cost estimation in the ASD methodology.
The story-based approach is the most widely used methodology in ASD estimation. [3]
improves estimation accuracy in Agile using artificial neural networks (ANN). The approach
considers different kinds of neural networks, such as General Regression Neural Networks
(GRNN), polynomial and Probabilistic Neural Networks, to improve the accuracy of effort
estimation. The strategy is good for effort estimation; however, it ignores cost, schedule and risk.
K. Moharreri et al. [57] introduced an automatic estimation technique called "Auto Estimate" for
evaluating effort in ASD. The approach complements the widely used manual Planning Poker
procedure. The best learning strategy is chosen by: 1) collecting data from story cards through
textual analysis; 2) building the model on the basis of the extracted attributes and analyzing it by
measuring performance. This model also provides promising results in terms of accuracy.
3.4. Survey on Story Point Approach for Agile Software Effort Estimation
Keaveney [126] researched how existing estimation techniques can be applied to ASD, since
Agile requires a sophisticated approach to software size estimation. Keaveney took various
software companies that use Agile development as case studies and tried to apply the existing
estimation techniques to ASD.
Coelho [34] researched which estimation technique would be most suitable for software
estimation in ASD. They studied various estimation techniques and concluded that story-based
estimation is the most suitable for estimating software projects in ASD. They also highlighted
the areas of story-based estimation that need improvement.
Andreas et al. [10] conducted comprehensive research on software estimation techniques, trying
to improve estimation; their main focus was on XP.
Zia et al. [29] introduced an algorithmic model for estimation in ASD. The major inputs to this
model are user story size and complexity. He corroborated his results with data from previously
completed projects.
Usman [89] conducted a comprehensive systematic literature review (SLR) on Agile-based
estimation techniques and highlighted various areas of estimation techniques that need
improvement.
Hearty et al. [99] researched an estimation model for XP based on a Bayesian network. It derives
estimates from data on previously completed projects; on the basis of that data, the model
estimates cost and risk for a new project.
Popli [106] worked on estimation techniques and, using regression analysis, introduced a model
for estimation in ASD.
Hussain et al. [46] studied various estimation models and highlighted their limitations. On the
basis of those limitations he proposed a new model for estimation in ASD.
A. E. D. Hamouda [5] conducted comprehensive research on software estimation techniques and
proposed a new algorithmic model for effort estimation. He tested the model against a dataset
and compared its results with actual results to improve its accuracy.
Ungan et al. [33] compared COSMIC Function Points (CFP) with Scrum's story points, using
case studies and various experiments with both methodologies. They showed that CFP is much
better than Scrum story points for effort estimation in ASD.
Viljohn [81] presented a case study examining the behavior of a development team using Scrum
for the first time. His basic aim was to study companies planning to introduce Scrum into their
development process. It was found that simple planning and tracking tools are most promising,
and that the team's diagnostic and planning capabilities improved from sprint to sprint.
Mahnick [127] extended his method by describing a series of steps that give IT management a
lasting understanding of the Scrum-based software development process. The proposed measures
were applied within the scope of a website redevelopment project, which served as a case study
for assessing their usefulness. The case study showed that each proposed measure captures an
important process aspect and that collecting the data does not require additional administrative
work that would harm the agility of Scrum.
Garg [111] used Principal Component Analysis (PCA) to reduce the dimensionality of the
required attributes and identify the key attributes most correlated with development cost. He then
used a constraint-solving approach to satisfy the criteria imposed by the Agile Manifesto. This
strategy is found to be well suited for agile projects because it uses constraint programming to
explicitly check for compliance with agile practices. Analysis of the results shows that the
proposed model exhibits a smaller MMRE value than existing models.
Lenarduzzi [129] introduced functional size measurement to improve estimation accuracy and to
gauge the accuracy of opinion-based estimation. They further extended this technique to plain
Scrum processes, replicating the original study twice by applying a precise replication to two
plain Scrum development processes. The results of this replicated study show that the effort
estimated by the developers is highly accurate.
Raslan et al. [75] proposed a framework based on fuzzy logic which takes fuzzy input parameters
of Story Points (SP), Implementation Level Factor (ILF), Friction Factors (FR) and Dynamic
Forces (DF), processed in multiple successive steps to finally produce the effort estimate. They
analyzed the use of fuzzy logic in improving effort estimation accuracy from user stories,
describing the input parameters with trapezoidal membership functions.
Britto [76] performed an empirical study on the state of practice of effort estimation in Agile
Global Software Development (AGSD). A survey was carried out using an online questionnaire
and a sample of software professionals experienced in effort estimation within the AGSD setting.
Results show that the effort estimation methods used in AGSD and collocated settings remained
unchanged, with planning poker being the one used most.
Chapter 4: Proposed Models and Methods
Predicting development cost, time and effort accurately is the most critical and complex issue in
software development; the good management decisions required of project managers, system
analysts and developers depend on it, and inaccurate estimates can lead to complete failure. It is
believed that huge overruns occur mainly because of inaccurate estimation.
Different software cost estimation methods have been developed, including algorithmic methods,
non-algorithmic methods, estimation by analogy, the price-to-win method, expert opinion
methods, and top-down and bottom-up methods. Using these methods, different cost estimation
models have been developed that are used successfully in different environments; however, some
of these models are more suitable for certain methodologies and fail for the rest. Some are tool
dependent whereas others are methodology dependent. All existing models have certain
limitations, as no one model can be applied in all environments.
A software effort estimation model therefore needs to be developed for the agile software
development methodology that takes different aspects of the product as input and calculates
estimated effort based on user stories, considering their nature, complexity, expected future
changes and required quality attributes.
4.1. Research Question
The research question of my thesis is: "How can a cost estimation model be developed for Agile
Software Development that addresses all parameters of the Agile model?"
To answer this question, the following sub-questions need to be answered:
1. How does story size affect cost estimation in Agile?
2. How can story complexity in Agile be estimated accurately?
3. How can Agile team velocity be estimated accurately?
4. How can required product quality and friction forces in Agile be explored and estimated?
5. How can dynamic forces in Agile be explored and estimated?
4.2. Proposed Model
The following model is proposed to estimate the required effort for agile software development
methodology.
Figure 4.1: Overview of Effort Estimation Model in ASD
In order to develop the proposed methodology, an extensive SLR was conducted, followed by a
user survey to obtain expert opinion. The model consists of seven steps; details of each step are
given below.
4.3. User Story Size
In agile software development, a user story is one or more sentences in the everyday language of
the end user that capture a business requirement of the system. It is a very high-level definition of
a system requirement, capturing the "who", "what" and "why", and it is written so concisely and
simply that it fits on a small paper notecard.
The size of the user story in Agile development is a key factor affecting the effort estimation
process; it plays a vital role in determining the effort of a particular user story. The size of a user
story will be determined using the following categories:
• Atomic Large
• Non-Atomic Medium
• Non-Atomic Large
• Atomic Medium
• Small
A systematic literature review will be conducted to categorize user stories and assign values to
each category according to the story size.
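As a concrete sketch of the category-to-value mapping, a minimal lookup is shown below. The point values are placeholders only, since the actual values are to be derived from the SLR.

```python
# Hypothetical point values for the five size categories. These numbers are
# placeholders for illustration; the real values come from the planned SLR.
STORY_SIZE_POINTS = {
    "Small": 1,
    "Atomic Medium": 2,
    "Non-Atomic Medium": 3,
    "Atomic Large": 4,
    "Non-Atomic Large": 5,
}

def story_size(category):
    """Look up the size value assigned to a user-story category."""
    return STORY_SIZE_POINTS[category]

print(story_size("Atomic Large"))  # -> 4
```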
4.4. User Story Complexity
Complexities are characteristics of user stories that introduce uncertainty into estimation. In
ASD, requirements are collected in the form of user stories; effort estimation techniques are
mainly based on user stories, yet most of them ignore user story characteristics.
The following major characteristics of a user story that affect the effort estimation process are
selected:
1. Independent
2. Negotiable
3. Atomic
4. Conflict free
5. Valuable
6. Estimable
7. Testable
8. Unambiguous
9. Full Sentence
10. Unique
11. Priority
12. Flexibility
To find out how these 12 story characteristics affect the effort estimation process, a survey and
an SLR will be conducted. A questionnaire will be sent to experts around the globe, and the data
collected will be analyzed to quantify each characteristic using linear regression:
Y = a + b1·R1 + b2·R2 + … + b12·R12    (Equation 1)
where the Ri are independent variables representing the 12 characteristics of a user story and Y is
the dependent variable representing the product attributes. Equation 1 will be used to find the
quantitative impact of each characteristic of a user story.
The basic reason for using the linear regression is its ability to explore relationship between
dependent and independent variables.
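The fitting step can be sketched with ordinary least squares. The survey data below is synthetic and all numbers are invented; in the real study the rows would be the experts' coded questionnaire responses and the recovered coefficients b1..b12 would quantify the characteristics.

```python
import numpy as np

# Synthetic stand-in for the survey data: each row holds one expert's ratings
# of the 12 characteristics (R1..R12); y is the dependent-variable score.
rng = np.random.default_rng(0)
R = rng.uniform(1, 5, size=(40, 12))        # 40 experts x 12 characteristics
true_b = np.linspace(0.1, 1.2, 12)          # hypothetical "true" weights
y = 2.0 + R @ true_b + rng.normal(0, 0.01, size=40)

# Fit Y = a + b1*R1 + ... + b12*R12 (Equation 1) by ordinary least squares.
X = np.column_stack([np.ones(len(R)), R])   # prepend an intercept column
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
a, b = coef[0], coef[1:]
print(round(a, 1))                          # intercept recovered, about 2.0
```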
The following questionnaire will be put to the experts.
Dependent variables:
D1: Are all of your projects completed within the estimated time?
D2: Are your projects accepted in fewer than three test runs?
D3: Do you receive no customer complaints despite the varying nature of projects?
D4: Are all activities performed according to schedule?
D5: Are all of your projects completed within the estimated cost?
Independent variables:
R1 (Independent): How often is the software development schedule affected if business requirements are inter-dependent on each other?
R2 (Negotiable): If requirements are not equally understood by the developer and the customer, how much time is wasted?
R3 (Atomic): If a user story contains more than one feature, how much does it affect the accuracy of the expected effort estimate?
R4 (Conflict free): If a business requirement is inconsistent with other business requirements, how much does it affect cost estimation?
R5 (Valuable): If business requirements are vague to the user or customer, how much does it affect effort estimation?
R6 (Estimable): How often is the software development schedule affected if business requirements are not properly estimated, sized or understood?
R7 (Testable): If the requirements are not tested against customer expectations, how much does it affect the effort?
R8 (Unambiguous): How much does ambiguity in business requirements lead to confusion, wasted time and rework?
R9 (Full Sentence): If a user story cannot be narrated in a single sentence, how often does it affect requirement formulation?
R10 (Unique): If business requirements are repeated with a slightly different nature, how often does it cause difficulty in analysis?
R11 (Priority): If the stories are not prioritized by the customer, how often does it cause problems for the designer?
R12 (Flexibility): How often do rigid stories affect the overall development schedule?
Table 4.1: Questionnaire
The quantitative value of each story characteristic will be found using Equation 1 as well as
through a meta-analysis of the SLR; the final value of each characteristic will be the average of
the two.
Using the two vectors, size and complexity, the effort of a particular story is calculated by the
following simple formula:
ES = Size × Complexity
Effort for the entire project is the summation over all individual stories:
E = Σ (ES)i , for i = 1 … n
Here ES is the effort of a single user story and E is the effort for the entire project. The unit of
effort is the Story Point (SP).
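The two formulas can be sketched directly; the backlog figures below are hypothetical.

```python
def story_effort(size, complexity):
    """Effort of one user story in story points: ES = Size * Complexity."""
    return size * complexity

def project_effort(stories):
    """Total project effort E as the sum of the individual story efforts."""
    return sum(story_effort(size, cx) for size, cx in stories)

# Hypothetical backlog of (size, complexity) pairs.
backlog = [(3, 1.2), (5, 1.0), (2, 1.5)]
print(round(project_effort(backlog), 1))  # 3*1.2 + 5*1.0 + 2*1.5 -> 11.6
```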
4.5. Team Velocity
In physics, velocity is the rate at which something moves in a given direction, calculated simply
as the ratio of distance to time:
Velocity = Distance / Time
In this model, velocity is the amount of effort completed per unit time (sprint time): the
"distance" is the units of effort completed, and the time is the length of the sprint.
Team Velocity = Units of Effort Completed / Length of Sprint
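As a one-line sketch (the sprint figures are hypothetical):

```python
def team_velocity(effort_completed, sprint_length_days):
    """Initial team velocity Vi: story points completed per sprint day."""
    return effort_completed / sprint_length_days

# Hypothetical sprint: 30 story points completed in a 10-day sprint.
print(team_velocity(30, 10))  # -> 3.0
```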
4.6. Optimization of Velocity
Optimization is a step that should be completed before calculating the software effort, because
certain factors may affect the velocity of the Agile team. The following two factors must be
considered:
1. Friction Forces: persistent forces that drag on productivity and reduce project velocity.
2. Dynamic Forces: variable forces that can affect Agile team velocity and reduce overall
productivity.
It is important to optimize team velocity because these forces affect team velocity and project
productivity, which eventually affect the overall effort estimation process.
4.7. Friction Forces
In mechanics, friction is a force that resists the motion of an object due to its contact with other
bodies. In this model, friction forces are those persistent external factors that have a negative
impact on project productivity and reduce team velocity. They may be reduced by the project
manager or developers, but they cannot be eliminated.
The following friction forces that affect the estimation process are selected:
Category | Friction Forces
Product Factors | Product Nature; Product Category; Product Usage; Product Performance & Quality; Product Development Complexity
Project Factors | Project Constraints; Project Characteristics; Project Management; Risk Management; Project Type
People Factors | Personal Expertise; Tool Expertise; Tool Availability
Process Factors | Process Maturity & Stability
Table 4.2: Friction Forces
An SLR will be conducted to find the impact of these forces on the estimation process. Friction
(FR) is calculated as the sum of all 14 friction forces (FF):
FR = Σ (FF)i , for i = 1 … 14
4.8. Dynamic Forces
Dynamic, or variable, forces are often unexpected and unpredictable. They may cause the project
to decelerate and lose velocity. Their effects are sometimes dramatic, but their influence is often
brief.
The following dynamic forces that affect the estimation process are selected:
1. Expected Team Changes
2. Introduction of New Tools
3. Vendor’s Defect
4. Team member’s responsibilities outside the project
5. Personal Issues
6. Expected Delay in Stakeholder response
7. Expected Ambiguity in Details
8. Expected Changes in environment
9. Expected Relocation
An SLR will be conducted to find the impact of these forces on the estimation process. Dynamic
Forces (DF) is calculated as the sum of all 9 variable factors (VF):
DF = Σ (VF)i , for i = 1 … 9
Deceleration is the rate of negative change of velocity. In this model, deceleration D is derived
from the product of Friction (FR) and Dynamic Forces (DF); it affects team velocity and is
calculated as
D = 1 / (FR × DF)
The final velocity is then calculated as
V = (Vi)^D
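The velocity adjustment can be sketched as follows; the FR and DF values are hypothetical sums from the two factor tables. Note that with this formulation, a larger FR × DF product pushes the exponent D toward zero, pulling the final velocity toward 1.

```python
def deceleration(fr, df):
    """Deceleration exponent D = 1 / (FR * DF)."""
    return 1.0 / (fr * df)

def final_velocity(vi, fr, df):
    """Final velocity V = Vi ** D, i.e. the initial velocity raised to D."""
    return vi ** deceleration(fr, df)

# Hypothetical scores: initial velocity 3.0, FR = 1.4, DF = 1.25.
print(round(final_velocity(3.0, 1.4, 1.25), 2))  # -> 1.87
```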
4.9. Effort Estimator
Completion Time
Completion Time (T) is the total time needed to complete the entire project. It is calculated as

T = E / V days

T = Σ_{i=1}^{n} (ES)_i / (V_i)^D days
Here the unit of Time (T) is days. To convert it into months, divide it by the number of working days
per month (WD):

T = [Σ_{i=1}^{n} (ES)_i / (V_i)^D] × (1 / WD) months
If multiple teams are working on the same project, Time is calculated for each team separately using
the formula above, and the total time (T) for the entire project is the sum of the individual team
times:

T = T_1 + T_2 + ⋯ + T_n
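The completion-time arithmetic can be sketched as follows; the backlog sizes, velocities, and the exponent value are hypothetical, and the (Vi)^D reading of the optimized velocity is an assumption carried over from the deceleration step.

```python
# Sketch of completion time: T = sum(ES_i) / (Vi)^D days, then divided by the
# work days per month (WD); multi-team totals are summed.

def team_time_months(story_efforts, vi, d, wd):
    effort = sum(story_efforts)      # E: total effort in story points
    velocity = vi ** d               # optimized velocity V = (Vi)^D
    days = effort / velocity         # T in days
    return days / wd                 # T in months

# Hypothetical backlogs for two teams, 22 work days per month
t1 = team_time_months([5, 8, 13, 3], vi=6.0, d=0.5, wd=22)
t2 = team_time_months([8, 8, 5], vi=4.0, d=0.5, wd=22)
total_months = t1 + t2               # T = T1 + T2
```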
4.10. Uncertainty
Estimating the completion time of any software project depends upon your confidence level: if you
are one hundred percent confident in your calculations, your estimate is the most probable one. If
you are not confident in your estimate, the calculated time is only a probable forecast. In that case,
the estimated time lies in a range called the Span of Uncertainty. The lower end of this range is
called the optimistic point, while the upper end is called the pessimistic point. Here I introduce a
new variable for the Confidence Level (CL), which is used to calculate the optimistic and pessimistic
times using the following equations.
Time_Probable = T

Time_Optimistic = [1 − (100 − CL) / 100] × T

Time_Pessimistic = [1 + (100 − CL) / 100] × T

Span of Uncertainty = Time_Pessimistic − Time_Optimistic
4.11. Evaluation Procedure and Experimental Analysis
The following three major steps will be used to test the accuracy of the model.
1. A dataset of already completed projects will be applied to the model to calculate the estimated
time and cost for each software project.
2. The estimated effort will be compared with the actual time and cost.
3. The accuracy of the model will be checked using the estimated and actual results.
To calculate the accuracy of the model, three metrics will be used: Magnitude of Relative Error
(MRE), Mean Magnitude of Relative Error (MMRE), and Percentage of Prediction, PRED(x).
MRE: It measures the absolute relative error for both under-estimation and over-estimation:

MRE = |Actual Results − Estimated Results| / Actual Results
MMRE: It is the average of the MREs over an entire dataset, expressed as a percentage. It measures
the accuracy of the model over N tests:

MMRE = (100 / N) × Σ_{i=1}^{N} MRE_i
PRED(x): It is the ratio of estimates that fall within x percent of the actual value:

PRED(x) = K / n

where K is the number of estimates whose MRE is less than or equal to x and n is the total number of
estimates.
MRE is calculated for an individual test, whereas MMRE and PRED(x) are calculated over the whole dataset.
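A minimal sketch of the three metrics in Python; the actual and estimated values below are invented for illustration only.

```python
# MRE per test, MMRE as a percentage over the dataset, PRED(x) as the
# fraction of estimates whose MRE is <= x.

def mre(actual, estimated):
    return abs(actual - estimated) / actual

def mmre(actuals, estimates):
    errors = [mre(a, e) for a, e in zip(actuals, estimates)]
    return 100.0 * sum(errors) / len(errors)

def pred(actuals, estimates, x):
    errors = [mre(a, e) for a, e in zip(actuals, estimates)]
    return sum(1 for err in errors if err <= x) / len(errors)

actuals   = [100.0, 80.0, 120.0]   # invented actual efforts
estimates = [ 90.0, 88.0, 150.0]   # invented estimates
# The individual MREs are 0.10, 0.10 and 0.25
```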
Chapter 5: Results and Discussion
This chapter describes the results and discussion. The model is constructed, and a dataset is then
applied to it to evaluate its accuracy. There are six inputs to the model, listed below.
1. User story Size
2. User Story Complexity
3. Friction Forces
4. Dynamic Forces
5. Team Velocity
6. Velocity Optimization
Section 5.1 describes user story size determination. For this purpose, a detailed SLR is conducted
to categorize user stories according to size and to assign weights to the different categories of user
story.
Section 5.2 describes the determination of user story complexity. An SLR is conducted to explore
the complexity factors.
In Section 5.3, a survey is conducted to get expert opinion on the question “How does user story
complexity affect effort estimation?”. Statistical analysis is then conducted to explore the impact of
story complexity on effort estimation.
Section 5.4 describes the quantification of the story complexity factors and the weights assigned to
them.
Section 5.5 describes the computation of user story complexity.
Section 5.6 describes the calculation of story effort.
Section 5.7 describes the determination of the Friction Forces: an SLR is conducted to explore them,
and they are categorized and assigned weights.
Section 5.8 describes the determination of the Dynamic Forces: an SLR is conducted to explore
them, and they are categorized and assigned weights.
Sections 5.9 and 5.10 describe agile team velocity and its optimization on the basis of the Friction
and Dynamic Forces.
Section 5.11 describes the construction of the effort estimator algorithm; a dataset of previously
completed projects is then applied to the model to evaluate its accuracy.
Details of each section are given below.
5.1. User Story Size Determination (SLR)
To determine how user story size affects software effort estimation, the studies of various
researchers have been explored. On the basis of this study, story size is categorized into five
categories, and weights are assigned to each category as suggested by the researchers. The details of
this SLR are given below.
5.1.1. Research Question (RQ)
RQ: How does story size affect cost estimation in Agile?
5.1.2. Search Strategy and Query String
The first step of the search process is to find all key terms and their alternatives. These keywords
are then used to form the search query. I collected the alternative terms from the studies of
well-known researchers and made them part of my search query, using the SLR rules in [133] to form
the basic query. All key terms and their alternatives were combined with “OR” and then “ANDed” to
create the final search query.
The key terms used in the search query are listed in the following table 5.1.
SNo.  Keywords                                                     References
1     User story / user story size                                 [75,43,79,2,1,98,71,131,73,70,58,49]
2     Agile cost / agile effort                                    [75,43,79,2,1,98,71,131,73,70,58]
3     Story point estimation, agile estimation, agile estimating   [75,98,131,73,70,49]
4     Agile software development                                   [76,79,98,40,24,131]
5     Agile requirements                                           [55,131,77,24,131]
Table 5.1: Keywords for User Story Size SLR
The final search query is given below.
(“user story” OR “user story size” OR “story points”) AND (Agile OR "extreme programming"
OR "Scrum" OR "feature driven development" OR "dynamic systems development method" OR
"crystal software development" OR "crystal methodology") AND (estimate* OR predict* OR
forecast* OR calculate* OR assessment OR measure* OR sizing)
The above search string was applied to well-known online search engines, with a date filter from
2001 onwards. The results from each database were kept in separate Excel sheets; at the end, all
results were combined and duplicates removed. I ended up with 28 papers after the primary search.
The databases and search results are listed below in table 5.2.
Database Before Removal of Duplication After Removal of Duplication
Scopus 25 4
IEEE Explore 40 9
Web of Science 36 4
INSPEC 09 1
Science Direct 22 5
ACM DL 20 3
Springer Link 22 2
Total 174 28
Table 5.2: List of online Databases for User Story Size SLR
5.1.3. Study Selection Criteria
The inclusion-exclusion criteria were set up keeping in view the purpose of the SLR and the RQ. I
decided to include all studies related to story point effort estimation in agile software development
and written in English. Studies were excluded if they were not based on story point estimation in
agile development or were not written in English.
5.1.4. Study Selection Process
In this phase, I first checked the titles and abstracts of all 28 papers, applying the
inclusion-exclusion criteria to check the relevance of the studies. Papers not relevant to the current
SLR were excluded. I then studied the remaining papers in detail, applying the selection criteria to
their full text. I ended up with 14 papers.
5.1.5. Quality Assessment (QA)
In this phase, all 14 papers were checked against the quality criteria given in [133]. I used a
three-point scale to determine the quality of each paper: each question was answered Yes (Y = 1),
No (N = 0), or Average (A = 0.5), so each study could score 0-13 points. The first quartile
(13/4 = 3.25) was used as the cut-off point for including a study.
Table 5.3: Quality Assessment Checklist adopted by [35, 74, 133]
5.1.6. Results:
This section describes the overall result of SLR, details of rejected and selected papers, user story
size and weights assigned to each category.
Table 5.4 shows a summary of the studies going through the different phases, along with details of
the papers rejected in each phase.
Database Search
a. Search result after removing duplicates         28
b. Inaccessible papers                              0
c. Excluded on inclusion-exclusion criteria        16
d. Duplicate study                                  0
e. Excluded on the basis of low quality score       2
f. Final papers (a − b − c − d − e)                10
Table 5.4: Papers in Study selection and QA (User Story Size SLR)
5.1.7. User Story Size Guidelines
A total of 10 studies were selected in this SLR. The following table 5.5 shows the user story size
guidelines derived from this study. Different researchers have categorized size in different ways,
but after merging the related categories, user story size is divided into five categories and a
weight is assigned to each. Story size values can be given as sequential values or as Fibonacci
numbers, as suggested by various researchers [17]. Here I chose linear sequential values to
differentiate the size of a user story in each category.
Category            Guidelines                                                                                  Frequency   Story Size
Atomic Large        An extremely large story that cannot be divided further into smaller ones; too large to accurately estimate   10   5
Non-Atomic Medium   A very large story requiring the focused effort of a developer for a long period; think in terms of more than a week of work   8   4
Non-Atomic Large    A moderately large story; think in terms of two to five days of work                        6           3
Atomic Medium       Think in terms of roughly a day or two of work                                              6           2
Small               A very small story representing a tiny effort level; think in terms of only a few hours of work   10   1
Table 5.5: Story Size guidelines, their frequency and size
Each user story is assigned a value from 1 to 5 according to the above size guidelines, where 1
denotes the smallest user story and 5 the largest.
5.2. User Story Complexity Determination (SLR)
In this section, an SLR was conducted to explore the impact of the user story complexity factors.
The complexity factors are then quantified on the basis of researchers' opinions. The details of this
SLR are given below.
5.2.1. Research Question (RQ)
RQ: How to accurately estimate story complexity in Agile?
5.2.2. Search strategy:
In the search strategy, electronic databases and conference proceedings are searched. A search
strategy starts with the identification of the major key terms, their alternatives, and synonyms.
These terms are used to form a query string that drives the rest of the search process.
All synonyms and alternatives of the terms are combined with “OR” and then “ANDed” to create the
search string. I applied the basic query string to well-known search engines such as IEEE Explore,
Scopus, Science Direct, and Google Scholar to retrieve the pertinent studies. Keywords were
identified from the work of well-known researchers.
The keywords used for search query are listed in Table 5.6.
S No. Keywords References
1 Agile cost/effort [75,43,79,2,1,98,71,131,73,70,58,49]
2 Agile estimation, Agile estimating [75,43, 79,2,1,98,71,131,73,70,58]
3 Agile software development [76,79,98,40,24,131]
4 User story size/story points [75, 76, 43,98,131,80,105,24,49,48,9]
5 User story metrics [131,28,78]
6 User story complexity [49,9]
7 User story characteristics [43,1,136,77]
8 Good story quality [76,136,77]
9 Agile requirements [55,131, 77, 24,9]
Table 5.6: Keywords for User Story Complexity Factors (SLR)
The final search query is listed below.
5.2.3. Primary and Secondary Search Strategies:
In the primary search strategy, I applied the search string to well-known databases, with a date
filter from 2001 onwards. The search results from each source were kept and managed in separate
Excel sheets. At the end, the results from all databases were combined and duplicates were removed,
leaving 73 primary studies. The databases and their search results (before and after duplicate
removal) are listed in the following table 5.7.
Database Before Removal of Duplication After Removal of Duplication
Scopus 20 5
IEEE Explore 99 10
EI Compendex 3 3
Web of Science 15 7
INSPEC 16 4
Science Direct 15 11
ACM DL 78 24
Springer Link 10 9
Total 256 73
Table 5.7: Search Results for User story Complexity Factors (SLR)
5.2.4. Study Selection Criteria
I applied inclusion and exclusion criteria according to the research question and the goals of the
SLR. All studies related to story-based estimation in agile or to story-based agile requirements,
written in English, and reported in any workshop, conference, or journal were included. Studies
clearly not related to user stories or to agile were excluded.
5.2.5. Study Selection Process
In this phase, the titles and abstracts of all 73 papers were studied. The inclusion-exclusion
criteria were applied to the titles and abstracts to decide their relevance to the current review,
and studies clearly not relevant to ASD were excluded. The criteria were then applied to the full
text of the remaining papers to check their relevance. I ended up with 17 papers.
5.2.6. Quality Assessment (QA)
All 17 papers were evaluated independently against the 13 criteria given in table 5.3, following the
same assessment procedure as in Section 5.1.5.
5.2.7. Results:
This section describes the outcomes of the overall SLR process and of the research question.
Table 5.8 shows the number of studies going through the various phases of the SLR.
Database Search
a. Search Result 73
b. After titles and abstracts screening 30
c. Excluded on inclusive exclusive criteria 13
d. Duplicate study 1
e. Excluded on the bases of low-quality score 1
f. Final papers (b − c − d − e) 15
Table 5.8: Papers in Study selection and QA (User Story Complexity Factors SLR)
In this SLR, 15 researchers' studies were selected. The following table 5.9 presents the frequencies
and the weights assigned to each complexity factor on the basis of researcher opinion. The weights
are calculated with the following formula:

RW = (Frequency / Total Studies) × 5
where RW is the weight assigned to each complexity factor on the basis of researchers' opinion and
Frequency is the number of occurrences of the complexity factor in the 15 selected studies. The ratio
is multiplied by 5 because 5 was chosen as the maximum weight. For example, if a complexity factor is
identified by every researcher, its frequency is 15, meaning that the factor has the maximum impact
on software effort estimation; dividing its frequency by the total number of studies and multiplying
by 5 then gives a weight of 5.
User Story Complexity Factors Frequency RW
Independent 15 5
Negotiable 15 5
Atomic 12 4
Conflict free 14 4.7
Valuable 15 5
Estimable 15 5
Testable 15 5
Unambiguous 15 5
Full Sentence 12 4
Unique 13 4.3
Priority 10 3.3
Flexibility 10 3.3
Table 5.9: User story Complexity Factor’s Weights on the basis of researcher opinion
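The frequency-to-weight conversion can be checked in code; the frequencies below are taken from Table 5.9, with weights rounded to one decimal place as in that table.

```python
# RW = (Frequency / Total Studies) * 5, with 15 selected studies.

def researcher_weight(frequency, total_studies=15):
    return round(frequency / total_studies * 5, 1)

frequencies = {"Independent": 15, "Atomic": 12, "Conflict free": 14, "Priority": 10}
rw = {factor: researcher_weight(f) for factor, f in frequencies.items()}
# -> Independent 5.0, Atomic 4.0, Conflict free 4.7, Priority 3.3
```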
The complexity factors are also analyzed using regression analysis to make their impact on software
effort estimation more precise. To conduct the linear regression, a questionnaire (Table 4.1) was
sent to 150 experts all over the world, and 56 responses were received.
The 56 responses from the worldwide experts were then analyzed using linear regression in SPSS 20.
The results of this analysis are presented below.
5.3. Regression Analysis of Survey conducted for User Story Complexity Factors
For this statistical analysis, D1 to D5, denoted by “P”, are the dependent variables (project success
factors), while the story complexity factors are the independent variables.
For the models with dependent variable P (project success factors), the following variables are
constant or have missing correlations and were therefore deleted from the analysis: ESTIMABLE,
UNAMBIGUOUS.
Adjusted R Square = 0.548
The statistical results show that the adjusted R-square value of 0.548 is within the accepted range.
Model Summary:
Variable        Coefficient Value (CoV)   Significance
α               73.481                    .033
INDEPENDENT     .166                      .379
NEGOTIABLE      .289                      .019
ATOMIC          .431                      .004
CONFLICTFREE    .455                      .009
VALUABLE        .412                      .023
TESTABLE        .628                      .000
FULLSENTENCE    1.346                     .000
UNIQUE          .051                      .743
PRIORITY        .603                      .004
FLEXIBILITY     .065                      .667
Table 5.10: Coefficient and significance of Story Complexity Factors
Standard error = 33.045
P = 73.481 + 0.289 NEGOTIABLE + 0.431 ATOMIC + 0.455 CONFLICTFREE + 0.412 VALUABLE
+ 0.628 TESTABLE + 1.346 FULLSENTENCE + 0.603 PRIORITY ± 33.045
5.4. Quantification of User Story Complexity Factors
The final values of the user story complexity factors are calculated with the following formula:

Story Complexity Value (SCV) = ((RW + CoV) / 2) / 6.81

Here the average of RW and CoV is divided by 6.81 to bring the complexity value of a user story
into the range of 1-5.
SN   Story Characteristics   RW    CoV    SCV
1    Independent             5     0.17   0.38
2    Negotiable              5     0.29   0.388
3    Atomic                  4     0.43   0.325
4    Conflict free           4.7   0.46   0.376
5    Valuable                5     0.41   0.397
6    Estimable               5     5      0.734
7    Testable                5     0.63   0.413
8    Unambiguous             5     5      0.734
9    Full Sentence           4     1.35   0.393
10   Unique                  4.3   0.05   0.322
11   Priority                3.3   0.6    0.289
12   Flexibility             3.3   0.07   0.25
Table 5.11: Story Complexity Factors Quantification
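The quantification formula can be verified in code; the (RW, CoV) pairs below are taken from Tables 5.9 and 5.10, with rounding to three decimals as in Table 5.11.

```python
# SCV = ((RW + CoV) / 2) / 6.81, rounded to three decimal places.

def scv(rw, cov):
    return round(((rw + cov) / 2) / 6.81, 3)

rows = {
    "Independent": (5, 0.17),   # Table 5.11 lists 0.38
    "Negotiable":  (5, 0.29),   # Table 5.11 lists 0.388
    "Atomic":      (4, 0.43),   # Table 5.11 lists 0.325
    "Testable":    (5, 0.63),   # Table 5.11 lists 0.413
}
values = {name: scv(rw, cov) for name, (rw, cov) in rows.items()}
```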
5.5. User Story Complexity
Each story complexity factor is assigned a weight (1 to 5) indicating its intensity, as shown in
table 5.12, and the complexity of the user story is then calculated with the following formula.
Story Complexity (SC) = Σ_{k=1}^{12} (SCV_k × Weight_k)
Weights: Very High = 1, High = 2, Normal = 3, Low = 4, Very Low = 5

SNo.  Story Complexity Factor   SCV     SCV × Weight
1     Independent               0.38    0.38
2     Negotiable                0.388   0.388
3     Atomic                    0.325   1.3
4     Conflict free             0.376   1.024
5     Valuable                  0.397   0.794
6     Estimable                 0.734   3.36
7     Testable                  0.413   0.826
8     Unambiguous               0.734   1.468
9     Full Sentence             0.393   0.786
10    Unique                    0.322   0.966
11    Priority                  0.289   0.867
12    Flexibility               0.25    1

Story Complexity (SC) = Σ_{k=1}^{12} (SCV × Weight)_k = 13.159
Table 5.12: User Story Complexity
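The complexity computation can be sketched with the SCVs of Table 5.11. The uniform "Normal" weighting used below is a hypothetical example, not the particular weight profile of Table 5.12.

```python
# SC = sum over the 12 factors of SCV_k * weight_k, with weights on the
# 1 (Very High) to 5 (Very Low) scale.

SCVS = [0.38, 0.388, 0.325, 0.376, 0.397, 0.734,
        0.413, 0.734, 0.393, 0.322, 0.289, 0.25]   # Table 5.11, in order

def story_complexity(weights):
    assert len(weights) == len(SCVS) == 12
    return sum(s * w for s, w in zip(SCVS, weights))

sc = story_complexity([3] * 12)   # all factors at "Normal" intensity
```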
5.6. User Story Effort Calculation:
Using the two vectors, size and complexity, the effort of a particular story is calculated with the
following simple formula:

ES = Story Size × Story Complexity

The effort for the entire project is the summation over all individual stories:

E = Σ_{i=1}^{n} (ES)_i
Here ES is the effort of a single user story, while E is the effort for the entire project. The unit
of effort is the story point (SP); velocity is then the number of story points completed per unit time.
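A minimal sketch of the story-effort calculation, using an invented three-story backlog:

```python
# ES = story size * story complexity; E = sum of all ES values (story points).

def story_effort(size, complexity):
    return size * complexity

backlog = [(5, 13.159), (2, 9.4), (3, 11.0)]   # (size, SC) pairs, invented
total_effort = sum(story_effort(s, c) for s, c in backlog)   # E
```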
5.7. Friction Forces Determination
In this section, the impact of the Friction Forces on effort estimation is explored through a
systematic literature review. The Friction Forces are identified and weighted as suggested by the
researchers. The details are given below.
5.7.1. Research Question (RQ)
RQ: How do friction forces affect cost estimation in agile software development?
5.7.2. Search Strategy and Query String
The formation of the query string is an iterative process. I collected the alternative terms from
the studies of well-known researchers and made them part of my search query, using the SLR rules in
[133] to form the basic query. All key terms and their alternatives were combined with “OR” and then
“ANDed” to create the final search query.
The key terms used in the search query are listed in the following table 5.13.
SNo.  Keywords                             References
1     Friction Forces                      [136,62,56,107,97]
2     External Forces                      [136,62,56,107,97]
3     System Forces                        [136,62,56,97]
4     Agile cost / agile effort            [75,43,55,73,70,58]
5     Agile estimation, agile estimating   [75,98,131,73,70,49]
6     Agile software development           [76,79,98,40,24,9]
7     Agile requirements                   [55,131,77,24,9]
Table 5.13: keywords for Friction Forces (SLR)
The final search query is given below.
("Friction Forces" OR "External Forces" OR "System Forces") AND (Agile OR "extreme programming"
OR "Scrum" OR "feature driven development" OR "dynamic systems development method" OR
"crystal software development" OR "crystal methodology") AND (estimate* OR predict* OR
forecast* OR calculate* OR assessment OR measure* OR sizing)
The above search string was applied to well-known online search engines, with a date filter from
2001 onwards. The results from each database were kept in separate Excel sheets; at the end, all
results were combined and duplicates removed. I ended up with 80 papers after the primary search.
The databases and results are listed below.
Database         Before Removal of Duplication   After Removal of Duplication
Scopus           16                              5
IEEE Explore     76                              27
EI Compendex     2                               0
Web of Science   60                              4
INSPEC           10                              0
Science Direct   39                              25
ACM DL           30                              11
Springer Link    15                              8
Total            248                             80
Table 5.14: Search Result for Friction Forces (SLR)
5.7.3. Study Selection Criteria
In this section, I formed the inclusion-exclusion criteria for the intended SLR according to the
research question. I decided to include all studies related to agile software development, agile
requirements, or agile effort estimation and written in English. Studies not related to these topics,
or not written in English, were excluded.
5.7.4. Study selection Process
In this phase, I first checked the titles and abstracts of all 80 papers, applying the
inclusion-exclusion criteria to check the relevance of the studies. Papers not relevant to the
current SLR were excluded. I then studied the remaining papers in detail, applying the selection
criteria to their full text. I ended up with 20 papers.
5.7.5. Quality assessment
In this phase, all 20 papers were checked against the quality criteria given in table 5.3. I used a
three-point scale to determine the quality of each paper: each question was answered Yes (Y = 1),
No (N = 0), or Average (A = 0.5), so each study could score 0-13 points. The first quartile
(13/4 = 3.25) was used as the cut-off point for including a study, as in Section 5.1.5.
5.7.6. Results:
In this section, the overall result of the SLR is described.
Table 5.15 shows a summary of the studies going through the different phases, along with details of
the papers rejected in each phase.
Database Search
a. Search Result 80
b. After titles and abstracts screening 26
c. Inaccessible papers 5
d. Excluded on inclusive exclusive criteria 6
e. Duplicate study 3
f. Excluded on the bases of low-quality score 2
g. Final papers (b − c − d − e − f) 10
Table 5.15: Papers in Study selection and QA (Friction Forces SLR)
The following table 5.16 presents the Friction Forces that can affect software effort estimation in ASD.
Category          Friction Forces                   Frequency
Product Factors   Product Nature                    9
                  Product Category                  9
                  Product Usage                     9
                  Product Performance & Quality     10
                  Product Development Complexity    9
Project Factors   Project Constraints               10
                  Project Characteristics           10
                  Project Management                10
                  Risk Management                   10
                  Project Type                      8
People Factors    Personal Expertise                10
                  Tool Expertise                    10
                  Tool Availability                 7
Process Factors   Process Maturity & Stability      9
Table 5.16: Friction Forces Affecting Software Effort Estimation In ASD
5.7.7. Friction Forces Quantification
The same linear numbering used for user story size is used to weight the friction forces. The
following table 5.17 shows the friction forces with their weights. For each software project, the
weight of each friction force is set from 1 to 3, where 1 indicates low intensity and 3 the highest
intensity. After the weights for a particular project are assigned, the final Friction is calculated
as the sum of all friction force values divided by 42 (the maximum possible sum, since each of the
14 forces can score at most 3) and multiplied by 5, so that the highest possible Friction score is 5.
FF values: Low = 1, Medium = 2, High = 3

Category          Friction Force (FF)                 FF Value
Product Factors   1. Product Nature                   1
                  2. Product Category                 1
                  3. Product Usage                    1
                  4. Product Performance & Quality    2
                  5. Product Development Complexity   3
Project Factors   6. Project Constraints              3
                  7. Project Characteristics          2
                  8. Project Management               2
                  9. Risk Management                  3
                  10. Project Type                    2
People Factors    11. Personal Expertise              3
                  12. Tool Expertise                  2
                  13. Tool Availability               3
Process Factors   14. Process Maturity & Stability    2

FR = (Σ_{i=1}^{14} (FF Value)_i / 42) × 5 = 3.57
Table 5.17: Friction Forces weights
Friction (FR) is calculated from all 14 Friction Force (FF) values as

FR = (Σ_{i=1}^{14} (FF Value)_i / 42) × 5
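The normalization can be checked against the example column of Table 5.17. Note that the multiplier 5 (rather than the 3 suggested elsewhere in the extracted text) is the reading that reproduces the reported score of 3.57; the sketch below assumes it.

```python
# FR = (sum of the 14 FF values / 42) * 5, where 42 = 14 forces * max value 3.

def friction(ff_values):
    assert len(ff_values) == 14
    return sum(ff_values) / 42 * 5

ff_values = [1, 1, 1, 2, 3, 3, 2, 2, 3, 2, 3, 2, 3, 2]   # Table 5.17 example
fr = friction(ff_values)   # rounds to 3.57, matching the table
```

The Dynamic Forces score of Section 5.8 follows the same pattern, with 9 forces and divisor 27.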
5.8. Dynamic Forces Determination (SLR)
In this section, the impact of the Dynamic Forces on effort estimation is explored through a
systematic literature review. The Dynamic Forces are identified and weighted as suggested by the
researchers. The details are given below.
5.8.1. Research Question
How do dynamic forces affect cost estimation in agile software development?
5.8.2. Search Strategy and Query String
The first step of the search process is to find all key terms and their alternatives. These keywords
are then used to form the search query. I collected the alternative terms from the studies of
well-known researchers and made them part of my search query, using the SLR rules in [133] to form
the basic query. All key terms and their alternatives were combined with “OR” and then “ANDed” to
create the final search query.
The key terms used in the search query are listed in table 5.18.
SNo. Keywords References
1 Dynamic Forces [136, 62, 56, 107, 97]
2 System Forces [136, 62, 56, 107, 97]
3 Agile cost, Agile Effort [75,43,79,2,1,98,71,131,73,70,58,49]
4 Agile estimation, Agile estimating [75,43, 79,2,1,98,71,131,73,70,58]
5 Agile software development [76,79,98,40,24,9]
6 Agile requirements [55,131, 77, 24,9]
Table 5.18: Keywords for Dynamic Forces (SLR)
The final search query is given below.
("Dynamic Forces" OR "System Forces") AND (Agile OR "extreme programming" OR "Scrum" OR
"feature driven development" OR "dynamic systems development method" OR "crystal software
development" OR "crystal methodology") AND (estimate* OR predict* OR forecast* OR
calculate* OR assessment OR measure* OR sizing)
The above search string was applied to well-known online search engines, with a date filter from
2001 onwards. The results from each database were kept in separate Excel sheets; at the end, all
results were combined and duplicates removed. I ended up with 38 papers after the primary search.
The databases and results are listed below.
Database Before Removal of Duplication After Removal of Duplication
Scopus 21 6
IEEE Explore 50 8
EI Compendex 10 1
Web of Science 52 3
INSPEC 8 1
Science Direct 43 8
ACM DL 28 6
Springer Link 21 5
Total 233 38
Table 5.19: Database Search Result Before and After Duplication (Dynamic Forces SLR)
5.8.3. Study Selection Criteria
The inclusion-exclusion criteria were set up keeping in view the purpose of the SLR and the RQ. I
decided to include all studies related to agile software development, agile requirements, or agile
effort estimation and written in English. Studies not related to these topics, or not written in
English, were excluded.
5.8.4. Study selection Process
In this phase, I first checked the titles and abstracts of all 38 papers, applying the
inclusion-exclusion criteria to check the relevance of the studies. Papers not relevant to the
current SLR were excluded. I then studied the remaining papers in detail, applying the selection
criteria to their full text. I ended up with 18 papers.
5.8.5. Quality assessment
In this phase, all 18 papers were checked against the quality criteria given in table 5.3. I used a
three-point scale to determine the quality of each paper: each question was answered Yes (Y = 1),
No (N = 0), or Average (A = 0.5), so each study could score 0-13 points. The first quartile
(13/4 = 3.25) was used as the cut-off point for including a study.
5.8.6. Results:
In this section, the overall result of the SLR is described.
Table 5.20 shows a summary of the studies going through the different phases, along with details of
the papers rejected in each phase.
Database Search
a. Search Result                                 38
b. After titles and abstracts screening          23
c. Inaccessible papers                            1
d. Excluded on inclusion-exclusion criteria       5
e. Duplicate study                                0
f. Excluded on the basis of low quality score     5
g. Final papers (b − c − d − e − f)              12
Table 5.20: Papers in Study selection and QA
The following table 5.21 presents the Dynamic Forces that can affect software effort estimation in ASD.
Dynamic Forces Frequency
Expected Team Changes 10
Introduction of New Tools 12
Vendor’s Defect 10
Team member’s responsibilities outside the project 12
Personal Issues 12
Expected Delay in Stakeholder response 12
Expected Ambiguity in Details 10
Expected Changes in environment 10
Expected Relocation 10
Table 5.21: Dynamic Forces Affecting Software Effort Estimation In ASD
5.8.7. Dynamic Forces Quantification
The same linear numbers are used to weight the Dynamic Forces as used for user story size. The
following table 5.22 shows Dynamic forces with their weights. For each software project the
weights of Dynamic forces are set from 1 to 3, where 1 shows lowest intensity and 3 shows highest
intensity. After assigning the weights to Dynamic Forces for particular project, the final value for
Dynamic Forces is calculated as the sum of all VF value. Since I used 5 the highest range that is
why the sum is divided by 27 because the highest weight of all Dynamic forces is 27, which is
then multiplied by 3. In this way it gives 3 score, which is the highest Dynamic Force value.
VF values: Low = 1, Medium = 2, High = 3

SNo.  Dynamic Force                                        VF Value
1     Expected Team Changes                                1
2     Introduction of New Tools                            1
3     Vendor’s Defect                                      2
4     Team member’s responsibilities outside the project   2
5     Personal Issues                                      1
6     Expected Delay in Stakeholder response               3
7     Expected Ambiguity in Details                        2
8     Expected Changes in environment                      3
9     Expected Relocation                                  1

DF = (Σ_{i=1}^{9} (VF)_i / 27) × 5 = 2.96
Table 5.22: Dynamic Forces and Their Weights
Dynamic Forces (DF) is calculated as

DF = (Σ_{i=1}^{9} (VF)_i / 27) × 5
5.9. Team Velocity
Team velocity is calculated as the units of effort completed divided by the sprint length:

Team Velocity (V_i) = Units of Effort Completed / Length of Sprint
5.10. Optimization of Team Velocity:
Team velocity is decelerated by the Friction and Dynamic Forces. The final team velocity (V) is
therefore obtained by optimizing V_i as follows:

Deceleration: D = 1 / (FR × DF)

The final velocity is calculated as

V = (V_i)^D
5.11. Effort Estimator
Completion Time
Completion Time (T) is the total time needed to complete the entire project. It is calculated as

T = E / V days

T = Σ_{i=1}^{n} (ES)_i / (V_i)^D days
Here the unit of Time (T) is days. To convert it into months, divide it by the number of working
days per month (WD):

T = [Σ_{i=1}^{n} (ES)_i / (V_i)^D] × (1 / WD) months
If multiple teams are working on the same project, Time is calculated for each team separately
using the formula above, and the total time (T) for the entire project is the sum of the individual
team times:
𝑇 = 𝑇1 + 𝑇2 + ⋯ 𝑇𝑛
5.12. Development Cost
Since an agile development team is constant, team salary is taken as the unit of cost. I conducted a
survey of 18 Pakistani companies at CMMI Level 3 to calculate the monthly expenditure per project.
The expenses are calculated for one project per month; the average expenses per month, along with
their ratio to team salary, are presented in Table 5.23.
Cost Head Amount Ratio with Team Salary
Team Salary 640000 1
Non-Technical Staff Salary 50000 0.078125
Equipment 30885 0.048257813
Depreciation 14372 0.02245625
Rent 40000 0.0625
Travelling 14653 0.022895313
Furniture 8000 0.0125
Utility Bills 50741 0.079282813
Copyright & Licensing 15220 0.02378125
Software Purchase & Subscription 10120 0.0158125
Repair & Maintenance 6800 0.010625
Stationary 14445 0.022570313
Marketing 5777 0.009026563
Others Expenses 13000 0.0203125
Net Ratio 1.4281
Table 5.23: Agile Team Salary and other Cost Heads
By taking the Net Ratio, the development cost is calculated as:
𝐶𝑜𝑠𝑡 = 1.4281 ∗ 𝑇𝑆 ∗ 𝑇
Hers TS is the total Monthly team Salary and T is calculated Time in months.
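The cost formula is a single product; a sketch (the function name is mine, and the example uses the surveyed net ratio of 1.4281 with a 640000 PKR monthly team salary):

```python
def development_cost(net_ratio: float, team_salary: float, months: float) -> float:
    """Cost = Net Ratio * monthly team salary * estimated time in months."""
    return net_ratio * team_salary * months

# Example: T = 11.67 months at a 640000 PKR monthly team salary
cost = development_cost(1.4281, 640000, 11.67)   # 10666193.28 PKR
```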
5.13. Uncertainty of Calculation
Predicting the completion time of any software project depends on the confidence of the estimator. If you are 100% confident in your estimate, the calculated time is the most probable time. If you are less confident, the calculated time is only a probable estimate, and the actual time may lie within a range determined by your confidence level. This range is called the "Span of Uncertainty"; its lower bound is the "Optimistic Point" and its upper bound the "Pessimistic Point". I use a Confidence Level (CL) variable to calculate the optimistic and pessimistic times:

$$Time_{Probable} = T$$

$$Time_{Optimistic} = \left(1 - \frac{100 - CL}{100}\right) \times T$$

$$Time_{Pessimistic} = \left(1 + \frac{100 - CL}{100}\right) \times T$$

$$Span\ of\ Uncertainty = Time_{Pessimistic} - Time_{Optimistic}$$
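The uncertainty bounds can be sketched as follows (the function name is mine; CL is the confidence level in percent):

```python
def time_bounds(t: float, cl: float):
    """Optimistic and pessimistic completion times for confidence level CL (percent)."""
    margin = (100 - cl) / 100
    optimistic = (1 - margin) * t
    pessimistic = (1 + margin) * t
    return optimistic, pessimistic, pessimistic - optimistic

# T = 16.72 months at 90 % confidence:
opt, pess, span = time_bounds(16.72, 90)   # 15.048, 18.392, span 3.344
```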
5.14. Model Summary
Input
• N= No. of User Stories
• WD= Work Days/Month
• TS= Monthly Team Salary
• Sprint Time= No. of Days in One Sprint
• E= Units of Effort Completed by team in One Sprint
• CL= Estimator Confidence Level
Metrics
• Story Size Metric (Table 5.5)
• Story Complexity Metric (Table 5.12)
• Friction Forces Metric (Table 5.17)
• Dynamic Forces Metric (Table 5.22)
Evaluation
Calculation of Completion Time (T):

$$T = \frac{\sum_{i=1}^{n}(ES)_i}{(V_i)^D} \times \frac{1}{WD}\ Months$$

Here WD is the number of work days in a month. ES is User Story Effort, calculated as:

$$ES = Story\ Size \times Story\ Complexity$$

Vi is the initial velocity, calculated as:

$$Team\ Velocity\ (V_i) = \frac{Units\ of\ Effort\ Completed}{Length\ of\ Sprint}$$

Sprint Time is the number of days in a sprint.
To adjust velocity against friction forces and dynamic forces, deceleration (D) is calculated as:

$$D = \frac{1}{FR \times DF}$$

Here FR is derived from the 14 Friction Forces described in Table 5.17:

$$FR = \frac{\sum_{i=1}^{14}(FF\ Value)_i}{42} \times 3$$

and DF is derived from the 9 Dynamic Forces described in Table 5.22:

$$DF = \frac{\sum_{i=1}^{9}(VF)_i}{27} \times 3$$

The development cost of the project is calculated as:

$$Cost = TS_{Ratio} \times TS \times T$$

Here TS is the team salary and T is the estimated time.
Using the estimator's confidence level (CL), the probable, optimistic, and pessimistic times are calculated as:

$$Time_{Probable} = T$$

$$Time_{Optimistic} = \left(1 - \frac{100 - CL}{100}\right) \times T$$

$$Time_{Pessimistic} = \left(1 + \frac{100 - CL}{100}\right) \times T$$

$$Span\ of\ Uncertainty = Time_{Pessimistic} - Time_{Optimistic}$$
5.15. Accuracy Evaluation
In order to evaluate the accuracy of the model, the following three major steps were taken.
1. A dataset of 11 already completed software projects was collected from two software houses. These projects were developed using the Agile Software Development Methodology.
2. The data was applied to the model to calculate estimated effort and cost.
3. To calculate the accuracy of the model, three metrics were used: Magnitude of Relative Error (MRE), Mean Magnitude of Relative Error (MMRE), and Percentage of Prediction, PRED(x).
MRE: the absolute relative error, covering both under-estimation and over-estimation:

$$MRE = \left| \frac{Actual\ Results - Estimated\ Results}{Actual\ Results} \right|$$

MMRE: the average of the MREs over the entire dataset, expressed as a percentage. It measures the accuracy of the model over the 11 tests:

$$MMRE = \frac{100}{11} \sum_{i=1}^{11} MRE_i$$

PRED(x): the percentage of estimates that fall within x percent of the actual value:

$$PRED(x) = \frac{K}{11}$$

where K is the number of estimates whose MRE is less than or equal to x percent and 11 is the total number of estimates.
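These three metrics are straightforward to compute; a sketch (function names are mine):

```python
def mre(actual: float, estimated: float) -> float:
    """Magnitude of Relative Error for a single estimate."""
    return abs((actual - estimated) / actual)

def mmre_percent(actuals, estimates) -> float:
    """Mean MRE over the dataset, as a percentage: (100/n) * sum of MREs."""
    n = len(actuals)
    return 100.0 / n * sum(mre(a, e) for a, e in zip(actuals, estimates))

def pred(actuals, estimates, x: float) -> float:
    """Percentage of estimates whose MRE is within x percent of the actual value."""
    k = sum(1 for a, e in zip(actuals, estimates) if 100 * mre(a, e) <= x)
    return 100.0 * k / len(actuals)

# Project 1 from Table 5.30: actual time 16 months, estimated 16.72 months
# gives an MRE of 0.72 / 16 = 4.5 %.
```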
MRE is calculated for each individual test, whereas MMRE and PRED(x) are calculated over the whole dataset.
Table 5.30 shows the experimental analysis of the model when the dataset of 11 previously completed projects was applied.
5.16. Experimental Analysis
This section gives details of the experimental analysis of the model. Data from 11 previously completed projects were applied to the model to analyze its results and accuracy.
Project No. 1: Health Care ERP (BitSharp IT Solution)
Inputs
No. of User Stories = 212
Units of Effort Completed in Sprint = 12
Sprint Size = 6 Days
No. of Working Days Per Month = 24
Monthly Team Salary = 620000
Ratio with Team Salary (TS Ratio) = 1.4419
Confidence Level in Estimation (CL) = 90
Results

$$E = \sum_{i=1}^{212}(ES)_i = 577.4121$$

$$V_i = \frac{Units\ of\ Effort}{Length\ of\ Sprint} = \frac{12}{6} = 2$$

$$FR = \frac{\sum_{i=1}^{14}(FF\ Value)_i}{42} \times 3 = 1.4286$$

$$DF = \frac{\sum_{i=1}^{9}(VF)_i}{27} \times 3 = 1.3333$$

$$D = \frac{1}{FR \times DF} = \frac{1}{1.4286 \times 1.3333} = 0.525$$

$$V = (V_i)^D = 2^{0.525} = 1.44$$

$$T = \frac{\sum_{i=1}^{212}(ES)_i}{(V_i)^D} \times \frac{1}{WD} = \frac{577.4121}{1.44} \times \frac{1}{24} = 16.72\ Months$$

$$Cost = 1.4419 \times TS \times T = 1.4419 \times 620000 \times 16.72 = 14947580.35\ PKR$$

$$Time_{Probable} = T = 16.72\ Months$$

$$Time_{Optimistic} = \left(1 - \frac{100-90}{100}\right) \times 16.72 = 15.05\ Months$$

$$Time_{Pessimistic} = \left(1 + \frac{100-90}{100}\right) \times 16.72 = 18.39\ Months$$
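The Project 1 figures can be reproduced end to end (a sketch; FR and DF are taken as given from the metric tables):

```python
vi = 12 / 6                   # initial velocity: 12 effort units over a 6-day sprint
d = 1 / (1.4286 * 1.3333)     # deceleration, about 0.525
v = vi ** d                   # final velocity, about 1.44
t = 577.4121 / v / 24         # estimated time, about 16.72 months
cost = 1.4419 * 620000 * t    # estimated cost, roughly 14.95 million PKR
```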
Project No. 2: Human Resource Information System (HRIS) (BitSharp IT Solution)
Inputs
No. of User Stories = 158
Units of Effort Completed in Sprint = 12
Sprint Size = 7 Days
No. of Working Days Per Month = 24
Monthly Team Salary = 640000
Ratio with Team Salary (TS Ratio) = 1.4281
Confidence Level in Estimation (CL) = 90
Results

$$E = \sum_{i=1}^{158}(ES)_i = 440.5$$

$$V_i = \frac{12}{7} = 1.71$$

$$FR = \frac{\sum_{i=1}^{14}(FF\ Value)_i}{42} \times 3 = 1.0714$$

$$DF = \frac{\sum_{i=1}^{9}(VF)_i}{27} \times 3 = 1.1111$$

$$D = \frac{1}{1.0714 \times 1.1111} = 0.84$$

$$V = (V_i)^D = (1.71)^{0.84} = 1.57$$

$$T = \frac{440.5}{1.57} \times \frac{1}{24} = 11.67\ Months$$

$$Cost = 1.4281 \times 640000 \times 11.67 = 10666193.28\ PKR$$

$$Time_{Probable} = T = 11.67\ Months$$

$$Time_{Optimistic} = \left(1 - \frac{100-90}{100}\right) \times 11.67 = 10.50\ Months$$

$$Time_{Pessimistic} = \left(1 + \frac{100-90}{100}\right) \times 11.67 = 12.84\ Months$$
Project No. 3: Hospital Information System (BitSharp IT Solution)
Inputs
No. of User Stories = 222
Units of Effort Completed in Sprint = 12
Sprint Size = 6 Days
No. of Working Days Per Month = 24
Monthly Team Salary = 640000
Ratio with Team Salary (TS Ratio) = 1.4281
Confidence Level in Estimation (CL) = 90
Results

$$E = \sum_{i=1}^{222}(ES)_i = 661.97$$

$$V_i = \frac{12}{6} = 2$$

$$FR = \frac{\sum_{i=1}^{14}(FF\ Value)_i}{42} \times 3 = 1.1429$$

$$DF = \frac{\sum_{i=1}^{9}(VF)_i}{27} \times 3 = 1.2222$$

$$D = \frac{1}{1.1429 \times 1.2222} = 0.716$$

$$V = (V_i)^D = 2^{0.716} = 1.64$$

$$T = \frac{661.97}{1.64} \times \frac{1}{24} = 16.79\ Months$$

$$Cost = 1.4281 \times 640000 \times 16.79 = 15348350.52\ PKR$$

$$Time_{Probable} = T = 16.79\ Months$$

$$Time_{Optimistic} = \left(1 - \frac{100-90}{100}\right) \times 16.79 = 15.11\ Months$$

$$Time_{Pessimistic} = \left(1 + \frac{100-90}{100}\right) \times 16.79 = 18.47\ Months$$
Project No. 4: Educational Institute (BitSharp IT Solution)
Inputs
No. of User Stories = 130
Units of Effort Completed in Sprint = 12
Sprint Size = 6 Days
No. of Working Days Per Month = 24
Monthly Team Salary = 640000
Ratio with Team Salary (TS Ratio) = 1.4281
Confidence Level in Estimation (CL) = 90
Results

$$E = \sum_{i=1}^{130}(ES)_i = 335.41$$

$$V_i = \frac{12}{6} = 2$$

$$FR = \frac{\sum_{i=1}^{14}(FF\ Value)_i}{42} \times 3 = 1.3571$$

$$DF = \frac{\sum_{i=1}^{9}(VF)_i}{27} \times 3 = 1.4444$$

$$D = \frac{1}{1.3571 \times 1.4444} = 0.51$$

$$V = (V_i)^D = 2^{0.51} = 1.4242$$

$$T = \frac{335.41}{1.4242} \times \frac{1}{24} = 9.81\ Months$$

$$Cost = 1.4281 \times 640000 \times 9.81 = 8968742.2\ PKR$$

$$Time_{Probable} = T = 9.81\ Months$$

$$Time_{Optimistic} = \left(1 - \frac{100-90}{100}\right) \times 9.81 = 8.83\ Months$$

$$Time_{Pessimistic} = \left(1 + \frac{100-90}{100}\right) \times 9.81 = 10.79\ Months$$
Project No. 5: Sugar Mill Industry (BitSharp IT Solution)
Inputs
No. of User Stories = 195
Units of Effort Completed in Sprint = 12
Sprint Size = 6 Days
No. of Working Days Per Month = 24
Monthly Team Salary = 640000
Ratio with Team Salary (TS Ratio) = 1.4281
Confidence Level in Estimation (CL) = 90
Results

$$E = \sum_{i=1}^{195}(ES)_i = 526.32$$

$$V_i = \frac{12}{6} = 2$$

$$FR = \frac{\sum_{i=1}^{14}(FF\ Value)_i}{42} \times 3 = 1.5$$

$$DF = \frac{\sum_{i=1}^{9}(VF)_i}{27} \times 3 = 1.3333$$

$$D = \frac{1}{1.5 \times 1.3333} = 0.5$$

$$V = (V_i)^D = 2^{0.5} = 1.4142$$

$$T = \frac{526.32}{1.4142} \times \frac{1}{24} = 15.51\ Months$$

$$Cost = 1.4281 \times 640000 \times 15.51 = 14173241.29\ PKR$$

$$Time_{Probable} = T = 15.51\ Months$$

$$Time_{Optimistic} = \left(1 - \frac{100-90}{100}\right) \times 15.51 = 13.96\ Months$$

$$Time_{Pessimistic} = \left(1 + \frac{100-90}{100}\right) \times 15.51 = 17.06\ Months$$
Project No. 6: Pharmaceutical Information System (BitSharp IT Solution)
Inputs
No. of User Stories = 139
Units of Effort Completed in Sprint = 10
Sprint Size = 6 Days
No. of Working Days Per Month = 24
Monthly Team Salary = 700000
Ratio with Team Salary (TS Ratio) = 1.391
Confidence Level in Estimation (CL) = 90
Results

$$E = \sum_{i=1}^{139}(ES)_i = 389.89$$

$$V_i = \frac{10}{6} = 1.67$$

$$FR = \frac{\sum_{i=1}^{14}(FF\ Value)_i}{42} \times 3 = 1.1429$$

$$DF = \frac{\sum_{i=1}^{9}(VF)_i}{27} \times 3 = 1.2222$$

$$D = \frac{1}{1.1429 \times 1.2222} = 0.716$$

$$V = (V_i)^D = (1.67)^{0.716} = 1.44$$

$$T = \frac{389.89}{1.44} \times \frac{1}{24} = 11.27\ Months$$

$$Cost = 1.391 \times 700000 \times 11.27 = 10973404.3\ PKR$$

$$Time_{Probable} = T = 11.27\ Months$$

$$Time_{Optimistic} = \left(1 - \frac{100-90}{100}\right) \times 11.27 = 10.14\ Months$$

$$Time_{Pessimistic} = \left(1 + \frac{100-90}{100}\right) \times 11.27 = 12.40\ Months$$
Project No. 7: Campus Management System for College (BitSharp IT Solution)
Inputs
No. of User Stories = 178
Units of Effort Completed in Sprint = 12
Sprint Size = 7 Days
No. of Working Days Per Month = 24
Monthly Team Salary = 580000
Ratio with Team Salary (TS Ratio) = 1.4724
Confidence Level in Estimation (CL) = 90
Results

$$E = \sum_{i=1}^{178}(ES)_i = 489.41$$

$$V_i = \frac{12}{7} = 1.714$$

$$FR = \frac{\sum_{i=1}^{14}(FF\ Value)_i}{42} \times 3 = 1.0714$$

$$DF = \frac{\sum_{i=1}^{9}(VF)_i}{27} \times 3 = 1$$

$$D = \frac{1}{1.0714 \times 1} = 0.9333$$

$$V = (V_i)^D = (1.714)^{0.9333} = 1.653$$

$$T = \frac{489.41}{1.653} \times \frac{1}{24} = 12.33\ Months$$

$$Cost = 1.4724 \times 580000 \times 12.33 = 10529721.36\ PKR$$

$$Time_{Probable} = T = 12.33\ Months$$

$$Time_{Optimistic} = \left(1 - \frac{100-90}{100}\right) \times 12.33 = 11.10\ Months$$

$$Time_{Pessimistic} = \left(1 + \frac{100-90}{100}\right) \times 12.33 = 13.56\ Months$$
Project No. 8: Real Estate ERP (NetSol Technologies Inc.)
Inputs
No. of User Stories = 163
Units of Effort Completed in Sprint = 12
Sprint Size = 7 Days
No. of Working Days Per Month = 24
Monthly Team Salary = 640000
Ratio with Team Salary (TS Ratio) = 1.4281
Confidence Level in Estimation (CL) = 90
Results

$$E = \sum_{i=1}^{163}(ES)_i = 489.77$$

$$V_i = \frac{12}{7} = 1.714$$

$$FR = \frac{\sum_{i=1}^{14}(FF\ Value)_i}{42} \times 3 = 1$$

$$DF = \frac{\sum_{i=1}^{9}(VF)_i}{27} \times 3 = 1.1111$$

$$D = \frac{1}{1 \times 1.1111} = 0.9$$

$$V = (V_i)^D = (1.714)^{0.9} = 1.624$$

$$T = \frac{489.77}{1.624} \times \frac{1}{24} = 12.56\ Months$$

$$Cost = 1.4281 \times 640000 \times 12.56 = 11479639.04\ PKR$$

$$Time_{Probable} = T = 12.56\ Months$$

$$Time_{Optimistic} = \left(1 - \frac{100-90}{100}\right) \times 12.56 = 11.30\ Months$$

$$Time_{Pessimistic} = \left(1 + \frac{100-90}{100}\right) \times 12.56 = 13.82\ Months$$
Project No. 9: ERP for Rice Mill (NetSol Technologies Inc.)
Inputs
No. of User Stories = 135
Units of Effort Completed in Sprint = 12
Sprint Size = 7 Days
No. of Working Days Per Month = 24
Monthly Team Salary = 640000
Ratio with Team Salary (TS Ratio) = 1.4281
Confidence Level in Estimation (CL) = 90
Results

$$E = \sum_{i=1}^{135}(ES)_i = 373.1$$

$$V_i = \frac{12}{7} = 1.7143$$

$$FR = \frac{\sum_{i=1}^{14}(FF\ Value)_i}{42} \times 3 = 1.0714$$

$$DF = \frac{\sum_{i=1}^{9}(VF)_i}{27} \times 3 = 1.3333$$

$$D = \frac{1}{1.0714 \times 1.3333} = 0.7$$

$$V = (V_i)^D = (1.7143)^{0.7} = 1.4584$$

$$T = \frac{373.1}{1.4584} \times \frac{1}{24} = 10.6595\ Months$$

$$Cost = 1.4281 \times 640000 \times 10.6595 = 9742612.4\ PKR$$

$$Time_{Probable} = T = 10.6595\ Months$$

$$Time_{Optimistic} = \left(1 - \frac{100-90}{100}\right) \times 10.6595 = 9.59\ Months$$

$$Time_{Pessimistic} = \left(1 + \frac{100-90}{100}\right) \times 10.6595 = 11.73\ Months$$
Project No. 10: E-Commerce Store (NetSol Technologies Inc.)
Inputs
No. of User Stories = 144
Units of Effort Completed in Sprint = 12
Sprint Size = 7 Days
No. of Working Days Per Month = 24
Monthly Team Salary = 710000
Ratio with Team Salary (TS Ratio) = 1.3859
Confidence Level in Estimation (CL) = 90
Results

$$E = \sum_{i=1}^{144}(ES)_i = 410.03$$

$$V_i = \frac{12}{7} = 1.7143$$

$$FR = \frac{\sum_{i=1}^{14}(FF\ Value)_i}{42} \times 3 = 1.6429$$

$$DF = \frac{\sum_{i=1}^{9}(VF)_i}{27} \times 3 = 1.5556$$

$$D = \frac{1}{1.6429 \times 1.5556} = 0.391$$

$$V = (V_i)^D = (1.7143)^{0.391} = 1.235$$

$$T = \frac{410.03}{1.235} \times \frac{1}{24} = 13.83\ Months$$

$$Cost = 1.3859 \times 710000 \times 13.83 = 13608567.87\ PKR$$

$$Time_{Probable} = T = 13.83\ Months$$

$$Time_{Optimistic} = \left(1 - \frac{100-90}{100}\right) \times 13.83 = 12.45\ Months$$

$$Time_{Pessimistic} = \left(1 + \frac{100-90}{100}\right) \times 13.83 = 15.21\ Months$$
Project No. 11: CRM (NetSol Technologies Inc.)
Inputs
No. of User Stories = 98
Units of Effort Completed in Sprint = 12
Sprint Size = 7 Days
No. of Working Days Per Month = 24
Monthly Team Salary = 710000
Ratio with Team Salary (TS Ratio) = 1.3859
Confidence Level in Estimation (CL) = 90
Results

$$E = \sum_{i=1}^{98}(ES)_i = 257$$

$$V_i = \frac{12}{7} = 1.7143$$

$$FR = \frac{\sum_{i=1}^{14}(FF\ Value)_i}{42} \times 3 = 1$$

$$DF = \frac{\sum_{i=1}^{9}(VF)_i}{27} \times 3 = 1.1111$$

$$D = \frac{1}{1 \times 1.1111} = 0.9$$

$$V = (V_i)^D = (1.7143)^{0.9} = 1.6244$$

$$T = \frac{257}{1.6244} \times \frac{1}{24} = 6.59\ Months$$

$$Cost = 1.3859 \times 710000 \times 6.59 = 6484487.51\ PKR$$

$$Time_{Probable} = T = 6.59\ Months$$

$$Time_{Optimistic} = \left(1 - \frac{100-90}{100}\right) \times 6.59 = 5.93\ Months$$

$$Time_{Pessimistic} = \left(1 + \frac{100-90}{100}\right) \times 6.59 = 7.25\ Months$$
5.17. Summary of Experimental Analysis
| P No | Efforts | Vi | D | V | Sprint Size | Work Days | Team Salary | Actual Time | Estimated Time | Actual Cost | Estimated Cost | MRE Time (%) | MRE Cost (%) |
|------|---------|----|---|---|-------------|-----------|-------------|-------------|----------------|-------------|----------------|--------------|--------------|
| p01 | 577.41 | 2 | 0.525 | 1.44 | 6 | 22 | 620000 | 16 | 16.72 | 14500000 | 14947580.35 | 4.502 | 3.087 |
| p02 | 440.5 | 1.71 | 0.84 | 1.57 | 7 | 22 | 640000 | 12 | 11.67 | 10400000 | 10666650.27 | 2.746 | 2.56 |
| p03 | 661.97 | 2 | 0.716 | 1.64 | 6 | 22 | 640000 | 17 | 16.79 | 15000000 | 15348350.52 | 1.219 | 2.322 |
| p04 | 335.41 | 2 | 0.51 | 1.42 | 6 | 22 | 640000 | 11.6 | 9.81 | 10200000 | 8968742.2 | 15.4 | 12.07 |
| p05 | 526.32 | 2 | 0.5 | 1.41 | 6 | 22 | 640000 | 17 | 15.51 | 14765000 | 14173241.29 | 8.782 | 4.0 |
| p06 | 389.89 | 1.67 | 0.716 | 1.44 | 6 | 22 | 700000 | 11 | 11.27 | 10200000 | 10973404.26 | 2.453 | 7.5 |
| p07 | 489.41 | 1.71 | 0.933 | 1.65 | 7 | 22 | 580000 | 12.5 | 12.33 | 11000000 | 10530062.96 | 1.357 | 4.27 |
| p08 | 489.77 | 1.71 | 0.9 | 1.62 | 7 | 22 | 640000 | 12.02 | 12.56 | 11000000 | 11482198.2 | 4.51 | 4.384 |
| p09 | 373.1 | 1.71 | 0.7 | 1.46 | 7 | 22 | 640000 | 11 | 10.66 | 9359000 | 9742612.45 | 3.095 | 4.0 |
| p10 | 410.03 | 1.71 | 0.391 | 1.23 | 7 | 22 | 710000 | 14.45 | 13.84 | 14000000 | 13614471.8 | 4.25 | 2.754 |
| p11 | 257 | 1.71 | 0.9 | 1.62 | 7 | 22 | 710000 | 6.5 | 6.59 | 6200000 | 6486652.29 | 1.42 | 4.6 |

Table 5.30: Experimental Results and Analysis

MMRE Time = 4.52 %
MMRE Cost = 4.68 %
PRED Time (4.52) = 81.82 %
PRED Cost (4.68) = 82.0 %
An MMRE of 4.52 % for Time and 4.68 % for Cost has been observed, which is a very low error rate. The results show satisfactory and acceptable estimation accuracy.
5.18. Critical Analysis
The model has been calibrated on eleven different projects with satisfactory results. However, some parameters of the Friction and Dynamic forces may belong to local environments, because the social and political environment of a country can also affect the determination of cost and time for software development. Such parameters have not been included.
The model does not support Product-Line Software Engineering, where development is purely based on reusable components.
The model provides a base for future researchers who develop cost estimation models for Agile software development from a different perspective. It has a comprehensive set of inputs which can be easily modified to suit future research.
5.19. Conclusion and Future Work
In this research, a model for effort estimation in the Agile Software Development Methodology has been developed. As the Agile model is based on User Stories, this model also focuses on the User Story. A User Story alone is not a tangible parameter for measuring the cost and time of software development; many other factors affect the determination of cost and time, including Friction Forces, Dynamic Forces, and Team Velocity. To explore the nature of these factors, extensive research was conducted from reliable sources in order to establish a well-defined set of parameters for them. The model is easy to understand and can be easily implemented in any organization.
The model not only applies to single-team development but also supports multi-team development, where more than one team with variable velocities works on the same project.
The model has been tested on 11 projects from two CMMI-appraised companies, resulting in 4.52 % MMRE for time and 4.68 % MMRE for cost, which is quite acceptable.
References
[1] A. E. D. Hamouda, “Using agile story points as an estimation technique in CMMI
organizations,” Proc. - 2014 Agil. Conf. Agil. 2014, pp. 16–23, 2014.
[2] A. T. Raslan, “Towards a Fuzzy based Framework for Effort Estimation in Agile
Software Development,” International Journal of Computer Science and Information
Security, vol. 13, no. 1, pp. 37–45, 2015.
[3] Aditi Panda, Shashank Mouli Satapathy, Santanu kumar Rath, “Emprical validation of
Neural Network Models for Agile Software Effort Estimation based on Story Points,”
Procedia Computer Science, 2015; 57: 772-781.
[4] Aihua Ren, Chen Yun, “Research of Software Size Estimation Method,” Cloud and
Service Computing (CSC), International Conference on, Beijing, 2013.
[5] Alaa El Deen Hamouda. “Using agile story points as an estimation technique in CMMI
organizations.” In Agile Conference (AGILE), 2014, pages 16–23. IEEE, 2014.
[6] Albrecht, A. J. (1979). “Measuring application development productivity”, Proceeding
of the Joint SHARE, GUIDE and IBM application development symposium, IBM
Corporation, 1979.
[7] Ali Idri, Fatima azzahra Amazal, and Alain Abran. “Accuracy comparison of analogy-
based software development effort estimation techniques.” International Journal of
Intelligent Systems, 31(2):128–152, 2016.
[8] Ali Idri, Fatima azzahra Amazal, and Alain Abran. “Analogy-based software
development effort estimation: A systematic mapping and review.” Information and
Software Technology, 58:206–230, 2015.
[9] Ali M, Shaikh Z, Ali E. “Estimation of Project Size Using User Stories.” In
International Conference on Recent Advances in Computer Systems 2015 Nov.
Atlantis Press.
[10] Andreas Schmietendorf, Martin Kunz, and Reiner Dumke. Effort estimation for agile
software development projects. In 5th Software Measurement European Forum, pages
113–123, 2008.
[11] Angelis, Lefteris, Ioannis Stamelos, and Maurizio Morisio. "Building a software cost
estimation model based on categorical data." Proceedings Seventh International
Software Metrics Symposium. IEEE, 2001.
[12] Attarzadeh I. Siew HockOw, “Proposing a New Software Cost Estimation Model
Based on Artificial Neural Networks”, IEEE International Conference on Computer
Engineering and Technology (ICCET) , Volume: 3, Page(s): V3-487 - V3-491 2010.
90
[13] Azzeh, M, Nassif, A B, “Analogy-based effort estimation: a new method to discover
set of analogies from dataset characteristics,” in IET Software, 2015; 9 (2): 39-50.
[14] Barry Boehm Dr., “COCOMO II Model Definition Manual,” The American Journal of
Tropical Medicine and Hygiene, vol. 87, p. i, 2012.
[15] Barry W Boehm et al. Software engineering economics, volume 197. Prentice-hall
Englewood Cliffs (NJ), 1981.
[16] Benschop, Nick, et al. "Detection of early warning signals for overruns in IS projects:
linguistic analysis of business case language." European Journal of Information
Systems 29.2 (2020): 190-202.
[17] B. J. Prater, Konstantinos Kirytopoulos, and Tony Ma. "An Investigation of Estimation
Techniques for Information Technology Projects." 2019 IEEE International
Conference on Industrial Engineering and Engineering Management (IEEM). IEEE,
2019.
[18] Bilgaiyan, Saurabh, et al. "Effort estimation of back-end part of software using
chaotically modified genetic algorithm." International Journal of Intelligent Systems
and Applications 11.1 (2019): 32.
[19] Bilgaiyan, Saurabh, et al. "A Systematic Review on Software Cost Estimation in Agile
Software Development." Journal of Engineering Science & Technology Review 10.4
(2017).
[20] Boehm BW, Papaccio PN. Understanding and controlling software costs. IEEE
transactions on software engineering. 1988 Oct;14(10):1462-77.
[21] Bashir, Hamdi A., and Vince Thomson. "An analogy-based model for estimating
design effort." Design Studies 22.2 (2001): 157-167.
[22] Boehm, B., Valerdi, R, “Impact of software resource estimation research on practice:
a preliminary report on achievements, synergies, and challenges,” in Software
Engineering (ICSE), 33rd Internation Conference, 2011.
[23] Bruce Benton, “Model-Based Time and Cost Estimation in a Software Testing
Environment.” Information Technology: New Generations, (lING), 2009: 801-816.
[24] C. Patel and M. Ramachandran, “SoBA: A tool support for story card based agile
software development,” Int. Conf. Softw. Eng. Theory Pract. 2008, SETP 2008, vol.
2, no. 2, pp. 17–23, 2008.
[25] Chetan Nagar,” Software efforts estimation using Use Case Point approach by
increasing technical complexity and experience factors”, IJCSE, ISSN:0975-3397,
V ol.3 No.10, Pg No 3337- 3345, October 2011.
91
[26] Chong TT, Apps M, Giehl K, Sillence A, Grima LL, Husain M. Neurocomputational
mechanisms underlying subjective valuation of effort costs. PLoS biology. 2017 Feb
24;15(2):e1002598.
[27] Fronza, Ilenia, et al. "Bringing the benefits of Agile techniques inside the classroom: a
practical guide." Agile and Lean Concepts for Teaching and Learning. Springer,
Singapore, 2019. 133-152.
[28] Dalton, Jeff. "Team Estimation Game." Great Big Agile. Apress, Berkeley, CA, 2019.
255-257.
[29] Dantas, Emanuel, et al. "Effort Estimation in Agile Software Development: An
Updated Review." International Journal of Software Engineering and Knowledge
Engineering 28.11n12 (2018): 1811-1831.
[30] Donelson, W. S. "Project Planning and Control, Datamation." (2008): 73-80.
[31] Elouris, Triant G., and Dennis Lock. Managing Aviation Projects from Concept to
Completion. Routledge, 2016.
[32] Englund, Randall, and Robert J. Graham. Creating an environment for successful
projects. Berrett-Koehler Publishers, 2019.
[33] Erdir Ungan, Numan Cizmeli, and Onur Demirors. Comparison of functional size-
based estimation and story points, based on effort estimation effectiveness in scrum
projects. In Software Engineering and Advanced Applications (SEAA), 2014 40th
EUROMICRO Conference on, pages 77–80. IEEE, 2014.
[34] Evita Coelho and Anirban Basu. Effort estimation in agile software development using
story points. development, 3(7), 2012.
[35] Fadhil, Anfal A., Rasha GH Alsarraj, and Atica M. Altaie. "Software Cost Estimation
Based on Dolphin Algorithm." IEEE Access 8 (2020): 75279-75287.
[36] Garg, S, Gupta, D, “PCA based cost estimation model for agile software development
projects,” Industrial Engineering and Operations Management (IEOM), International
Conference on, Dubai, 2015: 1-7.
[37] Gautam, Swarnima Singh, and Vrijendra Singh. "The state‐of‐the‐art in software
development effort estimation." Journal of Software: Evolution and Process 30.12
(2018): e1983.
[38] Goff S. and President P., “Twenty Years of Better IT Estimating Software © 1999,
2002,” vol. 20, 2002.
[39] Haugan, Gregory T. Effective work breakdown structures. Berrett-Koehler Publishers,
2001.
92
[40] Heck, Petra, and Andy Zaidman. "A quality framework for agile requirements: a
practitioner's perspective." arXiv preprint arXiv:1406.4692 (2014).
[41] Helmer, O. (1966). Social Technology, Basic Books, NY, 1966.
[42] Hsu, Chia-Chien, and Brian A. Sandford. "The Delphi technique: making sense of
consensus." Practical Assessment, Research, and Evaluation 12.1 (2007): 10.
[43] I. ul Hassan, N. Ahmad, and B. Zuhaira, “Calculating completeness of software project
scope definition”, Information and Software Technology., vol. 94, pp. 208–233, 2018.
[44] Idri A., Abran A., Khoshgoftaar T.M. “Estimating software project effort by analogy
based on linguistic values” in Proceedings of the Eighth IEEE Symposium on Software
Metrics, pp. 21– 30, 2002.
[45] Idri A., Khoshgoftaar T. M., Abran A.. “Can neural networks be easily interpreted in
software cost estimation?”, IEEE Trans. Software Engineering, Vol. 2, pp. 1162 –
1167,2002.
[46] JPanda, Aditi, Shashank Mouli Satapathy, and Santanu Kumar Rath. "Empirical
validation of neural network models for agile software effort estimation based on story
points." Procedia Computer Science 57 (2015): 772-781.
[47] Ishrar Hussain, Leila Kosseim, Olga Ormandjieva, “Approximation of COSMIC
functional size to support early effort estimation in Agile,” Data & Knowledge
Engineering, 2013; 85: 2-14.
[48] J. Hyvönen, “Creating shared understanding with Lego Serious Play,” Proc. Semin.
58314308 Data- Value-Driven Softw. Eng. with Deep Cust. Insight, no. 58314308, pp.
36–42, 2014.
[49] J.-M. Desharnais, L. Buglione, and B. Kocatürk, “Using the COSMIC method to
estimate Agile user stories,” Proc. 12th Int. Conf. Prod. Focus. Softw. Dev. Process
Improv. - Profes ’11, p. 68, 2011.
[50] Jain, A., and Shilpi Purohit. "Estimating Earned Business Value for Agile Projects
using Relative Scoring Method." ResearchGate (2015).
[51] Jørgensen, Magne. "A review of studies on expert estimation of software development
effort." Journal of Systems and Software 70.1-2 (2004): 37-60.
[52] Jorgensen M., “Top-Down and Bottom-Up Expert Estimation of Software
Development Effort,” Information and Software Technology, vol. 46, no. 1, pp. 3-16,
Jan. 2004.
[53] Jorgensen, M, “Relative Estimation of Software Development Effort: It matters with
What and How You compare,” in IEEE Software, 2013; 30 (2): 74-79.
93
[54] Kang Sungjoo, Okjoo Choi, Jongmoon Baik. “Model-based Dynamic Cost Estimation
and Tracking Method for Agile Software Development.” 9th IEEE/ACIS International
Conference on Computer and Information Science, pp. 743-748, 2010.
[55] Kannan V, Basit MA, Bajaj P, Carrington AR, Donahue IB, Flahaven EL, Medford R,
Melaku T, Moran BA, Saldana LE, Willett DL. “User stories as lightweight
requirements for agile clinical decision support development.” Journal of the American
Medical Informatics Association. 2019 Nov;26(11):1344-54.
[56] Kaushik A, Tayal DK, Yadav K. “A Fuzzy Approach for Cost and Time Optimization
in Agile Software Development.” In Advanced Computing and Intelligent Engineering
2020 (pp. 629-639). Springer, Singapore.
[57] Kayhan Moharreri, Alhad Vinayak Sapre, Jayashree Ramanathan, Rajiv Ramnath,
“Cost-effective Supervised Learning Models for Software Effort Estimation in Agile
Environments”, IEEE 40th Annual Computer Software and Applications Conference,
2016.
[58] Khanh NT, Daengdej J, Arifin HH. “Human stories: a new written technique in agile
software requirements.” In Proceedings of the 6th International Conference on
Software and Computer Applications 2017 Feb 26 (pp. 15-22).
[59] Khatibi Bardsiri, V, Jawawi, D N A, Hashim S Z M, Khatibi E, “Increasing the
accuracy of software development effort estimation using projects clustering,” in IET
Software, 2012; 6 (6): 461- 473.
[60] Khatibi, E, Khatibi Bardsiri, V, “Model to estimate the software development effort
based on in-depth analysis of project attributes,” in IET software, 2015; 9 (4): 109-
118.
[61] Khazaiepoor, Mahdi, Amid Khatibi Bardsiri, and Farshid Keynia. "A Dataset-
Independent Model for Estimating Software Development Effort Using Soft
Computing Techniques." Applied Computer Systems 24.2 (2019): 82-93.
[62] Khuat TT, Le MH. “An effort estimation approach for agile software development
using fireworks algorithm optimized neural network.” Int. J. Comput. Sci. Inf. Secur.
2016 Jul;14.
[63] Kocaguneli, E, “Exploiting the essential assumptions of Analogy-Based Effort
Estimation”, IEEE Transactions on Software Engineering, 2012; 38 (2): 425-438.
[64] Kocaguneli, E, Menzies, T, Keung, J W, “On the value of ensemble effort estimation,”
in Software Engineering, IEEE Transactions on, 2012; 38 (6): 1403-1416.
[65] Kocaguneli, E, Menzies, T, Keung, J, Cok, D, Madachy, R, “Active learning and effort
estimation: Finding the essential content of software effort estimation data,” in
Software Engineering, IEEE Transactions on, 2013; 39 (8): 1040-1053.
94
[66] Korytkowski, Przemyslaw, and Bartlomiej Malachowski. "Competence-based
estimation of activity duration in IT projects." European Journal of Operational
Research 275.2 (2019): 708-720.
[67] Krishna Mohan, K, Verma, A K, Srividya, A, “Software Reliability Estimation
Through Black Box and White Box Testing at Prototype Level,” Reliability, Safety
and Hazard (lCRESH), 2010: 517-522.
[68] Krishnakumar Pillai and VS Sukumaran Nair. “A model for software development
effort and cost estimation.” Software Engineering, IEEE Transactions on, 23(8):485–
497, 1997.
[69] Kuutila, Miikka, et al. "Time pressure in software engineering: A systematic
review." Information and Software Technology 121 (2020): 106257.
[70] L. Buglione and A. Abran, "Improving the User Story Agile Technique Using the
INVEST Criteria," 2013 Joint Conference of the 23rd International Workshop on
Software Measurement and the 8th International Conference on Software Process and
Product Measurement, Ankara, 2013, pp. 49-53, doi: 10.1109/IWSM-
Mensura.2013.18.
[71] L. Williams, Agile Software Development Methodologies and Practices, 1st ed., vol.
80, no. C. Elsevier Inc., 2010.
[72] Lind, K, Heldal, R, “A Practical Approach to Size Estimation of Embedded Software
Components,” in Software Engineering, IEEE Transactions on, 2012; 38 (5): 993-
1007.
[73] Liskin O, Pham R, Kiesling S, Schneider K. “Why we need a granularity concept for
user stories,” In International Conference on Agile Software Development 2014 May
26 (pp. 110-125). Springer, Cham.
[74] Li-Xin Jiang, Wan-Jiang HAN, Chen-Chen YAN, Bo-Ying SHI, “Research on Size
Estimation Model for Software system Test based on testing steps and Its Application,”
International Conference on Computer Science and Information Processing (CSIP),
2012: 1245-1248.
[75] Lucassen G, Dalpiaz F, van der Werf JM, Brinkkemper S. “Improving agile
requirements: the quality user story framework and tool”. Requirements Engineering.
2016 Sep 1;21(3):383-403.
[76] Lucassen G, Dalpiaz F, van der Werf JM, Brinkkemper S. “The use and effectiveness
of user stories in practice”. In International working conference on requirements
engineering: Foundation for software quality 2016 Mar 14 (pp. 205-222). Springer,
Cham.
[77] Lucassen G, Dalpiaz F, Van Der Werf JM, Brinkkemper S. “Forging high-quality user
stories: towards a discipline for agile requirements.” In 2015 IEEE 23rd international
requirements engineering conference (RE) 2015 Aug 24 (pp. 126-135). IEEE.
[78] M. Cohn, “User stories applied: For agile software development,” Addison-Wesley
Professional, pp. 17–29, 2004.
[79] M. Daneva and O. Pastor, “Requirements engineering: Foundation for software
quality: 22nd international working conference, REFSQ 2016, Gothenburg, Sweden,
March 14–17, 2016, proceedings,” vol. 9619, pp. 171–187, 2016.
[80] M. Usman, E. Mendes, F. Weidt, and R. Britto, “Effort estimation in agile software
development,” Proc. 10th Int. Conf. Predict. Model. Softw. Eng. - PROMISE ’14, vol.
3, no. 7, pp. 82–91, 2014.
[81] Malgonde, Onkar, and Kaushal Chari. "An ensemble-based model for predicting agile
software development effort." Empirical Software Engineering 24.2 (2019): 1017-
1055.
[82] Marbán O., Seco D. A., Cuadrado J., and García L., “Cost Drivers of a Parametric
Cost Estimation Model for Data Mining Projects (DMCOMO),” Adis, 2002.
[83] Mensah, Solomon, et al. "Investigating the significance of the bellwether effect to
improve software effort prediction: Further empirical study." IEEE Transactions on
Reliability 67.3 (2018): 1176-1198.
[84] Manish, Agrawal, and Kaushal Chari. "Impacts of process audit review and control
efforts on software project outcomes." IET Software 14.3 (2020): 293-299.
[85] Menzies, Tim, et al. "Negative results for software effort estimation." Empirical
Software Engineering 22.5 (2017): 2658-2683.
[86] MIL-HDBK-881 (1998). Work Breakdown Structures for Defence Material Items,
Department of Defence, United States of America, 1998.
[87] Mittas, N, Angelis, L, “Ranking and Clustering Software Cost Estimation Models
through a Multiple Comparisons Algorithm,” in IEEE Transactions on Software
Engineering, 2013; 39 (4): 537 – 551.
[88] Moløkken-Østvold K., Jørgensen M., Tanilkan S. S., Gallis H., Lien A. C., and Hove
S. E., “A survey on software estimation in the norwegian industry,” Proc. - Int. Softw.
Metrics Symp., pp. 208–219, 2004.
[89] Muhammad Usman, Emilia Mendes, Francila Weidt, and Ricardo Britto. Effort
estimation in agile software development: A systematic literature review. In
Proceedings of the 10th International Conference on Predictive Models in Software
Engineering, pages 82–91. ACM, 2014.
[90] Mukhopadhyay T., Vicinanza S., and Prietula M., “Examining the feasibility of a
case-based reasoning model for software effort estimation,” MIS Quarterly, Vol. 16,
No. 2, pp. 155–171, 1992.
[91] Mustapha, Hain, and Namir Abdelwahed. "Investigating the use of random forest in
software effort estimation." Procedia computer science 148 (2019): 343-352.
[92] N. Sharma, A. Bajpai, and R. Litoriya, “A comparison of software cost estimation
methods: A Survey,” Int. J. Comput. Sci. Appl., vol. 1, no. 3, pp. 121–127, 2012.
[93] Narendra Sharma, Aman Bajpai, Mr. Ratnesh Litoriya, “The International Journal of
Computer Science & Applications” (TIJCSA) ISSN – 2278-1080, Vol. 1 No.3 May
2012.
[94] Nasr-azadani B. and Mohammaddoost R., “Estimation of Agile Functionality in
Software Development,” Computer (Long. Beach. Calif)., vol. I, pp. 19–21, 2008.
[95] Nassif, A B, Azzeh, M, Capretz, L F, Ho, D, “A comparison between decision trees
and decision tree forest models for software development effort estimation,” in
Communications and Information Technology (ICCIT), Third International
Conference on, 2013: 220-224.
[96] Nassif, A B, Capretz, L F, Ho, D, “Software Effort Estimation in the early stages of
the software life cycle using a cascade correlation neural network model,” Software
Engineering, Artificial Intelligence, Networking and Parallel & Distributed Computing
(SNPD), 13th ACIS International Conference on Kyoto, 2012: 589-594.
[97] Ozkan N, Tarhan AK. Investigating Causes of Scalability Challenges in Agile
Software Development from a Design Perspective. In2019 1st International
Informatics and Software Engineering Conference (UBMYK) 2019 Nov 6 (pp. 1-6).
IEEE.
[98] Patanakul P, Rufo-McCarron R. “Transitioning to agile software development:
Lessons learned from a government-contracted program.” The Journal of High
Technology Management Research. 2018 Nov 1;29(2):181-92.
[99] Perkusich, Mirko, et al. "Intelligent software engineering in the context of agile
software development: A systematic literature review." Information and Software
Technology 119 (2020): 106241.
[100] PMBOK (2000). A Guide to the Project Management Body of Knowledge, Project
Management Institute, 2000.
[101] Popli, Rashmi, and Naresh Chauhan. "Cost and effort estimation in agile software
development." 2014 international conference on reliability optimization and
information technology (ICROIT). IEEE, 2014.
[102] Prokopova, Zdenka, Petr Silhavy, and Radek Silhavy. "Analysis of the Software
Project Estimation Process: A Case Study." Computer Science On-line Conference.
Springer, Cham, 2019.
[103] Putnam, L H, “A general empirical solution to the macro software sizing and
estimating problem,” IEEE Trans. Soft. Eng., 1978: 345-361.
[104] Putnam, L. (1978). “A general empirical solution to the macro software sizing and
estimating problem”, IEEE Transaction on Software Engineering, pp. 345-361, July
1978.
[105] Putri AY, Subriadi AP. “Software Cost Estimation Using Function Point Analysis.”
IPTEK Journal of Proceedings Series. 2019 Apr 21(1):79-83.
[106] Rashmi Popli and Naresh Chauhan. Cost and effort estimation in agile software
development. In Optimization, Reliability, and Information Technology (ICROIT),
2014 International Conference on, pages 57–61. IEEE, 2014.
[107] Raslan AT, Darwish NR. An Enhanced Framework for Effort Estimation of Agile
Projects. International Journal of Intelligent Engineering and Systems.
2018;11(3):205-14.
[108] Roberts, Paul. Guide to project management: Achieving lasting benefit through
effective change. Vol. 16. John Wiley & Sons, 2007.
[109] S. Shekhar and U. Kumar, “Review of Various Software Cost Estimation Techniques,”
Int. J. Comput. Appl., vol. 141, no. 11, pp. 975–8887, 2016.
[110] Saini, Jatinderkumar R., and Vikas S. Chomal. "A Double-Weighted Parametric Model
for Academic Software Project Effort Estimation." ICT Analysis and Applications.
Springer, Singapore, 2020. 31-45.
[111] Sakshi Garg and Daya Gupta. Pca based cost estimation model for agile software
development projects. In Industrial Engineering and Operations Management (IEOM),
2015 International Conference on, pages 1–7. IEEE, 2015.
[112] Satapathy, Shashank Mouli, and Santanu Kumar Rath. "Empirical assessment of
machine learning models for agile software development effort estimation using story
points." Innovations in Systems and Software Engineering 13.2-3 (2017): 191-200.
[113] Schnitzhofer, F, Schnitzhofer, P, “Pocket Estimator – A commercial Solution to
provide free parametric software estimation combining an expert and a learning
algorithm,” Software Engineering and Advanced Applications, 38th EUROMICRO
conference, 2012: 422-425.
[114] Sehra, Sumeet Kaur, Yadwinder Singh Brar, Navdeep Kaur, and Sukhjit Singh Sehra.
"Research patterns and trends in software effort estimation." Information and Software
Technology 91 (2017): 1-21.
[115] Sharma A. and Kushwaha D. S., “Estimation of Software Development Effort from
Requirements Based Complexity,” Procedia Technol., vol. 4, pp. 716–722, 2012.
[116] Sharma N. and Litoriya R., “Incorporating Data Mining Techniques on Software Cost
Estimation: Validation and Improvement,” Int. J. Emerg. Technol. Adv. Eng., vol. 2,
no. 3, pp. 301–309, 2012.
[117] Shepperd M., Schofield C., and Kitchenham B., “Effort estimation using analogy,” in
Proceedings of the 18th International Conference on Software Engineering, Berlin,
pp. 170–178. IEEE, 1996.
[118] Shepperd, Martin, and Chris Schofield. "Estimating Software Project Effort Using
Analogies." Series on Software Engineering and Knowledge Engineering 16 (2005): 64.
[119] Sree, Sripada Rama, and Chatla Prasada Rao. "A Study on Application of Soft
Computing Techniques for Software Effort Estimation." A Journey Towards Bio-
inspired Techniques in Software Engineering. Springer, Cham, 2020. 141-165.
[120] Sungjoo Kang, Okjoo Choi, Jongmoon Baik, “Model based dynamic cost estimation
and tracking method for Agile Software Development,” in Computer and Information
Science (ICIS), IEEE / ACIS 9th International conference on, 2010: 743-748.
[121] Tanuu, Kumar Y. "Comparative Analysis of Different Software Cost Estimation
Methods." International Journal of Computer Science and Mobile Computing 3.6
(2014): 547-557.
[122] Tausworthe, R, Deep Space Network Software Cost Estimation Model, Jet Propulsion
Laboratory Publication, 1981: 67-78.
[123] Torrecilla-Salinas, C J, Sedeno, J, Escalona, M J, Mejias, M, “Estimating, planning
and managing Agile web development projects under a value-based perspective,”
Information and Software Technology, 2015; 61: 124-144.
[124] Tripathi, Rekha, and Dr PK Rai. "Comparative Study of Software Cost Estimation
Technique." International Journal of Advanced Research in Computer Science and
Software Engineering 6.1 (2016).
[125] Tsunoda, M, Kamei, Y, Toda, K, Nagappan, M, Fushida, K, Ubayashi, N, “Revisiting
software development effort estimation based on early phase development activities,”
Mining Software Repositories (MSR),10th IEEE Working Conference on, San
Francisco, CA, 2013: 429-438.
[126] Usman, Muhammad, et al. "Developing and using checklists to improve software effort
estimation: A multi-case study." Journal of Systems and Software 146 (2018): 286-
309.
[127] V Mahnic and N Zabkar. “Measuring progress of scrum-based software projects.”
Elektronika ir Elektrotechnika, 18(8):73–76, 2012.
[128] Vahid Khatibi, Dayang N. A. Jawawi. “Software Cost Estimation Methods: A Review,”
Journal of Emerging Trends in Computing and Information Sciences, Vol. 2, No. 1,
2010.
[129] Valentina Lenarduzzi, Ilaria Lunesu, Martina Matta, and Davide Taibi. "Functional
size measures and effort estimation in agile development: a replicated study."
International Conference on Agile Software Development. Springer, Cham, 2015.
[130] Virine L, Trumper M. “Project decisions: the art and science.” Berrett-Koehler
Publishers; 2019 Nov 5.
[131] Vladimir K, Nikita B. “Estimation of the tasks complexity for large-scale high-tech
projects using Agile methodologies.” Procedia computer science. 2018 Jan 1; 145:266-
74.
[132] Walkerden F. and Jeffery R., “An empirical study of analogy-based software effort
estimation,” Empirical Software Engineering, Vol. 4, No. 2, pp. 135–158, 1999.
[133] Walston, C E, Felix, C P, “A method of programming measurement and estimation,”
IBM Systems Journal, 1977; 16 (1): 54-73.
[134] Ziauddin, Sh K. Tipu, Khairuz Zaman, and Shahrukh Zia. "Software cost estimation
using soft computing techniques." Advances in Information Technology and
Management (AITM) 2.1 (2012): 233-238.
[135] Ziauddin, A. Rashid, and K. uz Zaman, “Software cost estimation for component-
based fourth-generation-language software applications,” IET Software., vol. 5, no. 1,
p. 103, 2011.
[136] Ziauddin SK, Zia S. “An effort estimation model for agile software development.”
Advances in computer science and its applications (ACSA). 2012 Jul;2(1):314-24.