http://www.flickr.com/photos/28481088@N00/315671189/sizes/o/
Schedule & effort
Problem
• Our ability to realistically plan and schedule projects depends on our ability to estimate project costs and development efforts
• In order to come up with a reliable cost estimate, we need to have a firm grasp on the requirements, as well as our approach to meeting them
• Typically costs need to be estimated before these are fully understood
1. Figure out what the project entails – requirements, architecture, design
2. Figure out dependencies & priorities – what has to be done in what order?
3. Figure out how much effort it will take
4. Plan, refine, plan, refine, …
Planning big projects
What are project costs?
• For most software projects, costs are:
  – Hardware costs
  – Travel & training costs
  – Effort costs
Aggravating & mitigating factors
• Market opportunity
• Uncertainty/risks
• Contractual terms
• Requirements volatility
• Financial health
• Opportunity costs
Cost drivers
• Software reliability
• Size of application database
• Complexity
• Analyst capability
• Software engineering capability
• Applications experience
• Programming language expertise
• Performance requirements
• Memory constraints
• Volatility of virtual machine environment
• Use of software tools
• Application of software engineering methods
• Required development schedule
What are effort costs?
• Effort costs are typically the largest of the three cost types (hardware, travel & training, and effort), and the most difficult to estimate.
• Effort costs include:
  – Developer hours
  – Heating, power, space
  – Support staff: accountants, administrators, cleaners, management
  – Networking and communication infrastructure
  – Central facilities such as rec room & library
  – Social security and employee benefits
Software cost estimation – Boehm (1981)
• Algorithmic cost modeling
  – Base estimate on project size (lines of code)
• Expert judgment
  – Ask others
• Estimation by analogy
  – Cost based on experience with similar projects
• Parkinson’s Law
  – Project time will expand to fill time available
• Pricing to win
  – Cost will be whatever customer is willing to pay
• Top-down estimation
  – Estimation based on function/object points
• Bottom-up estimation
  – Estimation based on components
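As a rough illustration of the algorithmic approach, here is a minimal sketch of Boehm's basic COCOMO formula (effort = a · KLOC^b). The coefficients a = 2.4, b = 1.05 are the published "organic mode" values from Boehm (1981); the 10 KLOC input is a made-up example.

```python
# Sketch of algorithmic cost modeling in the style of Boehm's basic
# COCOMO (1981). Treat the result as a rough estimate, not a commitment.

def basic_cocomo_effort(kloc, a=2.4, b=1.05):
    """Estimated effort in person-months for a project of `kloc`
    thousand lines of code (organic-mode coefficients by default)."""
    return a * (kloc ** b)

effort = basic_cocomo_effort(10)  # a hypothetical 10 KLOC project
print(round(effort, 1))           # → 26.9 person-months
```

Note that the estimate is driven entirely by predicted size, which is itself hard to know early on; this is exactly the chicken-and-egg problem the "Problem" slide describes.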
Productivity metrics
• Lines of code
  – Simple, but not very meaningful metric
  – Easy to pad; affected by programming language
  – How to count revisions/debugging, etc.?
• Function points
  – Amount of useful code produced (goals/requirements met)
  – Less volatile, more meaningful, but not perfect
Function points
Function points are computed by first calculating an unadjusted function point count (UFC). Counts are made for the following categories (Fenton, 1997):
– External inputs – those items provided by the user that describe distinct application-oriented data (such as file names and menu selections)
– External outputs – those items provided to the user that generate distinct application-oriented data (such as reports and messages, rather than the individual components of these)
– External inquiries – interactive inputs requiring a response
– External files – machine-readable interfaces to other systems
– Internal files – logical master files in the system
Each of these is then assessed for complexity and given a weighting from 3 (for simple external inputs) to 15 (for complex internal files).
Unadjusted Function Point Count (UFC)

                     Weighting factor
Item                 Simple  Average  Complex
External inputs        3       4        6
External outputs       4       5        7
External inquiries     3       4        6
External files         7      10       15
Internal files         5       7       10
Each count is multiplied by its corresponding complexity weight and the results are summed to provide the UFC
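The multiply-and-sum step can be sketched directly from the weighting table above. The counts in `example` are invented illustration numbers; the weights are those given in the table (Fenton, 1997).

```python
# Unadjusted Function Point Count (UFC): each category count is
# multiplied by its complexity weight and the products are summed.

WEIGHTS = {                        # (simple, average, complex)
    "external inputs":    (3, 4, 6),
    "external outputs":   (4, 5, 7),
    "external inquiries": (3, 4, 6),
    "external files":     (7, 10, 15),
    "internal files":     (5, 7, 10),
}

def ufc(counts):
    """`counts` maps item -> (n_simple, n_average, n_complex)."""
    return sum(n * w
               for item, ns in counts.items()
               for n, w in zip(ns, WEIGHTS[item]))

example = {
    "external inputs":    (2, 1, 0),   # 2*3 + 1*4 = 10
    "external outputs":   (0, 2, 0),   # 2*5 = 10
    "external inquiries": (1, 0, 0),   # 1*3 = 3
    "external files":     (0, 1, 0),   # 1*10 = 10
    "internal files":     (0, 0, 1),   # 1*10 = 10
}
print(ufc(example))  # → 43
```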
Object points
Similar to function points (used to estimate projects based heavily on reuse, scripting and adaptation of existing tools)
• Number of screens (simple ×1, complex ×2, difficult ×3)
• Number of reports (simple ×2, complex ×5, difficult ×8)
• Number of custom modules written in languages like Java/C (×10)
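Using the multipliers on this slide, an object-point count can be sketched as follows; the screen/report/module counts in the example call are hypothetical.

```python
# Object-point count from the per-item multipliers on the slide.

def object_points(screens, reports, modules_3gl):
    """`screens`/`reports` map 'simple'/'complex'/'difficult' -> count;
    `modules_3gl` is the number of custom modules in e.g. Java/C."""
    screen_w = {"simple": 1, "complex": 2, "difficult": 3}
    report_w = {"simple": 2, "complex": 5, "difficult": 8}
    total = sum(n * screen_w[c] for c, n in screens.items())
    total += sum(n * report_w[c] for c, n in reports.items())
    total += 10 * modules_3gl    # each custom 3GL module counts ×10
    return total

op = object_points(screens={"simple": 3, "complex": 1},
                   reports={"simple": 2},
                   modules_3gl=1)
print(op)  # 3*1 + 1*2 + 2*2 + 1*10 = 19
```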
COCOMO II Model
• Supports spiral model of development
• Supports component composition, reuse, customization
• 4 sub-models:
– Application composition model – assumes system written with components, used for prototypes, development using scripts, db’s etc (object points)
– Early design model – After requirements, used during early stages of design (function points)
– Reuse model – Integrating and adapting reusable components (LOC)
– Post architecture model – More accurate method, once architecture has been designed (LOC)
• Computes software development effort as a function of program size and a set of "cost drivers".
• Product attributes
  – Required software reliability
  – Size of application database
  – Complexity of the product
• Hardware attributes
  – Run-time performance constraints
  – Memory constraints
Intermediate COCOMO
• Personnel attributes
  – Analyst capability
  – Software engineering capability
  – Applications experience
  – Virtual machine experience
  – Programming language experience
• Project attributes
  – Use of software tools
  – Application of software engineering methods
  – Required development schedule
Intermediate COCOMO
• Each of the 15 attributes receives a rating on a six-point scale that ranges from "very low" to "extra high" (in importance or value). An effort multiplier from the table below applies to the rating. The product of all effort multipliers results in an effort adjustment factor (EAF). Typical values for EAF range from 0.9 to 1.4.
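The EAF computation is just a product of the 15 multipliers. A minimal sketch, where the two non-nominal multiplier values are invented for illustration, not taken from Boehm's published table:

```python
# Effort adjustment factor (EAF) = product of the 15 cost-driver
# multipliers; a nominal rating contributes a multiplier of 1.0.
import math

def eaf(multipliers):
    return math.prod(multipliers)

# e.g. one driver rated high (illustrative multiplier 1.15), one rated
# low (0.85), everything else nominal:
drivers = [1.15, 0.85] + [1.0] * 13
print(round(eaf(drivers), 4))  # → 0.9775
```

The nominal effort estimate is then multiplied by the EAF, so a project with mostly favorable ratings (EAF below 1.0) is predicted to take less effort than the size-based estimate alone suggests.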
Intermediate COCOMO
Example: Twitter repression report
Repressed citizen
UC#1: Report repression
UC#2: Clarify tweet
Concerned public
UC#3: View reports
UC#3a: View on map
UC#3b: View as RSS feed
One possible architecture
(Figure: architecture diagram.) Components: Twitter façade, Geocoder façade, Tweet processor, Database (MySQL), Apache+PHP, Mapping web site (Google maps), RSS web service
Activity graph: shows dependencies of a project’s activities
(Figure: activity graph.)
Milestones: 1a, 1b, 1c, 2, 3, 3a, 3b, 4
Activities: Do Twitter façade; Do geocode façade; Design db; Do tweet processor; Test & debug components; Do map output; Do RSS output; Test & debug map; Test & debug RSS; Advertise
Milestone 2: DB contains real data
Milestone 3: DB contains real, reliable data
Milestone 4: Ready for public use
• Filled circles for start and finish
• One circle for each milestone
• Labeled arrows indicate activities
  – What activity must be performed to get to a milestone?
  – Dashed arrows indicate “null” activities
Activity graph: shows dependencies of a project’s activities
• Ways to figure out effort for activities
  – Expert judgment
  – Records of similar tasks
  – Effort-estimation models
  – Any combination of the above
Effort
• Not a terrible way to make estimates, but…
  – Estimates often vary widely
  – Often wrong
  – Can be improved through iteration & discussion
• How long to do the following tasks?
  – Read tweets from Twitter via API?
  – Send tweets to Twitter via API?
  – Generate reports with Google maps?
Effort: expert judgment
• Personal software process (PSP)
  – Record the size of a component (lines of code)
    • Breakdown: # of lines added, reused, modified, deleted
  – Record time taken
    • Breakdown: planning, design, implement, test, …
  – Refer to this data when making future predictions
• Can also be done at the team level
Effort: records of similar tasks
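A minimal sketch of what a PSP-style log might look like in practice: record size and time per finished component, then use the historical rate to predict a new task. The component names, line counts, and hours below are entirely made up for illustration.

```python
# Hypothetical PSP-style record keeping: past (component, lines, hours)
# entries feed a simple rate-based prediction for future work.

records = [
    # (component, lines_of_code, hours)
    ("twitter facade",  400, 20.0),
    ("geocoder facade", 300, 18.0),
]

def predict_hours(lines_planned):
    """Predict time for a new component from historical hours-per-line."""
    total_lines = sum(lines for _, lines, _ in records)
    total_hours = sum(hours for _, _, hours in records)
    return lines_planned * total_hours / total_lines

print(round(predict_hours(500), 1))  # → 27.1
```

The same idea scales to the team level: pool everyone's records and derive a team-wide rate instead of a personal one.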
• Algorithmic (e.g.: COCOMO: constructive cost model)
  – Inputs = description of project + team
  – Outputs = estimate of effort required
• Machine learning (e.g.: CBR)
  – Gather descriptions of old projects + time taken
  – Run a program that creates a model. You now have a custom algorithmic method.
    • Same inputs/outputs as the algorithmic estimation method
Effort: estimation models
1. Assess the system’s complexity
2. Compute the # of application points
3. Assess the team’s productivity
4. Compute the effort
Using COCOMO-like models
Assessing complexity
e.g.: A screen for editing the database involves 6 database tables, and it has 4 views. This would be a “medium complexity screen”.
This assessment calls for lots of judgment.
Pfleeger & Atlee
Computing application points (a.p.)
e.g.: A medium complexity screen costs 2 application points.
3GL component = reusable programmatic component that you create
Pfleeger & Atlee
Assessing team capabilities
e.g.: Productivity with low experience + nominal CASE: productivity = (7+13)/2 = 10 application points per person-month (assuming no vacation or weekends!)
Pfleeger & Atlee
• CASE tools offer many benefits for developers building large-scale systems.
• As spiraling user requirements continue to drive system complexity to new levels, CASE tools enable engineers to abstract away from the entanglement of source code, to a level where architecture & design become apparent and easier to understand and modify.
• The larger a project, the more important it is to use a CASE tool in software development.
CASE (computer-aided SE) tools
• As developers interact with portions of a system designed by their colleagues, they must quickly seek a subset of classes and methods and assimilate an understanding of how to interface with them.
• In a similar sense, management must be able, in a timely fashion and from a high level, to look at a representation of a design and understand what’s going on. Hence CASE tools are used.
CASE TOOLS
Identify screens, reports, components
(Figure: architecture diagram, repeated.) Components: Twitter façade, Geocoder façade, Tweet processor, Database (MySQL), Apache+PHP, Mapping web site (Google maps), RSS web service
3GL components - Tweet processor - Twitter façade - Geocoder façade
Reports - Mapping web site - RSS web service
Use complexity to compute application points
3GL components - Tweet processor - Twitter façade - Geocoder façade
Reports - Mapping web site - RSS web service
Simple model assumes thatall 3GL components are 10application points.
Each displays data from only a few database tables (3? 4?). Neither has multiple sections. Each is probably a “simple” report: 2 application points.
3*10 = 30 a.p.
2*2 = 4 a.p.
30 + 4 = 34 a.p.
• Assume at your company the team has…
  – Extensive experience with websites, XML
  – But no experience with Twitter or geocoders
  – Since 30 of the 34 a.p. are on this new stuff, assume very low experience
  – Virtually no CASE support… very low
• Therefore productivity = 10 application points per person-month, so effort = 34 / 10 = 3.4 person-months
• Note: this assumes no vacation or weekends
Assess the team’s productivity& compute effort
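Putting the worked example's numbers together in code: 34 application points at a productivity of 10 a.p. per person-month (the low-experience, very-low-CASE rate from the productivity table). The function name is mine, but the figures are the slides' own.

```python
# Final step of the COCOMO-like application-point estimate:
# effort = application points / productivity.

def effort_person_months(application_points, productivity):
    """Productivity is in application points per person-month."""
    return application_points / productivity

ap = 3 * 10 + 2 * 2                   # three 3GL components + two simple reports
print(ap)                             # → 34 application points
print(effort_person_months(ap, 10))   # → 3.4 person-months
```

Remember that this 3.4 person-months covers implementation only; design, testing, debugging, and anything not in the application-point count (like advertising) must be estimated separately.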
Distribute the person-months over the activity graph
(Figure: activity graph with effort estimates.)
Milestones: 1a, 1b, 1c, 2, 3, 3a, 3b, 4
Activities (person-months): Design db (0.25); Do Twitter façade (1.25); Do geocode façade (1.25); Do tweet processor (1.00); Test & debug components (3.75); Do map output (0.25); Do RSS output (0.25); Test & debug map (0.25); Test & debug RSS (0.25); Advertise (1.0?)
• Divide person-months between implementation and other activities (design, testing, debugging)
  – Oops, forgot to include an activity for testing and debugging the components… revise activity graph
• Notice that some activities aren’t covered
  – E.g.: advertising; either remove from diagram or use other methods of estimation
The magic behind distributing person-months
• Ways to get more accurate numbers:
  – Revise numbers based on expert judgment or other methods mentioned
  – Perform a “spike”… try something out and actually see how long it takes
  – Use more sophisticated models to analyze how long components will really take
  – Use several models and compare
• Expect to revise estimates as project proceeds
Do you believe those numbers?
Further analysis may give revised estimates…
(Figure: revised activity graph.)
Milestones: 1a, 1b, 1c, 2, 3, 3a, 3b
Activities (person-months): Design db (0.25); Do Twitter façade (1.50); Do geocode façade (0.75); Do tweet processor (0.50); Test & debug components (4.25); Do map output (0.50); Do RSS output (0.25); Test & debug map (0.25); Test & debug RSS (0.25)
• Sort all the milestones in “topological order”– i.e.: sort milestones in terms of dependencies
• For each milestone (in order), compute the earliest that the milestone can be reached from its immediate dependencies
Critical path: longest route through the activity graph
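The earliest-reach computation over milestones in topological order can be sketched as follows. The graph below is a simplified, hypothetical fragment of the Twitter example (not the slides' full graph); null activities are modeled as edges of duration 0.0.

```python
# Earliest time to reach each milestone: process milestones in
# topological order, taking the max over incoming activities of
# earliest(predecessor) + activity duration.

edges = {                       # milestone -> list of (predecessor, effort)
    "1a": [("start", 1.50)],    # Do Twitter façade
    "1b": [("start", 0.75)],    # Do geocode façade
    "1c": [("start", 0.25)],    # Design db
    "2":  [("1a", 0.0),         # null (dashed) activities...
           ("1b", 0.0),
           ("1c", 0.50)],       # ...plus Do tweet processor
    "finish": [("2", 4.25)],    # Test & debug components
}

earliest = {"start": 0.0}
for m in ["1a", "1b", "1c", "2", "finish"]:   # a topological order
    earliest[m] = max(earliest[src] + d for src, d in edges[m])

print(earliest["finish"])  # → 5.75
```

The critical path is the chain of activities whose durations sum to this finish time; shortening any activity off that path does not shorten the project.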
Example: computing critical path
(Figure: activity graph annotated with earliest milestone times.)
Milestones: 1a, 1b, 1c, 2, 3, 3a, 3b
Activities (person-months): Design db (0.25); Do Twitter façade (1.50); Do geocode façade (0.75); Do tweet processor (0.50); Test & debug components (4.25); Do map output (0.50); Do RSS output (0.25); Test & debug map (0.25); Test & debug RSS (0.25)
Earliest times shown at milestones: 0.25, 1.50, 1.50, 2.00, 6.25, 6.75, 6.50, 7.00
Example: tightening the critical path
(Figure: activity graph after tightening the critical path.)
Milestones: 1a, 1b, 1c, 2, 3, 3a, 3b
Activities (person-months): Design db (0.25); Do Twitter façade (1.50); Do geocode façade (0.75); Do tweet processor (0.50); Test & debug components (4.25); Do map output (0.50); Do RSS output (0.25); Test & debug map (0.25); Test & debug RSS (0.25)
Earliest times shown at milestones: 0.25, 1.50, 1.50, 2.00, 2.00, 2.50, 2.25, 6.25
What if we get started on the reports as soon as we have a (buggy) version of the database and components?
• Shows activities on a calendar
  – Useful for visualizing ordering of tasks & slack
  – Useful for deciding how many people to hire
• One bar per activity
• Arrows show dependencies between activities
• Milestones appear as diamonds
Gantt Chart
Example Gantt chart
Gantt chart quickly reveals that we only need to hire two people (blue & green)
• Assume you are scheduling with a set of requirements and an architecture already in hand.
• In contrast, assume that you are scheduling before you have requirements and an architecture. How different would that be?
• What are the pros and cons of each approach?
Two ways of scheduling
• Updated vision statement
  – Your chance for extra credit!
  – Thursday presentation: each team is given 15 minutes to present how its vision has become clearer over this time (PowerPoint presentation)
  – You can include your requirements gathering, constraints, and other details of your work so far.
  – What are your future plans?
• You will receive your midterms back tomorrow.
What’s next for you?