Unified Process on
Software Implementation & Testing
CEN 5016
Software Engineering
Dr. David A. Workman
School of EE and Computer Science
March 20, 2007
Implementation (USP)
• Purpose
To translate the design into machine-readable and executable form. Specifically to:
– Plan system integrations required in each implementation increment or iteration
– Distribute the system by mapping executable components to nodes in the deployment model.
– Implement design classes and subsystems found in the Design Model.
– Unit test the components, and integrate them by compiling and linking them together into one or more executables, before they are sent to integration and system tests.
• Artifacts
– Implementation Model
• Components: <executable>, <file>, <library>, <dbtables>, <document>
• Interfaces
• Implementation subsystems
– Components
– Implementation Subsystems
– Interfaces
– Build Plan
Integration & System Test (USP)
• Purpose
To verify the result of each build and to validate the complete system via acceptance tests.
– Plan tests required in each iteration, including integration and system tests. Integration tests are required after each build, while system tests are done as part of client acceptance and system delivery.
– Design and implement test plans by creating test cases. Test cases specify what to test and define procedures and programs for conducting test exercises.
– Perform various test cases to capture and verify test results. Defects are formally captured, tracked and removed before delivery.
Verification: testing a work product to determine whether or not it conforms to the product's specifications. "Did we build the system right?" (Boehm)
Validation: testing a complete system to determine whether or not it satisfies requirements and solves the problem users need to have solved. "Did we build the right system?" (Boehm)
Integration & Test (USDP)
• Concepts
– Test Strategies (both are illustrated in the sketch at the end of this slide)
• Black box testing: demonstrating correct component or subsystem behavior by observing output generated at its interface as a function of inputs supplied at its interface.
• White box testing: demonstrating that valid computational paths and interactions are observed internal to a component or subsystem as a function of given inputs.
– Test Types
• Installation tests: verify that the system can be installed and run correctly on the client's platform.
• Configuration tests: verify that the system can run correctly in different configurations.
• Negative tests: attempt to cause the system to fail by presenting inputs or loads for which it was not designed.
• Stress tests: performance tests designed to identify problems when resources are limited.
• Regression tests: after a change is made to a component, re-running tests that the component had successfully passed before the change was made, to ensure that previous capability remains valid.
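The black box / white box distinction is easiest to see in code. Below is a minimal C++ sketch against a hypothetical Stack class (the class and both tests are illustrative assumptions, not material from the lecture): the black-box test checks only outputs at the interface for given inputs, while the white-box test is derived from the code itself to drive a specific internal path.

#include <cassert>
#include <stdexcept>
#include <vector>

// Hypothetical component under test (not from the lecture).
class Stack {
public:
    void push(int x) { data.push_back(x); }
    int pop() {
        if (data.empty()) throw std::underflow_error("empty stack");
        int x = data.back();
        data.pop_back();
        return x;
    }
private:
    std::vector<int> data;
};

// Black-box test: only inputs and outputs at the interface are observed.
void blackBoxTest() {
    Stack s;
    s.push(1);
    s.push(2);
    assert(s.pop() == 2);   // expected output for the given input sequence
    assert(s.pop() == 1);
}

// White-box test: chosen from the code to drive a specific internal
// path, here the empty-stack branch inside pop().
void whiteBoxTest() {
    Stack s;
    bool threw = false;
    try { s.pop(); } catch (const std::underflow_error&) { threw = true; }
    assert(threw);          // the error path was exercised
}

int main() {
    blackBoxTest();
    whiteBoxTest();
    return 0;
}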
Integration & Test (USDP)
• Artifacts
– Test Model
• A set of Test Cases
• A set of Test Procedures
• A set of Test Components
– Test Cases: Designed to verify certain use cases, or use case scenarios.
• Demonstrates that post-conditions of use cases are satisfied, if their pre-conditions are met.
• Demonstrates that a sequence of actions defined by the use case is followed.
• Identifies the test components and their features to be tested.
• Predicts or describes expected component output and behavior.
• Defines the inputs necessary to produce the desired test output.
• Specifies the conditions that must exist to conduct the test case.
– Test Procedures: Specify how to perform one or several test cases.
• Test programs (or "harnesses" or "benches") and shell scripts may have to be executed as part of a test procedure.
• Defines a sequence of steps that must be followed to complete the procedure, along with the inputs and outputs expected at each step.
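As an illustration of how these artifacts relate, here is a minimal C++ sketch (the wordCount component, the TestCase fields, and runProcedure are all hypothetical): each TestCase records the inputs necessary to produce the desired output and the predicted output; a procedure performs one or several cases and yields a verdict.

#include <cassert>
#include <sstream>
#include <string>

// Hypothetical component under test (illustrative only).
int wordCount(const std::string& line) {
    std::istringstream in(line);
    std::string word;
    int n = 0;
    while (in >> word) ++n;
    return n;
}

// A test case records what the artifact above specifies: the condition,
// the input, and the predicted output.
struct TestCase {
    std::string name;
    std::string input;      // inputs necessary to produce the output
    int expected;           // predicted component output/behavior
};

// A test procedure: the sequence of steps that performs one or several
// test cases; its verdict would feed the Test Evaluation artifact.
bool runProcedure(const TestCase cases[], int n) {
    bool allPassed = true;
    for (int i = 0; i < n; ++i)
        if (wordCount(cases[i].input) != cases[i].expected)
            allPassed = false;
    return allPassed;
}

int main() {
    const TestCase cases[] = {
        {"empty line", "", 0},
        {"three words", "unified software process", 3},
    };
    assert(runProcedure(cases, 2));
    return 0;
}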
Integration & Test (USDP)
• Artifacts (continued)
– Test Components: Automate one or several test procedures, or parts of them.
• Test drivers
• Test scripts
• Test harnesses
– Test Plan
• Describes testing requirements, strategies, resources, and schedule for each build and system release.
• Describes what test cases should be performed and passed for each build and/or system release.
– Test Evaluations: Capture the results of test cases; declare whether or not a test case was successful; generate defect or anomaly reports for tracking.
– Anomaly Reports: Document test anomalies – unexpected test events or results. Capture and track anomalies during development. Ensure all anomalies have been satisfactorily addressed or removed before delivery (customer signoff).
IEEE Std 829 for Software Testing
• Test Plan: To prescribe the scope, approach, resources, and schedule of testing activities; to identify the items to be tested, the features to be tested, the testing tasks to be performed, the personnel responsible for each task, and the risks associated with the plan.
• Test Design Spec
• Test Case Spec
• Test Procedure Spec
• Test Item Transmittal Report
• Test Log
• Test Incident Report
• Test Summary Report
Test Work Flow
[Workflow diagram, roles and activities: Test Engineer (Plan Tests, Design Tests); Component Engineer (Analyze a Class, Implement Tests); Integration Tester (Perform Integration Tests); System Tester (Perform System Tests).]
Generic Testing for OO Systems
• Reference: Testing Object-Oriented Systems: Models, Patterns, and Tools, by Robert V. Binder. Addison-Wesley, © 2000. ISBN 0-201-80938-9.
Testing OO Systems
• Fault Models: A fault model answers a simple question about a test technique: why do the features called out by the technique warrant our effort? A fault model therefore identifies the relationships and components of the system under test that are most likely to have faults.
• Bug Hazard: A circumstance that increases the chance of a bug. Example: type coercion in C++ is a hazard because the rules are complex and may depend on hidden declarations when working with a particular class (see the sketch below).
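A minimal C++ sketch of this hazard (the Account class and audit function are hypothetical): an implicit converting constructor is exactly the kind of hidden declaration that lets an unintended coercion compile silently.

#include <iostream>

// Hypothetical class (illustrative only). The converting constructor
// below is the kind of hidden declaration the slide refers to.
class Account {
public:
    Account(long openingCents) : cents(openingCents) {}  // implicit!
    long cents;
};

void audit(const Account& a) { std::cout << a.cents << " cents\n"; }

int main() {
    audit(Account(100));  // intended use
    audit(42);            // also compiles: 42 is silently coerced to
                          // Account(42), probably not what was meant
    // Declaring the constructor 'explicit' removes this hazard by
    // forcing the conversion to be written out at the call site.
    return 0;
}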
• Test Strategies
1. Conformance-directed testing: seeks to establish conformance to requirements and specifications. Tests are designed to be sufficiently representative of the essential features of the system under test. (Implies a non-specific fault model – any fault that violates conformance is equal to any other.)
2. Fault-directed testing: seeks to reveal implementation faults. It is motivated by the observation that conformance can be demonstrated for an implementation that contains faults. (Implies a specific fault model – directs testing toward particular kinds of faults).
Testing OO Systems
• Conformance- vs. Fault-directed Testing
– Conformance-directed tests should be feature sufficient: they should, at least, exercise all specified features of the system.
– Fault-directed tests should be fault efficient: they should have a high probability of revealing a fault of a particular type or types.
• Fault Hazards in OO Programming Languages
“Object-oriented technology does not in any way obviate the basic motivation for software testing. In fact, it poses new challenges. … Although the use of OOPLs may reduce some kinds of errors, they increase the chance of others.”
– Dynamic binding and complex inheritance structures create many opportunities for faults due to unanticipated bindings or misinterpretation of correct usage.
– Interface programming errors are a leading cause of faults in procedural languages. OO programs typically have many small components and therefore more interfaces. Interface errors are more likely, other things being equal.
– State control errors are likely. Objects preserve state, but state control (the acceptable sequence of events) is typically distributed over an entire program.
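A minimal C++ sketch of the state control hazard (Connection and statusReporter are hypothetical): the sequencing rule lives inside the object, but control of the sequence is distributed across the program, so a distant client can violate it.

#include <stdexcept>

// Hypothetical class with a sequential constraint: open() must precede
// send(). The object preserves the state...
class Connection {
public:
    void open() { opened = true; }
    void send(int byte) {
        if (!opened) throw std::logic_error("send before open");
        (void)byte;  // transmit...
    }
private:
    bool opened = false;
};

// ...but control of the sequence is distributed over the program: this
// client, written far from the class, violates the constraint.
void statusReporter(Connection& c) {
    c.send(0x01);  // state control error: nothing opened c on this path
}

int main() {
    Connection c;
    try {
        statusReporter(c);
    } catch (const std::logic_error&) {
        // the distributed sequencing constraint was violated at runtime
    }
    return 0;
}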
OO Fault Studies
• Steven P. Fiedler: “Object-oriented unit testing” in Hewlett-Packard Journal, Vol. 40, No. 2, April 1989.
In reference to a C++ system for a medical electronics application, Fiedler reports that “on the average, a defect was uncovered for every 150 LOC, and correspondingly, the mean density exceeded 5.1 per KSLOC.”
• V. Basili, L.C. Briand, and W. L. Melo: “A validation of object-oriented design metrics as quality indicators” in IEEE Transactions on Software Engineering, Vol. 22, No. 10, October 1996.
The conclusions of a study done on small C++ systems (less than 30 KSLOC) are:
– Classes that send relatively more messages to instance variables and message parameter objects are more likely to be buggy;
– Classes with greater depth (number of superclasses) and higher specialization (number of new and overridden methods) are more likely to be buggy;
– No significant correlation was found between classes that lack cohesion (number of pairs of methods that have no attributes in common) and the relative frequency of bugs.
OO Fault Studies
• Martin Shepperd & M. Cartwright: “An empirical study of object-oriented metrics,” Tech. Report TR-97/01, Dept. of Computing, Bournemouth University, U.K.
In reference to a 133 KSLOC C++ system for a telecommunications application, the report states, “classes that participated in inheritance structures were 3 times more defect prone than other classes.”
• Capers Jones: The Economics of Object-Oriented Software Development. Software Productivity Research, Inc., Burlington, MA, April 1997.
Summarizing data gathered from 150 development organizations and 600 projects, Jones reports:
1. The OO learning curve is very steep and causes many first-use errors.
2. OO analysis and design seem to have higher defect potential than older design methods.
3. Defect removal efficiency against OO design problems seems lower than against older design methods.
4. OOPLs seem to have lower defect potential than procedural languages.
5. Defect removal efficiency against programming errors is roughly equal to, or somewhat better than, removal efficiency against older procedural language errors.
An OO Testing Manifesto
• Observations
– The hoped-for reduction in OO testing due to reuse is illusory.
– Inheritance, polymorphism, late binding, and encapsulation present some new problems for test case design, testability, and coverage analysis.
– To the extent that OO development is iterative and incremental, test planning, design, and execution must be similarly iterative and incremental.
– Regression testing and its antecedents must be considered essential techniques for professional OO development.
• Guidance
– Unique Bug Hazards: Test design must be based on the bug hazards that are unique to the OO programming paradigm.
– OO Test Automation: Application-specific test tools must be OO and must offset obstacles to testability intrinsic to the OO programming paradigm.
– Test-effective process: The testing process must adapt to iterative and incremental development and mosaic modularity. The intrinsic structure of the OO paradigm requires that test design must consider method, class, and cluster scope simultaneously.
An OO Testing Manifesto
• Unique Bug Hazards
1. The interaction of individually correct superclass and subclass methods can be buggy and must be systematically exercised.
2. Superclass test suites must be rerun on subclasses, and should be constructed so that they can be reused to test any subclass (see the sketch at the end of this slide).
3. Unanticipated bindings that result from scoping nuances in multiple and repeated inheritance can produce bugs that are triggered only by certain superclass/subclass interactions. Subclasses must be tested at “flattened” scope, and superclass test cases must be reusable.
4. Poor design of class hierarchies supporting dynamic binding (polymorphic servers) can result in failures of a subclass to observe superclass contracts. All bindings must be systematically exercised to reveal these bugs.
5. The loss of intellectual control that results from spaghetti polymorphism (the yo-yo problem) is a bug hazard. A client of a polymorphic server can be considered to have been adequately tested only if all server bindings that the client can generate have been exercised.
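As referenced in hazard 2 above, here is a minimal C++ sketch (Shape, Square, and the contract tests are hypothetical) of a superclass test suite written only against the superclass contract, so it can be rerun unchanged on any subclass; rerunning it also exercises subclass bindings (hazards 1 and 4).

#include <cassert>

// Hypothetical hierarchy (illustrative only).
class Shape {
public:
    virtual ~Shape() {}
    virtual double area() const = 0;
    virtual double scaledArea(double k) const { return k * k * area(); }
};

class Square : public Shape {
public:
    explicit Square(double s) : side(s) {}
    double area() const override { return side * side; }
private:
    double side;
};

// Superclass test suite, written against Shape's contract only, so it
// can be rerun, unchanged, on any subclass (hazard 2).
void runShapeContractTests(const Shape& s) {
    assert(s.area() >= 0.0);                      // contract: non-negative
    assert(s.scaledArea(2.0) == 4.0 * s.area());  // contract: scaling law
}

int main() {
    Square sq(3.0);
    runShapeContractTests(sq);  // the same suite exercises the subclass
                                // binding of area() (hazards 1 and 4)
    return 0;
}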
An OO Testing Manifesto
• Unique Bug Hazards (continued)
6. Classes with sequential constraints on method activation, and their clients, can have control bugs. The required control behavior can be systematically tested using a state machine model (see the sketch at the end of this slide).
7. Subclasses can accept illegal superclass method sequences or generate corrupt states by failing to observe the state model of the superclass. Where sequential constraints exist, subclass testing must be based on a flattened state model.
8. A generic class instantiated with a type parameter for which the generic class has not been tested is almost the same as completely untested code. Each generic instantiation must be tested to verify it for that parameter.
9. The difficulty and complexity of implementing multiplicity constraints can easily lead to incorrect state/output when an element of a composition group is added, updated, or deleted. The implementation of multiplicity must be systematically tested.
10. Bugs can easily hide when an update method (a method that changes instance state) computes a corrupt state, the class interface does not make the corruption visible (i.e., there is no public feature for reporting a corrupt state, and no exception is thrown), and the corruption does not inhibit other operations. Def-use sequences of method calls must be systematically tested.
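As referenced in hazard 6 above, a minimal C++ sketch of state-machine-based testing (LogFile and its open/write/close model are hypothetical): every event is tried in every state and checked against the model. A subclass would be tested against the same, flattened model (hazard 7).

#include <cassert>

// Hypothetical class with a sequential constraint: the legal sequence
// is open -> (write)* -> close.
class LogFile {
public:
    enum State { Closed, Open };
    bool open()  { if (st != Closed) return false; st = Open;   return true; }
    bool write() { if (st != Open)   return false;              return true; }
    bool close() { if (st != Open)   return false; st = Closed; return true; }
    State state() const { return st; }
private:
    State st = Closed;
};

// State-machine-based test: try every event in every state and compare
// the outcome against the state model.
int main() {
    LogFile f;
    assert(f.write() == false);   // Closed: write must be rejected
    assert(f.close() == false);   // Closed: close must be rejected
    assert(f.open() && f.state() == LogFile::Open);
    assert(f.open() == false);    // Open: re-open must be rejected
    assert(f.write());            // Open: write accepted
    assert(f.close() && f.state() == LogFile::Closed);
    return 0;
}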
Functional vs. Object-Oriented Architectures
[Figure: (a) Functional Architecture: use case flow and data & control flow pass through functions X, Y1, Y2, Z1, Z2 grouped under modules A, B, C, so related computations are scattered. (b) Object-Oriented Architecture: Class-A, Class-B, and Class-C expose a public service layer; a control/boundary object drives the use case; interactions occur via messages, placing related computations on different objects.]
Functional Dependencies Among Methods
[Graph: functional dependencies among the methods of modules A through E; each edge is labeled 1, 2, or 3 for the use case that induces it.]
Functional Threads (Use Cases)
[Diagram: methods A1–A3, B1–B2, C1, D1–D3, and E1–E2 of modules A through E, arranged along the functional threads defined by the use cases.]
Hierarchical View of Design
[Call tree: main calls A::One(), A::Two(), and A::Three(); beneath them lie B::One(), C::One(), B::Two(), D::One(), E::One(), E::Two(), D::Two(), and D::Three(); the subtrees correspond to Use Case #1, Use Case #2, and Use Case #3.]
Design & Testing Principles
[Module dependency diagram: modules A, B, C, D.]
Principle 1: Design should be performed “top-down” for each functional thread defined by a Use Case; that is, the interface and detailed design of a module should follow the design of all modules that functionally depend on it.
Rationale: By performing interface and detailed design top-down, we ensure that all requirements flow from dependent modules toward the modules they depend on. This principle attempts to postpone detailed design decisions until all functional requirements for a module are known.
Principle 2: Coding and Unit Testing should be performed “bottom-up” for a functional thread; that is, the unit testing of a module should precede the unit testing of all modules that functionally depend on it (see the sketch below).
Rationale: By performing unit testing bottom-up, we ensure that all subordinate modules have been verified before verifying the module that depends on them. This principle attempts to localize and limit the scope and propagation of changes resulting from unit testing.
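As referenced in Principle 2, here is a small C++ sketch (the module names and the bottomUpOrder helper are assumptions for illustration) of deriving a bottom-up unit-test order from "depends on" edges: a module may be unit tested once every module it depends on has been tested.

#include <map>
#include <set>
#include <string>
#include <vector>

// Derive a bottom-up unit-test order from the dependency graph.
// Assumes the graph is acyclic (a DAG, as in the module diagram above).
std::vector<std::string> bottomUpOrder(
    std::map<std::string, std::set<std::string>> deps) {
    std::vector<std::string> order;
    while (!deps.empty()) {
        for (auto it = deps.begin(); it != deps.end();) {
            bool ready = true;                 // ready = all subordinate
            for (const std::string& d : it->second)  // modules already tested
                if (deps.count(d)) { ready = false; break; }
            if (ready) {
                order.push_back(it->first);    // safe to unit test now
                it = deps.erase(it);
            } else {
                ++it;
            }
        }
    }
    return order;
}

int main() {
    // Hypothetical thread: A depends on B and C; B depends on D.
    std::vector<std::string> order = bottomUpOrder(
        {{"A", {"B", "C"}}, {"B", {"D"}}, {"C", {}}, {"D", {}}});
    // Yields a valid order such as C, D, B, A: leaves first, A last.
    return 0;
}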
Design & Testing Schedules
Development Layers for Detailed Design and Coding
Layer 1: A1, A2, A3
Layer 2: B1, D2, C1
Layer 3: E1, B2, D3
Layer 4: D1
Layer 5: E2
Development Layers for Unit Testing
Layer 1': B1, E1, E2, D3
Layer 2': A1, D2, D1
Layer 3': A2, B2
Layer 4': C1
Layer 5': A3
[Development schedule chart: effort vs. time; design and unit-test activities for the methods above are grouped into Build #1 (Integration Test 1), Build #2 (Integration Test 2), and Build #3 (System Test). See Notes.]
McCabe's* Cyclomatic Complexity
For a flow graph G: V(G)* = E – N + p + 1, where
E = # of edges in the flow graph,
N = # of nodes in the flow graph,
p = # of independent program components.
A component can be represented by a 1-entry/1-exit DAG (directed acyclic graph). McCabe proved that his metric gives the number of linearly independent flow paths through the DAG. The number of LI paths relates strongly to the testing complexity of the component (see the sketch at the end of this slide).
[Flow graph figure: nodes 0–7. Branch nodes = 1, 3 (fan-out > 1); join nodes = 6, 7 (fan-in > 1); sequential nodes = 0, 2, 4, 5.]
*NOTE: This formula is actually a variant of McCabe’s metric proposed by Brian Henderson-Sellers. McCabe’s metric for p isolated components is given by: V(G) = E – N + 2p. Henderson-Sellers showed that his variant gives a consistent value when the isolated components are treated as procedures connected to their call sites.
In this example: E = 9, N = 8, p = 1, and V(G)* = 9 – 8 + 1 + 1 = 3.
It can also be shown that if G is a planar graph, then V(G)* is the number of bounded regions, or faces, of the graph.
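As referenced above, a small hypothetical C++ function (not from the slides) with the same cyclomatic complexity as the example graph: two branch nodes yield V(G)* = 3, so three well-chosen test cases cover every edge of the flow graph.

// Two branch nodes give E = 8, N = 7, p = 1, and
// V(G)* = 8 - 7 + 1 + 1 = 3: three linearly independent paths.
int classify(int x, bool verbose) {
    int r = 0;       // sequential node
    if (x < 0)       // branch node (fan-out > 1)
        r = -1;
                     // join node (fan-in > 1)
    if (verbose)     // branch node
        r *= 10;
                     // join node
    return r;
}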
McCabe's* Cyclomatic Complexity
[Flow graph figure: three separate components: Main() with nodes 0–6, including the nodes Call A and Call B; A() with nodes A1–A4; and B() with nodes B1–B7.]
V(main)* = 7 – 7 + 2 = 2
V(A)* = 4 – 4 + 2 = 2
V(B)* = 8 – 7 + 2 = 3
V(main+A+B)* = 19 – 18 + 4 = 5. Note: p = 3 (independent modules = Main, A, B).
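The same arithmetic, written out as a minimal sketch (the cyclomatic helper is illustrative; it implements the Henderson-Sellers variant defined on the previous slide):

#include <cassert>

// Henderson-Sellers variant: V(G)* = E - N + p + 1.
int cyclomatic(int edges, int nodes, int components) {
    return edges - nodes + components + 1;
}

int main() {
    // The three modules measured in isolation (p = 1 each):
    assert(cyclomatic(7, 7, 1) == 2);    // main()
    assert(cyclomatic(4, 4, 1) == 2);    // A()
    assert(cyclomatic(8, 7, 1) == 3);    // B()
    // All three treated as one graph of p = 3 isolated components:
    assert(cyclomatic(19, 18, 3) == 5);  // main + A + B
    return 0;
}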
McCabe's* Cyclomatic Complexity
[Flow graph figure: the same three components connected into one graph; the call nodes 3 and 5 of Main() are split into 3/3’ and 5/5’ around the called components A() and B().]
V(main+A+B)* = 23 – 20 + 2 = 5
Linearly independent paths are:
(0,1,2,4,5,B1,B2,B7,5’,6)
(0,1,2,4,5,B1,B3,B4,B6,B7,5’,6)
(0,1,2,4,5,B1,B3,B5,B6,B7,5’,6)
(0,1,3,A1,A2,A4,3’,4,5,B1,B3,B4,B6,B7,5’,6)
(0,1,3,A1,A3,A4,3’,4,5,B1,B3,B5,B6,B7,5’,6)
Call nodes are “split” to match the single entry and exit nodes of the called component. One node and two edges are added for each called component.
Example Method (C++)
void Server::Work()
{
    ifstream input;    // input file stream
    ofstream output;   // output file stream
    Tray tray;
    cout << "McDonald's Implementation in C++" << endl;
    cout << "by Amit Hathiramani and Neil Lott" << endl;
    cout << endl;
    while(1) {
        string szInputFileName;
        cout << "Please enter the name of the input file: ";
        cin >> szInputFileName;
        input.open(szInputFileName.c_str());
        if(!input)
            cerr << endl << "No file named " << szInputFileName
                 << " found." << endl;
        else
            break;
    }
} //Server::Work
[Flow graph: nodes 1–6 and 16; Segment A (next slide) is inserted between nodes 6 and 16.]
Example Method (C++)
FoodItems *pFood;
while(!input.eof()) {
    char szMarker[4];
    input >> szMarker;
    strupper(szMarker);
    if(strcmp(szMarker, "$D") == 0)
        pFood = new Drinks;        // drink
    else if(strcmp(szMarker, "$S") == 0)
        pFood = new Sandwiches;    // sandwich
    else if(strcmp(szMarker, "") == 0)
        continue;                  // blank line; skip it
    else
        throw InputException("Unknown type found " + string(szMarker));
    pFood->Get(input);
    tray.Add_Item(pFood);
} //while
Segment A
[Flow graph: Segment A, nodes 6–15.]
Example Method (C++)
[Complete flow graph for Server::Work(): nodes 1–17, with a normal exit, an exception exit, and a system exit.]
V(G)* = 19 – 16 + 2 = 5, or V(G)* = 21 – 17 + 2 = 6.
Design & Test Example: Discrete Event Simulator
©Dr. David A. Workman
School of Computer Science
University of Central Florida
Use Case Diagram: Simulator
[Use case diagram: the Simulation User interacts with the use cases Construct World, Specify Input, Specify Output, Initialize Simulation, Simulate World, Output World Objects, and Report Simulation Data; the Simulation Input File and Simulation Log File appear as external actors.]
Simulation Architecture
Simulation Architecture: Design Template
The Passive layer contains all classes that model problem data and inanimate objects of the simulated world. Agents make direct calls on passive objects, but must account for the simulation time consumed when doing so. Passive objects make direct calls to each other, if necessary. Passive objects may be passed from one Agent to another as part of an instance of some Message subclass.
This layer contains all the derived subclasses of Message. These classes are used to pass data for servicing interaction events between Agents. Only recipient Agent classes know the content and use of instances of these classes. Methods of Agents receiving messages optionally take parameters which are instances of one (or more) of the Passive classes, and return an instance of class Message or one of its subclasses. Instances of the root class Message carry no data and denote signals that require some action on the part of the receiver Agent.
[Class diagram: an Interface and Control Layer (Virtual World, EventMgr), an Agent Layer of active objects (Agent, Players, Event), a Message Layer (Message and its subclasses), and a Passive Class Layer; SimModels classes appear on one side, SimMgmt classes on the other.]
This layer consists of all active object classes. Active objects must be instances of some subclass of the abstract class Agent. The simulation progresses as a result of Events created and serviced by Agents. An Event has four components: a Sender agent, a Recvr agent, an instance of some Message subclass, and an event time. When one Agent wishes to interact with another, it must do so by creating an Event that defines a “future” time at which the interaction will occur. The message component defines an action to the Recvr agent and possibly data necessary to complete the action (see the sketch below).
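To make the template concrete, here is a minimal C++ sketch of the core classes. The names follow the slides (Agent, Message, Event, EventMgr), but every signature, plus the Echo agent and time-ordered queue, are assumptions for illustration, not the course's actual code.

#include <iostream>
#include <memory>
#include <queue>
#include <string>
#include <vector>

class Agent;

class Message {  // root class: carries no data, denotes a signal
public:
    virtual ~Message() {}
};

struct Event {   // the four components of an Event
    Agent* sender;
    Agent* recvr;
    std::shared_ptr<Message> msg;
    double time;                    // the "future" interaction time
};

struct LaterEvent {                 // order the event queue by time
    bool operator()(const Event& a, const Event& b) const {
        return a.time > b.time;
    }
};

class Agent {
public:
    explicit Agent(std::string n) : name(n) {}
    virtual ~Agent() {}
    virtual void dispatch(const Event& e) = 0;  // service an interaction
    std::string name;
};

class EventMgr {
public:
    void postEvent(const Event& e) { queue.push(e); }
    void simulate() {               // advance the world event by event
        while (!queue.empty()) {
            Event e = queue.top();
            queue.pop();
            e.recvr->dispatch(e);   // the receiver interprets the Message
        }
    }
private:
    std::priority_queue<Event, std::vector<Event>, LaterEvent> queue;
};

// Tiny concrete agent so the sketch runs end to end.
class Echo : public Agent {
public:
    using Agent::Agent;
    void dispatch(const Event& e) override {
        std::cout << name << " serviced an event at t=" << e.time << "\n";
    }
};

int main() {
    Echo a("A"), b("B");
    EventMgr mgr;
    mgr.postEvent(Event{&a, &b, std::make_shared<Message>(), 2.0});
    mgr.postEvent(Event{&b, &a, std::make_shared<Message>(), 1.0});
    mgr.simulate();  // services the t=1.0 event first
    return 0;
}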
Simulation Architecture: Student Conversation
[Class diagram, the Student Conversation instance of the template: Conversation in the interface and control layer (with EventMgr); Student as the Agent subclass, with Players and Event, in the agent layer; QuestionMsg and AnswerMsg as Message subclasses in the message layer; Question and Answer in the passive class layer; SimModels classes on one side, SimMgmt classes on the other.]
Design Graph: 1
0: Main()
4 Reusable Methods, 9 New Methods

Class Conversation
1: Conversation()

Class Agent
2: Agent(), 3: operator>>(), 4: Get()

Class Student
5: Student(), 6: Extract(), 7: Get()
[Design graph: call dependencies among methods 0–7.]
Use Case 1
Design Graph: 2

0: Main()
5 Reusable Methods, 9 New Methods

Class Conversation
1: Conversation(), 8: Initialize()

Class Agent
2: Agent(), 3: operator>>(), 4: Get(), 11: NameOf(), 21: ~Agent()

Class Student
5: Student(), 6: Extract(), 7: Get(), 13: Initialize(), 16: AcceptQuest()

Class Players
9: Players(), 12: setAgent(), 14: getAgent(), 15: getOther(), 20: ~Players()

Class Message
10: Message()

Class SpeakMsg
17: SpeakMsg()

Class Event
18: Event()

Class EventMgr
-3: EventMgr(), 19: postEvent()

[Design graph: call dependencies among the methods above.]
Use Case 2
Design Graph: 3

0: Main()
2 Reusable Methods, 3 New Methods

Class Conversation
1: Conversation(), 8: Initialize(), 22: Insert()

Class Agent
2: Agent(), 3: operator>>(), 4: Get(), 11: NameOf(), 21: ~Agent(), 23: oper<<(), 26: Put()

Class Student
5: Student(), 6: Extract(), 7: Get(), 13: Initialize(), 16: AcceptQuest(), 24: Insert(), 25: Put()

Class Players
9: Players(), 12: setAgent(), 14: getAgent(), 15: getOther(), 20: ~Players()

Class Message
10: Message()

Class SpeakMsg
17: SpeakMsg()

Class Event
18: Event()

Class EventMgr
-3: EventMgr(), 19: postEvent()

[Design graph: call dependencies among the methods above.]
Use Case 3
Design Graph: 4

0: Main()
10 Reusable Methods, 8 New Methods

Class Conversation
1: Conversation(), 8: Initialize(), 22: Insert(), 44: Simulate()

Class Agent
2: Agent(), 3: operator>>(), 4: Get(), 11: NameOf(), 21: ~Agent(), 23: oper<<(), 26: Put()

Class Student
5: Student(), 6: Extract(), 7: Get(), 13: Initialize(), 16: AcceptQuest(), 24: Insert(), 25: Put(), 37: Dispatch(), 39: doQuestion(), 40: AcceptAnswr(), 41: doAnswer()

Class Players
9: Players(), 12: setAgent(), 14: getAgent(), 15: getOther(), 20: ~Players()

Class Message
10: Message(), 30: Oper<<(), 31: Insert(), 32: Put(), 42: ~Message()

Class SpeakMsg
17: SpeakMsg(), 33: Insert(), 34: Put(), 38: getHandlr()

Class Event
18: Event(), 29: oper<<(), 35: getRecvr(), 36: getMsg(), 43: ~Event()

Class EventMgr
-3: EventMgr(), 19: postEvent(), 27: moreEvents(), 28: getNextEvent()

[Design graph: call dependencies among the methods above.]
Use Case 4
Design Graph: 5

0: Main()
1 Reusable Method, 3 New Methods

Class Conversation
1: Conversation(), 8: Initialize(), 22: Insert(), 44: Simulate(), 45: WrapUp(), 46: ~Conversation()

Class Agent
2: Agent(), 3: operator>>(), 4: Get(), 11: NameOf(), 21: ~Agent(), 23: oper<<(), 26: Put()

Class Student
5: Student(), 6: Extract(), 7: Get(), 13: Initialize(), 16: AcceptQuest(), 24: Insert(), 25: Put(), 37: Dispatch(), 39: doQuestion(), 40: AcceptAnswr(), 41: doAnswer(), 47: ~Student()

Class Players
9: Players(), 12: setAgent(), 14: getAgent(), 15: getOther(), 20: ~Players()

Class Message
10: Message(), 30: Oper<<(), 31: Insert(), 32: Put(), 42: ~Message()

Class SpeakMsg
17: SpeakMsg(), 33: Insert(), 34: Put(), 38: getHandlr()

Class Event
18: Event(), 29: oper<<(), 35: getRecvr(), 36: getMsg(), 43: ~Event()

Class EventMgr
-3: EventMgr(), 19: postEvent(), 27: moreEvents(), 28: getNextEvent(), 48: ~EventMgr()

[Design graph: call dependencies among the methods above.]
Use Case 5

Totals: 21 Reusable + 27 New = 48 methods; 4 Reusable + 4 New = 8 classes.
Scheduling
[Development schedule diagram: the numbered method nodes 0–48 from the design graphs, ordered over time and grouped by Use Cases 1–5.]
Scheduling
[Development schedule chart: methods plotted against development time, grouped by Use Cases 1, 3, 4, and 5. Methods shown include Conversation(), Agent::Agent(), Agent::Extract(), Agent::Get(), >>(Agent), Student::Student(), Student::Extract(), Student::Get(), Agent::Put(), Student::Put(), Student::Insert(), Conversation::Insert(), <<(Agent), EventMgr::EventMgr(), EventMgr::Insert(), EventMgr::postEvent(), EventMgr::moreEvents(), EventMgr::getNextEvent(), Event::Event(), Event::getRecvr(), Event::getMsg(), Event::getTime(), Event::getSendr(), Event::operator<(), Message::Message(), Message::getId(), Message::Insert(), Message::Put(), Message::~Message(), <<(Message), <<(Event), Agent::Insert(), Agent::~Agent(), Players::Players(), Players::~Players(), Student::~Student(), Conversation::~Conversation(), Conversation::Simulate(), and Conversation::WrapUp().]