Testing Manual Document 1
TRANSCRIPT
7/29/2019
[Figure: SDLC phases — Feasibility Study, Analysis, Design, Coding, Testing, Installation & Maintenance]
What is Manual Testing?
Testing activities performed by people without the help of software testing tools.
What is Software Quality?
Software that is reasonably bug free, delivered on time, within the budget, meets all requirements and is maintainable.
1. The Software life cycle
All the stages from start to finish that take place when developing new Software.
The software life-cycle is a description of the events that occur between the birth and
death of a software project inclusively.
SDLC is separated into phases (steps, stages)
SDLC also determines the order of the phases, and the criteria for transitioning from
phase to phase
Change Requests on Requirement Specifications
Why Customers Ask for Change Requests
Different users/customers have different requirements.
Requirements get clarified/known at a later date.
Changes to the business environment.
Technology changes.
Misunderstanding of the stated requirements due to lack of domain knowledge.
How to Communicate the Change Requests to the Team
The Analyst conducts an initial study of the problem and asks: is the solution
Technologically possible? Economically possible?
Legally possible? Operationally possible?
Possible within the scheduled timescale?
Feasibility Study: What exactly is this system supposed to do?
Analysis: Determine and list out the details of the problem.
Design: How will the system solve the problem?
Coding: Translating the design into the actual system.
Testing: Does the system solve the problem?
Have the requirements been satisfied?
Does the system work properly in all situations?
Maintenance: Bug fixes
Formal indication of Changes to
requirements
Joint Review Meetings
Regular Daily Communication
Queries
Defects reported by Clients during testing.
Client Reviews of SRS, SDD, Test plans
etc.
Across the corridor/desk (Internal Projects)
Presentations/ Demonstrations
Analyzing the Changes
Classification
Specific
Generic
Categorization
Bug
Enhancement
Clarification etc.
Impact Analysis
Identify the items that will be affected
Time estimations
Any other clashes / open issues raised due to this?
Benefits of accepting Change Requests
1. Direct Benefits
Facilitates Proper Control and
Monitoring
Metrics Speak for themselves
You can buy more time.
You may be able to bill more.
2. Indirect Benefits:
Builds Customer Confidence.
What can be done if the requirements are changing continuously?
Work with project stakeholders early on to understand how the requirements might change, so that alternative test plans and strategies can be worked out in advance.
It is helpful if the application's initial design has some adaptability, so that later changes do not require redoing the application from scratch.
If the code is well commented and well documented, it is easy for developers to make changes.
Use rapid prototyping whenever possible to help customers feel sure of their requirements and minimize changes.
Negotiate to allow only easily-implemented new requirements into the project, while moving more difficult new requirements into future versions of the application.
The feasibility report
BRS (Business Requirement Document)
Application areas to be considered (stock control, banking, accounts etc.)
System investigations and System requirements for each application
Cost estimates
Timescale for implementation
Expected benefits
1.1 Feasibility Study:
1.2 Systems Analysis:
The process of identifying problems, resources, opportunities, requirements and constraints.
System analysis is the process of investigating a business with a view to determining how best to manage the various procedures and information processing tasks that it involves.
1.2.1 The Systems Analyst
Performs the investigation and might recommend the use of a computer to improve the efficiency of the information system.
1.2.2 Systems Analysis
The intention is to determine how well a business copes with its current information processing needs, and whether it is possible to improve the procedures in order to make it more efficient or profitable.
The (BRS, FRS and SRS) documents bridge the communication gap between the client and the development team.
1.3 Systems Design:
The business of finding a way to meet the functional requirements within the specified
constraints using the available technology
Planning the structure of the information system to be implemented.
Systems analysis determines what the system should do;
design determines how it should be done.
The System Analysis Report
SRS (Software Requirement Specification)
Use Cases (user action and system response)
FRS (Functional Requirement Document), or functional specifications
[These 3 are the base documents for writing test cases]
Documenting the results
Systems flow charts
Data flow diagrams
Organization charts
Report
Note: the FRS contains input, output and process, but no format.
Use cases contain user action and system response with a fixed format.
Design outputs include:
User interface design
Design of output reports
Input screens
Data storage, i.e. files, database tables
System security: backups, validation, passwords
Test plan
System Design Report consist of
Architectural Design
Database Design
Interface Design
Design Phases
High Level Design
Low Level Design
High Level Design
1. List of modules and a
brief description of each
2. Brief functionality of
each module
3. Interface relationship
among modules
4. Dependencies between
modules
5. Database tables identified
with key elements
6. Overall architecture diagrams along with technology details
Low Level Design
1. Detailed functional logic of the
module, in pseudo code
2. Database tables, with all
elements, including their type
and size
3. All interface details
4. All dependency issues
5. Error message listing
6. Complete input and output
format of a module
Note: the HLD and LLD phases put together are called the Design phase.
1.4 Coding:
Translating the design into the actual system
Program development
Unit testing by the Development Team (Dev Team)
Coding Report
All the programs, functions and reports related to the system
1.5 Testing:
IEEE Definitions
Failure: External behavior is incorrect.
Fault: Discrepancy in the code that causes a failure.
Error: Human mistake that caused the fault.
Note:
Error is the terminology of the Developer.
Bug is the terminology of the Tester.
Why Software Testing?
1. To discover defects.
2. To avoid user detecting problems
3. To prove that the software has no faults
4. To learn about the reliability of the software.
5. To avoid being sued by customers
6. To ensure that product works as user expected.
7. To stay in business
8. To detect defects early, which helps in reducing the cost of defect fixing.
Cost of Defect Repair
Phase | % Cost
Requirements | 0
Design | 10
Coding | 20
Testing | 50
Customer Site | 100
1.5.1 What Is Software Testing?
IEEE Terminology: An examination of the behavior of the program by executing it on sample data sets.
Testing is executing a program with an intention of finding defects.
Testing is executing a program with an intent of finding an Error/Fault and Failure.
Fault: It is a condition that causes the software to fail to perform its required function.
Error: Refers to the difference between the Actual Output and the Expected Output.
Failure: It is the inability of a system or component to perform its required function according to its specification.
[Chart: Cost of Defect Repair by SDLC phase — % cost: Requirements 0, Design 10, Coding 20, Testing 50, Customer Site 100]
How exactly Testing is different from QA/QC
Testing is often confused with the processes
of quality control and quality assurance.
Testing
It is the process of creating, implementing and evaluating tests.
Testing measures software quality.
Testing can find faults; when they are removed, software quality is improved.
Quality Control (QC)
It is the process of inspections, walk-throughs and reviews.
Measures the quality of the product.
It is a Detection process.
Quality Assurance (QA)
Monitoring and improving the entire SDLC process.
Makes sure that all agreed-upon standards and procedures are followed.
Ensures that problems are found and addressed.
Measures the quality of the process used to create a good quality product.
It is a Prevention process.
Why should we need an approach for testing?
Yes, we definitely need an approach for testing. To overcome the following problems, we need a formal approach to testing.
Incomplete functional coverage: Completeness of testing is a difficult task for the testing team without a formal approach. The team will not be in a position to announce the percentage of testing completed.
No risk management -- there is no way to measure overall risk issues regarding code coverage and quality metrics. Effective quality assurance measures quality over time, starting from a known base of evaluation.
Too little emphasis on user tasks -- because testers will focus on ideal paths instead of real paths. With no time to prepare, ideal paths are defined according to best guesses or developer feedback rather than by careful consideration of how users will understand the system or how users understand real-world analogues to the application tasks. With no time to prepare, testers will be using a very restricted set of input data, rather than using real data (from user activity logs, from logical scenarios, from careful consideration of the concept domain).
Inefficient over the long term -- quality assurance involves a range of tasks. Effective quality assurance programs expand their base of documentation on the product and on the testing process over time, increasing the coverage and granularity of tests. Great testing requires good test setup and preparation, but success with the kind of test-plan-less approach
described in this essay may reinforce bad project and test methodologies. A continued pattern of quick-and-dirty testing like this is a sign that the product or application is unsustainable in the long run.
Areas of Testing:
Black Box Testing
White Box Testing
Gray Box Testing
Black Box Testing
Test the correctness of the functionality with the help of inputs and outputs.
The user doesn't require knowledge of the software code.
Black box testing is also called Functionality Testing.
It attempts to find errors in the following categories:
Incorrect or missing functions.
Interface errors.
Errors in data structures or external database access.
Behavior or performance based errors.
Initialization or termination errors.
Approach:
Equivalence Class:
For each piece of the specification, generate one or more equivalence classes.
Label the classes as Valid or Invalid.
Generate one test case for each Invalid equivalence class.
Generate a test case that covers as many Valid equivalence classes as possible.
An input condition for an equivalence class may be:
A specific numeric value
A range of values
A set of related values
A Boolean condition
Equivalence classes can be defined using the following guidelines:
If an input condition specifies a range, one valid and two invalid equivalence classes are defined.
If an input condition requires a specific value, one valid and two invalid equivalence classes are defined.
If an input condition specifies a member of a set, one valid and one invalid equivalence classes are defined.
If an input condition is Boolean, one valid and one invalid class are defined.
Boundary Value Analysis
Generate test cases for the boundary
values.
Minimum Value, Minimum Value + 1, Minimum Value - 1
Maximum Value, Maximum Value + 1, Maximum Value - 1
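As an illustrative sketch of the two techniques above, consider a hypothetical input field that accepts integers from 1 to 100 (the `accept_value` function and its range are assumptions, not from the text):

```python
# Hypothetical example: an input field accepting integers in the range 1..100.
def accept_value(value):
    """Return True if value is within the valid range 1..100."""
    return isinstance(value, int) and 1 <= value <= 100

# Equivalence classes: one valid class (1..100), two invalid classes (<1, >100).
assert accept_value(50)        # valid class
assert not accept_value(-5)    # invalid class: below the range
assert not accept_value(200)   # invalid class: above the range

# Boundary value analysis: test at and around each boundary.
for v, expected in [(0, False), (1, True), (2, True),
                    (99, True), (100, True), (101, False)]:
    assert accept_value(v) == expected
```

One test case per invalid class plus the boundary probes gives a small but systematic set, matching the guidelines listed above.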
Error Guessing:
Generating test cases for likely error conditions, based on the tester's experience and intuition, in addition to the specification.
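A minimal sketch of error guessing in practice; the `parse_quantity` function and the guessed inputs are hypothetical assumptions, chosen because testers often probe inputs the specification never mentions:

```python
# Hypothetical error-guessing checks for a function that parses a quantity
# field; the guessed inputs (blank, whitespace, scientific notation, negative)
# come from tester experience rather than from the specification.
def parse_quantity(text):
    text = text.strip()
    if not text.isdigit():
        return None          # reject blanks, signs, letters
    return int(text)

assert parse_quantity("10") == 10        # specified, normal input
assert parse_quantity("") is None        # guessed: empty input
assert parse_quantity("   ") is None     # guessed: whitespace only
assert parse_quantity("1e9") is None     # guessed: scientific notation
assert parse_quantity("-5") is None      # guessed: negative value
```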
White Box Testing
Testing the internal program logic.
White box testing is also called Structural Testing.
The user does require knowledge of the software code.
Purpose
Testing all loops
Testing Basis paths
Testing conditional statements
Testing data structures
Testing Logic Errors
Testing Incorrect assumptions
Structure = 1 Entry + 1 Exit, with certain constraints, conditions and loops.
Logic errors and incorrect assumptions are most likely to be made while coding for special cases. We need to ensure these execution paths are tested.
Approach
Basis Path Testing (Cyclomatic Complexity, McCabe method)
Measures the logical complexity of a procedural design.
Provides flow-graph notation to identify independent paths of processing.
Once paths are identified, tests can be developed for loops and conditions.
The process guarantees that every statement will get executed at least once.
Structure Testing:
Condition Testing
All logical conditions contained in
the program module should be
tested.
Data Flow Testing
Selects test paths according to the location of definitions and use of variables.
Loop Testing
Simple Loops
Nested Loops
Concatenated Loops
Unstructured Loops
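As a minimal sketch of basis path testing, assume a hypothetical `classify` function with two decision points, so cyclomatic complexity V(G) = 2 + 1 = 3; three independent paths execute every statement at least once:

```python
# Hypothetical function with two decision points; cyclomatic complexity
# V(G) = decisions + 1 = 3, so three independent paths must be exercised.
def classify(n):
    if n < 0:
        return "negative"
    if n == 0:
        return "zero"
    return "positive"

# One test case per independent path guarantees every statement runs at least once.
assert classify(-1) == "negative"   # path 1: first branch taken
assert classify(0) == "zero"        # path 2: second branch taken
assert classify(7) == "positive"    # path 3: fall-through
```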
Gray Box Testing
It is a combination of both black box and white box testing.
It is platform independent and language independent.
Used to test embedded systems.
Functionality and behavioral parts are tested.
The tester should have knowledge of both the internals and externals of the function.
If you know something about how the product works on the inside, you can test it better from the outside.
Gray box testing is especially important with Web and Internet applications, because the Internet is built around loosely integrated components that connect via relatively well-defined interfaces. Unless you understand the architecture of the Net, your testing will be skin deep.
1.6 Installation & Maintenance
Installation
File conversion
New system becomes operational
Staff training
Maintenance
Corrective maintenance: a type of maintenance performed to correct a defect.
Perfective maintenance: reengineering, including enhancements.
Adaptive maintenance: changing software so that it will work in an altered environment, such as when an operating system, hardware platform, compiler, software library or database structure changes.
Table format of all the phases in SDLC:
PHASE | INPUT | OUTPUT
Analysis | BRS | FRS and SRS
Design | FRS and SRS | Design Doc
Coding | Design Doc | .exe File/Application/Website
Testing | All the above Docs | Defect Report
2. Software Development Life Cycles
Life cycle: Entire duration of a
project, from inception to termination
Different life cycle models
2.1. Code-and-fix model:
Earliest software development
approach (1950s)
Iterative, programmers' approach
Two phases: 1. coding, 2. fixing the code
No provision for:
Project planning
Analysis
Design
Testing
Maintenance
Problems with code-and-fix model:
1. After several iterations, code became very poorly structured; subsequent fixes became very expensive.
2. Even well-designed software often very poorly matched users' requirements: it was rejected or needed to be redeveloped (expensively!).
3. Changes to code were expensive, because of poor testing and maintenance practices.
Solutions:
1. Design before coding
2. Requirements analysis before design
3. Separate testing and maintenance phases after coding
2.2. Waterfall model:
Also called the classic life cycle.
Introduced in 1956 to overcome limitations of the code-and-fix model.
Very structured, organized approach, suitable for planning.
The waterfall model is a linear approach, quite inflexible.
At each phase, feedback to previous phases is possible (but is discouraged in practice).
It is still the most widespread model today.
Main phases:
1. Requirements
2. Analysis
3. Design (overall design & detailed design)
4. Coding
5. Testing (unit test, integration test, acceptance test)
6. Maintenance
Approaches
The standard waterfall model for
systems development is an approach
that goes through the following steps:
1. Document System Concept
2. Identify System Requirements
and Analyze them
3. Break the System into Pieces
(Architectural Design)
4. Design Each Piece (Detailed Design)
5. Code the System Components and
Test Them Individually (Coding,
Debugging, and Unit Testing)
6. Integrate the Pieces and Test the
System (System Testing)
7. Deploy the System and Operate It
Waterfall Model Assumption
The requirements are knowable
in advance of implementation.
[Figure: Classic Waterfall — Requirements, Analysis, Design, Coding, Testing, Maintenance]
The requirements have no unresolved, high-risk implications
-- e.g., risks due to COTS choices, cost, schedule, performance, safety, security, user interface, organizational impacts.
The nature of the requirements is compatible with all the key system stakeholders' expectations
-- e.g., users, customer, developers, maintainers, investors.
The right architecture for implementing the requirements is well understood.
There is enough calendar time to proceed sequentially.
Advantages of Waterfall Model
Conversion of existing projects into new projects.
For proven platforms and technologies, it works fine.
Suitable for short duration projects.
The waterfall model is effective when there is no change in the requirements, and the requirements are fully known.
If there is no rework, this model builds a high quality product.
The stages are clear cut.
All R&D is done before coding starts, which implies better quality program design.
Disadvantages of Waterfall Model:
Testing is postponed to a later stage, till coding completes.
Not suitable for large projects.
It assumes a uniform and orderly sequence of steps.
Risky in certain projects where the technology itself is a risk.
Correction at the end of a phase needs correction to the previous phase, so rework is more.
Real projects rarely flow in a sequential process.
It is difficult to define all requirements at the beginning of a project.
The model has problems adapting to change.
A working version of the system is not seen until late in the project's life.
Errors are discovered later (repairing a problem further along the lifecycle becomes progressively more expensive).
Maintenance cost can be as much as 70% of system costs.
Delivery only at the end (long wait).
2.3. Prototyping model:
Introduced to overcome shortcomings of the waterfall model.
Suitable to overcome the problem of requirements definition.
Prototyping builds an operational model of the planned system, which the customer can evaluate.
Main phases:
1. Requirements gathering
2. Quick design
3. Build prototype
4. Customer evaluation of prototype
5. Refine prototype
6. Iterate steps 4 and 5 to "tune" the prototype
7. Engineer product
Note: Mostly, the prototype is discarded after step 6 and the actual system is built from scratch in step 7 (throw-away prototyping).
Possible problems:
The customer may object to the prototype being thrown away and may demand "a few changes" to make it work (results in poor software quality and maintainability).
Inferior, temporary design solutions may become permanent after a while, when the developer has forgotten that they were only intended to be temporary (results in poor software quality).
Advantages
Helps counter the limitations of waterfall model
After the prototype is developed, the end user and the client are permitted to use the application, and further modifications are done based on their feedback.
User oriented
What the user sees
Not enigmatic diagrams
Quicker error feedback
Earlier training
[Figure: Prototyping — Requirements → Quick Design → Build Prototype → Evaluate Prototype → Refine Prototype → Changes? Yes: iterate; No: Engineer Product]
Possibility of developing a system that closely addresses users' needs and
expectations
Disadvantages
Development costs are high.
User expectations
Bypass analysis
Documentation
Never ending
Managing the prototyping process is difficult because of its rapid, iterative nature
Requires feedback on the prototype
Incomplete prototypes may be regarded as complete systems
2.4 Incremental:
During the first one-month phase, the development team worked from static visual designs to code a prototype. In focus group meetings, the team discussed users' needs and the potential features of the product and then showed a demonstration of its prototype. The excellent feedback from these focus groups had a large impact on the quality of the product.
Main phases:
1. Define outline requirements
2. Assign requirements to increments
3. Design system architecture
4. Develop system increment
5. Integrate increment
6. Validate
[Figure: Incremental model — Define outline requirements → Assign requirements to increments → Design system architecture → Develop system increment → Validate increment → Integrate increment → Validate system → Final System (loop while system incomplete)]
After the second group of focus groups, the feature set was frozen and the product definition complete. Implementation consisted of four-to-six-week cycles, with software delivered for beta use at the end of each cycle. The entire release took 10 months from definition to manufacturing release. Implementation lasted 4.5 months. The result was a world-class product that has won many awards and has been easy to support.
2.5 V-Model:
Verification (static; is the system doing the right job): to test the system's correctness as to whether the system is functioning as per specifications.
Typically involves reviews and meetings to evaluate documents, plans, code, requirements and specifications.
This can be done with checklists, issue lists, walkthroughs and inspection meetings.
Validation (dynamic; is the job done right): testing the system in a real environment, i.e. whether the software is catering to the customer's requirements.
Typically involves actual testing, and takes place after verifications are completed.
Advantages
Reduces the cost of defect repair (every document is verified by the tester).
No idle time for testers.
Efficiency of the V-model is greater when compared to the Waterfall Model.
Change management can be effected in the V-model.
Disadvantages
[Figure: V-Model — Verification arm and Validation arm]
Risk management is not possible.
Applicable only to medium sized projects.
2.6 Spiral model:
Objective: overcome the problems of other models, while combining their advantages.
Key component: risk management (because traditional models often fail when risk is neglected).
Development is done incrementally, in several cycles; cycle as often as necessary to finish.
Main phases:
1. Determine objectives, alternatives for development, and constraints for the portion of the whole system to be developed in the current cycle.
2. Evaluate alternatives, considering objectives and constraints; identify and resolve risks.
3. Develop the current cycle's part of the system, using evolutionary or conventional development methods (depending on remaining risks); perform validation at the end.
4. Prepare plans for subsequent phases.
Spiral Model
This model is very appropriate for large software projects. The model consists of four main parts, or blocks, and the process is shown by a continuous loop going from the outside towards the inside. This shows the progress of the project.
Planning
This phase is where the objectives, alternatives, and constraints are determined.
Risk Analysis
Here, alternative solutions and constraints are defined, and risks are identified and analyzed. If risk analysis indicates uncertainty in the requirements, the prototyping model might be used to assist the situation.
Engineering
Here the customer decides when the next phase of planning and risk analysis occurs. If it is determined that the risks are too high, the project can be terminated.
Customer Evaluation
In this phase, the customer will assess the engineering results and make changes if necessary.
Spiral model flexibility
Well-understood systems (low technical risk): Waterfall model; the risk analysis phase is relatively cheap.
Stable requirements and formal specification, safety criticality: formal transformation model.
High UI risk, incomplete specification: prototyping model.
Hybrid models can be accommodated for different parts of the project.
Advantages of spiral model:
Good for large and complex projects.
Customer evaluation allows for any changes deemed necessary, or would allow for new technological advances to be used.
Allows customer and developer to determine and to react to risks at each evolutionary level.
Direct consideration of risks at all levels greatly reduces problems.
Problems with spiral model:
Difficult to convince the customer that this approach is controllable.
Requires significant risk assessment expertise to succeed.
Not yet widely used; efficacy not yet proven.
If a risk is not discovered, problems will surely occur.
2.7 RAD Model
RAD refers to a development life cycle designed to give much faster development and higher quality systems than the traditional life cycle.
It is designed to take advantage of powerful development software like CASE tools, prototyping tools and code generators.
The key objectives of RAD are: High Speed, High Quality and Low Cost.
RAD is a people-centered and incremental development approach.
Active user involvement, as well as collaboration and co-operation between all stakeholders, is imperative.
Testing is integrated throughout the development life cycle, so that the system is tested and reviewed by both developers and users incrementally.
Problems Addressed By RAD
With conventional methods, there is a long delay before the customer gets to see any results.
With conventional methods, development can take so long that the customer's business has fundamentally changed by the time the system is ready for use.
With conventional methods, there is nothing until 100% of the process is finished, then 100% of the software is delivered.
Bad Reasons For Using RAD
To prevent cost overruns (RAD needs a team already disciplined in cost management).
To prevent runaway schedules (RAD needs a team already disciplined in time management).
Good Reasons for Using RAD
To converge early toward a design acceptable to the customer and feasible for the developers.
To limit a project's exposure to the forces of change.
To save development time, possibly at the expense of economy or product quality.
RAD in SDLC
The mapping between the System Development Life Cycle (SDLC) of ITSD and the RAD stages is depicted as follows.
SDLC | RAD
Project Request | Requirements Planning
System Analysis & Design (SA&D) | User Design
Implementation | RAD Construction
Post Implementation Review | Transition
Essential Ingredients of RAD
RAD has four essential ingredients:
Tools
Methodology
People
Management
The following benefits can be realized in using RAD:
A high quality system will be delivered because of the methodology, tools and user involvement;
Business benefits can be realized earlier;
Capacity will be utilized to meet a specific and urgent business need;
Standards and consistency can be enforced through the use of CASE tools.
In the long run, we will also achieve that:
The time required to get a system developed will be reduced;
The productivity of developers will be increased.
Advantages of RAD
Buying may save money compared to building
Deliverables sometimes easier to port
Early visibility
Greater flexibility (because developers can redesign almost at will)
Greatly reduced manual coding (because of wizards, code generators, code reuse)
Increased user involvement (because they are represented on the team at all times)
Possibly reduced cost (because time is money, also because of reuse)
Disadvantages of RAD
Buying may not save money compared to building
Cost of integrated toolset and hardware to run it
Harder to gauge progress (because there are no classic milestones)
Less efficient (because code isn't hand crafted)
More defects
Reduced features
Requirements may not converge
Standardized look and feel (undistinguished, lackluster appearance)
Successful efforts difficult to repeat
Unwanted features
Testing Process:
Design test cases → Prepare test data → Run program with test data → Compare results to test cases.
Artifacts: test cases, test data, test results, test reports.
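The four-step testing process above can be sketched as a minimal harness; the `add` function under test and the test-case records are illustrative assumptions:

```python
# Minimal sketch of the testing process: design test cases, prepare test
# data, run the program, compare results, and collect a test report.
def add(a, b):          # hypothetical program under test
    return a + b

# 1. Design test cases / 2. Prepare test data
test_cases = [
    {"id": "TC-001", "data": (2, 3), "expected": 5},
    {"id": "TC-002", "data": (-1, 1), "expected": 0},
]

# 3. Run the program with the test data / 4. Compare results to test cases
test_report = []
for tc in test_cases:
    actual = add(*tc["data"])
    status = "Pass" if actual == tc["expected"] else "Fail"
    test_report.append((tc["id"], actual, status))

assert all(status == "Pass" for _, _, status in test_report)
```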
3. Testing Life Cycle: A systematic approach to Testing
System Study
Scope/ Approach/ Estimations
Test Plan Design
Test Case Design
Test Case Review
Test Case Execution
Defect Handling
Gap Analysis
Deliverables
3.1 System Study
1. Domain Knowledge: used to know about the client's business.
Banking / Finance / Insurance / Real-estates / ERP / CRM / Others
2. Software:
Front End (GUI): VB / Java / Forms / Browser
Process: the language in which we want to write programs
Back End: database, like Oracle, SQL Server etc.
3. Hardware: Internet / Intranet / Servers which you want to install.
4. Functional Points: ten Lines Of Code (LOC) = 1 Functional Point.
5. Number of Pages: the documents to be prepared.
6. Number of Resources: like programmers, designers, and managers.
7. Number of Days: for actual completion of the project.
8. Number of Modules
9. Priority: High / Medium / Low importance for modules
3.2 Scope/ Approach/ Estimation:
Scope
What to test
What not to test
Approach
Methods, tools and techniques used to accomplish test objectives.
Estimation
Estimation should be done based on LOC/ FP/Resources
1000 LOC = 100 FP (by considering 10 LOC = 1 FP)
100 FP x 3 techniques = 300 test cases. The 3 techniques are Equivalence Class, Boundary Value Analysis and Error Guessing.
30 test cases per day => 300/30 = 10 days to design test cases
Test case review => half of test case design (5 days)
Test case execution => one and a half times test case design (15 days)
Defect handling => half of test case design (5 days)
Test plan = 5 days (1 week)
Buffer time = 25% of the estimation
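The estimation arithmetic above can be checked with a short calculation (the one-half and one-and-a-half scaling factors are inferred from the 5-day and 15-day figures given in the text):

```python
# Worked sketch of the estimation rules: 10 LOC = 1 FP, 3 techniques per FP,
# 30 test cases designed per day; review, execution and defect handling are
# scaled from the design effort, then a 25% buffer is added.
loc = 1000
fp = loc // 10                 # 100 function points
test_cases = fp * 3            # 300 test cases (3 techniques per FP)
design_days = test_cases // 30 # 10 days of test case design

review_days = design_days * 0.5     # 5 days
execution_days = design_days * 1.5  # 15 days
defect_days = design_days * 0.5     # 5 days
plan_days = 5

subtotal = design_days + review_days + execution_days + defect_days + plan_days
total = subtotal * 1.25        # add 25% buffer

print(test_cases, design_days, total)  # 300 10 50.0
```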
3.3 Test Plan Design:
A test plan prescribes the scope, approach, resources, and schedule of testing
activities.
Why Test Plan?
1. Repeatable
2. To Control
3. Adequate Coverage
Importance of Test Plan
Test planning process is a critical step in the testing process. Without a documented
test plan, the test itself cannot be verified, coverage cannot be analyzed and the test
is not repeatable
The Test Plan Design document helps in test execution. It contains:
1. About the client and company
2. Reference document (BRS, FRS and UI etc.)
3. Scope (What to be tested and what not to be )
4. Overview of Application
5. Testing approach (Testing strategy)
6. For each testing
Definition
Technique
Start criteria
Stop criteria
7. Resources and their roles and responsibilities
8. Defect definition
9. Risk / Contingency / Mitigation Plan
10. Training Required
11. Schedules
12. Deliverables
To support testing, a plan should be there, which specifies
What to Do?
How to Do?
When to Do?
3.4 Test Cases Design:
What is a test case?
A test case is a description of what is to be tested, what data is to be given and what actions are to be done to check the actual result against the expected result.
What are the items of test case?
1. Test Case Number
2. Pre-Condition
3. Description
4. Expected Result
5. Actual Result
6. Status (Pass/Fail)
7. Remarks.
Test Case Template

TC ID | Pre-Condition | Description | Expected Result | Actual Result | Status | Remarks

TC ID: Unique test case number.
Pre-Condition: Condition to be satisfied.
Description: 1. What is to be tested. 2. What data is to be provided. 3. What action is to be done.
Expected Result: As per FRS.
Actual Result: System response.
Status: Pass or Fail.
Remarks: If any.

Example:
TC ID: Yahoo-001
Pre-Condition: Yahoo web page should be displayed.
Description: 1. Check inbox is displayed. 2. User ID/PW. 3. Click on Submit.
Expected Result: System should display the mail box.
Actual Result: System response.
Test case Development process
Identify all potential Test Cases needed to fully test the business and technical
requirements
Document Test Procedures and Test Data requirements
Prioritize test cases
Identify Test Automation Candidates
Automate designated test cases
Types of Test Cases

Type: Source
1. Requirement based: Specifications
2. Design based: Logical system
3. Code based: Code
4. Extracted: Existing files or test cases
5. Extreme: Limits and boundary conditions

Can these test cases be reusable?
Test cases developed for functionality testing can be reused for Integration, System, Regression, and Performance testing with few modifications.
What are the characteristics of good test case?
A good test case should have the following:
TC should start with what you are testing.
TC should be independent.
TC should not contain If statements.
TC should be uniform (the same convention should be followed across the project, e.g., for links).
The following issues should be considered while writing the test cases
All the TCs should be traceable.
There should not be any duplicate test cases.
Outdated test cases should be cleared off.
All the test cases should be executable.
Test case Guidelines
Developed to verify that specific requirements or design are satisfied
Each component must be tested with at least two test cases: Positive and
Negative
Real data should be used to reality-test the modules after successful tests with test data.
3.5 Test Case Review:
1. Peer to peer Reviews
2. Team Lead Review
3. Team Manager Review
Review Process
Take a demo of the functionality
Go through the Use Cases & Functional Spec
Take the checklist
Try to find the gap between TC & Use Cases
Submit the Review Report
3.6 Test Case Execution:
Test execution is the completion of testing activities, which involves executing the planned test cases and conducting the tests.
The test execution phase broadly involves execution and reporting.
Execution and execution results play a vital role in testing.
Test execution consists of following activities to be performed
1. Creation of test setup or Test bed
2. Execution of test cases on the setup
3. Test Methodology used
4. Collection of Metrics
5. Defect Tracking and Reporting
6. Regression Testing
The following activities should be taken care of:
1. Number of test cases executed.
2. Number of defects found.
3. Screen shots of failed executions should be captured in a Word document.
4. Time taken to execute.
5. Time wasted due to the unavailability of the system.
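The execution figures listed above can be captured with a simple record-keeping structure. This is a minimal sketch; the class and method names are our own, not part of any tool named in this manual.

```python
# Minimal sketch for capturing test-execution figures: cases executed,
# defects found, time spent, and time lost to system unavailability.
class ExecutionLog:
    def __init__(self):
        self.executed = 0
        self.defects = 0
        self.hours_spent = 0.0
        self.hours_lost = 0.0

    def record_run(self, passed, hours):
        self.executed += 1
        self.hours_spent += hours
        if not passed:
            self.defects += 1        # a failed run raises a defect

    def record_downtime(self, hours):
        self.hours_lost += hours     # system was unavailable

    def summary(self):
        return {"executed": self.executed, "defects": self.defects,
                "hours_spent": self.hours_spent, "hours_lost": self.hours_lost}

log = ExecutionLog()
log.record_run(passed=True, hours=0.5)
log.record_run(passed=False, hours=1.0)
log.record_downtime(2.0)
print(log.summary())
```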
Test Case Execution Process:
Take the Test Case document
Check the availability of the application
Implement the Test Cases
Raise the Defects
3.7 Defect Handling
What is Defect?
A defect is a coding error in a computer program.
A software error is present when the program does not do what its end user expects it to do.
Who can report a Defect?
Anyone who is involved in the software development life cycle, or who is using the software, can report a defect. In most cases defects are reported by the Testing Team.
A short list of people expected to report bugs:
1. Testers / QA Engineers
2. Developers
3. Technical Support
4. End Users
5. Sales and Marketing Engineers
Defect Reporting
The Defect or Bug Report is the medium of communication between the tester and the programmer.
It provides clarity to the management, particularly at the summary level.
A Defect Report should be an accurate, concise, thoroughly edited, well-conceived, high-quality technical document.
The problem should be described in a way that maximizes the probability that it will be fixed.
(Figure: Test Case Execution - Input: Test Case, Test Data; Process: Test Case Execution; Output: Raise the Defect, Screen shot)
The Defect Report should be non-judgmental and should not point a finger at the programmer.
A crisp defect reporting process improves the test team's communication with senior and peer management.
Defect Life Cycle
Defect Life Cycle helps in
handling defects efficiently.
This DLC will help the users to
know the status of the defect.
The flow is: Defect Raised -> Internal Defect Review -> if not valid, Close the Defect; if valid, Defect Submitted to Dev Team -> if not accepted as valid, Defect Rejected or Defect Postponed; if valid, Defect Accepted -> Defect Fixed -> Close the Defect.
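The defect life cycle can be sketched as a state-transition table. The status names follow the flow described here; the dictionary structure and helper function are our own illustration, not part of any tracking tool named in this manual.

```python
# The defect life cycle as a state-transition table: each status maps
# to the statuses a defect may legally move to next.
TRANSITIONS = {
    "Raised":          ["Internal Review"],
    "Internal Review": ["Closed", "Submitted"],    # invalid -> Closed
    "Submitted":       ["Accepted", "Rejected", "Postponed"],
    "Accepted":        ["Fixed"],
    "Fixed":           ["Closed"],
    "Rejected":        ["Closed"],
    "Postponed":       ["Submitted"],
    "Closed":          [],                         # terminal state
}

def is_valid_transition(current, new):
    return new in TRANSITIONS.get(current, [])

print(is_valid_transition("Submitted", "Accepted"))  # True
print(is_valid_transition("Raised", "Fixed"))        # False: must be reviewed first
```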
Types of Defects
1. Cosmetic flaw
2. Data corruption
3. Data loss
4. Documentation issue
5. Incorrect operation
6. Installation problem
7. Missing feature
8. Slow performance
9. System crash
10. Unexpected behavior
11. Unfriendly behavior
How do you decide the Severity of the defect?

Severity Level: Description | Response Time or Turn-around Time

High: A defect occurred due to the inability of a key function to perform. This problem causes the system to hang or halt (crash), or the user is dropped out of the system. An immediate fix or work-around is needed from development so that testing can continue. | The defect should be responded to within 24 hours and the situation should be resolved before test exit.

Medium: A defect occurred which severely restricts the system, such as the inability to use a major function of the system. There is no acceptable work-around, but the problem does not inhibit the testing of other functions. | A response or action plan should be provided within 3 working days and the situation should be resolved before test exit.

Low: A defect occurred which places a minor restriction on a function that is not critical. There is an acceptable work-around for the defect. | A response or action plan should be provided within 5 working days and the situation should be resolved before test exit.

Others: An incident occurred which places no restrictions on any function of the system. There is no immediate impact on testing, or it is a design issue or a requirement not definitively detailed in the project. The fix dates are subject to negotiation. | An action plan should be provided for the next release or a future enhancement.
Defect Severity VS Defect Priority
The general rule is that fixing defects depends on Severity: all High Severity defects should be fixed first.
This may not hold in all cases: sometimes, even though the severity of a bug is high, it may not be taken as high priority, while a low severity bug may be considered high priority.
Defect Tracking Sheet

Defect No | Description | Origin | Severity | Priority | Status

Defect No: Unique number.
Description: Description of the bug.
Origin: Birth place of the bug.
Severity: Critical / Major / Medium / Minor / Cosmetic.
Priority: High / Medium / Low.
Status: Submitted / Accepted / Fixed / Rejected / Postponed / Closed.
Defect Tracking Tools
Bug Tracker (BSL proprietary tool)
Rational ClearQuest
Test Director
3.8 Gap Analysis:
1. BRS vs SRS
BRS01 -> SRS01, SRS02, SRS03
2. SRS vs TC
SRS01 -> TC01, TC02, TC03
3. TC vs Defects
TC01 -> Defect01, Defect02

(Figure: traceability chain BRS -> SRS -> Test Cases -> Defects; e.g. BRS001 -> SRS001, SRS002, SRS003; SRS001 -> TC001, TC002, TC003; TC001 -> Defect001, Defect002)
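The gap analysis above can be sketched as simple mappings: trace BRS to SRS to test cases to defects, then flag anything with no coverage. The IDs mirror the examples in the text; the helper function is our own.

```python
# Gap analysis sketch: BRS -> SRS -> Test Cases -> Defects, then report
# items with no downstream coverage (the "gap").
brs_to_srs    = {"BRS001": ["SRS001", "SRS002", "SRS003"]}
srs_to_tc     = {"SRS001": ["TC001", "TC002", "TC003"]}
tc_to_defects = {"TC001": ["Defect001", "Defect002"]}

def uncovered(mapping, items):
    """Return items that have no entry in the mapping."""
    return [i for i in items if not mapping.get(i)]

all_srs = [s for srs_list in brs_to_srs.values() for s in srs_list]
print(uncovered(srs_to_tc, all_srs))  # ['SRS002', 'SRS003'] have no test cases
```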
3.9 Deliverables:
All the documents which are prepared in each and every stage:
FRS
SRS
Use Cases
Test Plan
Defect Report
Review Report, etc.
4 Testing Phases
Requirement Analysis Testing
Design Testing
Unit Testing
Integration Testing
System Testing
Acceptance Testing
4.1 Requirement Analysis Testing
Objective
The objective of Requirement Analysis Testing is to ensure software quality by eradicating errors as early as possible in the development process.
Errors noticed at the end of the software life cycle are more costly than those found early; hence each of the outputs is validated.
The objective can be achieved through three basic issues:
1. Correctness
2. Completeness
3. Consistency
Types of requirements
Functional Requirements
Data Requirements
Look and Feel requirements
Usability requirements
Performance Requirements
Operational requirements
Maintainability requirements
Security requirements
Scalability requirements
Difficulties in conducting
requirements analysis:
Analyst not prepared
Customer has no time/interest
Incorrect customer personnel
involved
Insufficient time allotted in
project schedule
What constitutes good requirements?
Clear: Unambiguous terminology.
Concise: No unnecessary narrative or non-relevant facts.
Consistent: Requirements that are similar are stated in similar terms, and requirements do not conflict with each other.
Complete: All functionality needed to satisfy the goals of the system is specified to a level of detail sufficient for design to take place.
Testing related activities during
Requirement phase
1. Creation and finalization of testing
templates
2. Creation of over-all Test Plan and TestStrategy
3. Capturing Acceptance criteria and
preparation of Acceptance Test Plan
4. Capturing Performance criteria of the
software requirements
4.2 Design Testing
Objective
The objective of the design phase testing is to generate a complete specification for implementing a system using a set of tools and languages.
The design objective is fulfilled by five issues:
1. Consistency
2. Completeness
3. Correctness
4. Feasibility
5. Traceability
Testing activities in Design phase
1. Develop test cases to ensure that the product is on par with the Requirement Specification document.
2. Verify test cases & test scripts by peer reviews.
3. Preparation of the traceability matrix from system requirements.
4.3 Unit Testing
Objective
In unit testing, the user is supposed to check each and every micro function.
All field-level validations are expected to be tested at this stage of testing.
In most cases the Developer will do this.
The objective can be achieved through the following issues:
1. Correctness
2. Completeness
3. Early Testing
4. Debugging
4.4 Integration Testing:
Objective
The primary objective of integration testing is to discover errors in the interfaces between Modules/Sub-Systems (Host & Client interfaces).
Minimizing the errors which
include internal and external
Interface errors
Approach:
Top-Down Approach
The integration process is performed in a series of 5 steps:
1. The main control module is used as a test driver, and stubs are substituted for all modules directly subordinate to the main control module.
2. Depending on the integration approach selected (depth-first or breadth-first), subordinate stubs are replaced one at a time with actual modules.
3. Tests are conducted as each module is integrated.
4. On completion of each set of tests, another stub is replaced with the real module.
5. Regression testing may be conducted to ensure that new errors have not been introduced.
Advantages
We can verify the major controls early in the testing Process
Disadvantage:
Stubs are required, and stubs are very difficult to develop.
Bottom-Up Approach.
A bottom-up integration strategy may be implemented with the following steps:
1. Low-level modules are combined into clusters (sometimes called builds) that perform a specific software subfunction.
2. A driver (a control program for testing) is written to coordinate test case input and output.
3. The cluster is tested.
4. Drivers are removed and clusters are combined upward in the program structure.
Advantages
It is easier to develop drivers than stubs.
Disadvantages:
The need for test drivers.
Late detection of interface problems.
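Stubs and drivers, as used in the two approaches above, can be illustrated with a toy example. All module and function names here are hypothetical.

```python
# Top-down: the real high-level module is exercised while a stub stands
# in for an unfinished subordinate module.
def tax_stub(amount):
    return 0.0                      # stub: canned answer, no real tax logic

def compute_invoice_total(amount, tax_fn):
    return amount + tax_fn(amount)  # real control module under test

assert compute_invoice_total(100.0, tax_stub) == 100.0

# Bottom-up: a driver coordinates test input and output for a real
# low-level module before the upper layers exist.
def real_tax(amount):
    return amount * 0.10            # the finished low-level module

def driver():
    cases = [(100.0, 10.0), (0.0, 0.0)]   # (input, expected output)
    return all(abs(real_tax(a) - expected) < 1e-9 for a, expected in cases)

print(driver())  # True
```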
As integration testing is conducted, the tester should identify critical modules. A critical module has one or more of the following characteristics:
1. Addresses several software requirements.
2. Has a high level of control (resides relatively high in the program structure).
3. Is complex and error-prone.
4. Has definite performance requirements.
Testing activities in Integration
Testing Phase
1. This testing is conducted in parallel with the integration of various applications (or components).
2. Testing the product with its external and internal interfaces without using drivers and stubs.
3. Incremental approach while
integrating the interfaces.
4.5 System Testing:
The primary objective of system testing is to discover errors when the system is tested as a whole.
System testing is also called End-to-End Testing.
The user is expected to test from Login to Logout, covering various business functionalities.
The following tests will be conducted in System Testing:
Recovery Testing.
Security Testing.
Load & Stress Testing.
Functional Testing
Testing activities in System Testing
phase
1. System test is done to validate the product with respect to client requirements.
2. Testing can be in multiple rounds.
3. Defects found during system test should be logged into the Defect Tracking System for the purpose of tracking.
4. Test logs and defects are captured and maintained.
5. Review of all the test documents.
Approach: IDO Model
Identify the End-to-End/Business Life Cycles.
Design the tests and data.
Optimize the End-to-End/Business Life Cycles.
4.6 Acceptance Testing:
The primary objective of acceptance testing is to get acceptance from the client.
Testing the system behavior against the customer's requirements.
Customers undertake typical tasks to check their requirements.
Done at the customer's premises in the user environment.
Acceptance Testing Types
Alpha Testing
Testing the application on the developer's premises in a controlled environment. Generally, the Quality Assurance cell is the body responsible for conducting the test.
On successful completion of this phase, the software is ready to migrate outside the developer's premises.
Beta Testing
It is carried out at one or more users' premises using their infrastructure, in an uncontrolled manner.
It is the customer or his representative that conducts the test, with or without the developer around. As and when bugs are uncovered, the developer is notified.
This phase enables the developer to modify the code so as to alleviate any remaining bugs before the final official release.
Approach: BE
Building a team with real-time users, functional users and developers.
Execution of business Test Cases.
When Should we start writing Test Cases/ Testing
V Model is the most suitable way to start writing Test Cases and conduct Testing.
SDLC Phase | Test Cases | Testing
Business Requirements Docs | Acceptance Test Cases | Acceptance Testing
Software Requirements Docs | System Test Cases | System Testing
Design Requirements Docs | Integration Test Cases | Integration Testing
Code | Unit Test Cases | Unit Testing
5 Testing Methods
5.1 Functionality Testing:
Objective:
Testing the functionality of the application with the help of input and output.
Test against system requirements.
To confirm all the requirements are covered.
Approach:
1. Equivalence Class
2. Boundary Value Analysis
3. Error Guessing.
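The three techniques named above can be illustrated against a hypothetical field that accepts ages 18 to 60; the validator and all values here are our own example, not from any specific application.

```python
# Hypothetical field under test: age must be in 18..60 inclusive.
LOW, HIGH = 18, 60

def is_valid_age(age):
    return LOW <= age <= HIGH

# Equivalence classes: one representative value per class.
assert is_valid_age(30)        # valid class
assert not is_valid_age(5)     # invalid class (below range)
assert not is_valid_age(70)    # invalid class (above range)

# Boundary value analysis: values at and just around each boundary.
for age, expected in [(17, False), (18, True), (19, True),
                      (59, True), (60, True), (61, False)]:
    assert is_valid_age(age) == expected

# Error guessing: inputs likely to trip the validator (wrong types).
for bad in [None, "18"]:
    try:
        result = is_valid_age(bad)
    except TypeError:
        result = False             # rejecting with an error is acceptable
    assert result is False

print("all checks passed")
```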
5.2 Usability Testing:
To test the Easiness and User-friendliness of the system.
Approach:
1. Qualitative
2. Quantitative
Qualitative Approach
Each and every function should be available from all the pages of the site.
The user should be able to submit each and every request within 4-5 actions.
A confirmation message should be displayed for each and every submit.
Quantitative Approach
A heuristic checklist should be prepared with all the general test cases that fall under the classification of checking.
These generic test cases should be given to 10 different people, who are asked to execute the system and mark the pass/fail status.
The average of the 10 different people should be considered as the final result.
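The averaging step of the quantitative approach can be sketched as follows; the pass/fail marks are made-up sample data for 10 hypothetical evaluators.

```python
# Quantitative usability scoring: the same checklist is executed by 10
# people; the final score is the average of their individual pass rates.
results = [
    [True, True, False, True],
    [True, False, False, True],
    [True, True, True, True],
] + [[True, True, False, False]] * 7      # 10 evaluators in total

pass_rates = [sum(r) / len(r) for r in results]   # each person's pass rate
final_score = sum(pass_rates) / len(pass_rates)   # average across people
print(final_score)
```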
Example: Some people may feel the system is more user-friendly if the submit button is on the left side of the screen; at the same time, others may feel it is better if the submit button is placed on the right side.
Classification of Checking:
Clarity of communication.
Accessibility
Consistency
Navigation
Design & Maintenance
Visual Representation.
5.3 Reliability Testing:
Objective
Reliability is considered as the probability of failure-free operation for a specified time, in a specified environment, for a given purpose.
To find the Mean Time Between Failures / time available under a specific load pattern, and the mean time for recovery.
Approach
By performing continuous hours of operation.
More than 85% stability is a must.
Reliability Testing helps you to
confirm:
Business logic performs asexpected
Active buttons are really active
Correct menu options are
available
Reliable hyperlinks
Note: This should be done by using
performance testing tools
5.4 Regression Testing:
The objective is to check that new functionalities have been incorporated correctly without breaking the existing functionalities.
RAD: In the case of Rapid Application Development, regression testing plays a vital role, as the total development happens in bits and pieces.
Testing whether code problems have been fixed correctly or not.
Approach
Manual Testing (By
using impact Analysis)
Automation tools
5.5 Performance Testing:
The primary objective of performance testing is to demonstrate that the system works functionally as per specifications, within a given response time, on a production-sized database.
Objectives
Assessing the system
capacity for growth.
Identifying weak points
in the architecture
Detect obscure bugs insoftware
Tuning the system
Verify resilience &
reliability
Performance Parameters
Request-Response Time
Transactions per Second
Turn-around Time
Page download time
Throughput
Approach
Usage of Automation Tools
Classification of Performance
Testing:
Load Test
Volume Test
Stress Test
Load Testing
Estimating the design capacity of the system within the resource limits.
The approach is a load profile.
Volume Testing
The process of feeding a program with a heavy volume of data.
The approach is a data profile.
Stress Testing
Estimating the breakdown point of the system beyond the resource limits.
Repeatedly working on the same functionality.
Critical query execution (join queries) to emulate peak load.
Load Vs Stress:
With a simple scenario (a functional query), N number of people working on it will not put stress on the server.
A complex scenario, even with a smaller number of users, will stress the server.
5.6 Scalability Testing:
The objective is to find the maximum number of users the system can handle.
Classification:
Network Scalability
Server Scalability
Application Scalability
Approach
Performance Tools
5.7 Compatibility Testing:
Compatibility testing provides a basic understanding of how a product will perform over a wide range of hardware, software and network configurations, and helps isolate specific problems.
Approach
Environment Selection.
Understanding the end users
The importance of selecting both old and new browsers
Selection of the Operating System
Test Bed Creation
Partition of the hard disk
Creation of a base image
5.8 Security Testing:
Testing how well the system
protects against unauthorized
internal or external access.
Verify how easily the system is subject to security violations under different conditions and environments.
During security testing, password cracking, unauthorized entry into the software, and network security are all taken into consideration.
5.8 Installation Testing:
Installation testing is performed to ensure that all install features and options function properly, and to verify that all necessary components of the application are installed.
The uninstallation of the
product also needs to be tested
to ensure that all data,
executables, and .DLLs are
removed.
The uninstallation of the
application is tested using DOS
command line, Add/Removeprograms, and manual deletion
of files
5.9 Adhoc Testing
Testing carried out using no recognized test case design technique.
5.10 Exhaustive Testing
Testing the application with all
possible combinations of valuesfor program variables.
Feasible only for small, simple
programs.
6 Performance Life Cycle
6.1 What is Performance Testing?
The primary objective of performance testing is to demonstrate that the system works functionally as per specifications, within a given response time, on a production-sized database.
6.2 Why Performance Testing:
To assess the system
capacity for growth
The load and response data gained from the tests can be used to validate the capacity planning model and assist decision making.
To identify weak points
in the architecture
The controlled load can be increased to extreme levels to stress the architecture and break it; bottlenecks and weak components can be fixed or replaced.
To detect obscure bugs insoftware
Tests executed for extended periods can expose failures caused by memory leaks and reveal obscure contention problems or conflicts.
To tune the system
Repeat runs of tests can be performed to verify that tuning activities are having the desired effect, improving performance.
To verify resilience &
reliability
Executing tests at production loads for extended periods is the only way to assess the system's resilience and reliability and to ensure required service levels are likely to be met.
6.3 Performance-Tests:
Used to test each part of the web application to find out which parts of the website are slow and how we can make them faster.
6.4 Load-Tests:
This type of test is done to test the website using the load that the customer expects to have on his site. This is something like a real-world test of the website.
First we have to define the maximum request times we want the customers to experience; this is done from the business and usability point of view, not from a technical point of view. At this point we need to calculate the impact of a slow website on the company's sales and support costs.
Then we have to calculate the anticipated load and load pattern for the website (refer to Annexure I for details on load calculation), which we then simulate using the tool.
At the end we compare the test results with the request times we wanted to achieve.
6.5 Stress-Tests:
They simulate brute-force attacks with excessive load on the web server. In the real world, situations like this can be created by a massive spike of users far above normal usage, e.g. caused by a large referrer (imagine the website being mentioned on national TV).
The goals of stress tests are to learn under what load the server generates errors, whether it will come back online after such a massive spike at all or crash, and when it will come back online.
6.6 When should we start
Performance Testing:
It is even a good idea to start performance testing before a line of code is written at all! Early testing of the base technology (network, load balancer, application, database and web servers) for the expected load levels can save a lot of money if you discover at this point that your hardware is too slow. The first stress tests can also be a good idea at this point.
The cost of correcting a performance problem rises steeply from the start of development until the website goes into production, and can be unbelievably high for a website already online.
As soon as several web pages are working, the first load tests should be conducted, and from there on they should be part of the regular testing routine each day or week, or for each build of the software.
6.7 Popular tools used to conduct
Performance Testing:
LoadRunner from Mercury
Interactive
AstraLoad from Mercury Interactive
Silk Performer from Segue
Rational Suite Test Studio from Rational
Rational Site Load from Rational
WebLoad from Radview
RSW eSuite from Empirix
MS Stress tool from Microsoft
6.8 Performance Test Process:
This is a general process for performance testing, which can be customized according to project needs. A few more process steps can be added to the existing process, but deleting any of the steps may result in an incomplete process. If the client is using one of the tools, one can simply follow the respective process demonstrated by the tool.
General Process Steps:
1. Setting up of the environment
2. Record & playback in the standby mode
3. Enhancement of the script to support multiple users
4. Configure the scripts
5. Execution for fixed users and reporting the status to the developers
6. Re-execution of the scenarios after the developers fine-tune the code
Setting up of the test environment
The installation of the tool and agents.
Directory structure creation for the storage of the scripts and results.
Installation of additional software, if essential, to collect the server statistics.
It is also essential to ensure the correctness of the environment by performing a dry run.
Record & playback in the standby mode
The scripts are generated using the script generator and played back to ensure that there are no errors in the script.
Enhancement of the script to support multiple users
Variables like logins, user inputs, etc. should be parameterised to simulate the live environment.
This is also essential since, in some applications, no two users can log in with the same id.
Configuration of the scenarios
Scenarios should be configured to run the scripts on different agents and to schedule the scenarios.
Distribute the users onto different scripts, collect the data related to the database, etc.
Hosts: The next important step in the testing approach is to run the virtual users on different host machines, to reduce the load on the client machine by sharing the resources of the other machines.
Users: The number of users who need to be activated during the execution of the scenario.
Scenarios: A scenario might comprise either a single script or multiple scripts. The main intention of creating a scenario is to simulate load on the server similar to the live/production environment.
Ramping: In the live environment, not all users log in to the application simultaneously. At this stage we can simulate the virtual users similar to the live environment by deciding:
1. How many users should be activated at a particular point of time as a batch?
2. What should be the time interval between every batch of users?
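The two ramping decisions above can be sketched as a small schedule calculation. The function and its parameters are our own illustration, not part of any load tool named in this manual.

```python
# Ramping sketch: given a batch size and an interval between batches,
# produce the schedule of (start_time_seconds, users_started) pairs.
def ramp_schedule(total_users, batch_size, interval_s):
    schedule = []
    started = 0
    t = 0
    while started < total_users:
        batch = min(batch_size, total_users - started)  # last batch may be smaller
        schedule.append((t, batch))
        started += batch
        t += interval_s
    return schedule

# 25 virtual users, 10 per batch, one batch every 60 seconds
print(ramp_schedule(25, 10, 60))  # [(0, 10), (60, 10), (120, 5)]
```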
Execution for fixed users and
reporting the status to the developers
The script should initially be executed for one user, and the results/inputs should be verified to check whether the server response time for a transaction is less than or equal to the acceptable limit (benchmark).
If the results are found adequate, the execution should be continued for
different set of users. At the end of
every execution the results should be
analysed.
If a stage is reached where the time taken for the server to respond to a transaction is above the acceptable limit, then the inputs should be given to the developers.
Re-execution of the scenarios after
the developers fine tune the code
After the fine-tuning, the scenarios should be re-executed for the specific set of users for which the response was inadequate.
If found satisfactory, the execution should be continued until the decided load.
Final report
At the end of the performance testing, a final report should be generated which should comprise the following:
Introduction: about the application.
Objectives: set/specified in the test plan.
Approach: a summary of the steps followed in conducting the test.
Analysis & Results: a brief explanation of the results and the analysis of the report.
Conclusion: the report should be concluded by stating whether the objectives set before the test were met or not.
Annexure: can consist of graphical representations of the data with a brief description, comparison statistics if any, etc.
7 Life Cycle of Automation
The automation life cycle follows these steps:
1. Analyze the Application
2. Select the Tool
3. Identify the Scenarios
4. Design / Record Test Scripts
5. Modify the Test Scripts
6. Run the Test Scripts
7. Finding & Reporting the Defects
7.1 What is Automation?
A software program is used to test another software program; this is referred to as automated software testing.
7.2 Why Automation
Avoid the errors that humans make when they get tired after multiple repetitions.
The test program won't skip any test by mistake.
Each future test cycle will take less time and require less human intervention.
Required for regression testing.
7.3 Benefits of Test Automation:
Allows more testing to happen
Tightens / Strengthen Test Cycle
Testing is consistent, repeatable
Useful when new patches
released
Makes configuration testingeasier
Test battery can be continuously
improved.
7.4 False Benefits:
Fewer tests will be needed.
It will be easier if it is automated.
It compensates for poor design.
No more manual testing.
7.5 What are the different tools
available in the market?
Rational Robot
WinRunner
SilkTest
QA Run
WebFT
8 Testing
8.1 Test Strategy
Test strategy is a statement of the overall approach to testing, designed to meet the business and test objectives.
It is a plan-level document and has to be prepared in the requirement stage of the project.
It identifies the methods, techniques and tools to be used for testing.
It can be project- or organization-specific.
Developing a test strategy which effectively meets the needs of the organization/project is critical to the success of the software development.
An effective strategy has to meet the project and business objectives.
Defining the strategy upfront, before the actual testing, helps in planning the test activities.
A test strategy will typically cover the following aspects:
Definition of test objectives
Strategy to meet the specified
objective
Overall testing approach
Test Environment
Test Automation requirements
Metric Plan
Risk Identification, Mitigation
and Contingency plan
Details of Tools usage
Specific Document templatesused in testing
8.2 Testing Approach
The test approach will be based on the objectives set for testing, and will detail the way the testing is to be carried out:
Types of testing to be done, viz. Unit, Integration and System testing
The method of testing, viz. Black-box, White-box, etc.
Details of any automated testing to be done
8.3 Test Environment
All the hardware and software requirements for carrying out testing shall be identified in detail.
Any specific tools required for testing will also be identified.
If the testing is going to be done remotely, then it has to be considered during estimation.
8.4 Risk Analysis
Risk analysis should be carried out for the testing phase.
Risk identification will be accomplished by identifying causes-and-effects or effects-and-causes.
The identified risks are classified into Internal and External risks.
Internal risks are things that the test team can control or influence.
External risks are things beyond the control or influence of the test team.
Once risks are identified and classified, the following activities will be carried out:
Identify the probability of occurrence
Identify the impact areas if the risk were to occur
Risk mitigation plan: how do we avoid this risk?
Risk contingency plan: if the risk were to occur, what do we do?
8.5 Testing Limitations
You cannot test a program completely.
We can only test against system requirements, and so may not detect errors in the requirements themselves.
Incomplete or ambiguous requirements may lead to inadequate or incorrect testing.
Exhaustive (total) testing is impossible in the present scenario.
Time and budget constraints normally require very careful planning of the testing effort.
Compromise between thoroughness and budget.
Test results are used to make business decisions for release dates.
Even if you do find the last bug, you'll never know it.
You will run out of time before you run out of test cases.
You cannot test every path.
You cannot test every valid input.
You cannot test every invalid input.
8.6 Testing Objectives
You cannot prove a program correct (because it isn't!).
The purpose of testing is to find problems.
The purpose of finding problems is to get them corrected.
8.7 Testing Metrics
Time
Time per test case
Time per test script
Time per unit test
Time per system test
Sizing
Function points
Lines of code
Defects
Number of defects
Defects per sizing measure
Defects per phase of testing
Defect origin
Defect removal efficiency
Defect Removal Efficiency = (Number of defects found in producer testing) / (Number of defects during the life of the product)
Size Variance = (Actual Size - Planned Size) / Planned Size
Delivery Variance = (Actual end date - Planned end date) / (Planned end date - Planned start date)
Effort Variance = (Actual effort - Planned effort) / Planned effort
Productivity = Size / Effort
Review Efficiency = (Number of defects found during review) / Effort
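The metric formulas above can be written as one-line helpers (a minimal sketch; the function and parameter names are my own, and dates are taken as plain day numbers for simplicity):

```python
def defect_removal_efficiency(found_in_testing, found_in_product_life):
    return found_in_testing / found_in_product_life

def size_variance(actual_size, planned_size):
    return (actual_size - planned_size) / planned_size

def delivery_variance(actual_end, planned_end, planned_start):
    return (actual_end - planned_end) / (planned_end - planned_start)

def effort_variance(actual_effort, planned_effort):
    return (actual_effort - planned_effort) / planned_effort

def productivity(size, effort):
    return size / effort

def review_efficiency(defects_found_in_review, effort):
    return defects_found_in_review / effort
```

For example, if 90 of the 100 defects found over the product's life were caught in producer testing, defect removal efficiency is 0.9 (90%).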
8.8 Test Stop Criteria
Minimum number of test cases successfully executed.
Minimum number of defects uncovered (e.g., 16 per 1000 statements).
Statement coverage target reached.
Testing becomes uneconomical.
Reliability model criteria met.
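As a sketch, the stop criteria above could be combined into a single check (the thresholds here are illustrative assumptions, not fixed rules):

```python
def should_stop_testing(cases_executed, cases_planned,
                        defects_found, statements, statement_coverage,
                        min_exec_ratio=0.95, defects_per_kstmt=16,
                        min_coverage=0.85):
    # Minimum number of test cases successfully executed.
    enough_cases = cases_executed / cases_planned >= min_exec_ratio
    # Minimum number of defects uncovered (e.g., 16 per 1000 statements).
    enough_defects = defects_found >= defects_per_kstmt * statements / 1000
    # Minimum statement coverage reached.
    enough_coverage = statement_coverage >= min_coverage
    return enough_cases and enough_defects and enough_coverage
```

Economics and reliability-model projections would be extra inputs in practice.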
8.9 Six Essentials of Software Testing
1. The quality of the test process
determines the success of the test
effort.
2. Prevent defect migration by
using early life-cycle testing
techniques.
3. The time for software testing tools is now.
4. A real person must take
responsibility for improving the
testing process.
5. Testing is a professional discipline requiring trained, skilled people.
6. Cultivate a positive team attitude
of creative destruction.
8.10 What are the five common problems in the s/w development process?
Poor Requirements: If the requirements are unclear, incomplete, too general and not testable, there will be problems.
Unrealistic Schedules: If too much work is crammed into too little time, problems are inevitable.
Inadequate Testing: No one will know whether the system is good or not until customers complain or the system crashes.
Featuritis: Requests to pile on new features after development is underway; extremely common.
Miscommunication: If the developers don't know what is needed, or customers have erroneous expectations, problems are guaranteed.
8.11 How do you avoid the five common problems in software development?
Solid Requirements: Ensure the requirements are solid, clear, complete, detailed, cohesive, attainable and testable.
Realistic Schedules: Have schedules that are realistic. Allow adequate time for planning, design, testing, bug fixing, re-testing, changes and documentation. Personnel should be able to complete the project without burning out.
Adequate Testing: Do testing that is adequate. Start testing early on, re-test after fixes or changes, and plan for sufficient time for both testing and bug fixing.
Firm Requirements: Avoid new features. Stick to the initial requirements as much as possible.
Communication: Communicate clearly. Require walk-throughs and inspections when appropriate.
8.12 What should be done when there is not enough time for testing?
Use risk analysis to determine where testing should be focused:
Which functionality is most important to the project's intended purpose?
Which functionality is most visible to the user?
Which functionality has the largest safety impact?
Which functionality has the largest financial impact on users?
Which aspects of the application are most important to the customer?
Which aspects of the application can be tested early in the development cycle?
Which parts of the code are most complex and thus most subject to errors?
Which parts of the application were developed in rush or panic mode?
Which aspects of similar/related previous projects caused problems?
Which aspects of similar/related previous projects had large maintenance expenses?
Which parts of the requirements and design are unclear or poorly thought out?
What do the developers think are the highest-risk aspects of the application?
What kinds of problems would cause the worst publicity?
What kinds of problems would cause the most customer service complaints?
What kinds of tests could easily cover multiple functionalities?
Which tests will have the best high-risk-coverage to time-required ratio?
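The last question above suggests a simple scoring sketch: rank candidate tests by risk coverage per unit time (the test names, scores and hours below are hypothetical):

```python
def coverage_to_time_ratio(risk_covered, hours_required):
    # Higher ratio = more risk retired per hour of testing.
    return risk_covered / hours_required

# (name, risk-coverage score, hours to run) -- illustrative values.
candidate_tests = [
    ("login flow",    8, 2),
    ("report export", 3, 3),
    ("payment path",  9, 1),
]

# When time is short, run the best-ratio tests first.
ordered = sorted(candidate_tests,
                 key=lambda t: coverage_to_time_ratio(t[1], t[2]),
                 reverse=True)
```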
8.13 How do you know when to stop testing?
Common factors in deciding when to stop are:
Deadlines, e.g. release deadlines, testing deadlines;
Test cases completed with a certain percentage passed;
Test budget has been depleted;
Coverage of code, functionality, or requirements reaches a specified point;
Bug rate falls below a certain level; or
Beta or alpha testing period ends.
8.14 Why does the software have
Bugs?
Miscommunication or No
communication
Software Complexity
Programming Errors
Changing Requirements
Time Pressures
Poorly Documented Code
8.15 Different Types of Errors in Software
User Interface Errors
Error Handling
Boundary related errors
Calculation errors
Initial and Later states
Control flow errors
Errors in Handling or
Interpreting Data
Race Conditions
Load Conditions
Hardware
Source, Version and ID Control
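Boundary-related errors from the list above are often simple off-by-one mistakes. A minimal illustration (the function is hypothetical, not from any real codebase):

```python
def last_n_buggy(items, n):
    # Boundary error: the -1 end index silently drops the final item.
    return items[-n:-1]

def last_n(items, n):
    # Correct boundary: slice all the way to the end of the list.
    return items[len(items) - n:]
```

A boundary-focused test that exercises the smallest and largest valid inputs catches this class of bug quickly.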
9 Roles and Responsibilities
9.1 Test Manager
Single point of contact between Wipro onsite and offshore teams
Prepare the project plan
Test management
Test planning
Interact with Wipro onsite lead, Client QA manager
Team management
Work allocation to the team
Test coverage analysis
Co-ordination with onsite for issue resolution
Monitoring the deliverables
Verify readiness of the product for release through release review
Obtain customer acceptance on the deliverables
Performing risk analysis when required
Reviews and status reporting
Authorize intermediate deliverables and patch releases to customer
9.2 Test Lead
Resolves technical issues for the product group
Provides direction to the team members
Performs activities for the respective product group
Review and approve Test Plan / Test Cases
Review Test Script / Code
Approve completion of Integration testing
Conduct System / Regression tests
Ensure tests are conducted as per plan
Reports status to the Offshore Test Manager
9.3 Test Engineer
Development of test cases and scripts
Test execution
Result capturing and analysing
Follow the test plans, scripts, etc. as documented
Check tests are correct before reporting s/w faults
Defect reporting and status reporting
Assess risk objectively
Prioritize what you report
Communicate the truth
9.4 How to Prioritize Tests:
We can't test everything.
There is never enough time to do all the testing you would like.
So prioritize tests.
Tips
Possible ranking criteria (all risk-based):
Test where a failure would be most severe.
Test where failures would be most visible.
Take the help of the customer in understanding what is most important to him.
What is most critical to the customer's business.
Areas changed most often.
Areas with most problems in the past.
Most complex areas, or technically critical ones.
Note: If you follow the above, whenever you stop testing, you have done the best testing in the time available.
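The ranking criteria above can be turned into a rough scoring sheet (the areas, criteria names and 1-5 scores below are hypothetical):

```python
# Risk-based criteria from the tips above, scored 1 (low) to 5 (high).
CRITERIA = ("severity", "visibility", "customer_value",
            "change_rate", "past_problems", "complexity")

def priority_score(area):
    # Higher total = test this area first.
    return sum(area[c] for c in CRITERIA)

areas = [
    {"name": "billing",    "severity": 5, "visibility": 4, "customer_value": 5,
     "change_rate": 3, "past_problems": 4, "complexity": 4},
    {"name": "help pages", "severity": 1, "visibility": 3, "customer_value": 2,
     "change_rate": 1, "past_problems": 1, "complexity": 1},
]

test_order = sorted(areas, key=priority_score, reverse=True)
```

Even a crude sheet like this makes the "stop anytime, best testing done" property concrete: the highest-scoring areas are always covered first.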
10 How can we improve the efficiency
in testing?
Recent years have shown a lot of outsourcing in the testing area.
It is the right time to think about and create processes to improve the efficiency of testing projects.
The best team will result in efficient deliverables.
The team should contain 55% hard-core test engineers, 30% domain-knowledge engineers and 15% technology engineers.
How did we arrive at these figures?
Past projects have shown that 50-60% of test cases are written on the basis of testing techniques, 28-33% of test cases cover domain-oriented business rules, and 15-18% are technology-oriented test cases.
[Chart: Testing vs Domain vs Technology: 55% / 30% / 15%]
Software testability is simply how easily a computer program can be tested.
There is a set of program characteristics that lead to testable software.
11. CMM Levels
CMM = 'Capability Maturity
Model', developed by the SEI.
It's a model of 5 levels of organizational 'maturity' that determine effectiveness in delivering quality software.
It is geared to large organizations such as large U.S. Defense Department contractors.
However, many of the QA processes involved are appropriate to any organization, and if reasonably applied can be helpful.
Organizations can receive CMM ratings by undergoing assessments by qualified auditors.
The Software Engineering Institute uses a conceptual framework based on industry best practices to assess the process maturity, capability and performance of a software development organization.
This framework is called the Capability Maturity Model (CMM).
The extent of implementation for a specific Key Process Area is evaluated by assessing:
1. Commitment to perform (policies and leadership)
2. Ability to perform (resources and training)
3. Activities performed (plans and procedures)
4. Measurement and analysis (measures and status)
5. Verification