testing_funda


Upload: prasad-reddy

Post on 09-Apr-2018


8/7/2019 Testing_funda

http://slidepdf.com/reader/full/testingfunda 1/35

What is Regression Testing? 

If a piece of Software is modified for any reason testing needs to be done to ensure that it works

as specified and that it has not negatively impacted any functionality that it offered previously. This is known as Regression Testing.

Regression testing means rerunning test cases from existing test suites to build confidence that

software changes have no unintended side-effects. The “ideal” process would be to create an

extensive test suite and run it after each and every change.

Making Regression Testing Cost Effective:

Every time a change occurs, one or more of the following scenarios may arise:

- More Functionality may be added to the system

- More complexity may be added to the system

- New bugs may be introduced

- New vulnerabilities may be introduced in the system
- System may tend to become more and more fragile with each change

After the change, the new functionality may have to be tested along with all the original functionality.
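As a sketch of the idea (the function, its test data, and all numbers are hypothetical), an existing regression suite is simply rerun in full after every change to the unit it covers:

```python
# Hypothetical example: a tiny "existing test suite" for a discount
# function. After any change to apply_discount, the whole suite is
# rerun to catch unintended side-effects of the modification.

def apply_discount(price, percent):
    """The unit under maintenance: reduce price by percent."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (100 - percent) / 100, 2)

# Test cases kept from earlier releases; never deleted, only extended.
REGRESSION_SUITE = [
    (100.0, 0, 100.0),    # no discount: price unchanged
    (100.0, 25, 75.0),    # ordinary case
    (19.98, 50, 9.99),    # rounding to 2 decimal places
]

def run_regression_suite():
    """Rerun every existing test case; return the list of failures."""
    failures = []
    for price, percent, expected in REGRESSION_SUITE:
        actual = apply_discount(price, percent)
        if actual != expected:
            failures.append((price, percent, expected, actual))
    return failures
```

If a change to apply_discount breaks an old behaviour, run_regression_suite() returns the offending cases instead of an empty list, which is exactly the "confidence that software changes have no unintended side-effects" described above.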

integration testing

A type of testing in which software and/or hardware components are

combined and tested to confirm that they interact according to their

requirements. Integration testing can continue progressively until the entire system has been integrated.

Integration Testing 

The fundamentals of Integration Testing: Definition, Analogy, Ws,

 Approaches, Tips

What is Integration Testing?

Integration Testing is a level of the software testing process where individual units

are combined and tested as a group.

 The purpose of Integration Testing is to expose faults in the interaction between

integrated units.

 Test drivers and test stubs are used to assist in Integration Testing.


Note: The definition of a unit could vary from person to person, and it could mean

any of the following:

1. the smallest testable part of a software
2. a 'module' which could consist of many of '1'

3. a 'component' which could consist of many of '2'

Integration Testing Analogy

During the process of manufacturing a ballpoint pen, the cap, the body, the tail and

clip, the ink cartridge and the ballpoint are produced separately and unit tested

separately. When two or more units are ready, they are assembled and Integration

 Testing is performed. For example, whether the cap fits into the body as required or

not.

When is Integration Testing performed?

Integration Testing is performed after Unit Testing and before System Testing.

Who performs Integration Testing?

Either Developers themselves or independent Testers perform Integration Testing.

Which testing method is used in Integration Testing?

Any of the Black Box Testing, White Box Testing, and Gray Box Testing methods can


be used. Normally the method depends on your definition of ‘unit’.

Integration Testing Approaches

1. Big Bang is an approach to Integration Testing where all or most of the units

are combined together and tested at one go. This approach is taken when the testing team receives the entire software in a bundle. So what is the difference between Big Bang Integration Testing and System Testing? Well, the former tests only the interactions between the units while the latter tests the entire system.

2. Top Down is an approach to Integration Testing where top-level units are tested first and lower-level units step by step after that. This approach is taken when top-down development is followed. Test Stubs are needed to simulate lower-level units, which may not be available during the initial phases.

3. Bottom Up is an approach to Integration Testing where bottom-level units are tested first and upper-level units step by step after that. This approach is taken when bottom-up development is followed. Test Drivers are needed to simulate higher-level units, which may not be available during the initial phases.

4. Sandwich/Hybrid is an approach to Integration Testing which is a combination of the Top Down and Bottom Up approaches.
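To make the role of a test stub concrete, here is a minimal Top Down sketch (the class and method names are invented for illustration): the top-level unit is integrated and tested against a stub that stands in for a lower-level unit that is not ready yet.

```python
# Hypothetical Top Down integration sketch. The top-level unit
# (ReportGenerator) is integrated first; the real lower-level unit
# (a database layer) is not available, so a test stub simulates it
# with canned answers.

class ReportGenerator:
    """Top-level unit: formats a sales summary from a data source."""
    def __init__(self, data_source):
        self.data_source = data_source   # lower-level unit (or stub)

    def summary(self, region):
        rows = self.data_source.fetch_sales(region)
        total = sum(amount for _, amount in rows)
        return f"{region}: {len(rows)} sales, total {total}"

class SalesDataStub:
    """Test stub: stands in for the unfinished lower-level unit."""
    def fetch_sales(self, region):
        return [("order-1", 120), ("order-2", 80)]  # canned data

def integration_test_report():
    """Exercise the interaction between the unit and the (stubbed) layer."""
    report = ReportGenerator(SalesDataStub())
    return report.summary("north")
```

In the Bottom Up approach, a test driver would play the mirror-image role: a small piece of code that calls the finished lower-level unit the way the missing higher-level unit eventually will.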

Integration Testing Tips

• Ensure that you have a proper Detail Design document where interactions between each unit are clearly defined. In fact, you will not be able to perform Integration Testing without this information.

• Ensure that you have a robust Software Configuration Management system in place. Otherwise, you will have a tough time tracking the right version of each unit, especially if the number of units to be integrated is huge.

• Make sure that each unit is unit tested first, before you start Integration Testing.

• As far as possible, automate your tests, especially when you use the Top Down or Bottom Up approach, since regression testing is important each time you integrate a unit, and manual regression testing can be inefficient.

What is Unit Testing?

Unit Testing is a level of the software testing process where individual units/components of a software/system are tested. The purpose is to validate that the software performs as designed.

A unit is the smallest testable part of software. It usually has one or a few inputs and usually a

single output. In procedural programming a unit may be an individual program, function,

 procedure, etc. In object-oriented programming, the smallest unit is a method, which may belong

to a base/super class, abstract class or derived/child class. (Some treat a module of an application as a unit. This is to be discouraged as there will probably be many individual units within that


module.)

Unit testing frameworks, drivers, stubs and mock or fake objects are used to assist in unit testing.
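As a small sketch of how a mock object assists unit testing (the unit, its dependency, and the e-mail addresses are all hypothetical), the real collaborator is replaced so the unit can be verified in isolation:

```python
# Hypothetical example: unit testing with unittest.mock. The unit
# under test (send_welcome) depends on a mailer; the Mock replaces
# the real mailer so only the unit's own logic is exercised.

from unittest.mock import Mock

def send_welcome(mailer, user_email):
    """Unit under test: greets a new user via the injected mailer."""
    if "@" not in user_email:
        return False
    mailer.send(to=user_email, subject="Welcome!")
    return True

def run_unit_tests():
    # Valid address: the mailer must be called exactly once.
    mailer = Mock()
    assert send_welcome(mailer, "a@example.com") is True
    mailer.send.assert_called_once_with(to="a@example.com", subject="Welcome!")

    # Invalid address: the mailer must never be touched.
    mailer = Mock()
    assert send_welcome(mailer, "not-an-email") is False
    mailer.send.assert_not_called()
    return "all unit tests passed"
```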

When is Unit Testing performed?

Unit Testing is performed prior to Integration Testing.

Who performs Unit Testing?

Unit Testing is normally performed by software developers themselves or their peers. In rare

cases it may also be performed by independent software testers.

Which testing method is used in Unit Testing?

Unit Testing is primarily performed by using the White Box Testing method.

What are the benefits of Unit Testing?

• Unit testing increases confidence in changing/maintaining code. If good unit tests are

written and if they are run every time any code is changed, the likelihood of any defects

due to the change being promptly caught is very high. If unit testing is not in place, the most one can do is hope for the best and wait till the test results at higher levels of testing

are out. Also, if code is already made less interdependent to make unit testing possible, the unintended impact of changes to any code is less.

• Code is more reusable. In order to make unit testing possible, code needs to be modular. This means that code is easier to reuse.

• Development is faster. How? If you do not have unit testing in place, you write your code and perform that fuzzy ‘developer test’ (you set some breakpoints, fire up the GUI, provide a few inputs that hopefully hit your code, and hope that you are all set). In case you have unit testing in place, you write the test, code and run the tests. Writing tests


Gray Box Testing

DEFINITION

Gray Box Testing is a software testing method in which the internal structure/design/implementation of the item being tested is only partially known to the tester. This involves having access to internal data structures and algorithms for purposes of designing the test cases, but testing at the user, or black-box, level.

Gray Box Testing is named so because the software program, in the eyes of the

tester is like a gray/semi-transparent box; inside which one can partially see.

EXAMPLE

An example of Gray Box Testing would be when the codes for two units/modules are

studied (White Box Testing method) for designing test cases and actual tests are

conducted using the exposed interfaces (Black Box Testing method).
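A minimal sketch of that idea (the function and its caching behaviour are invented for illustration): reading the implementation reveals a separate code path for empty input and an internal cache, so the test cases target both, yet every test call goes through the exposed interface only:

```python
# Hypothetical Gray Box sketch. Studying the code below (White Box
# knowledge) shows an empty-input branch and a cache; the tests that
# follow exercise both, but only via the public function (Black Box
# execution), never by poking the internals directly.

_cache = {}

def normalize(name):
    """Exposed interface: trims and title-cases a name, with a cache."""
    if name in _cache:
        return _cache[name]                  # cache-hit path
    if not name.strip():
        result = "<anonymous>"               # empty-input branch
    else:
        result = name.strip().title()
    _cache[name] = result
    return result

def gray_box_tests():
    assert normalize("  ada lovelace ") == "Ada Lovelace"
    assert normalize("") == "<anonymous>"    # hits the empty-input branch
    assert normalize("") == "<anonymous>"    # second call: cache-hit path
    return True
```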

LEVELS APPLICABLE TO

 Though Gray Box Testing method may be used in other levels of testing, it is

primarily useful in Integration Testing.

ADVANTAGES OF GRAY BOX TESTING

• These can be determined from the combination of the advantages of Black Box Testing and White Box Testing.

DISADVANTAGES OF GRAY BOX TESTING

• These can be determined from the combination of the disadvantages of Black Box Testing and White Box Testing.

Note that Gray is also spelt as Grey. Hence Grey Box Testing and Gray Box Testing

mean the same.

White Box Testing 

White Box Testing Definition, Example, Application, Advantages and

Disadvantages

DEFINITION

White Box Testing (also known as Clear Box Testing, Open Box Testing, Glass Box

 Testing, Transparent Box Testing or Structural Testing) is a software testing method

in which the internal structure/design/implementation of the item being tested is

known to the tester. The tester chooses inputs to exercise paths through the code

and determines the appropriate outputs. Programming know-how and the


implementation knowledge are essential. White box testing is testing beyond the user

interface and into the nitty-gritty of a system.

White Box Testing method is named so because the software program, in the eyes

of the tester, is like a white/transparent box; inside which one clearly sees.

White Box Testing is contrasted with Black Box Testing. View Differences between

Black Box Testing and White Box Testing.

EXAMPLE

A tester, usually a developer as well, studies the implementation code of a certain

field on a webpage, determines all legal (valid and invalid) AND illegal inputs and

verifies the outputs against the expected outcomes, which are also determined by studying the implementation code.
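A compact sketch of that example (the validator and its limits are hypothetical): the tester reads the implementation of a field validator and chooses one input per path through the code:

```python
# Hypothetical White Box sketch: an age-field validator whose paths
# are identified by reading the code, with one chosen input per path.

def validate_age(value):
    """Implementation under study: validates a form field."""
    if not value.isdigit():       # path 1: illegal characters
        return "error: not a number"
    age = int(value)
    if age < 18:                  # path 2: below lower bound
        return "error: too young"
    if age > 120:                 # path 3: above upper bound
        return "error: implausible"
    return "ok"                   # path 4: accepted

# One input per path, chosen by studying the code above.
PATH_CASES = {
    "abc": "error: not a number",
    "17": "error: too young",
    "121": "error: implausible",
    "42": "ok",
}

def white_box_tests():
    return all(validate_age(v) == want for v, want in PATH_CASES.items())
```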

LEVELS APPLICABLE TO

White Box Testing method is applicable to the following levels of the software

testing process:

• Unit Testing: For testing paths within a unit
• Integration Testing: For testing paths between units
• System Testing: For testing paths between subsystems

However, it is mainly applied to Unit Testing.

ADVANTAGES OF WHITE BOX TESTING

•  Testing can be commenced at an earlier stage. One need not wait for the GUI to be available.

•  Testing is more thorough, with the possibility of covering most paths.

DISADVANTAGES OF WHITE BOX TESTING

• Since tests can be very complex, highly skilled resources are required, with thorough knowledge of programming and implementation.


•  Test script maintenance can be a burden if the implementation changes too frequently.

• Since this method of testing is closely tied to the application being tested, tools that cater to every kind of implementation/platform may not be readily available.

White Box Testing is like the work of a mechanic who examines the engine to see

why the car is not moving.

Black Box Testing 

Black Box Testing Definition, Example, Application, Techniques,

Advantages and Disadvantages

DEFINITION

Black Box Testing, also known as Behavioral Testing, is a software testing method in

which the internal structure/design/implementation of the item being tested is not known to the tester. These tests can be functional or non-functional, though usually

functional.

Black Box Testing method is named so because the software program, in the eyes

of the tester, is like a black box; inside which one cannot see.

Black Box Testing is contrasted with White Box Testing. View Differences between

Black Box Testing and White Box Testing.

Black Box Testing attempts to find errors in the following categories:

• Incorrect or missing functions

• Interface errors
• Errors in data structures or external database access
• Behavior or performance errors
• Initialization and termination errors

EXAMPLE

A tester, without knowledge of the internal structures of a website, tests the web


pages by using a browser; providing inputs (clicks, keystrokes) and verifying the

outputs against the expected outcome.

LEVELS APPLICABLE TO

Black Box Testing method is applicable to all levels of the software testing process:

• Unit Testing
• Integration Testing
• System Testing
• Acceptance Testing

The higher the level, and hence the bigger and more complex the box, the more the black box testing method comes into use.

TEST DESIGN TECHNIQUES

Equivalence partitioning

Equivalence Partitioning is a software test design technique that involves dividing

input values into valid and invalid partitions and selecting representative values

from each partition as test data.
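As a sketch (the field, its valid range, and the representative values are hypothetical), for a field that accepts integers from 1 to 100 the input space falls into one valid and two invalid partitions, and a single representative is drawn from each:

```python
# Hypothetical Equivalence Partitioning sketch for a 1-100 field.

def accepts(value):
    """System under test: accepts integers in [1, 100]."""
    return 1 <= value <= 100

# One representative value per partition.
PARTITIONS = {
    "invalid: below range": -5,   # represents all values < 1
    "valid: in range": 50,        # represents 1..100
    "invalid: above range": 200,  # represents all values > 100
}

def partition_results():
    """Run the representative of each partition as test data."""
    return {name: accepts(rep) for name, rep in PARTITIONS.items()}
```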

Boundary Value Analysis

Boundary Value Analysis is a software test design technique that involves

determination of boundaries for input values and selecting values that are at the

boundaries and just inside/outside of the boundaries as test data.
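Continuing the same hypothetical 1-to-100 field, Boundary Value Analysis picks test data sitting exactly on each boundary and just inside/outside it:

```python
# Hypothetical Boundary Value Analysis sketch for the same 1-100 field.

def accepts(value):
    """System under test: accepts integers in [1, 100]."""
    return 1 <= value <= 100

# For boundaries 1 and 100: each boundary plus its neighbours.
BOUNDARY_VALUES = [0, 1, 2, 99, 100, 101]

def boundary_results():
    """Exercise the system at and around each boundary."""
    return {v: accepts(v) for v in BOUNDARY_VALUES}
```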

Cause Effect Graphing

Cause Effect Graphing is a software test design technique that involves identifying

the causes (input conditions) and effects (output conditions), producing a Cause-

Effect Graph, and generating test cases accordingly.
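As a tiny sketch (the login rule is invented for illustration): causes C1 "username known" and C2 "password correct" combine through an AND node into effect E1 "access granted", and test cases are generated from the cause combinations:

```python
# Hypothetical Cause-Effect Graphing sketch: E1 = C1 AND C2.

from itertools import product

def access_granted(username_known, password_correct):
    """System under test: effect E1 from causes C1 and C2."""
    return username_known and password_correct

def generate_cases():
    """One test case per combination of cause truth values."""
    return [
        {"C1": c1, "C2": c2, "expected E1": c1 and c2}
        for c1, c2 in product([True, False], repeat=2)
    ]

def check_cases():
    """Execute every generated case against the system."""
    return all(access_granted(c["C1"], c["C2"]) == c["expected E1"]
               for c in generate_cases())
```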

ADVANTAGES OF BLACK BOX TESTING

•  Tests are done from a user's point of view and will help in exposing discrepancies in the specifications

•  The tester need not know programming languages or how the software has been implemented

•  Tests can be conducted by a body independent from the developers, allowing for an objective perspective and the avoidance of developer bias

•  Test cases can be designed as soon as the specifications are complete


DISADVANTAGES OF BLACK BOX TESTING

• Only a small number of possible inputs can be tested and many program paths will be left untested

• Without clear specifications, which is the situation in many projects, test cases will be difficult to design

•  Tests can be redundant if the software designer/developer has already run a test case.

Ever wondered why a soothsayer closes the eyes when foretelling events? Something similar is the case in Black Box Testing.

Differences Between Black Box Testing and White Box Testing 

 The Differences Between Black Box Testing and White Box Testing are listed below.

Definition:
- Black Box Testing is a software testing method in which the internal structure/design/implementation of the item being tested is NOT known to the tester.
- White Box Testing is a software testing method in which the internal structure/design/implementation of the item being tested is known to the tester.

Levels Applicable To:
- Black Box Testing: mainly applicable to higher levels of testing: Acceptance Testing, System Testing.
- White Box Testing: mainly applicable to lower levels of testing: Unit Testing, Integration Testing.

Responsibility:
- Black Box Testing: generally, independent Software Testers.
- White Box Testing: generally, Software Developers.

Programming Knowledge:
- Black Box Testing: not required.
- White Box Testing: required.

Implementation Knowledge:
- Black Box Testing: not required.
- White Box Testing: required.

Basis for Test Cases:
- Black Box Testing: Requirement Specifications.
- White Box Testing: Detail Design.

For a combination of the two testing methods, see Gray Box Testing.

If you have any other differences between Black Box Testing and White Box Testing,

let me know and I will add them to the list.


system testing

(Or "application testing") A type of testing to confirm that all code modules work as specified, and that the system as a whole performs adequately on the platform on which it will be deployed.

System testing should be performed by testers who are trained

to plan, execute, and report on application and system code.

They should be aware of scenarios that might not occur to the

end user, like testing for null, negative, and format-inconsistent values. A tester should be able to repeat the

steps that caused an error.

How does System Testing fit into the Software Development Life Cycle?

In a typical Enterprise, ‘unit testing’ is done by the programmers. This ensures that the individual components are working OK. ‘Integration testing’ focuses on successful integration of all the individual pieces of software (components or units of code).

Once the components are integrated, the system as a whole needs to be rigorously tested to

ensure that it meets the Quality Standards.

Thus System Testing builds on the previous levels of testing, namely Unit Testing and Integration Testing.

Usually a dedicated testing team is responsible for doing ‘System Testing’.

Why is System Testing important?

System Testing is a crucial step in Quality Management Process.

- In the Software Development Life Cycle, System Testing is the first level where the System is tested as a whole
- The System is tested to verify whether it meets the functional and technical requirements


- The application/System is tested in an environment that closely resembles the production environment where the application will finally be deployed
- System Testing enables us to test, verify and validate both the Business requirements as well as the Application Architecture

Prerequisites for System Testing:

The prerequisites for System Testing are:

- All the components should have been successfully Unit Tested
- All the components should have been successfully integrated and Integration Testing should be completed
- An environment closely resembling the production environment should be created

When necessary, several iterations of System Testing are done in multiple environments.

Steps needed to do System Testing:

The following steps are important in performing System Testing:
Step 1: Create a System Test Plan
Step 2: Create Test Cases
Step 3: Carefully build the data used as input for System Testing
Step 4: If applicable, create scripts to
        - build the environment and
        - automate the execution of test cases
Step 5: Execute the test cases
Step 6: Fix the bugs, if any, and re-test the code
Step 7: Repeat the test cycle as necessary

What is a ‘System Test Plan’? 

As you may have read in the other articles in the testing series, this document typically describes the following:

- The Testing Goals
- The key areas to be focused on while testing
- The Testing Deliverables
- How the tests will be carried out
- The list of things to be Tested
- Roles and Responsibilities
- Prerequisites to begin Testing
- Test Environment
- Assumptions
- What to do after a test is successfully carried out
- What to do if a test fails
- Glossary

How to write a System Test Case?

A Test Case describes exactly how the test should be carried out.

The System test cases help us verify and validate the system.

The System Test Cases are written such that:
- They cover all the use cases and scenarios
- The Test cases validate the technical Requirements and Specifications
- The Test cases verify whether the application/System meets the Business and Functional Requirements specified
- The Test cases may also verify whether the System meets the performance standards

Since a dedicated test team may execute the test cases, it is necessary that the System Test Cases be detailed. Detailed Test cases help the testers do the testing as specified without any ambiguity.

The format of the System Test Cases may be like all other Test cases as illustrated below:

• Test Case ID
• Test Case Description:
  o What to Test?
  o How to Test?
• Input Data
• Expected Result
• Actual Result

Sample Test Case Format:

Test Case ID | What To Test? | How to Test? | Input Data | Expected Result | Actual Result | Pass/Fail
------------ | ------------- | ------------ | ---------- | --------------- | ------------- | ---------
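The format above can be sketched as a data structure with a small executor that fills in the Actual Result and Pass/Fail columns (the system under test and all field values here are hypothetical):

```python
# Hypothetical sketch: one test case in the tabular format above,
# executed against a trivial system under test.

def execute_test_case(case, system_under_test):
    """Run one test case and record Actual Result and Pass/Fail."""
    case["actual_result"] = system_under_test(case["input_data"])
    case["pass_fail"] = ("Pass"
                         if case["actual_result"] == case["expected_result"]
                         else "Fail")
    return case

def to_upper(text):
    """Example system under test."""
    return text.upper()

case = {
    "test_case_id": "TC-001",
    "what_to_test": "Upper-casing of user input",
    "how_to_test": "Call to_upper with a lowercase string",
    "input_data": "hello",
    "expected_result": "HELLO",
}
```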


Additionally the following information may also be captured:

a) Test Suite Name
b) Tested By
c) Date
d) Test Iteration (the Test Cases may be executed one or more times)

Working towards Effective Systems Testing:

There are various factors that affect the success of System Testing:

1) Test Coverage: System Testing will be effective only to the extent of the coverage of Test

Cases. What is Test coverage? Adequate Test coverage implies that the scenarios covered by the test cases are sufficient. The Test cases should “cover” all scenarios, use cases, Business Requirements, Technical Requirements, and Performance Requirements. The test cases should enable us to verify and validate that the system/application meets the project goals and specifications.

2) Defect Tracking: The defects found during the process of testing should be tracked.

Subsequent iterations of test cases verify if the defects have been fixed.

3) Test Execution: The Test cases should be executed in the manner specified. Failure to do so

results in improper Test Results.

4) Build Process Automation: A lot of errors occur due to an improper build. A ‘Build’ is a compilation of the various components that make up the application, deployed in the appropriate environment. The Test results will not be accurate if the application is not ‘built’ correctly or if the environment is not set up as specified. Automating this process may help reduce manual errors.

5) Test Automation: Automating the Test process could help us in many ways:

a. The tests can be repeated with fewer errors of omission or oversight
b. Some scenarios can be simulated if the tests are automated, for instance simulating a large number of users or simulating increasingly large amounts of input/output data

6) Documentation: Proper Documentation helps keep track of the Tests executed. It also helps create a knowledge base for current and future projects. Appropriate metrics/statistics can be captured to validate or verify the efficiency of the technical design/architecture.

Summary:


In this article we studied the necessity of ‘System Testing’ and how it is done.

What is User Acceptance Testing?

User Acceptance Testing is often the final step before rolling out the application.

Usually the end users who will be using the application test it before ‘accepting’ the application.

This type of testing gives the end users the confidence that the application being delivered to

them meets their requirements.

This testing also helps uncover bugs related to the usability of the application.

User Acceptance Testing – Prerequisites:

Before User Acceptance Testing can be done, the application must be fully developed. Various levels of testing (Unit, Integration and System) are already completed before User Acceptance Testing is done. As various levels of testing have been completed, most of the technical bugs have already been fixed before UAT.

User Acceptance Testing – What to Test?

To ensure effective User Acceptance Testing, Test cases are created. These Test cases can be created using the various use cases identified during the Requirements definition stage. The Test cases ensure proper coverage of all the scenarios during testing.

During this type of testing the specific focus is the exact real-world usage of the application. The Testing is done in an environment that simulates the production environment. The Test cases are written using real-world scenarios for the application.

User Acceptance Testing – How to Test?

The user acceptance testing is usually a black box type of testing. In other words, the focus is on

the functionality and the usability of the application rather than the technical aspects. It is

generally assumed that the application would have already undergone Unit, Integration and System Level Testing.

However, it is useful if the User acceptance Testing is carried out in an environment that closely

resembles the real world or production environment.

The steps taken for User Acceptance Testing typically involve one or more of the following:

1) User Acceptance Test (UAT) Planning
2) Designing UA Test Cases
3) Selecting a Team that would execute the UAT Test Cases
4) Executing Test Cases
5) Documenting the Defects found during UAT
6) Resolving the issues/Bug Fixing
7) Sign Off

User Acceptance Test (UAT) Planning:

As always the Planning Process is the most important of all the steps. This affects the

effectiveness of the Testing Process. The Planning process outlines the User Acceptance Testing

Strategy. It also describes the key focus areas, entry and exit criteria.

Designing UA Test Cases:

The User Acceptance Test Cases help the Test Execution Team test the application thoroughly. This also helps ensure that the UA Testing provides sufficient coverage of all the scenarios.

The Use Cases created during the Requirements definition phase may be used as inputs for creating Test Cases. The inputs from Business Analysts and Subject Matter Experts are also used for creating the Test Cases.

Each User Acceptance Test Case describes in a simple language the precise steps to be taken to

test something. The Business Analysts and the Project Team review the User Acceptance Test

Cases.

Selecting a Team that would execute the (UAT) Test Cases:

Selecting a Team that would execute the UAT Test Cases is an important step. The UAT Team is generally a good representation of the real-world end users. The Team thus comprises the actual end users who will be using the application.

Executing Test Cases:

The Testing Team executes the Test Cases and may additionally perform random Tests relevant to them.

Documenting the Defects found during UAT:

The Team logs their comments and any defects or issues found during testing.

Resolving the issues/Bug Fixing:
The issues/defects found during Testing are discussed with the Project Team, Subject Matter

Experts and Business Analysts. The issues are resolved as per the mutual consensus and to the

satisfaction of the end users.

Sign Off:

Upon successful completion of the User Acceptance Testing and resolution of the issues, the team

generally indicates the acceptance of the application. This step is important in commercial

8/7/2019 Testing_funda

http://slidepdf.com/reader/full/testingfunda 17/35

software sales. Once the users “accept” the software delivered, they indicate that the software meets their requirements. The users are now confident of the software solution delivered, and the vendor can be paid for the same.

Metrics Used In Testing

The Product Quality Measures:

1. Customer satisfaction index

This index is surveyed before product delivery and after product delivery

(and on-going on a periodic basis, using standard questionnaires). The following are analyzed:

•  Number of system enhancement requests per year 

•  Number of maintenance fix requests per year 

• User friendliness: call volume to customer service hotline

• User friendliness: training time per new user
•  Number of product recalls or fix releases (software vendors)

•  Number of production re-runs (in-house information systems groups)

2. Delivered defect quantities

They are normalized per function point (or per LOC) at product delivery (first 3 months or first year of operation) or ongoing (per year of operation), by level of severity, by category or cause, e.g.: requirements defect, design defect, code defect, documentation/on-line help defect, defect introduced by fixes, etc.

3. Responsiveness (turnaround time) to users

• Turnaround time for defect fixes, by level of severity

• Time for minor vs. major enhancements; actual vs. planned elapsed time

4. Product volatility

• Ratio of maintenance fixes (to repair the system and bring it into compliance with specifications) vs. enhancement requests (requests by users to enhance or change functionality)

5. Defect ratios 

• Defects found after product delivery per function point.

• Defects found after product delivery per LOC

• Ratio of pre-delivery defects to annual post-delivery defects

• Defects per function point of the system modifications

6. Defect removal efficiency 


•  Number of post-release defects (found by clients in field operation), categorized by level

of severity

• Ratio of defects found internally prior to release (via inspections and testing), as a percentage of all defects

• All defects include defects found internally plus externally (by customers) in the first

year after product delivery
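A worked example with illustrative numbers: defect removal efficiency is the share of all defects (internal plus external in the first year) that were found internally before release.

```python
# Illustrative numbers only: computing defect removal efficiency (DRE).

def defect_removal_efficiency(found_internally, found_by_customers_first_year):
    """DRE = internal defects / all defects (internal + first-year external)."""
    all_defects = found_internally + found_by_customers_first_year
    return found_internally / all_defects

# e.g. 90 defects caught via inspections/testing, 10 reported by clients:
dre = defect_removal_efficiency(90, 10)
```

With these numbers the DRE is 0.9, i.e. 90 % of all known defects were removed before release.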

7. Complexity of delivered product

• McCabe's cyclomatic complexity counts across the system

• Halstead’s measure

• Card's design complexity measures

• Predicted defects and maintenance costs, based on complexity measures

8. Test coverage

• Breadth of functional coverage
• Percentage of paths, branches or conditions that were actually tested

• Percentage by criticality level: perceived level of risk of paths

• The ratio of the number of detected faults to the number of predicted faults.

9. Cost of defects 

• Business losses per defect that occurs during operation

• Business interruption costs; costs of work-arounds

• Lost sales and lost goodwill

• Litigation costs resulting from defects

• Annual maintenance cost (per function point)
• Annual operating cost (per function point)
• Measurable damage to your boss's career

10. Costs of quality activities

• Costs of reviews, inspections and preventive measures

• Costs of test planning and preparation

• Costs of test execution, defect tracking, version and change control

• Costs of diagnostics, debugging and fixing

• Costs of tools and tool support

• Costs of test case library maintenance
• Costs of testing & QA education associated with the product

• Costs of monitoring and oversight by the QA organization (if separate from the development and test organizations)

11. Re-work 

• Re-work effort (hours, as a percentage of the original coding hours)


• Re-worked LOC (source lines of code, as a percentage of the total delivered LOC)

• Re-worked software components (as a percentage of the total delivered components)

12. Reliability 

• Availability (percentage of time a system is available, versus the time the system is needed to be available)

• Mean time between failure (MTBF).

• Mean time to repair (MTTR)

• Reliability ratio (MTBF / MTTR)

•  Number of product recalls or fix releases

•  Number of production re-runs as a ratio of production runs
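The reliability figures above combine naturally: steady-state availability is commonly estimated as MTBF / (MTBF + MTTR). A small sketch (the hour figures are invented):

```python
def availability(mtbf_hours: float, mttr_hours: float) -> float:
    """Steady-state availability estimated from mean time between
    failures (MTBF) and mean time to repair (MTTR)."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

def reliability_ratio(mtbf_hours: float, mttr_hours: float) -> float:
    """The MTBF / MTTR ratio listed above."""
    return mtbf_hours / mttr_hours

# Example: a failure every 990 hours on average, 10 hours to repair.
print(availability(990, 10))       # 0.99
print(reliability_ratio(990, 10))  # 99.0
```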

Metrics Used In Testing

In this tutorial you will learn about the metrics used in testing: the Product Quality Measures (1. Customer satisfaction index, 2. Delivered defect quantities, 3. Responsiveness (turnaround time) to users, 4. Product volatility, 5. Defect ratios, 6. Defect removal efficiency, 7. Complexity of delivered product, 8. Test coverage, 9. Cost of defects, 10. Costs of quality activities, 11. Re-work, 12. Reliability) and Metrics for Evaluating Application System Testing.


Metrics for Evaluating Application System Testing:

Metric = Formula

Test Coverage = Number of units (KLOC/FP) tested / total size of the system (KLOC represents thousands of lines of code; FP represents function points)


Number of tests per unit size = Number of test cases per KLOC/FP (LOC represents Lines of Code).

Acceptance criteria tested = Acceptance criteria tested / total acceptance criteria

Defects per size = Defects detected / system size

Test cost (in %) = Cost of testing / total cost *100

Cost to locate defect = Cost of testing / the number of defects located

Achieving Budget = Actual cost of testing / Budgeted cost of testing

Defects detected in testing = Defects detected in testing / total system defects

Defects detected in production = Defects detected in production/system size

Quality of Testing = No. of defects found during testing / (No. of defects found during testing + No. of acceptance defects found after delivery) * 100

Effectiveness of testing to business = Loss due to problems / total resources processed by the system

System complaints = Number of third party complaints / number of transactions processed

Scale of Ten = Assessment of testing by giving rating in scale of 1 to 10

Source Code Analysis = Number of source code statements changed / total number of tests.

Effort Productivity:

Test Planning Productivity = No. of test cases designed / Actual effort for design and documentation

Test Execution Productivity = No. of test cycles executed / Actual effort for testing

What is Project Planning?

Project Planning is an aspect of Project Management that focuses a lot on Project Integration.

The project plan reflects the current status of all project activities and is used to monitor and

control the project.

The Project Planning tasks ensure that various elements of the Project are coordinated and

therefore guide the project execution.

Project Planning helps in:
- Facilitating communication


- Monitoring/measuring the project progress, and

- Providing overall documentation of assumptions/planning decisions

The Project Planning Phases can be broadly classified as follows:
- Development of the Project Plan
- Execution of the Project Plan
- Change Control and Corrective Actions

Project Planning is an ongoing effort throughout the Project Lifecycle.

Why is it important?

“If you fail to plan, you plan to fail.”

Project planning is crucial to the success of the Project.

Careful planning right from the beginning of the project can help to avoid costly mistakes. It provides an assurance that the project execution will accomplish its goals on schedule and within budget.

What are the steps in Project Planning?

Project Planning spans the various aspects of the Project. Generally Project Planning is considered to be a process of estimating, scheduling and assigning the project's resources in order to deliver an end product of suitable quality. However, it is much more than that: planning can assume a strategic role that determines the very success of the project. Creating the Project Plan itself is one of the crucial steps in Project Planning.

Typically Project Planning can include the following types of planning:
1) Project Scope Definition and Scope Planning
2) Quality Planning
3) Project Activity Definition and Activity Sequencing
4) Time, Effort and Resource Estimation
5) Risk Factors Identification
6) Schedule Development
7) Cost Estimation and Budgeting
8) Organizational and Resource Planning
9) Risk Management Planning
10) Project Plan Development and Execution
11) Performance Reporting
12) Planning Change Management
13) Project Rollout Planning

We now briefly examine each of the above steps:


1) Project Scope Definition and Scope Planning:

In this step we document the project work that would help us achieve the project goal. We

document the assumptions, constraints, user expectations, business requirements, technical requirements, project deliverables, project objectives and everything that defines the final product requirements. This is the foundation for a successful project completion.

2) Quality Planning:

The relevant quality standards are determined for the project. This is an important aspect of 

Project Planning. Based on the inputs captured in the previous steps such as the Project Scope,

Requirements, deliverables, etc. various factors influencing the quality of the final product are

determined. The processes required to deliver the Product as promised and as per the standards are defined.

3) Project Activity Definition and Activity Sequencing:

In this step we define all the specific activities that must be performed to deliver the product by producing the various product deliverables. The Project Activity sequencing identifies the

interdependence of all the activities defined.

4) Time, Effort and Resource Estimation:

Once the Scope, Activities and Activity interdependence are clearly defined and documented, the next crucial step is to determine the effort required to complete each of the activities. See the

article on “Software Cost Estimation” for more details. The Effort can be calculated using one of 

the many techniques available such as Function Points, Lines of Code, Complexity of Code,

Benchmarks, etc. This step clearly estimates and documents the time, effort and resources required for each activity.
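As one example of such an estimation technique (not named in the source), the basic COCOMO model predicts effort from code size alone. The default coefficients below are the classic published values for small, in-house ("organic") projects, and the KLOC figure is invented:

```python
def basic_cocomo_effort(kloc: float, a: float = 2.4, b: float = 1.05) -> float:
    """Basic COCOMO: effort in person-months = a * KLOC ** b.
    Defaults are the classic coefficients for 'organic' projects."""
    return a * kloc ** b

# A 10 KLOC project comes out at roughly 27 person-months.
print(round(basic_cocomo_effort(10), 1))
```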

5) Risk Factors Identification:

“Expecting the unexpected and facing it”

It is important to identify and document the risk factors associated with the project based on the

assumptions, constraints, user expectations, specific circumstances, etc.

6) Schedule Development:

The time schedule for the project can be arrived at based on the activities, interdependence and effort required for each of them. The schedule may influence the cost estimates, the cost benefit

analysis and so on.

Project Scheduling is one of the most important and also most difficult tasks of Project Planning. In very large projects several teams may work on the project in parallel, yet their work may be interdependent.

Various factors may impact successful scheduling of a project:
- Teams not directly under our control
- Resources without enough experience


Popular tools, such as Gantt charts, can be used for creating and reporting the schedules.

7) Cost Estimation and Budgeting:

Based on the information collected in all the previous steps it is possible to estimate the cost involved in executing and implementing the project. See the article on "Software Cost Estimation" for more details. A Cost Benefit Analysis can be arrived at for the project. Based on the cost estimates, budget allocation is done for the project.

8) Organizational and Resource Planning

Based on the activities identified, the schedule and the budget allocation, resource types and resources are identified. One of the primary goals of Resource Planning is to ensure that the project is run efficiently. This can only be achieved by keeping all the project resources as fully utilized as possible. Success depends on the accuracy in predicting the resource demands that will be placed on the project. Resource planning is an iterative process, necessary to optimize the use of resources throughout the project life cycle, thus making the project execution more efficient.


There are various types of resources – Equipment, Personnel, Facilities, Money, etc.

9) Risk Management Planning:

Risk Management is a process of identifying, analyzing and responding to a risk. Based on the

Risk Factors identified, a Risk Resolution Plan is created. The plan analyzes each of the risk factors and their impact on the project. The possible responses for each of them can be planned. Throughout the lifetime of the project these risk factors are monitored and acted upon as

necessary.

10) Project Plan Development and Execution:

Project Plan Development uses the inputs gathered from all the other planning processes such as

Scope definition, Activity identification, Activity sequencing, Quality Management Planning,

etc. A detailed Work Breakdown Structure comprising all the activities identified is used. The tasks are scheduled based on the inputs captured in the steps previously described. The Project

Plan documents all the assumptions, activities, schedule, timelines and drives the project.

Each of the project tasks and activities is periodically monitored. The team and the stakeholders are informed of the progress. This serves as an excellent communication mechanism. Any delays are analyzed and the project plan may be adjusted accordingly.

11) Performance Reporting:

As described above the progress of each of the tasks/activities described in the Project plan is

monitored. The progress is compared with the schedule and timelines documented in the Project Plan. Various techniques are used to measure and report project performance, such as EVM (Earned Value Management). A wide variety of tools can be used to report the performance of the project, such as PERT charts, Gantt charts, logical bar charts, histograms, pie charts, etc.
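To illustrate EVM, its core quantities are planned value (PV), earned value (EV) and actual cost (AC), from which cost and schedule performance indices are derived; the figures below are invented:

```python
def evm_indices(pv: float, ev: float, ac: float) -> dict:
    """Earned Value Management indices: CPI = EV / AC and SPI = EV / PV.
    Values below 1.0 mean the project is over budget / behind schedule."""
    return {
        "cpi": ev / ac,            # cost performance index
        "spi": ev / pv,            # schedule performance index
        "cost_variance": ev - ac,
        "schedule_variance": ev - pv,
    }

# Example: 100k of work planned by now, 90k actually earned, 120k spent.
indices = evm_indices(pv=100_000, ev=90_000, ac=120_000)
print(indices["cpi"])  # 0.75 -> over budget
print(indices["spi"])  # 0.9  -> slightly behind schedule
```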

12) Planning Change Management:

Analysis of project performance can necessitate that certain aspects of the project be changed. Requests for Changes need to be analyzed carefully and their impact on the project should be

studied. Considering all these aspects the Project Plan may be modified to accommodate this

request for Change.

Change Management is also necessary to accommodate the implementation of the project currently under development in the production environment. When the new product is

implemented in the production environment it should not negatively impact the environment or 

the performance of other applications sharing the same hosting environment.

13) Project Rollout Planning:

In Enterprise environments, the success of the Project depends a great deal on the success of its

rollout and implementations. Whenever a Project is rolled out it may affect the technical systems,

business systems and sometimes even the way business is run. For an application to be successfully implemented, not only must the technical environment be ready, but the users must also accept it and use it effectively. For this to happen the users may need to be trained on the

new system. All this requires planning.


Summary:

In this article we explored the various aspects of Project Planning and Scheduling.

Effective Software Testing

In this tutorial you will learn about Effective Software Testing: how we measure the 'effectiveness' of Software Testing, steps to effective Software Testing, Coverage, and Test Planning and Process.

A 1994 study in the US revealed that only about 9% of software projects were successful.

A large number of projects upon completion do not have all the promised features or they do not

meet all the requirements that were defined when the project was kicked off.

It is an understatement to say that an increasing number of businesses depend on software for their day-to-day operations. Billions of dollars change hands every day with the help of commercial software.

Many lives depend on the reliability of software: for example, software running critical medical systems, controlling power plants, flying airplanes and so on.

Whether you are part of a team that is building a bookkeeping application or software that runs a power plant, you cannot afford to have less than reliable software.

Unreliable software can severely hurt businesses and endanger lives depending on the criticality

of the application. Even the simplest application, if poorly written, can degrade the performance of your environment, such as the servers and the network, causing an unwanted mess.

To ensure software application reliability and project success Software Testing plays a very

crucial role.

Everything can and should be tested – 

• Test if all defined requirements are met
• Test the performance of the application
• Test each component
• Test the components integrated with each other
• Test the application end to end
• Test the application in various environments
• Test all the application paths
• Test all the scenarios and then test some more

What is Effective Software Testing? 


How do we measure ‘Effectiveness’ of Software Testing? 

The effectiveness of Testing can be measured if the goal and purpose of the testing effort is

clearly defined. Some of the typical Testing goals are:

• Testing in each phase of the Development cycle to ensure that the "bugs" (defects) are eliminated at the earliest
• Testing to ensure no "bugs" creep through in the final product
• Testing to ensure the reliability of the software
• Above all, testing to ensure that the user expectations are met

The effectiveness of testing can be measured by the degree of success in achieving the above

goals.

Steps to Effective Software Testing:

Several factors influence the effectiveness of the Software Testing effort, which ultimately determines the success of the Project.

A) Coverage:

The testing process and the test cases should cover 

• All the scenarios that can occur when using the software application
• Each business requirement that was defined for the project
• Specific levels of testing should cover every line of code written for the application

There are various levels of testing which focus on different aspects of the software application.

The often-quoted V model best explains this:


The various levels of testing in the V model are:

• Unit Testing
• Integration Testing
• System Testing
• User Acceptance Testing

The goal of each testing level is slightly different thereby ensuring the overall project reliability.

Each Level of testing should provide adequate test coverage.

Unit Testing should ensure each and every line of code is tested.
Integration Testing should ensure the components can be integrated and all the interfaces of each component are working correctly.
System Testing should cover all the "paths"/scenarios possible when using the system.

The system testing is done in an environment that is similar to the production environment i.e.

the environment where the product will be finally deployed.

There are various types of System Testing possible which test the various aspects of the software

application.

B) Test Planning and Process:


To ensure effective testing, proper Test Planning is important.

An effective Testing Process comprises the following steps:

• Test Strategy and Planning
• Review the Test Strategy to ensure it is aligned with the Project Goals
• Design/Write Test Cases
• Review Test Cases to ensure proper Test Coverage
• Execute Test Cases
• Capture Test Results
• Track Defects
• Capture Relevant Metrics
• Analyze

Having followed the above steps for the various levels of testing, the product is rolled out.

It is not uncommon to see various “bugs”/Defects even after the product is released to

 production. An effective Testing Strategy and Process helps to minimize or eliminate these

defects. The extent to which it eliminates these post-production defects (design defects, coding defects, etc.) is a good measure of the effectiveness of the Testing Strategy and Process.

As the saying goes - 'the proof of the pudding is in the eating'

Summary:

The success of the project and the reliability of the software application depend a lot on the

effectiveness of the testing effort. This article discusses “What is effective Software

Testing?”

Software Quality Management

This article gives an overview of Software Quality Management and various processes that are a

part of Software Quality Management. Software Quality is a highly overused term and it may mean different things to different people. You will learn What is Software Quality

Management?, What does it take to Manage Software Quality?, Quality Planning, Quality

Assurance, Quality Control, Importance of Documentation and What is Defect Tracking?

Software Quality has been defined as the “Totality of characteristics of an entity that bear on its ability to satisfy stated and implied needs.”

This means that the Software product delivered should be as per the requirements defined. We

now examine a few more terms used in association with Software Quality.


Quality Planning:

In the Planning Process we determine the standards that are relevant for the Software Product,

the Organization and the means to achieve them.

Quality Assurance:

Once the standards are defined, we start building the product. It is very important to have processes that evaluate the project performance and aim to assure that the Quality standards are

 being followed and the final product will be in compliance.

Quality Control:

Once the software components are built the results are monitored to determine if they comply

with the standards. The data collected helps in measuring performance trends and, as needed, in identifying defective pieces of code.

What is Software Quality Management?

Software Quality Management, simply stated, comprises the processes that ensure that the Software Project reaches its goals; in other words, that the Software Project meets the client's expectations.

The key processes of Software Quality Management fall into the following three categories:

1) Quality Planning

2) Quality Assurance
3) Quality Control

What does it take to Manage Software Quality?

Software Quality Management comprises the Quality Planning, Quality Assurance and

Quality Control Processes. We shall now take a closer look at each of them.

1) Quality Planning

Quality Planning is the most important step in Software Quality Management. Proper planning

ensures that the remaining Quality processes make sense and achieve the desired results. The

starting point for the Planning process is the standards followed by the Organization. This is

expressed in the Quality Policy and Documentation defining the Organization-wide standards. Sometimes additional industry standards relevant to the Software Project may be referred to as needed. Using these as inputs, the standards for the specific project are decided. The scope of the effort is also clearly defined. The inputs for the Planning are summarized as follows:

a. Company’s Quality Policy
b. Organization Standards

c. Relevant Industry Standards


d. Regulations

e. Scope of Work 

f. Project Requirements

Using these as Inputs the Quality Planning process creates a plan to ensure that standards agreed

upon are met. Hence the outputs of the Quality Planning process are:

a. Standards defined for the Project

 b. Quality Plan

To create these outputs, namely the Quality Plan, various tools and techniques are used. These tools and techniques are huge topics and Quality Experts dedicate years of research to these

topics. We would briefly introduce these tools and techniques in this article.

a. Benchmarking: The proposed product standards can be decided using the existing

 performance benchmarks of similar products that already exist in the market.

b. Design of Experiments: Using statistics we determine what factors influence the Quality or 

features of the end product.

c. Cost of Quality: This includes all the costs needed to achieve the required Quality levels. It

includes prevention costs, appraisal costs and failure costs.

d. Other tools: There are various other tools used in the Planning process such as Cause and

Effect Diagrams, System Flow Charts, Cost Benefit Analysis, etc.

All these help us to create a Quality Management Plan for the project.
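The Cost of Quality mentioned above is conventionally split into conformance costs (prevention and appraisal) and non-conformance costs (internal and external failure). A small sketch with invented figures:

```python
def cost_of_quality(prevention: float, appraisal: float,
                    internal_failure: float, external_failure: float) -> dict:
    """Total Cost of Quality = conformance costs (prevention + appraisal)
    plus non-conformance costs (internal + external failure)."""
    conformance = prevention + appraisal
    non_conformance = internal_failure + external_failure
    return {
        "conformance": conformance,
        "non_conformance": non_conformance,
        "total": conformance + non_conformance,
    }

# Example: reviews/training, testing, pre-release rework, field fixes.
coq = cost_of_quality(30_000, 50_000, 15_000, 5_000)
print(coq["total"])        # 100000
print(coq["conformance"])  # 80000
```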

2) Quality Assurance

The Input to the Quality Assurance Processes is the Quality Plan created during Planning.

Quality Audits and various other techniques are used to evaluate the performance of the project.

This helps us to ensure that the Project is following the Quality Management Plan.

The tools and techniques used in the Planning Process, such as Design of Experiments and Cause and Effect Diagrams, may also be used here as required.

3) Quality Control

Following are the inputs to the Quality Control Process:

- Quality Management Plan.

- Quality Standards defined for the Project
- Actual Observations and Measurements of the Work done or in Progress


The Quality Control Processes use various tools to study the Work done. If the Work done is

found unsatisfactory it may be sent back to the development team for fixes. Changes to the

Development process may be done if necessary.

If the work done meets the standards defined then the work done is accepted and released to the

clients.

Importance of Documentation:

In all the Quality Management Processes special emphasis is put on documentation. Many

software shops fail to document the project at various levels. Consider a scenario where the

Requirements of the Software Project are not sufficiently documented. In this case it is quite possible that the client has a set of expectations and the tester may not know about them. Hence

the testing team would not be able to test the software developed against these expectations or requirements. This may lead to poor "Software Quality" as the product does not meet the expectations.

Similarly consider a scenario where the development team does not document the installation

instructions. If a different person or a team is responsible for future installations they may end up

making mistakes during installation, thereby failing to deliver as promised.

Once again, consider a scenario where a tester fails to document the test results after executing the test cases. This may lead to confusion later. If there were an error, we would not be sure at

what stage the error was introduced in the software at a component level or when integrating it

with another component, or due to the environment on a particular server, etc. Hence documentation

is the key for future analysis and all Quality Management efforts.

Steps:

In a typical Software Development Life Cycle the following steps are necessary for Quality Management:

1) Document the Requirements

2) Define and Document Quality Standards

3) Define and Document the Scope of Work 

4) Document the Software Created and dependencies
5) Define and Document the Quality Management Plan

6) Define and Document the Test Strategy
7) Create and Document the Test Cases

8) Execute Test Cases and (log) Document the Results

9) Fix Defects and document the fixes
10) Quality Assurance audits the Documents and Test Logs


Various Software Tools have been developed for Quality Management. These Tools can help us track Requirements and map Test Cases to the Requirements. They also help in Defect

Tracking.

What is Defect Tracking?

This is very important to ensure the Quality of the end Product. As test cases are executed at

various levels defects if any are found in the Software being tested. The Defects are logged and

data is collected. The software development team fixes these defects and documents how they were fixed. The testing team verifies whether the defect was really fixed and closes the defect. This

information is very useful. Proper tracking ensures that all Defects were fixed. The information

also helps us in future projects.
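The lifecycle just described (log, fix, verify, close) can be sketched as a tiny state machine; the state names and transitions here are illustrative, not taken from any particular defect-tracking tool:

```python
# Allowed status transitions in a minimal defect lifecycle (illustrative).
TRANSITIONS = {
    "new": {"open"},
    "open": {"fixed"},
    "fixed": {"closed", "reopened"},  # verification passes, or the bug persists
    "reopened": {"fixed"},
    "closed": set(),
}

class Defect:
    def __init__(self, summary: str, severity: str):
        self.summary = summary
        self.severity = severity
        self.status = "new"
        self.history = ["new"]

    def move_to(self, status: str) -> None:
        if status not in TRANSITIONS[self.status]:
            raise ValueError(f"illegal transition {self.status} -> {status}")
        self.status = status
        self.history.append(status)

d = Defect("Crash on empty input file", "major")
d.move_to("open")    # developer picks it up
d.move_to("fixed")   # fix documented
d.move_to("closed")  # tester verifies the fix
print(d.history)     # ['new', 'open', 'fixed', 'closed']
```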

The Capability Maturity Model defines various levels of organizations based on the processes that they follow.

Level 0

The following is true for “Level 0” Organizations -

There are no processes, no tracking mechanisms and no plans. It is left to the developer or any person

responsible for Quality to ensure that the product meets expectations.

Level 1 – Performed Informally

The following is true for “Level 1” Organizations -

In such organizations, typically the teams work extra hard to achieve the results. There are no tracking mechanisms or standards defined. The work is done but is informal and not well

documented.

Level 2 – Planned and Tracked

The following is true for “Level 2” Organizations -

There are processes within a team and the team can repeat them or follow the processes for all projects that it handles.

However the process is not standardized throughout the Organization. All the teams within the

organization do not follow the same standard.

Level 3 – Well-Defined

In “Level 3” Organizations the processes are well defined and followed throughout the

organization.


Level 4 – Quantitatively Controlled

In “Level 4” Organizations -

- The processes are well defined and followed throughout the organization
- The goals are defined and the actual output is measured
- Metrics are collected and future performance can be predicted

Level 5 – Continuously Improving

“Level 5” Organizations have well-defined processes, which are measured, and the organization has a good understanding of how IT projects affect the Organizational goals.

The Organization is able to continuously improve its processes based on this understanding.

Summary:

In this article we studied the Software Quality Management process.