Levels and Types of Testing

Contents

What is Verification and Validation?

Levels of Testing: V - Model

Unit Testing

Integration Testing

System Testing

Performance Testing

Load and Stress Testing

Compatibility Testing

Concurrency Testing

Security Testing

Disaster Recovery Testing

User Acceptance Testing

Other types of Testing

What is Verification and Validation?

According to the IEEE Standard Glossary of Software Engineering Terminology, V&V is defined as the process of determining whether:

Requirements for a system or component are complete and correct.

Products of each development phase fulfill the requirements or conditions imposed by the previous phase.

Final systems or components comply with specified requirements.

Validation

Validation is the process of evaluating software at the end of the software development process to ensure compliance with software requirements. The techniques for validation are testing, inspection and reviewing.

Verification

Verification is the process of determining whether the products of a given phase of the software development cycle meet the implementation steps and can be traced to the objectives established during the previous phase. The techniques for verification are testing, inspection and reviewing.

What is Verification and Validation?.....continued

Validation: Are we building the right product?

Verification: Are we building the product right?

For example:

While testing a program for adding two numbers, the tester needs to validate whether the requirement was to create an addition program or a multiplication program.

The tester will also need to verify that the program adds any two numbers properly and gives the expected result.
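The addition example can be made concrete in code. A minimal sketch (the add function and the requirement strings are illustrative, not from the original slides):

```python
# Hypothetical sketch: verification vs. validation for an "add two numbers" program.

def add(a, b):
    """The unit under test: the required behaviour is addition."""
    return a + b

# Verification - are we building the product right?
# Check the implementation against its specification: it must add correctly.
assert add(2, 3) == 5
assert add(-1, 1) == 0

# Validation - are we building the right product?
# Compare what was built against the original user requirement.
user_requirement = "add two numbers"      # assumed requirement text
implemented_feature = "add two numbers"   # what the team actually built
assert implemented_feature == user_requirement, "Built multiplication instead of addition?"
```

Verification fails if `add` miscomputes; validation fails if the team built the wrong feature entirely, even one that passes its own tests.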

Levels of Testing: V - Model

[V-Model diagram] Each development phase on the left arm of the V pairs with the test level on the right arm that verifies it:

Requirement Analysis ↔ User Acceptance Testing

Functional Specifications ↔ System Testing

High Level Design ↔ Integration Testing

Detailed Design / Prog. Specs. ↔ Unit Testing

Coding sits at the point of the V.

Testing Stages – Unit Testing

[V-Model diagram, Unit Testing stage highlighted]

Unit Testing – What to test

Check the unit against standards

Validate fields against their data types, formats and the data they hold

Validate business rules

Validate business rules with invalid input

Validate business rules against boundary values

Test for Code Coverage

Test for resource utilization (CPU, memory, etc.)

Test In-bound Interfaces

Test Out-bound Interfaces

Usability Aspect of Software

Functionality Aspect of Software

Performance Aspect of Software

System Integration Aspect of Software

Unit Testing

Activities

Identification of Testable units/modules

Focus on code and logic

Design logic-based Test cases and prepare Test data

Run tests and verify output through path traversing

For example:

Testing of field length by typing more than the specified number of characters

Giving only numeric values in an alphabetic field

Testing the login module with invalid login credentials

Unit testing focuses verification effort on the smallest unit of software design: the module.
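The checks above (valid input, invalid input, boundary values) can be sketched with Python's unittest module. The username field and its 10-character limit are assumptions for illustration, not from the original slides:

```python
import unittest

MAX_USERNAME_LEN = 10  # assumed field length from a hypothetical spec

def validate_username(name):
    """Business rule under test: alphabetic only, 1-10 characters."""
    if not isinstance(name, str) or not name.isalpha():
        return False
    return 1 <= len(name) <= MAX_USERNAME_LEN

class UsernameUnitTest(unittest.TestCase):
    def test_valid_input(self):
        self.assertTrue(validate_username("alice"))

    def test_invalid_input(self):
        # Numeric values in an alphabetic field must be rejected.
        self.assertFalse(validate_username("1234"))

    def test_boundary_values(self):
        # Exactly at the length boundary, and one past it.
        self.assertTrue(validate_username("a" * MAX_USERNAME_LEN))
        self.assertFalse(validate_username("a" * (MAX_USERNAME_LEN + 1)))

# Run the suite programmatically and check it passed.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(UsernameUnitTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
assert result.wasSuccessful()
```

Each test case maps to one item from the "what to test" list: valid business rule, invalid input, and boundary value.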

Unit Testing….continued

The Module interface is tested to ensure that information flows into and out of the program.

Tests for data flow across interface are required before any other tests are initiated

Some of the key things that can be tested at Unit level are:

Is the number of input parameters equal to the number of arguments?

Do parameter and argument attributes match?

Do parameter and argument unit systems match?

Are any references to parameters not associated with the current point of entry?

Does the buffer size match the record size?

(Glenford Myers' checklist for unit testing)

Integration Testing


Testing Stages – Integration Testing

[V-Model diagram, Integration Testing stage highlighted]

Integration Testing

Combining and testing multiple components together

Integration of modules, programs and functions

Tests Internal Program interfaces

Tests External interfaces for modules

For example:

• Data flow between two modules

• Control flow between two modules

Integration testing is a systematic technique for constructing the program structure while conducting tests to uncover errors associated with interfacing

The objective is to take unit tested modules and build a program structure that has been dictated by design.

Integration Testing – Sample Banking Application with different units

[Diagram: a Banking System composed of three units - CRM (Java-based), HR (based on C, C++) and Financial (VB-based). The Financial unit contains Online Banking, Loans and Account Opening modules. The diagram distinguishes external interfaces (to external systems), inter-module interfaces and intra-module interfaces.]

Integration Testing: Methods for Integrating Units

Big-bang Integration (non-incremental) - Big-bang integration testing can be likened to the rather naive 'run it and see' approach to testing seen among inexperienced programmers. The program is integrated without any formal integration testing, and then run to ensure that it all 'fits together'.

Random Incremental Integration

Top-Down Integration - Top-down integration testing starts at the most central module of the program (often the 'main program') and works towards the outermost branches of the visibility tree, gradually adding in modules as it proceeds. Initially, all modules and the resources that they provide are simulated by means of stubs; the initial test harness must therefore provide all of these stubs.

Bottom-up Integration - Bottom-up integration testing works from the modules at the outermost branches of the module visibility tree towards the module making up the 'main program'. For each module a test harness is developed which puts that module through its paces. Once the outermost branch modules have been tested in this way, the modules which use them are added in one by one.

Hybrid - Both bottom-up and top-down methods of integration testing have problems when the testing of concurrent programs is considered. To overcome some of these problems a hybrid approach can be taken, using each of these approaches where appropriate.
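The role of stubs in top-down integration can be sketched as follows. The monthly-statement module and the interest calculator are hypothetical examples, not from the slides:

```python
# Top-down integration sketch: the top-level module is tested first, while
# its not-yet-integrated lower-level dependency is simulated by a stub.

def interest_calculator_stub(balance):
    # Stub: returns a canned value in place of the real (unfinished) module.
    return 10.0

def monthly_statement(balance, interest_fn):
    """Top-level module under test; depends on a lower-level interest module."""
    return balance + interest_fn(balance)

# Drive the top-level module through the stub.
assert monthly_statement(1000.0, interest_calculator_stub) == 1010.0

# Later, the stub is swapped for the real module and the same test is re-run.
def real_interest_calculator(balance):
    return balance * 0.01  # 1% monthly interest (assumed)

assert monthly_statement(1000.0, real_interest_calculator) == 1010.0
```

The same integration test runs unchanged before and after the stub is replaced, which is what lets top-down integration proceed before all modules exist.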

Top-down vs. Bottom-up Integration

Bottom-up Integration - Advantage: key interface defects are trapped earlier. Disadvantage: core functionality is tested late in the cycle.

Top-down Integration - Advantage: core functionality is tested early. Disadvantage: key interface defects are trapped late in the cycle.

System Testing

Testing Stages – System Testing

[V-Model diagram, System Testing stage highlighted]

System Testing…continued

[Diagram: the System / Application Under Test, surrounded by the aspects system testing exercises: Functionality, Usability, Security, Internationalization, Disaster & Recovery, Compatibility, Installability, Performance]

Test Environments / Test Beds

SIT, QA, UAT, XAT, PROD


System Testing - what is it ?

Specifications-based Testing

Typically independent team testing

Simulated environment testing

Live/Simulated user data

Tests the whole system

Functional and non-functional requirements tested

Business transaction-driven testing

Compatibility errors uncovered

Performance limitations uncovered

System Testing - Different types

Functional testing (Sanity / Regression )

Performance and Scalability testing

Load/Stress testing

Usability testing

Installability testing

Disaster and Recovery testing

Security testing

Compatibility testing

Concurrency testing

System Testing…..continued

Types of testing to be covered under System Testing, with typical effort distribution:

Functional and Security testing - 40%

Usability, Housekeeping and Disaster Recovery testing - 20%

Re-installation and Sanity check - 10%

Performance testing - 30%

If any of the above is carried out under specialized testing, the respective effort is redistributed across the other types of testing.

Regression Testing

Functional Testing-Regression

Re-execution of one or more tests in a subsequent build of the application/product to ensure that:

Revisiting and testing all prior bug-fixes in response to a new fix/enhancement

Re-testing all programs that might be affected by the fix/enhancement

Hidden Bugs are uncovered

The baseline for Regression Testing grows with every build

Regression Testing

Should Cover -

Business Process depending on criticality

User friendliness

Cross functional dependencies

Activities -

System appreciation

Preparation of Test case repository

Automate test cases

Execute Regression test suite
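The activities above can be sketched as a test-case repository whose baseline grows with every fix. The discount function and its past bugs are hypothetical:

```python
# Regression sketch: every prior bug-fix contributes a pinned test that is
# re-executed on each subsequent build, so fixes cannot silently regress.

def discount(price, percent):
    # Build under test; earlier builds mishandled the edge percentages below.
    return price - price * percent / 100.0

# Test-case repository: the baseline grows with every build / fix.
regression_suite = [
    ("original requirement", lambda: discount(100, 10) == 90.0),
    ("bug-fix #1: zero percent", lambda: discount(100, 0) == 100.0),
    ("bug-fix #2: full discount", lambda: discount(100, 100) == 0.0),
]

for name, check in regression_suite:
    assert check(), f"regression in: {name}"
print(f"{len(regression_suite)} regression tests passed")  # prints: 3 regression tests passed
```

Each new fix appends a test to the suite rather than replacing one, which is why the regression baseline grows with every build.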

Without Regression Testing – Hidden Bugs

[Diagram: Test Requirements 1-3 run against software builds A, B and C. A test that fails in one build is fixed, but because only the fixed test is re-run, the new problem introduced by the fix goes unidentified, leaving hidden bugs in later builds.]

With Regression Testing – No Hidden Bugs

[Diagram: the same builds A, B and C, but after each fix the full regression suite is re-executed, so the new problem introduced by the fix is identified and corrected instead of remaining hidden.]

Performance Testing

Performance Testing

Testing conducted to evaluate the compliance of a system or component with specified performance requirements.

Often this is performed with a specialized performance-testing tool that simulates a large number of users, e.g. LoadRunner.

Number of concurrent users accessing at any point in given time

System’s performance under high volume of data

Stress testing for systems, which are being scaled up to larger environments or implemented for the first time

Operationally intensive transactions (the most frequently used transactions)

For example:

Application’s Response time

No. of passed transactions

No. of failed transactions
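These measures can be collected even without a dedicated tool such as LoadRunner. A minimal sketch, with a computational stand-in for a real transaction:

```python
import time

def transaction():
    """Hypothetical operation whose response time we measure."""
    total = sum(range(10_000))
    return total == 49_995_000  # success criterion for this stand-in

passed = failed = 0
response_times = []
for _ in range(100):
    start = time.perf_counter()
    ok = transaction()
    response_times.append(time.perf_counter() - start)
    if ok:
        passed += 1
    else:
        failed += 1

avg_ms = 1000 * sum(response_times) / len(response_times)
print(f"passed={passed} failed={failed} avg response={avg_ms:.3f} ms")
```

The three printed figures correspond directly to the measures listed above: response time, passed transactions, failed transactions.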

Load and Stress Testing

Load and Stress Testing

One of the most common, but unfortunate, misuses of terminology is treating "load testing" and "stress testing" as synonymous. The consequence of this semantic abuse is usually that the system is neither properly "load tested" nor subjected to a meaningful stress test.

Stress testing is subjecting a system to an unreasonable load while denying it the resources (e.g., RAM, disc, MIPS, interrupts, etc.) needed to process that load. The idea is to stress a system to the breaking point in order to find bugs that will make that break potentially harmful. The system is not expected to process the overload without adequate resources, but to behave (e.g., fail) in a decent manner (e.g., not corrupting or losing data). Bugs and failure modes discovered under stress testing may or may not be repaired depending on the application, the failure mode, consequences, etc. The load (incoming transaction stream) in stress testing is often deliberately distorted so as to force the system into resource depletion.

Load testing: Testing an application under heavy loads, such as testing a web site under a range of loads to determine at what point the system's response time degrades or fails. E.g., test whether the system fails when a huge number of transactions is processed, as in a typical banking process.
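A sketch of the load-testing idea: drive the system with an increasing number of concurrent simulated users and observe how response time changes as load grows. The worker function here is a stand-in for a real transaction:

```python
import threading
import time

def simulated_user(results, lock):
    """One simulated user issuing a transaction against the system."""
    start = time.perf_counter()
    sum(range(50_000))  # stand-in for the real transaction under load
    elapsed = time.perf_counter() - start
    with lock:
        results.append(elapsed)

# Ramp the load: 1, then 10, then 50 concurrent users.
for user_count in (1, 10, 50):
    results, lock = [], threading.Lock()
    threads = [threading.Thread(target=simulated_user, args=(results, lock))
               for _ in range(user_count)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    avg_ms = 1000 * sum(results) / len(results)
    print(f"{user_count:3d} concurrent users: avg response {avg_ms:.2f} ms")
```

A stress test, by contrast, would deliberately push past the highest load level while starving the system of resources, looking for harmful failure modes rather than a degradation curve.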

Compatibility Testing

Compatibility Testing

The process of determining the ability of two or more systems to exchange information. In a situation where the developed software replaces an already working program, an investigation should be conducted to assess possible compatibility problems between the new software and other programs or systems.

Testing whether software is compatible with other elements of a system with which it should operate.

E.g. Browsers, Operating System, or Hardware.

Concurrency Testing

Concurrency Testing

Multi-user testing geared towards determining the effects of multiple users accessing the same application code, module or database records.

Identifies and measures the level of locking, deadlocking and use of single-threaded code.

E.g. two users accessing the same application at the same time.
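The locking behaviour can be demonstrated with two threads updating the same account record; without the lock, concurrent increments can be lost (the account balance here is a hypothetical example):

```python
import threading

balance = 0
lock = threading.Lock()

def deposit(times):
    """Two users depositing into the same account record concurrently."""
    global balance
    for _ in range(times):
        with lock:  # remove the lock and updates may be lost (a race condition)
            balance += 1

t1 = threading.Thread(target=deposit, args=(100_000,))
t2 = threading.Thread(target=deposit, args=(100_000,))
t1.start(); t2.start()
t1.join(); t2.join()

assert balance == 200_000, f"lost updates: balance={balance}"
print("no lost updates:", balance)
```

Concurrency testing probes exactly this: whether shared code and records behave correctly under simultaneous access, and whether locking introduces deadlocks or serialization bottlenecks.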

Security Testing

Security Testing

Testing which confirms that the program can restrict access to unauthorized personnel and that the authorized personnel can access the functions available to their security level.

E.g. Employees who do not have access to view the folder are restricted from viewing it.
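A sketch of such an access check, with assumed roles and a hypothetical folder-permission table:

```python
# Hypothetical role-based access check for the restricted-folder example.
FOLDER_PERMISSIONS = {
    "payroll": {"hr_manager", "admin"},  # roles allowed to view this folder
}

def can_view(user_role, folder):
    """Return True only if the role is authorized for the folder."""
    return user_role in FOLDER_PERMISSIONS.get(folder, set())

# Security tests: authorized roles get access, all others are restricted.
assert can_view("hr_manager", "payroll")
assert can_view("admin", "payroll")
assert not can_view("employee", "payroll")
assert not can_view("employee", "unknown_folder")  # default-deny for unknown folders
```

Note the default-deny behaviour: a folder with no permission entry rejects everyone, which is the safer failure mode for a security test to confirm.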

Disaster and Recovery Testing

Disaster and Recovery Testing

Testing which confirms that the program recovers from expected or unexpected events without loss of data or functionality.

Events can include shortage of disk space, unexpected loss of communication, or power-out conditions.

User Acceptance Testing

[V-Model diagram, User Acceptance Testing stage highlighted]

User Acceptance Testing

User Acceptance Testing is carried out by the end user to ensure that functional and non-functional requirements specified during requirement analysis are completely met by the system/application.

User acceptance tests are based on functional and non-functional acceptance criteria mutually agreed upon at the beginning of the project.

End User will validate the System / application against industry standards

Demonstrates that the system meets mutually agreed Acceptance criteria

Critical Requirements

Minimum Performance level

Maximum Defect Detection Rate

Typically, a sub-set of System testing

User Acceptance Testing…..continued

Approach

Approach for User Acceptance Testing is generally defined by the customer

Focus is more on end-to-end functionality

User acceptance testing is carried out in production like environment

When testing is carried out by end users within the company, it is known as an alpha test

When testing is carried out by the customer's team, it is known as a beta test

User Acceptance Testing…..continued

Procedures for conducting the Acceptance Test:

Define the acceptance criteria: functionality requirements, performance requirements, interface quality requirements, overall software quality requirements

Develop an acceptance plan: project description, user responsibilities, acceptance description

Execute the Acceptance Test Plan

Alpha / Beta Testing

Forms of Acceptance testing

Testing in the production environment

Alpha testing is performed by end users within a company but outside development group

Beta testing is performed by a sub-set of actual customers outside the company

Installation Testing

Installation Testing

Basic installation

Installation of various configurations

Installation on various platforms

Regression testing of basic functionality

Other Types Of Testing

Ad Hoc Testing

A testing phase where the test engineer tries to 'break' the system by randomly exercising its functionality. This also includes negative testing.

Volume Testing

Testing which confirms that any value that may become large over time (such as accumulated counts, logs, and data files) can be accommodated by the program and will not cause the program to stop working or degrade its operation in any manner.

Other Types Of Testing…continued

Multi-user Testing

To ensure that the application continues to operate successfully and meets the performance requirements when operating with high levels of multiple access.

To assess the impact of the system on the overall performance of other systems running on the same hardware.

Includes multi-user concurrent access to the system, to prove locking, system processes and data integrity.

Portability Testing

To ensure that the results obtained after a change in machine environment are the same as those obtained under the old environment, e.g. transfer from a development to a production environment.

In client-server environments there may be many combinations of hardware and software across client workstations, network connections and servers. Portability testing ensures that the application works across all such combinations.

Other Types Of Testing ….Continued

Smoke Testing: A quick-and-dirty test that the major functions of a piece of software work without bothering with finer details. Originated in the hardware testing practice of turning on a new piece of hardware for the first time and considering it a success if it does not catch on fire.

Internationalization Testing (I18N) - Testing related to handling foreign text and data within the program. This would include sorting, importing and exporting text and data, correct handling of currency and date and time formats, string parsing, upper- and lower-case handling and so forth. [Clinton De Young, 2003]

Exploratory Software Testing is a powerful and fun approach to testing. In some situations, it can be orders of magnitude more productive than scripted testing. I haven’t found a tester yet who didn’t, at least unconsciously, perform exploratory testing at one time or another. Yet few of us study this approach, and it doesn’t get much respect in our field. It’s high time we stop the denial, and publicly recognize the exploratory approach for what it is: scientific thinking in real time. Friends, that’s a good thing.

Interoperability Testing: Measures the ability of software to communicate across the network with multiple machines from multiple vendors, each of whom may have interpreted a design specification critical to your success differently.

Negative Test: A test whose primary purpose is falsification; that is, tests designed to break the software. [B. Beizer, 1995]

Spike Testing: Testing performance or recovery behavior when the system under test (SUT) is stressed with a sudden, sharp increase in load; considered a type of load test. [Load Testing Terminology, Scott Stirling]

References

• Software Engineering: A Practitioner's Approach - Roger S. Pressman

• The Art of Software Testing - Glenford Myers

Q & A

Thank You
