
SOFTWARE TESTING – UNIT 3

Prepared by Dr. R. Kavitha Page 1

3.1 The need for levels of Testing

Execution-based software testing, especially for large systems, is usually carried out at different levels. At each level there are specific testing goals, which are indicated in Figure 3.1.

For example, at unit test a single component is tested. A principal goal is to detect functional and structural defects in the unit.

At the integration level several components are tested as a group, and the tester investigates component interactions.

At the system level the system as a whole is tested, and a principal goal is to evaluate attributes such as usability, reliability, and performance.

Figure 3.1: Levels of testing

For both object-oriented and procedural software systems, the testing process begins with the smallest units or components to identify functional and structural defects.

After the individual components have been tested, and any necessary repairs made, they are integrated to build subsystems and clusters.

System test begins when all of the components have been integrated successfully. It usually requires the bulk of testing resources. Laboratory equipment, special software, or special hardware may be necessary, especially for real-time, embedded, or distributed systems. At the system level the tester looks for defects, but the focus is on evaluating performance, usability, reliability, and other quality-related requirements.

If the system is being custom-made for an individual client, then the next step following system test is acceptance test.

This is a very important testing stage for the developers. During acceptance test the development organization must show that the software meets all of the client's requirements.

Very often final payments for system development depend on the quality of the software as observed during the acceptance test.


Software developed for the mass market often goes through a series of tests called alpha and beta tests.

Alpha tests bring potential users to the developer's site to use the software. Developers note any problems.

Beta tests send the software out to potential users who use it under real-world conditions and report defects to the developing organization.

Implementing all of these levels of testing requires a large investment in time and organizational resources.

Organizations with poor testing processes tend to skimp on resources, ignore test planning until code is close to completion, and omit one or more testing phases.

The approach used to design and develop a software system has an impact on how testers plan and design suitable tests. There are two major approaches to system development: bottom-up and top-down.

These approaches are supported by two major types of programming languages: procedure-oriented and object-oriented.

Levels of abstraction for the two types of systems are also somewhat different. In traditional procedural systems:

The lowest level of abstraction is described as a function or a procedure that performs some simple task.

The next higher level of abstraction is a group of procedures (or functions) that call one another and implement a major system requirement. These are called subsystems.

Combining subsystems finally produces the system as a whole, which is the highest level of abstraction.

In object-oriented systems, the lowest level is viewed by some researchers as the method or member function. The next highest level is the class, which encapsulates data and the methods that operate on the data. To move up one more level in an object-oriented system, use the concept of the cluster, which is a group of cooperating or related classes.

Finally, there is the system level, which is a combination of all the clusters and any auxiliary code needed to run the system.

The beneficial features of object-oriented development are encapsulation, inheritance, and polymorphism. These features were intended to simplify design and development and to encourage reuse.

However, testing of object-oriented systems is not straightforward, due to these same features.

For example, encapsulation can hide details from testers, and that can lead to uncovered code.


Inheritance also presents many testing challenges, among them the retesting of inherited methods when they are used by a subclass in a different context.

*****

3.2. Unit Test

The unit test is the lowest level of testing performed during software development, where individual units of software are tested in isolation from other parts of a program.

Since the software component being tested is relatively small in size and simple in function, it is easier to design, execute, record, and analyze test results.

If a defect is revealed by the tests, it is easier to locate and repair, since only the one unit is under consideration.

In a conventional structured programming language, such as C, the unit to be tested is traditionally the function or subroutine.

In object-oriented languages such as C++, the basic unit to be tested is the class.
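As a concrete illustration of testing a single unit in isolation, the sketch below exercises a small hypothetical function with Python's built-in unittest framework. The function (`classify_triangle`) and its test cases are illustrative, not part of the original notes:

```python
import unittest

# Hypothetical unit under test: a small, self-contained function.
def classify_triangle(a, b, c):
    """Return the triangle type for side lengths a, b, c."""
    if a <= 0 or b <= 0 or c <= 0 or a + b <= c or b + c <= a or a + c <= b:
        return "invalid"
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

# Unit test: the unit is exercised on its own, with no other program parts.
class TestClassifyTriangle(unittest.TestCase):
    def test_equilateral(self):
        self.assertEqual(classify_triangle(3, 3, 3), "equilateral")

    def test_isosceles(self):
        self.assertEqual(classify_triangle(3, 3, 5), "isosceles")

    def test_invalid_violates_triangle_inequality(self):
        self.assertEqual(classify_triangle(1, 2, 10), "invalid")

# Run the test case explicitly so the script can be executed directly.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestClassifyTriangle)
result = unittest.TextTestRunner().run(suite)
```

Because the unit is small, a failing assertion points directly at the defective unit, which is the advantage described above.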

When developing a strategy for unit testing, there are two basic organizational approaches that can be taken: top down and bottom up.

Organizational Approaches

Top Down Testing

Bottom Up Testing

1. Top down testing

Figure 3.2: Top down testing


In top down testing, the unit at the top of the hierarchy is tested first. All called units are replaced by stubs. As the testing progresses, the stubs are replaced by actual units.

Top down testing requires test stubs, but not test drivers. In the given example, unit D is being tested, and units A, B, and C have already been tested. All the units below D have been replaced by test stubs.

In top down testing, units are tested from top to bottom. The units above a unit are the calling units, and those below it are the called units. The units below the unit being tested are replaced by stubs. As the testing progresses, stubs are replaced by actual units.

Stubs: Stubs are dummy modules, known as "called programs", which are used in integration testing (top down approach) when subprograms are under construction.

Drivers: Drivers are also dummy modules, known as "calling programs", which are used in bottom up integration testing when main programs are under construction.

Real-life example: Suppose we want to test the integration between two modules 'A' and 'B', and we have developed only module 'A' while module 'B' is still in the development stage. In such a case we cannot do integration testing for module 'A'; but if we prepare a dummy module with features similar to 'B', then using that we can do the integration testing.

Our main aim here is to test module 'A', not module 'B', and this way we save time; otherwise we would have to wait until module 'B' is actually developed. Hence this dummy module 'B' is called a stub.

Now module 'B' cannot send or receive data from module 'A' directly or automatically, so in such a case we have to transfer data from one module to the other by some external means. This external feature is called a driver.
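The stub-and-driver idea above can be sketched in a few lines. Here module 'A' is a hypothetical `process_order` function that depends on an unfinished module 'B' (a tax service); the stub supplies a canned answer and the driver feeds inputs in:

```python
# Module 'A' (under test): depends on module 'B', which is still under development.
def process_order(order_total, tax_service):
    """Compute the final amount, using a tax lookup provided by module 'B'."""
    tax = tax_service.tax_for(order_total)   # call into module 'B'
    return round(order_total + tax, 2)

# Stub for module 'B': a dummy "called program" with the same interface.
class TaxServiceStub:
    def tax_for(self, amount):
        return amount * 0.10   # canned answer instead of real tax logic

# Driver: a dummy "calling program" that feeds data into the unit under test.
def driver():
    stub = TaxServiceStub()
    for total in (100.0, 250.0):
        print(total, "->", process_order(total, stub))

driver()
```

The stub stands in for the called module; the driver stands in for the calling module. All names here are illustrative.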

2. Bottom up Testing


Figure 3.3: Bottom up Testing

In bottom up testing the lowest level units are tested first. They are then used to test higher level units. The process is repeated until you reach the top of the hierarchy.

Bottom up testing requires test drivers but does not require test stubs.

In the example given in figure 3.3, unit 'D' is the unit under test; all the units below it have been tested, and it is called by test drivers instead of the units above it.

*****

3.3. Designing the unit test

A test design should consist of these stages: Test Strategy, Test Planning, Test Specification, and Test Procedure. These four stages apply to all levels of testing, including unit testing. Test strategy and test planning are mainly project management activities, and the test procedure is the actual implementation of the test.

3.3.1. Test Planning

Goal of unit testing: To ensure that each individual software unit is functioning according to its specification.

Good testing practice calls for unit tests that are planned. Planning includes designing tests to reveal defects such as functional description defects, algorithmic defects, data defects, and control logic and sequence defects.

Resources should be allocated, and test cases should be developed using both white and black box test design strategies. The unit should be tested by an independent tester


(someone other than the developer), and the test results and defects found should be recorded as a part of the unit history.

Each unit should also be reviewed by a team of reviewers, preferably before the unit test. To prepare for unit test the developer/tester must perform several tasks. These are:

(i) Plan the general approach to unit testing;
(ii) Design the test cases and test procedures (these will be attached to the test plan);
(iii) Define relationships between the tests;
(iv) Prepare the auxiliary code necessary for unit test.

Phase 1: Describe Unit Test Approach and Risks

In this phase the general approach to unit testing is outlined. The test planner:
(i) Identifies test risks;
(ii) Describes techniques to be used for designing the test cases for the units;
(iii) Describes techniques to be used for data validation and recording of test results;
(iv) Describes the requirements for test harnesses and other software that interfaces with the units to be tested, for example, any special objects needed for testing object-oriented units.

The planner estimates resources needed for unit test, such as hardware, software, and staff, and develops a tentative schedule under the constraints identified at that time.

Phase 2: Identify Unit Features to be tested

The planner determines which features of each unit will be tested, for example: functions, performance requirements, states, state transitions, control structures, messages, and data flow patterns.

Phase 3: Add Levels of Detail to the Plan

The planner adds new details to the approach, resource, and scheduling portions of the unit test plan. As an example, existing test cases that can be reused for this project can be identified in this phase.

Unit availability and integration scheduling information should be included in the revised version of the test plan. The planner must be sure to include a description of how test results will be recorded.

Test-related documents that will be required for this task, for example test logs and test incident reports, should be described, and references to standards for these documents should be provided. Any special tools required for the tests are also described.


3.3.2. Test Specification

Each unit test case should include four essential elements:

 A statement of the initial state of the unit, the starting point of the test case
 The inputs to the unit
 What the test case actually tests, in terms of the functionality of the unit and the analysis used in the design of the test case
 The expected outcome of the test case
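These four elements can be captured as a simple record so that every test case is specified the same way. A sketch using a Python dataclass; the field names and the stack example are illustrative, not from the notes:

```python
from dataclasses import dataclass

@dataclass
class UnitTestCase:
    """One unit test case with the four essential elements."""
    initial_state: str       # starting point of the test case
    inputs: list             # the inputs to the unit
    purpose: str             # what the test actually tests, and the design analysis used
    expected_outcome: str    # the expected outcome of the test case

tc = UnitTestCase(
    initial_state="stack is empty",
    inputs=["push(5)", "pop()"],
    purpose="pop returns the last pushed value (specification-derived)",
    expected_outcome="pop() returns 5 and the stack is empty again",
)
print(tc.purpose)
```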

3.3.3. Process in Test Specification

A six-step general process for developing a unit test specification as a set of individual unit test cases:

Step 1 - Make it Run
Step 2 - Positive Testing
Step 3 - Negative Testing
Step 4 - Special Considerations
Step 5 - Coverage Tests
Step 6 - Coverage Completion

Step 1 - Make it Run

The purpose of the first test case in any unit test specification should be to execute the unit under test in the simplest way possible.

Suitable techniques:
o Specification derived tests
o Equivalence partitioning

Step 2 - Positive Testing

Test cases should be designed to show that the unit under test does what it is supposed to do.

Suitable techniques:
o Specification derived tests
o Equivalence partitioning
o State-transition testing

Step 3 - Negative Testing

Test cases should be enhanced, and further test cases should be designed, to show that the software does not do what it is not supposed to do.

Suitable techniques:
o Error guessing
o Boundary value analysis
o Internal boundary value testing
o State-transition testing


Step 4 - Special Considerations

Where appropriate, test cases should be designed to address issues related to performance, safety, and security requirements.

Suitable techniques:
o Specification derived tests

Step 5 - Coverage Tests

Add more test cases to the unit test specification to achieve specific test coverage objectives.

Suitable techniques:
o Branch testing
o Condition testing
o Data definition-use testing
o State-transition testing

Test Execution

At this point the test specification can be used to develop an actual test procedure, which is then executed.

Execution of the test procedure will identify errors in the unit, which can be corrected and the unit re-tested.

Running of test cases will indicate whether coverage objectives have been achieved. If not…

Step 6 - Coverage Completion

Where coverage objectives are not achieved, analysis must be conducted to determine why. Failure to achieve a coverage objective may be due to:
o Infeasible paths or conditions
o Unreachable or redundant code
o Insufficient test cases

*****

3.4. Test Case Design Techniques

• Test case design techniques can be broadly split into two main categories.
• Black box techniques use the interface to a unit and a description of its functionality, but do not need to know how the inside of the unit is built.
• White box techniques make use of information about how the inside of a unit works.
• There are also some other techniques which do not fit into either of the above categories. Error guessing falls into this category.

The types of test case design techniques are:

Black Box (Functional)         White Box (Structural)            Other
Specification derived tests    Branch testing                    Error guessing
Equivalence partitioning       Condition testing
Boundary value analysis        Data definition-use testing
State transition testing       Internal boundary value testing


Black Box Testing

Specification Derived Tests
 Test cases are designed by walking through the relevant specifications.
 Each test case should test one or more statements of the specification.

 It is a positive test case design technique.

Example specification:
 Input - real number
 Output - real number
 When given an input of zero or greater, the positive square root of the input shall be returned.
 When given an input of less than zero, the error message "Square root error - illegal negative input" shall be displayed and a value of 0 returned.

Equivalence Partitioning
 It is based upon splitting the inputs and outputs of the software under test into a number of partitions.
 Test cases should therefore be designed to test one value in each partition.
 Still a positive test case design technique.

Boundary Value Analysis
 Similar to equivalence partitioning.
 Assumes that errors are most likely to exist at the boundaries between partitions.
 Test cases are designed to exercise the software on and at either side of boundary values.
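Both techniques can be shown on the square-root specification above: its input splits into two partitions (negative, and zero or greater) with a boundary at 0. A sketch, again assuming a hypothetical `safe_sqrt` unit:

```python
import math

def safe_sqrt(x):
    """Hypothetical unit under test (the square-root specification)."""
    if x < 0:
        return 0.0, "Square root error - illegal negative input"
    return math.sqrt(x), None

# Equivalence partitioning: one representative value per input partition.
partitions = {"negative input": -5.0, "zero or greater": 16.0}

# Boundary value analysis: exercise the boundary (0) and either side of it.
boundaries = [-0.01, 0.0, 0.01]

for x in list(partitions.values()) + boundaries:
    result, message = safe_sqrt(x)
    if x < 0:
        assert result == 0.0 and message is not None
    else:
        assert abs(result * result - x) < 1e-9 and message is None
print("partition and boundary tests passed")
```

The boundary values add a degree of negative testing, as the text notes.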

 Incorporates a degree of negative testing into the test design.

State-Transition Testing
 Used where the software has been designed as a state machine.
 Test cases are designed to test transitions between states by generating events.
 Negative testing can be done by using illegal combinations of states and events.
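A small sketch of state-transition testing, using a toy turnstile state machine (the machine and its events are illustrative, not from the notes). Positive tests exercise legal transitions; negative tests use illegal state/event combinations:

```python
# A minimal state machine (a turnstile) used to sketch state-transition testing.
TRANSITIONS = {
    ("locked", "coin"): "unlocked",
    ("unlocked", "push"): "locked",
}

def next_state(state, event):
    """Return the next state; illegal (state, event) pairs leave the state unchanged."""
    return TRANSITIONS.get((state, event), state)

# Positive tests: legal transitions generated by events.
assert next_state("locked", "coin") == "unlocked"
assert next_state("unlocked", "push") == "locked"

# Negative tests: illegal combinations of states and events.
assert next_state("locked", "push") == "locked"
assert next_state("unlocked", "coin") == "unlocked"
print("state-transition tests passed")
```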

White Box Testing

Branch Testing
 Designed to test the control flow branches, e.g. if-then-else.

Condition Testing
 Used to complement branch testing.
 It tests logical conditions, e.g. while (a < b): for different values of 'a' and 'b', the condition should be checked.
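Branch and condition testing can be sketched on a single small unit containing an if-then-else branch and a `while (a < b)` loop condition (the function is illustrative, not from the notes):

```python
# Unit with an if-then-else branch and a loop condition, used to sketch
# branch testing and condition testing.
def countdown_steps(a, b):
    """Count down from b to a; if a > b, swap the values first (if-then-else)."""
    if a > b:          # branch testing: exercise both the true and false branch
        a, b = b, a
    steps = 0
    while a < b:       # condition testing: check the condition for varied a, b
        b -= 1
        steps += 1
    return steps

# Branch testing: one test per branch of the if-then-else.
assert countdown_steps(2, 5) == 3   # false branch (no swap)
assert countdown_steps(5, 2) == 3   # true branch (swap taken)

# Condition testing: values of a and b where a < b is true, false, and equal.
assert countdown_steps(4, 4) == 0   # condition false immediately
print("branch and condition tests passed")
```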

Error Guessing
 Based solely on the experience of the test designer.
 The test cases are designed for the values which can generate errors.


 If properly implemented, it can be the most effective way of testing.

Note about unit testing:
 Unit testing provides the earliest opportunity to catch bugs, and fixing them at this stage is the most economical.
 Black box and white box techniques are used to develop the individual test cases.
 Unit testing will find bugs at a stage of the software development where they can be corrected economically.

Unit testing requires:
o That the design of units is documented in a specification
o That unit tests are designed from the specification before coding begins
o That the expected outcomes of unit test cases are specified in the unit test specification

*****

3.5. Running the Unit Tests and Recording Results

Table 3.2 shows a sample worksheet for recording the status of unit tests.

Table 3.2: Sample worksheet for recording the status of unit test

*****

3.6. Integration Test

Goals: Integration test for procedural code has two major goals:
 To detect defects that occur on the interfaces of units;
 To assemble the individual units into working subsystems and finally a complete system that is ready for system test.

Integration test planning

For conventional/procedural/function-oriented systems there are four major integration strategies: top-down, bottom-up, bidirectional, and system integration.

Unit test worksheet
Unit Name:
Unit Identifier:
Tester:
Date:

Test case ID   Status (run/not run)   Summary of results   Pass/Fail


1. Top down integration

Integration testing involves testing the topmost component's interfaces with the other components, in the order in which you navigate from top to bottom, until all the components are covered. For example, consider the example shown in figure 3.4. The integration testing starts with testing the interface between component 1 and component 2. To complete the integration testing, all interfaces mentioned in the figure, covering all the arrows, have to be tested together.

Figure 3.4: Example for top down integration

The order in which the interfaces are to be tested is shown in the following table 3.3.

Step   Interfaces tested
1      1-2
2      1-3
3      1-4
4      1-2-5
5      1-3-6
6      1-3-6-(3-7)
7      (1-2-5)-(1-3-6-(3-7))
8      1-4-8
9      (1-2-5)-(1-3-6-(3-7))-(1-4-8)

Table 3.3: order of interfaces tested in top down testing

To optimize the number of steps in integration testing, steps 6 and 7 can be combined and executed in a single step. Similarly, steps 8 and 9 can also be combined and tested in a single step. Combining steps does not mean a reduction in the number of interfaces tested; it is just an optimization of the elapsed time, as we do not have to wait for steps 6 and 8 to finish before starting on steps 7 and 9 respectively.

A component at a higher level may require a modification every time a module gets added at the bottom. For each such component addition, integration testing needs to be repeated starting from step 1.

(Figure 3.4 shows Component 1 at the top calling Components 2, 3, and 4; Component 2 calls Component 5; Component 3 calls Components 6 and 7; Component 4 calls Component 8.)


Note: A breadth-first approach yields a component order such as 1-2, 1-3, 1-4, and so on, while a depth-first order yields a component order such as 1-2-5, 1-3-6, and so on. In this example the breadth-first approach was used.

2. Bottom up integration testing

The navigation in bottom up integration starts from component 1, covering all sub-systems, till component 8 is reached. The order in which the interfaces have to be tested is shown in table 3.4. The number of steps can be optimized into four steps, by combining steps 2 and 3 and by combining steps 5-8. For an incremental product development, only the impacted and added interfaces need to be tested, covering all sub-systems and system components.

Example of bottom up integration - arrows pointing up indicate the integration path.

Figure 3.5: Example for bottom up integration testing

Step   Interfaces tested
1      1-5
2      2-6, 3-6
3      2-6-(3-6)
4      4-7
5      1-5-8
6      2-6-(3-6)-8
7      4-7-8
8      1-5-8-(2-6-(3-6)-8)-(4-7-8)

Table 3.4: order of interfaces tested in bottom up testing

******

(Figure 3.5 shows Components 1, 2, 3, and 4 at the bottom; Component 5 is called by Component 1, Component 6 by Components 2 and 3, and Component 7 by Component 4; Component 8 is at the top.)


3.7. System Testing

• The testing conducted on the complete, integrated products and solutions to evaluate system compliance with specified requirements on functional and non-functional aspects is called system testing.

• A system is defined as a set of hardware, software and other parts that together provide product features and solutions.

• System testing is the only phase of testing which tests both functional and non-functional aspects of the product.

Functional side - testing focuses on real-time customer usage.
Non-functional side - testing focuses on quality factors.

Why system testing?
• An independent team normally does system testing.
• This independent team is different from the team that does the component and integration testing.
• The behavior of the complete product is verified during system testing.

1. Performance/Load testing: Evaluating the time taken or response time of the system to perform its required functions, in comparison with different versions of the same product(s) or a different component(s), is called performance testing.
2. Scalability testing: Testing that requires an enormous amount of resources to find out the maximum capability of the system parameters is called scalability testing.
3. Reliability testing: Evaluating the ability of the system, or any independent component of the system, to perform its required functions repeatedly for a specified period of time is called reliability testing.
4. Stress testing: Evaluating a system beyond the limits of the specified requirements or system resources (disk space, memory, processor utilization), to ensure that the system does not break down unexpectedly, is called stress testing.
5. Interoperability testing: This testing is done to ensure that two or more products can exchange information, use the information, and work closely together.
6. Localization testing: Testing conducted to verify that the localized product works in different languages is called localization testing.

• System testing is performed on the basis of written test cases, according to information collected from detailed architecture/design documents, module specifications, and the system requirements specification.

• System test cases can also be developed based on user stories, customer discussions, and points made by observing typical customer usage.

• System testing may not include much negative scenario verification, such as testing for incorrect and negative values.


• Such negative testing is already performed during component and integration testing.

• System testing is started once unit, component, and integration testing are completed.

System testing is done for the following reasons:
• Provide an independent perspective in testing
• Bring in the customer perspective in testing
• Provide a "fresh pair of eyes" to discover defects not found earlier by testing
• Test product behavior in a holistic, complete, and realistic environment
• Test both functional and non-functional aspects of the product
• Build confidence in the product
• Analyze and reduce the risk of releasing the product
• Ensure all requirements are met and the product is ready for acceptance testing
• Apart from the pass or fail status, non-functional test results are also judged by the amount of effort involved in executing them and any problems faced during execution. For example, if a test met its pass criterion only after 10 iterations, the experience is bad and the result cannot be taken as a pass.

Types of system tests

There are many types of system testing available, as indicated in figure 3.6:

 Functional testing
 Performance testing
 Stress testing
 Configuration testing
 Security testing
 Recovery testing
 Reliability testing
 Usability testing


Figure 3.6: Types of system testing

• Not all software systems need to undergo all the types of system testing. Test planners need to decide on the types of tests applicable to a particular software system.

• Decisions depend on the characteristics of the system and the available test resources.
• For example, if multiple device configurations are not a requirement for your system, then the need for configuration test is not significant.
• During system test, the testers can repeat these tests and design additional tests for the system as a whole. The repeated tests can in some cases be considered regression tests.
• Properly planned and executed system tests are excellent preparation for acceptance test.
• An important tool for implementing system tests is a load generator. A load generator is essential for testing quality requirements such as performance and stress.
• A load is a series of inputs that simulates a group of transactions.
• A transaction consists of a set of operations that may be performed by a person, a software system, or a device that is outside the system.
• A use case can be used to describe a transaction.
• Example - system testing of a telecommunication system needs a load that simulates a series of phone calls (transactions) of particular types and lengths arriving from different locations.
• A load can be a real load; that is, we can put the system under test to real usage by having actual telephone users connected to it. Loads can also be produced by tools called load generators.
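A load generator can be sketched in a few lines: it produces a series of simulated transactions (here, phone calls as in the telecommunication example) and feeds them to the system under test. The transaction mix and the `handle_call` handler are illustrative assumptions:

```python
import random

# A minimal load generator sketch: a load is a series of inputs that
# simulates a group of transactions (here, simulated phone calls).
def generate_load(num_transactions, seed=42):
    """Yield (call_type, duration_seconds) pairs simulating phone calls."""
    rng = random.Random(seed)   # seeded, so the load is reproducible
    for _ in range(num_transactions):
        call_type = rng.choice(["local", "long-distance", "international"])
        duration = rng.uniform(10, 600)   # call length in seconds
        yield call_type, duration

# Drive the system under test with the generated load (handler is a stand-in).
def handle_call(call_type, duration):
    return f"routed {call_type} call of {duration:.0f}s"

for call_type, duration in generate_load(5):
    print(handle_call(call_type, duration))
```

Seeding the generator makes a test run repeatable, which matters when comparing results across builds.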


Functional vs non-functional testing
• Functional - testing a product's functionality and features.
• Non-functional - testing the product's quality factors.

Functional testing:
• The testing result normally depends on the product, not on the environment.
• It requires in-depth product knowledge.

Non-functional testing:
 Non-functional testing checks the quality factors.
 It requires the expected results to be documented in qualitative and quantifiable terms.
 It requires a large amount of resources, and results differ for different configurations and resources.
 It is very complex, since it needs a large amount of data.

3.7.1. Performance Testing

Definition: The testing performed to evaluate the response time, throughput, and utilization of the system while executing its required functions, in comparison with different versions of the same product(s) or different competitive product(s), is called performance testing.

• The goal of system performance tests is to see if the software meets the performance requirements.

• Testers also learn from performance tests whether there are any hardware or software factors that impact the system's performance.

• Performance testing allows testers to tune the system; that is, to optimize the allocation of system resources.

• Performance objectives must be articulated clearly by the users/clients in the requirements documents, and stated clearly in the system test plan.

• The objectives must be quantified. For example, a requirement that the system return a response to a query in "a reasonable amount of time" is not an acceptable requirement; the time requirement must be specified in a quantitative way. Results of performance tests are quantifiable.

• At the end of the tests the tester will know, for example, the number of CPU cycles used, the actual response time in seconds (minutes, etc.), and the actual number of transactions processed per time period. These can be evaluated with respect to the requirements objectives.
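Quantified measurements like these can be collected with very little machinery. A sketch that times a stand-in transaction and reports response time and throughput (the transaction body and the 0.5-second objective are illustrative assumptions):

```python
import time

# Sketch: measure response time and throughput so performance results
# are quantified rather than vague.
def transaction():
    sum(range(10_000))   # placeholder for one real transaction

def measure(num_transactions):
    response_times = []
    start = time.perf_counter()
    for _ in range(num_transactions):
        t0 = time.perf_counter()
        transaction()
        response_times.append(time.perf_counter() - t0)
    elapsed = time.perf_counter() - start
    return {
        "max_response_s": max(response_times),
        "avg_response_s": sum(response_times) / len(response_times),
        "throughput_tps": num_transactions / elapsed,
    }

results = measure(200)
print(results)

# A quantified objective can now be checked, e.g. "each transaction
# completes within 0.5 seconds":
assert results["max_response_s"] < 0.5
```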

• Resources for performance testing must be allocated in the system test plan.
• A source of transactions is needed to drive the experiments. For example, if you were performance testing an operating system you would need a stream of data that represents typical user interactions. Typically the source of transactions for many systems is a load.


• An experimental test bed that includes the hardware and software the system under test interacts with. The test bed requirements sometimes include special laboratory equipment and space that must be reserved for the tests.

• Instrumentation or probes that help to collect the performance data. Probes may be hardware or software in nature.

• A set of tools to collect, store, process, and interpret the data. Very often, large volumes of data are collected, and without tools the testers may have difficulty in processing and analyzing the data in order to evaluate true performance levels.

For example, suppose there is an application which can handle 25 simultaneous user logins at a time. In load testing we test the application with 25 users and check how the application works in this state; in performance testing we concentrate on the time taken to perform the operations; whereas in stress testing we test with more than 25 users, keep increasing the number, and check at what point the application breaks down or exhausts the hardware resources.

Load and Stress Testing

• Testing the application with the maximum number of users/inputs is defined as load testing, while testing the application with more than the maximum number of users/inputs is defined as stress testing.

• In load testing, system performance is measured based on the volume of users, while in stress testing the breakpoint of the system is measured.

Example:

• If an application is built for 500 users, checking up to 500 users is called load testing; for stress testing, checking should be done with greater than 500 users.

• A banking application can take a maximum user load of 20,000 concurrent users. Increase the load to 21,000 and do some transactions like deposit or withdraw. As soon as you do a transaction, the banking application server database will sync with the ATM database server. Now check whether, with the user load of 21,000, this sync happened successfully. Then repeat the same test with 22,000 concurrent users, and so on.
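The load-versus-stress distinction can be sketched against a toy capacity model. The 500-user limit comes from the example above; the system model itself is a deliberately simplified stand-in, not a real application:

```python
# Sketch of load vs. stress testing against a hypothetical capacity limit.
MAX_USERS = 500   # the stated maximum the application is built for

def simulate(concurrent_users):
    """Toy system model: serves users up to capacity, degrades beyond it."""
    if concurrent_users <= MAX_USERS:
        return {"served": concurrent_users, "rejected": 0}
    return {"served": MAX_USERS, "rejected": concurrent_users - MAX_USERS}

# Load test: drive the system at its specified maximum (500 users).
load_result = simulate(500)
assert load_result["rejected"] == 0

# Stress test: go beyond the maximum to observe the breakpoint behaviour.
stress_result = simulate(650)
print("stress:", stress_result)
assert stress_result["served"] == MAX_USERS   # degrades, does not crash
```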

Factors governing performance testing

A product is expected to handle multiple transactions in a given period. The capability of the system or the product to handle multiple transactions is determined by a factor called throughput.


Figure 3.7: Response of throughput for various load conditions

In figure 3.7 above, it can be noticed that initially the throughput keeps increasing as the user load increases. This is the ideal situation for any product and indicates that the product is capable of delivering more when there are more users trying to use it. In the second part of the graph, beyond certain user load conditions (after the bend), the throughput comes down. The optimum throughput is represented by the saturation point, which is the point of maximum throughput for the product.

Response time can be defined as the delay between the point of request and the first response from the product. Tuning is a procedure by which product performance is enhanced by setting different values for the parameters (variables) of the product, the operating system, and other components. Tuning improves product performance without having to touch the source code of the product. Another factor that needs to be considered for performance testing is the performance of competitive products. This type of performance testing, wherein competitive products are compared, is called benchmarking.
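The throughput and response-time definitions above can be computed directly from request/response timestamps. A minimal sketch with illustrative (made-up) timing data:

```python
# Throughput = completed transactions / elapsed time;
# response time = delay between a request and the first response.
timings = [  # (request_time, first_response_time) pairs in seconds; illustrative data
    (0.0, 0.4), (0.5, 1.0), (1.0, 1.3), (1.5, 2.1),
]

response_times = [resp - req for req, resp in timings]
elapsed = max(resp for _, resp in timings) - min(req for req, _ in timings)
throughput = len(timings) / elapsed                      # transactions per second
avg_response = sum(response_times) / len(response_times)  # average response time

print(f"throughput={throughput:.2f} tps, avg response={avg_response:.2f}s")
```

A real performance test would collect these timestamps from the instrumentation probes mentioned earlier rather than hard-coding them.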

Lastly, a performance testing requirement needs to be associated with the actual number or percentage of improvement that is desired. For example, if a business transaction, say an ATM money withdrawal, should be completed within two minutes, the requirement needs to document the actual response time expected.

One of the most important factors that affect performance testing is the availability of resources. Both hardware and software are needed to derive the best results from performance testing and for deployments. The exercise of finding out what resources and configurations are needed is called "capacity planning".

Planning of Performance Testing

Collecting requirements is the first step in planning performance testing. Typically, functionality testing has a definite set of inputs and outputs with a clear definition of expected results, but performance testing requires clear documentation and an environmental setup, and the expected results may not be well known in advance. So, collecting requirements is a very big challenge in performance testing.

Note: A performance test can only be carried out for a completely automated product. A feature involving manual intervention cannot be performance tested, as the results depend on how fast a user responds with inputs to the product.

Secondly, performance testing requirements need to clearly state what factors need to be measured and improved. Performance has several factors, such as response time, latency, throughput, and resource utilization.

Sources for deriving performance requirements

1. Performance compared to the previous release of the same product: the ATM withdrawal transaction will be faster than the previous release by 10%.
2. Performance compared to competitive product(s): ATM withdrawal will be as fast as or faster than competitive product XYZ.
3. Performance compared to absolute numbers derived from actual need: "The ATM machine should be capable of handling 1000 transactions per day, with each transaction not taking more than a minute."

Note: Performance numbers can also be derived from architecture and design. The architecture and design goals are based on the performance expected for a particular load, so the source code should be written in such a way as to meet those numbers.

Types of requirements – Performance testing

1. Generic requirements: common across all products in the product domain area. All products in that area are expected to meet those performance expectations. Ex: the time taken to load a page, the initial response when a mouse is clicked, the time taken to navigate between screens.

2. Specific requirements: these depend on the implementation of a particular product and differ from one product to another in a given domain. Ex: the time taken to withdraw an amount from an ATM.

Writing Test Cases for performance testing

Step 1: List of operations or business transactions to be tested.
Step 2: Steps for executing those operations/transactions.


Step 3: List of product and OS parameters that impact the performance testing, and their values.
Step 4: Loading pattern.
Step 5: Resources and their configuration (network, hardware, and software configurations).
Step 6: Expected results.
Step 7: Comparison of results with a competitive product.

While testing the product for different load patterns, it is important to increase the load or scalability gradually, to avoid unnecessary effort in case of failures. For example, if an ATM withdrawal fails for ten concurrent operations, there is no point in trying it for 10,000 operations. The effort involved in testing for 10 concurrent operations may be many times less than that of testing for 10,000 operations. Hence, a methodical approach is to gradually increase the concurrent operations (say 10, 100, 1000, 10,000, and so on) rather than attempting 10,000 concurrent operations in the first iteration itself. The test case documentation should clearly reflect this approach.
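The gradual ramp-up described above can be sketched as a simple loop that stops at the first failing load level. `atm_withdrawal_load_test` here is a hypothetical driver that pretends the product breaks beyond 1000 concurrent operations:

```python
def atm_withdrawal_load_test(concurrency):
    """Hypothetical driver: returns True if all concurrent ATM withdrawals
    succeed at the given concurrency (pretend the product breaks beyond 1000)."""
    return concurrency <= 1000

results = []
for load in (10, 100, 1000, 10000):   # ramp the load gradually
    ok = atm_withdrawal_load_test(load)
    results.append((load, ok))
    if not ok:                        # stop at the first failing level;
        break                         # no point testing higher loads

print(results)   # [(10, True), (100, True), (1000, True), (10000, False)]
```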

3.7.2. Configuration Testing

• Configuration testing is the process of testing a system with each of the supported software and hardware configurations.

• Testing the software against the hardware, or testing the software against other software, is called configuration testing.

• During this testing the tester checks whether the software build supports different hardware technologies or not. Ex: printers, scanners, topologies, etc.

• Configuration testing is the process of testing a system under development on machines which have various combinations of hardware and software.

• In many situations the number of possible configurations is far too large to test. For example, suppose you are a member of a test team working on a desktop user application. The number of combinations of operating system versions, memory sizes, hard drive types, and CPUs alone could be enormous. If you target only 10 different operating system versions, 8 different memory sizes, 6 different hard drives, and 7 different CPUs, there are already 10 * 8 * 6 * 7 = 3,360 different hardware configurations.
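The combinatorial explosion can be checked with a short sketch; the configuration names below are made up, only the counts match the text:

```python
from itertools import product

# Hypothetical configuration dimensions matching the counts in the text.
os_versions  = [f"OS-{i}" for i in range(10)]
memory_sizes = [f"MEM-{i}" for i in range(8)]
hard_drives  = [f"HDD-{i}" for i in range(6)]
cpus         = [f"CPU-{i}" for i in range(7)]

# The full cross product is the set of hardware configurations to cover.
configs = list(product(os_versions, memory_sizes, hard_drives, cpus))
print(len(configs))   # 3360, i.e. 10 * 8 * 6 * 7
```

In practice testers rarely run the full cross product; they sample it, for example with pairwise (combinatorial) selection.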

3.7.3. Security testing

• Security testing is a type of software testing that intends to uncover vulnerabilities of the system and determine that its data and resources are protected from possible intruders.

• Websites are not meant only for publicity or marketing; they have evolved into strong tools that cater to complete business needs.

• Web-based payroll systems, shopping malls, banking, and stock trading applications are not only used by organizations but are also sold as products today.


• This means that online applications have gained the trust of customers and users with regard to their vital feature, security. No doubt, the security factor is of primary value for desktop applications too.

• However, when we talk about the web, the importance of security increases exponentially. If an online system cannot protect the transaction data, no one will ever think of using it.

Examples of security flaws in an application:

1) A student management system is insecure if the 'Admission' branch can edit the data of the 'Exam' branch.
2) An online shopping mall has no security if customers' credit card details are not encrypted.
3) Custom software possesses inadequate security if an SQL query retrieves the actual passwords of its users.
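For the third flaw, the usual remedy is to store salted password hashes rather than plaintext, so that no SQL query can retrieve actual passwords. A minimal sketch using Python's standard library (the iteration count and salt length are illustrative choices, not values mandated by the text):

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Derive a salted hash so the database never stores the plaintext password."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password, salt, digest):
    """Recompute the hash and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("s3cret")
print(verify_password("s3cret", salt, digest))   # True
print(verify_password("wrong", salt, digest))    # False
```

A security test for this flaw would query the stored credentials directly and assert that no plaintext password appears.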

• Typical security requirements may include specific elements of confidentiality, integrity, authentication, availability, and authorization.

• The actual security requirements tested depend on the security requirements implemented by the system. Security testing as a term has a number of different meanings and can be carried out in different ways.

Confidentiality

• A security measure which protects against the disclosure of information to parties other than the intended recipient.

Integrity

A measure intended to allow the receiver to determine that the information provided by a system is correct.

Authentication

It is any process by which a system verifies the identity of a user who wishes to access it.

Authorization

The process of determining that a requester is allowed to receive a service or perform an operation.

Availability

Assuring information and communications services will be ready for use when expected.

Information must be kept available to authorized persons when they need it.


RECOVERY TESTING

• In software testing, recovery testing is the activity of testing how well an application is able to recover from crashes, hardware failures, and other similar problems.

• Recovery testing is the forced failure of the software in a variety of ways to verify that recovery is properly performed.

• Recovery testing should not be confused with reliability testing, which tries to discover the specific point at which failure occurs.

• Recovery testing is basically done in order to check how fast and how well the application can recover from any type of crash or hardware failure.

• The type or extent of recovery is specified in the requirement specifications. It is basically testing how well a system recovers from crashes, hardware failures, or other catastrophic problems.

Examples of recovery testing

• While an application is running, suddenly restart the computer, and afterwards check the validity of the application's data integrity.

• While an application is receiving data from a network, unplug the connecting cable. After some time, plug the cable back in and analyze the application's ability to continue receiving data from the point at which the network connection disappeared.

• Restart the system while a browser has a definite number of sessions. Afterwards, check that the browser is able to recover all of them.

3.7.4. Regression Testing

Regression testing is a type of software testing that seeks to uncover new software bugs, or regressions, in existing functional and non-functional areas of a system after changes such as enhancements, patches, or configuration changes have been made to them.

When to do regression testing?

1. A reasonable amount of initial testing has already been carried out.
2. A good number of defects have been fixed.
3. Defect fixes that can produce side-effects are taken care of.
4. Regression testing should be done periodically.


Methodology for RT

1. Performing an initial "smoke" or "sanity" test

Figure 3.8: An initial test in regression testing

Smoke testing

The term smoke testing came from hardware testing: when you get new hardware and power it on, if smoke comes out then you do not proceed with its testing. In software testing, a smoke test is run on initial builds of an application to ascertain that its most critical areas are working correctly and that the application is ready for thorough testing.

Sanity testing

Once a new build is received after minor changes, instead of starting its complete testing, a sanity test is conducted to make sure previous defects have been fixed and no new issues have been introduced by these fixes. Sanity testing is a subset of regression testing.

2. Understanding the criteria for selecting test cases

Two approaches for selecting test cases:

1. A constant set of regression tests that are run for every build or change.
2. Selecting test cases dynamically.

Selecting test cases requires knowledge of:

1. Defect fixes and changes made in the current build,
2. The way to test the current changes,
3. The impact that the current changes may have on other parts of the system, and
4. The ways of testing the other impacted parts.

While doing regression testing:


• Include test cases that have produced the maximum defects in the past.
• Include test cases for a functionality in which a change has been made.
• Include test cases in which problems are reported.
• Include test cases that cover the mandatory requirements of the customer.
• Include test cases to test the positive test conditions.
• Include the areas which are highly visible to the users.

3. Classifying test cases

• It is important to know the relative priority of test cases for successful test execution.

• Fixing priority based on importance and customer usage is an important activity in regression testing.

Priority 0: These are called sanity test cases; they check the basic functionality.
• Run for accepting the build for further testing.
• Run when a product goes through major changes.
• They deliver very high project value to development teams and the customers.

Priority 1: These test cases deliver very high project value to development teams and the customers.

Priority 2: These test cases deliver moderate project value. (They will be used for regression testing on a need basis.)

4. Methodology for selecting test cases

The selection of test cases based on the criticality and impact of defect fixes is listed in Table 3.5 below and shown in Figure 3.9.

Criticality and impact of defect fixes | Selection of test cases | Additional test cases
Low | A few test cases from the TCDB (Test Case Database) | -
Medium | All Priority 0 and Priority 1 test cases | Test cases from Priority 2 are desirable but not necessary
High | All Priority 0 and Priority 1 test cases | A subset of Priority 2 test cases

Table 3.5: Selection of test cases in regression testing
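Table 3.5 can be sketched as a small selection function; the test IDs, priority values, and the exact sizes of the "few"/"subset" choices below are illustrative assumptions, not part of the table:

```python
def select_regression_tests(criticality, tcdb):
    """Sketch of Table 3.5. tcdb is a list of (test_id, priority) pairs;
    criticality is 'low', 'medium', or 'high'."""
    if criticality == "low":
        return [tid for tid, _ in tcdb][:3]                   # a few cases from the TCDB
    selected = [tid for tid, prio in tcdb if prio in (0, 1)]  # all P0 and P1 cases
    if criticality == "high":
        p2 = [tid for tid, prio in tcdb if prio == 2]
        selected += p2[: max(1, len(p2) // 2)]                # a subset of P2 cases
    return selected

tcdb = [("t1", 0), ("t2", 1), ("t3", 2), ("t4", 2), ("t5", 0)]
print(select_regression_tests("medium", tcdb))   # ['t1', 't2', 't5']
print(select_regression_tests("high", tcdb))     # ['t1', 't2', 't5', 't3']
```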


Figure 3.9: Selection of test cases in regression testing

The following methodologies are used based on the availability of time.

1. Regress all: All the test cases under P0, P1, and P2 will be executed.
2. Priority-based regression: P0, P1, and P2 test cases are run in order; when to stop is based on the availability of time.
3. Regress changes: The changes in the code are compared with the last cycle of testing, and test cases are selected based on their impact on the code.
4. Random regression: Test cases will be selected randomly.
5. Context-based dynamic regression: A few test cases from P0 will be selected.

5. Resetting the test cases for regression testing

• Information about the test cases recorded in each cycle is called the TCRH (Test Case Result History); test cases are selected for the next cycle based on this history. The method or procedure that uses the TCRH to indicate that some of the test cases be selected for regression testing is called the reset procedure.

• In most cases, not all types of testing or all test cases are repeated in each cycle.

Points to be considered while resetting:
• When there is a major change in the product.
• When there are changes in the build procedure which affect the product.
• When some of the test cases have not been executed for a long period.
• When the product is in the final regression test cycle with a few selected test cases.


• When the expected result of the test cases is quite different from the previous test cycle.
• When there is a situation where the expected results of the test cases could be quite different from the previous cycles.
• Whenever existing application functionality is removed, the related test cases can be reset.
• Test cases that consistently produce a positive result can be removed.
• Test cases relating to a few negative test conditions (not producing any defects) can be removed.

When the above guidelines are not met, we may want to rerun the test cases rather than reset their results. There are only a few differences between the rerun and reset states of test cases. In both instances the test cases are executed, but in the case of "reset" we can expect a different result from what was obtained in the earlier cycles. In the case of rerun, the test cases are expected to give the same test result as in the past; hence, the management need not be unduly worried, because those test cases are executed as a formality and are not expected to reveal any major problem.

Test cases belonging to the "rerun" state help to gain confidence in the product by testing for more time. Such test cases are not expected to fail or affect the release. Test cases belonging to the "reset" state say that the test results can be different from the past, and only after these test cases are executed can we know the result of regression and the release status.

A rerun state in a test case indicates low risk, and a reset state represents medium to high risk for a release. Hence, close to the product release, it is a good practice to execute the "reset" test cases first, before executing the "rerun" test cases.

Since regression uses test cases that have already been executed more than once, it is expected that 100% of those test cases pass using the same build, if the defect fixes are done right. In situations where the pass percentage is not 100%, the test manager can compare with the previous results of the test cases to conclude whether regression was successful or not. Table 3.6 below indicates the action to be taken based on the comparison of the results of the current regression test and the previous regression test.

Table 3.6: Comparison of current and previous results of regression

Current result from regression | Previous result | Conclusion | Remarks
FAIL | PASS | FAIL | Need to improve the regression process and code reviews
PASS | FAIL | PASS | This is the expected result of a good regression, showing that the defect fixes work properly
FAIL | FAIL | FAIL | Need to analyze why the defect fixes are not working. "Is it a wrong fix?" Also analyze why this test was rerun for regression
PASS | PASS | PASS | This pattern of results gives a comfortable feeling that there are no side-effects due to the defect fixes
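The decision logic of Table 3.6 can be sketched as a small lookup function (the wording of the remarks is paraphrased from the table):

```python
def regression_verdict(current, previous):
    """Sketch of Table 3.6: draw a conclusion from the current regression
    result versus the previous cycle's result for the same test case."""
    if current == "PASS" and previous == "FAIL":
        return ("PASS", "expected result of a good regression; the defect fix works")
    if current == "PASS" and previous == "PASS":
        return ("PASS", "no side-effects due to defect fixes")
    if current == "FAIL" and previous == "PASS":
        return ("FAIL", "improve the regression process and code reviews")
    return ("FAIL", "analyze why the defect fix is not working")

print(regression_verdict("PASS", "FAIL"))
print(regression_verdict("FAIL", "FAIL"))
```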

Best Practices in Regression Testing
• Regression can be used for all types of releases.
• Mapping defect identifiers with test cases improves regression quality.
• Create and execute a regression test bed daily.
• Ask your best test engineer to select the test cases.
• Detect defects, and protect your product from defects and defect fixes.

3.7.5. Usability and Accessibility Testing

1. Usability Testing

• The testing that validates the ease of use, speed, and aesthetics of the product from the user's point of view is called usability testing.
• It is a means of measuring how well people can use some human-made object (such as a web page, a computer interface, a document, or a device) for its intended purpose.

Why is usability validated, not tested?

Perceptions of good usability vary from user to user. For example, a developer would consider the use of command-line flags a good user interface; an end user will want everything in terms of GUI elements such as menus, dialog boxes, and so on.

2. Characteristics of Usability testing

• It tests the product from the user's point of view.
• It checks whether the product is easy to use for the various categories of users.
• It is the process of identifying discrepancies between the user interface of the product and the human requirements in terms of pleasantness and aesthetics.

3. Approach to Usability

• In UT, certain human factors can be represented in a quantifiable way and can be tested objectively. Ex: the number of mouse clicks, the number of keystrokes, and the number of commands used to perform a task.

• UT is not only for product binaries or executables. It also applies to documentation and other deliverables that are shipped along with a product. Example: an AUTORUN script that automatically brings up the product setup when the release media is inserted in the machine. Sometimes this script is written for a particular operating system version and may not get auto-executed on a different OS version.

4. People Suited To Perform Usability Testing

• Typical representatives of the actual user segments who would be using the product, so that the typical user patterns can be captured.

• People who are new to the product, so that they can start without any bias and be able to identify usability problems.

Generally it is difficult to develop test cases for usability, so checklists and guidelines are prepared for UT.

5. Usability depends on the messages the system gives to its users.

A system should intelligently detect and avoid wrong usage, and if wrong usage cannot be avoided, it should provide appropriate and meaningful messages.

• Informational message – verified to find out whether an end user can understand the message and associate it with the operation done.

• Warning message – checked for why it happened and what to do to avoid the warning.

• Error message – checked for what the error is, why it happened, and what to do to avoid or work around it.

UT should cover both PT (positive testing) and NT (negative testing), to verify the correct and incorrect usage of the product.

6. Verification of Usability Design

1. Style sheets

A style sheet is a file or form used in word processing and desktop publishing to define the layout style of a document. A style sheet contains the specifications of a document's layout, such as the page size, margins, fonts, and font sizes. In modern word processors such as Microsoft Word, a style sheet is known as a template. The most well-known form of style sheet is the Cascading Style Sheet (CSS), which is used for styling Web pages.

• Use of style sheets ensures consistency of design elements across several screens, and testing them ensures that the basic usability design is verified. Style-sheet checks cover the font size, color scheme, and so on.

SOFTWARE TESTING – UNIT 3

Prepared by Dr. R. Kavitha Page 29

Screen prototypes

• This prototype gives an idea of how exactly the screen will look and function when the product is released.
• The test team and real-life users test this prototype.
• Screens are designed as they will be shipped to the customer, but are not integrated with other modules of the product.
• The user interface is tested independently, without integrating it with the functionality modules.

Paper design

• The design of the screen, layout, and menus is drawn up on paper and sent to users for feedback.
• Usage of style sheets requires further coding, and prototypes need binaries and resources to verify, but a paper design does not require any other resources.

Layout design

• Layout helps in arranging different elements on the screen dynamically.
• It ensures proper arrangement of elements, spacing, size of fonts, pictures, and so on, on the screen.

Usability

Usability is a habit and a behavior. Just like humans, products are expected to behave differently and correctly with different users and their expectations.

7. Checklist for usability testing

• Do users complete the assigned tasks/operations successfully?
• If so, how much time do they take to complete the tasks/operations?
• Is the response from the product fast enough to satisfy them?
• Where did the users get stuck? What problems do they have?
• Where do they get confused? Were they able to continue on their own? What helped them to continue?

8. Quality factors for Usability

1. Comprehensibility

• The product should have a simple and logical structure of features and documentation.
• Features should be grouped on the basis of user scenarios and usage.
• The most frequently used operations should be presented first in the user interface.

2. Consistency

• A product should be consistent with any applicable standards, platform look and feel, and base infrastructure.
• For multiple products from the same company, check the consistency of the look and feel.
• User interfaces that differ across operating systems irritate the user.

3. Navigation

• It helps in determining how easy it is to select the different operations of the product.
• The number of mouse clicks needed to perform any operation should be minimized to improve usability.

4. Responsiveness

• How fast the product responds to a user request.
• Whenever the product is processing some information, the visual display should indicate the progress and also the amount of time left, so that users can wait patiently till the operation is completed.

9. Aesthetics Testing

• An important aspect of usability is making the product "beautiful".
• Aesthetics-related problems in the product are generally mapped to a defect classification called "cosmetic", which is low priority.
• It is not possible for every product to measure up to the Taj Mahal for beauty, but testing for aesthetics can at least ensure the product is pleasing to the eye.
• Aesthetics is not in the external look alone. It is in all aspects such as colors, nice icons, messages, screens, and images.

ACCESSIBILITY TESTING

Testing the product usability for physically challenged users is called accessibility testing. For such users, an alternative method of using the product has to be provided.

1. Accessibility of the product can be provided by two means

1. Making use of accessibility features provided by the underlying infrastructure (ex: the OS), called basic accessibility, and
2. Providing accessibility in the product through standards, called product accessibility.


2. Basic Accessibility

• It is provided by the hardware and operating system.

Keyboard accessibility: input and output devices of the computer and their accessibility options are categorized under basic accessibility. Ex: on the keyboard, the keys at the top are the 'F' (function) keys.

3. Sticky Keys

Sticky Keys is an accessibility feature to help Windows users who have physical disabilities, but it is also used by others as a means to reduce repetitive strain injury (or a syndrome called the Emacs Pinky). It essentially serializes keystrokes instead of requiring multiple keys to be pressed at a time: Sticky Keys allows the user to press and release a modifier key, such as Shift, Ctrl, Alt, or the Windows key, and have it remain active until any other key is pressed.
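The serializing behavior described above can be modeled as a toy state machine; this is an illustrative sketch, not the actual Windows implementation:

```python
class StickyKeys:
    """Toy model of Sticky Keys: a modifier pressed (and released)
    stays latched until the next ordinary key is pressed."""
    MODIFIERS = {"Shift", "Ctrl", "Alt", "Win"}

    def __init__(self):
        self.active = set()   # currently latched modifiers
        self.output = []      # key combinations actually delivered

    def press(self, key):
        if key in self.MODIFIERS:
            self.active.add(key)          # latch the modifier
        else:
            combo = "+".join(sorted(self.active) + [key])
            self.output.append(combo)     # deliver modifier(s) + key together
            self.active.clear()           # modifiers release after one key

kb = StickyKeys()
kb.press("Ctrl"); kb.press("Alt"); kb.press("Del")   # pressed one at a time
print(kb.output)   # ['Alt+Ctrl+Del']
```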

4. Filter keys

Useful for stopping the repetitions completely or slowing down the repetition.

5. ToggleKeys

ToggleKeys is a feature of Microsoft Windows. It is an accessibility function designed for people who have vision impairment or cognitive disabilities. When ToggleKeys is turned on, the computer provides sound cues when the locking keys (Caps Lock, Num Lock) are pressed: a high-pitched sound plays when the keys are switched on and a low-pitched sound plays when they are switched off.

6. Sound Keys – pronounces each key as it is hit on the keyboard.

7. Arrow keys – used to control the mouse pointer.

8. Screen accessibility

• Accessibility of the keyboard for vision-impaired and mobility-impaired users is also important. For example, visual feedback on the screen is required for hearing-impaired users.

• Enabling captions for multimedia: all multimedia speech and sound can be given text equivalents, which should be displayed on the screen when the speech and sound are played.

• Soft keyboards: mobility- and vision-impaired users find it easier to use pointing devices instead of the keyboard. A soft keyboard helps users by displaying the keyboard on the


screen. Characters can be typed by clicking on the keyboard layout on the screen using pointing devices such as the mouse.

• Easy reading with high contrast: vision-impaired users have problems recognizing some colors and the size of fonts in menu items. A toggle option should be available to switch to a high-contrast mode that uses pleasing colors.

9. Product accessibility

1. Text equivalents have to be provided for audio, video, and picture images; this improves accessibility for the hearing impaired.

2. Documents and fields should be organized so that they can be read without requiring a particular screen resolution or templates (style sheets).

3. User interfaces should be designed so that all information conveyed with color is also available without color.

Example:

Use a green button to start the program and a red button to stop the running program. Color-blind users may not be able to select the right button for operations, so the correct approach is to retain the colors and also name the buttons appropriately.

4. Reduce the flicker rate and the speed of moving text; avoid flashing and blinking.

The reading speed of below-average readers is normally low; they may find it irritating to see text that blinks and flashes. Even people with good vision find flashes and flickers beyond a particular frequency uncomfortable. (Usability standards: 2 Hz to 55 Hz.)



5. Reduce physical movement requirements for the users when designing the interface, and allow adequate time for user response. Spreading user interface elements to the corners of the screen should be avoided.

10. Usability and Accessibility tools

Name of the tool – Purpose

JAWS – For testing accessibility of the product with some assistive technologies.

HTML Validator – To validate the HTML source file.

Style sheet validator – To validate the style sheets for usability standards set by W3C.

Magnifier – Accessibility tool for the vision challenged.

Narrator – Reads the information displayed on the screen and creates audio descriptions for vision-challenged users.

Soft keyboard – Enables the use of pointing devices to operate the keyboard by displaying it on the screen.

11. Usability lab setup

It has two sections – Recording section and Observation section.

Recording section – The user is requested to come to the lab with a prefixed set of operations that are to be performed in the recording section. The product usage should be explained in advance, and the documentation should be provided to the user well in advance. The user comes prepared to perform the tasks.

Observation section – Usability experts sit, observe the user's body language, and associate defects with the screens.

Observation is made through one-way glass: the experts can see the user, but the user cannot see the experts. Cameras also monitor from different angles. After watching the different users use the product, the usability experts suggest usability improvements to the product.


3.7.6. Internationalization Testing

Internationalization testing is a non-functional testing technique. It is the process of designing a software application so that it can be adapted to various languages and regions without any changes.

Internationalization testing is the process of verifying that the application under test works uniformly across multiple regions and cultures.

The main purpose of internationalization testing is to check whether the code can handle all international support without breaking functionality, which might otherwise cause data loss or data integrity issues.

1. Primer on Internationalization

Definition of language: Language is a tool used for communication. The focus here is on human languages (such as Japanese, English, and French), not on computer languages (such as Java, C, and C++). A language has a set of characters, a set of valid words formed from these characters, and a grammar.

Character set

Standards are used to represent the characters of different languages in the computer. For example, ASCII is a one-byte (8-bit) representation for the characters used in computers. Using this method, 2^8 = 256 characters can be represented in binary.

Locale

Commercial software not only needs to remember the language, but also the country in which it is spoken. There are conventions associated with a language which need to be taken care of in the software. There could be two countries speaking the same language with identical grammar, words, and character set; however, there can still be variations, such as currency and date formats. A locale is the term used to differentiate these parameters and conventions. For example, the English language is spoken in both the USA and India, but the currency symbols used in the two countries are different ($ and Rs. respectively). The punctuation symbols used in numbers are also different: one million is represented in the USA as 1,000,000 and in India as 10,00,000. Software needs to remember the locale, apart from the language, for it to function properly.
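The two digit-grouping conventions described above can be sketched in Python. The helper functions below are illustrative examples written for this note, not part of any locale library:

```python
def group_western(n: int) -> str:
    # Western convention: digits grouped in threes (1,000,000)
    return f"{n:,}"

def group_indian(n: int) -> str:
    # Indian convention: last three digits, then groups of two (10,00,000)
    s = str(n)
    if len(s) <= 3:
        return s
    head, tail = s[:-3], s[-3:]
    groups = []
    while len(head) > 2:
        groups.insert(0, head[-2:])
        head = head[:-2]
    groups.insert(0, head)
    return ",".join(groups) + "," + tail

print(group_western(1000000))  # 1,000,000
print(group_indian(1000000))   # 10,00,000
```

A locale-aware product would obtain such rules from locale data rather than hard-coding them; the sketch only shows why the locale, and not just the language, must be remembered.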

Internationalization (I18n): This refers to all the activities that are required to make the software available for the international market.

Localization (L10n): It is a term used to mean the translation of software resources, such as messages, to the target language and conventions.

Globalization (G11n): It is used to mean internationalization and localization.


2. Enabling Testing

An activity of code review or code inspection, mixed with some test cases for unit testing, with an objective to catch I18n defects, is called enabling testing.

Checklist for enabling testing

Check the code for hard-coded date and currency formats, ASCII codes, or character constants.

Check the dialog boxes and screens to see whether they leave at least 0.5 times more space for expansion.

Ensure region- or culture-based messages and slang are not in the code.

Ensure that adequate size is provided for buffers and variables to contain translated messages.

Check that no messages contain technical jargon and that all messages can be understood even by the least experienced user of the product.

If the code uses scrolling of text, then the screens and dialog boxes must allow adequate provision for a change of scrolling direction, such as top to bottom, right to left, left to right, and bottom to top, as conventions differ between languages. For example, as shown in figure 3.10, Arabic uses a "right to left" direction for reading and "left to right" for scrolling.

Figure 3.10: Reading and scrolling directions for Japanese, English, and Arabic
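One of the enabling-testing checks above, finding hard-coded date formats and currency symbols, can be sketched as a simple source scan. The patterns and function names below are illustrative assumptions for this note, not a standard tool:

```python
import re

# Illustrative patterns for I18n trouble spots in source code
HARDCODED_PATTERNS = {
    "date format": re.compile(r"%m/%d/%Y|MM/DD/YYYY"),
    "currency symbol": re.compile(r"[$£¥]"),
}

def scan_for_i18n_defects(source: str):
    """Return (line_number, issue) pairs for suspicious hard-coded formats."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for issue, pattern in HARDCODED_PATTERNS.items():
            if pattern.search(line):
                findings.append((lineno, issue))
    return findings

code = 'print("Total: $" + str(total))\nlabel = now.strftime("%m/%d/%Y")\n'
print(scan_for_i18n_defects(code))  # [(1, 'currency symbol'), (2, 'date format')]
```

Such a scan supplements, but does not replace, the manual code review that enabling testing requires.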

3. Locale testing

Once the code has been verified for I18n and the enabling test is completed, changing the different locales using the system settings or environment variables and testing the software functionality, number, date, time, and currency formats is called locale testing.

Checklist for locale testing

Hot keys, function keys, and help screens should be tested with the different applicable locales.



Date and time formats are in line with the defined locale of the language.

Currency is in line with the selected locale and language.

Number format is in line with the selected locale and language.

Time zone information is correct for the selected locale.

Note: locale testing focuses on testing the conventions for number, punctuation, date and time, and currency formats.
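A locale test for date formats can be sketched as below. The expected patterns per locale are assumptions made for this example; a real test would take them from the product's locale data:

```python
from datetime import date

# Assumed short-date conventions per locale (illustrative only)
EXPECTED_DATE_FORMAT = {
    "en_US": "%m/%d/%Y",   # 12/31/2024
    "en_GB": "%d/%m/%Y",   # 31/12/2024
    "de_DE": "%d.%m.%Y",   # 31.12.2024
}

def format_for_locale(d: date, locale_id: str) -> str:
    """Format a date according to the convention defined for the locale."""
    return d.strftime(EXPECTED_DATE_FORMAT[locale_id])

d = date(2024, 12, 31)
for loc in EXPECTED_DATE_FORMAT:
    print(loc, format_for_locale(d, loc))
```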

4. Internationalization validation

It focuses on component functionality for the input/output of non-English messages.

Checklist for validation

The functionality in all languages and locales is the same.

Input to the software can be in non-ASCII or special characters, entered using tools such as an IME, and the functionality must be consistent.

Non-ASCII characters in names are displayed as they were entered.

Cut, copy, and paste of non-ASCII characters retain their styles after pasting, and the software functions as expected.

The software functions correctly with words from different languages. For example, login should work with both an English user name and a German user name.

The documentation has a consistent style and punctuation, and all language conventions are followed.
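A couple of the checks above can be sketched as a small validation test. The `login` function is a hypothetical stand-in for the software under test:

```python
def login(username: str, registered: set) -> bool:
    # Hypothetical system under test: accepts any registered user name
    return username in registered

users = {"alice", "Jürgen"}  # English and German user names

# Functionality must be consistent across languages
assert login("alice", users)
assert login("Jürgen", users)

# Non-ASCII input must survive an encode/decode round trip unchanged
name = "こんにちは"
assert name.encode("utf-8").decode("utf-8") == name
print("I18n validation checks passed")
```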

5. Fake Language Testing

Fake language testing helps in simulating the functionality of the localized product for a different language using software translators. It also ensures that switching between languages works properly and that the correct messages are picked up from the proper directories that contain the translated messages. Fake language testing helps in identifying issues proactively, before the product is localized. For this purpose, all messages are consolidated from the software, and fake language conversions are done by tools and tested. The fake language translators use English-like target languages, which are easy to understand and test. Figure 3.11 illustrates fake language testing.
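A fake-language translator of the kind described above can be sketched with a Pig-Latin-style conversion ("Hello" becomes "Ellohay"). This is an illustrative tool written for this note, not a real localization product:

```python
def to_fake_latin(word: str) -> str:
    """Pig-Latin-style fake translation: move leading consonants to the end, add 'ay'."""
    vowels = "aeiouAEIOU"
    for i, ch in enumerate(word):
        if ch in vowels:
            translated = word[i:] + word[:i] + "ay"
            # Preserve the original capitalization style
            return translated.capitalize() if word[0].isupper() else translated
    return word + "ay"  # word with no vowels

# Consolidate the product's messages and convert them with the fake translator
catalog = {"greeting": "Hello", "farewell": "Goodbye"}
fake_catalog = {key: to_fake_latin(msg) for key, msg in catalog.items()}
print(fake_catalog)  # {'greeting': 'Ellohay', 'farewell': 'Oodbyegay'}
```

Because the fake language is still readable English-like text, a tester can verify that every message was picked up from the translated catalog.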

[Figure: the product's English messages (e.g., "Hello") are converted by a fake language translator into an English-like target language (e.g., Pig Latin "Ellohay").]


Figure 3.11: Fake language testing

6. Documentation

The software documentation needs to be localized for the target language. For example, people in English-speaking countries understand that the dirty t-shirt on the left-hand side, when put inside the washing machine, becomes the clean t-shirt shown on the right-hand side of figure 3.12, because people in these countries read from left to right. If the same picture is shown to people in Arab countries, they may understand that a clean t-shirt put inside the washing machine becomes dirty, since they read from right to left.

Figure 3.12: Example for understanding of documentation

3.7.7. Ad hoc Testing

Ad hoc testing is an informal testing type with an aim to break the system. This testing is usually an unplanned activity. It does not follow any test design techniques to create test cases; in fact, it does not create test cases at all. This testing is primarily performed when the testers' knowledge of the system under test is very high. Testers randomly test the application without any test cases or any business requirement document.

Ad hoc testing does not follow any structured way of testing, and it is randomly done on any part of the application. The main aim of this testing is to find defects by random checking. Ad hoc testing can be achieved with the testing technique called error guessing. Error guessing can be done by people having enough experience with the system to "guess" the most likely sources of errors.

This testing requires no documentation, planning, or process to be followed. Since this testing aims at finding defects through a random approach, without any documentation, defects will not be mapped to test cases.


Characteristics of Ad-hoc testing

1. Ad-hoc testing is done after the completion of formal testing on the application or product.

2. This testing is performed with the aim of breaking the application, without following any process.

3. The testers executing ad-hoc testing should have thorough knowledge of the product.

4. The bugs found during ad-hoc testing expose loopholes in the testing process followed.

5. Ad-hoc testing needs to be executed only once, unless a defect is found that requires retesting.

Advantages or benefits of Ad-hoc testing:

Below are a few of the advantages or benefits of ad-hoc testing:

1. Ad-hoc testing gives the tester freedom to apply their own new ways of testing the application, which helps them find more defects compared to the formal testing process.

2. This type of testing can be done at any time, anywhere in the Software Development Life Cycle (SDLC), without following any formal process.

3. This type of testing is not limited to the testing team; it can also be done by the developers while developing their modules, which helps them code in a better way.

4. Ad-hoc testing proves to be very beneficial when there is little time and in-depth testing of the feature is required. This helps in delivering the feature with quality and on time.

5. Ad-hoc testing can be executed simultaneously with other types of testing, which helps in finding more bugs in less time.

6. In this type of testing documentation is not necessary, which helps the tester do focused testing of the feature or application without worrying about formal documentation.

Disadvantages of Ad-hoc testing:

1. The test scenarios executed during ad-hoc testing are not documented, so the tester has to keep all the scenarios in mind, which he/she might not be able to recollect in the future.


2. Ad-hoc testing is very much dependent on a skilled tester who has thorough knowledge of the product; it cannot be done by any new joiner of the team.

Types of Ad-hoc testing

Basically there are three types of Ad-hoc testing. They are:

1. Buddy testing: This type of testing is done by the developer and the tester who are responsible for a particular module's delivery. The developer and tester sit together and work on that module in order to avoid building invalid scenarios, which in turn also prevents the tester from reporting invalid defects.

2. Pair testing: In this type of testing, two testers work together on one module. They basically divide the testing scenarios between them. The aim of this type of testing is to come up with the maximum number of testing scenarios so that the entire module has complete test coverage. After testing the entire module together, they can also document their test scenarios and observations.

3. Monkey testing: In this type of testing, some random tests are executed with some random data, with the aim of breaking the system. This testing helps us discover new bugs that might not have been caught earlier.
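Monkey testing can be sketched as feeding random data to the system under test and checking that it never crashes. The `parse_amount` function below is a hypothetical component under test:

```python
import random
import string

def parse_amount(text: str) -> float:
    """Hypothetical component under test: parse an amount, returning 0.0 on failure."""
    try:
        return float(text.replace(",", "").strip())
    except ValueError:
        return 0.0

def monkey_test(runs: int = 1000, seed: int = 42) -> int:
    """Hammer the component with random strings; count how many parse successfully."""
    rng = random.Random(seed)  # fixed seed so any failure is reproducible
    alphabet = string.ascii_letters + string.digits + ".,- "
    successes = 0
    for _ in range(runs):
        text = "".join(rng.choice(alphabet) for _ in range(rng.randint(0, 12)))
        if parse_amount(text) != 0.0:  # the call must never raise
            successes += 1
    return successes

print(monkey_test())
```

Note the fixed random seed: without it, a bug found by monkey testing may be impossible to reproduce.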

3.7.8. Testing Object-Oriented Systems

Object-oriented testing is a collection of testing techniques to verify and validate object-oriented software. Testing is a continuous activity during software development. In object-oriented systems, testing encompasses three levels, namely, unit testing, subsystem testing, and system testing.

1. Unit Testing

In unit testing, the individual classes are tested. It is seen whether the class attributes are implemented as per the design and whether the methods and the interfaces are error-free. Unit testing is the responsibility of the application engineer, who implements the structure.

2. Subsystem Testing

This involves testing a particular module or subsystem and is the responsibility of the subsystem lead. It involves testing the associations within the subsystem as well as the interaction of the subsystem with the outside. Subsystem tests can be used as regression tests for each newly released version of the subsystem.

3. System Testing

System testing involves testing the system as a whole and is the responsibility of the quality-assurance team. The team often uses system tests as regression tests when assembling new releases.


4. Object-Oriented Testing Techniques

Grey Box Testing

The different types of test cases that can be designed for testing object-oriented programs are called grey box test cases. Some of the important types of grey box testing are −

State model based testing − This encompasses state coverage, state transition coverage, and state transition path coverage.

Use case based testing − Each scenario in each use case is tested.

Class diagram based testing − Each class, derived class, association, and aggregation should be tested.

Sequence diagram based testing − This method is used to check the sequence diagrams.
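State-model-based testing can be sketched on a small class: define the state model, then write tests that cover every state and every transition, including invalid ones. The `Door` class is an illustrative example, not from the text:

```python
class Door:
    """Class with an explicit state model: closed -> open -> closed, closed -> locked."""
    def __init__(self):
        self.state = "closed"

    def open(self):
        if self.state != "closed":
            raise ValueError(f"cannot open from {self.state}")
        self.state = "open"

    def close(self):
        if self.state != "open":
            raise ValueError(f"cannot close from {self.state}")
        self.state = "closed"

    def lock(self):
        if self.state != "closed":
            raise ValueError(f"cannot lock from {self.state}")
        self.state = "locked"

# State transition coverage: exercise every legal transition once
d = Door()
d.open();  assert d.state == "open"
d.close(); assert d.state == "closed"
d.lock();  assert d.state == "locked"

# Invalid transition: opening a locked door must be rejected
try:
    d.open()
    assert False, "expected ValueError"
except ValueError:
    pass
print("state model tests passed")
```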

5. Techniques for Subsystem Testing

The two main approaches of subsystem testing are

Thread based testing − All classes that are needed to realize a single use case in a subsystem are integrated and tested.

Use based testing − The interfaces and services of the modules at each level of the hierarchy are tested. Testing starts from the individual classes and proceeds to the small modules comprising classes, gradually to larger modules, and finally to all the major subsystems.

6. Categories of System Testing

Alpha testing − This is carried out by the testing team within the organization that develops the software.

Beta testing − This is carried out by a select group of co-operating customers.

Acceptance testing − This is carried out by the customer before accepting the deliverables.

7. Software Quality Assurance

Software Quality

Schulmeyer and McManus have defined software quality as "the fitness for use of the total software product". Good-quality software does exactly what it is supposed to do, and this is interpreted in terms of satisfaction of the requirement specification laid down by the user.

Quality Assurance

Software quality assurance is a methodology that determines the extent to which a software product is fit for use. The activities that are included for determining software quality are −

Auditing


Development of standards and guidelines

Production of reports

Review of the quality system

Quality Factors

The following external quality factors are used to assess the testability of object-oriented software.

Correctness − Correctness determines whether the software requirements are appropriately met.

Usability − Usability determines whether the software can be used by different categories of users (beginners, non-technical users, and experts).

Portability − Portability determines whether the software can operate on different platforms with different hardware devices.

Maintainability − Maintainability determines the ease with which errors can be corrected and modules can be updated.

Reusability − Reusability determines whether the modules and classes can be reused for developing other software products.

8. Object-Oriented Metrics

Metrics can be broadly classified into three categories: project metrics, product metrics, andprocess metrics.

Project Metrics

Project metrics enable a software project manager to assess the status and performance of an ongoing project. The following metrics are appropriate for object-oriented software projects −

Number of scenario scripts

Number of key classes

Number of support classes

Number of subsystems

Product Metrics

Product metrics measure the characteristics of the software product that has been developed. The product metrics suitable for object-oriented systems are −

Methods per Class − This determines the complexity of a class. If all the methods of a class are assumed to be equally complex, then a class with more methods is more complex and thus more susceptible to errors.

Inheritance Structure − Systems with several small inheritance lattices are more well-structured than systems with a single large inheritance lattice. As a thumb rule, an


inheritance tree should not have more than 7 (± 2) levels, and the tree should be balanced.

Coupling and Cohesion − Modules having low coupling and high cohesion are considered to be better designed, as they permit greater reusability and maintainability.

Response for a Class − This measures the efficiency of the methods that are called by the instances of the class.
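The methods-per-class metric can be computed with simple introspection. The sketch below counts the functions defined directly in each class body; the sample classes are illustrative:

```python
import inspect

class Account:
    def deposit(self, amount): ...
    def withdraw(self, amount): ...
    def balance(self): ...

class AuditLog:
    def record(self, event): ...

def methods_per_class(cls) -> int:
    """Count functions defined directly in the class body (a simple complexity proxy)."""
    return sum(1 for member in vars(cls).values() if inspect.isfunction(member))

for cls in (Account, AuditLog):
    print(cls.__name__, methods_per_class(cls))  # Account 3, then AuditLog 1
```

Under the assumption that all methods are equally complex, Account would be flagged as the more defect-prone of the two classes.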

Process Metrics

Process metrics help in measuring how a process is performing. They are collected across all projects over long periods of time. They are used as indicators for long-term software process improvements. Some process metrics are −

Number of KLOC (Kilo Lines of Code)

Defect removal efficiency

Average number of failures detected during testing

Number of latent defects per KLOC

3.7.9. Configuration Testing

Configuration testing is the method of testing an application with multiple combinations of software and hardware to find out the optimal configurations with which the system can work without any flaws or bugs. In other words, the application under test has to be tested using multiple combinations of software and hardware.

Configuration Testing Example

Let's understand this with an example of a Desktop Application:

Generally, desktop applications are 2-tier or 3-tier. Here we will consider a 3-tier desktop application developed using ASP.NET that consists of a client, a business logic server, and a database server, where each component supports the below-mentioned platforms.

Client platform – Windows XP, Windows 7, Windows 8, etc.
Server platform – Windows Server 2008, Windows Server 2008 R2, Windows Server 2012 R2
Database – SQL Server 2008, SQL Server 2008 R2, SQL Server 2012, etc.

A tester has to test the combinations of client, server, and database with the above-mentioned platforms and database versions to ensure that the application functions properly and does not fail.

Configuration testing is not restricted to software; it is also applicable to hardware, which is why it is also referred to as hardware configuration testing, where we test different hardware devices like printers, scanners, web cams, etc. that support the application under test.


Pre-requisites for Configuration Testing

For any project, before starting with configuration testing, we have to satisfy some pre-requisites:

Creation of a matrix which consists of the various combinations of software and hardware configurations.

Prioritizing the configurations, as it is difficult to test all of them.

Testing every configuration based on the prioritization.
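The matrix and prioritization steps above can be sketched with itertools.product. The platform lists mirror the desktop-application example earlier in this section, and the priority rule (newer components are assumed to be the most used) is an assumption made for this sketch:

```python
from itertools import product

clients = ["Windows XP", "Windows 7", "Windows 8"]
servers = ["Windows Server 2008 R2", "Windows Server 2012 R2"]
databases = ["SQL Server 2008", "SQL Server 2008 R2", "SQL Server 2012"]

# Full matrix: every client/server/database combination
matrix = list(product(clients, servers, databases))
print(len(matrix))  # 3 * 2 * 3 = 18 configurations

def priority(config):
    """Assumed rule: configurations using the newest components rank highest."""
    client, server, db = config
    return (client == "Windows 8") + (server == "Windows Server 2012 R2") + (db == "SQL Server 2012")

# Test the highest-priority configurations first
for cfg in sorted(matrix, key=priority, reverse=True)[:3]:
    print(cfg)
```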

Objectives of Configuration Testing

The objectives of configuration testing are:

Validate the application to determine whether it fulfills the configurability requirements.

Manually cause failures which help in identifying defects that are not efficiently found during testing (e.g., changing the regional settings of the system, such as time zone, language, and date-time formats).

Determine an optimal configuration of the application under test.

Analyze the system performance by adding or modifying hardware resources, such as load balancers, increasing or decreasing memory size, or connecting various printer models.

Analyze system efficiency based on the prioritization: how efficiently the tests were performed with the resources available to achieve the optimal system configuration.

Verify the system in a geographically distributed environment to check how effectively the system performs. For example, with the server at one location and the clients at a different location, the system should work fine irrespective of the system settings.

Verify how easily bugs are reproducible irrespective of configuration changes.

Ensure the traceability of the application items by properly documenting and maintaining versions that are easily identifiable.

Verify how manageable the application items are throughout the software development life cycle.

Types of Configuration testing

There are two types of configuration testing, as mentioned below:

Software Configuration Testing

Hardware Configuration Testing

Software Configuration Testing


Software configuration testing is testing the application under test with multiple operating systems, different software updates, etc. Software configuration testing is very time consuming, as it takes time to install and uninstall the different software used for the testing.

One of the approaches followed to test software configurations is to test on virtual machines. A virtual machine is an environment installed as software that acts like physical hardware, and users have the same feel as with a physical machine. Virtual machines simulate real-time configurations.

Instead of installing and uninstalling the software on multiple physical machines, which is time-consuming, it is always better to install the application/software on a virtual machine and continue testing. This process can be performed with multiple virtual machines, which simplifies the job of a tester.

Software configuration testing can typically begin when

Configurability requirements to be tested are specified

The test environment is ready

The testing team is well trained in configuration testing

The released build has passed unit and integration testing

The typical test strategy followed for software configuration testing is to run the functional test suite across multiple software configurations to verify that the application under test works as desired without any flaws or errors. Another strategy is to ensure the system works correctly by manually failing test cases and verifying the efficiency.

Hardware Configuration Testing

Hardware configuration testing is generally performed in labs, where we find physical machines with different hardware attached to them.

Whenever a build is released, the software has to be installed on all the physical machines where the hardware is attached, and the test suite has to be run on each machine to ensure that the application works correctly.

To perform the above task, a significant amount of effort is required to install the software on each machine, attach the hardware, and run the test suite manually, or even to automate the above-said process.

Also, while performing a hardware configuration test, we specify the type of hardware to be tested, and there are so many computer hardware devices and peripherals that it is quite impossible to run all of them. So it becomes the duty of the tester to analyze what hardware is mostly used by users and base the testing on that prioritization.

Sample Test Cases


Consider a banking scenario to test for hardware compatibility. A banking application that is connected to a note counting machine has to be tested with different models like Rolex, Strob, Maxsell, StoK, etc.

Let's take some sample test cases to test the Note Counting Machine

Verifying the connection of the application with the Rolex model when the prerequisites are NOT installed

Verifying the connection of the application with the Rolex model when the prerequisites are installed

Verifying if the system is counting the notes correctly

Verifying the system's behavior when the notes are counted incorrectly

Verifying the tampered notes

Verifying the response times

Verifying if the fake notes are detected, and so on

The above test cases are for one model, and the same has to be tested with all the models available in the market by setting them up in a test lab, which is difficult.

Configuration testing should be given equal importance to other testing types. Without configuration testing being performed, it is difficult to analyze the optimal system performance, and the software might encounter compatibility issues in the environments it is supposed to run on.

3.7.10. Website Testing

Web testing is the name given to software testing that focuses on web applications. Complete testing of a web-based system before going live can help address issues before the system is revealed to the public.

Challenges in Web application testing:

Web-based systems and applications reside on networks and interoperate with many different:

1. Operating systems,

2. Browsers,

3. Hardware platforms,

4. Communications protocols.

Dimensions of Quality for Web Applications:

Quality is incorporated into a web application as a consequence of good design. Reviews and testing examine one or more of the following quality dimensions.


1. Content 2. Function 3. Structure 4. Usability 5. Navigability 6. Performance 7. Compatibility 8. Interoperability 9. Security.

Testing approach for web application

• The content model for the web app is reviewed to uncover errors.

• The interface model is reviewed to ensure that all use cases can be accommodated.

• The design model for the web app is reviewed to uncover navigation errors.

• The user interface is tested to uncover errors in presentation and/or navigation mechanics

• Functional components are unit tested.

• Navigation throughout the architecture should be tested.

• The web app is implemented in a variety of different environmental configurations and is tested for compatibility with each configuration.

• Security tests are conducted in an attempt to exploit vulnerabilities in the web app or within its environment.

• Performance tests should be conducted.

• The web app is tested by a controlled and monitored population of end users; the results of their interaction with the system are evaluated for content and navigation errors, usability concerns, compatibility concerns, and web app security, reliability, and performance.

Types of Testing

Content testing has three important objectives

1. To uncover syntactic errors (e.g., typos, grammar mistakes) in text-based documents, graphical representations, and other media

2. To uncover semantic errors (i.e., errors in the information presented within each content object)

3. To find errors in the organization or structure of the content that is presented to the end user.

Database testing

• Tests should be designed to uncover errors made in translating the user's request into a form that can be processed by the DBMS.

• Tests that uncover errors in communication between the web app and the remote database must be developed.

• Raw data acquired from the database must be transmitted to the web app server and properly formatted for subsequent transmittal to the client.


• Tests that demonstrate the validity of the transformations applied to the raw data to create valid content objects must also be created.

• Content and compatibility testing are done after the dynamic content object is transmitted to the client in a form that can be displayed to the end user.

User Interface Testing

Interface features such as type fonts, the use of colours, frames, images, borders, tables, and related interface features that are generated as web app execution proceeds should be tested.

When a user interacts with a web app, the interaction occurs through one or more interface mechanisms.

Links:

Each navigation link is tested to ensure that the proper content object or function is reached.

Forms:

At a macroscopic level, tests are performed to ensure that labels correctly identify fields within the form and that mandatory fields are identified visually for the user

• The server receives all information contained within the form and no data are lost in the transmission between client and server

• Appropriate defaults are used when the user does not select from a pull-down menu or set of buttons

• Browser functions (e.g., back arrow) do not corrupt data entered in a form.

Pop-up windows:

A series of tests ensure that

• The pop-up is properly sized and positioned

• The pop-up does not cover the original web app window

• The aesthetic design of the pop-up is consistent with the aesthetic design of the interface

• Scroll bars and other control mechanisms appended to the pop-up are properly located and function as required.

Component level testing

Each web app function is a software component and can be tested using black box testing (BBT) and white box testing (WBT).


Black box techniques include equivalence partitioning and boundary value analysis.

White box testing: path testing.
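Equivalence partitioning and boundary value analysis can be sketched together on a single component. The age-validation function below is a hypothetical web app component under test, assumed to accept ages 18 to 60 inclusive:

```python
def is_valid_age(age: int) -> bool:
    """Hypothetical form validator: accepts ages 18..60 inclusive."""
    return 18 <= age <= 60

# Equivalence partitions: below range, in range, above range
assert not is_valid_age(10)   # invalid partition (too low)
assert is_valid_age(35)       # valid partition
assert not is_valid_age(75)   # invalid partition (too high)

# Boundary value analysis: test at and just around each boundary
for age, expected in [(17, False), (18, True), (19, True),
                      (59, True), (60, True), (61, False)]:
    assert is_valid_age(age) == expected
print("all black box tests passed")
```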

Configuration testing

Configuration testing has to be performed to address the following questions:

Do system security measures (e.g., firewalls or encryption) allow the webapp to execute and service users without interference or performance degradation?

Is the webapp properly integrated with database software?

Is the webapp sensitive to different versions of database software?

Do server-side webapp scripts execute properly?