
Software Testing Intermediate

User Guide

26/03/2012


FOREWORD

SECTION 1 - Hardware/Software Pre-requisites

SECTION 2

Module 1 - Introduction

Module 2 – Fundamentals of Testing

Module 3 – Testing through the lifecycle

Module 4 – Static Techniques

Module 5 – Test Design Techniques

Module 6 – Test Management

Module 7 – Tool Support for Testing

SECTION 3 - Activity Answers


FOREWORD

Printed and Published by:
ILX Group plc
George House
Princes Court
Beam Heath Way
Nantwich
Cheshire CW5 6GD

The Company has endeavoured to ensure that the information contained within this User Guide is correct at the time of its release. The information given here must not be taken as forming part of or establishing any contractual or other commitment by ILX Group plc, and no warranty or representation concerning the information is given.

All rights reserved. This publication, in part or whole, may not be reproduced, stored in a retrieval system, or transmitted in any form or by any means – electronic, electrostatic, magnetic disc or tape, optical disc, photocopying, recording or otherwise – without the express written permission of the publishers, ILX Group plc.

© ILX Group plc 2012


SECTION 1 - Hardware/Software Pre-requisites For the best experience using this multimedia course on a computer, we recommend the following specification:

Operating System: Windows 2000, XP, Vista, Mac OSX

CPU: Intel Pentium II 450MHz or faster processor/PowerPC G3 500MHz or faster processor

RAM: 128MB

Screen resolution: 1024 x 768 or higher

Peripherals: Sound Card & Speakers, Keyboard & Mouse

Software: Flash Player 8 or higher


SECTION 2

Module 1 - Introduction

Module 1 Section 1 - Introduction

S1-1-P1 - Objectives
Welcome to this multimedia training course in software testing. This course has been designed to provide you with a broad overview of the knowledge and skills required to carry out software testing at a professional level. A more specific intention of this course is to provide you with sufficient knowledge to sit and pass the BCS Intermediate Certificate in Software Testing examination. Those who will benefit from this course include:

People operating or intending to operate in an application development or software testing environment, including testers, test analysts, test engineers, test consultants, test managers, user acceptance testers and software developers.

Individuals and organisations requiring more than just a basic understanding of the concepts of software testing.

Individuals wishing to gain a formal qualification in software testing, in particular those people who prefer to study at their own pace.

In this introductory module we will:

Describe the structure of the e-learning course and highlight some of the useful features.

Outline the Intermediate level qualification.

Describe the syllabus on which this course is based and clarify the structure of the learning objectives.

Briefly review the make-up of the Intermediate examination itself.

S1-1-P2 - About this course
Before we begin, let's take a few moments to introduce you to the make-up of the course. The course is divided into 7 modules, each subdivided into a number of sections. Each module focuses on a particular area of study. Specifically these are:

Module 1 – Introduction
Module 2 – Fundamentals of Testing
Module 3 – Testing Through the Lifecycle
Module 4 – Static Techniques
Module 5 – Test Design Techniques
Module 6 – Test Management
Module 7 – Tool Support for Testing

Throughout the modules and associated sub-sections, you'll find interactive tests and questions intended to test your knowledge and to help your learning. Whether you answer correctly or not, all activities provide useful feedback. Also included in the course is a fully functional exam simulator, intended to provide you with plenty of practice in answering questions similar to those you will encounter in the exam proper. You may well have noticed some other features included in the course's interface. These include a very useful interactive glossary of terms as well as a list of useful acronyms. The resources provide access to several of the examination syllabi, along with a comprehensive bibliography.


S1-1-P3 - The Software Testing Intermediate Certificate
The Intermediate Level qualification is aimed at anyone involved in software testing. This includes people in roles such as testers, test analysts, test engineers, test consultants, test managers, user acceptance testers and software developers. The Intermediate Level qualification is also appropriate for anyone who wants more than just a basic understanding of software testing, such as project managers, quality managers, software development managers, business analysts, IT directors and management consultants. To take the BCS Intermediate Certificate in Software Testing examination, candidates must already hold the Foundation level qualification.

S1-1-P4 - Exam Entry Requirements
The BCS Intermediate Certificate in Software Testing syllabus states that the entry criteria for candidates taking the Intermediate Certificate in Software Testing examination are as follows:

Hold the BCS or ISTQB Foundation Certificate in Software Testing qualification

AND EITHER

have at least 18 months experience in software testing

OR

have completed a BCS accredited training course for the Intermediate Certificate in Software Testing

BUT preferably have all three of the above.

S1-1-P5 - Learning Objectives and the Examination
In order to provide appropriate guidance to learners, every section of this course is assigned one of four learning objective levels. These give guidance for how each topic in the syllabus should be approached and how that topic will be examined. These levels are:

K1: Remember

K2: Understand

K3: Apply

K4: Analyse

Throughout this course we will indicate the appropriate level by using this key in the interface, shown here. Let's describe these levels in more detail. K1 - Remember requires that the candidate will recognise, remember and recall a term or concept. For example, the candidate should be able to recognise the definition of failure as:

non-delivery of service to an end user or any other stakeholders or actual deviation of the component or system from its expected delivery, service or results.

S1-1-P6 - Learning Objectives and the Examination continued
Continuing the description of learning objectives outlined in the syllabus, K2 - Understand requires that the candidate should be able to select the reasons or explanations for statements related to the topic, and can summarise, compare, classify and give examples for the testing concept. For example, the candidate should be able to explain the reason why tests should be designed as early as possible:

To find defects when they are cheaper to remove

To find the most important defects first

They should also be able to explain the similarities and differences between integration and system testing:


Similarities: both test more than one component, and both can test non-functional aspects

Differences: integration testing concentrates on interfaces and interactions, and system testing concentrates on whole-system aspects, such as end-to-end processing.

K3 - Apply requires that the candidate should be able to select the correct application of a concept or technique and apply it to a given context (a brief illustrative sketch follows the K4 examples below). For example, the candidate:

Can identify boundary values for valid and invalid partitions

Can select test cases from a given state transition diagram in order to cover all transitions.

K4 - Analyse requires that the candidate can separate information related to a concept or technique into its constituent parts for better understanding, and can distinguish between facts and inferences. For example, the candidate:

Can understand the various options available for risk identification.

Can describe which portions of an incident report are factual and which are inferred from the results.
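To make the K3 level concrete, here is a small sketch of boundary value analysis. It is our own illustration in Python, not an example from the syllabus: the function, its name and the valid range of 18 to 65 are all invented for the purpose.

    # Hypothetical input rule: an age field accepts whole years from 18 to 65
    # inclusive, giving three equivalence partitions: below 18 (invalid),
    # 18..65 (valid) and above 65 (invalid).

    def is_eligible(age: int) -> bool:
        """Return True if the age falls in the valid partition 18..65."""
        return 18 <= age <= 65

    # Boundary value analysis selects the values on and either side of each
    # partition boundary, where defects such as > instead of >= tend to hide.
    boundary_cases = {17: False, 18: True, 19: True, 64: True, 65: True, 66: False}

    for age, expected in boundary_cases.items():
        assert is_eligible(age) == expected, f"age {age} gave an unexpected result"
    print("All boundary tests passed")

Six test cases cover both sides of both boundaries; a candidate at K3 level is expected to be able to derive exactly this kind of list from a stated requirement.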

S1-1-P7 - The Intermediate Exam
The BCS Intermediate Certificate in Software Testing examination will be based on the BCS Intermediate Certificate in Software Testing syllabus. Answers to examination questions may require the use of material based on more than one section of the syllabus. It's important to note that all sections of the syllabus are examinable, and that the ISTQB Foundation Certificate in Software Testing syllabus is also examinable. All required information from the Foundation syllabus is covered in this eLearning course. The examination is a one-hour, closed-book examination (that means that no materials can be taken into the examination room) and will consist of 25 scenario-based multiple choice questions. The pass mark is 15/25. Candidates who have a disability or whose first language is not English will normally be allowed an additional 15 minutes. You must contact your training advisor for more information if you believe this applies to you. We'll provide further explanation and sample questions as we progress through the course.

S1-1-P8 - Activity
Can you match the syllabus learning objective types to their acronyms?

S1-1-P9 - Module Summary
This concludes the introductory module. In this first module we have:

Described the structure of the e-learning course and highlighted some of the useful features.

Outlined the Intermediate level qualification in software testing.

Outlined the learning objectives.

Finally we briefly reviewed the make-up of the Intermediate examination.

The next module is entitled Fundamentals of Testing. To begin this module, return to the main menu by clicking here.


Module 2 – Fundamentals of Testing

Module 2 Section 1 – Why is Testing Necessary?

S2-1-P1 - Module Objectives
Welcome to Module Two, entitled Fundamentals of Testing. This module is broken down into seven sections. This first section is entitled 'Why is testing necessary?' In this first section we will:

Look at some of the reasons why software testing is required

Outline the types of failure and the most common causes

Define what is meant by errors, defects and failures and provide an opportunity to examine some famous or infamous examples of software failure

Review the role of testing in software development, maintenance and operations and go on to look at managing quality in testing and how requirements might affect our approach

Finally we will answer the question 'how much testing is enough?’.

S2-1-P2 - Introduction
Let's begin this module with an overview of the reasons for software testing. Testing is necessary because the existence of faults in software is inevitable. Beyond fault detection, the modern view of testing holds that fault prevention (for example, early fault detection or removal from requirements, designs and so on, through static tests) is at least as important as detecting faults in software by executing dynamic tests. Fault detection (and removal) is the cornerstone of testing, but there are two other objectives overall:

Risk measurement or reduction

Confidence building.

Testers need to know how faults occur because it is their business to detect these faults. Only by understanding our quarry can we prepare effective strategies to detect it. It is generally believed that software can never be made perfect. Consequently, we test software to find as many faults as we can, to ensure that a high quality product with a minimum of faults is delivered. Given that the job of a tester is to find faults, we must understand how faults occur.

S2-1-P3 - Types of Failure
Software systems are an increasing part of life, from business applications, such as banking, to consumer products, such as cars. Most people have had an experience with software that did not work as expected. Software that does not work correctly can lead to many problems, including loss of money, time or business reputation, and could even cause injury or death. Software failures can occur in any system, whatever the application. Some famous or infamous examples can be viewed here. Click on each to see more details:

Therac-25 (1985-1987)

London Ambulance System (1992)

Denver baggage handling system

Taurus (1993)

Ariane 5 (1996)

E-mail buffer overflow (1998)

USS Yorktown (1998)

Mars Climate Orbiter (September 23rd, 1999).

Therac-25 (1985-1987)
Six people were overexposed during radiation treatments for cancer by Canada's Therac-25 radiation therapy machine. Three of these patients were believed to have died from the overdoses. The root cause was a lack of quality assurance, which led to an over-complex, inadequately tested, under-documented system. The system was developed, but the supplier subsequently failed to take adequate corrective action. (Pooley & Stevens, 1999)

London Ambulance System (1992)
A succession of software engineering failures, especially in project management, caused two failures of London's (England) ambulance dispatch system. The repair cost was estimated at £9m, but it is believed that people died who would not have died if ambulances had reached them as promptly as they would have done without the failures.

Denver Baggage Handling System
The Denver airport baggage handling system was so complex (involving 300 computers) that the development overrun prevented the airport from opening on time. Fixing the incredibly buggy system required an additional 50% of the original budget - nearly $200m.

Taurus (1993)
Taurus, the planned automated transaction settlement system for the London Stock Exchange, was cancelled after 5 years of failed development. Losses are estimated at £75m for the project and £450m to customers. (Pooley & Stevens, 1999)

Ariane 5 (1996)
The Ariane 5 rocket exploded on its maiden flight on 4 June 1996 because the navigation package was inherited from the Ariane 4 without proper testing. The new rocket flew faster, resulting in larger values of some variables in the navigation software. Shortly after launch, an attempt to convert a 64-bit floating-point number into a 16-bit integer generated an overflow. The error was caught, but the code that caught it elected to shut down the subsystem. The rocket veered off course and exploded. It was unfortunate that the code that failed generated inertial reference information useful only before lift-off; had it been turned off at the moment of launch, there would have been no trouble. (Kernighan, 1999)

E-mail Buffer Overflow (1998)
Several e-mail systems suffered from a "buffer overflow error" when extremely long e-mail addresses were received. The internal buffers receiving the addresses did not check for length, and allowed their buffers to overflow, causing the applications to crash. Hostile hackers used this fault to trick the computer into running a malicious program in its place.

USS Yorktown (1998)
A crew member of the guided-missile cruiser USS Yorktown mistakenly entered a zero for a data value, which resulted in a division by zero. The error cascaded and eventually shut down the ship's propulsion system. The ship was dead in the water for several hours because a program didn't check for valid input. (Reported in Scientific American, November 1998)
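Two of the failures above lend themselves to a code illustration. The sketch below is our own, in Python, and is not code from either system: the function names, values and the division scenario are hypothetical. It shows the kind of 16-bit range check whose absence doomed Ariane 5, and the input validation whose absence stopped the Yorktown.

    # Hypothetical Python sketch. Python integers do not overflow, so the
    # 16-bit limits are checked explicitly - exactly the check the Ariane 5
    # navigation code needed for its faster-flying inputs.
    INT16_MIN, INT16_MAX = -32768, 32767

    def to_int16(value: float) -> int:
        """Convert to a 16-bit integer, raising rather than silently overflowing."""
        result = int(value)
        if not INT16_MIN <= result <= INT16_MAX:
            raise OverflowError(f"{value} does not fit in 16 bits")
        return result

    def share_per_engine(total: float, engines: int) -> float:
        """Divide a quantity between engines, rejecting a zero count
        (the Yorktown failure was an unvalidated zero in operator input)."""
        if engines <= 0:
            raise ValueError("engine count must be a positive number")
        return total / engines

    print(to_int16(1234.5))          # fine: 1234
    # to_int16(1e6)                  # raises OverflowError instead of corrupting data
    # share_per_engine(5000.0, 0)    # raises ValueError instead of cascading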

Mars Climate Orbiter (September 23rd, 1999)
The $125 million Mars Climate Orbiter is assumed lost by officials at NASA. The failure responsible for the loss of the orbiter is attributed to a failure of NASA's systems engineering process. The process did not specify the system of measurement to be used on the project. As a result, one of the development teams used Imperial measurement while the other used the metric system. When parameters from one module were passed to another during orbit navigation correction, no conversion was performed, resulting in the loss of the craft.

S2-1-P4 - Causes of Software Defects
A human being can make an error or mistake, which produces a defect, fault or bug in the code, in software or a system, or in a document. If the defective code is executed, the system will fail to do what it should do, or do something it shouldn't, causing a failure. Defects in software, systems or documents may result in failures, but not all defects do so. Defects occur because human beings are fallible and because there is often time pressure, complex code, complexity of infrastructure, changed technologies, and/or many system interactions. Of course, failures can be caused by environmental conditions as well: for example, radiation, magnetism, electronic fields and pollution can cause faults in firmware or influence the execution of software by changing hardware conditions.

S2-1-P5 - Activity
A bug or a defect is what?


S2-1-P6 - Errors, Defects and Failures
It may seem pedantic, and some might say testers should be pedantic, but it is essential that the difference in definitions between errors, defects and failures is understood. Misunderstandings between project managers, developers and testers often arise because the concepts of human error, bugs and blame are confused. Blaming people for their mistakes without improving the process, the working environment or project constraints is self-defeating, as it will only upset people on the team. The tester should adopt a consistent set of terms to ensure they communicate with clarity. The tester should approach their role with a dispassionate point of view. They should not 'take a position' on the people or personalities, but they should have an objective and realistic view of the product.

S2-1-P7 - A Failure Is...
A failure is 'a deviation of the software from its expected delivery or service'. Software fails when it behaves in a different way than we expect or require. If we use the software properly and enter data correctly but it behaves in an unexpected way, we say it fails. A failure occurs when software does the 'wrong' thing; if the software does the wrong thing, then the software has failed. This is a judgement made by the user or tester. You cannot tell whether software fails unless you know how the software is meant to behave. This might be explicitly stated in requirements, or you might have a sensible expectation that the software should not 'crash'. Most software does the 'right' thing most of the time. Unfortunately, even a small number of failures may make the software unusable or unacceptable. The defects in the software only become apparent when the software is used in a way that exercises the defective code. The software defects cause the code to fail, and from the failure we can find the fault and fix it. It may be easy or very difficult to diagnose where the defect in the code is from the information we get from the failure.

S2-1-P8 - A Defect Is...
A defect is 'a manifestation of human error in software'. A defect in software is caused by an unintentional action by someone building a deliverable. We normally think of programmers when we talk about software defects and human error. However, defects can be created by managers working on project plans, analysts working on requirements, designers working on designs or testers working on test plans too. Human error can cause defects in any project deliverable. Only defects in software cause software to fail. This is the most familiar situation. Defects are often called faults. Bugs are a universally accepted name for faults; the terms bugs and defects are used interchangeably throughout this training course. All software development activities are prone to error. Defects may occur in all software deliverables, when they are first being written or when they are being maintained. When we test software, it is easy to believe that the defects in the software move. In fact, software defects are static. Once injected into the software, they will remain there until exposed by a test and fixed. There are two ways of detecting defects in software. The first is by reading the code itself. The second is by executing the software; when the software fails, we can infer the existence of a defect.
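The point that a defect sits motionless in the code until a test exercises it can be shown with a tiny hypothetical Python example of our own; the discount rule and its values are invented.

    # Hypothetical defect: the developer wrote > where >= was intended, so
    # orders of exactly 100 get no discount. The defect exists from the moment
    # the code is written; only executing the defective path reveals a failure.

    def discount(order_total: float) -> float:
        """Intended rule: orders of 100 or more get 10% off."""
        if order_total > 100:            # defect: should be >= 100
            return order_total * 0.10
        return 0.0

    for total, expected in [(150.0, 15.0), (99.0, 0.0), (100.0, 10.0)]:
        actual = discount(total)
        verdict = "pass" if actual == expected else "FAIL - defect exposed"
        print(f"discount({total}) = {actual}, expected {expected}: {verdict}")

Reading the code (the first detection route) could catch the faulty comparison directly; the third test case (the second route) infers the defect from the failure it causes.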

S2-1-P9 - An Error Is...
Finally, an error can be described as 'a human action producing an incorrect result'. The error is the activity undertaken by an analyst, designer, developer or tester whose outcome is a defect in the deliverable being produced.


We usually think of programmers when we mention errors, but any person involved in the development activities can make the error which injects a defect into a deliverable. Errors are not just accidents or mistakes that can be cured by "being more careful". Errors are usually the result of an unforeseen event that distracts the developer: overwork, overload, misunderstandings and so on. In fact, many errors can be traced back to some original problem or event. Being more careful might help a little, but many errors occur because of circumstances beyond our control. Errors are not an act of incompetence. Software developers try to do good work, but errors are inevitable because of the process, project constraints or our human fallibility. Errors are inevitable in a complex activity, and testers should regard them as an inevitability in software development. Personal blame is counter-productive, as most errors occur in spite of our best efforts to eliminate them.

S2-1-P10 - Role of testing in software development
Rigorous testing of systems and documentation can help to reduce the risk of problems occurring in an operational environment, and contribute to the quality of the software system, if defects found are corrected before the system is released for operational use. Software testing may also be required to meet contractual or legal requirements, or industry-specific standards.

S2-1-P11 - Testing and Confidence
We know that if we run tests to detect faults and we find faults, then the quality of the product can be improved. However, if we look for faults and do not find any, our confidence is increased. Let's look at a few real-life examples. If we buy a software package, the software supplier may be reputable and have a good test process, so we would normally assume that the product works.

However, we would always test the product to give us confidence that we really are buying a good product. If we buy a car, cooker, off-the-peg suit or other mass-produced goods, we normally assume that they work, because the product has probably been tested in the factory. For example, a new car should work, but before we buy we would always give the car an inspection and a test drive, and ask questions about the car's specification - just to make sure it would be suitable. Essentially, we assume that mass-produced goods work, but we need to establish whether they will work for us. Alternatively, when we buy a kitchen, a haircut or a bespoke suit, we are involved in the requirements process. If we have a kitchen designed, we know that although we were involved in the requirements, there are always some misunderstandings, and some problems due to the imperfections of the materials, our location and the workmanship of the supplier. As such, we would want to be kept closely informed of progress and monitor the quality of the work throughout. So, in effect, if we were involved in specifying or influencing the requirements, we need to test against them.

S2-1-P12 - Activity
The effect of testing is to...?

S2-1-P13 - Testing and Contractual Requirements
When we buy custom-built software, a contract may be in place. A contract will usually state:

The requirements for the software

The price of the software

The delivery schedule and acceptance process.

We don't pay the supplier until we have received and acceptance tested the software. Acceptance tests help to determine whether the supplier has met the requirements. Testing is normally a key activity that takes place as part of the contractual arrangement between the supplier and user of software. Acceptance test arrangements are critical and are often defined in their own clause in the contract. Acceptance test dates represent a critical milestone and have two purposes:

To protect the customer from poor products

To provide the supplier with the necessary evidence that they have completed their side of the bargain.

Large sums of money may depend on the successful completion of acceptance tests. S2-1-P14 - Testing and Other Requirements There are other important reasons why testing may figure prominently in a project plan. Some industries, for example, financial services, are heavily regulated and the regulator may impose rigorous conditions on the acceptability of systems used to support an organisation's activities. Some industries may self-regulate, others may be governed by the law of the land. The Millennium bug is an obvious example of a situation where customers may insist that a supplier's product is compliant in some way, and may insist on conducting tests of their own. For some software, for example, safety-critical software, the type and amount of testing, and the test process itself, may be defined by industry standards. On almost all development or migration projects, we need to provide evidence that a software product is compliant in one way or another. It is, by and large, the test records that provide that evidence. When project files are audited, the most reliable evidence that supports the proposition that software meets its requirements is derived from test records. S2-1-P15 - Testing and Quality Testing is a measurement activity. Testing gives us an insight into how closely the product meets its specification, so in effect it provides an objective measure of its 'fitness for purpose'.

If we assess the rigour and number of tests and count the number of faults found, we can make an objective assessment of the quality of the system under test. When we test, we aim to detect defects. If we do detect defects, then these can be fixed and the quality of the product can be improved.

S2-1-P16 - Activity
Which document would typically provide the most reliable evidence that software is compliant?

S2-1-P17 - How Much Testing is Enough?
'How much testing is enough?' is perhaps the single most perplexing question that a tester must address. Enough in terms of 'how much testing should be planned?' Enough in terms of 'how much testing should be executed?' If testing is squeezed, we need to know whether we have done enough so that the risk of release before testing is finished is acceptably low. The problem is that, in practice, there is no upper limit to how much testing we could do, and there are no formulae for calculating what 'enough' testing is. All too often, the tester is blamed if faults find their way into production. However, when testers ask for additional resources or time, they have no way of justifying this, because the benefits of additional testing cannot be identified before the project is completed, and perhaps not even then. The tester cannot win, so is it a reasonable question to ask in the first place? To answer this we need to look further into what 'enough' testing actually means.

S2-1-P18 - How Much Testing is Enough? (2)
There are an infinite number of tests we could apply, and software is never perfect. We know that it is impossible, or at least impractical, to plan and execute all possible tests. We also know that software can never be expected to be perfectly fault-free (even after testing). If 'enough' testing were defined as 'when all the faults have been detected', we obviously have a problem - we can never do 'enough'. So is it sensible to talk about 'enough' testing? There are objective measures of coverage, or targets, that we can arbitrarily set, and meet. These are normally based on the traditional test design techniques, which we will look at in more detail later in the course. Test design techniques give an objective target. The test design and measurement techniques set out coverage items, and then tests can be designed and measured against these. Using these techniques, arbitrary targets can be set and met. Some industries have industry-specific standards: DO-178B, for example, is a standard for airborne software, and mandates stringent test coverage targets and measures.
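To make 'coverage items' concrete, here is a hypothetical sketch of our own showing a decision (branch) coverage target being set and met; the shipping rule is invented and is not an example from the syllabus or from DO-178B.

    # A function with two decisions. Each decision has two outcomes, so 100%
    # decision coverage requires all four outcomes to be exercised; two
    # well-chosen tests achieve that, out of an effectively unlimited input space.

    def shipping_cost(weight_kg: float, express: bool) -> float:
        cost = 5.0
        if weight_kg > 10:        # decision 1
            cost += 2.0
        if express:               # decision 2
            cost *= 2
        return cost

    # Test 1 takes the True outcome of both decisions; test 2 takes both
    # False outcomes: 4 of 4 outcomes covered = 100% decision coverage.
    assert shipping_cost(12.0, express=True) == 14.0
    assert shipping_cost(3.0, express=False) == 5.0
    print("decision coverage target met with two tests")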

S2-1-P19 - How Much Testing is Enough? (3)
The problem is that for all but the most critical developments, even the least stringent test techniques may generate many more tests than are possible or acceptable within the project budget available. In many cases, testing is time-limited. Ultimately, even in the highest integrity environments, time limits what testing can be done. We may have to rely on a consensus view to ensure we do at least the most important tests. Often the test measurement techniques give us an objective 'benchmark', but possibly there will be an impractical number of tests, so we usually need to arrive at an acceptable level of testing by consensus. It is an important role of the tester to provide enough information on risks, and on the tests that address those risks, so that the business and technical experts can understand the value of doing some tests and the risks of not doing others. In this way, we arrive at a balanced test approach. Deciding how much testing is enough should take account of the level of risk, including technical and business, product and project risks, and project constraints such as time and budget. Testing should provide sufficient information to stakeholders to make informed decisions about the release of the software or system being tested, for the next development step or handover to customers.

S2-1-P20 - The Bugs That Lurk in Our Systems
Let's use some simple analogies to illustrate some of the main principles of testing. The diagram shows our system and the bugs (or defects) that exist in the software. The oval shape is the system and the green blobs are the bugs. But where are the bugs? Of course, if we knew that, we could fix them and go home! If we knew where the bugs were, we could simply fix each one in turn and perfect the system. We can't say where any individual fault is, but we can make some observations at, say, a macroscopic level. Experience tells us a number of things about bugs. Bugs are sociable; they tend to cluster. Suppose you were invited into the kitchen in a restaurant. While you are there, a large cockroach scurries across the floor and the chef stamps on it and kills it, saying "I got the bug". Would you still want to eat there? Probably not. When you see a bug in this context we say "it's infested". It's the same with software faults.

S2-1-P21 - Activity
In percentage terms, when considering making a change to software, what do you think are the chances of introducing a new problem?

S2-1-P22 - Where are the Bugs?
Experience tells us that bugs tend to cluster, and the best place to find the next bug is in the vicinity of the last one found. Off-the-shelf components are likely to have been tested thoroughly and used in many other projects. Bugs found in these components in production have probably been reported and corrected.


The same applies to legacy system code that is being reused in a new project. Bug fixing and maintenance are error-prone: around 50% of changes cause other faults. Have you ever experienced the 'Friday night fix' that goes wrong? All too often, minor changes can disrupt software that works. Tracing the potential impact of changes to existing software is extremely difficult. Before testing, there is a 50% chance of a change causing a problem (a regression) elsewhere in existing software. Maintenance and bug-fixing are error-prone activities. The principle here is that faults do not distribute themselves uniformly through software. Because of this, our test activities should vary across the software, to make the best use of the tester's time.

S2-1-P23 - Testing as a Fishing Net
Let's use a fishing analogy for testing. Let's go fishing for bugs. How do the professionals fish? They use sonar to find the shoal. Then they cast large nets to maximise their chances of catching as many fish as possible. Professional testers do the same. They ask: Where are the greatest risks? Where are most bugs likely to be found? Where have bugs been found in the past? Then the testers create large, systematic test plans to catch as many bugs in as little time as possible. If we are fishing for bugs, we can imagine that the scale of the test effort compares with the size of the net we are going to use. The size of the mesh compares with the rigour and sophistication of the tests. A large mesh will find all the obvious bugs and might give us some overall confidence, but is not likely to find every bug. A small mesh will, like a net, catch virtually everything. A fine mesh would catch dolphins and sardines at the same time - not good. We will use a small mesh in areas that are critical, to make sure that the bugs get detected. High-density tests are expensive, so we use them carefully. We use such tests where we are most concerned that the software must work; for example, where the impact of failure in a particular piece of code is very high.

We also want to use high-density tests in areas where we expect to find lots of bugs, such as areas of complexity or where we've found faults in the past.

S2-1-P24 - Coverage of Business-oriented Tests
Suppose we left all the testing to the users? What do users tend to do when testing? If they are systematic, they will tend to identify all the features of the system under test, and execute tests that cover all the major situations they are concerned about. They tend to build tests which exercise broad paths through the whole system, in the same way a user would execute a variety of system features as part of their daily routine. Although all the major features may be covered, it is unlikely that all situations that the software must cope with will be exercised. Users tend to plan and execute large tests with a large mesh.

S2-1-P25 - Rigorous Tests of the Riskiest Parts of the System
We need to complement the broad-brush tests prepared by users with more focused tests that address the concerns over the more technically critical parts of the software. For example, if there are batch procedures which must run faultlessly every night on many thousands of transactions, this would be regarded as a critical area to be tested, even though the users may never see these features. We must seek advice from the developers as to where the technically critical parts of the software are, and test these areas more rigorously.

S2-1-P26 - Testing the business critical parts
Although the technically critical parts of the system may be tested well, the users would soon realise that only they can identify the business critical parts of the software. Often these may not be the same.


The user, for example, may not regard the batch programs as critical, but in a call-centre application the screens used by telesales operators might be regarded as business critical. If these do not work, the business stops working. Tests of the business critical parts of the software must complement tests of the technically critical parts of the software.

S2-1-P27 - What About the Bugs We Don't Find?
Even though we might take a balanced and thorough test approach, we always expect some bugs to get through. What can we say about these? If we've tested the business critical parts of the software, we can say that the bugs that get through are less likely to be of great concern to the users. If we've tested the technically critical parts of the software, we can say that the bugs that get through are less likely to cause technical failures, so perhaps there's no issue there either. Faults should be of low impact. The bugs remaining in the critical parts of the system should be few and far between. If bugs do get through and are in the critical parts of the software, at least we can say that this is the least likely situation, as we will have eliminated the vast majority of such problems. Such bugs should be very scarce and obscure.

S2-1-P28 - Module Summary
This concludes Module 2, Section 1, entitled 'Why is Testing Necessary?' In this section we have:

Identified reasons why software testing is required

Outlined the types of failure and the most common causes

Defined what is meant by errors, defects and failures

Seen some famous or infamous examples of software failures

Reviewed the role of testing in software development, maintenance and operations, and looked at managing quality in testing and how external requirements guide what we do

Finally we considered the question, 'how much testing is enough?’.

Module 2 Section 2 – What is Testing?

S2-2-P1 - Objectives
Welcome to the second section of Module 2, entitled 'What is Testing?' In this section we will:

Look deeper into the notion of test objectives and reveal some common misconceptions about what testing is

Provide a brief description of the principles of static and dynamic testing

Introduce the V-model and how it applies to early test case preparation

Define the differences between testing and debugging

Finally, we'll take a brief look at the different viewpoints when it comes to testing and the types of testing available.

S2-2-P2 - Introduction
In this module, we'll look deeper into the notion of test objectives. Test objectives can vary with the type of test and with when the testing happens in the project. Early testing, such as requirements and design reviews and early test preparation, finds defects in documents, which in turn can prevent defects occurring in code. The differing viewpoints of developers and testers mean they detect different kinds of defects. This difference in perspective is the foundation of independent testing and makes it effective. In this section we'll also briefly look at testing versus debugging - which are different activities, of course.

S2-2-P3 - Activity
Do you think the following statement is true or false? 'Testing simply involves running a series of tests.'


S2-2-P4 - Dynamic and Static Testing
Testing also includes the reviewing of documents (including source code) and static analysis. Both dynamic testing and static testing can be used as means for achieving similar objectives, and both provide information that can improve the system being tested and the development and testing processes. Reviews of documents, such as requirements, also help to prevent defects appearing in the code.

S2-2-P5 - Static Testing in the Lifecycle
Static tests are tests that do not involve executing software. Static tests are primarily used early in the lifecycle, although all deliverables, including code, can be statically tested. All these test techniques find faults, and because they usually find faults early, static test activities provide extremely good value for money. Activities such as reviews, inspections, walkthroughs and static analysis are all static tests. Static tests operate primarily on documentation, but can also be used on code, usually before dynamic tests are done. Most static testing will operate on project deliverables such as requirements, design specifications or test plans. However, any document can be reviewed or inspected, including project terms of reference, project plans, test results and reports, user documentation and so on. Review of the design can highlight potential risks that, if identified early, can either be avoided or managed. There are techniques that can be used to detect faults in code without executing the software. Review and inspection techniques are effective but labour-intensive; static analysis tools can be used to find statically detectable faults in millions of lines of code.

It is always a good idea to get test plans reviewed by independent staff on the project - usually business people as well as technical experts.

S2-2-P6 - Dynamic Testing in the Lifecycle
Dynamic tests start with component-level testing on routines, programs, class files and modules. Component testing is the standard term for tests that are often called unit, program or module tests. The process of assembling components into testable sub-systems is called integration, and component integration tests aim to demonstrate that the interfaces between components and sub-systems work correctly. System-level tests are split into functional and non-functional test types. Non-functional tests address issues such as performance, security, and backup and recovery requirements. Functional tests aim to demonstrate that the system, as a whole, meets its functional specification. Acceptance and user acceptance tests address the need to ensure that suppliers have met their obligations and that user needs have been met.

S2-2-P7 - Activity - Knowledge Check
From time to time throughout this course, we will ask you to perform a 'knowledge check'. The intention here is to reinforce your acquired knowledge by asking you to recall relevant facts. So let's get started. Using pen and paper, try describing dynamic testing and static testing. Once you have completed this exercise, click on the arrow link on screen to see the glossary definition for each. You might also like to review the last couple of screens.
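A small hypothetical Python fragment of our own illustrates the static/dynamic distinction just described; the function and its faults are invented for the purpose.

    # Fault 1 can be found by reading the code: len() can never be negative,
    # so the guard is dead code and the intended empty-list check is missing.
    # Fault 2 (division by zero) only shows itself when a dynamic test runs.

    def average(values):
        total = sum(values)
        count = len(values)
        if count < 0:              # statically detectable: unreachable branch
            return 0
        return total / count       # dynamically detectable: fails for []

    print(average([2, 4, 6]))      # 4.0 - the happy path hides the defect
    try:
        average([])                # a dynamic test with an empty list
    except ZeroDivisionError:
        print("dynamic test exposed the division-by-zero defect")

A reviewer or a static analysis tool would flag the unreachable branch without ever running the program; the empty-list failure needs the code to be executed.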


S2-2-P8 - V-model - Early Test Case Preparation
Early test case preparation is the natural enhancement to the basic V-model (we'll cover the V-model in more detail later in the course, by the way). Whenever a tester prepares tests, they find that the baseline document to be used (the requirement, functional specification, design and so on) has gaps or inconsistencies. Testers naturally question the baseline, because they must cover all the situations of interest and have a test oracle for the behaviour of the product to be tested. The obvious application of the early test preparation idea is to get acceptance or system testers involved in the reviews of requirements or designs. Where designers and developers question the legibility and feasibility of the ideas in these documents, testers can attempt to prepare tests (at least at a high level). They look specifically at testability. Testability is based on the notions of completeness, correctness and consistency - the issues of most importance in early documents. The nice thing about doing early test case design as a document review approach is that not only does it find faults, it also generates usable test cases. Early testing is cheap to do (you need to do these tasks anyway) and has massive potential to reduce dramatically the number of the most expensive faults.

S2-2-P9 - Different Viewpoints and Types of Testing
Different viewpoints in testing take different objectives into account. For example, in development testing (component, integration and system testing), the main objective may be to cause as many failures as possible, so that defects in the software are identified and can be fixed. In some cases the main objective of testing may be to assess the quality of the software (with no intention of fixing defects), to give information to stakeholders about the risk of releasing the system at a given time.

Maintenance testing often includes testing that no new errors have been introduced during development of the changes. During operational testing, the main objective may be to assess system characteristics such as reliability or availability.

S2-2-P10 - Debugging and Testing
Debugging and testing are different. Testing can show failures that are caused by defects. Debugging is the development activity that identifies the cause of a defect, repairs the code and checks that the defect has been fixed correctly. Subsequent confirmation testing by a tester ensures that the fix does indeed resolve the failure. The responsibility for each activity is very different: testers test and developers debug.

S2-2-P11 - Activity
Can you match the titles to their corresponding definitions?

S2-2-P12 - Module Summary
This concludes Module 2 Section 2, 'What is Testing?' In this section we have:

Looked deeper into the notion of test objectives and revealed some common misconceptions about what testing is

Provided a brief description of the principles of static and dynamic testing

Introduced the V-model and its use in early test case preparation

Outlined the different testing viewpoints and the types of testing available

Concluded the section by defining the differences between testing and debugging.


Module 2 Section 3 – General Testing Principles

S2-3-P1 - Objectives
Welcome to the third section of Module 2, entitled 'General Testing Principles'. A number of testing principles have been suggested over the past 40 years or so. Here, we'll explore the seven most fundamental ones, which provide a guideline common to all testing.

S2-3-P2 - Principles 1 and 2
The first principle we'll consider is a very simple one: 'Testing can show that defects are present, but cannot prove that there are no defects.' Testing reduces the probability of undiscovered defects remaining in the software but, even if no defects are found, it is not a proof of correctness. Principle 2 suggests that 'exhaustive testing of all program paths is usually impossible.' Exhaustive path testing would involve exercising the software through every possible program path. However, even 'simple' programs have an extremely large number of paths. Every decision in code with two outcomes effectively doubles the number of program paths. A 100-statement program might have twenty decisions in it, so might have 1,048,576 paths. Such a program would rightly be regarded as trivial compared to real systems that have many thousands or millions of statements. Although the number of paths may not be infinite, we can never hope to test all paths in real systems.

S2-3-P3 - Principle 2 – Exhaustive Testing is Impossible
Principle 2 goes on to suggest that 'exhaustive testing of all inputs is also impossible.' If we disregard the internals of the system and approach the testing from the point of view of testing all possible inputs, we hit a similar barrier. We can never hope to test the effectively infinite number of inputs to real systems. Even if we used a tool to execute millions of tests, we would expect that the majority of the tests would be duplicates that prove nothing. Consequently, test case selection (or design) must focus on selecting the most important or useful tests from the infinite number possible. As such, we need to select tests that are effective at finding faults but are also efficient. So, instead of exhaustive testing, we use risk and priorities to focus our testing efforts.
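The arithmetic behind principle 2 is easy to reproduce; this tiny sketch of our own simply doubles the path count once per two-way decision.

    # Each independent two-way decision doubles the number of possible paths,
    # so n decisions give up to 2**n paths through the code.
    for decisions in (10, 20, 30):
        print(f"{decisions} decisions -> up to {2 ** decisions:,} paths")
    # 20 decisions -> up to 1,048,576 paths, the figure quoted above.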

S2-3-P4 - Principles 3 and 4
Principle 3 suggests that testing activities should start as early as possible in the software or system development life cycle, and should be focused on defined objectives. This is because the later we find a defect, the more expensive it is likely to be to fix. This is shown by the 'Cost Escalation Model'. Early testing is carried out using reviews, which are discussed later in the course. Principle 4 looks at 'defect clustering'. Remember, we've already seen an illustration of this in an earlier section; hopefully you will recall the story of the bug in the kitchen. A small number of modules contain most of the defects discovered during pre-release testing, or show the most operational failures.

S2-3-P5 - Activity - Principle 5
Is the following statement true or false? Do you think running the same set of tests continually will continue to find new defects?

S2-3-P6 - Principles 6 and 7
Principle 6 suggests that 'testing is context dependent.' In other words, the testing we do must take account of the context. For example, safety-critical software is tested differently from an e-commerce site, because the risk of failure is so much higher. The testing approach for embedded, mainframe or web-based software will also vary.


Obvious really, but we need to accommodate these changes in context. Finally, principle 7 states that 'absence-of-errors is a fallacy'. Although a system may have a good specification, and may have been tested thoroughly against that specification, it is still possible for the system not to meet the needs of its users. This could be because the specification, although of high quality, does not represent the needs of the users. This in turn could be because of misunderstandings, or because the business and business needs have changed. Finding and fixing defects does not help if the system has been based on faulty requirements that do not represent the real users' needs and expectations. This concludes this brief module on General Testing Principles.

Module 2 Section 4 – Application Domains

S2-4-P1 - Objectives
Welcome to the fourth section of Module 2, entitled 'Application Domains'. General testing principle six states that "testing is context dependent". Therefore, when considering how to approach testing, it is essential to take into account the type of system that is being tested. The BCS Syllabus uses the term "application domain" to identify groups of systems that have similar characteristics in the way that they affect testing. The application domains identified by the syllabus are:

mainframe applications

client-server applications

web-based applications

PC-based applications

In this section we will describe the similarities and differences between typical application domains. We will identify and explain the testing challenges associated with these application domains.

S2-4-P2 - The mainframe application domain
Mainframe computers are characterised by having one central processor. This central processor is very powerful and can process a considerable amount of data very quickly. The central processor communicates with the outside world via peripheral processors, often referred to as channels. While the peripheral processors line up tasks and pass data to and from the central processor, data store, terminals and printers, the central processor can concentrate on the main task of processing data. Early mainframes used "dumb terminals" or "green screens" that allowed the input and output of data and left all processing to the mainframe. However, modern mainframes can have a variety of interfaces that may carry out a small amount of processing. For example, a PC interface may carry out some data validation before passing the data to the mainframe. Characteristics of modern mainframe computers are:

very reliable

highly secure

very powerful data processing

expensive

S2-4-P3 - The client-server application domain
Client-server systems are characterised by having layers called "tiers". Each tier serves a purpose. The data tier will contain one or more data servers for storing data. The application tier will contain one or more servers that carry out the main system processes. The user interface tier typically has several networked computers that act as a user interface for the system; these computers are called clients. Other devices, such as printers, may also be connected to the network. What makes client-server applications different from mainframes is that some processing can be carried out by the client (a small sketch after the characteristics list below illustrates this). Characteristics of client-server systems are:


some processing can be carried out by the client

systems can vary in complexity from simple systems such as the one shown in this diagram, to large distributed multi-tiered systems of systems

they are flexible and scalable, and there is potential for significant interoperability between systems

multiple points of failure require redundancy and can lead to unreliability
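A minimal sketch of the tiered split described above may help; it is our own illustration, with no real network - plain Python functions and a dictionary stand in for the client, application and data tiers.

    # Hypothetical three-tier sketch: the client tier does some processing
    # (validation) before handing work to the application tier, which talks
    # to the data tier.
    DATA_TIER = {}                                  # stand-in for a data server

    def application_tier(key: str, value: str) -> str:
        DATA_TIER[key] = value                      # main processing and storage
        return f"stored {key}"

    def client_tier(key: str, value: str) -> str:
        if not key.strip():                         # processing on the client:
            return "rejected locally: empty key"    # bad input never reaches the server
        return application_tier(key.strip(), value)

    print(client_tier("  order-42 ", "3 widgets"))  # stored order-42
    print(client_tier("   ", "lost?"))              # rejected locally: empty key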

S2-4-P4 - The web-based application domain
A web-based system is the same as a client-server system, with the exception that information is passed between parts of the system over the World Wide Web. More significantly, web-based systems may be accessed using networks, and from clients, over which the developers have no control. Characteristics of web-based systems, in addition to those for client-server, are:

similar tiered structure to client-server

information is passed over the World Wide Web

development often has no control over what operating systems, browsers and cohabiting software are running on the client

security issues are especially significant.

S2-4-P5 - The PC-based application domain
PC-based systems are designed to run on stand-alone computers. These computers may be networked and connected to things such as printers and the World Wide Web, and may be designed to use these, but their operation will not necessarily be dependent on them. Transferring information from one PC-based system to another will be manual and will use portable file formats such as PDF, GIF and JPG. Examples of PC-based systems include COTS systems such as word processors, spreadsheets, image processing software, some games and accountancy software. Characteristics of PC-based systems are:

applications are run independently on a PC without connecting directly to another application or server

data transfer between applications uses export and import functions to produce data files (see the sketch after this list)

in the case of COTS systems the developers will have no control over the target environment.
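The export/import style of transfer in the list above can be sketched as follows; the two "applications" and their data are hypothetical, with CSV standing in for any portable file format.

    # One PC-based application exports its data to a portable file format
    # (CSV here); a second, unconnected application imports that file later.
    import csv, io

    def export_accounts(rows):                      # "export" function of app A
        buffer = io.StringIO()
        csv.writer(buffer).writerows(rows)
        return buffer.getvalue()

    def import_accounts(text):                      # "import" function of app B
        return list(csv.reader(io.StringIO(text)))

    portable_file = export_accounts([["2012-03-26", "stationery", "14.99"]])
    print(import_accounts(portable_file))           # app B reads app A's data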

S2-4-P6 - Testing challenges for application domains
Testing challenges for all application domains include:

creating a realistic test environment

creating realistic test data

selecting appropriate test design techniques and performing functional testing

performing performance testing (performance, load, stress) and reliability testing

performing usability testing

performing portability testing

These challenges will apply differently, and to different extents, for each application domain. When considering how these challenges apply to the application domains, remember that each scenario will have a unique set of circumstances. You will need to consider carefully how these, and any other challenges that may exist, apply. We will look at each of these challenges in turn.

S2-4-P7 - Creating a realistic test environment
Creating a realistic test environment will always present a significant challenge.


For mainframe systems the biggest constraint will be cost. Only in rare cases will an organisation be able to afford a full test rig; therefore testing will have to share the live environment. This will require careful configuration to ensure that test scripts are not run against live data and that testing does not use significant system resource when the live system needs it most. For web-based systems, PC-based systems and any client-server systems where the target environment and the performance constraints on the live network are not known, creating a realistic test environment can be all but impossible. For client-server systems where the target environment and the performance constraints on the live network are known, cost and logistics can constrain the creation of a realistic test environment. For example, a distributed client-server system may consist of many clients and servers spread over several sites, yet the test environment may be built on one machine with clients connected directly and locally. This will mean that when the system is migrated from the test environment to the live environment, significant changes and further testing will have to be carried out.

S2-4-P8 - Creating realistic test data and design techniques
Creating realistic test data will be a challenge for every application domain. Typically, mainframe systems handle large amounts of data, so the challenge will be creating the significant amount of data needed to carry out testing. For a new system or a COTS system it is likely that the exact nature of the data will not be known. For any system that handles personal information, laws governing data protection must be considered. Selecting appropriate test design techniques and performing functional testing will depend on a number of factors:

the risk associated with the system under test

the nature of the application (is it function-rich or data-rich?)

what techniques have been effective in the past

Techniques are discussed in detail in section 5, Test design techniques.

S2-4-P9 - Performing performance and reliability testing How significant performance testing (performance, load and stress) and reliability testing will be depends greatly on the type of system. For example, the performance characteristics of a mainframe system may have driven the choice of application domain in the first place. However, what is most likely to constrain the performance of the overall system will be the network of clients and data servers. This network may be complex, distributed and shared with other systems. Therefore modelling the performance and reliability will be a considerable challenge. This will also be true for client-server systems that may also use a complex distributed network. Note that not all performance characteristics will be appropriate for every system. For example, a COTS system designed for image processing will not be designed to handle a load of any kind but must be capable of performing its actions quickly. Not knowing the exact specification of the target environment and what other cohabiting systems exist will mean that the performance of the live system may vary considerably.

S2-4-P10 - Performing usability and portability testing How much usability testing must be performed will depend on the context of the system. For a web-based system that is public facing, such as an Internet retail or social networking website, its success will depend on good usability. A safety critical application, such as a radiography machine or an aircraft navigation system, will require a different type of usability but again good usability will be critical. In cases where a system is being rolled out to a known user base and training will be given, and this may include some web-based applications, usability may not be so critical. However, consideration will always have to be made for the time taken and the number of steps required to complete actions; and consideration will always have to be made for access and disability constraints.

Portability testing will only be relevant in some situations. For example, a PC-based COTS application or a public facing website will have to operate with different operating systems, as well as sometimes different browser platforms. The combinations of these may be vast and special techniques are required to ensure that the system will be portable to likely operating system/browser platform combinations. If the target system is known then portability testing may not be needed. For example, an internal system designed to work on a standard corporate platform may not require portability testing; it will just need testing on that target environment.
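The scale of the portability challenge can be pictured with a short sketch. This is purely illustrative - the platform lists are invented and would in practice come from analysis of the likely user base:

    from itertools import product

    # Hypothetical target platforms for a public-facing website.
    operating_systems = ["Windows XP", "Windows Vista", "Mac OS X", "Linux"]
    browsers = ["Internet Explorer 7", "Internet Explorer 8",
                "Firefox", "Safari", "Opera"]

    # Every operating system/browser combination that could need testing.
    combinations = list(product(operating_systems, browsers))
    print(len(combinations), "combinations to consider")  # 4 x 5 = 20

    # In practice a prioritised subset is selected, for example the most likely pairs.
    likely = [(os_name, browser) for os_name, browser in combinations
              if browser != "Opera"]
    print(len(likely), "combinations selected for testing")

Even with only four operating systems and five browsers there are twenty combinations; real projects face far more, which is why selection techniques are needed.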

S2-4-P11 - Challenges from other types of applications In addition to the challenges that relate to the application domains mentioned in the BCS Intermediate Certificate in Software Testing syllabus, other types of applications may have specific challenges. For example, safety critical systems, where a failure may cause harm or even death to a person, present specific challenges:

risks are severe therefore testing must be very thorough

regulations often require comprehensive documentation

standards often tightly govern testing

Embedded systems contain object code hardwired onto the microchips. Such systems include toasters, washing machines, calculators and watches. Despite the unit cost of each microchip being small, they are usually created in such quantities that if a significant defect was discovered after the microchips had been built, the loss of money could be very significant. For this reason the testing of embedded systems must be very thorough and in some cases more thorough than for safety critical systems. Also for this reason, true embedded systems nowadays are being used less and less.

Even items such as mobile phones, cameras and cars can usually be reprogrammed or upgraded.

S2-4-P12 - Activity Carefully read through the scenario given on the screen. Make a record of any application domains that appear to be present. For each application domain record the main testing challenge the test team will face. When you are satisfied with your answer move to the next page to reveal the answer.

S2-4-P13 - Activity - continued There is no right or wrong answer to this question. The answer given here is a good answer. However, from the small amount of information given in the text, and the fact that everyone's experiences are different, it is likely that your answer will be different. Do not be alarmed by this and don't forget that the BCS Intermediate examination is a multiple-choice examination and you will not be required to come up with your own answer to such a question. You will be required to select the best answer from a list of options.

S2-4-P14 - Summary In the exam you will be given scenarios and will be expected to understand what challenges a test team will face and what approach to testing must be taken. It will be important to think of the generic challenges caused by the application domain. For example, is the system a mainframe, client-server, web-based or PC-based system? Or does the system contain elements of more than one application domain? It will also be important to think of a specific challenge unique to that scenario. For example, perhaps the system is a safety critical system, or perhaps the system is an embedded system.

Module 2 Section 5 – Expected Results

S2-5-P1 - Objectives Welcome to the fifth section of Module 2, entitled 'Expected Results'.


The fundamental test process requires that an outcome, in other words an expected result, must be predicted before the test is run. Without an expected result the test cannot be interpreted as a pass or fail. Without some expectation of the behaviour of a system, there is nothing to compare the actual behaviour with, so no decision on success or failure can be made. This short section outlines the importance of baselines and expected results. S2-5-P2 – Activity - External Specifications and Baselines Are the following statements true or false? Specifications, requirements and such like define what the software is required to do. Without requirements, developers cannot build, testers cannot test. S2-5-P3 - External Specifications and Baselines (2) It looks like the developer uses the baseline in a very similar way to the tester. They both look for features, then conditions and finally a description of the required behaviour. In fact, the early development thought process is exactly the same for both. Some developers might say that they utilize 'use-cases' and other object-oriented methods, but this reflects a different notation for the same thing. Overall, it’s the same sequence of tasks. What does this mean? It means that without requirements, developers cannot build software and testers cannot test. Getting the baseline right (and early) benefits everyone in the development and test process. What about poor baselines? These tend to be a bigger problem for testers than developers. Developers tend not to question baselines in the same way as testers. There are two mindsets at work but the impact of poor baselines can be dramatic. Developers do question requirements but they tend to focus on issues such as how easy (or difficult) it will be to build the features, what algorithms, system services or new techniques will be required?

S2-5-P4 - External Specifications and Baselines (3) How do testers use specifications? First they identify the features to be tested and then, for each feature, the conditions (or the rules) to be obeyed. For every condition defined, there will usually be a different behaviour to be exhibited by the system and this is inferred from the description of the requirement. Without specifications, testers have no independent definition of the behaviour of a system other than the system itself, so they have nothing to 'test against'. By the time a system reaches system test, there is little time to recover the information required to plan comprehensive tests. Testers need the specifications to identify the things that need testing and to compare test results with requirements.

S2-5-P5 - Baseline as an Oracle for Required Behaviour A baseline is a generic term for the document used to identify the features to test and expected results. Whether it's acceptance, system, integration or component testing, there should be a baseline. The baseline says what the software should do. From the baseline, you get your expected results, and from the test, you have your actual results. The baseline tells you what the product under test should do. That's all the baseline is. We've already said that a baseline could be a requirements or design document, a functional specification or program specification. Whatever document is used to prepare expected results, that document is the baseline. In rapid application development methods, the baseline might be people's expectations and in their heads. In a conversion project, the baseline is the regression test. The baseline is where you get your expected results. The next point to be made is the notion of an oracle. An oracle (with a lowercase 'o') is a kind of 'font of all knowledge'. If you ask the oracle a question, it gives you the


answer. If you need to know what software should do, you go back to the baseline, and the baseline should tell you exactly what the software should do, in all circumstances. A test oracle tells you the answer to the question, 'what is the expected result?' If you're doing a conversion job (consider a recent project you may have completed), the old system gives you the oracle of what the new system must continue to do. You're going to convert it without changing any functionality. You must make it 'compliant' without changing the behaviour of the software.

S2-5-P6 - Expected Results If we don't define the expected result before we execute the test, a plausible but erroneous result may be interpreted as the correct result, from a subconscious desire to see the software pass the test. The concern about expected results is that we should define them before we run the tests. Otherwise, we'll be tempted to say that, whatever the system does when we test it, we'll pass the result as correct. That's the risk. Imagine that you're under pressure from the boss ('don't write tests…just do the testing…'). The pressure is immense, so it's easier not to write anything down, to not think what the results should be, to run some informal tests and pass them as correct. Expected results (even when good baselines aren't available) should always be documented.
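The principle of fixing the expected result before execution can be shown in a few lines of Python. This is only a sketch; calculate_discount stands in for any software under test, and the discount rule is invented:

    # The expected result is derived from the baseline BEFORE the test is run.
    # Suppose the specification says: 10% discount on orders over 200.
    expected = 25.0

    def calculate_discount(order_value):
        """Stand-in for the software under test."""
        return order_value * 0.10 if order_value > 200 else 0.0

    actual = calculate_discount(250.0)

    # The comparison, not the tester's gut feel, decides pass or fail.
    print("PASS" if actual == expected
          else "FAIL: expected %s, got %s" % (expected, actual))

The point is that the expected value is written down first; the software's actual behaviour is then judged against it, not the other way round.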

This concludes this short section entitled 'Expected Results'.

Module 2 Section 6 – Exit, Completion, Closure or Acceptance Criteria

S2-6-P1 - Objectives The title of this brief section is 'Exit, Completion, Closure or Acceptance Criteria'. All of these terms represent criteria that are defined before testing starts, to help determine when to stop testing. We normally plan to complete testing within a pre-determined timescale, so that if things go to plan, we will stop preparing and executing tests when we achieve some coverage target. Just as often, however, we run out of time, and in these circumstances, it is only sensible to have some statement of intent to say what testing we should have completed before we stop. The decision to stop testing or continue can then be made against some defined criteria, rather than by 'gut feel'.

S2-6-P2 - Exit Criteria When considering exit criteria, the principle is that given there is no upper limit on how much testing we could do, we must define some objective and rational criteria that we can use to determine whether 'we've done enough'. Management may be asked to define or at least approve exit criteria, so these criteria must be understandable by managers. For any test stage, there will tend to be multiple criteria that, in principle, must be met before the stage can end. There should always be at least one criterion that defines a test coverage target. There should also be a criterion that defines a threshold beneath which the software will be deemed unacceptable. Criteria should be measurable, as it is inevitable that some comparison of the target with reality must be performed. Criteria should also be achievable, at least in principle. Criteria that can never be achieved are of little value.

S2-6-P3 - Exit Criteria (2) Here are some typical examples of exit criteria which are used regularly:

A percentage coverage has been achieved. This is often 100%

All tests have been executed without failure

All faults have been corrected and re-tested

All outstanding incidents waived

All critical business scenarios covered.

Coverage items are usually defined in terms of requirements, conditions, business transactions, code statements, branches or other entities that can be defined objectively and counted.
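As a sketch of how measurable exit criteria might be checked at the end of a test stage (the figures and the criteria themselves are invented for illustration):

    # Measured results at the end of a test stage.
    requirements_total = 120
    requirements_covered = 114
    tests_failed = 4
    severe_incidents_open = 1

    # Exit criteria agreed with management during test planning.
    coverage_met = requirements_covered / requirements_total >= 0.95  # coverage target
    no_failures = tests_failed == 0                                   # all tests pass
    threshold_met = severe_incidents_open == 0                        # acceptability threshold

    for name, met in [("coverage target", coverage_met),
                      ("all tests passed", no_failures),
                      ("severity threshold", threshold_met)]:
        print(name, "-", "met" if met else "NOT met")

    print("Stage may end" if all([coverage_met, no_failures, threshold_met])
          else "More testing (or a management decision) needed")

Because each criterion is counted and compared against a target, the decision to stop is made objectively rather than by 'gut feel'.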


Module 2 Section 7 – Fundamental Test Process

S2-7-P1 - Objectives This section is entitled 'Fundamental Test Process'. In this section we will:

• Describe what is meant by a 'test' and look at what we might hope to achieve by testing, in other words the 'expected result'
• Introduce you to the test process and highlight its fundamental importance

We will go on to look at the test process in some detail and examine the eight main activities associated with it, namely:

• test planning
• test control
• test analysis
• test design
• test implementation
• test execution and recording
• evaluating exit criteria and reporting
• test closure

S2-7-P2 - Introduction It is important to recognise and discuss the fundamental test process. If you have some previous test experience then this might seem very, very basic, but it is an essential foundation of all that follows. In every test you care to mention, there is an underlying process that is being followed, whether you’re testing a single program or a massive integrated package for a multinational which is to be used in many different countries. S2-7-P3 - What is a test? Let's begin by asking a simple question, what is a test? Do you remember the biology or physics classes you took when you were 13 or 14? You were probably taught the scientific method where you have a hypothesis, and in order to demonstrate whether the hypothesis is true (or not), you set up an experiment with a control and a method for executing a test in a controlled environment. Well, software testing is similar to the controlled experiment. In fact you might call your test environment and work area a

test 'lab'. Testing is a bit like the experimental method for software, in that:

• You have an object under test that might be a piece of software, a document or a test plan
• The test environment is defined and controlled
• You define and prepare the inputs - what we're going to apply to the software under test
• You also have a hypothesis, a definition of the expected results

These four elements define the absolute fundamentals of what a test is. To summarise, you have:

• an object under test
• a definition of the environment
• a definition of the inputs
• a definition of expected outputs or result

S2-7-P4 - What is a test? – continued Have you ever been asked to test without requirements or asked to test without having any software? It's not very easy to do, is it? When you run a test, you get an actual outcome. The outcome is normally some change of state of the system under test and outputs (the result). Whatever happens as a result of the test must be compared with the expected outcome (your hypothesis). If the actual outcome matches the expected outcome, your hypothesis is proven. Put simply, that is what a test is.

S2-7-P5 - Expected results When we run a test, we must have an expected result derived from the baseline. Just like a controlled experiment, where a hypothesis must be proposed in advance of the experiment taking place, when a test is run, there must be an expected outcome defined beforehand. If you don't have an expected result, there is a risk that the software does what it does and because you have nothing to compare its behaviour to, you may assume that the software works correctly. If you don't have an expected result at all, you have no way


of saying whether the software is correct or incorrect because you have nothing to compare the software's behaviour with. Boris Beizer suggests that if you watch an eight-year-old play pool – they put the cue ball on the table; they address the cue ball, hit it as hard as they can, and if a ball goes in the pocket, the kid will say, 'I meant that'. Does that sound familiar? What does a professional pool player do? A pro will say, 'x ball in the y pocket'. They address the cue ball, hit it as hard as they can, and if it goes in, they will say, 'I meant that' and you believe them. It's the same with testing. A kiddie tester will run some tests and say 'that looks okay' or 'that sounds right…', but there will be no comparison or notion of comparison with an expected result - there is no hypothesis. Too often, we are expected to test without a requirement or an expected result. You could call it 'exploratory testing' but strictly, it is not testing at all. What we are actually looking for is differences between our expected result and the actual result. If we see a difference, the software may have failed, and this result implies the existence of faults in the software.

S2-7-P6 - The Test Process The fundamental test process consists of the following main activities:

• test planning
• test control
• test analysis
• test design
• test implementation (in other words preparation)
• test execution and recording
• evaluating exit criteria and reporting
• test closure activities

The most visible part of testing is executing tests. But to be effective and efficient, test plans should also include time allocated to actually planning the tests, designing test cases, preparing for execution and evaluating status.

Although logically sequential, the activities in the process may overlap or take place concurrently. By the way, the object under test need not be machine executable. That is, reviews and inspections have a similar process. This might seem like it is stretching the definition of testing, but in principle, static tests have a very similar process. We'll explore this a bit later in the course. The other key point is that testing, as defined in this course, covers all activities for static and dynamic testing, including inspections, reviews and walkthrough activities.

S2-7-P7 - Activity Fill in the missing elements of the fundamental test process activities.

S2-7-P8 - Test planning Test planning comes after test strategy. Whereas a strategy would cover a complete project lifecycle, a test plan would normally cover a single test stage, for example system testing. Test planning normally involves deciding what will be done according to the test strategy but also should say how we're going to do things differently from that strategy. The plan must state what will be adopted and what will be adapted from the strategy. Test planning is the activity of verifying the mission of testing, defining the objectives of testing and the specification of test activities in order to meet the objectives and mission. In other words, when defining the testing to be done, the components to be tested are identified. Whether it is a program, a sub-system, a complete system or an interfacing system, there may be a requirement for additional infrastructure. If we're testing a single component, we may need to have stubs and drivers and other scaffolding, other material in place to help with the test. This is the basic scoping information defined in the plan. Test planning has the following major tasks:

• Determining the scope and risks, and identifying the objectives of testing


• Determining the test approach (techniques, test items, coverage, identifying and interfacing the teams involved in testing, testware)

• Determining the required test resources (for example, people, test environment, PCs and so on)

• Implementing the test policy and/or the test strategy

• Scheduling test analysis and design tasks

• Scheduling test implementation, execution and evaluation and

• Determining the exit criteria.

In prioritising what to test, and deciding in which order tests should be run, the most important objective is to test the high-risk areas first. Risk is discussed later in the course.

S2-7-P9 - Test planning – continued Having identified what is to be tested, we would normally specify an approach to be taken for test design. We could say that testing is going to be done by users, left to their own devices (a possible, but not very sophisticated approach) – or that formal test design techniques will be used to identify test cases. Finally, the approach should describe how testing will be deemed complete. Completion criteria (often described as exit or acceptance criteria) state how management can judge that the testing is completed. To summarise, planning is about:

• the software component(s) to be tested

• additional infrastructure to test the component

• the approach to test design
• the test completion criteria

We will cover the subject of Test Planning in more detail later in the course.

S2-7-P10 - Test control Test control is the ongoing activity of comparing actual progress against the plan, and reporting the status, including deviations from the plan. There are three important points about test control, namely:

• Test Control involves taking actions necessary to meet the mission and objectives of the project.

• In order to control testing, it should be monitored throughout the project.

• Test planning takes into account the feedback from monitoring and control activities.

Test control has the following major tasks. These are:

• measuring and analysing results
• monitoring and documenting progress, test coverage and exit criteria
• initiation of corrective actions
• deciding what to do next

S2-7-P11 - Reviewing the test basis Requirements are the most common cause of problems for testers. If we have untestable requirements, it is impossible to derive meaningful tests. This can be a serious issue. You might ask, 'if we are unable to build test cases, how do the developers know what to build?' This is a perfectly valid question and highlights a real issue in software development. The problem is that it is quite feasible for a developer to just get on with it and build the system as he sees it. But if the requirements are untestable, it's impossible to see if he built the right system. Therefore, early in the test process it will be necessary to review the test basis. The test basis is any document or set of documents from which the requirements of a system can be inferred and on which the tests will be based. Reviewing the test basis is part of the activity known as test analysis.

S2-7-P12 - Test analysis Test analysis is the activity where general testing objectives are transformed into tangible test conditions. A test condition is an item or event of a component or system that could be verified by one or more test cases. For example, a function, transaction, feature, quality attribute, or structural element.


Testers select the features to test and then identify test conditions, building up an inventory of test conditions until we have enough detail to say that we've covered features, and exercised those features adequately. As we build up the inventory of test conditions, we might, for example, find that there are 100 test conditions to exercise in our test. From the test inventory, we might estimate how long it will take to complete the test and execute it. It may be that we haven’t got enough time. The project manager says, 'you’d like to do 100 tests, but we’ve only got time to do 60'. So, part of the process of test specification must be to prioritise test conditions. We might go through the test inventory and label features and test conditions high, medium and low priority. So, test analysis generates a prioritised inventory of test conditions. Because it is unlikely that we will be given enough time to complete every test that we identify, prioritisation should always be part of test specification. Test analysis has the following major tasks:

• reviewing the test basis
• evaluating testability of the test basis and test objects
• identifying and prioritising test conditions based on analysis of test items, the specification, behaviour and structure
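A prioritised inventory of test conditions can be as simple as a list with a priority against each entry, from which the highest-risk items are scheduled first. The conditions below are invented examples:

    # Inventory built during test analysis: (test condition, priority).
    inventory = [
        ("Payment is rejected when the card has expired", "high"),
        ("Order total includes the delivery charge", "high"),
        ("Customer can change the delivery address", "medium"),
        ("Order history is sorted by date", "low"),
    ]

    rank = {"high": 0, "medium": 1, "low": 2}
    inventory.sort(key=lambda item: rank[item[1]])

    # If time only allows 60% of the conditions, the low-priority ones drop off the end.
    affordable = inventory[: int(len(inventory) * 0.6)]
    for condition, priority in affordable:
        print(priority, "-", condition)

The inventory gives the project manager something concrete to negotiate with: if only 60 of 100 conditions can be exercised, it is the low-priority 40 that are dropped.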

S2-7-P13 - Test design Test design is the activity where test conditions are transformed into test cases. A test case is a set of input values, execution preconditions, expected results and execution post-conditions, developed for a particular objective or test condition. Formal test design techniques such as partitioning and boundary value analysis should be used to design test cases. Test design techniques are discussed further in section 5. Test Design also involves designing the test environment set-up and identifying any required infrastructure and tools.

Test design has the following major tasks:

designing and prioritising test cases

identifying necessary test data to support the test conditions and test cases

designing the test environment set-up and identifying any required infrastructure and tools

creating bi-directional traceability between test basis and test cases.
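As a sketch of the formal test design techniques mentioned above, here is boundary value analysis applied to an invented rule, 'accept order quantities from 1 to 100':

    # Valid partition: 1..100. Invalid partitions: below 1 and above 100.
    LOWER, UPPER = 1, 100

    def accept_quantity(qty):
        """Stand-in for the component under test."""
        return LOWER <= qty <= UPPER

    # Boundary value analysis: test each boundary and its nearest neighbours,
    # with the expected result defined for every test case.
    test_cases = [
        (LOWER - 1, False),  # 0   - just below the lower boundary
        (LOWER,     True),   # 1   - on the lower boundary
        (UPPER,     True),   # 100 - on the upper boundary
        (UPPER + 1, False),  # 101 - just above the upper boundary
    ]

    for input_value, expected in test_cases:
        actual = accept_quantity(input_value)
        print(input_value, "PASS" if actual == expected else "FAIL")

Each test case carries its input and its expected result, and the boundary values are chosen because that is where defects most often cluster.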

S2-7-P14 - Test implementation Test implementation is the activity where test procedures or scripts are specified by combining the test cases in a particular order and including any other information that is needed. Additional information includes test setup instructions, the test environment and commencement criteria. Therefore, from the test cases we can identify and create test data, write test procedures and, optionally, automate the tests. Test scripts may be grouped into suites for efficient test execution. A test suite is a popular term for a collection of tests that are to be executed in a group. Other terms you might encounter for collections of tests include cycles, runs, clusters or sessions. Test implementation has the following major tasks:

• developing, implementing and prioritising test cases

• developing and prioritising test procedures, creating test data and, optionally, preparing test harnesses and writing automated test scripts

• creating test suites from the test procedures for efficient test execution.
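A minimal sketch of grouping tests into a suite for execution in a planned sequence, using Python's standard unittest library (the test names and the checks themselves are invented):

    import unittest

    class OrderEntryTests(unittest.TestCase):
        def test_accepts_valid_quantity(self):
            self.assertTrue(1 <= 50 <= 100)

        def test_rejects_zero_quantity(self):
            self.assertFalse(1 <= 0 <= 100)

    # A test suite: a collection of test cases executed as a group, in order.
    suite = unittest.TestSuite()
    suite.addTest(OrderEntryTests("test_accepts_valid_quantity"))
    suite.addTest(OrderEntryTests("test_rejects_zero_quantity"))

    if __name__ == "__main__":
        unittest.TextTestRunner(verbosity=2).run(suite)

The same mechanism scales up: suites can contain other suites, giving the cycles, runs or sessions mentioned above.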

S2-7-P15 - Test execution and recording Test execution is the activity when the environment is set up and the tests are run.


Before the test starts, it is sensible to verify that the test environment has been set up correctly, rather like a 'pre-flight check.' This might be done by running a few trial tests through the system. Some people use terms such as dry-run, pipe-cleaning, pre-test checking. Be aware, it is far too common for the first few days of a system test to be lost because of an environmental problem. Make sure you have a mechanism for checking the environment before you embark on full system or acceptance testing. We go to the trouble of creating test scripts for the sole purpose of executing the test, and we should follow test scripts precisely. The intention is that we don’t deviate from the test script, because all the decisions have been made up front. Test cases are run either manually or by using test execution tools, according to the planned sequence. We verify that actual results meet expected results and raise incident reports if they don’t. As we progress through the tests, we log progress. We log test script passes, failures, and we raise incident reports for failures. The identities and versions of the software under test, test tools and testware are all part of the log. S2-7-P16 - Test execution and recording – continued Test execution has the following major tasks:

Verifying that the test environment has been set up correctly

Verifying and updating bi-directional traceability between the test basis and test cases

Executing test procedures either manually or by using test execution tools, according to the planned sequence

Logging the outcome of test execution and recording the identities and versions of the software under test, test tools and testware

Comparing actual results with expected results

Reporting discrepancies as incidents and analysing them in order to establish their cause (for example, a defect in the code, in specified test data, in the test document, or a mistake in the way the test was executed)

Repeating test activities as a result of action taken for each discrepancy. For example, re-execution of a test that previously failed in order to confirm a fix (confirmation testing), execution of a corrected test and/or execution of tests in order to ensure that defects have not been introduced in unchanged areas of the software or that defect fixing did not uncover other defects (regression testing)
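The execution tasks listed above can be pictured with a small sketch covering a pre-flight environment check, execution against expected results, and logging of outcomes and incidents. Everything here - the check, the tests and the log format - is an invented illustration:

    import datetime

    def environment_ready():
        """Pre-flight check: a trial probe of the test environment."""
        return True  # in reality: check connections, test data, versions, etc.

    # Planned sequence of tests: (test id, function under test, input, expected result).
    tests = [
        ("TC-01", lambda x: x * 2, 4, 8),
        ("TC-02", lambda x: x * 2, 5, 11),  # wrong expectation, to force a failure
    ]

    log, incidents = [], []

    if not environment_ready():
        raise SystemExit("Environment check failed - do not start execution")

    for test_id, func, test_input, expected in tests:
        actual = func(test_input)
        outcome = "pass" if actual == expected else "fail"
        log.append((datetime.datetime.now(), test_id, outcome))
        if outcome == "fail":
            incidents.append("%s: expected %s, got %s" % (test_id, expected, actual))

    print(log)
    print(incidents)

In a real project the log would also record the versions of the software under test, the test tools and the testware, as described above.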

S2-7-P17 - Raising incidents As part of Test Execution, we report test failures or discrepancies as incidents and analyse them in order to establish their cause. These test failures could be due to a number of causes such as a defect in the code, in specified test data, in the test document, or a mistake in the way the test was executed. Test activities are repeated as a result of action taken for each failure. These activities could be:

re-tests (or confirmation tests)

or regression tests.

We will cover re-testing, regression and incident reporting in more detail later in this course.

S2-7-P18 - Activity Here's an activity. Which option is part of the 'implementation and execution' area of the fundamental test process?


S2-7-P19 - Evaluating exit criteria Evaluating exit criteria is the activity where test execution is assessed against the defined objectives. This should be done for every test, regardless of phase. Test logs are checked against the exit criteria specified in test planning, and an assessment is made as to whether more tests are needed, or if the exit criteria specified should be changed. Often, time pressure forces a decision to stop testing. Development slips and testing is 'squeezed' to ensure a timely delivery into production. This is a compromise, but it may be that some faults are acceptable. When time runs out for testing, the decision to continue testing or to release the system forces a dilemma on the project: "Should we release the system early (on time), with faults, or not?" It is likely that if time runs out, some tests will have failed and still be outstanding, and some tests will not have been run at all. It is common that the completion criteria are compromised.

S2-7-P20 - Evaluating exit criteria – continued If you do finish all of your testing and there is still time left over, you might choose to write some more tests, but this isn't very likely. If you do run out of time, there is a third option: you could release the system, but continue testing to the end of the plan. If you find faults after release, you can fix them in the next package. You are taking a risk but there may be good reasons for doing so. However clear-cut the textbooks say completion criteria are, in the real world it's not usually that simple. Only in high-integrity environments does testing continue until the completion criteria are met.

S2-7-P21 - Test reporting The purpose of reporting is to communicate the progress of the project to stakeholders. This includes incident reporting (mentioned earlier and discussed in more detail later in the course) and writing test summary reports. Not only do the stakeholders want to know whether the project is meeting its deadlines, they will

also want to know what is going well and what is not going so well. Test summary reports will be created at predefined milestones during the project. For example, they may be weekly or even daily. The final test summary report is often called a test completion report. The test summary report will report progress against the exit criteria. It will include a comprehensiveness assessment, in other words how thorough the testing has been, together with any deviations from the requirements and any modifications to the exit criteria that have been made. Test reporting is covered in more detail later in the course.

S2-7-P22 - Test closure activities From the documentation and outputs of the test phase, we need to collect data from completed test activities to consolidate experience, testware, facts and numbers. Typical test closure activities include writing a test summary report; this would normally be written for a system or acceptance test phase. The summary provides stakeholders with the evidence they need to make a decision, either to release, to postpone, or in extreme circumstances, to cancel. Checks are also made on which planned deliverables have been delivered, the status of incident reports or change requests, and the documentation deliverables. In line with the system itself, testware is then signed off and archived, and the test environment and the test infrastructure may be backed up for later reuse. Testware is usually handed over to the maintenance organisation. The test team usually have a contribution to make to post-implementation reviews, where lessons are learned for future releases and projects, and the improvement of test maturity.

S2-7-P23 - Deploying the fundamental test process Although the fundamental test process appears as a rigid sequence of steps, in practice steps will overlap. For example,


the activities of test planning and control are ongoing activities that will occur throughout the test process, concurrently with all other activities. There will also be iteration. For example, during the review of the test conditions, or after test execution, it may be decided that further analysis and design of tests is necessary. The fundamental test process is a generic test process that is appropriate for any type of project using any type of system development life cycle. It is equally appropriate for complex, document-heavy safety critical projects as it is for simple, document-light commercial-off-the-shelf (or COTS) products. It is also equally appropriate for waterfall, v-model and iterative system development life cycles. The different types of system development life cycle, and their relationship to the fundamental test process, are explained in section 3.1, Software development models.

S2-7-P24 - Activity 'Comparing actual progress against the plan and reporting the status' is part of which test process activity? 'Comparing actual progress against the plan and reporting the status' is part of the Test Control activities.

S2-7-P25 - Activity Here's an activity. Try rebuilding the fundamental test process diagram by moving the pieces to the correct position.

S2-7-P26 - Summary In this section we have:

Described what is meant by a 'test' and looked at the concept of the ‘expected result'

Looked at the test process, highlighted its fundamental importance, and examined its eight core activities in some detail.

Module 2 Section 8 – The Psychology of Testing

S2-8-P1 - Objectives Welcome to Module 2, Section 8. This section is entitled 'The Psychology of Testing'. In this eighth section we will:

Examine what personality attributes make a successful tester

Describe what is meant by independence and look at the levels associated with it

Look at the 'goals' of the tester and testing as a whole, including making sure the system and associated test work correctly

Look at the relationship that exists between testers and other members of the project team.

Finally, we'll look in some detail at the relationship between testers and developers.

S2-8-P2 - Introduction Testers often find they are at odds with their colleagues. It can be counter-productive if developers think the testers are 'out to get them' or 'are sceptical, nit-picking pedants whose sole aim is to hold up the project'. Less professional managers can convince testers that they do not add value or are a brake on progress. Testers can easily become de-motivated and feel their role is not adding value at all. The psychology of testing may appear counter-intuitive to the rest of the team, but understanding the psychology helps testers to be more effective and demonstrate their commitment to the quality of the product to be tested. Testers who have not been trained may naturally develop a 'destructive' attitude to the products they test but find it hard to reconcile this mentality to the objectives set by their management. Managers may, by using imprecise or inaccurate language, set the wrong goals for their testers.


S2-8-P3 - Independence and Levels of Independence The mindset to be used while testing and reviewing is different to that used while analysing or developing. With the right mindset developers are able to test their own code, but separation of this responsibility to a tester is typically done to help focus effort and provide additional benefits, such as an independent view by trained and professional testing resources. Independent testing may be carried out at any level of testing. A certain degree of independence often makes testing more effective at finding defects and failures. Independence is not, however, a replacement for familiarity, and developers can efficiently find many defects in their own code. Several levels of independence can be defined, including:

Tests designed by the person who wrote the software under test. This obviously has a low level of independence

Tests designed by another person. For example, from the development team

Tests designed by a person from a different organisational group. For example an independent test team

Tests designed by a person or persons from a different organisation or company. An example of this might be outsourcing or certification by an external body.

S2-8-P4 - Activity When testing software, who has the highest level of independence? Try rearranging the list into the correct order.

S2-8-P5 - Goal: Make Sure the System Works So, what should be the goal of a tester? People and projects are driven by objectives and goals.

People tend to align their plans with the objectives set by management and other stakeholders, for example, to find defects or to confirm that software works. Therefore, it is important to clearly state the objectives of testing. Identifying failures during testing may be perceived as criticism against the product and against the author. Like all professional activities, it is essential that testers have a clear goal to work towards. Let’s consider one way of expressing the goal of a tester. If you asked a group of programmers ‘what is the purpose of testing?’ they’d probably say something like, ‘to make sure that the program works according to the specification’, or a variation on this theme. This is not an unreasonable or illogical goal, but there are significant implications to be considered. If your job as a tester is to make sure that a system works, the implication is that a successful test shows that the system is working. We know that finding a fault is bad news all around because it contradicts our goal - it sets us back, so finding a fault is a ‘bad thing’. S2-8-P6 - Goal: Make Sure the System Works - Implications If making sure the system works is our goal, it undermines the job of the testers because it is de-motivating. It seems that the better we are at finding faults, the farther we get from our goal, and this in itself can be de-motivating. It is also destructive because everyone in the project is trying to move forward, but the testers continually hold the project back. Testers become the enemy of progress and not ‘team players’ as such. If a tester wants to meet their goal and is under pressure, the easiest thing to do is to prepare ‘easy’ tests, simply to keep the peace. The boss will then say ‘good job’. It is the wrong motivation because the incentive to a tester becomes 'don’t find faults', in other words, don’t rock the boat. If you’re not effective at finding faults, you can’t have confidence in the product. You won’t know whether the product will actually work.


S2-8-P7 - Goal: Make Sure the System Works - Implications (2) The likely outcome of this false motivation will be that the quality of released software will be low. Why? This is because, if our incentive is not to find faults, we are less likely to be effective at finding them. If it is less likely that we will find them, the number of faults remaining after testing will be higher and the quality of the software will be lower.

S2-8-P8 - Goal: Locate Defects So what is a better goal? A better goal is to locate defects, to be error-centric or focus on defects and use that motivation to do the job. In this case, a successful test is one that finds a defect. If finding defects is your aim - that is, you see your job as a defect detective - then, when you locate a defect, it is a sign that you are doing a good job. It is a positive motivation. It is also constructive, because when 'you' find a defect it won't be found by the users of the product. The defect can be fixed and the quality of the product can be improved. Your incentive will now be to create really tough tests. If your goal is to find defects, and you try and don't find any, then you can be confident that the product is robust. Testers should have a mindset which says finding defects is the goal. When defects are found, it might upset a developer or two, but it will help the project as a whole. If we are effective at finding defects, and then can't find any, we can be confident the system works.

S2-8-P9 - Tester Mindset Testing is often seen as a destructive activity, even though it is very constructive in the management of product risks. Looking for failures in a system requires curiosity, professional pessimism, a critical eye, attention to detail, good communication with development peers and experience on which to base error guessing.

But it also requires excellent communication skills. If errors, defects or failures are communicated in a constructive way, bad feeling between the testers and the analysts, designers and developers can be avoided. This applies to reviewing as well as to testing. The tester and test leader need good interpersonal skills to communicate factual information about defects, progress and risks in a constructive way. For the author of the software or document, defect information can help them improve their skills. Defects found and fixed during testing will save time and money later and reduce risks.


work personally. Bear in mind this quote: 'tread lightly, because you tread on their dreams'. Testers have to be very careful about how to communicate problems to developers. Be impartial; it is the product that is poor, not the person:

Advise them – here are the holes in the road - we don’t want you guys to fall in

Be constructive – this is how we can get out of this hole

Be diplomatic but firm - No, it’s not a feature, it’s a bug!

Sometimes developers think that the bug wouldn’t be there if you didn’t test it. You may have encountered this, ‘it wasn’t there until you tested it’. Testers have to strike a delicate balance. In some ways, it’s like having to deal with a child. Not that developers are children, but you may be dealing a blow to their emotions, so be careful. S2-8-P12 - Communications with Developers Communication problems may occur, particularly if testers are seen only as messengers of unwanted news about defects. In the late 1960s and early 1970s, there was a popular notion that testers should be put into independent teams. If a successful test is one that locates a fault, the thinking went, then the testers should celebrate finding faults, cheering even. Would you think this was a good idea if you were surrounded by developers? Of course not. IBM conducted an experiment some years ago. They set up a test team, who they called the 'black team' because these people were just friends. Their sole aim was to break software. Whatever was given to them to test, they were going to find faults in it. They developed a whole mentality where they were the ‘bad guys’. They dressed in black, with black Stetson hats and long false moustaches all for fun. They were very effective at finding faults in everyone’s work, and had great fun, but they upset everyone whose project they

were involved in. They were most effective, but eventually were disbanded. Technically, it worked fine, but from the point of view of the organisation, it was counter-productive. The idea of a ‘black team’ is cute, but keep it to yourself; it doesn’t help anyone if you crow when you find a fault in a programmer's code. You wouldn’t be happy if one of your colleagues told you your product was poor and laughed about it. It’s just not funny. The point to be made about all this is that the tester’s mindset is critical. Here are some 'positive' ways to improve relationships with others:

Start with collaboration rather than battles - remind everyone of the common goal of better quality systems

Communicate findings on the product in a neutral, fact-focused way without criticising the person who created it, for example, write objective and factual incident reports and review findings

Confirm that the other person has understood what you have said and vice versa.

S2-8-P13 – Code of Ethics Involvement in software testing enables individuals to learn confidential and privileged information. A code of ethics is necessary, among other reasons, to ensure that the information is not put to inappropriate use. Recognizing the ACM and IEEE code of ethics for engineers, the ISTQB states the following code of ethics:

Public - Certified software testers shall act consistently with the public interest

Client and employer - Certified software testers shall act in a manner that is in the best interests of their client and employer, consistent with the public interest

Product - Certified software testers shall ensure that the deliverables they provide (on the products and systems they test) meet the highest professional standards possible


Judgement - Certified software testers shall maintain integrity and independence in their professional judgement

Management - Certified software test managers and leaders shall subscribe to and promote an ethical approach to the management of software testing

Profession - Certified software testers shall advance the integrity and reputation of the profession consistent with the public interest

Colleagues - Certified software testers shall be fair to and supportive of their colleagues, and promote cooperation with software developers

Self - Certified software testers shall participate in lifelong learning regarding the practice of their profession and shall promote an ethical approach to the practice of the profession

S2-8-P14 - Summary This concludes the final section in Module 2, entitled 'The Psychology of Testing'. In this eighth section we have:

Examined what personality attributes make a successful tester

Described what is meant by independence and looked at the levels associated with it

Looked at the 'goals' of the tester and of testing as a whole, including making sure the system and associated tests work correctly

Looked at the relationship that exists between testers and other members of the project team.

Finally, we looked in some detail at the relationship between testers and the development team.

Module 3 – Testing through the lifecycle

Module 3 Section 1 – Software Development Models

S3-1-P1 - Objectives The syllabus is designed to give testers a broad understanding of the principles of testing in as wide an area of applicability as possible. This part of the course will introduce you to the various styles of development methodologies and how testing is fitted into each of these different methods. In this first section of Module 3 we will look at Software Development Models. In this section we will:

Look at some specific examples of development models, including the 'waterfall approach' or sequential model

The iterative incremental models

Rapid Application Development or RAD, and contrast differing approaches

We will also go on to examine the influences on the test process and look at a typical example

Finally, we will explore how test processes are integrated into the development process and highlight the importance of the V-model.

S3-1-P2 - Development Lifecycles There are various development models, the main ones being:

Waterfall or sequential development model

The ‘waterfall approach’ to development, where development is broken up into a series of sequential stages, was the original textbook method for large projects. There are several alternatives that have emerged in the last ten years or so.

Iterative-incremental models

Incremental prototyping is an approach that avoids taking big risks on big projects. The idea is to run a large project as a


series of small, incremental and low-risk projects. Large projects are very risky because by sheer volume, they become complex. There are many people involved, lots of communication through a variety of formal and informal channels, and mountains of documents. The requirements may not be well understood by those involved. This can cause major problems. There are many difficulties associated with running a big project. So, this is a way of just carving up big projects into smaller projects. The probability of project failure is lowered and the consequence of project failure is lessened. Iterative development is the process of establishing requirements, designing, building and testing a system, carried out as a series of smaller developments. Typical examples are: prototyping, Rapid Application Development (RAD), Rational Unified Process (RUP) and agile development models. Regression testing is increasingly important on all iterations after the first one. S3-1-P3 - Development Lifecycles (2) Rapid Application Development, or RAD, is about reducing our ambitions. In the past, it used to be that 80% of the project budget would go on the 20% of functionality that, perhaps, wasn’t that important – the loose ends, bells and whistles. So, the idea with RAD is that you try and spend 20% of the money but get 80% of the valuable functionality and leave it at that. You start the project with specific aims of achieving a maximum business benefit with the minimum delivery. This is achieved by ‘time-boxing’, limiting the amount of time that you’re going to spend on any phase and cutting down on documentation that, in theory, isn’t going to be useful anyway because it’s always out of date. In a way, RAD is a reaction to the Waterfall model, as the Waterfall model commits a project to spending much of its budget on activities that do not enhance the customer’s perceived value for money.

In all of the models of development, there are common stages: eliciting requirements, defining the system, building the system and testing the system.

S3-1-P4 - Activity The modelling approach which attempts to avoid big risk on big projects is known as what incremental modelling? Which development approach attempts to spend 20% of the budget whilst achieving 80% of the functionality?

S3-1-P5 - The Waterfall Model The standard waterfall model for systems development is an approach that goes through a series of sequential stages. There are various descriptions of the model, with different names for the stages. This model is based on the work of many practitioners and emerged in the 1970s. It has been widely used, particularly on larger projects. The standard reference for estimating the cost of the system is the COnstructive COst MOdel (COCOMO), developed by Dr. Barry Boehm while he was at TRW. There have been a number of criticisms of the standard waterfall model, including:

Problems are not discovered until system testing

Requirements must be fixed before the system is designed - requirements evolution makes the development method unstable

Design and code work often turn up requirements inconsistencies, missing system components, and unexpected development needs

System performance cannot be tested until the system is almost coded; under-capacity may be difficult to correct.

The standard waterfall model is associated with the failure or cancellation of a number of large systems. It can also be very expensive. As a result, the software development community has experimented with a number of alternative approaches. Let’s look at those now.


S3-1-P6 - Rapid Application Development - RAD Rapid Application Development has emerged as an alternative to the waterfall method. The simplest way to compare RAD with traditional methods is to consider the three key dimensions of delivery. These are functionality, time and resources. In traditional developments, the functionality to be delivered by a project was fixed using contracts and detailed, signed-off specifications. What this then meant was that timescales and the resources required to deliver were regarded as fixed. Unfortunately, if there was uncertainty in the functionality required, there was no flexibility in project plans to accommodate functionality changes. Consequently, there was dissatisfaction in the final deliverable – either the final system was 'descoped' or the project overran the budget and delivered late. RAD methodologies such as the Dynamic Systems Development Method (DSDM) emerged as a response to inflexible project methodologies. In this case, it is the resources and timescales that are fixed, and the functionality that is variable. What this means is that a project plan is conceived with phases that are fixed in time and resources, and the objective of the project team is to build the most useful functionality in a system in the time available. RAD projects are business-focused so that the functionality that is delivered by these time-boxed projects has a maximum business-value. RAD methodologies are not appropriate for all application developments of course, but they have been adopted by small and large organisations as an alternative to waterfall methods for (typically) smaller projects. S3-1-P7 - Modern iterative methodologies In recent years three interpretations of the iterative model have emerged. These are:

• agile development

• extreme programming

• scrum

Although initially conceived independently, the creators and implementers of these methodologies discovered there was so much in common that in practice these methodologies have merged. Several of these people got together and created the “Manifesto for Agile Software Development:

• Individuals and interactions over processes and tools

• Working software over comprehensive documentation

• Customer collaboration over contract negotiation

• Responding to change over following a plan

That is, while there is value in the items on the right, we value the items on the left more.” Collectively these methodologies are known as "Agile". Critical to the success of agile development is the concept of Test Driven Development, where the tests are created before the code. The objective of development is code which will pass the tests. Further, the tests themselves become the specification, removing the need to keep such things as requirements and designs up to date. Because of the iterative nature of Agile, these tests will be executed many times. Therefore, to be cost-effective, these tests must be automated. True Agile development is a rigorous test-driven discipline using formal automated test execution.
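As an illustration of the test-first idea, here is a minimal sketch in Python. The discount rule, function name and figures are invented for this example rather than taken from any real specification: the tests are written first and act as the executable specification, then just enough code is written to make them pass.

```python
import unittest

# Step 1: write the tests first - they are the executable specification.
class TestDiscountedPrice(unittest.TestCase):
    def test_ten_percent_discount_above_100(self):
        self.assertEqual(discounted_price(200.0), 180.0)

    def test_no_discount_at_or_below_100(self):
        self.assertEqual(discounted_price(100.0), 100.0)

# Step 2: write just enough code to make the tests pass.
def discounted_price(amount):
    return amount * 0.9 if amount > 100.0 else amount

if __name__ == "__main__":
    unittest.main()  # automated, repeatable execution on every iteration
```

Because the suite is automated, it can be rerun on every iteration at negligible cost, which is what makes the test-first discipline economic.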


S3-1-P8 - IBM Rational Unified Process (RUP) One other iterative development methodology of note is the IBM Rational Unified Process, often simply referred to as RUP. Although RUP shares a number of characteristics with agile, it is not generally considered to be lightweight enough to qualify as an agile methodology. RUP iterations may be of one to three months' duration and consist of four project phases:

• inception

• elaboration

• construction

• transition

These phases do not overlap (as happens with many other agile methodologies), and each iteration begins with more planning than would typically be found on an agile project. S3-1-P9 - Activity Drag the titles of the waterfall model to the correct position on the diagram. S3-1-P10 - Iterative versus Waterfall In recent years, the shortcomings of the waterfall methods have become increasingly apparent. A significant number of projects using these methods have overrun on budgets and timescales, ultimately failing to meet requirements. The iterative approach aims to reduce risk by breaking down large projects into less ambitious sub-projects or increments. Each sub-project adopts a waterfall approach, but because the scale is smaller, the complexity and communications overhead are reduced, and thus the risk of failure is reduced. Each incremental project delivers earlier, so the divergences of requirements and delivered systems are also reduced in scale. As each incremental project is delivered, it is assumed that some of the previous increments will be revised as well as new functionality being delivered. In some ways, each increment represents a final deliverable but also represents a prototype for the next phase. Iterative developments have sometimes been called incremental prototyping. This is probably a better name because it represents what actually happens in iterative developments. S3-1-P11 - Testing Within a Lifecycle Model Testers normally break up the testing into a series of building blocks or stages. The hope is that we can use a 'divide and conquer' approach and break down the complex testing problem into a series of smaller, simpler ones.

The analysis and design of tests for a given test level should begin during the corresponding development activity. Test levels can be combined or reorganised depending on the nature of the project or the system architecture. For example, for the integration of a commercial off-the-shelf (or COTS) software product into a system, the purchaser may perform integration testing at the system level (for example, integration to the infrastructure and other systems, or system deployment) and acceptance testing, in other words, functional and/or non-functional, and user and/or operational testing. A series of (usually) sequential stages is defined, each having distinct objectives, techniques, methods and responsibilities. Each test stage addresses different risks or modes of failure. When one test stage is completed, we 'trust' the delivered product and move on to a different set of risk areas. The difficult problem for the tester is to work out how each level of testing contributes to the overall test process. Our aim must be to ensure that there are neither gaps nor overlaps in the test process.


with other systems in its final configuration. The test stages align with this build and integration process. Can we trust the developers to do thorough testing, or the users, or the system testers? We may be forced to rely on less competent people to test earlier, or we may be able to relax our later testing because we have great confidence in earlier tests. All tests need some technical infrastructure, but have we adequate technical environments, tools and access to test data? These can be a major technical challenge. Over the course of the test process, the purpose of testing changes. Early on, the main aim is to find faults, but this aim changes over time into generating evidence that the software works and into building confidence. S3-1-P13 - Staged Testing and Test Objectives Staged testing moves from small to large. Initially, we test each component in isolation. As tested components become available, we test groups of programs – sub-systems; then we combine sub-systems and test the system; then we combine single systems with other systems and test. The objectives at each level of testing are different. Individual components are tested for their conformance to their specification, whereas groups of components are tested for their conformance to the physical design. Sub-systems and systems are tested for conformance to the functional specifications and requirements. S3-1-P14 - Typical Test Process The textbook test strategy presents a layered and staged process, based on a sequence of test activities working from a large number of small-scale component tests to single, large-scale acceptance tests.

Given the staged test process, we define each stage in terms of its objectives. Early test stages focus on low-level, detailed tests that need single, isolated components in small-scale test environments; at that point, this is all that is possible. The trend then moves towards tests that use end-to-end business processes to verify multiple systems working in collaboration. This requires large-scale integrated test environments. S3-1-P15 - Activity Drag the titles of the typical test process into the correct position on the diagram. S3-1-P16 - From Theory to the V-model The four levels used in this syllabus are:

Component (unit) testing

Integration testing

System testing

Acceptance testing.

In practice, a V-model may have more, fewer or different levels of development and testing, depending on the project and the software product. For example, there may be component integration testing after component testing and system integration testing after system testing. Software work products, such as business scenarios or 'use cases', requirements specifications, design documents and code produced during development are often the basis of testing in one or more test levels. References for generic work products include Capability Maturity Model Integration (CMMI) and 'Software Life Cycle Processes' (ISO/IEC 12207). S3-1-P17 - V-model: Waterfall and Locks So how do we integrate the test process into the development process? The V-model of testing maps the test phases back to the deliverables from the corresponding development phases. The principle here is that the tests on the right hand side of the model should use the


project deliverables on the left hand side as baselines. Ideally, the test phases map directly onto the earlier development phases too, but that is often only an approximate fit. The V-model as described here is a convenient way to link development and test activities, and a convenient way for testers to look to the deliverables of the earlier development phases as baselines for their testing. Unfortunately, the time delay between, say, requirements definition and user acceptance testing can be so long that the most significant faults introduced in the development phases (requirements faults) are still not found until the very end of the project. We'll look at this in more detail later in the course. S3-1-P18 - Activity - Knowledge Check Using pen and paper, try answering the following:

1. Name three work-products typically shown in the V-model

2. Name three activities typically shown in the V-model

3. Name three activities usually associated with an iterative model

Once you have completed this exercise, click on each question to view the relevant diagrams. You might also like to review the material covered earlier in this section. S3-1-P19 - The V-model Let's take a brief look at another representation of the V-model. On the left-hand side of the model, the mauve dots and the downward arrow in black represent the early system development phases. The orange dots are reviews/inspections of the deliverables from the early development phases. The mauve arrows represent the inspection and rework feedback. In between the key development phases, test planning takes place. The deliverables from the early development activities have been reviewed, and are then reused to plan the tests on the right hand side, represented by the green arrows.

Verification confirms that 'specified requirements' have been fulfilled, whereas validation confirms that the requirements for a 'specific intended use' have been fulfilled. Whilst verification checks that the system meets the requirements, validation checks that the original business objectives have been met, because the business objectives may not have been correctly documented in the requirements. Reviews and testing are used to check that each work product conforms to the previous development stage, in other words, that each work product meets the requirements for that work product. This is verification. Reviews and testing are also used to ensure that what will be finally delivered will meet its specific intended use. This is validation. The V-model, with its early test design and different test levels, shows verification and validation carried out throughout the system development life cycle and for each development and testing phase. Note that with iterative development, in the same way as we have just described, verification and validation also occur throughout the development life cycle and for each development and testing phase. S3-1-P20 - Challenges faced by testing So what are the testing challenges faced when using each of these development life cycle models? With the waterfall model, testing does not occur until late in the project. Defects introduced in the early development phases will not be found until they are expensive to fix. If there are delays during development, the knock-on effect is that testing time will be squeezed. A typical symptom of a waterfall project is the test team being informed of the requirements and the design features shortly before the system is due to be released. This gives the test team little time to prepare, often leading to ad hoc testing, with unknown risks carried through to live. Some waterfall methodologies even mandate that once the requirements or the design has been signed off they cannot be changed. For software development projects this is clearly very poor practice and in the past has led to project failures. Therefore, for


the waterfall model, the main challenge is that the model does not handle change. The V-model attempts to fix these challenges by introducing verification and validation early in the life cycle. To support this, effective configuration management and change management must be implemented, and that is the new challenge with the V-model. Effective configuration management and change management is expensive and time-consuming. Typically, towards the end of projects, the impending deadline to deliver takes precedence over keeping documentation up to date. The result is that behaviours within V-model projects often end up being similar to those of waterfall projects by project completion. Therefore, the V-model can handle change; however, the main challenge is keeping the cost of change under control and not getting bogged down in documentation activities. The iterative models take on board the fact that change is inevitable and build the development processes around that fact. With iterative models, the cost of change, and hence the overall project cost, can typically be kept low. However, the challenge here is that operating within an iterative development life cycle requires new skills, new tools and a different way of thinking to ensure that development is both cost-effective and delivers a suitable product, rather than just cost-effective. Particularly critical is the communication between business users and the development and testing teams. S3-1-P21 - Development life cycles As mentioned in Section 2, the fundamental test process is a generic test process that is appropriate for any type of project using any type of system development life cycle. If the system development life cycle tells us when testing will occur, the fundamental test process tells us how testing will be carried out. Within the waterfall model the entire test process occurs during the "test" phase. Within the V-model the fundamental test process occurs for every test level. Test planning for a given test level will begin during its corresponding development phase. Test control, test analysis, test design and test implementation will occur

throughout system development, concurrently with other testing and development activities. Test execution, evaluating exit criteria and reporting will occur at the appropriate time according to the test level. The test process will occur in much the same way within the iterative development life cycle. However, as there are many iterations during development, there will also be many iterations of the test process. S3-1-P22 - Other processes with which testing interfaces System development life cycle models show the relationship software testing has with software development, but software testing also interfaces with many other life cycle processes. Project management is responsible for key milestones and achieving the delivery date. Testing will have to negotiate with project management for the time and resources required for testing. Regular meetings between testing and project management will be required to ensure that changes and issues are dealt with appropriately. Testing must work with configuration management and change management to ensure that changes are correctly recorded, statuses are updated, work products are correctly produced and there is no duplication of effort. Configuration management is explained in more detail in Section 6, Test Management. Testing needs to communicate regularly with the business and the user community to ensure that requirements are validated, appropriate risks are mitigated and business objectives are met. Business analysis and technical writing are responsible for the work products that define the system. They will need to provide support during test specification and review test scripts. Technical support is always required to ensure hardware operates correctly and software is installed and configured correctly.


S3-1-P23 - Activity Carefully read through the scenario given on the screen. Make a record of any life cycle methodologies that appear to be present. There may be attributes of more than one. Note any appropriate testing activities to fit with the situation and the development methodology being used. When you are satisfied with your answer, move to the next page to reveal the answer. S3-1-P24 - Activity continued There is no right or wrong answer to this question. The answer given here is a good answer. However, from the small amount of information given in the text, and the fact that everyone's experiences are different, it is likely that your answer will be different. Do not be alarmed by this, and don't forget that the BCS Intermediate examination is a multiple choice examination and you will not be required to come up with your own answer to such a question. You will be required to select the best answer from a list of options. S3-1-P25 – Summary This concludes this section entitled 'Software Development Models'. In this first section of Module 3 we looked at Software Development Models. In particular we:

Examined some specific examples of development models, including the 'waterfall approach' or sequential model

The iterative incremental model and

Rapid Application Development or RAD, and compared the approach of each

We went on to examine the influences on the test process and looked at a typical example

Finally, we explored how test processes are integrated into the development process and the importance of the V-model.

Module 3 Section 2 – Test Levels S3-2-P1 – Objectives Welcome to Section 3.2 entitled ‘Test Levels’. In this section of the course we will:

Compare the different levels of testing

Examine the typical objects of testing and the typical targets of testing, for example functional or structural

Look at the related work products and the people who test

Look at the types of defects and failures to be identified.

Each of the test levels has several attributes such as:

Generic objectives

The work product or products being referenced for deriving test cases, in other words, the test basis

The test object, in other words, what is being tested

Typical defects and failures to be found

Test harness requirements and tool support

Specific approaches and responsibilities.

S3-2-P2 - Component Testing The first dynamic test stage is component testing. Component testing is also known as unit, module or program testing, but most often it's referred to as unit testing. In most cases it's done by programmers or testers with strong programming skills. Unfortunately, in recent years the term 'component-based development' has emerged and can cause some confusion. Fortunately, you won't need to know anything about component-based approaches for the exam. Component testing may be carried out in isolation from the rest of the system, depending on the context of the development life cycle and the system. Component testing may include testing of functionality and specific non-functional characteristics, such as resource behaviour (for example memory leaks) or


robustness testing, as well as structural testing, for example branch coverage. Typically, component testing occurs with access to the code being tested and with the support of the development environment, such as a unit test framework or debugging tool, and, in practice, usually involves the programmer who wrote the code. A typical test basis for component testing includes:

component requirements

detailed design

and code

Typical test objects for component testing include:

components

programs

data conversion/migration programs

and database modules

S3-2-P3 - Activity Component testing is most commonly known as ……….. S3-2-P4 - Relationship of Coding to Testing The way that developers initially test is to alternate testing with the writing of code – they would normally code a little, test a little. To write a program (say 1,000 lines of code), a programmer would probably write the main headings, the structure and the main decisions, but not fill out the detail of the processes to be performed. In other words, they would write a skeletal program with nothing happening in the gaps. When they are satisfied with the overall structure, they start to fill in the gaps, writing a piece of code that, say, captures information on the screen, and then testing it. In other words, they code a little and test a little. Testing mixed with coding is called ad hoc testing. Ad hoc testing does not have a test plan and is not based on formal test case techniques. It is usually

unrepeatable, as programmers can't be sure what they've done – typically they haven't written it down! S3-2-P5 - Relationship of Coding to Testing (2) As far as component testing is concerned, a test covers the whole component. There is a process to follow for this. If a programmer hadn't done any testing up to this point, then the program would almost certainly fail a component test. It makes sense for programmers, in the course of developing a program, to do some informal testing. The usual approach is to fix any faults found instantly so they can move on to write the next piece of code. Ad hoc testing usually finishes with coding, and typical informal criteria for completing ad hoc testing are:

The program is viable – it executes with valid data

The programmer is not conscious of any faults.

Essentially, the programmer stops testing when they believe they have exercised all the code they have written at least once and are not aware of any faults in the component. S3-2-P6 - Component Test Objectives The purpose of component testing is to demonstrate that the component performs as specified in a program specification or a component specification. This is the place where you ensure that all code is actually tested at least once. The code may never be executed in the system test so this might be the last check it gets before going live. This is the opportunity to make sure that every line of code that has been written by a programmer has been exercised by at least one test. The exit criteria can be considered as another objective. That is, the component must be ready for inclusion in a larger system and it’s trusted, to a degree.


S3-2-P7 - Ad hoc testing versus Component Testing As we said a few moments ago, the programmer doesn’t usually have a written plan and doesn't typically log faults when they are carrying out ad hoc testing. However, if you have a formal component test plan, then all incidents should be logged and tracked. To summarise, ad hoc testing:

It is not based on formal test case design

It is not repeatable

It is private to the programmer

Faults found are not documented (but are fixed immediately).

Whereas component testing has a test plan:

It is based on formal test case design

It must be repeatable

It is 'public' to the ‘team’

Faults found are logged before fixing.
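To make the contrast concrete, here is a minimal sketch of a formal, repeatable component test using Python's unittest framework. The component, its one-line specification and the test case IDs are all invented for this illustration:

```python
import unittest

def classify_age(age):
    """Invented component. Spec: reject negative ages; under 18 is
    'minor'; 18 and over is 'adult'."""
    if age < 0:
        raise ValueError("age must not be negative")
    return "minor" if age < 18 else "adult"

class TestClassifyAge(unittest.TestCase):
    """Each case has an identifier and an expected result taken from
    the specification, so the test is repeatable, visible to the team,
    and failures can be logged against a specific case."""

    CASES = [
        ("TC-01", 0, "minor"),
        ("TC-02", 17, "minor"),   # boundary just below 18
        ("TC-03", 18, "adult"),   # boundary at 18
    ]

    def test_documented_cases(self):
        for case_id, age, expected in self.CASES:
            with self.subTest(case_id=case_id):
                self.assertEqual(classify_age(age), expected)

    def test_negative_age_rejected(self):  # TC-04
        with self.assertRaises(ValueError):
            classify_age(-1)

if __name__ == "__main__":
    unittest.main()
```

Because the cases are written down with identifiers and expected results, anyone on the team can rerun exactly the same test, and any incident can be logged against a specific case before it is fixed.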

S3-2-P8 - Analysing a Component Specification The programmer is responsible for preparing the test before writing the code. This test is against the program specification. In order to prepare that test plan, the programmer will need to analyse the component spec to prepare test cases. The key recommendation with component testing is to prepare a component test plan before coding the program. This has a number of advantages and does not increase the workload, as test preparation needs to be done at some point anyway. Among other questions, specification reviewers ask "How would we test this requirement?" If specifications aren't reviewed, the programmer is the first person to 'test' the specification. When reviewing a specification, it's important to look for ambiguities, inconsistencies and omissions. Omissions are usually the hardest to spot. If necessary, request clarification from the author.

In preparing the tests, the programmer may find bugs in the specification itself. If tests are prepared after the code is written, it is impossible for a programmer to eliminate from their mind assumptions that they may have made in coding, so tests will be self-fulfilling. Remember, how to build the program may look obvious, but is it obvious how to test it? If you couldn't test it, can you really build it? After all, how will you demonstrate completion or success? S3-2-P9 - Activity Identify the test types from the list of typical characteristics – whether ad hoc or component test. S3-2-P10 - Informal Component Testing The problem with most software developers is that they don't use coverage tools. Informal component testing is usually based on black-box techniques. The test cases are usually derived from the specification by the programmer, and usually they're not documented. It may be that the program cannot be run except by using drivers and, maybe, a debugger to execute the tests. It's all heavily technical, and the issue is – how will the programmer execute tests of a component if the component doesn't have a user interface? It's quite possible. The objective of the testing is to ensure that all code is exercised, or tested, at least once. It may be necessary to use the debugger to actually inject data into the software to make it exercise obscure error conditions. The issue with informal component testing is – how can you achieve confidence that the code that's been written has been exercised by a test when an informal test is not documented? What evidence would you look for to say that all the lines of code in a program have been tested? Using a coverage measurement tool is really the only way to show that everything has been executed. But did the code produce the correct results? This can only really be checked by tests that have an expected output which in turn can be compared against actual output.
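As a concrete illustration, assuming the widely used Python tool coverage.py (any coverage measurement tool works the same way), the comments below show a typical invocation, followed by the kind of test that pairs an expected result from the specification with the actual output:

```python
# Typical coverage run (shell commands shown as comments; the module
# name is invented for this sketch):
#
#   coverage run -m unittest test_order_module
#   coverage report -m
#
# The "Missing" column of the report lists statement numbers that no
# test executed - candidates for additional test cases. Coverage alone
# proves execution, not correctness; correctness needs an expected
# output compared against the actual output:
import unittest

def order_total(quantity, unit_price):   # invented component under test
    return quantity * unit_price

class TestOrderTotal(unittest.TestCase):
    def test_total_matches_specification(self):
        self.assertEqual(order_total(2, 9.99), 19.98)  # expected vs actual

if __name__ == "__main__":
    unittest.main()
```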


S3-2-P11 - Formal Component Test Strategy In a more formal environment, before code is written, it’s usual to:

Define the test plan before the code is written

Define a target for black and white-box coverage.

Black-box techniques are used early on to prepare a test plan based on the specification. After the code is written, we run the tests prepared using the black-box techniques, measuring the coverage. For example, we might design tests to cover all the equivalence partitions. The tests are prepared and then run. But we could also have a statement coverage target: we want to cover every statement in the code at least once. You get this information by running the tests you have prepared under a coverage tool. When you see the statements that have not been covered, you generate additional tests to exercise that code. The additional tests are white-box tests, although the original tests may be black-box tests.
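A hypothetical sketch of that strategy in miniature: the black-box tests below cover the two partitions named in an invented specification, but never reach a special-case branch, so a coverage tool would flag it and a further, white-box test would be added.

```python
def shipping_cost(weight_kg, express=False):
    """Invented component. Spec used for the black-box tests:
    5.00 up to 2 kg, otherwise 7.50."""
    if express:                  # branch only visible in the code,
        return 15.00             # not in the specification above
    return 5.00 if weight_kg <= 2 else 7.50

# Black-box tests, one per specified equivalence partition:
assert shipping_cost(1) == 5.00      # partition: weight <= 2 kg
assert shipping_cost(3) == 7.50      # partition: weight > 2 kg

# A coverage report would show the 'express' lines unexecuted,
# prompting this additional white-box test:
assert shipping_cost(1, express=True) == 15.00
```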

S3-2-P12 - Integration Testing Integrated systems are harder to test. The greater the scope of integration, the more difficult it becomes to isolate failures to a specific component or system. Systematic integration strategies may be based on the system architecture (such as top-down and bottom-up), functional tasks, transaction processing sequences, or some other aspect of the system or component. In order to reduce the risk of late defect discovery, integration should normally be incremental rather than "big bang". For example, when integrating module A with module B you are interested in testing the communication between the modules, not the functionality of either module. To conduct integration tests, you need to understand the architecture and influence integration planning. It's a technical activity, performed by the developers, much like component testing. If integration tests are planned before components or systems are built, they can be built in the order required for most efficient testing. A typical test basis for integration testing includes:

software and system design

architecture

workflows

and use cases

Typical test objects for integration testing include:

sub-systems

database implementation

infrastructure

interfaces

system configuration

configuration data

S3-2-P13 - Integration and Component Integration Testing Integration and integration testing are not well understood. If you think about it, integration is really the process of assembling a complete system from all of its components. But even a component consists of the assembly of statements of program code. So really, integration starts as soon as coding starts. But when does it finish? Not until a system has been fully integrated with other systems. So integration happens throughout the project. Here, we are looking at integration testing 'in the small', called component integration testing. In the coding stage, you are performing "integration in the very small". Typical strategies for coding and integration include:

Bottom-up

Top-down


Big-bang.

Each is more appropriate in different situations, and the choice is often based on the programming tools used. Testing also affects the choice of integration strategy. We'll look at these in a little more detail here. S3-2-P14 - Stubs and Top-down Testing The first integration strategy is 'top-down'. This means that the highest level component, say a top menu, is written first. This can't be tested because the components that are called by the top menu do not yet exist. To address this, temporary components called 'stubs' are written as substitutes for the missing code. Then the highest level component, the top menu, can be tested. When the components called by the top menu are written, these can be inserted into the build and tested using the top menu component. However, the components called by the top menu may call lower level components that again do not yet exist. So, again, stubs are written to temporarily substitute for the missing components. S3-2-P15 - Drivers and Bottom-up Testing The second integration strategy is 'bottom-up'. This is where the lowest level components are written first. These components can't be tested because the components that call them do not yet exist. So, temporary components called "drivers" are written as substitutes for the missing code. Then the lowest level components can be tested using the test driver. When the components that call our lowest level components are written, these can be inserted into the build and tested in conjunction with the lowest level components that they call. However, the new components themselves require drivers to be written to substitute for calling components that do not yet exist. So, once again, drivers are written to temporarily substitute for the missing components.
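To make stubs and drivers concrete, here is a minimal Python sketch; the invoice component, its collaborator and the figures are all invented for this illustration. The stub stands in for a lower-level component that does not yet exist, while the driver stands in for the higher-level code that would normally call the component under test.

```python
def calculate_invoice_total(net_amount, tax_component):
    """Component under test: adds tax obtained from a collaborator."""
    return net_amount + tax_component.tax_for(net_amount)

class TaxComponentStub:
    """Stub: substitutes for the not-yet-written tax component
    (as in top-down integration), returning a canned answer."""
    def tax_for(self, net_amount):
        return 2.0

# Driver: substitutes for the not-yet-written calling code
# (as in bottom-up integration), exercising the component directly.
if __name__ == "__main__":
    total = calculate_invoice_total(10.0, TaxComponentStub())
    assert total == 12.0, f"expected 12.0, got {total}"
    print("component behaves as expected against the stub")
```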

S3-2-P16 - Mixed Integration Strategy A mixed integration strategy involves some aspects of bottom-up, top-down and big-bang integration. You can see a typical example here. S3-2-P17 - Activity The diagram shown on screen is an example of which type of testing strategy? S3-2-P18 - Activity Can you identify this typical testing strategy? S3-2-P19 - Component Integration Testing As components are component tested, they are ready to be integrated into a larger sub-system, which also must be tested. This is known as component integration testing. Each 'join' or interface between the new component and other components is tested in turn. Component integration testing is associated with the assembly of tested components into sub-systems. These tests are designed to explore direct and indirect interfaces and consistency between components. Because the programmers link components together, integration 'in the small' requires technical knowledge of the programming language used. Tests are based on this knowledge; as such, component integration testing is mainly white-box oriented. Typically, multiple programmers work on components and test their own code. When these components are assembled and component integration tested, it is the first time that their work is checked for consistency. If programmers are not talking to each other or integration designs are not documented, inconsistencies between components may not be spotted until component integration testing happens. These tests are typically performed by an individual within a development group. S3-2-P20 - System and Acceptance Testing System and acceptance testing focuses on the testing of complete systems.


We'll take a few moments to discuss the similarities and differences between system and acceptance testing because, although the differences are slight, they are important. The most significant difference between acceptance and system testing is one of viewpoint. System testing is primarily the concern of the developers or suppliers of software, whereas acceptance testing is primarily the concern of the users of software. In system testing, the test environment should correspond to the final target or production environment as much as possible, in order to minimise the risk of environment-specific failures not being found in testing. Finally, system testing may include tests based on risks and/or on requirements specifications, business processes, use cases or other high-level descriptions of system behaviour, interactions with the operating system, and system resources. S3-2-P21 - Activity - Similarities System tests and acceptance tests are usually on a large scale. True or false? S3-2-P22 - System Testing From the software supplier's point of view, system testing tends to demonstrate that you have met your commitment. This might be in the form of a contract, or meeting a specification for a piece of software which you're going to sell. System testing tends to be inward looking. Because testing is completed by the organisation that developed the software, they will tend to use their own trusted documentation, for example the functional specification which they created. They will go through their baseline document in detail, identify every feature that should be present, and prepare test cases so that they can demonstrate that they comprehensively meet every requirement in the specification. A typical test basis for system testing includes:

system and software requirement specification

use cases

functional specification

risk analysis reports

Typical test objects for system testing include:

system, user and operation manuals

system configuration

configuration data

S3-2-P23 - Functional and Non-functional System Testing There are two aspects of system testing – functional testing and non-functional testing. This subject frequently occurs in the examination, so it's worth covering here. The simplest way to look at functional testing is that users will normally write down what they want the system to do, what features they want to see and what behaviour they expect to see in the software. These are the functional requirements. The key to functional testing is to have a document stating these things. Once we know what the system should do, then we have to execute tests that demonstrate that the system does what it says in the specification. Within system testing, fault detection and the process of looking for faults is a major part of the test activities. It's less about being confident and more about making sure that the bugs have been removed. Non-functional testing is more concerned with technical requirements – performance, usability, security and other associated issues. These are things that, very often, users don't document well. It's not unusual to see a functional requirement document containing hundreds of pages and a non-functional requirement document of one page. Requirements are often a real problem for non-functional testing. Another way to look at non-functional testing is to focus on how the system delivers the specified functionality. Functional testing is


about what the system must do, whereas non-functional testing is about how it delivers that service. Is it fast? Is it secure? Is it usable? Although system testing can be divided into functional and non-functional testing, system testers should also investigate data quality characteristics. How a system functions, and how well the non-functional attributes perform, often depends on data quality. For example, a system that is heavily data driven, such as a portal into an art gallery or a content management system, will need the quality of the data checked to be sure that the code is performing as expected. S3-2-P24 - Activity Users are primarily concerned with which type of testing? S3-2-P25 - Acceptance Testing Acceptance testing takes place from a user viewpoint. The system is treated as a great big black box and looked at from the outside. Interest is not in how it was built, but in how we will use it. How does the system meet our business requirements? Simplistically, does the system help me do my job as a user? If it makes my life harder, I'm not going to use it, no matter how clever or sophisticated it is. Users will test the features that they expect to use, not every single feature offered, either because they don't use every feature or because some features are really not very important to them. The tests are geared around how the system fits the work to be done by the user, and that may only use a subset of the software. A typical test basis for acceptance testing includes:

user requirements

system requirements

use cases

business processes

risk analysis reports

Typical test objects for acceptance testing include:

business processes on fully integrated system

operational and maintenance processes

user procedures

forms

reports

configuration data

S3-2-P26 - Acceptance Testing (2) It is usual to assume at acceptance testing that all major faults have been removed by the previous component, link and system testing, and that the system 'works'. In principle, if earlier testing has been done thoroughly, then it should be safe to assume the faults have been removed. In practice, earlier testing may not have been thorough, and acceptance testing can become more difficult. When we buy an operating system, say a new version of Microsoft Windows, we will probably trust it if it has become widely available. But will we trust that it works for our usage? If we're an average user and we're just going to do some word-processing, we'll probably assume that it is okay. If, on the other hand, we are a development shop and we're writing code associated with device drivers, then the operating system needs to be pretty robust. The presumption that it works is no longer safe because we're probably going to try and break it. That's part of our job. So reliability, or the assumption of whether or not something works, has a personal perspective. S3-2-P27 - Acceptance Testing (3) Acceptance testing is usually on a smaller scale than the system test. Textbook guidelines say that functional system testing should be about four times as much effort as acceptance testing. You could say that for every user test, the suppliers should have run around four tests.


On some occasions, the acceptance test is not a separate test but a subset of the system test. Suppose we hire a company to write software on our behalf. The company developing the software will run system testing in their own environment. We will then ask them to come to our test environment and rerun a subset of their tests, and we will call this our acceptance test. Acceptance may not require its own test and can sometimes be based on satisfactory conduct of system tests. This happens frequently. S3-2-P28 - Design-based Testing Design-based testing tends to be used in highly technical environments. For example, suppose a company wants to replace its electronic mail server software with a new package. We could say that a technical test of the features will serve as an acceptance test, as it is not appropriate to do a 'customer' or 'user' test, because the users' client software is unaffected. It would be more appropriate to run a test in the target environment (where it will eventually need to run). Given that system testing is mainly black-box, it relies upon design documents, functional specs and requirements documents for its test cases. We often have a choice of how we build the test. It's worth referring back to the "V" model, where an activity exists to write requirements, then functional specs, and then do design. When system testing takes place, typically it's not just the functional spec that is used. Some tests are based on the design. A supplier providing a custom-built product should not ignore the business requirements, because if these aren't met then the system won't be used. So, frequently, some tests may be based on the business requirements as well. Tests are rarely based on the design alone. Let's think about what the difference is between testing against these various baselines (requirements, functional specs and design documents). Testing against the design document is using a lower level, more technically-oriented document. You could scan the document and identify all of the features that have been built. Remember that it is not necessarily what

the user has asked for. You can see from the design document what conditions, what business rules, what technical rules have been used, and these rules can be tested. A design-based test is very useful because it can help demonstrate that the system works correctly. S3-2-P29 - Activity - Design-based Testing If tests are based on a design, it will tend to be oriented towards what was built and the technology used rather than what was asked for. True or false S3-2-P30 - Requirements-based Testing Acceptance tests and some system tests are usually requirements-based. The requirements document describes what the users want. Reviewing the requirements document should highlight which features should be in the system. And it should say which business rules and which conditions should be addressed. It provides information about what we want the system to do. If it can be demonstrated that the system does all these things, then the supplier has done a good job. But testing may show that actually there are some features that are missing in the system. If we test according to the requirements document, it will be noticeable if things are missing. Also, the test is not influenced by the solution. We don’t know and we don’t care how the supplier has built the product. It’s being tested as if it were a black-box, testing it the way that it should be used and not testing it the way it was built. S3-2-P31 - Requirements versus Specifications Is it always possible to test from the requirements? No. Quite often, requirements are too high-level or we don’t have access to them. If it’s a package, the requirements may be at too high a level. In reality, requirements documents are often too vague to be the only source of


information for testing. One of the reasons for having a functional spec is to provide that detail. The requirements are documented in a way that the users understand. The functional spec, which is effectively the response from the supplier, gives the detail, and is usually structured in a different way. In principle, every feature in the functional spec should reflect how it meets the requirements. Quite often, you'll see two documents delivered – the functional spec and a table of cross-references between each feature of the system and the customer requirement it meets. In principle, that's how gaps are spotted. Whoever wrote the table has, by default, checked that all of the features are covered, and that is where the real value lies. But many functional specs will not have a cross-reference table to the requirements. This is a real problem because these documents can be large – 50, 100 or even 500 pages. S3-2-P32 - Problems with Requirements Ill-specified requirements can present real problems. When the supplier delivers the system and it is tested against the requirements, if there is insufficient detail the supplier is going to say, 'You never said the system had to do that – you didn't specify it.' The supplier will expect payment for a product that the users don't think works. The supplier contracted to deliver a system that met the functional specs, not the business requirements. You have to be very careful. Typically, a requirements statement says, "This is what we intend to do with the software" and "This is what we want the software to do." It doesn't say how it will do it. It's a wish list, and that's different from a statement of actuality. It's intent, not implementation. From this 'wish list' all of the features that need to be tested should be identified. Take an example of a requirements statement that says 'The system must process orders'. How will it process orders? Well, that's up to the supplier. When the user writes the requirement, many details might be assumed to exist. But the supplier won't necessarily have the

same assumptions as the user – they will deliver what they think will work. S3-2-P33 - Problems with Requirements (2) A lot of assumptions are made when detail isn't specified in requirements. A lot of low-level requirements, such as field validation and steps of the process, don't appear in a requirements document. Let's look at an example. The SAP system, across all its many integrated packages, is incredibly complicated. Suppose you have a process called 'The Order Process'. SAP may have 40 screens that support that process. Now, nobody uses all 40 screens to process an order. But SAP could be configured to do just that. How do you specify requirements for your 'simple' process? You have to specify a SAP configuration that identifies only those screens that are relevant to you. All the extra screens are not required. But if you get the specifications wrong, you might exclude a screen that SAP actually needs to capture some data to process the order correctly. S3-2-P34 - Business Process-based Testing The alternative to using the requirements document is to say, from a user's point of view, "We don't know anything about technology or the package itself, we just want to run our business and see whether the software supports our activities." Testing from the viewpoint of a business process is no different from the unit testing of code. Testing code is white-box testing. In principle, you find a way of modelling the process, whether it's software or the business; you draw a graph, trace paths, and say that covering the paths gives confidence that enough testing has been done. From a business perspective, the most important processes are identified because you don't have time to do everything. Which processes do we need to feel confident about in order to give us confidence that the system will be correct? Typically, the users would construct a diagram of how they want a process to function.


The business may have an end-to-end process where there's a whole series of tasks to follow, but within that there are decisions causing alternative routes. To test this, begin with the most straightforward case. Then start adding other paths to accommodate other cases. A diagram can be drawn of the business process showing the decision points. In other words, you can graph the process. When testers see a graph, as Beizer says, "you cover it". In other words, you make up test cases to take you through all of the paths. When you've covered all the decisions and activities within the main processes, then you can have some confidence that the system supports our needs. S3-2-P35 - Business Process-based Testing (2) Testing business processes is a much more natural way for users to define a test. If you ask users to do a test, give them a functional spec and sit them at a terminal, they wouldn't know where to start. If you say, construct some business scenarios through your process and use the system, they are far more likely to be capable of constructing test cases. This works at every level, whether it's the highest level business processes or the detail of how to process a specific order type. Even if the order is processed manually, the decisions taken, whether by a computer or a human being, can be diagrammed.
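As a sketch of the "graph it, then cover it" idea, the snippet below models a tiny, invented order process as a directed graph, enumerates every path from start to finish, and reports which paths the scenarios run so far have covered:

```python
# Invented order process: each step maps to the possible next steps.
PROCESS = {
    "receive order": ["check stock"],
    "check stock": ["ship", "back-order"],  # decision point
    "ship": ["invoice"],
    "back-order": ["invoice"],
    "invoice": [],
}

def all_paths(step, path=()):
    """Yield every start-to-finish route through the process graph."""
    path = path + (step,)
    if not PROCESS[step]:
        yield path
    for nxt in PROCESS[step]:
        yield from all_paths(nxt, path)

# Business scenarios executed so far (just the straightforward case):
covered = {("receive order", "check stock", "ship", "invoice")}

for p in all_paths("receive order"):
    status = "covered" if p in covered else "NOT covered"
    print(" -> ".join(p), "...", status)
```

Running this immediately shows that the back-order route has no test scenario yet, which is exactly the kind of gap path coverage of a business process is meant to expose.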

S3-2-P36 - User Acceptance Testing We have this notion of 'fit' between the system and the business. The specific purpose of the user acceptance test, or UAT, is to determine whether the system can be used to run the business. It's usually planned and performed by, or on behalf of, the users. The users could handle everything. Alternatively, provide the users with a couple of skilled testers to help them construct a test plan. It's also possible to have the supplier or another third party carry out the user acceptance test on behalf of the users as an independent test. This can't be done without involving the users. They must contribute to the design of the test and have confidence that the test is representative of the way they want to work. There must be confidence in the approach. The biggest risk with an independent test group is that the tests don't do what the user would do. S3-2-P37 - User Acceptance Testing (2) Here's an example. Suppose that you went to a local dealer to look for a car. There's a model that you are interested in, the colour is good, and the mileage looks genuine. The car dealer walks up to you and says, "Hello sir – can I help you?" and you say "I like the look of this car and I'd like to take it for a test drive." What if the car dealer says, "No, no, no – you don't want to do that. I've done that for you." Would you buy the car? Probably not. Assuming that the car dealer is trustworthy, why wouldn't you buy a car from a dealer that said he'd tested the car out on your behalf? It is probably because his requirements may be different from yours. If he does the test – if he designs the test and executes the test – it's no guarantee that you'll like it. Software testing differs from this example in one respect. Driving a car is a very personal thing – the seat's got to be right, the driving position, the feel, the noise and so on. It's a personal preference. With software, you just want to make sure that it will do the things that the user wants. So, if the user can articulate what these things are, potentially, you can get a third party to do at least part of the testing. The fundamental point here is that the users have to have confidence that the tests represent the way they want to do business. Packages are a problem because there is no notion of system testing; you only have acceptance testing. Even if it is a package that you are only going to configure (not write software for), UAT is the only testing that can take place.


S3-2-P38 - User Acceptance Testing (3) UAT is usually your last chance to perform validation. In other words, is it the right system for me? The idea of user acceptance testing is that users can do whatever they want. It is their test. You don’t normally restrict users, but they often need assistance to enable them to test effectively. Another approach to user acceptance testing is using a model office. A model office uses the new software in an environment modelled on the business. If, for example, this is a call centre system, then 5 or 6 workstations may be set up, with headsets and telephone connections and staffed by users. The test is then run using real examples from the business, testing the software and processes with the people who will be using it. You will also find out whether their training is good enough to help them do their job. S3-2-P39 - Contract Acceptance Testing In order to prove the suppliers’ contractual obligations have been met, we execute contract acceptance testing. This test can take a variety of forms. It could be a system test done by a supplier. It could be a ‘factory acceptance test’ which is a test completed by the supplier and witnessed by the customer. You might run a ‘site acceptance’ test at the customers’ location. It could even be a user acceptance test. Any contract should have clear statements regarding the acceptance criteria process and timescales. The contract might specify that 100% payment is due on final completion, assuming everything has gone as expected. Alternatively, payment might be staged against particular milestones. This situation is more usual, and is particularly relevant for large projects involving lots of resources over longer timescales. In such cases payments may be staged. For example:

20% may be paid on contract execution

20% on completion of the build and unit testing

20% when the systems test is completed satisfactorily

20% when the performance criteria are met

20% paid when the users are happy.

Contract acceptance testing is any testing that has a contractual significance and in general, it is linked with payment. The reference in the contract to the tests, however, must be specific enough that it is clear to both parties whether the criteria have been met. S3-2-P40 - Alpha and Beta Testing So far in this section we have been looking at variations on system and acceptance testing. But are there more types of testing? In fact there are hundreds. Let’s take a few minutes to look at some of the more common ones. Alpha and beta testing are normally conducted by suppliers of packaged or ‘shrink-wrapped’ software. For example, Microsoft does beta testing for new versions of Windows every few years. Each time they recruit 30,000 or more beta testers. The actual definitions for alpha and beta testing will vary from supplier to supplier, which leaves the test objectives open to interpretation, but the following guidelines generally apply. An alpha test is normally done by users that are internal to the supplier. An alpha test is an early release of a product, that is, before it is ready to ship to the general public or even to the beta testers. Typically it is given to the parties who might benefit from familiarisation with the product. For example, the marketing team can decide how they will promote its features and the technical support people can get a feel for how the product works. Beta testing might be internal, but most beta testing involves customers using a product in their own environment. Sometimes beta releases are made available to large customers because the supplier wants them to adopt the next version. It might take a year or two’s planning to make this happen.


A beta release of a product is often a product that’s nearly finished, is reasonably stable and often includes new features that hopefully are of some use to the customer. You are asking your customers to take a view. S3-2-P41 - Alpha and Beta Testing – Intent Following on from our earlier Microsoft beta tester example, you might be wondering:

Why don't software companies do their own testing?

Who are these people doing the beta testing?

Why are they doing this testing for Microsoft?

This type of beta testing is something different. Microsoft isn’t using 30,000 people to find bugs, their objectives are quite different. Suppose they issued a ‘bug free’ beta version of their product. Do you think that anyone would call them up and say, “I like this feature but could you change it a bit?” They leave bugs in so that people will give them feedback. As such, beta testers are not testers at all. In fact they’re part of a market-research programme. It’s been said that only 30% of a product is planned, the rest is based on feedback from marketing people, internal salesmen, beta programmers, and so on. When they receive 10,000 bug reports in a particular area of the software, they know that this is a really useful feature, because everybody who is reporting bugs must be using it! They probably knew all about the bug before the product was shipped. Conversely, if another bug is only reported three times, then it suggests that the feature isn’t as useful so it is removed from the product. In summary, beta testing may not be testing at all, it may be market research. S3-2-P42 – Summary This concludes Section 3.2 entitled ‘Test Levels’. We began this section by:

Discussing component testing and the relationship of coding to testing

We went on to look at several test types including

Ad hoc versus component testing

Software integration and component integration testing, including top-down and bottom-up approaches and the mixed integration strategy

Later in the section we looked at system and acceptance testing, and the similarities and differences between them. We also looked at design-based testing and requirements-based testing, and identified some drawbacks with these approaches

Finally, we concluded the section by looking at business-process based testing and user acceptance testing and briefly examined some alternative test types including alpha and beta testing.

Module 3 Section 3 – Functional and Structural Testing

S3-3-P1 - Objectives

Welcome to Module 3 Section 3, entitled 'Functional and Structural Testing'. In this brief section we will examine two different ways of testing the functionality of software. The functionality of software represents what the software can do. Functionality could be at the user level, where the software performs specific tasks of interest to the user. Or it could be at the component or sub-system level, where the software performs tasks that support the overall functionality but are not directly accessible to the end users. We can derive tests of that functionality in two ways. These are:

Functional tests, also called black-box or specification-driven tests, are derived directly from the specification

Structural tests, also called white-box or glass-box tests, are derived from the internal structure of the code. Let's look at this now.


S3-3-P2 - Functional or Black-box Testing

Functional tests are based on functions and features, which in turn are described in documents or understood by the testers, and may be performed at all test levels. For example, tests for components may be based on a component specification. Black-box testing is where testers treat the construction and the code of the software as an unknown: in effect, you can't see what is in it. Testers just test it based on its specification. The functions that a system, sub-system or component is to perform may be described in work products such as a requirements specification, use cases or a functional specification, or they may be undocumented. The requirements are used to prepare test cases, and expected results are prepared from the specification. The specification is used as the baseline for results.

S3-3-P3 - Structural, White-box or Glass-box Testing

The important thing about white-box, glass-box or structural testing is that you use the internal structure of the software, in other words the code, to prepare your test cases. In preparing test cases, paths are traced through a model of the software. To do this the code must be visible. In order to design test cases, the testers must be capable of understanding the internals of the software under test. You must, however, still use the specification for the expected results. After all, testers don't look at the code and say, "Well, the code tells me that it's going to do this." Structural techniques are best used after specification-based techniques, in order to help measure the thoroughness of testing through assessment of coverage of a type of structure. Structural testing may be based on the architecture of the system, such as a calling hierarchy.
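To make the distinction concrete, here is a minimal, hypothetical sketch in Python. The function classify_discount and its discount rule are invented for illustration; they are not from the course material. The first test is derived purely from the written specification (black-box), while the second is derived by reading the code so that every branch is exercised (white-box), including a branch the specification never mentions.

import pytest

# A hypothetical component under test. Its (invented) specification says:
# "orders of 100 or more items get a 10% discount; all other orders get none".
def classify_discount(quantity):
    if quantity < 0:
        raise ValueError("quantity cannot be negative")
    if quantity >= 100:
        return 0.10
    return 0.0

# Black-box test: derived only from the specification above.
# The expected results come from the spec, not from reading the code.
def test_discount_from_specification():
    assert classify_discount(100) == 0.10  # boundary named in the spec
    assert classify_discount(99) == 0.0
    assert classify_discount(1) == 0.0

# White-box test: derived by reading the code and covering every branch,
# including the negative-quantity branch the spec never mentioned.
# The expected results still come from the specification.
def test_discount_branch_coverage():
    with pytest.raises(ValueError):
        classify_discount(-1)  # branch visible only in the code
    assert classify_discount(150) == 0.10
    assert classify_discount(0) == 0.0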

The concept of structural testing can be applied to design models, business models or menu structures.

S3-3-P4 - Functional versus Structural Testing

Low-level or sub-system tests are mainly white-box. What this means is that the developers use the code they have written to guide their test design. Typically, programmers do mainly white-box testing: they code a bit, and then they test a bit. Formal component testing starts with the component specification, but the majority of developer time is spent achieving white-box coverage. Low-level or sub-system tests refer to the unit/module/component level and also the integration between components. Users don't know how the code works, so user testing is always black-box testing, and system testing is mainly black-box testing as well. So the process runs from mainly white-box testing at the lower levels to virtually all black-box testing at the higher levels, including system tests and user acceptance tests. This concludes this brief section on Functional and Structural Testing.

Module 3 Section 4 – Non-functional System Testing

S3-4-P1 - Objectives

Non-functional requirements, or NFRs, are those that state how a system will deliver its functionality. NFRs are as important as functional requirements in many circumstances but are often neglected. This short section provides an introduction to the most important non-functional test types.

S3-4-P2 - Non-functional Test Types

Shown on screen is a selection of non-functional test types. The list is by no means comprehensive but does illustrate how diverse non-functional test types can be. Performance and stress testing are the most common forms of non-functional test performed, but for the purpose of the examination, you should understand the nature of the risks to be addressed and the focus of each type of test. These tests can be referenced to a quality model such as the one defined in 'Software Engineering - Software Product Quality' (ISO 9126). Note that ISO 9126 classifies security testing and interoperability testing as functional, not non-functional, test types.

S3-4-P3 - Non-functional Requirements

First, let's take an overview of non-functional requirements. Functional requirements say WHAT the system should do, whereas non-functional requirements say HOW the system does it. For example, it should be reliable, have fast response times, be usable, and so on. The problem with non-functional requirements is that usually they're not written down. Users naturally assume that a system will be usable, that it will be really fast, and that it will be available for more than half the day. Many of these aspects of how a system delivers its functionality are assumptions. Typically a specification will consist of 200 pages of functional requirements and perhaps one page of non-functional requirements. If they are written down rather than assumed, they usually aren't written down to the level of detail needed to test against. Suppose you're implementing a system into an existing infrastructure: there will be a service level agreement that specifies the service to be delivered – the response times, the availability, and so on. In most cases, it is not until this service level agreement is required that the non-functional requirements are discussed. It is common for the first activity of non-functional testing to be to establish the requirements. This concludes this brief section on Non-functional System Testing.
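As a simple illustration of turning an assumed non-functional requirement into something testable, here is a hedged sketch in Python. The endpoint URL and the two-second threshold are invented for illustration; in practice both would come from the service level agreement.

import time
import urllib.request

# Hypothetical SLA figures - in a real project these come from the agreed
# service level agreement, not from the tester's imagination.
MAX_RESPONSE_SECONDS = 2.0
ENDPOINT = "http://example.com/"  # placeholder endpoint

def test_response_time_meets_sla():
    started = time.monotonic()
    with urllib.request.urlopen(ENDPOINT, timeout=10) as response:
        response.read()
    elapsed = time.monotonic() - started
    # A single measurement only; a real performance test would sample many
    # requests under representative load and check percentiles.
    assert elapsed <= MAX_RESPONSE_SECONDS, (
        f"response took {elapsed:.2f}s, SLA allows {MAX_RESPONSE_SECONDS}s")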

Module 3 Section 5 – Re-testing and Regression Testing

S3-5-P1 – Objectives

Re-testing and regression testing are frequently confused, so in this section we will spend some time covering these two important, distinct, but similar topics. Testers work on the principle that tests designed to detect faults are bound to find some. Testers ask someone to fix a fault and, once the fix is returned, re-test it to make sure that the fix works correctly. Testers also know that in fixing one fault, a developer may unwittingly introduce another. In Section 3.5 we will:

Describe re-testing or confirmation testing and regression testing in some detail

Look at some of the business oriented reasons for using them

Examine the potential benefits and possible pitfalls of both test types.

S3-5-P2 - Re-testing and Regression Testing

When a defect is detected and fixed, the software should be re-tested to confirm that the original defect has been successfully removed. This is called re-testing, or sometimes confirmation testing. Tests should be repeatable if they are to be used for confirmation testing and to assist regression testing. Regression testing is the repeated testing, after modification, of an already tested program to discover any defects introduced or uncovered as a result of the change or changes. These defects may be in the software being tested, or in another related or unrelated software component. The extent of regression testing is based on the risk of not finding defects in software that was working previously. Regression testing may be performed at all test levels, and applies to functional, non-functional and structural testing. Regression test suites are run many times and generally evolve slowly, so regression testing is a strong candidate for automation.

S3-5-P3 - Re-testing or Confirmation Testing

If we run a test that detects a fault, we can get the fault corrected. We then repeat the test to ensure the fault has been properly fixed. This is called re-testing, or sometimes 'confirmation testing'. If we run a test, and that test causes the software to fail, we report the failure. When we receive a new version of the software and repeat the failed test, we call that a re-test: a test that, on the last occasion it was run, the system failed, and which we now repeat to make sure that the failure has been eliminated. Testing attempts to find faults, so always expect to do some re-testing. Every test plan we've ever run has found faults, so we must always expect and plan to do some re-testing. Does your project manager plan optimistically? Optimistic managers ask the testers, "How long is the testing going to take?" To which the tester replies, perhaps, "Four weeks if it goes as well as possible." The manager pounces on the 'perfect situation' and plans optimistically. Re-testing is always required, and planning should reflect that.

S3-5-P4 - Regression Testing

When a software fault is fixed, 'knock-on' effects often occur. It's important to check that in changing the faulty code, the developer has not introduced new faults. Remember there is a 50% chance of introducing regression faults. Regression tests tell us whether new faults have been introduced; in other words, whether the system still works after a change to the code or environment has been made.

Re-testing will give confidence that a fault has been corrected, but repeat runs of other tests are necessary to check for unwanted side-effects of the change. These tests are called regression tests because they help testers to detect any regressions in the behaviour of the software under test. When environments are changed, regression testing may also be carried out: when upgrading operating system software or components of any system, regression testing might take place before deploying the system in the changed environment. Existing, unchanged software might be regression-tested because the change in environment or reused components might also cause the system to fail.

S3-5-P5 - Regression Testing (2)

A regression test is a check to make sure that when you make a fix to software, the fix does not adversely affect other functionality. The big question, "Is there an unforeseen impact elsewhere in the code?", needs to be answered. The need exists because fault-fixing is error-prone. It's as simple as that. Regression tests tell you whether software that worked before the fix was made still works. The last time you ran a regression test, by definition, it did not find a fault; this time, you run it again to make sure it still doesn't expose a fault. A more formal definition of regression testing is 'testing to ensure a change has not caused faults in unchanged parts of the system'. Some people regard regression testing as a separate stage, but it is not separate from system or acceptance testing, for example, although a final stage in a system test might be a regression test. There is some regression testing at every test stage, right from component testing through to acceptance testing.

S3-5-P6 - Regression Testing (3)

Regression testing is most important where you have a live production system requiring maintenance. In the context of this course, maintenance to an existing, production system includes:

Corrective maintenance – such as bug fixing

Adaptive maintenance – such as enhancements

Preventative maintenance – for example changes to pre-empt possible problems in the future.

When users are committed to using your software, the most serious problem is having a bug in code that they're using today and are dependent on. Users get most upset when you 'go backwards', that is, when a system that used to work stops working. They may not mind losing a few weeks in the schedule because you're late with a new delivery. They do mind if you inject bugs into a system they trust and currently depend on. Manual regression testing can be boring and tedious, and testers make too many errors themselves. If it is not automated, it is likely that the amount of regression testing being done is inadequate. We'll look at automation tools later in the course.

S3-5-P7 - Activity

Can you identify re-testing and regression testing from their descriptions? Click in the drop-down box to select the correct answer.

S3-5-P8 - Initial Test Failed to Find a Fault

To consider how regression testing works in detail, let's consider the first time a test is run. We have version X of the code and we test it using some particular test data. We execute these tests and get some results, which meet our expectations: a 'pass'. Regression testing is where the software changes under a new version, and we use the same test data and the passing results from last time as the expected results. The question is: do we get the correct, expected results the second time around?
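A minimal sketch of that idea in Python, assuming a hypothetical function calculate_total whose outputs were recorded while version X passed; the saved input/expected-result pairs then become the oracle for every later version.

# Hypothetical component: version X passed with these inputs, so the
# recorded outputs become the expected results for future versions.
def calculate_total(prices, tax_rate):
    return round(sum(prices) * (1 + tax_rate), 2)

# Input data and expected results captured when the test last passed.
REGRESSION_BASELINE = [
    # (prices,      tax_rate, expected_total)
    ([10.00, 5.50], 0.20,     18.60),
    ([99.99],       0.00,     99.99),
    ([],            0.20,     0.00),
]

def test_regression_against_baseline():
    # Re-run the same data against the new version; any mismatch is a
    # candidate regression introduced by the latest change.
    for prices, tax_rate, expected in REGRESSION_BASELINE:
        assert calculate_total(prices, tax_rate) == expected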

S3-5-P9 - Selective Regression Tests

Some organisations use the word 'set' to refer to their collection of regression tests. We'll use 'pack' in this course, but there is no agreed standard for this term. For most environments, keeping an entire system test for regression purposes is just too expensive: there will be a great deal of maintenance to do, as software always requires change. Most organisations choose to retain between 10% and 20% of their system and/or acceptance tests to form a regression test pack. Because it is usually uneconomic to reuse all tests as regression tests, we need to define some criteria for selecting only the most useful ones. Let's look at some of those now. Key functionality relates to the most business-critical functions, the ones that underpin the whole system. Other key functionality might be frequently executed functions or code that is reused throughout the system by programmers.

S3-5-P10 - Selective Regression Tests (2)

Most legacy systems have 'hot-spots' that appear to be error-prone. Whenever a change is implemented, these areas of functionality appear to be the most sensitive to change and regularly fail. Clearly, error-prone areas are the most likely to fail in the future, so regression tests should focus on these areas. Use regression tests that are the most easily automated. This sounds like a cop-out, but if the only viable regression tests are those that can be automated, this could be the key criterion. A regression test does not necessarily need to exercise only the most important functionality. Many simple, lightweight regression tests might be just as valuable as a small number of very complex ones. If you have a GUI application, a regression test might just visit every window on the screen. A very simple test indeed, but it gives you some confidence that the developers or the database administrator (DBA) haven't introduced silly faults into the product.


This is quite an important consideration. Selecting a regression test is all very well, but if you're not going to automate it, it's not likely to be run as often as you would like.

S3-5-P11 - Automated Regression Testing

The tests that are easiest to automate are the ones that don't find bugs, because you've run them once to completion. The problem with tests that did find bugs is that they cannot be automated so easily. This is the paradox of automated regression testing. Even if we do have a regression test pack, life can be pretty tough, because the cost of maintenance can become a considerable overhead. It's another one of the paradoxes of testing. Regression testing is easy to automate in a stable environment, but the time we most need to create regression tests is when the environment isn't stable. We don't want to have to rebuild our regression tests every time a new version of software comes along; we want to just run them, to flush out obvious inconsistencies within a system. The problem is that the reason we want to do regression testing is that there is constant change in our applications, which makes regression testing hard, because we have to maintain our regression test packs in parallel with the changing system.

S3-5-P12 - Summary

This concludes Section 3.5 on the subject of Re-testing and Regression Testing. In this section we have:

Described the differences between re-testing or confirmation testing and regression testing

Looked at some of the business oriented reasons for using either of them

Examined the potential benefits and possible pitfalls of both test types.

Module 3 Section 6 – Maintenance Testing

S3-6-P1 – Objectives

The majority of effort in the IT industry is expended on maintenance. Unfortunately, textbooks don't say much about maintenance because it is often complicated and 'messy'. In the real world, systems last longer than the project that created them, and during this time the system and its environment are often corrected, changed or extended. Maintenance testing is carried out on an existing operational system, and is typically triggered by modifications, migration or retirement of the software or system. Consequently, the effort required to repair and enhance systems during their lifetime exceeds the effort spent building them in the first place. In Section 3.6 we will examine maintenance testing in a little more detail, including:

The scope of maintenance testing, including key factors such as the triggers for this type of testing and the risks involved

Maintenance routes and the definition of a release

The need for emergency maintenance.

S3-6-P2 - Maintenance Considerations

The scope of maintenance testing is related to the risk of the change, the size of the existing system and the size of the change. Depending on the changes, maintenance testing may be done at any or all test levels and for any or all test types. Typical modifications can include:

Planned enhancement changes, in other words, release-based changes

Corrective and emergency changes


Changes of environment, such as planned operating system or database upgrades

Patches to newly exposed or discovered vulnerabilities of the operating system.

The issue with maintenance testing is often that the documentation, if it exists at all, is not relevant or helpful when it comes to carrying out testing.

S3-6-P3 - Maintenance Considerations (2)

Maintenance changes are often urgent. This is typically corrective maintenance, or bug-fixing, rather than new development. Bug-fixing is often required immediately: if a serious bug has just come to light, it has to be fixed and released back into production quickly. This leads to pressure to avoid elaborate testing. Pressure can also be applied to the developer to make the change in the shortest time possible. This situation doesn't do anything to reduce the developers' error rate! Maintenance may not appear to require elaborate testing, but this is deceptive. Maintenance may only change part of a system, but the change may have an impact throughout. Determining how the existing system may be affected by changes is called impact analysis, and is used to help decide how much regression testing should take place.

S3-6-P4 - Maintenance Routes & Release Definition

Essentially, there are two ways of dealing with maintenance changes:

Either, groups of changes are packaged into manageable releases. This is used for adaptive or non-urgent corrective maintenance

Or, urgent changes are handled as emergency fixes. This is usually for corrective maintenance.

It is often feasible to treat maintenance releases as abbreviated developments; just like normal development, there are two stages: definition and build. Maintenance programmers routinely do a great deal of testing. Half of their work involves figuring out what the software does, and the best way to do this is to try it out. When they have changed the system, they need to re-test.

S3-6-P5 - Activity

The scope of maintenance testing is influenced by three factors: the size of the change, the size of the existing system and which other factor?

S3-6-P6 - Maintenance and Regression Testing

A maintenance package is handled like any other development, except that testing tends to focus on code changes and on ensuring existing functionality is maintained. Unless you are in a highly disciplined environment, regression testing is often allowed to slip. Unless you have an automated regression test pack in place, maintenance regression testing is usually kept to a minimum. That's why maintenance is risky. If tests from the original development project exist, they can be reused for maintenance regression testing, but it is more common for regression test projects, aimed at building up automated regression test packs, to have to start from scratch. If the maintenance programmers record their tests, these can be adapted as maintenance regression tests. Regression testing dominates maintenance effort, usually taking more than half of the total effort. So part of the maintenance budget must be assigned to regression testing and, potentially, to automating that effort. Maintenance fixes are error-prone – remember there's a 50% chance of introducing another fault. Even if a release is urgent and time is short, testing can still go ahead following release.


S3-6-P7 - Activity

Determining how an existing system may be affected by change is called what?

S3-6-P8 - Emergency Maintenance

Where emergency changes need to take place, the team could make the change and install it, while continuing to test it in the test environment. There is nothing to stop testers from continuing to test the system once it has moved into production; in fact this is more common than perhaps it should be. Making a release before all regression testing is complete is risky, but if testing continues, the business may not be exposed for too long, as any bugs found can be fixed and released quickly. This concludes this short section on Maintenance Testing.

Module 3 Section 7 – Test Environment Requirements

S3-7-P1 - Objectives

The purpose of the test environment is to enable the system to be tested before it goes live. Every project will require a test environment specific to the system being developed. The test environment contains the hardware, instrumentation, simulators, software tools and other support elements needed to conduct a test. These elements must be put together in such a way as to make testing measurable, without interfering with the operation of the system or making testing unrealistic. How realistic the test environment is will depend on the objectives of the test. For example, during component testing, where code coverage is important but the system is incomplete, stubs and drivers will be used. Later in the life cycle, during system testing, the test environment should correspond to the final target production environment as much as possible. In acceptance testing, the test environment may even be an exact replica of the live environment, including any physical constraints that exist.
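To illustrate what stubs and drivers look like in practice, here is a minimal, hypothetical Python sketch; apply_payment, the gateway interface and the canned responses are all invented for illustration. apply_payment is the component under test, the stub stands in for a payment gateway that does not yet exist, and the test function acts as the driver that invokes the component.

# Component under test: depends on a payment gateway that is not yet built.
def apply_payment(order_total, gateway):
    result = gateway.charge(order_total)
    return "PAID" if result["status"] == "ok" else "DECLINED"

# Stub: a minimal stand-in for the missing payment gateway, returning
# canned responses so the component can be exercised in isolation.
class PaymentGatewayStub:
    def __init__(self, status):
        self._status = status

    def charge(self, amount):
        return {"status": self._status, "amount": amount}

# Driver: test code that invokes the component under test, supplying inputs
# and checking outputs in place of the not-yet-written calling code.
def test_apply_payment_with_stub():
    assert apply_payment(25.00, PaymentGatewayStub("ok")) == "PAID"
    assert apply_payment(25.00, PaymentGatewayStub("failed")) == "DECLINED"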

In this section we will identify and explain the principles behind determining what the test environment requirements will be.

S3-7-P2 - Considerations for the Test Environment

There are many considerations when specifying the test environment. Most importantly, the objectives of testing must be considered. For example, if the objective of testing is to demonstrate that the system functionally performs according to the requirements, as it is with most system testing, then the system hardware and cohabiting software should correspond to the final target or production environment as much as possible. If the objective of testing is to demonstrate that the system meets business objectives and will not affect business continuity, then in addition it will be important to replicate the physical environment. When testing system integration or performance, replicating network constraints will be important. Data requirements must also be considered. Data can often be expensive and time-consuming to create, so anonymising live data may be considered; testing on some projects, such as integration projects, can only be done with live data. Careful compliance with data protection regulations must be maintained.

S3-7-P3 - Considerations continued

Testing will require terminals, application servers and data servers. Some testing will require specific devices, access to mainframe systems, or some way of emulating or actually connecting to the World Wide Web. All of this hardware must be connected together and located somewhere convenient for the testers to run tests and for IT support to make any adjustments that are necessary. Running on the hardware will be software: not only the application under test, but any stubs, drivers, simulators, test tools and any other computer programs necessary to replicate the live environment and facilitate testing. Change control and release management will be a critical part of the process, to ensure that all versions of software items are correct.
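As a hedged illustration of the data point above, here is a minimal Python sketch of anonymising live records before loading them into a test environment. The field names and masking rules are invented for illustration; a real project would apply the masking rules required by its own data protection obligations.

import hashlib

# Hypothetical live record layout; field names are invented for illustration.
live_records = [
    {"customer_id": 1001, "name": "Alice Smith", "email": "alice@example.com", "balance": 120.50},
    {"customer_id": 1002, "name": "Bob Jones", "email": "bob@example.com", "balance": 75.00},
]

def anonymise(record):
    # Replace direct identifiers with stable pseudonyms so referential
    # integrity survives, while keeping the fields the tests depend on.
    token = hashlib.sha256(str(record["customer_id"]).encode()).hexdigest()[:8]
    return {
        "customer_id": record["customer_id"],
        "name": f"Customer-{token}",
        "email": f"user-{token}@test.invalid",
        "balance": record["balance"],  # non-identifying data kept as-is
    }

test_data = [anonymise(r) for r in live_records]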


Once test execution begins, testers will require support. As already mentioned, change control and release management are required. Development will need to be on hand to support testers when they are raising defects. Business analysts and members of the user community may be required to ensure that business objectives and system behaviour are understood. IT support will be needed to maintain the hardware and software used for testing. Finally, it is important to consider the testers themselves. Not only must they have the tools and support they need, they also need facilities that are conducive to working effectively. For example, if the testers' workstation and printer are on the top floor, but the test lab is in the basement, then inevitably time will be wasted walking between the two locations.

S3-7-P4 - Activity

Carefully read through the scenario given on the screen. Make a record of any test environment requirements that appear to be present. Also make a record of what you think are the main testing challenges the test team will face. When you are satisfied with your answer, move to the next page to reveal the answer.

S3-7-P5 - Activity continued

There is no right or wrong answer to this question. The answer given here is a good answer. However, from the small amount of information given in the text, and the fact that everyone's experiences are different, it is likely that your answer will be different. Do not be alarmed by this, and don't forget that the BCS Intermediate examination is a multiple-choice examination: you will not be required to come up with your own answer to such a question. You will be required to select the best answer from a list of options.

S3-7-P6 - Summary

This concludes this short section on test environment requirements. We looked at what a test environment is and the many different considerations that need to be made when specifying it.

Remember, the most important thing when answering a question in the exam is to consider the specifics of the scenario given in the exam.

Module 4 – Static Techniques

Module 4 Section 1 – Reviews and the Test Process

S4-1-P1 - Objectives

Welcome to Module 4, entitled 'Static Techniques'. The title of this first section is 'Reviews and the Test Process'. In this section we will:

Look at the software work products that can be examined by the different static techniques

Recognise the importance and value of static techniques for the assessment of software work products.

We will introduce you to the 'W' model and outline the linkages which exist between development and review activities. Finally, we will look at different levels of review, including:

Informal reviews

Desk/Buddy Checking

Walkthroughs

Technical Reviews

Formal Inspections

Audits

and Management Reviews.

S4-1-P2 - Overview

Reviews, static analysis and dynamic testing have the same objective: to identify defects. In contrast to dynamic testing, reviews find defects rather than failures. Reviews and dynamic testing are complementary: the different techniques can find different types of defect effectively and efficiently. Omissions can often be identified in reviews; for example, an omission in the requirements could be identified in a review but is unlikely to be identified in dynamic testing. Typical defects found through reviews are:

• deviations from standards
• requirement defects
• design defects
• insufficient maintainability
• incorrect interface specifications.

Static testing techniques do not execute the software that is being tested; they are either manual (in other words, 'reviews') or automated (in other words, 'static analysis'). Reviews are a way of testing software work products, including code, and can be performed well before dynamic test execution. Any software work product can be reviewed, including requirement specifications, design specifications, code, test plans, test specifications, test cases, test scripts, user guides and web pages.

S4-1-P3 - Overview – continued

The benefits of carrying out reviews include:

early defect detection and correction,

development productivity improvements,

reduced development timescales,

reduced testing cost and time,

lifetime cost reductions,

and fewer defects and improved communication.

Remember, principle 3 tells us that testing activities should start as early as possible in the development life cycle. This is due to the 'Cost Escalation Model'. Early testing is carried out using reviews.

Defects detected during reviews carried out early in the life cycle are often much cheaper to remove than those detected while running tests. For example, a defect found in the requirements will cost much less to rectify than one found after programming has taken place. The main manual review activity is to examine a work product and make comments about it. A review could be done entirely as a manual activity, but tool support can also be used.

S4-1-P4 - Activity

Now try answering this question. Reviews are typically carried out to identify what? Reviews are carried out to identify defects, whereas dynamic tests identify failures.

S4-1-P5 - What and when to review

Let's introduce you to the 'W' model. Don't panic: it's really an expanded view of the V-model. In our example the development activities are shown in blue and review/test activities are shown in orange. The early 'test' activities tend to involve reviews of one form or another, although reviews can be used at almost any stage in the project. What we've tried to suggest here is where reviews can be useful for evaluating almost all intermediate deliverables in projects. It is obvious that deliverables like requirements and designs can be subjected to reviews. However, documentation products like project plans, contracts and process documents, as well as requirements and designs, can also be reviewed. Further, software code itself can be reviewed, and this is common in many environments. Although not a review, static analysis is a static test technique which, although normally done by tools, is a valuable weapon against faults in code. Code should be reviewed before unit testing, as reviews are a more efficient way of finding faults. Also, any code changes identified by a review that takes place after unit testing could invalidate the unit test results.
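To make that point concrete, here is a hypothetical Python fragment containing the kind of defects a code review, or a static analysis tool such as pylint, can flag without ever executing the code; the functions and their defects are invented for illustration.

def average_order_value(orders):
    total = 0
    for order in orders:
        total += order.value
    # Defect findable by reading the code: if 'orders' is empty this line
    # divides by zero, even though the (invented) spec says an empty list
    # should return 0.
    return total / len(orders)

def report(orders):
    avg = average_order_value(orders)
    if avg > 100:
        band = "high"
    elif avg > 50:
        band = "medium"
    # Defect a reviewer or linter can flag: there is no 'else' branch, so
    # 'band' is never assigned when avg <= 50 and the next line raises
    # NameError - a path dynamic testing only hits if it happens to use
    # low-value data. pylint, for example, warns about variables possibly
    # used before assignment.
    return f"Average order value is {band}"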


The greatest return from reviews is with higher-level documents, so it's good to start with plans and requirements rather than code. You can also select documents and code for review on the basis of risk, rather than reviewing 100% of the code.

S4-1-P6 - Levels of review 'formality'

Let's take a few minutes to look at the types of review, including informal versus formal. There are four main types of review, which are ordered here by their formality. Within the reviews themselves, the other variables are:

• the objectives of the review
• the amount of effort required to execute the review
• the amount of documentation generated by the review
• the ceremony involved
• the metrics captured.

Over the following few pages, we'll look at each one in a little more detail. The most significant issue with regard to the different review types is their varying levels of formality. Audits and management reviews also have degrees of formality: audits are by nature highly formal, while management reviews can vary in formality considerably.

S4-1-P7 - Levels of review 'formality' – continued

The least formal technique is desk or buddy checking. This is a form of review where individuals check their own work as a separate activity from developing the product. Desk checking was most popular in the days when programmers worked in slow, bureaucratic environments, but it is still a most effective technique for finding faults in code. A buddy check is where the product is given to a colleague who is qualified to give views on the content of a deliverable.

Done almost as a favour to the author, only one reviewer might be involved, and comments might be passed verbally or in an email. Walkthroughs are more formal. Their purpose is to communicate information in a walkthrough meeting. A document author might describe to peers how a document has been produced and what its objectives are, and then describe the contents of the document. This might be done at a high level or line by line. The walkthrough might be delivered as a lecture, as its purpose is usually to disseminate information rather than find faults.

S4-1-P8 - Levels of review 'formality' – continued

A technical review is a dual-purpose activity with some formality. Firstly, technical reviews are a fault-finding activity where reviewers actively search out anomalies in the product under review. The second major objective is to achieve consensus on the acceptability of the product and to decide on its readiness, or otherwise, to be taken forward in the project. Finally, inspections tend to be very formal and involve a certain amount of ceremony. The inspection is a 'pure' testing activity. Checklists of rules are used to guide the checking activity: reviewers use the rule sets to search through documents and identify particular types of fault. Formal inspections generate a lot of useful data on the process itself, and the metrics gathered can be a valuable input to process improvement activities.

S4-1-P9 - Levels of review 'formality' – continued

An audit, in the context of software testing, is an official examination of a document or set of documents relating to system specification documentation and test documentation. By nature, audits are a formal process involving checklists, templates and often witnessing. For example, an audit of test documentation may be undertaken to ensure that the test process has been followed and to identify any process improvements that may be made. An audit may also be undertaken to ensure that testing adequately demonstrates that a system behaves as specified or conforms to a particular standard.

S4-1-P10 - Levels of review 'formality' – continued

Management reviews can vary in formality considerably. The purpose of a management review is to make effective management decisions, and how formal the review will be depends on how formal the decision-making process has to be. For example, whether to go live or not is likely to be a formal meeting where completion reports are examined; test managers, development managers and project managers will be interviewed; entry criteria will have been set and must be met; and minutes could well be recorded. However, many decisions made by management will be made much less formally. For example, the agreement to an estimate may be made by way of an informal review such as a buddy check. In fact, all the review types mentioned in this section can vary in formality to some extent. For example, a walkthrough may be carried out in such a way that it is more like an informal review, while another walkthrough may be carefully documented and have formal entry and exit criteria. Both will still be walkthroughs, but clearly one is more formal than the other. Inspections are the only review type that does not really vary in formality, as they are defined by the formal process that constitutes an inspection.

S4-1-P11 - Activity

Using your mouse, drag the review type titles to their descriptions.

S4-1-P12 - IEEE Standard for Software Reviews

The reviews described in the BCS Intermediate Certificate in Software Testing syllabus are broadly based on IEEE Std. 1028-1997, Standard for Software Reviews.

However, the syllabus states that there is no definitive document that captures current best practice or recognises the many valid variations of review practice that occur in organisations. This course identifies key principles and provides a comparative description of a selection of commonly used review types, in the expectation that these will provide an adequate basis for the evaluation, selection and effective implementation of appropriate review techniques.

S4-1-P13 - Framework for software reviews

Each formal review that is carried out should follow a consistent framework. The framework recommended by the syllabus is a simplified version of a framework found in IEEE 1028:

Introduction - the purpose and objectives of a review and the typical work products for which it may be used; any type of review requires clear objectives to be defined as an input to the review.

Responsibilities - the roles and responsibilities associated with the review; these need to be defined according to the type of review and the nature of the source document(s).

Input - the necessary inputs to the review; these may vary according to the type of review.

Entry Criteria - the criteria that need to be met before a review can begin; all review types will have some entry criteria, but the nature of the criteria will vary by review type.

Procedures - key procedural aspects for a given review; these vary between the types of review.

Exit Criteria - criteria to be met before a review can be considered complete; the nature of these criteria varies between types of review.

Output - typical deliverables for a review type; these vary for each type of review.


S4-1-P14 - Reviews and dynamic testing

Static testing and dynamic testing are different types of technique but have a complementary relationship. They both have the same objective: to identify defects. However, static testing finds defects directly, whereas dynamic testing finds them by first finding failures, which must be investigated further to locate the defect. This means that each is better suited to finding different types of defect. Typical defects that are easier to find in reviews than in dynamic testing include:

• deviations from standards
• requirement defects
• design defects
• insufficient maintainability
• incorrect interface specifications.

S4-1-P15 - The value of source documents

Source documents are the documents that define, in some way, the product to be reviewed. For example, a requirements specification influences the design, so it would be considered a source document for the review of the design specification. There will also be less obvious documents, such as standards and templates, that influence the design. Without source documents, reviews are generally limited to finding defects of consistency and completeness within the document under review. Therefore, reviews are ideally performed when the source documents, and the standards to which the product must conform, are available. Source documents include:

• business objectives
• specifications such as requirements and designs
• strategies and plans
• standards

Now let's look at the review types themselves in more detail.

S4-1-P16 - Informal Reviews

Let's begin by looking at informal reviews in a little more detail.

An informal review is an unstructured, often ad hoc, review of a document or piece of code carried out by one or more peers. Typically there may be pair programming, or a technical lead reviewing designs and code. The usual purpose of an informal review is that an author wants to find faults in unfinished work. The author asks their peers to spend some time examining the product and to return comments, observations or suggestions on the document under review. Very often, informal reviews are useful as a quick check that the author is still 'on the right track'. Comments may be informal and undocumented, so marked-up documents, e-mails and verbal communications are all possible communication methods.

S4-1-P17 - Informal Reviews – continued

Informal reviews are very common, as they require no planning and no training, and can be very effective in finding high-level defects. However, they may not find the low-level or detailed defects that other review types will find. Informal reviews can be used as an entry point for organisations with no culture of performing reviews, and can facilitate the development of a culture and discipline that enables more formal review techniques to be implemented when and where appropriate. The key characteristics of informal reviews include:

• no formal process
• there may be pair programming, or a technical lead reviewing designs and code
• optionally may be documented
• may vary in usefulness depending on the reviewer.

The main purpose of informal reviews: an inexpensive way to get benefit from reviews.

S4-1-P18 - Walkthroughs

A walkthrough is used to communicate the purpose or content of a document or deliverable and to increase the understanding of a selected group. An author might use a walkthrough to help other members of their team understand how a document or piece of work is progressing, and could lead the group through scenarios or dry runs to introduce the document and achieve 'coverage'. Only the presenter prepares for the meeting, in that he or she sets out to describe the key points to be communicated. Although faults may be found during the process, this isn't the main objective. In some cases, the walkthrough is delivered as a lecture where no questions are actually asked: it is purely an information dissemination technique. On the other hand, a walkthrough may become a semi-formal review, with discussion and identification of issues that are to be resolved.

S4-1-P19 - Walkthroughs – continued

Walkthroughs are often relatively informal; they may address a number of objectives, and role assignments may be shared. The author often leads the walkthrough, and walkthroughs are usually restricted to peer groups. The main input is the source document. There may be specifications and standards, but guidelines are often more appropriate. The only entry criteria for a walkthrough would normally be the availability of source documentation and reviewers. Source documents may be distributed before the meeting. Normally the author makes an overview presentation, often using scenarios for requirement and design models and dry runs through code, as a prelude to a general discussion. Reviewers can raise queries or identify defects during the discussion. Defects are normally captured at the walkthrough meeting so that the author can make corrections afterwards. The walkthrough is considered complete when all reviewers have raised any questions or defects and these have been satisfactorily captured.

The list of defects may form the output from a walkthrough, or the source document may be marked up for later correction. The key characteristics of walkthroughs include:

• meeting led by the author
• scenarios, dry runs, peer group
• open-ended sessions
• optionally: pre-meeting preparation by reviewers, a review report, a list of findings, and a scribe who is not the author
• may vary in practice from quite informal to very formal.

The main purpose of walkthroughs:

• learning
• gaining understanding
• defect finding.

S4-1-P20 - Formal or 'Technical' Review

The objective of a formal or technical review is to evaluate, detect faults in, and gain a consensus view on the product reviewed. In other words, reviewers comment on a product and, through consensus, evaluate and improve it. Broader objectives can also include:

• having a discussion
• evaluating alternatives
• solving technical problems
• checking conformance to specifications and standards.

To assist in fault finding, reviewers should be technically competent, in that they must have the technical background needed to understand the material and to provide useful comments. The process should be led by a trained moderator. Once the team is selected and the review is scheduled, the author of the product distributes the documents and the reviewers perform the checking. The reviewers must be given enough time to read the documents thoroughly and return their comments to the author in time for the meeting. Reviewers meet, log and discuss comments, and decide whether to correct, accept or reject the product. Where faults are found, there is a formal process to log and track them. The log is used to ensure faults are corrected and signed off and that the quality of the document is improved.

S4-1-P21 - Formal or 'Technical' Review – continued

Reviewers would normally be staff with the appropriate technical expertise to evaluate the source documents. Customer or user representatives, or other stakeholders, may also be invited if they have a well-defined role to perform. As well as the review objectives and source documents, inputs may include relevant standards and specifications. Technical reviews will normally be scheduled, but could also be requested by managers, for example to evaluate the impact of incident reports or test results. Thorough preparation is essential for a technical review. Comments should be returned to the review leader and passed on to the author prior to the review meeting. The review leader should confirm that preparation has been adequate before the review begins. The review is complete when the status of the work product is defined and any actions required to meet the required standards are defined. The key characteristics of technical reviews include:

• documented and defined defect detection process that includes peers and technical experts

• may be performed as a peer review without management participation

• ideally led by a trained moderator not the author

• pre-meeting preparation is essential

• optionally the use of checklists, review report, list of findings and management participation may be included

• may vary in formality.

The main purposes of technical reviews:

• discuss
• make decisions
• evaluate alternatives
• find defects
• solve technical problems
• check conformance to specifications and standards.

S4-1-P22 - Inspections

The objective of formal inspections is to detect faults and improve the quality of a product. Other implicit objectives are to generate ideas and data for process improvement. Inspections are planned activities, usually led and managed by a trained moderator (not the author), and are based on peer examination. The participants have defined roles, and the process is based on rules and checklists with entry and exit criteria. Inspectors check the product against rules, standards and baselines. Issues are logged and categorised so that metrics can be obtained, and this is followed by a formal follow-up process. A logging meeting takes place, in which the chairperson insists that issues are identified, seeks agreement that issues are real issues, and then assigns actions to correct those issues. Typically, no discussion of the issues is allowed during the meeting. It is a pure fault-finding activity.

S4-1-P23 - Audits

An audit is an independent evaluation of software products or processes to ascertain compliance with standards, guidelines, specifications or procedures. The main purpose of an audit is typically to compare documents or processes with an internal, or more usually external, 'template', seeking to demonstrate compliance or show non-compliance. The 'template' can be, for example, an applicable standard or a contractual obligation. Audits are extremely formal and focus on compliance; as such, they are less effective than other review types at finding defects. A key point is that the evaluation is independent. A lead auditor is responsible for the audit and acts as the moderator.


Compliance evidence is collected in a variety of ways: through interviews and witnessing, in addition to examining documents. Outcomes include observations, recommendations, corrective actions and a pass/fail assessment.

S4-1-P24 - Inspections – continued

As well as objectives for the review, inputs normally include appropriate checklists and rule sets for the inspection. Entry criteria are usually very specific and would be expected to include:

• work product conforms to appropriate standards for content and format

• prior milestones have been achieved

• supporting documentation is available

An inspection is considered complete when the output has been defined, all actions have been defined and the required data has been collected. An inspection can be used to sample a limited number of pages, with the purpose of establishing the common defects throughout a work product, which can then be specifically looked for, and of establishing a defect density for the document. One possible outcome of this could be process improvement. Inspection principles can be applied in an agile context to maximise the benefits of the technique for individuals. The key characteristics of inspections include:

• led by a trained moderator, not the author
• usually peer examination
• clearly defined roles
• uses and creates metrics
• formal process based on rules and checklists, with entry and exit criteria
• pre-meeting preparation
• inspection report and list of findings produced
• formal follow-up process
• process improvement.

The main purpose of inspections:

find defects.

S4-1-P25 - Management Reviews

The purposes of management reviews typically include the evaluation of management activities and the support of management decisions. Typical products reviewed would include incident reports, risk management plans and test plans. Management reviews will normally include the decision maker for whom the review is being conducted. Inputs will typically include information on the status of the product being reviewed relative to any plan in place for it. Normally management reviews will be scheduled, but a review may be held at the request of a decision maker such as the project manager or test manager. Objectives for the review need to be in place before the review is initiated. Reviewers are expected to prepare for the review by studying the appropriate documents and providing comments to the decision maker. Rework is followed up to ensure timely completion. The review is complete when the required outputs have been produced and any rework has been defined with an agreed delivery date. Output would normally include a record of any defects and planned actions. The key characteristics of management reviews include:

• carried out by management staff
• technical staff provide information
• representation from the project and the customer or user team may be present.

The main purpose of management reviews:

• to make effective management decisions

S4-1-P26 - Using different types of reviews

More than one of the review techniques may be employed on a single product. For example, an informal review could be conducted on an early draft of a document, changes could be made as a result of the review, and then a more formal review type could be performed. Types of review will have different levels of effectiveness depending on the situation. For example, performing an inspection on a document that is an early, incomplete draft will be far too time-consuming for the number of defects that it will raise. In fact, there will be so many trivial defects, such as spelling, punctuation and formatting errors, that these will become 'white noise' and make it impossible to find the important defects. Note that not every type of review will always be cost-effective for a given situation. Remember, reviews in real organisations may not be named or used exactly as described in this course, but the characteristics and principles described here can be used to evaluate the effectiveness of the actual reviews used by an organisation.

S4-1-P27 - Activity

Carefully read through the scenario given on the screen. Make a record of any review types that the project in the scenario might benefit from; there may be more than one. Note any appropriate justification for your choices. When you are satisfied with your answer, move to the next page to reveal the answer.

S4-1-P28 - Activity – continued

There is no right or wrong answer to this question. The answer given here is a good answer. However, from the small amount of information given in the text, and the fact that everyone's experiences are different, it is likely that your answer will be different. Do not be alarmed by this, and don't forget that the BCS Intermediate examination is a multiple-choice examination: you will not be required to come up with your own answer to such a question. You will be required to select the best answer from a list of options.

S4-1-P29 - Activity

In this next activity we are going to provide practical experience of conducting a review. We will analyse its effectiveness and consider the potential effectiveness of other types of review in this situation. Reviews are ideally performed in a group, so it would be best if you can undertake this activity with one to three colleagues. However, if you are taking this course on your own, you can still perform this activity. We are going to perform a technical review on a project product description to ensure that it is internally consistent and fit for purpose. You will need to download the following documents from the screen:

• Review document: Project description
• Source document: Project brief
• Review checklist

Planning

Schedule the review meeting at an appropriate time, allowing approximately 30 minutes. Assign roles: if you are on your own, you will take the role of a reviewer; if there are others, they may take the roles of review leader and scribe.

Kick off

The review leader ensures that everybody has the appropriate documentation and understands the objectives of the review. Assume that the entry criteria for the review have been met.

Individual preparation

Spend about half an hour before the review meeting reading through the three documents and noting potential defects, questions and comments.

Review meeting

During the review meeting the review leader should reiterate the objectives and check that everybody has done their individual preparation and that there are no issues to be resolved before the review progresses. The review leader should progress through the review item section by section and facilitate any discussion that occurs. The scribe should ensure that all defects, questions and comments are recorded.


S4-1-P30 - Activity – continued So how did you find the review process? Key points that you should have noticed are: The planning and kick-off stages were critical, not only to remove logistical or practical constraints, but also to ensure that participants' time was not wasted and the objectives were achievable. Dedicating time for individual preparation was also essential to ensure participants' effectiveness during the review meeting. Having someone with strong facilitation skills ensured that the objectives were achieved within the time constraints during the review meeting. So how effective did you find the review? Did you find defects in the review item? Were you able to gauge the quality of the review item and were you able to quantify this? Key points that you should have noticed are: The level of formality of this technical review was appropriate for the objectives of the review and the level of completeness of the document. The source documentation was essential to be able to carry out the review effectively. The type and seriousness of the defects found enabled you to quantify the quality of the product (i.e. a very small number of trivial defects indicates that the quality is good). What about the other types of review? What would the benefits and pitfalls have been if we had used an informal review, a walkthrough or an inspection? Key points that you should have noticed are: An informal review would have been much quicker but would not have found as many defects. Walkthroughs may have found more defects so long as not too many people were present

Walkthroughs would have helped us gain an understanding of the document, providing the author was present, but this was not the objective of the review. Inspections would have found the most defects, but in this case these defects would be unlikely to be serious and it would have taken an inappropriate amount of time to complete the review. S4-1-P31 - Summary This concludes section 4.1 of this e-learning course entitled 'Reviews and the Test Process'. In this section, we have seen how reviews are carried out in order to find defects and outlined the benefits of carrying out reviews. We introduced the W model and outlined the linkages which exist between development and review activities. Finally we looked at different types of review, including:

Informal reviews

Desk/Buddy Checking

Walkthroughs

Technical Reviews

Formal Inspections

Audits

and Management Reviews. Module 4 Section 2 – Phases of a Formal Review S4-2-P1 - Objectives In this section we will continue our coverage of the types of review; specifically, we will walk through the process for a formal review in greater detail. The phases of a Formal Review are:

Planning

Kick-Off

Individual Preparation

Review Meeting

Rework


Follow-up. The formality of a review process is related to factors such as the maturity of the development process, any legal or regulatory requirements or the need for an audit trail. S4-2-P2 - Roles and Responsibilities The most formal reviews, and inspections in particular, require a range of roles to be defined and performed. A typical formal review will include the following roles: The Manager manages the execution of reviews, allocates time in project schedules and determines if the review objectives have been met. The Moderator is the person who leads the review of the document or documents. Other tasks include planning the review, running the meeting, and follow-up after the meeting. The moderator may mediate between the various parties and is often the person upon whom the success of the review rests. The Author is the writer or person with chief responsibility for the document or documents to be reviewed. The Reviewers are the individuals with a specific technical or business background (also called checkers or inspectors) who, after the necessary preparation, identify and describe findings, such as defects, in the product under review. Reviewers should be chosen to represent different perspectives and roles in the review process and should take part in any review meetings. The Scribe or Recorder documents all the issues, problems and open points that were identified during the meeting. Looking at documents from different perspectives and using checklists can make reviews more effective and efficient. S4-2-P3 - Planning and Kick-Off The first phase is planning, where project time is allocated in the schedule to conduct the review of a project product in a timely manner. Selecting the review team, which is part of the planning exercise, involves allocating and

committing resources to the review team. The review team itself should be selected from technical staff who are qualified to provide informed criticism. Normally, three or four technical staff are involved; the most senior person should chair the meetings and is allowed to make technical decisions, where necessary. Some reviewers may be asked to look at subsets of the material because they may only be interested in part of the material under review, or perhaps are only able to comment on specialist areas. Other reviewers might be invited to submit written comments but not attend a review meeting. Finally, staff who may have an interest but not a direct involvement in the review might be invited to submit comments on direction, quality, understandability or legibility of the document under test. The kick-off phase involves the leader of the review scheduling the review meeting and distributing documents for review. The invitation should explain the objectives, process and documents to the participants and validate the entry criteria (for more formal review types). It is important that enough time is given to the reviewers for them to read the documents and record comments on a form. S4-2-P4 - Activity We know that planning and kick-off are the first two phases of the Formal Review. Can you recall the remaining ones? Using your mouse drag the review phases to their correct position in the diagram. S4-2-P5 - Individual Preparation, Meeting and Rework In the individual preparation phase, each of the participants works on the document on their own before the review meeting, noting potential defects, questions and comments. The review meeting is normally chaired by the senior reviewer and lasts no more than two hours. The author and reviewers attend and more formal reviews will generate minutes, issue lists, metrics etc.


During the rework phase corrections are usually made by the author of the product under review. S4-2-P6 - Follow-up Finally, there is a decision made on the acceptability of the product and a consensus reached on the product content. That is, the review team accepts the product and agrees that this is the best way forward. The consensus view provides this confirmation that the project is going down the right path. Normally, the deliverables are changes to the product that’s been reviewed, but the review may highlight problems in baseline documents. For example, a review of a design document might expose discrepancies within the requirements specification. These discrepancies might be a deliverable of the review process itself. The follow-up phase involves the review leader checking that defects have been corrected and that exit criteria are met. If exit-criteria are not met, a further review would be required. Another outcome of the review might be suggestions for improvements to the overall project process and how the process might be improved next time around. S4-2-P7 - Conducting Review Meetings Here is some general guidance on the conduct of review meetings. Overall comments are briefly considered, possibly summarised by the chair. In conducting the review meeting, quite often the chairman of the meeting will review the comments and take a view on the quality of the document overall. General comments made by the other reviewers might be summarised also, to give people a feel for the level of quality of the document and how far away the document is from completion. One by one, the detailed comments are considered, and the comments themselves are documented. Some may not be issues, so there is no action, or

potentially, some problems may not be worth fixing. Other comments might reflect faults that must be fixed, and the document author must correct these. It is possible that serious design issues come to light that undermine the whole basis of the document. Problems of this severity would normally be considered outside of the review and may require the whole project approach or the foundation for the deliverable to be reconsidered. S4-2-P8 – Activity - Possible Review Outcomes Exit criteria would be useful on formal reviews. Do you think this statement is true or false? S4-2-P9 - Possible Review Outcomes The agreed status of the product under review is usually one of the following: Accepted - document is deemed correct and ready to use in subsequent project activities. The second possible outcome is that the document may be accepted with qualifications. Problems still exist in the documents which have to be corrected. The third outcome is that the document is rejected. Potentially, the document requires major revision and a further review. Perhaps time will be lost: the quality of the document has been assessed and it is just not ready. The risk of proceeding with a poor quality document is avoided. S4-2-P10 - Success Factors for Reviews Here are some critical success factors for successful reviews:

Each review has a clear predefined objective

The right people for the review objectives are involved

Defects found are welcomed, and expressed objectively

People issues and psychological aspects are dealt with (for example making it a positive experience for the author)

Review techniques are applied that are suitable to the type and


level of software work products and reviewers

Checklists or roles are used if appropriate to increase effectiveness of defect identification

Training is given in review techniques, especially the more formal techniques, such as inspection

Management supports a good review process (for example by incorporating adequate time for review activities in project schedules)

There is an emphasis on learning and process improvement.

S4-2-P11 - Pitfalls Possible pitfalls when conducting formal reviews include:

Lack of predefined objectives. Without clear objectives reviews can become a great opportunity for people to meet, discuss issues and degenerate into an ineffectual talking-shop

Wrong people. Having the wrong people in the review means the review is less effective

A 'simple' process. Although reviews and inspections appear to be simple processes, they still require training if participants are to conduct them in an efficient and effective way.

Other pitfalls are:

Not making review guidelines available and not involving experienced reviewers in the early reviews

Project time pressure means inadequate time given to reviewers

Lack of management support

When a review is scheduled, the individuals involved are not released by management and so the reviews simply do not take place

Process improvement suggestions ignored.

Finally, outputs from all review processes include suggestions for improving the process itself. A major pitfall is to ignore or not follow up the improvement suggestions that are generated by review activities. S4-2-P12 - Summary This brings us to the end of Section 4.2 entitled ‘Phases of Formal Review’. In this section we have:

Identified the 6 distinct phases of the Formal Review

Outlined the key roles required to conduct reviews.

We went on to:

Look at the activities associated with each phase

Identify some of the skills required to conduct review meetings

List some of the possible outcomes, namely Accepted, Accepted with qualifications or Rejected.

Finally we identified some of the possible success factors and possible pitfalls of the Formal Review. Module 4 Section 3 – Static Analysis Using Tools S4-3-P1 - Objectives Welcome to Section 4.3 entitled ‘Static Analysis Using Tools’. In this module we will:

Look at the objectives of static analysis

Compare it to dynamic testing. Also in this module:

Typical defects and errors identified by static analysis are compared to those found by reviews and dynamic testing.

We'll also look at the types of code and design defects that may be identified by static analysis tools.


S4-3-P2 - Activity What do you think static analysis tools analyse? S4-3-P3 - Background The objective of static analysis is to find defects in software source code and software models. Static analysis is performed without actually executing the software being examined by the tool, whereas dynamic testing does execute the software code. One of the benefits of static analysis is that it can locate defects that are hard to find in dynamic testing. As with reviews, which were the subject of the previous section, static analysis finds defects rather than failures. Static analysis tools analyse program code. Typical examples include control flow and data flow, as well as generated output such as HTML and XML. S4-3-P4 - Static Analysis Defined Static analysis can be defined as ‘an analysis of code with a view to detecting statically detectable faults within the software.’ As such static analysis does not require the software to be running; rather it is an activity conducted on source code. Put simply, a statically detectable fault is something that is visible to the trained eye. One category of fault that can be exposed by these analyses is ‘syntax faults’, for example where the syntax of a language used in a program is not adhered to. Definition-use anomalies are faults where the use of variables within the software does not form sensible patterns or sequences. Control-flow analysis is an activity where, very broadly, we look at the structural complexity of the software. We know that the quality of software is closely related to its complexity, because complex code is often difficult to perfect and is often fault prone. In general, highly complex code is difficult to understand, to maintain, as well as to test.
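As a rough illustration of the kind of measure such a tool might calculate, here is a minimal sketch in Python (our own example, using only the standard-library ast module) that computes a crude McCabe-style complexity count without ever executing the code it examines:

import ast

# Node types that introduce a branch in the control flow.
BRANCH_NODES = (ast.If, ast.For, ast.While, ast.BoolOp, ast.ExceptHandler)

def cyclomatic_complexity(source: str) -> int:
    """Crude McCabe-style measure: 1 plus the number of branch points."""
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, BRANCH_NODES) for node in ast.walk(tree))

sample = """
def classify(month):
    if month < 1:
        return "invalid"
    if month <= 12:
        return "valid"
    return "invalid"
"""
print(cyclomatic_complexity(sample))  # prints 3: two decisions plus one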

Metrics can be generated by a complexity analysis; these can be used to derive test cases as well as to provide an indirect measure of the quality of the software. Static analysis can be done manually with small programs, but for most practical purposes it requires automated support. S4-3-P5 - Benefits of Static Testing Static testing can offer many benefits including:

Early detection of defects prior to test execution

Early warning about suspicious aspects of the code or design, by the calculation of metrics, such as a high complexity measure

Identification of defects not easily found by dynamic testing

Detecting dependencies and inconsistencies in software models, such as links

Improved maintainability of code and design

Prevention of defects, assuming lessons are learned in development.

S4-3-P6 - Using Tools for Static Analysis Let’s take a few moments to look at some typical tools which can assist in static analysis. They are typically used by developers who are checking against predefined rules or programming standards, before and during component and integration testing, and by designers during software modelling. A compiler can be regarded as a static analysis tool, but it doesn’t provide a great deal of output: it simply tells you whether the software compiles and reports error messages when it does not. Once it compiles, it still may not work. A 'lint' tool is a static analyser, but typically one that doesn't go into any depth and does not follow the logic of the program to detect faults. Commercial tools can do what is known as “deep flow” static analysis; this involves analysing the code to a very high degree. These tools carry out very sophisticated


tracing of the flow of data and the flow of control through the programs. Typical outputs from a static analysis tool are various recommendations for improving the code. Remember, all of this is produced without actually running a test! Typically the programmer writes the code and conducts some informal testing. Then the program is passed through the static analyser. This might identify 50% of the bugs typically identified by running tests. This has the potential to save the programmer a great deal of time. Potential disadvantages include:

They can be expensive

They require a certain amount of configuration

They’re not available for all environments.

Programmers don’t always use them even if they are available. Static analysers do throw out caution messages for items that the programmers know are OK. S4-3-P7 - Typical Defects Found Some static analysis tools are actually free utilities. ‘Lint’ is a static analysis tool, normally associated with the ‘C’ language, which is free with UNIX. Lint identifies legal but bad programming practices. For example, ‘gotos' are generally regarded as ‘bad practice’ in all programming languages. Other examples of bad practice include:

Implicit conversion on assignment. The fault could be that a 16-bit integer variable is forced to store a 32-bit integer value

Casting. Casting is the conversion of variables as they are assigned to other variable types. This is considered bad practice because the conversion process itself may compromise the quality of the data being converted.

Subroutines having multiple entry or exit points are generally regarded as a bad idea in software.

Other typical defects identified by static analysis tools include:

Security vulnerabilities

Syntax violations of code and software models

Unreachable or ‘dead’ code

Variables which are never used

Inconsistent interfaces between modules and components

Referencing a variable with an undefined value.
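To make these concrete, here is a small hypothetical fragment (our own example, not from the course) containing several of the statically detectable defects listed above; a lint-style tool would flag all of them without running the code:

def apply_discount(price):
    rate = 0.05               # defect: variable assigned but never used
    if price < 0:
        return None
        print("negative")     # defect: unreachable ('dead') code
    return price - price * discount  # defect: 'discount' is never defined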

S4-3-P8 - Summary This concludes this brief section on static analysis using tools. In this section we have:

Looked at the objectives of static analysis

Compared it to dynamic testing. We went on to:

Highlight the typical defects and errors identified by static analysis

Looked at the types of code and design defects that may be identified by static analysis tools.

Module 5 – Test Design Techniques

Module 5 Section 1 – Identifying Test Conditions and Designing Test Cases S5-1-P1 - Objectives Welcome to Section 5.1 of Module 5, ‘Test Design Techniques’. The subject of this first section is the test development process. In this module, we will explain the differences between:

Test design specifications

Test case specifications

Test procedure specifications. We'll also look at how test cases are written, showing a clear traceability to the requirements and containing an expected result. Test cases can be transformed into a well-structured test procedure specification, at


a level of detail for testers to execute the test itself. Test execution schedules set out the sequence of execution of test procedures and must take account of prioritization and technical and logical dependencies. The process of identifying test conditions and designing tests consists of three steps:

Designing tests by identifying test conditions

Specifying test cases

Specifying test procedures. S5-1-P2 - Structure of Test Plans Let’s begin by looking at the structure of test plans. This diagram has been reproduced from the IEEE 829 Standard for Software Test Documentation. The standard defines a comprehensive structure and organisation for test documentation and composition guidelines for each type of document. In the BCS scheme IEEE 829 is being promoted as a useful guideline and template for project deliverables. You don't need to memorise the content and structure of the standard, but the standard number IEEE 829 might well be given as a potential answer in an examination question. However this is a standard for documentation, but makes no recommendation on how you do testing itself. You can see that the IEEE standard gives you the options of creating Test Case Specifications or moving straight from Test Design Specifications to Test Procedures. As long as Test Procedures contain enough information to run a test and provide traceability to the test design a Test Case Specification need not be produced. S5-1-P3 - Test Conditions The Test Development Process can be approached in different ways, from very

informal with little or no documentation, to very formal, as it is described in this section. The level of formality depends on the context of the testing, including the organisation, the maturity of the testing and development processes, time constraints, and the people involved. During test analysis, the test basis documentation is analysed in order to determine what to test, in other words, to identify the test conditions. A test condition is defined as an item or event that could be verified by one or more test cases, for example, a function, transaction, quality characteristic or structural element. Establishing traceability from test conditions back to the specifications and requirements enables both impact analysis, when requirements change, and requirements coverage to be determined for a set of tests. The ‘Standard for Software Test Documentation’ (IEEE 829) describes a Test Design Specification for the documentation of test conditions. The name of this document may at first be confusing. However, remember that a test condition will be verified by one or more test cases. So think of the list of test conditions as specifying what the design will be, hence test design specification. S5-1-P4 – Test Cases and Expected Results During test design the test cases and test data are created using test design techniques. A test case consists of a set of input values, execution preconditions, expected results and execution post-conditions, developed to cover certain test conditions. The ‘Standard for Software Test Documentation’ (IEEE 829) describes a Test Case Specification for the documentation of test cases. Expected results should be produced as part of the specification of a test case and include outputs, changes to data and


states, and any other consequences of the test. If expected results have not been defined then a plausible, but erroneous, result may be interpreted as the correct one. Expected results should ideally be defined prior to test execution. S5-1-P5 - Test Procedures The test cases are assembled in an executable order. This is the test procedure. The test procedure (or test script) specifies the sequence of actions for the execution of a test and can be manual or automated. If tests are run using a test execution tool, the sequence of actions is specified in a test script, an automated test procedure. The various test procedures and automated test scripts are subsequently formed into a test execution schedule. This defines the order in which the various test procedures and possibly automated test scripts are executed, when they are to be carried out and by whom. Here we can see a sample Test Execution Schedule. The test execution schedule will take into account such factors as regression tests, prioritization, and technical and logical dependencies. S5-1-P6 - Activity In section 2 we looked at the fundamental test process and defined the terms test condition, test case and test procedure. Drag and drop the following terms to their correct location. A test condition is an item or event of a component or system that could be verified by one or more test cases. A test case is a set of input values, execution preconditions, expected results and execution post-conditions, developed for a particular objective or test condition. A test procedure specification is a document specifying a sequence of actions for the execution of a test.
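To illustrate how these artefacts relate in practice, here is a minimal sketch (our own example, written in Python using the standard unittest module) in which each test method records one test case - inputs plus expected result - for a hypothetical month-validation function, and the test class as a whole acts as a simple executable test procedure:

import unittest

def is_valid_month(m):
    # Hypothetical item under test: months 1..12 are valid.
    return 1 <= m <= 12

class MonthValidationProcedure(unittest.TestCase):
    """Each method is one test case; the class is run as a procedure."""

    def test_below_range_rejected(self):   # condition: m < 1
        self.assertFalse(is_valid_month(0))

    def test_in_range_accepted(self):      # condition: 1 <= m <= 12
        self.assertTrue(is_valid_month(6))

    def test_above_range_rejected(self):   # condition: m > 12
        self.assertFalse(is_valid_month(13))

if __name__ == "__main__":
    unittest.main()  # the runner executes the test cases in sequence

Note that the expected result for each test case is taken from the specification of the rule, not from the code under test.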

S5-1-P7 - Activity IEEE 829 defines three test specification documents. Can you identify which of the BCS terms relates to which document? Drag and drop the following terms to their correct location. A test design specification is a document specifying the test conditions (coverage items) for a test item, the detailed test approach and identifying the associated high level test cases. A test case specification is a document specifying the test cases (objective, inputs, test actions, expected results, and execution preconditions). A test procedure specification is a document specifying a sequence of actions (sequence of test cases) for the execution of a test. S5-1-P8 - Summary We began this section by outlining the three steps involved in the process of identifying test conditions and designing tests, namely:

Designing tests by identifying test conditions

Specifying test cases

And specifying test procedures. We went on to look at how a test case consists of a set of input values, execution preconditions, expected results and execution post-conditions, developed to cover certain test conditions, and how expected results should be produced as part of the specification of a test case. Finally we looked at how test cases are assembled into executable order in the form of a test procedure specification and looked at a sample test execution schedule.


Module 5 Section 2 – Categories of Test Design Techniques S5-2-P1 - Objectives The phrase 'test techniques' normally refers to the techniques used to select the 'best' tests from the infinite number possible. In Section 5.2 we will look at ‘test techniques’ in more detail. This section also describes the important concepts of black-box, white-box and experience-based techniques. Functional and structural techniques are more formal and have two dimensions, namely:

Test design - the process of identifying test cases

Test measurement - the process of setting objective coverage targets and measuring progress towards them.

Finally, experience-based techniques are less formal and rely on the skill, intuition and experience of the tester. S5-2-P2 - Test design techniques – overview The principle of a test design technique is that you create a model of the software and use the model to derive test cases. It’s usually very complicated to derive tests directly from the code because programs can be very large. There’s often a lot of code in programs that is of no concern. Using a model simplifies the problem of finding useful test cases. Using a model is a sensible thing to do. Think about engineering: engineers may build a physical model of a bridge before they build it. They would also build a mathematical model to calculate stresses, etc. In code production, we might build a prototype, another model, so that we could test certain features before we commit ourselves to building the real code. A model helps us focus on the error-prone features of software and enables us to generate test cases in a reliable way. We create a model and we prepare our test cases from that, based on what you might call the coverage items. Beizer proposed the tester's maxim:

“if you have a model or graph of software… what does a tester do? He covers it”. So you can draw a diagram, and you can trace paths to cover the model. Diagrams are useful to testers. The target may be to get 100% coverage of whatever the model describes. Testers usually have multiple models. Each test technique is specialised at finding a certain category of bugs. Certainly, if testers are in a high-integrity environment, they may use three or four different techniques, offering different coverage and often complementing each other. S5-2-P3 - The differences between static and dynamic testing Static testing includes reviewing documentation, such as specifications and plans; and static analysis, which is the analysis of code, usually carried out with tools. Static testing involves no actual execution of the code. Dynamic testing is where testing is carried out by executing code. Static testing and dynamic testing both have the same objective, to identify defects. However, static testing finds defects directly whereas dynamic testing finds failures from which we can then identify defects. Static testing and dynamic testing will also find different types of defect. The strengths of reviews are that they can help us find defects early in the life cycle when they are likely to be cheapest to fix. Their weakness is that they are not systematic and the defects found will depend on the specific skills and motivations of the individuals carrying out the review. The strength of static analysis is that it can be automated to quickly and cheaply find specific defects that we know occur. The weakness of static analysis is that it will only find known types of defect; there may be unknown types of defect in the system. It will also report defects regardless of how serious they are or whether or not they lead to a failure.


The strength of dynamic testing is that it can be focused on the parts of the system that are most critical and, if done well, will find the important defects. It can also find defects that static testing has missed. However, the weakness is that it is done late in the life cycle when defects are most expensive to fix. Testing should include reviews, static analysis and dynamic testing. We looked at static testing in the last section. In this section we will be looking at different techniques for dynamic testing. S5-2-P4 - Functional or Black-box or Specification-based Testing Testing has too many synonyms. Black-box and white-box are the most common terms and easy to remember. But black-box testing is also called functional or specification-based testing, which is really confusing because there are two uses of the term ‘functional' in the world of testing terminology (we don’t know how this happened, but we are left with it). In the context of black-box testing, functional testing refers to testing done from the functional specification and not the structure of the code. So we have functional versus structural testing. But there is another use of the term functional testing and that is a functional system test. In this case, functional is used to differentiate between testing the functions of a system versus the non-functional characteristics such as performance, reliability and so on. The syllabus refers to specification-based and experience-based approaches as functional techniques, and to structure-based approaches as structural techniques. S5-2-P5 - Black-box Techniques Black-box testing involves treating the construction and the code of the software as an unknown. In other words, you don’t know what’s inside the box, you just test it based on its specification. Testers use the requirements to prepare test cases and prepare expected results from the specification. Here is a list of the functional or black-box test techniques. We'll meet some of them later in the course.

S5-2-P6 - Structure-based or White-box Testing Structure-based testing/white-box testing is based on an identified structure of the software or system. Some typical examples include: Component level - the structure is that of the code itself, for example, statements, decisions or branches Integration level - the structure may be a call tree, in other words a diagram in which modules call other modules System level - the structure may be a menu structure, business process or web page structure. Two code-related structural techniques for code coverage, based on statements and decisions, will be covered later in the course. For decision testing, a control flow diagram may be used to visualize the alternatives for each decision. S5-2-P7 - Structural Testing Explained The 'opposite' of black-box or functional testing is white-box, which is sometimes called glass-box or structural testing. You can see inside the box or in other words, you can see the structure of the code. So, to clarify the functional versus structural definition, structure is where testers can see inside the internals of the software and you can use a diagram of the structure to derive test cases. Whereas with functional testing, testers just look at the required behaviour or functionality; the functional aspects of the products under test, not how they are built. Candidates will have to remember this for the BCS exam. There may be a question on black-box versus white-box or structural versus functional. S5-2-P8 - Structural Testing The important thing about white-box, glass-box, or structural testing is that testers use the internal structure of the software, i.e. the code, to prepare their test cases.


In preparing test cases, testers trace paths through a model of the software. The code must be visible and an understanding of the internals is required to do this. However, testers must still use the specification for the expected results, after all you wouldn’t look at the code and say, ‘well the code tells me that it’s going to do this.’ Here is a list of the white-box, glass-box or structural test techniques. We'll look at these in more detail later in the course. S5-2-P9 - Functional versus Structural Testing Programmers do mainly white-box testing because they look at the code. They code a bit, they test a bit, and derive test plans from that. They do look at the specification occasionally. Low-level tests refer to the unit/module/component level and also the integration between components. Unlike programmers, users don’t know how the code works. User testing is always black-box testing, and system testing is mainly black-box testing also. This testing is at a higher level, so the trend is mainly white-box testing progressing to mainly black-box testing. These include system tests and user acceptance tests. S5-2-P10 - Other Structure-based Techniques Later in the course, we'll look in more detail at statement and decision testing. But for now, you should know that there are "stronger" levels of structural coverage beyond decision coverage. In this context stronger means that it takes more tests to achieve 100% coverage. Stronger can be taken to mean more thorough, but also more expensive. Examples of this are branch condition coverage and branch condition combination coverage. The concept of coverage can also be applied at other test levels, for example, at integration level, where the percentage of modules, components or classes that have been exercised by a test case suite could be expressed as module, component or class coverage.
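As a brief illustration (a minimal sketch in Python, our own example rather than one from the course), here is a function where a single test achieves 100% statement coverage, but decision coverage requires a second test to exercise the False outcome of the decision:

def apply_surcharge(total):
    if total < 100:          # the only decision point
        total = total + 5    # the only statement inside the branch
    return total

apply_surcharge(50)    # executes every statement: 100% statement coverage
apply_surcharge(150)   # also needed for the False outcome: decision coverage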

Tool support is essential for the structural testing of code. It is virtually impossible to calculate coverage manually, so specialist tools that 'instrument' the code before testing are used. As tests are run, this instrumentation logs the path of execution and these logs can be used to trace which statements and decisions have been covered. S5-2-P11 - Why Use Techniques? So why use techniques? Well, techniques help us get to the point. Ultimately we want to be proficient and effective testers. We don’t want duplicate tests, but we do want tests that are effective at finding the bugs. And techniques help us do both. Exhaustive testing of all program paths is usually impossible. Exhaustive testing of all inputs is also impossible and even if we could, most tests are duplicates that would prove nothing. So we need to select tests which are effective at finding faults and are efficient. S5-2-P12 - Effectiveness and Efficiency ‘Techniques generate test cases that have a good chance of detecting a fault’. By and large, this statement is true and the tests are good at finding faults. Whether they generate the best tests is impossible to say. A test that exercises the software in ways that we know will work proves nothing.

Effective tests are those tests which are designed to catch specific faults

Efficient tests are those which have the best chance of detecting faults for the least effort.

S5-2-P13 - Test Case Design Techniques We've established that we can't test everything; using test techniques gives us the best chance of detecting errors (because they generate test cases to detect specific types of errors). If you follow a technique, you can’t switch your brain off, but you can follow a mechanical process to derive the test cases. So, it makes for a more reliable process.


And using test techniques gives you quantifiable objectives. If you have two testers using the same technique on the same specification, you should get the same test cases. So, it becomes a more reliable, objective way of designing tests. This is a good thing. S5-2-P14 - Test Case Design Techniques (2) Test techniques also enable a separation of concerns. You can separate the test case design and the build of a test. The first step in making a test case is the test case selection or ‘test design’. You don’t need to start creating test data when you design your test cases. You can create test cases and have them reviewed because they are logical test cases. You're not committed to test scripts yet. The second aspect of making a test case is the creation of test data and expected results preparation, or ‘test implementation’. Techniques also help testers to 'get to the point' before the test scripts are written, which is when the paperwork suddenly grows if you are going to have detailed test scripts. S5-2-P15 - Experience-based Techniques In the case of experience-based techniques, the knowledge and experience of people are used to derive the test cases. Testers, developers, users and other stakeholders can use their knowledge of the software, its usage and its environment. They can also use their knowledge about likely defects and their distribution within the software. The two main experience-based techniques are error guessing and exploratory testing. S5-2-P16 - The differences between scripted and unscripted testing Most of the techniques described in this section are scripted techniques. Scripted testing is where tests are determined

before test execution and are written down. The advantages of scripted testing are that the tests are repeatable and test coverage can be measured. However, creating test scripts is time consuming and therefore expensive. Unscripted testing is where the tests are determined during test execution. The only unscripted technique that is described in this section is exploratory testing. Unscripted testing will avoid the cost of creating test scripts and, if done well by experienced testers, may find important defects quickly. However, the tests may not be repeatable and the effectiveness of testing using unscripted techniques is very dependent on the skills and experience of the tester. S5-2-P17 - Categories of techniques In this section we will look at these functional testing techniques:

equivalence partitioning

boundary value analysis

state transition testing

decision table testing

use case testing

Functional testing techniques can also be called black-box or specification-based techniques. If you use the term specification-based techniques remember that non-functional techniques such as performance testing and usability testing can also be referred to as specification-based techniques. We will look at the following structural techniques:

statement testing

decision testing

Structural testing techniques can also be called white-box, clear-box or glass-box testing. We will look at the following experience-based techniques:

error guessing


exploratory testing

With the exception of exploratory testing, all these techniques are scripted techniques. Exploratory testing is an unscripted technique. S5-2-P18 - Summary This concludes Section 5.2, entitled ‘Categories of Test Design Techniques’. In this section we described the important concepts of black-box, white-box and experience-based techniques, and went on to outline how functional and structural techniques are more formal and have two dimensions, namely:

Test design - the process of identifying test cases

And test measurement - the process of setting objective coverage targets and measuring progress towards them.

Finally, we saw how experience-based techniques are less formal and rely on the skill, intuition and experience of the testers involved. Module 5 Section 3 – Equivalence Partitioning S5-3-P1 - Objectives Welcome to Section 5.3. The subject of this section is equivalence partitioning. But what is equivalence partitioning? Well it’s the first of two black-box techniques which we need to cover. It sounds really impressive and vaguely mathematical, but put simply it’s a formalisation of common sense. Although it is a simple concept, it has universal application to all software applications so it’s important that you understand this technique. The scope of this course prevents us from covering all the applications of the technique. Here we will concentrate on the simplest. In this section we will:

Describe what is meant by the term equivalence partitioning

Take a look at some practical examples to aid our understanding

Look at some equivalence classes including Range, Count, Set and Must

Explore Output partitions and hidden partitions and cover some day-to-day examples of each

We’ll go on to discuss how to identify partitions for ranges and conclude the section by considering test selection from partitions.

S5-3-P2 - Equivalence Partitioning When software is developed, the requirements documents normally state rules that the software must obey, for example rule A might be “if m is less than one, then do this”. Rule B might be, “if m is greater than or equal to one, and less than or equal to twelve, then do this…” and so on. Think about the rules for what values a month field can hold. If the month is less than one, that’s invalid. If it’s between one and twelve, that’s okay and the value can be accepted. If it’s greater than twelve, that’s invalid and you print an error. So what we could say is that of all the infinite range of integers that could be entered into a system, a value must fall into one of those categories: less than one, between one and twelve, or greater than twelve. If we select one value from each range of numbers, and use this as a test case, we could say that we have covered the rule. For the purposes of equivalence partition coverage, does it matter what values we take? Well not really. Every value in a partition is equivalent. This is where the technique gets its name. By doing this, our infinite number of possible values can be reduced to three that completely cover the business rules. S5-3-P3 - Activity - Equivalence Partitioning For the purposes of equivalence partition coverage, does it matter what values we take?


S5-3-P4 - Equivalence Partitioning Example Equivalence partitioning really just formalises common sense. Most people would create at least one test for each rule they could identify. You’d probably do this without even thinking about it. Users often pick one test case per rule to give the confidence that the rule has been implemented - they don't normally go much further than that. The technique, when used properly, is a systematic way of identifying partitions and ensuring that they are all tested at least once. The other major benefit is that the technique reduces the number of possible tests to a small number; this in turn gives some confidence. Here’s an example of a competition. Contestants send a voucher from the back of a cereal box to a company and receive a response saying that they can claim a prize; they have thirty days to do so. They can post the claim the day they receive the response (that's zero days), but they obviously can't post it before they receive it (yesterday, so -1 days, is invalid). Between the receipt and the cut-off date they have thirty days, so 30 days after receipt is valid and 31 days is invalid. Suppose you identified these ranges of valid and invalid values. You could say in your test strategy that you would cover all the equivalence partitions. You count the partitions and aim to test each one in turn by selecting just one value in each partition. Now you have an objective coverage measure, not guesswork. This isn’t a strong test strategy, but it’s a strategy.
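A minimal sketch of this strategy in Python (our own example, assuming a hypothetical claim-validation function) shows one test value being selected from each of the three partitions:

def claim_accepted(days_since_receipt):
    # Hypothetical rule under test: claims within 0..30 days are valid.
    return 0 <= days_since_receipt <= 30

partitions = {
    "invalid, below the range": -1,  # any value below 0 is equivalent
    "valid, 0 to 30 days":      15,  # any value in the range is equivalent
    "invalid, above the range": 45,  # any value above 30 is equivalent
}
for name, value in partitions.items():
    print(name, "->", claim_accepted(value))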

A range is between two numeric values, and between those numbers you have a valid class. An invalid class is outside of the range. So, ranges are straightforward. There may be an equivalence class that is a count. It’s like a range, but it’s a collection of numbers. Another equivalence class is a set. A set just specifies a number of specific values. In this example, the values must be BUS, CAR or TAXI to be valid and everything else is invalid. So, any one of these values would do to test the processing rules for this equivalence class, and any name other than those would be ok to test the invalid. The last type of equivalence class we've called a ‘must’. This might be that the input value must begin with a letter. Anything lowercase from a-z or uppercase from A-Z will be a valid equivalence partition, and any non-alphabetic character will exercise the processing of an invalid one. Note that the ASCII values for A-Z and a-z are not contiguous. Although these ASCII values are not contiguous, the set of characters is well defined in the rule. S5-3-P6 - Identifying Equivalence Partitions (2) When using a set partition you may need to check that the correct values have been coded in the set. You can be pretty sure that if a range of numbers is used that the programmer will use a range type conditional test. In essence you can verify the outcome of the test but you may need extra tests to check the correctness of the outcome - this is covered by BVA, (Boundary Value Analysis), for a range but not for a set. This is true, but that wouldn't be equivalence partitioning. All equivalence partitioning does is demonstrate the existence of ONE value that signifies a rule has been implemented (in some way). In the case of a set with a 'random' set of values to be checked for, testing one value gives little confidence. But, suppose you were dealing with customer IDs in a database. These could be pretty randomly generated. Do you need to test all IDs to demonstrate the ID validation code works? No. All Equivalence partitioning


demonstrates is that the validation works FOR ONE VALUE. Whether all the contents of the 'valid' set are present would be a different test. S5-3-P7 - Output Partitions So far we have looked at input partitions, but output partitions are also important. Input partitions are used on the input side of the system, particularly for validation. The rules usually say, these are valid entries and these are invalid entries. But valid-invalid are not the only options. You may have different processing rules for different equivalence partitions. Output partitions exist when the system knows about partitions, but they are not directly input. A good example is income tax bands or brackets. Different salaries place people in different tax brackets. You could choose different salaries to force the system to select different tax brackets. From a testing point of view, you want to be sure to have a test case that exercises each output partitioning rule. For example, people that pay no tax or people that are in the top tax bracket could be processed differently by the system. In an insurance application, factors in the input can be accumulated to form a 'score' which places customers in one risk classification or another. Testers would want to make sure that each of the predefined risk classifications is tested. Another good example is to think of output partitions based on accumulation of the input. A typical example of this is discounts based on order value. If you spend less than £100, then no discount applies. For a £100-350 spend, the order is discounted by 5%. And for greater than £350, a discount of 10% applies. You clearly have three output partitions that would need to be tested.
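Here is a minimal sketch in Python (our own illustration of the discount rule just described); the three chosen inputs force the system into each output partition in turn:

def discount_rate(order_value):
    # Hypothetical rule under test, mirroring the discount bands above.
    if order_value < 100:
        return 0.00          # output partition: no discount
    elif order_value <= 350:
        return 0.05          # output partition: 5% discount
    return 0.10              # output partition: 10% discount

for order in (50, 200, 500):  # one input per output partition
    print(order, "->", discount_rate(order))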

S5-3-P8 - Hidden Partitions Let’s look at the notion of a ‘hidden partition’. This is not something that testers encounter frequently. Suppose that you had a field which captured numbers between 0 and 999 on a screen; it’s a 3-digit number. Potentially, the programmer might have used a single byte to hold that number. A single unsigned byte can hold values from 0 to 255, but to hold a greater number, you need to extend to two bytes. This is a hidden partition because it is not obvious by looking at the requirements, but it is a possible source of error. If the tester thought of this, then they might try a number greater than 255. Hidden partitions can occur on input and output. There are internal partitions as well. Unless you generate test cases based on previous experience, knowing where errors might occur, hidden partitions are more appropriate to white-box testing. S5-3-P9 - Identifying the Partitions for Ranges You have learnt previously that the process of test design finds faults in baseline documents. When using a technique such as equivalence partitioning, you will find anomalies in requirements which were not obvious to you before and immediately cause you problems when trying to design tests. Suppose a requirement stated that a data field must have a value between two numbers that define a range, for example, “…value must be between 23 and 49”. Does “between” include the values 23 and 49? If the language used says the value must be between 23 and 49, is that inclusive or exclusive? We don’t know. The range 23 to 49 is valid. Less than 23 and greater than 49 are invalid. But does this mean <=22? Or does it mean <=22.9999999? Are you dealing with integers or real numbers? If you are dealing with integers, and the valid range is inclusive, 23 is a valid number and 22 is invalid. But, if you are dealing with real numbers, the range of invalid numbers might be up to and including 22.9999999. If you don’t think about the precision of the numbers you might find that your ranges exclude certain values. For example, if you choose the invalid range up to and including 22 and the valid range is from 23 upwards, what does the system do with the value of 22.5? You can see the problem?
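A quick sketch in Python (a hypothetical faulty implementation of our own invention) shows why integer-only test values would never expose this kind of precision fault:

def in_valid_range(value):
    # The coder assumed whole numbers: "between 23 and 49" means > 22.
    return value > 22 and value < 50

print(in_valid_range(22))    # False - looks correct with integers
print(in_valid_range(23))    # True  - looks correct with integers
print(in_valid_range(22.5))  # True  - but should 22.5 really be valid?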


This kind of problem most often arises where you are dealing with monetary values. For example, if the valid range was between £23 and £49, the invalid ranges would be anything less than or equal to £22.99 and greater than or equal to £49.01. You can see, therefore, that you must consider the pennies and not just the pounds. Thinking about testing in this way throws up anomalies in requirements and uncertainties about the precision of the requirements themselves. So don’t forget about the precision of numeric ranges. S5-3-P10 - Activity Here’s a little exercise on the subject. Imagine you have a savings scheme which pays you interest at variable rates like so:

0.5% for the first £1000 credit
1% for the next £1000
1.5% for the remaining balance

If you wanted to be sure that the bank was paying you interest at the correct rate, then what valid input partitions would you use? S5-3-P11 - Selecting Tests from Partitions Knowing all this, the question arises: which value should you select out of the partition? For example, in a partition of 1-100, which value would you choose for a test? Does it matter which value you take? Well, think about it for a moment. We have selected partitions on the basis that all of the values in that partition are equivalent. So, by definition, it really doesn’t matter which value you choose. You might think that there are several values that are of interest to you as a tester. This might indicate that equivalence partitioning is too weak a strategy for the software under test. This is to be expected. Equivalence partitioning is a “weak” strategy in that there is clearly more than one interesting value that you could choose for each partition. In the case of ranges, we might be more interested in the values at the extreme ends of the partitions, and this is

where boundary values come into play. We'll cover those in the next section. S5-3-P12 - Summary This concludes Section 5.3 on the subject of Equivalence Partitioning. In this section we:

Described what is meant by the term equivalence partitioning

Looked at some practical examples to aid your understanding

We went on to look at equivalence classes, output partitions and hidden partitions, and described some day-to-day examples of each

Finally we described how to identify partitions for ranges and concluded the section by considering test selection from partitions, where we saw that, once partitions are identified, the choice of value within a partition has no consequence for our tests.

Module 5 Section 4 – Boundary Value Analysis S5-4-P1 - Objectives Welcome to Section 5.4. In this brief section we’ll be looking at boundary value analysis. The objective of this section is to provide you with an understanding of this second black-box technique. Like equivalence partitioning, it sounds really impressive and vaguely mathematical. Boundary values make really useful test cases because they flush out some of the most common faults in code. Again, like equivalence partitioning, it is simply a formalisation of common sense and in the same way is universally applicable. S5-4-P2 - Boundary Value Analysis It’s a fact that many bugs are found in the decision-making code that works around the ‘boundaries’ of the rules, or the equivalence partitions. Decision-making in code is based on comparison statements like ‘greater than’, ‘less than’, ‘greater than or equal to’, ‘less than or equal to’, ‘equal


to’, ‘not greater than’ and ‘not less than’. A typical program is full of these intricate little decisions, so it’s easy for a programmer to get them wrong. Many bugs occur when a programmer tries to implement code that partitions input data. In a routine that validates some data, the rule might be 'off by one'. Take the example of validating a month of the year. If an error in coding has been made, it’s likely to be that either 1 or 12 is treated as part of the invalid partition or 0 or 13 is treated as part of the valid partition. It’s unlikely on a range check that 6 will be invalid, unless the whole equivalence partition has been left out. Experience tells us that tests of the boundaries between partitions are valuable because they find the bugs, if they exist. Boundary value analysis is simply an enhancement to equivalence partitioning. S5-4-P3 - Boundary Value Analysis and BS7925-2 So the rule is a simple one: test cases just above, just below and on the boundary find faults. The process is a simple extension of equivalence partitioning (a short sketch follows the list below), in that we:

Identify the equivalence partitions

Identify the boundaries of the partitions

Choose test cases on the boundary and just outside.
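Here is a minimal sketch in Python (our own example, reusing the hypothetical month-validation rule discussed above); for each boundary we test on it and on either side of it:

def is_valid_month(m):
    # Hypothetical rule under test: months 1..12 are valid.
    return 1 <= m <= 12

lower_boundary = (0, 1, 2)     # just below, on, and just above the boundary
upper_boundary = (11, 12, 13)  # just below, on, and just above the boundary
for m in lower_boundary + upper_boundary:
    print(m, "->", is_valid_month(m))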

Note that although two test values might seem enough, BS 7925-2 suggests that values on the boundary AND either side of the boundary are what is required. S5-4-P4 - Choosing the Boundary can be Subjective Choosing the boundary can be subjective. Consider these two examples. The same requirement, but you could have different test values depending on the boundary: 99, 100, 101 or 98, 99, 100. Which is best? Well, because testers can be “one out”, the standard suggests three boundary values, not two, to guarantee you

get the BEST two test cases. An example is displayed here. S5-4-P5 - Boundary Value Analysis Example As an example, suppose we took a simple requirement that said that valid values would be in the integer range: 0 to 100 and all numbers outside that range are invalid. Equivalence partitioning would give us three partitions:

1. Less than 0

2. Greater than or equal to 0 and less than or equal to 100

3. Greater than 100.

So possible test cases to cover the three partitions might be -5, 56 and 101. Boundary value analysis suggests we use the boundary values and values either side of these. In this case the test cases are: -1, 0, +1, 99, 100 and 101. As a rule, it takes one test case to cover each partition, but three to cover each boundary value. Boundary value analysis is therefore a stronger test strategy than equivalence partitioning and subsumes it where the partitions are ranges of values: if you have covered all boundary values, you automatically get EP coverage.
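To make this concrete, here is a minimal sketch in Python of how these boundary value test cases might be written. The is_valid function is our own illustrative implementation of the 0 to 100 rule, not code taken from the course:

def is_valid(n):
    # Accept integers in the range 0 to 100 inclusive; reject everything else.
    return 0 <= n <= 100

# Values on each boundary and either side of it, with expected results.
boundary_tests = [(-1, False), (0, True), (1, True),      # lower boundary
                  (99, True), (100, True), (101, False)]  # upper boundary

for value, expected in boundary_tests:
    assert is_valid(value) == expected, f"boundary test failed for {value}"

A typical 'off by one' bug, such as writing 0 < n instead of 0 <= n, would be caught immediately by the (0, True) case.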

S5-4-P6 – Exercise: Boundary Value Analysis Here's a little exercise. Download or open the file using the link on screen and complete the exercises. Then consider the following question: aren't the requirements a little ambiguous sometimes? The techniques are useful for finding faults in baseline documents. Even if you throw the test cases away, the faults you find in requirements would still make the technique a useful thing to use. In fact, some people use the process of test design as an accelerated review method.

For example, if you were given a requirements document to review, consider preparing some test cases directly from the requirements. Where you have difficulty interpreting requirements to prepare a test case, it is likely that the developers will have the same difficulty when they come to write the code. The process of test case design using systematic methods is a very useful skill to acquire. This concludes this brief section on the subject of boundary value analysis. Module 5 Section 5 – Decision Table Testing S5-5-P1 - Objectives Decision tables are a good way to capture system requirements that contain logical conditions, and to document internal system design. In Section 5.5 on decision table testing we will see how they can be used to record complex business rules that a system has to implement. Decision tables are particularly useful when the number of combinations of logical conditions relevant to the behaviour of the system is large. They are also a compact way of summarising the combinations that are relevant for testing, and the same format can be used to document test cases directly. S5-5-P2 - When to Use Decision Tables When using decision tables to derive test cases there are some basic constraints you should be aware of. The technique is justified when:

The specification is given as a decision table (or can be converted to one)

The order of evaluation of conditions is immaterial

Once a rule is satisfied, no other rule need be examined

If multiple actions can result from a rule, the order they are executed doesn't matter

These conditions are not as restrictive as they first appear, because in most situations the order does not matter. S5-5-P3 - A Simple Requirement To introduce the concept of decision tables and how they are used to derive test cases, let's consider a simple application. Here's a typical 'log in' screen for an application. The screen has two fields and a login button. The requirements can be simply stated as follows (a sketch of this logic in code appears after the list):

In order to log in, the user must enter a valid username and password, and their account must be activated. The system then displays the message "User Logged In"

Invalid usernames are rejected with the message, "Invalid Username"

If the username is valid, invalid passwords are rejected with the message, "Invalid Password"

If the username and password are valid, but the account is deactivated, the log in is rejected with the message "Account Deactivated".
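As a sketch only, the decision logic behind this screen might look like the following Python. The function name login_message and its structure are our own, not the course's, but the rules and messages come from the requirements above. Enumerating all 2 x 2 x 2 = 8 combinations of the three conditions also generates the full decision table discussed next:

from itertools import product

def login_message(username_valid, password_valid, account_active):
    # Rules are checked in order; once one rule fires, no other applies.
    if not username_valid:
        return "Invalid Username"
    if not password_valid:
        return "Invalid Password"
    if not account_active:
        return "Account Deactivated"
    return "User Logged In"

# Print all eight rules of the full decision table.
for rule, (u, p, a) in enumerate(product([False, True], repeat=3), start=1):
    print(f"Rule {rule}: username={u}, password={p}, active={a} -> {login_message(u, p, a)}")

With this ordering of combinations, Rule 7 (valid username, valid password, account deactivated) prints "Account Deactivated", which matches the reading of Rule 7 described below.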

S5-5-P4 - Sample Requirement – Log In Screen Here is the decision table that describes the functionality of the log in screen. Since there are three "inputs" (username, password and account activation status), we represent all combinations of these three inputs in the decision table. Reading down the first column, you can see the three input conditions relating to valid usernames, passwords and account activation. Below that are the four potential "output" actions, which are the messages produced by the system. As an example, look along the eight columns labelled "Rule 1" to "Rule 8", and focus on Rule 7. Reading down the column for Rule 7 we can say: 'If the user enters a valid username and the password matches the username, but the account is deactivated, the system should display the message "Account Deactivated"'. Why are there eight rules? Well, that's because there are three logical conditions, each having a true and a false value. Consequently, there are 2 x 2 x 2 = 8 combinations. Now, potentially, we could read down each column for the rules and construct a test case for each. We'd have eight test cases. S5-5-P5 - Decision Table – First Reduction One of the advantages of decision tables is that it is possible to fully define the requirements in a smaller number of rules than the total number of combinations of all the conditions. The number of test cases required to exercise all the rules is also reduced. In our log in screen example, it's clear that if we chose to enter an invalid username in our test cases, we would always expect to get the same output: "Invalid Username". We could therefore treat all rules with an invalid username as the same rule. Given an invalid username, the values for "password matched" and "account activated" are immaterial. We have simplified the decision table by condensing the original rules 1-4 into a new Rule 1, shaded in grey. Note that this 'reduction' is safe if we can assume that the order of evaluation of the conditions is irrelevant. Reducing the number of rules in this way reduces the number of tests required to cover all requirements. Clicking the tabs on screen will display an example table following a second reduction. S5-5-P6 - Decision Table Testing - Guidance It is common for requirements to be stated in terms of rules; in other words, no decision table is provided. In this case, you may find that if you created a decision table from these rules, the table would be very compact, with many immaterial conditions.

Written requirements often say things like, "if none of the rules above apply, then …" In effect, a default or catch-all collection of rules may be implied. The "default" rule may need to be expanded to obtain a full set of rules. Often, condition predicates may be stated in terms of boundary values. For example, a condition might be X>100. In this case you might use mid-range values such as 50 and 150, but you might also use the boundary values 100 and 101 to expand the list of values that exercise the true and false outcomes of that condition. S5-5-P7 – Activity - Exercise: Decision Table Here's another exercise. Look at the information provided and try to predict the expected results for Ray, Jean and Mark. Once you are happy with your choices click here to reveal the answers. S5-5-P8 - Summary This concludes this brief overview of decision table testing. In this section we:

Outlined the basic constraints when using decision tables to derive test cases

Introduced the concept of decision tables and how they are used to derive test cases

Illustrated some of the advantages that decision tables can offer including how reducing the number of rules can reduce the number of tests

Finally we offered some guidance on the flaws often found in requirements and how best to treat them.

Module 5 Section 6 - State Transition Testing S5-6-P1 - Objectives Some applications lend themselves to state transition testing because the functions that the software performs depend significantly on the existing state of an object. State transition testing is much used within the embedded software industry and technical automation in general. However, the technique is also suitable for modelling a business object having specific states or testing screen-dialogue flows (for example, for internet applications or business scenarios). In Section 5.6 on state transition testing we will begin by:

Giving a few examples of applications where this approach is appropriate

We’ll go on to look at modelling systems using state transitions and show you how these are represented graphically

We will also look at state transition coverage and how to derive test cases

The section concludes with a useful exercise on the subject.

S5-6-P2 - Examples of Systems in States It is probably easiest to introduce the notion of state transitions by thinking about some everyday things. A car is a good example. It can be stationary or moving forwards or backwards. The gear can be in one of several positions, namely one of the forward gears, reverse gear or neutral. The engine can be running or turned off. A software-driven system in a telephone exchange also has states. When you pick up a telephone handset, you are establishing a connection with a central telephone exchange. The state of this connection varies depending upon the events (or inputs) to your telephone handset, but also depends on the state of the connection to other subscribers. Obvious states are on-hook, off-hook (when you hear the dial tone), dialling 0-9 keys, connected to another subscriber or disconnected by the subscriber you were talking to. Obviously, a telephone exchange is more complicated than that. There may be hundreds of states.

The designers of such systems use "state diagrams", also known as "state-transition diagrams", to document their designs. A complete state diagram may be very complex, with states requiring very precise definition to differentiate them. The states that we have identified here may not actually be real states; rather, they may be combinations of other lower-level states. S5-6-P3 - System States A system state reflects the condition of a system at a point in time. Most systems can have many states, and the notion of state-transition describes how the state of a system can change when stimulated by certain events. How do you go from one state to another? You perform an action. In our car analogy, that might be changing gears or braking or both. You can trace paths between states with a graph. And what do testers do? They cover their graph. Building a test plan with test cases that trace paths between states allows testers to exercise the states and the transitions. This is a highly useful technique if you are testing a piece of software that is state-based. S5-6-P4 - System States (2) The state-diagram is a model of the software and defines the valid transitions for every state. State diagrams are also known as state-transition diagrams or state-flow diagrams. Each transition is labelled by the event or events that cause it and the output generated by that transition. States are connected by lines that indicate the valid transitions between states. The structure of state-diagrams allows the tester to define coverage targets. It may be impossible to transition from one state to another, making that transition invalid. If we try to force that transition by submitting a certain input, the system's state might not change, and the input would be rejected. This is documented by labelling the input, perhaps with an error message.


Having identified the possible system states, we can define the valid transitions from one state to another. S5-6-P5 - Modelling Systems Using State Transitions Let us assume that we have identified all the states for a system and we have created a diagram documenting this. The first thing to point out is that every transition on the diagram has four attributes (a short code sketch follows the list):

Firstly, we need to define the initial or starting state

Secondly, we must identify the event or events that will cause the transition to occur

Thirdly, we must identify the final or target state for the transition. (Note, this could actually be the same as the initial state)

Finally, we define the output that might accompany the state-transition. (Of course, the output might be null as well as a recognisable message or other output).
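As a minimal sketch, these four attributes map naturally onto a small data structure. The Python below is illustrative only; the field names and the telephone example values are our own, not from the course:

from dataclasses import dataclass

@dataclass
class Transition:
    start_state: str  # the initial or starting state
    event: str        # the event that causes the transition to occur
    end_state: str    # the final or target state (may equal start_state)
    output: str       # the accompanying output (may be empty, i.e. null)

# One row of a state table, using the earlier telephone exchange example:
t = Transition("on-hook", "lift handset", "off-hook", "dial tone")

A list of such rows is exactly the tabular, state-table form of the diagram that the next paragraph describes.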

State diagrams make the system easy to understand because they have all the information on how the changes of state occur. For a tester, the state diagram gives sufficient information to design test cases directly, in that the diagram provides initial conditions, events and expected outputs. A state-diagram can be represented by a state table. The state table provides the same information as a state-diagram but in a tabular format. You can see that the state table looks very much like a list of simple test cases. S5-6-P6 - State Transition Diagram and Rules Designers of state-based systems find state-diagrams very useful. Testers of such systems can base systematic test design on those same diagrams. The technique is called state-transition testing. Here is an example of a state transition diagram. There are four states, and eight transitions between the states are documented. The labels for each transition are not shown. Some people use circles to define the states, others use rectangles. In this course, we will use the circular shape as our convention. There are a few simple rules when creating state diagrams:

States should be represented by circular or rectangular shapes

Transitions are directed lines labelled with input and output for example, “hang-up/no tone”

No ‘dead states’ should exist as there’s no way to escape!

Finally no unreachable states should exist.

S5-6-P7 - State Transition Testing Where we have a system modelled by a state-diagram, state transition testing is the preferred method of test design. State transition testing is also formally defined in BS7925-2 and the standard contains some good examples on how you should use this technique. Here are some examples where state transition modelling and testing is appropriate. The first of these is event-driven program environments. An example would be a graphical user interface where events via the mouse, the keyboard, and unsolicited events from other components on a workstation are difficult to test in any other way. A second example might be software components/middleware with no user interface. Such software might be service-oriented providing services on demand from other components or systems. Finally, service-oriented components on servers. Objects running on a server, services running within an operating system and complete operating systems might also be designed using state-diagrams and tested using state transition techniques.


S5-6-P8 - State Transition Testing (2) The first step in the state transition process is to create the state-diagram. The next step is to create the state table, assuming we do not have it already. Next, we select data to sensitise the state transitions. This should be relatively easy because every row in the state table gives us this information. We call this sensitising the paths. Essentially, what we are doing is tracing paths through the state-diagram. This is directly comparable to the way we trace paths through a control-flow graph and derive test values to force software to take certain paths through the code. S5-6-P9 - State Transition Coverage The state transition technique can be refined to use a graduated set of coverage targets based on paths through the state transition diagram. The lowest level of coverage covers all the transitions, but higher levels exist. The lowest level of coverage is called 0-switch coverage. In this case, every transition on the diagram must be covered at least once. The next higher level of coverage is called 1-switch. In this case, every two-transition path must be covered. That is, for each transition into a state, we must follow it with each of the transitions out of that state. 2-switch coverage is the next higher level of coverage. In this case, every three-transition path must be covered; that is, every sequence of three consecutive transitions must be exercised. S5-6-P10 - Chow's Coverage These levels of coverage were first described by Chow. The diagram on screen shows a state model and examples of one-transition, two-transition and three-transition paths.

Obviously, the number of test cases required to exercise a higher level of state transition coverage increases dramatically. S5-6-P11 - State Transition and Flow Diagram For example, consider a system that processes money transfers in the banking system. The system operates only during certain hours, and in terms of states the system has three identifiable processing states. It can be closed, it can be open for business or it can be in the process of shutting down. There are four possible input events to the system and these are open, cut-off, close and extend. The state flow graph looks a bit complicated, but it works like this: each state is represented by a node - there are three states so there are three nodes. For each state, some of the four available input events can cause state transitions. Some transitions are null, that is, the system stays in the same state when some inputs are received. For each input event, the system reacts with a transition and an output. S5-6-P12 - State Transition Table Consider the first state: closed. When the three events cut-off, close and extend are received, the system remains closed and the output for all three inputs is the same: Rejected. On the diagram, the convention for transitions is to label them with the input event and the output separated by a slash. So the example above is labelled: "Cut-off, close, extend/Reject". The fourth input event to the closed state is open. The valid transition to the open state is labelled "open/open queue". This means the output is a message saying "open queue". You can see the other states and transitions in the diagram. State diagrams are very useful for documenting certain types of systems, and they can help to clarify the requirements.


This diagram is a graphical representation of the input event, starting state, output and final state combinations for the system. An alternative to this is to document these combinations in a state table, like so. This is the conventional layout for a state table, but there are other styles. An alternative layout would be to have input events as rows, and states as columns. S5-6-P13 - Deriving State Transition Test Cases Once we have a state diagram, we can trace paths through the diagram. What do testers do when they see a graph? They make up test cases to cover it! We can trace interesting paths through the diagram and derive a series of test input events starting at one state and ending up at a final state. This is how systems like telephone switching and operating systems are tested, but on a much larger scale than our simple example. The process for test design is straightforward (a code sketch follows the list).

First, nominate one state as the start AND end state.

Second, select a minimum set of covering tests.

Finally, make sure you can detect the transitions during testing.
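As a rough sketch of what this looks like in practice, the Python below encodes part of the money-transfer system's state table and traces a path through it. Only the transitions for the closed state were stated explicitly in the text; the remaining rows are an assumption to be filled in from the full diagram:

# State table: (state, event) -> (next state, output)
state_table = {
    ("closed", "cut-off"): ("closed", "Rejected"),
    ("closed", "close"):   ("closed", "Rejected"),
    ("closed", "extend"):  ("closed", "Rejected"),
    ("closed", "open"):    ("open", "open queue"),
    # ... rows for the 'open' and 'shutting down' states would be
    # completed from the full state diagram
}

def run_events(start_state, events):
    # Trace a path through the state table, printing each transition.
    state = start_state
    for event in events:
        state, output = state_table[(state, event)]
        print(f"{event} -> state={state}, output={output}")
    return state

# This single path exercises all four 'closed' transitions at least once,
# which is what 0-switch coverage requires for that state.
run_events("closed", ["cut-off", "close", "extend", "open"])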

S5-6-P14 - Early Preparation of State Transition Cases State diagrams are an extremely useful tool for understanding requirements, and they can also expose faults in people's understanding, so they can be particularly useful for finding faults in the early stages. Each of the situations below implies there's a bug in our thinking (or in the state transition diagram). Possible questions include:

Have all states been accounted for? Have we identified all of the possible states?

Have non-equivalent states been merged? Are some of the states we've identified actually the same? They probably are if the same set of input events, transitions and outputs correspond.

Have 'impossible' states been excluded? In other words, have we included states that clearly cannot exist?

What about dead states? Are some states impossible to leave?

Unreachable states? Are some states impossible to reach?

S5-6-P15 – Activity - Exercise: One-armed Bandit Here's an activity to try. The details of the question appear on the next few pages. In this exercise, you must create test cases for a one-armed bandit arcade game. The game has three states and there are six possible input events. The machine is usually in the wait state, where it is waiting for a valid coin. Having entered a valid coin, the machine transitions into the ready state and is ready to be played. When the machine handle is pulled, the dials spin and the machine is in the spin state. The six input events to the machine are listed below:

Valid - insert valid coin

Invalid - insert invalid coin

Refund - press refund button

Play – pull handle

Win – randomly generated, player wins

Lose – randomly generated, player loses.

Invalid input events, that is, those that are ignored by the machine, are not shown. For example, if the machine is in the wait state, pulling the handle to play has no effect. A state diagram for the game appears on the next page. S5-6-P16 – Activity One-armed Bandit (2) Here is the state diagram for the game. Input events that are invalid are effectively ignored and are not shown on the diagram. You can assume they cause no change of state and no output either.

Wait - the machine is inoperable, waiting for a coin

Ready - the machine is ready for the game to be played

Spin - the game is underway, the dials are spinning

Download the linked document to obtain a printable version of this exercise:

a) Create the state table from the diagram above (5 marks)

b) Identify and create test cases to achieve 1-switch coverage (all two-transition paths). Your test cases should identify: initial state, first input event, second state, second input event, final state. (12 marks)

c) If a win caused the game to return your coin as well as give you a new game, how would that change the state diagram? (3 marks).

S5-6-P17 - Summary This concludes Section 5.6 on state transition testing. We began this section by:

Giving a few examples of applications where this approach is appropriate

We went on to look at how to model systems using state transitions and described how these are represented graphically

Later in the section we outlined what is meant by state transition coverage and looked at how test cases are derived

The section concluded with a useful exercise on the subject.

Module 5 Section 7 – Use Case Testing S5-7-P1 - Objectives A Use Case captures a contract between the stakeholders of a system. It captures information about the system's behaviour. It is described in our glossary as: 'A sequence of transactions in a dialogue between a user and the system with a tangible result.'

Although Use Cases can be represented graphically, graphical representations are less comprehensive and less useful than textual Use Cases. Test cases can be derived from Use Cases in a very straightforward manner. In this section we will:

Look at a sample textual Use Case

Use these samples to illustrate how conditions and test cases can be derived.

So let’s do that now... S5-7-P2 - Introduction to Use Cases The Use Case describes the system's behaviour under various conditions as the system responds to a request from one of the stakeholders, called the ‘primary actor’. The primary actor initiates an interaction with the system to achieve a goal. Different sequences of behaviours (or scenarios) can unfold depending on the particular requests made and the conditions surrounding the requests. The ‘Use Case’ gathers those different scenarios together. Each Use Case has preconditions, which need to be met for a Use Case to work successfully. The ‘Use Case’ terminates with post-conditions, which are the observable results and final state of the system after the Use Case has been completed. It usually has a mainstream (in other words, the most likely) scenario, and sometimes alternative branches. Use Cases are often referred to as scenarios. Use Cases describe the “process flows” through a system based on its actual likely use, so the test cases derived from Use Cases are most useful in uncovering defects in the process flows during real world use of the system. They can also be very useful for designing acceptance tests with customer/user participation. Another benefit of Use Cases is helping to uncover integration defects caused by the interaction and interference of different components, which individual component testing would not see.


S5-7-P3 - Use Case Template Here we can see a template for a Use Case. The most important aspects for test design are highlighted in blue. These are:

Preconditions - these reflect the state of the system before the Use Case can execute

End-conditions - (success or failure) these represent the state of the system after the Use Case is completed

Description - the sequence of steps in the scenarios

Extensions - the variations in the standard sequence of steps that can occur and cause different behaviour of the system. These behaviours are recorded in the Use Case and may be different outcomes or perhaps calls to other Use Cases

Sub-variations - the variations that can occur in the standard sequence of steps that may cause different behaviour of the system but are not defined in this Use Case. These changed behaviours may be undefined (to be defined after further analysis).

On the next page we'll see an example of a completed Use Case. S5-7-P4 - Use Case Example In this sample Use Case, we've shaded the important areas, as before. You can see that before the Use Case executes, we must have already collected the buyer details (in a customer record perhaps). The sequence of events in the Use Case covers the order-taking process, then moves towards delivery, invoicing and collecting payment. A successful outcome would be where the customer has paid and received their goods, as ordered. An unsuccessful outcome, caused by lack of stock, non-payment or the like, would be goods not delivered and/or payment not collected.

S5-7-P5 - From Use Case to Test Case If we use the sample Use Case as our requirements, we can create test cases directly. On the next page, we'll see a tabular representation of these variations and how test cases can be derived. First, we use the description step to give us the basic procedure and 'normal case' test values. The normal case values would be:

Buyer provides address data

Collect order for in-stock items

Take payment by cash

Goods are delivered and acceptable to customer.

Variations would be those indicated by the extensions, which could be the following (a sketch of the resulting test cases appears after the list):

Order items out of stock

Customer pays by credit card

Goods are deemed faulty and returned by the customer.
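As a purely illustrative sketch (the scenario names and expected outcomes below are our assumptions, not the course's own table), the derived test cases might be captured as data like this:

# One mainstream test case, then one per extension or sub-variation.
test_cases = [
    {"id": 1, "scenario": "mainstream", "stock": "in stock",
     "payment": "cash", "expected": "goods delivered, payment collected"},
    {"id": 2, "scenario": "extension", "stock": "out of stock",
     "payment": "cash", "expected": "order not fulfilled"},
    {"id": 3, "scenario": "extension", "stock": "in stock",
     "payment": "credit card", "expected": "goods delivered, payment collected"},
    {"id": 4, "scenario": "extension", "stock": "in stock",
     "payment": "cash", "expected": "faulty goods returned by customer"},
]

Each entry corresponds to one row of the tabular representation described on the next page.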

S5-7-P6 – Exercise: Test Cases from Use Cases Here we are presented with a table for test cases to be derived from the Use Case. We've completed the mainstream or normal scenario. Print off this page and see if you can add the other test cases to cover the extensions and sub-variations. You can find the answer on the next page. S5-7-P7 – Exercise: Answer - Test Cases from Use Cases You can see that we have exercised each one of the extensions and sub-variations in turn. Note that we would need to create variations of Test Case 5 to cover the cheque and money order sub-variations. In order to keep the diagram as simple as possible, we haven't shown these here. This concludes Section 5.7 on Use Cases.


Module 5 Section 8 – Path Testing S5-8-P1 - Objectives The concept of path testing is based on the knowledge that when a program is run, depending on inputs and variables, it may follow different paths through the program. Whether it follows the correct or expected path is a potential source of faults and therefore a matter of concern to the tester. In Section 5.8 we will look at the subject of path testing in more detail. S5-8-P2 - Path Testing We need to introduce the concept of path testing as the basis of the two techniques we will examine. We know that when software is running it follows distinct paths through the program code, determined by the outcome of the decisions made within the code. The bug assumption is that the software might follow the wrong paths, and so the tester selects test cases to determine whether these faults exist in the code. Path testing is mainly appropriate for component level testing because, once components are assembled into subsystems, the volume of code involved becomes unmanageable. Also, detailed testing of the code is appropriate at this stage, whereas for later stages, the tests will focus on different issues. One pre-requisite to designing tests based on paths through the software is that it does require an intimate knowledge of the program structure and, obviously, the source code. Often the person best placed to do path testing is the author of the software itself, because they have that intimate knowledge. If not the author, perhaps another programmer or tester who understands the way the software was written and can infer the need for test cases based on an understanding of the source code itself. As a 'white-box' test activity, path testing is very dependent on the same knowledge required to write code in the first place.

S5-8-P3 - Why Path Testing We plan and perform path testing to achieve two separate objectives. The first is to exercise all statements in a program: we need some demonstration that all the code that has been written has been exercised at least once. The second objective is to ensure that all of the decisions in the code, and all of the outcomes of those decisions, have also been exercised. Again, the motivation for this is to ensure that code that has been written has been exercised at least once. If you are not a programmer yourself, you might think these techniques have no value to you; after all, you don't write code. However, the principles behind the techniques of path testing have some application to any functional requirement that describes an algorithm to be performed, for example, to do a calculation. Many techniques reuse the concepts and terminology of path testing. The concept of paths through models, requirements and business processes is universal. S5-8-P4 - Bug Assumption for Path Testing We assume the program either:

Makes the right decision at the wrong time OR

Makes the wrong decision at the right time.

Essentially, the bugs (the faults) that we are pursuing when we perform path-testing relate to the decision-making within the code under test. We are looking for faults where the program performs a set of instructions that was intended for a different value, or the program doesn't perform a set of instructions when it should have. In either case, those types of bugs occur when the program is taking the wrong path. The code examples illustrate a decision being made incorrectly, or perhaps the outcome of a decision being incorrect. S5-8-P5 - Models and Coverage Statement coverage is the most basic structural coverage measure. The principle is that we select tests to ensure that every statement is exercised at least once. Branch coverage, on the other hand, relates to the outcomes of decisions made within the software; that is, where the execution path of the component features a decision point. Branch coverage requires that you exercise all the possible outcomes (the branches) of that decision. Most decisions have two possible outcomes; for example, if the predicate "X>10" is evaluated as a Boolean, the outcome can only be true or false. Some decisions have more than two outcomes, perhaps three or four or even more. To achieve 100% branch coverage normally requires more test cases than 100% statement coverage. In fact, for almost all programming languages, branch coverage subsumes statement coverage. What this means is that if you achieve branch coverage, you automatically achieve statement coverage. S5-8-P6 - Coverage Measurement When we aim to achieve a level of structural coverage, we construct test cases, run those tests, and then measure the coverage. If the coverage does not yet meet our target, we add additional test cases to reach the code that has not yet been exercised. This process is incremental. Where we define a coverage measure as a target and measure progress towards that target, we express the measure as a percentage. Statement coverage is measured as the number of statements executed, divided by the total number of statements within the source code to be tested. Branch coverage is defined as the number of branch outcomes actually executed, divided by the total number of branch outcomes in the code under test. S5-8-P7 - From Paths to Test Cases We know that by tracing a path through the code, we are simulating an execution path. As we trace more paths, we exercise more and more decision outcomes.

The choice of paths to take is ours, but it is usually possible to select paths that maximise the decision outcomes exercised. This will minimise the number of paths, but may make test cases too complicated. It's your choice as to how you proceed. It can often be best to select a larger number of simpler paths to make life less complicated. S5-8-P8 - Sensitising the Paths We use the flow graphs to select test cases. We identify all of the nodes that have decision outcomes, and for each node we select a test case to cover all of the decision outcomes. The graph is useful in that we don't need to worry about all of the internal processing, just the decision making, so it is simpler than looking at the code. The nice thing about identifying all of the test items to cover in our testing is that we can draw paths through the flow graph, and each path gives us a template for a test case that exercises all of the coverage items which that path covers. The process of drawing paths to exercise test cases is called 'sensitising the paths'. We continue to create paths through the code until we've covered all of the decision outcomes in the code, and this will give us the desired level of branch coverage. If you trace paths through the flow graph from the first node to the sixth node, there are four unique paths through the flow graph. To achieve coverage of all the branches of the flow graph (those that emanate from nodes 2 and 4), only two paths are required. To achieve coverage we could use paths A and B, or paths C and D. Each of these two sets of paths covers two of the decision outcomes. Two test cases are enough to cover all links in the flow graph (A+B or C+D); each path covers two conditions. This concludes Section 5.8 on the subject of Path Testing. Module 5 Section 9 – Statement Testing S5-9-P1 - Objectives In previous sections we have looked at how programs can be written using different programming languages and how the flow of control through programs can be documented using flowcharts and flow graphs.

In this section we will look at statement testing, which is the most basic white-box or structural test technique. We will go on to outline how we can use our knowledge of programming code and program structure to help us test. S5-9-P2 - What are Statements? The first thing we need to know is: what is a statement? A statement is a line of code within a program that does something. An executable statement might assign a value to a variable, perform a calculation, update a data field in a database, present some data on a screen and so on. The statements that are not executable are those that provide data definitions or represent punctuation within the program. Typical examples might include ELSEs and ENDIFs – these words are simply placeholders for the decision constructs in some languages. The statement coverage objective is to exercise every executable statement at least once. This can be referred to as 100% statement coverage. It is likely that when you run your tests, some statements may be exercised many times, and this is to be expected. S5-9-P3 - Sample Code In this example we have reproduced the basic version of the function to extract the roots of a quadratic equation. Each of the statements in the sample code has been labelled as a declaration or punctuation, or labelled yes if it is an executable statement. You can see that only about half of the statements in this small piece of code are executable. When we execute this program, the first executable statement of the program is always covered by the test. When we are designing tests we can trace the paths of execution from the very first statement. As we do so we can identify the statements exercised and check them off as being covered.

We create another test to cover a different set of statements, hopefully increasing the level of coverage until the required statement coverage target is achieved. S5-9-P4 - Statement Coverage Here is a simple code example which must be covered. On entry to the program, we automatically exercise the first three statements, that is:

Read A

Read B

The decision that follows. In order to reach the statement after the decision (the first Print statement), we must make A greater than B; let's do that by making A=2 and B=1. The false branch of the decision is ignored, and the path of execution carries on to the final Print "finished" statement, so the final statement is also covered. The second test case needs to cover the outstanding statements, that is, the second Print statement. We achieve this by setting A less than or equal to B, in other words, A=1 and B=2. In this example we have achieved statement coverage in two tests.
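As an illustrative sketch only (the course's example is pseudocode shown on screen; the exact messages here are our assumptions), the same program and its two covering tests might look like this in Python:

def compare(a, b):
    # 'Read A' and 'Read B' become the two parameters.
    if a > b:                      # the decision
        print("A is greater")      # first Print statement
    else:
        print("B is not smaller")  # second Print statement
    print("finished")              # final statement, always executed

compare(2, 1)  # test 1: A > B, exercises the first Print statement
compare(1, 2)  # test 2: A <= B, exercises the second Print statement
# Between them, the two calls execute every statement: 100% statement coverage.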

S5-9-P5 - Activity Consider the following statement: '100 per cent statement coverage exercises all the paths through a program'. Is this statement true or false? This statement is false. Exercising every executable statement at least once is called full, or 100 per cent, statement coverage, but it does not guarantee that every path, or even every decision outcome, has been exercised; as we will see in the next section, additional tests can be needed to cover all decision outcomes. S5-9-P6 - Procedure for Statement Testing To conclude this short section on Statement Testing we will briefly outline the procedure for statement testing. The first activity is to identify all the executable statements.

Then trace execution from the first statement. For each decision encountered, choose the true outcome first; write down values for the variable or variables in the predicates that make it true. Ask the question, 'Are all statements covered?' If not, select test values to reach the next uncovered statement in the code and continue covering statements as you proceed through the code. Repeat this process until all statements are covered by at least one test case. Finally, for all test cases, record the variable values required to force the execution path you take. This concludes Section 5.9 on Statement Testing. Module 5 Section 10 – Branch/Decision Testing S5-10-P1 – Objectives The subject of this short session is branch testing. So far in this course we have seen how we can define a generic process for achieving statement coverage in a program. The decision testing process is similar to those we have seen previously, with one key difference: we aim to cover all outcomes of every decision. In this section we will:

Define the purpose of decision testing

Outline the process steps involved

See some practical examples. Before we move on, it's worth noting that decision testing, branch testing and branch/decision testing are in fact all the same technique. S5-10-P2 - Decision Testing Decision coverage aims to ensure that the decisions in a program are adequately exercised. Importantly, decision coverage subsumes statement coverage. In other words, if we aim for decision coverage, we need not normally attempt to create test cases to achieve statement coverage.

It's worth noting that the examiners are fond of asking how many tests it takes to achieve both statement coverage and decision coverage, so it is a useful exercise to create tests first to achieve statement coverage and then add more tests to achieve decision coverage if required. The easiest way to look at this is first to create tests to achieve statement coverage, and then, for the created tests, check which decisions have been covered by those tests. Finally, create additional tests to exercise the decision outcomes not yet covered by the existing tests. Again, coverage is achieved incrementally. When testers are designing tests to achieve decision coverage, they're not interested in covering individual statements. They need only identify the decision outcomes first and then trace paths through the code, picking up outcomes as they go and identifying the values of variables required to implement those paths. S5-10-P3 - Decision Coverage Shown on screen is an example. You may recognise it, as it's the same one we used for the section on statement coverage. The first activity is to trace the first path through the code until we encounter the first decision and choose the true outcome. As for statement testing, we choose the true outcome of each decision first, and record the values of the variables required to exercise the true outcome. In this case we would choose A=2 and B=1 as values. Testers are not particularly interested in the statements that are exercised as they proceed through the code. Rather, they are identifying the true and false outcomes of every decision and selecting cases to cover each outcome at least once. Of course some outcomes will be covered more than once, particularly when testing nested decisions. Even so, the tester continues to create test cases in order to reach the outcomes that haven't been covered by previous tests.


S5-10-P4 - Sample Code Here's the sample code that we used for statement coverage. You can see on the right the test cases that we have used to exercise the statements. Under each test case you can see which statements have been covered. Note that it only takes two test cases to exercise every statement in this function. By making 'discrim' greater than or equal to 0, we exercise the true branch and cover all the statements within the IF…THEN…ELSE…ENDIF clause. However, we have not covered the false outcomes of the two decisions in the function, so two test cases are insufficient to achieve decision coverage in this example. S5-10-P5 - Procedure for Decision Testing In general, we can define a process for decision test design. So let's take a few moments to look at this now. The first activity is to follow the procedure for statement testing and create a covering set of test cases. Secondly, identify all the decisions and mark the listing with a "true" or "false" for each outcome. Then, for each test case already created, mark the outcomes covered. Ask the question, 'are all outcomes covered?' If not, select additional test values to reach the next uncovered outcome in the code and continue covering outcomes as you proceed through the code. It may be necessary to repeat this step until all outcomes are covered by at least one test case. Finally, for all test cases, record the variable values required to force the execution path you take. S5-10-P6 - Sample Code Here is the example code shown again with the first two test cases that achieved statement coverage, with the addition of the third test case required to achieve complete decision coverage. Having completed the first two test cases, there is in fact only one outcome left to be covered, and the final test case simply requires discrim to be less than zero. The values for a, b, and c have been chosen accordingly. S5-10-P7 - Exercise: Statement and Decision Testing Here's a little exercise for you. Use the link on screen to open and print the exercise. Once you have the document, complete the questions relating to statement and decision testing. You can also open and print our document to show the model answers. Once you have completed this exercise you will have completed Section 5.10 on Branch/Decision Testing. S5-10-P8 - Cyclomatic Complexity The more complex code is, the more defects are likely to be present, and the harder they will be to find and fix. But how can we measure the complexity of code? Thomas McCabe suggested measuring cyclomatic complexity. For a control flow graph G, the cyclomatic complexity v(G) is defined as:

v(G) = e - n + 2p

where:

v is the cyclomatic complexity

e is the number of edges (or links)

n is the number of nodes (or vertices)

p is the number of parts (or connected software components)

For example, a software component may be represented by the control flow graph shown on the screen. Using the formula, 8 - 7 + 2 gives a cyclomatic complexity of 3.
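As a quick sketch, the formula is trivial to compute; the helper below is our own illustration, using the worked example from the text:

def cyclomatic_complexity(edges, nodes, parts=1):
    # McCabe: v(G) = e - n + 2p
    return edges - nodes + 2 * parts

print(cyclomatic_complexity(edges=8, nodes=7))  # prints 3, as in the example above

You can reuse the same helper to check your answers to the two exercises that follow.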


Cyclomatic complexity has the following properties:

Cyclomatic complexity will always be ≥ 1

Cyclomatic complexity gives the maximum number of linearly independent paths through the component

Inserting or deleting functional statements does not change the cyclomatic complexity

A component with a cyclomatic complexity of 1 will only have one path

Inserting a new edge (or link) will increase the value of cyclomatic complexity by 1

Cyclomatic complexity depends only on the number of decisions within the component

Other ways to measure how complex code is include lines of code (LOC). S5-10-P9 - Cyclomatic Complexity Here is an exercise. Calculate the cyclomatic complexity for the control flow graph shown on the screen. When you think that you have the right answer, move to the next screen to see if you are correct. S5-10-P10 - Cyclomatic Complexity The correct answer is five. There were 13 edges, 10 nodes and 1 part. Therefore cyclomatic complexity = 13 - 10 + 2. Try this next one. This time the graph is a flow chart, but the formula still works in exactly the same way. When you think that you have the right answer, move to the next screen to see if you are correct. S5-10-P11 - Cyclomatic Complexity The correct answer is four. As you can see from the diagram there are 14 edges, 12 nodes and 1 part.

Therefore cyclomatic complexity = 14 - 12 + 2. Note: the first and the last lines in the flow chart are not connected at both ends to a node, and are therefore not counted as edges for the purposes of cyclomatic complexity. Module 5 Section 11 – Error-guessing S5-11-P1 - Objectives The subject of this section is error-guessing. Error-guessing is not really a formal technique at all. In fact, many programmers and testers use error-guessing to select test cases based on intuition and experience. In this section we will:

Outline the benefits of error-guessing as a testing approach

Describe the principles of this approach

Look at some typical trap examples

Challenge you with a practical example.

S5-11-P2 - Testing by Intuition and Experience We know from experience that some people have a knack for finding bugs in software. This 'knack' is a very useful way to find obscure or unusual situations that might not be exposed by the more formal techniques. The difficulty with error-guessing is that it's not usually repeatable; in other words, it's unlikely that two testers would derive the same test cases using error-guessing as a technique. Having said all this, error-guessed test cases can still have great value, and the majority of testing done by programmers is still based on error-guessing techniques. The principle is that error-guessing should be used as a mopping-up approach after the more formal test case design techniques have been used. Error-guessed test cases should be documented as part of the test plan.


S5-11-P3 - Examples of Traps Here are some examples illustrating how you might use error guessing to derive unusual scenarios to test (a short test sketch follows the list):

Firstly, a program reading a data file might be presented with

Wrong file, binary file, same file as output, no file

A screen that validates numbers might be given

0, +0, -0, 00.0, the letter O, numbers with spaces, alphabetic or non-printing characters embedded, and so on

A program reading entries for sorting might be given

no data, one record, many identical records, or records that are already sorted, or perhaps, sorted in reverse order

Finally, a menu system might be tested by entering

The same command many times, over and over again to break it. A transaction might be entered normally, re-entered, deleted, aborted, searched for.
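As a sketch of how such error-guessed values might be captured as automated tests, here is a hypothetical pytest parametrisation; validate_number and the myapp module are assumptions for illustration, not part of the course:

import pytest

from myapp import validate_number  # hypothetical validator under test

# Error-guessed inputs for a numeric field, drawn from the traps above.
@pytest.mark.parametrize("value", ["0", "+0", "-0", "00.0", "O", "1 2", "abc", ""])
def test_error_guessed_numbers(value):
    # The validator should accept or reject cleanly; it should never crash.
    result = validate_number(value)
    assert result in (True, False)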

These tests would cover the unusual or obscure paths through the system. S5-11-P4 – Exercise: Error-guessing Here’s another exercise for you to try. A mail merge application reads names and addresses from a database, sorts them by surname and generates printed labels with formatted addresses. The database fields are:

First name

Last name

Place name

Street Number

Street name1

StreetName2

Town

County

Postcode.

Error-guess some test cases that you would think might expose some faults in the correctness and formatting of the printed labels. Document the error and the test case you think might expose it. Use the link on screen to open and print the exercise. You can also open and print our document to show the model answers. Once you have completed this exercise you will have completed Section 5.11 on error-guessing. Module 5 Section 12 – Exploratory Testing S5-12-P1 – Objectives Exploratory Testing, or ET for short, is an approach to testing which is not necessarily based on a test technique or coverage. It can be useful in particular situations and is probably the only approach to use when requirements are undocumented. In essence, exploratory testing involves:

Diagnosing the intended functionality to work out what the code is meant to do

Then selecting tests that explore this functionality to demonstrate it does what you believe it should.

ET can be used in periods where other testing may have stalled, or as a scheduled activity, where testers are allowed to go 'off road'. ET can be more productive if the planned tests are regarded as 'stale'. ET can re-energise testers, if they are stale too. In this section we will:

Look at the history

Consider the balance between ET and script-based testing

Examine some reasons for taking this approach to testing and outline a typical ET process

Finally describe and consider the content of an ET charter.

S5-12-P2 - Introduction to Exploratory Testing (ET) Think back to a time when you were executing scripted tests on a system.


Perhaps you thought, "If I were to plan this test again, I would do it differently". It is said that hindsight is a wonderful thing, but it's not surprising that, halfway through a test, you will have learned an awful lot about the system under test. Given the chance to start over, you might have approached it differently. Running tests is an exploratory activity. That is, you are learning more about the software as you proceed through the testing. At first glance, exploratory testing appears to be unsystematic, sloppy and unprofessional. In fact, this is exactly what its critics call it. Whether you believe it is an alternative to planned testing or not, most testers at one time or another do some form of ET. We would recommend you consider ET as a supplement to your planned testing. S5-12-P3 - Introduction to Exploratory Testing continued Exploratory testing is a term first introduced in 1993 by Cem Kaner in his book, "Testing Computer Software". Since then, James Bach has documented the "method" and promotes it through his conference work and training courses. A growing number of organisations, particularly in the US, are taking it up as a supplement to, or perhaps a replacement for, traditional planned and documented testing. Exploratory testing is founded on the idea that testing need not be over-planned or over-documented before you run tests. ET allows the tester to learn about the behaviour of the software under test and to explore it. As you think of a new test and then execute it, you learn more and can think of more tests. In effect, test design and test execution are concurrent rather than sequential. S5-12-P4 - Concurrent Design and Execution The essence of ET is that testing is not heavily planned before execution. Rather, an overall strategy or charter is defined. From this point the tester decides how they proceed.

ET sounds as if it is distinctly different from traditional scripted testing, but a grey area exists between the two approaches. Even the most detailed test scripts leave some details to the discretion of the tester: few test scripts detail how quickly the tester should type at the keyboard, or give advice on particular warning signs of failure. Many failures occur and are spotted because the tester is alert, rather than because the test script specifically steers the tester to look for them. ET assumes that the scope of testing is specified at a high level. Before an exploratory test session begins, there will be some analysis, and a 'charter' for the session will be written. The charter might set out what parts of a product will be tested, what strategies should be used and so on. We will discuss charters later in the section. S5-12-P5 - Balancing ET with Scripted Testing Let's explore the relationship between ET and scripted testing in a little more detail. Typically, testing becomes exploratory when we're not certain what to do next. For example, during a test, you're keeping one eye on what you might be covering next. The behaviour of the system under test gives you the impression that you should actually take a different direction. For instance, you might be focusing on the behaviour of some checkboxes on screen, and when you have satisfied yourself that these work correctly, you might then want to move on to the drop-down list perhaps. A scripted approach is more appropriate when we know exactly what we want to do in the test. This is usually only possible if we have good requirements. Here's an example. It may be that you must provide high-quality test documentation, firstly to demonstrate that you are covering the software thoroughly and secondly as a record of the testing performed. Documented testing needs to be budgeted for, and not all projects can afford such an "extravagance".


S5-12-P6 - Why do Exploratory Testing? So why should we do exploratory testing? After all, the scripted approach to testing fits well into a systematic test process where tests are designed based on well-defined requirements. However, the knowledge you gain when you run tests is not available when you plan those tests. Ideas for new tests are generated during test execution, but because you have to stick to the plan, there's little opportunity to try these out. As such, something of value is lost in the planned test process, and formal test design and documentation get in the way of innovation. In effect, the richness of ET is only limited by the breadth and depth of our imagination and our emerging insights into the nature of the product under test. S5-12-P7 - How it Works ET can be used on any project test phase, scripted or not. You might find this analogy useful. Imagine you are a tourist on a bus travelling to a chosen destination. When you get there, you intend to explore the locality as thoroughly as time allows. In the same way, exploratory testing might involve following a scripted test to a certain position in the software. At this point you might explore the system under test for a while. Some test scripts cannot be interrupted in this way, but it might be possible to do some exploration at the end of the test script. It's worth considering allocating 10 per cent of your test execution time to exploration. So what specific skills are required for this type of testing? They include:

Testers need to understand specifications, design and code

They may need specific skills or tool knowledge used in unit testing

They will still need to utilise test design skills

Testers should understand 'coverage’ concepts.

Finally, it's worth noting that some developers might be blind to problems in their own code, but be great at exploring others' code. S5-12-P8 - Mixing Scripted Tests and ET Prior to execution of your scripted tests, did you ever think the test plan had holes in it? If time isn't allocated for exploration, there will be insufficient time to fix these holes. Question the value of your tests, because no test process provides complete coverage. So why not allocate some time to exploration? You might allow 10 per cent or 80 per cent for exploration; it's up to you. You may focus on some particularly risky features. Perhaps you could assign one tester in the team to exploration every day, with every tester (who wants to) taking turns to do exploration. This is a good way to break the monotony of scripted testing and can also be very productive. Alternatively, why not task one tester to explore and 'path find' for other testers to follow. S5-12-P9 - Typical ET Process Let's take a little time to look at how ET normally works. Initially it's worthwhile assembling your team and brainstorming ideas to target hotspots in the system to be tested. The "charter" is a brief description of the exploration activity. Typical instructions in the charter might be:

Test the ‘format bullets’ dialogue box

Try extremes, repeat the test five times

Use particularly large and small numbers and so on.
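Pulled together, a charter assembled from instructions like these might look something like the following. This is a purely hypothetical sketch for the dialogue-box example above; the session length, focus areas and failure modes are illustrative only:

    Charter: explore the 'format bullets' dialogue box
    Session length: 90 minutes maximum
    Areas of exploration: bullet styles, indent settings, preview behaviour
    Approach: try extremes; repeat each test five times; use particularly large and small numbers
    Watch for: values not saved, layout corruption, misleading error messages
    Logging: raise an incident for every failure; note any other 'interesting' tests worth documenting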

Testers undertake the exploration in any way they feel comfortable with. Exploration will routinely expose failures. When a problem is identified, explore around it, as patterns may emerge which give clues to other problems in the software. ET techniques help develop


testers’ problem identification skills. When you think you have exhausted a particular area of the software, move on to the next interesting area. Here are some other useful ideas:

Try extremes

When you find a bug, explore around it

Look for patterns, as these can provide clues to other bugs

Remember to log an incident

Document the test that caused the failure

Document other tests that cover 'interesting' situations

Move on to the next interesting area.

We do not recommend that you use ET as your only approach though! S5-12-P10 - ET Charter Outline So what exactly should a charter consist of? The scope of the exploration should be defined. It’s usual to limit exploration sessions to less than two hours, so it’s important to specify the features and areas of exploration in the system under test. You might highlight particular areas of concern that need to be investigated. There may be particular modes of failure that we want to explore, in other words some particular bug types. Where appropriate, we might identify some extreme values or sneaky situations that ought to be tried. In fact you might add anything you can think of in the brainstorming session to the charter. Don’t spend too long brainstorming though, because the time spent brainstorming the charter might be better spent brainstorming tests at the terminal. Once you are well practised, you’ll find you can run tests almost as quickly as you can think of them. S5-12-P11 – Exercise: Create an Exploratory Testing Charter Here’s a short exercise.

The exercise here is to write a charter for a brief ET session. Shown on screen is a sample dialogue box from a word processing application. How long do you think it would take to explore the dialogue box, based on your charter? Give yourself ten minutes to complete this exercise. There is no right or wrong answer here; just spend the time sketching out how you would approach the exploration. In principle, charter writing should be done as a brainstorming activity. S5-12-P12 - Summary This concludes Section 5.12 on the subject of Exploratory Testing. In this section we:

Looked at the history and background of Exploratory Testing

Considered the balance between ET and script based testing and saw how a scripted approach is more appropriate when we know exactly what we want to do in the test and how this is usually only possible if we have good requirements

We went on to list some possible reasons for taking this approach to testing and outlined a typical ET process

Finally we considered the content of an ET charter and listed some important considerations including, what to test, the areas of exploration, extreme values and test duration.

Module 5 Section 13 – Test coverage S5-13-P1 - Objectives Coverage enables us to describe how thorough our testing has been. In this section we will explain what coverage is, define coverage measures and explain how coverage works for both functional and structural techniques.


S5-13-P2 - Test coverage As already mentioned, coverage enables us to describe how thorough testing has been. It is a way to describe what proportion of the system has been tested relative to predefined criteria. For example, we may wish to test every statement in the code. If we are able to do this we can say 100% statement coverage has been achieved. In this example we have used statements to measure coverage. Therefore, we have used statements in the code as coverage items. A coverage item is an entity or property used as a basis for testing. Coverage is the proportion of these coverage items that have been tested, expressed as a percentage:

coverage = (number of coverage items exercised / total number of coverage items) x 100

Coverage can be used to define exit criteria. For example, an exit criterion for unit testing might be that 100% statement coverage must be achieved. A short worked sketch of this calculation follows the list below. S5-13-P3 - Defining coverage measures A coverage item can be a number of things:

a function that a system must perform

a non-functional requirement

an entity or attribute within code, such as a statement or a decision

an entity or attribute of a functional model of the behaviour of a system, such as a partition defined by equivalence partitioning

Coverage items are most effective when they are defined using test design techniques.
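As promised above, here is a minimal sketch of the coverage calculation in Python. It is purely illustrative: the statement identifiers and the set of exercised statements are hypothetical, and in practice these numbers would come from a coverage measurement tool rather than be counted by hand.

    # Hypothetical coverage items: statements S1..S5 in a small component.
    coverage_items = ["S1", "S2", "S3", "S4", "S5"]

    # Statements actually exercised by the tests run so far.
    exercised = {"S1", "S2", "S4"}

    # coverage = items exercised / total items x 100
    coverage = len(exercised) / len(coverage_items) * 100
    print(f"Statement coverage: {coverage:.0f}%")  # Statement coverage: 60%

    # An exit criterion such as '100% statement coverage' is then a simple check.
    print("Exit criterion met" if coverage == 100 else "More tests needed")

The same arithmetic applies to any coverage item (branches, decisions, equivalence partitions); only the items being counted change.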

S5-13-P4 - Test measurement (coverage) Techniques work because we have a model of the software and we use the model to derive test cases. The model defines coverage items – ‘things’ that have to be exercised or, in other words, ‘things’ that you have to create test cases against. You can set a coverage target and then work towards that target. For black box techniques, where testers are using the specification to derive test cases, there are few tools available. Tools that manage requirements sometimes have facilities to calculate coverage, but these tools require that you use a formal specification. These methods are usually too restrictive and difficult to use in most environments. In contrast, tools for white box techniques are essential and are widely available. Where you’re using the code itself as the model, there are tools for measuring coverage. In fact, you cannot realistically perform coverage measurement manually. S5-13-P5 - Structural coverage measurement tools Coverage measurement tools monitor the execution paths of the software as you run a test. These tools produce a report that tells the tester which statements, branches and decisions have been taken, and what sequences of statements have been exercised. They can also tell testers whether a coverage target has been met. The tool is really valuable because it tells you objectively how complete your testing is and how many more tests you need to prepare to meet your target. Testers can use the tool to point to where new tests are needed to achieve thoroughness. If you have a white box coverage target to meet, a coverage measurement tool is essential. Does that mean that if I achieve 100% branch coverage I’ve got perfect software? No, it’s only an indirect measure of the quality of the testing. 100% branch coverage would indicate that you’ve done a lot of tests and they are


evenly distributed through the software, covering all the decisions and statements. S5-13-P6 - Summary Not every technique has an associated test coverage measure. However, most do. For each technique that has an associated test coverage measure, appropriate test coverage items have been defined and how to measure test coverage has been explained. This concludes section 5.13 entitled test coverage. Module 5 Section 14 – Choosing Test Techniques S5-14-P1 - Objectives So far in this module we have looked in some detail at several test techniques. Testers frequently ask ‘which techniques should we use and when?’ One of the great challenges for testers is that there are many more test techniques available than there is time to use them. In this section we will look at how testers choose test techniques. This is a difficult subject because there is very little evidence to show that one technique is any better than another. Given that our choice of test technique may have dramatic consequences for the resources, skills and budget required, it’s important to explore this in a little more detail. S5-14-P2 - What Influences the Decision? What influences testers when deciding which test technique to use? Firstly, let’s consider black-box test design techniques. We know that these are used when preparing tests from a baseline document. The format of the requirements can point towards particular black-box techniques:

The ranges of numbers

If we see a requirement describing the validation of a numeric value in terms of ranges, we would certainly use Equivalence Partitioning or Boundary Value Analysis as our technique

The classification of data into sets

Where data is classified into sets then Equivalence Partitioning or Classification Trees become the obvious choice

State based

State transition testing has obvious applications where the notion of state is key to the requirement

Logic based

Where we have complex logic then Decision Tables and/or Cause-Effect Graphing may be the most appropriate technique.

This list is by no means comprehensive, but it does illustrate where some black-box techniques are more appropriate than others. S5-14-P3 - What Influences the Decision? - (2) So what about white-box techniques? Well, a lot depends on the technology available to the tester. If testers are using a "Third-Generation" programming language such as COBOL, FORTRAN, C, C++, Java and so on, then all the standard white-box test design techniques are at their disposal. However, if the tester is using an object-oriented technology such as C++, Java, Smalltalk and so on, then because of the way the code is written, the standard white-box techniques may be of limited use. Of course, to use white-box techniques effectively requires the use of tools. Such tools might not be available for the testers’ platform. In this case, testers can choose to complete test design but it would be impossible to derive meaningful coverage measures. S5-14-P4 - Contractual Requirements One way for a customer to reduce risk and assure test thoroughness would be to mandate a level of testing. The Component Test Standard BS7925-2 is designed for this purpose.


BS7925-2 documents all the popular test design and measurement techniques and as such it allows users or customers to mandate that suppliers use certain test design and measurement techniques. The standard is auditable, making it easier for customers to specify that BS7925-2 be used in their contracts with suppliers. For example, it’s quite reasonable for a customer to mandate in a contract that ‘the supplier must use boundary value analysis and branch test design techniques and achieve 100% coverage.’ The supplier might not want to be forced into using a specific test technique, but they will know they are fulfilling the customer requirement in an objective way. In effect, such statements eliminate the dilemma of which technique to use, and with it the usual problem of deciding ‘how much testing is enough’. S5-14-P5 - Activity Carefully read through the scenario given on the screen. Make a record of any test design techniques that the project in the scenario might benefit from. Note down at what test level you think they would be best applied, what coverage measurements would be appropriate and the justification for your choices. When you are satisfied with your answer move to the next page to reveal the answer. S5-14-P6 - Activity There is no right or wrong answer to this question. The answer given here is a good answer. However, from the small amount of information given in the text, and the fact that everyone’s experiences are different, it is likely that your answer will be different. Do not be alarmed by this and don’t forget that the BCS Intermediate examination is a multiple choice examination and you will not be required to come up with your own answer to such a question. You will be required to select the best answer from a list of options. S5-14-P7 - Summary In this last section of module five we have looked at test technique selection.

When choosing appropriate test techniques in the examination, look at the keywords which indicate certain techniques; a simple keyword-to-technique lookup is sketched below. For example, a system that exhibits states will suggest state transition testing. Any system that exhibits logic or process flow is likely to suggest decision table testing. Some techniques, such as exploratory testing, will always be appropriate. Care must also be taken in choosing the level at which the technique would be most appropriate. The most important thing to remember here is that the earlier a defect is found, the cheaper it is likely to be to fix. This means as much testing as possible should be done as early as possible. Don’t be afraid to select techniques such as equivalence partitioning and boundary value analysis for component testing. Finally, remember that whenever a test technique is appropriate, its associated test coverage measure will also be appropriate.
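To make the keyword hints above concrete, here is a minimal sketch of such a lookup in Python. The keyword-to-technique mapping reflects the guidance in this module, but the function itself is hypothetical, intended only as a revision aide-memoire rather than as part of the syllabus.

    # Keyword hints mapped to candidate techniques, as described in this module.
    TECHNIQUE_HINTS = {
        "ranges of numbers": ["Equivalence Partitioning", "Boundary Value Analysis"],
        "data classified into sets": ["Equivalence Partitioning", "Classification Trees"],
        "states": ["State Transition Testing"],
        "complex logic": ["Decision Tables", "Cause-Effect Graphing"],
    }

    def suggest_techniques(keywords):
        """Return candidate test design techniques for keywords spotted in a scenario."""
        suggestions = []
        for keyword in keywords:
            suggestions.extend(TECHNIQUE_HINTS.get(keyword, []))
        # Exploratory testing is always an option, whatever the keywords.
        return suggestions or ["Exploratory Testing"]

    print(suggest_techniques(["states", "complex logic"]))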

Module 6 – Test Management

Module 6 Section 1 – Test Organisation S6-1-P1 - Objectives Welcome to Module 6, entitled ‘Test Management’. The title of this first section is ‘Test Organisation’. In this module, we will consider how the testing team will be organised. In small projects, it might be an individual who simply has to organise their own work. In bigger projects, we need to establish a structure for the various roles that different people in the team have. Establishing a test team takes time and attention in all projects. In this first section we will:

Look at who does what in the testing environment

Discuss the various test team roles including

Test Manager or Test Lead

Tester

Test Analyst

Test Automation Technician


Examine the role of other support staff

Discuss the concept of Test Independence and its associated benefits and potential drawbacks.

S6-1-P2 - Who Does the Testing? Let’s begin this section by considering the question ‘who does the testing?’ Well, it’s reasonable to expect that the programmers should do ad hoc testing, as they routinely code a little and test a little, simply to demonstrate to themselves that the last few lines of code they have created work correctly. It’s informal, undocumented testing and is private to the programmer. No one outside the programming team will see it. Programmers or other team members may do sub-system testing. Sub-system testing is component testing and component integration testing. The programmers who wrote the code and interfaces normally do the testing simply because it requires a certain amount of technical knowledge. On occasions, it might be conducted by another member of the programming team, either to introduce a degree of independence or to spread the workload. System testing addresses the entire system and is usually carried out by independent teams. It is the first point at which we’d definitely expect to see some independent test activity, in so far as the people who wrote the code won’t be doing the testing. For a non-trivial system, it’s a large-scale activity and involves several people. Team members include dedicated testers and business analysts or other people from the IT department, and possibly some users. S6-1-P3 - Who Does the Testing? (2) User acceptance testing, on the other hand, is always independent. Users bring their business knowledge to the definition of a test. However, they normally need support on how to organise the overall process and how to construct test cases that are viable. On occasions there is a need to demonstrate complete independence in testing. This is usually to comply with some regulatory framework or perhaps

there is particular concern over risks due to a lack of independence. An independent company may be hired to plan and execute tests. In principle, third party companies and outsource companies can do any of the layers of testing, from component through system to user acceptance testing, but it’s most usual to see them doing system testing or contractual acceptance testing. S6-1-P4 - Test Team Roles – Test Manager The syllabus defines two testing roles, the Test Leader or Test Manager and the Tester. We’ll take a look at these roles here. A Test Manager is really a project manager for the testing project. That is, they plan, organise, manage, and control the testing within their part of the project. However, there are a number of factors which set a Test Manager apart from other IT project managers. Firstly, their key objective is to find faults, and it might appear that this is at odds with the overall project’s objective of getting a product out on time. Secondly, to others in the overall project, the Test Manager will appear to be destructive, critical and sceptical. Lastly, a Test Manager needs a set of quite specific technical skills. This is a key role in successful testing projects. You can view a list of typical tasks for a Test Manager by clicking here. S6-1-P5 - Test Team Roles – Tester So what do testers do? Well, testers build tests. Working from specifications, testers prepare test procedures or scripts, test data, and expected results. They deal with lots of documentation and their understanding and accuracy is key to their success. As well as test preparation, testers execute the tests and keep logs of their progress and results. When faults are found, the tester will retest the repaired code, usually by repeating the test that detected the failure. Often a large amount of regression testing is necessary because of frequent or extensive code changes; testers execute these too. If automation is


well established, a tester may be in control of executing automated scripts also. Depending on the test level and the risks related to the product and the project, different people may take on the role of tester, keeping some degree of independence. Typically, testers at the component and integration level would be developers, whereas testers at the acceptance test level would be business experts and users, and testers for operational acceptance testing would be operators. You can view a list of typical tasks for a Tester by clicking here. S6-1-P6 - Other Test Team Roles – Test Analyst As we mentioned earlier, the syllabus only includes the roles of Test Manager and Tester. But to further broaden your knowledge, let’s spend a few moments looking at some other team roles you are likely to encounter. Basically, Test Analysts are the people who scope out the testing and gather up the requirements for the test activities to follow. In many ways, they are business analysts because they have to interview users, interpret requirements, and construct tests based on the information gained. The key skills for a Test Analyst are to be able to analyse requirements documents, specifications and design documents, and derive a series of test cases. They should be good documenters. The test cases must be reviewable and give confidence that the right items have been covered. They will spend a lot of time liaising with other members of the project team. Finally, the Test Analyst is normally responsible for preparing test reports, whether they are involved in the execution of the test or not.

been proven to be valuable. The normal sequence of events is for the test automation technician to record (much as a tape recorder does) the keystrokes and the actual outputs of the system. The recorded test script is then used as input to the automation process where, using the scripting language provided by the tool, it is developed into an automated test. The role of the test automation technician is therefore to create automated test scripts from manual tests and fit them into an automated test suite. The automated scripts are small programs that must be tested like any other program. These test scripts are often run in large numbers. Other activities include:

Record, code and test automated scripts

Prepare test data, test cases and expected results

Execute automated scripts, log results and prepare reports.

S6-1-P8 - Support Staff Most systems have a database at their core. The DBA, or database administrator, will support the activities of the tester by setting up the test database. They may be expected to help find, extract, manipulate and construct test data for use in tests. This may involve the movement of large volumes of data, as it is common for whole databases to be exported and imported at the end and start of test cycles. The DBA is a key member of the team. There is a whole range of technical staff who need to be available to support the testers and their test activities, particularly from system testing through to acceptance testing. Operating system specialists, administrators and network administrators may be required to support the test team, particularly on the non-functional side of testing. In a performance test, for example, system and network configurations may have to change to improve performance. Where automation is used extensively, a key role in any large team is that of the ‘tool-smiths’, that is, people able to write


software as required. They have very strong technical backgrounds and are typically programmers. There are two further areas where specialist support is often required. On the technical side, the testers may need assistance in setting up the test environment. From a business perspective, expertise may be required to construct system and acceptance tests that meet the needs of business users. In other words, the test team may need support from experts on the business. S6-1-P9 - Independence When we consider independence in testing, it’s not who runs the test that matters. If a test has been defined in detail, the person running the test will be following instructions (put simply, the person will be following the test script). Whether a tool or a person executes the tests is irrelevant because the instructions describe exactly what that tester must do. When a test finds a bug, it’s very clear that it’s the person who designed that test who has detected the bug, not the person who entered the data. So, the key issue of independence is not who executes the test but who designs the tests. The biggest influence on the quality of tests is the point of view of the person designing the tests. It’s very difficult for a programmer to be independent. The programmer doesn’t want to see their software fail. Also, programmers are usually under time pressure and they are often more interested in writing new code. These factors make it very difficult for them to construct test cases that have a good chance of detecting faults. Of course, there are exceptions and some programmers can be good testers. However, their lack of independence is a barrier to them being as effective as a skilled independent tester. S6-1-P10 – Independence (2) Getting programmers in the same team to swap programs so that they are planning and conducting tests on their colleague’s programs can be a very useful activity. This brings a fresh viewpoint; they are unlikely to have the same assumptions and they won’t fall into the trap of ‘seeing’ what they want to see. Programmers also

feel less threatened by their colleagues than by independent testers. To recap, if tests are documented, then test execution should be mechanical and anyone could, in effect, execute those tests. Independence doesn’t affect the quality of test execution, but it significantly affects the quality of test design. The only reason for having independent people execute tests is to provide certainty that the tests are actually run correctly, in other words, using a consistent set of data and software (without manual intervention or patching) in the designated test environment. S6-1-P11 - Independence - Options For large, complex or safety critical projects, it is usually best to have multiple levels of testing, with some or all of the levels done by independent testers. Development staff may participate in testing, especially at the lower levels, but their lack of objectivity often limits their effectiveness. The independent testers may have the authority to require and define test processes and rules, but testers should take on such process-related roles only in the presence of a clear management mandate to do so. Testing tasks may be done by people in a specific testing role, or by someone in another role, such as a project manager, quality manager, developer, business or domain expert, or a member of infrastructure or IT operations. Here is a summary of other possible options. S6-1-P12 - Independence – Benefits and Drawbacks Historically, the independence of testers has always been emphasised, particularly for system test teams and users, who are always independent. There are distinct benefits of independence, but there are also some drawbacks. The benefits of independence include:

Independent testers see other and different defects, and are unbiased

An independent tester can verify assumptions people made during specification and implementation of the system.

Potential drawbacks of independence include:

Isolation from the development team (if treated as totally independent)

Independent testers may be the bottleneck as the last checkpoint

Developers lose a sense of responsibility for quality.

S6-1-P13 - Summary This concludes Section 6.1 Test Organisation. In this first section we:

Looked at who does what in the testing environment

Discussed the various test team roles including the

Test Manager or Test Lead

Tester

Test Analyst

Test Automation Technician

We went on to examine the role of other support staff

Concluded the section by discussing the concept of Test Independence, the options available and its associated benefits and potential drawbacks.

Module 6 Section 2 – Test Management Documentation S6-2-P1 - Objectives In this section we will explain the hierarchy of test management documentation and its relationship with other project documents. We will describe the role and purpose of each document within the test management documentation hierarchy. The purpose of documenting test management information is to communicate plans and progress to management and other team members. Managers need to understand what is going on to ensure issues are dealt with and the project stays on schedule. Other team members need to coordinate their efforts so all tasks are undertaken without duplication of effort.

The benefits of documenting management information include:

Communication to all teams and team members

Strategies and plans can be reviewed

Progress can be monitored

Pitfalls of documenting management information include:

They cost time and money to create

Remember, when deciding how much to document you must balance the benefits of documentation against the cost of creating it. S6-2-P2 - Test management documentation hierarchy The hierarchy of test management documentation as defined by the BCS Intermediate Certificate in Software Testing syllabus includes the following documents:

test policy which defines the organisation’s philosophical approach to testing

test strategy which describes how testing will be done for a program

project test plan which describes the sequence of events, resources and timescales for an individual project

level or phase test plan which describes the detailed plan for a test level

Some of these documents will be dependent on other project documentation, such as project plans and release plans. As well as these, there are other test management documents, such as the different types of report, which will be created and which we will discuss later in the course. Remember, there are many alternative ways to document test management information and the way organisations use documentation will vary widely. Even the


names of the documents will vary from the ones described in this course. S6-2-P3 - The test policy The purpose of a test policy is to document an organisation’s philosophy towards testing. Typically each organisation would have one test policy. However, sometimes when an organisation is involved in particularly diverse projects, they may decide to have more than one test policy. The test policy must be seen as a directive from senior management. However, usually it will be created by the test team. The test policy will typically include:

The test process to be followed

The levels of testing to be undertaken

Testing success factors

The way that the value of testing will be measured

The organisational approach to test process improvement.

S6-2-P4 - The test strategy The purpose of the test strategy is to describe how testing will be done for a program of one or more projects. It is not a plan, so you would not expect to see timescales and the names of specific resources, but you would expect to see details of the scope and the types of testing to be carried out. The test strategy is often described as the vehicle for mitigating risk. For example, product risks will drive the scope and types of testing to be carried out. Testing and risk is discussed in more detail later in this module. A test strategy will be created by a senior tester such as a test manager and should be based on the organisation’s test policy. An organisation running several programs may have several test strategies; but a small organisation, or an organisation whose projects are very similar, may only have one test strategy.

A typical test strategy might contain:

The scope of testing

Risks to be addressed

The types of testing to be undertaken

The levels of testing to be undertaken

Entry and exit criteria for each level of testing

Completion criteria for the test effort

Required test environment

Test techniques to be used at each test level

Tools to be used to support the testing

Standards which must be followed

Approach to incident management

Approach to confirmation and regression testing

The extent of software re-use

Approach to process improvement.

S6-2-P5 - The project test plan The purpose of the project test plan is to describe the scope, approach, resources, and schedule of the testing activities that are needed to meet the test strategy. It is also a record of the test planning process. The project test plan will be created by the test manager or test team leader. The project test plan is in effect the implementation of the test strategy so it must be based on the test strategy. Testing must fit in with other project activities and milestones. Therefore, the project test plan must also be based on the project plan. IEEE Standard 829-1998, Standard for Software Test Documentation, states that


the test plan should have the following structure:

Test plan identifier

Introduction

Test items

Features to be tested

Features not to be tested

Approach

Item pass/fail criteria

Suspension criteria and resumption requirements

Test deliverables

Testing tasks

Environmental needs

Responsibilities

Staffing and training needs

Schedule

Risks and contingencies

Approvals

Test planning is explained in greater detail later in this module. S6-2-P6 - Level test plans Within a project there may be several test levels and several phases. Each level and phase may have its own plan. A level test plan sets out the plan for a particular test level within a project. For example, under the project test plan there may be an integration test plan, a system test plan and an acceptance test plan. The structure of a level or phase test plan would be the same as a project test plan, or at least very similar. However, a level or phase test plan will be more detailed and more focused. For example, it may describe the day-to-day plan of activities and associated milestones.

A level or phase test plan would be created by a test team leader or a test analyst. S6-2-P7 - Activity Carefully read through the scenario given on the screen. Putting yourself in the position of a senior tester who has been called in to advise on improving the mobile phone division's test documentation, make a record of your responses to the following suggestions:

The test policy from the broadband internet services division is the most recently completed test policy, so it could be used for the mobile phone division

Test strategies are fairly static documents, so we should not be unduly concerned that they have not been updated for more than 24 months

There is no point creating a test strategy too early in the life of a programme as too much will change; it would be better to leave it for a few months until things settle down

Each project should be empowered to plan in whatever way it feels is most appropriate for that project

Consider both the positives and negatives of each suggestion and make recommendations for a way forward. When you are satisfied with your answer move to the next page to reveal the answer. S6-2-P8 - Activity – continued There is no right or wrong answer to this question. The answer given here is a good answer. However, from the small amount of information given in the text, and the fact that everyone's experiences are different, it is likely that your answer will be different. Do not be alarmed by this and don't forget that the BCS Intermediate examination is a multiple choice examination and you will not be required to come up with your own answer to such a question. You will be required to select the best answer from a list of options. Here is our answer.


S6-2-P9 - Summary In this section we have looked at the hierarchy of test management documentation and its relationship with other project documents. We have described the role and purpose of each document within the test management documentation hierarchy. Remember, for the Intermediate Certificate in Software Testing examination you must be able to decide which document a statement should be in. If the statement applies to the organisation as a whole, then it should appear in a test policy. If the statement applies to a programme and is not low level detail then it should appear in a test strategy. If the statement is to do with resources, timescales or something very specific to a project, then it is likely to appear in a test plan. You must look to the question’s scenario to help decide, as different scenarios will require testing to be documented differently. Module 6 Section 3 – Test Approaches (Strategies) S6-3-P1 - Objectives Test approaches, or strategies, define the overall approach to the risks facing the project. In too many organisations, little thought is applied to test approaches. Frequently the perception is that there can only be one approach, usually based on the V-model. Typically, people copy and amend the approach from their previous projects. We won’t explore test strategy in great detail, but we will:

Consider preventative and reactive approaches

Look at some of the many alternative strategies available including analytical, model-based and process-based approaches

Finally we will look at some important considerations when selecting a test approach.

S6-3-P2 - Preventative versus Reactive Test Approaches One way to classify test approaches or strategies is to base them on the time at which the bulk of the test design work is begun.

One tack is to take a ‘Preventative Approach’. You might call this the ‘test early and often’ approach. This is the preferred way to:

Prevent defects early in the process

Capture defects late in the process.

Rather like planning the route for a long journey, testers aim to correct errors in navigation early, because the longer we ignore errors, the larger the deviation from the required path becomes. In the same way, the economic argument for testing early is unassailable. Defects found early are significantly cheaper to correct than the same defects found later. ‘Reactive approaches’ are where test design comes after the software or system has been produced. This is the typical, somewhat old fashioned, but still far too common approach. Testing is delayed because:

It is perceived to delay progress

It doesn't add value

It distracts the project team. S6-3-P3 - Test Strategies – Alternative Approaches There are many and varied strategies for testing. Of course, some strategies focus on high-level issues, such as risk-based strategies, while others focus on lower-level activities such as error guessing. Here are a few more to consider. Firstly, there are ‘analytical approaches’, such as risk-based testing, where testing is directed to the areas of greatest risk. There are ‘model-based approaches’, such as stochastic testing using statistical information about ‘failure rates’ (such as reliability growth models) or ‘usage’ (such as operational profiles). ‘Methodical approaches’ include failure-based approaches (such as error guessing and fault attacks), checklist-based approaches and quality-characteristic-based approaches. And there are ‘process- or standard-compliant approaches’, such as those specified by industry-specific standards or the various agile methodologies.


S6-3-P4 - Alternative Approaches continued Further test strategies include: ‘Dynamic and heuristic approaches’ such as exploratory testing, where testing is more reactive to events than pre-planned, and where execution and evaluation are concurrent tasks. Where test coverage is driven primarily by the advice and guidance of technology and/or business domain experts outside the test team, this is defined as a ‘consultative approach’. We could also consider ‘regression-averse approaches’, such as those that include reuse of existing test material, extensive automation of functional regression tests, and standard test suites. But remember, different approaches are frequently combined, for example, a ‘consultative, risk-based dynamic approach’. S6-3-P5 - Selecting an Approach A test approach is the implementation of the test strategy for a specific project. It typically documents the decisions that follow from the (test) project’s goal and the risk assessment carried out: the starting points for the test process, the test design techniques to be applied, the exit criteria and the test types to be performed. So which approach should you take? Well, often this decision will be dictated by overarching standards or by those developing a defined approach or strategy. Where discretion is allowed, the selection of a test approach should consider the following:

The risk of failure of the project

Hazards to the product and risks of product failure to humans

The environment and the company.

Safety-related systems pay much more attention to preventative measures and involve safety and hazard analyses to foster a much better understanding of risk, so that these risks can be addressed directly.

The skills and experience of the people in the proposed techniques, tools and methods

Few projects have the luxury of handpicking their test resources, tools, processes and technologies. More often, the approach needs to account for the fact that resources may be scarce or inexperienced, and either business-focused or technically focused. S6-3-P6 - Selecting an Approach (2) The selection of a test approach should also consider:

The objective of the testing and the mission of the testing team.

Testing a software package aims at demonstrating the capabilities of the software, and perhaps trialling different configurations, coupled with a business process. This is different from a completely custom-built system, of course:

Regulatory aspects, such as external and internal regulations for the development process.

Most regulatory frameworks either mandate a level of testing, or 'best practices' that should be followed. In both cases, the test approach should aim to generate the evidence required for accreditation:

The nature of the product and the business.

The approaches required to test medical systems will obviously differ from the approach required to test an e-business website. These differences relate to the application type, the technologies and the risks of failure involved. This concludes this section on the subject of test approaches. Module 6 Section 4 – Entry and Exit Criteria S6-4-P1 - Objectives In this section we will explain the significance of objective test entry and exit criteria, give examples of suitable test entry and exit criteria and explain possible


alternative courses of action when test entry and exit criteria are not met. Remember, entry and exit criteria will be defined in the test strategy, but will likely be documented in several places as well, such as in project test plans and level test plans. S6-4-P2 - Entry criteria The purpose of entry criteria is to ensure that everything necessary for test execution is in place before test execution starts. Typical entry criteria may consist of:

Test environment availability and readiness. This is to ensure that the correct version of the test environment has been installed correctly. This will include any stubs, drivers or test harness that are necessary.

Test tool readiness in the test environment. This is to ensure that test scripts can be followed and results recorded, any automated tests can be run, and defects can be raised.

Testable code availability. This is to ensure that the correct version of the system under test has been installed correctly and can be launched.

Test data availability. This is to ensure that the correct version of any test data required has been loaded and linked correctly. For example, this may include the base database or the data for data-driven tests.

S6-4-P3 - Exit criteria The purpose of exit criteria is to define when to stop testing, such as at the end of a test level or when a set of tests has achieved a specific goal. Typically, exit criteria may consist of:

Thoroughness measures, such as coverage of code, functionality or risk.

Estimates of defect density or reliability measures.

These can only be approximate estimates. Some organisations are able to predict defect density with some confidence, but that is because they have comprehensive metrics captured over several years and use stable, repeatable processes. Another typical exit criterion is cost. Ultimately, testing has to stop when the money, time, or both run out. At that point, the project stakeholders must make a judgement on whether the risk of release (the residual risk) is acceptable. This risk has to be balanced with the cost of carrying on testing. In software product companies, cost isn’t always the most significant factor. In these environments, delivering new products or new functionality to the market on time is critical. Marketing plans are prepared and executed, product launches announced weeks or months in advance - so deadlines take precedence and may be an exit criterion. S6-4-P4 - Exit criteria – continued Whatever the drivers, exit criteria should always be objective and measurable. The principle is that, given there is no upper limit on how much testing we could do, we must define some objective and rational criteria that we can use to determine whether ‘we’ve done enough’. Management may be asked to define or at least approve exit criteria, so they must be understandable by managers. For any test phase, there will tend to be multiple criteria that, in principle, must be met before the phase can end. There should always be at least one criterion that defines a test coverage target. There should also be a criterion that defines a threshold beneath which the software will be deemed unacceptable. Criteria should be measurable, as it is inevitable that some comparison of the target with reality must be performed. Criteria should also be achievable, at least in principle. Criteria that can never be achieved are of little value. S6-4-P5 - Component Integration test completion criteria Some typical test completion criteria for a Component Integration test are listed below.


Structural coverage

Component Integration testing is white box oriented, because the developers performing the integration and testing must have an intimate knowledge of the call mechanisms between components. A typical coverage target might be 100 percent coverage of the function calls between components and of the transfers of data between components.

Functional coverage

Black box coverage might reference some of the standard test design techniques. It’s more likely that you would have to adapt one or more techniques to the specifics of the integration design. For example, you might have a target of covering all valid transfers of control and data between components and all invalid transfers. As for component testing, it is unlikely you would wish to have known faults in components at the sub-system level. In this example, the final two criteria are the same as for component testing.

S6-4-P6 - (Functional) System Test completion criteria The system test completion criteria would be mainly oriented towards black box coverage. In this example, you can see the familiar black box technique coverage measures being used. However, if the testers are basing their tests on transaction flow diagrams, an obvious coverage target would be to cover all transaction flows identified within the functional specification. An ideal objective would be to have all tests passed without failure, but it is likely that some faults cannot be fixed in the time available, particularly towards the end of the project, or cannot be fixed economically. In such cases, some flexibility will be left in the completion criteria to allow the project team to waive some incidents should they be deemed less critical. S6-4-P7 - Acceptance Test completion criteria Typical acceptance test completion criteria tend to include business-oriented

objectives as much as technical, test-oriented criteria. In this example, the criteria include coverage of all critical system features and all branches through business processes. We will assume that these are flowcharted. Ideally, we prefer to have all tests passed but, as in system testing, we make an allowance for the situation where some incidents may not be worth fixing before the go-live date. S6-4-P8 - Large Scale Integration Test completion criteria Large-scale integration test completion criteria are similar to acceptance test completion criteria in that they usually contain business-oriented objectives. However, in some circumstances, the physical interfaces between systems require developers or other technical support staff to create tests that exercise the physical connections between systems. In this case, some testing will be white-box oriented. Test completion criteria may include reference to some white-box attributes of the system interfaces and the systems under test, such as the physical data transfers between systems across specified channels. S6-4-P9 - Decision time So, the time has come to stop testing because we have run out of time. What if our test exit criteria are not met? Some tests may not have been run at all. Of course, at this point we have to make a decision. It might be the case that some faults are acceptable, for this release anyhow. Alternatively, you may decide to relax the exit criteria: assess the risk, agree it with the customer and document the changes. If we release the software now, the outstanding faults will be transferred into production. However, the cost of correcting these faults may outweigh the benefit of releasing the software today. If the end user management decide that some faults are bearable in released software,


the decision to release might proceed anyway. If exit criteria are met, the software/system can be released to the next test phase or to production. S6-4-P10 - Decision time – continued What if you complete all the planned testing early? That is, all tests have been run and passed but there is time left in the test execution phase? You might question the thoroughness of your testing and feel that you have not done enough. Perhaps this will be the time to review the depth of your testing and, if necessary, strengthen the test exit criteria. You may need to create additional tests to achieve them. Of course, you might have anticipated many more problems than actually occurred and been too sceptical of the ability of the developers. In this case, you might congratulate them on their great skill. However, this is a less common scenario. More often than not, work expands to fill the time available. S6-4-P11 - Activity The code for a key component of a heart rate monitor will need to be component tested. Tick which of the following you think would be suitable entry criteria and which would be suitable exit criteria. Not all of the options will be appropriate for either. When you are happy with your answer move to the next page to reveal the answer. S6-4-P12 - Summary In this section we explained the significance of objective test entry and exit criteria, gave examples of suitable test entry and exit criteria and explained possible alternative courses of action when test entry and exit criteria are not met. Remember, for the examination, as well as exit criteria from the previous test level, entry criteria may depend on factors such

as the availability of a test environment for the phase being entered. Module 6 Section 5 – Test Estimation S6-5-P1 - Objectives Welcome to Section 6.5, entitled ‘Test Estimation’. In any project, estimates of resource requirements, activity duration, tools or specialist skills will be required. As a tester, it’s always difficult to predict how long test execution will take when the quality of the software under test is not known. We can’t make judgements about the quality until we have tested it. This ‘chicken and egg’ scenario makes estimating difficult. In this section on estimating we will:

Consider what we mean by estimation and how our own experience and that of others can assist us

Look at test estimation as a ‘bottom-up’ activity

Discuss what we need estimates of and just as importantly consider what typically gets forgotten

We will look at the problems inherent in estimating, including allowing insufficient time to test

Finally we will consider other useful sources of estimating knowledge.

S6-5-P2 - What is an Estimate? Estimation is not an exact science. It's an approximate calculation or judgement based on:

Direct experience

Related experience

Informed guesswork. The best basis on which to estimate is when you have direct experience of an identical task. You can reuse this experience to predict how long the same task will take. Perhaps you haven't worked on exactly the same application area before but, given the requirements document, you may be able to predict how long it will take to specify tests based on a


requirement of similar size and complexity. This is related experience. The final option is to use informed guesswork. If at all possible, try and relate the task in hand to previous experience. Ask the advice of others who have broader experience. Make sure you discuss your estimates with technically qualified colleagues, especially if they will be doing the work. S6-5-P3 - Test Estimation Estimation can be a ‘bottom up’ activity. The first action would be to identify all the lowest level activities that can be estimated independently, such as a test plan or procedure. It’s also important to identify the resource requirements. These might be people, tools, test environments and so on. Although we normally think of resources as human resources, test labs and technology, office space and furniture, test tools, testers, users and external consultants may all be part of the overall resource requirement. Identify everything you need to perform the management, planning, preparation, execution, re-testing or regression testing activities. Be sure to identify dependencies on external resources. Even if you only need a resource for a few hours, identify it on the plan! Make sure you specify how long you will need these resources for. Not every resource is required for the whole duration of a task. For example, technical support staff will need to be involved early enough to give them time to build and configure your test environment. You should also consider the resource utilisation: will it be 100%, 50%, or only occasional? How much effort is really required? Document reviewers, for example, might be required for just a few days over a month. Some resources require huge lead times. For example, if you need to build a new test environment, be sure to assign staff to the definition and procurement activities early enough for them to deliver on time.
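To illustrate the bottom-up arithmetic just described, here is a minimal sketch in Python. The tasks, the effort figures and the 15 percent contingency are all hypothetical, chosen only to show the shape of the calculation; here the contingency is applied across the board rather than per task.

    # Hypothetical bottom-up estimate: effort in person-days per low-level task.
    tasks = {
        "Write test plan": 3.0,
        "Design test cases": 8.0,
        "Prepare test data": 4.0,
        "Execute tests (first pass)": 10.0,
        "Re-test and regression": 6.0,
    }

    base_effort = sum(tasks.values())
    contingency = 0.15  # agreed contingency, e.g. 15 percent across the board
    total_effort = base_effort * (1 + contingency)

    print(f"Base estimate: {base_effort:.1f} person-days")
    print(f"With contingency: {total_effort:.1f} person-days")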

S6-5-P4 - Usually a ‘Bottom-up’ Activity Another important approach is to break down large complex tasks into smaller, simpler tasks, as estimation ‘in the small’ is easier to do. But do you estimate in days, hours, minutes? Unless you are dealing with very short duration tasks, it’s unlikely you will use minutes. Most estimation is based on tasks that take a few hours or days to complete. Where there are a large number of tasks, try and group similar tasks together so you can estimate ‘in bulk’. You should also consider adding a contingency. Either add a contingency as a percentage to all tasks individually, or add it ‘across the board’. Don’t do both or you risk over-compensating. Agree a percentage contingency on the risky tasks, for example 10 or 15 percent. Although percentages are appropriate as contingencies for effort and costs, it is common to allocate an elapsed number of days between the end of testing and the go-live date as contingency. This contingency is set aside for slippage in testing, not development. Some tasks may overlap (known as double accounting), and some tasks may be left incomplete because the estimate was inadequate. When you split complex tasks into smaller tasks, make sure the tasks are actually independent. For example, if you split Test Specification into Test Design and Test Preparation, is there any overlap between these two tasks? S6-5-P5 - What do we Need Estimates of? So what will we need estimates of? Well, just about everything you can think of needs to be incorporated into your estimate. There is no such thing as a ‘zero-duration task’, but where things like ‘Sign-Off’ must be incorporated into your plans, make them part of another task and incorporate this into the task title so everyone knows what it means. Estimates will be required for all explicit activities such as:

Test planning

Design

Implementation

Execution

Early meetings


Test environment

Standards/document templates.

The 'follow-up' activities should also be considered, including:

Re-testing

Incident reviews

Shut-down and clean-up

Handover. S6-5-P6 - What Gets Forgotten? There’s a lot to consider when producing estimates, so it’s inevitable that some things will be forgotten. Some typical examples include training, the true cost of tools, relocation activities, the time it takes to organise meetings and the time taken to review or QA the work. Even experienced testers need training. Whether you are testing a package or a custom-built application, testers will benefit from a better understanding of the business requirement and the system under test. Testers should be trained in the basics of the test process and test techniques. Set some budget and time aside to get new or untrained users trained. Time and effort will be required to select a new tool. You may want to run a pilot and obtain training and consultancy in how to use it. The project team may need to grow. Remember to set aside time for setting up office layouts, obtaining furniture, installing communications, IT infrastructure and so on. S6-5-P7 - What Gets Forgotten? (2) Testers need time to understand the system under test. Most projects generate vast quantities of documentation, much of which must be read before the testers are comfortable with the requirements and specifications as test baselines. Meetings and interviews take time to organise; a great deal of the project communication takes place in formal and informal meetings. Set aside time for travel and additional meetings. There is often a delay before you can interview key business or technical staff.

Make sure you allocate enough staff time for reviewing project products prior to sign-off. Finally, there is always uncertainty in our estimates, so set aside some contingency to accommodate the inevitable overruns on some tasks. S6-5-P8 - Problems in Estimating The difficulty with estimating is that the total effort required for testing is indeterminate. Consider test execution, for example: you can't predict before you start how many faults will be detected. You certainly can't predict their severity. Although some faults might be trivial, others might require significant design changes. You can't predict when testing will stop because you don't know how many times you will have to execute your system test plan. Even if you cannot estimate test execution, you can still estimate test design. There are some rules of thumb that can help you work out how long you should provisionally allow for test execution. S6-5-P9 - Allowing Enough Time to Test One reason why testing often takes longer than the estimate is that the estimate hasn't included all of the testing tasks! In other words, people haven't allowed for all the stages of the test process. For example, if you're running a system or acceptance test, the construction, set-up and configuration of a test environment can be a large task. Test environments rarely get created in less than a few days and sometimes require several weeks. Part of the plan must also allow for the fact that we are testing to find faults. Expect to find some and allow for system tests to be run between two and three times. S6-5-P10 - Sources of Estimation Knowledge It can be daunting to estimate the effort required to perform a task for the first time. It might be that you have never estimated before, or that you have no personal experience of the task to be estimated.


You need to draw on either your own or corporate experience. You might have to rely on your own intuition and guesswork. If you have no experience at all, it is probably safer to ask the manager involved to give you an estimate based on their experience. The risk-based test approach described in the next section will provide reassurance at a high level, but at a detailed level, seek the advice of someone more experienced. If there are company standards based on metrics derived from previous projects, you should use them. However, even the most refined metrics need to be used with judgement. Ultimately, test planning and preparation estimates must be based on some form of hourly rate for a requirement item, for example, the time per page of materially relevant requirements. Better still, the time per hundred words of requirements would eliminate document formatting effects. In all cases you need to use your judgement to calibrate the standard metrics. If, for example, requirements are difficult to understand, are complex or relate to highly critical features, you might increase the estimate. If your standard metrics do not allow for such factors you may still have to exercise judgement, based on your experience in these matters. One thing is certain: the more iterations of testing during your project, the more accurate your estimates will become. S6-5-P11 - Estimation techniques The following methods of reaching an estimate are listed by the BCS Intermediate Certificate in Software Testing syllabus: Intuition or guessing (also called professional judgement) is where experience is used to come up with an estimate. Its benefits are that it is quick to do and often surprisingly accurate. Its pitfalls are that there is no evidence of how the estimate was derived, and relying on intuition alone makes it difficult to improve the estimation process. Consensus of knowledgeable people is the same as intuition or guessing but it is done in groups. The benefit is that more

people should make the estimate more accurate. The pitfall is that estimates will take longer to create. Work Breakdown Structure (WBS) uses information about the test process and the specification of the system to define each task that will be required to carry out testing. Intuition or guessing is used to estimate each task, and the task estimates are added together to create an overall estimate. The benefits are that the estimation process is visible and that it is easy to change estimates as new information comes to light. The pitfalls are that it is time consuming and the test process needs to be well defined before the Work Breakdown Structure can be created. Based on previous metrics is a way to enhance the accuracy of the previous three techniques. The benefit is that the accuracy of estimates can be improved. The pitfalls are that it requires an accurate record of durations to have been kept on previous projects, and those projects must have been of a similar nature. Percentage of development time is a formula-based approach where testing is assumed to take a certain percentage of the development time. Its benefits are that it is quick and, if used on projects of a similar nature, can become accurate over time. Its pitfalls are that the development estimate must already have been produced, and it cannot be used on a first project. Test Point Analysis (TPA) is also a formula-based approach, but it uses a much more complex formula, based on Function Point Analysis, rather than a simple percentage of development time. Its benefits are that it is a systematic method which, if used on projects of a similar nature, can become accurate over time. Its pitfalls are that the Function Point Analysis must have been done first, which requires a detailed functional specification, and when used for the first time it may not be accurate. S6-5-P12 - Estimation techniques – continued The estimation techniques:

Intuition or guessing

Consensus of knowledgeable people

Work Breakdown Structure (WBS)

Based on previous metrics


Percentage of development time

Test Point Analysis (TPA)

all provide varying levels of accuracy depending on the circumstances. One potential pitfall that they all share is that the accuracy of the estimate might not be known. To mitigate this it is prudent to use more than one technique: if estimates produced using two techniques agree, there will be more confidence in the estimate. Note that although test estimation as described in this section usually follows a bottom-up approach, some techniques can be considered top-down approaches when they are applied to the testing effort as a whole rather than to constituent parts of the process. For example, Work Breakdown Structure (WBS) and Test Point Analysis (TPA) are clearly bottom-up approaches, whereas percentage of development time is clearly a top-down approach. Intuition or guessing, consensus of knowledgeable people and estimates based on previous metrics are likely also to be top-down approaches, but their principles can also be used in conjunction with Work Breakdown Structure (WBS). Finally, remember that deadlines represent targets that are desirable for political and commercial reasons. Estimates should be based on the scope of the given work and not be swayed for political or commercial reasons. S6-5-P13 - Activity In this section we have looked at several test estimation techniques. Each would be most appropriate in different situations. Drag and drop the following techniques to their most appropriate situations. An organisation with a well-defined test process and reasonably comprehensive documentation would be able to use this detail to create an estimate based on a Work Breakdown Structure. A new project in a new technology area would most likely have to resort to intuition or guessing, as there seems to be no detailed information to use for any of the other techniques.

A project with a mature and detailed functional specification may be able to use Test Point Analysis if Function Point Analysis has been done. An organisation that lacks formal processes and a project with limited documentation but several experienced team members can use a consensus of knowledgeable people to improve estimates. An organisation with rudimentary historical data, including overall development effort and testing effort, will be able to estimate based on percentage of development time if the development time has been estimated and the project is of a similar nature. An organisation with a record of past activities along with actual durations can base estimates on these previous metrics. S6-5-P14 - Summary This concludes this section on estimating. In this section we:

Defined what is meant by estimation and explained how our own experience and that of others can assist us with this activity

Discussed what we need estimates of and, just as importantly, listed some of the things which typically get forgotten

Looked at the problems inherent in estimating

Finally we outlined other useful sources of estimating knowledge including company estimating standards.
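To tie these ideas together, here is a minimal Python sketch of a bottom-up estimate with an allowance for re-running execution and an across-the-board contingency. All task names, figures and percentages are illustrative assumptions, not values prescribed by this course.

# Minimal sketch of bottom-up test estimation.
# Every task name, figure and percentage below is an invented example.

tasks = {
    "Test planning": 16,              # effort in hours
    "Test design": 40,
    "Test implementation": 32,
    "Test execution (first run)": 24,
    "Test environment set-up": 20,
}

CONTINGENCY = 0.15    # applied across the board, not per task as well
RERUNS = 2            # rule of thumb: expect to run system tests two to three times

base = sum(tasks.values())
rerun_allowance = tasks["Test execution (first run)"] * RERUNS
total = (base + rerun_allowance) * (1 + CONTINGENCY)

print(f"Base effort: {base} hours")
print(f"Re-run allowance: {rerun_allowance} hours")
print(f"Total with contingency: {total:.0f} hours")

With these invented figures the sketch gives a base of 132 hours, a re-run allowance of 48 hours and a total of 207 hours, which illustrates how quickly re-runs and contingency grow an estimate.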

Module 6 Section 6 – Test Progress Monitoring and Control S6-6-P1 - Objectives Welcome to Section 6.6, entitled 'Test Progress Monitoring and Control'. Having developed a test plan, the activities and timescales specified within it will need to be constantly reviewed to ensure what was planned is actually happening. The purpose of test progress monitoring is to provide feedback and visibility on these activities.


The information to be monitored may be collected manually or automatically and may be used to measure exit criteria, such as coverage. Metrics may also be used to assess progress against the planned schedule and budget. In this section, we will:

Look at some common metrics used for monitoring test preparation and execution

Examine test metrics used for test reporting and test control.

We'll go on to look at the purpose and content of the test summary report document according to the ‘Standard for Software Test Documentation’ - IEEE 829. S6-6-P2 - Common Test Metrics We mentioned in the section introduction that the purpose of test progress monitoring is to provide feedback and visibility on planned testing activities. The progress data is also used to measure exit criteria such as test coverage, for example, 50 per cent requirements coverage has been achieved. Other common test metrics include:

The percentage of work done in test case preparation (or percentage of planned test cases prepared).

This measure will give an indication of progress towards completing test preparation. Of course, it is important to understand that you can count test cases, scripts or conditions. In some respects, it is best to count conditions, as they are usually at a fairly consistent level of detail. Some scripts could be ten times more complex than others so a simple script count may not be reliable. However, counting conditions is more complicated and expensive:

Percentage of work done in test environment preparation.

This is a tricky one. Counting man-hours expended against budget may not give an indication of real progress. It is better to set some targets for completion, usually some

simple trials and use these as milestones to track progress:

Test case execution, for example the number of test cases run or not run, and test cases passed or failed, is another common test metric.

This is the essential mechanism for tracking progress of execution. The key metrics are tests planned, tests run, tests passed, tests failed, re-tests passed and re-tests failed. All these measures can be qualified by referencing the severity of the risks to be addressed or the failures encountered. S6-6-P3 - Common Test Metrics (2) Defect information is a frequent metric; this might include defect density, defects found and fixed, failure rate, retest results and so on. This is the classic way of tracking progress toward acceptance. Logging and analysing defects by severity over time gives you an insight into the real progress towards acceptability:

Test coverage of requirements, risks or code.

Coverage can be tracked both through preparation (coverage of requirements) and through execution (requirements or code):

Testing costs, including the cost compared to the benefit of finding the next defect or to run the next test.

The cost of testing may or may not correspond directly to the other measures. Progress may be slow, and costs may escalate! Of course, the project manager and customer alike are interested in costs and delays, but these are 'inputs' to the test process. Tests planned and run, and defects opened and closed, are the outputs. Metrics should be collected during and at the end of a test level in order to assess:

The adequacy of the test objectives for that test level

The adequacy of the test approaches taken


The effectiveness of the testing with respect to its objectives.
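To show how a few of these execution metrics might be calculated, here is a minimal Python sketch; the counts are invented purely for illustration.

# Minimal sketch of common test execution metrics.
# All counts below are invented for illustration only.

planned = 200
run = 150
passed = 120
failed = run - passed

execution_progress = run / planned    # how much of the plan has been executed
pass_rate = passed / run              # quality signal from the tests run so far

print(f"Executed {execution_progress:.0%} of planned tests")
print(f"Passed {pass_rate:.0%} of executed tests ({failed} failures)")

With these figures the sketch reports 75% of planned tests executed and an 80% pass rate, the kind of summary a test manager might report against exit criteria.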

S6-6-P4 - Test Reporting Test reporting is concerned with summarising information about the testing endeavour. This might include:

What happened during a period of testing, for example, the dates when exit criteria were met

Analysed information and metrics to support recommendations and decisions about future actions. For example, an assessment of defects remaining, the economic benefit of continued testing, any outstanding risks, and the level of confidence in the tested software.

S6-6-P5 - IEEE 829 Test Summary Reports The IEEE 829 standard includes an outline of a Test Summary Report which could be used for test reporting. The purpose of the Test Summary Report is to summarise the results of the designated testing activities and to provide evaluations based on these results. We have outlined it here for your convenience. S6-6-P6 - Stakeholder interest in the test summary report All stakeholder groups will be interested in the test summary report. The business will be interested as they will want to know what requirements have been successfully delivered, what defects are outstanding, and what deviations from requirements may affect the live system. They will be most interested in the test summary report during test execution, especially towards the end of the project. Project management will be interested in the test summary report throughout the project as they will want to know that all scheduled tasks have been completed, that the plan is on track, and if there are any impediments to continued progress.

The development team will be interested in the test summary report as it will enable them to gauge how well they have done, and what impact their own progress and output is having on the final outcome of the project. They will be most interested in the test summary report during test execution, retesting and regression testing. The test team will be interested in the test summary report throughout the project as they will want to know that the plan is on track, what outstanding defects and deviations from requirements may affect their testing, and if there are any new risks or issues that will impede progress. S6-6-P7 - Test control The concept of Test Control describes any guiding or corrective actions taken as a result of information and metrics gathered and reported. Actions may cover any test activity and may affect any other software life cycle activity or task. Typical examples of test control actions include: Re-prioritisation of tests when an identified risk occurs. For example, suppose a fault is found in a reusable component that is implemented at the core of a new system. Tests of the faulty component need to be scheduled in, regression tests need to be implemented and the overall test schedule may be affected. Changes to the test schedule due to availability of a test environment. Delays in availability or outages in test environments inevitably cause delays in testing. Unless the resources allocated change, or the deadline moves in line with the environment slippage, then some tests will not be run in time. The test manager needs to assess what potential exists for de-scoping tests and advise the project of the consequences. Setting an entry criterion requiring fixes to have been retested by a developer before accepting them into a build.


If a component is found to be faulty, the system integrator or organisation using that component may insist that the component is retested and regression tested before it will be acceptable. (You could easily argue that this criterion ought to have existed beforehand!) S6-6-P8 - Activity Read through the test summary report displayed on the screen. Make a note of any test control actions that you think are appropriate. When you are happy with your answer move on to the next page to review the answer. S6-6-P9 - Activity – continued This is our evaluation of the situation and the best controlling actions moving forward. S6-6-P10 - Summary In this section, we: Looked at some common metrics used for monitoring test preparation and execution and examined test metrics used for test reporting and test control. We went on to look at the purpose and content of the test summary report document according to the 'Standard for Software Test Documentation' - IEEE 829. We concluded the section by looking at the concept of Test Control. Module 6 Section 7 – Configuration Management S6-7-P1 - Objectives The subject of this section is Configuration Management. Configuration Management or 'CM' is the management and control of the technical resources required to construct a software artefact. That is a brief definition, but the management and control of software projects is a complex undertaking, and many organisations struggle with chaotic or non-existent control of change, requirements, software components or build.

It's this lack of control that causes testers particular problems. In this section we will:

Look at the background to configuration management and why it is needed

Describe the symptoms of poor configuration management

Outline how effective configuration management can address some of these issues.

S6-7-P2 - Background The purpose of configuration management is to establish and maintain the integrity of the products of the software or system through the project and product life cycle. Products could be such things as components, data or documentation. With specific focus on testing, configuration management may involve ensuring that:

All items of testware are identified, version controlled, tracked for changes, related to each other and related to development items or test objects, so that traceability can be maintained throughout the test process

All identified documents and software items are referenced unambiguously in test documentation

The configuration management procedures and infrastructure are chosen, documented and implemented.

S6-7-P3 - Symptoms of Poor Configuration Management The easiest way to think about why CM is needed is to consider the symptoms of poor configuration management. Each of the following scenarios is a clear indicator of poor configuration management:

The developer cannot find the latest version of the source code module in development

The team can't replicate a previously released version of code for a customer


Bugs that were fixed suddenly reappear.

This is another classic symptom of poor CM. What might have happened was that the code was fixed and released in the morning, and then in the afternoon it was overwritten by another programmer who was working on the same piece of code in parallel. The changes made by the first programmer were overwritten by the old code so the bug reappeared. Shipping the wrong functionality is another common issue. Sometimes, when the build process itself is manual and/or unreliable, the version of the software that is tested does not become the version that is shipped to a customer. And what about testing the wrong code? Imagine, following a week of testing, the testers report the faults they have found only to be told by the developers 'actually, you're testing the wrong version of the software'. S6-7-P4 - Symptoms continued Poor configuration management practices are extremely serious because they have significant impacts on testers; most obviously on productivity, but also on morale, as poor CM can be the cause of a lot of wasted work. Also associated with poor CM is the appearance and disappearance of functionality or features. Consider when a feature that wasn't in your system the last time you tested it suddenly appears. Alternatively, tested features might suddenly disappear. Not knowing which customer has which version of code can become a serious support issue, usually undermining customer confidence. Some issues of control are caused by developers themselves overwriting each other's work. For example, there are two changes required to the same source module. Unless the developers work on the changes serially, which causes a delay, two programmers may reserve the same source code. The first programmer finishes and one set of changes is released back into the library. Now what should happen is that when the second

programmer finishes, they apply the changes of the first programmer to their code. Faults occur when this doesn't happen! The second programmer releases their changed code and overwrites the first programmer's enhancement of the code. This is the usual cause of software fixes suddenly disappearing. S6-7-P5 - The Answers Configuration Management Provides Configuration Management provides an answer to the question 'What is our current software configuration?' When implemented successfully, CM can provide confidence that the changes occurring in a software project are actually under control. Whatever version you're testing today, you can accurately track down the components and versions comprising that release. And what's the status of the software? A CM system will track the status of every component in a project, whether that be tested, tested with bugs, bugs fixed but not yet tested, tested and signed off, and so on. So how do we control changes to our configuration? Before a change is made, a CM system can be used to identify, at least at a high level, the impact on any other components or behaviour in the software. Typically, an impact analysis can help developers understand, when they make a change to a single component, which other components call the one that is being changed. This will give an indication as to what potential side effects could exist when the change has been made. Not only will a CM system have information about current status, it will also keep a history of releases so that the version of any particular component within that release can be tracked too. This gives you traceability back to changes over the course of a whole series of releases. The CM system can identify all the changes that have been made to the version of software that is currently under test. In that respect, it can contribute to the focus for testing on a particular release.
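To make the question 'what is our current software configuration?' concrete, here is a minimal Python sketch of the bookkeeping a CM system automates: a record of which component versions make up each release. The component names and version numbers are invented for illustration.

# Minimal sketch of release configuration tracking.
# Component names and version numbers are invented examples.

releases = {
    "R1.0": {"ui": "1.0", "payments": "1.0", "reports": "1.0"},
    "R1.1": {"ui": "1.2", "payments": "1.0", "reports": "1.1"},
}

def configuration(release):
    # Answer 'what is our configuration?' for a given release.
    return releases[release]

def changed_components(old, new):
    # Identify what changed between two releases, to help focus testing.
    return [c for c, v in releases[new].items() if releases[old].get(c) != v]

print(configuration("R1.1"))
print(changed_components("R1.0", "R1.1"))    # ['ui', 'reports']

Even this toy version shows how a CM record supports both reproducing an old release and focusing testing on the components that actually changed.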


S6-7-P6 - Summary In this section we have:

Looked at the background to configuration management and why it is needed

Described the symptoms of poor configuration management and, more importantly, outlined how effective configuration management can address some of these issues.

Module 6 Section 8 – Risk and Testing S6-8-P1 - Objectives In this section we will explore the meaning of risk and the relevance of risk in software projects. As a Project Manager, understanding risk helps us manage our projects and avoid project failure. As a Tester, risks help us to focus the attention of our project and user management on the most significant potential failures. Risk assessment helps testers to select the tests that have most value and provide information about the risk of failure of the systems under test. In this section we will:

Describe what is meant by risk, and consider the concepts of ‘likelihood’ and ‘impact’.

Look at risk types including political, economic, technical, security and safety risks

Explain two types of software risk, namely project risk and product risk and look at each in some detail

Finally we will look at the concept of risk based testing. S6-8-P2 - Stakeholder Objectives and Risks Every project has objectives, some purpose or aim that the sponsors or stakeholders are prepared to pay for. Typical objectives are increased business, expansion into new markets or decreased costs. Most projects begin when a

business case or feasibility study is approved. Typically, a Project Initiation Document or PID is created. The PID typically sets out the aims and objectives of the project, an outline of costs and timescales, responsibilities and so on. The PID or its equivalent is the best source of information on the cardinal objectives of a system. These objectives are important, because they are what the sponsors believe they are paying for. Software risks are potential events or things that threaten the cardinal objectives of a project. S6-8-P3 - About Risk Let's explore what we mean by 'risk'. Risk can be described as a factor that could result in future negative consequences, usually expressed in terms of 'impact' and 'likelihood'. Let's take a road bridge as an example. We think of bridges as being comparatively safe. The impact of a bridge collapse is catastrophic. However, the risk is comparatively low because the likelihood of a collapse is extremely low. Not everything that has a high impact is high risk, because we make the likelihood of it happening so low that it's almost negligible. Risk only exists where there is uncertainty. That is, if the likelihood of a risk is zero, then it's not a risk at all. Equally, imagine there is a hole in the road ahead and it is unavoidable: the likelihood of this undesirable event is 100%, so it is an inevitability rather than a risk. In the same way, unless there is the potential for loss, there is no risk. Experience of software projects tells us that they are inherently risky because there's great uncertainty as to whether we will achieve our desired objectives. S6-8-P4 - Risk Examples Here are some risk examples to consider. Failure of airborne software, the software used to control the behaviour of an aeroplane in flight, is obviously extremely serious. This is a 'safety critical risk'


because people's lives depend on the reliability of such software. Consider a system being developed on behalf of a government department which intended to speed up the processing of refugees claiming political asylum. Failure to deliver, or failures in production, are likely to lead to political embarrassment: an example of 'political risk'. If a bank's system fails and it overpays or underpays its customers, either the bank will lose money (at least temporarily) or lose some customers. This is an economic or financial risk. A Commercial Off-The-Shelf (COTS) component might turn out to be too resource-hungry to operate correctly in a technical environment and prove unusable, causing delays in the build of a new system: an example of 'technical risk'. Most web sites are under constant threat from security hackers. This is a 'security risk'. If developers leave a security hole in a system, it is likely to be exploited. This could be catastrophic for an E-Commerce site or a bank. S6-8-P5 - Two Types of Software Risk There are two types of software risk. These are:

project risk

and product risk. We'll take a few moments here to look at each in turn and consider some typical example risks for each. S6-8-P6 - Project Risks Project risks relate to the project in its own context. Projects usually have external dependencies such as the availability of skills, dependency on suppliers, constraints or fixed deadlines. External dependencies are project management responsibilities. Project risks are the risks that surround the project's capability to deliver its objectives, such as:

Supplier issues, such as failure of a third party or contractual issues

Organisational factors, such as skill and staff shortages, personnel and training issues. Other organisational factors might include political issues, such as:

problems with testers communicating their needs and test results

failure to follow up on information found in testing and reviews

or improper attitude toward or expectations of testing

Technical issues might include:

problems in defining the right requirements

the extent that requirements can be met given existing constraints

the quality of the design, code and tests

the quality of the test data. The 'Standard for Software Test Documentation' (IEEE 829) outline for test plans requires risks and contingencies to be stated. S6-8-P7 - Further Project Risks Project risks also relate to the internals of the project, where the project's planning, monitoring and control come under scrutiny. Typical risks here are underestimation of project complexity, the effort required or the required skills. The internal management of a project, such as good planning, progress monitoring and control, is a project management responsibility. Typical project risks therefore include:

Poor planning

Under-estimation (or over-estimation)

Assignment of the wrongly skilled or under-skilled resources

Poor monitoring


Failure to monitor progress against plans

Failure to monitor the results of testing

Poor control

Failure to understand the impact of events occurring

Failure to take any action or the wrong actions when confronted by problems.

S6-8-P8 - Product Risks Product risks relate to the definition of the product, the stability (or lack of stability) of requirements, the complexity of the product, and the fault-proneness of the technology, resulting in a failure to meet requirements. Potential failure areas in software or systems are known as product risks. Typical examples of product risk include:

The delivery of error-prone software

The potential that the software/hardware could cause harm to an individual or company.

Poor software characteristics, including functionality, security, reliability, usability and performance.

Software that does not perform its intended functions.

S6-8-P9 - Risks and application domains There are many risks that can be identified in a project. We have already looked at different categories of risk: safety, economic, security, political and technical risks. And we have looked at the difference between product risk and project risk. There will also be different types of risk depending on the application domain: Mainframe applications, for example, have risks associated with the fact that one large central processor handles

all the system's transactions. Large national banks mitigate this risk by having more than one identical mainframe system running in parallel. This might be too expensive an option for some organisations, who therefore opt for different technologies. Client-server systems will have different risks associated with them. For example, rather than having one single point of failure, they have multiple points of failure, and so redundancy must be built into the system in several places. Web-based applications, especially public facing websites, have different risks associated with the usability of the interface and the ability of the system to handle the load it might encounter. PC-based applications may have all of these risks too. However, depending on the exact nature of the system there could be a wide variety of other risks to be considered. For example, a system designed to assist medical staff in the field, an accounting system for a very small business and a PC game will all have associated risks that apply specifically to those systems. S6-8-P10 - Risks interact With so many risks to be identified it is inevitable that these risks will have a relationship with one another. Project risks will also interact with product risks. For example, the risk that the specification will be delivered late to testing, causing testing to be squeezed (a project risk), may in turn reduce the quality of the tests created, allowing defects to slip through testing into the live system (a product risk). In practice it will only be possible to manage a finite number of risks. There are also political implications to defining either too few or too many risks. Therefore, the risks that are documented will need to be carefully considered to encompass the risks and mitigation activities that are most important. S6-8-P11 - Risk-based Testing Risks are used to decide where to start testing and where to test more. Testing, on the other hand, is used to reduce the risk of an adverse effect occurring, or to reduce the impact of an adverse effect.


Testing as a risk-control activity provides feedback about the residual risk by measuring the effectiveness of critical defect removal and of contingency plans. A risk-based approach to testing provides proactive opportunities to reduce the levels of product risk, starting in the initial stages of a project. It involves the identification of product risks and their use in guiding the test planning, specification, preparation and execution of tests. Risk-based testing draws on the collective knowledge and insight of the project stakeholders to determine the risks and the levels of testing required to address those risks. To minimise the likelihood of product failure, risk management activities will include: the assessment and reassessment of the risks (in other words, what can go wrong?); determining which risks are important and should be dealt with; and implementing actions to deal with those risks. In addition, testing may support the identification of new risks, it may help to determine what risks should be reduced, and it may lower uncertainty about risks. S6-8-P12 - Risk management If we are going to do risk-based testing we need a formal risk management process for dealing with product and project risk. The activities of risk management are: risk identification, where risks are identified; risk analysis, where risks are sequenced according to which ones are most significant; and risk mitigation, where the mitigation actions for each risk are decided on. These activities are described in the next few pages. S6-8-P13 - Risk identification Risk identification is the activity where risks are identified. The BCS Intermediate Certificate in Software Testing syllabus suggests

several techniques that can be used to do this: Risk workshops are where representatives from each stakeholder group meet to discuss and identify risks. The benefit is that groups of people are often more effective at identifying risks than individuals on their own. The pitfalls are that workshops can be time consuming and it can be difficult to arrange a convenient time and location for all attendees. Brainstorming is a process of combining and mutating ideas that can be used to identify risks. Its benefit is that risks for new technologies and situations may be identified. Its pitfall is that it may not identify any risks that have not already been thought of, in which case it will have been a waste of time. Expert interviews can be used to gain insight into risks from a person who has specialist knowledge in a certain area. The benefit is that significant but previously unnoticed risks may be identified. The pitfall is that the expert may be biased, and the risks may not be as important as they are made out to be. Independent assessments can be used to bring in expertise from outside the team or organisation to assist in identifying risks. The benefit is that an objective view may identify risks that the team are not able to identify. The pitfalls are that they can be expensive and, in the time available, the expert may not be able to fully understand the specifics of the project. Lessons learned are where issues from previous projects can be used to identify risks for the current project. The benefit is that issues will not catch you out more than once. The pitfall is that lessons learned are often not recorded, or not recorded in enough detail, to be useful. Checklists can be used to identify risks. Typically risks from lessons learned or the risk register from a previous project can be used to create the checklist. The benefit is that you do not have to spend time reinventing your list of risks. The pitfall is that new risks, specific to the current project, will not be on the checklist and may be missed.


Risk templates take checklists one step further by organising risks in a spreadsheet. The benefit is that spreadsheets allow manipulation of the data, for example to carry out risk analysis, which will be described shortly. The pitfall is that, again, new risks specific to the current project will not be on the list and may be missed. S6-8-P14 - Risk analysis Risk analysis is where risks are sequenced according to which ones are most significant, that is, the ones that present the greatest risk exposure. This is done by considering the likelihood of the risk occurring and multiplying it by the severity of the risk. Risks with a high risk exposure would be considered more 'risky' than ones with a low risk exposure. Risk analysis can be carried out qualitatively, by using a scale such as critical, high, medium and low; or quantitatively, by calculating a value for the likelihood and severity of the risk based on the known facts. S6-8-P15 - Risk mitigation Risk mitigation is the process through which decisions are reached and protective measures are implemented to reduce risk. There are several categories of responses to risk: Accepting the risk, or just doing nothing, is a common response and is chosen when the cost of any other response would not be cost effective, for example where the likelihood or impact is considered extremely low. Sharing the risk means sharing the cost of any mitigation action, or the cost of the risk becoming an issue, with a third party. Taking preventative action to avoid or reduce the risk: for testers this is typically testing itself, but it may also include implementing a tool, or deciding not to go ahead with the development of a particular system or feature. Planning for contingent action could include reserving extra budget or time in case the risk becomes an issue.
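As a simple illustration of quantitative risk analysis, here is a minimal Python sketch that ranks risks by exposure (likelihood multiplied by impact). The risks and scores are invented for the example and are not drawn from the course.

# Minimal sketch of quantitative risk analysis: exposure = likelihood x impact.
# The risks, likelihoods and impact scores below are invented examples.

risks = [
    ("Payment calculation wrong", 0.3, 9),    # (description, likelihood, impact 1-10)
    ("Slow response under load", 0.6, 5),
    ("Help text misspelled", 0.8, 1),
]

# Rank by exposure so test effort is directed at the riskiest areas first.
for name, likelihood, impact in sorted(risks, key=lambda r: r[1] * r[2], reverse=True):
    print(f"{name}: exposure = {likelihood * impact:.1f}")

With these figures the slow-response risk (exposure 3.0) outranks the payment risk (2.7) and the cosmetic one (0.8), showing how a moderately likely, moderately harmful risk can head the list.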

S6-8-P16 - Stakeholder involvement in risk management Stakeholders include the business, development, project management and testers. Ideally all the stakeholders would be involved at all stages of risk management. They will all be able to identify risks, will have a view as to how significant a risk is, and will be able to contribute to how the risk will be mitigated. It is likely that different stakeholders will have a different view of risk. This means it may be difficult to get agreement among stakeholders as to what the risks are, how significant they are and how they will be mitigated. Good communication, through the use of workshops and regular reporting, will help stakeholders understand each other's views. S6-8-P17 - Risk and testing Risk and testing are inextricably linked. If there were no risk there would be no testing. Project and product risk influences almost every decision we make on a testing project. Product risk will drive the types of testing used and the scope of testing. Project risk will determine how we will manage testing. These decisions are typically documented in the test strategy which, as mentioned before, is often described as the vehicle for managing risk. S6-8-P18 - Exercise: Identifying risks Here's a short exercise. Shown here are the main features of an ATM. Use your skill and judgement to list what kind of failures could occur for each of the four requirements shown. For each of the key requirements, make a list of the possible modes of failure. You can print a copy of the exercise by clicking here. Once you have completed the exercise, you can print a copy of the model answer by clicking here. S6-8-P19 - Summary In this section we explored the meaning of risk and the relevance of risk in software projects.


We began by discussing what is meant by risk, and considered the concepts of 'likelihood' and 'impact'. We went on to look at risk types including political, economic, technical, security and safety risks. We described two types of software risk, namely project risk and product risk, and looked at each in some detail. Finally we looked at the concept of risk-based testing. Module 6 Section 9 – Incident Management S6-9-P1 - Objectives Welcome to Section 6.9, entitled 'Incident Management'. In this section we will:

Describe what is meant by an incident

Outline the objectives of incident reporting

Examine the typical test execution and incident management process

Discuss the typical content of an incident report and list the key section headings.

We tend to log incidents when someone other than the author of a product executes tests and encounters failures, in system and acceptance testing for example. However, the same process can be used for document or code reviews or component testing. A common synonym for incidents is 'anomaly'. Anomaly is the term used in the IEEE 1044 standard for anomaly classification. IEEE 1044 is covered in the Practitioner syllabus but it is out of scope for this course. S6-9-P2 - Why Track Incidents? Let's begin by asking 'why, when and how should we track incidents?' Well, since one of the objectives of testing is to find defects, the discrepancies between actual and expected outcomes need to be logged as incidents. An

incident is ‘any unplanned event occurring that requires further investigation.’ Incidents may be raised during development, review, testing or during use of a software product. Incidents should be tracked from discovery and classification to correction and confirmation of the correction. In order to manage all incidents to completion, an organisation should establish a process for raising, assessing, tracking and reporting on incidents. The classification of incidents adds enormously to their value and rules for classification should also be created. Incidents may be raised for issues in code or the working system, or in any type of documentation including development documents, test documents or user information such as "Help" or installation guides. The objectives of incident reporting are:

To provide developers and other parties with feedback about the problem and to enable identification, isolation and correction as necessary

To provide test leaders with a means of tracking the quality of the system under test and the progress of the testing

To provide an input to test process improvement initiatives.

S6-9-P3 - When to Log an Incident We log incidents when a test result appears to be different from the expected result. This could be for a number of reasons. The tester shouldn't automatically assume that it's a software fault. Consider some of the following possibilities:

It could be something wrong with the test itself; the test script may be incorrect in the commands or outputs it expects, or the expected result may have been predicted incorrectly

Maybe there was a misinterpretation of the requirements


Perhaps the tester didn't follow the script and made an error entering some test data, and that is what caused the software to behave differently from what was expected

It could be that the results themselves are correct but the tester misinterpreted them

The test environment could be at fault. Test environments are often quite fluid and changes are being made continuously to refine their behaviour. A change in the configuration of the software in the test environment could cause a changed behaviour of the software under test

Finally, it could be something wrong with the baseline; that is, the document upon which the tests are being based is incorrect. The requirement itself is wrong.

It could be any of the reasons above, but it could also be a software fault. Testers should be really careful about identifying the root cause of the problem before calling it a 'software fault'. S6-9-P4 - Incident Reporting What happens when you run a test and the test itself displays an unexpected result? If the tester is certain that they themselves have not made a mistake in the execution or interpretation of the test, they should stop what they're doing and complete an incident report. It's most important that the tester completes the log at the time of the test and does not wait a few minutes and perhaps do it when it's more convenient. The tester should log the event as soon as possible after it occurs. But what goes into an incident report? The tester should describe exactly what is wrong. They should record the test script they're following and, potentially, the test step at which the software failed to meet an expected result. If appropriate, they should attach any output: screen dumps, print outs, any information that might be deemed useful to a developer so that they can reproduce the problem. Part of the incident report should be an assessment of whether the failure in this script has an impact on other tests that have to be completed. Potentially, if a test fails, it may be a test that has no bearing

on the successful completion of any other test. However, some tests are designed to create test data for later tests, so it may be that a failure in one script causes the rest of the scripts to be shelved because they cannot be run without the first one being corrected. S6-9-P5 - Incident Reporting (2) Why do we create incident reports with such a lot of detail? Consider what happens when the developer is told that there may be a potential problem in the software. The developer will use the information contained in the incident report to reproduce the fault. If the developer cannot reproduce the fault (because there's not enough information on the log), it's unreasonable to expect them to fix the problem. After all, developers cannot start fixing a problem if they have no way to diagnose where the problem might be. Accurate incident reports save a lot of time for everyone. One further way of passing test information to developers is to record tests using a record/playback tool. It is not that the developer uses the script to replay the test; rather, they have the exact keystrokes, button presses and data values required to reproduce the problem. It addresses the comment, "you must have done something wrong, run it again." This can save a lot of time. S6-9-P6 - Test Execution and Incident Management Process Here we can see a typical test execution and incident management process. You can see that the output of test execution is to raise an incident to cover any unplanned event. It could be that the tester has made an error, so this is not a real incident and needn't be logged. Where a real incident arises, it should be diagnosed to identify the nature of the problem. It could be that we decide that it is not significant, so the test could still proceed to completion.


S6-9-P7 - IEEE Std 1044-1993 IEEE Std. 1044-1993, Classification for Software Anomalies describes the incident life cycle as having the following steps:

Recognition, where the incident is discovered and first recorded

Investigation, where it is decided what should be done about the incident

Action, where any fix or other activity is actioned

Disposition, where, if appropriate, the incident is closed

Three administrative activities are applied at each step:

Recording the incident

Classifying the incident

Identifying the impact of the incident. The standard contains tables that define what to record, how to classify, and how to identify the impact of the incident. For reasons of semantics IEEE 1044 prefers to use the word anomaly over the words error, fault, failure, incident and bug. For the purposes of this course and the BCS Intermediate Certificate in Software Testing examination, the word anomaly can be used interchangeably with the words error, fault, failure, incident and bug. S6-9-P8 - Activity Shown on the screen is a mapping of the IEEE incident management process to a highly simplified incident management process that you might come across on a project. Based on what you have learnt about the incident management processes so far, take a look at these processes and make a note of what activities and statuses are missing and should be added to improve the process. When you have decided, move to the next page to reveal our answer. S6-9-P9 - Activity – continued With such a highly simplified process, the number of improvements that can be made is almost infinite. Here are some things we would like to add:

Under investigation: In reality there will be several stages of investigating a defect. First it will probably go to the test manager as a quality check, and because the test manager wants to keep an eye on what incidents are being raised. Next it will go to a triage meeting where the criticality of the incident will be confirmed and it will be decided which team will be responsible for fixing it. Once the team has received the incident, the final stage will be deciding who within the team will fix the defect. In progress: Whilst the defect is being fixed it is likely there will be several interim stages. For example, the developer will be assigned to the work, the developer will undertake the work, someone will check the work, and finally the incident will be reviewed and it will be confirmed that it is ready to pass to configuration management to be put in the next build. Feedback loops: As it stands, feedback loops are missing from the process. For example, during the triage meeting it may be noticed that there is some information missing, or perhaps it is decided that the incident is not a defect and is in fact a feature. Perhaps the developer discovers that there is not enough information present to fix the defect, or the defect is not as originally perceived and requires further information before work can continue. In these cases the incidents will need to be passed back to a previous stage. S6-9-P10 - Incident Report Content It is common to split an incident report into two sections, the 'summary' and the 'incident description'. The header information tends to deal with the administrative aspects of the incident, including:

Date of incident and issue

The issuing organisation

The author

Approvals and status

The scope, severity and priority of the incident

References, including the identity of the test case specification that revealed the problem.


S6-9-P11 - Incident Report Content (2) For a complex incident the ‘detail’ of the incident record can be quite extensive. Typical categories include:

Expected and actual results

Item under test

Test phase or activity

Description of the incident to enable reproduction and resolution

Impact for a complex incident

Severity of the impact on the system

Urgency/priority

Status of the incident, such as open, awaiting confirmation test

Wider concerns or issues such as other areas that may be affected

Change history, such as the sequence of actions taken by the project team.
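To make these categories concrete, here is a minimal Python sketch of an incident record as a simple data structure. Every field name and value is an illustrative assumption, not a schema mandated by IEEE 829.

# Minimal sketch of an incident record covering the typical categories above.
# Field names and values are invented examples, not an IEEE 829 schema.

incident = {
    "id": "INC-0042",
    "date_raised": "2012-03-26",
    "author": "A. Tester",
    "item_under_test": "Payments module v1.3",
    "test_phase": "System test",
    "test_case_ref": "TC-PAY-017",
    "expected_result": "Balance reduced by transfer amount",
    "actual_result": "Balance unchanged; error message displayed",
    "severity": "High",      # impact on the system
    "priority": "Urgent",    # urgency of the fix
    "status": "Open",        # e.g. open, awaiting confirmation test, closed
    "history": [],           # sequence of actions taken by the project team
}

# A tiny status check of the kind an incident management tool automates.
open_statuses = {"Open", "Awaiting confirmation test"}
print(incident["status"] in open_statuses)    # True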

The structure of an incident report is set out in the IEEE 829 Standard for Software Test Documentation. S6-9-P12 - Activity Shown on screen is the typical test execution and incident management process we saw earlier in the section, but we have left out some of the titles. Use your mouse to drag the missing titles from the palette into the correct location on the diagram. S6-9-P13 – Exercise: Incident Reporting Suppose you were testing an electric kettle. A test involves boiling enough water to make four cups of tea, in other words about 1 litre of water. The kettle has a lid, and sits on a base unit which is plugged into the mains. The power switch is on the base unit. Suppose you did everything 'right' but the kettle didn't boil the water, and the kettle and water were still at room temperature at the end of the test. The exercise here is to produce a set of instructions (in other words a script) that you think would reliably test the kettle's ability to boil a litre of water in 75 seconds. Imagine you have to hand over these details to the technician who built the kettle in order for them to reproduce and diagnose the problem properly.

Assume the incident report will reference this 'script' so that the failure can be reproduced by the technicians. Once you have completed the exercise, advance to the next page. S6-9-P14 – Exercise: Answer - Incident Reporting That was easy, wasn't it? Print off our 'ideal' test script using the link provided on screen and read it through. We'll return to your answer in a moment. From the ideal script which you have printed off, what could explain the failure of the kettle to boil? Try to write down a few potential defects or test errors that could have caused the failure. S6-9-P15 – Exercise: Discussion - Incident Reporting Now consider your own script. If you missed a step in the instructions, could that change the diagnosis of the problem? Print off the model answer using the link on screen and take a look at our suggestions. Consider the effect that omitting each step might have on the potential diagnosis. Using your incident report, could the problem have been mis-diagnosed? If so, then how? Typically, we make assumptions that developers will test the system in the same way we do (using the same steps and the same environment) and encounter the same failure. But this is often not the case. It's important to record details of the environment, versions, procedure, expectations, observations and interpretation to allow a technician to reproduce, diagnose and fix the problem using exactly the same software, starting conditions, inputs and expectations. Remember, 'Assume' makes an ASS of U and ME! S6-9-P16 - Summary This concludes Section 6.9, on the subject of Incident Management. In this section we:


Described what is meant by an incident, namely ‘any unplanned event occurring that requires further investigation’

Outlined the objectives of incident reporting and examined the test execution and incident management process

Finally we discussed the typical content of an incident report and listed the key section headings.

Module 7 – Tool Support for Testing

Module 7 Section 1 – Types of Test Tool S7-1-P1 - Objectives Welcome to Section 7.1 entitled 'Types of Test Tools'. We have seen throughout this course that there are many aspects to testing. It's unsurprising, then, to find that there are also a number of tools to support them. These tools are classified in this syllabus according to the testing activities which they support. In this section we will look at these classifications in some detail, specifically:

Management of testing and tests

Static testing

Test Specification

Test execution and logging

Performance Testing and monitoring

Application-specific variations. Some tools clearly support one activity; others support many, but are classified under the activity with which they are most closely associated. S7-1-P2 - Introduction to Tools Some commercial tool vendors offer suites or families of tools that provide support for many or all of the activities we listed here. Testing tools can improve the efficiency of testing activities by automating repetitive tasks, but they can also improve the reliability of testing, for example by automating data comparisons or simulating behaviour.

Some types of test tool can be intrusive in that the tool itself can affect the actual outcome of the test. For example, the actual timing may be different depending on how you measure it with different performance tools, or you may get a different measure of code coverage depending on which coverage tool you use. The consequence of intrusive tools is called the probe effect. S7-1-P3 - Tool Support for Management of Testing and Tests The first tool type from the classifications we listed previously is 'management of testing and tests.' Management tools apply to all test activities over the entire software life cycle. These tools provide facilities to create, manage and manipulate the materials that would normally be paper based. Examples include the inventories of test objectives, features in (and out of) scope, test cases and so on. In essence management tools replace the office tools (spreadsheets, databases and word processors) that most testers use. Beyond simple document production, these tools also offer some facilities to handle cross-referencing, traceability and change control. Test management tools drive the creation of tests, execution and logging. They often provide direct interfaces to proprietary test execution tools, and incident/defect management facilities are often 'built-in'. Other interfaces include requirements management (to provide coverage reporting) and occasionally configuration management. The tools usually provide support for traceability of tests, test results and incidents to source documents, such as requirement specifications. They may also provide some quantitative analysis, or 'metrics', related to the tests (for example, tests run and tests passed) and to the test object (such as incidents raised), in order to give information about the test object and to control and improve the test process.

S7-1-P4 – Activity Some types of tool can be intrusive, in that they can affect the outcome of the test. What is this commonly known as? S7-1-P5 - Management: Requirements Management Tools Requirements management tools store requirement statements, and check them for consistency and for undefined or ‘missing’ requirements. These tools also allow requirements to be prioritised and enable individual tests to be traced to requirements, functions and/or features. Traceability may be reported in test management progress reports. The coverage of requirements, functions and/or features by a set of tests may also be reported. S7-1-P6 - Management: Incident Management Tools Incidents occur at all stages of testing, but incident management tools are generally most useful for system/acceptance testing. If incidents are defined as unplanned events that have a bearing on the successful outcome of a test, the incidents become, in some way, the drivers of progress through the test plan. Because of their negative effect on progress, they need to be logged and controlled carefully. When managed correctly, they can provide valuable insights into the true state of a product under test. In some ways, the nature of a project changes when a product goes into system test. Progress is driven (or hampered) by the incidents being raised, so it is essential that incidents are monitored closely. Incident management tools help with this process. S7-1-P7 - Incident Management Tools continued Incident management tools store and manage incident reports, i.e. defects, failures or perceived problems and anomalies. These tools support management of incident reports by:

Facilitating their prioritisation

Assigning actions to people (for example a fix or confirmation test)

Attributing status, such as rejected, ready to be tested or deferred to the next release (a brief sketch follows this list).
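As a rough illustration of the records such tools manage, the sketch below models an incident with a priority, an assignee and a status lifecycle. The statuses and field names are assumptions chosen to echo the examples above, not any particular product's schema.

    from dataclasses import dataclass, field

    # Illustrative status values, echoing the examples in the text above.
    STATUSES = {"open", "assigned", "fixed", "ready to be tested",
                "rejected", "deferred", "closed"}

    @dataclass
    class Incident:
        incident_id: str
        summary: str
        priority: int                  # 1 = highest
        assignee: str | None = None
        status: str = "open"
        history: list[str] = field(default_factory=list)

        def assign(self, person: str, action: str) -> None:
            """Assign an action (e.g. a fix or a confirmation test) to someone."""
            self.assignee = person
            self.move_to("assigned")
            self.history.append(f"assigned to {person}: {action}")

        def move_to(self, status: str) -> None:
            if status not in STATUSES:
                raise ValueError(f"unknown status: {status}")
            self.history.append(f"{self.status} -> {status}")
            self.status = status

    incident = Incident("INC-42", "Kettle fails to boil", priority=1)
    incident.assign("dev1", "investigate thermostat reading")
    incident.move_to("fixed")
    incident.move_to("ready to be tested")
    print(incident.status, incident.history)

A real incident management tool would add timestamps, severity classifications and links to the tests and builds involved, and would report statistics over the whole incident database.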

These tools enable the progress of incidents to be monitored over time, often providing support for statistical analysis and reporting on incidents. They are also known as defect tracking tools. Lots of proprietary as well as open-source incident management tools exist, but often a simple PC-based database will suffice. Every organisation has its own ways of recording and reporting incidents, and it is common for these tools to be written in-house by one of the test team. A benefit of proprietary tools is that they can offer integration with other CAST components and are often more feature-rich. S7-1-P8 - Management: Configuration Management Tools Configuration management or CM tools are not strictly testing tools, but are typically necessary to keep track of different versions and builds of the software and tests. Typically they:

Store information about versions and builds of software and testware

Enable traceability between testware and software work products and product variants.

CM tools are essential when developing and testing on more than one configuration of the hardware/software environment; a small sketch follows the list below. Typical examples might include:

Different operating system versions

Different libraries or compilers

Different browsers

Different computers.
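The sketch below shows, in miniature, the traceability a CM tool provides: each test run is recorded against the exact software build, testware version and environment used, so that a result can be reproduced later. The record layout and sample values are invented for illustration.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Configuration:
        """One tested configuration: software build plus testware version."""
        software_build: str
        testware_version: str
        environment: str   # e.g. operating system version, browser

    # A toy traceability log pairing each test run with its exact configuration,
    # the kind of record a CM tool keeps so results can be reproduced later.
    runs = [
        ("T1", "pass", Configuration("build-1.4.2", "tests-0.9", "Windows/IE8")),
        ("T1", "fail", Configuration("build-1.4.2", "tests-0.9", "Linux/Firefox")),
    ]

    for test_id, verdict, config in runs:
        print(f"{test_id}: {verdict} on {config.software_build} "
              f"with {config.testware_version} ({config.environment})")

Here the same test passes on one configuration and fails on another; without the configuration record, that difference could not be diagnosed or reproduced.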

S7-1-P9 - Tool Support for Static Testing Review tools, also known as review process support tools, provide a framework for reviews or inspections. Amongst other things they may:

Store information about review processes

Store and communicate review comments

Report on defects and effort

Manage references to review rules and/or checklists

Keep track of traceability between documents and source code.

Collaborative, web-based tools are emerging, and as such review tools can also support online reviews. Such reviews may be the only way to get reviews done at all if teams are geographically dispersed. S7-1-P10 - Static: Static Analysis Tools We covered static analysis in the Test Techniques part of the Foundation Syllabus. Needless to say, static analysis can really only be performed by software tools on real systems. Over the past ten years or so, these tools have become increasingly sophisticated, and the notion of 'deep flow analysis' means that paths through software can be traced, assertions about the use (and abuse) of data tracked, and objective measures of code quality against a user-defined standard calculated. In higher-integrity environments, static analysers are an essential tool used by developers to keep a check on their coding practices. Surprisingly, a large number of statically detectable faults can be found in released code. Static analysis tools can generate objective measurements of various characteristics of the software, such as the cyclomatic complexity measure and other quality metrics (a small illustrative sketch follows the list below). Static analysis tools can also support developers, testers and quality assurance personnel in finding defects before dynamic testing. These tools have several benefits, including:

Helping to enforce coding standards

Generating analysis of structures and dependencies (for example, linked web pages)

Helping developers to understand complex code.
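As promised above, the following sketch shows in miniature the kind of structural measurement such tools automate: it counts branch points in Python source to approximate a cyclomatic-complexity-style metric. It is a toy illustration under simplifying assumptions, not a real static analysis tool, which would analyse the control-flow graph properly.

    import ast

    # Node types that create a decision point: an approximation of the
    # constructs a cyclomatic complexity measure counts.
    BRANCH_NODES = (ast.If, ast.For, ast.While, ast.Try,
                    ast.BoolOp, ast.IfExp, ast.comprehension)

    def complexity(source: str) -> int:
        """Roughly approximate cyclomatic complexity: 1 + number of branch points."""
        tree = ast.parse(source)
        return 1 + sum(isinstance(node, BRANCH_NODES) for node in ast.walk(tree))

    sample = """
    def classify(x):
        if x < 0:
            return "negative"
        elif x == 0:
            return "zero"
        for _ in range(x):
            pass
        return "positive"
    """
    print(complexity(sample))  # the elif parses as a nested If, so this prints 4

Note that to run this as written, the sample string must be de-indented (for example with textwrap.dedent) so that ast.parse accepts it; the counting logic itself is the point of the sketch.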

Static analysis tools can also calculate metrics from the code, such as complexity, which in turn can provide valuable information for planning or risk analysis. Static analysis tools are typically used by developers. S7-1-P11 - Static: Modelling Tools Modelling tools are able to validate models of the software. For example, a database model checker may find defects and inconsistencies in the data model. Other modelling tools may find defects in a state model or an object model. These tools can often assist in generating test cases based on the model. Test design tools can also help here, and we will look at this next. The major benefit of static analysis tools and modelling tools is their cost-effectiveness: they find defects early in the development process, which results in less rework and helps to improve and accelerate the development process. Modelling tools are usually used by developers. S7-1-P12 - Specification: Tool Support for Test Specification Test design tools generate test inputs or actual tests from a number of sources. Typical sources include:

Requirements

Graphical user interface

Design models (state, data or object)

Code. Test design tools may also generate expected outcomes; they may use a test oracle, for example. The tests generated from a state or object model are useful for verifying the implementation of the model in the software, but are seldom sufficient for verifying all aspects of the software or system. These tools are very thorough and so can save valuable time. Other tools in this category support the generation of tests by providing structured templates, sometimes called a test frame. These tools generate tests or test stubs, and thus speed up the test design process. Test design tools are mainly concerned with recording the details of test specifications. They allow testers to create inventories of test objectives, requirements, risks and test conditions, and to generate hard copies of test plans. These tools naturally overlap with test management tools, which can organise and control test execution; most test management tools offer some test design facilities. S7-1-P13 - Specification: Test Data Preparation Test data preparation tools manipulate databases, files or data transmissions to set up test data to be used during the execution of tests. These tools ensure that live data transferred to a test environment is made anonymous, supporting data protection. This is a clear benefit. In legacy environments in particular, the creation, maintenance and manipulation of test data is a significant task, consuming perhaps 50% of the overall test budget. In these environments, the 'workhorse' tool might be the test data manipulation tool, not the test running tool. In modern environments using relational databases, SQL is often the only tool available or required, but this in itself requires that testers understand SQL as a data manipulation language. In many environments, PC-based tools such as Microsoft Access can be linked to back-end databases and used to perform the same function.
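To illustrate the anonymisation job described above, here is a minimal sketch using Python's built-in sqlite3 module. The table, column names and masking rule are invented for the example; a real test data preparation tool would preserve referential integrity, data formats and distributions far more carefully.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT, email TEXT)")
    conn.executemany(
        "INSERT INTO customers (name, email) VALUES (?, ?)",
        [("Alice Smith", "alice@example.com"), ("Bob Jones", "bob@example.com")],
    )

    # Mask personal data while keeping the rows usable as test data:
    # replace each name and email with a value derived from the row id.
    conn.execute("UPDATE customers SET name = 'Customer ' || id, "
                 "email = 'user' || id || '@test.invalid'")

    for row in conn.execute("SELECT * FROM customers"):
        print(row)   # (1, 'Customer 1', 'user1@test.invalid') and so on

The same idea, applied with SQL directly against a copied production database, is the everyday use of SQL as a data manipulation language that the text describes.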

S7-1-P14 - Activity Consider the list of benefits shown on screen. These are all expected benefits of which type of testing tool? S7-1-P15 - Tool Support for Test Execution and Logging The main promise of execution and logging tools is automated regression testing. Regression testing is usually regarded as essential by testers, but is also repetitive, boring and error-prone. Consequently, these tools, coupled with a test management or test execution management tool, are usually the first tools acquired. A particular benefit is that the tool can be used to record a test which can subsequently be reused many times, saving testers the laborious and soul-destroying task of running the same tests again and again. These tools can be viewed as specialist software development environments. An automated test script is really a small piece of software with a particular application: running software tests. These tools provide a script development environment in which scripts are written in a scripting language. These languages are often very similar to, or actually based on, languages such as Visual Basic, C, C++ or, in the mainframe world, Rexx. To make a script reusable, it is often necessary to customise the script to run repeatedly using externally prepared test data and expected results. Most of the tools have sophisticated facilities to read data from prepared spreadsheets, databases or text files. In this way, the process of script creation can be separated from test case design and preparation; a small sketch of this data-driven style follows below. Generally these tools include dynamic comparison features and provide a test log for each test run. Test execution tools can also be used to record tests, when they may be referred to as capture playback tools. Capturing test inputs during exploratory testing or unscripted testing can be useful in order to reproduce and/or document a test, for example, if a failure occurs.
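Here is the promised sketch of the data-driven style in Python: a generic script reads inputs and expected results prepared externally (a CSV standing in for a spreadsheet), performs a dynamic comparison, and logs a verdict for each row. The file layout and the add function under test are assumptions for illustration.

    import csv
    import io

    def add(a: int, b: int) -> int:
        """Stand-in for the system under test."""
        return a + b

    # Externally prepared test data: inputs and expected results,
    # as might be exported from a spreadsheet.
    test_data = io.StringIO("""a,b,expected
    1,2,3
    10,-4,6
    0,0,1
    """)

    for row in csv.DictReader(test_data):
        actual = add(int(row["a"]), int(row["b"]))
        expected = int(row["expected"])
        verdict = "PASS" if actual == expected else "FAIL"
        # A simple test log entry with a dynamic comparison of actual vs expected.
        print(f"{verdict}: add({row['a']}, {row['b']}) = {actual}, expected {expected}")

The third data row is deliberately wrong, so the log records one failure. A real tool's log might also capture timestamps, environment details and output files, for exactly the diagnostic reasons discussed under incident management.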

S7-1-P16 - Execution: Test Harness/Unit Test Framework Tools A test harness may facilitate the testing of components or part of a system by simulating the environment in which that test object will run. This may be done either because other components of that environment are not yet available and are replaced by stubs and/or drivers, or simply to provide a predictable and controllable environment in which any faults can be localised to the object under test. A framework may be created where part of the code (an object, method or function, unit or component) can be executed, by calling the object to be tested and/or giving feedback to that object. It can do this by providing artificial means of supplying input to the test object, and/or by supplying stubs to take output from the object, in place of the real output targets. Test harness tools can also be used to provide an execution framework in middleware, where languages, operating systems or hardware must be tested together. Test harness/unit test framework tools are typically used by developers. S7-1-P17 - Execution: Test Comparators Test comparators or file comparison tools are most useful when comparing the outputs of batch programs or, alternatively, the contents of databases, to ensure that expected results have been met. The expected results are usually regression test output files. Typically, the output from batch programs is vast and it is impractical to compare files manually. File comparison tools compare everything in files, down to the last byte. Enhancements to the basic tools can include facilities to mask columns of data or to selectively include or exclude rows of data for detailed comparison. These tools are most often used to help with automated regression testing. Test comparators can determine differences between files, databases or test results. Test execution tools typically include dynamic comparators, but post-execution comparison may be done by a separate comparison tool. A test comparator may use a test oracle, especially if it is automated. S7-1-P18 - Execution: Coverage Measurement Tools Coverage measurement or coverage analysis tools are essential to get an independent view of whether we have done enough testing to cover the internals of the software. A tool which can analyse the coverage of, perhaps, lines of code within the software itself will give us an insight into whether our tests are exercising all the areas of the software and whether there are gaps in our testing. These tools show how thoroughly the measured type of structure has been exercised by a set of tests. These tools remove the subjectivity of testers or business users when assessing the quantity of testing. The principle is that an objective coverage target is set, based on the structure of the code, and the coverage analysis tool is used to find out how much coverage has been achieved. This removes the dependency on scarce expert knowledge and guesswork. S7-1-P19 - Coverage Measurement Tools (2) Code coverage tools measure the percentage of specific types of code structure that have been exercised: for example, statements, branches or decisions, and module or function calls. Statement coverage is the simplest coverage measure we can use. Statement coverage implies that we have exercised every source code statement in the software. Decision coverage, which describes the decision outcomes, or the branches that the code takes in its decision-making, is another common target. Branch coverage implies we have exercised all the possible outcomes of all the decisions within the software itself. Essentially, program code is 'instrumented' by a tool before it is compiled and executed in a test. The instrumentation logs the paths taken through the code, but does not interfere with the flow of execution of that code. Of course, it will slow down the execution somewhat. Coverage measurement tools are typically used by developers. S7-1-P20 - Execution: Security Tools Security testing tools are used to test the functions that detect security threats, and are most commonly used to test e-commerce, e-business and websites. Some typical security tools include virus scanners, network and port scanners, and monitoring tools. Virus scanners scan disks and files for suspicious code with 'profiles' that match common viruses, and remove it from 'infected' systems. They work inside the system. Network and port scanners scan systems from the outside. The tools scan the network connections and ports on computers, looking for ports that are open and ports with services running on them, such as web, FTP, telnet and so on. Usually, redundant ports and services should be shut down to avoid attack. Monitoring tools monitor critical features of systems from both the inside and the outside. Essentially, they do the same job as scanners, but run continuously. S7-1-P21 - Performance: Dynamic Analysis Tools Dynamic analysis tools allow developers and programmers to look inside code while it is executing. They work by taking control of the software as it executes and can be configured to detect problems. Dynamically detectable faults are those faults that can only be detected when the software is running. Typical faults include:

Unassigned pointers and pointer arithmetic (to find memory violations)

Memory leaks, found by monitoring the allocation, use and de-allocation of memory

A memory leak occurs when software 'borrows' memory but never gives it back. Eventually the system runs out of memory and the application (or the operating system itself) crashes

Non-fatal system faults

Other statically undetectable faults.

Dynamic analysis tools are typically used by developers. S7-1-P22 - Performance: Performance Testing Toolkit Performance testing normally requires a range of tools and utilities to achieve fully automated testing. For most testers, their toolkit will consist of some proprietary tools, such as load generators and application test runners, and built-in utilities, such as server, database and network monitors and middleware logs. Their toolkit may also contain in-house-written application instrumentation or databases to capture and analyse results. Performance testing tools monitor and report on how a system behaves under a variety of simulated usage conditions. They simulate a load on an application, a database, or a system environment such as a network or server. The tools are often named after the aspect of performance that they measure, such as load or stress, so are also known as load testing tools or stress testing tools. They are often based on automated repetitive execution of tests, controlled by parameters; a small load-generation sketch follows below. S7-1-P23 - Performance: Monitoring Tools Monitoring tools are not strictly testing tools, but provide information that can be used for testing purposes and which is not available by other means. Monitoring tools continuously analyse, verify and report on usage of specific system resources, and give warnings of possible service problems. They store information about the version and build of the software and testware, and enable traceability.
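As promised above, here is a flavour of what a simple load-generation utility does, sketched in Python: it fires a fixed number of transactions at a stand-in for the system under test from a pool of concurrent users, then reports response times. The user count, transaction count and the fake_transaction function are illustrative parameters, not any real tool's interface.

    import random
    import time
    from concurrent.futures import ThreadPoolExecutor

    def fake_transaction() -> float:
        """Stand-in for one transaction against the system under test."""
        start = time.perf_counter()
        time.sleep(random.uniform(0.01, 0.05))   # simulate server processing time
        return time.perf_counter() - start

    # Parameters controlling the simulated load.
    CONCURRENT_USERS = 10
    TRANSACTIONS = 100

    with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
        timings = list(pool.map(lambda _: fake_transaction(), range(TRANSACTIONS)))

    timings.sort()
    print(f"transactions: {len(timings)}")
    print(f"mean response: {sum(timings) / len(timings) * 1000:.1f} ms")
    print(f"95th percentile: {timings[int(len(timings) * 0.95)] * 1000:.1f} ms")

In a real performance test the transactions would be network requests against the application, and the monitors described above would record server, database and network behaviour while the load runs.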

S7-1-P24 - Activity Which of the following pairs of test tools are likely to be most useful during the test analysis and design stage of the fundamental test process? S7-1-P25 - Activity Which of the following tools would typically be used by a tester, and which would typically be used by a developer? Drag the following tools to their correct location. Test management tools, test design tools, test data preparation tools, test execution tools, test comparators, security tools and performance/load/stress testing tools would typically be used by testers, and static analysis tools, modelling tools, test harness/unit testing framework tools, coverage measurement tools and dynamic analysis tools would typically be used by developers. S7-1-P26 - Tool Support for Specific Application Areas Any of the tools we have covered so far can be specialised for use in a particular type of application or technology. For example, there are performance testing tools specifically for web-based applications, static analysis tools for specific development platforms, and dynamic analysis tools specifically for testing security aspects. Commercial tool suites may target specific application areas such as embedded systems. Application-specific tools and extensions to proprietary tools that support SAP, Oracle, Peoplesoft and so on are becoming more common. Data quality assessment tools are used to test the quality and integrity of data, such as test data. Data is at the centre of some projects, such as data conversion/migration projects and applications like data warehouses, and its attributes can vary in terms of criticality and volume. In such contexts, tools need to be employed for data quality assessment to review and verify the data conversion and migration rules, ensuring that the processed data is correct, complete and complies with a pre-defined, context-specific standard (a small illustrative sketch appears after the list of other tools below). S7-1-P27 - Tool Support Using Other Tools In practice, testers use a large range of tools above and beyond those listed in the syllabus. Examples of other tools in use include:

Office products, such as Microsoft Word, Excel, and so on

Developer tools such as Visual Basic

Scripting tools such as Perl, Tcl or PL/SQL to create traffic-generation or data manipulation utilities

Environment support tools such as backup/restore, SQL, network and server management and monitoring tools.
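Returning to the data quality assessment idea mentioned above, this sketch runs simple completeness and correctness checks over hypothetical migrated records. The record layout and the rules are invented for illustration; on a real migration project such rules would be derived from the documented conversion specification.

    import re

    # Hypothetical migrated records to be verified after a data conversion.
    migrated = [
        {"id": 1, "email": "alice@example.com", "balance": 120.0},
        {"id": 2, "email": "", "balance": 55.5},                # incomplete
        {"id": 3, "email": "not-an-email", "balance": -10.0},   # invalid
    ]

    EMAIL_RULE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

    def check(record: dict) -> list[str]:
        """Return a list of rule violations for one record."""
        problems = []
        if not record["email"]:
            problems.append("missing email (completeness)")
        elif not EMAIL_RULE.match(record["email"]):
            problems.append("malformed email (correctness)")
        if record["balance"] < 0:
            problems.append("negative balance (correctness)")
        return problems

    for record in migrated:
        for problem in check(record):
            print(f"record {record['id']}: {problem}")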

S7-1-P28 - Summary This concludes Section 7.1 entitled ‘Types of Test Tools’. We saw how test tools are classified in the syllabus according to the testing activities which they support. In this section we looked at:

Management of testing and tests

Static testing

Test Specification

Test execution and logging

Performance Testing and monitoring and

Application-specific variations. Module 7 Section 2 – Effective Use of Tools: Potential Benefits and Risks S7-2-P1 - Objectives Welcome to Section 7.2. In this section we'll look at the potential benefits and risks of test automation and tool support for testing. Specifically, we will:

Outline some of the potential benefits of using tools in the testing process

Describe some of the potential risks

Finally we will highlight some important considerations when selecting test execution tools.

S7-2-P2 - Potential Benefits of Tool Support for Testing Simply purchasing or leasing a tool does not guarantee success with that tool. Each type of tool may require additional effort to achieve real and lasting benefits. Potential benefits of using tools include a reduction in repetitive work: for example, running regression tests, re-entering the same test data and checking against coding standards are all highly repetitive activities. Greater consistency and repeatability is another potential benefit; typical examples include tests executed by a tool and tests derived from requirements. Testers can also benefit from objective assessment, for example of static measures, coverage and system behaviour. Easy access to information about tests or testing is another clear benefit; examples include statistics and graphs about test progress, incident rates, performance and so on. S7-2-P3 - Risk of Tool Support for Testing As we have said, there are many potential benefits to be gained from the use of tools in testing, but equally there are also risks. Let's look at just a few of those here:

Firstly, there might be unrealistic expectations about the tool, including its functionality and ease of use

Underestimating the time, cost and effort for the initial introduction of a tool, including training and external expertise, can present a risk

Underestimating the time and effort needed to achieve significant and continuing benefits from the tool (including the need for changes in the testing process and continuous improvement of the way the tool is used)

Underestimating the effort required to maintain the test assets generated by the tool

Over-reliance on the tool. S7-2-P4 - Considerations for Test Execution Tools Test execution tools have become very common. However, they are frequently used once and once only, ending up on a shelf: in other words, shelfware. Before selecting or implementing such a tool, you might like to consider some or all of the following. Test execution tools often require significant effort in order to achieve benefits. Capturing tests by recording the actions of a manual tester seems attractive, but this approach does not scale well to large numbers of automated tests. A captured script is a linear representation with specific data and actions as part of each script. This type of script is "unstable" when unexpected events occur, and it is very hard to make such scripts flexible enough to cope with unexpected failures. A data-driven approach separates out the test inputs (the data), usually into a spreadsheet, and uses a more generic script that can read the test data and perform the same test with different data. Testers who are not familiar with the scripting language can enter test data for these predefined scripts. In a keyword-driven approach, the spreadsheet contains keywords describing the actions to be taken (also called action words), as well as test data. Testers (even if they are not familiar with the scripting language) can then define tests using the keywords, which can be tailored to the application being tested; a small sketch of this approach follows below. Technical expertise in the scripting language is needed for all approaches (either by testers or by specialists in test automation). S7-2-P5 - Considerations for Test Execution Tools (2) It's important to bear in mind that, if you capture a test using the tool's recording facility, you will have to execute this first test manually. As such, no time is saved first time round.
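Here is the promised sketch of a keyword-driven runner in Python: each row pairs an action word with its test data, and a dispatcher maps keywords onto the script functions that implement them. The keywords, the table and the functions are all invented for illustration.

    # Keyword table: (action word, test data), as might be read from a spreadsheet.
    test_rows = [
        ("open_account", {"name": "Alice"}),
        ("deposit", {"name": "Alice", "amount": 100}),
        ("check_balance", {"name": "Alice", "expected": 100}),
    ]

    accounts: dict[str, int] = {}

    def open_account(name: str) -> None:
        accounts[name] = 0

    def deposit(name: str, amount: int) -> None:
        accounts[name] += amount

    def check_balance(name: str, expected: int) -> None:
        actual = accounts[name]
        print("PASS" if actual == expected else f"FAIL: got {actual}, expected {expected}")

    # The dispatcher maps each action word onto its implementing script function,
    # so testers can compose tests from keywords without writing script code.
    KEYWORDS = {"open_account": open_account, "deposit": deposit,
                "check_balance": check_balance}

    for action, data in test_rows:
        KEYWORDS[action](**data)

In a real tool the rows would come from a spreadsheet and the keyword implementations would drive the application under test, but the division of labour is the same: testers compose the table, while automation specialists maintain the keyword scripts.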

Test execution tools are incredibly clever at identifying and tracking GUI screen objects. Although the tools themselves are object-oriented, there are still challenges in script maintenance in environments where the user interface changes. If the UI changes daily, the cost of automated script maintenance may make the tool prohibitively expensive. When a tool is driving transactions through a system, it may look as if the system under test is being given a hard time. But if the scripts are following the same well-trodden paths through the code, paths which are known to be stable, do such tests provide any confidence? Probably not. Test execution tools can also be difficult to implement with an undisciplined test process. Testers who are not systematic, who don't design tests before implementing them, or who depend on the expertise of other testers to execute tests reliably, will fail if they take on automated test running tools. Remember 'a fool with a tool is still a fool'! An uncontrolled test environment, or buggy or unstable software, can also present implementation problems. Tools lack the testers' ability to adapt: a tool will stop and log a fault every time it recognises a difference between an actual result and an expected result. Without a significant amount of test script programming effort, tools are extremely inflexible robots that have no intelligence. S7-2-P6 - Considerations for Other Types of Tools Here we will discuss the considerations when choosing other types of test tools. Users of performance testing tools need expertise in how to design, prepare and execute the tests and interpret the results. Beyond that, programming and database skills are usually also required. Static analysis tools, when applied to source code, can enforce coding standards, but if applied to existing code they may generate a lot of messages. Warning messages do not stop the code being translated into an executable program, but should ideally be addressed so that maintenance of the code is easier in the future.

A gradual implementation with initial filters to exclude some messages would be an effective approach. Test management tools need to interface with other tools or spreadsheets in order to produce information in the best format for the current needs of the organisation. Most tools provide a huge range of reporting capability. These reports need to be designed for specific purposes to ensure they are meeting the needs of projects. S7-2-P7 - Summary This concludes Section 7.2 on the subject of effective use of tools. In this section we:

Outlined some of the potential benefits of using tools in the testing process

Described some of the potential risks

We went on to highlight some important considerations when selecting test execution tools

Finally, we took a brief look at other types of tools, including

Performance Testing Tools

Static Analysis Tools

Test Management Tools. Module 7 Section 3 – Introducing a Tool into an Organisation S7-3-P1 - Objectives In this final section we look briefly at the main principles of introducing a software testing tool into an organisation. We'll summarise the goals of a proof-of-concept or piloting phase for tool evaluation. There are factors other than simply acquiring the right tool to consider when we want to provide good tool support and we will look at some of these here. More specifically we will:

Look at tool selection considerations including which activities to automate, skills identification and so on

Examine the tool implementation process and identify some of the critical success factors and key roles required in order to gain a successful outcome

Consider the use of a pilot project, how to evaluate the outcomes and consider some of the benefits this approach can bring

Look at the planned phased installation of a tool and the activities associated with it

Finally we will look at some of the keys to successful implementation.

S7-3-P2 - Principles There are a couple of important principles to follow when introducing a tool into an organisation. These are:

Carry out an appropriate assessment of the current environment

Understand the balance between tool evaluation and the need for clear requirements and objective criteria.

Before a tool is selected, it is really important to understand the organisation's maturity, strengths and weaknesses. Through this understanding, opportunities can be identified for an improved test process, which in turn will be supported by tools. Tool evaluation will involve conducting a proof-of-concept phase to test the required functionality and determine whether the product meets its objectives. The proof-of-concept could be done in a small-scale pilot project, to minimise the impact of problems if the pilot is not successful. The organisation may also choose to evaluate the tool vendor to determine whether their training and support infrastructure are acceptable. Costs will be another important factor. Finally, look at the skills required. Identify internal requirements for coaching and mentoring in the use of the tool. Some tools require considerable training or retraining, coaching and ongoing support. S7-3-P3 - Tool Selection Considerations Once it's understood what type of tool is needed, the organisation must then decide which features will be required to support current test processes. At this point, it may be clear that some of the tool features are not required. Don't let these become a distraction from what is most important at this stage: the areas where automated support is essential. You also need to consider the environment where the testing tool will be used. For example, a mainframe tool is no use if you only have PCs. The skills of the people using the tool also need to be taken into account. If a test execution tool requires programming skills to write test scripts, it obviously would not be appropriate for use by end-users. These considerations are critical to the successful use of the tool. If the wrong tool is selected, whether it is the wrong tool for the job, the environment or the users, the benefits will not be achieved. S7-3-P4 - Tool Implementation Process The person appointed to manage the implementation of the testing tool within the organisation is critical to its successful use. Even the best tool in the world will not succeed without an enthusiastic, yet diplomatic, ‘champion’ or ‘change agent’. The champion may be a different person from the change agent, or there may be only one person with both roles. The champion's role is to provide visible support for the tool within the organisation, and may be played by a high-level manager. The change agent is in charge of the day-to-day progress of the tool uptake while it is being phased into the working practices of the organisation. This may or may not be a full-time job. The typical change agent will have been involved in the tool selection process, and is often the same person who was in charge of tool selection. The qualities needed by the change agent (adapted from Barbara Bouldin's Agents of Change) are:

Recent and extensive experience in testing

Progressive rather than reactionary personality

Practical and business orientation, as well as technical background

Highly developed analytical skills. S7-3-P5 - Tool Implementation Process (2) Another role is that of ‘tool custodian’ or ‘technical visionary’; he or she is responsible for technical tool support, implementing upgrades from the vendor and providing internal help or consultancy in the use of the tool. The importance of this role for Computer Aided Software Testing, or ‘CAST’, must not be underestimated. The technical problems which arise are different from those encountered in other areas of programming and software development. A technician with a vision of how test automation works, probably based on past experience or some intensive training, will be key to preventing others from abandoning the tool before it has been given a reasonable chance to deliver benefits. The team which selected the tool may also be the team which helps to implement it. Ideally it should include people from all the different parts of the organisation that would be expected to use the tool. It makes sense to implement tools one at a time. It takes time for the effect of a tool to be appreciated, and for the tool to be used, and then used effectively. The tool will also change the way testing is carried out, and may influence the choice of what to automate next. S7-3-P6 - Tool Implementation Process (3) Gaining ongoing management commitment is another key factor. We can assume a degree of support because the tool selection process will have been authorised. However, many people fail to recognise that the cost of the tool is more than the cost of the tool. This paradox is true because the tool purchase price is only a small component of the cost of implementing any tool (or indeed a new technique or method) within an organisation. This is why management commitment is critical at this point of the tool acquisition process.

The change agent must be adequately supported by management, providing visible backing from high-level managers and adequate funding and resourcing to support the project. In order to gain management commitment, the champion or change agent needs to present the business case for the selected tool, summarise the tool selection and evaluation results, and give realistic estimates and plans for the tool implementation process. Managers also need to realise that the first thing that happens when a new tool is used for the first time is that productivity goes down, even when the tool is intended to increase productivity. Adequate time must be allowed for learning and ‘teething problems’, otherwise the tool will be abandoned at its point of least benefit and greatest cost. Advice on the best way to manage the change process within an organisation is available from consultants, and there are many books on the subject. S7-3-P7 - Tool Implementation Process (4) Once management commitment has been secured, the change agent needs to create a highly visible publicity campaign. All those who will eventually be affected need to be informed about the changes. The first step is simply to raise interest in the new tool, for example by giving internal demonstrations, sending a newsletter, or just talking to people about it. The most important publicity is from the earliest real use of the tool. The benefits gained on a small scale should be widely publicised to increase the desire and motivation to use the tool. ‘Testimonials’, particularly from converted sceptics, are often more effective than statistics. In parallel with the publicity drive, the change agent and the change management team need to carry out a significant amount of internal market research, talking to the people who are the targeted users of the tool. Find out how the different individuals would want to use the tool and whether it can meet their needs, either as it is or with some adjustments. The lines of communication set up by interviewing potential tool users can also be used to address the worries and fears about using the tool that contribute to people's resistance to change. S7-3-P8 - Pilot Project Objectives It is best to try out the tool on a small pilot project first. This ensures that any problems encountered in its use are ironed out when only a small number of people are using it. It also enables you to see how the tool will affect the way you do your testing, and to modify your existing procedures or standards to make best use of the tool. The pilot project should start by defining a business case for the use of the tool on this project, with measurable success factors. These are usually something like:

Lessons to be learned

Implementation concerns

Benefits to be gained. Finally, learn more about the tool. See how the tool would fit with existing processes and practices, and how they would need to change. Decide on standard ways of using, managing, storing and maintaining the tool and the test assets: for example, decide on naming conventions for files and tests, creating libraries and defining the modularity of test suites. Assess whether the benefits will be achieved at reasonable cost. S7-3-P9 - Evaluation of Pilot After the pilot project using the new tool is completed, the results are compared to the business case for this project. If the objectives have been met, then the tool has been successful on a small scale and can safely be scaled up. The lessons learned on the pilot project will help to make sure that the next project can gain even greater benefits. If the objectives have not been met, then either the tool is not suitable or it is not yet being used in a suitable way (assuming that the objectives were not overoptimistic). Decide why the pilot was not successful, and decide the next steps to take. Do not attempt to use the tool on a wider scale if you cannot explain why it has not succeeded on a small scale! The overheads for start-up may be much more significant on a small scale and may not have been adequately taken into account in the initial business case. It is best to proceed fairly cautiously when scaling up, and to increase tool use incrementally, one project group at a time. S7-3-P10 - Planned Phased Installation Assuming the pilot project was successful, the use of the tool in the rest of the organisation can now be planned. The success of the pilot needs to be widely publicised, internal training and in-house manuals need to be organised, and the projects to use the tool should be scheduled. The change agent and change management team can act as internal consultants to the new tool users, and can perform a very useful role in co-ordinating the growing body of knowledge about the use of the tool within the organisation. It is very important to follow through on the tool investment by ensuring that adequate training is given in its use. A tool that is not being used properly will not give the benefits which could potentially be realised. Every tool user should be properly trained by the vendor. The internal trainers will need a level of expertise in the tool similar to the vendor's trainers; this is only achieved with years of experience. They will need time to put together an internal course, which takes many times the course duration: typically 10-25 hours' preparation per course hour. The cost of training is repaid many times through expert use of the tool, and it is this which provides the return on the tool investment. S7-3-P11 - Keys to Success Let's summarise some of the key factors that influence success in the selection and implementation of a tool.

Firstly, we need to sell the concept. People will not change unless you make it clear that having the tool will make their life easier. Managers must be committed to improving quality through improved test practices. Tools are often bought to save money on regression testing. Unfortunately, many organisations that do not do regression testing have purchased tools to save money. In such cases the tool cost them more time and money, not less, BUT they found that they had more confidence in their software. Tools do not always save money, and improvements in quality can sometimes be difficult to predict or measure. Of course the tool should provide the required functionality, but it must also fit the way you work, or you will have to refine or develop your test process at the same time as implementing the tool. This might prove too difficult to achieve.

Use the pilot to identify the quick wins that will help your project gain momentum and support in the tester community

Roll out the things that work; abandon or rethink the things that didn't work

Learn lessons as you go. Make reviewing the automation part of your post-project reviews

Move skilled resources with the tool. Don't scatter the skilled team that learned, through hard work, how to make the tool work. Move some of these people to other project teams to spread the knowledge and give other projects the kick-start they need

Finally, if you are being successful, make a point of recording the quality improvements, time or cost savings. Use these to advertise the progress that is being made, to maintain momentum and retain the support of management in your organisation.

S7-3-P13 – Summary This concludes Section 7.3. In this final section we looked briefly at the main principles of introducing a software testing tool into an organisation and went on to summarise the goals of a proof-of-concept or piloting phase for tool evaluation. More specifically we:

Looked at tool selection considerations including which activities to automate, identifying current skills and so on

Examined the tool implementation process and identified some of the critical success factors and key roles required in order to gain a successful outcome

We went on to consider the use of a pilot project, how to evaluate the outcomes and consider some of the benefits this approach can bring

We looked at the planned phased installation of a tool and the activities associated with it

Finally, we looked at some of the keys to successful implementation of a software testing tool in an organisation.

SECTION 3 - Activity Answers

S1-1-P8 - Activity Remember Understand Apply Analyse S2-1-P5 – Activity The result of an error or mistake. S2-1-P12 - Activity Give an indication of the software quality. S2-1-P16 - Activity Test record. S2-1-P21 – Activity 50% Evidence suggests that making changes to software is as likely to introduce new problems as it is to fix the old ones. In fact, there's a 50/50 chance that other problems will be introduced. Let's take a look at this now. S2-2-P3 - Activity False A common perception of testing is that it only consists of running tests, for example, executing transactions in the software. This is part of testing, but not all of the testing activities of course. Test activities exist before and after test execution and include activities such as:

Planning and control

Choosing test conditions

Designing test cases

Preparing expected results

Checking actual results against expected results

Evaluating completion criteria

Reporting on the testing process and system under test

Closing the test phase. There can be different test objectives, including such things as:

Finding defects

Gaining confidence about the level of quality and providing information

Preventing defects. In acceptance testing, the main objective may be to confirm that the system works as expected, to gain confidence that it has met the requirements. S2-2-P7 - Activity Dynamic testing: Testing that involves the execution of the software of a component or system. Static testing: Testing of a component or system at specification or implementation level without execution of that software, e.g. reviews or static code analysis. S2-2-P11 – Activity Debugging Defect Testing S2-3-P5 - Activity - Principle 5 False If the same tests are repeated again and again, eventually, these test cases will no longer find new defects. This is known as the 'Pesticide Paradox'. When all the defects are removed, our tests will find no more. If we want to find more defects, we need new and different tests to be written to exercise different parts of the software or execute different program paths. S2-5-P2 - Activity – External Specifications and Baselines True. True. As a tester, we will look at a requirement or a design document and identify what we need to test, the features that we're going to have to exercise, and the behaviour that should be exhibited when running under certain conditions. For each condition that we are concerned with, we want an expected result so that we can say whether the system passes or fails the test when we run it. Usually, developers look at a design specification, and work out what must be built to deliver the required functionality.

They take a view on what the required features are. Then, they need to understand the rules that the feature must obey. Rules are normally defined as a series of conditions against which the feature must operate correctly, and exhibit the required behaviour. But what is the required behaviour? The developer infers the required behaviour from the description of the requirements and develops the program code from that. Requirements, design documents, functional specifications or program specifications are all examples of baselines. They are documents that tell us what a software system is meant to do. Often, they vary in levels of detail, technical language or scope, and they are all used by developers and testers. Baselines should provide not only all the information required to build a software system but also the information required to test it. That is, baselines provide the information for a tester to demonstrate unambiguously that a system does what is required. S2-7-P7 - Activity Test control Test design Test closure activities S2-7-P18 – Activity Comparing actual and expected results S2-7-P24 – Activity Test Control S2-7-P25 – Activity Test planning and control Test analysis and design Test implementation and execution Evaluating exit criteria and reporting Test closure activities S2-8-P4 - Activity Members of a different company Members of a different group Members of the same development team The code writer. S3-1-P4 - Activity Iterative Rapid Application S3-1-P9 – Activity Requirement Analysis Logical Design Physical Design Build

Test Implement S3-1-P15 – Activity Ad hoc Unit Integration System Acceptance S3-2-P3 - Activity Unit testing. But it is also referred to as module or program testing. S3-2-P9 - Activity A - Ad hoc B - Component S3-2-P17 - Activity Top-down testing. S3-2-P18 - Activity Mixed integration testing strategy. S3-2-P21 - Activity - Similarities True Let's take a middle-of-the-road IT application as an example. Imagine you're building a customer information system, or a help desk application, or a telesales system. Both system and acceptance testing have one aim: to demonstrate that the documented requirements have been met. The documented requirements might be the business requirements or what's in the functional specification or the technical requirements. In system and acceptance testing there's a degree of independence between the designers of the test and the developers of the software. There also needs to be a certain amount of formality because it's a team effort; system testing is never the responsibility of one individual. Part of the formality is that you run tests to a plan and you manage incidents. Another similarity is that both system and acceptance tests are usually big tests; they're usually a major activity within the project.

S3-2-P24 - Activity Acceptance Testing Users are primarily concerned with acceptance testing, whereas system testing is the primary concern of suppliers. S3-2-P29 - Activity True Remember that the user’s requirements are translated into a functional spec and eventually to a design document. Think of each translation as an interpretation. Two things may happen:

A resulting feature doesn’t deliver functionality in the way a user intended

A feature is missing. So, if you test against the design document, you will never find a missing requirement because it just won't be there. A design-based test is also strongly influenced by the system provided. If you test according to the design, the test will reflect how the system has been designed and not how it is meant to be used in production. Tests that are based on design will tend to go through features, one by one, right through the design document from end to end. The system won't be tested in the ways that users will use it, and that might not be as good a test. S3-5-P7 - Activity 1. Regression testing 2. Re-testing S3-6-P5 - Activity The risk of making the change S3-6-P7 - Activity Impact analysis S4-1-P4 - Activity Defects S4-1-P11 - Activity 1. Desk/Buddy Check 2. Walkthroughs 3. Management Reviews 4. Technical Reviews 5. Formal Inspections 6. Audits S4-2-P4 - Activity Planning

Kick-Off Individual Preparation Review Meeting Rework Follow-up. S4-2-P8 - Activity True For more formal reviews a check on exit criteria is a good idea. The agreed status of the product under review is usually one of the following: 1. ‘Accepted’ - the document is deemed correct and ready to use in subsequent project activities 2. The second possible outcome is that the document may be ‘Accepted With Qualifications’. Problems still exist in the document which have to be corrected. These corrections will be checked by the chairperson, who will then sign off the document 3. The third outcome is that the document is ‘Rejected’. Potentially, the document requires major revision and a further review. Perhaps time will be lost: the quality of the document has been assessed and it is just not ready. The risk of proceeding with a poor quality document is avoided. S4-3-P2 - Activity Program code S5-1-P6 - Activity Test case Test procedure Test condition S5-1-P7 - Activity Test condition Test procedure Test case S5-3-P3 - Activity - Equivalence Partitioning No. Every value in a partition is equivalent. This is where the technique gets its name. By doing this, our infinite number of possible values can be reduced to three values that completely cover the business rules. So, if our requirement were to exercise each aspect of the rule with only one test case, we could identify the partitions (a, b, and c) and take one value for each partition, and then we will have exercised the rule. So, rather than having an infinite number of values, we have just three, because we have three partitions and three test cases to cover them. But what is the value of this? Equivalence partitioning is only saying that, with any requirement, you can identify the rules and the partitions, and any test case that you select within a partition is equivalent to any other test case in that same partition. They're always as good as each other and they should invoke the same thing in the code. That's where this idea of equivalent partitions comes from. So, if you can identify partitions, you have a method for systematically covering all of the rules. You have a method for deriving test cases. It's not very sophisticated. It's not a strong strategy, but it is a strategy. S5-3-P10 - Activity 0.5% £0.00 - £1000.00 1% £1000.00 - £2000.00 1.5% >=£2000.01 S5-5-P7 – Activity - Exercise: Decision Table Ray – Flights available; air miles accepted Jean – No flights available Mark – Flights available; air miles accepted S5-9-P5 – Activity - Exercise: Decision Table True S6-4-P11 – Activity The program specification has been reviewed and signed off - will be suitable entry criteria as this should be signed off before programming and component testing begins. The technical specification has been reviewed and signed off - will be suitable entry criteria for integration testing so is not relevant here. The developer has tested the code in the development environment and it works - will be suitable entry criteria as the developer should have done some ad hoc testing to ensure that the code compiles and works as coded. 100% statement coverage - should be achieved during component testing so this is suitable exit criteria. 100% decision coverage - should be achieved during component testing so this is suitable exit criteria. 100% path coverage - is impossible in all but the most trivial programs so is not suitable entry or exit criteria. 100% requirements coverage - is very unlikely to be possible at component testing as many of the requirements will only be achieved once the system has been integrated. The integration testing strategy has been signed off - would be suitable entry criteria for integration testing so is not relevant here. S6-5-P13 - Activity An organisation with a well-defined test process and reasonably comprehensive documentation would be able to use this detail to create an estimate based on a Work Breakdown Structure. A new project in a new technology area would most likely have to resort to intuition or guessing, as there seems to be no detailed information to use for any of the other techniques. A project with a mature and detailed functional specification may be able to use Test Point Analysis if Function Point Analysis has been done. An organisation that lacks formal processes and a project with limited documentation but several experienced team members can use a consensus of knowledgeable people to improve estimates. An organisation with rudimentary historical data, including overall development effort and testing effort, will be able to estimate based on a percentage of development time if the development time has been estimated and the project is of a similar nature.

An organisation with a record of past activities along with actual durations can base estimates on these previous metrics. S6-9-P12 – Activity Execute test, Incident, Diagnose incident, Classify incident, Fix software, Re-test, More tests S7-1-P4 – Activity This is known as the Probe Effect. S7-1-P14 – Activity The use of static analysis tools should provide these types of benefits. S7-1-P24 – Activity (iii) and (iv) S7-1-P25 – Activity Test management tools, test design tools, test data preparation tools, test execution tools, test comparators, security tools and performance/load/stress testing tools would typically be used by testers, and static analysis tools, modelling tools, test harness/unit testing framework tools, coverage measurement tools and dynamic analysis tools would typically be used by developers.