ISTQB Notes



CHAPTER 1: FUNDAMENTALS OF TESTING

DEFINITION OF SOFTWARE TESTING:

The process consisting of all life cycle activities, both static and dynamic, concerned with planning, preparation and evaluation of software products and related work products, to determine that they satisfy specified requirements, to demonstrate that they are fit for purpose and to detect defects.

TESTING PRINCIPLES:

Testing is context dependent
Exhaustive testing is impossible
Early testing
Defect clustering
Pesticide paradox
Testing shows the presence of defects
Absence-of-errors fallacy

CAUSES OF SOFTWARE DEFECTS

Error: A human action that produces an incorrect result.

Defect (bug, fault): A flaw in a component or system that can cause the component or system to fail to perform its required function. A defect, if encountered during execution, may cause a failure of the component or system.

Failure: Deviation of the component or system from its expected delivery, service or result.

DO OUR MISTAKES MATTER?

Defects in software, systems or documents may result in failures, but not all defects cause failures.


It is not just defects that give rise to failures. Failures can also be caused by:

Environmental conditions, e.g. a radiation burst
Human error in interacting with the software, e.g. a wrong input being entered or an output being misinterpreted

Malicious damage: someone deliberately trying to cause a failure in a system.

When we think about what might go wrong we have to consider defects and failures arising from:

Errors in specification, design and implementation of the software and system
Errors in use of the system
Environmental conditions
Intentional damage
Potential consequences of earlier errors, intentional damage, defects and failures

What is the Cost of defects?

The cost of finding and fixing defects rises considerably across the life cycle.

If an error is made and the consequent defect is detected in the requirements at the specification stage, it is relatively cheap to find and fix: the specification can be corrected and re-issued.
If the defect is detected in the design stage, the design can be corrected and re-issued with relatively little expense.
If the defect is introduced in the requirements specification and is not detected until acceptance testing, or even once the system has been implemented, it will be much more expensive to fix.

Testing and Quality

Testing can give confidence in the quality of the software if it finds few or no defects.


Testing helps us measure the quality of the software in terms of the number of defects found, the tests run and the parts of the system covered by the tests.

Quality: The degree to which a component, system or process meets specified requirements or user/customer needs and expectations.

Validation: Is this the right specification? Verification: Is the system correct to the specification?

How much testing is enough? (Test principle – Exhaustive testing is impossible)

Instead of exhaustive testing, we use risks and priorities to focus testing efforts.
Pressures on a project include time and budget, as well as pressure to deliver a technical solution that meets customer needs.

The customer and the project manager will want to spend an amount on testing that provides a return on investment for them.

The return on investment comes from testing preventing failures after release that would be costly.

Assessing and managing risk is one of the most important activities in any project.

How much testing is enough depends on the level of risk: technical and business risks related to the product, and risks related to the project.

Detecting defects helps us understand the risks associated with putting the software into operation.

Fixing defects improves the quality of the product. Identifying defects has another benefit: it helps improve the development process, so that fewer mistakes are made in future work.


When do we meet our test objectives? (Test principle – Early testing)

Finding defects
Gaining confidence in, and providing information about, the level of quality
Preventing defects

Benefits of early testing:
Early test design and review activities find defects early on, when they are cheap to find and fix.

Focusing on defects can help us plan our tests (Testing principle – Defect clustering)

The main focus of reviews and other static tests is to carry out testing as early as possible, finding and fixing defects more cheaply and preventing defects from appearing at later stages of the project. These activities help us find out about defects earlier and identify potential clusters.

Debugging: The process of finding, analyzing and removing the causes of failures in software.

TEST PLANNING

Determine the scope and risks and identify the objectives of testing
Determine the test approach (techniques, test items, coverage, testware)
Implement the test policy and the test strategy
Determine the required resources
Schedule test analysis and design tasks, test implementation, execution and evaluation
Determine the exit criteria

TEST CONTROL
Measure and analyze the results of reviews and testing
Monitor and document progress, test coverage and exit criteria
Provide information on testing
Initiate corrective actions
Make decisions

TEST ANALYSIS AND DESIGN


Review the test basis
Identify test conditions based on analysis of the test items and their specifications
Design the tests
Evaluate the testability of the requirements and system
Design the test environment set-up and identify any required infrastructure and tools

TEST IMPLEMENTATION AND EXECUTION

IMPLEMENTATION:
Develop and prioritize the test cases
Create test suites from the test cases for efficient test execution

EXECUTION:
Execute the test suites and individual test cases
Log the outcome of test execution and record the identities and versions of the software under test, test tools and testware
Compare the actual results with the expected results (a sketch follows below)
Repeat test activities as a result of action taken for each discrepancy
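For illustration, a minimal sketch (not from the syllabus) of the execution steps above: running test cases, comparing actual results with expected results and logging the outcome of each. The function under test (add_vat), the VAT rate and the test data are made-up assumptions.

```python
# Sketch of executing test cases, comparing actual vs expected and logging outcomes.

def add_vat(net_amount, rate=0.20):
    """Hypothetical function under test: adds VAT to a net amount."""
    return round(net_amount * (1 + rate), 2)

# Each tuple is (test case id, input, expected result) taken from the test design.
test_cases = [
    ("TC-01", 100.00, 120.00),
    ("TC-02", 0.00, 0.00),
    ("TC-03", 19.99, 23.99),
]

for case_id, net, expected in test_cases:
    actual = add_vat(net)
    outcome = "PASS" if actual == expected else "FAIL"
    # Log the identity of the test case with actual vs expected, so any
    # discrepancy can be raised as an incident report.
    print(f"{case_id}: expected={expected} actual={actual} -> {outcome}")
```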

EVALUATING EXIT CRITERIA AND REPORTING
Check test logs against the exit criteria specified in test planning
Assess if more tests are needed or if the exit criteria specified should be changed
Write a test summary report for stakeholders

TEST CLOSURE ACTIVITIES
Check which planned deliverables were actually delivered and ensure all incident reports have been resolved through defect repair or deferral
Finalize and archive testware, such as scripts, the test environment and infrastructure
Hand over testware to the maintenance team
Evaluate the testing and analyze lessons learned for future projects

PSYCHOLOGY OF TESTING
We need to be careful when we are reviewing and when we are testing.

Communicate findings on the product in a neutral, fact-focused way, without criticizing the person who created it.

Explain that by knowing about this now we can work around it or fix it, so that the delivered system is better for the customer.
Start with collaboration rather than battles; remind everyone of the common goal of a better quality system.


CHAPTER 2: TESTING THROUGHOUT THE SOFTWARE LIFE CYCLE

In every development life cycle, part of the testing is focused on VERIFICATION testing and part on VALIDATION testing.

VERIFICATION: To determine whether the deliverable meets the requirements. Is the deliverable built according to the specification?

VALIDATION: To determine whether the deliverable meets user needs. Is the deliverable fit for purpose?


V-MODEL
The waterfall model was one of the earliest models to be designed. It has a natural timeline in which tasks are executed in a sequential fashion. Its drawbacks are that it is difficult to get feedback passed backwards up the waterfall, and there are difficulties if we need to carry out numerous iterations for a particular phase.

The V-Model was developed to address the problems experienced with the traditional waterfall approach. The V-Model provides guidance that testing needs to begin as early as possible in the life cycle.

The typical V-Model uses four test levels:

Component testing
Integration testing
System testing
Acceptance testing

Iterative life cycles
A common feature of iterative approaches is that the delivery is divided into increments or builds, with each increment adding new functionality.

The initial increment will contain the infrastructure required to support the initial build of functionality.

The increment produced by an iteration may be tested at several levels as part of its development.

Testing within a life cycle model
In summary, whichever life cycle model is being used, there are several characteristics of good testing:

For every development activity there is a corresponding testing activity
Each test level has test objectives specific to that level
The analysis and design of tests for a given test level should begin during the corresponding development activity


Testers should be involved in reviewing documents as soon as drafts are available in the development cycle.

Test Levels

Component Testing: Also known as unit, module or program testing. It searches for defects in, and verifies the functioning of, software components (e.g. modules, programs, objects, classes) that are separately testable. Component testing may include testing of functionality and specific non-functional characteristics, such as resource behavior (e.g. memory leaks), performance or robustness testing, as well as structural testing.

One approach in component testing, used in Extreme Programming (XP), is to prepare and automate test cases before coding. This is called a test-first approach or test-driven development.
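A minimal sketch of the test-first idea, using Python's unittest: the tests are written (and fail) before leap_year() exists, and then drive the implementation. The function name and the leap-year rules are illustrative assumptions, not anything prescribed by the syllabus.

```python
# Sketch of test-driven development at component (unit) test level.
import unittest

def leap_year(year):
    # Implementation added only after the tests below were in place and failing.
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

class LeapYearTests(unittest.TestCase):
    def test_divisible_by_4_is_leap(self):
        self.assertTrue(leap_year(1996))

    def test_century_not_divisible_by_400_is_not_leap(self):
        self.assertFalse(leap_year(1900))

    def test_divisible_by_400_is_leap(self):
        self.assertTrue(leap_year(2000))

if __name__ == "__main__":
    unittest.main()
```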

Integration Testing: Integration testing tests interfaces between components, interactions with different parts of a system (such as the operating system, file system and hardware), and interfaces between systems.

There may be more than one level of integration testing, and it may be carried out on test objects of varying size.

Component integration testing tests the interactions between software components and is done after component testing.

System integration testing tests the interactions between different systems and may be done after system testing.

‘Big-Bang’ Integration Testing
One extreme is that all components or systems are integrated simultaneously, after which everything is tested as a whole. Big-bang testing has the advantage that everything is finished before integration testing starts. Its disadvantage is that it is, in general, time-consuming and makes it difficult to trace the cause of failures.

Different approaches to integration (see the sketch after this list):

Top-down approach


Bottom-up approach
Functional incremental approach
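As referenced above, a minimal sketch of top-down component integration: the higher-level component is integrated and tested first, with the not-yet-available lower-level component replaced by a stub, so the interface between them can be exercised. All names (OrderService, PaymentStub, charge, place_order) are hypothetical, not a prescribed design.

```python
# Sketch of top-down component integration testing with a stub.
class PaymentStub:
    """Stands in for the real payment component until it is integrated."""
    def charge(self, amount):
        return "approved"  # canned response instead of a real payment call

class OrderService:
    def __init__(self, payment):
        self.payment = payment  # the interface under integration test

    def place_order(self, amount):
        return self.payment.charge(amount) == "approved"

# Exercise the interface between OrderService and the (stubbed) payment component.
service = OrderService(PaymentStub())
assert service.place_order(50) is True
print("component integration test passed")
```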

System Testing: System testing is concerned with the behavior of the whole system/product as defined by the scope of a development project or product. System testing should investigate both functional and non-functional requirements of the system.

System testing requires a controlled test environment with regard to, amongst other things, control of software versions, testware and test data.

Acceptance Testing: The goal of acceptance testing is to establish confidence in the system. Acceptance testing focuses on a validation type of testing, determining whether the system is fit for purpose. Finding defects should not be the main focus of acceptance testing.

Acceptance testing may occur at more than just a single level.

A Commercial Off-The-Shelf (COTS) software product may be acceptance tested when it is installed or integrated.
Acceptance testing of the usability of a component may be done during component testing.
Acceptance testing of a new functional enhancement may come before system testing.

Different types of acceptance testing:
Operational acceptance test (testing of backup/restore, disaster recovery)
Compliance acceptance test (testing performed against regulations, such as legal or safety regulations)
Contract acceptance test (performed against a contract’s acceptance criteria for producing custom-developed software)


Two stages of acceptance testing:
Alpha testing: Tests take place at the developer’s site.
Beta testing: Tests take place at the customer’s site (under real-world working conditions).

Test Types:

Testing of function: Functional testing considers the specified behavior and is often referred to as black-box testing.

Functional testing can, based upon ISO 9126, be done focusing on suitability, interoperability, security, accuracy and compliance.

Testing functionality can be done from two perspectives:
Requirements-based testing uses a specification of the functional requirements for the system as the basis for designing tests (see the sketch after this list).
Business-process-based testing uses knowledge of the business processes, which describe the scenarios involved in the day-to-day business use of the system. Use cases are a very useful basis for test cases from a business perspective.
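A minimal sketch of requirements-based (black-box) test design, as referenced above: the test cases are derived purely from a stated requirement, with no knowledge of the implementation. The requirement assumed here, purely for illustration, is "a withdrawal must be rejected if it exceeds the available balance"; the withdraw() function is likewise made up.

```python
# Sketch of requirements-based (black-box) test cases.
def withdraw(balance, amount):
    """Hypothetical function under test."""
    return "rejected" if amount > balance else "accepted"

assert withdraw(balance=100, amount=50) == "accepted"    # within the balance
assert withdraw(balance=100, amount=100) == "accepted"   # boundary: exactly the balance
assert withdraw(balance=100, amount=101) == "rejected"   # exceeds the balance
print("requirements-based tests passed")
```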

Testing of software product characteristics (non-functional testing)
Non-functional testing includes performance testing, load testing, stress testing, usability testing, maintainability testing, reliability testing and portability testing. (A small sketch of a time-behavior check follows the ISO 9126 list below.)

The ISO 9126 standard defines six quality characteristics and their sub-characteristics:

Reliability: maturity (robustness), fault-tolerance, recoverability and compliance
Usability: understandability, learnability, operability, attractiveness and compliance
Efficiency: time behavior, resource utilization and compliance


Maintainability: analyzability, changeability, stability, testability and compliance
Portability: adaptability, installability, co-existence, replaceability and compliance
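As referenced above, a minimal sketch of a time-behavior (efficiency) check. The 0.5 second response-time budget and the search_catalogue() function are illustrative assumptions only, not values from the syllabus.

```python
# Sketch of a simple time-behavior (efficiency) check against a budget.
import time

def search_catalogue(term):
    # Placeholder for the operation whose time behavior is being measured.
    return [item for item in ["book", "pen", "lamp"] if term in item]

start = time.perf_counter()
search_catalogue("book")
elapsed = time.perf_counter() - start

# The check fails if the measured time exceeds the agreed performance budget.
assert elapsed < 0.5, f"time behavior outside budget: {elapsed:.3f}s"
print(f"response time {elapsed:.4f}s is within the 0.5s budget")
```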

Testing software structure/architecture (structural testing)

Structural testing is often referred to as ‘white-box’ or ‘glass-box’ testing because we are interested in what is happening ‘inside the box’.

Structural testing is most often used as a way of measuring the thoroughness of testing through the coverage of a set of structural elements or coverage items.
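A minimal sketch of what coverage of structural elements means in practice: the grade() function below (a made-up example) contains one decision, so 100% decision coverage needs at least one test where the condition is true and one where it is false.

```python
# Sketch of decision (branch) coverage as a structural coverage item.
def grade(score):
    if score >= 50:      # decision: true branch
        return "pass"
    return "fail"        # decision: false branch

assert grade(75) == "pass"   # exercises the true branch
assert grade(10) == "fail"   # exercises the false branch
print("both branches exercised: 100% decision coverage of grade()")
```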

Testing related to changes:

Confirmation testing (re-testing): When a test fails and we determine that the cause of the failure is a software defect, the defect is reported and we can expect a new version of the software in which the defect has been fixed. We will need to execute the test again to confirm that the defect has indeed been fixed. This is known as confirmation testing.

Regression testing: Testing of a previously tested program following modification, to ensure that defects have not been introduced or uncovered in unchanged areas of the software as a result of the changes made. It is performed when the software or its environment is changed.
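A minimal sketch of a small regression suite: after format_name() is modified, the previously passing tests are re-executed unchanged to show that behavior in unchanged areas has not been affected. The function and expected values are illustrative assumptions.

```python
# Sketch of re-running an unchanged suite of tests as regression testing.
def format_name(first, last):
    return f"{last.upper()}, {first.capitalize()}"

def run_regression_suite():
    # These tests passed against the previous version and are re-executed
    # after every modification to the software or its environment.
    assert format_name("ada", "lovelace") == "LOVELACE, Ada"
    assert format_name("alan", "turing") == "TURING, Alan"
    print("regression suite passed: no side effects in unchanged areas")

run_regression_suite()
```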

Maintenance Testing: Performed when a software product is modified after delivery to correct defects, to improve performance or other attributes, or to adapt the product to a modified environment.

Impact analysis and regression testing: Usually maintenance testing will consist of two parts:

Testing the changes
Regression tests to show that the rest of the system has not been affected by the maintenance work
