Exploratory Testing Explained

DESCRIPTION

Exploratory testing is an approach to testing that emphasizes the freedom and responsibility of testers to continually optimize the value of their work. It is the process of three mutually supportive activities—learning, test design, and test execution—done in parallel. With skill and practice, exploratory testers typically uncover an order of magnitude more problems than when the same amount of effort is spent on procedurally scripted testing. All testers conduct exploratory testing in one way or another, but few know how to do it systematically to obtain the greatest benefits. Even fewer can articulate the process. Paul Holland looks at specific heuristics and techniques of exploratory testing that will help you get the most from this highly productive approach. Paul focuses on the skills and dynamics of exploratory testing, and how it can be combined with scripted approaches.

TRANSCRIPT

Page 1: Exploratory Testing Explained

MJ Half-day Tutorials

5/5/2014 8:30:00 AM

Exploratory Testing Explained

Presented by:

Paul Holland

Testing Thoughts

Brought to you by:

340 Corporate Way, Suite 300, Orange Park, FL 32073

888-268-8770 ∙ 904-278-0524 ∙ [email protected] ∙ www.sqe.com

Page 2: Exploratory Testing Explained

Paul Holland Testing Thoughts

An independent software test consultant and teacher, Paul Holland has more than sixteen years of hands-on testing and test management experience, primarily at Alcatel-Lucent where he led a transformation of the testing approach for two product divisions, making them more efficient and effective. As a test manager and tester, Paul focused on exploratory testing, test automation, and improving testing techniques. For the past five years, he has been consulting and delivering training within Alcatel-Lucent and externally to companies such as Intel, Intuit, Progressive Insurance, HP, RIM, and General Dynamics. Paul teaches the Rapid Software Testing course for Satisfice. For more information visit testingthoughts.com.

Page 3: Exploratory Testing Explained

Copyright © 1995-2014, Satisfice, Inc.

Paul Holland, Doran Jones, Inc.

[email protected]

www.doranjones.com

My Background

• Managing Director, Testing Practice at Doran Jones
• Independent S/W testing consultant, 4/2012 - 3/2014
• 16+ years testing telecommunications equipment and reworking test methodologies at Alcatel-Lucent
• 10+ years as a test manager
• Presenter at STAREast, STARWest, Let’s Test, EuroSTAR, and CAST
• Facilitator at 50+ peer conferences and workshops
• Teacher of S/W testing for the past 6 years
• Teacher of Rapid Software Testing
• Military helicopter pilot – Canadian Sea Kings

Page 4: Exploratory Testing Explained

Let’s Do ET! Ready…?

The Roadmap

Want to learn how to test?

OKAY LET’S GO!!!

(To do ET well, begin by doing it poorly.)

Page 5: Exploratory Testing Explained

Testing is…

acquiring the competence, motivation, and credibility to…

create the conditions necessary to…

evaluate a product by learning about it through experimentation, which includes to some degree: questioning, study, modeling, observation and inference, including…

operating a product to check specific facts about it…

…so that you help your clients to make informed decisions about risk.

Operating a product to check specific facts about it means:

• Observe: interact with the product in specific ways to collect specific observations.
• Evaluate: apply algorithmic decision rules to those observations.
• Report: report any failed checks.

Call this “Checking”, not Testing.

Page 6: Exploratory Testing Explained

(Diagram: the many activities that make up testing, spanning tacit and explicit knowledge: Test Procedures, Consistency Oracles, Prospective Testing, Learning and Teaching, Commitment Management (inc. estimation), Recruiting Helpers, Managing Testing Logistics, Test Tooling and Artifact Development, Test Framing, Bug Advocacy & Triage, Project Post Mortem, Creating Archival Documentation, Guiding Helpers, Discovery of Curios, Issues & Risks, Building the Test Team, Designing Checks and Tests, Playing with the Product, Studying Results, Galumphing, Configuring Product & Tools, Schedule Management, Study Customer Feedback, Relationship Building, Making Failure Productive, Sympathetic Testing, Maintaining Personal Health and Motivation, Team Leadership, Quasi-Functional Testing, Playing Programmer, Testing w/ Simulated Conditions, Testing a Simulation, Creating the Test Lab, Studying Specs, Managing Records, Playing Business Analyst, Opposition Research, Testability Advocacy, Cultivate Credibility)

Testing vs. Checking

• TESTING (think “what testers do”): the process of evaluating a product by learning about it through experimentation, which includes to some degree: questioning, study, modeling, observation and inference.

• CHECKING (think “fact checking”): the process of making evaluations by applying algorithmic decision rules to specific observations of a product.

Page 7: Exploratory Testing Explained

Exploratory Testing Is…

• an approach to testing…
• that emphasizes the personal freedom and responsibility of each tester to continually optimize the value of his work…
• by treating learning, test design, test execution and result evaluation as mutually supportive activities that run in parallel throughout the project.

(applicable to any test technique)
(optimize how?)

Page 8: Exploratory Testing Explained

Exploration is Not Just Action

(Diagram: arrows and cycles; value seeking vs. task performing)

You can put them together!

Page 9: Exploratory Testing Explained

You can put them together!

(Diagram: the arrows and cycles combined)

Page 10: Exploratory Testing Explained

QUIZ: Which of these are exploratory testing?

• Testing without a specification
• Testing with a specification
• Testing without instructions
• Testing with instructions
• Testing a product for the first time
• Re-testing after a change
• Informal testing
• Formal testing
• Playing… Just playing…
• Fully automated fact checking

The Testing Formality Continuum

Mixing Scripting and Exploration

When I say “exploratory testing” and don’t qualify it, I mean anything on the informal side of this continuum.

INFORMAL: not done in any specific way, nor to verify specific facts.
FORMAL: done in a specific way, or to verify specific facts.

(Points along the continuum include: Play, Survey Exploratory, Analytical Exploratory, “Human Transceiver”, Vague/Generic Test Scripts, Matrix/Outline of Test Conditions, Product Coverage Outline, Specific Test Data, Human Checking, Machine Checking)

Page 11: Exploratory Testing Explained

Common Questions in ET

• How do we manage ET?
• How do we record ET?
• How do we measure ET?
• How do we estimate ET?
• How do we learn ET?
• How do we defend ET?

“How do we do X with ET, that we could never do with scripted testing either, but nobody noticed?”

A Heuristic Test Strategy Model

(Diagram: Project Environment, Tests, Product Elements, Quality Criteria, and Perceived Quality)

Page 12: Exploratory Testing Explained

A Heuristic Test Strategy Model (repeated)

This is what people think you do: “Compare the product to its specification.”

(Diagram: two circles, “described” and “actual”)

Page 13: Exploratory Testing Explained

This is more like what you really do

(Diagram: three overlapping circles: “imagined”, “described”, and “actual”)

“Compare the idea of the product to a description of it.”
“Compare the actual product to a description of it.”
“Compare the idea of the product to the actual product.”

This is what you find… (regions of the imagined / described / actual diagram)

• The designer INTENDS the product to be Firefox compatible, but never says so, and it actually is not.
• The designer INTENDS the product to be Firefox compatible, SAYS SO IN THE SPEC, but it actually is not.
• The designer assumes the product is not Firefox compatible, and it actually is not, but the ONLINE HELP SAYS IT IS.
• The designer INTENDS the product to be Firefox compatible, SAYS SO, and IT IS.
• The designer assumes the product is not Firefox compatible, but it ACTUALLY IS, and the ONLINE HELP SAYS IT IS.
• The designer INTENDS the product to be Firefox compatible, MAKES IT FIREFOX COMPATIBLE, but forgets to say so in the spec.
• The designer assumes the product is not Firefox compatible, and no one claims that it is, but it ACTUALLY IS.

Page 14: Exploratory Testing Explained

Oracles

An oracle is a way to recognize a problem that appears during testing.

“...it works” means “...it appeared at least once to meet some requirement to some degree.”

Consistency (“this agrees with that”): an important theme in oracles

• Familiarity: The system is not consistent with the pattern of any familiar problem.
• Explainability: The system is consistent with our ability to explain it.
• World: The system is consistent with objects and states, in the world, that it represents.
• History: The present version of the system is consistent with past versions of it.
• Image: The system is consistent with an image that the organization wants to project.
• Comparable Products: The system is consistent with comparable systems.
• Claims: The system is consistent with what important people say it’s supposed to be.
• Users’ Expectations: The system is consistent with what users want.
• Product: Each element of the system is consistent with comparable elements in the same system.
• Purpose: The system is consistent with its purposes, both explicit and implicit.
• Statutes & Standards: The system is consistent with applicable laws and standards.

Page 15: Exploratory Testing Explained

Familiar Problems

If a product is consistent with problems we’ve seen before, we suspect that there might be a problem.

Explainability

If a product is inconsistent with our ability to explain it (or someone else’s), we suspect that there might be a problem.

Page 16: Exploratory Testing Explained

World

If a product is inconsistent with the way the world works, we suspect that there might be a problem.

History

If a product is inconsistent with previous versions of itself, we suspect that there might be a problem.

(“Okay, so how the #&@ do I print now?”)

Page 17: Exploratory Testing Explained

Image

If a product is inconsistent with an image that the company wants to project, we suspect a problem.

Comparable Products

When a product seems inconsistent with a product that is in some way comparable, we suspect that there might be a problem. (Example: WordPad vs. Word)

Page 18: Exploratory Testing Explained

Claims

When a product is inconsistent with claims that important people make about it, we suspect a problem.

User Expectations

When a product is inconsistent with expectations that a reasonable user might have, we suspect a problem.

Page 19: Exploratory Testing Explained

Purpose

When a product is inconsistent with its designers’ explicit or implicit purposes, we suspect a problem.

Product

When a product is inconsistent internally—as when it contradicts itself—we suspect a problem.

Page 20: Exploratory Testing Explained

Statutes and Standards

When a product is inconsistent with laws or widely accepted standards, we suspect a problem.

Oracles From the Inside Out

(Diagram: oracles range from tacit to explicit, and from the tester to other people. They include your own feelings and mental models, stakeholders’ feelings and mental models, and shared artifacts such as specs and tools, connected by experience, conference, reference, inference, and observable consistencies.)

Page 21: Exploratory Testing Explained

ET is a Structured Process

• Exploratory testing, as I teach it, is a structured process conducted by a skilled tester, or by lesser skilled testers or users working under supervision.

• The structure of ET comes from many sources:
− Test design heuristics
− Chartering
− Time boxing
− Perceived product risks
− The nature of specific tests
− The structure of the product being tested
− The process of learning the product
− Development activities
− Constraints and resources afforded by the project
− The skills, talents, and interests of the tester
− The overall mission of testing

In other words, it’s not “random”, but systematic.

IP Address

Heuristics bring useful structure to problem-solving skill.

• adjective: “serving to discover.”
• noun: “a fallible method for solving a problem or making a decision.”

“The engineering method is the use of heuristics to cause the best change in a poorly understood situation within the available resources.”
-- Billy Vaughan Koen, Discussion of The Method

Page 22: Exploratory Testing Explained

Test Design and Execution

Guide testers with personal supervision and concise documentation of test ideas. Meanwhile, train them so that they can guide themselves and be accountable for increasingly challenging work.

Achieve excellent test design by exploring different test designs while actually testing.

(Diagram: test ideas feed testing of the product or spec, and vice versa)

If You are Frustrated

1. Look over your recent tests and find a pattern there.
2. With your next few tests, violate the old pattern.
3. Prefer MFAT (multiple factors at a time).
4. Broaden and vary your observations.

Boundary Testing

Page 23: Exploratory Testing Explained

If You are Confused

1. Simplify your tests.

2. Conserve states.

3. Frequently repeat your actions.

4. Frequently return to a known state.

5. Prefer OFAT heuristic (one factor at a time).

6. Make precise observations.

Dice Game

Exploratory Branching: Distractions are good!

New test ideas occur continually during an ET session.

(Diagram: a test in progress branching off into new test ideas)

Page 24: Exploratory Testing Explained

“Long Leash” Heuristic

Let yourself be distracted… but periodically take stock of your status against your mission, ’cause you never know what you’ll find…

ET is a Structured Process

In excellent exploratory testing, one structure tends to dominate all the others: exploratory testers construct a compelling story of their testing. It is this story that gives ET a backbone.

Pen Test

Page 25: Exploratory Testing Explained

To test is to construct three stories (plus one more)

Level 1: A story about the status of the PRODUCT…
…about how it failed, and how it might fail…
…in ways that matter to your various clients.

Level 2: A story about HOW YOU TESTED it…
…how you configured, operated and observed it…
…about what you haven’t tested, yet…
…and won’t test, at all…

Level 3: A story about the VALUE of the testing…
…what the risks and costs of testing are…
…how testable (or not) the product is…
…things that make testing harder or slower…
…what you need and what you recommend…

(Level 3+: A story about the VALUE of the stories.)
…do you know what happened? Can you report? Does the report serve its purpose?

Why should I be pleased with your work?

The Roadmap

Allow some disposable time

Self-management is good!

• How often do you account for your progress?
• If you have any autonomy at all, you can risk investing some time in
− learning
− thinking
− refining approaches
− better tests

Page 26: Exploratory Testing Explained

Allow some disposable time

• If it turns out that you’ve made a bad investment… oh well.

☺ If it turns out that you’ve made a good investment, you might have
− learned something about the product
− invented a more powerful test
− found a bug
− done a better job
− avoided going down a dead end for too long
− surprised and impressed your manager

“Plunge in and Quit” Heuristic

Whenever you are called upon to test something very complex or frightening, plunge in! After a little while, if you are very confused or find yourself stuck, quit!

• This benefits from disposable time – that part of your work not scrutinized in detail.
• Plunge in and quit means you can start something without committing to finish it successfully, and therefore you don’t need a plan.
• Several cycles of this are a great way to discover a plan.

Page 27: Exploratory Testing Explained

The First Law of Documentation

“That should be documented” becomes “That should be documented if and when and how it serves our purposes.”

Who will read it? Will they understand it?
Is there a better way to communicate that information?
What does documentation cost you?

Common Problems with Test Documentation

• Distracts, and even prevents, testers from doing the best testing they know how to do.
• Authors don’t understand testing.
• Authors don’t own the format.
• Templates help hide poor thinking.
• Full of fluff.
• Fluff discourages readers and increases the cost of maintenance.
• No one knows the documentation requirements.
• Too much formatting increases the cost of maintenance.
• Information for different audiences is mixed together.
• Catering to rare audiences instead of the probable user.
• Disrupts the social life of information.
• Long-term and short-term goals conflict.
• Most people don’t read.

Page 28: Exploratory Testing Explained

What Does Rapid Testing Look Like?

Concise Documentation Minimizes Waste

(Diagram: general reference material such as a risk model, coverage model, test strategy, risk catalog, and testing heuristics feeds project-specific artifacts such as a status dashboard, schedule, bugs, and issues.)

Consider Automatic Logging

• Exploratory testing works better when the product produces an automatic log of everything that was covered in each test.
• You can also use external logging tools such as Spector (www.spectorsoft.com).
• Automatic logging means that you get something like a retrospective script of what was tested.
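As a rough illustration (not from the slides), a product or test harness could append one timestamped record per action to a file, so the log reads back as a retrospective script of the session. The class name, file name, and actions below are hypothetical.

import json
from datetime import datetime, timezone

class SessionLogger:
    """Append-only log of actions taken during an ET session."""

    def __init__(self, path):
        self.path = path

    def log(self, action, **details):
        # One JSON object per line, timestamped, so the log doubles as a
        # retrospective record of what was actually covered.
        entry = {"time": datetime.now(timezone.utc).isoformat(),
                 "action": action, **details}
        with open(self.path, "a", encoding="utf-8") as f:
            f.write(json.dumps(entry) + "\n")

# Usage during a session:
log = SessionLogger("session-2014-05-05.jsonl")
log.log("open_dialog", name="Insert Picture")
log.log("insert_clipart", file="map.png", document="large_report.docx")
log.log("observe", note="memory usage climbed noticeably after 20 insertions")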

Page 29: Exploratory Testing Explained

Introducing the Test Session

1) Charter
2) Uninterrupted Time Box
3) Reviewable Result
4) Debriefing

Page 30: Exploratory Testing Explained

Charter: A clear mission for the session

• A charter summarizes the goal or activity of the session.

• Before a session, it is a statement of how the session will be focused. Afterward, it should be changed, if necessary, to reasonably characterize the actual focus of the session.

• General charters are the norm, early in the project:
“Analyze the Insert Picture function”

• More specific charters tend to emerge later on:
“Test clip art insertion. Focus on stress and flow techniques, and make sure to insert into a variety of documents. We’re concerned about resource leaks or anything else that might degrade performance over time.”

Charter Patterns: Evolving test strategy

• Intake Sessions (Goal: negotiate mission)
“Interview the project manager about testing Xmind.”

• Survey Sessions (Goal: learn product)
“Familiarize yourself with Xmind.”

• Setup Sessions (Goal: create testing infrastructure)
“Develop a library of mindmaps for testing Xmind.”

• Analysis Sessions (Goal: get ideas for deep coverage)
“Identify the primary functions of Xmind.”
“Construct a test coverage outline.”
“Brainstorm test ideas.”
“Prepare a state model for state-based testing.”
“Perform a component risk-analysis to guide further testing.”
“Discover all the error messages in Xmind.”

Page 31: Exploratory Testing Explained

Charter Patterns: Evolving test strategy

• Deep Coverage Sessions (Goal: find the right bugs)
“Perform scenario testing based on the scenario playbook.”
“Perform a tour that achieves double-transition state coverage.”
“Perform steeplechase boundary testing on the major data items.”
“Test each error message in Xmind.”
“Perform a function tour using the 2300 node mindmap.”

• Closure Sessions (Goal: get ready to release)
“Verify the latest fixes.”
“Re-test tutorial with the latest build.”
“Review help files and readme.”
“Go over deferred bugs with Customer Support people.”
“Perform clean-machine install test.”

Time Box: Focused test effort of fixed duration

A normal session is 90 minutes long. (Base timing can be adjusted to suit your context.)
A real session may be somewhat longer or shorter.

A normal session is uninterrupted.
A real session may be somewhat interrupted.

Real sessions are “normed” for the purposes of reporting metrics. This is so that our clients don’t get confused by the numbers.

Page 32: Exploratory Testing Explained

Example

Session 1…
Work for 20 minutes
(fifteen-minute interruption)
Work for 75 minutes
(ten-minute interruption)
Work for 40 minutes
(end the real session)

Session 2…
Work for 35 minutes
(twenty-minute interruption)
Work for 85 minutes
(end the real session)

This results in 2 session reports and 5 hours of real time. It would be reported as 3 normal sessions.
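To make the norming arithmetic explicit, here is a small sketch (not from the slides) using the numbers above: 255 minutes of on-task work over 300 minutes of elapsed time rounds to three 90-minute normal sessions.

# Norming the example sessions (assumed 90-minute "normal session").
NORMAL_SESSION_MIN = 90

# (work_minutes, interruption_minutes) segments from the example.
session_1 = [(20, 15), (75, 10), (40, 0)]
session_2 = [(35, 20), (85, 0)]

work = sum(w for w, _ in session_1 + session_2)       # 255 minutes of testing
real = sum(w + i for w, i in session_1 + session_2)   # 300 minutes elapsed

print(f"real time: {real / 60:.1f} hours")                      # 5.0 hours
print(f"normed sessions: {round(work / NORMAL_SESSION_MIN)}")   # 3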

If a session is so interrupted that your efficiency is destroyed, abort the session until you get control of your time.

Page 33: Exploratory Testing Explained

Debriefing: Measurement begins with observation

• The manager reviews the session report to assure that he understands it and that it follows the protocol.
• The tester answers any questions.
• Session metrics are checked.
• The charter may be adjusted.
• The session may be extended.
• New sessions may be chartered.
• Coaching happens in both directions.

Reviewable Result: A scannable session sheet

• Charter
− #AREAS
• Start Time
• Tester Name(s)
• Breakdown
− #DURATION
− #TEST DESIGN AND EXECUTION
− #BUG INVESTIGATION AND REPORTING
− #SESSION SETUP
− #CHARTER/OPPORTUNITY
• Data Files
• Test Notes
• Bugs
− #BUG
• Issues
− #ISSUE

Example session sheet (excerpt):

CHARTER
-----------------------------------------------
Analyze MapMaker’s View menu functionality and report on areas of potential risk.

#AREAS
OS | Windows 2000
Menu | View
Strategy | Function Testing
Strategy | Functional Analysis

START
-----------------------------------------------
5/30/00 03:20 pm

TESTER
-----------------------------------------------
Jonathan Bach

TASK BREAKDOWN
-----------------------------------------------
#DURATION
short

#TEST DESIGN AND EXECUTION
65

#BUG INVESTIGATION AND REPORTING
25

#SESSION SETUP
20
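Because the sheet is plain text with tagged sections, it can be scanned by machine as well as by eye. A minimal sketch (hypothetical helper, not part of the SBTM toolset) that pulls the task-breakdown numbers out of a sheet like the excerpt above:

import re

def parse_breakdown(sheet_text):
    """Return the TBS numbers recorded under TASK BREAKDOWN."""
    fields = {
        "test": r"#TEST DESIGN AND EXECUTION\s+(\d+)",
        "bug": r"#BUG INVESTIGATION AND REPORTING\s+(\d+)",
        "setup": r"#SESSION SETUP\s+(\d+)",
    }
    breakdown = {}
    for name, pattern in fields.items():
        match = re.search(pattern, sheet_text)
        breakdown[name] = int(match.group(1)) if match else 0
    return breakdown

# The excerpt above, pasted into a string so the example is self-contained:
sheet = "#TEST DESIGN AND EXECUTION\n65\n#BUG INVESTIGATION AND REPORTING\n25\n#SESSION SETUP\n20\n"
print(parse_breakdown(sheet))  # {'test': 65, 'bug': 25, 'setup': 20}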

Page 34: Exploratory Testing Explained

Reviewable Result: Various tools are now available

• Rapid Reporter
• SessionWeb
• SBTExecute
• Atlassian: Bonfire
• Atlassian: Jira

The Breakdown Metrics

Testing is like looking for worms:

• Test Design and Execution
• Bug Investigation and Reporting
• Session Setup

Page 35: Exploratory Testing Explained

Challenges of SBTM

• Architecting the system of charters (test planning)
• Making time for debriefings
• Getting the metrics right
• Creating good test notes
• Keeping the technique from dominating the testing
• Managing chronic interruptions

For example session sheets and metrics, see http://www.satisfice.com/sbtm

Whiteboard

• Used for planning and tracking of test execution
• Suitable for use in waterfall or agile (as long as you have control over your own team’s process)
• Use colours to track:
− Features, or
− Main Areas, or
− Test styles (performance, robustness, system)

Page 36: Exploratory Testing Explained

Whiteboard

• Divide the board into four areas:
− Work to be done
− Work in Progress
− Cancelled or Work not being done
− Completed work
• Red stickies indicate issues (not just bugs)
• Create a sticky note for each half day of work (or mark the number of half days expected on the sticky note)
• Prioritize stickies daily (or at least twice a week)
• Finish “on time” with low-priority work incomplete
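For teams that want the same view in a tool, here is a minimal sketch (hypothetical names, not from the slides) of the four-area board with one sticky per half day of planned work:

from dataclasses import dataclass, field

@dataclass
class Sticky:
    charter: str
    half_days: int = 1
    is_issue: bool = False  # a "red sticky": an issue, not just a bug

@dataclass
class Board:
    to_do: list = field(default_factory=list)
    in_progress: list = field(default_factory=list)
    cancelled: list = field(default_factory=list)
    completed: list = field(default_factory=list)

board = Board()
board.to_do.append(Sticky("INP vs. REIN + SHINE", half_days=12))
board.in_progress.append(Sticky("Expected throughput testing", half_days=5))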

Whiteboard Example

(Photo: the board at the end of week 1, out of 7 weeks)

Page 37: Exploratory Testing Explained

Whiteboard Example

(Photo: the board at the end of week 6, out of 7 weeks)

Reporting

• An Excel spreadsheet with:
− List of Charters
− Area
− Estimated Effort
− Expended Effort
− Remaining Effort
− Tester(s)
− Start Date
− Completed Date
− Issues
− Comments

• Does NOT include pass/fail percentage or number of test cases
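If a spreadsheet feels too heavy, the same charter-level report can be generated as CSV. A minimal sketch, using the column names above and one row from the sample report that follows (the output file name is hypothetical):

import csv

COLUMNS = ["Charter", "Area", "Estimated Effort", "Expended Effort",
           "Remaining Effort", "Tester(s)", "Start Date", "Completed Date",
           "Issues", "Comments"]

rows = [
    {"Charter": "INP vs. SHINE", "Area": "ARQ", "Estimated Effort": 6,
     "Expended Effort": 6, "Remaining Effort": 0, "Tester(s)": "ncowan",
     "Start Date": "12/01/2011", "Completed Date": "12/04/2011",
     "Issues": "", "Comments": ""},
]

with open("test_progress.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=COLUMNS)
    writer.writeheader()
    writer.writerows(rows)
    # Note: deliberately no pass/fail percentages or test-case counts.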

Page 38: Exploratory Testing Explained

Sample Report

Charter | Area | Estimated Effort | Expended Effort | Remaining Effort | Tester | Date Started | Date Completed | Issues Found | Comments
Investigation for high QLN spikes on EVLT | H/W Performance | 0 | 20 | 0 | acode | 12/10/2011 | 01/14/2012 | 1617032 | Lots of investigation. Problem was on 2-3 out of 48 ports, which just happened to be 2 of the 6 ports I tested.
ARQ Verification under different RA Modes | ARQ | 2 | 2 | 0 | ncowan | 12/14/2011 | 12/15/2011 | |
POTS interference | ARQ | 2 | 0 | 0 | --- | 01/08/2012 | 01/08/2012 | | Decided not to test, as the H/W team already tested this functionality and time was tight.
Expected throughput testing | ARQ | 5 | 5 | 0 | acode | 01/10/2012 | 01/14/2012 | |
INP vs. SHINE | ARQ | 6 | 6 | 0 | ncowan | 12/01/2011 | 12/04/2011 | |
INP vs. REIN | ARQ | 6 | 7 | 5 | jbright | 01/06/2012 | 01/10/2012 | | To translate the files properly, had to install the Python solution from Antwerp. Some overhead to begin testing (installation, config test) but fairly quick to execute afterwards.
INP vs. REIN + SHINE | ARQ | 12 | 12 | | | | | |
Traffic delay and jitter from RTX | ARQ | 2 | 2 | 0 | ncowan | 12/05/2011 | 12/05/2011 | |
Attainable Throughput | ARQ | 1 | 4 | 0 | jbright | 01/05/2012 | 01/08/2012 | | Took longer because the product was not behaving as expected and I had to make sure I was testing correctly. My expectations were wrong, based on virtual noise not being exact.

Weekly Report

• A PowerPoint slide indicating the important issues (not a count but a list):
− “Show stopping” bugs
− New bugs found since the last report
− Important issues with testing (blocking bugs, equipment issues, people issues, etc.)
− Risks (updates and newly discovered)
− Tester concerns (if different from the above)
− The slide on the next page indicating progress

Page 39: Exploratory Testing Explained

Sample Report

(Chart: “Awesome Product” Test Progress as of 02/01/2012. Effort in person half-days, by feature: ARQ, SRA, Vectoring, Regression, H/W Performance. Each feature shows Original Planned Effort, Expended Effort, and Total Expected Effort. The direction of the lines indicates the effort trend since the last report; a solid centre bar means finished; green = no concerns, yellow = some concerns, red = major concerns.)

Page 40: Exploratory Testing Explained

Appendix A

Reporting the TBS Breakdown
A guess is okay, but follow the protocol

• Test, Bug, and Setup are orthogonal categories.
• Estimate the percentage of charter work that fell into each category.
• Nearest 5% or 10% is good enough.
• If activities are done simultaneously, report the highest precedence activity.
• Precedence goes in order: T, B, then S.
• All we really want is to track interruptions to testing.
• Don’t include Opportunity Testing in the estimate.
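To make the precedence and rounding rules concrete, here is a small sketch (hypothetical helper, not part of the protocol itself) that collapses simultaneous activities to the highest-precedence one and rounds to the nearest 5%:

PRECEDENCE = ["test", "bug", "setup"]  # T, then B, then S

def report_tbs(slices):
    """slices: one set of observed activities per equal slice of on-charter time."""
    counts = {a: 0 for a in PRECEDENCE}
    for activities in slices:
        for activity in PRECEDENCE:  # first match wins: Test > Bug > Setup
            if activity in activities:
                counts[activity] += 1
                break
    total = sum(counts.values()) or 1
    return {a: 5 * round(100 * n / total / 5) for a, n in counts.items()}

# Ten-minute slices of a 90-minute session:
print(report_tbs([{"setup"}, {"setup"}, {"test"}, {"test", "bug"}, {"test"},
                  {"bug"}, {"test"}, {"test"}, {"test"}]))
# -> {'test': 65, 'bug': 10, 'setup': 20} (each rounded to the nearest 5%)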

Page 41: Exploratory Testing Explained

Activity Hierarchy
All test work fits here, somewhere

(Diagram: all work splits into non-session and session work; session work splits into opportunity and on-charter work; on-charter work splits into test, bug, and setup; non-session work is inferred.)

Work Breakdown: Diagnosing the productivity

(Pie chart: Non-Session 61%, Test 28%, Setup 6%, Bug 4%, Opportunity 1%)

• Do these proportions make sense?
• How do they change over time?
• Is the reporting protocol being followed?

(Chart: work breakdown trend from 5/26 through 8/18)

Page 42: Exploratory Testing Explained

Coverage: Specifying coverage areas

• These are text labels listed in the Charter section of the session sheet (e.g., “insert picture”).
• Coverage areas can include anything:
− areas of the product
− test configuration
− test strategies
− system configuration parameters
• Use the debriefings to check the validity of the specified coverage areas.

Coverage: Are we testing the right stuff?

• Is it a lop-sided set of coverage areas?
• Is it distorted reporting?
• Is this a risk-based test strategy?

(Chart: distribution of on-charter testing across areas)

Page 43: Exploratory Testing Explained

Using the Data to Estimate a Test Cycle

1. How many perfect sessions (100% on-charter testing) does it take to do a cycle? (let’s say 40)
2. How many sessions can the team (of 4 testers) do per day? (let’s say 3 per day, per tester = 12)
3. How productive are the sessions? (let’s say 66% is on-charter test design and execution)
4. Estimate: 40 / (12 * 0.66) ≈ 5 days
5. We base the estimate on the data we’ve collected. When any conditions or assumptions behind this estimate change, we will update the estimate.
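The same arithmetic, written out as a small sketch (the function name is hypothetical) so it can be re-run whenever the inputs change:

def estimate_cycle_days(perfect_sessions_needed, testers,
                        sessions_per_tester_per_day, on_charter_fraction):
    sessions_per_day = testers * sessions_per_tester_per_day
    effective_sessions_per_day = sessions_per_day * on_charter_fraction
    return perfect_sessions_needed / effective_sessions_per_day

print(estimate_cycle_days(40, 4, 3, 0.66))  # ~5.05, i.e. about 5 days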