8/8/2019 Tester's Guide

TESTER'S GUIDE
TABLE OF CONTENTS

Policy
Terms to understand
Life Cycle of Testing Process
Levels of Testing
Types of Testing
Testing Techniques
Web Testing Specifics
Testing - When is a program correct?
Test Plan
Test cases
Testing Coverage
What if there isn't enough time for thorough testing?
Defect reporting
Types of Automated Tools
POLICY
We are committed to Continuous Improvement of Quality of Products and Customer Services by adhering to International Standards.
Terms to understand
What is software 'quality'?
Quality software is reasonably bug-free, delivered on time and within budget, meets
requirements and/or expectations, and is maintainable.
What is verification? What is validation?
Verification typically involves reviews and meetings to evaluate documents, plans, code, requirements, and specifications. This can be done with checklists, issues lists,
walkthroughs, and inspection meetings. Validation typically involves actual testing and
takes place after verifications are completed.
What's an 'inspection'?
An inspection is more formalized than a 'walkthrough', typically with 3-8 people
including a moderator, a reader, and a recorder to take notes; the author of whatever
is being reviewed also attends. The subject of the inspection is typically a document
such as a requirements spec or a test plan, and the purpose is to find problems and
see what's missing, not to fix anything.
What's the difference between QA and testing?
Software QA involves the entire software development PROCESS - monitoring and
improving the process, making sure that any agreed-upon standards and procedures are
followed, and ensuring that problems are found and dealt with. It is oriented to
'prevention'.

Testing involves operation of a system or application under controlled conditions and
evaluating the results (e.g., 'if the user is in interface A of the application while using
hardware B, and does C, then D should happen'). The controlled conditions should
include both normal and abnormal conditions. Testing should intentionally attempt to
make things go wrong to determine if things happen when they shouldn't or things don't
happen when they should. It is oriented to 'detection'.
Life Cycle of Testing Process
The following are some of the steps to consider:

- Obtain requirements, functional design, and internal design specifications and other necessary documents
- Obtain schedule requirements
- Determine project-related personnel and their responsibilities, reporting requirements, required standards and processes (such as release processes, change processes, etc.)
- Identify the application's higher-risk aspects, set priorities, and determine scope and limitations of tests
- Determine test approaches and methods - unit, integration, functional, system, load, usability tests, etc.
- Determine test environment requirements (hardware, software, communications, etc.)
- Determine testware requirements (record/playback tools, coverage analyzers, test tracking, problem/bug tracking, etc.)
- Determine test input data requirements
- Identify tasks and those responsible for tasks
- Set schedule estimates, timelines, milestones
- Determine input equivalence classes, boundary value analyses, error classes
- Prepare test plan document and have needed reviews/approvals
- Write test cases
- Have needed reviews/inspections/approvals of test cases
- Prepare test environment and testware; obtain needed user manuals/reference documents/configuration guides/installation guides; set up test tracking processes; set up logging and archiving processes; set up or obtain test input data
- Obtain and install software releases
- Perform tests
- Evaluate and report results
- Track problems/bugs and fixes
- Retest as needed
- Maintain and update test plans, test cases, test environment, and testware through the life cycle
Levels of Testing
Unit Testing
The most 'micro' scale of testing; to test particular functions or code modules. Typically
done by the programmer and not by testers, as it requires detailed knowledge of the
internal program design and code. Not always easily done unless the application has a
well-designed architecture with tight code; may require developing test driver modules or
test harnesses.
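To make this concrete, here is a minimal sketch of a unit test using Python's unittest framework; the apply_discount function is a hypothetical stand-in for a real production unit:

```python
import unittest

def apply_discount(price, percent):
    """Hypothetical unit under test: reduce price by the given percentage."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTest(unittest.TestCase):
    def test_typical_discount(self):
        self.assertEqual(apply_discount(100.0, 25), 75.0)

    def test_zero_discount_leaves_price_unchanged(self):
        self.assertEqual(apply_discount(100.0, 0), 100.0)

    def test_invalid_percent_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)
```

Run with `python -m unittest` against the module containing the test class; each test exercises one behaviour of the unit in isolation.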
Integration testing
Testing of combined parts of an application to determine if they function together
correctly. The 'parts' can be code modules, individual applications, client and server
applications on a network, etc. This type of testing is especially relevant to client/server
and distributed systems.
Integration can be top-down or bottom-up:

- Top-down testing starts with the main module and successively replaces stubs with the real modules.
- Bottom-up testing builds larger module assemblies from primitive modules.

Sandwich testing is mainly top-down, with bottom-up integration and testing applied to
certain widely used components.
Acceptance testing
Final testing based on specifications of the end-user or customer, or based on use by end-
users/customers over some limited period of time.
Types of Testing
Incremental integration testing
Continuous testing of an application as new functionality is added; requires that various
aspects of an application's functionality be independent enough to work separately before
all parts of the program are completed, or that test drivers be developed as needed; done
by programmers or by testers.
Sanity testing

Typically an initial testing effort to determine if a new software version is performing
well enough to accept it for a major testing effort. For example, if the new software is
crashing systems every 5 minutes, bogging down systems to a crawl, or destroying
databases, the software may not be in a 'sane' enough condition to warrant further testing
in its current state.
Compatibility testing
Testing how well software performs in a particular hardware/software/operating
system/network/etc. environment.
Exploratory testing
Often taken to mean a creative, informal software test that is not based on formal test
plans or test cases; testers may be learning the software as they test it.
Ad-hoc testing
Similar to exploratory testing, but often taken to mean that the testers have significant
understanding of the software before testing it.
Comparison testing
Comparing software weaknesses and strengths to competing products.
Load testing
Testing an application under heavy loads, such as testing of a web site under a range of
loads to determine at what point the system's response time degrades or fails.
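The idea can be sketched as follows; handle_request here is a hypothetical stand-in for a real endpoint, and a real load test would use a dedicated tool against the actual system under test:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request(payload):
    """Stand-in for a real endpoint; a load test would call the system under test here."""
    time.sleep(0.01)   # simulate a fixed processing cost
    return {"status": 200, "echo": payload}

def run_load(n_requests, n_workers):
    """Fire n_requests concurrently and record each response time."""
    timings = []

    def timed_call(i):
        start = time.perf_counter()
        response = handle_request(i)
        timings.append(time.perf_counter() - start)  # list.append is thread-safe in CPython
        return response

    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        responses = list(pool.map(timed_call, range(n_requests)))
    ok = sum(1 for r in responses if r["status"] == 200)
    return ok, max(timings)
```

Increasing n_requests while watching the worst-case timing shows where response time starts to degrade.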
System testing
Black-box type testing that is based on overall requirements specifications; covers all
combined parts of a system.
Functional testing
Black-box type testing geared to functional requirements of an application; this type of
testing should be done by testers. This doesn't mean that the programmers shouldn't
check that their code works before releasing it (which of course applies to any stage of
testing.)
Volume testing
Volume testing involves testing a software or Web application using corner cases of "task
size" or input data size. The exact volume tests performed depend on the application's
functionality, its input and output mechanisms, and the technologies used to build the
application. Sample volume testing considerations include, but are not limited to:

- If the application reads text files as inputs, try feeding it both an empty text file and a huge (hundreds of megabytes) text file.
- If the application stores data in a database, exercise the application's functions when the database is empty and when the database contains an extreme amount of data.
- If the application is designed to handle 100 concurrent requests, send 100 requests simultaneously and then send the 101st request.
- If a Web application has a form with dozens of text fields that allow a user to enter text strings of unlimited length, try populating all of the fields with a large amount of text and submit the form.
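The first consideration above (empty versus huge input files) can be sketched like this; count_lines is a hypothetical unit under test, and the file size is scaled down for illustration:

```python
import os
import tempfile

def count_lines(path):
    """Hypothetical unit under test: count the lines in a text input file."""
    with open(path, "r", encoding="utf-8") as f:
        return sum(1 for _ in f)

def volume_test():
    """Exercise the corner cases of input size: an empty file and a large one."""
    with tempfile.TemporaryDirectory() as tmp:
        empty = os.path.join(tmp, "empty.txt")
        open(empty, "w").close()
        assert count_lines(empty) == 0   # empty input must not crash

        large = os.path.join(tmp, "large.txt")
        n = 100_000                      # scale toward hundreds of MB in a real run
        with open(large, "w", encoding="utf-8") as f:
            f.writelines(f"record {i}\n" for i in range(n))
        assert count_lines(large) == n   # output still correct at volume

volume_test()
```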
Stress testing
Term often used interchangeably with 'load' and 'performance' testing. Also used to
describe such tests as system functional testing while under unusually heavy loads, heavy
repetition of certain actions or inputs, input of large numerical values, large complex
queries to a database system, etc.
Sociability Testing
This means that you test an application in its normal environment, along with other
standard applications, to make sure they all get along together; that is, that they don't
corrupt each other's files, they don't crash, they don't hog system resources, they
don't lock up the system, they can share the printer peacefully, etc.
Usability testing
Testing for 'user-friendliness'. Clearly this is subjective, and will depend on the targeted
end-user or customer. User interviews, surveys, video recording of user sessions, and
other techniques can be used. Programmers and testers are usually not appropriate as
usability testers.
Recovery testing
Testing how well a system recovers from crashes, hardware failures, or other catastrophic
problems.
Security testing
Testing how well the system protects against unauthorized internal or external access,
willful damage, etc.; may require sophisticated testing techniques.

Performance testing
Term often used interchangeably with 'stress' and 'load' testing. Ideally 'performance'
testing (and any other 'type' of testing) is defined in requirements documentation or QA
or Test Plans.
End-to-end testing
Similar to system testing; the 'macro' end of the test scale; involves testing of a complete
application environment in a situation that mimics real-world use, such as interacting
with a database, using network communications, or interacting with other hardware,
applications, or systems if appropriate.
Regression testing
Re-testing after fixes or modifications of the software or its environment. It can be
difficult to determine how much re-testing is needed, especially near the end of the
development cycle. Automated testing tools can be especially useful for this type of
testing.
Parallel testing
With parallel testing, users can easily choose to run batch tests or asynchronous tests
depending on the needs of their test systems. Testing multiple units in parallel increases
test throughput and lowers a manufacturer's cost of test.
Install/uninstall testing
Testing of full, partial, or upgrade install/uninstall processes.
Mutation testing
A method for determining if a set of test data or test cases is useful, by deliberately
introducing various code changes ('bugs') and retesting with the original test data/cases to
determine if the 'bugs' are detected. Proper implementation requires large computational
resources.
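A toy illustration of the idea, with a deliberately planted mutant; the functions are hypothetical:

```python
def is_positive(x):
    """Original implementation under test (hypothetical)."""
    return x > 0

def is_positive_mutant(x):
    """Mutant: the '>' operator deliberately changed to '>='."""
    return x >= 0

def suite_passes(fn, test_data):
    """Return True when fn gives the expected answer for every (input, expected) pair."""
    return all(fn(arg) == expected for arg, expected in test_data)

# A weak test set never exercises the x == 0 boundary, so the mutant slips through:
weak_tests = [(5, True), (-3, False)]
# Adding the boundary case detects ('kills') the mutant:
strong_tests = weak_tests + [(0, False)]

assert suite_passes(is_positive, strong_tests)             # original passes everything
assert suite_passes(is_positive_mutant, weak_tests)        # mutant survives the weak set
assert not suite_passes(is_positive_mutant, strong_tests)  # stronger set detects the bug
```

A surviving mutant signals a gap in the test data; real mutation tools generate and run many such mutants automatically, which is why the technique is computationally expensive.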
Alpha testing
Testing of an application when development is nearing completion; minor design changes
may still be made as a result of such testing. Typically done by end-users or others, not
by programmers or testers.
Beta testing
Testing when development and testing are essentially completed and final bugs and
problems need to be found before final release. Typically done by end-users or others, not
by programmers or testers.
Testing Techniques

Equivalence partitioning
1. A good test case reduces by more than one the number of other test cases
which must be developed.
2. A good test case covers a large set of other possible cases.
3. Classes of valid inputs.
4. Classes of invalid inputs.
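A small sketch of partitioning inputs into classes; the accept_age validator and its 18-65 rule are hypothetical:

```python
def accept_age(age):
    """Hypothetical validator: integer ages 18-65 inclusive are accepted."""
    return isinstance(age, int) and 18 <= age <= 65

# One representative per equivalence class stands in for the whole class:
valid_class  = [40]        # any value in 18..65
invalid_low  = [10]        # any value below 18
invalid_high = [70]        # any value above 65
invalid_type = ["forty"]   # any non-integer input

for age in valid_class:
    assert accept_age(age)
for age in invalid_low + invalid_high + invalid_type:
    assert not accept_age(age)
```

Four test values cover the four classes; testing more values from the same class would add little.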
Boundary testing

This method leads to a selection of test cases that exercise boundary values. It
complements equivalence partitioning since it selects test cases at the edges of a class.
Rather than focusing solely on input conditions, boundary value analysis (BVA) derives
test cases from the output domain as well. BVA guidelines include:
1. For input ranges bounded by a and b, test cases should include values a and b
and just above and just below a and b respectively.
2. If an input condition specifies a number of values, test cases should be
developed to exercise the minimum and maximum numbers and values just
above and below these limits.
3. Apply guidelines 1 and 2 to the output.
4. If internal data structures have prescribed boundaries, a test case should be
designed to exercise the data structure at its boundary.
Test case design for boundary value analysis:

Situations on, just above, or just below the edges of input, output, and condition classes
have a high probability of exposing errors.
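Guideline 1 can be sketched as a small helper that derives the boundary values for an integer range; the in_range condition is a hypothetical unit under test:

```python
def boundary_values(a, b, step=1):
    """Guideline 1: values at the bounds a and b, plus just above and just below each."""
    return [a - step, a, a + step, b - step, b, b + step]

def in_range(x, a, b):
    """Hypothetical condition under test: is x within [a, b]?"""
    return a <= x <= b

# For an input range bounded by 1 and 100:
tests = boundary_values(1, 100)   # [0, 1, 2, 99, 100, 101]
for x in tests:
    assert in_range(x, 1, 100) == (1 <= x <= 100)
```

The two values just outside the range (0 and 101) are exactly where off-by-one mistakes in the condition would surface.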
Error guessing

Error guessing is the process of using intuition and past experience to fill in gaps in the
test data set. There are no rules to follow. The tester must review the test records with an
eye towards recognizing missing conditions. Two familiar examples of error-prone
situations are division by zero and calculating the square root of a negative number.
Either of these will result in system errors and garbled output.

Other cases where experience has demonstrated error proneness are the processing of
variable-length tables, calculation of median values for odd and even numbered
populations, cyclic master file/database updates (improper handling of duplicate keys,
unmatched keys, etc.), overlapping storage areas, overwriting of buffers, forgetting to
initialize buffer areas, and so forth. I am sure you can think of plenty of circumstances
unique to your hardware/software environments and use of specific programming
languages.
Error Guessing is as important as Equivalence partitioning and Boundary Analysis
because it is intended to compensate for their inherent incompleteness. As Equivalence
Partitioning and Boundary Analysis complement one another, Error Guessing
complements both of these techniques.
White Box testing
White box testing (logic driven) is based on knowledge of the internal logic of an
application's code. Tests are based on coverage of code statements, branches, paths, and
conditions. White box testing is a test case design method that uses the control structure
of the procedural design to derive test cases. Test cases can be derived that
1. guarantee that all independent paths within a module have been exercised at
least once,
2. exercise all logical decisions on their true and false sides,
3. execute all loops at their boundaries and within their operational bounds, and
4. exercise internal data structures to ensure their validity.
Path testing

A path-coverage test allows us to exercise every transition between the program
statements (and so every statement and branch as well).

First we construct a program graph. Then we enumerate all paths. Finally we devise the
test cases.
Possible criteria:
1. exercise every path from entry to exit;
2. exercise each statement at least once;
3. exercise each case in every branch/case.
Condition testing

A condition test can use a combination of comparison operators and logical operators.
The Comparison operators compare the values of variables and this comparison produces
a boolean result. The Logical operators combine booleans to produce a single boolean
result that is the result of the condition test.
e.g. (a == b) - the result is true if the value of a is the same as the value of b.
Myers: take each branch out of a condition at least once.
White and Cohen: for each relational operator e1 < e2 test all combinations of e1, e2
orderings. For a Boolean condition, test all possible inputs (!).
Branch and relational operator testing - enumerate categories of operator values:

B1 || B2: test {B1=t, B2=t}, {t,f}, {f,t}
B1 || (e2 = e3): test {t,=}, {f,=}, {t,}.
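A sketch of exercising the B1 || B2 cases above in Python:

```python
def condition(b1, b2):
    """The compound condition under test: B1 || B2."""
    return b1 or b2

# Myers: take each branch out of a condition at least once.
# The three cases listed above for B1 || B2:
cases = [((True, True), True), ((True, False), True), ((False, True), True)]
for (b1, b2), expected in cases:
    assert condition(b1, b2) == expected

# Full multicondition coverage would also exercise the remaining combination:
assert condition(False, False) is False
```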
Loop testing

For a single loop with zero minimum, N maximum, and no excluded values:

1. Try bypassing the loop entirely.
2. Try a negative loop iteration variable.
3. One iteration through the loop.
4. Two iterations through the loop - some initialization problems can be uncovered
only by two iterations.
5. A typical number of iterations.
6. One less than the maximum.
7. The maximum.
8. Try greater than the maximum.
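The checklist can be driven as a parameterized test; sum_first is a hypothetical loop under test with maximum N = 10:

```python
def sum_first(values, n):
    """Hypothetical loop under test: sum the first n values (n capped at len(values))."""
    total = 0
    for i in range(min(n, len(values))):
        total += values[i]
    return total

data = list(range(1, 11))   # the loop maximum N is 10 here

# Iteration counts drawn from the checklist: bypass, one, two, typical, N-1, N, above N
for n in [0, 1, 2, 5, 9, 10, 11]:
    assert sum_first(data, n) == sum(data[:min(n, len(data))])

assert sum_first(data, -1) == 0   # negative iteration count: the loop body never runs
```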
Data flow testing

Def-use chains:

1. def = definition of a variable
2. use = use of that variable
3. def-use chains go across control boundaries
4. Testing - test every def-use chain at least once.
Stubs for Testing
A Stub is a dummy procedure, module or unit that stands in for an unfinished portion of a
system
Stubs for Top-Down Testing
Four basic types:

- Display a trace message
- Display parameter value(s)
- Return a value from a table
- Return a table value selected by parameter
Drivers for Testing
A test harness or test driver is supporting code and data used to provide an environment
for testing part of a system in isolation.
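A sketch combining both ideas: a stub (types 1 and 4 above) standing in for an unfinished module, and a driver exercising the unit in isolation; all names and rates here are hypothetical:

```python
# Hypothetical lookup table backing the stub.
_FAKE_TAX_RATES = {"NY": 0.08875, "CA": 0.0725, "TX": 0.0625}

def get_tax_rate_stub(state):
    """Stands in for an unfinished tax-service module during top-down testing."""
    print(f"TRACE: get_tax_rate({state!r})")  # type 1: trace message with parameters
    return _FAKE_TAX_RATES.get(state, 0.0)    # type 4: table value selected by parameter

def price_with_tax(amount, state, rate_lookup=get_tax_rate_stub):
    """Unit under test; the real rate_lookup module does not exist yet."""
    return round(amount * (1 + rate_lookup(state)), 2)

def driver():
    """Minimal test driver: supplies inputs and checks outputs in isolation."""
    assert price_with_tax(100.0, "TX") == 106.25
    assert price_with_tax(100.0, "ZZ") == 100.0   # unknown state falls back to a 0 rate
    print("driver: all checks passed")

if __name__ == "__main__":
    driver()
```

When the real module is finished, it replaces the stub via the rate_lookup parameter and the same driver re-runs unchanged.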
Web Testing Specifics
Internet Software - Quality Characteristics
- Functionality - verified content
- Reliability - security and availability
- Efficiency - response times
- Usability - high user satisfaction
- Portability - platform independence
WWW Project Peculiarities
- Software consists of a large number of components
- User interface is more complex than many GUI-based client-server applications
- Users may be unknown (no training / user manuals)
- Security threats come from anywhere
- User load is unpredictable
Basic HTML Testing
- Check for illegal elements present
- Check for illegal attributes present
- Check that tags are closed
- Check for the basic structural tags (HTML, HEAD, TITLE, BODY)
- Check that all IMG tags have an ALT attribute (ALT text must be suggestive)
- Check for consistency of fonts, colors, and font size
- Check for spelling errors in text and images
- Check for "nonsense" markup

Example of nonsense markup: "Hello" may be written as "<B>Hel</B><B>lo</B>" -
fragmented or redundant tags that render the same text but bloat the page.
Suggestions for fast loading

- Web page weight should be reduced to as small a size as possible.
- Don't knock on the door of the database every time; go for an alternative.
  Example: if your web application has reports, generate the content of the report in a
  static HTML file periodically. When the user views the report, show the static HTML
  content. There is no need to go to the database and retrieve the data every time the
  user hits the report link.
- Cached query - if the data fetched by a query only changes periodically, the query can
  be cached for that period. This avoids unnecessary database access.
- Every IMG tag must have WIDTH and HEIGHT attributes.
  Bad example: <IMG SRC="..."> Hello World
  Good example: <IMG SRC="..." WIDTH="100" HEIGHT="50"> Hello World
- All photographic images must be in "jpg" format.
- Computer-created images must be in "gif" format.
- Background images should be less than 3.5k, and the background image should be the
  same for all pages (except for functional reasons).
- Avoid nested tables.
- Keep table text size to a minimum (e.g. less than 5000 characters).
Link Testing
- You must ensure that all hyperlinks are valid
- This applies to both internal and external links
- Internal links shall be relative, to minimize the overhead and faults when the web site
  is moved to the production environment
- External links shall be referenced as absolute URLs
- External links can change without control - so, automate regression testing
- Remember that external non-home-page links are more likely to break
- Be careful with links in "What's New" sections; they are likely to become obsolete
- Check that content can be accessed by means of a search engine and a site map
- Check the accuracy of search engine results
- Check that web site error 404 ("Not Found") is handled by means of a user-friendly
  page
Compatibility Testing
Check the site's behaviour across the industry-standard browsers. The main issues
involve how differently the browsers handle tables, images, caching, and scripting
languages.

In cross-browser testing, check for:

- Behaviour of buttons
- Support of JavaScript
- Support of tables
- Acrobat, Real, Flash behaviour
- ActiveX control support
- Java compatibility
- Text size
Browser                              | Version       | ActiveX controls | VBScript | JavaScript | Java applets | Dynamic HTML | Frames   | CSS 1.0  | CSS 2.0
Internet Explorer                    | 4.0 and later | Enabled          | Enabled  | Enabled    | Enabled      | Enabled      | Enabled  | Enabled  | Enabled
Internet Explorer                    | 3.0 and later | Enabled          | Enabled  | Enabled    | Enabled      | Disabled     | Enabled  | Enabled  | Disabled
Netscape Navigator                   | 4.0 and later | Disabled         | Disabled | Enabled    | Enabled      | Enabled      | Enabled  | Enabled  | Enabled
Netscape Navigator                   | 3.0 and later | Disabled         | Disabled | Enabled    | Enabled      | Disabled     | Enabled  | Disabled | Disabled
Both Internet Explorer and Navigator | 4.0 and later | Disabled         | Disabled | Enabled    | Enabled      | Enabled      | Enabled  | Enabled  | Enabled
Both Internet Explorer and Navigator | 3.0 and later | Disabled         | Disabled | Enabled    | Enabled      | Disabled     | Enabled  | Disabled | Disabled
Microsoft WebTV                      | Unavailable   | Disabled         | Disabled | Disabled   | Disabled     | Disabled     | Disabled | Disabled | Disabled
Usability Testing
Aspects to be tested with care:

- Coherence of look and feel
- Navigational aids
- User interactions
- Printing

With respect to:

- Normal behaviour
- Destructive behaviour
- Inexperienced users
Usability Tips
1. Define categories in terms of user goals
2. Name sections carefully
3. Think internationally
4. Identify the homepage link on every page
5. Make sure search is always available
6. Test all the browsers your audience will use
7. Differentiate visited links from unvisited links
8. Never use graphics where HTML text will do
9. Make GUI design predictable and consistent
10. Check that printed pages fit appropriately on paper pages (consider that many people
just surf and print; check especially the pages for which format is important, e.g. an
application form can either be filled in on-line or printed/filled/faxed)
Portability Testing
- Check that links to URLs outside the web site must be in canonical form
  (e.g. http://www.srasys.co.in)
- Check that links to URLs within the web site must be in relative form
  (e.g. ../aaouser/images/images.gif)
Cookies Testing
What are cookies?
A "cookie" is a small piece of information sent by the web server to be stored by the web
browser, so it can later be read back from the browser. This is useful for having the
browser remember some specific information.
Why must you test cookies?
- Cookies can expire
- Users can disable them in the browser
How to perform Cookies testing?
- Check the behaviour after cookie expiration
- Work with cookies disabled
- Disable cookies mid-way
- Delete web cookies mid-way
- Clear memory and disk cache mid-way
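Part of the expiration check can be rehearsed locally with Python's standard library; the Set-Cookie values and dates here are hypothetical:

```python
from http.cookies import SimpleCookie
from datetime import datetime, timezone

def parse_set_cookie(header):
    """Parse a Set-Cookie header value as a browser would receive it."""
    cookie = SimpleCookie()
    cookie.load(header)
    return cookie

def is_expired(morsel, now):
    """Treat a cookie as expired when its 'expires' attribute lies in the past."""
    expires = morsel["expires"]
    if not expires:
        return False  # session cookie: lives until the browser closes
    when = datetime.strptime(
        expires, "%a, %d %b %Y %H:%M:%S GMT").replace(tzinfo=timezone.utc)
    return when <= now

now = datetime(2019, 8, 8, tzinfo=timezone.utc)
live = parse_set_cookie("session=abc123; Expires=Fri, 09 Aug 2019 00:00:00 GMT")["session"]
stale = parse_set_cookie("session=abc123; Expires=Wed, 07 Aug 2019 00:00:00 GMT")["session"]

assert not is_expired(live, now)
assert is_expired(stale, now)
```

Testing the real application still requires driving an actual browser, since cookie storage, disabling, and cache clearing happen on the client side.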
Testing - When is a program correct?
There are levels of correctness. We must determine the appropriate level of correctness
for each system, because it costs more and more to reach higher levels.

1. No syntactic errors
2. Compiles with no error messages
3. Runs with no error messages
4. There exists data which gives correct output
5. Gives correct output for required input
6. Correct for typical test data
7. Correct for difficult test data
8. Proven correct using mathematical logic
9. Obeys specifications for all valid data
10. Obeys specifications for likely erroneous input
11. Obeys specifications for all possible input
Test Plan
A software project test plan is a document that describes the objectives, scope, approach,
and focus of a software testing effort. The process of preparing a test plan is a useful way
to think through the efforts needed to validate the acceptability of a software product. The
completed document will help people outside the test group understand the 'why' and
'how' of product validation. It should be thorough enough to be useful but not so thorough
that no one outside the test group will read it. The following are some of the items that
might be included in a test plan, depending on the particular project:
- Title of the project
- Identification of document, including version numbers
- Revision history of document, including authors, dates, approvals
- Table of contents
- Purpose of document, intended audience
- Objective of testing effort
- Software product overview
- Relevant related document list, such as requirements, design documents, other test
  plans, etc.
- Relevant standards or legal requirements
- Traceability requirements
- Relevant naming conventions and identifier conventions
- Test organization and personnel/contact-info/responsibilities
- Assumptions and dependencies
- Project risk analysis
- Testing priorities and focus
- Scope and limitations of testing
- Test outline - a decomposition of the test approach by test type, feature,
  functionality, process, system, module, etc. as applicable
- Outline of data input equivalence classes, boundary value analysis, error classes
- Test environment - hardware, operating systems, other required software, data
  configurations, interfaces to other systems
- Test environment setup and configuration issues
- Test data setup requirements
- Database setup requirements
- Outline of system-logging/error-logging/other capabilities, and tools such as
  screen capture software, that will be used to help describe and report bugs
- Discussion of any specialized software or hardware tools that will be used by
  testers to help track the cause or source of bugs
- Test automation - justification and overview
- Test tools to be used, including versions, patches, etc.
- Test script/test code maintenance processes and version control
- Problem tracking and resolution - tools and processes
- Project test metrics to be used
- Reporting requirements and testing deliverables
- Software entrance and exit criteria
- Initial sanity testing period and criteria
- Test suspension and restart criteria
- Personnel pre-training needs
- Test site/location
- Relevant proprietary, classified, security, and licensing issues
- Appendix - glossary, acronyms, etc.
Test cases
What's a 'test case'?
1. A test case is a document that describes an input, action, or event and an
expected response, to determine if a feature of an application is working
correctly. A test case should contain particulars such as test case identifier,
test case name, objective, test conditions/setup, input data requirements,
steps, and expected results.
2. Note that the process of developing test cases can help find problems in the
requirements or design of an application, since it requires completely
thinking through the operation of the application. For this reason, it's useful
to prepare test cases early in the development cycle if possible.
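The particulars listed above can be captured as a structured record; this sketch uses hypothetical field values, and real teams usually keep such records in a test-management tool or spreadsheet:

```python
# A single test case with the particulars named above.
test_case = {
    "id": "TC-LOGIN-001",
    "name": "Login rejects an incorrect password",
    "objective": "Verify the authentication feature fails safely on bad credentials",
    "conditions_setup": "A registered account 'alice' exists; login page reachable",
    "input_data": {"username": "alice", "password": "wrong-password"},
    "steps": [
        "Open the login page",
        "Enter the username and the incorrect password",
        "Submit the form",
    ],
    "expected_result": "An 'invalid credentials' message is shown; no session is created",
}

required_fields = {"id", "name", "objective", "conditions_setup",
                   "input_data", "steps", "expected_result"}
assert required_fields <= set(test_case)  # every particular is present
```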
Testing Coverage
1. Line coverage. Test every line of code (or statement coverage: test every
statement).
2. Branch coverage. Test every line, and every branch on multi-branch lines.
3. N-length sub-path coverage. Test every sub-path through the program of length
N. For example, in a 10,000 line program, test every possible 10-line sequence of
execution.
4. Path coverage. Test every path through the program, from entry to exit. The
number of paths is impossibly large to test.
5. Multicondition or predicate coverage. Force every logical operand to take every
possible value. Two different conditions within the same test may result in the
same branch, and so branch coverage would only require the testing of one of
them.
6. Trigger every assertion check in the program. Use impossible data if necessary.
7. Loop coverage. "Detect bugs that exhibit themselves only when a loop is
executed more than once."
8. Every module, object, component, tool, subsystem, etc. This seems obvious until
you realize that many programs rely on off-the-shelf components. The
programming staff doesn't have the source code to these components, so
measuring line coverage is impossible. At a minimum (which is what is measured
here), you need a list of all these components and test cases that exercise each one
at least once.
9. Fuzzy decision coverage. If the program makes heuristically-based or similarity-
based decisions, and uses comparison rules or data sets that evolve over time,
check every rule several times over the course of training.
10. Relational coverage. "Checks whether the subsystem has been exercised in a way
that tends to detect off-by-one errors" such as errors caused by using < instead
of <=.
table is probably in a separate data file that can vary from day to day or from
installation to installation. By modifying the table, you can radically change the
control flow of the program without recompiling or relinking the code. Some
programs drive a great deal of their control flow this way, using several tables.
Coverage measures? Some examples:
    o check that every expression selects the correct table element
    o check that the program correctly jumps or calls through every table
      element
    o check that every address or pointer that is available to be loaded into these
      tables is valid (no jumps to impossible places in memory, or to a routine
      whose starting address has changed)
    o check the validity of every table that is loaded at any customer site.
22. Every interrupt. An interrupt is a special signal that causes the computer to stop
the program in progress and branch to an interrupt handling routine. Later, the program restarts from where it was interrupted. Interrupts might be triggered by
hardware events (I/O or signals from the clock that a specified interval haselapsed) or software (such as error traps). Generate every type of interrupt inevery way possible to trigger that interrupt.
23. Every interrupt at every task, module, object, or even every line. The interrupt
handling routine might change state variables, load data, use or shut down a
peripheral device, or affect memory in ways that could be visible to the rest of
the program. The interrupt can happen at any time - between any two lines, or
when any module is being executed. The program may fail if the interrupt is
handled at a specific time. (Example: what if the program branches to handle an
interrupt while it's in the middle of writing to the disk drive?)

The number of test cases here is huge, but that doesn't mean you don't have to
think about this type of testing. This is path testing through the eyes of the
processor (which asks, "What instruction do I execute next?" and doesn't care
whether the instruction comes from the mainline code or from an interrupt
handler) rather than path testing through the eyes of the reader of the mainline
code. Especially in programs that have global state variables, interrupts at
unexpected times can lead to very odd results.
25. Every anticipated or potential race. Imagine two events, A and B. Both will occur, but the program is designed under the assumption that A will always precede B. This sets up a race between A and B: if B ever precedes A, the program will probably fail. To achieve race coverage, you must identify every potential race condition and then find ways, using random data or systematic test case selection, to attempt to drive B to precede A in each case.
Races can be subtle. Suppose that you can enter a value for a data item on two different data entry screens. User 1 begins to edit a record through the first screen. In the process, the program locks the record in Table 1. User 2 opens the second screen, which calls up a record in a different table, Table 2. The program is written to automatically update the corresponding record in Table 1 when
User 2 finishes data entry. Now, suppose that User 2 finishes before User 1. Table 2 has been updated, but the attempt to synchronize Table 1 and Table 2 fails. What happens at the time of failure, or later if the corresponding records in Tables 1 and 2 stay out of synch?
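One way to "drive B to precede A", as the race-coverage item above suggests, is to force each ordering deterministically from a test harness instead of hoping the scheduler produces it. A minimal sketch, with an invented Account object standing in for the program under test:

```python
import threading

class Account:
    """Invented stand-in for the program under test: it silently assumes a
    deposit (event A) always happens before the matching withdrawal (event B)."""
    def __init__(self):
        self.balance = 0
    def deposit(self, amount):
        self.balance += amount
    def withdraw(self, amount):
        if self.balance < amount:
            raise RuntimeError("race detected: B ran before A")
        self.balance -= amount

def run_ordered(first, second):
    """Run two actions on separate threads, forcing `first` to finish
    before `second` starts; return any exceptions raised by either."""
    done = threading.Event()
    errors = []
    def runner_a():
        try:
            first()
        except Exception as exc:
            errors.append(exc)
        finally:
            done.set()  # set in finally so runner_b never blocks forever
    def runner_b():
        done.wait()
        try:
            second()
        except Exception as exc:
            errors.append(exc)
    a = threading.Thread(target=runner_a)
    b = threading.Thread(target=runner_b)
    a.start(); b.start()
    a.join(); b.join()
    return errors
```

Race coverage then means running both orderings for every identified A/B pair: the A-first run should pass, and the B-first run exposes the assumption.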
27. Every time-slice setting. In some systems, you can control the grain of switching between tasks or processes. The size of the time quantum that you choose can make race bugs, time-outs, interrupt-related problems, and other time-related problems more or less likely. Of course, coverage is a difficult problem here because you aren't just varying time-slice settings through every possible value. You also have to decide which tests to run under each setting. Given a planned set of test cases per setting, the coverage measure looks at the number of settings you've covered.
28. Varied levels of background activity. In a multiprocessing system, tie up the processor with competing, irrelevant background tasks. Look for effects on races and interrupt handling. Similar to time-slices, your coverage analysis must specify:
o categories of levels of background activity (figure out something that makes sense), and
o all timing-sensitive testing opportunities (races, interrupts, etc.).
30. Each processor type and speed. Which processor chips do you test under? What tests do you run under each processor? You are looking for:
o speed effects, like the ones you look for with background activity testing, and
o consequences of processors' different memory management rules, and
o floating point operations, and
o any processor-version-dependent problems that you can learn about.
32. Every opportunity for file / record / field locking.
33. Every dependency on the locked (or unlocked) state of a file, record or field.
34. Every opportunity for contention for devices or resources.
35. Performance of every module / task / object. Test the performance of a module, then retest it during the next cycle of testing. If the performance has changed significantly, you are either looking at the effect of a performance-significant redesign or at a symptom of a new bug.
36. Free memory / available resources / available stack space at every line, or on entry into and exit out of every module or object.
37. Execute every line (branch, etc.) under the debug version of the operating system. This shows illegal or problematic calls to the operating system.
38. Vary the location of every file. What happens if you install or move one of the program's component, control, initialization or data files to a different directory or drive or to another computer on the network?
39. Check the release disks for the presence of every file. It's amazing how often afile vanishes. If you ship the product on different media, check for all files on allmedia.
40. Every embedded string in the program. Use a utility to locate embedded strings. Then find a way to make the program display each string.
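A toy version of such a string-locating utility, assuming the common convention of reporting runs of four or more printable ASCII bytes (the same heuristic the classic `strings` tool uses by default):

```python
import re

def embedded_strings(data, min_len=4):
    """Scan raw bytes for runs of printable ASCII at least min_len long.
    A toy equivalent of the `strings` utility for building the coverage list."""
    pattern = rb"[\x20-\x7e]{%d,}" % min_len
    return [match.group().decode("ascii") for match in re.finditer(pattern, data)]
```

Each string returned becomes a coverage item: find a way to make the program display it.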
41. Operation of every function / feature / data handling operation under:
42. Every program preference setting.
43. Every character set, code page setting, or country code setting.
44. The presence of every memory-resident utility (inits, TSRs).
45. Each operating system version.
46. Each distinct level of multi-user operation.
47. Each network type and version.
48. Each level of available RAM.
49. Each type / setting of virtual memory management.
50. Compatibility with every previous version of the program.
51. Ability to read every type of data available in every readable input file format. If a file format is subject to subtle variations (e.g. CGM) or has several sub-types (e.g. TIFF) or versions (e.g. dBASE), test each one.
52. Write every type of data to every available output file format. Again, beware of subtle variations in file formats: if you're writing a CGM file, full coverage would require you to test your program's output's readability by every one of the main programs that read CGM files.
53. Every typeface supplied with the product. Check all characters in all sizes and styles. If your program adds typefaces to a collection of fonts that are available to several other programs, check compatibility with the other programs (nonstandard typefaces will crash some programs).
54. Every type of typeface compatible with the program. For example, you might testthe program with (many different) TrueType and Postscript typefaces, and fixed-sized bitmap fonts.
55. Every piece of clip art in the product. Test each with this program. Test each with other programs that should be able to read this type of art.
56. Every sound / animation provided with the product. Play them all under different device (e.g. sound) drivers / devices. Check compatibility with other programs that should be able to play this clip-content.
57. Every supplied (or constructible) script to drive other machines / software (e.g. macros) / BBSs and information services (communications scripts).
58. All commands available in a supplied communications protocol.
59. Recognized characteristics. For example, every speaker's voice characteristics (for voice recognition software), or writer's handwriting characteristics (handwriting recognition software), or every typeface (OCR software).
60. Every type of keyboard and keyboard driver.
61. Every type of pointing device and driver at every resolution level and ballistic setting.
62. Every output feature with every sound card and associated drivers.
63. Every output feature with every type of printer and associated drivers at every resolution level.
64. Every output feature with every type of terminal and associated protocols.
65. Every output feature with every type of video card and associated drivers at every resolution level.
66. Every output feature with every type of video monitor and monitor-specific drivers at every resolution level.
67. Every color shade displayed or printed to every color output device (video card / monitor / printer / etc.) and associated drivers at every resolution level. And check the conversion to grey scale or black and white.
68. Every color shade readable or scannable from each type of color input device at every resolution level.
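Checking the conversion to grey scale is one of the few items here with a precise oracle. A sketch using the standard ITU-R BT.601 luma weights (assuming 8-bit channels; a given product may use different weights, so treat the coefficients as an assumption to verify against the spec):

```python
def to_gray(r, g, b):
    """Convert one 8-bit RGB colour to grey using ITU-R BT.601 luma weights."""
    return round(0.299 * r + 0.587 * g + 0.114 * b)

def check_gray_endpoints():
    """Black and white must survive the conversion exactly."""
    return to_gray(0, 0, 0) == 0 and to_gray(255, 255, 255) == 255
```

Full shade coverage would sweep all channel combinations against this oracle.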
69. Every possible feature interaction between video card type and resolution, pointing device type and resolution, printer type and resolution, and memory level. This may seem excessively complex, but I've seen crash bugs that occur only under the pairing of specific printer and video drivers at a high resolution setting. Other crashes required pairing of a specific mouse and printer driver, pairing of mouse and video driver, and a combination of mouse driver plus video driver plus ballistic setting.
70. Every type of CD-ROM drive, connected to every type of port (serial / parallel / SCSI) and associated drivers.
71. Every type of writable disk drive / port / associated driver. Don't forget the funyou can have with removable drives or disks.
72. Compatibility with every type of disk compression software. Check error handling for every type of disk error, such as full disk.
73. Every voltage level from analog input devices.
74. Every voltage level to analog output devices.
75. Every type of modem and associated drivers.
76. Every FAX command (send and receive operations) for every type of FAX card under every protocol and driver.
77. Every type of connection of the computer to the telephone line (direct, via PBX, etc.; digital vs. analog connection and signaling); test every phone control command under every telephone control driver.
78. Tolerance of every type of telephone line noise and regional variation (including variations that are out of spec) in telephone signaling (intensity, frequency, timing, other characteristics of ring / busy / etc. tones).
79. Every variation in telephone dialing plans.
80. Every possible keyboard combination. Sometimes you'll find trap doors that the programmer used as hotkeys to call up debugging tools; these hotkeys may crash a debuggerless program. Other times, you'll discover an Easter Egg (an undocumented, probably unauthorized, and possibly embarrassing feature). The broader coverage measure is every possible keyboard combination at every error message and every data entry point. You'll often find different bugs when checking different keys in response to different error messages.
81. Recovery from every potential type of equipment failure. Full coverage includes each type of equipment, each driver, and each error state. For example, test the program's ability to recover from full-disk errors on writable disks. Include floppies, hard drives, cartridge drives, optical drives, etc. Include the various connections to the drive, such as IDE, SCSI, MFM, parallel port, and serial connections, because these will probably involve different drivers.
82. Function equivalence. For each mathematical function, check the output againsta known good implementation of the function in a different program. Complete
coverage involves equivalence testing of all testable functions across all possibleinput values.
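A minimal sketch of function equivalence testing, using an invented Newton's-method square root as the implementation under test and Python's `math.sqrt` as the trusted known-good oracle:

```python
import math

def newton_sqrt(x, iterations=40):
    """Square root via Newton's method -- the (invented) implementation under test."""
    if x == 0:
        return 0.0
    guess = x if x >= 1 else 1.0
    for _ in range(iterations):
        guess = (guess + x / guess) / 2.0
    return guess

def check_equivalence(values, rel_tol=1e-9):
    """Compare the implementation under test against the oracle;
    return the inputs where the two disagree."""
    failures = []
    for v in values:
        if not math.isclose(newton_sqrt(v), math.sqrt(v), rel_tol=rel_tol):
            failures.append(v)
    return failures
```

Complete coverage would drive `values` across the whole testable input range; any non-empty result is a bug in one of the two implementations.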
83. Zero handling. For each mathematical function, test when every input value,intermediate variable, or output variable is zero or near-zero. Look for severerounding errors or divide-by-zero errors.
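A sketch of zero handling, assuming a hypothetical guard policy that rejects zero and near-zero denominators rather than letting a divide-by-zero (or a wildly inaccurate result) escape:

```python
def safe_ratio(numerator, denominator, eps=1e-12):
    """Ratio guarded against zero and near-zero denominators.
    The eps threshold and the raise-on-zero policy are illustrative choices."""
    if abs(denominator) < eps:
        raise ValueError("denominator is zero or dangerously close to zero")
    return numerator / denominator
```

Zero-handling coverage means testing this at exactly zero, just inside the threshold on both signs, and with a zero numerator.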
84. Accuracy of every graph, across the full range of graphable values. Include values that force shifts in the scale.
85. Accuracy of every report. Look at the correctness of every value, the formatting of every page, and the correctness of the selection of records used in each report.
86. Accuracy of every message.
87. Accuracy of every screen.
88. Accuracy of every word and illustration in the manual.
89. Accuracy of every fact or statement in every data file provided with the product.
90. Accuracy of every word and illustration in the on-line help.
91. Every jump, search term, or other means of navigation through the on-line help.
92. Check for every type of virus / worm that could ship with the program.
93. Every possible kind of security violation of the program, or of the system while using the program.
94. Check for copyright permissions for every statement, picture, sound clip, or other creation provided with the program.
95. Verification of the program against every program requirement and published specification.
96. Verification of the program against user scenarios. Use the program to do real tasks that are challenging and well-specified. For example, create key reports, pictures, page layouts, or other documents to match ones that have been featured by competitive programs as interesting output or applications.
97. Verification against every regulation (IRS, SEC, FDA, etc.) that applies to the data or procedures of the program.
98. Usability tests of:
99. Every feature / function of the program.
100. Every part of the manual.
101. Every error message.
102. Every on-line help topic.
103. Every graph or report provided by the program.
104. Localizability / localization tests:
105. Every string. Check the program's ability to display and use this string if it is modified by changing the length, using high or low ASCII characters, different capitalization rules, etc.
106. Compatibility with text handling algorithms under other languages(sorting, spell checking, hyphenating, etc.)
107. Every date, number and measure in the program.
108. Hardware and drivers, operating system versions, and memory-resident programs that are popular in other countries.
109. Every input format, import format, output format, or export format that would be commonly used in programs that are popular in other countries.
110. Cross-cultural appraisal of the meaning and propriety of every string and graphic shipped with the program.
What if there isn't enough time for thorough testing?
Use risk analysis to determine where testing should be focused. Since it's rarely possible to test every possible aspect of an application, every possible combination of events, every dependency, or everything that could go wrong, risk analysis is appropriate to most software development projects. This requires judgement skills, common sense, and experience. (If warranted, formal methods are also available.) Considerations can include:
o Which functionality is most important to the project's intended purpose?
o Which functionality is most visible to the user?
o Which functionality has the largest safety impact?
o Which functionality has the largest financial impact on users?
o Which aspects of the application are most important to the customer?
o Which aspects of the application can be tested early in the development cycle?
o Which parts of the code are most complex, and thus most subject to errors?
o Which parts of the application were developed in rush or panic mode?
o Which aspects of similar/related previous projects caused problems?
o Which aspects of similar/related previous projects had large maintenance expenses?
o Which parts of the requirements and design are unclear or poorly thought out?
o What do the developers think are the highest-risk aspects of the application?
o What kinds of problems would cause the worst publicity?
o What kinds of problems would cause the most customer service complaints?
o What kinds of tests could easily cover multiple functionalities?
o Which tests will have the best high-risk-coverage to time-required ratio?
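The questions above can feed a simple prioritization worksheet. A sketch with invented areas and scores, using the common likelihood-times-impact heuristic (one informal method among several; the 1-5 scales and scores are illustrative):

```python
# Illustrative risk worksheet: score each area for likelihood of failure
# and impact of failure (1-5 each), then test highest-risk areas first.
areas = [
    {"name": "payment processing", "likelihood": 4, "impact": 5},
    {"name": "login",              "likelihood": 2, "impact": 5},
    {"name": "report formatting",  "likelihood": 3, "impact": 2},
    {"name": "help screens",       "likelihood": 2, "impact": 1},
]

def prioritize(areas):
    """Order test areas by risk score (likelihood * impact), highest first."""
    return sorted(areas, key=lambda a: a["likelihood"] * a["impact"], reverse=True)
```

Testing then proceeds down the ranked list until the available time runs out.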
Defect reporting
The bug needs to be communicated and assigned to developers that can fix it. After the problem is resolved, fixes should be re-tested, and determinations made regarding requirements for regression testing to check that fixes didn't create problems elsewhere.
The following are items to consider in the tracking process:
Complete information such that developers can understand the bug, get an idea of its severity, and reproduce it if necessary:
o Bug identifier (number, ID, etc.)
o Current bug status (e.g., 'Open', 'Closed', etc.)
o The application name and version
o The function, module, feature, object, screen, etc. where the bug occurred
o Environment specifics, system, platform, relevant hardware specifics
o Test case name / number / identifier
o File excerpts / error messages / log file excerpts / screen shots / test tool logs that would be helpful in finding the cause of the problem
o Severity level
o Tester name
o Bug reporting date
o Name of developer / group / organization the problem is assigned to
o Description of fix
o Date of fix
o Application version that contains the fix
o Verification details
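The items above can be collected into a single record. An illustrative sketch (the field names and types are assumptions for illustration, not a prescribed schema):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class BugReport:
    """Illustrative record holding the tracking fields listed above."""
    bug_id: str
    status: str                # e.g. "Open", "Closed"
    application: str
    version: str
    location: str              # function / module / feature / screen
    environment: str           # system, platform, relevant hardware
    severity: str
    description: str           # plus attached excerpts, logs, screen shots
    tester: str
    reported_on: str
    assigned_to: Optional[str] = None
    fix_description: Optional[str] = None
    fixed_in_version: Optional[str] = None
    verified: bool = False
```

A real tracking system adds workflow around such a record: assignment, re-test, and notification at each status change.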
A reporting or tracking process should enable notification of appropriate personnel atvarious stages.
Types of Automated Tools
1. code analyzers - monitor code complexity, adherence to standards, etc.
2. coverage analyzers - these tools check which parts of the code have been exercised by a test, and may be oriented to code statement coverage, condition coverage, path coverage, etc.
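To make the idea concrete, a toy statement-coverage analyzer can be built on Python's standard sys.settrace hook (real coverage analyzers are far more sophisticated, but the principle is the same):

```python
import sys

def trace_lines(func, *args):
    """Toy statement-coverage analyzer: run func and record which of its
    lines execute, numbered relative to the `def` line."""
    executed = set()
    code = func.__code__
    def tracer(frame, event, arg):
        if event == "line" and frame.f_code is code:
            executed.add(frame.f_lineno - code.co_firstlineno)
        return tracer
    sys.settrace(tracer)
    try:
        func(*args)
    finally:
        sys.settrace(None)  # always restore, even if func raises
    return executed
```

Lines never reported as executed point at untested branches, which is exactly what a commercial coverage analyzer reports at larger scale.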
3. memory analyzers - such as bounds-checkers and leak detectors.
4. load/performance test tools - for testing client/server and web applications under various load levels.
5. web test tools - to check that links are valid, HTML code usage is correct, client-side and server-side programs work, and a web site's interactions are secure.
6. other tools - for test case management, documentation management, bug reporting, and configuration management.
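The first step of a link-validity checker, sketched with Python's standard html.parser (actually validating each target would require fetching it, which is omitted here):

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collect href targets from a page -- the input to a link-validity check."""
    def __init__(self):
        super().__init__()
        self.links = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def extract_links(html):
    collector = LinkCollector()
    collector.feed(html)
    return collector.links
```

A full web test tool would then request each collected URL and flag any that fail.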
Condition          Case description                          Expected result
Focus on fax no.   Enter valid fax numbers                   It should accept the number entered
                   Enter alphabets; check for the message    It should pop up an error message