TRANSCRIPT
Testing across boundaries
TESTER – Essential for software testers
September / October 2010, number 5 (£4 / €5)
Including articles by:
Rachael Wettach and Paul Malcolm (Simpl), Thomas Frühauf (SYNLOGIC), Chris Schotanus (Logica)
SUBSCRIBE – it’s FREE for testers
Making testing more systematic
How to work everywhere at once

Many of the best software testers, consultancies
and service providers work for long periods for few
organizations. What they have learned might be
very valuable to certain other testers, or even
provide ideas transferable to testing in general, but
it tends to remain in its silo. In this issue, we're
especially grateful to those contributors who have
generously shared examples of their work for
readers to study and compare with their own.
Professional Tester's mission is to provide practical
help and inspiration to testers everywhere,
because we believe software quality affects
people's lives and is important. Whatever your role,
you are invited to tell us about what you're doing,
how you've met its challenges and solved its
problems, and your opinions.
You don't have to be a brilliant writer or spend lots
of time; it's your knowledge and understanding we
want, on as wide or narrow a topic as you like. You
can even use a language other than English if you
prefer. We know that serving your employer or client comes first, and we will do all the work needed to prepare your submission for publication.
Editor: Edward Bishop
Managing Director: Niels Valkering
Art Director: Christiaan van Heest
Sales: Rikkert van Erp
Publisher: Jerome H. Mol

Contributors to this issue: Rachael Wettach, Paul Malcolm, Thomas Frühauf, Chris Schotanus

Contact
From the editor
We aim to promote editorial independence and free debate: views expressed by contributors are not necessarily those of the editor nor of the proprietors. ©Test Publishing Ltd 2010. All rights reserved. No part of this publication may be reproduced in any form without prior written permission. “Professional Tester” is a trademark of Test Publishing Ltd.
Please send a short email about your idea to
[email protected]: I read and
respond personally to everything received.
In the next issue: best-of-breed tools
The same problem applies to many excellent
testing tools. Some of the most successful remain
in their niche and their creators are so busy
working for a few delighted clients they never reach
a wider usership. The perfect solution to a difficult
problem may well be out there, but how to find it?
Evaluating one tool properly is a major
undertaking. Working through long lists of them is
impossible. And the one you need may not be on
the list anyway.
In the last PT of 2010, we will learn more about
some important tools and what makes them so
good. Hopefully, we will also unearth some little-
known gems. Please help us by tipping us off
about the tool, of any type, that helps you most
with your testing.
Edward Bishop, Editor
Professional Tester is published by Test Publishing Ltd.
IN THIS ISSUE: Making testing more systematic
Feature
ON THE WEB
Visit professionaltester.com for the latest news and commentary
Test documentation: getting the best from the people who matter most – Rachael Wettach and Paul Malcolm's way to make standard documents more standard
We've got it covered – just: Thomas Frühauf's way to control regression testing expansion using coverage measurement
Changing tests weakens them: Edward Bishop on TOSCA Testsuite's way to avoid test maintenance and its dangers
TestFrame: From Test Policy To Test Cases – Chris Schotanus explains how Logica integrates strategic, tactical and operational testing
Rude Coarse Analysis: What made nationalrail.co.uk freeze when its users needed it the most?
Geoff Quentin's Consolidated Test Process will continue in the next issue
The View From Kent: The Professional Tester Reader’s Award, worldwide test qualifications
Advertising feature
PT - September 2010 - professionaltester.com
Rachael Wettach and Paul Malcolm share some of Simpl's templates and the help Simpl provides to its clients in using them

Effective, accurate project and test documentation may not be fashionable in development-led environments such as commercial software houses and web-only ventures, but for testers with wider and greater responsibilities it’s still essential. It plays an important role in defining responsibilities at every level; it is the most formal, visible and auditable means of communication between all involved; most importantly, it is the fabric and fuel of the best and most effective test technique of them all – the review.

Every organization can benefit from improving its documentation procedures, and testing as an industry could be transformed by creating greater commonality of structure and content of the fundamental elements. Templates in standards such as IEEE 829 and examples in books are useful but neither tells one explicitly enough how to actually go about compiling and writing the necessary information, especially in the hardest, early stages. Interpretation and implementation vary widely even between experienced, dedicated testers, and they need input from others such as subject matter experts who cannot be expected to have or gain expert knowledge of how what they provide will be used and the implications if it is incomplete or inaccurate. This variation creates problems, risks and inefficiencies when teams and organizations collaborate.
So when we at Simpl work towards effective business solutions with our client organizations in the UK’s National Health Service and private healthcare, we offer them not just templates but guidance on how to complete them – a recipe for a more complete, higher-quality first draft which will make the review process faster and even more effective at reducing risk. Below are some sections excerpted from two of the templates, including that guidance. All the templates carry the usual configuration management elements including history of change, distribution lists and sign-off, and require that every paragraph is numbered for traceability, but we have omitted all this information here to make the guidance text more readable.
We do not say that the templates represent the right or best way to build test documentation, but suggest that as well as being a useful way to assist and capture information from external testers and other project participants, verbose, explicit description of this kind as opposed to sparse templates and frameworks that try to be very general may be a route towards wider standardization of test documentation and all the advantages that might bring.
Test documentation: getting the best from the people who matter most
by Rachael Wettach and Paul Malcolm

The wide variation in test documentation and the risk caused by poor examples is unacceptable. Testers need to work harder on standardizing our most fundamental tool and technique.
How to read the templates

{Text in curly brackets} is guidance on how to complete the document. It should be deleted before use or delivery.
Italic text is part of the document and should remain, but can be edited according to the guidance.
Underlined headings and subheadings must remain in the document unchanged.
Excerpts from the test strategy template
INTRODUCTION

Project background

{Identify and describe briefly the project. Include enough information to distinguish it from any similar or related projects including its start date, any external participants (customers and partners) and any known alternative titles for or references to it used by others}

Document purpose

The purpose of this document is to:
• define the testing process to be carried out
• explain the overall approach to testing to be used
• describe how separate parts of the testing effort are to be integrated
• describe in overview the strategy for progressing from one test phase to the next
• describe in overview the strategy for progressing from the final test phase to delivery

{Delete any that do not apply and add any special purposes that apply to this specific project}

Related documents

The following documents were referred to in the preparation of this test strategy.

{Identify all relevant documents and other sources including their configuration management information and, where applicable, from where they were obtained. Include also the date ranges on which they were used in case of CM failure}
Glossary
All test documentation for this project is to use the following definitions of terms.
{Define any terms used that you believe any reader may not know or may misunderstand, paying particular attention to acronyms, abbreviations, IT components, test phases, test types}
TEST RISKS/ASSUMPTIONS

Project-related risks

{State how project risks have been identified and where they have been documented}

As the project proceeds further risks may be identified and documented. In that case, this test strategy may need to be modified to maintain alignment with identified risks.

{State how the risk information will be used by testing}

Information about project risks will be used to:
• decide the areas on which testing will focus
• quantify the testing effort required by each area
• prioritize testing activities accordingly

{Add any additional uses/influences of the risk information applying to this specific project}

Test-related risks

The following high-level test-related risks have been identified.

{List all the identified risks that could impact effectiveness or timely completion of testing. For each, detail the possible undesirable event, envisaged consequences if it occurs, and any action you can suggest that might mitigate it}

Assumptions

The following assumptions have been made in preparing this test strategy and if they are found not to hold the strategy may need to be modified.
{List all assumptions made, whether or not you believe they are safe. There is no need to justify them}
TEST OVERVIEW

Overall approach

{Relate what has been decided about the approach(es) to be taken to testing, with particular attention to anything distinctive about this specific project}

{If there is to be more than one test plan document, list them all}

{If the test effort is to be divided into time periods, list them and what is intended to be achieved during each. Identify also any apparent overlaps or gaps}

{If the test effort is to be divided by entity or team, list them and what is intended to be achieved during each. Identify also any apparent overlaps or gaps}

In scope

The following are to be tested and are covered by this test strategy.

{List the areas and features of the product that are to be tested}

Out of scope

The following are not to be tested and are not covered by this test strategy.

{List areas and features of the product that are not to be tested}

{List connected or related systems, peripherals and interfaces that are not to be tested}

{List any types of testing or product characteristics not covered by this test strategy}

TEST STRATEGY

Test phases
The testing project will be divided into the following test phases:
• unit
• component integration
• system
• system integration
• user acceptance
• operational acceptance
{Delete any that are not to be done, add any others that are, and substitute alternative locally-used names where appropriate. Define what each additional or renamed phase is intended to achieve in the glossary}
Entry and exit criteria
{For each test phase, list the events that must be shown to have taken place before the activities it includes can be started, in this format:}
At or before the commencement of {insert test phase, eg unit testing} the following events must take place and be documented and approved:
{Always include detail of:
• review and approval of relevant project documents
• prerequisites including resources, test environments, test data, tools and technical support
• definition and approval of an issue management process
then add any additional entry criteria applying to this specific project, for example external approvals, auditing etc}
{Now for each test phase except the last, list the events that must be shown to have taken place before the activities it includes can be ceased, in this format:}
At or before the end of {insert test phase} the following events must take place and be documented and approved:
{Always include detail of proportions or numbers of test cases not passed, outstanding issues and unmitigated project risks, all broken down by priority. Then add any additional exit criteria applying to this specific project, for example external approvals, auditing etc}
{For the final test phase, list the events that must be shown to have taken place before final delivery}
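Numeric exit criteria of the kind described above can be checked mechanically once the counts are broken down by priority. A minimal Python sketch, in which the priority names and thresholds are illustrative assumptions rather than values from the template:

```python
# Sketch: evaluate numeric exit criteria for a test phase.
# The priority names and limits below are illustrative assumptions;
# real values would come from the agreed test strategy.
MAX_OPEN_FAILURES = {"critical": 0, "high": 0, "medium": 3, "low": 10}

def exit_criteria_met(open_failures_by_priority):
    """Return True if the number of unresolved test-case failures at
    every priority level is within its agreed threshold."""
    return all(
        open_failures_by_priority.get(priority, 0) <= limit
        for priority, limit in MAX_OPEN_FAILURES.items()
    )

print(exit_criteria_met({"critical": 0, "medium": 2}))  # True
print(exit_criteria_met({"high": 1}))                   # False
```

The same shape of check works for entry criteria: replace the failure counts with counts of unapproved documents or unready prerequisites.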
Test types
The testing project will include the following test types:
• functional
• functional (GUI)
• regression
• load
• stress
• infrastructure
• connectivity
• disaster recovery
• failover
• backup/restore
• security
• contract acceptance
{Delete any that are not to be done, add any others that are, and substitute alternative locally-used names where appropriate. Define what each additional or renamed type is intended to achieve in the glossary}
{Then for each test type append the test phases which will include it. For example:
• functional – in all test phases
• load – in system and system integration
If you are not sure whether a phase will need to include a test type, assume it will}
TEST MANAGEMENT

Test deliverables

Before or at its completion the testing project will deliver the following documents:

{List all documents currently planned to be delivered. Include always:
• the test plan for each of the test phases identified in the previous section
• the test summary report for each of the test phases identified in the previous section
• the test issue report for each of the test phases identified in the previous section
• specification of the test material, ie test cases, scripts and data. This may form a separate document for some or all of the test phases, or one document may contain material for more than one phase
• a vehicle to provide traceability between all project requirements, risks, test cases and issues
• a final test summary report covering all test phases}
Excerpts from the test summary report template

INTRODUCTION

Test phase background

{Identify the project and the test phase reported. Reference the test strategy for the project. State the actual start date of this phase, the start and end date of the period being reported and the actual end date of the phase if it is complete}

Document purpose

The purpose of this document is to:
• communicate the findings of testing
• summarize the results of testing
• provide auditability

{Add any special purposes applying to this specific project}

TEST OVERVIEW

Test progress

{Assess briefly the progress of the activities planned for this phase including test analysis, design, specification and execution, (a) during and (b) up to the end of the period reported. State the current projected date for the end of this phase}

Major issues

{Describe briefly all test issues that have occurred in the period reported and that have affected or may affect test progress. For each, quote its issue management system reference}
Exit criteria
{State the exit criteria for the phase as defined in the test strategy}
{If this is not the first test summary report produced in the project, reference the previous one and detail any changes to the exit criteria made since the end of its reporting period, with brief explanation of why they were deemed necessary}
{State whether or not the current exit criteria have been met during the period reported}
Scope
{If this is the first test summary report produced in the project, reference the test strategy and detail any changes to the items in scope made up to the end of the period reported; or, if this is not the first test summary report produced in the project, reference the previous one and detail any changes to the items in scope made since the end of its reporting period, with brief explanation of why they were deemed necessary}
Risks identified
{List any new risks identified in the period reported. For each, detail the possible undesirable event, envisaged consequences if it occurs, and any action you can suggest that might mitigate it}
TEST RESULTS

Test execution progress

{State the number of test cases and/or scripts intended to be executed during this phase (i) not yet ready for execution; (ii) of which execution has not yet been attempted; (iii) of which execution has been attempted but was not completed successfully}

{State the number of test cases and/or scripts executed successfully within this phase, (a) during and (b) up to the end of the period reported}
Issues
{State the number of issues currently open, broken down by criticality as defined in the issue management system used for this phase}
{State the number of issues (i) raised; (ii) retested and closed, (a) during and (b) up to the end of the period reported}
Simpl acts as a trusted IT advisor and integrator to healthcare organizations. For details see http://simplgroup.com
Thomas Frühauf describes a practical method and how it was developed

It is often said that it is better to concentrate on improving the software process than on improving individual products. Because a mature process is repeated, improving it brings widespread rather than localized benefit. This idea is very useful when applied to the QA and testing parts of the process, which should aim to follow the same procedures and rules for all products. It's less applicable to development: after all, repeating a development process exactly would result in the same product.
In order to get new and updated software into production as quickly and cheaply as possible developers need to find new and changed ways of working. That can impact on testing in unexpected ways, as happened when we at SYNLOGIC attempted to improve our testing process by basing the extent of testing on code coverage.
Testing vs development effort: improving the ratio

When considering purchase of third-party software, I like to ask questions about the process used to deliver it. A salesperson once answered one by saying “the testing effort invested in this product is four times the development effort”. At first I thought this figure might be skewed by including random, disorganized testing, but no: it referred to systematic work, and on explanation I realised that approximately the same applies to our own process. The bulk of the testing is for regression, re-running test cases which have already passed. Every time a new feature is released or a new defect is found, the regression test case library is extended. Although execution is automated to a large extent, it still takes very significant time and effort, purely in order to prevent reoccurrence of failure caused by defects already found and fixed.

This prompted a discussion. Must quality be so expensive? Could a similar level of assurance against regression be achieved with fewer test cases, freeing resources for more effective and creative testing? We agreed it probably could, but how to know which cases to keep and which to discard?
We've got it covered – just
by Thomas Frühauf

Effectiveness and efficiency of testing can both be improved by measuring the code coverage achieved
Deciding how much testing is enough is a long-standing challenge in testing. This particular problem, deciding how much regression test execution is enough, ie how many test cases it should include, can be approached by analyzing the change that has triggered the regression testing cycle. Understanding that can provide guidance as to in what functions regression is least likely to have occurred. However that understanding can be hard to achieve, especially when development is non-sequential and change is continuous, as in agile project management styles (in our case, Scrum).
We settled on another approach. Starting from the first release (we use a monthly release cycle), we would measure the code coverage achieved by our tests. If it was insufficient, we would add more tests
by the next release, aiming to increase it. With each release more tests would be created and executed, but not necessarily added to the regression library: that would happen only if the coverage achieved by it fell below the target, typically when new code was added to classes and/or new classes were added to the build.
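The admission policy just described reduces to a simple rule: a newly written test joins the regression library only while measured coverage is still below the target. A hedged Python sketch of that rule; the target figure and the data shapes are assumptions for illustration, not SYNLOGIC's actual values:

```python
# Sketch of the admission rule: tests written for a release are all
# executed, but are added to the regression library only while
# measured coverage remains below the target.
COVERAGE_TARGET = 0.95  # illustrative target, not SYNLOGIC's real figure

def select_for_library(current_coverage, candidate_tests, coverage_gain):
    """candidate_tests: tests written for this release, in order.
    coverage_gain: dict mapping test name -> coverage fraction it adds.
    Returns the subset of candidates to add to the regression library."""
    selected = []
    for test in candidate_tests:
        if current_coverage >= COVERAGE_TARGET:
            break  # target met: stop growing the library
        selected.append(test)
        current_coverage += coverage_gain.get(test, 0.0)
    return selected

# Example: coverage fell to 0.90 after new code was added to the build
picked = select_for_library(0.90, ["t1", "t2", "t3"],
                            {"t1": 0.03, "t2": 0.04, "t3": 0.02})
print(picked)  # ['t1', 't2'] – t3 is still executed, just not kept
```

This is what controls library growth: every test runs at least once for its release, but only the ones needed to restore the coverage target become permanent regression overhead.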
There are many different measurements of code coverage, but for our purposes the simplest – the proportion of “blocks” of code executed during testing, where a block is a sequence of instructions with one entry and one exit point – was judged most appropriate. That meant we did not need one of the many sophisticated coverage measurement tools available, but could use features built into our current development environment, Microsoft Visual Studio. First, the
command line utility VSInstr is used to instrument (ie add probes to) all the binary files to be measured:
C:\testdir>vsinstr -coverage filename.dll
Next the coverage monitoring utility VSPerfMon is started. This will generate the code coverage information and store it in a new file:
C:\testdir>start vsperfmon -coverage -output:filename.coverage
Now the regression test library is executed, then coverage monitoring is halted:
C:\testdir>vsperfcmd -shutdown
The content of the coverage file can now be viewed in Visual Studio (figure 1).
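The figure Visual Studio reports is simply the proportion of instrumented blocks that the probes saw executed. A minimal sketch of that arithmetic; the data structures here are invented for illustration, since a real .coverage file is binary and would be read through Visual Studio's coverage analysis API:

```python
# Sketch: block coverage = executed blocks / instrumented blocks.
# The dictionaries are invented illustration data; VSPerfMon's actual
# .coverage output is binary and is read via the Visual Studio API.

def block_coverage(blocks_per_class, executed_blocks):
    """blocks_per_class: dict class name -> total instrumented blocks.
    executed_blocks: dict class name -> blocks hit during the test run.
    Returns (overall fraction, per-class fractions)."""
    per_class = {
        cls: executed_blocks.get(cls, 0) / total
        for cls, total in blocks_per_class.items()
    }
    total = sum(blocks_per_class.values())
    hit = sum(executed_blocks.get(cls, 0) for cls in blocks_per_class)
    return hit / total, per_class

overall, per_class = block_coverage(
    {"OrderForm": 40, "Validator": 60},   # hypothetical class names
    {"OrderForm": 30, "Validator": 26},
)
print(f"{overall:.0%}")  # 56% – the kind of overall figure described below
```

The per-class breakdown matters as much as the overall number, as the rest of the article shows: a low overall figure can hide classes that are already well covered.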
A nasty surprise

The first time we did this, overall coverage achieved by what we thought was a comprehensive test case library was a shocking 56%. This turned out to be due to the presence of a lot of code that had not yet been made accessible via the user interface. In other words, the developers had deliberately included unreachable code they were still developing and unit testing and which our black box, UI-based tests could not cover. We considered asking them to remove this code from builds delivered to testing. That is not difficult to achieve using configuration management, but creates extra work that is arguably not appropriate to Scrum and similar project management styles.
Patience and its reward

So instead we take advantage of the hierarchical breakdown of coverage of individual classes and objects provided by Visual Studio's display (see figure 1). Using this the developers can tell us, quickly and informally, which are probably showing low coverage because of unreachable blocks, and which contain few or no unreachable blocks so can be better covered immediately by adding more test cases. In the second case, they can also give useful information about the functions that employ that class, helping us to design test cases specifically to increase coverage. Although the overall coverage may remain low due to the unreachable code, at this stage we aim for very high coverage of the reachable code. In the future, we will look into the possibility of using also a static analysis tool to discover and measure the unreachable code ourselves.

Figure 1: Microsoft Visual Studio's hierarchical display of coverage information
When refactoring either “activates” or removes unreachable code, and removes other redundant code, coverage increases without adding test cases. So by the time of the first production release, the regression library contains sufficient test cases to achieve very high coverage – typically 95% or better. More importantly, it does not include many tests that are not
necessary to achieve that coverage, because its growth has been controlled consistently during development. Executing a regression test cycle takes about 30% less effort than if all new tests had been added to the library without reference to coverage measurement.
We enter the production and maintenance phase with a complete but efficient regression library. After that, releases are far less frequent, so we have comfortably enough time to keep it so and to achieve very complete coverage and assurance of each new release before it goes live, no matter how much new and refactored code it deploys.
Thomas Frühauf is a Senior Consultant for IT Quality & IT Process Management at SYNLOGIC (http://www.synlogic.ch), a Swiss software and consulting company specializing in optimization of BPM and IT projects for medium and large enterprises. He is grateful to Ernst Lebsanft, CEO of SYNLOGIC, for helpful discussions contributing to this article.
Next issue (#6, 15 November 2010): Best-of-breed tools
Test automation can save effort, but that is not the most important reason for doing it. While efficiency is desirable, effectiveness must be paramount in testing. Labour-intensive, time-consuming testing that detects defects is preferable to fast, cheap testing that misses them. Test tools are used because they are better at certain tasks than people:

• a person designing test cases, even using techniques explained in standards and books, makes arbitrary choices; a tool applies a defined algorithm

• people can interpret a test procedure, however well written, in various ways, and even an individual person can introduce variation in their interpretation over time; a tool executes an automated test with absolute consistency

• a person can fail to notice discrepancy between actual and expected outcomes; a comparator cannot

• a person takes selective, sample measurements which may be inappropriate or unrepresentative; a tool gathers all available data and analyses it statistically.

So automating test execution makes testing more consistent; automating test checking makes testing more effective; automating test reporting makes testing more accurate; and automating test case design makes testing more systematic. And finally, automation can reduce manual effort, freeing human resources for creative, problem-solving, process-improving work that brings products into use or to market faster and at lower cost.
Edward Bishop describes how TOSCA Testsuite™'s dynamic business steering eliminates test maintenance
Changing a test to make it run means it’s not the same test

As every tester knows, there are practical obstacles to achieving these benefits. The most obvious and best-understood one is that maintaining automated tests when the user interface of the system under test is changed by its developers is problematical. Various approaches to helping with this exist, including scripting languages, keyword- and data-driven frameworks, and image recognition. All can work well in certain situations, but all have disadvantages too. It is better, as it always has been, to control and restrict change as much as possible by effective early-lifecycle testing of requirements, specifications and designs.
However even if that is achieved and an interface design remains unchanged throughout development, the extent to which tests that drive the interface can be automated is limited by that design itself. This more common and fundamental problem continues to worsen as front-end technologies become more advanced, giving designers new tools to build increasingly complex and interactive user experiences which are self-modifying depending on the changing data they present.
As a simple example, consider an application which displays a list of pending transactions and where each item in the list contains not only information but navigation elements and input controls: perhaps, an “expand” button the operator can click to view the transaction in more detail and update it.
This is not an advanced or unusual design: commonplace business administration applications built in, for example, SAP typically work in this way. So do web applications with search and
Changing tests weakens them
by Edward Bishop

Systematic automated testing means basing tests on business requirements alone. Massaging them to fix instability is not testing but development and reduces their value
browse functions, trading platforms and many others. An automated test including such operations will fail to execute if the position and/or representation of the transaction in the list changes because of other events, for example the creation and modification of other transactions. It may also report test failure incorrectly because its expected outcomes include the presence of information whose display position has moved (perhaps out of a viewing window) or format has changed for similar reasons.
The test maintenance approaches mentioned above have evolved to solve not this problem but a completely different one: change to the static design of the interface, not normal variation in what is displayed which is a correct part of its design. Using them to modify the test to make it run introduces risk and inefficiency. Continual manual engineering of scripts, framework data or images takes significant effort and causes the test to diverge from its original design based on test analysis of business requirements; systematic testing is replaced by parallel development of a system and its tests (often by developers) to keep them compatible. The only other method available is repeated reset of the test environment and data to a defined “start state”. Whichever is used, testing becomes limited and unrepresentative. It loses potential to detect defects.
The only solution to the real test maintenance problemTOSCA Testsuite, a powerful test management, design, execution and data generation toolset, includes TOSCA Commander, the only tool that can create tests that will execute repeatedly without maintenance in this situation. It enables automation where other tools are defeated, enabling far more testing to be automated. It achieves this by making the entire test, not just its input data, truly dynamic.
Like some other contemporary tools, TOSCA includes a “wizard” that analyses user screens or windows (figure 1),
Figure 1: a sequence of three screens (1a, 1b, 1c) from an HTML application
created using virtually any development technology, identifying and capturing information about control objects. Uniquely, it then uses that information to represent the controls in an alternative interface, called a module (figure 2), designed to make creation of effective test cases easy. The tester:
• accesses all input and output objects directly, maintaining the relationship between them and contextual information such as labels; it is not necessary to navigate through the interface to the desired controls, and because the interface design and its technical implementation always remain linked, the identity and meaning of each object is always readily apparent, making test case design simple and durable
• defines test inputs based purely on analysis of business requirements, using where appropriate TOSCA’s built-in test generation facilities which include implementation of all-combinations, pairwise and orthogonal-array techniques
• enters or selects specific expected outputs that will cause test incidents if not found present and correct at execution
• captures and stores outputs for use in dynamic generation of further test inputs.
The last activity is the most important. It enables the tester to base any or all of the test inputs on variables whose values are determined when the test is executed, not absolute values which must be known when the test is created and limit its scope when they change. The path through the test procedure is never stored, but is generated when the test is executed from the definition of the test alone. Once a test has been dynamized in this way, its execution remains stable regardless of how the SUT’s data changes. The test can fail to execute only if the system under test behaves in an unexpected way: in other words, when a test incident occurs.
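TOSCA's implementation is proprietary, but the idea of a dynamized test can be illustrated with a minimal sketch: test inputs reference variables bound at execution time rather than literals fixed at design time. All names here (`VarStore`, the transaction ID) are hypothetical.

```python
# Hypothetical sketch of a dynamized test step: inputs may reference
# variables whose values are captured from SUT output at run time.
class VarStore:
    def __init__(self):
        self._vars = {}

    def capture(self, name, value):
        # store an output observed during execution for use by later steps
        self._vars[name] = value

    def resolve(self, template):
        # replace {name} references with values captured earlier
        return template.format(**self._vars)

store = VarStore()
# step 1: the SUT assigns an identifier we could not know at design time
store.capture("txn_id", "TXN-20100907-042")
# step 2: a later input is generated from the captured value
next_input = store.resolve("open transaction {txn_id}")
```

Because `next_input` is computed at execution time, the test remains stable however the SUT's data changes between runs.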
Tricentis call this concept business dynamic steering and believe its use can
increase the extent to which testing is automated from the typical 30% to up to 80%. Depending on accuracy and quality of business requirements definition and test analysis, that means a similar increase in test systematization.
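The pairwise technique mentioned among TOSCA's built-in generation facilities can be sketched generically; this greedy algorithm is a common textbook approach, not necessarily TOSCA's own, and the factor names are invented for illustration.

```python
from itertools import combinations, product

def pairwise_suite(factors):
    """Greedy pairwise generation: repeatedly pick, from the full
    cartesian product, the row covering the most not-yet-covered value
    pairs, until every pair of values from any two factors is covered."""
    names = list(factors)
    rows = list(product(*(factors[n] for n in names)))
    idx_pairs = list(combinations(range(len(names)), 2))
    # every (factor-index pair, value pair) that must appear in the suite
    uncovered = {((i, j), (row[i], row[j])) for row in rows for i, j in idx_pairs}
    suite = []
    while uncovered:
        best = max(rows, key=lambda r: sum(
            ((i, j), (r[i], r[j])) in uncovered for i, j in idx_pairs))
        suite.append(best)
        uncovered -= {((i, j), (best[i], best[j])) for i, j in idx_pairs}
    return suite

factors = {"browser": ["IE", "Firefox"],
           "os": ["XP", "Vista", "Linux"],
           "locale": ["en", "de"]}
tests = pairwise_suite(factors)
```

For these factors the full product has 12 combinations; the pairwise suite covers every two-factor value pair with fewer rows, which is the source of the effort saving.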
What if the interface design does change?
Because a module is a logical representation of the interface, most changes are handled automatically. TOSCA simply re-analyzes the screens and windows it represents, updating the module as needed wherever objects have changed or even moved between screens. Test cases created using the module remain unaffected, and new ones to test the changes are added in exactly the same way as before the change.
Where the changes to the interface are more extensive – for example when screens or objects are added – the tester simply uses the wizard again to add what is new to the appropriate modules. It is then immediately obvious what new inputs are needed to update the existing test cases.
A free trial of TOSCA Testsuite is available at http://www.tosca-testsuite.com
Figure 2: Controls from all three screens in a TOSCA Testsuite module
Test Logica yourself
Quality, “fit for purpose”, is what our clients need. To avoid risks and meet expectations, that is why we test. We have earned a reputation. Will you join us to uphold it?
Testing is a profession, not a side issue. We look for professionals who understand this. You act as devil’s advocate, you see the impact of failure, and you share your view in open and honest communication. We value your professionalism and your efforts to develop yourself, Logica and our clients. We offer challenging assignments at blue chip companies, a vibrant international community of peers and plenty of opportunity to enjoy and build your career with us.
DO YOU SHARE OUR AMBITIONS? If so, go to www.werkenbijlogica.nl and meet our colleagues at the Logica Quest for freedom, career and knowledge. Or send an email to [email protected]. For career possibilities in the UK, go to www.logica.com.
TestFrame: From Test Policy To Test Cases

by Chris Schotanus

Advertising feature

Chris Schotanus on how Logica's methodology integrates testing from top to bottom

An organization's test policy is derived from the vision and mission of the organization, and its influence should be carried right through to the design of tests themselves

The TestFrame standard for testing as defined by Logica can be represented schematically as a pyramid (figure 1). The TestFrame pyramid covers three test levels: strategic, tactical and operational. The test policy is defined at the strategic level. It's applied at the tactical level for the management of test projects. At the operational level within these projects, several test levels and types are executed using the test method, test tools and resources that are defined in the test policy.

The different levels of the TestFrame pyramid are described by Logica in three individual publications: TestGrip [1], Successful Test Management: an integral approach [2] and TestFrame: an approach to structured testing [3]. In this article these different levels are integrated with each other to form a top-to-bottom test standard.

Figure 1: the TestFrame test pyramid (Test Policy, Test Management, and Test Method and Test Tools, spanning the strategic, tactical and operational levels)

Test policy
A test policy is a translation of the company strategy into instructions forming a measurable framework for all test activities. It describes standards for the test process, the type of organization used for testing within the company and the relationship with IT policy and quality management. Such an explicitly defined test policy offers a framework for the planning and execution of all test activities. In this way testing ensures that the products fit the company's goals. This is done by measuring whether they meet quality requirements and by stating the remaining product risks.

The test policy is made to work with the quality management and the IT policy of the organization; that is why the different policies should differ only in details. As said before, the test policy describes the standards for the test process, but this content is not explicit enough to be used directly in projects. The policy is a framework against which all test activities are checked.

A first step towards actually using the test policy is the overall test strategy. Test Maturity Model Integration [4] states that at level 2 of the model, the test strategy has to be based on the test policy.

The overall test strategy describes at a high level how the test process is performed within the framework of the test policy. Starting from the test policy, one can define a number of subjects that will be applicable to every test project. For instance, the overall test strategy contains the test levels that have to be executed.

Where there is just one test policy within a company, one might define several different test strategies. Many organizations use different system environments, often called development infrastructures. These organizations still use large mainframes on which applications are developed in COBOL or PL/1. Newer platforms such as client/server, SOA, ERM and other environments that use different programming languages co-exist with the old. For every environment a specific system development method is used. In that case you may need to define a different overall test strategy for each environment.

That is what makes the overall test strategy the basis for all the test strategies that apply to individual projects, and what connects it to the tactical level: test management.

Test management
ISTQB defines test management as the planning, estimating, monitoring and control of test activities [5].

Risk and Requirement Based Testing (RRBT) [2] is the TestFrame approach for test management, shown in the test management model (figure 2).

The model describes the activities of the test manager, such as writing a test strategy, preparing an estimate and planning. It also describes the activities during the execution of a test project, such as progress and incident management, and reporting to the principal and other stakeholders. All activities in the test management model are linked; that is what makes RRBT an integral approach.

Phases in test management
The test management model can be separated into two parts: test project preparation and test project execution. The test project preparation will be performed only once in most cases. It consists of the following steps:

• Risk analysis and test strategy: The test manager identifies product risks and their priorities. This is the main phase of RRBT
• Estimate: The test manager identifies the resources that are needed to run the test project
• Planning: The test manager identifies timelines for the project and what needs to be delivered
• Test organization: The test manager identifies
who will do the testing. Are these people
system developers or people from an internal
or external pool of testers? Is testing
outsourced completely?
The right hand side of the test management
model covers the management of the test
process during test execution. Test execution
means the design of the test cases and their
actual execution. The test manager has certain
governing responsibilities and acts as a liaison
with the stakeholders. His activities are:
• Progress control: The test manager monitors
whether everything is going according to the
plan. He bases his judgement on the number
of hours spent on tasks, the number of
designed and executed test cases and the
quality of the test object
• Incident management: During test execution
the testers will face situations that deviate
from what was expected. The test manager
consults the stakeholders about these
incidents
• Reporting and advice: The test manager
writes reports at the end of each test level. At
the end of the entire test project, he writes the
end report. It contains advice about the
acceptance of the tested system
• Evaluation and transfer: At the end of the test
project the project is evaluated. Questions
such as what went wrong, what went well and
what can be improved are asked and
answered. This is when the data related to
metrics is stored. All test products are handed
over for archiving or reuse
During all test phases the test policy and the
test strategy are the basis for decisions. In
subsequent sections the product risk analysis –
one of the most important parts of RRBT – is
detailed. The product risk analysis is the link
between test analysis and test execution. That
means the “real” testing begins here.
Test project preparation: product and project risks
Risk management plays an important role in
daily management. Within a test project there
are two types of risks: test project risks and
product risks. Test project risks refer to
situations that may endanger the test project.
Insight into product risks is important in
instances where decisions must be made to
take an information system into production. In
this article we lay more emphasis on the
product risks. If you’d like to know more about
project risks during a test project, see [2].
The product risk analysis
Product risks are the risks that arise from the use
of a system by a company or organization.
Stakeholders are all those people and/or
departments that have a special interest in the
correct functioning of an information system (figure
3). They are the people for whom the test project is
being conducted. That’s why, to reach the most
accurate risk analysis, the stakeholders’ input is
essential. The stakeholders and test manager
should perform a product risk analysis together.
The test strategy clearly defines what exactly to
test and to what extent. It contains the acceptance
criteria and it forms the basis of the budget and
planning of the test project [2]. The test strategy is
the starting point for the test team during the
design and execution of the test. This makes the
product risk analysis one of the first and maybe the
most important of all test activities. The product
risks are the basis for the determination of the
focus of the test effort. A wrong product risk
analysis will reduce the effectiveness and efficiency
of all subsequent (test) activities.
Risk priority
After the determination of the risks, another equally
important step is the assignment of a priority to
each of the risks. During the creation of the test
strategy, it allows the test manager to state which
test specification techniques will be applied – the
higher the risk the stronger the technique. These
priorities also help determine the order in which the
tests will be designed and executed. The most
important risks go first. The priority of product risks
is indicated using the mnemonic MoSCoW:
Must test: the functionality must be tested,
otherwise no acceptance
Should test: it’s important to test this functionality,
but the test can be skipped after consulting the
stakeholders
Could test: if there is some time left, we will test this
Won’t test: will not be tested.
The priority of a risk is determined by, among
other things, the importance of the feature to the
organization, the risk involved if a certain part
does not work correctly, and rules and
regulations from outside the organization. The
priority that is assigned to a product risk is also
called test priority.
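As a toy illustration of how MoSCoW test priorities order the work, assuming invented feature names:

```python
# Hypothetical sketch: MoSCoW test priorities attached to product risks,
# used to order test design and execution (data is illustrative).
MOSCOW_RANK = {"must": 0, "should": 1, "could": 2, "wont": 3}

risks = [
    {"feature": "interest calculation", "priority": "must"},
    {"feature": "report layout",        "priority": "could"},
    {"feature": "login audit trail",    "priority": "should"},
]

# design and execute tests for the most important risks first
test_order = sorted(risks, key=lambda r: MOSCOW_RANK[r["priority"]])
```

Sorting by rank puts "must test" risks first, matching the rule that the most important risks go first.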
Matching of product risks and requirements
Product risks and requirements are inextricably
bound. In short, each risk must be mitigated by
defining one or more requirements. We can use
the outcome of the product risk analysis as a
first test in which we check whether the
collection of defined requirements is complete.
Note, however, that this does not yet test whether
the requirements themselves are defined correctly.
For this process to work correctly, the risks and
requirements should not have been identified
and defined in one common session and
preferably not by the same people. Prior
knowledge of the requirements will bias the
participants during the risk analysis process.
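The completeness check described above, in which every risk must be mitigated by at least one requirement, can be sketched as a simple set difference (risk and requirement identifiers are invented for illustration):

```python
# Hypothetical sketch: every product risk should be mitigated by at
# least one requirement; any risk left over signals a gap.
risks = {"R1": "unauthorized account access",
         "R2": "wrong interest calculated",
         "R3": "statement not produced on time"}

# each requirement lists the risk(s) it mitigates (illustrative data)
requirements = {"REQ-007": ["R1"],
                "REQ-012": ["R2"]}

mitigated = {risk for covered in requirements.values() for risk in covered}
unmitigated = sorted(set(risks) - mitigated)  # risks with no requirement
```

Here `unmitigated` flags R3: a risk identified by the stakeholders for which no requirement has yet been defined.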
Requirements and priorities
After all requirements have been checked
against the product risks and after we've
created a limited list of requirements and risks,
the requirements will carry two types of priority:
the test priority, derived from the risk priority as
it was identified during the product risk analysis
and the functional priority. The functional priority
indicates the importance of the functionality to
the stakeholders. The indicators for functional
priority are:
Must have: cannot be missed
Should have: almost essential; it would be very
nice to have this feature if at all possible
Could have: not essential but we could have this
if it does not affect anything else
Won’t have: not essential this time but would
like in the future.
The test priority and the functional priority are
not directly related. They can but will not
necessarily be the same. A feature of a system
that seems unimportant may become a high
risk after it’s implemented [3]. Besides product
risks, there are other elements that can cause
the test priority to be high:
• The complexity of a part of the system
• The composition of the system development
organization
• The application of new methods or techniques
During the test process test priority is used as
an indicator.
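A minimal sketch of a requirement carrying both priorities, with hypothetical identifiers, shows that the two need not coincide:

```python
# Hypothetical sketch: a requirement carries both a test priority (from
# the risk analysis) and a functional priority (from the stakeholders).
from dataclasses import dataclass

@dataclass
class Requirement:
    ident: str
    functional_priority: str  # must/should/could/wont have
    test_priority: str        # must/should/could/wont test

reqs = [
    # a seemingly minor feature can still be a high test priority
    Requirement("REQ-001", functional_priority="could", test_priority="must"),
    Requirement("REQ-002", functional_priority="must",  test_priority="should"),
]

divergent = [r.ident for r in reqs if r.functional_priority != r.test_priority]
```

Keeping the two priorities as separate fields makes the divergence explicit instead of hiding it in a single combined ranking.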
Figure 2: the test management model of RRBT. Preparation: risk analysis and test strategy, estimation, planning. Test execution: test organisation, progress management, incident management, reporting and advice, evaluation and handover.
The project test strategy
The project test strategy is the document in
which we detail the overall test strategy. This is
where the test manager documents how the
system will be tested – like the division in test
levels and sequences – using the overall test
strategy as a guide. For that, he will arrange a
number of items. A first step is the classification
of the product risks. In RRBT the risks are
classified based on the quality attributes as
defined in the standard ISO 9126 [6]. The
product risks and the requirements are divided
into the following categories (figure 3).
The project test strategy contains a cluster
matrix and a number of cluster cards. The
cluster matrix is an overview that a test
manager uses to connect stakeholders, quality
attributes and test levels. For each test level per
stakeholder the responsibility is documented.
Per item in the cluster matrix a cluster card is
drafted. These cluster cards contain all
information that is needed for a tester to specify
the tests for a specific cluster such as the test
basis and test techniques to be used and the
test priority. The cluster card is the work
package for the test coordinator or analyst.
Test project execution: structured testing
The product risk analysis, the cluster matrix and
the cluster cards are part of the project test
strategy. Besides the planning and estimate,
this document is the starting point for the test
project execution. The test project execution
consists of a number of (more or less)
sequential steps: the test levels. A test
phase will be organized for each test level that
is defined in the test strategy. Cluster cards are
the actual transition from test project
preparation to test project execution.
The right side of the test management model of
RRBT is passed through several times: once for
each defined test level. During this time, the test
coordinator is in charge of the daily operation
within a test level whereas the test manager has
overall control of all the test levels. The test
coordinator reports to the test manager on
progress within a test level and issues the
phase or test level report. Test priority guides
the setting up of these reports.
Test preparation
In cluster cards, the test manager articulates
the priority of the test cluster. The test
coordinator then creates a detailed test plan
based on the cluster cards and the project test
plan provided by the test manager. The detailed
test plan documents which cluster cards,
depending on the priorities, will be addressed
first. The test coordinator will then distribute the
cluster cards to the test analysts that are
members of his team. This distribution may
be made on different bases: for instance, the
expertise of the test analyst could be a factor,
or the division of the system into specific
parts could determine the distribution of cluster
cards among the test analysts.
Test analysis
The test analyst is responsible for the actual
analysis of the test basis, the design of the test
conditions and test cases and the definition of
the test data. The analyst has a hierarchy of
products in his possession which consists of
test clusters, conditions and cases (figure 4).
Test execution
The test execution takes place on the basis of
the results of the analysis, the test cluster,
conditions and test cases. The test cluster is
used as a test execution scheme. A test
execution scheme provides the execution
sequence of the test cases. The test cases are
recorded in the test execution scheme within
the context and the sequence in which they
must be executed. Furthermore, the test
execution scheme must be readable to the test
executor. He must be able to identify what,
when and with what data the information
system should be tested. Readability is
particularly important when the test executor
is someone other than the person who has
performed the test analysis; this ensures
that no differences in interpretation are possible.
For this reason the test cases are documented
using action words and parameters [3].
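A toy sketch of the action-word style (TestFrame's real implementation differs; action names and data here are invented): each test line names an action word and its parameters, so the same cluster is readable by a human and executable by an engine.

```python
# Hypothetical action-word sketch: a cluster is a list of
# (action word, parameters) rows; an engine dispatches each row to the
# function implementing that action word.
def enter_customer(name, account):
    return f"entered {name} for account {account}"

def check_balance(account, expected):
    return f"checked that {account} balance is {expected}"

ACTIONS = {"enter customer": enter_customer,
           "check balance": check_balance}

cluster = [
    ("enter customer", {"name": "J. Smith", "account": "12-34-56"}),
    ("check balance", {"account": "12-34-56", "expected": "0.00"}),
]

# execute the cluster in the sequence recorded by the test analyst
log = [ACTIONS[word](**params) for word, params in cluster]
```

The same `cluster` rows could equally be printed as a manual test script, which is what allows the cases to be executed both manually and automatically.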
In a manual test, the test executor will execute
all selected test cases step by step. First he will
use the clusters that belong to the product risks
that have the highest priority. He will execute the
test cases in this cluster in the sequence in
which they are recorded. This creates the
possibility to judge the quality of the system
under test as early as possible. The results of
the test execution are recorded with the test
conditions.
In the TestFrame test process the
documentation of the test cases is done in such
a way that the test cases can be executed both
manually and automatically. During automated
test execution, the test clusters serve as the
test execution scheme, just as with manual test
execution. Test automation software that is
created by a test engineer uses the clusters as
input and executes the test cases in the
sequence as defined by the test analyst and
recorded in the test cluster. During automated
test execution a test log is produced which
contains the results of the test execution.
Reporting
On a regular basis, the test manager will have
to provide insight into the progress, the quality
of the information system and possible events
around the test project. By reporting regularly
the test project remains at the forefront of the
minds of the client and stakeholders. These
regular reports include but are not limited to
product risks, related requirements and what
risks remain to be tested. Based on this
information the client and stakeholders can
decide to proceed to the next test level or to
bring the information system into production.
The test strategy defines what will and will not
be tested, ie precisely which product risks will
be covered during the test project. The project
test plan and the detailed test plans describe
how the test manager has arranged the test
project. During the duration of the test project,
the test manager will have to keep the client
and stakeholders informed on the progress of
the test project. After all, they have been
involved in defining test strategy and test plan
so they will want to know the status of the
quality of the information system under test.
Test products and their relationship with the test strategy: product risk analysis
The test policy is taken into account when
setting up the product risk analysis, since the
test policy documents which components of the
information system are of critical importance to
the organization.
Test clusters
In the test strategy, the test manager defines
cluster cards that relate to a unique quality
attribute or, if the attributes are well
compatible, to a number of closely related
quality attributes. The test coordinator can
decide to break down a cluster card into one or
more test clusters.
Figure 3: quality attributes according to ISO 9126
Functionality: suitability, accuracy, interoperability, security, functionality compliance
Reliability: maturity, fault tolerance, recoverability, reliability compliance
Usability: understandability, learnability, operability, attractiveness, usability compliance
Efficiency: time behaviour, resource utilization, efficiency compliance
Maintainability: analysability, changeability, stability, testability, maintainability compliance
Portability: adaptability, installability, co-existence, replaceability, portability compliance
He then hands these test clusters, together
with the cluster card, to a test analyst for
further analysis. The cluster cards enable the
test analyst to begin testing.
Test conditions
Test conditions are an important means of
communication with the stakeholders. Test
conditions are closely related to the
requirements of the system. This is seen in
the use of test conditions at various test
levels.
The test priority indicates the priority during
the elaboration of test conditions into test
cases.
The test analyst starts by defining/executing
the “must test” test conditions. Moreover the
test conditions will be reviewed by the
stakeholders. This way the stakeholders know
what exactly will be tested.
Test cases
Test conditions are elaborated into test cases.
Test cases describe how and with which
values the test actions are performed to prove
that the requirement related to the test
condition is correctly implemented. The test
priority also determines the sequence in
which the test conditions are elaborated. The
test analyst will start with elaborating the
“must test” test conditions. Of course it’s
important that the test conditions are
approved by the stakeholders.
Incident management
An incident in testing is a deviation in actual
outcome of a test with respect to the expected
outcome. This could be either an anomaly in
the information system under test, or an error
in the infrastructure, the documentation or the
(detailed) designs. For incidents the product
risk and the subsequent test priority that is
associated with it are input to the priority of
the incident. Incidents associated with high
impact product risk will get priority in
assigning bug-fix capacity.
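A minimal sketch of deriving incident priority from the test priority of the associated product risk, assuming an illustrative mapping:

```python
# Hypothetical sketch: incidents inherit priority from the test
# priority of the product risk whose test raised them, so high-impact
# risks get bug-fix capacity first (the mapping itself is invented).
RISK_TO_INCIDENT = {"must": "critical", "should": "high",
                    "could": "medium", "wont": "low"}

def incident_priority(test_priority):
    # look up the incident priority for the risk's test priority
    return RISK_TO_INCIDENT[test_priority]
```

A deviation found while testing a "must test" risk would therefore be raised as a critical incident.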
Summary
The TestFrame standard for testing comprises
TestGrip for the strategic level, RRBT for the
tactical level and the TestFrame test process
for the operational level.
TestGrip describes the development of a test
policy that is derived from the vision and
mission of the organization. The test policy is
closely related to the IT and quality policy of
that organization and is potentially influenced
by external factors such as laws and regulations.
The test policy establishes a framework for
the overall test strategy. Where applicable, it
outlines the test approach to the testing of
systems, made specific per development
environment. This overall test strategy for
instance contains which test levels are
applied and which methods and techniques
are used during the design of the test cases.
This is where the test policy connects with
RRBT. The test manager who applies this
method will use the contents of the test policy
and the overall test strategy as a starting point
for the management of the test project. The test
manager performs, together with the
stakeholders, the product risk analysis. In the
project test strategy he documents how the
risks of different priorities are tested. These
agreements are recorded in cluster cards.
Determined by the project test strategy one or
more test levels are planned and executed as
subprojects. The cluster cards are the starting
points of these subprojects. Test analysts,
led by a test coordinator, elaborate the test
level. With the aid of TestFrame the cluster
cards are elaborated into test clusters, test
conditions and test cases. The test clusters and
test conditions “inherit” the test priorities that are
assigned to the requirements based on the risks.
Test priority determines the depth of
elaboration, the test specification techniques to
use and the sequence of test execution.
During the execution of the test, deviations of
actual and expected results will undoubtedly
arise. Consequently, product risks play an
important role during the incident management
process.
At pre-designated times, and as he deems
necessary, the test manager reports on the
progress of the test project. He reports on both
the status of the project and the quality of the
system, per the product risks connected to the
test conditions. The reporting will be determined
by the percentage of the various risks covered by
the test. This information will help the test
manager to formulate advice on whether to
proceed to the next test level or to bring the
system into production.
And that closes the circle: the product risks
determine how the test project will be executed.
At least they determine the sequence and depth
in which the test is prepared and executed.
Finally, the progress reports give insight into the
status of the system in relationship to the product
risks. These reports are the basis for the decision
to bring the system into production.
Figure 4: test clusters, conditions, cases and action words
Chris Schotanus is principal consultant, testing and quality management at Logica. This article was first published in Dutch at http://computable.nl/artikel/ict_topics/development/3304921/1277180/invloed-van-productrisicos-op-testen.html
[1] TestGrip: Grip on IT Processes Through Test Policy and Test Organisation by Iris Pinkster, Rik Marselis, Jos van Rooyen and Chris Schotanus. Logica, ISBN 978-9071195013
[2] Successful Test Management: An Integral Approach by Iris Pinkster, Bob van de Burgt, Dennis Janssen and Erik van Veenendaal. Springer, ISBN 978-3540228226
[3] TestFrame: An Approach to Structured Testing by Chris Schotanus. Springer, ISBN 978-3642008214
[4] Test Maturity Model Integration (TMMi). See http://www.tmmifoundation.org
[5] ISTQB Standard glossary of terms used in Software Testing Version 2.1. See http://istqb.org
[6] ISO 9126-1:2001 Software engineering – Product quality – Part 1: Quality model
Engineering works?

Rude Coarse Analysis, Professional Tester's occasional feature on legendary software failures, recalls last year's UK National Rail Enquiries website disaster

Good reliability and availability testing is challenging. Good non-functional specification can help – if it is done

Our speculations in RCA are usually inspired by information that is both partial and unreliable, but when public-facing websites fail much is apparent to anyone interested in the events (as well as to a large number of customers who are not, but just want to get served). The consequences often include hasty announcements and apologies which, as in this case, give additional insight into the root causes.

On the morning of Monday 2nd February 2009, parts of the UK woke up to the heaviest snow for some years. Commuters needing to know whether they should catch their usual train or seek alternatives turned to the obvious source: nationalrail.co.uk, the train operators' official information site.

The user experience
Many visitors could obtain no response at all; some saw the message “The server is temporarily unable to service your request due to maintenance downtime or capacity problems”. Either of these behaviours is an availability failure.

The impact
As always in these circumstances, the business interests of the site's owners and operators were damaged badly: a travel information website that works only when most services are running normally and disappears when they are disrupted by predictable events is not much of an asset. Whatever the weather, the traffic won't reach the same levels again, because many users who experienced or just heard about the failure will never return. The usefulness, reputation and commercial potential of nationalrail.co.uk will not recover.

However the site's context of use means the consequences were even worse, and extended farther: they included financial cost to industry and the taxpayer, worsened disruption to both rail and other transport systems as travellers starved of advice took the wrong actions, impact on other businesses including essential public services whose key workers lost vital time, and increased, not necessarily wanted, traffic to other organizations' websites and call centres.

The fix (not)
When the site became available again, more snow was forecast. The following message appeared prominently on the home page: “Due to the expected bad weather on Friday and over the weekend National Rail Enquiries is running a cut down website in order to serve as many requests for information as possible. Our apologies for the inconvenience caused”. So the availability failure was addressed by deliberately causing a reliability failure (definition: some functions which worked previously now do not).

The root cause
Were those responsible for the site aware of its limitations? It seems they may not have been, because immediately after (and therefore almost certainly before) the failure “Ask Lisa”, the website's “virtual assistant” (ie help search function), responded to the question “Why was your website down?” by stating: “Our website is one of the busiest and most used websites in the UK. To make sure it is available when you need it we use the very latest technology to ensure that all of our online services are available 24 hours a day” [http://lisa.nationalrail.co.uk/NRE/bot.htm?isJSEnabled=1&entry=why%20was%20your%20website%20down retrieved 6th February 2009].
At some later date, this response was extended with the following additional text: “Sometimes things can go wrong and this may mean that the website responds at a slower speed than we would like it to. This is always a temporary situation and we aim to resolve any problems as soon as possible. Often the speed of our website is influenced by users' internet connection. For example, if you are on 'Dial Up', internet browsing is generally slower than using Broadband” [http://lisa.nationalrail.co.uk/NRE/bot.htm?isJSEnabled=1&entry=why%20was%20your%20website%20down retrieved 23rd August 2010].
Leaving aside possible criticisms about the relevance and currency of the information, and its statement of the obvious, this does not address the question, unless “at a slower speed than we would like it to” can be used to mean “not at all”.

A spokesman for National Rail Enquiries said “website enquiries were up 800% compared to a normal Monday morning” and “more than 32,000 users were visiting every second” [http://news.bbc.co.uk/1/hi/technology/7865018.stm].
Most people travelling on “a normal Monday morning” probably already have a good idea of their route and services the night before. It's hard to imagine huge numbers accessing the website for information at a time when most of them are probably supposed to be preparing for, or on their way to, work. So this would appear to be a fairly typical load multiplied by eight. In commercial web terms, that is a modest peak.
More information is needed on the measurement to which the second statement refers, but it can't mean 32,000 individuals per second were trying to access the site. That would represent nearly 2 million a minute, or the UK's entire population in about half an hour! Many of the hapless customers were probably retrying earlier attempts, and perhaps HTTP requests queued or buffered cumulatively by a load balancer, or packets incoming to a gridlocked TCP/IP device, were counted.
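The arithmetic behind that sanity check is simple enough to reproduce. The UK population figure of roughly 62 million is an approximation for 2009:

```python
# Back-of-envelope check of the "32,000 users every second" claim.
per_second = 32_000
per_minute = per_second * 60                       # 1,920,000 - nearly 2 million a minute

uk_population = 62_000_000                         # approximate UK population, 2009
minutes_to_cover_uk = uk_population / per_minute   # ~32 minutes, about half an hour

print(per_minute)                 # 1920000
print(round(minutes_to_cover_uk)) # 32
```

At that claimed rate the entire country would have visited before lunchtime, which is why retries and lower-level request counting are the more plausible explanation.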
We may never know how many people tried to use the site, but the maximum number of simultaneous users the relevant functions should be capable of supporting is easy to estimate by assuming that, in the worst case, every traveller will check the status of the services they want to use. That number is calculable from the average number of tickets (including season tickets) sold for travel on “a normal Monday morning”. If that calculation was done, the number would presumably have been included in specifications and, therefore, tested against. Assuming the test results were accurate, surely after seeing them “Lisa” would not have wanted to appear so confident?
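As a sketch of how such an estimate might be built (every figure below is an invented placeholder, not National Rail data), a Little's-law style calculation converts ticket volumes into a peak-concurrency number:

```python
# Illustrative only: the inputs are hypothetical placeholders chosen to show
# the shape of the calculation, not actual ticket-sales figures.
morning_tickets = 3_000_000   # hypothetical tickets valid for a Monday morning
check_window_minutes = 120    # hypothetical window in which status checks arrive
avg_session_minutes = 2       # hypothetical duration of each enquiry session

# Little's law: concurrent users = arrival rate x time each user stays.
arrival_rate_per_minute = morning_tickets / check_window_minutes   # 25,000/min
peak_concurrent_users = arrival_rate_per_minute * avg_session_minutes

print(int(peak_concurrent_users))  # 50000 with these placeholder figures
```

The point is not the output but that each input is measurable from data the operator already holds, so a defensible worst-case number could have been specified and tested against.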
So our guess at the root cause is that the required reliability of the site was not adequately or correctly specified.
Top 5 reasons to be a tester instead of a developer

Nivedita Kyatam, quality control consultant at SYSTIME Global Solutions in Mumbai, has received a free subscription to the printed edition of Professional Tester for her top 5 reasons to be a tester. Send your testing top 5 – sincere, cynical or both – to [email protected]

How could testing have helped?

One approach to improving reliability and performance is to use dynamic analysis, sometimes in combination with simulated load, to discover “bottlenecks” – items and areas where change seems most likely to achieve good benefit, for example inelegant sections of code, inefficient interfaces between components or devices, or hardware resource shortfalls. Perhaps this was done for nationalrail.co.uk before the failure; it was almost certainly done after it.

However, this is not testing in the usual sense, that is comparing observed with expected behaviour. The only systematic way to do that, and so reduce the risk of failure caused by variations in traffic, is testing under load to generate empirical evidence of how a system will behave as demand for it varies. Successful web organizations use such testing to protect their sites from failure caused by traffic conditions by informing accurate assessment of the risk. It's a specialized and challenging area of testing, supported by many sophisticated and powerful tools, and like all testing it is largely meaningless without good-quality requirements on which to base it. If our speculation on that matter above is correct, effective testing under load would have been very difficult indeed to achieve, and may not even have been attempted.
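For readers new to the mechanics, the skeleton of testing under load can be sketched in a few lines. Here a stub stands in for the real enquiry request, and the response-time target is an invented placeholder rather than any actual National Rail requirement; a real exercise would use a dedicated tool driving genuine HTTP traffic against agreed, specified targets:

```python
# Minimal sketch of testing under load: drive many concurrent "users" and
# compare observed response times against a specified maximum.
import concurrent.futures
import time

def enquiry(_):
    """Stand-in for one journey-status request to the system under test."""
    start = time.perf_counter()
    time.sleep(0.01)  # placeholder for real request latency
    return time.perf_counter() - start

def run_load(simultaneous_users: int, requests_per_user: int) -> float:
    """Issue requests from N concurrent users; return the worst latency seen."""
    total = simultaneous_users * requests_per_user
    with concurrent.futures.ThreadPoolExecutor(max_workers=simultaneous_users) as pool:
        latencies = list(pool.map(enquiry, range(total)))
    return max(latencies)

# Expected behaviour comes from the specification; 1.0s is a placeholder target.
worst = run_load(simultaneous_users=50, requests_per_user=4)
assert worst < 1.0, "response time requirement breached under load"
```

The essential feature is the final comparison: without a specified target figure, the measurement produces numbers but no verdict, which is exactly the gap we suspect existed here.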
Testing under load becomes even more important when change occurs. Repeating it – exactly – is the only way to tell whether that change will provide sufficient (or any) reduction of that risk, or indeed increase it. Without systematic testing under load based on accurate, comprehensive requirements, discovering and attempting to remove bottlenecks, whether by modifying code, changing configuration or adding infrastructure, is just tinkering and trusting to luck. The fact that similar failure reoccurred 11 months later [http://www.theregister.co.uk/2010/01/05/rail_chaos retrieved 23rd August 2010], on a Tuesday when the weather was less severe than in 2009, may indicate that luck ran out.
Although we might not be that fond of users, especially when they are reporting failure (“it didn’t like it”; “something came up about some error or something”; “the whatsit thingy disappeared” etc), testers have to at least understand their point of view. Developers don’t and that’s why they’re so bitter (it’s also because their robot didn’t get on Robot Wars)
To drunk, non-IT people at parties “tester” could seem a little bit glamorous. It could sound something like “test pilot”, especially if you slur your speech and are wearing sunglasses. Saying you are a developer just makes them think “So what? So is monomethyl-p-aminophenol hemisulphate”
It’s much easier for testers than for developers to move into where the proper money is: sales and marketing. We have both product and business knowledge, plus thick skins and tenacity because of our experience failing to negotiate budget for testing
Testing is tantalizing. It hurts you, but keeps you interested. A crank-the-handle system that will run as smoothly as a Swiss watch, optimize itself continuously and detect all important defects seems so close. Yet something always comes along to keep it just out of our grasp. Something called developers
Testing is the hardest, most frustrating, least and worst understood discipline in IT and must rank fairly high on those attributes in the whole of human endeavour, which itself is increasingly dependent on (and held back by) software. If we weren’t testers, we’d have a lot less to moan about. What fun would that be?
If you have an opinion on this analysis, know any more detail, or can describe or comment on this or any software failure, please help Professional Tester's readers to learn from it. Email [email protected]
15% off for readers of Professional Tester – quote PT2010
On the following pages, please find a selection of our renowned iqnite exhibitors.
Meet experienced industry insiders and eminent experts at the iqnite conference in London.
Register now and discuss current standards and trends as part of the software quality community.
iqnite-conferences.com/uk
iqnite 2010 United Kingdom | 4 October 2010 | London
Programme

09.30 - 09.45  CONFERENCE OPENING

09.45 - 10.30  Keynote: Overcoming Hurdles in the Quality arena to do More with Less – Kriss Akabusi MBE

10.30 - 11.00  EXHIBITION AND COFFEE BREAK

Track: Automation (chair: Mark Mitton MBE, Deutsche Bank)
11.00 - 11.40  Performance Testing – speaker to be confirmed
11.45 - 12.30  High Automation for Functional Tests in Banking – What are the Prerequisites? – Bruno Hinterberger, UBS

Track: ROI of Testing (chair: Helen Willington, Sodexo)
11.00 - 11.40  Choosing Team Members: The Good, The Bad & The Ugly – Steinar Hovi, HoviTec
11.45 - 12.30  How to Sell a “Quality Initiative” When It's Good Enough Already? – Thomas Spielmann, Centrica

12.30 - 13.30  EXHIBITION AND LUNCH BREAK

13.30 - 14.15  Pecha Kucha: a wide range of topics presented in an innovative way – 6 minutes and 40 seconds to explain each topic

14.15 - 14.45  EXHIBITION AND COFFEE BREAK

Track: Testing in the Cloud (chair: Geoff Thompson, UK Testing Board/Experimentus)
14.45 - 15.25  Cloud Security – Exploding the Myth – Colm Fagan, Espion
15.30 - 16.15  Testing SaaS – How to get it right – Tapan Chavan, Taylor & Francis

Track: Agile (chair: Thomas Spielmann, Centrica)
14.45 - 15.25  Winning Big with Agile Acceptance Testing – Lessons Learned from 50 Successful Projects – Gojko Adzic, Neuri
15.30 - 16.15  Refactoring and Test: Industry and Academia Crossover Themes – Steve Counsell, Brunel University

16.15 - 16.45  EXHIBITION AND COFFEE BREAK

16.45 - 17.30  Keynote: The Leadership Deficit – Why the UK economy wastes £billions every year – Patrick Mayfield, pearcemayfield

17.30 - 18.30  NETWORKING SESSION
Can you predict the future?

Forecast tests the performance, reliability and scalability
of IT systems. Combine with Facilita’s outstanding
professional services and expert support and the future
is no longer guesswork.
Visit Facilita at: 4th October, Guoman Tower Hotel, London
Powerful multi-protocol testing software
TM
Facilita Software Development Limited. Tel: +44 (0)1260 298 109 | email: [email protected] | www.facilita.com
Make a sustainable impact on software quality with Test Professional 2010, the integrated testing toolset that delivers a complete plan-test-track workflow and helps you make informed, timely decisions to drive down the risks associated with your software releases.
Visit Our BoothWatch live demos. Talk to an expert. Get the inside scoop.
For more details on Visual Studio 2010 quality tools,Visit www.microsoft.com/visualstudio/test.
Download trials, demos, whitepapers and more!
File high quality bugs with rich diagnostics for your developers. Take full advantage of a task-driven user interface and features like Fast Forward for Manual Testing so you can focus your time and energy on high-value tasks.
With tight integration to Team Foundation Server* you will gain in-context collaboration between all team roles, greatly increasing your visibility to the overall project while providing full traceability of user stories and requirements, progress reports, and real-time quality metrics.
MICROSOFT INTRODUCES

NOW I CAN ELIMINATE “NO-REPRO” WITH RICH, ACTIONABLE BUGS.

WHAT WILL YOU DO WITH VISUAL STUDIO TEST PROFESSIONAL 2010?

*To use Test Professional 2010 you need Team Foundation Server 2010 (licensed separately unless purchased with an MSDN subscription).
SQS. Quality Training for Professional Testers. Guaranteed. It's in the net!

A call from your training provider to say that your course is cancelled is quite simply bad news! Unlike other software testing training providers, SQS's guaranteed dates allow you to book with confidence that your course will definitely run!
So there is now no need to book with any other provider and run the risk of rescheduling or cancellation, just book with SQS.
Please see the SQS Training website for courses that are guaranteed to run!
SQS Group Ltd UK | 7-11 Moorgate | London, EC2R 6AF | Phone: +44 (0) 20 7448 4682
SQS Group Ltd Ireland | 4-5 Dawson Street | Dublin 2 | Phone: +353 (0) 1 671 7487
[email protected] | www.sqstraining.com/training
Visit us at