
AdaTEST, Cantata and RIA 23

Executive Summary

This paper describes how AdaTEST and Cantata can be used to assist with the

development of software to the standard RIA 23, Safety Related Software for Railway

Signalling. In particular, it shows how AdaTEST and Cantata can be used to meet the

verification and testing requirements of the standard. It also shows that AdaTEST and

Cantata have been produced to a standard of sufficiently high integrity that their use

for dynamic testing will not compromise the safety integrity of the software being

tested.

The material presented here is suitable for inclusion in a justification for the use of the

products on a safety critical software development.

IPL is an independent software house founded in 1979 and based in Bath. IPL was

accredited to ISO9001 in 1988, and gained TickIT accreditation in 1991. Both

AdaTEST and Cantata have been produced to these standards.

Copyright

This document is the copyright of IPL Information Processing Ltd. It may not be

copied or distributed in any form, in whole or in part, without the prior written

consent of IPL.

IPL
Eveleigh House
Grove Street
Bath
BA1 5LR
UK
Phone: +44 (0) 1225 444888
Fax: +44 (0) 1225 444400
email: [email protected]

Last Update: 07/08/96 11:35
File: RIA23.DOC


1. Introduction

AdaTEST and Cantata are tools which support the dynamic testing, dynamic

(coverage) analysis, and static analysis of Ada, C and C++ software. The original

requirements for the tools included the suitability for testing safety critical software,

although both tools are sufficiently flexible for the testing of any Ada, C or C++

software.

This paper describes how AdaTEST and Cantata can be used to enable a software

development to meet the verification and testing requirements of BRB/LU Ltd/RIA

Technical Specification No 23: 1991, Safety Related Software for Railway Signalling.

It should be read in conjunction with the standard.

To assist in cross referencing to the standard, techniques which are identified by the

standard are shown in italics in the text of this paper. A full cross reference matrix is

attached as Appendix A.

The description consists of two parts. Section 2 looks at AdaTEST and Cantata as

tools to facilitate developing software to meet the requirements of the standard.

Section 3 looks at the integrity of the tools themselves, including standards used

during their development.

2. AdaTEST and Cantata Capabilities and Use

2.1. Overview of Requirements

The RIA 23 standard identifies a number of techniques which can be applied to the

development of software at a number of levels of required integrity. These techniques are listed in annex A and annex B of the standard.

In annex A of the standard, the tables which identify techniques related to verification

and test are:

TABLE A7  Code and Test

TABLE A8 Software Integration

TABLE A9 System Integration

TABLE A10 Assessment

TABLE A11 Verification and Test

TABLE A12 Test Checklist

TABLE A13 Software for Data Driven Systems

TABLE A14 Data Preparation Techniques

Many of these techniques are derived from IEC/SC65A/WG9, Software for

Computers in the Application of Industrial Safety Related Systems. Other techniques

are described in annex B of RIA 23.

2.2. General Description of AdaTEST and Cantata

Ref: A7; A8; A9; A11; A14; [WG9] B3.1.6; [WG9] B3.2.3; B5; B6.

AdaTEST and Cantata support the analysis and testing of Ada, C, and C++ software at

all levels, from module test through to full integration testing. The facilities provided by AdaTEST and Cantata are summarised in Table 1.

Testing

  Test Preparation:
  • Scripts in Ada, C or C++
  • Script generation from test definition
  • Test definition generation from CASE tool

  Dynamic Testing:
  • Test execution
  • Result collection
  • Result verification
  • Timing analysis
  • Stub simulation
  • Host testing
  • Target testing

Analysis

  Static Analysis:
  • Metrics: counts, complexity
  • Flowgraphs
  • Structure
  • Dataflow *1
  • Instrumentation

  Dynamic Analysis:
  • Coverage: statements, decisions, conditions, MCDC *2, calls, call-pairs *1, exceptions *2
  • Trace
  • Assertions
  • Paths

Table 1 - AdaTEST and Cantata Facilities

*1: Cantata only    *2: AdaTEST only

Testing with AdaTEST and Cantata is centred on a dynamic test harness. The test

harness can be used to support testing at all levels, from module test through to full integration testing, in both host and target environments. In the host environment tests

can either be run directly or under the control of a debugger. In the target environment,

tests can be run directly, under an emulator, or under a debugger. The AdaTEST and

Cantata simulation facilities enable input/output devices to be simulated in the host

environment.

Tests for both functional testing and structural testing are controlled by a structured

test script. Test scripts are written in the respective application language, Ada, C or

C++, and can be produced independently from the code of the application. Test scripts

are fully portable between host and target environments. Test scripts may be written

directly by the user, or can be generated automatically by AdaTEST or Cantata from a set of test case definitions. Test case definitions can in turn either be written directly

by the user, or can be imported from a CASE tool.
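
The general shape of such a script can be illustrated with the self-contained C sketch below. The names used (check_int, speed_limit) are invented stand-ins, not actual Cantata commands; a real script would use the tool's own result collection and verification commands.

    #include <stdio.h>

    /* Unit under test: an invented stand-in for application code. */
    static int speed_limit(int gradient)
    {
        return (gradient > 5) ? 40 : 60;
    }

    static int failures = 0;

    /* Illustrative check helper standing in for the harness's
       result verification commands. */
    static void check_int(const char *id, int actual, int expected)
    {
        if (actual == expected) {
            printf("PASS %s\n", id);
        } else {
            failures++;
            printf("FAIL %s: got %d, expected %d\n", id, actual, expected);
        }
    }

    int main(void)
    {
        /* Test case 1: steep gradient gives the restricted limit. */
        check_int("TC1", speed_limit(10), 40);

        /* Test case 2: level track gives the normal limit. */
        check_int("TC2", speed_limit(0), 60);

        /* Overall pass/fail summary, as in a test report. */
        printf("overall: %s (%d failure(s))\n",
               failures ? "FAIL" : "PASS", failures);
        return failures != 0;
    }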

AdaTEST and Cantata static analysis provides static code analysis of source and also

instruments code in preparation for later dynamic code analysis. The results of static

analysis are reported in a list file and can be made available to the test script through

the instrumented code. Static code analysis results can also be exported to

spreadsheets and databases.

AdaTEST and Cantata dynamic code analysis provides a number of analysis

commands which can be incorporated into test scripts. During test execution these

commands are used to gather test coverage data, to trace execution, to verify data


flow, and to report on and check the results of  static code analysis and dynamic code

analysis.

When a test script is executed, AdaTEST and Cantata produce a test report giving a

detailed commentary on the execution of the tests, followed by an overall pass/fail

summary at the end of the report. The anticipated outcomes of  static code analysis,

dynamic code analysis and dynamic testing can thus be specified in a single test script

and automatically checked against actual outcomes.

To assist configuration management, all AdaTEST and Cantata outputs include date

and time information. The reports output by AdaTEST and Cantata would typically be

incorporated into test logs, review logs, safety logs etc.

Instrumented code is not a compulsory part of testing with AdaTEST and Cantata. The

AdaTEST and Cantata test harnesses can be used in a 'non-analysis' mode with

uninstrumented code.

2.3. Static Analysis

Ref: A7; A8; A9; A10; A11; A14; [WG9] B3.1.6; [WG9] B3.6.7; B3; B5; B9; B10.

AdaTEST and Cantata static code analysis calculates metrics, analyses program

structure and data, draws flowgraphs, and instruments source in preparation for dynamic analysis.

Metrics collection provides counts of the use of language constructs and complexity

analysis metrics. Metrics are reported in static analysis listings, and can be made

available to the test script through the instrumented code. Static code analysis includes

the detection of syntax errors in the code, with code containing syntax errors reported as

being uncompilable. Static code analysis results can also be exported to spreadsheets

and databases.

From a test script, metrics can be checked against limits. A limit of 0 will result in a test

failure if the corresponding construct is used at all. This includes counts of comments

and identifiers, but obviously the meaningfulness of comments and identifier names cannot be checked automatically.

The ability to check counts of usage and complexity enables the detection of poor 

 programming structure or excessive complexity and the detection of  deviations from

required standards and procedures to be automated. For example, a programming

standard might permit a maximum of two loops in a module of code. With AdaTEST or

Cantata a check can be built into a test script to fail modules containing more than two

loops.
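
The following self-contained C sketch illustrates the principle of checking metrics against limits. The metric values are hard-wired for illustration; with AdaTEST or Cantata they would be obtained from static analysis of the module under test.

    #include <stdio.h>

    /* Invented metric values for one module; in practice these would
       come from static analysis, not be hard-wired. */
    struct metric_check {
        const char *name;
        int value;   /* measured by static analysis */
        int limit;   /* permitted by the coding standard */
    };

    int main(void)
    {
        struct metric_check checks[] = {
            { "loops",      1, 2  },  /* standard allows at most two loops  */
            { "gotos",      0, 0  },  /* limit 0: any use at all is a failure */
            { "complexity", 7, 10 }
        };
        int i, failures = 0;

        for (i = 0; i < 3; i++) {
            if (checks[i].value > checks[i].limit) {
                failures++;
                printf("FAIL %s: %d exceeds limit %d\n",
                       checks[i].name, checks[i].value, checks[i].limit);
            }
        }
        printf("coding standard check: %s\n", failures ? "FAIL" : "PASS");
        return failures != 0;
    }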

The structure of code is analyzed to identify redundant code, unused subprograms, types,

constants and variables, and poor data use, such as variables which are never written. An

abstracted form of each Boolean expression is derived in preparation for later test 

coverage analysis. In addition to reporting structure and data analysis in static analysis


listings, AdaTEST and Cantata can also output a flowgraph representation of analyzed

code to a PostScript printer. These facilities are a useful aid to establishing the correct

 flow of control, the correct reference and use of data elements and the logical

correctness and completeness of the software. They are also useful in the detection of 

semantic errors and the detection of redundant or unreachable code.

The static analysis provided by AdaTEST and Cantata does not support: automated

checking of  compliance of the code to the design specification; symbolic execution;

 formal proof ; or rigorous correctness argument . However, the advanced dynamic code

analysis facilities, as described in the following section, provide some compensation for

this. If dynamic code analysis is an unacceptable alternative, a specialised tool such as SPARK Examiner (Program Validation Ltd) or MALPAS (TA Consultancy Services Ltd) should be used in addition to AdaTEST or Cantata.

It is worth noting that the primary purpose of the dynamic testing and dynamic analysis

facilities of AdaTEST and Cantata is to check  compliance of the code to the design

specification and check the correct reference and use of all data elements.

AdaTEST and Cantata can also generate control flow graphs of Ada, C and C++ code,

which can be used to identify paths through the code. However, a static analysis tool

should not be used as the primary means of establishing the number of discrete paths in

code and the likely test needs; such information should always be available from

design specifications.

2.4. Test Coverage Analysis

Ref: A7; A8; A9; A11; A14.4; [WG9] B3.1.6; A12.7; A12.8; A12.9; A12.10; A12.18; A12.19; A12.20; A12.21; B1; B6.

The dynamic code analysis part of AdaTEST and Cantata monitors test coverage

through a number of coverage metrics.

Statement Coverage is provided for all executable statements within the software under test. Statement Statistics reports can be used to show the number of statements executed

as a cumulative result of all tests performed  and to determine the number of statements

executed by the current test . Automated checking for 100% Statement Coverage will

ensure that every code statement is executed at least once.

Decision Coverage includes the coverage of branches, case alternatives, select

alternatives and loops. Decision Statistics reports can be used to provide a history of 

branch outcomes executed during test together with the frequency of such outcomes.

Decision Statistics reports can also be used to show that every loop is executed with

minimum, maximum and intermediate iteration counts. Automated checking for 100%

Decision Coverage will ensure that every branch outcome is executed at least once.


A range of Boolean Expression Coverage metrics provide coverage analysis and reports

for multiple conditions. Automated checking for 100% Boolean Expression Coverage

will ensure that every predicate term of every branch is exercised .

Call coverage can be used to check that all modules are invoked at least once. Call pair

coverage can be used to check that all invocation routes to a module are exercised .

Related analysis provided by AdaTEST and Cantata includes execution path verification

and execution tracing.

For Ada software, the Exception Statistics report available through AdaTEST gives a

detailed analysis of the execution of exception handlers. Automated checking for 100%

Exception Coverage will ensure that all exception handlers have been executed by the

tests.

AdaTEST and Cantata Data Assertions can be used to verify that specified conditions

(such as variables holding particular values) should be true at least once during test

execution. Another form of assertion provided by AdaTEST and Cantata is the Dynamic

Assertion. Dynamic Assertions are specified in a similar form to Data Assertions; however, Dynamic Assertions are used to verify assertion conditions which must always

be true.

Thus Data Assertions and Dynamic Assertions can be used to verify that test cases fulfil

boundary value analysis, including correct positioning in all boundaries of the input 

data space, error guessing, cause effect diagrams, event tree analysis, input variable

extreme positions and the correct accuracy of arithmetic calculations at all critical

 points.
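
The distinction between the two forms of assertion can be illustrated with the C sketch below. The macros and the clamping function are invented stand-ins for the AdaTEST and Cantata assertion commands: the dynamic assertion must hold on every evaluation, while the data assertion need only hold at least once over the whole run.

    #include <stdio.h>
    #include <stdlib.h>

    /* Stand-in for a Dynamic Assertion: the condition must hold every
       time it is evaluated, so a single violation fails the test. */
    #define DYNAMIC_ASSERT(cond) \
        do { if (!(cond)) { \
            printf("FAIL dynamic assertion: %s\n", #cond); exit(1); \
        } } while (0)

    /* Stand-in for a Data Assertion: the condition must hold at least
       once during the run; 'boundary_seen' is checked at the end. */
    static int boundary_seen = 0;
    #define DATA_ASSERT(cond) do { if (cond) boundary_seen = 1; } while (0)

    /* Unit under test: an invented clamping function. */
    static int clamp(int demand) { return (demand > 100) ? 100 : demand; }

    int main(void)
    {
        int demand;

        for (demand = 0; demand <= 150; demand += 50) {
            int output = clamp(demand);
            DYNAMIC_ASSERT(output >= 0 && output <= 100); /* always true    */
            DATA_ASSERT(demand == 100);                   /* boundary value */
        }

        if (!boundary_seen) {
            printf("FAIL: boundary value 100 was never exercised\n");
            return 1;
        }
        printf("PASS\n");
        return 0;
    }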

The results of AdaTEST and Cantata test coverage analyses, including coverage of 

assertion conditions, can be checked against specified levels, reported in detail, exported

and imported between test scripts, or superimposed upon a flowgraph of the code tested.

These advanced coverage and assertion facilities provided by AdaTEST and Cantata

provide a dynamic alternative to many functions which would often be associated with

static code analysis.
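
The pass/fail rule applied when achieved coverage is checked against a specified level can be sketched as follows. The executed and total counts are hard-wired here for illustration; in practice they come from the instrumented code under test.

    #include <stdio.h>

    /* Invented coverage figures for one test run. */
    struct coverage_check {
        const char *metric;
        int executed;
        int total;
        int required_pct;   /* level specified in the test script */
    };

    int main(void)
    {
        struct coverage_check checks[] = {
            { "statement", 42, 42, 100 },
            { "decision",  11, 12, 100 }
        };
        int i, failures = 0;

        for (i = 0; i < 2; i++) {
            int pct = (100 * checks[i].executed) / checks[i].total;
            int ok = pct >= checks[i].required_pct;
            if (!ok) failures++;
            printf("%s coverage %d%% (required %d%%): %s\n",
                   checks[i].metric, pct, checks[i].required_pct,
                   ok ? "PASS" : "FAIL");
        }
        return failures != 0;
    }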

2.5. Execution Tracing

Ref: A7; A8; A9; A11; A12.2; A12.6; A12.11; A12.12; A12.13; A12.14; A12.16; A12.17; B2; B6.

AdaTEST and Cantata dynamic code analysis also provides trace facilities and

flowgraph output which can be used to trace the flow of control within the software.

Tracing may be selected for subprogram entry points, statements, decisions, Boolean

expressions and exceptions.

Trace output and flowgraph output can be used during dynamic path testing to establish

that every path in the code is executed at least once by the testing process.


By inspecting trace output for task entries and functions connected to interrupts, it is

possible to monitor the testing of  the maximum possible combination of interrupt 

sequences or a significant combination of interrupt sequences. However,

instrumentation of the code prevents such tests from being executed in real time.

AdaTEST and Cantata Data Assertions and Dynamic Assertions may be used to support

Transaction analysis at a code level and boundary value analysis. Transaction analysis

can also be assisted by trace output. During this process the correct positioning of all

boundaries in the input data space, including input variable extreme positions, and the

correct accuracy of all arithmetic calculations at critical points can also be verified by

dynamic assertions. Data flow tracing, in association with Data Assertions and Dynamic

Assertions, can be used to verify that every assignment to each memory location is

executed , every reference to each memory location is executed , and all mappings from

inputs are executed .

2.6. Timing Analysis

Ref: A8; A9; A11; A12.15; B4; B7.

AdaTEST and Cantata provide timing analysis facilities which can be used to measure

and check  timing constraints and act as a performance monitor . The timing analysis

facilities are particularly useful during load testing.

Timing analysis uses the clock provided by the environment, so accuracy and resolution are dependent on the environment. In circumstances where greater accuracy or

resolution is required, code can be included within a test script to interface to suitable

timer devices. The results of such timings can then be checked from the test script.
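
The following sketch shows the principle of timing a unit from a test script and checking the result against a constraint. It uses the standard C clock as the environment clock; the workload and the 50 ms limit are assumed for illustration, and a target-specific timer device could be read in the same way.

    #include <stdio.h>
    #include <time.h>

    /* Unit under test: an invented workload standing in for one
       processing cycle of the application. */
    static void processing_cycle(void)
    {
        volatile long sink = 0;
        long i;
        for (i = 0; i < 1000000L; i++) sink += i;
    }

    int main(void)
    {
        const double limit_ms = 50.0;   /* assumed timing constraint */
        clock_t start, stop;
        double elapsed_ms;

        start = clock();
        processing_cycle();
        stop = clock();

        elapsed_ms = 1000.0 * (double)(stop - start) / CLOCKS_PER_SEC;
        printf("cycle time %.2f ms (limit %.2f ms): %s\n",
               elapsed_ms, limit_ms,
               elapsed_ms <= limit_ms ? "PASS" : "FAIL");
        return elapsed_ms > limit_ms;
    }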

2.7. Dynamic Testing

Ref: A7; A8; A9; A10; A11; A14; [WG9] B2.1; [WG9] B3.1.1; [WG9] B3.1.2; [WG9] B3.1.3; [WG9] B3.2.1; [WG9] B3.2.3; [WG9] B3.3.2; [WG9] B3.3.3; A12.1; A12.2; A12.3; A12.4; A12.12; A12.13; A12.14; A12.18; A12.19; B6; B12.

The dynamic testing facilities of AdaTEST and Cantata enable the user to exercise the

software using test scripts structured as a series of test cases. AdaTEST and Cantata


place no restrictions on the techniques the user may apply to select test cases. Both

 functional testing and structural testing techniques can be used within test scripts.

Test case selection criteria and techniques identified by RIA 23 are all compatible with

AdaTEST and Cantata:

 Boundary value analysis;

 Error guessing;

Cause effect diagrams;

 Event tree analysis;

 Representative test cases;

Specific test cases;

Volumetric and statistically probable test cases;

 Input variable extreme positions;

 Every assignment to each memory location executed;

 Every reference to each memory location executed;

 All mappings from inputs executed;

Correct positioning in all boundaries of the input data space;

Correct accuracy of all arithmetic calculations at all critical points.

The state of selected variables and data elements at selected points in the test process

can be checked against expected values. Such checks can be made from the test script

provided that the data is visible to the test script, including during simulation of stubs, or

through Dynamic Assertions or Data Assertions. AdaTEST also provides facilities for

checking that Ada exceptions are raised when expected and are not raised when they are

not expected, which can be useful when defensive programming has to be tested.

The expected outcomes of dynamic tests, static code analysis and dynamic code

analysis can be built into test scripts. These expected outcomes can then be checked

automatically by AdaTEST and Cantata against actual outcomes when the tests are

executed. This enables comprehensive check lists to be fully automated.

If error seeding is used, the test script can be executed with the seeded software and the

effectiveness of the seeding evaluated by comparison of the test reports for the seeded

and un-seeded software.

Each test script is named and each test case numbered to provide a unique identity for

each test case applied during the tests.

Loops can be incorporated into test scripts to drive tests from tables of data. By driving

tests from tables of data a direct correspondence between a test script and software

developed using tabular specification methods can be maintained.
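
A table-driven test loop of this kind might look like the following C sketch, in which each row of the table mirrors one row of a tabular specification. The unit under test and its expected outcomes are invented for illustration.

    #include <stdio.h>

    /* Unit under test: an invented signalling function whose
       behaviour is defined by a specification table. */
    static int signal_aspect(int track_occupied, int route_set)
    {
        return (!track_occupied && route_set) ? 1 /* proceed */
                                              : 0 /* danger  */;
    }

    int main(void)
    {
        /* One row per test case, mirroring the specification table. */
        static const struct { int occupied, route_set, expected; } table[] = {
            { 0, 1, 1 },
            { 1, 1, 0 },
            { 0, 0, 0 },
            { 1, 0, 0 }
        };
        int i, failures = 0;

        for (i = 0; i < 4; i++) {
            int actual = signal_aspect(table[i].occupied, table[i].route_set);
            if (actual != table[i].expected) {
                failures++;
                printf("FAIL case %d: got %d, expected %d\n",
                       i + 1, actual, table[i].expected);
            }
        }
        printf("%d of 4 table-driven cases passed\n", 4 - failures);
        return failures != 0;
    }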

2.8. Simulation

Ref: A8; A9; A11; A14; [WG9] B3.1.5; A12.5; B4.

Software referenced by the software under test may be either built into a test executable,

or simulated using the AdaTEST and Cantata stub simulation facilities. If stub

simulation is used to simulate device interfaces, the behaviour of the software under

various external device conditions can be tested before the actual devices are available.
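
The principle of stub simulation can be sketched in C as follows. The stub replaces a device-driver routine and reports whatever state the test has programmed; the routine names and the device are invented for illustration, and the AdaTEST and Cantata stub facilities provide this pattern in a more general form.

    #include <stdio.h>

    /* Stub standing in for a device-driver routine called by the
       software under test. The test programs the value the stub
       reports, simulating external device conditions before the
       real device is available. */
    static int stub_track_state = 0;   /* state the stub will report */
    static int stub_calls = 0;

    int read_track_circuit(void)       /* linked in place of the driver */
    {
        stub_calls++;
        return stub_track_state;
    }

    /* Software under test: an invented routine that reads the device. */
    static int route_is_clear(void)
    {
        return read_track_circuit() == 0;
    }

    int main(void)
    {
        int failures = 0;

        stub_track_state = 0;   /* simulate an unoccupied track circuit */
        if (!route_is_clear()) { failures++; printf("FAIL: clear case\n"); }

        stub_track_state = 1;   /* simulate an occupied track circuit */
        if (route_is_clear())  { failures++; printf("FAIL: occupied case\n"); }

        printf("stub called %d times, %d failure(s)\n", stub_calls, failures);
        return failures != 0;
    }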

Another use of AdaTEST and Cantata is for a test script to directly provide instructions

to operating system facilities and device drivers. In this way entire subsystems can be

simulated using the tools, which is an essential part of software and system integration.

Simulation is often the only way to provide realistic loads during load testing.

2.9. Test Reports

Ref: A7; A8; A9; A10; A11; A14; [WG9] B3.6.4; [WG9] B3.6.5; [WG9] B3.6.9; [WG9] B3.8.4; B6.

When a test script is executed, AdaTEST and Cantata output test reports giving full

details of the execution of each test case. The location of errors detected by the test 

 process is marked in the test report by inclusion of comprehensive diagnostic

information whenever a check fails. A summary of the type and number of errors

detected in the test process is given at the end of the test report, together with an overall pass/fail result for the test script.

During the software development process a test script may be executed a number of 

times. The report output by each successive execution could be used as an input to a

reliability growth model.

To assist configuration management, all AdaTEST and Cantata outputs include date and

time information. When dynamic analysis facilities are used, the filenames of original

uninstrumented code and the date and time of instrumentation are also included in the

test report.

The reports output by AdaTEST and Cantata would typically be incorporated into test

logs, review logs, safety logs etc., forming a valuable input to activities such as Fagan

inspections, formal design reviews and walk throughs.

2.10. Language

Ref: A7; A11; A14; [WG9] B3.6.7; B13.

AdaTEST and Cantata are specialised tools for the testing of software written in the

Ada, C and C++ languages. They are capable of analyzing and testing code written in

the full Ada, C and C++ languages or subsets of the languages. Some application

specific languages such as SPARK are in fact subsets of Ada and can therefore be analyzed and tested using AdaTEST.


Test scripts are normally written in a minimal subset of the respective language, but the

full capabilities of the language are available for use in scripts when required. The tools

ensure that AdaTEST and Cantata commands are used in a structured and disciplined

manner.

Specialised extensions to AdaTEST and Cantata functionality may be developed in Ada,

C and C++ and used in test scripts. An example given in this paper is use of a high

resolution timer.

AdaTEST and Cantata provide no facilities to directly help with formal proof of a

 program. However, it is possible to formally annotate test scripts and to include test

scripts in formal proofs. It is conceivable that a development of safety critical software

could enforce a formally proven language subset for use in test scripts.

Test scripts may either be written manually, or created automatically by AdaTEST and

Cantata from a file of test case definitions. The test case definitions can either be written

manually, or imported to AdaTEST or Cantata from a CASE tool such as Teamwork 

(Cadre) or Software Through Pictures (IDE). However, where tool integrity is an issue, automatic generation of test scripts should not be used without a manual verification of

the correctness of the resulting test script.

Languages other than Ada, C and C++ can be tested provided that they are accessible

through language interfacing (including machine code insertions and assembler). The

static and dynamic analysis capabilities of AdaTEST and Cantata will be unavailable or

severely restricted when testing software written in other languages.

2.11. Intrusive and Non-Intrusive Testing

Ref: A7; A8; A9; A11; [WG9] B3.1.6; B4.

Use of AdaTEST and Cantata for test coverage requires the instrumentation of the

software under test, using the instrumenter and static analyzer which forms part of each

tool. Obviously, performing tests on code which is instrumented would not be compliant

with the requirements for safety critical software. AdaTEST and Cantata solve this

problem by operating in one of two modes.

In Analysis mode, AdaTEST and Cantata expect the software under test to be

instrumented for test coverage analysis. All analysis commands are fully executed.

In Non Analysis mode, AdaTEST and Cantata expect the software under test to be

uninstrumented. All analysis commands are skipped, but all other test harness

commands are still fully executed. When AdaTEST and Cantata are used for load 

testing, tests should be run in Non Analysis mode.

The two modes can be used to run each test script twice. Initially, Analysis mode can be

used to ensure a thorough coverage of the software under test by the test script. Once full

coverage is achieved, the same test script can be run again in Non Analysis mode on

uninstrumented software, to ensure that test results are unchanged. Naturally, checks are

made to ensure the compatibility of the software under test with the mode of the tool.
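
A minimal sketch of the idea, assuming a simple compile-time flag rather than the tools' actual mode handling: analysis commands are compiled in only for the instrumented build, so the same script runs unchanged against uninstrumented code.

    #include <stdio.h>

    /* Illustrative stand-in for Analysis/Non Analysis modes: a
       compile-time flag decides whether analysis commands run. */
    #ifdef ANALYSIS_MODE
    #define ANALYSIS(stmt) stmt
    #else
    #define ANALYSIS(stmt) ((void)0)   /* analysis commands skipped */
    #endif

    static int add(int a, int b) { return a + b; }

    int main(void)
    {
        int failures = 0;

        ANALYSIS(printf("coverage analysis enabled\n"));

        if (add(2, 2) != 4) failures++;   /* executed in both modes */

        ANALYSIS(printf("reporting coverage results\n"));
        printf("%s\n", failures ? "FAIL" : "PASS");
        return failures != 0;
    }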


3. Tool Integrity and Development Standards

Ref: A7; A11; A14; B8.

AdaTEST and Cantata are in widespread use, providing additional confidence in the

tools' integrity. Clients and Certification Authorities wishing to audit the development

and continuing maintenance of the products may do so by arrangement with IPL.

AdaTEST and Cantata have been developed according to the IPL Quality Management

System (QMS), which has been accredited to ISO 9001 and TickIT.

The integrity of the tools used in the development of safety critical software is a

significant consideration. Tool integrity is particularly important where high level

languages such as Ada, C and C++ are used, as even safe subsets of high level languages

are often unsuitable for reverse translation.

AdaTEST and Cantata support the development of all standards of Ada, C and C++

software, including safety critical software. The development of AdaTEST and Cantata

followed IPL's normal ISO9001/TickIT development standards, with some additional high integrity measures for certain parts of the products.

Key points of the development and ongoing maintenance are:

a) The IPL quality management system, accredited to ISO 9001 and TickIT.

b) The IPL Software Code of Practice, forming part of the Quality Management

System.

c) Hazard analysis and the maintenance of a hazard log (AdaTEST).

d) Independent Audit (AdaTEST).

e) The use of a safe language subset for the core functionality. Minimal exceptions

to this subset for other functionality only where absolutely necessary and

 justified in the hazard reporting process (AdaTEST). Tests can be built using just

the core functionality.

f) Configuration Management.

g) Rigorous dynamic testing, from module level upwards.

The safety critical development standards used included consideration of those used for

the EuroJet project, and Defence Standards 00-55 and 00-56. Formal methods were not

used in the specification of AdaTEST or Cantata.


4. Conclusion

AdaTEST and Cantata are well suited to the development of software to the RIA 23 standard, facilitating a high degree of automation of the verification and test techniques required for effective use of the standard.


AdaTEST and Cantata have been developed to the highest practical standard for

software verification tools. Both have proven their integrity and reliability through extensive use.

It is believed that AdaTEST and Cantata are the only tools to offer this comprehensive

functionality and the only testing tools developed to such high standards. However, this

does not preclude the use of AdaTEST and Cantata for testing software which is not for

safety critical use.


Appendix A

Cross Reference Matrix

The following matrix cross-references each technique identified by RIA 23 to the section of this paper where the technique is discussed.

Reference       Technique                                           Paragraph

[WG9] B2.1      Defensive Programming                               2.7
[WG9] B3.1.1    Boundary Value Analysis                             2.4, 2.5, 2.7
[WG9] B3.1.2    Error Guessing                                      2.4, 2.7
[WG9] B3.1.3    Error Seeding                                       2.7
[WG9] B3.1.5    Simulation                                          2.8
[WG9] B3.1.6    Test Coverage                                       2.2, 2.3, 2.4, 2.11
[WG9] B3.2.3    Functional Testing                                  2.2, 2.7
[WG9] B3.2.1    Cause Effect Diagrams                               2.4, 2.7
[WG9] B3.3.2    Check-lists                                         2.7
[WG9] B3.3.3    Event Tree Analysis                                 2.4, 2.7
[WG9] B3.3.4    FMECA                                               See note 1
[WG9] B3.3.5    Fault Tree Analysis                                 See note 1
[WG9] B3.5.2    Common Cause Failure Analysis                       See note 1
[WG9] B3.5.3    Markov Models                                       See note 1
[WG9] B3.6.4    Fagan Inspections                                   2.9
[WG9] B3.6.5    Formal Design Reviews                               2.9
[WG9] B3.6.7    Formal Proof of Program                             2.3, 2.10
[WG9] B3.6.8    Sneak Circuit Analysis                              See note 1
[WG9] B3.6.9    Walk Throughs                                       2.9
[WG9] B3.8.4    Reliability Growth Model                            2.9
A12.1           Representative test cases                           2.7
A12.2           Specific test cases                                 2.7
A12.3           Volumetric and statistically probable test cases    2.7
A12.4           Input variable extremes                             2.4, 2.7
A12.5           External device conditions                          2.8
A12.6           Static and dynamic path testing                     2.5
A12.7           Every code statement executed                       2.4
A12.8           Every branch outcome executed                       2.4
A12.9           Every predicate term of every branch exercised      2.4
A12.10          Every loop executed with minimum, maximum and
                intermediate iteration counts                       2.4
A12.11          Every path executed                                 2.5
A12.12          Every assignment to each memory location executed   2.5, 2.7
A12.13          Every reference to each memory location executed    2.3, 2.5, 2.7
A12.14          Mappings from inputs executed                       2.3, 2.5, 2.7
A12.15          Timing constraints checked                          2.6
A12.16          Maximum possible combination of interrupt sequences 2.5
A12.17          Significant combinations of interrupt sequences     2.5
A12.18          Correct positioning in all boundaries of the
                input data space                                    2.4, 2.5, 2.7
A12.19          Correct accuracy of arithmetic calculations         2.4, 2.5, 2.7
A12.20          All modules invoked                                 2.4
A12.21          All invocation routes to a module exercised         2.4
B1              Multiple Condition Coverage                         2.4
B2              Transaction Analysis                                2.5
B3              Complexity Analysis                                 2.3
B4              Load Testing                                        2.6, 2.8, 2.11
B5              Static Code Analysis                                2.2, 2.3
                - Detection of syntax errors                        2.3
                - Detection of semantic errors                      2.3
                - Detection of redundant or unreachable code        2.3
                - Detection of poor programming structure or
                  excessive complexity                              2.3
                - Compliance of the code with the design
                  specification                                     2.3
                - Logical correctness and completeness              2.3
                - Correct flow of control                           2.3
                - Correct reference and use of all data             2.3
                - Number of discrete paths and likely test needs    2.3
                - Deviations from required standards and procedures 2.3
B6              Dynamic Code Analysis                               2.2, 2.4, 2.5
                - Number and identity of test cases                 2.7
                - Flow of control                                   2.5
                - Number of statements executed by current test     2.4
                - Number of statements executed as a cumulative
                  result of all tests                               2.4
                - History of branch outcomes executed during test,
                  together with the frequency of such executions    2.4
                - State of selected variables and data elements at
                  defined points in the test process                2.7
                - Number and location of any errors detected in
                  the test process                                  2.9
B7              Performance Monitor                                 2.6
B8              Reverse Translation                                 3
B9              Symbolic Execution                                  2.3
B10             Rigorous Correctness Argument                       2.3
B11             Formal Syntax Specification                         See note 1
B12             Tabular Specification Methods                       2.7
B13             Application Specific Languages                      2.10

Note 1: The techniques marked, while highly applicable to safety critical

software, are not within the scope of tools for static analysis, dynamic

analysis or dynamic testing.