Automated Test Case Generation and Execution from Models


DESCRIPTION

A practical approach for end-to-end test automation is discussed. The approach is based on model-based testing. The presentation covers several industrial case studies in which model-based testing was applied to automatically generate large numbers of ready-to-run, executable test cases.

TRANSCRIPT

Page 1: Automated Test Case Generation and Execution from Models

Dharmalingam Ganesan, Mikael Lindvall, Christoph Schulze

Software Architecture and Embedded Systems Division

Fraunhofer Center for Experimental Software Engineering

College Park, Maryland


© 2012 Fraunhofer USA, Inc.

Page 2: Automated Test Case Generation and Execution from Models

Fraunhofer Center – Maryland (FC-MD)
• Applied research and tech transfer
• Affiliations:
  - University of Maryland, College Park
  - Fraunhofer Germany
  - Close ties to NASA, FDA
• Focus on software engineering
• Contract R&D for industry and government
• Clients/partners: Bosch, Biofortis, DoD, FDA, JHU, JHU/APL, NASA, Optimal Solutions, etc.

Page 3: Automated Test Case Generation and Execution from Models

Fraunhofer Center – Maryland (FC-MD), located at M Square

Page 4: Automated Test Case Generation and Execution from Models

Goal
• Introduce an approach for test case generation
• Demonstrate the applicability of the approach on different types of systems:
  - Web-based systems
  - Systems with GUIs (e.g., Java Swing)
  - Software components with interfaces
• After the presentation we expect you to be able to:
  - Understand the ideas of model-based testing
  - Develop models from requirements

Page 5: Automated Test Case Generation and Execution from Models

Motivation
• Bugs can lead to deaths, life-threatening situations, financial loss, unsatisfied customers, etc.
• Software testing takes at least 50% of development effort
  - For mission-critical systems, even up to 75%
• Software testing is necessary to reveal the presence of bugs

Page 6: Automated Test Case Generation and Execution from Models

Tester 1 – hands-on testing

while (true) {
    tested_manually();
    found_bugs();
    reported_to_developers();
    developers_returned_newVersion();
    if (tired) { break; }
}
shipped_the_product();
customer_found_bugs();
customer_unhappy();
company_lost_business();

Cartoon reference: H. Robinson, Intelligent Test Automation.

Page 7: Automated Test Case Generation and Execution from Models

Tester 2 – testing with scripts

while (true) {
    wrote_test_scripts();
    found_bugs();
    reported_to_developers();
    developers_changed_code();
    developers_returned_newVersion();
    test_scripts_broke();
    changed_test_scripts();
    if (tired) { break; }
}
shipped_the_product();
customer_found_bugs();
customer_unhappy();
company_lost_business();

Page 8: Automated Test Case Generation and Execution from Models

Tester 3 – monkey banging on a keyboard

while (true) {
    monkey_randomly_press_keys();
    found_bugs();
    reported_to_developers();
    developers_returned_newVersion();
    manually_tested_missing_scenarios();
    if (tired) { break; }
}
shipped_the_product();
customer_found_bugs();
customer_unhappy();
company_lost_business();

Page 9: Automated Test Case Generation and Execution from Models

Tester 4 – model-based testing

developed_model_of_system();
while (true) {
    generated_test_cases_from_model();
    found_bugs();
    reported_to_developers();
    developers_returned_newVersion();
    updated_the_model(); // if needed
    if (fullModelCovered() && noBugsFound()) { break; }
}
shipped_the_product();
customer_happy();

Page 10: Automated Test Case Generation and Execution from Models

MBT @ Fraunhofer CESE
• At Fraunhofer, we have a long history of work on model-based development and testing
• NASA's SARP program sponsors us in this work
• We are developing the Fraunhofer Approach for Software Testing (FAST) based on MBT
• The FAST approach has been applied to several NASA systems as well as to other commercial systems

Page 11: Automated Test Case Generation and Execution from Models

Model-Based Testing (MBT) – Idea
• Develop a model of the system under test
• The model contains actions and expected results
• Automatically derive test cases from the model
• Execute the test cases
• Decide when to stop testing based on model coverage!

Page 12: Automated Test Case Generation and Execution from Models

Benefits
• Allow all stakeholders to review test coverage
• Generate plenty of ready-to-run test cases
• Immediate return on investment
  - No costly editing of test cases
  - A software product that is very well tested

Page 13: Automated Test Case Generation and Execution from Models


Page 14: Automated Test Case Generation and Execution from Models

Terminology
• Nodes of the model are called states
• Edges of the model are called transitions
• A test case is a path from the start state to the exit state
• A random test is generated using a random walk from the start state to the exit state
• A test suite is a collection of test cases
• Model coverage means that each transition is included in at least one test case
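To make the terminology concrete, here is a minimal, hypothetical Java sketch (not part of the FAST/JUMBL tooling): a model stored as labeled transitions, and a random walk from the start state to the exit state that prints one test case in the [State]."action" notation used later in this deck.

import java.util.*;

// Hypothetical illustration of states, transitions, and a random-walk test.
public class ModelWalk {
    // state -> list of (action, next state) pairs, i.e., the transitions
    static Map<String, List<String[]>> model = new HashMap<>();

    public static void main(String[] args) {
        addTransition("Start", "Enter application", "Application Home");
        addTransition("Application Home", "Enter valid login and password", "Logged In");
        addTransition("Application Home", "Enter invalid login and password", "Application Home");
        addTransition("Logged In", "Logoff", "Application Home");
        addTransition("Application Home", "Exit", "Exit");

        // A random walk from Start to Exit yields one random test case;
        // repeating this until every transition has been taken gives a
        // test suite with full model coverage.
        Random rnd = new Random();
        String state = "Start";
        while (!state.equals("Exit")) {
            List<String[]> out = model.get(state);
            String[] t = out.get(rnd.nextInt(out.size()));
            System.out.println("[" + state + "].\"" + t[0] + "\"");
            state = t[1];
        }
    }

    static void addTransition(String from, String action, String to) {
        model.computeIfAbsent(from, k -> new ArrayList<>()).add(new String[]{action, to});
    }
}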

Page 15: Automated Test Case Generation and Execution from Models

Workflow

1. Define test objective and software to test
2. Analyze requirements, the software under test, and existing test cases
3. Create models
4. Set up infrastructure (TestMonkey and DataProvider)
5. Generate test cases
6. Execute test cases, analyze results, remove bugs

Page 16: Automated Test Case Generation and Execution from Models

Tools used by FAST
• Modeling: yEd graphical editor from yWorks
• Model traversal and test generation: JUMBL (University of Tennessee)
• Test execution:
  - JUnit (Java)
  - CuTest (C)
  - Selenium (web)
  - UISpec (Java Swing)
  - Sikuli (image-based testing of legacy systems)
• Glue scripts:
  - Conversion of yEd models to JUMBL models
  - Preparing a test suite from the generated test cases (see the sketch below)
  - Generation of system-specific build files (e.g., makefiles)
  - Helper scripts to clean up generated files
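As an illustration of the "prepare a test suite" glue step: the generated cases later in this deck extend junit.framework.TestCase (JUnit 3 style), so a suite class can bundle them. This is a minimal, hypothetical sketch of what the glue script could emit; the class name GeneratedModelSuite is an assumption, and Part_11_9 is the generated case shown on a later slide.

import junit.framework.Test;
import junit.framework.TestSuite;

// Hypothetical glue-script output: one suite over all generated test classes.
public class GeneratedModelSuite {
    public static Test suite() {
        TestSuite suite = new TestSuite("Test cases generated from the model");
        suite.addTestSuite(Part_11_9.class); // one generated case; the glue script
        // would append one addTestSuite(...) line per generated test class
        return suite;
    }
}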

Page 17: Automated Test Case Generation and Execution from Models

Testing Web-Based Systems
• We have used the FAST approach to test web-based systems
• We used use-case documentation (as well as knowledge about the running system) to build a model of the system under test
• Automatically derived test cases from the model

Page 18: Automated Test Case Generation and Execution from Models

Example: Search Form

Main scenarios:
1. Some valid search input
   a) that matches the db
   b) that doesn't match the db
2. Invalid search input (evil)
3. No search input

There is an endless number of test input combinations.
Our approach: build model(s) that can be extended as necessary.

Page 19: Automated Test Case Generation and Execution from Models

An Example: Search Model

[Figure: the search model, with branches for 1. some valid search input (a: matches db, b: doesn't match db), 2. invalid search input (evil), and 3. no search input]

Page 20: Automated Test Case Generation and Execution from Models

An Example: Search Model

Tester: Let's try something we know exists in the database.
Name: Benderson; City: College Park
Result: 4 matches of 35 records in the database

Page 21: Automated Test Case Generation and Execution from Models

An Example: Search Model

Tester: Let's try something that doesn't exist in the database.
Name: Fake Name; City: Fake Town
Result: 0 matches of 35 records in the database

Page 22: Automated Test Case Generation and Execution from Models

An Example: Search Model

Tester: Let's try something evil!
Name: O'Hare; City: Chicago
Result: System crash!

Page 23: Automated Test Case Generation and Execution from Models

An Example: Search Model

Tester: Let's try an "empty" search.
Result: 35 matches of 35 records in the database

Page 24: Automated Test Case Generation and Execution from Models

An Example: Search Model

A more complex scenario: once we have the model, we can generate an endless number of such test cases!

Page 25: Automated Test Case Generation and Execution from Models


An Example: Search Model


Page 26: Automated Test Case Generation and Execution from Models

Ready-to-run generated test cases!

This automatically generated test can be executed immediately and/or integrated into the daily build.

Page 27: Automated Test Case Generation and Execution from Models

Running the generated test cases
• We need to map each action in the model to some code
• For web testing, there are several tools (e.g., HttpUnit, Selenium)
• These tools offer APIs for interacting with web applications
• We used Selenium for executing the generated test cases
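For example, here is a minimal, hypothetical sketch of how one model action could be mapped to Selenium WebDriver calls in Java. The URL and the element ids (name, city, search) are assumptions for illustration, not taken from the actual system under test.

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

// Hypothetical mapping of the model action "some valid search input" to code.
public class SearchActions {
    private final WebDriver driver = new FirefoxDriver();

    public void enterValidSearchInput() {
        driver.get("http://localhost:8080/search");             // assumed URL
        driver.findElement(By.id("name")).sendKeys("Benderson");
        driver.findElement(By.id("city")).sendKeys("College Park");
        driver.findElement(By.id("search")).click();
    }

    public boolean hasMatches() {
        // Expected result from the model: the result page reports matches
        return driver.getPageSource().contains("Matches");
    }
}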

Page 28: Automated Test Case Generation and Execution from Models

Example: detected defect

We can register "Clarkson's".

Page 29: Automated Test Case Generation and Execution from Models

Example: detected defect …

However, if we search for "Clarkson's", we get an error: system crash!

Other examples of detected defects:
• Loss of state when moving back and forth between pages
• Records missing in search results due to sorting issues

Page 30: Automated Test Case Generation and Execution from Models

Pagination issue (513 registrations in total)

Page 31: Automated Test Case Generation and Execution from Models

Sorting issue

Record 198 is missing from the results:

Page 32: Automated Test Case Generation and Execution from Models

Another Example: FDA CFR Part 11
• CFR Part 11 is a legal standard for electronic records and electronic signatures
• Rules for user account management are covered
• Challenge: test for conformance to CFR Part 11
• Approach:
  - Developed a model of Part 11
  - Automatically derived test cases

Page 33: Automated Test Case Generation and Execution from Models

(Snippet) Model of Part 11

Page 34: Automated Test Case Generation and Execution from Models

Test Monkey Interface

// This is a technology-agnostic interface for testing the Login/Password use case.
public interface ILoginPasswordTestMonkey {
    public void enterSystem();
    public void gotoLoginPasswdPage();
    public void enterGeneratedLoginPasswd();
    public void enterValidLoginPasswd();
    public void enterInvalidLoginPasswd();
    public void clickLogin();
    public boolean isLoginSuccessful();
    public void clickLogout();
    public boolean isLogoutSuccessful();
    …
}
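A concrete monkey binds these technology-agnostic calls to one execution tool. Below is a minimal, hypothetical sketch of a Selenium-backed implementation; the URL, element ids (user, passwd, login, logout), link text, page markers, and the hardcoded credentials are all assumptions for illustration (in FAST, account data would come from the DataProvider).

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

// Hypothetical Selenium-backed monkey; all ids and values are illustrative.
public class SeleniumLoginPasswordTestMonkey implements ILoginPasswordTestMonkey {
    private WebDriver driver;

    public void enterSystem() {
        driver = new FirefoxDriver();
        driver.get("http://localhost:8080/app");              // assumed URL
    }
    public void gotoLoginPasswdPage() {
        driver.findElement(By.linkText("Login")).click();     // assumed link text
    }
    public void enterGeneratedLoginPasswd() {
        enterCredentials("gen-user", "gen-passwd");           // would come from the DataProvider
    }
    public void enterValidLoginPasswd() {
        enterCredentials("alice", "correct-password");        // assumed valid account
    }
    public void enterInvalidLoginPasswd() {
        enterCredentials("alice", "wrong-password");
    }
    public void clickLogin() {
        driver.findElement(By.id("login")).click();
    }
    public boolean isLoginSuccessful() {
        return driver.getPageSource().contains("Welcome");    // assumed success marker
    }
    public void clickLogout() {
        driver.findElement(By.id("logout")).click();
    }
    public boolean isLogoutSuccessful() {
        return driver.getPageSource().contains("Login");      // assumed: back at login page
    }

    private void enterCredentials(String user, String passwd) {
        driver.findElement(By.id("user")).clear();
        driver.findElement(By.id("user")).sendKeys(user);
        driver.findElement(By.id("passwd")).clear();
        driver.findElement(By.id("passwd")).sendKeys(passwd);
    }
}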

Page 35: Automated Test Case Generation and Execution from Models

Sample black-box test case

[Start]."Enter application"
[Application Home]."Recover password using invalid credentials"
[Password Recovery Warning]."Recover password using valid credentials"
[Password Recovered]."Exit"

[Start]."Enter application"
[Application Home]."Enter valid login and password"
[Logged In]."Request change password"
[Change Password Page]."Enter valid password"
[Logged In]."Logoff"
[Application Home]."Enter expired password"
[Expired Password Warning]."Enter valid login and password"
[Logged In]."Exit"

Page 36: Automated Test Case Generation and Execution from Models

Sample generated JUnit case

import part11.Factory;
import part11.ILoginPasswordTestMonkey;
import part11.ILoginPasswordDataProvider;
import junit.framework.TestCase;

public class Part_11_9 extends TestCase {
    private ILoginPasswordTestMonkey monkey;
    private ILoginPasswordDataProvider dataProvider;

    protected void setUp() {
        // Get a user account data provider
        dataProvider = Factory.createLoginPasswordDataProvider();

        // Get a test monkey
        monkey = Factory.createLoginPasswordTestMonkey(dataProvider);

        // Start the system under test
        monkey.enterSystem();
    }

Page 37: Automated Test Case Generation and Execution from Models

Sample generated JUnit case …

    public void testMethod() {
        // Test using valid login and password
        monkey.gotoLoginPasswdPage();
        monkey.enterValidLoginPasswd();
        monkey.clickLogin();
        monkey.waitForPageToLoad();
        assertTrue("Valid user should login", monkey.isLoginSuccessful());

        // Test password and confirm-password mismatch
        monkey.clickChangePassword();
        monkey.enterMismatchPasswordConfirmPassword();
        assertTrue("Password mismatch not detected", monkey.isPasswordMismatchMsgShown());

Page 38: Automated Test Case Generation and Execution from Models

Sample generated JUnit case …

        // Test logout
        monkey.clickLogout();
        monkey.waitForPageToLoad();
        assertTrue("Valid user should be able to logout", monkey.isLogoutSuccessful());

        // Test using invalid login and password
        monkey.gotoLoginPasswdPage();
        monkey.enterInvalidLoginPasswd();
        monkey.clickLogin();
        monkey.waitForPageToLoad();
        assertTrue("Login should fail on invalid account", monkey.isLoginErrorMsgShown());
    }

Page 39: Automated Test Case Generation and Execution from Models

Testing of software components
• We can also use FAST to test source code
• API specs and existing test cases were used to develop the models
• Executable test cases were derived from the models
• Let us discuss two NASA systems:
  - GMSEC (modeling of Java APIs)
  - OSAL (modeling of C APIs)

Page 40: Automated Test Case Generation and Execution from Models


FAST @ NASA GMSEC

Page 41: Automated Test Case Generation and Execution from Models

FAST @ NASA GMSEC …
• State of the practice: test cases are hand-crafted
• A new initiative was started to evaluate the feasibility of the FAST approach
• Modeled a portion of the GMSEC software bus based on existing test cases and documentation
• Automatically generated test cases
• Found a few problems (since fixed)

Page 42: Automated Test Case Generation and Execution from Models

Hand-crafted test case (snippet)

public static void main(String args[]) {
    Status result = new Status();
    Connection conn = new Connection();
    ConnectionConfig cfg = new ConnectionConfig(args);

    // Create the connection
    result = ConnectionFactory.Create(cfg, conn);
    checkError(false, result, "Creating the connection object");

    // Disconnect
    result = conn.Disconnect();
    checkError(true, result, "Disconnecting before connection is established");

    // Connect
    result = conn.Connect();
    checkError(false, result, "Establishing the connection to the middleware");
} // ..main()

Page 43: Automated Test Case Generation and Execution from Models

Manually developed test cases – a source of inspiration
• We reviewed existing Java test cases
• Found that the testers used certain permutations of API usage
• Also, both good and "evil" cases were considered
• We used these test cases as a source of reference for building the API usage models

Page 44: Automated Test Case Generation and Execution from Models

FAST @ NASA GMSEC …

public interface IConnection {
    public Status Connect();
    public Status Disconnect();
}

APIs of the module under test
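From a model over this interface, generated tests can exercise API-usage permutations such as calling Disconnect before Connect (the "evil" case from the hand-crafted snippet above). Here is a minimal, hypothetical sketch of such a generated JUnit case; the Factory.createConnection() call and the Status.isError() accessor are assumptions for illustration, not the actual GMSEC API.

import junit.framework.TestCase;

// Hypothetical model-generated case over IConnection.
public class Connection_perm_1 extends TestCase {
    public void testDisconnectBeforeConnect() {
        IConnection conn = Factory.createConnection();   // assumed factory
        // "Evil" path from the model: Disconnect before any Connect
        Status result = conn.Disconnect();
        assertTrue("Disconnecting before connecting should fail", result.isError());
        // "Good" path: establish the connection
        result = conn.Connect();
        assertFalse("Connect should succeed", result.isError());
    }
}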

Page 45: Automated Test Case Generation and Execution from Models

FAST @ NASA OSAL
• Operating System Abstraction Layer
• Isolates flight software from real-time operating systems and hardware
• Implementations exist for the real-time operating systems RTEMS and VxWorks and for POSIX-compliant non-real-time systems
• Used for mission-critical embedded systems
• Provides support for file systems, tasks, queues, semaphores, interrupts, hardware abstraction, I/O ports, and exception handling

Page 46: Automated Test Case Generation and Execution from Models

FAST @ NASA OSAL …
• Why is it important that OSAL is bug-free?
• Flight software is mission-critical and needs to be of very high quality
• OSAL is the building block of the core flight software product line
• OSAL is used in many NASA missions, e.g., the Lunar Reconnaissance Orbiter
• If OSAL has issues, the result can be catastrophic failure

Page 47: Automated Test Case Generation and Execution from Models

FAST @ NASA OSAL …

Page 48: Automated Test Case Generation and Execution from Models

NASA OSAL – Architecture

Page 49: Automated Test Case Generation and Execution from Models

Sample APIs

/* Directory API */

// Makes a new directory
int32 OS_mkdir(const char *path, uint32 access);

// Opens a directory for searching
os_dirp_t OS_opendir(const char *path);

// Closes an open directory
int32 OS_closedir(os_dirp_t directory);

// Removes an empty directory from the file system
int32 OS_rmdir(const char *path);

Page 50: Automated Test Case Generation and Execution from Models

Example of an OSAL model

Page 51: Automated Test Case Generation and Execution from Models

Inside "Directory Handling"

Page 52: Automated Test Case Generation and Execution from Models

Inside "Open Directory"

Page 53: Automated Test Case Generation and Execution from Models

API doc of OS_mkdir

/*
    Name: OS_mkdir

    Purpose: Makes a directory specified by path.

    Returns: OS_FS_ERR_INVALID_POINTER if path is NULL
             OS_FS_ERR_PATH_TOO_LONG if the path is too long to be stored locally
             OS_FS_ERR_PATH_INVALID if path cannot be parsed
             OS_FS_ERROR if the OS call fails
             OS_FS_SUCCESS if success

    Note: The access parameter is currently unused.
*/
int32 OS_mkdir(const char *path, uint32 access);

Page 54: Automated Test Case Generation and Execution from Models

Inside "Open Invalid Directory"

Page 55: Automated Test Case Generation and Execution from Models

Sample IMonkey interface

int32 removeDirectoryValid(void);
int32 removeDirectoryPathNull(void);
int32 removeDirectoryPathTooLong(void);
int32 removeDirectoryPathUnparsable(void);
int32 removeDirectoryCurrent(void);
int32 removeDirectoryNotEmpty(void);
…

Page 56: Automated Test Case Generation and Execution from Models

Sample generated test in CuTest

void Testosal_Filesystem_min_2(CuTest* tc)
{
    status = makeFilesystemValid();
    CuAssertIntEquals_Msg(tc, "Filesystem could not be created", OS_FS_SUCCESS, status);

    status = mountFilesystemValid();
    CuAssertIntEquals_Msg(tc, "Filesystem could not be mounted", OS_FS_SUCCESS, status);

    pointer = openDirectoryValid();
    CuAssertTrue(tc, pointer != NULL);
    …
    status = removeFilesystemValid();
    CuAssertIntEquals_Msg(tc, "Filesystem could not be removed", OS_FS_SUCCESS, status);
}

Page 57: Automated Test Case Generation and Execution from Models

Issues found using FAST
• File descriptors after removing a file system:
  - After somewhat long tests we would run out of file descriptors
  - This would happen even with a newly created file system
  - OSAL does not remove the file descriptors of files that are open when the file system is removed
  - Consequence: unable to create and open files

Page 58: Automated Test Case Generation and Execution from Models

Issues found using FAST
• Some wrong error codes returned (and unimplemented features)

Test function                   | Error message                  | Expected      | Actual
checkFilesystemValid()          | Filesystem not checked         | OS_FS_SUCCESS | OS_FS_UNIMPLEMENTED
copyFileLongSourceFilename()    | Filesystem error code expected | OS_FS_ERROR   | OS_FS_ERR_NAME_TOO_LONG
copyFileNonExistingSourceFile() | Filesystem error code expected | OS_FS_ERROR   | OS_FS_SUCCESS
renameFileLongSourceFilename()  | Filesystem error code expected | OS_FS_ERROR   | OS_FS_ERR_NAME_TOO_LONG

Page 59: Automated Test Case Generation and Execution from Models

Code coverage using FAST
• FAST helps achieve good code coverage!
• The new model improves on the old model by covering lines that were not covered during testing

[Chart: statement coverage (y-axis: 60–95%) of osfileapi.c and osfilesys.c for the new and old models, for generated test suites of size Min, 10, 25, 50, 100, and 1000]

Page 60: Automated Test Case Generation and Execution from Models

Code Coverage Analysis
• A permission other than OS_READ_WRITE is never passed when creating and opening files:

switch (access) {
    case OS_READ_ONLY: …
    case OS_WRITE_ONLY: …
    case OS_READ_WRITE: …
    default: …
}

• NULL parameters are never passed to several functions:

if (path == NULL || filestats == NULL)
    return OS_FS_ERR_INVALID_POINTER;

Page 61: Automated Test Case Generation and Execution from Models

Code Coverage Analysis …
• A string parameter of the critical length is never passed to several functions:

if (strlen(path) >= OS_MAX_PATH_LEN)
    return OS_FS_ERR_PATH_TOO_LONG;

• Calls to functions of the underlying operating system never return an error:

status = close((int) OS_FDTable[filedes].OSfd);
if (status == ERROR)

Page 62: Automated Test Case Generation and Execution from Models

MBT – Limitations
• Modeling requires a specification of the SUT
• Developers are not used to modeling
• Slower feedback compared to manual testing
• Some experience is needed to define the architecture of the models
• Difficult to document individual test cases
  - Note: models summarize all test cases
  - Some customers require documentation of each test case
• Difficult to locate the root cause of failures in (lengthy) generated test cases
  - Requires manual work to shrink failed test cases

Page 63: Automated Test Case Generation and Execution from Models

MBT – Limitations …
• For web-based testing:
  - Creation of the data provider requires effort to explore the SUT
  - If the IDs of web elements change or are dynamic, the tests may not work (a testability issue)
• During development, models and data providers may need rework because:
  - Requirements and APIs change (frequently)
  - Contracts are lacking

Page 64: Automated Test Case Generation and Execution from Models

MBT – Limitations …
• Test oracle problem for distributed publish-subscribe systems
  - Cannot assign one test oracle to a given state and action
  - Due to the decoupling of publishers and subscribers in space and time
  - Requires runtime monitoring and global coordination models
• Difficult to execute generated tests in parallel due to shared resources (e.g., files)

Page 65: Automated Test Case Generation and Execution from Models

Our Experience with MBT – Summary
• We found some bugs during the initial exploration of the system
• Some spec issues were found during the modeling phase
• Some bugs were found during execution of the test cases
• It took some effort to get started with MBT
  - Learning a new approach, tools, etc.
• It definitely cuts testing effort and adds fun!

Page 66: Automated Test Case Generation and Execution from Models

Our Experience with MBT – Summary …
• Generate several hundred (or thousands of) ready-to-run test cases!
• The test code embedded in the model is reused to generate many test cases!
• Immediate return on investment
  - You get test cases for your current model that can be part of the daily build
• No editing of test cases
  - Source code changes mean updating the model and regenerating the test cases
• A software product that is very well tested!

Page 67: Automated Test Case Generation and Execution from Models

Thanks!
• Dr. Dharmalingam Ganesan ([email protected])
• Dr. Mikael Lindvall ([email protected])

Page 68: Automated Test Case Generation and Execution from Models

Acknowledgements
• NASA IV&V: Lisa P. Montgomery
• NASA Goddard Space Flight Center (GSFC):
  - GMSEC team: Lamont Ruley, Robert Wiegand, Sharon Orsborne
  - CFS team: Dave Mccomas, Nicholas Yanchik, Alan Cudmore
  - White Sands: Markland Benson
  - Sally Godfrey
• Fraunhofer CESE: Rance Cleaveland, Frank Herman, Myrna Regardie

Page 69: Automated Test Case Generation and Execution from Models

Acknowledgements – Interns
• Palmi Valgeirsson
• Gunnar Cortes
• Faraz Ahmed
• Henning Femmer
• Vignir Örn Guðmundsson

Page 70: Automated Test Case Generation and Execution from Models

Acronyms
• API: Application Program Interface
• CFE: Core Flight Executive
• CFR: Code of Federal Regulations
• CFS: Core Flight Software
• FAST: Fraunhofer Approach for Software Testing
• FDA: Food and Drug Administration
• GMSEC: Goddard Mission Services Evolution Center
• MBT: Model-Based Testing
• OSAL: Operating System Abstraction Layer