
Scenario testing of mobile webview-based native applications

Jolien Coenraets

Master's thesis submitted to obtain the academic degree of master of computer science engineering

Academic year 2011-2012
Ghent University, Faculty of Engineering and Architecture
Department of Information Technology
Chairman: prof. dr. ir. Daniël De Zutter

Promotor: prof. dr. ir. Frank Gielen
Advisor: Heïko Desruelle


Acknowledgment

This thesis could not have been written without the help of Heïko Desruelle, who served as my supervisor. I would like to thank him for his guidance and coaching during this year. He helped me focus on the subject at hand and sparked my interest in the academic world. Secondly, I would like to express my gratitude towards prof. dr. ir. Frank Gielen for believing in the subject I proposed and allowing me to investigate the matter. I gained a lot of insight into the subject that will prove useful in my future career.

I would like to thank my colleagues at G-flux, who helped by suggesting new angles for me to explore. I would also like to thank the Bike To The Moon committee at IBBT, without whom there would not have been a Moonbiker application or a case study.

Further, I would like to thank my fiancé Jenne for thoroughly proofreading this thesis and revising where necessary. I am grateful for the endless support I received from him and my parents during my studies.

Finally, I would like to thank everyone who contributed directly or indirectly to this thesis.

Jolien Coenraets, May 2012


Copyright notice

“The author(s) gives (give) permission to make this master dissertation available for consultation and to copy parts of this master dissertation for personal use. In the case of any other use, the limitations of the copyright have to be respected, in particular with regard to the obligation to state expressly the source when quoting results from this master dissertation.”

Jolien Coenraets, May 2012


Scenario testing of mobile webview-based native applications

by

Jolien COENRAETS

Master’s thesis submitted to obtain the academic degree of

master of computer science engineering

Academic year 2011–2012

Promotor: prof. dr. ir. F. GIELEN

Advisor: ir. H. DESRUELLE

Faculty of Engineering and Architecture

Ghent University

Department of Information Technology

Chairman: prof. dr. ir. D. DE ZUTTER

Summary

The fragmentation of the mobile market is pushing application developers to use web or webview-based native applications (also known as hybrid applications) instead of normal native applications. These allow greater code reuse, since they use web technology (HTML, CSS and JavaScript) to build an application. Unfortunately, testing web or webview-based native applications is hardly supported by the existing test solutions, even though such testing is very important, since it influences the user experience of the application. This dissertation sets out to create an automated scenario test solution for web and webview-based native applications. A proof of concept is developed which executes scenario tests on webview-based Android Cordova applications by using JUnit. This proof of concept is evaluated in a case study that tests four scenarios on an application of this type.

Keywords

Scenario testing, webview-based native applications, hybrid applications, Cordova, JUnit


Scenario testing of mobile webview-based native applications

Jolien Coenraets

Supervisor(s): prof. dr. ir. Frank Gielen, ir. Heïko Desruelle

Abstract—This article aims at creating an automated scenario test solution for web and webview-based native applications. These types of applications are used more and more because of the increasing fragmentation of the mobile market, but there are almost no test solutions for them. A proof of concept is developed which executes scenario tests on webview-based Android Cordova applications by using JUnit. This proof of concept is evaluated in a case study that tests four scenarios on an application of this type.

Keywords—Scenario testing, webview-based native application, hybrid application, Cordova, JUnit

I. INTRODUCTION

Developers of smartphone applications are increasingly suffering from the mobile market's fragmentation. Not only must the different operating systems, such as Android and iOS, be supported, but also their different versions, and this on a growing number of devices, each with its own hardware specifications. This fragmentation is especially a problem on Android. Developing native applications is resource-consuming, since each platform uses a different programming language, so no code can be reused. This causes a shift from native applications towards web or webview-based native applications (also known as hybrid applications). Since these applications run in a browser or webview and are programmed using web technology (HTML, CSS and JavaScript), the major part of the code can be reused on the different operating systems. Webview-based native applications make it possible to package the web technology as a native application and thus to distribute it through the app stores. [1], [2]

At the moment, testing these types of applications is a time-consuming activity, since most of the testing needs to be done manually. The large fragmentation requires the tests to take place on a large number of devices and thus magnifies the problem. Automation of this tedious testing task would be an extra advantage of using web or webview-based native applications.

Section 2 of this article discusses the development of a testing solution by reviewing the existing tools, envisioning an architecture and elaborating on the proof of concept. In Section 3 the proof of concept is used to test an application, and a conclusion is drawn in Section 4.

II. DEVELOPING A TEST SOLUTION

Several test solutions already exist for testing web applications or JavaScript code, such as Selenium or QUnit. Different W3C initiatives are dealing with this subject, and MobiWebApp is creating its own test solution for web applications [3]. However, all the existing solutions overlook webview-based native applications. The combination of a native application and code written in JavaScript leads to a special situation which makes the existing solutions inapplicable. Also, these solutions are focused on unit testing, while scenario testing is far more important for smartphone applications, since it tests the behavior of the application while the user is interacting with it [4]. Since smartphone applications behave differently in different situations (e.g. with or without an Internet connection), it is important that the application is tested in all these situations.

A. Architecture

The architecture was elaborated using the attribute-driven-design technique [5], with usability, extensibility and scalability as the most important quality attributes. During the first iteration, the Broker pattern was applied, which led to the architecture in Figure 1 [6]. The left side shows the components that interact with the user (the website and the command line) and the right side contains the components that execute the tests. The device connector connects the different devices to the test system and executes the tests on them, the scheduler schedules the tests, and the processor processes the results. The intermediate component provides the communication between both sides.
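The Broker role described above can be sketched with a toy registry: the user-facing components never call the test-executing components directly, but address them by name through the intermediate component. All names and the string-based protocol below are illustrative assumptions, not the actual implementation.

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of the Broker idea: server-side components
// (scheduler, device connector, processor) register themselves, and
// client-side components (website, command line) forward requests
// without knowing where or how the components run.
class Broker {

    // A server-side component that can handle a request.
    interface Component {
        String handle(String request);
    }

    private final Map<String, Component> registry = new HashMap<String, Component>();

    // Server-side components register themselves under a name.
    void register(String name, Component component) {
        registry.put(name, component);
    }

    // Client-side components forward a request by name and get the reply.
    String forward(String name, String request) {
        Component c = registry.get(name);
        if (c == null) {
            throw new IllegalArgumentException("unknown component: " + name);
        }
        return c.handle(request);
    }
}
```

With such an intermediate in place, the website could ask the scheduler to run a test without any direct dependency on it, which is what makes the two sides of Figure 1 independently extensible.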

During the elaboration of the different components, the device connector emerged as the most challenging one. The automated execution of tests on webview-based native applications in particular seemed to be a challenge. The partly native applications could be tested using a native testing framework, but only if the JavaScript code of the application could be accessed from within the test. It was decided to create a proof of concept for the device connector, which could be integrated into a larger test solution later on.

B. Proof of concept

The proof of concept tests Cordova applications (also known as PhoneGap applications [7]) on Android devices by using JUnit [8]. Cordova was chosen because it is a widely used open-source framework for creating webview-based native applications. The proof of concept could not address every operating system within the time limit. Android was chosen for its open-source character, but also for its integration with JUnit. Android has extended JUnit with several features that alleviate the testing of Android applications, e.g. manipulation of the application's life cycle. JUnit was meant for unit testing, but it can be used for scenario testing as well by treating a scenario as a unit.
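The "scenario as a unit" idea can be illustrated without any Android machinery: one test method walks through an entire user scenario step by step instead of exercising a single method in isolation. The Ride class below is a stand-in invented for illustration only; it is not part of the proof of concept, but mirrors the offline/online behavior discussed later in the case study.

```java
// Sketch of "a scenario as a unit": one JUnit-style test method covers
// a whole user scenario. The Ride class is a hypothetical stand-in for
// the application under test.
class ScenarioSketch {

    // Minimal stand-in for application state driven by user actions.
    static class Ride {
        private boolean online = true;
        private double total = 0;     // kilometers cycled
        private double unsynced = 0;  // kilometers not yet sent to the website

        void setOnline(boolean online) { this.online = online; }
        void cycle(double km) { total += km; unsynced += km; }
        void sync() { if (online) unsynced = 0; }  // offline: data stays queued
        double unsyncedKm() { return unsynced; }
        double totalKm() { return total; }
    }

    // One scenario = one "unit": cycle without a connection, then sync
    // once the connection is back.
    static void offlineRideScenario() {
        Ride ride = new Ride();
        ride.setOnline(false);
        ride.cycle(5.0);
        ride.sync();                          // no connection: nothing is sent
        check(ride.unsyncedKm() == 5.0, "data should be queued while offline");
        ride.setOnline(true);
        ride.sync();                          // connection back: data is sent
        check(ride.unsyncedKm() == 0.0, "queue should be empty after sync");
        check(ride.totalKm() == 5.0, "total distance should be preserved");
    }

    private static void check(boolean ok, String message) {
        if (!ok) throw new AssertionError(message);
    }
}
```

In the real proof of concept, the same shape of test method would drive the actual application on a device or emulator rather than a stand-in object.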

Fig. 1. Overview of the architecture

Cordova creates a bridge from JavaScript to Java and back, but the proof of concept needs the other direction, since the application has to be manipulated from within a JUnit test, which is written in Java. This is implemented by creating a new class which can be reached from the activity of the application, which in turn can be reached from the JUnit test project. The Cordova plugin mechanism is not used, since it is designed to bridge the gap from JavaScript to Java and not the other way around. The proof of concept succeeds in manipulating the HTML elements within the application, such as firing events on them (clicking buttons) or adapting or requesting the content, properties or attributes of an element. The proof of concept also allows sending commands to the device or to the Android emulator. The native buttons, such as menu and back, can be controlled on the device as well as on the emulator. Other functionality targets only the emulator, such as adapting the environment by changing the battery status or the voice connection. By making use of the emulator, the user's location or phone calls can be mimicked too.
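In practice, reversing the bridge direction amounts to the Java test handing JavaScript strings to the application's webview (e.g. via a call like loadUrl("javascript:...")). The following is a minimal sketch of the string-building side only; the class and method names are purely illustrative and do not reflect the actual proof of concept's API.

```java
// Hypothetical helper that builds the JavaScript snippets a JUnit test
// would push into the Cordova webview to manipulate HTML elements.
class JsCommandBuilder {

    // Build a snippet that fires a click on an element by id.
    static String clickById(String elementId) {
        return "javascript:document.getElementById('" + elementId + "').click();";
    }

    // Build a snippet that hands an element's text to a native-side
    // callback object, so the test can read content back.
    static String readTextById(String elementId, String callback) {
        return "javascript:" + callback + ".onResult("
                + "document.getElementById('" + elementId + "').innerText);";
    }
}
```

A test could then pass clickById("startButton") through the bridge class described above to simulate the user pressing a button inside the webview.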

The development of the proof of concept revealed several problems. At the moment, it is not possible to handle pop-up dialogs, since the test project cannot access the object representing them, which means their buttons cannot be clicked. No screenshots of the device can be taken, only screenshots of the application itself. This may misrepresent the application, since it does not show what is actually happening on the device screen: another screen might be on top, hiding the application under test. Once the application is brought to the background, there is no way of bringing it back to the front. This may cause problems if native buttons like menu are used, since these act on the application in the foreground. Files that are needed or created during the tests cannot be pushed to or pulled from the device automatically; this needs to happen manually before or after running the tests. The same applies to the collection of artifacts such as screenshots or log messages.
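For reference, the environment manipulation mentioned earlier and the manual file transfers mentioned here rely on standard Android tooling: the emulator console (reached over telnet) and adb. These commands require a running emulator or connected device, so they are shown as a transcript rather than a runnable script; the file names are placeholders.

```shell
# Connect to a running emulator's console (default port 5554).
telnet localhost 5554

# Inside the console: change the environment the application sees.
power capacity 15          # set the battery level to 15%
gsm voice off              # drop the voice connection
gsm data off               # drop the data connection
geo fix 3.7174 51.0543     # mimic a GPS fix (longitude latitude)
gsm call 0123456789        # mimic an incoming phone call

# Outside the console: move test files manually with adb, since the
# proof of concept cannot push or pull them automatically.
adb push routes.gpx /sdcard/routes.gpx
adb pull /sdcard/screenshot.png ./artifacts/
```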

Because of these problems, the proof of concept can only be applied to a limited number of applications or scenarios. These problems have to be solved before this solution can be integrated into a larger test solution.

III. CASE STUDY

The case study tests the Moonbiker application of the smartphone application company G-flux [9]. The Moonbiker application tracks the route of a user while cycling, registers the cycled distance and sends this result to the website to generate rankings and statistics. This application is created with Cordova and makes intensive use of GPS technology. The application behaves differently in different situations. If there is no Internet connection, for example, the application will save the data and send it at a later time when there is a connection. This combination of GPS technology, a webview-based native application and the different actions in different situations makes it very hard to test this application. An automated test solution would be a great improvement for G-flux.

Fig. 2. Example of an error that can be detected by checking an HTML field value

Four scenarios were tested during the case study; two of them make use of the GPS technology (while cycling), and each scenario reacts in a different way depending on whether there is an Internet connection or not. One of these scenarios includes accepting an incoming call while cycling. In the end, only two of the four scenarios could be tested because of the limitations and problems of the proof of concept mentioned before. Still, several bugs could be found and solved. G-flux mentioned that they could not have found these bugs without the proof of concept, or that it would have taken a lot of time spent searching and debugging.

Figure 2 shows an example of a bug that can be detected by checking an HTML field value in this application: it should not be possible to cycle a negative number of kilometers.
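A check like the one Figure 2 illustrates can be phrased as an assertion on the field's text once it has been read back through the bridge. The sketch below assumes a hypothetical "12.4 km" field format; the real application's format is not shown in this text.

```java
// Hypothetical helper for asserting on the distance field read back
// from the webview. The "12.4 km" text format is an assumption made
// for illustration only.
class DistanceCheck {

    // Parse the numeric part of a distance field such as "12.4 km".
    static double parseKilometers(String fieldText) {
        return Double.parseDouble(fieldText.replace("km", "").trim());
    }

    // A cycled distance can never be negative.
    static boolean isValidDistance(String fieldText) {
        return parseKilometers(fieldText) >= 0.0;
    }
}
```

A scenario test using such a check would fail on a field showing "-3.2 km", which is exactly the class of bug shown in Figure 2.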

IV. CONCLUSION

The proof of concept proves that native testing solutions can be used to test webview-based native applications. It also shows that scenario tests can be executed in different situations, which alleviates the testing effort and improves the quality of the applications. Unfortunately, the proof of concept is still a work in progress, since the problems discussed impose large limitations on the types of applications or scenarios that can be tested with it. Once these problems are solved, this solution could be integrated into an automated scenario test solution.

REFERENCES

[1] Adam M. Christ, “Bridging the mobile app gap,” Sigma: Inside the digital ecosystem, vol. 11, pp. 27–32, October 2011.

[2] Maximiliano Firtman, Programming the Mobile Web, O’Reilly, first edition, 2010.

[3] W3C, “MobiWebApp Test Suites Report Year 1,” http://mobiwebappw3c.files.wordpress.com/2011/10/testing_report-year1.pdf, September 2011.

[4] Cem Kaner, “An introduction to scenario testing,” http://www.kaner.com/pdfs/ScenarioIntroVer4.pdf, June 2003.

[5] Software Engineering Institute, “Attribute-Driven-Design method,” http://www.sei.cmu.edu/architecture/tools/define/add.cfm.

[6] Len Bass, Paul Clements, and Rick Kazman, Software Architecture in Practice, Second Edition, Addison-Wesley Professional, 2003.

[7] Cordova, “Apache project website,” http://incubator.apache.org/cordova.

[8] JUnit, “Project website,” http://www.junit.org.

[9] G-flux, “Company website,” http://www.g-flux.com.



Contents

Table of contents
Glossary

1 Introduction
1.1 Context
1.2 Goal
1.3 Overview

2 State of the art
2.1 The mobile ecosystem
2.2 Technologies
2.3 Testing and debugging mobile applications
2.4 Working groups and projects
2.5 Conclusion

3 Design of the test framework
3.1 Vision
3.2 NABC
3.3 Scenarios
3.4 Architecture
3.5 Conclusion

4 Technical research
4.1 Cordova
4.2 JUnit for Android
4.3 Android emulator
4.4 Weinre

5 Proof of concept
5.1 Specifications
5.2 Implementation
5.3 Usage
5.4 Encountered problems
5.5 Conclusion

6 Case study
6.1 Situation of the case study
6.2 Test scenarios
6.3 Implementation of the test scenarios
6.4 Evaluation
6.5 Conclusion

7 Conclusions
7.1 Future work

A Scenarios
A.1 Terminology
A.2 Quality attribute scenarios
A.3 Functional scenarios

B Screenshots Moonbiker application

Bibliography
List of Figures
List of Tables


Glossary

adb

Android Debug Bridge — command line tool to communicate with an Android device

or emulator. 72, 77

API

Application Programming Interface — a specification that describes the functionality

of a software component and how the functionality should be used. 16, 17, 22

app store

application store — an electronic market that contains all the applications for a

certain OS. 10, 13

continuous integration

automated build process that aims to improve the quality of the software. Often
includes an automated build environment, automated test runs, deployment and a code
repository. 25, 37, 40, 58, 96

DOM

Document Object Model — a platform and language-neutral interface that allows
dynamic access and updates of the content, structure and style of an HTML
document. 16, 17, 72

GPX

GPS Exchange Format — an XML format used to save geographical data like routes,
tracks and waypoints. 74, 84


hybrid application

another name for a webview-based native application. 10

KML

Keyhole Markup Language — an XML format used to save geographical data, mostly
used by Google. 84

native application

an application that has been developed for use on a particular mobile platform. 10

OS

Operating System — in this dissertation refers to a mobile operating system such

as Android, iOS or Windows Phone 7. 7

scenario testing

a software testing activity that focuses on testing specific use cases from the user’s

point of view. 23, 29

test runner

responsible for running a series of tests and gathering results for all of them. 32

W3C

World Wide Web Consortium — an international community that develops open

standards to ensure the long-term growth of the Web. 12

WAP

Wireless Application Protocol — a mobile wireless network protocol that provided
mobile Internet access on early mobile phones. 3

weakly typed

used for programming languages in which variables are of generic types rather than
specific types such as String, boolean, etc. 17

web application

an application that runs inside a browser. In this dissertation, web applications are
limited to mobile web applications (accessed on a mobile device). 10


Web IDL

an interface definition language that can be used to describe interfaces that are

intended to be implemented in web browsers. 31

web technology

in this dissertation refers to the combination of HTML, CSS and JavaScript. 10, 12

web widget

a small application that can be installed and executed in a web browser by an end

user. 19

webview

a native class that renders web pages and can be embedded inside a native application

to show contents created with web technology. 10, 19

webview-based native application

an application that is written in web technology but is deployed and executed as a

native application. Often called hybrid app. 10

XP

a software development methodology which is intended to improve software quality

and responsiveness to changing customer requirements. The full name is Extreme

Programming. 22, 25


Chapter 1

Introduction

1.1 Context

The fragmentation problem on mobile devices is becoming larger every day. This makes
testing mobile applications very resource-consuming, since they have to be tested on as
many targeted devices as possible. It is especially hard to test web or webview-based
native applications: these are created with web technology and cannot make use of the
native test solutions. A lot of mobile applications behave differently in varying situations,
such as with or without an Internet connection. Simulating these different situations is
not included in any automated test solution at the moment, but doing so would definitely
help developers to improve the quality of their applications.

1.2 Goal

The goal of this dissertation is to create a scenario testing solution for web and webview-based
native applications that is able to perform the tests in different environments. Later on,
this goal is refined to creating a proof of concept for the most challenging part of the
solution: automating the execution of scenario tests for webview-based native applications
by using the native testing framework.
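As a sketch of what such a scenario test could look like, consider the following JavaScript. It is purely illustrative: the runScenario helper and the app object are invented for this example and are not the framework developed in this dissertation, which drives a real webview instead.

```javascript
// Hypothetical sketch: runScenario and the app object are invented here;
// a real scenario test would drive the application through the webview.
function runScenario(name, steps) {
  var results = [];
  steps.forEach(function (step) {
    try {
      step.action();
      results.push({ step: step.description, passed: true });
    } catch (e) {
      results.push({ step: step.description, passed: false, error: String(e) });
    }
  });
  return { scenario: name, results: results };
}

// A stand-in application state, used instead of a real mobile application.
var app = { loggedIn: false, login: function () { this.loggedIn = true; } };

var report = runScenario('user logs in', [
  { description: 'user taps the login button',
    action: function () { app.login(); } },
  { description: 'user sees the home screen',
    action: function () { if (!app.loggedIn) throw new Error('not logged in'); } }
]);

console.log(report.results.every(function (r) { return r.passed; })); // prints true
```

The point of the sketch is the shape of a scenario: a named sequence of user-level steps, each of which can pass or fail, with the results gathered into one report.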


1.3 Overview

Chapter 2: State of the art

This chapter includes an overview of the mobile ecosystem, the different types of mobile

applications, how they are tested nowadays and the work in progress done by the W3C.

Chapter 3: Design of the test framework

The design of the test framework includes a vision, NABC, functional and non-functional

requirements and an overview of the architecture.

Chapter 4: Technical research

To start developing the proof of concept, some more profound research had to be done,

which is described in this chapter.

Chapter 5: Proof of concept

This chapter describes the proof of concept that has been developed for this dissertation.

It elaborates on the implementation and the problems that were encountered during the

development.

Chapter 6: Case study

The proof of concept is used to test G-flux’s Moonbiker application. Based on this case

study, an evaluation of the proof of concept could be made.

Chapter 7: Conclusions and future work

This dissertation is concluded with the evaluation of this proof of concept and the future

work to be done to improve it.


Chapter 2

State of the art

This chapter will describe the latest changes in the mobile industry, explain some old

and new technologies that are used to create smartphone applications and discuss several

tools that can be used to test them. Problems that are encountered when testing these

applications will be discussed, together with possible solutions.

2.1 The mobile ecosystem

Before the details of mobile applications are explained, an overview of the environments
in which they will be used and tested needs to be given. The article ‘The what, why
and how of mobile applications’ by Daniel Y. Na [1] gives a clear overview of this mobile
ecosystem, as does Chapter 1 (The Mobile Jungle) of the book ‘Programming the Mobile
Web’ by Maximiliano Firtman [2].

2.1.1 The history of mobile devices

The history of mobile devices starts in 1987 with the introduction of GSM, the first mobile
phone standard to find its place in the market. Twenty-five years later, the mobile ecosystem
has changed a lot, with the introduction of the smartphone and, recently, the tablet.

Mobile phones had little functionality in the beginning: they were meant for calling, and
that was it. Gradually, features were added, such as vibration mode, SMS, Bluetooth and WAP.


Together with the increased number of features, the number of users increased. Where
mobile phones were at first only for businessmen, in the end even six-year-old children had one.

Almost at the same time as mobile phones were released, the Personal Digital Assistant

(better known as PDA) was launched. These PDAs were like pocket computers. They had

touch screens and memory cards and could be connected to a computer to synchronize

the data. PDAs had useful applications like text editors and spreadsheet programs.

Around 1996, Nokia thought it would be better if these two devices were merged
into one, and the smartphone was born. Smartphones have changed a lot since then:
they now have GPS functionality, mobile Internet, e-mail functionality, lots of applications
and lots of memory capacity. Just like mobile phones and PDAs, these devices were at first
only for business people, but for the last couple of years they have been used by a broad
range of people, from children to the elderly.

Another type of mobile device is the tablet. Tablets are quite old: the first one was released
in the fifties, and during the nineties Apple released the Apple Newton, which never became
a success. Tablets only boomed recently (2010), with the introduction of multi-touch,
thinner screens and the launch of the iPad by Apple. They are characterized by their
bigger screen sizes and their focus on documents and browsing.

In this dissertation, mobile devices will be limited to smartphones and tablets, since a
network connection and a mobile browser are needed to use web applications. These
functionalities are often not available on other mobile devices.

2.1.2 Mobile brands and operating systems

There is a big variety of players on the mobile market, and it is not yet clear who will come
out on top. The mobile market is changing as we speak. Only one thing is certain: the
smartphone market is booming. Gartner studies show that smartphone sales grew by
47% during the fourth quarter of 2011 and are expected to grow by 39% during 2012
[3].


Figure 2.1: Worldwide smartphone sales to end users by operating system (in millions of units

sold), by Gartner [4, 5, 6, 3]

Smartphone sales

Smartphone sales more than doubled within two years, as visualized by Figure 2.1. Where
smartphones used to be for businessmen only, teens and adolescents are using them too
now, which leads to a big growth in sales. The good sales records are due to the popularity
of Apple iOS and Android: Android introduced budget smartphones, while Apple has a
loyal community behind its iPhone. There is a peak in smartphone sales during the
fourth quarter of each year due to the popularity of smartphones as a gift during the
holidays.

Operating systems

There was a shift in the leading operating system over the last two years. Figure 2.2
depicts a substantial growth of Android, which released its first smartphone only in
2008. At the beginning of 2010, Gartner expected that Android would be the no. 1
operating system by 2014, with a market share of 29.6% [7]. Android already reached this
share before the end of 2010.


Figure 2.2: Worldwide smartphone sales to end users by operating system (in % market share),

by Gartner [4, 5, 6, 3]

Android’s success is due to its open-source character. Vendors are willing to use it because
they can adapt it to their own wishes while still keeping the familiar Android interface towards
the users. The open-source approach also brings disadvantages for developers: it is hard to
know whether an application will work the same on all Android devices, precisely because each
vendor can make changes in the operating system. There is also a large variety of screen sizes
and hardware specifications. By using the Android Compatibility Test Suite, automatic tests
created with JUnit, which will be described in more detail in Sections 2.3.3 and 4.2, can
be run on different devices to see whether the application is compatible with all of them.
These devices need to be physically attached to the system, which means a lot of devices
have to be purchased and thereby increases the development cost. These different
issues resulted in a small drop in Android’s market share at the end of 2011, as shown in
Figure 2.2.

iOS is the operating system Apple uses for its iPhone, iPad and iPod. It is as popular as
these devices, since they are the only devices that use this operating system. Figure 2.2
shows that iOS’s market share is slowly increasing, due to the fact that Apple sells only
one smartphone model. During the fourth quarter of 2011, iOS took a part of Android’s
share with the release of the iPhone 4S.


The fact that iOS is only used on one type of smartphone certainly has advantages. A
developer knows which screen resolutions and sizes are available and on which hardware
specifications the application should work. Apple also offers great backward compatibility,
so an older device can still benefit from the latest novelties in the operating system.
On the other hand, developers depend entirely on the whims of Apple. Apple checks all
applications that pass through its application store (which is the only way to install an
application), and this process often takes a week or even more.

The other operating systems are losing market share to Android and iOS. Symbian is
the worst affected by the arrival of Android. It is the operating system that Nokia used
for its phones before it started using MeeGo, an open-source mobile operating system
it still uses for some devices but which has already been replaced by Tizen. Recently, at
the end of 2011, Nokia started to release smartphones that use Windows Phone. Symbian’s
market share will definitely shrink over the years, and possibly Windows Phone will
take over the leftover parts. Symbian was one of the first operating systems used on mobile
phones, but did not evolve when the other platforms did. This gave the operating system
a bad image.

BlackBerry OS is the operating system used for BlackBerry devices, which are produced
by RIM (Research In Motion). BlackBerry is no longer as popular as it was before.
It used to be the no. 1 for business people, but RIM faces a lot of competition from
Android devices and iPhones. Opinions on its future are divided: some people think
it will gain market share, others think it won’t.

Microsoft’s share has declined over the years, but with its new operating system Windows
Phone 7 this could change. The first phones are being sold at the moment of writing,
so it is too early to make predictions. The future will tell how this operating system will
evolve.

Fragmentation of the Android market

The Android platform is characterized by its fragmentation. There are several types of

fragmentation: by platform version, by device model and by OS version. The difference

between platform version and OS version is that Android releases the platform versions,

but everyone can adapt these versions and hence create a separate OS version. For example


Figure 2.3: Android platform distribution on May 1, 2012, by Android

Figure 2.4: Android platform historical distribution on May 1, 2012, by Android

Figure 2.5: Android device fragmentation as experienced by TweetDeck. Each color represents

a different device.


Samsung, HTC, Motorola and Sony all have Android 4.0 phones (same platform) with their

own adapted version of Android 4.0, leading to four different OS versions for one platform

version.

Figure 2.3 1 shows the platform fragmentation as it was on May 1, 2012. Android 2.3.3 has
almost two thirds of the share, but the other versions are taking their part too. As a developer
it is very hard to support all these versions, let alone test the application on all of them.
A big part of the problem is that device manufacturers don’t update the OS of the devices
they have sold. Chances are small that a device with Android version 2.2 will receive an
update to 4.0. A new version can only be obtained by buying a new phone, which most
people won’t do when their phone is only one or two years old. This leads to a lot of old
versions being in circulation. This is in contrast to Apple, which makes it possible for
devices that are several years old to be updated to the most recent OS version. Figure
2.4 2 depicts a historical chart of the Android platform fragmentation.

A second source is device fragmentation. There are lots of different Android devices
on the market, each having their own characteristics such as memory, processor, GPS
antenna, screen, presence of a hardware keyboard and many more. This leads to a different
performance on each device and influences the user experience. While some users think an
app is super fast, others may find it slow. For location-enabled applications, the quality
of the application depends on the quality of the GPS signal and hence also on the quality
of the GPS chip. Different screen sizes force developers to use responsive user interfaces.
The device fragmentation as experienced by the TweetDeck beta app in 2010 is shown in
Figure 2.5 3.

The last form of fragmentation is by OS version. Since it is hard to create a bug-free
operating system, each OS version solves some platform bugs but also introduces new
ones. Besides the versions released by the brands, people can create their own Android OS
version starting from the Android release, or download one from specialized websites such as
XDA Developers 4. A developer cannot assume that a certain type of device will have a certain

1. Graph taken from http://developer.android.com/resources/dashboard/platform-versions.html
2. Graph taken from http://developer.android.com/resources/dashboard/platform-versions.html
3. Graph taken from http://tweetdeck.posterous.com/android-ecosystem
4. http://www.xda-developers.com


OS version, since the user may have installed another one himself.

2.1.3 Smartphone and tablet applications

One of the characteristics that distinguishes smartphones (or tablets) from mobile phones

in general is the ability to install extra applications on the smartphone or tablet. These

applications can be downloaded from the Internet via the app stores of the different
platforms.

There are generally three different types of applications for smartphones and tablets:

native applications, web applications and webview-based native applications.

Native applications are applications that have been developed for use on a particular
platform or device. For example, development for iOS is done in Objective-C, while
development for Android is done in Java. Table 2.1 shows the language that each
platform uses.

Web applications are websites that represent mobile applications in which some or all

parts are downloaded from the web each time they run. These applications are written

with web technology, such as HTML, CSS and JavaScript.

Webview-based native applications are applications that are written with web technology
but are deployed and executed as native applications. These native applications
contain a renderer (called a webview) with a web page that contains the functionality.
Most of the time the application does not need to download parts from the Internet,
although this is still possible. These applications are often called hybrid applications,
but the term webview-based native applications is preferred here, since it better covers the
subject.
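The webview setup described above is usually paired with some bridge between the page’s JavaScript and the native side. The sketch below is hypothetical: the bridge object and its batteryLevel function are invented names for illustration (real frameworks such as Cordova define their own APIs), and a mock object stands in for the native side.

```javascript
// Hypothetical sketch of web code calling native functionality through an
// object the native side injects into the webview. The bridge shape is
// invented for this example; real frameworks define their own interfaces.
function getBatteryLevel(bridge) {
  if (bridge && typeof bridge.batteryLevel === 'function') {
    return bridge.batteryLevel(); // native code answers the call
  }
  return null; // plain browser: the native feature is unavailable
}

// A mock bridge stands in for the native side here:
var mockBridge = { batteryLevel: function () { return 87; } };
console.log(getBatteryLevel(mockBridge)); // prints 87
console.log(getBatteryLevel(undefined));  // prints null
```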

Comparison between native, web and webview-based native applications

These three types of applications all have their advantages and disadvantages. Before
starting with the development of a mobile application, it is important to choose the right
application type: choosing the wrong one can cause unneeded extra work or problems.
There is no application type that is always the right one.


OS            | Programming language              | Platforms it can be used on
--------------|-----------------------------------|----------------------------------
Android       | Java, portions can be in C or C++ | Only Android (because of Dalvik VM)
Symbian       | C++                               | Only Symbian
iOS           | Objective-C                       | Only iOS
RIM           | Java                              | Only BlackBerry (because of RIM API)
Windows Phone | C#                                | Only Windows Phone (not even on Windows Mobile, the predecessor)

Table 2.1: Programming languages on different platforms [8]

Figure 2.6: Development of native (left), webview-based native (middle, called hybrid) and web

applications (right). Source: [8]


Table 2.2 indicates that native applications have the best performance and can use the
most functionality. Their big disadvantage is the development effort if the application
needs to support multiple platforms, which is also shown in Figure 2.6. Each platform
has its own programming language, as shown in Table 2.1. It is hard for a developer
to master all these languages and APIs, so developing a native application for several
platforms requires a bigger programming effort. This problem can be solved with web or
webview-based native applications, since they use web technology to create the applications
(Tables 2.3 and 2.4). Only the web technology languages (HTML, CSS and JavaScript) have
to be learned, and all platforms can be supported. In webview-based native applications, a
small part of the code base has to be platform-specific and thus in the native language.

Native applications can make use of the native GUI elements, which is not possible for web
applications and only possible in a limited way for webview-based native applications. A
native user interface is very recognizable for the user and leads to a better user experience.
In a web or webview-based native application, the user interface is created in HTML. There
are frameworks, such as Sencha Touch, to create an interface similar to the native ones, but
the difference is often obvious. It is not only about the user interface itself, but also about
the interaction with the operating system. Each platform has its own way of interacting
with the user. Android, for example, comes with a back button embedded in the OS or in
the hardware, so it does not need a back button in the GUI. iOS does not have this back
button, since it expects it to be programmed in the user interface of the application itself.
A similar difference is the menu button, which is made available by Android but should
be implemented inside an iOS application. Developers can make their own choice
between ignoring the native UI habits and creating one user interface for all platforms, or
creating a different interface for each of them.

Most browser functionality is standardized by W3C, which means that it needs to go

through their standardization process 5. First, W3C receives a submission which they

turn into a note that is published for discussion. If the discussion on the note shows

that this is a valid request, a working group is created which, after a while, publishes a

working draft that contains the work in progress. Sometimes a candidate recommendation

is published too. Once the working draft reaches its final stage, it is turned into a proposed

recommendation which is still work in progress but will not change very much. The final

5. http://www.w3schools.com/w3c/w3c_process.asp


step is the W3C recommendation, which should be implemented by all browsers since it is
now a standard. This process often takes a lot of time (several years); therefore, some
browsers already implement a working draft while others don’t, leading to fragmentation
problems. Webview-based native platforms like Cordova (see Section 2.2.2) take care of
these problems by implementing working drafts on each platform. Native applications
can use functionality that is supported in a new platform version right away, without
waiting for any standardization between the different platforms.
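One common way web code copes with this kind of fragmentation is runtime feature detection. The sketch below takes a window-like object as an explicit parameter so it can be exercised without a browser; in a real web application this would simply be the global window object, and the mock is an assumption of this example.

```javascript
// Sketch of runtime feature detection. Passing the window-like object in
// explicitly lets the function be tested outside a browser.
function detectFeatures(win) {
  return {
    geolocation: !!(win.navigator && win.navigator.geolocation),
    localStorage: typeof win.localStorage !== 'undefined',
    canvas: typeof win.HTMLCanvasElement !== 'undefined'
  };
}

// A mock window stands in here, since no real browser is assumed:
var mockWindow = { navigator: { geolocation: {} }, localStorage: {} };
console.log(detectFeatures(mockWindow));
// → { geolocation: true, localStorage: true, canvas: false }
```

An application can branch on such a feature map and fall back gracefully where a browser has not yet implemented a given working draft.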

Web applications purely use the browser’s API, so they are limited in functionality. This
can be solved with webview-based native applications, where core functionalities, such
as the menu or back button on Android, can be accessed too. This requires native code,
so these extensions have to be written for each platform.

Since web applications and webview-based native applications run in a browser, they have
an extra layer that needs to process all the commands, resulting in a lower performance.
In some browsers JavaScript is not optimized, which is often noticed by the user and leads
to a bad user experience. This is in contrast to native code, which is usually optimized,
specifically for heavy operations or monitoring operations.

Native and webview-based native applications can be downloaded from an app store, which
makes the application available for everyone who wants to use it. The procedure to upload
an application is sometimes cumbersome, and the application is not readily available on
the market: it can take up to one week or more for iOS. A web application cannot
be downloaded from a store; a user needs the right link to find the application. Since
it is just a website, it depends on the developer alone when it becomes available. A
web application is a lot more convenient for applications that are only used for several days,
like promotional applications or conference apps.

A more in-depth analysis of this comparison can be found in [8] or [9].

2.2 Technologies

2.2.1 Languages

The term web technology indicates the combination of HTML (structure and content),

CSS (presentation) and JavaScript (behavior). It is a very powerful combination that is


Native applications

Advantages:
• Access to all core functionality
• Optimized performance
• Directly use new functionality
• Native user interface
• Use other applications
• Downloadable from the app store

Disadvantages:
• Every platform has its own language

Table 2.2: Advantages and disadvantages of developing native applications [8]

Web applications

Advantages:
• Same language for all platforms
• No download needed

Disadvantages:
• No core functionality (only browser functions)
• No optimized performance
• Wait for standardization before using new features
• Web interface

Table 2.3: Advantages and disadvantages of developing web applications [8]


Webview-based native applications

Advantages:
• Same language for all platforms
• Access to all core functionality
• Use other applications
• Downloadable from the app store

Disadvantages:
• Small parts still platform dependent
• No optimized performance
• Wait for standardization before using new features
• Web interface

Table 2.4: Advantages and disadvantages of developing webview-based native applications [8]

gaining popularity. The best example is the Metro interface of Windows 8, which relies

on the power of this combination.

HTML

HTML (HyperText Markup Language) is the main markup language for web pages. HTML

elements are the basic building blocks of web pages and web applications. HTML has an
XML-like structure and consists of tags (like <html>). These tags represent the building

blocks and elements, which can have several attributes and/or child elements. The HTML

standard is very important to the Web, since every web page uses this language to show

its content.

The first version of HTML was proposed by Tim Berners-Lee in 1991. Now, the HTML
standard is managed by the W3C (World Wide Web Consortium), which works together
with industry to create and implement new standards. After HTML4 was released (in
1999), the W3C decided to stop improving this standard and to replace it with a new one
(XHTML). The industry did not follow this vision, and in the early 2000s it started to
work on a new version of HTML itself. It created a new working group, independent
from the W3C, called the WHATWG (Web Hypertext Application Technology Working
Group). The goal of this new standard, called HTML5, was to make it easier for
developers to create web applications in web pages and thus not use HTML only for static
content. Several years later (in 2007), the W3C and the WHATWG joined forces to create a new


Figure 2.7: Structure of a Document Object Model Tree

HTML5 standard. Five years later, the standard is still in a working draft version, and
plans are to release it in 2014 [10].

HTML5 introduces several new tags that add new functionalities or that are semantic
replacements for common uses of the general <div> element. Important new APIs include
the canvas element that lets users draw 2D images within a browser, offline web
applications, drag-and-drop functionality, web storage that allows developers to save
key-value pairs, geolocation, file support and much more.
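The key-value idea behind web storage can be illustrated as follows. Since no browser localStorage is assumed here, a Map-based stand-in mimics its setItem/getItem/removeItem interface; in a browser one would call window.localStorage directly.

```javascript
// Map-based stand-in for the web storage interface (setItem, getItem,
// removeItem). Like real web storage, it stores keys and values as strings.
var storage = (function () {
  var map = new Map();
  return {
    setItem: function (k, v) { map.set(String(k), String(v)); },
    getItem: function (k) { return map.has(String(k)) ? map.get(String(k)) : null; },
    removeItem: function (k) { map.delete(String(k)); }
  };
})();

storage.setItem('highscore', 4200);
console.log(storage.getItem('highscore')); // prints 4200 (stored as the string '4200')
storage.removeItem('highscore');
console.log(storage.getItem('highscore')); // prints null
```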

Only in the last two years has HTML5 become mainstream, despite the standard not being
released yet. Steve Jobs announced in 2010 that Apple banned Flash because they believe
HTML5 is the future and will replace Flash in the end 6. Even Microsoft is jumping on
the HTML5 bandwagon, with its new Metro style for Windows 8 being developed with
HTML5. HTML5 is seen as a possible solution for the mobile market fragmentation, since
every smartphone has a browser.

The DOM — Document Object Model — is a platform and language-neutral interface that
allows dynamic access and updates of the content, structure and style of an HTML
document. It is standardized by the W3C and very often used when dealing with HTML pages.
In web and webview-based native applications, the DOM is accessed with JavaScript
methods and is altered to change the user interface, update text fields, dynamically load
records, etc. All HTML elements and tags, which are called nodes, can be accessed and changed,

6. http://www.apple.com/hotnews/thoughts-on-flash


just as their child elements, attributes, properties and contents. An example of a DOM

node tree is shown in Figure 2.7.
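This kind of DOM access can be sketched as follows. Because no browser is assumed here, the document object is a minimal hand-rolled stand-in that supports only getElementById; in a real web page the browser provides the full DOM API.

```javascript
// Minimal stand-in for the browser document, supporting only
// getElementById over a plain-object node tree.
function makeDocument(root) {
  function find(node, id) {
    if (node.id === id) return node;
    var children = node.children || [];
    for (var i = 0; i < children.length; i++) {
      var hit = find(children[i], id);
      if (hit) return hit;
    }
    return null;
  }
  return { getElementById: function (id) { return find(root, id); } };
}

var document = makeDocument({
  nodeName: 'html',
  children: [
    { nodeName: 'body', children: [
      { nodeName: 'p', id: 'status', textContent: 'offline' }
    ] }
  ]
});

// Updating a node at runtime, as a web application would:
var status = document.getElementById('status');
status.textContent = 'online';
console.log(status.textContent); // prints "online"
```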

JavaScript

JavaScript (JS) is a prototype-based scripting language that is dynamic, weakly typed
and general-purpose. It is a multi-paradigm language, supporting object-oriented,
imperative and functional programming styles.
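A minimal illustration of this dynamic, weak typing:

```javascript
// Dynamic typing: the same variable can hold values of different types.
var x = 42;          // a number
x = 'forty-two';     // now a string: no declared type constrains it
x = { answer: 42 };  // now an object

// Weak typing: '==' coerces operand types, '===' does not.
console.log(1 == '1');  // prints true (the string is coerced to a number)
console.log(1 === '1'); // prints false (no coercion)
```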

JavaScript is widely used as a client-side scripting language in web pages to enhance the user interface or make a website more dynamic. Many browser APIs are available, such as APIs to use the location of the user, play different types of media such as video or music, XMLHttpRequest, touch events and much more. There are also APIs for use on mobile devices that give access to mobile-specific functionality such as the contacts list. The DOM of a web page can be changed, which means that its contents can be changed at runtime. Since the language is weakly typed, it is very flexible. Code can even be injected and evaluated at runtime with the eval function, which takes a piece of code as a string and then executes it.
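For example, eval takes a string and executes it as JavaScript, returning the value of the last expression:

```javascript
// Code arriving as a string, e.g. injected at runtime
var snippet = 'var x = 2 + 3; x * 10;';

// eval executes the string and returns the last expression's value
var result = eval(snippet);
console.log(result); // 50
```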

A large disadvantage of such a flexible language is its error handling. JavaScript does not throw many errors; most of the time execution simply stops. On top of that, there is no step-by-step debugger for JavaScript. The only thing a developer can do to evaluate the code at runtime is to add console.log(...) statements that display a value, to see where exactly the code stops executing. The combination of these problems makes JavaScript a difficult language for developing complicated functionality.
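One common workaround for uninformative object logging, sketched below, is to serialize objects before printing them; a logger that shows only [object Object] hides the object's fields.

```javascript
var user = { name: 'Jolien', loggedIn: true };

// Naive string concatenation hides the fields of the object:
console.log('user: ' + user); // user: [object Object]

// Serializing first makes every field visible in the log output:
var dump = JSON.stringify(user);
console.log('user: ' + dump);
```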

Next to client-side scripting, JavaScript can also be used for server-side scripting. Whereas client-side JavaScript is executed in the user's browser, server-side JavaScript is executed on the server itself. Different libraries exist for server-side JavaScript; the best known is node.js 7. These libraries are the server-side scripting counterpart of PHP.

7 http://nodejs.org


CSS3

CSS (Cascading Style Sheets) is a style sheet language used for describing the presentation (look and formatting) of HTML web pages. It separates styling from content and structure, which improves flexibility in layout. The CSS code can be stored in a separate file; by applying a different style sheet to a website, a different look can be created.

CSS is easy to use: the class attribute indicates which style must be applied to an HTML element, and the CSS file lists the properties of each style. CSS is heavily used to give web and webview-based native applications a native look and feel. For each platform there is a separate CSS file that loads the correct look and feel.
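A minimal sketch of this mechanism (class name, file name and property values are illustrative): an HTML element declares class="button", and a platform-specific style sheet defines how that class looks.

```css
/* hypothetical ios.css, loaded only on iOS devices */
.button {
  background: linear-gradient(#fdfdfd, #d1d1d1); /* glossy, native-like look */
  border-radius: 6px;
  font-family: "Helvetica Neue", sans-serif;
}
```

Swapping this file for, say, an android.css with different property values changes the whole look without touching the content or structure of the page.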

2.2.2 Techniques to create web or webview-based native applications

Web or webview-based native applications can be created in various ways, with or without

the use of extra tools or libraries.

Adjust a website to create a web application

The easiest way to create a web application is to create a website that behaves differently on a mobile device, e.g. one that has a different layout, takes advantage of the user's location or uses smaller pictures. Little adaptation is needed to create such a web app: load a different CSS file when the user is on a mobile device, and maybe add some JavaScript. There is no need to change the content of the website; content that is not important on a mobile device can simply be omitted. These kinds of web pages are often called 'responsive', because they respond to different screen sizes. Since the result is an enhanced website, a browser and an active Internet connection are required. This technique only works well when the website is content-driven. If it contains a lot of JavaScript that needs to behave differently on different types of screens, adapting the website becomes much harder.

HTML also offers the possibility to make a website available offline: the files needed offline are listed in a manifest, and when the user visits the website, those files are downloaded to the device. When the user is offline, the cached files are used to show the website or web application 8.
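Such an HTML5 cache manifest is a plain-text file referenced from the page's html tag via the manifest attribute; the file names below are illustrative.

```
CACHE MANIFEST
# v1, resources copied to the device for offline use
index.html
styles.css
app.js

NETWORK:
# everything else still requires a connection
*
```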

Create a web widget

The term web widget can mean many things, but in this context it refers to a packaged web application of any degree of complexity, as described by the W3C 9. This can be a simple form, a shared calendar, a game or even a full-blown productivity application. Web widgets are created with web technology (HTML, CSS, JavaScript). All the resources are packaged in a zip file, to be run on a desktop, a mobile device or a server, with all the functionality that can be expected from a web app, including server access. When declared in the manifest, web widgets have access to local system resources as well, just like local applications. Web widgets are not completely specified yet, since they are part of the ongoing work of the W3C Web Applications Working Group.

Embed a webview in a native application

If a website needs to be transformed into a native application, this can be accomplished by using a webview, thus creating a webview-based native application. The application shows a single view, the webview, which displays the homepage of the website. If the application needs to be available offline, all the necessary HTML, CSS and JavaScript files must be included in the application package. Since a native application is created, it can be distributed through the app stores. Access to native functions can be obtained by using a bridge from JavaScript to native code and back.
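The JavaScript side of such a bridge can be sketched as follows. NativeBridge and its vibrate method are hypothetical names for an object the native host injects into the webview (for example via Android's addJavascriptInterface); only the pattern itself is the point here.

```javascript
// Calls the injected native bridge when present, and degrades
// gracefully when the page runs in a plain browser without one.
function vibrateIfSupported(bridge, ms) {
  if (bridge && typeof bridge.vibrate === 'function') {
    bridge.vibrate(ms); // crosses from JavaScript into native code
    return true;
  }
  return false; // no native host available
}

// In a webview the host would have set window.NativeBridge beforehand:
// vibrateIfSupported(window.NativeBridge, 500);
```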

Cordova

Creating a JavaScript-to-native bridge for all kinds of native functionality is a lot of work, especially when it has to be tested on all sorts of devices. Some platforms already provide such a bridge, but it works differently on each of them, so the code cannot easily be reused. That is why Cordova was created.

Cordova 10 is an HTML5 app platform that allows developers to author native applications with web technology and to get access to device APIs and app stores. It has been downloaded more than 500 000 times, there are thousands of Cordova apps in the stores, and it is used by companies such as Adobe, IBM and Fox. By using Cordova, these bridges to native code no longer have to be written, and developers can focus on developing the application itself.

8 http://www.w3.org/TR/offline-webapps
9 http://www.w3.org/TR/widgets-apis
10 http://incubator.apache.org/cordova

Cordova started in 2008 as an open source framework built on open standards, called PhoneGap, developed by Nitobi, a Canadian company building web applications. When Nitobi was acquired by Adobe in 2011, PhoneGap was renamed Cordova and its code was donated to the Apache Foundation, which means that it will stay open source [11]. Cordova can be used freely in free, commercial or open source programs, as it is released under the Apache License 11. Adobe maintains its own distribution of Cordova and still calls it PhoneGap 12. Many people still use the name PhoneGap instead of Cordova, but both refer to the same thing.

The framework offers support for native functions on iOS, Android, Windows Phone 7, BlackBerry, webOS, Symbian and Bada, but the best supported platforms are iOS, Android and Windows Phone 7. It includes support for different kinds of sensors (accelerometer, compass, geolocation and network) and native functionality (camera, contacts, file, media, notifications and storage). On top of that, it has a plug-in based architecture that allows developers to write their own plug-ins, which consist of partially native and partially JavaScript code, or to download them from the PhoneGap plugins GitHub repository 13.
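A short sketch of calling one of these Cordova APIs: the deviceready event and navigator.geolocation follow Cordova's documented interface, while formatPosition is an illustrative helper of our own. The guard keeps the snippet harmless outside a webview.

```javascript
// Illustrative helper: render coordinates for display.
function formatPosition(coords) {
  return coords.latitude.toFixed(4) + ', ' + coords.longitude.toFixed(4);
}

// Cordova fires 'deviceready' once the native bridge is available;
// only then may its device APIs be called.
if (typeof document !== 'undefined') {
  document.addEventListener('deviceready', function () {
    navigator.geolocation.getCurrentPosition(
      function (pos) { console.log('at ' + formatPosition(pos.coords)); },
      function (err) { console.log('geolocation error ' + err.code); }
    );
  }, false);
}
```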

What Cordova does not offer is a fully native user interface, with native views, buttons or lists. Visually attractive webview-based native applications can be created by making use of HTML5 and CSS3, or by using a GUI framework such as Sencha Touch 14, XUI 15 or jQuery Mobile 16, which already contains styles that make HTML elements look like native ones.

Next to the open source framework, Adobe also offers a build service, support and training. The PhoneGap Build service 17 allows developers to push their JavaScript, HTML and CSS code to the build server, where the service automatically creates applications that can be launched in the app stores of the different platforms. It offers a write-once, run-everywhere service, as long as no proprietary plugins are used. These services are paid services.

11 http://www.apache.org/licenses
12 http://www.phonegap.com
13 https://github.com/phonegap/phonegap-plugins
14 http://www.sencha.com/products/touch
15 http://xuijs.com
16 http://jquerymobile.com
17 https://build.phonegap.com

The internal structure of Cordova will be described in depth in Section 4.1.

Trigger.io

Trigger.io 18 is one of the direct competitors of Cordova. Its developers claim that it is much easier to create applications with Trigger.io than with Cordova: developers do not need to set up a native development environment such as Eclipse or Xcode, but can start developing right away. It also includes a debug tool, something Cordova does not offer.

Trigger.io only supports Android, iOS and Heroku, a web application platform. Windows Phone support is in beta and will be available soon. Native features can be accessed similarly to Cordova, but there is no plugin-based architecture that allows developers to extend the Trigger.io functionality.

Trigger.io is not open source, but a free license is available. Paid licenses offer support, training and access to beta functionality such as Trigger.io for Windows Phone.

Titanium Mobile

Another framework that is often mentioned in this context is Titanium Mobile 19. Titanium Mobile is a variant of the Titanium framework, which aims to make cross-platform native development easy. It compiles web-based code written in HTML, CSS and JavaScript to native applications. Technically speaking, it does not create web or webview-based native applications, because the result consists purely of native code without a webview. Titanium provides many APIs to access native functionality, and the user interface written in JavaScript is translated to a native user interface. Titanium is not open source, but the smallest license is free.

18 https://trigger.io
19 http://www.appcelerator.com/products/titanium-mobile-application-development


2.3 Testing and debugging mobile applications

Things are changing in the area of mobile web applications. It seems that web and webview-based native applications will be booming in the coming years, but this will only be the case if there is a developer-friendly way to test these applications on all the different devices. Although a lot of developers still underestimate the importance of testing, techniques such as Scrum, XP and agile programming are growing in popularity, especially in the mobile development sector. These techniques are all based on test-driven development (TDD), which means that tests are run each time a commit is made; the changes in the commit are only accepted if all tests succeed. This requires an automated build environment with a testing framework attached to it. And there lies the problem for JavaScript and webview-based native applications: a lot of testing has to be done manually, which consumes time that a developer would rather spend actually developing the application. These mobile web applications, and especially webview-based native applications, will only reach their full potential if they can be tested automatically. [12]

2.3.1 Different types of testing

There are different types of testing, each used for a specific purpose or scope. Functional testing, for example, is about testing the functionality of the application: testing whether the user is able to do what he or she is supposed to be able to do with the application. It is important to know that the application fulfills all the promised requirements. The counterpart of functional testing is called non-functional testing and means that non-functional requirements such as security, performance or usability are tested. This is important too, since it helps to prevent security or performance issues and avoids user complaints.

White-box and black-box testing are two opposite forms of testing that differ in the visibility of the code under test. When doing white-box testing, the tester has access to the internal code, data structures and algorithms of the application. White-box testing includes code coverage, to check that all the code is tested and no parts are forgotten, and private and public API testing, to see whether the interfaces work as expected under all circumstances. Black-box testing treats the code as a black box and runs the application with varying parameters to test different partitions of the input (equivalence partitioning) or to test the boundaries (boundary-value analysis). Grey-box testing is the intermediate form that has some knowledge of the underlying algorithms and structures while executing black-box tests.
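As a tiny illustration of boundary-value analysis, consider a hypothetical rule that a user must be at least 18: the interesting inputs sit directly on and around that boundary.

```javascript
// Hypothetical predicate under test
function isAdult(age) {
  return age >= 18;
}

// Boundary-value analysis probes the values around the limit:
var results = [17, 18, 19].map(isAdult);
console.log(results); // [ false, true, true ]
```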

Different levels of testing exist too. Testing at the most detailed level is called unit testing: a method used to test specific parts of the code at function level, which are called units. The units are tested with different types of input, to see whether they produce the correct output. If the underlying functions are correct, chances are higher that the total program will be correct. The next testing level is called integration testing. Its goal is to test whether the integration of several software components works well, such as the integration between the client and the server side. This is done by exposing the defects in the interfaces and interactions between the different components. Testing at the highest level is called system testing, which tests the total system or application after the different components have been integrated.

Regression testing is used to check whether old bugs have come back into the application. Each time a bug is solved, a test should be written that checks whether it has returned. These tests should be run at least before each release to prevent old bugs from popping up again.

When doing scenario testing, complete use cases are tested, where a use case is described as a list of steps to achieve a goal. The focus lies on a good user experience while users perform different tasks in the application. It is important to test the use cases under different circumstances and to check whether the application always reacts in the correct way. This is the type of testing that will be used in the proof of concept described in Chapter 5. [12, 13]
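A scenario can thus be seen as an ordered list of steps driven against the application. The sketch below makes that concrete with a stub app object; all names are illustrative and do not belong to any real framework.

```javascript
// Runs each step of a use case, in order, against an app driver.
function runScenario(app, steps) {
  steps.forEach(function (step) { step(app); });
  return app.screen; // the final state the scenario should end on
}

// Stub standing in for a real application under test:
var app = {
  screen: 'login',
  login: function () { this.screen = 'home'; },
  openSettings: function () { this.screen = 'settings'; }
};

var finalScreen = runScenario(app, [
  function (a) { a.login(); },        // step 1: the user logs in
  function (a) { a.openSettings(); }  // step 2: the user opens the settings
]);
console.log(finalScreen); // settings
```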

2.3.2 Influencing the environment

Mobile web apps and webview-based native apps are evolving from mobile versions of websites to full-featured apps. Developers are pushing the limits of HTML5, JavaScript and CSS, but also the limits of the smartphones themselves. These developers often would like to know the impact of their application on the state of the smartphone.

For some applications it is crucial that tests are performed in different environments and under different conditions, but with the large number of device types and the large number of possible influences, it is very hard to perform all these tests and to replicate bugs that only occur in certain circumstances. If these influences could be tested on a simulator or emulator, that would be a step forward, but it would be even better if those tests could be performed on real devices. It would be a huge help for developers and testers if they could perform automated tests in certain environments and on different types of devices.

Different situations a developer might want to test are:

• How does the application affect the battery under various conditions (for example

with screen on or off) or on different devices?

• How much memory does the application use in a certain scenario? Does it depend

on the type of device?

• Are the notifications called from JavaScript executed on all types of devices? Did

the phone vibrate at the right time and did the music start playing at that moment?

• Does the application act correctly if there is no Internet? What if the connection is

lost while downloading a file?

• Is the application showing the correct information if the user is in a certain place

(based on a GPS location)? Does it show the correct speed while moving?

• What if there is no carrier, or if the carrier type is changed?

• How does the brightness of the screen affect the battery when the application is

running? What happens when the screen is turned off?

• Does the application react appropriately when the device’s Bluetooth connection is

lost or found?

Unfortunately, there are not many testing platforms or tools that help with monitoring or adapting these variables on real devices. Native testing frameworks do offer the possibility to mimic certain information providers such as GPS, carrier and Internet connection, but these have not been integrated into any JavaScript testing framework so far.


2.3.3 Available testing and debugging solutions

Testing and debugging of native applications

Native application testing is well supported on all platforms. All native platforms offer testing solutions that enable developers to test their applications thoroughly. These testing solutions are able to control the user interface of the applications. On emulators or simulators, they can also manipulate the environment, such as the network and data connection or the GPS coordinates.

Applications developed for the Android platform are tested with JUnit 20. The process of testing native Android applications is described in Section 4.2. JUnit is an open source unit testing framework for Java that is very popular and often used as a reference in terms of testing. A test project that relates to a Java project is created; it is structured in test cases that contain several tests. In the tests (which are written in Java too), different methods are executed, and assertions (for example assertTrue) are used to check whether the results are as expected. When these tests are run, a report is generated that tells which assertions did not give the expected result and thus failed. Tests can run in any order, which is an advantage since the code is tested more thoroughly.

JUnit is very popular, due to its ease of use in automated build and test systems. In 'Test-driven development: concepts, taxonomy and future direction' [14], Janzen and Saiedian claim that JUnit is responsible for the popularity of test-driven development and XP techniques. Because JUnit is so often used as a reference for other languages, the family of JUnit-like testing frameworks is called xUnit.

When developing iOS applications, different tools can be used to automate the testing process. Apple recommends its own testing tool, UIAutomation 21, but little documentation can be found about it. In UIAutomation, the tests are written in JavaScript and they control the user interface of the application. The tool aims neither at test-driven development nor at use from within a continuous integration solution or other tool; it focuses on regression and user interface testing. Since little documentation is available and the source is not public, it can be hard to figure out how to use it.

20 http://www.junit.org
21 http://developer.apple.com/library/ios/#documentation/DeveloperTools/Reference/UIAutomationRef/_index.html


This is why many developers use other tools such as Frank 22 or FoneMonkey 23 (recently renamed MonkeyTalk). These succeed in fulfilling developers' testing needs, but are often paid solutions.

Debugging an application is no problem on either platform. All development tools have a debug mode, which enables breakpoints in the code and stops execution at these points. Developers can step through the code and meanwhile inspect the values of the variables at that moment. This step-by-step process is very useful for finding and solving bugs.

JavaScript testing

The behavior of web and webview-based native applications is programmed in JavaScript, which makes it important to find a way to test this JavaScript code. Different frameworks already exist, mainly for unit testing of JavaScript code; only the most prominent ones are described below. Most of these platforms are meant for server-side or client-side JavaScript that is executed on a server or in a 'normal' web browser. Web applications can be tested using one of these solutions, but it is much harder to use them for testing webview-based native applications, since the JavaScript is executed in a webview, not on a server or in a browser. Some tools publish their test results on a web page, which makes automation from outside the application impossible. All tools require that the application is altered to enable the test run, e.g. by adding a library and the test code. Testing this altered application cannot ensure that the application without these files still works fine, since the test frameworks often override JavaScript functions to be able to run the tests or process the results. Nonetheless, using some of these solutions in a specific way could help with testing webview-based native applications. All platforms described below are open source.

JSUnit 24 is a JUnit port to JavaScript. It provides similar functionality, but is written entirely in JavaScript, so it can be used directly in the browser. Tests are included in an HTML page which is shown in a browser, and the test results are shown on this HTML page. It was meant to test function results of web pages in desktop browsers. The JSUnit server allows automatic testing of the web page in different browsers at the same time. JSUnit requires an HTML page that contains the test and thus cannot be used for webview-based native applications, since the web page that contains the application itself cannot be shown at the same time. JSUnit is no longer maintained and has been replaced by Jasmine.

22 http://www.testingwithfrank.com
23 http://www.gorillalogic.com/testing-tools/fonemonkey
24 http://www.jsunit.net

Jasmine 25 is the successor of JSUnit, created by Pivotal Labs because they were missing a solid behavior-driven JavaScript test framework that did not need a browser environment to test the code. This means that Jasmine can be used to test a simple JavaScript project, a web page, a server environment, etc. It still offers the same functionality as JSUnit, but the syntax has changed and the functionality is better mapped onto JavaScript development. The framework provides test runners and suites and a lot of matchers and spies, which can be used to test callback events. Out of the box, Jasmine offers only very basic JavaScript support (e.g. the DOM cannot be accessed), but it is extension-based, which allows extra functionality to be added; there are extensions for DOM checking, HTML canvas checking, and many others. Jasmine can be integrated in a continuous integration environment and is actively maintained. It builds on several older JavaScript testing frameworks that are no longer maintained, such as JSUnit, JSpec 26, ScrewUnit 27 and JSSpec 28.
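Jasmine's style looks roughly as follows. The names describe, it and expect are Jasmine's documented API; the tiny stand-in implementations below only serve to make the sketch self-contained, a real project would load Jasmine itself instead.

```javascript
// Minimal stand-ins for Jasmine's API, so the spec below can run alone.
function describe(name, body) { body(); }
function it(name, body) { body(); }
function expect(actual) {
  return {
    toBe: function (expected) {
      if (actual !== expected) {
        throw new Error('expected ' + expected + ' but got ' + actual);
      }
    }
  };
}

// Code under test
function add(a, b) { return a + b; }

// A Jasmine-style spec: suites via describe, expectations via expect.
describe('add', function () {
  it('sums two numbers', function () {
    expect(add(2, 3)).toBe(5);
  });
});
```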

QUnit 29 uses a different grammar than Jasmine; with its modules and tests, the grammar is more similar to JUnit's. QUnit was originally designed to test jQuery, a lightweight JavaScript framework used to manipulate the DOM and to add animations and tools for rapid web development. It can test all forms of JavaScript code (server- and client-side), but it does need a browser to run the tests in and to display the visual report. It has start and stop methods that make it possible to test asynchronous methods (stop when entering the method and restart when the callback is received). It can be linked with continuous integration tools, although this is mainly supported for non-mobile browsers.

JsTester 30 allows validation of JavaScript code in Java. Native Java tests use a bridge to the JavaScript code and can execute the code by using eval. JsTester focuses on testing the results of methods and the existence of instances within JavaScript itself. It can be used together with JUnit. It is no longer actively maintained.

25 http://pivotal.github.com/jasmine
26 http://visionmedia.github.com/jspec
27 https://github.com/nkallen/screw-unit
28 http://code.google.com/p/jsspec
29 http://docs.jquery.com/QUnit
30 http://jstester.sourceforge.net/index.html

TestSwarm 31 provides distributed continuous integration testing for JavaScript. Its goal is to ease JavaScript testing on different kinds of browsers, both desktop and mobile. The main focus is not to create yet another JavaScript testing library, but to integrate with existing libraries such as QUnit, JSUnit, JSSpec, Selenium, etc. It contains test runners for most desktop and mobile browsers. TestSwarm uses a server-oriented architecture and saves all tests, configurations and browsers in a database. On each client, a test runner runs in the browser and pings the server for new tests that need to run in that type of browser. After a run (one test in one type of browser), the results are sent to the server, where a schematic overview shows which tests failed or succeeded in which types of browsers. Next to the test runners, TestSwarm proactively corrects bad results coming in from clients that are due to unreliable browser behavior: for example, the test runners act appropriately when the client loses its connection with the server, and tests are automatically run again if there was a problem related to any kind of time-out. TestSwarm is not easy to set up and cannot be used for webview-based native applications, since there are no test runners for webviews.

Selenium

Selenium 32 is the most famous tool for integration testing of web applications. It is best known for automating tests of web applications and websites in desktop browsers, but it recently made the switch to mobile browsers with the introduction of Selenium 2.0. Selenium offers scenario testing: developers can record or describe a test case that an actual user would execute, and run it against new code in many browsers. It is a testing tool that runs in the browser itself.

Selenium 2.0 was launched in July 2011; its most important change was the merge with WebDriver, to allow automated testing of web applications in mobile browsers. WebDriver had been developed earlier (2009) by Google to test their web applications, because they thought Selenium could not fulfill their needs. Now that the two are merged, testing web applications and websites in mobile browsers has become easier.

31 http://swarm.jquery.org
32 http://seleniumhq.org

Selenium can be used for webview-based native applications, but it needs to run in an adapted webview, which can introduce bugs that will not be found when using the default webview, or the other way around. Webview-based native applications often already run in an adapted webview, which cannot easily be replaced by the webview required for Selenium, since the Selenium webview lacks functions that are needed to run the webview-based native application successfully. To use Selenium 2.0 on mobile devices, the specific WebDrivers for Android or iOS, or the HTMLDriver, need to be used; other platforms are not supported yet.

Putting these disadvantages aside, Selenium has a lot of strengths too. It is a very good example of what a scenario testing framework should be able to do. It can run tests or test suites within the website or web application, and it provides a lot of commands that can be used for testing. It has three types of commands: actions (manipulate state, for example clicks), accessors (examine the state of the application and store it) and assertions (verify the state of the application). The tests can be written manually, or recorded with the record button and then adapted. Selenium can be used in a continuous build environment.
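The three command types can be illustrated with a stub in place of a real browser session. The stub and its methods are illustrative: they mirror Selenium's action/accessor/assertion terminology, not its exact API.

```javascript
// Stub "browser" holding a bit of page state.
var page = {
  title: 'Login',
  click: function (id) { if (id === 'submit') { this.title = 'Home'; } }
};

// Action: manipulate state, for example a click.
page.click('submit');

// Accessor: examine the state of the application and store it.
var storedTitle = page.title;

// Assertion: verify that the state is as expected.
if (storedTitle !== 'Home') {
  throw new Error('expected Home but got ' + storedTitle);
}
console.log('scenario passed, final title: ' + storedTitle);
```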

Debugging web and webview-based native applications

Debugging web and webview-based native applications is much harder than debugging native applications, since JavaScript does not have a debugger. If developers want to debug or inspect the code at runtime, the only option they have is to print regular console.log() messages that include the value of variables or that mark where in the code the bug is situated. Unfortunately, a lot of console loggers literally print [object Object] when an object consisting of multiple fields is printed. In that case, the log message has to be adapted and the code rerun to print all the necessary fields of that object.

Integrated debugging tools such as the WebKit Web Inspector 33, Opera Dragonfly 34 or Mozilla Firebug 35 allow inspection of an HTML page and its contents at runtime, in the browser itself. In WebKit browsers this inspection can even be started by right-clicking on the page and selecting 'Inspect element'. These tools show the DOM structure and the CSS styles linked to an HTML tag, and allow JavaScript code to be run on the website.

33 http://trac.webkit.org/wiki/WebInspector
34 http://www.opera.com/dragonfly
35 https://addons.mozilla.org/nl/firefox/addon/firebug

Weinre 36 is a debugging tool like those described above, except that it is designed to work remotely and, in particular, to let a developer debug web pages in a mobile WebKit-based browser or webview. The debugging view is not shown within the browser or webview that contains the application, but on a separate web page that can be accessed from a non-mobile browser, resulting in more space for the debugging view. This is not possible with the previously mentioned debugging tools, since they need to run in the browser itself. Weinre was developed by Patrick Mueller, who works on the Web Inspector debugger at IBM Emerging Technologies. He started working on Weinre in 2010 as an experiment, but it soon became a solid project. Weinre has always been closely related to PhoneGap, since it was practically the only way to debug PhoneGap applications. Since PhoneGap's switch to Cordova, Weinre is part of the Cordova project. Adobe runs its own Weinre server 37 that developers of PhoneGap applications can use freely. Section 4.4 explains how Weinre works internally [15, 16].

2.4 Working groups and projects

2.4.1 W3C Groups

The Mobile Web Initiative 38 (MWI) is the W3C initiative that bundles all W3C working groups, interest groups and projects that address the mobile Web. Its mission is to ensure that the Web will be available on as many kinds of devices as possible. Several of the MWI working groups work on standards for mobile web pages, web applications and the testing of these applications. Most of these groups were started only recently and are still evolving. This does not mean that these subjects are new or unstandardized, only that the standards have not yet been accepted by W3C. It indicates that there is a lot of interest in these techniques and a high demand for approved standards and procedures.

36 http://people.apache.org/~pmuellr/weinre
37 http://debug.phonegap.com
38 http://www.w3.org/Mobile


This demand is still increasing because of the large fragmentation of the mobile market, which makes it hard for developers to achieve high device coverage.

Web applications working group

‘The mission of the Web Applications (WebApps) Working Group is to provide specifications that enable improved client-side application development on the Web, including specifications both for application programming interfaces (APIs) for client-side development and for markup vocabularies for describing and controlling client-side application behavior.’ - W3C [17]

The Web Applications working group is part of the Rich Web Client activity and aims to simplify the development and packaging of web applications. W3C calls these packaged web applications ‘widgets’; they can be installed easily on all kinds of devices. The group has already created many APIs and guidelines, such as Web IDL, a language for defining web interfaces; a web sockets API that handles two-way communication between client and server; a web workers API that emulates threading in web technology; and web intents, the web equivalent of Android Intents. The group's charter runs until May 2014.

Device APIs working group

‘The mission of the Device APIs and Policy Working Group is to create client-side APIs that enable the development of Web Applications and Web Widgets that interact with device services such as Calendar, Contacts, Camera, etc. Additionally, the group will produce a framework for the expression of security policies that govern access to security-critical APIs (such as the APIs listed previously).’ - W3C [18]

The Device APIs working group was started in 2009 and is now beginning to publish results. Its most important goal is to create APIs, described in Web IDL, that standardize mobile device features such as vibration, notifications or battery status. This is useful for the collection of mobile device data in this dissertation, such as battery status information. Most of the results are still working drafts, but companies are already starting to implement these APIs. The group should be finished in July 2013.


Browser testing and tools working group

‘The mission of the Browser Testing and Tools Working Group is to produce technologies for use in testing, debugging, and troubleshooting of Web applications running in Web browsers.’ - W3C [19]

The goal of this group is to deliver APIs that can be used in all browsers to test web applications. It focuses especially on simulating user actions and reporting more information to the developer, such as console messages. Eventually, these APIs should be included in all browsers. The group started only recently (October 2011) and should be finished by December 2013.

Web testing interest group

‘The mission of the Web Testing Interest Group, part of the Web Testing Activity, is to develop and deploy testing mechanisms and collateral materials for testing of Web technologies. In particular, tests developed as well as the testing framework should work on non-desktop devices such as mobile devices, web-enabled television sets etc.’ - W3C [20]

This dissertation fits perfectly within this interest group. The group was started only recently (August 2011) as part of the Web Testing Activity, which aims to simplify the testing of web technologies of all kinds. Within this interest group, people are looking for solutions such as testing frameworks, test runners that run a series of tests, test reporting, test suites, etc. The group should be finished by December 2013.

2.4.2 Projects

MobiWebApp

‘The MobiWebApp project supports the use of Web technology for developing mobile Internet services, bringing the advantages of Web applications from the desktop to the mobile world. MobiWebApp includes support for European outreach, training, the development of test suites and standardization in the area of mobile Web applications.’ - MobiWebApp [21]


MobiWebApp 39 is a W3C project focused on disseminating the results of the Web Applications working group to European developers and technology providers. The project is supported by the European Commission for Information Society and Media and runs from September 2010 until August 2012. It works closely with the Web Testing Interest Group and founded the Browser Testing and Tools Working Group. The project description [21] states: “To increase platform interoperability, MobiWebApp supports the creation of test suites for standards relevant for mobile Web applications.” An automated testing framework for web and webview-based native applications fits squarely within this goal. In fact, they have already started to work on one themselves.

After their first year of work, MobiWebApp published their results on testing in a test report [22]. In this document they describe the architecture of a test framework that they plan to develop during the second year of the project (from October 2011 until August 2012). As shown in Figure 2.8, they want to create a complete test suite that supports automation. Figure 2.9 depicts how a test case could be deployed. All the requirements for this test framework are written down in the test report.

This test framework can be a major step forward, but it is focused solely on testing web applications, not webview-based native applications, so there is still no solution for the latter. Ideally, a solution for webview-based native applications would meet the requirements for a test framework described in this project.

MOSQUITO

‘MOSQUITO will help identify barriers of fragmentation which could prevent the full development of mobile Internet applications and services. MOSQUITO will support the standardization of mobile Internet services and promote collaboration in the industry. MOSQUITO will support actions relating to interoperability for mobile applications and services. MOSQUITO will give its support to the cross-sector convergence of IT, Telecoms and Media within the context of mobile Internet applications and services.’ - MOSQUITO [23]

39 http://mobiwebapp.eu


Figure 2.8: Integration of test suite development and specification authoring [22]

Figure 2.9: Test case deployment on W3C test server [22]


MOSQUITO 40 is a European project that works closely with MobiWebApp. MOSQUITO's target is to alleviate the barriers of internal fragmentation within smartphone application development. The project investigates what causes this internal fragmentation, how it could be solved and which test cases should be run to ensure that an application works as well as possible on all types of devices. The most frequent fragmentation issues are API specification violations, APIs that behave differently on different devices, missing libraries and graphics performance. Since iOS is less fragmented than Android, the identified problems and solutions mainly target Android [24].

Within the MOSQUITO project a list of test cases was created that should be run to test the parts of an application that are sensitive to internal fragmentation problems [25]. These test cases are sometimes very straightforward (‘If the application is supposed to write files, can it write a file?’) but necessary, since these functions should actually work in each browser. The document describes a test framework that lists all these tests and lets testers fill in the results manually. This is a good start, but developers will not use this solution if the results cannot be generated automatically. It is also not possible to indicate the device type of a test run, which may influence the test results. It would be better if a tester could indicate the device used when testing, to see which devices cause bugs and which do not.

Webinos

‘The webinos project will define and deliver an Open Source Platform and software components for the Future Internet. The main platform delivery will be in the form of web runtime extensions, and complementary infrastructure components.’ - webinos [26]

Webinos focuses not only on mobile web applications for smartphones, but also on applications for home media, in-car units and PCs. Webinos works together with MOSQUITO and MobiWebApp, but the project's focus is much wider. Besides the scope, the goal is different too: where MobiWebApp wants to inform people about the capabilities of web applications, webinos wants to create a new platform for building these mobile web applications, as a reaction to the increasing fragmentation in the market of mobile

40 http://www.mosquito-fp7.eu


devices. Because this is a large project — running from September 2009 until August 2013 and supported by the European Commission for Information Society and Media — that may be widely used in the future, it is important that a new test framework can support these types of applications too.

2.5 Conclusion

Due to both internal and external fragmentation on mobile devices, webview-based native applications are growing in popularity. Before this technique becomes widely used, a better testing solution for this type of application should be available. This testing solution should be able to automatically run tests under different circumstances and allow interaction with the user interface. It should support as many devices as possible, as well as emulators and simulators. Due to the lack of a debugger, the very basic error handling and the flexible nature of JavaScript, testing and debugging is much harder for applications that use JavaScript than for native applications.

There are some platforms to test web applications that can be run in a browser, but the more device-specific APIs like contacts or camera are used, the bigger the testing problem gets. Webview-based native applications are left out of the picture entirely: they can use neither the common native testing tools nor the common JavaScript testing tools, and they cannot be tested within browsers since they use native code as well.

The W3C groups and projects were started only recently, and most of them have few results yet with respect to a test framework. MobiWebApp is working on one, but it is focused on web applications. MOSQUITO has a few test cases (more guidelines than actual tests) to check an application for cross-platform compatibility, and webinos focuses on web applications for a broader range of devices like TVs, cars and fridges.

There is definitely a need for a test framework that supports the testing of web and webview-based native applications, preferably under different conditions such as with or without a network connection, while moving, or when low on battery. It is also very important to add as little test-specific code as possible to the application, since such code can introduce extra bugs that are not there when it is omitted, or ‘fix’ bugs that are there when it is omitted. Lastly, it would be a major advantage if the test framework could be used on real devices as well as on emulators, and if it could be automated within


a continuous integration environment.


Chapter 3

Design of the test framework

In this chapter the design of the test framework is explained. It starts with a vision that clarifies the mission, key features and quality attributes. The NABC describes the needs, approach, benefits and competitors of such a framework. Several scenarios are described and an architecture is suggested.

3.1 Vision

3.1.1 Mission

Develop a system to test user scenarios on mobile web applications and webview-based native applications. These tests should be run under different circumstances and with different device-specific features, and the results should be reported in a clear way. This test system should be able to access JavaScript code and DOM objects, in contrast to other test systems that can only access native elements and code.

3.1.2 Key features

Describe scenarios and test cases in a cross-platform way

The main goal of the test system is creating test cases for user scenarios, thus simulating user behavior to see whether the user perceives the application correctly on each type of device. Since the goal is to alleviate the testing effort, a cross-platform approach is chosen, which means that the tests can be reused as much as


possible across different platforms and types of devices. Each test case represents a user scenario and is split up into several smaller tests that each test a specific functionality.

Describe the circumstances under which test scenarios need to run

Sometimes an application needs to behave differently in different environments, which leads to an exponentially growing number of test cases. With this test system a tester can describe these environments, let the tests run automatically in all of them and check whether the output is as desired. Environments can relate to battery status, GPS signal, network connection and much more.
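A hypothetical notation for such an environment description might look as follows. All names and the object shape are purely illustrative; they are not an existing API.

```javascript
// Hypothetical sketch: pairing a test scenario with the environments it
// must run in. The object shape is invented for illustration only.
const environments = [
  { name: "offline",     network: "none", battery: 100 },
  { name: "low battery", network: "wifi", battery: 5 }
];

const scenario = {
  name: "login shows a helpful message when offline",
  steps: ["open login page", "enter credentials", "submit"]
};

// A runner would execute every (scenario, environment) combination
// automatically, instead of requiring a hand-written test case for each.
const runs = environments.map(env => ({
  scenario: scenario.name,
  environment: env.name
}));

console.log(runs.length); // 2
```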

Run the test scenarios under different circumstances and on different types of devices

Once the test cases and the associated environments are described, the tests can be run automatically on a series of devices specified on the basis of a list of device characteristics. The tests are actually run on all these devices, so they will give the same results as in real-life situations.

Monitor execution-specific statuses during testing

During the execution of the tests, different execution-specific statuses can be monitored, such as memory usage, battery usage or data traffic. This data is accessible after the tests are finished, and the results of different devices can be compared.

Plugin-based testing functionality

The mobile ecosystem is changing rapidly, which is why a plugin-based model was chosen to implement the testing features. There can be plugins for different kinds of user actions and for different device-specific features such as remote control, Bluetooth and much more. Because the test system is plugin-based, new features can easily be added without affecting previous tests or requiring an upgrade of the entire system.

Clear reporting

Reporting is another important goal, next to the testing itself. The test system is of little value without a good tool that visualizes the results. It should be possible to see clearly which tests failed and which succeeded, and where there may be a problem. There will be different filters and search or sort functions to visualize all


the test data in a clear way.

Integration with automated build platforms

In many companies, testing is just one part of the development flow. To optimize this workflow it is important that the test system can be integrated with the existing tools that developers and testers use every day, such as continuous integration tools like Jenkins.

3.1.3 Main quality attributes

Usability

Usability is the most important quality attribute. Developing mobile applications is becoming very complicated with all the different platforms and tools. Developers are looking for something that eases the development and testing procedures, including a testing tool that is easy to work with and does not introduce extra overhead for the development team. Usability includes three factors: learnability, efficiency and satisfaction.

Learnability is about a flat learning curve, which makes it easier to start using the test solution. If the learning curve were steep, developers would postpone the setup of the system and end up not using it at all.

Efficiency means that the system should be easy and straightforward to use: small tasks need small actions, and only larger tasks may require larger actions.

Satisfaction is about good feedback to the user. If a test fails, the user wants to know whether he did something wrong or a problem occurred in the test. In both cases he wants to know exactly what went wrong, without having to search the Internet or debug the application for hours.

Extensibility

The system needs to be extensible with new features, types of devices, test methods, new operating systems, etc., since it is very likely that the mobile industry will change in the future. It should also be possible to integrate the system with other systems, so it needs to be extensible in this respect too.

Scalability


A test system should scale together with the demands of the developer. There are many different devices that the developer may want to test on, and he could be developing several applications that all need to be tested on these devices. The number of tests that need to be executed can also differ greatly per app. That is why the test system needs to be scalable in many ways.

Other qualities have to be kept in mind too, but they are less important than the previous three. One of them is the correctness of the system: tests should be performed the way the tester intended. For example, if a tester states that a test case needs a network connection, he has to be sure that this is the case when the test case is run, and if the test system reports that a test passed, this should be true. Another important quality attribute is performance, which is linked to scalability: the system should respond quickly even when many tests and/or many devices are attached to it. If executing the tests takes longer with the test system than doing it manually, the goal of usability is missed.

3.2 NABC

Need

The growing community of mobile web application and webview-based native application developers needs a solid testing platform to test apps in all sorts of conditions and on a broad range of devices. Nowadays much of this testing is executed manually, which is a resource-consuming and tedious job. An automated scenario testing framework for these kinds of apps would definitely help developers create more and better applications.

Approach

The testing framework will be a cross-platform framework that makes use of the native testing frameworks but takes test cases written in JavaScript. It will provide a JavaScript library that testers can use to describe their scenarios, the circumstances and the types of devices they want to test on. The central server executes these tests on the selected devices of the device farm connected to this server. Reports are provided in a visual way, but also in a format that can be used in an automated build environment.


Benefits

The framework alleviates the task of developers and testers to test the applications. They can spend more time writing high-quality apps and new tests that can be automated. By using the framework, the quality of the application is greatly improved and errors are noticed quickly after they are introduced.

This framework is created especially for scenario testing of mobile web apps and webview-based native apps on many devices. It has special features that simplify gathering the required information for reporting and debugging purposes.

By mimicking the user's behavior in different environments and with different settings, bugs are found before users have to report them, resulting in a better user experience and better sales results.

Because this framework is cross-platform, tests only need to be written once and can run on different operating systems like Android, iOS and Windows Phone 7. If there are smaller differences between these devices (for example in the GUI), exceptions can be added to the scenarios.

Competitors

Different testing frameworks already exist, such as Selenium, TestSwarm and combinations of testing components such as QUnit combined with the build automation framework Jenkins. All these solutions were created for other testing purposes but can be used for web applications if small (or large) adjustments are made. They do not make use of all the possibilities of JavaScript and HTML, and they are not able to do profound scenario testing.

W3C is creating a new testing platform at the time of writing, focused on web applications. Their first proposal included neither the possibility to automatically run tests in different environments nor support for webview-based native applications.

3.3 Scenarios

This section gives a rough overview of some scenarios and use cases to help explain the test framework's workflow. A more detailed description of these scenarios, including the metrics, can be found in Appendix A.


Figure 3.1: Use case diagram with functional scenarios

3.3.1 Different actors

Different actors participate in the test framework. The user is the person who uses the test system to run tests; this can be a tester or a developer of applications. The developer is the person who develops the test system itself and/or creates extensions. The administrator is the person who installs and maintains the test system; in some cases this is the same person as the user. An external system is a system with which the test framework interacts, such as a continuous integration system.

3.3.2 Functional scenarios

The functional scenarios described below are visualized in the use case diagram in Figure

3.1.

Write test cases


A user of the test system can write test cases in a cross-platform way using JavaScript. Within these tests the user can denote the circumstances under which they should run. Different test blocks can easily be linked to each other, where each block tests a small part of the application, such as a page, or performs a small action, such as logging in. It should be possible to have these blocks created by a user with a lot of insight into testing, and to let the user who develops the application simply link test blocks together.
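Such linking of test blocks could be sketched as follows. The block and runScenario helpers are hypothetical, introduced purely to illustrate the idea.

```javascript
// Hypothetical sketch of reusable test blocks chained into a user scenario.
// Each block performs one small step and passes its state to the next block.
function block(name, run) {
  return { name, run };
}

const login = block("log in", state => ({ ...state, loggedIn: true }));
const openSettings = block("open settings", state => ({ ...state, page: "settings" }));

// The application developer only links existing blocks together:
function runScenario(blocks, initialState = {}) {
  return blocks.reduce((state, b) => b.run(state), initialState);
}

const result = runScenario([login, openSettings]);
console.log(result); // { loggedIn: true, page: 'settings' }
```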

Add applications and tests

A user can add applications to the test framework and attach different tests to an application. The tests have to be written before they can be attached. Tests can also be added using a configuration file.

An external system can add applications and tests through a command line interface and by using the configuration files.
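Such a configuration file might look like the following sketch. The format, field names and paths are hypothetical, shown only to make the idea concrete.

```json
{
  "application": {
    "name": "DemoApp",
    "package": "be.ugent.demoapp",
    "platforms": ["android", "ios"]
  },
  "tests": [
    { "file": "tests/login.js" },
    { "file": "tests/settings.js" }
  ]
}
```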

Plan tests on devices

A user can plan a test run of an application that has already been added to the test framework. He can select the test cases that need to be executed and the devices on which the test run needs to be executed. The devices can be chosen by providing a specific device type, but also by setting parameters such as memory, screen size or device brand. The user can also upload a configuration file that selects the tests to run or the devices that will be subject to the test run.

An external system can plan tests by using the command line and two configuration files: one to select the tests and one to select the devices.

The test system automatically plans these tests on the required devices attached to the system. The user can see this planning on a screen of the test framework.

Run tests on devices

Once the tests are planned, they are run automatically on the different devices by the test system. The user can track the status of the test execution on a screen of the test system. If tests have already been executed, he can immediately see the results on this page. If some tests still need to be executed, the user can see when they are planned and what the expected finish time is.


Process results

Once all the tests have run, the results need to be processed. This task can take quite some time and resources, since there is possibly a lot of data that needs to be gathered, structured and analyzed. All the results are gathered on one or more screens of the test framework, so the user can get an overview of, or a detailed look at, the test results.

A unified file with an overview of the results can be created for use by an external system.
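One plausible choice for such a unified file is the JUnit-style XML report format that continuous integration tools like Jenkins already consume. The test names below are hypothetical; this is only a sketch of what the exported file could contain.

```xml
<testsuite name="login scenario" tests="2" failures="1">
  <testcase classname="DemoApp.login" name="login succeeds online"/>
  <testcase classname="DemoApp.login" name="login works offline">
    <failure message="expected cached greeting was not shown"/>
  </testcase>
</testsuite>
```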

View results

Once the results are processed, the user is notified through a notification channel such as e-mail. He can see an overview of the test results (failed or passed), but he can also take a more detailed look at a test result to see why a test failed. A detailed test result shows the result of the subtests, but also the log file, screenshots or other artifacts.

The processed results can be sent to an external system in a unified file format that the external system can handle.

Setup system

The administrator has to set up the system before the test framework can be used. The test framework needs to run on one or more servers with the correct hardware configuration and the software installed.

Add devices

The administrator adds devices that can be used for testing. These devices are attached to the different servers that will be used for testing. The administrator physically connects the device to the server and then adds it in the software. He fills out all the characteristics of the device, which can also be retrieved automatically from an online database.

Add extensions

The administrator adds extensions to the test framework that allow new test functions to be used in test runs. To do so, he configures the software to use the required extension files.

Write extensions


A developer can write extensions for the test framework that allow new test functions to be used in the tests. Once these extensions are finished, they are handed to the administrator, who adds them to the test framework.

3.3.3 Quality attribute scenarios

Usability

Introduction to the test system

A user or administrator who wants to start using the test system can make use of the tutorials, examples and documentation that will be provided. Users or administrators should find it easy to start using the test system, and the time it takes to test a ‘Hello world’ application should be limited.

Use the test system efficiently

The test system should be used efficiently. Test runs that contain errors, such as files

that are missing, should be discovered before the test run is planned and executed.

The user should be informed of this problem while creating the test run. This can

be done with input checks that warn the user as soon as abnormalities are found.

It should be possible to correct mistakes without loss of previous input. The reports

with the test results should be updated frequently so the user is provided with

accurate feedback. Test runs on devices should be parallelized as much as possible

as long as there are different devices available.
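The parallelization requirement can be illustrated with a small sketch. The assignment strategy and all names below (run and device identifiers, the `assign` helper) are illustrative assumptions, not part of the framework's actual implementation:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch: distribute queued test runs round-robin over the
// available devices, so runs only pile up on one device once all
// devices are busy.
public class DeviceAssignment {

    static Map<String, List<String>> assign(List<String> runs, List<String> devices) {
        Map<String, List<String>> schedule = new LinkedHashMap<>();
        for (String device : devices) schedule.put(device, new ArrayList<>());
        int i = 0;
        for (String run : runs) {
            // Pick the next device in round-robin order.
            schedule.get(devices.get(i % devices.size())).add(run);
            i++;
        }
        return schedule;
    }

    public static void main(String[] args) {
        List<String> runs = Arrays.asList("run1", "run2", "run3");
        List<String> devices = Arrays.asList("nexus-s", "galaxy-s2");
        System.out.println(assign(runs, devices));
    }
}
```
A real scheduler would also respect the device types a test run requests, but the core idea of spreading runs over idle devices is the same.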

Extensibility

Write a minor plugin

A developer wants to extend the test framework with extra test functions such as

new controls or device functions that can be tested. He can create a new plugin that

will not affect other functionality of the test system. It should be easy to create and

test a minor plugin.

Deploy and use a minor plugin

A minor plugin can be added to the test system by the administrator at runtime by

adding the plugin to a configuration file. This can be done without affecting the test


system. Test cases that want to make use of this new functionality need to change

their test code; other test cases remain unaffected.

Write a major extension

A developer wants to add support for a different smartphone operating system or

change a large functionality within the test framework. These changes need to be

backwards compatible and make it possible to still use ‘old’ tests in test runs. The

new functionality can be used in new test cases.

This change cannot be deployed at runtime but requires a scheduled maintenance

window during which no tests can be executed.

Scalability

Number of devices

The number of devices that are running tests at the same time can vary. If the

majority of the devices is not in use, it should be possible to deactivate them and

reactivate them when they are needed again.

In case a certain type of device is being used almost all the time, the administrator

will be informed and advised to add an extra device of this type.

Server load

Together with the number of devices that are running tests, the number of servers

that are needed to run these tests can vary too. The devices on which tests run

should be concentrated on as few servers as possible, so that the other servers can

be deactivated when few tests are running and reactivated when needed.

3.4 Architecture

The software architecture of the test framework is created based on the attribute-driven

design method 1 that takes into account the architectural drivers, quality attributes and

scenarios. It is an iterative approach that refines the architecture one level further at each iteration.

1. http://www.sei.cmu.edu/architecture/tools/define/add.cfm


Figure 3.2: Component diagram top level

The first iteration starts with the highest level of the architecture. During each iteration

each component of the higher level is elaborated in more detail until no further

decomposition is possible and the final level is reached.

3.4.1 First iteration

The architectural drivers for the first level are extensibility and scalability. This level

has to deal with extra components that can be added and with multiple servers that can

be needed for different tasks. Extensibility and scalability are closely related to each other

at this level, since they both are a variant on modifiability. The first one represents the

modifiability of the functionality of the system, the second one the modifiability of the

load of the server.

The tactics for modifiability consist of three parts: localize changes, prevent the ripple

effect and defer binding time. Defer binding time is less important for this level since the

focus is on extensibility and scalability. The most important tactic is the prevention of

the ripple effect. If one part of the system is changed, for example the website, this must

not have any effect on the scheduling of the tests. The same holds for scalability: if the

number of servers is scaled, this must not have any effect on the test runs themselves. To

prevent this ripple effect, information needs to be hidden, existing interfaces should be

maintained, the communication paths restricted and an intermediary could be used. The

third tactic is to localize changes. If changes to the system are localized, the ripple effect

will be prevented too. This includes maintaining semantic coherence, anticipating expected changes,

generalizing modules, limiting possible options and abstracting common services. Some

of these actions are less important for this situation, such as the abstraction of common


services. [27]

The pattern used is the broker pattern [27]. The different components are visualized in

Figure 3.2. On the left side, the components that interact with the user, administrator

or external system can be found. They include the website, to be used by users and

administrators, and the command line interface, to be used by the external system or

administrators. On the right side, the components that schedule and run the tests and

that process the results are shown. The scheduler schedules the tests based on the devices

that are connected to the system and the devices that a certain test needs to run on.

The device connector manages the connection with the devices themselves and runs the tests on

them. Afterwards the results are processed by the processor, which creates the web pages

that display the results, keeps track of previous results and generates a unified file for

external systems. The intermediate is the link between all these components and redirects

commands from the left side to the correct subsystem on the right side and the other way

around.
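The routing role of the intermediate can be sketched as a minimal broker in Java. All class and method names below are hypothetical, not taken from the actual implementation; the snippet only illustrates the broker pattern of forwarding commands from the front-end components to a registered back-end subsystem:

```java
import java.util.HashMap;
import java.util.Map;

// Minimal broker sketch: front-end components (website, command line
// interface) submit commands by name; the intermediate looks up the
// back-end subsystem (scheduler, device connector, processor)
// registered for that command and forwards the payload.
public class Intermediate {

    // A subsystem exposes only a generic entry point, which keeps the
    // communication paths restricted to the broker.
    public interface Subsystem {
        String handle(String payload);
    }

    private final Map<String, Subsystem> routes = new HashMap<>();

    public void register(String command, Subsystem subsystem) {
        routes.put(command, subsystem);
    }

    public String dispatch(String command, String payload) {
        Subsystem target = routes.get(command);
        if (target == null) {
            return "error: no subsystem registered for '" + command + "'";
        }
        return target.handle(payload);
    }

    public static void main(String[] args) {
        Intermediate broker = new Intermediate();
        broker.register("schedule", p -> "scheduled test run " + p);
        broker.register("results", p -> "report for run " + p);

        System.out.println(broker.dispatch("schedule", "42"));
        System.out.println(broker.dispatch("results", "42"));
    }
}
```
Because the website and command line interface only ever talk to the broker, either side can be changed or scaled without rippling into the other, which is exactly the modifiability tactic described above.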

3.4.2 Second iteration

The elaboration of the device connector proved far more challenging than that of the

other components. It was especially hard to find a way to run automated tests

on web applications that run in a mobile device’s browser or to run tests on webview-based

native applications. These are precisely the types of applications that cannot be tested with

existing systems, so it is important that this test system finds a solution for this

problem. Therefore the remainder of this dissertation will be about the creation of a proof

of concept to investigate how such a device connector could be created.

The other components are more straightforward. The website should be highly usable,

since the perceived usability of the system as a whole is largely determined by the usability of the website.

The command line interface processes the commands, sends the right

instructions to the intermediate and returns the results. The scheduler should contain

several algorithms to plan the test runs optimally on the different devices

that are attached to the system. The processor should be designed for its processing

task and include a database structure to keep track of the results. The intermediate

links the different components and fulfills a message-passing role.


3.5 Conclusion

During the elaboration of the design of the test framework the real challenge for this

dissertation was found: what should the device connector look like and how can scenario

tests for webview-based native applications be executed and processed automatically?

The remainder of this dissertation will focus on creating a proof of concept that solves

some of the encountered issues. Its goal will be to allow automation of scenario testing of

webview-based native applications. Later on, concepts or ideas from this proof of concept

can be reintegrated in the total test framework.


Chapter 4

Technical research

Before a proof of concept for the device connector could be created some extra research

had to be done. This chapter will discuss some subjects at a more detailed level. This

information is needed for a better understanding of the proof of concept.

4.1 Cordova

Cordova (also known as PhoneGap, see Section 2.2.2) is an HTML5 app platform that

allows developers to author native applications with web technologies and get access to

APIs and app stores. It is very helpful when creating applications for several platforms like

iOS, Android and BlackBerry without having to learn three different languages. Cordova

lets developers write an application in web technologies and then wraps it as a native

application that can be downloaded from the OS-specific app stores. Cordova makes extensive

use of the combination of HTML5, CSS3 and JavaScript.

Recently, W3C started standardizing JavaScript APIs to support native functions within

mobile browsers, such as accessing the contacts list, calling phone numbers or using the

GPS antenna. In Section 2.1.3 the long and tedious process of standardization is described.

This process causes a difference in implementation timing, resulting in browsers or web-

views that support non-standardized APIs before the others do. This difference in browser

or webview functionality generates a lot of trouble for developers, who need to make differ-

ent versions of their apps or websites to be able to provide a good user experience. Cordova

alleviates this problem for webview-based native applications by implementing the future


W3C standards on platforms where it is not supported yet. By using Cordova, developers

are sure that all the functionalities offered by Cordova are really available on these plat-

forms. The list of functionality is depicted in Table 4.1. The most important platforms,

iOS, Android and Windows Phone 7 (see Section 2.1.2), support all the functionality.

4.1.1 How the framework works

This section will explicate how Cordova for Android works. All information is related to

Cordova Android 1.7.0 (released on May 1, 2012) since this version will be used in the

proof of concept. For more information the reader is referred to Section 4.1.3 where the

bridge between JavaScript and native code will be discussed, to the Cordova website or

to the source code itself.

A Cordova application is a normal native application but instead of loading a native GUI

class to start the application, it loads a webview class in which the web content is shown.

A webview is similar to a browser, but it is a class that can be embedded inside another

application. It can load web pages from the Internet or load a web page that is included

in the application itself. The latter is done by Cordova. Each platform has some kind of

webview class. They may be named differently, but the main function is the same: they

render content created with web technology.

This webview class loads the index.html file that is located in the assets folder. All

the web code is situated in the assets folder. It contains all the HTML, JavaScript and

CSS code, together with images or other files needed inside the application. To create an

application with Cordova the JavaScript file(s) need to be included in the index.html

and the Cordova library should be added to the application’s project. There are various

techniques to do this, depending on the platform that the application is developed for. The

tutorial can be found on PhoneGap’s website 1. In index.html the actual application is

started. It starts by loading the Cordova code, which is finished once the deviceReady

event is fired. Only after this event has fired can Cordova functions be executed.

Now a user interface can be created with HTML and CSS and linked to the application’s

logic with JavaScript. How to use the Cordova functions from JavaScript is described in

their APIs2.

1. http://phonegap.com/start
2. http://docs.phonegap.com/en/1.6.1/index.html

Feature                   iOS 3G  iOS 3GS+  Android  BB 5.x  BB 6.0+  webOS  WP7  Symbian  Bada

Accelerometer               y        y         y        y       y       y     y      y       y

Camera                      y        y         y        y       y       y     y      y       y

Compass                     n        y         y        n       n       n     y      n       y

Contacts                    y        y         y        y       y       n     y      y       y

File                        y        y         y        y       y       n     y      n       n

Geolocation                 y        y         y        y       y       y     y      y       y

Media                       y        y         y        n       n       n     y      n       n

Network                     y        y         y        y       y       y     y      y       y

Notification (alert)        y        y         y        y       y       y     y      y       y

Notification (sound)        y        y         y        y       y       y     y      y       y

Notification (vibration)    y        y         y        y       y       y     y      y       y

Storage                     y        y         y        y       y       y     y      y       n

Table 4.1: PhoneGap supported features and operating systems (PhoneGap 1.7.0)


Figure 4.1: Architecture of Cordova / PhoneGap

4.1.2 Plugin structure

The list of Cordova functionalities is substantial, but not everything is included. To solve

this problem, Cordova is plugin-based, which means that developers can write their own

plugin that links native code to JavaScript. This plugin-based mechanism is also used

inside Cordova itself. A lot of plugins can be found on GitHub 3.

Figure 4.1 visualizes the architecture of Cordova. The upper level is the mobile web

application level. This level contains the actual app and all the code that is app specific.

Just below this level the JavaScript engine can be found. This is where the Cordova API

is located. This level is split up into the PhoneGap core and a plugin. For the user the

PhoneGap / Cordova core appears as one solid block, but it actually also consists of several

plugins, one for each large functionality such as notification, geolocation, etc. The lowest

level is the native level. This level contains the native code that is used by the core and

plugins.

To write a plugin, a JavaScript file and a native file are needed for each platform: for

example, a JavaScript file plus a Java file for Android, and a JavaScript file plus

Objective-C .h and .m files for iOS. A separate JavaScript file is needed for each platform

since it contains the link to the native code, which is different on each platform. Figure

4.2 contains a sequence diagram that shows how the plugin mechanism works. The colors

are related to the colors in Figure 4.1, to indicate in which level each class or file resides.

The green YourPlugin is the JavaScript file that is needed for the custom plugin and

3. https://github.com/phonegap/phonegap-plugins


Figure 4.2: Plugin call sequence diagram

the purple one is the native file used for it. To add this plugin to the Cordova framework

and to use it inside an application, a line should be added to the XML file that contains

all the plugins and links the name of the plugin to the native class that needs to be called.

The plugin also needs to be registered to the Cordova PluginManager, which is done when

the file is loaded. Detailed instructions can be found on the PhoneGap wiki 4.

In YourApplication the function yourFunction of the custom plugin YourPlugin

is called. The actual line of JavaScript will look something like window.plugins.yourplugin.yourFunction(success, error);. It is an asynchronous call, which means

that it needs a function success that can be called with a return value (if necessary)

and an error function that can be called in case of problems. In this JavaScript function

the Cordova execute function needs to be called, which will pass the callback functions,

the action name and the arguments to the native method execute of YourPlugin.

This method will execute the correct native code, for example by calling an extra func-

tion like yourFunction. The method execute needs to return an instance of the

class PluginResult, which contains the callbacks and the return value(s). Cordova

will call the correct callback, in this case the successCallback, depending on the

pluginResult that is returned.
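The dispatch-on-action behavior of the native plugin half can be sketched in Java. The PluginResult and Plugin classes below are simplified stand-ins for the Cordova Android 1.7 classes (the real API has more status codes and passes JSON arguments), so this illustrates the pattern rather than being working Cordova code:

```java
// Simplified stand-ins for Cordova's PluginResult and Plugin classes.
class PluginResult {
    enum Status { OK, ERROR, INVALID_ACTION }
    final Status status;
    final String message;
    PluginResult(Status status, String message) {
        this.status = status;
        this.message = message;
    }
}

abstract class Plugin {
    // Called by the PluginManager with the action name and arguments
    // that were sent from the JavaScript side.
    abstract PluginResult execute(String action, String args, String callbackId);
}

// The native half of the custom plugin from Figure 4.2: dispatch on
// the action argument and wrap the result in a PluginResult.
class YourPlugin extends Plugin {
    @Override
    PluginResult execute(String action, String args, String callbackId) {
        if ("yourFunction".equals(action)) {
            return new PluginResult(PluginResult.Status.OK, yourFunction(args));
        }
        // Unknown action: Cordova routes this to the errorCallback.
        return new PluginResult(PluginResult.Status.INVALID_ACTION, action);
    }

    private String yourFunction(String args) {
        return "native result for " + args; // stands in for real native code
    }
}

public class PluginDemo {
    public static void main(String[] args) {
        Plugin plugin = new YourPlugin();
        PluginResult ok = plugin.execute("yourFunction", "{}", "cb1");
        System.out.println(ok.status + ": " + ok.message);
        PluginResult bad = plugin.execute("unknown", "{}", "cb2");
        System.out.println(bad.status + ": " + bad.message);
    }
}
```
Depending on the returned status, Cordova invokes either the success or the error callback on the JavaScript side, as described above.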

4.1.3 Bridging the gap between JavaScript and native code

This section will describe how Cordova bridges the gap between JavaScript and native

code. Figure 4.3 shows a detailed sequence diagram starting from the moment a plugin

4. http://wiki.phonegap.com/w/page/36752779/PhoneGap%20Plugins


Figure 4.3: Bridging the gap between JavaScript and Java (on Android)


call is made from the JavaScript code to the callback being called. The same colors were

used as in Figure 4.1 so green is in the JavaScript layer and purple in the native one.

The diagram only shows the situation on Android, other platforms may bridge the gap

differently.

The sequence diagram starts by calling exec with the successCallback and error

Callback functions, the service (a String with the name of the plugin), the action

(a String with the action that needs to be taken) and the args/arguments (as a JSONAr-

ray) as parameters. Inside Cordova, this generates a prompt with a structured String

as message. The reason the prompt is used is discussed in the next paragraph. The

CordovaChromeClient is connected to the Android Webview class and overrides the

onJSPrompt method, which is called whenever a prompt is called from the JavaScript

code. This prompt mechanism is the link between the JavaScript and native code. If

the onJSPrompt method recognizes the structure in the String, it knows that it is

a Cordova message, and depending on the keywords in the structure it takes different

actions. In the case of a plugin call it forwards the arguments to the PluginManager, which

looks for the correct class instance by making use of the plugins.xml file that contains

a name-class pair for each plugin. When this instance is found, the PluginManager

calls the execute method of the plugin instance, which is where the native plugin code

was added. Based on the action argument, the right action can be taken and the cor-

rect pluginResult returned. This pluginResult is handled by the PluginManager.

Depending on the value of pluginResult, the success- or errorCallback is called by

using the sendJavascript method of DroidGap (which is the Cordova extension of

WebView). The PluginManager sends an eval with the function in it to JavaScript

so the correct callback is called. The link between native and JavaScript code is created

with an XMLHttpRequest server, which puts code to be executed in a queue with notify,

and the JavaScript code regularly pulls the code to be executed from this server with

getResponse. Now the eval with the correct callback as JSCode is executed and the

application can process the result.
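The recognition step inside onJsPrompt can be illustrated with a short sketch. The "gap:" prefix and the service/action/callbackId layout used here are assumptions for illustration; the exact message format of Cordova 1.7 is not reproduced:

```java
// Hypothetical sketch of the prompt-based bridge check: onJsPrompt
// receives every window.prompt() call from the webview. Messages whose
// default value carries a recognizable prefix are treated as Cordova
// plugin calls and routed to the PluginManager; everything else falls
// through to normal prompt handling.
public class PromptBridge {

    static final String PREFIX = "gap:";

    // Returns a routing description for bridge messages, or null for
    // ordinary prompts that should be shown to the user.
    static String route(String defaultValue) {
        if (defaultValue == null || !defaultValue.startsWith(PREFIX)) {
            return null; // a normal JavaScript prompt, not a bridge call
        }
        // Payload laid out as service/action/callbackId (illustrative).
        String[] parts = defaultValue.substring(PREFIX.length()).split("/");
        return "PluginManager.exec(service=" + parts[0]
                + ", action=" + parts[1] + ", callbackId=" + parts[2] + ")";
    }

    public static void main(String[] args) {
        System.out.println(route("gap:YourPlugin/yourFunction/cb42"));
        System.out.println(route("What is your name?"));
    }
}
```
The key point is that one overridden callback can multiplex both ordinary prompts and bridge traffic, simply by inspecting the message structure.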

Android provides a direct JavaScript-to-Java bridge, but this mechanism contains a major

bug when run on the Android 2.3 emulator. This bug makes it impossible to test any

Cordova application on this type of emulator, which is important for developers since

Android 2.3 was and still is a frequently used version. The prompt mechanism to bridge the


gap between JavaScript and Java is used because Cordova wanted to allow the developers

to test their applications on this type of emulator and to be more independent of the

Android implementation of this bridge.

4.1.4 Testing the Cordova framework

Cordova itself has to be tested too. Different ways to test it are described on the Apache

wiki 5.

One of the techniques used is the ‘mobile-spec’ test, a series of tests bundled by Cordova

that need to work on a smartphone that uses the Cordova framework. It consists

of two parts. The first part is an automatic Jasmine test that is run inside a Cordova

application and starts automatically when the application is started. Not all tests pass on

all devices, so Cordova suggests executing the tests before making changes to the Cordova

code (native or JavaScript), writing down the number of passing tests, making the changes,

running the tests again and comparing the numbers. The second part is a series of tests that need to be run manually.

Cordova created a Cordova application that guides the tester through all the tests. This

application contains a button for each test which shows a web page with a description of

the test and then starts it. There are 10 tests that need to be run manually and test the

compass, notifications and other Cordova APIs.

The wiki also describes two platform-specific test techniques, one for Android and

one for iOS. For the Android platform they use a combination of JUnit, WebDriver and

QUnit to run these tests. No source code could be found so this could not be further

examined. On iOS the test scenario includes two manual tests.

Testing the Cordova framework is not automated at all. It would be better if the testing

processes could be integrated in a continuous integration environment, which runs these

tests each time a commit is made. This would lead to a much higher quality of the framework,

and less development time would be spent on testing.

5. http://wiki.apache.org/cordova/RunningTests


4.2 JUnit for Android

JUnit is a test framework that is widely used by Java developers. Its focus is to do unit

testing: testing small parts of the code, e.g. at the function level. If an application contains,

for example, a calculation method, this function should be tested for correct output in different

circumstances (different parameters or environment variables).

A newer major version, JUnit 4, introduces quite some differences in structure, such as

annotations and class-level setup and teardown methods. Here, version 3.8 is described

since this is the version that is supported

by Android and that will be used in the proof of concept.

4.2.1 JUnit in general

To start testing Java code with JUnit, a test project should be created that shares the

same namespace but with test appended to it. The test code resides in a separate project and

namespace and does not influence the production code.

A JUnit TestCase is built up from a one-time setup method for the class, called when

the class is created, a setup method that is called before each test, several test methods,

a teardown method that is called after every test and a one-time teardown method that

is called when the class is disposed. Each test method starts its name with test and is

recognized as a test method automatically. By using annotations (keywords preceded by an @)

extra information can be provided to the test runner, e.g. @Suppress tells it

not to run a particular test.

The testing of the Java code happens inside the test methods. These tests often consist

of 3 phases: a phase to set up a certain environment, one to test if the code is doing

the right thing in this situation and a last one to undo the changes that have been made

for this test. When the test case is run, JUnit will run the tests in a random order, so

it is important that each test cleans up afterwards and creates the correct environment

itself. The assert functions are used to check if the code produced the correct output

or values. JUnit itself provides several assert functions, like assertEquals to check if

two variables are equal and assertTrue to check if a boolean variable or a condition is

true. A test can be failed by calling the fail method.
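The lifecycle described above can be demonstrated with a toy re-implementation. This is deliberately not the real JUnit API: the miniature TestCase and runner below only illustrate how setUp and tearDown bracket each test method and how a failed assertion aborts only the current test:

```java
import java.lang.reflect.Method;
import java.util.ArrayList;
import java.util.List;

// Toy re-implementation of the JUnit 3.8 lifecycle (NOT the real
// JUnit API): setUp() runs before and tearDown() after every method
// whose name starts with "test".
public class MiniJUnit {

    static abstract class TestCase {
        protected void setUp() {}
        protected void tearDown() {}
        protected void assertTrue(String message, boolean condition) {
            if (!condition) throw new AssertionError(message);
        }
        protected void assertEquals(Object expected, Object actual) {
            if (!expected.equals(actual))
                throw new AssertionError("expected " + expected + " but was " + actual);
        }
    }

    static List<String> run(TestCase testCase) throws Exception {
        List<String> results = new ArrayList<>();
        for (Method m : testCase.getClass().getDeclaredMethods()) {
            if (!m.getName().startsWith("test")) continue;
            m.setAccessible(true);
            testCase.setUp();          // fresh environment for each test
            try {
                m.invoke(testCase);
                results.add(m.getName() + ": passed");
            } catch (Exception | AssertionError e) {
                results.add(m.getName() + ": FAILED");
            } finally {
                testCase.tearDown();   // clean up, even after a failure
            }
        }
        return results;
    }

    // Example test case written in the JUnit 3.8 style described above.
    static class CalculatorTest extends TestCase {
        private int value;
        @Override protected void setUp() { value = 40; }
        public void testAddition() { assertEquals(42, value + 2); }
        public void testBroken()   { assertTrue("should fail", value > 100); }
    }

    public static void main(String[] args) throws Exception {
        for (String line : run(new CalculatorTest())) System.out.println(line);
    }
}
```
Note that the runner discovers test methods by name via reflection, just as described above, and that the order in which they are found is not guaranteed, which is why each test must set up its own environment.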


Figure 4.4: Architecture of JUnit for Android

When JUnit has run the test case, the user can see which tests failed and which passed. If

one of the asserts inside a test method fails, JUnit will stop this test method and start with

another one, since it presumes that something went wrong and the following assertions in

that test method would probably give wrong results as well. If a test fails, the stack trace

of the moment of failure is shown. When a message is added to the assert function, this

message is shown on failure, which can help find the cause.

4.2.2 JUnit for Android

Android extended JUnit to enable developers to unit test Android applications. JUnit

was not really designed to handle GUI interactions, yet these are necessary to test a smart-

phone application, so Android extended the standard JUnit to make this possible. With this

extension GUI interactions can be performed from within the test, and Android also allows

mock objects and methods and control over the application’s life cycle, network connec-

tions, etc. Android calls this ‘instrumentation’: a set of control methods or hooks in the

Android system. Figure 4.4 shows how the different components of JUnit for Android

relate with each other.


Android provides several specific classes that extend the JUnit test case, each one created

for a specific scenario. A good explanation about which class to use for which test situation

and how to do JUnit testing for Android in general can be found in the Android Application

Testing Guide [28].

An Android test project essentially is an Android application that runs on the device or

emulator and sends commands to the other application. This implies that files needed

during testing need to be saved on these devices. Files that are created while testing are

saved to a folder on the device too. Libraries or command line tools that cannot be used

on Android, such as the Android command line tools to access the device, cannot be

used in the test project either.

4.3 Android emulator

The Android SDK comes with a lot of tools that ease the development of Android apps,

including an Android emulator. It is a virtual mobile device that runs on a computer

and emulates the Android hardware and software. It may be a bit slow since it emulates

instead of just simulating, but it is very useful. Different device configurations (AVDs -

Android Virtual Devices) can be installed on the emulator to emulate different kinds of

devices with different Android versions or hardware configurations.

The emulator allows emulation of network connection and speed, data connection and

speed, GPS signals, telephone calls, SMS sending and receiving, and power statuses. All

these parameters can be changed through the Eclipse plugin for Android, which has an

emulator control view, or through a telnet connection with the emulator. The emulator

can be started with the command line or via the Eclipse plugin.
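The telnet connection can also be driven programmatically. The sketch below builds emulator console command strings ("geo fix", "sms send") and sends one over a socket; the default console port 5554 and the helper names are assumptions for illustration:

```java
import java.io.OutputStream;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

// Sketch of driving the emulator console. The command strings follow
// the emulator console syntax; the surrounding helper class and the
// default console port 5554 are illustrative assumptions.
public class EmulatorConsole {

    // Build a "geo fix <longitude> <latitude> [altitude]" command.
    static String geoFix(double longitude, double latitude, double altitude) {
        return "geo fix " + longitude + " " + latitude + " " + altitude;
    }

    // Build an "sms send <phone> <text>" command.
    static String smsSend(String phone, String text) {
        return "sms send " + phone + " " + text;
    }

    // Send one command line to a running emulator's console port.
    static void send(String host, int port, String command) throws Exception {
        try (Socket socket = new Socket(host, port);
             OutputStream out = socket.getOutputStream()) {
            out.write((command + "\r\n").getBytes(StandardCharsets.UTF_8));
            out.flush();
        }
    }

    public static void main(String[] args) {
        System.out.println(geoFix(3.71, 51.05, 10.0));
        System.out.println(smsSend("5551234", "hello"));
        // send("localhost", 5554, geoFix(3.71, 51.05, 10.0)); // requires a running emulator
    }
}
```
Automating the console this way is what would allow a test framework to reproduce location or connectivity conditions for each test run, within the limits of the emulator quirks discussed below.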

Unfortunately some settings of the emulator lead to unexpected behavior. Switching the

data connection off for example, does not imply that no data can be sent anymore. It

only implies that the 3G symbol is not shown in the notification bar and if the value of

this setting is explicitly requested, it returns that it is off. Sending a GPS coordinate to

the emulator allows the elevation to be included, but if the current location of the emulator is

requested, the elevation is always 0. These quirks are bothersome to deal with if the tests

rely on these settings.


Figure 4.5: Schematic representation of weinre messages

4.4 Weinre

Weinre was already explained briefly in Section 2.3.3. This section will elaborate further

on how it works internally.

Weinre consists of a webpage in which users can see the debugger (the debug client) and

a JavaScript file that needs to be included in the webpage that will be inspected (debug

target). These two communicate with each other through a debug server which is an

XMLHttpRequest server. The communication is shown in Figure 4.5.

First the debug server has to be set up. It can be run on the localhost but it can also be a

publicly available server, such as the one PhoneGap is providing 6. If it is used as a public

debug server, the security information, which can be found on the website, should definitely

be read. Multiple users can use the same debug server. To distinguish the different users,

user ids can be chosen and used in the debug target.

To send commands from the debug client to the debug target and vice-versa, JavaScript

code is injected into the application. This code overrides several other commands already

present in every web page. This can cause problems or make the application or web page

behave strangely. One of the things that can happen is that some parts of the console

output are shown in the Weinre client while other parts are still shown in Eclipse or

another tool.

6. http://debug.phonegap.com


Chapter 5

Proof of concept

The goal of the proof of concept is to find a way to make automated scenario testing for

webview-based native applications possible. The ideas used in this proof of concept will

help to create a device connector that fits in a larger testing framework.

5.1 Specifications

The proof of concept will aim at testing webview-based native applications only and not

at testing web applications. It could be possible that web applications can be tested

with the proof of concept too, but this is not the goal. Testing webview-based native

applications is currently the hardest problem, and by solving the issues encountered with

these applications, testing web applications may become easier too.

Webview-based native applications can be created in different ways, such as embedding a webview directly or using a platform like Cordova or Trigger.io. This proof of concept tests webview-based native applications built with Cordova, because Cordova is widely used, has a large community, and its code is open source. If needed, Cordova itself could be adapted to allow testing the applications.

The proof of concept makes use of the native testing frameworks that are already available, since they allow automation of test runs and often offer extra testing functionality such as mock objects. There was not enough time to address several OS platforms, so only Android is covered in this dissertation. JUnit for Android lends itself well to automation and can be extended with extra functionality.


The proof of concept is based on Cordova for Android 1.7.0 and targets Android versions from 2.2 up to 4.0.

5.2 Implementation

To create the proof of concept, changes were made both at the level of the Android test project and at the level of the Cordova application. The Android test project was changed to enable communication with the emulator and the application, as described in Section 5.2.3. The application itself was changed to enable communication with the Android test project, as described in Section 5.2.2. The communication makes use of the bridge from Java to JavaScript explained in Section 5.2.1.

To allow these changes, the project needed to be set up in a special way. Since the Cordova classes are extended, the Cordova source code could not be added as a jar, but had to be included as a separate project; otherwise Eclipse could not find the classes that are extended. This makes the test framework harder to use.

The internal operation of the test framework is abstracted away from its user as much as possible. This is one of the modifiability tactics that could already be taken into account.

5.2.1 Bridging from Java to JavaScript and back

The communication between the application and the test project is essential to make this proof of concept work. This communication channel allows sending JavaScript commands to the application from within the test project and returning values from the application's JavaScript back to the test project.

Section 4.1.3 describes how the JavaScript - Java - JavaScript bridge works in Cordova. This direction can be used in the proof of concept, but the other direction is needed too: it should be possible to send commands from the test project (Java) to the application (JavaScript) and return a value to the test project (Java). The JavaScript - Java - JavaScript bridge uses the plugin mechanism and the PluginManager, so perhaps this mechanism can be reused to support the new direction as well. The exec function of the PluginManager could be called to activate a method of a TestPlugin. This


TestPlugin then executes a JavaScript method by using the sendJavaScript function. It waits for a result from the JavaScript part of the TestPlugin, which returns the value to the test project. This approach would allow easy integration of the TestPlugin into the application since it uses the Cordova plugin mechanism. Figure 5.1 depicts this scenario.

DroidGap contains a protected instance of a PluginManager that can be reached by adding a get method to DroidGap. But is it a good idea to use the PluginManager for a purpose it was not designed for? Calling the PluginManager's exec method is no problem, but it might be hard to find the correct arguments to pass, for example a correct callbackID. The PluginManager might need to be adapted to perform the actions needed for this proof of concept. This could break Cordova, and it would be difficult to keep the code up to date since Cordova releases a new version every month. It is probably better to create a new class specifically to support the bridge in the other direction, which may be merged into the PluginManager after it has proven its worth.

An extra native class, called TestAPI, was created solely for the purpose of bridging from Java to JavaScript and back. It contains the methods that can be executed on the application itself, such as accessing HTML elements and performing actions on these elements. The functionality of this class is described in Section 5.2.2. An instance of the TestAPI is saved in the DroidGap activity and can be accessed from outside this activity. It is kept in the application itself, not in the test project, since it is contained in the application's activity. The TestAPI executes JavaScript by calling the sendJavascript method of the DroidGap class. The JavaScript code, provided as a parameter, calls a plugin named TestPlugin, which executes the JavaScript inside the application and returns the value to the native class of the plugin. This native plugin class calls the TestAPI's receiveResult method, allowing the TestAPI to return this value.

The bridge from Java to JavaScript and back is depicted in Figure 5.2. Blue classes represent Java classes that reside in the test project, purple classes represent Java classes from the application itself, and the green class is a JavaScript class contained in the application. All the classes used in this sequence diagram are explained in the next sections.


Figure 5.1: First approach to bridge the gap between Java and JavaScript


Figure 5.2: Second approach to bridge the gap between Java and JavaScript

5.2.2 Application

Several changes were made to the application itself in order to create the bridge: a plugin consisting of a native and a JavaScript file was added, the TestAPI was added, and DroidGap was extended in a new activity class called TestableDroidGap.

Test API

The TestAPI class is the center of the bridge and enables communication between the test project and the application itself. The class is entirely abstracted away from the user. All the functionality of TestAPI is called from the test case itself, represented by MyTestCase in Figure 5.2 (an extension of DroidGapTestCase), or from the HTMLElement class that contains all the functionality related to HTML elements. Both classes are described in the next section.

The TestAPI is able to handle several calls to JavaScript at the same time. It generates a unique callbackID for each call, which is passed to the TestPlugin and used when the TestPlugin calls the receiveResult method. The receiveResult method maps the callbackID to the returned object using a HashMap. JUnit requires methods to be synchronous: if a call to JavaScript is made, the function needs to wait until the result is received, otherwise JUnit may end the test or execute the check condition before the action is actually executed. Each method that makes a call to JavaScript thus has to wait until the result is received, which implies that only one call can be made at a time and makes the HashMap strictly unnecessary for processing the result.


Since this does not cause any problems and is a more generic approach, the HashMap

implementation is kept. The code of the method getAttributeValue can be found in

Listing 5.1. All the methods follow a similar approach.

Listing 5.1: The method getAttributeValue of TestAPI

    public String getAttributeValue(String id, String attributeName, long timeoutValue) throws Exception {
        int callbackID = getNewCallbackID();
        mainActivity.sendJavascript("window.plugins.test.getAttributeValue(\"" + id
                + "\",\"" + attributeName + "\"," + callbackID + ");");
        waitForResult(callbackID, timeoutValue);
        if (hasResult(callbackID)) {
            return getResultAsString(callbackID);
        } else {
            return null;
        }
    }
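The helper methods used in Listing 5.1 (getNewCallbackID, waitForResult, hasResult, getResultAsString) and the receiveResult callback are not shown at this point. The sketch below illustrates one plausible way such a wait/notify mechanism could look: only the method names come from the text, the bodies are assumptions.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the waiting mechanism that getAttributeValue relies
// on; not the thesis implementation.
class CallbackRegistry {
    private final Map<Integer, Object> results = new HashMap<>();
    private int nextID = 0;

    synchronized int getNewCallbackID() {
        return nextID++;
    }

    // Blocks until receiveResult stores a value for this ID, or the timeout
    // passes; the caller then checks hasResult.
    synchronized void waitForResult(int callbackID, long timeoutMillis)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (!results.containsKey(callbackID)) {
            long remaining = deadline - System.currentTimeMillis();
            if (remaining <= 0) {
                return; // timed out
            }
            wait(remaining);
        }
    }

    // Called by the plugin when the JavaScript side returns a value.
    synchronized void receiveResult(int callbackID, Object value) {
        results.put(callbackID, value);
        notifyAll();
    }

    synchronized boolean hasResult(int callbackID) {
        return results.containsKey(callbackID);
    }

    synchronized String getResultAsString(int callbackID) {
        return String.valueOf(results.remove(callbackID));
    }
}
```

The timeout path returning without a result matches the behavior of Listing 5.1, where a missing result simply yields null.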

The test plugin

The TestAPI calls methods of the JavaScript part of the TestPlugin. The TestPlugin then executes the JavaScript code that changes or retrieves a value, retrieves the DOM tree, or fires an event. The TestPlugin also has an eval function that executes a given String and can thus execute any JavaScript code fragment. After the execution of a JavaScript command, a JSONArray with arguments is returned to the Java part of the plugin. This array contains the result, an error code (0 in case there was no problem) and the callbackID. If applicable, it can also contain other fields, such as the id of the element the code was executed on. Listing 5.2 shows the code that is executed by the sendJavascript call in Listing 5.1.

In case the method could not be executed or generated an error, the error is caught with a try-catch structure. The JSONArray then contains the executed code instead of the result value, and the error code is given a value that indicates what went wrong. This should help the developer see what exactly went wrong; without a decent error mechanism, the developer would be left in the dark and debugging would take far more time. In the Java part of the plugin these errors are transformed into the correct Exception class, which is then


thrown. A list of error codes and their causes is provided in Table 5.1.

Listing 5.2: The method getAttributeValue of TestPlugin (JavaScript)

    TestPlugin.prototype.getAttributeValue = function(id, attributeName, callbackID) {
        var item = document.getElementById(id);
        if (item) {
            var text = item.getAttribute(attributeName);
            if (text) { // Normal situation
                cordova.exec(
                    function() {},          // Success callback from the plugin
                    function() {},          // Error callback from the plugin
                    'Test',                 // Tell PhoneGap to run the "Test" plugin
                    'returnAttributeValue', // Tell the plugin which action to perform
                    [text, 0, id, attributeName, callbackID]);
            } else { // InvalidAttributeException
                cordova.exec(
                    function() {},          // Success callback from the plugin
                    function() {},          // Error callback from the plugin
                    'Test',                 // Tell PhoneGap to run the "Test" plugin
                    'returnAttributeValue', // Tell the plugin which action to perform
                    ["document.getElementById(\"" + id + "\").getAttribute(\""
                        + attributeName + "\");", 4, id, attributeName, callbackID]);
            }
        } else { // InvalidIDException
            cordova.exec(
                function() {},          // Success callback from the plugin
                function() {},          // Error callback from the plugin
                'Test',                 // Tell PhoneGap to run the "Test" plugin
                'returnAttributeValue', // Tell the plugin which action to perform
                ["document.getElementById(\"" + id + "\");", 1, id,
                    attributeName, callbackID]);
        }
    }


Error code | Exception                 | Parameters               | Cause
-----------|---------------------------|--------------------------|--------------------------------------------------
0          |                           |                          | No exception, everything was OK.
1          | InvalidIDException        | Code; ID                 | The HTML element with this id could not be found.
2          | InvalidEventException     | Code; ID; Event          | The event could not be executed on this HTML element.
3          | EvalException             | Code; Error message      | The code could not be executed. There should be more information in the error message.
4          | InvalidAttributeException | Code; Attribute name; ID | The value of the attribute could not be changed or retrieved on the HTML element with this id.
5          | InvalidPropertyException  | Code; Property name; ID  | The value of the property could not be changed or retrieved on the HTML element with this id.

Table 5.1: Error codes inside the test framework

The Java part of the plugin does little more than retrieve the values from the JSONArray, create the correct Exception if needed, and call TestAPI's receiveResult function.
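As a sketch of this transformation step, the mapping from the error codes of Table 5.1 to exception classes could look as follows. The exception names follow the table; the mapping method and the single-String constructors are assumptions for illustration.

```java
// Exception classes named after Table 5.1; the constructors are assumed.
class InvalidIDException extends Exception {
    InvalidIDException(String detail) { super(detail); }
}
class InvalidEventException extends Exception {
    InvalidEventException(String detail) { super(detail); }
}
class EvalException extends Exception {
    EvalException(String detail) { super(detail); }
}
class InvalidAttributeException extends Exception {
    InvalidAttributeException(String detail) { super(detail); }
}
class InvalidPropertyException extends Exception {
    InvalidPropertyException(String detail) { super(detail); }
}

class ErrorCodes {
    // Maps an error code from the returned JSONArray to the Exception that
    // the Java part of the plugin would throw; code 0 means no error.
    static Exception toException(int errorCode, String detail) {
        switch (errorCode) {
            case 1:  return new InvalidIDException(detail);
            case 2:  return new InvalidEventException(detail);
            case 3:  return new EvalException(detail);
            case 4:  return new InvalidAttributeException(detail);
            case 5:  return new InvalidPropertyException(detail);
            default: return null; // 0: everything was OK
        }
    }
}
```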

Extend DroidGap

The DroidGap class was extended to hold an instance of the TestAPI class, with a getter for this variable. A getter was also added to retrieve the WebView of the activity, since this is needed for actions like taking screenshots. This new class is called TestableDroidGap.


5.2.3 Test project

Changes needed to be made to the Android test project too. A new test case class (DroidGapTestCase) was created that contains a link to the application under test and, if used, to the emulator. Section 4.2 describes how JUnit for Android works in general.

Extend the Android test case

When an Android test project is created, several classes can be used to write the test cases. Each class has its own testing purpose, such as testing an application, a library, etc. For this proof of concept the ActivityInstrumentationTestCase2 class was chosen, since it allows modifying the activity's life cycle and closing and restarting the application. It also allows sending keys to the device or emulator. DroidGapTestCase is an extension of this class and contains several instances of classes that are used for this proof of concept and are described in the next paragraphs.

An instance of the class Emulator was added, which represents an emulator and is discussed in Control the emulator. Other added instances are an instance of the class MemoryManager that controls the memory (described in Follow up of memory usage) and an instance of the class HTMLDocument (described in Access HTML elements and content). A timeout value was added too; it determines the time that may pass before a command sent to the application fails because there was no answer. This avoids the test being blocked when something went wrong in the execution of the JavaScript within the application.

The setUp method, which is executed before each test, initializes all these variables, starts the activity and waits until the device ready event is fired from the application. It also saves the memory usage at that time, to be able to compare it when the test is finished and to calculate the memory used during the test. This comparison is made in the tearDown method, which is called each time a test finishes.
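As an illustration of this bookkeeping, the sketch below records the memory in use at the start and reports the difference afterwards. It uses the plain JVM Runtime as a stand-in for Android's memory information, so the class and its measurement are assumptions, not the thesis code.

```java
// Hypothetical sketch: record memory usage before a test (as setUp does) and
// report the delta when it finishes (as tearDown does). Runtime stands in for
// Android's Debug.MemoryInfo.
class MemoryBookkeeping {
    private long memoryAtStart;

    // Called at the start of a test.
    void recordStart() {
        memoryAtStart = usedMemory();
    }

    // Called when the test finishes; returns the bytes used since the start.
    long usedSinceStart() {
        return usedMemory() - memoryAtStart;
    }

    private static long usedMemory() {
        Runtime rt = Runtime.getRuntime();
        return rt.totalMemory() - rt.freeMemory();
    }
}
```

Note that garbage collection can make the delta fluctuate, which is one reason the real implementation relies on Android's per-process memory counters instead.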

clickMenuItem is a method that presses the menu button and then clicks the native menu item in this menu. The method takeApplicationScreenshot takes a screenshot of the application under test and saves the image with the filename and in the folder


that are given as parameters of the method. These screenshots are saved on the device or emulator, not on the computer or server that runs the tests. To create the screenshot, the WebView is retrieved from the activity. This WebView renders the application, and this rendering can be retrieved and saved as an image. Since only the WebView is retrieved, this method does not take a screenshot of the whole device like adb does, but only of the application under test. Even if the application is not shown on the screen but running in the background, the screenshots will still show the application as if it were on top.

The rest of this class contains getters and setters for several variables or for easier execution

of some methods.

Access HTML elements and content

Scenario testing includes manipulating the user interface, because this is what the user

does to interact with the application. Since this proof of concept deals with webview-based

native applications, manipulating the user interface means manipulating the HTML page

and elements. Manipulations can include triggering events such as button presses or

changing properties or values of certain HTML elements.

An HTML element of the web page is represented by an instance of the class HTMLElement. An object of this class is created by providing it with the id that is given to the element in the web page. The class contains several methods to change or retrieve properties or attributes of the element, retrieve the inner HTML, or trigger events such as click, focus or blur. All these functions use the instance of TestAPI that is saved in the activity and can thus be retrieved from the test project. The class also contains three assert functions (assertClick, assertFocus and assertBlur) that fail in case the events were not triggered correctly.

Once the user interface is manipulated, the test scenario should check whether the correct action was performed. Since the proof of concept manipulates web pages, performed actions should result in differences in the HTML content and/or DOM structure of the web page. This can be checked by retrieving the HTML contents of the web page and comparing it with how the page should look at this point of the test case. This is handled in the HTMLDocument class, which is able to retrieve the DOM tree of the WebView and compare it with another DOM tree. For this comparison, the xmlunit


library is used.

The DOM tree can be retrieved with the getDomTree method. Comparison can be done in two ways: isSimilarHTMLDocument checks whether the structure of the DOM tree is the same, possibly with different content inside the tags, while isIdenticalHTMLDocument checks whether the two DOM trees are identical in structure and in contents. Both methods have an assert version that fails the test in case the DOM trees are not similar or identical.
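To make the similar/identical distinction concrete, the following simplified sketch implements both checks with the JDK's own DOM parser instead of xmlunit. It illustrates the semantics, not the thesis implementation: "similar" compares only the element structure, while "identical" additionally compares the text content.

```java
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;
import java.io.ByteArrayInputStream;
import java.util.ArrayList;
import java.util.List;

// Simplified stand-in for the xmlunit-based comparison; illustrative only.
class DomCompare {
    static Element parseRoot(String xml) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes("UTF-8")));
        return doc.getDocumentElement();
    }

    // Same tag structure; the content inside the tags may differ.
    static boolean similar(Element a, Element b) {
        if (!a.getTagName().equals(b.getTagName())) return false;
        List<Element> ca = childElements(a);
        List<Element> cb = childElements(b);
        if (ca.size() != cb.size()) return false;
        for (int i = 0; i < ca.size(); i++) {
            if (!similar(ca.get(i), cb.get(i))) return false;
        }
        return true;
    }

    // Same structure and the same (trimmed) text content.
    static boolean identical(Element a, Element b) {
        return similar(a, b)
                && a.getTextContent().trim().equals(b.getTextContent().trim());
    }

    private static List<Element> childElements(Element e) {
        List<Element> out = new ArrayList<>();
        NodeList nl = e.getChildNodes();
        for (int i = 0; i < nl.getLength(); i++) {
            if (nl.item(i) instanceof Element) out.add((Element) nl.item(i));
        }
        return out;
    }
}
```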

Follow up of memory usage

The class MemoryManager contains several methods to monitor or retrieve memory usage information. It uses the standard functionality available in Android and the Android test classes.

The method getAvailableMemory retrieves the total amount of memory that is still available. The method isLowOnMemory returns whether the device is low on memory, since this may influence the application's behavior. getProcessMemoryInfo returns an object of the class MemoryInfo that contains several types of memory information such as PrivateDirty, PSS and SharedDirty [1]. The class also has a helper function getPid to retrieve the PID of the application under test, since this is needed to retrieve the memory information. MemoryManager contains an assertion that can be used in tests to check that the device is not in a low-memory condition: assertNotInLowMemoryCondition.

Control the emulator

All the above classes and methods can be used on devices and emulators. When testing on real devices, the application can be tested, but the circumstances cannot be altered. Some applications need to be tested at specific locations or under specific circumstances. By using the emulator, these circumstances can be mimicked. The Emulator class bundles all the methods that can alter a specific setting or behavior on the emulator and abstracts the different classes that are used internally to send these commands to the emulator.

[1] http://developer.android.com/reference/android/os/Debug.MemoryInfo.html


The emulator can be accessed through a telnet connection, which can be set up with the TelnetClient class of Apache's commons.net library. The port number of the emulator has to be provided manually and cannot be retrieved automatically. Once the connection is set up, it is just a matter of sending the right command over the connection and processing the result in the right way. All console messages (both from and to the emulator) are written to a separate stream that can be monitored while testing.

Most of the methods of the emulator are getters or setters and return or accept some kind of enum that represents the different options that are possible. The proof of concept makes use of enums since they shield the actual command from the developer and tester and make the code more robust, as there can be no typos in the command. This resulted in a set of enums that represent battery health, status or AC, network delay or speed, GPRS state, power status, telephone status and sensors. All these enums map directly onto one of the options supported by the emulator, which can be found on the Android Emulator documentation page [2].
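As an illustration of this pattern, the sketch below models one such enum. The console command format ("network speed <value>") follows the Android emulator console; the class itself and its method names are assumptions, not the thesis code.

```java
// Hypothetical enum shielding the raw emulator console command from the
// tester; the string values mirror options of the "network speed" command.
enum NetworkSpeed {
    GSM("gsm"), EDGE("edge"), UMTS("umts"), FULL("full");

    private final String value;

    NetworkSpeed(String value) {
        this.value = value;
    }

    // Builds the console command that would be sent over the telnet connection.
    String toCommand() {
        return "network speed " + value;
    }
}
```

Because the tester only ever touches the enum constants, a typo in the raw command string becomes a compile-time error instead of a silently ignored console command.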

Besides the getter and setter methods, there are also methods to simulate calling the emulator or sending text messages to it. The calling is simulated, so when calling from the emulator to a real number or the other way around, no actual call is made. The call method enables placing a call to the emulator, putting it on hold or busy, or accepting it. To close the call it needs to be canceled. It is also possible to retrieve a list of open calls.

Some applications are aware of the location of the user, for example a GPS or tracker application. Locations can be simulated on the emulator by sending a fixed point or by using a GPX file to replay a tracked route. When using a GPX file, there are two options to replay the route: using a fixed time interval between two points, or taking into account the actual time that passed between the reception of two GPX points while tracking the route.

The GPXPlayer class supports both options, respectively called playFileInterval and playFileNaturalSpeed. The method playFileNaturalSpeed can even take a speeding factor that speeds up the replay of the locations. In each of these functions, the GPXPlayer reads the GPX file that is provided as parameter, loads it into an XML document and saves the track segment. This makes it easy to step through all the locations in the file. The file needs to be saved on the emulator in order to be accessible by the test.

[2] http://developer.android.com/guide/developing/devices/emulator.html


The method playFileInterval takes an interval (in milliseconds) as parameter and sends a point to the emulator every interval milliseconds. This could be implemented by calling Thread.sleep(interval) for each location, but the execution time of the subsequent commands can vary, which leads to a varying time difference between two points. This problem is solved by taking the time needed to execute these subsequent commands into account and subtracting it from the interval time. In case this results in a negative time interval, the new location is sent to the emulator right away. If the interval parameter is smaller than the time it takes to execute these commands, this results in a best-effort approach that sends each point as soon as possible.
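The subtraction described above can be condensed into one small helper. This function is an assumption for illustration: it computes the remaining wait from the interval and the time the surrounding commands already consumed, clamping negative results to zero (the best-effort case).

```java
// Sketch of the drift compensation in playFileInterval: wait for the interval
// minus the time already spent on the previous send, never less than zero.
class IntervalDrift {
    static long remainingWait(long intervalMillis, long elapsedMillis) {
        long wait = intervalMillis - elapsedMillis;
        return Math.max(0, wait);
    }
}
```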

The method playFileNaturalSpeed executes in a similar way, but instead of an interval value it uses the time that passed between two locations. Each location point in the GPX file has a timestamp, which is used to calculate the difference. This difference can be divided by a speeding factor to speed up the replay of the file. This method results in a more realistic way of emulating a user's movements, since location points do not arrive at a fixed rate in real-life situations.

These two methods are executed in a separate thread to allow other methods, such as user interface interactions or check methods, to be executed while the locations are being sent. The sending of the points can be paused and resumed at any time. The method waitUntilStopped allows waiting until the sending of all the points is completed. Listing 5.3 shows the method runNaturalSpeed, which is called in a separate thread by playFileNaturalSpeed.

Listing 5.3: The method runNaturalSpeed of GPXPlayer, which is called by the method playFileNaturalSpeed

    public void runNaturalSpeed() {
        while (running && position < points.size()) {
            try {
                Element el = points.get(position);
                // Parses the time based on a SimpleDateFormat
                long nowTime = sdf.parse(el.getChildText("time")).getTime();
                // If not the first point
                if (previousTime != 0 && previousRealTime != 0) {
                    long timeDifference = (nowTime - previousTime) / speedFactor;
                    long timePassed = System.currentTimeMillis() - previousRealTime;
                    long difference = timeDifference - timePassed;
                    if (difference < 0)
                        difference = 0;
                    Thread.sleep(difference);
                }
                previousTime = nowTime;
                previousRealTime = System.currentTimeMillis();
                this.sendPoint(el); // Sends the point to the emulator
                this.position++;
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
        // If there are no more points, stop running
        if (position >= points.size()) {
            running = false;
        }
    }

5.3 Usage

The proof of concept uses JUnit for Android, which means that each test case has the structure described by JUnit. This structure includes a setUp and tearDown method that are called at the beginning and end of each test method, respectively. Each test method's name should start with test, and each method should contain one use case in one specific environment. In order to use the functionality described above, the test case class should extend DroidGapTestCase, as is the case with MyTestCase in Figure 5.2. The test framework is used in a case study in Chapter 6.

Two pieces of advice can be given when using this proof of concept. The first is to always use a try-catch structure inside the test method, since many methods from this proof of concept throw an exception in case of a problem. If these exceptions are not caught, the test stops executing but does not fail, so the error might not be noticed when looking at the result. By using a try-catch structure, the exception is handled properly and should call JUnit's fail method. It might be a good idea to attach a finally clause too: if a test method requires starting from a logged-out state and the previous test method did not log out because of an error, this test method will fail because it was started in a wrong state.


Since JUnit executes the test methods in a random order, it is important that every test brings the application back to the default state when finished.

A second piece of advice is to create separate methods for interactions with the user interface or for checks of specific pages. Many of these actions, such as logging in, need to be executed in every test; by creating a separate method for such an action, the tester can be sure that the same conditions are tested each time.
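Combining both pieces of advice, a test method could be structured as in the sketch below. login, logout and the fail helper are hypothetical stand-ins (in a real test case, fail would be JUnit's), not code from the proof of concept.

```java
// Hypothetical scenario test illustrating the advised structure: catch
// exceptions and turn them into a failure, and restore the default
// (logged-out) state in a finally clause.
class ScenarioTestSketch {
    static boolean loggedIn = false;

    static void login()  { loggedIn = true; }
    static void logout() { loggedIn = false; }

    // Stand-in for JUnit's fail method.
    static void fail(String message) { throw new AssertionError(message); }

    public void testLoginScenario() {
        try {
            login();
            // ... interact with the user interface and check the result ...
        } catch (Exception e) {
            fail("Scenario failed: " + e.getMessage());
        } finally {
            logout(); // always bring the application back to the default state
        }
    }
}
```

The finally clause runs whether the scenario succeeds or fails, so the next randomly ordered test always starts from the logged-out state.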

5.4 Encountered problems

During the development of the test framework, several problems were encountered that could not be solved at this time. This does not mean that they can never be solved, only that a solution could not be found within this dissertation.

Automate file pushing and pulling to and from the device and/or emulator

Some of the methods described in the test framework require files to be available on the device or emulator, or they save new files to it. Examples are the GPXPlayer, which requires the GPX file to be available on the emulator, and the takeScreenshotFromActivity method, which saves the screenshot on the device or emulator.

If the test framework is to be used in an automated environment, files need to be pushed to and pulled from the device within the test case itself, so the developer or tester is sure that the files are available on each device. For the screenshots this is even more important, since they will be overwritten in the next execution of the test case on that device. To make sure that only the screenshots of the current test run are collected, the screenshots should be deleted from the device after each test run.

Pushing and pulling files to and from the device is possible from the command line by using adb. Since it is possible to execute command-line commands from within a Java project, it looked like the adb command-line tool could be used from within the JUnit test case. This turned out not to be possible, since the test case is itself an Android application that runs on the device or emulator, next to the application under test. Because it runs on the device or emulator, it cannot access files or command-line tools on the computer or server, but only tools on the device or emulator itself. This makes it impossible for now to push or pull files to or from the device or emulator from within a JUnit test.

Take screenshots of the device, not only the activity

The method takeScreenshotFromActivity allows taking a screenshot of the activity under test. This method saves the content of the webview to a PNG file. The webview does not contain the notification bar, which shows the GPS connection, time and other notifications, so this bar is not included in the screenshot. Another problem occurs when the activity goes to the background or when another view, such as a pop-up dialog, comes on top: this is not noticeable at all from the screenshots alone, since the screenshot of the activity still gives the same result.

This problem could also be fixed by using adb, since it allows taking screenshots of the whole device.

Handle dialog pop-ups

At this time, the test framework does not support handling dialog pop-ups, which means that it is not possible to automate clicking one of their buttons. This blocks applications that use these pop-up views and thus makes them hard or even impossible to test.

If a dialog pop-up is opened from Cordova (with navigator.notification.alert or navigator.notification.confirm), this creates another view that is not directly linked to the webview. The Hierarchy Viewer, one of the Android tools, shows the hierarchy of the views of an application in both situations. The normal hierarchy of the views, when there is no pop-up, is shown in Figure 5.3. This figure shows that the webview is embedded in several other views, with the PhoneWindow$DecorView as the highest view. The moment the pop-up is shown (Figure 5.4), this hierarchy totally changes and only the PhoneWindow$DecorView is kept. Views can be retrieved with Android's findViewById method, but then the view needs to have a known id, which is not the case here. When a new native dialog is created, an optional id can be provided; unfortunately Cordova does not do this, so there seems to be no way to retrieve the pop-up dialog.



Figure 5.3: The hierarchy of views in the application when the webview is shown (without pop-up).

Since the view of the pop-up dialog cannot be retrieved, the buttons cannot be clicked and the dialog stays on top.

Bring the activity to the front

When using the emulator, phone calls can be emulated from within the test cases. Once the emulator starts receiving the call, it sends the application under test to the background and shows the dialer screen with the incoming call. The same screen is shown if the call is put on hold, is busy or is accepted. When the call is ended, the dialer shows the screen with the call log, and the application under test is still in the background. This is fine as long as only JavaScript functions have to be executed. But if the test case uses native functions after the phone call, like clicking the menu button, these are executed on the activity that is on top, which is the dialer. This results in an error in the test.

To solve this problem, the activity under test should be brought back to the front. This is not possible with getActivity(), since that method only brings the activity to the front when it is started, not when it is already running in the background. The second idea was to use an Android intent, since intents can launch applications. This worked when using adb to send the intent (action MAIN, category LAUNCHER). Unfortunately, it does not seem to work when executing it from within the test project. The intent is received by DroidGap, which prints a log message from the startActivityForResult method and passes the intent on to Android itself by calling the super method. Then nothing happens: there is no error message or warning. The request code could be causing the problem, since it has -1 as value when startActivityForResult is called.

As long as an activity cannot be brought back to the front, or be restarted, the number of scenarios that can be used with this proof of concept is limited.
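The launcher intent that did work over adb corresponds to the standard "am start" command. A host-side sketch that rebuilds it is shown below; the component name is a hypothetical placeholder, and the helper class is an assumption, not part of the proof of concept:

```java
import java.util.Arrays;
import java.util.List;

// Host-side sketch (not runnable from the on-device JUnit test) that
// rebuilds the launcher intent via "adb shell am start".
public class BringToFront {

    static List<String> bringToFrontCommand(String component) {
        return Arrays.asList("adb", "shell", "am", "start",
                "-a", "android.intent.action.MAIN",      // action MAIN
                "-c", "android.intent.category.LAUNCHER", // category LAUNCHER
                "-n", component);                         // package/activity
    }

    public static void main(String[] args) {
        // A real runner would pass this list to ProcessBuilder and start it.
        // "com.example.app/.MainActivity" is a made-up component name.
        System.out.println(String.join(" ",
                bringToFrontCommand("com.example.app/.MainActivity")));
    }
}
```

This only restates the adb workaround from the host side; it does not solve the in-test startActivityForResult problem described above.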



Figure 5.4: The hierarchy of views in the application when the pop-up is shown.



Bundle test results and logs in an XML file

The different types of test artifacts, like emulator logs, device logs and test results, are not bundled in this proof of concept, which makes automation hard. On each test run, all the test results are written to the console, together with the output of the application and of the emulator. The log messages could be saved manually, but they also contain a lot of useless information from other activities that run while the tests run. It would be better if the output to the console were bundled in some form of XML that could be saved to the computer or server and parsed into a structured web page.

This functionality was not included in this proof of concept for multiple reasons. The first reason is that there was not enough time. Secondly, even if this XML file could have been created, it would have been written to the device itself and could not have been retrieved from the device automatically, which is needed for an automated test framework.
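A minimal sketch of such an XML bundle is given below, assuming the common Ant/JUnit report element names (testsuite, testcase, failure); the class and its exact schema are assumptions, and escaping of special characters is omitted for brevity:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch: bundle test results into a JUnit-style XML report string.
public class XmlReport {

    // testName -> failure message (null means the test passed)
    static String toXml(String suite, Map<String, String> results) {
        StringBuilder sb = new StringBuilder();
        sb.append("<testsuite name=\"").append(suite)
          .append("\" tests=\"").append(results.size()).append("\">\n");
        for (Map.Entry<String, String> e : results.entrySet()) {
            sb.append("  <testcase name=\"").append(e.getKey()).append("\"");
            if (e.getValue() == null) {
                sb.append("/>\n"); // passed: self-closing element
            } else {
                sb.append(">\n    <failure>").append(e.getValue())
                  .append("</failure>\n  </testcase>\n");
            }
        }
        sb.append("</testsuite>\n");
        return sb.toString();
    }

    public static void main(String[] args) {
        Map<String, String> r = new LinkedHashMap<>();
        r.put("testScenario1", null);
        r.put("testScenario3", "distance deviation exceeded 0.1 km");
        System.out.print(toXml("MoonbikerScenarios", r));
    }
}
```

Such a report would still need to be pulled off the device, which is exactly the adb limitation discussed at the start of this section.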

5.5 Conclusion

This proof of concept shows that it is possible on Android to use the native testing framework (JUnit) for automated scenario testing of webview-based native applications created with Cordova. Although this proof of concept still has some major issues, some scenarios can already be executed automatically with this test framework. The issues described in Section 5.4 will need to be solved, otherwise this test framework cannot be used widely. Almost every application uses a pop-up dialog at some point, so the applications and/or scenarios that could be tested would be very limited.

The next chapter will use the proof of concept in a case study.



Chapter 6

Case study

To evaluate the utility of the proof of concept, it was used to test an application created

with Cordova.

6.1 Situation of the case study

6.1.1 G-flux

G-flux¹ is a Belgian startup that creates smartphone applications for outdoor sports, such as running and cycling. All their apps are developed for iOS and Android, and therefore they rely on the PhoneGap/Cordova technology. This is rather uncommon for location-based applications. The user interfaces are created with Sencha Touch. The difficulties of testing Cordova applications, combined with those of testing GPS-based applications, make it impossible for them to do test-driven design.

G-flux has two applications. Their first application is a cycling GPS app for Android and

iOS, called BikeFlux. BikeFlux helps people find a cycling friendly route to any location

and shows the route on a map.

Their second application, Moonbiker, is part of a larger product called Bike To The Moon. Bike To The Moon helps companies offer an attractive cycling policy to their employees. It consists of a website that motivates the employees to keep cycling and challenges them to take the bicycle even more often. Together with the website comes

¹ http://www.g-flux.com



the Moonbiker application, which helps the employees to track their kilometers. The

Moonbiker application will be tested in this case study.

6.1.2 The Moonbiker application

The Moonbiker application is rather simple. It tracks the user's kilometers and sends them to the website. Besides this tracking functionality, a limited set of statistics about the user's own progress in biking to the moon is shown on the start screen. A detailed description of the flow of the application, together with some screenshots, can be found in Appendix B.

To use the application, a login is needed, which is only available if the company has subscribed to Bike To The Moon. Once logged in, the user will be logged in automatically the next time. Different statistics are shown on the start screen, like the distance the user or his team has already cycled and the distance still to be covered.

Pressing the ‘Start trip’ button starts registering a trip. The screen shows whether there is a good GPS fix. If so, the application starts registering and shows the number of kilometers already traveled. To stop the trip, the ‘Stop trip’ button needs to be pressed. The user is asked whether he really wants to stop this trip and whether he wants to share it on the website or discard it. Once that is done, the application automatically returns to the start screen. The number above the ‘Sync’ button indicates that at least one file needs to be synced, which happens automatically, just like the updating of the statistics.

This application can be used when there is no network connection. In that case the files are saved on the phone and the number above the ‘Sync’ button shows how many files are still on the phone. By clicking this ‘Sync’ button the user can force synchronization. This is not required, since the application automatically detects when the network connection is turned on again.

The menu contains buttons to show the settings (not used often and not tested in this case study), to log out, to go to the start screen, to synchronize (this button will not be tested, since the button on the start screen is used more often) and to show an about screen (also not tested). The back button responds as a user would expect: it goes to the previous step in the process or closes the application.



6.1.3 Testing the application

Testing an application that relies so heavily on GPS technology is always difficult. There are two options: simulate the situation in the emulator, or go outside and start cycling. Simulating is hard, since all the points have to be provided manually to the emulator through the Eclipse plugin or a telnet connection. With the Eclipse plugin a GPX or KML file could be used, or a script could be written for the telnet connection, but since there were no ready-made solutions to automate the other parts of testing the application, G-flux decided to go outside and cycle.

Then a second problem arose. When testing outside, without the phone being connected to a computer, the generated logs cannot be seen. G-flux solved this issue partially by logging their messages to a log file saved on the phone, but there are still errors that contain important information yet cannot be logged to this file because they cannot be intercepted. Problems in JavaScript often do not bubble up to the user interface like they do in Java. Java crashes show a pop-up dialog, but JavaScript crashes do not. They make parts of the application stop while others continue running, resulting in what users call ‘strange’ behavior. Sometimes this behavior is clearly defined and developers can, after some time and experience, make good guesses about what the problem could be. Unfortunately, there are bugs that are not noticed that clearly.

For some problems, the G-flux team needs to go outside with their laptops connected to their phones, go cycling, and hope that the bug will occur at that moment, since some bugs are only triggered by specific situations or devices. G-flux has several Android devices and tries to have at least one phone from each larger brand, but it is impossible for a startup to have a large collection of smartphones. Unfortunately, there are bugs that only happen on specific devices, or even on the same devices but with different settings, configurations or other installed apps that influence the behavior of the G-flux apps.

Testing is a problem for this company, since the user experience drops if the apps do not work perfectly. A (semi-)automated platform that tests Android applications would be a big step forward. It will still be impossible to test every possible configuration and scenario, but if the most common ones could be tested, a lot of bugs could be prevented.



6.2 Test scenarios

Below, the different scenarios that G-flux would want to test at each release are described. This list is not exhaustive but is limited to scenarios that differ substantially from each other.

Scenario 1

Start the application (not logged in). Log in to the application. The start screen should

be visible. Log out and close the application.

Circumstances:

• With network connection: It should work as described.

• Without network connection: It should not be possible to log in.

Scenario 2

Start the application (not logged in). Log in to the application. The start screen should be visible. Stop the application (by pressing the back button) and reopen it. The application should log the user in automatically and show the start screen. Log out and close the application.

Circumstances:

• With network connection: It should work as described.

• Without network connection during the first login: It should not be possible to log in.

• With network connection during the first login but without network connection during the second login: It should still be possible to log in without problems.

Scenario 3

Start the application (not logged in) and log in. The start screen should be visible. Press the start trip button and start cycling. After a while, stop cycling and press the stop trip button. The distance shown on the screen with the bicycle and in the pop-up dialog should be the same and correct (a deviation of 0.1 km is allowed). Share the trip. Return to the start screen. Within 5 seconds the distance should be updated and increased by the same number as the app recorded. Log out and close the application.

Circumstances:

• With network connection: It should work as described.

• Without network connection during log in: It should not be possible to log in.

• Without network connection between start and stop trip: It should work as described.

• Without network connection after stop trip: The trip will not be sent; instead, a number will appear above the sync button. The screen will not be updated. When the network connection is turned on again, the application should synchronize the files within 30 seconds.

• Without GPS fix: The screen should show that there is no GPS fix. No kilometers will be registered. If the distance is 0.0 km, the application redirects immediately to the start screen when the stop trip button is pressed. If the GPS fix is lost during cycling, the distance should be lower than or equal to the actual distance cycled.
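The 0.1 km tolerance used in this scenario and the next boils down to a simple absolute-difference check; a minimal sketch, with assumed class and method names:

```java
// Sketch of the tolerance check implied by "deviation of 0.1 km allowable".
public class DistanceCheck {

    // Returns true when the shown distance is within 0.1 km of the
    // expected (actually cycled) distance.
    static boolean withinTolerance(double shownKm, double expectedKm) {
        return Math.abs(shownKm - expectedKm) <= 0.1;
    }

    public static void main(String[] args) {
        System.out.println(withinTolerance(3.05, 3.1)); // deviation 0.05 km
        System.out.println(withinTolerance(2.8, 3.1));  // deviation 0.3 km
    }
}
```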

Scenario 4

Start the application (not logged in) and log in to the application. The start screen should be shown. Press the start trip button and start cycling. While cycling, a phone call is received. Pick up the phone, call while cycling and end the call after at least 1 minute of calling. Bring the application to the front. The distance should still be correct. After a while, stop cycling and press the stop trip button. The distance shown on the screen with the bicycle and in the pop-up dialog should be the same and correct (a deviation of 0.1 km is allowed). Share the trip. Return to the start screen. Within 5 seconds the distance should be updated and increased by the same number as the app recorded. Log out and close the application.

Circumstances:

• With network connection: It should work as described.

• Without network connection during log in: It should not be possible to log in.



• Without network connection between start and stop trip: It should work as described.

• Without network connection after stop trip: The trip will not be sent; instead, a number will appear above the sync button. The screen will not be updated. When the network connection is turned on again, the application should synchronize the files within 30 seconds.

• Without GPS fix: The screen should show that there is no GPS fix. No kilometers will be registered. If the distance is 0.0 km, the application redirects immediately to the start screen when the stop trip button is pressed. If the GPS fix is lost during cycling, the distance should be lower than or equal to the actual distance cycled.

• App is closed while calling: When the application is reopened, it logs in automatically. A pop-up appears telling that a problem happened and that not all points were registered. The cycling screen should be visible and should contain the distance that was tracked until the app was closed.

6.3 Implementation of the test scenarios

6.3.1 Adapting the application and the proof of concept

Since the proof of concept cannot deal with pop-up dialogs (see Section 5.4) and the application contains several of them, the application had to be adapted. It was adjusted so that pressing the ‘Stop trip’ button immediately redirects to the start screen, while the application assumes that the user really wants to stop his trip and share it on the website. These are reasonable assumptions for the test case.

The Moonbiker application uses Sencha Touch for the user interface, which creates its user interface in a special way. Sencha Touch's UI is entirely based on HTML div elements in combination with CSS and JavaScript. The UI is defined in JavaScript, which is translated into HTML code with CSS styles attached to the different tags and is injected into the application at runtime. This makes it hard to debug the UI, especially when there is a styling problem. The structure of the HTML body changes constantly, since Sencha Touch nests divs in divs in divs. Sencha Touch methods need to be used to retrieve a certain element, such as Ext.getCmp(id). Sencha Touch elements also do not use the ‘normal’ methods like onClick for handling click events on an object. Instead, a handler that responds to a Sencha Touch event should be attached to an element. It is not possible to fire a Sencha Touch event from outside Sencha, so during testing the handlers are called directly. To see whether a screen is visible or not, the DOM methods from the proof of concept cannot be used, since Sencha Touch uses CSS properties to make divs visible. Visibility can instead be determined by checking the value of the element's hidden attribute.

To solve all these Sencha Touch related issues, a special class SenchaElement, an extension of HTMLElement, was added to the proof of concept. It handles the retrieval of Sencha Touch elements, calls the handlers and determines which screen is visible.
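Such a visibility check could look roughly as follows. The JsBridge interface stands in for the proof of concept's JavaScript-execution channel into the webview; its name and shape, as well as the exact Ext.getCmp expression, are assumptions rather than the actual implementation:

```java
// Stand-in for the channel that evaluates JavaScript in the webview
// and returns the result as a string (hypothetical interface).
interface JsBridge {
    String eval(String script);
}

class SenchaElement {
    private final String id;
    private final JsBridge bridge;

    SenchaElement(String id, JsBridge bridge) {
        this.id = id;
        this.bridge = bridge;
    }

    // Sencha Touch hides screens via the element's 'hidden' attribute,
    // so plain DOM visibility checks do not work here.
    boolean isVisible() {
        String hidden = bridge.eval(
                "Ext.getCmp('" + id + "').getEl().getAttribute('hidden')");
        return hidden == null || hidden.isEmpty() || hidden.equals("false");
    }

    boolean isHidden() {
        return !isVisible();
    }
}

public class SenchaElementSketch {
    public static void main(String[] args) {
        JsBridge fakeWebview = script -> "false"; // simulated webview response
        System.out.println(new SenchaElement("measure", fakeWebview).isVisible());
    }
}
```

The fake bridge in main only illustrates the shape of the check; in the proof of concept the script would run inside the actual webview.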

6.3.2 Re-usable checks, methods and scenarios

User actions

Since several actions needed to be executed during each test, a separate method was created for each of these actions. The methods are used in Figures 6.2, 6.3 and 6.4 and are represented as in Figure 6.1b.

The method waitForApplication waits until the HTML element with id ‘login’ can be found, since this implies that the GUI of the application is loaded, which is needed to run the tests successfully. The method login fills in the login screen, takes a screenshot and presses the ‘Log in’ button; logout presses the menu button and the ‘Log out’ option; close presses the back button, thereby closing the application. startTrip checks whether the correct screen is shown and then presses the ‘Start trip’ button; its code is shown in Listing 6.1. stopTrip checks whether the correct screen is shown and then presses the ‘Stop trip’ button. getUserDistance retrieves the distance the user has cycled if the start screen is shown, and getCycledDistance retrieves the distance cycled during a trip if the cycling screen is shown. sync presses the ‘Sync’ button if the start screen is shown.

Listing 6.1: The method startTrip

private void startTrip() throws Exception {
    SenchaElement measure = new SenchaElement("measure", this);
    assertTrue("Measure screen should be visible when starting the trip.",
            measure.isVisible());

    SenchaElement startMeasure = new SenchaElement("startmeasure", this);
    SenchaElement measuring = new SenchaElement("measuring", this);
    assertTrue("Start measure screen should be visible when starting the trip.",
            startMeasure.isVisible());
    assertTrue("Measuring screen should not be visible when starting the trip.",
            measuring.isHidden());

    SenchaElement startTripButton = new SenchaElement("startTripButton", this);
    startTripButton.fireEvent();
}

Check methods

The previous methods all execute some user action. Since several checks need to be repeated in each test method, methods that test distinct pages, such as the start page or the cycling page, were created too. These methods are used in Figures 6.2, 6.3 and 6.4 and are represented as in Figure 6.1a.

The method checkStartMeasureScreen checks whether the start screen is shown and then checks whether the values shown on this screen are valid. The distance the user has cycled should be smaller than or equal to the distance the team cycled. This, in turn, should be smaller than or equal to the total distance cycled, which should be smaller than or equal to the distance to the moon. This method also checks whether a known bug occurred, which can be recognized by the numbers not having decimals.

checkMeasuringScreen checks whether the measuring screen is shown and whether the distance is a valid value. This method also checks for the known bug; its code is shown in Listing 6.2. checkCycledDistance compares the distance that was cycled, shown on the measuring screen, with the distance that should have been cycled, given as a parameter in the test.

Listing 6.2: The method checkMeasuringScreen

private void checkMeasuringScreen() throws Exception {
    SenchaElement measure = new SenchaElement("measure", this);
    SenchaElement startMeasure = new SenchaElement("startmeasure", this);
    SenchaElement measuring = new SenchaElement("measuring", this);
    HTMLElement distanceDiv = new HTMLElement("distanceSpan", this);

    assertTrue("Measure screen should be visible when cycling.",
            measure.isVisible());
    assertTrue("Start measure screen should not be visible when cycling.",
            startMeasure.isHidden());
    assertTrue("Measuring screen should be visible when cycling.",
            measuring.isVisible());
    assertTrue("Distance shown when cycling should contain '.' (bug).",
            distanceDiv.getInnerHTML().contains("."));
}

General scenarios

Some scenarios can be executed with different parameters, such as different GPX files, speed factors or login credentials. Each of these scenarios was given a separate method so that it can be called with different parameters. The test methods themselves (whose names start with test) call these methods with the correct parameters.
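The pattern can be sketched as follows; the method names, GPX file names and credentials are illustrative assumptions, not the proof of concept's actual API:

```java
// Sketch of the parameterized-scenario pattern: one reusable scenario
// method, called by several test* entry points with concrete parameters.
public class ScenarioTests {

    // The scenario implementation; a real version would drive the app.
    // This sketch only records which parameters were used.
    static String runScenario3(String gpxFile, double speedFactor, String user) {
        return "scenario3[" + gpxFile + ",x" + speedFactor + "," + user + "]";
    }

    // Test methods bind the scenario to concrete parameter sets.
    public void testScenario3ShortTrack() {
        System.out.println(runScenario3("short.gpx", 2.0, "tester1"));
    }

    public void testScenario3LongTrack() {
        System.out.println(runScenario3("long.gpx", 4.0, "tester2"));
    }

    public static void main(String[] args) {
        ScenarioTests t = new ScenarioTests();
        t.testScenario3ShortTrack();
        t.testScenario3LongTrack();
    }
}
```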

6.3.3 Executed scenarios

The first scenario can be executed on devices as well as on the emulator. Only the case with network connection was tested, since the case without it shows a pop-up telling the user that logging in is not possible without a network connection, and these pop-ups cannot be handled. Also, if this scenario is run on a real device, the network connection setting cannot be changed from within the test. This scenario is visualized in Figure 6.2.

The second scenario is implemented up to the closing of the application. After closing the application, it should be reopened and the user should be logged in automatically. Unfortunately, it turned out not to be possible to restart the application, so the test could not be implemented further. Since this scenario could not be completed, it is omitted from the test runs. Figure 6.3 shows how the scenario should look in case it could be fully programmed.

The third scenario can only be executed on an emulator, since the user's movements need to be emulated. This scenario could be fully implemented, except for the case without network connection while logging in, since that shows a pop-up dialog. Each of the described circumstances was tested in a different test method, since the flow was different in each case. It is visualized in Figure 6.4.

(a) Check method to check a screen, value, ... (b) User action method (c) An action that is programmed in this test (d) A method that saves an artifact, such as a screenshot (e) A value that is saved for later use (f) A test or loop

Figure 6.1: Legend of the flow diagrams of the test scenarios

Figure 6.2: Test flow of scenario 1

Figure 6.3: Test flow of scenario 2

Figure 6.4: Test flow of scenario 3

The fourth scenario can only be executed on the emulator, since the user's location has to be simulated and a phone call has to be made. This scenario was implemented up to the point where the call is stopped and the call log is shown by Android itself. At that moment the application is brought to the background, and no solution could be found to bring it back to the front. Since the menu button has to be pressed to log out, the menu of the call log appears, which of course does not have a log out button. This fails the test, so it was omitted from the test runs.

6.4 Evaluation

Two of the four proposed scenarios for this application could be implemented. The scenarios that failed did so because of limitations of the proof of concept, such as dealing with pop-up dialogs, failing to bring a screen back to the front or restarting the application.

Scenario   Manual testing on   Manual testing on   Automatic testing on
           real device         emulator            emulator
1          32                  57                  49
2          44                  88                  —
3          928                 —                   666
4          996                 —                   —

Table 6.1: Comparison of test execution times (in seconds). Cycling on the emulator in scenario 3 was emulated twice as fast as in real life.

These limitations clearly limit the applicability of the test framework. The tests also could not be fully automated, for example on each build step, but had to be started manually. The generated files had to be transferred manually after each test run.

The proof of concept proved its usefulness by finding at least four bugs in the application that could not easily have been found otherwise. All these bugs happened only occasionally but could be provoked with this test framework. One of the bugs was caused by writing two points at the same time to an output file, which made the tracker stop writing to the file altogether. This error did not show up in the user interface or in the log files saved on the device. Since it only happened a few times in real life, it was really hard to find. It was an important bug, because the operation of the application relies on the file that was being written when the bug happened.

To see whether the proof of concept reduced the testing effort, it was compared to manual testing on a real device and to manual testing on the emulator². Table 6.1 compares the execution times of the tests under normal circumstances. Several tests could not be executed on the emulator, either manually or with the proof of concept. The previous paragraph described why these scenarios could not be tested with the proof of concept. When testing manually on the emulator, a problem arose with the GPX file that was used in the proof of concept and that contained the route tracked during the manual tests. The Eclipse plugin accepted neither the GPX file nor the KML file that was created from it. Since it is a time-based comparison, it is important that the same number of points is used to simulate the cycling behavior while testing.

² The real device was an HTC Desire with Android 2.2.2; the emulator ran Android 2.3.3. The cycled track was 3.1 km.



Since not all data in the table is filled in, it is hard to make a good comparison. Manual testing on the device seems faster, except when cycling. This is due to the device used for the tests, an HTC Desire, which is a rather fast device. The emulator, on the other hand, is known to be very slow, and when using it for manual testing, the tester is slowed down by the slow response times of the emulator's user interface. The proof of concept does not suffer from this problem, since all user interactions are programmed. The speed of the emulator also depends on the speed of the computer or server it runs on. If this computer or server is executing lots of other programs, this will be noticed in the execution time of the proof of concept. This is not such a big problem, since the testing happens automatically once started and the tester can do other things while the tests are being executed.

The comparison cannot be made on time alone; other factors have to be kept in mind too. If a scenario contains cycling, the tester has to go outside and do a little cycling when testing on a real device. During the winter or in bad weather this is not a pleasant task, and it may be skipped. Cycling in real life also takes time, while cycling on the emulator can be accelerated. In the fourth scenario the tester should receive a call while cycling, which implies that a second device or person is involved to call the other device. This makes the scenario more cumbersome to execute and makes it easier to forget a step in the process.

6.5 Conclusion

The proof of concept is a step in the right direction. The test effort decreased, as did the debugging effort, since some problems were easier to find with an automated test. This resulted in an improvement of the quality of the application. G-flux is rather satisfied with the proof of concept and will continue to use it and test as many scenarios as possible with this test framework.

On the other hand, it is obvious that this proof of concept is more a work in progress than a finished product. The limitations described in Section 5.4 need to be resolved before this framework can be used thoroughly.



Chapter 7

Conclusions

This dissertation started with a state-of-the-art analysis of testing web and webview-based native applications. This analysis showed that the existing testing solutions have some flaws. These flaws were targeted by creating an architecture that focuses on scenario testing of web or webview-based native applications in different situations. The elaboration of this architecture revealed that the most important challenge would be creating the component that actually runs the tests on the different devices. A proof of concept for this component was created, targeting scenario testing of Cordova Android applications in different situations by using JUnit.

The proof of concept certainly proved that it is possible to create such a solution, but various problems remain. To evaluate the proof of concept, it was used to test an existing Cordova Android application. This case study showed that it improved the current test process, but also revealed limited applicability because of the problems mentioned.

7.1 Future work

Before this proof of concept can actually be used by a broad range of people and to test a broad range of applications, several subjects need more work and several questions need to be answered.

Further development of the proof of concept

The proof of concept needs to be further developed, and the problems mentioned in Section 5.4 need to be resolved. Some research may be needed, since the solutions do not seem straightforward. In addition, the proof of concept should be automated much further and be included in a continuous integration environment.

The proof of concept should also be embedded more tightly inside Cordova in order to make scenario testing of these applications easier. This can be done in several ways, such as transforming it into a plugin or including the test solution in Cordova.
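The plugin route could look as follows. At the time this thesis was written, Cordova for Android registered its native plugins in `res/xml/plugins.xml`; a scenario-test plugin would then be added with a single extra entry (the last class name below is hypothetical):

```xml
<!-- res/xml/plugins.xml of a Cordova Android project (Cordova 1.x).
     The first two entries ship with Cordova; the last one registers a
     hypothetical scenario-test plugin. -->
<plugins>
    <plugin name="Geolocation" value="org.apache.cordova.GeoBroker"/>
    <plugin name="Network Status" value="org.apache.cordova.NetworkManager"/>
    <plugin name="ScenarioTest" value="org.example.test.ScenarioTestPlugin"/>
</plugins>
```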

Extend the proof of concept

The proof of concept is applicable to Cordova Android applications, but what about iOS or Windows Phone applications? Is a similar approach, using the native testing framework to do scenario testing in different circumstances, possible there too? And if so, is there a way to reuse the code written for one platform on another? Can the framework be used to test webview-based native applications that were created in other ways than with Cordova? It is also possible that, with some adaptations, this proof of concept could be used for web applications too, thereby broadening the range of applications that could benefit from this testing framework.

Integration with other testing frameworks

In Chapter 3 a start was made on the design of a scenario testing framework for web and webview-based native applications. Once the problems are solved, how can the proof of concept be merged with this larger vision?

The proof of concept may also offer interesting concepts for the other test solutions currently in design or development, such as the focus on scenario testing, testing in different circumstances, and the use of the native testing framework to test webview-based native applications.


Appendix A

Scenarios

A.1 Terminology

The user is the person who uses the test system to run tests. This can be a tester or a developer of applications. The developer is the person who develops the test system itself and/or creates extensions. The administrator is the person who installs and maintains the test system. In some cases this person will be the same as the user. An external system is a system with which the test framework interacts, such as a continuous integration system.

A.2 Quality attribute scenarios

A.2.1 Usability: introduction to the test system

Source: The user
Stimulus: The user wants to get started with the test system, learn about the features and how to use the test system.
Artifact: The test system
Environment: At runtime
Response: Tutorials and accompanying examples are supplied. Accurate and recent documentation is supplied.


Response measure: More than 70% of the users do not find it hard to get started with the system. More than 70% find the tutorials and examples clear enough. It takes users one day (8 hours) on average to get started with the system.

A.2.2 Usability: use the test system efficiently

Source: The user
Stimulus: The user wants to use the test system efficiently.
Artifact: The test system
Environment: At runtime
Response: The test system checks input for correctness and validity at submit time. In case the input is wrong or invalid, the user is given the possibility to correct it. Reports are updated every minute to reflect the latest test progress and results. Test runs on devices are parallelized as long as different devices are available that can execute these tests.

Response measure: At least 70% of the users do not feel that they are losing time because of cumbersome procedures.

A.2.3 Extensibility: write a minor plugin

Source: The developer
Stimulus: The developer wants to extend the test system with a minor functionality. Examples of such functionality are: new user controls (touch, remote control, speech, etc.), new device functions (play movie, get battery status, etc.), new output formats (XML, TXT, etc.), new links with other platforms (Git, Subversion, Jenkins, etc.) and much more.
Artifact: The plugin structure that contains the extension
Environment: At development time
Response: The developer creates a new plugin with code that does not affect other functionality.


Response measure: The plugin is created and no other plugins need to be changed. The other functionality is not affected. The plugin can be created within one month.

A.2.4 Extensibility: deploy and use a minor plugin

Source: The user and the administrator
Stimulus: The user wants to use a plugin in his or her test cases.
Artifact: A configuration file of the test system and the test cases in which the user wants to use this functionality.
Environment: At runtime for the test system, at write time for the test cases
Response: The new plugin is added to the configuration file by the administrator without affecting the system (besides making the new functionality available). The test cases that want to make use of this new functionality need to be changed; other test cases do not.

Response measure: Only the test cases that use the new functionality are affected. Only one configuration file of the test system is changed.
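The single-configuration-file requirement above could be met with a plugin registry along these lines. This is a minimal sketch with illustrative names only: deploying a new plugin means adding one name-to-class line to the configuration, leaving existing entries and test cases untouched.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of how the test system could load plugins from a single
// configuration file: each line maps a plugin name to an implementing class,
// so deploying a new plugin means adding one line. Names are illustrative.
public class PluginRegistry {

    private final Map<String, String> plugins = new LinkedHashMap<>();

    // Parse "name=fully.qualified.ClassName" lines; blank lines and
    // '#' comment lines are ignored.
    void load(String configText) {
        for (String line : configText.split("\n")) {
            line = line.trim();
            if (line.isEmpty() || line.startsWith("#")) continue;
            String[] parts = line.split("=", 2);
            plugins.put(parts[0].trim(), parts[1].trim());
        }
    }

    // Look up the class registered for a plugin name (null if unknown).
    String classFor(String pluginName) { return plugins.get(pluginName); }

    public static void main(String[] args) {
        PluginRegistry registry = new PluginRegistry();
        registry.load("# test-system plugins\n"
                    + "touch=org.example.plugins.TouchControl\n"
                    + "battery=org.example.plugins.BatteryStatus\n");
        System.out.println(registry.classFor("battery"));
    }
}
```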

A.2.5 Extensibility: write a major extension

Source: The developer
Stimulus: The developer wants to extend the test system with a major functionality. A major functionality can be the support of a new operating system, the support of a new type of device or other functionality changes that require changing multiple components of the test system.
Artifact: The test system
Environment: At design time
Response: A new version of the test system is created that implements the new functionality but is also backwards compatible.

Response measure: Test cases created for a previous version of the test system are not affected. Functionality that was already available in the previous version is still available in this version. The new version of the test system can be created within four months.


A.2.6 Scalability: number of devices

Source: The test system
Stimulus: The load (the percentage of time used for testing) of a device changes.
Artifact: The device
Environment: At runtime
Response: If the load of a device is above 90% and there is a spare copy of this device left, the spare device is activated. If the load is above 80% and there is no spare device left, the administrator is warned. If the load is below 10% and another copy of this device is active, this device is deactivated.

Response measure: The system scales the devices based on the actual load and the planned load for the next 15 minutes. The tests themselves are not affected, and execution is guaranteed within 5 hours provided there is no mismanagement by the administrator (such as not adding extra devices after a warning) and there are no tests that take an excessive amount of time.
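The response described above boils down to a small decision rule per device. A minimal sketch, with the thresholds taken from the scenario text and all class and method names illustrative only:

```java
// Sketch of the device-scaling rule described in the scenario above.
// The thresholds (90%, 80%, 10%) come from the scenario text; the names
// are illustrative, not part of an actual implementation.
public class DeviceScaler {

    enum Action { ACTIVATE_SPARE, WARN_ADMIN, DEACTIVATE, NONE }

    /**
     * @param load        fraction of time the device is busy testing (0.0-1.0)
     * @param spareLeft   is an inactive spare copy of this device available?
     * @param otherActive is another copy of this device currently active?
     */
    static Action decide(double load, boolean spareLeft, boolean otherActive) {
        if (load > 0.90 && spareLeft) return Action.ACTIVATE_SPARE;
        if (load > 0.80 && !spareLeft) return Action.WARN_ADMIN;
        if (load < 0.10 && otherActive) return Action.DEACTIVATE;
        return Action.NONE;
    }

    public static void main(String[] args) {
        System.out.println(decide(0.95, true, true));   // ACTIVATE_SPARE
        System.out.println(decide(0.85, false, true));  // WARN_ADMIN
        System.out.println(decide(0.05, false, true));  // DEACTIVATE
    }
}
```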

A.2.7 Scalability: server load

Source: The system
Stimulus: The load of one of the servers that connects the devices changes. The load of a server is the average of the loads of the devices connected to it.
Artifact: The test system, the new server
Environment: At runtime
Response: If the load of the server is above 80% and another server is available that is connected to at least one of the devices with the highest load, that server is turned on. If no such server is available, the administrator is warned. If the load of the server is below 10% and, for each active device connected to this server, another device of the same kind is connected to another server, this server is switched off. If this is not the case, the administrator is warned because he might want to move devices to another server.


Response measure: The system scales the servers based on the actual load and the planned load for the next 15 minutes. The tests themselves are not affected, and execution is guaranteed within 5 hours provided there is no mismanagement by the administrator (such as not adding extra servers after a warning) and there are no tests that take an excessive amount of time.

A.3 Functional scenarios

A.3.1 Plan and run tests on devices

Description

The user (or the external system) wants to plan a series of tests. To do so, he needs to specify which tests need to run and on which kinds of devices they should run.

Priority / complexity: High / Medium

Pre-conditions

• The test system is set up, with at least one device attached to it.

• There is at least one application and test added to the test system.

• The user (or external system) is logged in to the system.

Trigger

• The user wants to plan some test(s).

• The external system wants to plan some test(s).

Main success scenario

1. On the website, the user selects the application that needs to be tested.

2. The user clicks the ‘Run test’ button.

3. The user is asked to select the test cases that will run. He selects the test cases and

presses ‘Next’.

4. The user is asked to select the devices that the tests will run on. He selects the

parameters and presses ‘Run tests’.


5. The tests are planned by the test system. The user is free to perform other actions on the website. Once the planning is finished, the user can see the details of the scheduled tests for each device on the detail page of this test run.

6. While the tests are being run, the detail page of this test run is updated so the user can follow the test progress. He can see whether tests that have already run succeeded or not.

7. When all the tests are finished, the user receives an e-mail with a summary of the results, and he can see these results on the detail page of the test run on the website.

Extensions

1. In case an external system wants to schedule the tests, this can be done through the command line interface. The external system provides as parameters: the folder where the results can be put, the name of the application, a config file that contains the test cases that will run and a config file with the parameters of the devices on which the application will be tested. Go to step 5.

3. The user can also upload a config file containing all the necessary information or select the tests that were previously used.

4. The user can also upload a config file containing all the necessary information or select the device parameters that were previously used.

7. In case an external system initiated the test run, the results are placed in the folder provided as a parameter on the command line.

Post-conditions

• The tests are planned and run.

• The tests are run on the specified devices.

A.3.2 Write test cases

Description

The user needs to write test cases before he can actually test the application. In the test cases he can test different kinds of functionality.


Priority / complexity: High / Medium

Pre-conditions

• The user has an application to test.

Trigger

• The user wants to write some tests.

Main success scenario

1. The user creates the correct file structure for the tests. This file structure consists

of one test project that contains multiple user scenarios. User scenarios consist of

different test blocks, that contain the flow for smaller parts like logging in.

2. The user starts writing different test blocks that can be used in the scenarios (logging

in, showing a screen). These blocks handle the different actions in different circum-

stances, for example with or without Internet access. The blocks define in which

circumstances they can be run and can be parametrized. They also have different

exits, for example one for when there was Internet and one for when there was no

Internet.

3. Now the user can combine different blocks and link the correct exits to the entrances.

This way he can build a scenario like the one the user would follow. He also fills in

certain parameters like the circumstances or necessary data files.

4. The user repeats steps 2 and 3 until all the scenarios are finished.

Extensions

3. This step can also be performed using a graphical interface with which the user can create a flow chart from the different blocks.

Post-conditions

• The tests are written.

• The tests are valid and can be used in the test system.
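The block-and-exit structure from the main success scenario can be sketched as follows. All names are illustrative; the proof of concept encodes this flow in JUnit tests rather than in the classes shown here:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of the block/scenario structure described above: a test block runs
// one step (e.g. logging in) and reports through which exit it left, so the
// scenario can link that exit to the entrance of the next block.
public class ScenarioSketch {

    // Shared state handed from block to block (circumstances, screens, ...).
    static class Context {
        boolean internet;
        String lastScreen = "start";
    }

    // A block returns the name of the exit it took.
    interface Block {
        String run(Context ctx);
    }

    // The login block behaves differently with and without Internet access
    // and therefore has two exits.
    static final Block LOGIN = ctx -> {
        ctx.lastScreen = ctx.internet ? "main" : "offline-warning";
        return ctx.internet ? "online" : "offline";
    };

    // Run one block and look up which block its exit is wired to.
    static String nextBlock(Block block, Context ctx, Map<String, String> wiring) {
        return wiring.get(block.run(ctx));
    }

    public static void main(String[] args) {
        // Wire the exits of the login block to the entrances of other blocks.
        Map<String, String> wiring = new LinkedHashMap<>();
        wiring.put("online", "startTripBlock");
        wiring.put("offline", "end");

        Context ctx = new Context();
        ctx.internet = false;
        String next = nextBlock(LOGIN, ctx, wiring);
        System.out.println(ctx.lastScreen + " -> " + next);
    }
}
```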


Appendix B

Screenshots Moonbiker application

When the Moonbiker application is opened, screen B.1a is shown. To use the application a login is needed, which is only available for companies that subscribed to Bike To The Moon. Fill in the e-mail address and password to log in to the application, as shown on screen B.1b. Once logged in, the main screen of the application is visible (screen B.1c). It contains different statistics: the first number is the distance the user cycled, the second is the distance the user's team cycled and the third is the distance the company already cycled. The distance still to be covered is shown too.

To start registering a trip, press the start trip button on screen B.1c. Screen B.1d will be shown, which indicates that there is no good GPS signal yet. The user can already start cycling, but the trip will only start registering when there is a good fix. Once this is the case, the distance is adjusted automatically, as shown on screen B.1e. To stop the trip, press the stop trip button. The file is automatically sent to the website and the application is redirected to screen B.1c.

The application synchronizes the trips and statistics automatically. After a while (5 seconds) the distances on the start screen should be adjusted, as shown on screen B.1f.


Figure B.1: Screenshots of the Moonbiker application.
(a) Start screen of the application
(b) Logging in to the application
(c) Overview of distance cycled and distance to go
(d) Start cycling - no GPS fix yet
(e) Cycling - distance is adjusted in real time
(f) Overview of distance - distance is adjusted




List of Figures

2.1 Worldwide smartphone sales by operating system (in millions of units sold)
2.2 Worldwide smartphone sales by operating system (in % market share)
2.3 Android platform distribution
2.4 Android platform distribution (historical)
2.5 Android device fragmentation
2.6 Development of native, webview-based native and web applications
2.7 Structure of a Document Object Model tree
2.8 Integration of test suite development and specification authoring
2.9 Test case deployment on W3C test server
3.1 Use case diagram with functional scenarios
3.2 Component diagram top level
4.1 Architecture of Cordova / PhoneGap
4.2 Plugin call sequence diagram
4.3 Bridging the gap between JavaScript and Java (on Android)
4.4 Architecture of JUnit for Android
4.5 Schematic representation of weinre messages
5.1 First approach to bridge the gap between Java and JavaScript
5.2 Second approach to bridge the gap between Java and JavaScript
5.3 Hierarchy of views (normal)
5.4 Hierarchy of views (pop-up)
6.1 Legend of the flow diagrams of the test scenarios
6.2 Test flow of scenario 1
6.3 Test flow of scenario 2
6.4 Test flow of scenario 3

Page 125: Scenario testing of mobile webview-based native applications …lib.ugent.be/fulltxt/RUG01/001/887/083/RUG01-001887083... · 2012. 11. 21. · Scenario testing of mobile webview-based

LIST OF TABLES 111

List of Tables

2.1 Programming languages on different platforms
2.2 Advantages and disadvantages of developing native applications
2.3 Advantages and disadvantages of developing web applications
2.4 Advantages and disadvantages of developing webview-based native applications
4.1 PhoneGap supported features and operating systems
5.1 Error codes inside the test framework
6.1 Comparison of testing execution times