Washington and Lee University
Web Application Testing Research
Azmain Amin
Mina Shnoudah
Summer Research Scholars
Professor Sara Sprenkle
September 7, 2015
Abstract
A web service allows two or more devices to interact with each other over a network. Because their architecture is standardized, web services allow platforms that are normally incompatible to interact seamlessly. Businesses use web services to transfer data between platforms that may be in different locations or written in different languages. A glitch in a web service can incur losses for firms, and regression testing can identify such glitches after a web service is updated. Our aim is to automate regression testing for web services. We generate XML-based test cases from logged user sessions, replay them with Soap UI on a fault-seeded web service, and measure the efficacy of the test suite by how many seeded faults it correctly identifies and by its code coverage.
Introduction
Web services play vital roles for both commercial and personal purposes. Businesses need an
efficient way of quickly accessing databases (some of which might be in different geographical
locations), and they do this using web services. As can be seen in Figure 1, web services
enable clients (users or machines) to interact with servers and transfer data over a network.
Businesses use web services as a common platform for transferring data across different
machines and platforms that normally could not interact with each other because of
compatibility differences. Companies also use web services to make their data available for
public consumption through APIs (Application Programming Interfaces). An API is a
guideline for accessing data owned by an entity; for example, we can use NPR's API to access
the latest news headlines or Twitter's API to get and post tweets.
Figure 1: This diagram shows how different clients access data on a database using a web
service.
Source: http://sharepoint-works.blogspot.com/2012/12/basics-of-web-services.html.
There are two main types of web services: SOAP and RESTful. SOAP (Simple Object Access
Protocol) is a standardized protocol for transferring information over the Internet that
exclusively uses the XML format. XML (Extensible Markup Language) is a standard language
that is both human- and machine-readable; it defines a set of rules for encoding documents
so that they can be read by any machine or platform. RESTful web services use both
XML and JSON (JavaScript Object Notation). JSON does not use tags like XML and therefore
consumes fewer resources; it is lighter and is quickly gaining ground on XML. Another critical
difference between SOAP and RESTful web services is that SOAP web services use SOAP
envelopes to send and receive messages, while RESTful web services send requests to a URI
(Uniform Resource Identifier) and get the data back in JSON or XML format.
Figure 2: The difference between SOAP and RESTful web services. Source:
http://downloads.eviware.s3.amazonaws.com/web_site_images/soapui/web_images/Dojo/REST_
vs_Soap.png.
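To make the contrast concrete, the sketch below builds both request styles for the same hypothetical "get a random pet" operation. The endpoint, method name, and parameters are invented for illustration; only the envelope namespace is SOAP's standard one.

```python
import urllib.parse
import xml.etree.ElementTree as ET

# RESTful style: the request is just a URI with query parameters;
# the response would come back as JSON or XML.
params = {"animal": "cat", "format": "json"}
rest_request = "http://example.com/api/pet.getRandom?" + urllib.parse.urlencode(params)

# SOAP style: the same request is wrapped in an XML envelope and
# POSTed to a single service endpoint.
ENV = "http://schemas.xmlsoap.org/soap/envelope/"
envelope = ET.Element(ET.QName(ENV, "Envelope"))
body = ET.SubElement(envelope, ET.QName(ENV, "Body"))
call = ET.SubElement(body, "getRandomPet")
ET.SubElement(call, "animal").text = "cat"
soap_request = ET.tostring(envelope, encoding="unicode")

print(rest_request)
print(soap_request)
```

The REST request carries all of its information in the URI itself, while the SOAP request carries it in the XML body, which is why SOAP messages tend to be heavier.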
It is not uncommon to confuse web services with web applications, since the difference is
subtle. A web application lets human users interact with databases. Web services, on the other
hand, allow both user-to-machine and machine-to-machine interaction, even across different
platforms. Web applications have a clear user interface (UI) that users interact with
graphically; web services usually do not have UIs. Most of the time, web applications use web
services to access data from different databases. To better visualize how they interact,
Figure 3 shows a simple example in which a desktop client uses a web application while a
mobile client fetches the same data directly through a web service.
Figure 3: A simple example of a web service and web application working together.
Source: http://paulhammant.com/images/trunk_vs_apps_mobile.jpg.
Use of web services is increasing rapidly. Web services are vendor-, platform-, and
language-independent. They offer interoperability, which allows businesses to coordinate
information from different branches seamlessly despite differences in the platforms and
machines being used. Any information on a web service can be accessed by permitted machines.
This is possible because web services are highly standardized: all requests and responses are
transferred using standardized data types and protocols, e.g., requests are made via HTTP
calls and the service responds with XML or JSON. This standardization also reduces the cost
of writing customized programs to integrate all the different applications used by a business.
Web services are employed by almost all businesses, so their efficacy is crucial. A
problem or delay in a service might cost a business thousands of dollars. To make
sure web services run smoothly, it is imperative to test them thoroughly before deployment.
Testing is so important that developers often design test cases before they start developing
the actual program or software; this is called test-driven development (TDD). TDD reduces
development time in the long run, since the finished product is fully tested and ready to
deploy. Moreover, the test cases can form the basis from which the next test suite (a
collection of test cases) is made. There are two fundamental parts of testing: generating
test cases and replaying them through oracles to see how many errors the test cases can
detect. Errors are usually inserted intentionally by the developer, a practice known as
fault seeding. The efficacy of a test suite is measured by how well it finds these faults
with the help of an oracle. Below are two instances of the same test case, except that
the test case on the right was fault seeded.
Figure 4: On the left, Soap UI ran through the test case and found no errors. On the right is a test
step that is intentionally fault seeded. The test step “Random Cat” contains a JsonPath Match
assertion that checks to see if the animal value of the response is “Dog”, which should fail since
it returned a cat.
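Outside of Soap UI, the same kind of assertion is just a predicate over the parsed response. The sketch below mimics the JsonPath Match check from Figure 4; the response shape and field name are illustrative, not the Petfinder API's exact format.

```python
import json

def animal_assertion(response_text, expected):
    """Mimic a JsonPath Match assertion: check the 'animal'
    field of a JSON response against an expected value."""
    data = json.loads(response_text)
    return data.get("animal") == expected

response = '{"animal": "Cat", "name": "Whiskers"}'
print(animal_assertion(response, "Dog"))  # False: fails, like the seeded step
print(animal_assertion(response, "Cat"))  # True: passes
```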
An oracle checks certain conditions of the response to determine whether or not the test case
contains an error. Sometimes the status code returned in the response is all the oracle needs
to tell whether the response contains an error. All responses return a status code from one
of five classes, numbered 100 to 599. A common status code most people are familiar with is
"404 Not Found", which means the requested data could not be found. Status codes fall into
one of the five classes depending on their first digit: 1 means the response is
informational, 2 signifies a successful request and response, 3 signifies that the request
was redirected, 4 denotes a client-side error, and 5 indicates a server-side error. However,
sometimes a request returns a 200 status code, indicating that the response was successful,
but the response does not contain what the user expected. For example, we could search for
tweets from a misspelled Twitter handle, and the web service would return a 200 status code
with no tweets.
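A status-code oracle therefore only needs the leading digit of the code. A minimal sketch:

```python
def status_class(code):
    """Map an HTTP status code to its class by its leading digit."""
    classes = {
        1: "informational",
        2: "success",
        3: "redirection",
        4: "client error",
        5: "server error",
    }
    return classes.get(code // 100, "unknown")

print(status_class(404))  # client error
print(status_class(200))  # success
```

As the misspelled-handle example shows, though, a 2xx code alone is not a sufficient oracle; a thorough test case must also check the response body.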
Since web services are updated periodically, regression testing is the best way to test them.
Regression testing is the selective retesting of a system or component to ensure that
modifications have not caused unintended effects and that the system still does what it is
supposed to do. For example, if a web service that finds dogs in a shelter were updated to
also find cats in the shelter, regression testing would check that the web service still
finds dogs. Regression testing is even more important for businesses that sell web services
to different clients. When a client purchases a web service, they sign a Service Level
Agreement (SLA), which guarantees that the web service will satisfy certain criteria, e.g., a
response time of less than 50 ms. When web services are updated, regression testing can make
sure that the updates do not violate the SLA. This is important for both the consumer and
the seller.
Although testing is crucial for efficient web services, web services are currently not tested
enough, especially not with automated testing. Our goal is to minimize the human time and
effort it takes to test web services by generating automated test cases that can find the
bugs and errors in them. We want these automatically generated test cases to find bugs in a
web service after it has been updated (regression testing).
Methods
We start by logging user sessions. A user session starts when a unique user starts using a
web service and ends when the user quits and stops interacting with it. Usually, to access a
web service, you have to supply a unique key, sometimes called an authentication key. We
define a user as someone who has a unique key, and we group his or her requests into a user
session. The requests are then converted into XML-based test cases. For example, suppose we
have a sequential list of requests that should yield a specific result, such as the ISBN of a
book; the test case checks whether the sequence of requests yields this result. We use Soap
UI to replay these test cases on a fault-seeded web service. The efficacy of the test cases
is then calculated based on how many of the faults they succeed in identifying. Their
efficacy also depends on code coverage, a measure of the percentage of source code that a
test suite exercises. The test suite should also have good function-call coverage, meaning it
should test all the functions of the web service and report if a function was not called
properly. We will then evaluate how well our automated testing worked; if it is satisfactory,
we will employ it on a real web service.
Figure 5: Our testing method.
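Grouping logged requests into user sessions can be sketched as below; the log record layout and the field name "key" are assumptions about how the authentication key appears in our logs.

```python
from collections import defaultdict

def group_sessions(logged_requests):
    """Group logged requests into user sessions, keyed by each
    request's authentication key (assumed field name 'key')."""
    sessions = defaultdict(list)
    for request in logged_requests:
        sessions[request["key"]].append(request)
    return dict(sessions)

log = [
    {"key": "user1", "method": "search", "params": {"animal": "dog"}},
    {"key": "user2", "method": "simple", "params": {"animal": "cat"}},
    {"key": "user1", "method": "simple", "params": {"id": 42}},
]
sessions = group_sessions(log)
print(len(sessions["user1"]))  # 2: user1's session holds two requests
```

Each resulting session is a sequential list of requests, which is exactly what gets converted into an XML-based test case.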
Summer Work
We began exploring the world of web services by using the Petfinder API, which provided us
with good data to fill our local database. Using Python and Django, we made a website,
https://servo.cs.wlu.edu/animals/. The website collected its data by sending a GET request to
the Petfinder API twice a day and storing the results in our database. The website was built
by incorporating multiple web services: we also actively used the POST method through the
Twitter API, so that whenever a new animal was put up for adoption, a POST request was sent
to Twitter to post a tweet about the new animal on the @RockBridgePets Twitter handle.
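The store half of the twice-daily fetch-and-store cycle can be sketched as follows. In the real site, Django models handle persistence; here a plain SQLite table with illustrative column names stands in, and the fetched records are sample data rather than a live API response.

```python
import sqlite3

def store_pets(conn, pets):
    """Store pets fetched from the API into a local database table.
    The table layout and field names are illustrative."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS pets (id INTEGER PRIMARY KEY, name TEXT, animal TEXT)"
    )
    # INSERT OR REPLACE keeps the twice-daily job idempotent: refetching
    # a pet we already stored simply overwrites its row.
    conn.executemany(
        "INSERT OR REPLACE INTO pets (id, name, animal) VALUES (?, ?, ?)",
        [(p["id"], p["name"], p["animal"]) for p in pets],
    )
    conn.commit()

fetched = [{"id": 1, "name": "Rex", "animal": "dog"},
           {"id": 2, "name": "Mia", "animal": "cat"}]
conn = sqlite3.connect(":memory:")
store_pets(conn, fetched)
print(conn.execute("SELECT COUNT(*) FROM pets").fetchone()[0])  # 2
```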
After we had collected a sufficient amount of data using the Petfinder API, we began
designing our own web service. We wanted to start with a simple API, so we decided on two web
service methods, search and simple. Search is an advanced search method that finds the pets
in the database that match the input parameters and returns all of their information. Simple
is a search method that returns basic information, such as pet name, animal, pet ID, and the
time spent in the shelter. After developing these two methods for our web service, we created
a page that documents our API methods and their parameters, which can be seen at
https://servo.cs.wlu.edu/animals/api_doc/. The following parameters can be searched using our
API: animal (cat/dog), id, status (Adoptable, Adopted, or Animal Removed), age (Baby, Young,
Adult, Senior), sex (Male or Female), and size (S=small, M=medium, L=large, XL=extra-large).
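A request to the search method might be constructed as below. The parameter names come from our documentation page, but the exact endpoint path is an assumption for illustration.

```python
import urllib.parse

BASE = "https://servo.cs.wlu.edu/animals/api/search"  # endpoint path assumed

# Build a query for young adoptable female dogs of medium size.
params = {"animal": "dog", "status": "Adoptable", "age": "Young",
          "sex": "Female", "size": "M"}
url = BASE + "?" + urllib.parse.urlencode(params)
print(url)
```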
After we had developed our own API to test, we started looking for testing tools. Among the
tools we found were Soap UI, RestClient UI, and AlertSite UXM. RestClient UI is made to test
RESTful web services, yet it offers no way to know whether the data sent and received was
accurate. AlertSite UXM did not show up until halfway through testing software APIs. Soap UI
offered the best overall testing capabilities for our research: test suites (collections of
test cases), automatic execution of test cases after they are written, and detailed
information at each test step. To get familiar with Soap UI, we began by creating a test case
for the Petfinder API. This test case, which can be seen in Figure 6, contained seven steps:
five GET requests, a property transfer, and a script.
Figure 6: Our first test case in Soap UI.
The first step, Random, sends the key, output, format, and shelterid parameters to the
Petfinder API, which responds with a random pet from the shelterid's shelter. Next, the
property transfer step takes the animal value "Cat" (or "Dog") from Random's response and
inputs it as the animal parameter of the next GET request, named Copy of Random. The third
step is a GroovyScript step that gets the animal property from Copy of Random and makes it
lowercase, "cat" or "dog". Without this step, Copy of Random gives the error "invalid
arguments" because the Petfinder API only recognizes lowercase animal parameters. The
remaining four test steps are Random Cat, Random Dog, Random Male, and Random Female, which
select a random cat, dog, male, and female, respectively.
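The property transfer plus the lowercasing script amount to the following transformation. Soap UI performs it with a GroovyScript step; the nested response structure shown here is illustrative.

```python
def next_request_params(previous_response):
    """Carry the animal value from one response into the next
    request's parameters, lowercased so the Petfinder API
    accepts it as a valid animal parameter."""
    animal = previous_response["petfinder"]["pet"]["animal"]  # e.g. "Cat"
    return {"animal": animal.lower()}

response = {"petfinder": {"pet": {"animal": "Cat", "name": "Tom"}}}
print(next_request_params(response))  # {'animal': 'cat'}
```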
Soap UI checks whether or not a test case is successful through a feature called assertions.
Assertions are oracles that check certain aspects of the response, such as whether the
response contains a word or value we expect, e.g., "Dog" or "Cat". If the conditions of the
assertions are met, the test case passes; otherwise, it fails. After getting familiar with
Soap UI's features, we began testing our own API.
Figure 7: A more intensive test case that uses Soap UI's property transfers and Groovy
scripts to test our web service methods.
Future Work
Our future work will analyze the reports of our web service test cases. We will look for
patterns in failing test cases to figure out what is causing most of the errors, and then fix
them. Through this testing, we will be able to identify the weak points of the web service
and work to improve it. After making sure the current version of our API functions correctly,
we will intentionally add faults to the web service test cases to see whether Soap UI catches
them. We will save these test cases for future regression testing after we update the web
service. After running automated tests on the fault-seeded test cases, where Soap UI catches
the intentional faults and errors, we will update the API to include more methods and to
change the database models as needed. After each update to the web service, we will run not
only the new test cases but all of the older ones as well, to ensure that the web service can
still perform those test cases without error. We will examine the amount of time and computer
resources spent on testing, how much of our web service's functions and code were tested, and
how many seeded faults the testing uncovered. By testing in this way and analyzing our
methods, we will be able to find the most effective method of testing our web service and
reduce the human time and effort it takes to test a web service.