
Canopus: A Domain-Specific Language for Modeling Performance Testing

Maicon Bernardino∗†, Avelino F. Zorzo∗, Elder M. Rodrigues∗‡
∗Postgraduate Program in Computer Science – Faculty of Informatics (FACIN), Pontifical Catholic University of Rio Grande do Sul (PUCRS) – Porto Alegre, RS, Brazil
†FEEVALE University – Novo Hamburgo, RS, Brazil
‡Federal University of Health Sciences of Porto Alegre (UFCSPA) – Porto Alegre, RS, Brazil
bernardino@acm.org, avelino.zorzo@pucrs.br, elderr@ufcspa.edu.br

Abstract—Despite all the efforts to reduce the cost of the testing phase in software development, it is still one of the most expensive phases. To continue to minimize those costs, in this paper we propose a Domain-Specific Language (DSL), built on top of the MetaEdit+ language workbench, to model performance testing for web applications. Our DSL, called Canopus, was developed in the context of a collaboration1 between our university and a Technology Development Laboratory (TDL) of an Information Technology (IT) company. We present the Canopus metamodels, its domain analysis, a process that integrates Canopus into Model-Based Performance Testing, and its application to an industrial case study.

1 Study developed by the Research Group of the PDTI 001/2016, financed by Dell Computers with resources of Law 8.248/91.

Index Terms—performance testing; domain-specific language; domain-specific modeling; model-based testing.

I. INTRODUCTION AND MOTIVATION

It is well known that the testing phase is one of the most time-consuming and laborious phases of a software development process [1]. Depending on the desired level of quality for the target application, and also on its complexity, the testing phase can have a high cost. Defining, designing, writing, and executing tests normally require a large amount of resources, e.g., skilled human resources and supporting tools. To mitigate these issues, it would be relevant to define and design the test activity using a well-defined model or language that represents the domain at a high level of abstraction, and to adopt some technique or strategy to automate the writing and execution of the tests from that test model or language. One of the most promising techniques to automate the testing process from system models is Model-Based Testing (MBT) [2].

MBT provides support to automate several activities of a testing process, e.g., test case and script generation. In addition, the adoption of an MBT approach provides other benefits, such as a better understanding of the application, its behavior, and its test environment, since it provides a graphical representation of the System Under Test (SUT). Although MBT is a well-defined and widely applied technique to automate some testing levels, it has not been fully explored for testing non-functional requirements of an application, e.g., performance testing.
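
As a toy illustration of the core MBT idea (not of Canopus itself), the sketch below derives abstract test sequences from a behavioral model of a SUT, here a directed graph of user actions. The graph, action names, and depth bound are all hypothetical:

    // Toy behavioral model of a SUT: each state maps to the actions
    // (transitions) available from it. Names are hypothetical.
    object MbtSketch {
      val model: Map[String, List[String]] = Map(
        "home"     -> List("login", "search"),
        "login"    -> List("search"),
        "search"   -> List("checkout"),
        "checkout" -> Nil
      )

      // Enumerate every action sequence from `state` with at most `depth`
      // transitions; each resulting path is an abstract test case.
      def paths(state: String, depth: Int): List[List[String]] =
        if (depth == 0 || model(state).isEmpty) List(List(state))
        else model(state).flatMap(next => paths(next, depth - 1).map(rest => state :: rest))

      def main(args: Array[String]): Unit =
        paths("home", 3).foreach(p => println(p.mkString(" -> ")))
        // prints: home -> login -> search -> checkout
        //         home -> search -> checkout
    }

In a performance testing setting, each generated path would additionally be annotated with workload data and concretized into an executable script for a load generator.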


There are some works proposing models or languages to support the design of performance models. For instance, the SPT UML profile [3] relies on textual annotations on models, e.g., stereotypes and tags, to support the modeling of performance aspects of an application. Another example is the Gatling [4] Domain-Specific Language (DSL), an internal, textual DSL created based on industrial needs and tied to a specific testing tool.

Although these models and languages are useful to support the design of performance models and also to support testing automation, a few limitations restrict their integration into a testing process using an MBT approach. Despite the benefits of using a UML profile to model specific needs of the performance testing domain, its use can lead to some limitations. For instance, most of the available UML design tools do not provide support for working with a well-defined subset of UML elements, which is needed when working with a restricted and specialized language. Thus, the presence of several unnecessary modeling elements may turn modeling into an error-prone and complex activity.

Most of these issues could be mitigated by the definition and implementation of a graphical and textual DSL for the performance testing domain. However, to the best of our knowledge, there is little investigation on applying DSLs to the performance testing domain. For instance, the Gatling DSL provides only a textual representation, based on the Scala language [4], and is tied to a specific load generator technology; it is a script-oriented DSL. The absence of a graphical representation could be an obstacle to its adoption by performance analysts who already use some type of graphical notation to represent the testing infrastructure or the SUT. Furthermore, as already stated, the use of a graphical notation provides a better understanding of the testing activities and the SUT to the testing team, as well as to developers, business analysts, and non-technical stakeholders. Another limitation is that the Gatling DSL is bound to a specific workload solution.
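
For illustration, the sketch below approximates what such a script-oriented scenario looks like in Gatling's Scala DSL; the class name, URL, request names, and workload figures are placeholders, and exact DSL method names vary across Gatling versions:

    import io.gatling.core.Predef._   // core DSL: scenario, exec, setUp, injection steps
    import io.gatling.http.Predef._   // HTTP DSL: protocol configuration and requests
    import scala.concurrent.duration._

    class BrowseSimulation extends Simulation {
      // Protocol configuration; the base URL is a placeholder.
      val httpProtocol = http.baseUrl("http://shop.example.com")

      // One user profile: open the home page, think for 2 s, then search.
      val browse = scenario("Browse")
        .exec(http("home").get("/"))
        .pause(2)
        .exec(http("search").get("/search?q=tv"))

      // Workload definition: ramp 100 virtual users over 30 seconds.
      setUp(browse.inject(rampUsers(100).during(30.seconds)))
        .protocols(httpProtocol)
    }

Scenario structure, user behavior, and workload all live in code here, which is precisely the representation gap that a complementary graphical notation aims to close.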

Therefore, it would be relevant to develop a graphical modeling language for the performance testing domain to mitigate some of the limitations mentioned earlier. In this paper, we propose Canopus, a DSL that aims to provide a graphical and textual way to support the design of performance models, and that can be applied in a model-based performance testing approach.


Hence, the scope of our DSL is to support the performance testing modeling activity, aggregating information about the problem domain to provide better knowledge sharing among testing teams and stakeholders, and centralizing the performance testing documentation. Moreover, our DSL will be used within an MBT context to generate performance test scripts and scenarios for third-party tools/load generators. It is important to mention that, during the design and development of our DSL, we also considered some requirements from a Technology Development Lab (hereafter referred to as TDL) of an industrial partner, in the context of a collaboration project to investigate performance testing automation. These requirements were: a) the DSL had to allow representing the performance testing features; b) the technique for developing our DSL had to be based on language workbenches; c) the DSL had to support a graphical representation of the performance testing features; d) the DSL had to support a textual representation; e) the DSL had to include features that illustrate performance counters (metrics, e.g., CPU, memory); f) the DSL had to allow the modeling of the behavior of different user profiles; g) traceability links between graphical and textual representations had to require minimal human intervention; h) the DSL had to be able to export models to specific technologies, e.g., HP LoadRunner, MS Visual Studio; i) the DSL had to generate model information in an eXtensible Markup Language (XML) file; j) the DSL had to represent different performance test elements in test scripts; and k) the DSL had to allow the modeling of multiple performance test scenarios. The rationale and design decisions for our DSL are described in [5].

In this paper, we discuss the performance testing domain and describe how the metamodels were structured to compose our DSL. We also present how Canopus was designed to be applied with an MBT approach to automatically generate performance test scenarios and scripts. To demonstrate how our DSL can be used in practice, we applied it in an industrial case study.

This paper is organized as follows. Section II introduces background on performance testing and DSLs, and discusses related work. Section III presents an analysis of the performance testing domain and describes the Canopus metamodels. Section IV describes a model-based performance testing process that integrates Canopus. Section V presents how we applied Canopus in an industrial case study. Section VI concludes the paper with some future directions and lessons learned.

II. BACKGROUND

This section introduces Model-Based Performance Testing and Domain-Specific Languages, and discusses related work.

A. Model-Based Performance Testing

Performance engineering is an essential activity to improve scalability and performance, revealing bottlenecks of the SUT [6]. It can be applied in accordance with two distinct approaches: a predictive-based approach and a measurement-based approach. The predictive-based approach describes how the system operations use the computational resources, and how the limitation of, and concurrence for, these resources affect such operations; usually, the design is supported by a formal model, e.g., a Finite State Machine (FSM). The measurement-based approach, in turn, supports the performance testing activity, which can be classified as load testing, stress testing, and soak (or stability) testing [7]. These differ from each other based on their workload and the time that is available to perform the test. They can also be applied to different application domains, such as desktop, mobile, web services, and web applications.
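
As a rough sketch of how these three test types differ mainly in workload shape and duration, the fragment below expresses them as injection profiles in the style of Gatling's Scala DSL; user counts and durations are illustrative, not prescriptive:

    import io.gatling.core.Predef._
    import io.gatling.http.Predef._
    import scala.concurrent.duration._

    class WorkloadShapes extends Simulation {
      val scn = scenario("Browse")
        .exec(http("home").get("http://shop.example.com/"))  // placeholder request

      // Load test: the expected production workload, for a typical test window.
      val load = scn.inject(rampUsers(200).during(10.minutes))

      // Stress test: well beyond the expected peak, to locate the breaking point.
      val stress = scn.inject(rampUsers(2000).during(10.minutes))

      // Soak test: a moderate, constant arrival rate held for a long period,
      // exposing memory leaks and gradual degradation.
      val soak = scn.inject(constantUsersPerSec(20).during(12.hours))

      setUp(load)  // choose one profile per run
    }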

Based on that, a variety of testing tools have been developed to support and automate performance testing activities. The majority of these tools take advantage of one of two techniques: Capture and Replay (CR) or Model-Based Testing (MBT). In a CR-based approach, a performance engineer must first run the tests manually once on the web application to be tested, using a tool in "capture" mode, and then run the load generator to replay the captured interactions, simulating multiple concurrent virtual users.
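
A minimal sketch of the replay half of CR, assuming a previously captured trace of two requests against a hypothetical host, and using Java 11's java.net.http client; real CR tools add parameterization, think times, and response validation:

    import java.net.URI
    import java.net.http.{HttpClient, HttpRequest, HttpResponse}

    object ReplaySketch {
      // A previously "captured" request trace; URLs are placeholders.
      val capturedTrace = List(
        "http://shop.example.com/",
        "http://shop.example.com/search?q=tv"
      )

      def main(args: Array[String]): Unit = {
        val client = HttpClient.newHttpClient()
        // Each thread plays the role of one virtual user replaying the trace.
        val users = (1 to 10).map { id =>
          new Thread(() =>
            capturedTrace.foreach { url =>
              val req = HttpRequest.newBuilder(URI.create(url)).build()
              val res = client.send(req, HttpResponse.BodyHandlers.discarding())
              println(s"user $id GET $url -> ${res.statusCode()}")
            })
        }
        users.foreach(_.start())
        users.foreach(_.join())
      }
    }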
