Pós-Graduação em Ciência da Computação
“An Embedded Software Component Quality Evaluation
Methodology”
Fernando Ferreira de Carvalho
PH.D. THESIS
Universidade Federal de Pernambuco
www.cin.ufpe.br/~posgraduacao
RECIFE, FEBRUARY/2010
UNIVERSIDADE FEDERAL DE PERNAMBUCO
CENTRO DE INFORMÁTICA
PÓS-GRADUAÇÃO EM CIÊNCIA DA COMPUTAÇÃO
Fernando Ferreira de Carvalho
“An Embedded Software Component Quality Evaluation
Methodology”
A PH.D. THESIS PRESENTED TO THE GRADUATE PROGRAM IN COMPUTER SCIENCE OF THE CENTER OF INFORMATICS OF THE FEDERAL UNIVERSITY OF PERNAMBUCO, IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR IN COMPUTER SCIENCE.
ADVISOR: Silvio Lemos Meira
RECIFE, FEBRUARY/2010
Carvalho, Fernando Ferreira de
An embedded software component quality evaluation methodology / Fernando Ferreira de Carvalho. - Recife: O Autor, 2010. 209 p. : il., fig., tab.
Tese (doutorado) – Universidade Federal de Pernambuco. CIn. Ciência da Computação, 2010.
Inclui bibliografia e apêndices.
1. Ciência da computação. 2. Sistemas embarcados. 3. Qualidade de componentes de software. 4. Engenharia da computação. I. Título.
004 CDD (22. ed.) MEI2010 – 074
Resumo
One of the greatest challenges for the embedded industry is to provide products with a high level of quality and functionality, at low cost and with a short development time, bringing them to market quickly and thus increasing the return on investment. The cost and development-time requirements have been addressed quite successfully by component-based software engineering (CBSE) combined with component reuse. However, using the CBSE approach without properly verifying the quality of the components involved can have catastrophic consequences (Jezequel et al., 1997). Appropriate mechanisms for searching, selecting and evaluating the quality of components are considered key points in the adoption of CBSE. Accordingly, this thesis proposes a methodology for evaluating the quality of embedded software components under different aspects. The idea is to resolve the lack of consistency between the ISO/IEC 9126, 14598 and 25000 standards, incorporating the software component context and extending it to the embedded systems domain. These standards provide high-level definitions of characteristics and metrics for software products, but do not provide ways to use them effectively, making them very difficult to apply without acquiring further information from other sources. The methodology consists of four modules that complement each other in the pursuit of quality: an evaluation process, a quality model, evaluation techniques grouped by quality levels, and a metrics approach. It thus helps the embedded systems developer in the component selection process, evaluating which component best fits the system requirements. It is also used by third-party evaluators when hired by suppliers in order to establish credibility for their components.
The methodology makes it possible to evaluate the quality of an embedded component before it is stored in a repository system, especially in the context of the robust framework for software reuse proposed by Almeida (Almeida, 2004).
Keywords: Embedded systems, Software component quality, Quality evaluation, Component-based development (CBD).
Abstract
One of the biggest challenges for the embedded industry is to provide products of high quality and functionality at low cost and with a short time-to-market, thus increasing the Return on Investment (RoI). The cost and development-time requirements have been addressed quite successfully by component-based software engineering (CBSE) combined with component reuse. However, using the CBSE approach without assuring the quality of the components involved can bring catastrophic results (Jezequel et al., 1997). Appropriate mechanisms for the search, selection and quality evaluation of components are considered key points in CBSE adoption. Accordingly, this thesis proposes an Embedded Software Component Quality Evaluation Methodology. Its focus is to address the lack of consistency between the standards ISO/IEC 9126, ISO/IEC 14598 and ISO/IEC 25000, incorporating the software component quality context and extending it to the embedded domain. These standards provide high-level definitions of characteristics and metrics for software products but do not provide ways to use them effectively, making them very difficult to apply without acquiring further knowledge from supplementary sources. The methodology consists of four modules that complement each other in the pursuit of quality: an evaluation process, a quality model, evaluation techniques grouped by quality levels, and an embedded metrics approach. It thus helps the embedded systems developer select components, evaluating which component best fits the system requirements. It is also used in third-party evaluation, when evaluators are hired by component suppliers to establish trust in their components. The methodology permits evaluating the quality of an embedded component before it is stored in a repository system, especially in the context of the robust framework for software reuse proposed by Almeida (Almeida, 2004).
Keywords: Embedded systems, Software component quality, Quality evaluation, Component-based development (CBD).
List of Figures

Figure 1.1: The Robust Framework for Software Reuse ..... 7
Figure 1.2: The quality evaluation layer divided into two modules ..... 8
Figure 1.3: Detailing the quality evaluation layer in the robust framework ..... 9
Figure 2.1: An embedded system encompasses the CPU as well as many other resources ..... 15
Figure 2.2: Component technology for small embedded systems ..... 17
Figure 2.3: The characteristics and their priority in the automotive domain ..... 25
Figure 3.1: SQuaRE Architecture ..... 37
Figure 3.2: ISO/IEC 25040 ..... 42
Figure 3.3: Structure of Prediction-Enabled Component Technology (Wallnau, 2003) ..... 51
Figure 3.4: Process of obtaining, evaluating and storing components ..... 56
Figure 3.5: Research on embedded software component quality and certification timeline ..... 57
Figure 4.1: Embedded Software Component Quality Evaluation Methodology ..... 61
Figure 4.2: Flow diagram of the Embedded Software Component Quality Evaluation Methodology ..... 63
Figure 4.3: The EQP described in Business Process Modeling ..... 66
Figure 4.4: Activities of establishing evaluation requirements, described in BPM ..... 68
Figure 4.5: Activities of specifying the evaluation, described in BPM ..... 72
Figure 4.6: Activities of evaluation design, described in BPM ..... 75
Figure 4.7: Activities of executing the evaluation, described in BPM ..... 78
Figure 4.8: The proposed EQM, in graphic form ..... 86
Figure 4.9: The EQL hierarchy ..... 100
Figure 4.10: Different EQLs for different quality characteristics ..... 101
Figure 4.11: Overview of the GQM Paradigm ..... 112
Figure 5.1: Baud Rate Converter evaluation architecture ..... 135
Figure 5.2: Siemens ECU used to simulate the vehicle behavior ..... 136
Figure 5.3: Development board model LPC-P2148 Olimex ..... 136
Figure 5.4: K-line interface board used to connect ECU and microcontroller ..... 137
Figure 5.5: Evaluation of quality characteristics: BRConverter ..... 143
Figure 5.6: Quality in use characteristics: BRConverter ..... 143
List of Tables

Table 2.1: Summary of relevant characteristics in Crnkovic’s research ..... 21
Table 2.2: Priority classes used to classify the importance of the different quality attributes ..... 24
Table 3.1: Software quality standards ..... 32
Table 3.2: IEC 61131 ..... 33
Table 3.3: DO-178B ..... 34
Table 3.4: Characteristics and sub-characteristics in the SQuaRE project ..... 41
Table 4.1: The detail of the EQP: modules, activities and steps ..... 65
Table 4.2: Summary of the Establish Evaluation Requirements module ..... 68
Table 4.3: Summary of the evaluation specification module ..... 72
Table 4.4: Summary of the evaluation design module ..... 75
Table 4.5: Summary of the evaluation execution module ..... 77
Table 4.6: The Embedded software component Quality Model and its parts ..... 82
Table 4.7: Summary of embedded quality characteristics research in different domains ..... 84
Table 4.8: Changes in the proposed EQM, in relation to ISO/IEC 25010 ..... 85
Table 4.9: Quality attributes for sub-characteristics at runtime and life-cycle ..... 91
Table 4.10: 3rd part of EQM: additional information ..... 98
Table 4.11: Guidelines for selecting the evaluation level ..... 102
Table 4.12: Embedded Quality Level (EQL) and the evaluation techniques ..... 104
Table 4.13: Embedded quality attributes vs. evaluation techniques ..... 106
Table 4.14: The metrics suggested for use by the quality evaluation methodology ..... 116
Table 5.1: Quality attributes selected based on EQL I ..... 139
Table 5.2: Evaluation techniques selected by the evaluation staff ..... 141
Acronyms

Terms - Descriptions
B2B - Business to Business
CBD - Component-Based Development
CBSE - Component-Based Software Engineering
C.E.S.A.R. - Recife Center for Advanced Studies and Systems
CMU/SEI - Carnegie Mellon University’s Software Engineering Institute
COTS - Commercial Off-The-Shelf
CBSD - Component-Based Software Development
COM - Component Object Model
CCM - CORBA Component Model
CMM - Capability Maturity Model
CMMI - Capability Maturity Model Integrated
EQL - Embedded software component evaluation techniques based on Quality Level
EQM - Embedded software component Quality Model
EQP - Embedded software component Quality evaluation Process
EMA - Embedded Metrics Approach
GQM - Goal Question Metric Paradigm
ISO - International Organization for Standardization
IEC - International Electrotechnical Commission
OMG - Object Management Group
PECT - Prediction-Enabled Component Technology
PACC - Predictable Assembly from Certifiable Components
RiSE - Reuse in Software Engineering group
SPICE - Software Process Improvement and Capability dEtermination
UART - Universal Asynchronous Receiver Transmitter
XML - eXtensible Markup Language
Contents

1 Introduction ..... 1
1.1 Motivation ..... 1
1.2 Problem formulation ..... 4
1.3 General Objective ..... 5
1.4 Specific Objectives ..... 5
1.5 Proposed solution ..... 6
1.6 Out of Scope ..... 10
1.7 Statement of Contributions ..... 11
1.8 Organization of the Thesis ..... 12
2 Embedded Systems Design ..... 14
2.1 Basic concepts for Component-Based embedded systems ..... 15
2.2 Specific requirements for embedded systems ..... 19
2.3 The needs and priorities in research ..... 27
2.4 Summary ..... 29
3 Related Work and Component Quality, Evaluation and Certification: A Survey ..... 31
3.1 Related Work ..... 31
3.1.4.1 ISO/IEC 2501n (Quality Model Division) ..... 40
3.1.4.2 ISO/IEC 2504n (Quality Evaluation Division) ..... 42
3.1.4.3 ISO/IEC 2502n (Quality Measurement Division) ..... 42
3.2 A Survey: Embedded Software Component Quality, Evaluation and Certification ..... 43
3.3 Failures in Software Component Certification ..... 53
3.4 Software Component Evaluation and Certification ..... 54
3.5 Summary of the Study ..... 57
3.6 Summary ..... 58
4 Embedded Software Component Quality Evaluation Methodology ..... 59
4.1 Embedded software component Quality evaluation Process (EQP) ..... 64
4.2 Embedded software component Quality Model (EQM) ..... 81
4.2.1.1 Characteristics and Sub-Characteristics ..... 83
4.2.1.2 Quality Attributes of EQM ..... 91
4.3 Embedded software component evaluation techniques based on Quality Level (EQL) ..... 100
4.4 Embedded Metrics Approach (EMA) ..... 109
4.5 Summary ..... 125
5 The Experimental Study ..... 126
5.1 Introduction ..... 126
5.2 The Experimental Study ..... 127
5.3 The Definition ..... 128
5.4 The Planning ..... 128
5.5 The project used in the experimental study ..... 134
5.6 The Architecture, Environment and Scenarios ..... 135
5.7 The Instrumentation ..... 138
5.8 The Execution ..... 138
5.9 The Analysis and Interpretation ..... 139
5.10 Lessons Learned ..... 146
5.11 Summary ..... 147
6 Conclusions and future works ..... 148
6.1 Contributions ..... 148
6.2 Future Work ..... 150
6.3 Academic Contributions ..... 151
7 References ..... 152
Appendix A ..... 165
Appendix B ..... 178
Appendix C ..... 183
Chapter 1 - Introduction 1
1 Introduction
Embedded systems are at the heart of many electronic systems in use
today. Added value to products is to a large extent provided by software.
Furthermore, production cost reduction is imperative and is often achieved by
introducing software that permits the use of less complex hardware. Domains in
which the use of software is now essential include, among others, the
automotive, medical systems, process control, and manufacturing industries.
Embedded systems are often systems consisting of software and hardware. The
software part incorporates many software components that must cooperate
without fail. It is platform-dependent, consequently difficult to port, upgrade,
and customize, and offers limited opportunities for reuse, even within a single
application domain.
The demands that companies producing these electronic products must satisfy
include low production costs, short time to market and high quality. The cost
and time to market issue is addressed by means of the rapidly emerging
Component-based software engineering (CBSE) approach. In CBSE, embedded
systems are built as an assembly of components already developed and prepared
for integration.
1.1 Motivation
A common characteristic of all systems is the increasing importance of
software. For example, software development costs for industrial robots today
constitute about 75% of total costs, while in the car industry it is currently about
30%. Some ten to fifteen years ago this number was about 25% for robots and
negligible for cars (Crnkovic, 2005).
The following sections present the main motivations that led to the preparation
of this work.
1.1.1 CBSE approach and reuse technique
One of the most compelling reasons for adopting component-based
approaches in embedded software design is the promise of reuse. The idea is to
build software from existing components primarily by assembling and replacing
interoperable parts. The implications for reduced development time and
improved product quality make this approach very attractive (Krueger, 1992).
In a real environment, a developer that retrieves a faulty component from
the repository would certainly lose his trust in the system, becoming
discouraged from making new queries. Thus, it is extremely important to assert the
quality of the assets that are stored into the repository before making them
available for reuse. Despite this importance, the software engineering
community had not explored these issues until recently. In this way, a new
research area arose: components evaluation and quality assurance (Wohlin et
al., 1994), (Mingins et al., 1998), (Morris et al., 2001), (Schmidt, 2003),
(Wallnau, 2003). However, several questions still remain unanswered, such as:
(i) How should quality evaluation be carried out?
(ii) What are the requirements for an evaluation process? and,
(iii) Who should perform it? (Goulão et al., 2002a).
This is the reason why there is still no well-defined standard to perform
component quality evaluation (Voas et al., 2000), (Morris et al., 2001).
According to Crnkovic (Crnkovic, 2005), a number of improvements are
needed in the CBSE approach as applied in the embedded industry. One of the
main improvements is directly related to quality:
“Component quality evaluation: In order to transfer components
across organizations, techniques and procedures should be
developed for ensuring the trustworthiness of components.”
1.1.2 Software Components Market Inhibitors
The Carnegie Mellon University’s Software Engineering Institute
(CMU/SEI) (Bass et al., 2000) studied industry trends in the use of software
components from technical and business perspectives. A distinct set of
inhibitors to adopting software component technology emerged from the
conducted surveys and interviews of early adopters of software component
technology.
From this data and from the interviews, the study concludes that the
market perceives the following key inhibitors for component adoption,
presented here in decreasing order of importance:
• Lack of available components;
• Lack of stable standards for component technology;
• Lack of quality components; and
• Lack of an engineering method to consistently produce quality systems
from components.
The software engineering community is already attempting to close the
gaps related to the first two inhibitors. However, in relation to the third
inhibitor, the community effort is still incipient. Further research is required
in order to assure the production of certified components, especially when
combined with the lack of component-based software engineering techniques
that deliver predictable properties (the last inhibitor).
Further, according to Voas (Voas, 1998), to foster an emerging software
component marketplace, it must be clear for buyers whether a component’s
impact is positive or negative. Ideally, buyers should have this information
before buying a component. Component buyers could then choose an
appropriate component according to its quality level. With this information,
system builders could make better design decisions and be less fearful of
liability concerns, and component vendors could expect a growing marketplace
for their products.
1.1.3 The Future of Software Components
Important research on software components, such as that of Heineman
(Heineman et al., 2001), Councill (Councill, 2001), Crnkovic (Crnkovic,
2005), Wallnau (Wallnau, 2003), (Wallnau, 2004), Schneider & Han
(Schneider & Han, 2004) and Andreou & Tziakouris (Andreou & Tziakouris,
2007), indicates that the future of software components lies in quality evaluation and
certification. These authors state that knowing the component quality is a
necessary precondition for CBSE to be successfully adopted and to achieve the
associated economic and social benefits that CBSE could yield. With the success
of CBSE, software developers will have more time to develop, instead of
spending their time addressing problems associated with understanding and
fixing someone else’s code. Qualified components used during development will
have predetermined and well established criteria, thus reducing the risk of
system failure and increasing the likelihood that the system will comply with
design standards.
1.2 Problem formulation
The previous section stated that designers increasingly build
systems using reusable components due to the complexity of their designs.
Therefore, there is an increasing need to efficiently and effectively qualify such
systems. Quality assurance which can effectively cope with this situation and
take advantage of the component-based structure needs to be developed.
The evaluation of the quality of components (i.e. the assessment of their
quality attributes) needs to be done by independent parties, at least until
software vendors acquire the level of maturity that hardware vendors currently
have. The software industry still far from counting with the hardware data
sheets and catalogues available for hardware components that describe all their
characteristics. However, it is necessary to have them for software components
too if it wants to talk about a “real” Component-based Software Engineering.
Many organizations struggle in their attempts to select and evaluate an
appropriate component for their systems. For this reason, a well-defined and
consistent software component quality assurance approach is essential both for
transferring components across organizations and for component market
adoption, since neither can happen without an efficient mechanism to select and
evaluate component quality.
However, the main drawback of the existing international standards is
that they provide very generic definitions, quality models and guidelines, and
address only software product quality issues, which are very difficult to apply
to specific domains such as embedded components and CBSE.
A survey of the state of the art (Carvalho et al., 2009b) noted that
there is a lack of processes, methods, techniques and tools available for
evaluating component quality, specifically for the embedded domain. This necessity
is pointed out by different researchers (Voas, 1998), (Morris et al., 2001),
(Wallnau, 2003), (Alvaro et al., 2005), (Bass et al., 2003), (Softex, 2007) and
(Lucrédio et al., 2007). Most researchers agree that component quality is an
essential aspect of the CBSE adoption and software reuse success.
This thesis addresses the following problem:
“The lack of mechanisms, process, methods and guidelines to
support the quality evaluation for software component in embedded
systems domain.”
1.3 General Objective
The general objective is to provide an environment that supports the quality
evaluation of software components to be used in developing embedded systems.
It gives designers better knowledge of the quality of candidate components
through a component quality report, which provides detailed information about
component quality from different viewpoints. In this environment, the designer
can evaluate different components and verify which one best fits the
requirements of the system under development.
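In practice, this selection step amounts to comparing candidates against weighted system requirements. The sketch below is a minimal illustration of that idea only, not part of the methodology itself; the component names, quality characteristics, scores and weights are all invented for the example.

```python
from dataclasses import dataclass

@dataclass
class QualityReport:
    """Hypothetical summary of a component's measured quality scores (0-10)."""
    name: str
    scores: dict  # characteristic -> measured score

def rank_components(reports, weights):
    """Rank candidates by a weighted sum over the characteristics
    the target system cares about (higher is better)."""
    def weighted(report):
        return sum(weights.get(c, 0.0) * s for c, s in report.scores.items())
    return sorted(reports, key=weighted, reverse=True)

# Invented example: a system that values reliability over raw performance.
weights = {"reliability": 0.5, "performance": 0.3, "resource_use": 0.2}
candidates = [
    QualityReport("uart_driver_a",
                  {"reliability": 9, "performance": 6, "resource_use": 7}),
    QualityReport("uart_driver_b",
                  {"reliability": 6, "performance": 9, "resource_use": 9}),
]
print(rank_components(candidates, weights)[0].name)  # uart_driver_a
```

A real evaluation would derive the scores from the quality report produced by the methodology rather than from hand-assigned numbers, but the comparison logic is the same.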
1.4 Specific Objectives
The proposed methodology’s main objectives are:
i. To serve as a tool for component acquirers or developers, helping
them select the component that best fits the requirements of the
system to be built;
ii. To be used in third-party evaluation, when evaluators are hired by
component suppliers in order to establish trust in their components
in industry;
iii. To enable the quality evaluation of embedded components before
they are stored in a repository system for reuse purposes, especially
in the context of the robust framework for software reuse proposed
by Almeida (Almeida, 2004).
1.5 Proposed solution
This thesis defines an Embedded Software Component Quality
Evaluation Methodology, which includes a quality evaluation process, an
embedded quality model, evaluation techniques based on quality level, and an
embedded metrics approach. The methodology is based on
a set of modules, activities, steps and guidelines.
Moreover, the process is based on the state of the art in the area, and its
foundations and elements are discussed in detail. The main goal of this thesis
was thus to define an Embedded Software Component Quality Evaluation
Methodology composed of four inter-related modules in order to assure the
component quality degree. This methodology is based on the standards
ISO/IEC 25010, ISO/IEC 9126 and ISO/IEC 14598, adapted to the
component context and the embedded domain.
Different from other approaches in the literature (Goulão et al., 2002b),
(Beus-Dukic et al., 2003), (Cho et al., 2001), (Gui and Scott, 2007) that provide
only isolated aspects to assure the component quality, this thesis tries to
investigate and make available a complete methodology with all necessary
mechanisms to execute the component evaluation activities.
Through these evaluations, a continuous evolution of the whole
methodology is expected, so that the software industry can start to trust it
and evaluate its own embedded software components.
Moreover, this thesis is part of a larger project, a robust framework for
software reuse (Almeida et al., 2004), whose goals are to establish a standard
for component development; to develop a repository system; and to develop
general-purpose and embedded software component quality evaluation
processes. The aim is to produce a well-defined model for developing
components, evaluating their quality, storing them and, subsequently, making
it possible for the embedded systems industry or software factories to reuse
them. This project has been developed in collaboration between academia
(UFPE) and industry (C.E.S.A.R., http://www.cesar.org.br). The RiSE group
(http://www.rise.com.br) is the link between these parties.
Figure 1.1: The Robust Framework for Software Reuse.
However, to better support the embedded domain and its specific
quality characteristics, the component quality evaluation layer was divided into
two parts, as shown in Figure 1.2. The first part focuses on general-purpose
software component evaluation, i.e. software components used on IBM-PC
compatible platforms. The other is a component quality evaluation aligned with
the specific requirements and constraints of quality characteristics for the
embedded domain.
The framework (Figure 1.1) that is being developed has two layers. The
first layer (on the left) is composed of best practices related to software reuse.
Non-technical factors, such as education, training, incentives, and
organizational management are considered. This layer constitutes a
fundamental step prior to the introduction of the framework in the
organizations. The second layer (on the right) is composed of important
technical aspects related to software reuse, such as processes, environments,
tools, and an evaluation process, which is the focus of this thesis.
In order to achieve the main objective, the process is based on the
following foundations:
Figure 1.2: The quality evaluation layer divided into two modules.
First, correct usage of the methodology should follow a well-defined and
controllable evaluation process. Next, an embedded quality reference
model must be defined. There must then be a series of techniques that allow
one to evaluate whether a component conforms to the reference model at
different levels of quality. Finally, a set of metrics is needed in order to
track component properties and the enactment of the evaluation process.
The four main modules are:
• Embedded software component Quality evaluation Process
(EQP);
• Embedded software component Quality Model (EQM);
• Embedded software component evaluation techniques based
on Quality Level (EQL); and,
• Embedded Metrics Approach (EMA).
These are the modules of an Embedded Software Component Quality
Evaluation Methodology (Figure 1.3) that is being investigated as a part of the
RiSE project.
Figure 1.3: Detailing the quality evaluation layer in the robust framework.
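The EMA module draws on the Goal Question Metric (GQM) paradigm (see the acronym list). As a rough illustration only, the sketch below shows how a GQM tree might organize the measurements an embedded evaluation collects; the goal, questions and metric names are hypothetical, not taken from the methodology.

```python
from dataclasses import dataclass, field

@dataclass
class Metric:
    name: str
    unit: str

@dataclass
class Question:
    text: str
    metrics: list = field(default_factory=list)

@dataclass
class Goal:
    purpose: str
    questions: list = field(default_factory=list)

# Invented example goal for evaluating an embedded component.
goal = Goal(
    purpose="Assess the memory and timing behaviour of a candidate component",
    questions=[
        Question("Does the component fit the target's RAM budget?",
                 [Metric("static_ram", "bytes"), Metric("peak_heap", "bytes")]),
        Question("Is its timing predictable?",
                 [Metric("wcet", "microseconds")]),
    ],
)

def metrics_to_collect(g):
    # Flatten the goal -> question -> metric tree into the list of
    # measurements the evaluation must actually perform.
    return [m.name for q in g.questions for m in q.metrics]

print(metrics_to_collect(goal))  # ['static_ram', 'peak_heap', 'wcet']
```

The point of the structure is traceability: every collected number answers a stated question, which in turn serves a stated evaluation goal.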
The methodology ensures that components produced in a Software
Reuse Environment have their quality assured before being stored in a
Repository System. In this way, software engineers can have a greater degree of
trust in the components being reused.
1.6 Out of Scope
As the proposed reuse process is part of a broader context, a set of related
aspects will be left out of its scope. Nevertheless, as these aspects were
envisioned since the initial definitions of the process, they can be added in the
future with some adjustments. Thus, the following issues are not directly
addressed by this work:
• Cost Model: Cost estimation is a key requirement for CBSD before the
actual development activities can proceed. Cost is a function of the
enterprise itself, its particular development process, the selected
solution, and the management and availability of resources during the
development project (Cechich et al., 2003), (Mahmooda et al., 2005). A
cost model is useful to help the software engineer during the analysis of
the software components available for purchase (or for selection or
evaluation). However, it only makes sense once processes, methods,
techniques and tools to execute the selection and/or evaluation task have
been defined, so that the cost/benefit of purchasing or evaluating a
component can be identified.
• Formal Proof: Meyer (Meyer, 2003) and Karlsson (Karlsson, 2006)
propose a formal approach to acquiring trust in software
components. Their idea is to build or to use software components with fully
proven properties or characteristics. The intention is to develop software
components that can be trusted by the software market.
This thesis does not consider a cost model because the whole methodology
that is the basis for embedded software component evaluation is
considered first; after that, a cost model will be useful to help
organizations decide whether or not it is viable to evaluate certain kinds of
components. Formal quality assurance is not considered mainly
because the software component market is not inclined to formally specify its
software components. This kind of approach is highly expensive in terms of
development time and the level of expertise needed, and component
developers still do not know whether it is cost-effective to spend effort in this
direction without specific requirements such as strict time constraints or high
reliability. However, the Embedded software component evaluation techniques
based on Quality Level (EQL) provide formal proof evaluation techniques that
can be useful in some scenarios, depending on the customer's necessities and
the cost/benefit of doing so.
1.7 Statement of Contributions
As a result of the work presented in this thesis, the following
contributions can be enumerated:
• An extensive study of the key developments in the field of quality,
evaluation and certification of embedded software components, in an
attempt to analyze this research area and identify trends to follow;
• A survey of the state-of-the-art of quality, evaluation and certification
of general-purpose and embedded software components, in order to
understand and identify the weak and strong points of existing
processes (Carvalho et al., 2009a), (Carvalho et al., 2009b);
• Definition of the Embedded software component Quality Model
(EQM) (Carvalho et al., 2009c), (Carvalho et al., 2009d), based on
the new Software Product Quality Requirements and Evaluation
(SQuaRE) standard (ISO/IEC 25000, 2005), the Component
Quality Model (CQM) (Alvaro et al., 2006) and the Bertoa Quality Model
(Bertoa et al., 2002);
• Development of the Embedded software component evaluation
Techniques based on Quality Level (EQL), in order to provide
evaluation techniques of different levels of thoroughness and a set of
guidelines for selecting those evaluation levels (Carvalho et al., 2009e),
(Carvalho et al., 2009f);
• Development of the Embedded software component Quality
evaluation Process (EQP) in order to provide a high quality and
consistent evaluation process (Carvalho et al., 2009g);
• Development of the Embedded Metrics Approach, based on the Goal
Question Metric paradigm, which is composed of a set of valuable measures to be
considered as a starting point during component evaluations
(Carvalho et al., 2009g); and
• Development of an Embedded Software Component Quality
Evaluation Methodology, aiming to provide a complementary
mechanism to the ISO/IEC standards for evaluating embedded software
component quality (Carvalho et al., 2009g).
1.8 Organization of the Thesis
The remainder of this thesis is organized as follows.
Chapter 2 presents a brief overview of embedded system design,
component-based development, and the requirements for embedded systems in
different domains. The main concepts of these topics are considered.
Chapter 3 presents, in the first part, the survey of the state-of-the-art of
the embedded software component quality, evaluation and certification area
that was performed. The failure cases that can be found in the literature are also
described in this chapter. Subsequently, in the final part, this chapter describes
the aspects related to the software quality, evaluation and certification concepts
and standardization. The intention is to show that software reuse depends on
quality.
Chapter 4 presents the Embedded Software Component Quality
Evaluation Methodology proposed and its related modules. Section 4.1 presents
the Embedded software component Quality evaluation Process (EQP), describing all
activities and steps that should be followed to execute the component evaluation
activity in a more controllable way. Section 4.2 describes the proposed
Embedded software component Quality Model (EQM), showing its
characteristics, its sub-characteristics and the quality attributes that are related to
the model. Section 4.3 presents the Embedded software component evaluation
techniques based on Quality Level (EQL), which is composed of a set of evaluation
techniques grouped by quality levels in order to provide flexibility to the
component evaluation. Further, a set of guidelines is delineated to direct the
evaluation staff during the selection of levels. Section 4.4 presents the
Embedded Metrics Approach and the paradigm adopted to define the metrics.
A set of metrics grouped by quality level is presented.
The Experimental Study is presented in Chapter 5 to demonstrate the
feasibility and practicality of the methodology through an example of a real-world
application. All activities of the evaluation process were carried out and, at the
end, the results were interpreted and summarized, and an experimental
conclusion was drawn relating the strengths and weaknesses.
The conclusions of the developed work and the analysis of the proposed
methodology are shown in Chapter 6, as well as contributions to academia and
industry. The enhancements and features that were not addressed or developed
in this work are listed as further work.
The papers reviewed and used as inputs for the development of this
thesis are listed alphabetically in Chapter 7 as references.
Three appendices are included at the end of this thesis. Appendix A shows
the step-by-step instructions for performing a quality evaluation using the
proposed methodology. Appendix B shows the evaluators' feedback on the
use of the methodology for the quality evaluation of embedded software
components, showing the strengths and weaknesses found in its
application. Appendix C details the process and results
achieved in the BRConverter evaluation, which was used in the experimental
study reported in Chapter 5.
Chapter 2 - Embedded Systems Design 14
2 Embedded Systems Design
Embedded systems comprise a scale from ultra small devices with simple
functionality, through small systems with sophisticated functions, to large,
possibly distributed systems, where the management of the complexity is the
main challenge. An embedded system may be represented by a dedicated
processor surrounded by dedicated hardware systems, performing very specific
tasks (Junior et al., 2004a). Further, one can distinguish between systems
produced in large quantities, in which low production costs are very
important, and low-volume products, in which system dependability is the
most important feature. All these different requirements have an impact on
the feasibility, use, and approach of component-based development. A
common characteristic of all systems is the increasing importance of software. For
example, software development costs for industrial robots currently constitute
about 75% of total costs, while in the car industry they are currently about 30%. Some
ten or fifteen years ago these numbers were about 25% for robots and negligible for
cars (Crnkovic, 2005). A second common characteristic is increasing
interoperability. While previously embedded systems were mainly used for
controlling different processes, today they are integrated with information
systems and infotainment technologies. In this chapter, a short overview of
embedded systems design is given.
Figure 2.1: An embedded system encompasses the CPU as well as many other
resources.
2.1 Basic concepts for Component-Based embedded systems
In classic engineering disciplines a component is a self-contained part or
subsystem that can be used as a building block in the design of a system. In
software engineering, there are many different suggestions for precise
definitions of components in component-based software development. The most
widely accepted understanding of a component in the software industry is based
on Szyperski's definition (Szyperski et al., 1998). From this definition it can be
assumed that a component is an executable unit, and that deployment and
composition can be performed at run-time.
In the domains of embedded systems this definition is largely followed, in
particular the separation between component implementation and component
interface. A component can be delivered in the form of source code written in a
high-level language, which allows build-time (or design-time) composition. This
more liberal view is partly motivated by the embedded systems context, as will
be discussed below.
Many important properties of components in embedded systems, such as
timing and performance, depend on the characteristics of the underlying
hardware platform. Kopetz and Suri (Kopetz & Suri, 2003) propose to
distinguish between software components and system components. Extra-functional
properties, such as performance, cannot be specified for a software
component in isolation. Such properties must either be specified with respect to
a given hardware platform, or be parameterized on (characteristics of) the
underlying platform. A system component, on the other hand, is defined as a
self-contained hardware and software subsystem, and can satisfy both
functional and extra-functional properties.
2.1.1 Component-based approach for small embedded systems
The specific characteristics of embedded systems lead to specific
requirements of component technologies. In particular the approaches in
development process and component specifications using interfaces are
different from those implemented in the component technologies widely used in
other domains.
The component interface summarizes the properties of the component
that are externally visible to the other parts of the system. Since, for embedded
systems, non-functional properties are as important as functional ones, there is a
tendency to include the specification of extra-functional properties (for
example, timing properties) in the component interface. This allows more system
properties to be determined when the system is designed, i.e., such an interface
enables verification of system requirements and prediction of system properties
from the properties of components.
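The idea of an interface that publishes extra-functional properties alongside the functional ones can be sketched in C as follows; the fields and figures are hypothetical and not taken from any particular component model:

```c
/* Sketch: a component interface that, besides its functional entry
 * point, publishes extra-functional (timing and memory) properties.
 * All names and numbers are invented for this example. */
typedef struct {
    const char *name;
    int (*process)(int input);   /* functional interface           */
    unsigned wcet_us;            /* declared worst-case exec. time  */
    unsigned period_us;          /* declared activation period      */
    unsigned stack_bytes;        /* declared memory demand          */
} ComponentInterface;

/* A design-time tool can verify a timing requirement directly from
 * the declared properties, without ever running the component. */
static int meets_deadline(const ComponentInterface *c, unsigned deadline_us) {
    return c->wcet_us <= deadline_us;
}

/* A hypothetical component instance. */
static int scale(int x) { return 2 * x; }
static const ComponentInterface filter = { "filter", scale, 120, 1000, 256 };
```

In this style, the same record serves both composition (calling `process`) and static verification (checking `wcet_us` against a system deadline).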
In general-purpose component technologies, the interfaces are usually
implemented as object interfaces supporting polymorphism by late binding.
While late binding allows the connection of components that are completely
unaware of each other besides the connecting interface, this flexibility comes
with a performance penalty and an increased risk of system failure. Therefore,
dynamic component deployment is not feasible for small embedded systems, for
reasons of performance, reliability and limited resources.
Taking into account all the constraints for real-time and embedded
systems, we can conclude that there are several reasons to perform component
deployment and composition at design time rather than run-time (Crnkovic &
Larsson, 2002). This allows composition tools to generate monolithic
firmware for the device from the component-based design and in this way
achieve better performance and better predictability of the system behavior.
This also enables global optimizations, e.g., in a static component composition
known at design time, connections between components could be translated
into direct function calls instead of using dynamic event notifications. Finally,
verification and prediction of system requirements can be done statically from
the given component properties.
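The difference between run-time (late) binding and design-time composition described above can be sketched as follows; the producer/consumer names are invented for the example:

```c
/* Late binding: the producer notifies its consumer through a function
 * pointer that is set at run-time, so producer and consumer need not
 * know each other before deployment. */
typedef void (*Listener)(int value);

static int last_seen;                       /* consumer-side state     */
static void consumer(int value) { last_seen = value; }

static Listener listener;                   /* bound at run-time       */
static void produce_dynamic(int v) { if (listener) listener(v); }

/* Design-time composition: the connection is fixed before compilation,
 * so the "glue" is a direct call that the compiler can inline and a
 * verification tool can analyze statically. */
static void produce_static(int v) { consumer(v); }
```

The static variant trades flexibility for the better performance and predictability discussed in the text.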
This implies that the component-based characteristic is mostly visible at
design time. To achieve an efficient development process, tools should exist
that provide support for component composition, component adaptation,
and static verification and prediction of system requirements and properties
from the given component properties.
There may also be a need for a run-time environment which supports the
component methodology with a set of services. These services enable
component intercommunication (those aspects which are not performed at
design time) and, where relevant, control of the behavior of the components.
Figure 2.2 shows the different environments in a component life cycle. The figure is
adopted from (Crnkovic & Larsson, 2002).
Figure 2.2: Component technology for small embedded Systems
2.1.2 Component-based approach for large embedded systems
For large embedded systems the resource constraints are not the primary
concerns. Complexity and interoperability play a much more important role.
Also due to complexity, the development of such systems is very expensive and
cutting development costs is highly prioritized. For this reason, general-purpose
component technologies are of more interest than in the case of small
systems.
The technologies mostly used in large systems are Microsoft COM and,
more recently, .NET, and to a smaller extent different implementations of CORBA,
although none of these technologies provides support for real-time. The systems
using these technologies belong to the category of soft real-time systems. Often
a component technology is used as a basis for additional abstraction level
support, which is specified either as standards or proprietary solutions. The
main reason for wide use of component-based technology is the possibility of
reusing solutions in different ranges of products, efficient development tools,
standardized specifications and interoperation, and integration between
different products.
One successful example of the adoption of a component-based technology
is the OPC Foundation initiative (OLE for Process Control Foundation,
www.opcfoundation.org), an organization of more than 300
member companies worldwide that is responsible for a specification defining a
set of standard interfaces based upon Microsoft's OLE/COM and, more recently, .NET
technology. OPC consists of a standard set of interfaces, properties, and
methods for use in process-control and manufacturing-automation applications.
OPC provides a common interface for communicating with diverse process-
control devices, regardless of the controlling software or devices in the process.
The application of the OPC standard interface makes interoperability possible
between automation/control applications, field systems/devices and
business/office applications.
Another example of a component-based approach is development and
use of the standard IEC 61131 (ISO/IEC 61131-3, 1995). IEC 61131 defines a
family of languages that includes instruction lists (an assembly-like language),
structured text (a high-level language similar to Pascal), ladder diagrams, and
function block diagrams (FBD). Function blocks can be viewed as components,
and interfaces between blocks are realized by connecting in-ports and out-ports.
Function block execution may be periodic or event-driven. IEC 61131 has
been successfully used in the development of industrial process automation systems
for many years.
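A function block in the IEC 61131 style can be sketched in C as a struct whose fields play the role of in-ports and out-ports, with a step function executed periodically; the integrator block below is a hypothetical example, not taken from the standard:

```c
/* Sketch of an IEC 61131-3-style function block in C: in-ports and
 * out-ports are struct fields, and composition connects out-ports to
 * in-ports before each execution cycle.  This hypothetical block
 * accumulates (integrates) its input over successive cycles. */
typedef struct {
    int in;      /* in-port                              */
    int out;     /* out-port                             */
    int state;   /* internal state kept between cycles   */
} IntegratorFB;

/* One periodic (or event-driven) execution of the block. */
static void integrator_step(IntegratorFB *fb) {
    fb->state += fb->in;
    fb->out = fb->state;
}
```

Connecting one block's `out` field to another block's `in` field before each cycle is the textual analogue of wiring ports in an FBD diagram.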
Large embedded systems that must fulfill real-time requirements usually
do not use general-purpose component-based technologies. However, in some
cases, such as for ABB controllers, a reduced version of COM has been used on
top of a real-time operating system (Lüders et al., 2002). This reduced version
includes facilities for component specification using the interface description
language (IDL) and some basic run-time services, such as component
deployment; these services have been implemented internally.
Different communication protocols and I/O drivers have been identified as
components.
2.2 Specific requirements for embedded systems
Embedded systems vary from very small systems to very large systems.
For small systems there are strong constraints related to different resources, such
as power or memory consumption. In most cases, embedded
systems are real-time systems. For these, as well as for large embedded systems,
there are demands on reliability, functionality, efficiency and other characteristics
that depend on the domain or application. Finally, in many domains, the product
life cycle is very long: it can stretch to several decades.
All these characteristics have strong implications on requirements. Most
of the requirements of embedded systems are related to non-functional
characteristics, generally designated as extra-functional properties or quality
attributes. Unfortunately, the priority quality attributes vary according to the
application domain. These properties can be classified into run-time and life cycle
extra-functional properties. The research developed by Crnkovic
(Crnkovic, 2005) lists four main characteristics that must be considered for
embedded systems, as shown in Table 2.1 and listed below:
(i) Real-time properties: a violation of time requirements, even with a
functionally correct response, violates the system functionality. The real-time
properties are:
a - Response time or latency,
b - Execution time,
c - Worst-case execution time,
d - Deadline.
(ii) Dependability is defined as the ability of a system to deliver a
service that can justifiably be trusted, and the ability of a system to avoid
failures that are more severe and frequent than is acceptable to the users
(Crnkovic, 2005). The main means to attain dependability are related to the
avoidance of faults: fault prevention, fault tolerance, fault removal and
fault forecasting. Dependability is characterized by several attributes (Avižienis et
al., 2001):
a - Reliability,
b - Availability,
c - Integrity,
d - Safety,
e - Confidentiality,
f - Maintainability.
(iii) Resource consumption. Many embedded systems have strong
requirements for low and controlled consumption of different resources.
The reasons may be the size of the systems and/or the demands on lower
production costs. Examples of such restrictions and constraints are:
a - Power consumption,
b - Memory consumption,
c - Execution (CPU) time,
d - Computation (CPU) power.
(iv) Life cycle properties. In many domains, embedded systems
have a very long lifetime, running round the clock year after year. During
the lifetime of a system, several generations of hardware and software
technologies may be used. Long-life systems must be able to cope with
these changes, introduced either into the surrounding environment or
into the systems themselves.
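The four groups of characteristics above can be pictured, informally, as a record of declared extra-functional properties that a design-time tool checks against the limits required by a target system; all field names and limits below are illustrative, not taken from Crnkovic's work:

```c
/* Sketch combining the four characteristic groups into one record of
 * declared extra-functional properties; fields are illustrative. */
typedef struct {
    unsigned wcet_ms;              /* (i)   real-time                 */
    unsigned deadline_ms;          /* (i)   real-time                 */
    double   availability;         /* (ii)  dependability, 0.0..1.0   */
    unsigned memory_kb;            /* (iii) resource consumption      */
    unsigned power_mw;             /* (iii) resource consumption      */
    unsigned service_life_years;   /* (iv)  life cycle                */
} EFProfile;

/* A component fits a slot only if every declared property stays within
 * the limits required by the target system. */
static int fits(const EFProfile *c, const EFProfile *limit) {
    return c->wcet_ms            <= limit->deadline_ms
        && c->availability       >= limit->availability
        && c->memory_kb          <= limit->memory_kb
        && c->power_mw           <= limit->power_mw
        && c->service_life_years >= limit->service_life_years;
}
```

Note that the comparison direction differs per property: resource figures must stay below a limit, while availability and service life must stay above one.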
The research thus concludes that many of the most important
requirements of embedded systems are related to extra-functional
properties. This implies that the development and maintenance of such systems are
very costly. In particular, activities related to verification and guaranteed
behavior (formal verification, modeling, tests, etc.) and maintenance (adaptive
maintenance, debugging, regression testing, etc.) require a lot of effort. For
these reasons the technologies and processes that lead to lower costs for these
activities are very attractive and desirable.
Table 2.1: Summary of relevant characteristics in Crnkovic's research.

Characteristics         Sub-characteristics
Real-time properties    Response time or latency
                        Execution time
                        Worst-case execution time
                        Deadline
Dependability           Reliability
                        Availability
                        Integrity
                        Confidentiality
                        Safety
Resource consumption    Power consumption
                        Computation (CPU) power
                        Memory consumption
                        Execution (CPU) time
Life cycle properties   Maintainability
2.2.1 Industrial Automation
Typical application domains of industrial automation are the control of
industrial processes, power supply, and industrial robots. The industrial automation
domain comprises a large area of control, monitoring and optimization systems.
These typically include large pieces of software that have been developed over
many years (often several decades). Most control systems are manufactured in
rather large volumes and must, to a large extent, be configurable to suit a variety
of customer contexts. They can be classified according to different levels of
control (Crnkovic & Larsson, 2002):
(i) Process level (for example, a valve in a water pipeline, a boiler, etc.),
(ii) Field level that concerns sensors, actuators, drivers, etc.
(iii) Group control level that concerns controller devices and
applications which control a group of related process level devices in a
closed-loop fashion,
(iv) Process control level, i.e., operator stations and processing
systems with their applications for plant-wide remote supervision and
control, overseeing the entire process to be controlled,
(v) Production or manufacturing management level that includes
systems and applications for production planning.
Notice that, even if the higher levels are not embedded, they are of
utmost importance, as they need to be interoperable with the lower levels,
which greatly influences the possible choices of the component model and,
ultimately, the design choices. The integration requirements have in many cases led to
a decision to use component technologies which are not appropriate for
embedded systems but provide better integration possibilities. Depending on
the level, the nature of the requirements and the implementation will be quite
different. In general, the lower the level, the stronger the real-time
requirements (including timing predictability) and the resource
limitations. Also, the component-based approach will include different
concepts at different levels. The most important quality attributes in industrial
automation, according to the researchers, are:
(i) lowest levels:
a - Availability,
b - Timeliness,
c - Reliability
(ii) higher levels:
a - Performance,
b - Usability, and
c - Integrability.
2.2.2 Automotive
Åkerholm (Åkerholm et al., 2005) carried out a study in the main vehicle
industries to provide a domain-specific classification of the importance of quality
attributes for software in vehicles, and to discuss how the attributes could be
facilitated by a component technology. The research method is divided
into three ordered steps:
1 - During the first step a list of relevant quality attributes was gathered;
2 - In the next step, technical representatives from a number of vehicular
companies placed priorities on each of the attributes in the list, reflecting
their respective companies' views;
3 - Finally, a synthesis step was performed, resulting in a description of
the desired quality attribute support in a component technology for
vehicular systems.
The list of quality attributes was collected from different literature,
trying to cover software qualities that interest vehicular manufacturers. In
order to reduce a rather long list, attributes with clear similarities in their
definitions were grouped into more generic types of properties; e.g.,
portability and scalability are considered covered by maintainability. Although
such grouping could fade the specific characteristics of a particular attribute, it
puts the focus on the main concerns. In the ISO/IEC 9126 standard (ISO/IEC 9126,
2001), six quality attributes (functionality, reliability, usability, efficiency,
maintainability, and portability) are defined for the evaluation of software quality.
However, the standard was not fully adopted in this research; it is
considered too brief and does not cover attributes important for embedded
systems (e.g., safety and predictability). Furthermore, concepts that are sometimes
mixed with quality attributes (for example, fault tolerance) are not classified
as quality attributes, but rather as methods to achieve qualities (such as
safety). Finally, functionality is of course one of the most important quality
attributes of a product, indicating how well it satisfies stated or implied needs.
However, the study focuses on quality attributes beyond functionality, often called
extra-functional or non-functional properties. The resulting list of quality
attributes, related to component technologies tailored for vehicular systems, is
presented below:
1. Safety
2. Reliability
3. Predictability
4. Usability
5. Extendibility
6. Maintainability
7. Efficiency
8. Testability
9. Security
10. Flexibility
Representatives from the technical staff of several companies were
asked to prioritize the list of quality attributes, reflecting each of the
respective companies' views. The attributes were grouped by the company
representatives into four priority classes, as shown in Table 2.2. The nature of the
quality attributes implies that no quality attribute can be neglected. It is essential
to notice that placing an attribute in the lowest priority class (4) does not mean
that the company can ignore that quality in its software, but rather that the
company does not spend extra effort on reaching it. The following companies
were involved in the classification process:
• Volvo Construction Equipment
• Volvo Cars
• Bombardier Transportation
• Scania
• ABB Robotics
Table 2.2: Priority classes used to classify the importance of the different quality
attributes.

Priority  Description
1         Very important, must be considered
2         Important, something that one should try to consider
3         Less important, considered if it can be achieved with a small effort
4         Unimportant, do not spend extra effort on this
As the last step, the authors provide a discussion in which they combined
the data collected from the companies with the classification of how to support
different quality attributes in (Larsson, 2004). The combination gives an
abstract description of where, which, and how different quality attributes should
be supported by a component technology tailored for usage in the vehicular
industry.
Figure 2.3: The characteristics and their priority in the automotive domain.
Y-axis: priority of quality attributes on a scale from 1 (highest) to 4 (lowest).
X-axis: the attributes, with the highest-prioritized attribute leftmost and the
lowest rightmost. Each of the companies has one bar for each attribute,
textured as indicated below the X-axis.
2.2.3 Medical
Wijnstra (Wijnstra, 2001) described his experience with quality
attributes and aspects in the development of a medical imaging product family.
To meet the requirements of a range of medical imaging products, a product
family was defined. Wijnstra distinguished between quality attributes and
aspects as views. A quality attribute of an artifact is an observable property of
the artifact, and it can be observable at development time or at run-time.
Following Wijnstra, quality attributes and aspects are used to add
structure to the various phases of the development process. They form a
supporting means for achieving completeness, i.e. have all relevant concerns
been taken into account? In a product family context where the family members
are constructed from a component-based platform, it is especially useful to
achieve aspect-completeness of components, allowing system composition
without worrying about individual aspects.
Software architecture is defined as “the structure or structures of the
system, which comprise software components, the externally visible properties
of those components, and the relationships among them”. The architecture
must accommodate the quality attributes as imposed on the system, which can
be handled via structures in the high-level architecture, aspects, or rules &
guidelines.
The most characteristic and important quality attributes found in
Wijnstra's research for the medical imaging product family are given below:
1. Reliability
2. Safety
3. Functionality
4. Portability
5. Modifiability
a. Configurability
b. Extensibility and Evolvability
6. Testability
7. Serviceability
2.2.4 Consumer Electronics
Consumer electronics products, such as TVs, VCRs, and DVD players, are
developed and delivered in the form of product families, which are characterized by
many similarities and few differences, and in the form of product populations, which
are sets of products with many similarities but also many differences.
Production is organized into product lines; this allows many variations on a
central product definition.
A product line is a top-down, planned, proactive approach to achieve
reuse of software within a family or population of products. It is based on use of
a common architecture and core functions included into the product platform
and basic components. The diversity of products is achieved by inclusion of
different components. Because of the requirements for low hardware and
production costs, general-purpose component technologies have not been used,
but rather more dedicated and simpler proprietary models have been developed.
An example of such a component model is the Koala component model used at
Philips (Ommering et al., 2000), (Ommering, 2002). Koala is a component
model and an architectural description language to build a large diversity of
products from a repository of components. Koala is designed to build consumer
products such as televisions, video recorders, CD and DVD players and
recorders, and combinations of them.
2.2.5 Other domains
There are many domains in which embedded systems are used
extensively, among them telecommunications, avionics and aerospace,
transportation, computer games, home electronics, and navigation systems.
While there are many similarities between these domains, there are also very
different requirements for their functional and extra-functional properties. The
consequence is that the requirements for component-based technologies differ,
and one cannot expect a single component model to cover all the quality
attributes of the different domains. The expectation is rather that the different
embedded component models will share some common characteristics, such as
basic principles of component specification through interfaces, basic
composition and run-time services, certain patterns, and similar.
2.3 The needs and priorities in research
The component-based approach at the system level, where hardware
components are designed with embedded software, has been successfully used
for many years. Large-grained generic components like protocol stacks,
RTOSs, etc. have also been used for a long time. In addition, technology
supporting a component-based approach has been developed, either in the form
of proprietary component models or by using reduced versions of some widely
used component models. Still, a number of improvements are needed.
Some of them are the following (of different importance for different domains):
- There is a lack of widely adopted component technology standards
which are suitable for embedded systems. For smaller-size embedded systems,
it is important that a system composed of components can be optimized for
speed and memory consumption, which is still missing in most of the
component technologies available today.
- There is also a need for interoperability between different component
technologies. Specification technology is not sufficiently developed to guarantee
a priori component interoperability.
- Most current component technologies do not support important
extra-functional properties.
- There is a need for generic platform services, for, e.g., security and
availability.
- Tools that support component-based development are still lacking.
- There is a lack of efficient implementations of component
methodologies (i.e., middleware), which have low requirements on memory and
processing power.
Major needs for the further development of component technology for
embedded systems are the following (Brinksma et al., 2001).
- Need for adopted component models and methodologies for embedded
systems. A problem is that many application domains have application-
dependent requirements on such a technology.
- Need for light-weight implementations of component methodologies. In
order to support more advanced features in component-based systems, the run-
time platform must provide certain services, which however must use only
limited resources.
- Obtaining extra-functional properties of components: Timing and
performance properties are usually obtained from components by
measurement, usually by means of simulation. Problems with this approach are
that the results, which depend crucially on the environment (model) used for
the measurements, may not be valid in other environments, and that the results
may depend on factors which cannot easily be controlled. Techniques should be
developed for overcoming these problems, thereby obtaining more reliable
specifications of component properties.
- Platform and vendor independence: Many current component
technologies are rather tightly bound to a particular platform (either run-time
platform or design platform). This means that components only make sense in
the context of a particular platform.
- Efforts to predict system properties: The analysis of many global
properties from component properties is hindered by inherent complexity
issues. Efforts should be directed to finding techniques for coping with this
complexity.
- Component certification: In order to transfer components
across organizations, techniques and procedures should be
developed for ensuring the trustworthiness of components.
- Component noninterference: Particularly in safety critical applications,
there is a need to ensure separation and protection between component
implementations, in terms of memory protection, resource usage, etc.
- Tool support: The adoption of component technology depends on the
development of tool support.
The clearly identified priorities of CBSE for embedded systems are:
- Predicting system properties. A research challenge today is to predict
system properties from the component properties. This is interesting for system
integration, to achieve predictability, etc.
- Development of widely adopted component models for real-time
systems. Such a model should be supported by technology for generating
necessary runtime infrastructure (which must be light-weight), generation of
monitors to check conformance with contracts, etc.
2.4 Summary
At the beginning of the chapter the basic concepts about embedded
systems are defined. After that, the component-based software development
(CBSD) approach and features that are relevant to these systems are described.
First, the focus was on small embedded systems, showing the specific
constraints of these systems. In the second phase the characteristics and
limitations of large embedded systems were discussed.
In the second part of this chapter, the results of a literature survey on the
main quality characteristics involving embedded systems in different
application areas were presented, including:
• Industrial Automation
• Automotive
• Medical
• Consumer Electronics
• General
This research was of great importance to the work, because the quality
characteristics that make up the quality reference model, which will be shown in
Chapter 4 of this thesis, were based on this survey.
Finally, the chapter ends by listing the needs and priorities in research
into the use of the component-based development (CBD) approach for
embedded systems. The list was compiled from reports in the literature by
several researchers in the field of embedded systems.
3 Related Works and Component
Quality, Evaluation and Certification:
A Survey
This chapter presents the related works and a survey of cutting-edge
embedded software component quality, evaluation and certification research, in
an attempt to analyze the trends in CBSE/CBD applied to embedded systems
projects and to probe some of the component quality, evaluation and
certification research directions.
3.1 Related Works
One of the main objectives of software engineering is to improve the
quality of software products, establishing methods and technologies to build
software products within given time limits and available resources. Given the
direct correlation that exists between software products and processes, the
quality area could be basically divided into two main topics (Pressman, 2005):
• Software Product Quality: aiming to assure the quality of the
generated product (e.g. ISO/IEC 9126 (ISO/IEC 9126, 2001), ISO/IEC
12119 (ISO/IEC 12119, 1994), ISO/IEC 14598 (ISO/IEC 14598, 1998),
SQuaRE project (ISO/IEC 25000, 2005) (McCall et al., 1977), (Boehm et
al., 1978), among others); and
• Software Processes Quality: looking for the definition, evaluation
and improvement of software development processes (e.g. Capability
Maturity Model (CMM) (Paulk et al., 1993), Capability Maturity Model
Integrated (CMMI) (CMMI, 2000), Software Process Improvement and
Capability dEtermination (SPICE) (Drouin, 1995), among others).
Currently, many institutions are concerned with creating standards to
properly evaluate the quality of the software product and software development
processes. In order to provide a general vision, Table 3.1 shows a set of national
and international standards in the embedded domain.
Table 3.1: Software quality standards.
Standards Overview
ISO/IEC 61131 component-based approach for industrial systems
RTCA DO 178B guidelines for development of aviation software
ISO/IEC 61508 Safety life cycle for industrial software
ISO/IEC 9126 Software Products Quality Characteristics
ISO/IEC 14598 Guides to evaluate software products, based on practical
usage of the ISO/IEC 9126 standard
ISO/IEC 12119 Quality Requirements and Testing for Software Packages
SQuaRE project
(ISO/IEC 25000)
Software Product Quality Requirements and Evaluation
IEEE P1061 Standard for Software Quality Metrics Methodology
ISO/IEC 12207 Software Life Cycle Process.
NBR ISO 8402 Quality Management and Assurance.
NBR ISO 9000-1-2 Model for quality assurance in Design, Development,
Test, Installation and Servicing
NBR ISO 9000-3 Quality Management and Assurance. Application of the
ISO 9000 standard to the software development process
(evolution of the NBR ISO 8402).
CMMI
(Capability
Maturity Model
Integration)
SEI’s model for judging the maturity of the software
processes of an organization and for identifying the key
practices that are required to increase the maturity of
these processes.
ISO 15504 It is a framework for the assessment of software
processes.
The embedded software market has grown in the last decade, as well as the
necessity of producing software with quality and trust. Thus, obtaining quality
certificates has been a major concern for software companies. In 2002, Weber
(Weber et al., 2002) showed how this tendency influenced the Brazilian
software companies from 1995 until 2002.
The number of companies looking for standards to assure the quality of
their products or processes has grown drastically in the recent past. The graph
on the left shows this growth in relation to ISO 9000, which assures the Quality
Management and Assurance. The graph on the right shows this growth in
relation to CMM, which assures the software development processes quality.
However, there is still no standard or effective process to certify the
quality of pieces of embedded software, such as components. As shown in
Chapter 1, this is one of the major inhibitors to the adoption of CBD. However,
some ideas of software product quality assurance may be seen in the SQuaRE
project (described next), which will be adopted as a basis for defining a
consistent quality methodology for embedded software components.
3.1.1 ISO/IEC 61131
IEC 61131 was developed by the International Electrotechnical
Commission with the input of vendors, end-users and academics, to provide a
generic programming environment for the industry. IEC 61131 consists of five
parts listed below, in Table 3.2.
Table 3.2: IEC61131
Part Title
Part 1 General information
Part 2 Equipment and test requirements
Part 3 PLC programming languages
Part 4 User guidelines
Part 5 Communications
IEC 61131-3 is the international standard for programmable controller
programming languages. As such, it specifies the syntax, semantics and display
for the following suite of PLC programming languages:
• Ladder Diagram (LD)
• Sequential Function Chart (SFC)
• Function Block Diagram (FBD)
• Structured Text (ST)
• Instruction List (IL)
One of the primary benefits of the standard is that it allows multiple
languages to be used within the same programmable controller. This allows the
program developer to select the language best suited to each particular task.
3.1.2 RTCA DO 178B
A special committee (SC-145) of the Radio Technical Commission for
Aeronautics (RTCA) convened in 1980 to establish guidelines for developing
airborne systems. The report “Software Considerations in Airborne Systems
and Equipment Certification” was published in January 1982 as RTCA
document DO-178 (and revised as DO-178A in 1985).
Table 3.3: DO-178B
DO-178B Certification Requirements
Not all items are required at all certification levels.
DO-178B Documents: DO-178B Records:
Plan for Software Aspects of Certification (PSAC) Software Verification Results (SVR)
Software Development Plan (SDP) Problem Reports
Software Verification Plan (SVP) Software Configuration Management
Records
Software Configuration Management Plan (SCMP) Software Quality Assurance Records
Software Quality Assurance Plan (SQAP)
Software Requirements Standards (SRS)
Software Design Standards (SDS)
Software Code Standards (SCS)
Software Requirements Data (SRD)
Software Design Description (SDD)
Software Verification Cases and Procedures (SVCP)
Software Life Cycle Environment Configuration Index
(SECI)
Software Configuration Index (SCI)
Software Accomplishment Summary (SAS)
Due to rapid advances in technology, the RTCA established a new
committee (SC-167) in 1989 with the objective of updating the DO-178A by
focusing on five areas: documentation integration and production, system
issues, software development, software verification, and software configuration
management and software quality assurance. The resulting document,
DO-178B, as shown in Table 3.3, provides guidelines for applicants developing
software-intensive airborne systems.
The targeted DO-178B certification level is either A, B, C, D, or E.
Correspondingly, these DO-178B levels describe the consequences of a potential
failure of the software: catastrophic, hazardous-severe, major, minor, or
no effect. The first column of Table 3.3 shows the required documentation for
evaluation and possible certification, while the second column shows the
records resulting from the evaluation process.
The DO-178B certification process is most demanding at higher levels. A
product certified to DO-178B level A would have the largest potential market,
but it would require thorough, labor-intensive preparation of most of the items
on the DO-178B support list. Conversely, a product certified to DO-178B level
E would require fewer support items and be less taxing on company resources.
Unfortunately, the product would have a smaller range of applicability than if
certified at a higher level.
3.1.3 ISO/IEC 61508
The International standard IEC 61508 “Functional safety of electrical /
electronic / programmable electronic safety-related systems (E/E/PES)” is
intended to be a basic functional safety standard applicable to all kinds of
industry. IEC 61508 defines functional safety as: “part of the overall safety
relating to the EUC (Equipment Under Control) and the EUC control system
which depends on the correct functioning of the E/E/PE safety-related systems,
other technology safety-related systems and external risk reduction facilities”.
The standard covers the complete safety life cycle, and may need
interpretation to develop sector specific standards. It has its origins in the
process control industry sector.
The safety life cycle has 16 phases which roughly can be divided into three
groups as follows: phases 1-5 address analysis, phases 6-13 address realization
and phases 14-16 address operation. All phases are concerned with the safety
function of the system. The standard has seven parts. Parts 1-3 contain the
requirements of the standard (normative), while 4-7 are guidelines and
examples for development and thus informative.
Central to the standard are the concepts of risk and safety function. The
risk is a function of frequency (or likelihood) of the hazardous event and the
event consequence severity. The risk is reduced to a tolerable level by applying
safety functions which may consist of E/E/PES and/or other technologies.
While other technologies may be employed in reducing the risk, only those
safety functions relying on E/E/PES are covered by the detailed requirements of
IEC 61508.
IEC 61508 has the following views on risks:
• zero risk can never be reached
• safety must be considered from the beginning
• non-tolerable risks must be reduced (ALARP)
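The risk concept above can be sketched as a simple calculation: risk combines the likelihood of the hazardous event with its consequence severity, and safety functions reduce it to a tolerable level. The numeric scales, thresholds and class names below are illustrative assumptions for the sketch, not values defined by IEC 61508.

```python
# Illustrative risk assessment in the spirit of IEC 61508: risk is a
# function of event likelihood and consequence severity. The scales
# (likelihood 1..5, severity 1..4) and the tolerability thresholds are
# hypothetical, not taken from the standard.

def risk_index(likelihood: int, severity: int) -> int:
    """Combine likelihood (1..5) and severity (1..4) into a risk index."""
    return likelihood * severity

def classify(index: int) -> str:
    """Map a risk index to an illustrative tolerability class."""
    if index >= 15:
        return "intolerable"        # must be reduced regardless of cost
    if index >= 6:
        return "ALARP"              # reduce as low as reasonably practicable
    return "broadly tolerable"      # no further reduction required

if __name__ == "__main__":
    # A frequent (4) event with severe (4) consequences is intolerable;
    # a safety function lowering its frequency to rare (1) makes the
    # residual risk broadly tolerable.
    print(classify(risk_index(4, 4)))
    print(classify(risk_index(1, 4)))
```

The point of the sketch is only that the same severity becomes tolerable once a safety function reduces the event frequency, which is exactly the role IEC 61508 assigns to E/E/PES safety functions.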
3.1.4 ISO/IEC 25000 (SQuaRE project)
The SQuaRE (Software Product Quality Requirements and Evaluation)
project (ISO/IEC 25000, 2005) has been created specifically to make two
standards converge, trying to eliminate the gaps, conflicts, and ambiguities that
they present. These two standards are the ISO/IEC 9126 (ISO/IEC 9126, 2001),
which defines a quality model for software products, and ISO/IEC 14598
(ISO/IEC 14598, 1998), which defines a software product evaluation process
based on the ISO/IEC 9126.
Thus, the general objective for this next series is to respond to the
evolving needs of users through an improved and unified set of normative
documents covering three complementary quality processes: requirements
specification, measurement and evaluation. The motivation for this effort is to
supply those responsible for developing and acquiring software products with
quality engineering instruments supporting both the specification and
evaluation of quality requirements.
SQuaRE will also include criteria for the specification of quality
requirements and their evaluation, and recommended measures of software
product quality attributes which can be used by developers, acquirers and
evaluators. SQuaRE consists of five divisions, as shown in Figure 3.1.
Figure 3.1: SQuaRE Architecture
The Quality Requirements Division (ISO/IEC 2503n)
(ISO/IEC25030, 2007) contains the standard for supporting the specification of
quality requirements, either during software product quality requirement
elicitation or as an input for an evaluation process:
• Quality requirements and guide: to enable software product quality to be
specified in terms of quality requirements;
The Quality Model Division (ISO/IEC 2501n) (ISO/IEC 25010)
contains the detailed quality model and its specific characteristics and sub-
characteristics for internal quality, external quality and quality in use. This
division includes:
• Quality model and guide: to describe the model for software product
internal and external quality, and quality in use. The document will
present the characteristics and sub-characteristics for internal and
external quality and characteristics for quality in use.
The Product Quality General Division (ISO/IEC 2500n)
(ISO/IEC 25000, 2005) contains the unit standards defining all common
models, terms and definitions referred to by all other standards in the SQuaRE
series. Readers are reminded that the Quality Management theme will deal with
software products, in contrast to the distinct processes of Quality Management
as defined in the ISO 9000 family of standards. This division includes two unit
standards:
• Guide to SQuaRE: to provide the SQuaRE structure, terminology,
document overview, intended users and associated parts of the series, as
well as reference models;
• Planning and management: to provide the requirements and guidance
for planning and management support functions for software product
evaluation.
The standards in the Quality Measures Division (ISO/IEC 2502n)
(ISO/IEC 25020, 2007) were derived from ISO/IEC 9126 and ISO/IEC 14598.
This division covers the mathematical definitions and guidance for practical
measurements of internal quality, external quality and quality in use. In
addition, it will include the definitions for the measurement primitives for all
other measures. This theme will also contain the Evaluation Module to support
the documentation of measurements. This division includes:
• Measurement reference model and guide: to present introductory
explanations, the reference model and the definitions that are common to
measurement primitives, internal measures, external measures and
quality in use measures. The document will also provide guidance to
users for selecting (or developing) and applying appropriate measures;
• Measurement primitives: to define a set of base and derived measures,
being the measurement constructs for the internal quality, external
quality and quality in use measurements;
• Measures for internal quality: to define a set of internal measures for
quantitatively measuring internal software quality in terms of quality
characteristics and sub-characteristics;
• Measures for external quality: to define a set of external measures for
quantitatively measuring external software quality in terms of quality
characteristics and sub-characteristics;
• Measures for quality in use: to define a set of measures for measuring
quality in use. The document will provide guidance on the use of the
quality in use measures.
The Quality Evaluation Division (ISO/IEC 2504n) (ISO/IEC
25040) contains the standards for providing requirements, recommendations
and guidelines for software product evaluation, whether performed by
evaluators, acquirers or developers:
• Quality evaluation overview and guide: to identify the general
requirements for specification and evaluation of software quality and to
clarify the generic concepts. It will provide a methodology for evaluating
the quality of a software product and for stating the requirements for
methods of software product measurement and evaluation;
• Process for developers: to provide requirements and recommendations
for the practical implementation of software product evaluation when the
evaluation is conducted in parallel with development;
• Process for acquirers: to contain requirements, recommendations and
guidelines for the systematic measurement, assessment and evaluation of
software product quality during acquisition of “commercial-off-the-shelf”
(COTS) software products or custom software products, or for
modifications to existing software products;
• Process for evaluators: to provide requirements and recommendations
for the practical implementation of software product evaluation, when
several parties need to understand, accept and trust evaluation results;
• Documentation for the evaluation module: to define the structure and
content of the documentation to be used to describe an Evaluation
Module.
The next sections present in more detail the Quality Model Division, the
Quality Evaluation Division and the Quality Measurement Division. These three
divisions are the basis of the SQuaRE project and contain the guidelines and
techniques that guide this thesis in the proposal of the software component
quality methodology. It is important to note that the five SQuaRE divisions
were still in draft versions and some modifications will probably be made
before their final versions; the methodology should therefore be updated to
follow these evolutions.
3.1.4.1 ISO/IEC 2501n (Quality Model Division)
The ISO/IEC 2501n (ISO/IEC 25010) is composed of the ISO/IEC 9126-1
(ISO/IEC 9126, 2001) standard, which provides a quality model for software
products. At the present time, this division contains only one standard: 25010 –
Quality Model and guide. This is an ongoing standard in development.
The Quality Model Division does not prescribe specific quality
requirements for software, but rather defines a quality model, which can be
applied to every kind of software. This is a generic model that can be applied to
any software product by tailoring it to a specific purpose. The ISO/IEC 25010
defines a quality model that comprises six characteristics and 27 sub-
characteristics (Table 3.4). The six characteristics are described next:
• Functionality: The capability of the software to provide functions
which meet stated and implied needs when the software is used under
specified conditions;
• Reliability: The capability of the software to maintain the level of
performance of the system when used under specified conditions;
• Usability: The capability of the software to be understood, learned, used
and appreciated by the user, when used under specified conditions;
• Efficiency: The capability of the software to provide the required
performance relative to the amount of resources used, under stated
conditions;
• Maintainability: The capability of the software to be modified; and
• Portability: The capability of software to be transferred from one
environment to another.
Table 3.4: Characteristics and Sub-Characteristics in SQuaRE project.
Characteristics   Sub-Characteristics
Functionality     Suitability, Accuracy, Interoperability, Security,
                  Functionality Compliance
Reliability       Maturity, Fault Tolerance, Recoverability,
                  Reliability Compliance
Usability         Understandability, Learnability, Operability,
                  Attractiveness, Usability Compliance
Efficiency        Time Behavior, Resource Utilization, Efficiency Compliance
Maintainability   Analyzability, Changeability, Stability, Testability,
                  Maintainability Compliance
Portability       Adaptability, Installability, Replaceability, Coexistence,
                  Portability Compliance
The usage quality characteristics (i.e. characteristics that can be obtained
from feedback on the product in actual use) are called Quality in Use
characteristics and are modeled with four characteristics: effectiveness,
productivity, safety and satisfaction.
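As an illustration, the two-level quality model described above can be represented as a simple characteristic-to-sub-characteristics mapping; the names follow Table 3.4, while the `characteristic_of` helper is a hypothetical utility, not part of the standard.

```python
# A minimal sketch of the ISO/IEC 25010 quality model structure:
# each characteristic maps to its sub-characteristics (names from Table 3.4).

QUALITY_MODEL = {
    "Functionality": ["Suitability", "Accuracy", "Interoperability",
                      "Security", "Functionality Compliance"],
    "Reliability": ["Maturity", "Fault Tolerance", "Recoverability",
                    "Reliability Compliance"],
    "Usability": ["Understandability", "Learnability", "Operability",
                  "Attractiveness", "Usability Compliance"],
    "Efficiency": ["Time Behavior", "Resource Utilization",
                   "Efficiency Compliance"],
    "Maintainability": ["Analyzability", "Changeability", "Stability",
                        "Testability", "Maintainability Compliance"],
    "Portability": ["Adaptability", "Installability", "Replaceability",
                    "Coexistence", "Portability Compliance"],
}

def characteristic_of(sub: str) -> str:
    """Return the characteristic a given sub-characteristic belongs to
    (illustrative helper, not part of the standard)."""
    for characteristic, subs in QUALITY_MODEL.items():
        if sub in subs:
            return characteristic
    raise KeyError(sub)

# The model comprises six characteristics and 27 sub-characteristics:
assert len(QUALITY_MODEL) == 6
assert sum(len(s) for s in QUALITY_MODEL.values()) == 27
```

Such a mapping is the starting point for the tailoring discussed next: adapting the generic model to a specific domain amounts to revising, adding or removing entries in this structure.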
However, the main drawback of the existing international standards, in
this case the ISO/IEC 25010, is that they provide very generic quality models
and guidelines, which are very difficult to apply to specific domains such as
COTS components and CBSD. Thus, the quality characteristics of this model
should be analyzed in order to define and adapt them to the software
component quality context.
A quality model serves as a basis for determining if a piece of software
has a number of quality attributes. In conventional software development, to
simply use a quality model is often enough, since the main stakeholders that are
interested in software quality are either the developers or the customers that
hired these developers. In both cases, the quality attributes may be directly
observed and assured by these stakeholders.
3.1.4.2 ISO/IEC 2504n (Quality Evaluation Division)
The ISO/IEC 2504n (ISO/IEC 25040) is composed of the ISO/IEC 14598
(ISO/IEC 14598, 1998) standard, which provides a generic model of an
evaluation process, supported by the quality measurements from ISO/IEC 9126.
This process is specified in four major sets of activities for an evaluation,
together with the relevant detailed activities (Figure 3.2). This is an ongoing
standard in development.
Figure 3.2: ISO/IEC 25040
The ISO/IEC 2504n is divided into five standards: ISO/IEC 25040 –
Evaluation reference model and guide; ISO/IEC 25041 – Evaluation modules;
ISO/IEC 25042 – Evaluation process for developers; ISO/IEC 25043 –
Evaluation process for acquirers; and ISO/IEC 25044 – Evaluation process for
evaluators. Besides providing guidance and requirements for the software
product evaluation process (ISO/IEC 25040 and ISO/IEC 25041), it provides
three other standards that contain guides for different perspectives of software
product evaluation: developers, acquirers and evaluators.
3.1.4.3 ISO/IEC 2502n (Quality Measurement Division)
The ISO/IEC 2502n (ISO/IEC 25020, 2007) division tries to improve the
quality measurements provided by previous standards like ISO/IEC 9126-2
(external metrics) (ISO/IEC 9126, 2001), ISO/IEC 9126-3 (internal metrics) and
ISO/IEC 9126-4 (quality in use metrics). The most significant improvement is
the adoption of the Goal-Question-Metric (GQM) paradigm (Basili et al., 1994),
which makes the metrics definition more flexible and adaptable to the software
product evaluation context.
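The GQM paradigm can be sketched as a small goal/question/metric hierarchy in which every metric is traceable to a measurement goal; the goal, question and metric texts below are hypothetical examples, not definitions taken from the standard or from Basili et al.

```python
# A minimal Goal-Question-Metric (GQM) hierarchy: goals are refined into
# questions, and questions are answered by metrics. The concrete goal,
# question and metric names are illustrative assumptions.

from dataclasses import dataclass, field

@dataclass
class Metric:
    name: str

@dataclass
class Question:
    text: str
    metrics: list = field(default_factory=list)

@dataclass
class Goal:
    purpose: str
    questions: list = field(default_factory=list)

goal = Goal(
    purpose="Evaluate the maintainability of a component "
            "from the evaluator's viewpoint",
    questions=[
        Question(
            text="How easy is the component to analyze?",
            metrics=[Metric("comment density"),
                     Metric("cyclomatic complexity")],
        ),
    ],
)

# Every metric is traceable back to its question and goal:
for question in goal.questions:
    for metric in question.metrics:
        print(f"{metric.name} <- {question.text} <- {goal.purpose}")
```

The flexibility the division gains from GQM lies exactly in this structure: new evaluation contexts define new goals and questions, and only then choose or derive metrics, rather than starting from a fixed metric catalogue.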
The ISO/IEC 2502n is divided into five standards: ISO/IEC 25020 -
Measurement reference model and guide; ISO/IEC 25021 – Measurement
primitives; ISO/IEC 25022 – Measurement of internal quality; ISO/IEC 25023
– Measurement of external quality; and ISO/IEC 25024 – Measurement of
quality in use. These standards contain some examples of how to define metrics
for different perspectives, such as internal, external and quality in use.
3.2 A Survey: Embedded Software Component Quality, Evaluation and Certification
Some relevant research works explore the theory of component
certification in academic scenarios, but the literature is not so rich in reports
related to practical embedded software component quality evaluation
experience (Carvalho et al., 2009a). In this way, this chapter presents a survey
of embedded software component quality, evaluation and certification research,
from the early 90s until today. Another relevant aspect observed is that in the
pioneering works the focus was mainly on mathematical and test-based models
while more recently researchers have focused on techniques and models based
on predicting quality requirements.
Works related to quality evaluation and certification processes for
evaluating embedded software component quality are surveyed (Carvalho et al.,
2009b). However, the literature contains several works related to software
component quality achievement, such as: component and contracts (Urting et
al., 2001), Feature Modeling of CBD embedded (Kalaoja, 1997), Security as a
new dimension in embedded (Kocher et al., 2004), Quality attributes of a
Medical Family (Wijnstra, 2001), Usage-based software certification process
(Voas, 2000), component testing (Councill, 1999), (Beydeda & Gruhn, 2003),
component verification (Wallin, 2002), (Reussner, 2003). Since the focus of this
survey is on processes for assuring component quality, it does not cover these
works, which deal only with isolated aspects of component quality.
The first published research about embedded software certification
systems focused on mathematical and test-based models. In 1993, Poore
(Poore et al., 1993) developed an approach based on the usage of three
mathematical models (sampling, component and certification models), using
test cases to report the failures of a system later analyzed in order to achieve a
reliability index. Poore et al. were concerned with estimating the reliability of a
complete system, and not just the reliability of individual software units,
although they did consider how each component affected the system’s
reliability.
One year later, in 1994, Wohlin et al. (Wohlin et al., 1994) presented
the first method of component certification using modeling techniques, making
it possible not only to certify components but to certify the system containing
the components as well. The method is composed of the usage model and the
usage profile. The usage model is a structural model of the external view of the
components, complemented with a usage profile, which describes the actual
probabilities of different events that are added to the model. The failure
statistics from the usage test form the input of a certification model, which
makes it possible to certify a specific reliability level with a given degree of
confidence. An interesting point of this approach is that the usage and profile
models can be reused in subsequent certifications, with some adjustments that
may be needed according to each new situation. However, even reusing those
models, the considerable amount of effort and time that is needed makes the
certification process a hard task.
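The core idea of certifying a reliability level with a given degree of confidence from usage-test statistics can be roughly sketched as follows. This uses the well-known binomial zero-failure bound as a stand-in for Wohlin et al.'s certification model, and it assumes test cases are drawn from the usage profile; it is a simplification, not the method of the original paper.

```python
# Sketch: how many failure-free usage tests are needed to certify a
# per-demand reliability R at confidence C? Using the binomial
# zero-failure bound, certification holds when R**n <= 1 - C.

import math

def required_failure_free_runs(reliability: float, confidence: float) -> int:
    """Smallest n with reliability**n <= 1 - confidence."""
    return math.ceil(math.log(1.0 - confidence) / math.log(reliability))

def certified(runs: int, failures: int,
              reliability: float, confidence: float) -> bool:
    """Certification succeeds only with zero failures and enough
    usage tests drawn from the usage profile."""
    return failures == 0 and runs >= required_failure_free_runs(
        reliability, confidence)

if __name__ == "__main__":
    # Certifying R = 0.999 per demand at 95% confidence requires on the
    # order of 3,000 failure-free usage tests:
    n = required_failure_free_runs(0.999, 0.95)
    print(n, certified(n, 0, 0.999, 0.95))
```

The sketch makes the effort problem noted above concrete: each additional "nine" of reliability multiplies the number of usage tests required by roughly ten, which is why reusing the usage and profile models across certifications matters.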
One different work that can be cited was published in 1994. Merrit
(Merrit, 1994) presented an interesting suggestion: the use of components
certification levels. These levels depend on the nature, frequency, reuse and
importance of the component in a particular context, as follows:
• Level 1: A component is described with keywords and a summary and
is stored for automatic search. No tests are performed; the degree of
completeness is unknown;
• Level 2: A source code component must be compiled and metrics are
determined;
• Level 3: Testing, test data, and test results are added; and
• Level 4: A reuse manual is added.
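The levels above can be encoded as a simple cumulative classification; the artifact flags below are assumed names chosen for the sketch, not Merrit's wording.

```python
# Illustrative encoding of Merrit's four certification levels: each level
# requires all artifacts of the levels below it. Flag names are assumptions.

def certification_level(has_description: bool, compiled_with_metrics: bool,
                        tested: bool, has_reuse_manual: bool) -> int:
    """Return the highest level (1..4) whose requirements are all met,
    or 0 if the component is not even described for automatic search."""
    level = 0
    if has_description:
        level = 1
        if compiled_with_metrics:
            level = 2
            if tested:
                level = 3
                if has_reuse_manual:
                    level = 4
    return level

# A described, compiled component with tests but no reuse manual
# reaches level 3:
print(certification_level(True, True, True, False))
```

The cumulative structure is the point: a component cannot claim level 3 testing evidence without the level 2 compilation and metrics beneath it.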
Although simple, these levels represent an initial component quality level
model. To reach the next level, the component efficiency and documentation
should be improved. The closer to level four, the higher is the probability that
the component is trustworthy and may be easily reused. Moreover, Merrit
begins to consider other important characteristics related to component
certification, such as attaching some additional information to components, in
order to facilitate their recovery, defining metrics to assure the quality of the
components, and providing a component reutilization manual in order to help
its reuse in other contexts. However, this is just a suggestion of certification
levels and no practical work was actually done to evaluate it.
Subsequently, in 1996, Rohde et al. (Rohde et al., 1996) provided a
synopsis of in-progress research and development in reuse and certification of
embedded software components at Rome Laboratory of the US Air Force, where
a Certification Framework (CF) for software components was being developed.
The purpose of the CF was: to define the elements of the reuse context that are
important to certification; to define the underlying models and methods of
certification; and, finally, to define a decision-support technique to construct a
context-sensitive process for selecting and applying the techniques and tools to
certify components. Additionally, Rohde et al. developed a Cost/Benefit plan
that describes a systematic approach of evaluating the costs and benefits of
applying certification technology within a reuse program.
After analyzing this certification process, Rohde et al. found some points
that require better formulation in order to increase the certification quality,
such as the techniques used to find errors (the major errors are more likely to
be semantic, and not locally visible, than syntactic, which is what this process
was looking for) and, consequently, the automatic tools that implement such
techniques. In summary, Rohde et al. considered only testing techniques,
using the resulting defect data to certify software components; this is only one
of the important techniques that should be applied to component certification.
Two years later, Voas (Voas, 1998) defined a certification methodology
using automated technologies, such as black-box testing and fault injection, to
determine whether a component fits into a specific scenario.
This methodology uses three quality assessment techniques to determine
the suitability of a candidate COTS component:
(i) Black-box component testing is used to determine whether the
component quality is high enough;
(ii) System-level fault injection is used to determine how well a
system will tolerate a faulty component;
(iii) Operational system testing is used to determine how well the
system will tolerate a properly functioning component, since even such
components can create system-wide problems.
The methodology can help developers decide whether a component is
right for their system, showing how much of someone else's mistakes the
system can tolerate.
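Technique (ii), system-level fault injection, can be illustrated with a minimal sketch (all names and the toy system are assumptions for illustration; Voas's actual tooling operated on real components and systems): a wrapper corrupts a component's output, and the test observes whether the surrounding system still behaves acceptably.

```python
import random

def inject_faults(component, fault, rate, rng=None):
    """Wrap `component` so that, with probability `rate`, its output is
    corrupted by applying `fault` to it (system-level fault injection)."""
    rng = rng or random.Random(0)
    def faulty(*args, **kwargs):
        out = component(*args, **kwargs)
        return fault(out) if rng.random() < rate else out
    return faulty

# A toy "system": a controller that defensively clamps a sensor reading.
def system(sensor):
    return max(0, min(100, sensor()))

healthy_sensor = lambda: 42
faulty_sensor = inject_faults(healthy_sensor, fault=lambda v: v * 1000, rate=1.0)
```

Here `system(faulty_sensor)` still yields a value within range, i.e. the system tolerates this class of component failure; removing the defensive clamping would let the injected fault propagate to the system level.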
According to Voas, this approach is not foolproof and is perhaps not well
suited to all situations. For example, the methodology does not certify that a
component can be used in all systems. In other words, Voas focused his
approach on certifying a certain component within a specific system and
environment, performing several types of tests according to the three
techniques cited above.
Another work involving component testing may be seen in (Wohlin and
Regnell, 1998), where Wohlin and Regnell extended their previous research
(Wohlin et al., 1994), now focusing on techniques for certifying both
components and systems. Their certification process includes two major
activities:
(i) usage specification (consisting of a usage model and profiles), and
(ii) certification procedure, using a reliability model.
The main contribution of that work is the division of components into
classes for certification and the identification of three different ways of
certifying software systems:
(i) Certification process, in which the functional requirements
implemented in the component are validated during usage-based
testing in the same way as in any other testing technique;
(ii) Reliability certification of component and systems, in
which the component models that were built are revised and
integrated to certify the system that they form; and,
(iii) Certify or derive system reliability, where the focus is on
reusing the models that were built to certify new components or
systems.
In this way, Wohlin and Regnell provided some methods and guidelines
pointing in suitable directions to support software component certification.
However, the proposed methods are theoretical and lack experimental study.
According to Wohlin et al., “both experiments in a laboratory environment
and industrial case studies are needed to facilitate the understanding of
component reliability, its relationship to system reliability and to validate the
methods that were used only in laboratory case studies” (p. 9). Until now, no
progress in those directions has been achieved.
Another relevant work was presented in 2000 by Jahnke, Niere and
Wadsack (Jahnke et al., 2000), who developed a methodology for semi-
automatic analysis of embedded software component quality. The approach
evaluates the component's use of data memory (RAM) and is applied to
components developed with Java technology. Since Java is a multiplatform
language, components developed for one platform can easily be ported to
another; on the other hand, the technology makes intensive use of RAM and
additionally requires a virtual machine to interpret the code, whereas
embedded systems usually have a small RAM capacity.
The work is restricted because it verifies component quality from only
one point of view, data memory usage, among the many other characteristics
to be considered. Furthermore, the languages most commonly used for
embedded development are C and C++, whereas Java is widely used for the
development of desktop systems.
In 2001, Stafford et al. (Stafford et al., 2001) developed a model for
component marketplaces that supports the prediction of system properties
prior to component selection. The model is concerned with the question of
verifying functional and quality-related values associated with a component.
This work introduced notable changes in the area, since it presents a CBD
process with support for component certification according to credentials
(a kind of component quality label) provided by the component developer.
Such credentials associate arbitrary properties and property values with
components, using the notation <property, value, credibility>. Through
credentials, the developer chooses the best components to use in the
application development, based on their “credibility” level.
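The credential triple and a credibility-driven selection can be sketched as follows (the <property, value, credibility> notation is Stafford et al.'s; the selection function, component names and values are illustrative assumptions):

```python
from collections import namedtuple

# Stafford et al.'s triple: <property, value, credibility>.
Credential = namedtuple("Credential", ["property", "value", "credibility"])

def best_by_credibility(candidates, prop):
    """Pick, among (name, credentials) pairs, the component whose credential
    for `prop` carries the highest credibility level."""
    def credibility(entry):
        _, creds = entry
        return max((c.credibility for c in creds if c.property == prop),
                   default=-1.0)
    return max(candidates, key=credibility)[0]

candidates = [
    ("codecA", [Credential("latency_ms", 12, 0.6)]),   # vendor-claimed
    ("codecB", [Credential("latency_ms", 15, 0.9)]),   # third-party measured
]
```

Under this reading, a developer who values trust over the raw property value would select `codecB`, whose latency figure, although worse, was verified with higher credibility.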
Stafford et al. also introduced the notion of an active component dossier, in
which the component developer packs a component along with everything
needed for it to be used in an assembly. A dossier is an abstract
component that defines certain credentials and provides benchmarking
mechanisms that, given a component, will fill in the values of those credentials.
Stafford et al. concluded their work with some open questions, such as: how
can measurement techniques be certified? What level of trust is required under
different circumstances? Are there other mechanisms that might be used to
support trust? If so, are there different levels of trust associated with them,
and can knowledge of these differences be used to direct the usage of different
mechanisms under different conditions?
Besides these questions, there are others that must be answered before a
component certification process is achieved, some apparently as simple
as: what does it mean to trust a component? (Hissam et al., 2003), and others
as complex as: what characteristics of a component make it certifiable, and
what kinds of component properties can be certified? (Wallnau, 2003).
In 2002, Comella-Dorda et al. (Comella-Dorda et al., 2002) proposed
a COTS software product evaluation process. The process contains four
activities, as follows:
(i) Planning the evaluation, where the evaluation staff is defined, the
stakeholders are identified, the required resources are estimated and
the basic characteristics of the evaluation activity are determined;
(ii) Establishing the criteria, where the evaluation requirements
are identified and the evaluation criteria are constructed;
(iii) Collecting the data, where the component data are collected,
the evaluation plan is produced and the evaluation is executed; and
(iv) Analyzing the data, where the results of the evaluation are
analyzed and some recommendations are given.
However, the proposed process is an ongoing work and, until now, no real
case study has been carried out to evaluate it, so its real efficiency in
evaluating software components remains unknown.
With the same objective, in 2003 Beus-Dukic et al. (Beus-Dukic et al.,
2003) proposed a method to measure quality characteristics of COTS
components, based on the latest international standards for software product
quality (ISO/IEC 9126, ISO/IEC 12119 and ISO/IEC 14598). The method is
composed of four steps, as follows:
(i) Establish the evaluation requirements, which includes specifying
the purpose and scope of the evaluation, and specifying the evaluation
requirements;
(ii) Specify the evaluation, which includes selecting the metrics and
the evaluation methods;
(iii) Design the evaluation, which considers the component
documentation, development tools, evaluation costs and the expertise
required in order to produce the evaluation plan; and
(iv) Execute the evaluation, which includes the execution of the
evaluation methods and the analysis of the results.
However, the proposed method was not evaluated in a real case study,
and thus its real efficiency in evaluating software components is still unknown.
In 2003, Hissam et al. (Hissam et al., 2003) introduced Prediction-
Enabled Component Technology (PECT) as a means of packaging predictable
assembly as a deployable product. PECT is meant to be the integration of a
given component technology with one or more analysis technologies that
support the prediction of assembly properties and the identification of the
required component properties and their possible certifiable descriptions. This
work, which is an evolution of Stafford et al.’s work (Stafford et al., 2001),
attempts to validate the PECT and its components, giving credibility to the
model, which will be further discussed in this section.
During 2003, a CMU/SEI report (Wallnau, 2003) extended the work of
Hissam et al. (Hissam et al., 2003), describing how component technology can
be extended in order to achieve Predictable Assembly from Certifiable
Components (PACC). This initiative is developing technology and methods
that will enable software engineers to predict the runtime behavior of
assemblies of software components from the properties of those components.
This requires that the properties of the components be rigorously defined,
trusted and amenable to certification by independent third parties.
SEI’s approach to PACC is PECT, which follows Hissam et al.’s work
(Hissam et al., 2003). PECT is an ongoing research project that focuses on
analysis – in principle any analysis could be incorporated. It is an abstract
model of a component technology, consisting of a construction framework and a
reasoning framework. In order to concretize a PECT, it is necessary to choose an
underlying component technology, to define restrictions on that technology to
allow predictions, and to find and implement proper analysis theories. The
PECT concept is portable, since it does not include parts that are bound to any
specific platform. Figure 3.3 shows an overview of this model.
Figure 3.3: Structure of Prediction-Enabled Component Technology (Wallnau,
2003).
A system built within the PECT framework can be difficult to understand,
due to the difficulty of mapping the abstract component model into the concrete
component technology. It is even possible that systems that look identical at the
PECT level behave differently when realized on different component
technologies.
Although PECT is highly analyzable and portable, it is not very
understandable. In order to understand the model, the mapping to the
underlying component technology must be understood as well.
This is ongoing work in the current SEI research framework, and the
model still requires further maturation by the software engineering community
in order to earn its trust. Accordingly, some future work is planned, such as
the development of tools to automate the process, the analysis of the
applicability of one or more property theories, and the certification of
non-functional requirements, among other items. Moreover, there is still the
need to apply this model in industrial scenarios and to evaluate the validity
of the certification.
Another expressive work, a predictability approach for quality attributes,
was proposed by Magnus Larsson in 2004 (Larsson, 2004). One of the main
objectives of developing component-based software systems is to enable the
integration of components that are perceived as black boxes. While the
construction part of the integration, using component interfaces, is a standard
part of all component models, the prediction of the quality attributes of
component compositions is not fully developed, which significantly decreases
the value of the component-based approach for building dependable systems.
Larsson classifies the different types of relations between the quality attributes
of components and those of their compositions. The types of relations are
classified according to the possibility of predicting the attributes of
compositions from the attributes of the components, and according to the
impact of other factors such as the system environment or the software
architecture. Such a classification can indicate the effort that would be
required to predict the system attributes that are essential for system
dependability and, in this way, the feasibility of the component-based
approach in developing dependable systems.
According to composition principles, the following types of attributes can
be distinguished:
(i) Directly composable attributes. An attribute of an assembly
which is a function of, and only of, the same attribute of the
components involved.
(ii) Architecture-related attributes. An attribute of an assembly
which is a function of the same attribute of the components and of the
software architecture.
(iii) Derived attributes. An attribute of an assembly which depends
on several different attributes of the components.
(iv) Usage-dependent attributes. An attribute of an assembly which
is determined by its usage profile.
(v) System environment context attributes. An attribute which is
determined by other attributes and by the state of the system
environment.
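The difference between a directly composable and an architecture-related attribute can be contrasted with two toy attributes, static memory and pipeline latency (all component names, values and formulas are illustrative assumptions, not taken from Larsson's work):

```python
# Illustrative component attribute data (hypothetical values).
components = {
    "sensor": {"mem_kb": 4, "latency_ms": 2},
    "filter": {"mem_kb": 8, "latency_ms": 5},
    "logger": {"mem_kb": 6, "latency_ms": 3},
}

def assembly_memory(comps):
    """Directly composable: a function of the same attribute of the parts only."""
    return sum(c["mem_kb"] for c in comps.values())

def assembly_latency(comps, pipeline):
    """Architecture-related: also depends on how the architecture wires the
    parts (here, which components lie on the sequential pipeline path)."""
    return sum(comps[name]["latency_ms"] for name in pipeline)
```

Note that the assembly memory can be predicted from the component attributes alone, while the end-to-end latency changes whenever the architecture (the pipeline path) changes, even though the component attributes stay the same.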
This work is very useful because, after the component composition, the
system's quality has to be predicted and assured; before that, however, the
quality of each component must be known.
Finally, in 2006, Daniel Karlsson (Karlsson, 2006) presented novel
techniques for the verification of component-based embedded system designs.
These techniques were developed using a Petri net based modeling approach
called PRES+. Two problems are addressed: component verification and
integration verification.
With component verification, the providers verify their components so
that they function correctly when given inputs that conform to the
assumptions imposed by the components on their environment. Two
techniques for component verification were proposed:
• The first technique enables formal verification of SystemC designs
by translating them into the PRES+ representation.
• The second technique is a simulation-based approach into which
formal methods are injected to boost verification efficiency.
Provided that each individual component is verified and is guaranteed to
function correctly, the components are interconnected to form a complete
system. What remains to be verified is the interface logic, also called glue logic,
and the interaction between components. Each glue logic and interface cannot
be verified in isolation. It must be put into the context in which it is supposed to
work. An appropriate environment must thus be derived from the components
to which the glue logic is connected. This environment must capture the
essential properties of the whole system with respect to the properties being
verified. In this way, both the glue logic and the interaction of components
through the glue logic are verified. Finally, the author presents algorithms for
automatically creating such environments as well as the underlying theoretical
framework and a step-by-step roadmap on how to apply these algorithms.
This approach verifies the component from only one perspective,
functionality, while other perspectives also have to be considered. A further
concern is the cost of formal verification: since it demands considerable time
and financial effort, it is employed only in the few cases where it is
mandatory.
3.3 Failures in Software Component Certification
This section describes two failure cases that can be found in the
literature. The first occurred in the US government, when trying to
establish criteria for certifying components, and the second happened to
an IEEE committee, in an attempt to obtain a component certification
standard.
3.3.1 Failure in the National Information Assurance Partnership
(NIAP).
One of the first initiatives attempting to achieve trusted components was
the NIAP, a U.S. Government initiative created to meet the security testing
needs of both information technology (IT) consumers and producers. NIAP is
a collaboration between the National Institute of Standards and Technology
(NIST) and the National Security Agency (NSA). It combines the extensive IT
security experience of both agencies to promote the development of
technically sound security requirements for IT products, systems and
measurement techniques.
From 1993 until 1996, the NSA and the NIST used the Trusted
Computer Security Evaluation Criteria (TCSEC), a.k.a. the “Orange Book”5, as
the basis for the Common Criteria6, aimed at certifying the security features of
components. Their effort was not crowned with success, at least partially
because it defined no means of composing criteria (features) across classes
of components and supported compositional reasoning only for a restricted
set of behavioral assembly properties (Hissam et al., 2003).
3.3.2 Failure in IEEE.
In 1997, a committee was assembled to work on the development of a
proposal for an IEEE standard on software component quality. The initiative
was eventually suspended in that same year, since the committee came to a
consensus that they were still far from the point where the document would
be a strong candidate for a standard (Goulao et al., 2002a).
3.4 Software Component Evaluation and Certification
After presenting the main standards to reach software product quality,
this section will discuss the main concepts involving embedded software
component evaluation and certification, which is an attempt to achieve trust
in embedded software components.
5 http://www.radium.ncsc.mil/tpep/library/tcsec/index.html
6 http://csrc.nist.gov/cc
According to Stafford et al. (Stafford et al., 2001), certification, in
general, is the process of verifying a property value associated with something,
and providing a certificate to be used as proof of validity.
A “property” can be understood as a discernible feature of that
“something”, such as latency or measured test coverage. After verifying these
properties, a certificate must be provided in order to assure that the “product”
has the stated characteristics.
Focusing on a certain type of certification, in this case component
certification, Councill (Councill, 2001) has given a satisfactory definition of
software component certification, which was adopted in this
thesis:
“Third-party certification is a method to ensure that software
components conform to well-defined standards; based on this certification,
trusted assemblies of components can be constructed.”
To prove that a component conforms to well-defined standards, the
certification process must provide a certificate as evidence that the component
fulfills a given set of requirements. Thus, trusted assembly – application
development based on third-party composition – may be performed on the
basis of the previously established quality levels.
Still, third-party evaluation is often viewed as a good way of building
trust in software components. Trust is a property of an interaction and is
achieved to various degrees through a variety of mechanisms. For example,
when purchasing a light bulb, one expects that the base of the bulb will screw
into the socket in such a way that it will produce the expected amount of light.
The size and threading have been standardized, and a consumer “trusts” that
the manufacturer of any given light bulb has checked to make certain that
each bulb conforms to that standard within some acceptable tolerance of
some set of property values. The interaction between the consumer and the
bulb manufacturer involves an implicit trust (Stafford et al., 2001).
In the case of the light-bulb there is little fear that significant damage
would result if the bulb did not in fact exhibit the expected property values. This
is not the case when purchasing a gas connector. In this case, explosion can
occur if the connector does not conform to the standard. Gas connectors are
certified to meet a standard, and nobody with concern for safety would use a
connector that does not have such a certificate attached. Certification is a
mechanism by which trust is gained. Associated with certification is a higher
requirement for and level of trust than can be assumed when using implicit trust
mechanisms (Stafford et al., 2001).
When these notions are applied to CBSD, it makes sense to use different
mechanisms to achieve trust, depending on the level of trust that is required.
In order to achieve trust in components, it is first necessary to obtain the
components that will be evaluated. According to Frakes et al. (Frakes & Terry,
1996), components can be obtained from existing systems through
reengineering, designed and built from scratch, or purchased. After that, the
components are certified, in order to achieve some trust level, and stored in a
repository system, as shown in Figure 3.4.
Figure 3.4: Process of obtaining, evaluating and storing components
A component is certifiable if it has properties that can be demonstrated in
an objective way, which means that they should be described in sufficient
detail, and with sufficient rigor, to enable their certification (Wallnau, 2003).
In order to do so, a well-defined component quality model is needed, one
which incorporates the most common software quality characteristics present
in the already established models, such as functionality, reliability and
performance, plus the characteristics that are inherent to CBSE.
Regarding the certification process, the CBSE community is still far from
reaching a consensus on how it should be carried out, what its requirements
are and who should perform it. Further, third-party certification can face some
difficulties, particularly due to the relative novelty of this area (Goulao et al.,
2002a).
3.5 Summary of the Study
Figure 3.5 summarizes the timeline of research in the embedded software
component quality, evaluation and certification area from 1989 to 2006,
where each circle represents a proposed standard in this research area. In
addition, two projects failed (represented by an “X”). The arrows indicate that
a work was extended by another.
Figure 3.5: Research on embedded software component quality and certification
timeline.
The research in the embedded component quality, evaluation and
certification area follows two main directions, based on: (i) Formalism: how
to develop a formal way to predict component properties, and how to build
components with fully proved correctness properties; and (ii) Component
Quality Model: how to establish a well-defined component quality model, and
what kinds of component properties can be certified.
However, these works still need some effort to conclude the proposed
models and to prove that they can be trusted, and they lack a definition of
which requirements are essential to measure quality in components. Even so,
a unified and prioritized set of CBSE requirements for reliable components is
a challenge in itself (Schmidt, 2003).
3.6 Summary
This Chapter was divided into two parts. The first part presented the
main related works in the context of this thesis: embedded software
component quality evaluation in different domains. It also presented the
SQuaRE project, a software product quality requirements and evaluation
standard that contains some ideas regarding component quality assurance.
The second part of the Chapter presented a survey of the state-of-the-art
in embedded software component quality, evaluation and certification
research. Some approaches found in the literature, including the failure cases,
were described. Through this survey, it can be noticed that embedded
software component quality evaluation and certification is still immature, and
further research is needed in order to develop the processes, methods,
techniques, and tools required to obtain well-defined standards for embedded
software component evaluation. Since trust is a critical issue in CBSE, this
Chapter also presented some concepts of component certification in general;
as shown, further research is still needed in order to acquire the reliability
that the market expects from CBSE.
4 Embedded Software Component
Quality Evaluation Methodology
During the survey of the state-of-the-art in Chapter 3, it was noted
that there is a lack of methods, processes, techniques, guidelines and tools
available in the literature to evaluate the quality of embedded software
components in a practical and effective way. This need is also indicated by
several researchers: (Voas, 1998), (Morris et al., 2001), (Wallnau, 2003),
(Alvaro et al., 2005), (Bass et al., 2000), (Softex, 2007) and (Lucrédio et al.,
2007). Following Alvaro (Alvaro et al., 2005), most researchers agree that
component quality is an essential aspect of the successful adoption of CBSE
and software reuse.
Thus, the main idea of this work is to propose a methodology to evaluate
the quality of embedded software components, based on the ISO/IEC 25000
standard, the so-called SQuaRE project, which was designed to measure the
quality of software products and processes in a general way. These standards
provide high-level definitions, without specifying how they must actually be
implemented, making it very difficult to apply them without acquiring more
knowledge from other sources (i.e. books, papers, etc.). The proposed
methodology was created to meet the needs of the embedded systems design
industry, providing a process for the quality evaluation of embedded software
components in a viable and practical way, through activities, steps, tools, etc.
Nowadays, the standards address only the quality of processes and software
products, while the methodology aims to extend the evaluation to software
components in the embedded domain.
The quality evaluation of embedded software needs to be done by
independent parties, at least until software vendors acquire the level of
maturity that hardware vendors currently have: software acquirers still
cannot count on the equivalent of the data sheets and catalogues available
for hardware components, which describe all their characteristics in detail.
The methodology was developed with two main goals: (i) to evaluate
embedded software component quality from the viewpoint of acquirers and
evaluators; and (ii) to be the default quality evaluation layer applied to
embedded software components before they are stored in the repository
system of the Robust Framework for Software Reuse project (Almeida et al.,
2004), which is under development by the RiSE group. According to Councill
(Councill, 2001), it is better to develop components and systems from scratch
than to reuse a component without quality, or with unknown quality, and
consequently run the risk of negatively impacting the project planning, quality
and time-to-market.
The layer of the robust framework for software reuse that considers the
quality aspects of the components was divided into two groups:
• Quality of general-purpose software components, which are executed
on the Intel x86 hardware platform (IBM-PC compatibles, servers,
desktops, notebooks and others) with operating systems such as
Windows, Linux, Solaris, MacOS and others supported by the x86
platform.
• Quality of embedded software components, which are executed on
different kinds of hardware platforms with different kinds of
operating systems.
The Methodology consists of four independent modules, each one
complementing the others, for a complete embedded software component
quality evaluation. The methodology modules are described below and
displayed graphically in Figure 4.1:
1. Embedded software component Quality evaluation Process (EQP);
2. Embedded software component Quality Model (EQM);
3. Embedded software component evaluation techniques based on
Quality Level (EQL); and
4. Embedded Metrics Approach (EMA).
Figure 4.1: Embedded Software Component Quality Evaluation Methodology.
To conduct an evaluation that strives to ensure a precise and reproducible
measurement, so that the evaluation can provide similar results when
performed by different teams, an Embedded Software Component Quality
Evaluation Process (EQP) was proposed. This evaluation process is the
first module of the proposed methodology and comprises sub-modules,
activities, steps, and guidelines. The other three modules of the methodology
(EQM, EQL and EMA), described above, serve as input for the first two
sub-modules of the evaluation process, as shown in Figure 4.2. Consistent
and good evaluation results can only be achieved by following a high-quality
and consistent evaluation process (Comella-Dorda et al., 2002).
In the second module, the specific quality model for components in the
embedded domain is defined, based on the ISO/IEC 25000 standard and
called the Embedded software component Quality Model (EQM). It is
the reference quality model used to perform the evaluation. It is defined in the
first sub-module of the EQP and is afterwards used to create the evaluation
requirements document, as shown in Figure 4.2. Differently from other
software product quality models found in the literature, such as (McCall et
al., 1977), (Boehm et al., 1978), (Hyatt et al., 1996), (ISO/IEC 9126, 2001)
and (Georgiadou, 2003), this model considers, in the embedded domain, the
Component-Based Development (CBD) characteristics and the quality
attributes that improve reuse.
Once the embedded quality model has been defined, it is necessary to
measure the compliance of the component under evaluation with the quality
characteristics and quality attributes present in the EQM. A set of metrics
must then be defined to evaluate the compliance of the component with the
previously defined reference model. This module, called Embedded
software component evaluation techniques based on Quality Level
(EQL), is based on quality levels for evaluation, where metrics are grouped
by quality level. For each quality characteristic, a set of evaluation techniques
is defined, sub-divided into three quality levels; the level used depends on the
depth of the evaluation and on the application domain. The EQL is used in the
second sub-module of the EQP and serves as input for the specification of the
evaluation activity, as can be seen in Figure 4.2.
Metrics are defined to quantify the proposed evaluation techniques and serve as measurement instruments. For each evaluation technique, at least one metric should be defined. The Embedded Metrics Approach (EMA) is the fourth module of the methodology; it is based on the Goal Question Metric (GQM) paradigm (Basili et al., 1994). GQM defines a measurement model on three levels: (1) the conceptual level (goal), (2) the operational level (question) and (3) the quantitative level (metric). A set of metrics, based on the models, is associated with every question in order to answer it in a measurable way. Like the EQL, the Embedded Metrics Approach is also used in the second sub-module of the EQP and serves as input for the specification of the evaluation activity, as shown in Figure 4.2.
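The three GQM levels can be illustrated with a small sketch. The classes and the efficiency example below are hypothetical illustrations, not part of the EMA definition:

```python
from dataclasses import dataclass, field

@dataclass
class Metric:
    """Quantitative level: a named measurement with its unit."""
    name: str
    unit: str

@dataclass
class Question:
    """Operational level: answered in a measurable way by its metrics."""
    text: str
    metrics: list[Metric] = field(default_factory=list)

@dataclass
class Goal:
    """Conceptual level: an evaluation goal refined into questions."""
    purpose: str
    questions: list[Question] = field(default_factory=list)

# Hypothetical GQM tree for one quality characteristic of an embedded component
goal = Goal(
    purpose="Evaluate the efficiency of the embedded component",
    questions=[
        Question(
            text="How much memory does the component consume at run time?",
            metrics=[Metric("peak_ram_usage", "KiB"),
                     Metric("code_footprint", "KiB")],
        ),
    ],
)

# As the GQM paradigm requires, every question carries at least one metric
assert all(q.metrics for q in goal.questions)
```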
Figure 4.2: Flow Diagram of Embedded Software Component Quality Evaluation Methodology.
The proposed methodology includes two of the three possible perspectives presented in the SQUARE project. From the acquirer perspective, acquirers want to find out which component best fits their needs and constraints. From the perspective of independent evaluators, component owners wish to demonstrate the quality and reliability of their components by having that quality evaluated by third-party evaluators.
The developer's perspective is not contemplated by the methodology, since the tasks, activities and knowledge required in certain technologies and domains to evaluate software components make it very hard for a single developer to execute all the activities, regardless of his knowledge.
4.1 Embedded software component Quality evaluation Process (EQP)
This section describes the quality evaluation process for embedded software components; it was proposed (Carvalho et al., 2009g) as the evaluation module in the context of the Embedded Software Component Quality Evaluation Methodology. The Embedded Quality Process (EQP) is the default process for evaluating embedded component quality in the RiSE project.
The evaluation process presented is based on ISO/IEC 25040, which defines a process for evaluating the quality of software products in an abstract, high-level way, without specifying how the evaluation is actually performed. Moreover, the proposed process is intended to evaluate specific software components instead of products, considering the embedded domain. A set of works on process and product evaluation was found in the literature (McCall et al., 1977), (Boegh et al., 1993), (Beus-Dukic et al., 2003), (Comella-Dorda et al., 2002), and they served as inspiration for the definition of the quality evaluation process.
The embedded software component quality evaluation process is divided into four modules for a complete evaluation, with each of these modules being sub-divided into lower-level activities that provide steps, guidelines, requirements and instrumentation for conducting the evaluation, as shown in Table 4.1 and Figure 4.3.
Table 4.1: The detail of the EQP, modules, activities and steps.
Embedded software component Quality evaluation Process (EQP)

Module: Establish Evaluation Requirements
  Activity: Establish the evaluation purpose
    Steps: Select the evaluation staff; Define evaluation requirements; Define scope, purpose and objectives
  Activity: Identify components
    Steps: Define the components to be evaluated; Specify the architecture and the environment; Define the scenarios
  Activity: Specify embedded quality model
    Steps: Define the embedded quality model (internal, external and quality in use characteristics)

Module: Specify the Evaluation
  Activity: Select metrics
    Steps: Define the EQL for evaluation; Select quality attributes; Select evaluation techniques
  Activity: Establish the evaluation criteria
    Steps: Specify metrics on EMA
  Activity: Establish rating levels for metrics
    Steps: Define score for metrics

Module: Design the Evaluation
  Activity: Produce evaluation plan
    Steps: Estimate resource and schedule for evaluation; Select instrumentation (equipment/tool); Define the instrumentation setup; Document the evaluation plan

Module: Execute the Evaluation
  Activity: Measure characteristics
    Steps: Setup the instrumentation, environment and scenarios; Execute the evaluation plan; Collect data
  Activity: Compare with criteria
    Steps: Requirement vs. criteria; Analyze the results; Consolidate data
  Activity: Assess results
    Steps: Write the component evaluation report; Build the component quality label; Record lessons learned; Make recommendations
The proposed evaluation process includes two of the three perspectives addressed by ISO/IEC 2504n: the quality evaluation process from the viewpoint of acquirers and from that of third-party evaluators.
• Process for acquirers – contains requirements, recommendations and guidelines for the systematic measurement, assessment and evaluation of quality during the acquisition of an embedded software component, or for modifications to an existing embedded software component;
• Process for evaluators – provides requirements and recommendations for the practical implementation of embedded component evaluation, when several parties need to understand, accept and trust evaluation results.
The quality evaluation from the developer's viewpoint is not covered by this thesis; that perspective provides requirements and recommendations for the practical implementation of software product evaluation when the evaluation is conducted in parallel with development.
Figure 4.3 shows the whole EQP using Business Process Modeling Notation (BPMN) (Williams, 1967). BPMN is a standard for business process modeling, and provides a graphical notation for specifying business processes in a Business Process Diagram (BPD), based on a flowcharting technique very similar to activity diagrams from the Unified Modeling Language (UML). The primary goal of BPMN is to provide a standard notation that is readily understandable by all stakeholders.
The evaluation process comprises a number of steps which divide the
evaluation into discrete activities, see Figure 4.3.
Figure 4.3: The EQP described in Business Process Modeling Notation.
The evaluation requirements are the formal record of the agreement of the evaluation staff as to what will be covered by the evaluation process. They provide a list of the software quality characteristics to be evaluated and their evaluation level, and identify the sources of data and evidence that can be used in the evaluation process.
The evaluation specification - besides the more formal description of the
evaluation requirements - consists of identification of the received documents
and classification of the available items into component, process and supportive
information in order to identify which evaluation techniques can be applied.
To design the evaluation process, evaluation techniques are related to the
quality characteristics and evaluation levels, and evaluation modules to
evaluation techniques.
The activity of performing the evaluation means applying the evaluation
modules to the related documents and collecting the results for each of them.
The final step of the evaluation process is that of producing an evaluation report
documenting the findings of the evaluation process and drawing conclusions.
The four modules that comprise the EQP are described in detail below.
The concept of evaluation modules is introduced in order to structure the evaluation process. Without an appropriate structure, the process would quickly become intractable, unwieldy and complex. Therefore, we have looked for a well-structured, encapsulated description of component software quality characteristics and of the metrics and evaluation techniques attached to them. Such a description lists the evaluation methods applicable to embedded software quality characteristics and names the required component and process information. It also defines the format for reporting the results of applying the metrics and techniques.
4.1.1 Establish Evaluation Requirements
In the first module of the EQP, the scope, purpose, objectives and requirements of the evaluation are defined. This module includes the definition of the component for evaluation, the architectures, environment and scenarios. The evaluation staff is formed and must agree on which quality characteristics present in the embedded quality model are to be evaluated and to which evaluation level they are to be assured. The formal record of this agreement on what will be covered by the evaluation is called the evaluation requirements.
Table 4.2: Summary of Establish Evaluation Requirements module.
Module: Establish Evaluation Requirements
Inputs: Component's documentation; Embedded Quality Model (EQM); Evaluation training
Output: Evaluation Requirements Specification
Control/Feedback: Evaluation of Specification
Activities:
  1 - Establish the evaluation purpose
    1.1 Select the evaluation staff
    1.2 Define evaluation requirements
    1.3 Define scope, purpose and objectives
  2 - Identify components
    2.1 Define the components to be evaluated
    2.2 Specify the architecture and the environment
    2.3 Define the scenarios
  3 - Specify embedded quality model
    3.1 Define the embedded quality model (internal, external and quality in use characteristics)
Performed by: Evaluation Staff
Figure 4.4: Activities of establish evaluation requirements described in BPM.
Quality can be defined as the totality of characteristics of an embedded component that bear on its ability to satisfy stated or implied needs (ISO/CD 8402-1, 1990). To make this definition more operational, the Embedded Quality Model that will be defined in Section 4.2 is used. Table 4.2 shows the summary of the establish evaluation requirements module, and Figure 4.4 shows the module in graphic form.
i. Establish the evaluation purpose
This step defines the scope, objectives and requirements of the component evaluation. It is divided into three activities: the first selects the members who compose the evaluation staff, the second defines the evaluation requirements, and the last defines the objectives, scope and purpose of the evaluation.
a. Select the evaluation staff: The importance of an effective
evaluation staff should not be underestimated, since without an
effective staff, an evaluation cannot succeed. According to
Comella-Dorda (Comella-Dorda et al., 2003), a good balance of
knowledge is also important for a good staff. So, people with
knowledge in different areas such as technical experts, domain
experts, contracts personnel, business analysts, security
professionals, maintenance staff and end users are essential to
perform the evaluation with success.
b. Define evaluation requirements: The evaluation staff must determine which system requirements are legitimate requirements for the embedded component, and determine any additional evaluation requirements that are not directly derived from system requirements.
c. Define scope, purpose and objectives: Often, the scope,
purpose and objective are not documented. Moreover, people tend
to be more productive when they have a clear picture of the
ultimate scope, purpose and objective of the evaluation. An
evaluation requirements specification, created by the evaluation
staff to define the scope and constraints of the evaluation, can
record the answers to these questions. The evaluation
requirements specification should contain:
i. goals of the evaluation;
ii. scope of the evaluation;
iii. names of staff members and their roles;
iv. statement of commitment from both evaluators and
management;
v. summary of factors that limit selection;
vi. summary of decisions that have already been made.
ii. Identify components
This step identifies the components that will be subject to evaluation. The evaluation staff will specify the architecture and execution environment according to the component documentation, and will define the usage scenarios in which the components will be submitted for evaluation.
a. Define the components to be evaluated: This activity will
define the component that will be submitted for evaluation. The
component documentation that specifies the target architecture
and execution environments is a requirement for this activity.
b. Specify the architecture and the environment: The
evaluation staff should analyze the component documentation and
specify the architecture, environment, peripherals and other
things necessary for the correct performance of the component, in accordance with the requirements and constraints found in the component documentation. If the embedded component is to be evaluated independently of a system, the evaluation staff should define, as precisely and in as much detail as possible, the environment in which the component will be evaluated.
c. Define the scenarios: After the architecture and environment specification, a list of scenarios in which the component is used should be defined. The list of scenarios associates the embedded component with an application and its domain.
iii. Specify embedded quality model
The evaluation staff should create a specific quality model for the evaluation. It should be based on the EQM, which will be presented in the next section, 4.2, of this chapter. At this moment the evaluation staff selects a sub-set of the quality characteristics to compose the specific EQM for performing the evaluation.
a. Define the embedded quality model (internal, external and quality in use characteristics): In this task, the evaluation staff decides which embedded quality characteristics from the internal, external and quality-in-use characteristics will compose the EQM for the evaluation, as previously explained. However, it is possible that some quality characteristics are not covered by that model. In this case, the evaluation staff should define the "new" characteristic, its sub-characteristics and quality attributes, explain the importance of the new quality characteristic and why it is needed, and set a definition for it. At the end of the EQP the staff should analyze whether it is advantageous to insert those new quality characteristics into the EQM permanently;
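The composition of a specific quality model, including the rationale required for any new characteristic, can be sketched as follows. The characteristic names and the catalogue below are illustrative placeholders, not the actual EQM catalogue defined in Section 4.2:

```python
# Illustrative catalogue only (not the real EQM defined in Section 4.2)
EQM_CHARACTERISTICS = {
    "internal": {"maintainability", "portability"},
    "external": {"functionality", "reliability", "efficiency", "usability"},
    "quality_in_use": {"productivity", "satisfaction", "security",
                       "effectiveness"},
}

def build_specific_model(selected, new_characteristics=None):
    """Compose the specific quality model for one evaluation: a sub-set of
    the EQM, plus any new characteristic recorded with its rationale."""
    new_characteristics = new_characteristics or {}
    catalogue = set().union(*EQM_CHARACTERISTICS.values())
    # A characteristic outside the catalogue must be introduced explicitly
    unknown = set(selected) - catalogue - set(new_characteristics)
    if unknown:
        raise ValueError(f"not in the EQM and no rationale given: {unknown}")
    for name, rationale in new_characteristics.items():
        if not rationale:
            raise ValueError(f"new characteristic {name!r} needs a rationale")
    return set(selected) | set(new_characteristics)

model = build_specific_model(
    {"reliability", "efficiency"},
    new_characteristics={"power_awareness": "battery-operated target platform"},
)
```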
4.1.2 Specify the Evaluation
This module includes guidelines to help the evaluation staff define the optimum evaluation level for each quality characteristic according to the application domain. In this way, evaluation techniques are proposed, metrics and criteria are established, and score levels are defined. The quality evaluation techniques are used in conjunction with the importance of each quality characteristic so that the evaluation staff can define the evaluation techniques for the embedded component. The second activity is to specify the metrics that will be used in the evaluation; the embedded metrics approach was created to aid the specification of metrics for the evaluation. The last activity is to define a meaning for the measurement values. Table 4.3 shows the summary of the evaluation specification module, and Figure 4.5 shows the module in graphic form.
Table 4.3: Summary of evaluation module specification.
Module: Specify the Evaluation
Inputs: Embedded Quality Model (EQM); Quality characteristics importance; Embedded Quality Level (EQL); Embedded Metrics Approach (EMA)
Output: Specification of the Evaluation
Control/Feedback: Evaluation Plan
Activities:
  1 - Select metrics
    1.1 Define the EQL for evaluation
    1.2 Select quality attributes
    1.3 Select evaluation techniques
  2 - Establish the evaluation criteria
    2.1 Specify metrics on EMA
  3 - Establish rating levels for metrics
    3.1 Define score for metrics
Performed by: Evaluation Staff
Figure 4.5: Activities of specify the evaluation described in BPM.
i. Select Metrics
The evaluation staff should define, following the guidelines, one of the three quality levels present in the EQL for each quality characteristic selected for the evaluation, considering the importance of each quality characteristic. To finalize this activity, the evaluation staff must select the quality attributes and evaluation techniques based on the defined quality model, to which metrics will be assigned and then measured.
a. Define the EQL for evaluation: The evaluation staff will define the quality level (EQL) at which the component should be evaluated. For this choice, they must use the set of guidelines for the defined evaluation level as decision criteria. The definition of the EQL is independent for each quality characteristic; for example, the evaluation staff can define that the Functionality characteristic will be evaluated at EQL III while Efficiency will be evaluated at EQL I, and so on.
b. Select Quality Attributes: For each characteristic and sub-
characteristic defined for the component evaluation, the
evaluation staff must select at least one quality attribute to
measure the quality characteristic. The evaluation staff should use the Embedded Quality Model (EQM) as a basis for selecting the quality attributes that compose the evaluation process, but new quality attributes can be added.
c. Select the Evaluation Techniques: Each EQL defines a set of evaluation techniques to be applied in the component evaluation. However, the evaluation staff, using their experience and knowledge, must analyze and decide whether the proposed evaluation techniques are useful for the component evaluation. If new evaluation techniques are necessary to complement the evaluation, the evaluation staff is free to create them according to necessity.
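The outcome of the Select Metrics activity, one EQL per characteristic with at least one quality attribute each, can be sketched as a simple record with a validation check. Characteristic and attribute names are illustrative examples, not prescribed by the EQL:

```python
EQL_LEVELS = ("I", "II", "III")

# One entry per quality characteristic selected for the evaluation:
# (EQL level, selected quality attributes) - example values only
evaluation = {
    "functionality": ("III", ["accuracy", "interoperability"]),
    "efficiency": ("I", ["time_behaviour"]),
}

def validate_selection(evaluation):
    """Check the two rules above: each characteristic has a valid,
    independently chosen EQL and at least one quality attribute."""
    for characteristic, (level, attributes) in evaluation.items():
        if level not in EQL_LEVELS:
            raise ValueError(f"{characteristic}: unknown EQL {level!r}")
        if not attributes:
            raise ValueError(
                f"{characteristic}: at least one quality attribute is required")
    return True

assert validate_selection(evaluation)
```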
ii. Establish the evaluation criteria
Following Comella-Dorda (Comella-Dorda et al., 2003), evaluation criteria are the facts or standards by which the fitness of components is judged.
Evaluation criteria should be derived from requirements. Thus, the first step in
establishing evaluation criteria must be determining appropriate evaluation
requirements.
a. Specify Metrics on EMA: Metrics must be selected or specified by the evaluation staff following the Embedded Metrics Approach (EMA), which is based on the Goal Question Metric (GQM) paradigm and takes the EQM as reference. The EMA is defined to track the properties of the quality characteristics. Through the metrics approach, the evaluation staff will specify metrics to evaluate the quality attributes defined in the select metrics step.
iii. Establish rating levels for metrics
The interpretation of the metrics results is established in this activity. The evaluation staff should define a score for all metrics defined previously, in order to understand and interpret the results.
a. Define score for Metrics: For each metric established, the evaluation staff should define an expected rating for the metric to facilitate its analysis. The expected rating level will be defined by the evaluation staff on the basis of the selected EQL. To improve the analysis, the establishment of a scale for the rating level is recommended. ISO/IEC 25000 does not use a scale for the quality achievement level. ISO/IEC 15504-2 (ISO/IEC 15504-2, 2003) defines a scale that can be used as a reference, as follows:
• N (Not Achieved): 0% to 25% - the presence of the quality attribute is little or none;
• P (Partially achieved): 26% to 50% - the quality attribute aspects are partially achieved;
• L (Largely achieved): 51% to 75% - the quality attribute aspects are largely implemented but not completely achieved; and
• F (Fully achieved): 76% to 100% - the component provides all necessary fulfillments for the quality attribute under evaluation.
This scale is proposed by ISO/IEC 15504-2, but the evaluation staff should analyze whether it is appropriate for the evaluation; if not, they can propose a new scale or modify this one. This choice depends on the reliability level expected of the component, i.e. the EQL defined to evaluate the component.
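The ISO/IEC 15504-2 reference scale quoted above maps directly onto a small function. This is a sketch of the default scale only, before any staff-specific modification:

```python
def rating(achievement_percent: float) -> str:
    """Map a measured achievement percentage (0-100) onto the
    ISO/IEC 15504-2 reference scale quoted above."""
    if not 0 <= achievement_percent <= 100:
        raise ValueError("achievement must lie within 0-100%")
    if achievement_percent <= 25:
        return "N"  # Not achieved
    if achievement_percent <= 50:
        return "P"  # Partially achieved
    if achievement_percent <= 75:
        return "L"  # Largely achieved
    return "F"      # Fully achieved

assert [rating(p) for p in (10, 40, 60, 90)] == ["N", "P", "L", "F"]
```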
4.1.3 Design the Evaluation
After the specification of the evaluation, the next module is the design of the evaluation, where its details are defined. In this module the instrumentation (equipment and tools) is selected or developed and its configuration is defined, an evaluation schedule is created, the resources are defined, and finally the evaluation plan is built.
Table 4.4: Summary of the evaluation module design.
Module: Design the Evaluation
Inputs: Specification of evaluation; Instrumentation support
Output: Evaluation Plan
Control/Feedback: Component Quality Report
Activities:
  1 - Produce evaluation plan
    1.1 Estimate resource and schedule for evaluation
    1.2 Select instrumentation (equipment/tool)
    1.3 Define the instrumentation setup
    1.4 Document the evaluation plan
Performed by: Evaluation Staff
Figure 4.6: Activities of evaluation design described in BPM.
This module is developed by the evaluation staff, which receives as input the specification of the evaluation, instrumentation support, the Embedded Quality Model (EQM) and the Embedded Metrics Approach, as shown in Table 4.4 and, graphically, in Figure 4.6. The description of the planning of the evaluation should be as detailed as possible, reporting the methods and instrumentation necessary.
i. Produce Evaluation Plan
Planning for each evaluation is different, since the evaluation may involve both different types of components (from simple to extremely complex) and different system expectations placed on the end product (from trivial to highly demanding) (Comella-Dorda et al., 2003). This activity is composed of four steps until the evaluation plan is defined, as shown in Table 4.4.
a. Estimate resource and schedule for evaluation: The evaluation staff should create a time schedule to perform the component evaluation, considering the personnel resources available, the methods and techniques, and the equipment and tools that support the evaluation. Unfortunately, there are few specific techniques available in the literature for estimating resources and schedule for component evaluation (Boegh, 1993). The cost estimation of the evaluation, however, is out of the scope of this evaluation process. The evaluation schedule should be as detailed as possible, and it should be composed of the tasks and activities, time estimation, stakeholders, and instrumentation necessary to execute each task or activity.
b. Select instrumentation (equipment/tool): For the component quality evaluation, some instrumentation, equipment and tools should be selected or developed to measure or assist the measurement of the quality attributes. The evaluation instrumentation can be a computer program, such as CCC, CUnit, FindBugs or a spreadsheet, or electronic devices such as oscilloscopes, multimeters, voltmeters and others used in the measurement. The tasks and activities defined in the last step may require one or more instruments to execute or assist in an execution, so this resource should be provided.
c. Define the instrumentation setup: After selecting the instrumentation, the evaluation staff should define the parameters to configure the equipment and software tools. This step is important because it sets the initial configuration for the equipment and defines the initial parameters for the software tools.
d. Document the evaluation plan: In this step the evaluation staff will build a document with the data collected during the prior steps. This document is called the Evaluation Plan and it is the output of the Design the Evaluation module.
4.1.4 Execute the Evaluation
The implementation of the evaluation plan means preparing the environment, setting up the instrumentation and applying the evaluation techniques corresponding to the selected EQL. At the end of the evaluation, the evaluation staff should collect and consolidate the data and build the documents, reports and component quality label. The output will be a collection of documents, measurement reports and the component quality label resulting from applying the metrics approach following a quality evaluation level. Table 4.5 summarizes the Execute the Evaluation module and shows its main parts. Figure 4.7 shows this module in graphical form.
Table 4.5: Summary of the evaluation module execution.
Module: Execute the Evaluation
Inputs: Evaluation Plan
Output: Component quality report (component quality label; document of the lessons learned and recommendations)
Control: Embedded Metrics Approach (EMA)
Activities:
  1 - Measure characteristics
    1.1 Setup the instrumentation, environment and scenarios
    1.2 Execute the evaluation plan
    1.3 Collect data
  2 - Compare with criteria
    2.1 Requirement vs. criteria
    2.2 Analyze the results
    2.3 Consolidate data
  3 - Assess results
    3.1 Write the component evaluation report
    3.2 Build the component quality label
    3.3 Record lessons learned
    3.4 Make recommendations
Mechanism: 1 - Evaluation staff; 2 - Instrumentation support
Figure 4.7: Activities of execute the evaluation described in BPM.
i. Measure characteristics
Once the evaluation plan has been built, it is executed in this module using the following steps:
a. Setup the instrumentation, environment and scenarios: The equipment setup and tool configuration are very important for a correct measurement, as are the correct environment configuration and scenario construction. The configuration parameters must be present in the evaluation plan;
b. Execute the evaluation plan: The selected evaluation level is
applied according to the schedule given in the evaluation plan. The
results of applying the individual modules are recorded in the
component evaluation report. Observations made during the
evaluation process also have to be included in the lessons learned record, the recommendations document and the component
evaluation report. The application of an evaluation level comprises
three steps (Boegh, 1993):
• Measurement according to the metrics identified by the quality evaluation level (EQL);
• Assessment by comparing measurement results with the acceptance criteria;
• Reporting the measurement results and assessment results.
Finally, the results of applying the individual evaluation levels are aggregated into an overall result of the evaluation.
c. Collect data: Collecting data involves executing the evaluation
plans to determine the performance of the component against the
evaluation criteria that have been developed. Different criteria and
different situations require different data collection techniques.
The specific techniques selected will be in part determined by the
degree of confidence necessary in the results.
ii. Compare with criteria
Evaluation criteria are the facts or standards by which the fitness of the component is judged. Perhaps the most critical step in the entire component evaluation process is establishing appropriate criteria (Comella-Dorda et al., 2003). The criteria selected will determine whether the right questions are asked regarding the viability of the component in the system. This step is divided into three activities, as follows:
a. Requirement vs. criteria: It is natural and appropriate to look first to the requirements as a basis for evaluation. Unfortunately, an initial focus on system requirements may lead evaluators to believe that generating the actual criteria for component evaluation is trivial, with each requirement directly transformed into a criterion. However, following Comella-Dorda (Comella-Dorda et al., 2003), this simple transformation is not likely to achieve the desired result. This step is an activity of reflection on the requirements, and consequently the criteria, before the analysis of the results is developed;
b. Analyze the results: Analysis is required for reasoning about
the data collected. Data analysis involves reasoning about the consolidated data in order to make a recommendation. Analysis is
a very creative task, and the best approach is simply the
application of sound and careful reasoning;
c. Consolidate data: Consolidation almost always implies some
loss of detailed information. This is the price that is paid for
condensing a large mass of information into some more quickly
comprehensible format. A trade-off must be struck between the
need for easy understanding (a high level of consolidation) and the
risk of losing too much information.
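One way to realize this trade-off is a weighted mean over the per-metric results. The EQP does not prescribe a consolidation formula, so the weighting scheme and the metric names below are only an illustration:

```python
def consolidate(metric_results, weights=None):
    """Consolidate per-metric achievement percentages (0-100) into one
    score per quality characteristic via a weighted mean. Detail is lost,
    as discussed above, in exchange for a quickly comprehensible figure."""
    if not metric_results:
        raise ValueError("nothing to consolidate")
    weights = weights or {name: 1.0 for name in metric_results}
    total_weight = sum(weights[name] for name in metric_results)
    return sum(score * weights[name]
               for name, score in metric_results.items()) / total_weight

# Hypothetical efficiency metrics and their achievement percentages
scores = {"peak_ram_usage": 80.0, "code_footprint": 60.0}
assert consolidate(scores) == 70.0                           # equal weights
assert consolidate(scores, {"peak_ram_usage": 3.0,
                            "code_footprint": 1.0}) == 75.0  # RAM weighted 3x
```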
iii. Assess results
The goal of the evaluation is to provide information to the decision-maker. The evaluators must focus their recommendations on the information that the decision-maker needs, which can vary according to the type of application and the characteristics of the decision-maker. This emphasis "flavors" both the type of data gathered and the format and content of the recommendations presented to the decision-maker. There are four main outputs of the methodology:
a. Write the component evaluation report: The component
evaluation report is a dossier of the component's quality registered in a document: discovered facts, assessment results, classifications, etc. It details all that is known about a given embedded component at a point in time. There is one component evaluation report for each component evaluated. The evaluation report serves as a source of information for the embedded designer who will architect, design, and integrate the embedded system using the component.
b. Build the component quality label: The Component Quality
Label is a graphical seal that summarizes key information on one page. It shows the result values obtained in the quality evaluation of the six quality characteristics and the results of users' feedback on the quality-in-use characteristics: Productivity, Satisfaction, Security and Effectiveness.
c. Record lessons learned: The evaluation record is a description
of the evaluation itself. Information in the evaluation record
includes evaluation plans; personnel involved; dates and details of
meetings and evaluation tasks; environment or context in which
the products were evaluated; specifics about product versions,
configurations, and customizations; results of all evaluation
activities; rationale for decisions made; and lessons learned that
might be useful for subsequent evaluations.
d. Make recommendations: The summary/recommendations
document provides a synopsis of the evaluation activity and the
resulting findings, along with the message the evaluation staff
wants to convey to the decision-maker. It includes both the staff’s
analysis of fitness and of evaluation deficiencies (e.g., any need for
further evaluation, confidence in results).
4.2 Embedded software component Quality Model (EQM)
The evaluation process, in general, is not simple. First, there must be a
reference, believed to be the "ideal", to compare with the object under
evaluation. From this comparison, one attempts to quantify how close the
object under evaluation is to the reference object.
In the embedded software component quality evaluation methodology, the
reference model is the Embedded Quality Model (EQM). However, there are
several difficulties in the development of such a model (Goulão et al., 2002a),
such as: (1) which quality characteristics should be considered, (2) how to
evaluate them, and (3) who should be responsible for such evaluation.
Normally, software component evaluation occurs through models
that measure its quality. These models describe and organize the component
quality characteristics that will be considered during the evaluation. So, to
measure the quality of a software component it is necessary to develop a quality
model. Thus, this section presents the Embedded software component Quality
Model (EQM) and the parts that compose it.
There is no consensus on how to define and categorize embedded
software product quality characteristics (Bertoa et al., 2002). The quality model
proposed in this methodology (Carvalho et al., 2009c), (Carvalho et al., 2009d)
is an adaptation of the “generic” quality model proposed by the ISO/IEC 25000
standard to deal with CBSD and the embedded domain.
The EQM, as shown in Table 4.6, is composed of three parts:
(i) quality characteristics and sub-characteristics, quality attributes and metrics,
(ii) quality in use and (iii) additional information. Some relevant component
information is not supplied by the other component quality models analyzed
(Goulão et al., 2002b), (Bertoa et al., 2002), (Meyer, 2003), (Simão et al.,
2003), (Alvaro et al., 2005); the negative and positive aspects of each model
were considered and contributed to defining the EQM.
Table 4.6: The Embedded software component Quality Model and its parts

Embedded software component Quality Model (EQM) =
  Quality Characteristics
    (Characteristics, Sub-characteristics, Attributes, Evaluation Techniques)
+ Quality in Use Characteristics
    (Productivity, Satisfaction, Safety, Effectiveness)
+ Additional Information
    Technical Information: Component Version; Programming Language; Design and Project Patterns; Operational Systems Supported; Compiler Version; Compatible Architecture; Minimal Requirements; Technical Support; Compliance
    Organizational Information: CMMi Level; Organization’s Reputation
    Marketing Information: Development time; Cost; Time to market; Targeted market; Affordability; Licensing
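As an illustrative sketch only (not part of the thesis, with class names and layout assumed), the three-part structure of Table 4.6 could be represented as a small data model:

```python
from dataclasses import dataclass, field

# Hypothetical encoding of the three-part EQM of Table 4.6;
# the part and field names mirror the table, the layout is an assumption.
@dataclass
class QualityCharacteristic:
    name: str                              # e.g. "Functionality"
    sub_characteristics: list              # e.g. ["Accuracy", "Security", ...]
    attributes: list = field(default_factory=list)
    evaluation_techniques: list = field(default_factory=list)

@dataclass
class EQM:
    quality_characteristics: list          # part (i)
    quality_in_use: dict                   # part (ii)
    additional_information: dict           # part (iii)

functionality = QualityCharacteristic(
    name="Functionality",
    sub_characteristics=["Accuracy", "Security", "Real-time", "Self-contained"],
)
model = EQM(
    quality_characteristics=[functionality],
    quality_in_use={"Productivity": None, "Satisfaction": None,
                    "Safety": None, "Effectiveness": None},
    additional_information={"Technical": {}, "Organizational": {}, "Marketing": {}},
)
print(len(model.additional_information))  # 3
```

Such a structure makes the separation between measured characteristics, user feedback and descriptive information explicit when an evaluation is recorded.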
4.2.1 Quality Characteristics
Following the ISO/IEC 25000 terminology, a quality characteristic is a set
of properties of a software product by which its quality can be described and
evaluated. A characteristic may be refined into multiple levels of sub-characteristics.
An attribute is a quality property to which a metric can be
assigned. The term "metric" is defined as a measure of the extent or degree to which
a product possesses and exhibits a certain quality characteristic (Boehm et al.,
1976).
A Quality model is the set of characteristics and sub-characteristics, as
well as the relationships between them that provide the basis for specifying
quality requirements and for evaluating quality (Bertoa et al., 2002).
There is a consensus among researchers on the necessity of identifying and
classifying quality characteristics according to different criteria:
1. By coverage: discriminates between those characteristics that make sense
for individual components and those that must be evaluated at the
software architecture level;
• local: characteristics of individual software components,
• global: characteristics at the software architecture level,
2. By observation or measurement: the moment at which a characteristic
can be observed or measured;
• Runtime: characteristics observable at runtime,
• Design/Life-Cycle: characteristics observable during the product
life cycle,
3. By metrics application: where the metrics are applied;
• internal: measure the internal attributes of the component during
the design and coding phases (“white-box” metrics),
• external: concentrates on the system behavior during testing and
component operation, from an “outsider” view (“black-box” metrics).
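The three classification criteria above can be sketched as tags attached to a sub-characteristic; the following is an illustration only, with names taken from the text but an encoding that is assumed:

```python
from enum import Enum

# The three classification criteria described above, encoded as enums;
# the member names follow the text, the encoding itself is an assumption.
class Coverage(Enum):
    LOCAL = "individual component"
    GLOBAL = "software architecture level"

class Observation(Enum):
    RUNTIME = "observable at runtime"
    LIFE_CYCLE = "observable during the product life cycle"

class MetricKind(Enum):
    INTERNAL = "white-box"
    EXTERNAL = "black-box"

# Example: Real-time is a local, runtime characteristic measured externally
real_time_tags = (Coverage.LOCAL, Observation.RUNTIME, MetricKind.EXTERNAL)
print(real_time_tags[2].value)  # black-box
```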
4.2.1.1 Characteristics and Sub-Characteristics
Table 4.8 shows the characteristics and sub-characteristics which define
the ISO 25000 general software quality model (second column) and the Embedded
Quality Model (third column). The main idea is to refine and customize the
standard model in order to accommodate the particular characteristics of components
in the embedded domain.
Some changes to the ISO/IEC 25010 standard were made in order to
develop a consistent and adequate model to evaluate embedded software
components. These changes were based on research into the main quality characteristics
in the automotive, industrial, medical and other embedded domains, as shown in
Table 4.7, and on a consensus among a set of software engineers and embedded
quality specialists from a Brazilian software factory.
Table 4.7: Summary of embedded quality characteristics research in different
domains

Industrial Automation (Larsson, 2002):
  At low level: Availability, Timeliness, Reliability.
  At high level: Performance, Usability, Integrability, Security, Flexibility.

Automotive (Akerholm, 2005):
  Safety, Reliability, Predictability, Usability, Extendibility,
  Maintainability, Efficiency, Testability, Serviceability.

Medical (Wijnstra, 2001):
  Reliability, Safety, Functionality, Portability, Modifiability,
  Configurability, Extensibility and Evolvability, Security, Integrity,
  Confidentiality.

General (Crnkovic, 2003):
  Real-time properties: Response time or latency, Execution time,
  Worst case execution time, Deadline.
  Dependability: Reliability, Availability, Safety.
  Resource consumption: Power consumption, Computation (CPU) power,
  Memory consumption, Execution (CPU) time.
  Life cycle properties.
The sub-characteristics that were identified as relevant were maintained,
others that proved to be uninteresting were eliminated, others were renamed
to fit the new context, and new sub-characteristics that
complement the EQM with important information were added.
Table 4.8 and Figure 4.8 summarize the changes that were performed in
relation to ISO/IEC 25010. The characteristics and sub-characteristics
represented in bold were added due to the need for evaluating certain CBSD-related
properties that were not covered by ISO/IEC 25010. The sub-characteristics
that are crossed out were present in ISO/IEC 25010, but were removed in
the proposed model. Finally, the sub-characteristics in italics had their names
changed.
Table 4.8: Changes in the proposed EQM, in relation to ISO/IEC 25010.

Functionality
  ISO/IEC 25010: Accuracy, Compliance, Security, Suitability, Interoperability
  EQM: Accuracy, Security, Real-time (renamed from Time Behavior), Self-contained (new)
Reliability
  ISO/IEC 25010: Recoverability, Fault Tolerance, Maturity
  EQM: Recoverability, Fault Tolerance, Safety (new)
Usability
  ISO/IEC 25010: Attractiveness, Learnability, Understandability, Operability
  EQM: Attractiveness, Configurability (new)
Efficiency
  ISO/IEC 25010: Resource behavior, Time Behavior
  EQM: Resource behavior, Energy consumption (new), Data Memory utilization (new), Program Memory utilization (new)
Maintainability
  ISO/IEC 25010: Stability, Changeability, Testability, Analyzability
  EQM: Stability, Changeability, Testability
Portability
  ISO/IEC 25010: Replaceability, Installability, Adaptability
  EQM: Replaceability, Deployability (renamed from Installability), Flexibility (renamed from Adaptability), Reusability (new)
As can be seen in Table 4.8 and in Figure 4.8, the crossed-out sub-characteristics,
Compliance, Suitability, Interoperability, Maturity, Learnability,
Understandability, Operability and Analyzability, disappear in order to
adapt the model to the component context in the embedded domain. These sub-characteristics
are not present in the research, as shown in Table 4.7, and in
conjunction with a set of embedded engineers and quality engineers of a
Brazilian software factory, C.E.S.A.R., it was decided not to contemplate these
characteristics in the EQM.
Figure 4.8: The proposed EQM, in graphical form.
The proposed EQM is composed of six quality characteristics, as follows:
The Functionality characteristic maintains the same meaning for
components as for software products. This characteristic expresses the ability of
an embedded software component to provide the required services and
functions, when used under specified conditions;
Extended sub-characteristics:
• Accuracy: this sub-characteristic evaluates the percentage of results
obtained with the correct precision level demanded;
• Security: indicates how the component is able to control the access to
its provided services; and
Renamed sub-characteristic:
• Real-time: indicates the ability to perform a specific task at the specified
time, under specified conditions. It is essential to determine the correct
functionality of the embedded component; in the embedded domain
practically all systems have real-time requirements. This sub-characteristic
replaced the Time behavior sub-characteristic, found in the
Efficiency characteristic, because for embedded systems time
behavior is related to the functionality of the component and is
referred to as a real-time property.
New sub-characteristic added:
• Self-contained: the function that the component performs must be
fully performed within itself. The Self-contained sub-characteristic is
intrinsic to software components and must be analyzed.
The Reliability characteristic expresses the ability of the software
component to maintain a specified level of performance, when used under
specified conditions;
Extended sub-characteristics:
• Recoverability: indicates whether the component can handle error
situations, and the mechanism implemented in that case (e.g.
exceptions); and
• Fault Tolerance: indicates whether the component can maintain a
specified level of performance in case of faults.
New sub-characteristic:
• Safety: an attribute involving the interaction of a system with the
environment and the possible consequences of the system failure. Safety
is very important for reliable and dependable embedded systems, so
this sub-characteristic is fundamental to compose the quality model.
The Usability characteristic expresses the ability of a software
component to be understood, learned, used, configured, and executed, when
used under specified conditions. Its sub-characteristics have a different
meaning for software components. The reason is that, in CBSD, the end-users of
components are the application developers and designers, rather than the people
who have to interact with them.
Extended sub-characteristics:
• Attractiveness: indicates the capability of the software component to
be attractive to the user;
New sub-characteristic:
• Configurability: the ability of the component to be configured (e.g.
through an XML file or a text file, the number of parameters, etc.).
Configurability is an essential feature that the developer must analyze in
order to determine whether a component can be easily configured.
Through this sub-characteristic, the developer is able to anticipate the
complexity of deploying the component into a certain context.
The Efficiency characteristic expresses the ability of an embedded
software component to provide appropriate performance, relative to the amount
of resources used (hardware/software/energy/etc);
Extended sub-characteristics:
• Resource behavior: It indicates the amount of the resources used,
under specified conditions.
Renamed sub-characteristic:
• Time behavior: had its name changed to Real-time and is now part of the
Functionality characteristic, because time behavior in embedded
systems is related to the component’s functionality.
New sub-characteristics:
• Energy consumption: indicates the quantity of energy required for
the component to perform its functionality. Energy consumption is
crucial for portable, mobile and autonomous systems, among others. It is
essential to determine project viability, battery size, system autonomy and
other design parameters.
• Data Memory utilization: indicates the quantity of data memory
necessary for the component to perform its functionality. This sub-characteristic
gains emphasis in the embedded domain because resources
such as data memory are very limited.
• Program Memory utilization: indicates the quantity of program or
code memory necessary for the component to store its program/code.
Similarly to data memory utilization, this sub-characteristic is very
important in the embedded domain because the amount of program
memory available is very limited.
The Maintainability characteristic describes the ability of an embedded
software component to be modified, including corrections, improvements or
adaptations to the software, due to changes in the environment, in the
requirements, or in the functional specifications;
Extended sub-characteristics:
• Stability: indicates the stability level of the component in preventing
unexpected effects caused by modifications;
• Changeability: indicates whether specified changes can be
accomplished and if the component can easily be extended with new
functionalities; and
• Testability: it measures the effort required to test a component in order
to ensure that it performs its intended function.
The Portability characteristic is defined as the ability of an embedded
software component to be transferred from one environment to another. In
CBSD, portability is an intrinsic property to the nature of components, which
are in principle designed and developed to be re-used in different environments.
Extended sub-characteristics:
• Replaceability: indicates whether the component is “backward
compatible” with its previous versions; and
Renamed sub-characteristics:
• Deployability: the Installability sub-characteristic received a new name
in the proposed model, Deployability. After being developed, the
components are deployed (not installed) in an execution environment to
make their usage possible by other component-based applications that
will be developed later; hence the new name.
• Flexibility: Adaptability was the last sub-characteristic to change
names; it is now called Flexibility. It indicates whether the component
can be adapted to different specified environments.
New sub-characteristic:
• Reusability: the ability of the component to be reused. This sub-characteristic
evaluates the reusability level through aspects such
as the abstraction level, platform specificity, absence of business rules, etc.
The reason why the embedded industry has adopted component-based
approaches to embedded design is the promise of reuse. Thus, the
Reusability sub-characteristic is very important to be considered in this
model.
Once the characteristics and sub-characteristics of the proposed
quality model have been defined, the quality attributes used to measure the
characteristics of embedded components are presented. Quality attributes are divided into
two main categories, depending on whether the attributes are discernible at
run-time or observable during the product life cycle (Bertoa et al., 2002).
An attribute is a measurable physical or abstract property of an entity
(Alvaro et al., 2005). A metric defines the measurement method and the
measurement scale. The measurement process consists of assigning a number
or category to an attribute, according to the type of metric that is associated with
that attribute (SQuaRE project). Next, the quality attributes of the EQM are
presented.
4.2.1.2 Quality Attributes of EQM
This section describes the quality attributes for each sub-characteristic
proposed in the EQM. Table 4.9 shows the embedded software component quality
attributes that are observable at runtime and during the life cycle. The quality attributes
that are observable during the life cycle (Table 4.9) could be measured during the
development of the component or of the component-based system, by collecting
relevant information for the model. The table groups the attributes by
characteristics and sub-characteristics, and indicates whether each sub-characteristic
is observable at runtime or during the life cycle.
Table 4.9: Quality Attributes for sub-characteristics at Runtime and Life-cycle.

Functionality
  Real-time (Runtime): 1. Response time (Latency); 2. Execution time; 3. Throughput (“out”); 4. Processing Capacity (“in”); 5. Worst case execution time
  Accuracy (Runtime): 6. Precision
  Security (Runtime): 7. Data Encryption; 8. Controllability; 9. Auditability
  Self-contained (Life-cycle): 10. Dependability
Reliability
  Recoverability (Runtime): 11. Error Handling; 12. Transactional
  Fault Tolerance (Runtime): 13. Mechanism availability; 14. Mechanism efficiency
  Safety (Runtime): 15. Reachability Graph; 16. Integrity
Usability
  Configurability (Runtime): 17. Effort to configure; 18. Understandability
  Attractiveness (Life-cycle): 19. Effort to operate
Efficiency
  Resource behavior (Runtime): 20. Peripheral utilization; 21. Mechanism efficiency
  Energy consumption (Runtime): 22. Amount of energy consumption; 23. Mechanism efficiency
  Data Memory utilization (Runtime): 24. Amount of data memory utilization; 25. Mechanism efficiency
  Program Memory utilization (Life-cycle): 26. Amount of program memory utilization; 27. Mechanism efficiency
Maintainability
  Stability (Runtime): 28. Modifiability
  Changeability (Life-cycle): 29. Extensibility; 30. Change effort; 31. Modularity
  Testability (Life-cycle): 32. Test suite provided; 33. Extensive component test cases; 34. Component tests in a set of environments; 35. Formal proofs
Portability
  Deployability (Runtime): 36. Complexity level
  Replaceability (Life-cycle): 37. Backward Compatibility
  Flexibility (Life-cycle): 38. Mobility; 39. Configuration capacity
  Reusability (Life-cycle): 40. Domain abstraction level; 41. Architecture compatibility; 42. Modularity, cohesion, coupling and simplicity
Next, a brief description of each quality attribute is presented:
Functionality Characteristic:
Run-time Sub-Characteristics
Real-time
1. Response time (Latency): This attribute measures the time taken
since a request is received until a response has been sent;
2. Execution time: This attribute measures the time that the
component executes a specified task, under specified conditions.
3. Throughput (“out”): This attribute measures the output that can
be successfully produced over a given period of time;
4. Processing Capacity (“in”): This attribute measures the amount
of input information that can be successfully processed by the
component over a given period of time;
5. Worst case execution time: This attribute measures the time that
the component takes to execute a specified task, under any conditions;
Accuracy
6. Precision: This attribute evaluates if the component executes as
specified by the user requirements;
Security
7. Data Encryption: This attribute expresses the ability of a
component to deal with encryption in order to protect the data it
handles;
8. Controllability: This attribute indicates how the component is able
to control the access to its provided interfaces;
9. Auditability: This attribute shows if a component implements any
auditing mechanism, with capabilities for recording users access to
the system and to its data;
Life-cycle Sub-Characteristics
Self-contained
10. Dependability: This attribute indicates if the component is not self-
contained, i.e. if the component depends on other components to
provide its specified services;
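The run-time timing attributes above can be approximated by direct measurement. The sketch below is illustrative only: `component_task` is a hypothetical stand-in for a component service, and the measured maximum is merely an observed worst case, not a formally proven worst case execution time bound.

```python
import time

def component_task(n):
    # placeholder for the embedded component's provided service
    return sum(range(n))

def measure_execution_times(task, arg, runs=100):
    # Times repeated invocations of the service to estimate the
    # Execution time (attribute 2) and an observed worst case (attribute 5).
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        task(arg)
        samples.append(time.perf_counter() - start)
    return {
        "average_execution_time": sum(samples) / len(samples),
        "observed_worst_case": max(samples),  # observed, not a proven WCET
    }

results = measure_execution_times(component_task, 10_000)
print(results["observed_worst_case"] >= results["average_execution_time"])  # True
```

In practice, a real WCET analysis for embedded components requires static analysis of the target architecture rather than sampling, which only bounds the behavior seen under the measured conditions.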
Reliability Characteristic:
Run-time sub Characteristics:
Recoverability
11. Error Handling: This attribute indicates whether the component
can handle error situations, and the mechanism implemented in that
case;
12. Transactional: This attribute verifies the presence of behaviors,
logic and component’s transactional structures;
Fault Tolerance
13. Mechanism availability: This attribute indicates the existence of
fault-tolerance mechanisms implemented in the component;
14. Mechanism efficiency: This attribute measures the real efficiency
of the fault-tolerance mechanisms that are available in the
component;
Safety
15. Reachability graph: This attribute analyzes formally the
reachability graph in order to find unsafe states or deadlock;
16. Integrity: This attribute is defined as the absence of improper
system state alterations;
Usability Characteristic:
Run-time Sub-Characteristics:
Configurability
17. Effort to configure: This attribute measures the ability of the
component to be configured;
18. Understandability: This attribute measures the capacity of the
component to be understood and used correctly;
Life-cycle Sub-Characteristics
Attractiveness
19. Effort to operate: This attribute analyses the complexity to operate
the functions provided by the component;
Efficiency Characteristic:
Run-time Sub-Characteristics:
Resource behavior
20. Peripherals utilization: This attribute specifies the Peripherals
used by a component;
21. Mechanism efficiency: This attribute quantifies how complex a
component is in terms of the computer program, or set of algorithms,
assesses the need to implement a mechanism to reduce peripheral
utilization and attempts to quantify the efficiency of the mechanism;
Energy consumption
22. Amount of energy consumption: This attribute quantifies the
energy consumed by the component to perform its functionality;
23. Mechanism efficiency: This attribute analyzes how complex a
component is in terms of the computer program, or set of algorithms,
assesses the need to implement a mechanism to reduce energy
consumption and attempts to quantify the efficiency of the
mechanism;
Data Memory utilization
24. Amount of Data Memory utilization: This attribute quantifies
the data memory utilization by the component to perform its
functionality;
25. Mechanism efficiency: This attribute quantifies how complex a
component is in terms of the computer program, or set of algorithms,
assesses the need to implement a mechanism to reduce data memory
utilization and attempts to quantify the efficiency of the mechanism;
Life-cycle Sub-Characteristics
Program Memory utilization
26. Amount of Program Memory utilization: This attribute
quantifies the program memory utilization by the component to
perform its functionality;
27. Mechanism efficiency: This attribute quantifies how complex a
component is in terms of the computer program, or set of algorithms,
assesses the need to implement a mechanism to reduce program
memory utilization and attempts to quantify the efficiency of the
mechanism;
Maintainability Characteristic:
Run-time Sub-Characteristics:
Stability
28. Modifiability: This attribute indicates the component behavior
when modifications are introduced; and
Life-cycle Sub-Characteristics:
Changeability
29. Extensibility: This attribute indicates the capacity to extend a
certain functionality of the component (i.e. what percentage of the
functionalities could be extended);
30. Change effort: This attribute measures the number of customizable
parameters that the component offers (e.g. number of parameters to
configure in each provided interface);
31. Modularity: This attribute indicates the modularity level of the
component in order to determine if it is easy or not to modify it, based
in its inter-related modules;
Testability
32. Test suite provided: This attribute indicates whether some test
suites are provided for checking the functionality of the component
and/or for measuring some of its properties (e.g. performance);
33. Extensive component test cases: This attribute indicates if the
component was extensively tested before being made available to the
market;
34. Component tests in a set of environments: This attribute
indicates in which environments or platforms a certain component
was tested;
35. Formal proofs: This attribute indicates if the component tests were
formally proved;
Portability Characteristic:
Run-time Sub-Characteristics:
Deployability
36. Complexity level: This attribute indicates the effort needed to
deploy a component in a specified environment.
Life-cycle Sub-Characteristics:
Replaceability
37. Backward Compatibility: This attribute is used to indicate
whether the component is “backward compatible” with its previous
versions or not;
Flexibility
38. Mobility: This attribute indicates in which containers this
component was deployed and to which containers this component
was transferred;
39. Configuration capacity: This attribute indicates the percentage of
the changes needed to transfer a component to other environments;
Reusability
40. Domain abstraction level: This attribute measures the
component’s abstraction level, related to its business domain;
41. Architecture compatibility: This attribute indicates the level of
dependency on a specified architecture;
42. Modularity, cohesion, coupling and simplicity: This attribute
analyzes the modularity level, internal organization, cohesion,
coupling and simplicity between the internal
modules/packages/functionalities of the component;
For each quality attribute proposed, at least one metric is necessary.
The measurement of the quality attributes described in this
section will be presented in the next section of this chapter, which describes the
paradigm adopted to define the metrics and gives a set of examples to help the
evaluation staff during metrics definition.
4.2.2 Quality in Use characteristics
The second part of the proposed quality model (EQM) comprises the quality
in use characteristics, which show the component behavior in the real world. It is
measured through feedback on users’ satisfaction with the use and application
of the component in the real environment, analyzing the results against
their expectations. These characteristics bring valuable information to users who wish
to use the component. Quality in use is
composed of Effectiveness, Productivity, Safety and Satisfaction.
A description of each quality in use characteristic is shown below.
• Effectiveness: The capability of the embedded software component to
enable users to achieve specified goals with accuracy and completeness in
a specified context of use.
• Productivity: The capability of the embedded software component to
enable users to expend appropriate amounts of resource in relation to the
effectiveness achieved in a specified context of use.
• Safety: The capability of the embedded software component to achieve
acceptable levels of risk of harm to people, business, software, property
or the environment in a specified context of use.
• Satisfaction: The capability of the embedded software component to
satisfy users in a specific context of use.
Following the categorization defined by Alvaro (Alvaro, 2009), a five-category
rating scale is used: “Very Satisfied”, “Satisfied”, “Dissatisfied”, “Very
Dissatisfied” and “Don’t Know”. This evaluation is subjective, and therefore it
must be analyzed very carefully, possibly confronting this information with
other facts, such as the nature of the environment and the user’s characteristics.
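A minimal sketch (with invented sample responses, not data from the thesis) of tallying feedback on this five-category scale, excluding “Don’t Know” answers from the satisfaction share:

```python
from collections import Counter

SCALE = ["Very Satisfied", "Satisfied", "Dissatisfied",
         "Very Dissatisfied", "Don't Know"]

# invented sample of user feedback for one component
responses = ["Satisfied", "Very Satisfied", "Satisfied", "Don't Know"]
tally = Counter(responses)

# "Don't Know" carries no opinion, so it is excluded from the share
opinions = [r for r in responses if r != "Don't Know"]
satisfied_share = sum(1 for r in opinions
                      if r in ("Very Satisfied", "Satisfied")) / len(opinions)
print(satisfied_share)  # 1.0
```

How “Don’t Know” answers are handled is itself a judgment call the evaluation staff must make explicit, for the same reasons of subjectivity noted above.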
4.2.3 Additional Information
The third and last part of the quality model proposed is composed of
additional information, which provides useful subsidies to help the user in
component selection. These characteristics are called Additional Information
and are composed of: Technical Information, Organizational Information and
Marketing Information. A brief description of each type of additional
information is shown below.
Technical Information is important for developers to analyze the actual
state of the component. Besides, it is interesting for the customer to know who is
responsible for the component, i.e. who maintains it; for this reason,
Organizational Information was identified. Marketing Information
expresses the marketing characteristics of a software component and is
important for analyzing factors that bring credibility to the component in the
eyes of customers.
Table 4.10: Third part of EQM: Additional Information.

Additional Information
  Technical Information: Component Version; Programming Language; Design and Project Patterns; Operational Systems Supported; Compiler Version; Compatible Architecture; Minimal Requirements; Technical Support; Compliance
  Organization Information: CMMi Level; Organization’s Reputation
  Market Information: Development time; Cost; Time to market; Targeted market; Affordability; Licensing
Marketing Information provides component information that is relevant to the
market. These characteristics are the basic information for whatever kind of
component is available for sale.
Technical Information:
• Component Version: current version of the component;
• Programming Language: language and technology in which the
component was developed;
• Design and Project Patterns: patterns used in the component
development;
• Operational Systems Supported: operating systems in which the
component was executed correctly;
• Compiler and Compiler Version: compiler, and version of the compiler,
with which the component was compiled;
• Compatible Architectures: the list of architectures or platforms that
the embedded component supports;
• Minimal Requirements: minimal processor capability, amount of code
and data memory used by the component and minimal resources necessary;
• Technical Support: the party responsible for supplying technical information
about the component;
• Compliance: this information indicates whether the component conforms to
any standard (e.g. an international standard, certification by some
organization, etc.).
Organizational information:
• CMMi Level: CMMi Level of the organization that developed the
component;
• Organization’s Reputation: prestige of the organization that
developed the component.
Marketing information:
• Development time: The time consumed to develop a component;
• Cost: The cost of the component;
• Time to market: The time consumed to make the component available
on the market;
• Targeted market: The targeted market volume;
• Affordability: How affordable is the component; and
• Licensing: The kind of licensing available for the software component.
4.3 Embedded software component evaluation techniques based on Quality Level (EQL)
The proposed evaluation technique is level oriented in order to allow
flexibility in the component quality evaluation (Carvalho et al., 2009e),
(Carvalho et al., 2009f). There are three levels, which form an increasing set of
evaluation requirements: Embedded Quality Level I (EQL I) is the lowest
level, EQL III is the highest level, and each next level of evaluation, EQL I+1,
contains all the evaluation techniques present in EQL I, and so on. Figure 4.9
shows the hierarchy of the quality evaluation levels.
Figure 4.9: The EQL hierarchy.
The evaluation level defines the depth of the evaluation. Levels can be
chosen independently for each characteristic. Table 4.11 gives some indication
as to the level at which a given embedded software component should be
evaluated. The cost of evaluation will depend on the level of evaluation, the size
of the component and other factors, but the higher the quality level, the more
costly the evaluation tends to be.
Not all of the selected quality characteristics and sub-characteristics
need to be evaluated with the same degree of thoroughness and depth
for all types of embedded software components. Figure 4.10 shows an example
of an embedded component whose quality characteristics were evaluated at
different quality levels (EQL). The quality evaluation of a component used in a
nuclear system should be more rigorous than that of a component used for
entertainment, because the application risks are much larger. To implement
this flexibility, the evaluation should be level-oriented: components with
different application risks must also be evaluated differently.
Figure 4.10: Different EQL for different quality characteristics.
The model consists of quality levels at which the components can be
evaluated. There are three levels (forming a hierarchy), which identify the
depth of the evaluation: evaluation at different levels gives different degrees of
confidence in the quality of the embedded software component, and the
component can increase its level of reliability and quality as it evolves. Thus,
each company/customer decides which level is best for evaluating its
components, analyzing the cost/benefit of each level. The closer to the last
level, the higher the probability that the component is trustworthy, has
consistent documentation and can be easily reused.
EQL III contains the most rigorous evaluation techniques (requiring a
large amount of time and effort to execute the evaluation), which are applied to
give more confidence in the embedded software component. Conversely, as the
EQL decreases, the techniques used are less rigorous and, consequently, less
effort is applied during the evaluation.
The evaluation levels can be chosen independently for each characteristic
(e.g. for functionality the component can be evaluated using the techniques
from EQL I, for reliability those from EQL II, for usability those from EQL III,
and so on). The idea is to provide more flexibility during level selection and to
facilitate the model's usage and accessibility. Table 4.11 gives some indication
as to the level at which a given embedded software component should be
evaluated. Each vertical column of Table 4.11 represents a different layer in
which the embedded software component should be considered when
evaluating its potential damage and related risks. The level of damage in each
layer is the first guideline used to decide which EQL is most suitable for the
organization; the important aspects are those related to the environment, to
safety/security and to the economy. However, these are mere guidelines, and
should not be considered a rigid classification scheme. These few guidelines
were based on (Boegh et al., 1993), (Solingen, 2000), (ISO/IEC 25000, 2005)
and extended to the component context.
Table 4.11: Guidelines for selecting evaluation level.

Level   | Environment                                  | Safety/Security to person              | Economic                                 | Domain
EQL I   | Small damage to property                     | Low risk to people                     | Negligible economic loss                 | Entertainment
EQL II  | Medium damage to property                    | Small/medium number of people disabled | Significant economic loss                | Household, Security, Control systems
EQL III | Recoverable/unrecoverable environment damage | Threat to human lives, many people killed | Large economic loss, financial disaster | Medical, Financial, Transportation, Nuclear systems
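The guideline of Table 4.11 can be read as a simple rule: the worst damage level across the environment, safety/security and economic layers drives the suggested EQL. The helper below is a hypothetical sketch of that rule; the function name and the 1-3 layer scores are assumptions for illustration only.

```python
from enum import IntEnum

class EQL(IntEnum):
    I = 1
    II = 2
    III = 3

def suggest_eql(environment_damage: int, safety_risk: int,
                economic_loss: int) -> EQL:
    """Hypothetical helper following the Table 4.11 guidelines.

    Each argument is a layer score: 1 = low, 2 = medium, 3 = high.
    The worst (highest) layer score determines the suggested level.
    """
    worst = max(environment_damage, safety_risk, economic_loss)
    return EQL(worst)

# An entertainment component (low damage in every layer) -> EQL I
assert suggest_eql(1, 1, 1) == EQL.I
# A control system with significant economic loss -> EQL II
assert suggest_eql(1, 1, 2) == EQL.II
# A medical component threatening human lives -> EQL III
print(suggest_eql(2, 3, 2).name)  # III
```

As the text stresses, this is only a starting point: the evaluation staff remains free to pick a different level per characteristic.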
Further, a set of appropriate evaluation techniques was defined.
Relevant works from the literature that propose evaluation techniques for
software products were analyzed (Boegh et al., 1993), (Solingen, 2000), (Tian,
2004), (ISO/IEC 25000, 2005), and the experience of embedded system and
embedded software quality engineers from C.E.S.A.R., a Brazilian software
factory, helped during this definition.
Moreover, a set of works from the literature about each single technique
was analyzed in order to identify the real necessity of those evaluation
techniques. The analyzed works came from diverse areas, such as:
software/component quality attributes (Larsson, 2004), software/component
testing (Freedman, 1991), (Councill, 1999), (Gao et al., 2003), (Beydeda and
Gruhn, 2003), software/component inspection (Fagan, 1976), (Parnas and
Lawford, 2003), software/component documentation (Kotula, 1998),
(Lethbridge et al., 2003), (Taulavuori et al., 2004), component interfaces and
contracts (pre- and post-conditions) (Beugnard et al., 1999), (Reussner, 2003),
software/component metrics (Brownsword et al., 2000), (Cho et al., 2001),
software/component reliability (Wohlin & Regnell, 1998), (Hamlet et al., 2001),
(McGregor et al., 2003), software component usability (Bertoa et al., 2006),
software/component performance (Bertolino & Mirandola, 2003), (Chen et al.,
2005), component reusability (Caldiera & Basili, 1991), (Gui & Scott, 2007) and
component proofs (Hall, 1990). In this way, each selected technique brings, for
its specific aspect, a kind of quality assurance to software components, which is
essential for integrating them into the evaluation techniques.
The establishment of what each level is responsible for is very valuable to
the evaluation staff during the definition of the evaluation techniques for those
levels. In other words, the evaluation staff knows what is most important to
consider during the component evaluation and tries to correlate these
necessities with the most appropriate evaluation level. The intention was to
build a model where the techniques selected to represent each level
complement each other in order to provide the degree of quality needed for
each EQL. The EQLs and the evaluation techniques are presented in Table 4.12.
Additionally, in order to clarify the meaning of each level, a brief description is
presented next:
Table 4.12: Embedded Quality Level – EQL and the evaluation techniques.

Functionality (EQL I to EQL III): evaluation measurement (time analysis);
precision analysis; functional testing (black box); system test; structural tests
(white-box) with coverage criteria; dependency analysis; code inspection;
formal proof.

Reliability (EQL I to EQL III): suitability analysis; code inspection; error
prevention, handling and recovery analysis; fault tolerance analysis;
algorithmic complexity; structural tests (white-box); reliability growth model;
formal proof; dependability analysis.

Usability (EQL I to EQL III): effort-to-configure analysis; inspection of user
interfaces; documentation analysis (user guide, architectural analysis, etc.);
conformity to interface standards; analysis of the pre- and post-conditions in
laboratory; user mental model.

Efficiency (EQL I to EQL III): evaluation measurement (memory, energy and
resources); performance tests (memory, consumption and resources);
algorithmic complexity; performance and resource profiling analysis.

Maintainability (EQL I to EQL III): changeability analysis; document
inspection; analysis of the test suite provided; code metrics and programming
rules; static analysis; extensibility analysis; analysis of the component
development process; traceability evaluation; component test formal proof.

Portability (EQL I to EQL III): backward compatibility analysis; configurability
analysis; deployment analysis; hardware/software compatibility analysis;
mobility analysis; conformity to programming rules; cohesion of the
documentation with the source code analysis; cohesion, coupling, modularity
and simplicity analyses; domain abstraction analysis; analysis of the
component's architecture.
• EQL I: at this quality level the goal is to ensure that the documentation
is consistent with the component's functionalities, to assess the effort to
configure, use, reuse and maintain it, and to check its compatibility with
the specified architecture;
• EQL II: at the second level the aim is to analyze the component's
execution in its environment, subjecting it to a set of metrics and
evaluation techniques, to verify how the component avoids faults and
errors, and to analyze the provided and required interfaces and the use
of best practices;
• EQL III: at the last quality level the source code of the component is
inspected and tested, and the algorithmic complexity is examined in
order to prove its performance. The formal proof of the component's
functionalities and reliability is required at this level in order to achieve
the highest possible level of trust. The aim of this level is to assure the
component's performance and to increase the trust in the component as
much as possible.
One of the main concerns during the EQL definition is that the levels and
the selection of evaluation techniques must be appropriate to completely
evaluate the quality attributes proposed in the EQM, presented in Section 4.2.
This is achieved through a mapping of Quality Attributes × Evaluation
Techniques. For each quality attribute proposed in the EQM, at least one
technique should be proposed in order to cover it completely and to facilitate
its proper measurement. Table 4.13 shows this matching between the EQM
quality attributes and the proposed EQL evaluation techniques.
Table 4.13 shows that the main concern is not to propose a large number
of isolated techniques, but to propose a set of techniques that are essential for
measuring each quality attribute, complementing each other and, thus,
becoming useful for composing the Embedded Quality Level evaluation
techniques.
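The coverage rule behind Table 4.13 can be checked mechanically: every quality attribute must be mapped to at least one evaluation technique. The sketch below illustrates this with a small, hypothetical fragment of the mapping (only a few Functionality rows are shown); it is not the full table.

```python
# Illustrative fragment of the Quality Attributes x Evaluation Techniques
# mapping; attribute and technique names follow Table 4.13, but the
# dictionary itself is an assumption for this sketch.
mapping = {
    "Response time (latency)": ["Evaluation measurement (time analysis)"],
    "Throughput": ["Structural tests (white-box)"],
    "Worst case execution time": ["Formal proofs"],
    "Data encryption": ["System test", "Code inspection"],
}

# The rule stated in the text: each attribute needs >= 1 technique.
uncovered = [attr for attr, techniques in mapping.items() if not techniques]
assert uncovered == []
print("all", len(mapping), "attributes covered")
```

A check like this is useful whenever the evaluation staff adds new attributes or techniques, since an attribute without a technique cannot be measured.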
Table 4.13: Embedded Quality Attribute × Evaluation Techniques.

Functionality
• Real-Time – Response time (latency): evaluation measurement (time analysis)
• Real-Time – Throughput ("out"): structural tests (white-box)
• Real-Time – Processing capacity ("in"): structural tests (white-box)
• Real-Time – Execution time: evaluation measurement (time analysis)
• Real-Time – Worst case execution time: formal proofs
• Accuracy – Precision: precision analysis; functional test (black box);
structural test (white-box) with coverage criteria
• Security – Data encryption: system test; code inspection
• Security – Controllability: system test; code inspection
• Security – Auditability: system test; code inspection
• Self-contained – Dependability: dependency analysis; code inspection

Reliability
• Error Handling: code inspection; error prevention, handling and recovery
analysis; algorithmic complexity; reliability growth model; formal proof
• Recoverability – Transactional: structural tests (white-box)
• Recoverability – Mechanism available: suitability analysis; code inspection;
fault tolerance analysis; reliability growth model
• Fault Tolerance – Mechanism efficiency: formal proof
• Safety Integrity – Reachability graph: formal proof; code inspection;
algorithmic complexity; dependability analysis

Usability
• Configurability – Effort to configure: effort-to-configure analysis; inspection
of user interfaces; conformity to interface standards; analysis of the pre- and
post-conditions in laboratory; user mental model
• Understandability: documentation analysis (use guide, architectural
analysis, etc.); inspection of user interfaces
• Attractiveness – Effort to operate: evaluation measurement

Efficiency
• Resource Behavior – Peripheral utilization, mechanism efficiency:
evaluation measurement; performance tests; algorithmic complexity;
performance and resource profiling analysis
• Energy Consumption – Amount of energy consumption, mechanism
efficiency: evaluation measurement; performance tests; algorithmic
complexity; performance and resource profiling analysis
• Data Memory Utilization – Amount of data memory utilization, mechanism
efficiency: evaluation measurement; performance tests; algorithmic
complexity; performance and resource profiling analysis
• Program Memory Utilization – Amount of program memory utilization,
mechanism efficiency: evaluation measurement; performance tests;
algorithmic complexity; performance and resource profiling analysis

Maintainability
• Changeability – Stability/Modifiability: evaluation measurement; code
metrics and programming rules; document inspection; static analysis
• Changeability – Extensibility: extensibility analysis
• Changeability – Change effort: changeability analysis
• Changeability – Modularity: code metrics and programming rules
• Testability – Test suite provided: analysis of the test suite provided
• Testability – Extensive component test cases: analysis of the component
development process
• Testability – Component tests in a set of environments: traceability
evaluation
• Testability – Formal proofs: component test formal proof

Portability
• Deployability – Complexity level: deployment analysis
• Replaceability – Backward compatibility: backward compatibility analysis
• Flexibility – Mobility: mobility analysis
• Flexibility – Configuration capacity: configuration analysis
• Reusability – Domain abstraction level: cohesion of the documentation with
the source code analysis; domain abstraction analysis; conformity to
programming rules
• Reusability – Architecture compatibility: analysis of the component's
architecture; hardware/software compatibility analysis
• Reusability – Modularity, cohesion, coupling and simplicity: modularity,
cohesion, coupling and simplicity analyses
The evaluation techniques proposed for each quality attribute are
suggestions for the evaluation process of the embedded component quality.
However, the evaluation staff is free to use them completely or partially, as
well as to define new evaluation techniques and new quality attributes that may
be needed.
The evaluation team is responsible for defining/selecting the processes,
methods, instrumentation, etc. used to perform the evaluation techniques. The
evaluation staff's feedback is recommended in order to improve the available
evaluation techniques and to record newly proposed techniques, providing a
greater set of techniques for further evaluations.
The evaluation cost is out of the scope of this thesis; however, it is
observed empirically that the cost of evaluation tends to increase with the
quality evaluation level (cost: EQL III > EQL II > EQL I).
Finally, the main idea is that the evaluation is performed incrementally:
in the first step the component is evaluated at the first quality level (EQL I),
and as the component evolves a new quality evaluation in greater depth is
executed (EQL II), and so on. However, the evaluation staff is free to choose the
most appropriate quality level at which to evaluate each quality characteristic,
and does not necessarily have to start at EQL I.
4.4 Embedded Metrics Approach (EMA)
As with any engineering discipline, software development requires a
measurement mechanism for feedback and evaluation. Measurement is a
mechanism for creating a corporate memory and an aid in answering a variety
of questions associated with the enactment of any software process (Basili et al.,
1994).
Measurement also helps, during the course of a project, to assess its
progress, to take corrective action based on this assessment, and to evaluate the
impact of such action. Thus, some benefits are listed below:
• It helps support project planning;
• it allows us to determine the strengths and weaknesses of the
current processes and products (e.g., What is the frequency of
certain types of errors?);
• it provides a rationale for adopting/refining techniques (e.g., What
is the impact of the technique XX on the productivity of the
projects?);
• it allows us to evaluate the quality of specific processes and
products (e.g., What is the defect density in a specific system after
deployment?).
According to many studies on the application of metrics and models in
industrial environments, measurement, in order to be effective, must be:
• Focused on specific goals;
• Applied to all life-cycle products, processes and resources;
• Interpreted based on characterization and understanding of the
organizational context, environment and goals.
According to Basili et al. (Basili et al., 1994), the measurement must be
defined in a top-down fashion. It must be focused, based on goals and models. A
bottom-up approach will not work because there are many observable
characteristics in software (e.g., time, number of defects, complexity, lines of
code, severity of failures, effort, productivity, defect density), but which metrics
one uses and how one interprets them is not clear without the appropriate
models and goals to define the context.
There is a variety of mechanisms for defining measurable goals in the
literature: the Quality Function Deployment approach (Kogure & Akao, 1983),
the Goal Question Metric approach (Basili et al., 1994), (Basili, 1992), (Basili &
Rombach, 1988), (Basili & Selby, 1984), (Basili & Weiss, 1984), and the
Software Quality Metrics approach (Boehm et al., 1976), (McCall et al., 1977).
In this work, the Goal-Question-Metric (GQM) paradigm was adopted to
measure the quality of embedded software components (Carvalho et al.,
2009g), because this approach measures quality in a simple and efficient way.
GQM is also the technique proposed in ISO/IEC 25000 for tracking software
product properties.
Goal Question Metric (GQM): A method used to define
measurement on a software project, process and product. GQM defines a
measurement model on three levels: (1) Conceptual level (goal): A goal is
defined for an object, for a variety of reasons, with respect to various models of
quality, from various points of view, and relative to a particular environment.
(2) Operational level (question): A set of questions is used to define models
of the object of study and then focuses on that object to characterize the
assessment or achievement of a specific goal. (3) Quantitative level
(metric): A set of metrics, based on the models, is associated with every
question in order to answer it in a measurable way.
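The three-level hierarchy just described (goal, question, metric) maps naturally onto a nested data structure. The sketch below is a minimal illustration of that hierarchy; the class and field names are assumptions made for this example, and the instantiated goal reuses the response-time attribute discussed later in this chapter.

```python
from dataclasses import dataclass, field
from typing import List

# A minimal sketch of the GQM hierarchy; all names are illustrative.
@dataclass
class Metric:
    name: str
    objective: bool  # objective vs. subjective measure

@dataclass
class Question:
    text: str
    metrics: List[Metric] = field(default_factory=list)

@dataclass
class Goal:
    purpose: str
    viewpoint: str
    environment: str
    questions: List[Question] = field(default_factory=list)

# Example instantiation for a response-time measurement goal.
goal = Goal(
    purpose="Evaluate the time from receiving a request until a "
            "response is sent",
    viewpoint="component evaluator",
    environment="embedded target platform",
    questions=[Question(
        text="What is the average response time?",
        metrics=[Metric("sum of invocation times / number of invocations",
                        objective=True)])])
print(len(goal.questions), len(goal.questions[0].metrics))
```

Note that the same `Metric` instance could be shared by several questions or goals, matching the observation below that one metric may answer different questions under the same goal.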
Figure 4.11 presents a high-level visual synopsis of the definition phase of
the GQM paradigm, illustrating the outputs of the first three steps of Basili's
GQM process: the hierarchy of goals, questions and, ultimately, meaningful
metrics. The result of applying the Goal Question Metric approach is the
specification of a measurement system targeting a particular set of issues and a
set of rules for the interpretation of the measurement data. The resulting
measurement model has three levels:
Figure 4.11: The Overview of GQM Paradigm
1. Conceptual level - GOAL: A goal is defined for an object, for a
variety of reasons, with respect to various models of quality, from
various points of view, relative to a particular environment.
Objects of measurement are:
• Products: Artifacts, deliverables and documents that are
produced during the system life cycle; E.g., specifications,
designs, programs, test suites.
• Processes: Software-related activities normally associated
with time; E.g., specifying, designing, testing, interviewing.
• Resources: Items used by processes in order to produce
their outputs; E.g., personnel, hardware, software, office
space.
2. Operational level - QUESTION: A set of questions is used to
characterize the way the assessment/achievement of a specific goal
is going to be performed based on some characterizing model.
Questions try to characterize the object of measurement (product,
process, resource) with respect to a selected quality issue and to
determine its quality from the selected viewpoint; and
3. Quantitative level - METRIC: A set of data is associated with
every question in order to answer it in a quantitative way. The data
can be:
Objective: If they depend only on the object that is being measured and
not on the viewpoint from which they are taken; E.g., number of versions of a
document, staff hours spent on a task, size of a program.
Subjective: If they depend on both the object that is being measured and
the viewpoint from which they are taken; E.g., readability of a text, level of user
satisfaction.
A GQM model is a hierarchical structure starting with a goal. The goal is
refined into several questions that usually break down the issue into its major
components. Each question is then refined into metrics, some of them objective,
some of them subjective. The same metric can be used in order to answer
different questions under the same goal. Several GQM models can also have
questions and metrics in common, making sure that, when the measure is
actually taken, the different viewpoints are taken into account correctly (i.e., the
metric might have different values when taken from different viewpoints).
It is intended that the evaluator considers, during the metrics definition,
a set of factors which could improve the collected results, such as: meaning of
the metric, cost and complexity to measure, repeatability, reproducibility,
validity, objectivity and impartiality. Those factors are essential during the
elaboration of the metrics using the GQM technique.
The open literature typically describes GQM in terms of a six-step process
where the first three steps are about using business goals to drive the
identification of the right metrics and the last three steps are about gathering
the measurement data and making effective use of the measurement results to
drive decision making and improvements. In a recent seminar, Basili (Basili,
1994) described his six-step GQM process as follows:
1. Develop a set of corporate, division and project business goals and
associated measurement goals for productivity and quality;
2. Generate questions (based on models) that define those goals as
completely as possible in a quantifiable way;
3. Specify the measures needed to be collected to answer those questions
and track process and product conformance to the goals;
4. Develop mechanisms for data collection;
5. Collect, validate and analyze the data in real time to provide feedback
to projects for corrective action; and
6. Analyze the data in a postmortem fashion to assess conformance to the
goals and to make recommendations for future improvements.
The GQM paradigm is the default embedded metrics approach used for
measurement in the embedded software component quality evaluation
methodology. GQM helps the evaluation staff to implement the metrics defined
by the evaluation techniques in order to track the properties of the quality
attributes described in the EQM.
In order to help the evaluation staff during the execution of the GQM
steps, a set of metrics based on the proposed methodology was created.
The measures defined in the GQM paradigm can be classified in two
ways: objective and subjective. A subjective measure is the product of some
mental activity performed by somebody, whereas an objective measure is
usually performed by equipment or a physical apparatus on a determinate
subject. An objective measure is usually easier to repeat than a subjective one.
Based on this information, the metrics in this methodology are defined,
as much as possible, in an objective way. This is not always possible, however,
because many quality attributes are, by definition, subjective, and a set of
metrics depends on the evaluation context, the evaluation staff and the data
that can be collected during the evaluation, thus becoming subjective metrics.
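An objective measure of this kind is repeatable precisely because it is a fixed computation over collected data. As a minimal sketch, the function below computes the average response time used by the first metric of Table 4.14 (sum of measured times divided by the number of invocations); the sample values are illustrative, not real measurements.

```python
# Sketch of an objective metric from Table 4.14: average response time,
# computed as (sum of times between invocations) / number of invocations.
def average_response_time(times_ms):
    """Return the mean of the measured invocation times (milliseconds)."""
    if not times_ms:
        raise ValueError("at least one invocation is required")
    return sum(times_ms) / len(times_ms)

samples = [12.0, 15.5, 11.2, 13.3]  # illustrative per-interface timings
avg = average_response_time(samples)
print(round(avg, 2))  # 13.0
```

Anyone re-running the function on the same collected data obtains the same value, which is what distinguishes it from a subjective measure such as readability.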
According to Bertoa (Bertoa et al., 2006), there is a complete absence of
metrics in the literature that could help evaluate software component quality
attributes objectively. Thus, this section presents a set of suggestions of
different kinds of metric definitions to support the evaluation. It is important to
highlight that those metrics must be defined in a component evaluation
context; the ones presented here are only meant to guide the evaluation staff
during the definition of the metrics in the first component evaluation processes.
Table 4.14 presents the complete steps for the quality evaluation,
starting with the quality characteristics, sub-characteristics, quality attributes,
EQL level, evaluation techniques, metrics from the EMA, interpretation and
references.
Table 4.14: The suggested metrics to use by quality evaluation methodology.
Charac.
Sub-
Character
Quality Attributes
Evaluation Techniques
EQL I
EQL II
EQL III
Goal Question Metric Interpretation Reference
Response time (Latency)
Evaluation measurement (Time analysis)
Evaluate the time taken from receiving a request until a response is sent
What is the average time between the response times?
(Σ Time taken between a set of invocations per each provided interface) / Number of invocations
0 <= x <= 100; which closer to 100 being better
(Bertoa et al., 2002), (Brownsword et al.,2000)
Execution time
Evaluation measurement (Time analysis)
Evaluate the time used by processor to execute the task
What is the average time of task execution?
(Σ Time task execution) / Number of execution
0 <= x <= 100; which closer to 100 being better
(Bertoa et al., 2002), (Brownsword et al.,2000)
Throughput (“out”)
Structural Tests (white-box)
Analyses the amount of output that can be successfully produced over a given period of time.
How much output can be produced with success over a period of time?
(Amount of output with success over a period of time * 100) / Number of invocations
0 <= x <= 100; which closer to 100 being better
(Beydeda and Gruhn, 2003), (Gao et al.,2003)
Processing Capacity (“in”)
Structural Tests (white-box)
Analyses the amount of input information that can be successfully processed by the component over a given period of time
How much input can be processed with success over a period of time?
(Amount of input processed with success over a period of time * 100) / Number of invocations
0 <= x <= 100; which closer to 100 being better
(Beydeda and Gruhn, 2003), (Gao et al., 2003)
Real-Tim
e
Worst case execution time
Formal Proofs Determine the maximum time taken to perform the task in the worst case
What is the maximum time to execute the task in the worst case?
Maximum of Execution time in a Formal Model of the component
x > 0; which closer to 0 being better
(Boer et al., 2002)
Precision analysis (Evaluation measurement)
Evaluates the percentage of the results that were obtained with precision
Based on the amount of tests executed, how many test results return with precision?
Precision on results / Amount of tests
0 <= x <= 1; closer to 1 being better
(Bertoa et al., 2002)
Functional Tests (black box)
Validates required functional features and behaviors from an external view
How precise are required functions and behaviors of the component?
Number of precise functions and correct behavior / Number of functions
0 <= x <= 1; closer to 1 being better
(Beydeda and Gruhn, 2003), (Gao et al.,2003)
Accuracy
Precision
Structural Tests (white-box)
Validation of program structures, behaviors, and logic of component from an internal view
How well structured is the code and logical implementation of the component?
Number of functions with good implementation (well structured and logical) / Number of function
0 <= x <= 1; closer to 1 being better
(Beydeda and Gruhn, 2003), (Gao et al., 2003)
System Test Evaluate the encryption of the input and output data of the component.
How complete is the data encryption implementation?
Number of services that must have data encryption / Number of services that have encryption
0 <= x <= 1; closer to 1 being better
(Gao et al., 2003)
Data Encryption
Code Inspection
Verify coding style guidelines are followed, comments in the code are relevant and of appropriate length, naming conventions are clear and consistent, the code can be easily maintained
How complaint is the component using systematic approach to examining the source code
Number of functions complaint to systematic approach / Number of specified functions
0 <= x <= 1; closer to 1 being better
(Fagan, 1976), (Parnas and Lawford, 2003)
Functionality
Security
Controllability System Test Evaluates if the component provides any control mechanism.
How controllable is the component access?
Number of provided interfaces that control the access / Number of provided interfaces
0 <= x <= 1; closer to 1 being better
(Gao et al., 2003)
Chapter 4 - Embedded Software component Quality Evaluation Methodology 117
Code Inspection
Verify coding style guidelines are followed, comments in the code are relevant and of appropriate length, naming conventions are clear and consistent, the code can be easily maintained
How complaint is the component using systematic approach to examining the source code
Number of functions complaint to systematic approach / Number of specified functions
0 <= x <= 1; closer to 1 being better
(Fagan, 1976), (Parnas and Lawford, 2003)
System Test
Evaluate if the component provides any audit mechanism.
How controllable is the component audit mechanism?
Number of provided interfaces that log-in the access (or any kind of data) / Number of provided interfaces
0 <= x <= 1; closer to 1 being better
(Gao et al., 2003)
Auditability
Code Inspection
Verify that coding style guidelines are followed, comments in the code are relevant and of appropriate length, naming conventions are clear and consistent, and the code can be easily maintained
How compliant is the component when examined using a systematic source code review approach?
Number of functions compliant with the systematic approach / Number of specified functions
0 <= x <= 1; closer to 1 being better
(Fagan, 1976), (Parnas and Lawford, 2003)
Dependency analysis
Evaluates the ability of the component to provide by itself all expected functions
How many functions does the component provide by itself?
Number of functions provided by itself / Number of specified functions
0 <= x <= 1; closer to 1 being better
(Gao et al., 2003)
Self-contained
Dependability
Code Inspection
Verify that coding style guidelines are followed, comments in the code are relevant and of appropriate length, naming conventions are clear and consistent, and the code can be easily maintained
How compliant is the component when examined using a systematic source code review approach?
Number of functions compliant with the systematic approach / Number of specified functions
0 <= x <= 1; closer to 1 being better
(Fagan, 1976), (Parnas and Lawford, 2003)
Charac. | Sub-Character. | Quality Attributes | Evaluation Techniques | EQL I | EQL II | EQL III | Goal | Question | Metric | Interpretation | Reference
Code Inspection
Verify that coding style guidelines are followed, comments in the code are relevant and of appropriate length, naming conventions are clear and consistent, and the code can be easily maintained
How compliant is the component when examined using a systematic source code review approach?
Number of functions compliant with the systematic approach / Number of specified functions
0 <= x <= 1; closer to 1 being better
(Fagan, 1976), (Parnas and Lawford, 2003)
Prevent, Handle and Recover Analysis
Analyzes the error prevention, handling, and recovery mechanisms present in the component in order to guarantee proper error treatment
How many functions provide mechanisms to prevent, handle, and recover from error situations?
Number of mechanisms implemented to prevent, handle, and recover from errors / Number of specified functions
0 <= x <= 1; closer to 1 being better
(Bertoa et al., 2002), (Brownsword et al., 2000)
Reliability
Recoverability
Error Handling
Reliability growth model
Discovers reliability deficiencies through testing, analyzes them, and implements corrective measures to lower their rate of occurrence.
For which reliability deficiencies have corrective measures been implemented to lower their rate of occurrence?
Number of reliability deficiencies for which corrective measures have been implemented / Number of reliability deficiencies found through testing
0 <= x <= 1; closer to 1 being better
(Wohlin & Regnell, 1998), (Hamlet et al., 2001), (McGregor et al., 2003)
Formal Proof
Analyzes whether the recovery from error situations is formally proven
How many cases of error recovery are formally proven?
Number of error recovery cases formally proven / Number of error recovery cases implemented
0 <= x <= 1; closer to 1 being better
(Boer et al., 2002)
Transactional Structural Tests (white-box)
Verifies the presence of transactional behaviors, logic, and structures in the component from an internal view
How transactional are the behaviors, logic, and structures of the component?
Number of functions with a transactional implementation / Number of functions
0 <= x <= 1; closer to 1 being better
(Cho et al., 2001)
Mechanism available
Suitability analysis
Evaluates the functions that contain fault tolerance mechanisms
How many functions provide the fault tolerance mechanism?
Number of functions that contain any kind of fault tolerance mechanism / Number of functions
0 <= x <= 1; closer to 1 being better
(Wohlin & Regnell, 1998), (Hamlet et al., 2001), (McGregor et al., 2003)
Code Inspection
Verify that coding style guidelines are followed, comments in the code are relevant and of appropriate length, naming conventions are clear and consistent, and the code can be easily maintained
How compliant is the component when examined using a systematic source code review approach?
Number of functions compliant with the systematic approach / Number of specified functions
0 <= x <= 1; closer to 1 being better
(Fagan, 1976), (Parnas and Lawford, 2003)
Fault tolerance analysis
Evaluates the efficiency of the fault tolerance mechanism
How efficient are the functions that provide fault tolerance mechanisms? What is the extent of the data lost?
- Number of mechanisms considered efficient / Number of functions that contain any kind of fault tolerance mechanism
- Number of interfaces that lost data / Total number of interfaces that exchange data with the outside
0 <= x <= 1; closer to 1 being better
(Bertoa et al., 2002), (Brownsword et al., 2000)
Reliability growth model
Discovers fault tolerance deficiencies through testing, analyzes them, and implements corrective measures to lower their rate of occurrence.
For which fault tolerance deficiencies have corrective measures been implemented to lower their rate of occurrence?
Number of fault tolerance deficiencies for which corrective measures have been implemented / Number of fault tolerance deficiencies found through testing
0 <= x <= 1; closer to 1 being better
(Wohlin & Regnell, 1998), (Hamlet et al., 2001), (McGregor et al., 2003)
Fault Tolerance
Mechanism efficiency
Formal Proof
Analyzes the fault tolerance mechanisms that have been formally proven
How many fault tolerance mechanisms are formally proven?
Number of fault tolerance mechanisms formally proven / Number of fault tolerance mechanisms implemented
0 <= x <= 1; closer to 1 being better
(Boer et al., 2002)
Reachability Graph
Formal Proof
Formally analyzes the reachability graph in order to find unsafe states or deadlocks
Does the reachability graph contain only safe states, and is it deadlock free?
Number of unsafe states and deadlocks found in the reachability graph
x >= 0; x = 0 is the desired outcome
(Boer et al., 2002)
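A reachability-graph check of this kind reduces to a graph traversal that flags deadlock states (no outgoing transitions) and states marked unsafe. A minimal sketch, with illustrative state names (the graph would in practice come from a formal model of the component):

```python
# Sketch: count deadlock states (no outgoing transition) and unsafe states
# reachable from the initial state; x = 0 is the desired outcome.
def unsafe_or_deadlock(transitions, initial, unsafe):
    """transitions: dict mapping state -> list of successor states;
    unsafe: set of states marked unsafe in the specification."""
    seen, stack, bad = set(), [initial], 0
    while stack:
        s = stack.pop()
        if s in seen:
            continue
        seen.add(s)
        succs = transitions.get(s, [])
        if not succs or s in unsafe:  # deadlock or unsafe state
            bad += 1
        stack.extend(succs)
    return bad

# Example graph: s2 has no successors, so it is a deadlock.
g = {"s0": ["s1"], "s1": ["s0", "s2"]}
print(unsafe_or_deadlock(g, "s0", unsafe=set()))  # 1
```

Real formal-proof tooling explores the full state space symbolically; this sketch only illustrates the metric being counted.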
Safety
Integrity
Code Inspection
Verify that coding style guidelines are followed, comments in the code are relevant and of appropriate length, naming conventions are clear and consistent, and the code can be easily maintained
How compliant is the component when examined using a systematic source code review approach?
Number of functions compliant with the systematic approach / Number of specified functions
0 <= x <= 1; closer to 1 being better
(Fagan, 1976), (Parnas and Lawford, 2003)
Algorithmic Complexity
Quantifies how complex a component is in terms of the computer program, or set of algorithms, needed to implement its functionalities
How complex is the program needed to implement the component functionalities?
Value of cyclomatic complexity: M = E − N + 2P, where M = cyclomatic complexity, E = number of edges of the graph, N = number of nodes of the graph, P = number of connected components
M >= 1; lower is better
(McCabe, 1976), (Cho et al., 2001)
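McCabe's formula can be computed directly from a control-flow graph. A sketch, assuming the graph is given as an adjacency list (a simplification; real tools extract the graph from source code):

```python
# Cyclomatic complexity M = E - N + 2P computed from a control-flow graph
# represented as an adjacency list: node -> list of successor nodes.
def cyclomatic_complexity(cfg):
    nodes = set(cfg)
    for succs in cfg.values():
        nodes.update(succs)
    edges = sum(len(succs) for succs in cfg.values())
    # Count connected components (P) of the underlying undirected graph.
    undirected = {n: set() for n in nodes}
    for n, succs in cfg.items():
        for s in succs:
            undirected[n].add(s)
            undirected[s].add(n)
    seen, components = set(), 0
    for start in nodes:
        if start in seen:
            continue
        components += 1
        stack = [start]
        while stack:
            n = stack.pop()
            if n in seen:
                continue
            seen.add(n)
            stack.extend(undirected[n] - seen)
    return edges - len(nodes) + 2 * components

# A single if/else: entry -> (then | else) -> exit gives M = 2.
cfg = {"entry": ["then", "else"], "then": ["exit"], "else": ["exit"]}
print(cyclomatic_complexity(cfg))  # 2
```

For the if/else example, E = 4, N = 4, P = 1, so M = 4 − 4 + 2 = 2, matching the two independent paths through the branch.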
Dependability analysis
Evaluates the dependency of the component on external sources or external factors
How dependent is the component on external factors or external sources?
Number of functions provided by itself without external dependency / Number of specified functions
0 <= x <= 1; closer to 1 being better
(Bertoa et al., 2002), (Brownsword et al., 2000)
Effort to Configure analysis
Evaluates the time necessary to configure the component.
How much time is needed to configure the component in order to work correctly in a system?
Time spent to configure correctly
The faster it is to configure the component, the better; but it depends on the complexity of the component and of the environment.
(Brownsword et al., 2000), (Bertoa et al., 2002)
Inspection of user interfaces
Analyzes the ability to configure all provided and required functions
How many configurations are provided by each interface?
Number of configurations in all provided interfaces / Number of provided interfaces
0 <= x <= 1; closer to 1 being better
(Beugnard et al., 1999), (Reussner, 2003), (Parnas and Lawford, 2003)
Conformity to standards interface
Inspects the component in order to verify whether the provided/required interfaces comply with the specified standards
How compliant are the user interfaces with the specified standards?
Number of interfaces verified as compliant with the specified standard / Number of interfaces specified as compliant with the standard
0 <= x <= 1; closer to 1 being better
(Bertoa et al., 2002)
Analysis of the pre and post-conditions in laboratory
Analyzes the pre- and post-conditions in the laboratory in order to verify whether the provided/required services comply with the conditions defined in the documentation
How compliant are the provided/required services with the conditions defined in the documentation?
Number of provided/required services compliant with the documentation / Number of provided/required services
0 <= x <= 1; closer to 1 being better
(Beugnard et al., 1999), (Reussner, 2003)
Effort for configure
User mental model
Analyzes the user's mental understanding of what the component is doing
What is the user's level of mental understanding of what the component is doing?
Number of functions understood mentally / Number of provided functions
0 <= x <= 1; closer to 1 being better
(Farooq & Dominick, 1988)
Configurability
Understandability
Documentation analysis (Use Guide, architectural analysis, etc)
Analyzes the documentation's availability, efficiency, and efficacy
How many documents are available with quality to understand the component functions?
Number of documents with quality / Number of documents provided
0 <= x <= 1; closer to 1 being better
(Kallio et al., 2001), (Kotula, 1998), (Lethbridge et al., 2003), (Taulavuori et al., 2004)
Usability
Attractiveness
Effort to operate
Evaluation measurement
Analyzes the complexity of operating the functions provided by the component
How many operations are provided by each interface?
Number of operations in all provided interfaces / Number of provided interfaces
0 <= x <= 1; closer to 1 being better
(Brownsword et al., 2000), (Bertoa et al., 2002), (Bertoa & Vallecillo, 2004)
Inspection of user interfaces
Analyzes the ability to operate all provided and required functions
How much time is needed to operate the component?
Σ of the usage time of each function
The lower the better
(Beugnard et al., 1999), (Reussner, 2003), (Parnas and Lawford, 2003)
Charac.
Sub-
Character.
Quality Attributes
Evaluation Techniques
EQL I
EQL II
EQL III
Goal Question Metric Interpretation reference
Evaluation measurement
Analyzes the amount of peripherals required for its correct operation.
How many peripherals are required for the component to work correctly?
Amount of peripherals necessary for the component to work correctly
List of peripherals required; x > 0, closer to 0 is better
(Bertoa et al., 2002), (Brownsword et al., 2000)
Peripheral utilization
Tests of performance
Evaluates and measures peripheral utilization in other contexts and operation environments to make sure that it satisfies the performance requirements
Does the component have the same peripheral utilization in other contexts and operation environments?
Number of contexts where the peripheral utilization is the same / Total number of contexts
0 <= x <= 1; closer to 1 being better
(Gao et al., 2003)
Algorithmic Complexity
Quantifies how complex a component is in terms of the computer program, or set of algorithms, needed to implement mechanisms to reduce peripheral utilization
How complex is the program needed to implement the mechanisms that optimize peripheral utilization?
Number of mechanisms implemented to optimize peripheral utilization
x >= 0; higher is better
(Cho et al., 2001)
Resource Behavior
Mechanism Efficiency
Performance and resource profiling analysis
Investigates the component's behavior in dynamic/execution mode in order to analyze performance and determine the efficiency of the mechanism implemented
How much peripheral utilization was saved or reduced by the mechanism implemented?
Amount of peripheral utilization after the efficiency mechanism is implemented / Amount of peripheral utilization before the efficiency mechanism is implemented
0 <= x <= 1; closer to 0 being better
(Bertolino & Mirandola, 2003), (Chen et al., 2005)
Evaluation measurement
Analyzes the amount of energy consumed by the component to perform its task.
How much energy does the component consume to perform its task?
Amount of energy necessary for the component to perform the task.
x > 0; which closer to 0 being better
(Bertoa et al., 2002), (Brownsword et al., 2000)
Amount of Energy Consumption
Tests of performance
Evaluates and measures energy consumption in other contexts and operation environments to make sure that it satisfies the performance requirements
Does the component have the same energy consumption in other contexts and operation environments?
Number of contexts where the energy consumption is the same / Total number of contexts
0 <= x <= 1; closer to 1 being better
(Gao et al., 2003)
Efficiency
Energy consumption
Mechanism Efficiency
Algorithmic Complexity
Quantifies how complex a component is in terms of the computer program, or set of algorithms, needed to implement mechanisms to reduce energy consumption
How complex is the program needed to implement the mechanisms that optimize energy consumption?
Number of mechanisms implemented to optimize the energy consumption
x >= 0; higher is better
(Cho et al., 2001)
Performance and resource profiling analysis
Investigates the component's behavior in dynamic/execution mode in order to analyze performance to determine the efficiency of the mechanism implemented
How much energy was saved or reduced by the mechanism implemented?
Amount of energy consumed after the efficiency mechanism is implemented / Amount of energy consumed before the efficiency mechanism is implemented
0 <= x <= 1; closer to 0 being better
(Bertolino & Mirandola, 2003), (Chen et al., 2005)
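The before/after ratios used by these mechanism-efficiency metrics reduce to a single division. The sketch below is a minimal illustration with hypothetical measurements (the 120 mJ and 90 mJ values are invented for the example, not taken from the thesis):

```python
# Sketch of the before/after efficiency ratio used by the
# mechanism-efficiency metrics (peripherals, energy, data/program memory).
def efficiency_ratio(after, before):
    """Consumption after / consumption before the optimization mechanism.

    Values in (0, 1) mean the mechanism helped; closer to 0 is better.
    """
    if before <= 0:
        raise ValueError("baseline measurement must be positive")
    return after / before

# e.g. energy per task dropped from 120 mJ to 90 mJ:
print(efficiency_ratio(90.0, 120.0))  # 0.75
```

The same function applies unchanged to the peripheral-utilization and memory-utilization rows; only the quantity being measured differs.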
Evaluation measurement
Analyzes the amount of data memory required for its correct operation.
How much data memory is required for the component to work correctly?
Amount of data memory necessary for the component to work correctly
x > 0; which closer to 0 being better
(Bertoa et al., 2002), (Brownsword et al., 2000)
Amount of Data Memory Utilization
Tests of performance
Evaluates and measures data memory utilization in other contexts and operation environments to make sure that it satisfies the performance requirements
Does the component have the same data memory utilization in other contexts and operation environments?
Number of contexts that the data memory utilization is the same / Total Number of contexts
0 <= x <= 1; closer to 1 being better
(Gao et al., 2003)
Algorithmic Complexity
Quantifies how complex a component is in terms of the computer program, or set of algorithms, needed to implement mechanisms to reduce data memory utilization
How complex is the program needed to implement the mechanisms that optimize data memory usage?
Number of mechanisms implemented to optimize the data memory usage
x >= 0; higher is better
(Cho et al., 2001)
Data Memory Utilization
Mechanism Efficiency
Performance and resource profiling analysis
Investigates the component's behavior in dynamic/execution mode in order to analyze performance to determine the efficiency of the mechanism implemented
How much data memory was saved or reduced by the mechanism implemented?
Amount of data memory used after the efficiency mechanism is implemented / Amount of data memory used before the efficiency mechanism is implemented
0 <= x <= 1; closer to 0 being better
(Bertolino & Mirandola, 2003), (Chen et al., 2005)
Evaluation measurement
Analyzes the amount of program memory required for its correct operation.
How much program memory is required for the component to work correctly?
Amount of program memory necessary for the component to work correctly
x > 0; which closer to 0 being better
(Bertoa et al., 2002), (Brownsword et al., 2000)
Amount of Program Memory Utilization
Tests of performance
Evaluates and measures program memory utilization in other contexts and operation environments to make sure that it satisfies the performance requirements
Does the component have the same program memory utilization in other contexts and operation environments?
Number of contexts that the program memory utilization is the same / Total Number of contexts
0 <= x <= 1; closer to 1 being better
(Gao et al., 2003)
Algorithmic Complexity
Quantifies how complex a component is in terms of the computer program, or set of algorithms, needed to implement mechanisms to reduce program memory utilization
How complex is the program needed to implement the mechanisms that optimize program memory usage?
Number of mechanisms implemented to optimize the program memory usage
x >= 0; higher is better
(Cho et al., 2001)
Program Memory Utilization
Mechanism Efficiency
Performance and resource profiling analysis
Investigates the component's behavior in dynamic/execution mode in order to analyze performance to determine the efficiency of the mechanism implemented
How much program memory was saved or reduced by the mechanism implemented?
Amount of program memory used after the efficiency mechanism is implemented / Amount of program memory used before the efficiency mechanism is implemented
0 <= x <= 1; closer to 0 being better
(Bertolino & Mirandola, 2003), (Chen et al., 2005)
Evaluation measurement
Evaluates the flexibility to change the component source code in order to improve its functions
How modifiable is the component?
Execute a set of modifications and analyze the component behavior
Analyze the amount of modifications done and the amount of modifications that work well
(Bertoa et al., 2002), (Brownsword et al.,2000)
Code metrics and programming rules
Analyzes whether the rules of the programming language were followed in the component implementation, by collecting a set of metrics
How well does the component implementation follow the rules of the programming language?
Number of functions that follow the programming rules / Number of component functions
0 <= x <= 1; closer to 1 being better
(Brownsword et al., 2000), (Cho et al., 2001)
Documents Inspection
Examines documents in detail, based on a systematic approach, to assess the quality of the component's documents
What is the quality level of the component's documents?
Number of documents with quality / Number of documents available
0 <= x <= 1; which closer to 1 is better
(Fagan, 1976) (Parnas and Lawford, 2003)
Stability
Modifiability
Static Analysis
Checks the component for errors through tools, without compiling/executing it
How many errors does the component have at design time?
Number of errors found at design time.
x >= 0; closer to 0 being better
(Brownsword et al., 2000)
Extensibility Extensibility analysis
Evaluates the flexibility to extend the component functions
How extensible is the component?
Execute a set of extensions and analyze the new component behavior
Analyze the amount of extensions done and the amount of extensions that work well
(Brownsword et al., 2000), (Bertoa et al., 2002), (Bertoa & Vallecillo, 2004)
Change Effort
Changeability analysis
Analyzes the customizable parameters that the component offers
How many parameters are provided to customize each function of the component?
Number of provided interfaces / Number of parameters to configure the provided interface
0 <= x <= 1; which closer to 1 is better
(Brownsword et al., 2000)
Changeability
Modularity
Code metrics and programming rules
Analyzes the internal organization of the component
How logically separated are the component concerns?
Packaging analysis
If the component contains packages that isolate each logical concern, it probably has good modularity. On the other hand, if the component does not contain a well defined internal structure, the modularity level is lower.
(Brownsword et al., 2000), (Cho et al., 2001)
Test suite provided
Analysis of the test-suite provided
Analyzes the ability of the component to provide some test suite for checking its functions
Is there any test suite? What is the coverage of this test suite over the whole set of component functions?
Analysis of the test suites provided
Number of test suites provided / Number of functions
0 <= x <= 1; closer to 1 being better
(Brownsword et al., 2000)
Extensive component test cases
Analysis of the component development process
Analyzes if the component was extensively tested before being made available to the market
How many tests cases are executed? What is the coverage of these test cases?
Number of functions / Number of test cases. Additionally, it is interesting to analyze the number of bugs that were corrected during the test cases.
The coverage of the test cases and the number of bugs discovered during test execution are both important to analyze.
(Brownsword et al., 2000)
Maintainability
Testability
Component tests in a set of environment
Traceability evaluation
Analyzes the environments where the component can work well
In which environment can this component be executed without errors?
Number of environments where the component works well / Number of environments defined in the specification
0 <= x <= 1; closer to 1 being better
(Gao et al., 2003)
Formal Proofs
Component Test Formal Proof
Analyzes whether the tests are formally proven
What is the coverage of the formal proofs over the test cases?
Proofs analysis
It is interesting to verify whether the formal proofs cover all test cases provided with the component. The higher the coverage, the better.
(Boer et al., 2002)
Deployability
Complexity level
Deployment analysis
Analyzes how complex it is to deploy a component in its specific environment(s)
How much time does it take to deploy a component in its environment?
Time taken for deploying a component in its environment
Estimate the time first and then compare with the actual time taken to deploy the component
(Gao et al., 2003)
Replaceability
Backward Compatibility
Backward compatibility analysis
Analyzes the compatibility with previous versions
What is the compatibility with previous versions?
Number of correct results / Number of identical invocations executed on different component versions
0 <= x <= 1; closer to 1 being better
(Bertoa et al., 2002), (Brownsword et al., 2000)
Mobility
Mobility analysis
Analyzes the ability of the component to be transferred from one environment to another
Can the component be transferred to other environments without any changes?
- Analyze the component constraints and environment constraints
- Analyze the component specification
- Deploy the component in the environments specified in the documentation
Possible metric: Number of environments where the component works correctly / Number of environments described in its specification
0 <= x <= 1; closer to 1 being better
(Brownsword et al., 2000), (Bertoa et al., 2002)
Flexibility
Configuration capacity
Configuration analysis
Analyzes the ability of the component to be transferred from one environment to another, considering the related changes
How much effort is needed to adapt the component to a new environment?
- Analyze the component constraints and environment constraints
- Analyze the component specification
- Deploy the component in the environments specified in the documentation
- Measure the time taken to adapt the component to its specified environments
Analyze the time taken to deploy the component in each environment defined
(Brownsword et al., 2000), (Bertoa et al., 2002)
Portability
Reusability
Domain abstraction level
Cohesion of the documentation with the source code analysis
Analyzes the compliance between the documentation provided and the source code of the component
How compliant is the documentation provided with the source code of the component?
Number of functions compliant with the component documentation / Number of functions of the component
0 <= x <= 1; closer to 1 being better
(Kallio et al., 2001), (Bay and Pauls, 2004), (Kotula, 1998), (Lethbridge et al., 2003), (Taulavuori et al., 2004)
Domain abstraction analysis
Analyzes the correct separation of concerns in the component
Can the component be reused in other domain applications? Does the component have inter-related business code?
Analyzes the source code and tries to reuse the component in other domains
If the component does not contain business code tied to a specific domain and can be reused across a set of domains, it is a good candidate for reuse. On the other hand, if it does contain code tied to a specific domain and is therefore difficult to reuse across domains, the component is not a good candidate for reuse and should be revised.
(Bay and Pauls, 2004), (Gui & Scott, 2007)
Conformity to programming rules
Analyzes whether the rules of the programming language were followed in the component implementation
How well does the component implementation follow the rules of the programming language?
Number of functions that follow the programming rules / Number of component functions
0 <= x <= 1; closer to 1 being better
(Brownsword et al., 2000), (Cho et al., 2001)
Analysis of the component’s architecture
Analyzes the level of dependency on a specified architecture
Was the component correctly designed based on the architecture constraints defined?
Analysis of the component design based on some documentation and source code
Understand the architecture constraints and analyze whether the component followed that specification during its development and implementation. Based on this, the component can be considered good to be reused or not.
(Kazman et al., 2000), (Choi et al., 2008)
Architecture compatibility
Hardware/Software compatibility analysis
Analyzes the real compatibility of the component with the architectures listed in the documentation
How compatible is the component with the architectures listed in the documentation?
Number of architectures actually compatible / Number of architectures listed as compatible
0 <= x <= 1; closer to 1 being better
(Choi et al., 2008)
Modularity, Cohesion, Coupling and Simplicity
Modularity, Cohesion, Coupling and Simplicity analysis
Analyzes the internal organization, cohesion, and coupling between the internal modules/packages/functionalities of the component
How simple, logically separated, cohesive, and loosely coupled are the component concerns?
Analysis of the inter-related parts
The component's organization should have low coupling, high cohesion, and packages that isolate each logical concern, and should be easily understandable and (re)usable. The simpler the better.
(Caldiera & Basili, 1991), (Gui & Scott, 2007)
4.5 Summary
This chapter detailed the four modules that compose the methodology for quality evaluation of embedded software components. At the beginning, the problem of the lack of quality in software components was explained; then, the solution was introduced through a methodology for quality evaluation.
Section 4.1 detailed the embedded quality evaluation process. It consists of four steps to carry out the quality evaluation: establish the evaluation requirements, specify the evaluation, design the evaluation, and execute the evaluation.
The second section (4.2) described the importance of a quality model, which is taken as the reference for the quality evaluation, and then presented the parts of the quality model. The first part of the model is composed of quality characteristics, sub-characteristics, and quality attributes. The quality characteristics in use (productivity, satisfaction, safety, and effectiveness) form the second part of the quality model. Finally, the last part is formed by additional information, which is subdivided into techniques, organization, and marketing information.
The third section (4.3) of this chapter described the suggested evaluation techniques as a way to assess the quality attributes of the component. These techniques are grouped by quality level. There are three quality levels: EQL I, EQL II, and EQL III. In EQL I, the evaluation techniques are more basic. EQL II contains the evaluation techniques of EQL I and adds new intermediate evaluation techniques. EQL III contains the evaluation techniques of the two previous levels and adds advanced techniques for quality evaluation.
The last section (4.4) presented the metric approach used to quantify the adopted evaluation techniques: the Embedded Metrics Approach (EMA), which is based on the Goal Question Metric (GQM) paradigm and is divided into three levels, the conceptual (Goal), operational (Question), and quantitative (Metric).
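The three GQM levels can be sketched as a small data structure: a conceptual goal refined into operational questions, each answered by quantitative metrics. The goal, question, and metric below are illustrative placeholders, not items taken from the methodology's tables:

```python
# Minimal sketch of the GQM hierarchy that EMA builds on:
# Goal (conceptual) -> Questions (operational) -> Metrics (quantitative).
from dataclasses import dataclass, field

@dataclass
class Metric:
    name: str
    value: float  # e.g. a ratio in [0, 1]

@dataclass
class Question:
    text: str
    metrics: list = field(default_factory=list)

@dataclass
class Goal:
    purpose: str
    questions: list = field(default_factory=list)

# Hypothetical example: one goal, one question, one metric.
goal = Goal("Evaluate component reliability")
q = Question("How complete is the error handling?")
q.metrics.append(Metric("error_handling_coverage", 0.8))
goal.questions.append(q)

print(len(goal.questions), goal.questions[0].metrics[0].value)
```

Each table row in this chapter instantiates exactly this shape: the Goal column is the conceptual level, the Question column the operational level, and the Metric/Interpretation columns the quantitative level.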
Chapter 5 – The Experimental Study 126
5 The Experimental Study
When a new proposal is suggested, it is necessary to determine whether to introduce it. Moreover, it is often not possible simply to change the existing embedded design process without obtaining more information about the actual effect of the new proposal, i.e., it is necessary to evaluate proposals before making any changes, in order to reduce risks. In
this context, empirical studies are crucial since the progress in any discipline
depends on our ability to understand the basic units necessary to solve a
problem (Basili, 1992). Additionally, experimentation provides a systematic,
disciplined, quantifiable, and controlled way to evaluate new theories.
Thus, we must experiment with techniques to see how and when they
really work, to understand their limits and to understand how to improve them.
In this sense, this chapter presents an experimental study to evaluate the viability of the embedded software component quality evaluation methodology. Before discussing the experimental study, it is necessary to introduce some definitions to clarify its elements.
5.1 Introduction
When conducting an experiment, the goal is to study the outcome when
some of the input variables to a process are varied. According to Wohlin et
al.(Wohlin et al., 1994), there are two kinds of variables in an experiment:
independent and dependent.
The variables that are objects of the study, observed in order to see the effect of changes in the independent variables, are called dependent variables. Often there is only one dependent variable in an experiment. All
variables in a process that are manipulated and controlled are called
independent variables.
An experiment studies the effect of changing one or more independent
variables. Those variables are called factors. The other independent variables are
controlled at a fixed level during the experiment, or else it would not be possible to
determine if the factor or another variable causes the effect. A treatment is one
particular value of a factor.
The treatments are applied to combinations of objects and
evaluators. An object can, for example, be a document that will be reviewed
with different inspection techniques. The people that apply the treatment are
called evaluators.
At the end, an experiment consists of a set of tests where each test is a
combination of treatment, subject and object.
5.2 The Experimental Study
According to Wohlin et al. (2000), the experimental
process can be divided into the following main activities: the definition is the
first step, where the experiment is defined in terms of problem, objective and
goals. The planning comes next, where the design of the experiment is
determined, the instrumentation is considered and the threats to the
experiment are evaluated. The operation of the experiment follows from the
design. In the operational phase, measurements are collected, and then analyzed
and evaluated in the analysis and interpretation. Finally, the results are
presented and packaged in the presentation and package activity.
The experimental plan presented here follows the model proposed in (Wohlin
et al., 2000). Additionally, the experiment defined in (Barros, 2001) was used as
inspiration. The definition and planning activities are described in the future
tense, showing the logical sequence between planning and operation.
5.3 The Definition
In order to define the experiment, following the GQM paradigm (Basili et
al., 1994), the main objective of this study is to:
Analyze the embedded software components quality evaluation
methodology
for the purpose of evaluating
with respect to the feasibility and its usage
from the point of view of the researchers and quality engineers
in the context of a domain embedded system design.
5.4 The Planning
After the definition of the experiment, the planning starts. The definition
determines the foundations for the experiment, the reason for it, while the
planning prepares for how the experiment will be conducted.
Context. The objective of this study is to evaluate the viability of the
proposed embedded software component quality evaluation methodology. The
embedded software component quality evaluation will be conducted in an
industrial setting, with the requirements defined by the experimental staff based
on real-world projects. The study will be conducted as a single object study,
which examines a single object with a single staff on a single project (Basili et
al., 1994). The evaluators of the study will be requested to act in the roles
defined in the methodology (EQP, EQM, EQL, EMA).
Training. The training of the evaluators in the process will be
conducted in a classroom at the university. The training will be divided into two
steps: in the first one, concepts related to embedded software reuse,
component-based development, embedded system design, embedded software
component quality, embedded software evaluation, component repositories,
software reuse metrics, and embedded software reuse processes will be
explained during four lectures of two hours each. Next, the embedded software
component quality evaluation methodology will be discussed during two
lectures. During the training, the evaluators may interrupt to ask questions
related to the lectures. Moreover, the training material will be composed of a
set of slides and recommended readings.
Pilot Project. Before performing the study, a pilot project will be
conducted with the same structure defined in this planning. The pilot project
will be performed by a single evaluator, who is the author of the proposed
methodology. For the project, the evaluator will use the same material
described in this planning and will be observed by the responsible researcher.
In this way, the pilot project will be a study based on observation, aiming to
detect problems and improve the planned material before its use.
Evaluators. The evaluators of the study (two evaluators), according to
their skills and technical knowledge, will act as evaluation leader, embedded
system designer, architecture/environment specialist and programming
language specialist.
Instrumentation. All the evaluators will receive chapter 4 of this
thesis, which contains detailed information about the methodology. At the end
of the experiment, all the evaluators will receive a questionnaire about their
satisfaction in using the methodology. The questionnaires compose
Appendix C.
Criteria. The quality focus of the study demands criteria that evaluate
the real feasibility of using the methodology to evaluate embedded software
component quality, as well as the difficulties users face. The benefits
obtained will be evaluated quantitatively through the coverage of the EQM and
EQL, and the difficulties of the users in applying the methodology.
Hypotheses. An important aspect of an experimental study is to know
and to formally state what is going to be evaluated in the experimental study.
Hence, a set of hypotheses was selected, as described next.
- Null hypothesis, H0: This is the hypothesis that the experimenter
wants to reject with as high significance as possible. In this study, the null
hypothesis determines that the use of the methodology in embedded
quality evaluation is not feasible and that the evaluators have
difficulties applying the embedded evaluation process. Thus, according to
the selected criteria, the following hypotheses can be defined, using GQM:
Goal. To determine the feasibility of the methodology in measuring the
embedded software component quality and to evaluate the difficulties in
using the methodology.
Question.
1. Are the quality attributes proposed in the EQM used during the
embedded software component quality evaluation?
2. Are the evaluation techniques proposed in the EQL used during the
embedded software component quality evaluation?
3. Do the evaluators have difficulties applying the methodology
and its activities, steps and guidelines?
Metric.
H0’: coverage of the embedded software component quality
attributes proposed in the EQM versus the quality attributes used
during the component evaluation < 80%
H0’’: coverage of the evaluation techniques proposed in the EQL
for the quality attributes defined during the component
evaluation < 80%
H0’’’: percentage of evaluators that had difficulty understanding,
following and using the Embedded Software Component Quality
Evaluation Methodology > 20%
The proposed EQM must contain the major quality attributes necessary
for any kind of embedded software component evaluation. In this sense, the null
hypothesis H0’ states that the coverage of the quality attributes proposed in the
EQM versus the quality attributes used during the component evaluation is less
than 80%.
The evaluation staff should define the techniques that will be used to
evaluate each quality attribute defined previously. In this way, the null
hypothesis H0’’ states that the coverage of the evaluation techniques proposed
in the EQL for the quality attributes defined for the component evaluation is
less than 80%.
Finally, the embedded component evaluation methodology consists of
four modules, with a set of activities, steps and guidelines to follow in
order to accomplish the component evaluation. In this way, the null hypothesis
H0’’’ states that the percentage of evaluators that have difficulty understanding,
following, and using the whole embedded software component quality evaluation
methodology is more than 20%.
The values of these hypotheses (80%, 80% and 20%, respectively) were
obtained through the feedback of researchers of the RiSE group and embedded
system designers and quality engineers of a CMMi level 3 Brazilian software
company (C.E.S.A.R.). Thus, these values constitute a first step
towards well-defined indices which the methodology must achieve in order to
indicate its viability.
Alternative Hypothesis. This is the hypothesis in favor of which the
null hypothesis is rejected. In this study, the alternative hypothesis determines
that the use of the methodology is feasible and that justifies its use. Thus, the
following hypothesis can be defined:
Goal. To determine the feasibility of the methodology in measuring the
embedded software component quality, and to evaluate the difficulties in
using the methodology.
Question.
1. Are the quality attributes proposed in the EQM used during the
component evaluation?
2. Are the evaluation techniques proposed in the EQL used during the
component evaluation?
3. Do the evaluators have difficulties applying the methodology?
Metric.
H1: coverage of the component quality attributes proposed in the
EQM versus the quality attributes used during the component
evaluation >= 80%
H2: coverage of the evaluation techniques proposed in the EQL
for the quality attributes defined for the component evaluation >=
80%
H3: percentage of evaluators that had difficulty understanding,
following and using the Embedded Software Component Quality
Evaluation Methodology <= 20%
Independent Variables. The independent variables are the education
and the experience of the evaluators. This information can be used in the
analysis for the formation of blocks.
Dependent Variables. The dependent variables are the quality of the
EQM and EQL developed and the usability of the proposed methodology to
assure embedded component quality. The quality of the EQM and EQL will
be measured through their feasibility, and the quality of the methodology will be
measured through the users’ capacity to understand, use and correctly execute
all the guidelines, activities and steps of the methodology.
Qualitative Analysis. The qualitative analysis aims to evaluate the
difficulty of applying the proposed methodology and the quality of the
material used in the study. This analysis will be performed through the
questionnaire in Appendix C. This questionnaire is very important because it will
allow evaluation of the difficulties that the evaluators have with the embedded
quality process and with the methodology, evaluating the provided material and
the training material, and improving these documents in order to replicate the
experiment in the future. Moreover, this evaluation is important because it
verifies whether the material is influencing the results of the study.
Randomization. This technique can be used in the selection of the
evaluators. Ideally, the evaluators must be selected randomly from a set of
candidates. However, the evaluators were selected by convenience sampling.
Blocking. Sometimes there is a factor that probably has an effect on the
response, but the experimenter is not interested in that effect. If the effect of
the factor is known and controllable, it is possible to use a design technique
called blocking. Blocking is used to systematically eliminate the undesired effect
in the comparison among the treatments. In this study, the necessity of dividing
the evaluators into blocks was not identified, since the study evaluates just
three factors: the completeness of the EQM and EQL, and the use of the
methodology.
Balancing. In some experiments, balancing is desirable because it both
simplifies and strengthens the statistical analysis of the data. However, in this
study it is not necessary to divide the evaluators, since there is only one group.
Internal Validity. The internal validity of the study is defined as the
capacity of a new study to repeat the behavior of the current study, with the
same evaluators and objects with which it was executed (Wohlin et al., 2000).
The internal validity of this study depends on the number of evaluators. The
study is planned to have between two and four evaluators in order to guarantee
good internal validity.
External Validity. The external validity of the study measures its
capability to be generalized, i.e., the capability to repeat the
same study in other research groups (Wohlin et al., 2000). In this study, two
possible problems are related to the external validity: the evaluators’
motivation, since some evaluators could perform the study without
responsibility or without a real interest in performing the project with good
quality, as could happen in an industrial project; and the evaluators’
experience, since their background and experience in the embedded design area
(including development, testing and quality) could be lower than expected in
this experiment. The external validity of the study is considered sufficient,
since it aims to evaluate the viability of applying the embedded software
component quality evaluation methodology. Once the viability is shown, new
studies can be planned in order to refine and improve the process.
Construct Validity. The construct validity of the study refers to the
relation between the theory that is to be proved and the instruments and
evaluators of the study (Wohlin et al., 2000). In this study, a relatively well
known and easily understandable component was chosen. This choice prevents
the evaluators’ previous experience from leading to a wrong interpretation of
the impact of the proposed methodology.
Conclusion Validity. This validity is concerned with the relationship
between the treatment and the outcome, and determines the capability of the
study to generate conclusions (Wohlin et al., 2000). These conclusions will be
drawn by the use of descriptive statistics.
5.5 The project used in the experimental study
The component submitted to quality evaluation is a Serial-Serial Baud
Rate Converter. This component is used to connect two devices with different
baud rates. One application example is the K-line bus gateway, shown
graphically in Figure 5.1. The serial vehicle diagnostics protocol known as K-line
is defined in ISO 9141-2. Its serial data communication is very similar to
RS-232, but with different voltage signal levels and only one bidirectional line.
The serial data rate defined is 10,400 bps, a nonstandard baud rate and thus not
available on PCs’ RS-232C controllers.
The software component under evaluation resides in a serial gateway
which converts data from standard PC baud rates to the K-line bus and
vice-versa. The component is a Serial-Serial (UART) RS-232 baud rate
converter. It works in a range of 75 bps to 921,600 bps. It was developed in the
C language and is compatible with any ANSI C compiler. The BRConvert source
code is available in Appendix C. It requires hardware with two serial ports
available. The baud rate conversion is done bidirectionally from one port to
another, and the baud rate limits depend on the specific hardware.
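The forwarding logic of such a converter can be sketched as a polling loop that moves pending bytes between the two UARTs; the baud rates themselves are handled by the UART hardware configuration, so the loop is rate-agnostic. This is an illustrative sketch only, not the BRConvert source from Appendix C: the port abstraction, function names and the host-side stubs below are all hypothetical (the 5-byte request in the demo merely mimics an ISO 9141-2-style diagnostic message).

```c
#include <string.h>

/* Hypothetical port abstraction: the real component talks to the LPC2148
 * UART registers; here each port is a set of byte-stream callbacks so the
 * forwarding logic stays portable and testable on a host machine. */
typedef struct {
    int (*rx_ready)(void *ctx);                     /* non-zero if a byte waits */
    unsigned char (*read_byte)(void *ctx);          /* read one received byte   */
    void (*write_byte)(void *ctx, unsigned char b); /* transmit one byte        */
    void *ctx;
} serial_port_t;

/* Forward pending bytes in both directions; returns the number of bytes
 * moved. The loop never touches baud rates: those are fixed per UART. */
int brconvert_poll(serial_port_t *a, serial_port_t *b)
{
    int moved = 0;
    while (a->rx_ready(a->ctx)) { b->write_byte(b->ctx, a->read_byte(a->ctx)); moved++; }
    while (b->rx_ready(b->ctx)) { a->write_byte(a->ctx, b->read_byte(b->ctx)); moved++; }
    return moved;
}

/* Host-side stub standing in for a UART: separate RX and TX queues. */
typedef struct {
    unsigned char rx[64]; int rx_head, rx_tail;
    unsigned char tx[64]; int tx_tail;
} mem_uart_t;

static int stub_ready(void *ctx) { mem_uart_t *u = ctx; return u->rx_head != u->rx_tail; }
static unsigned char stub_read(void *ctx) { mem_uart_t *u = ctx; return u->rx[u->rx_head++]; }
static void stub_write(void *ctx, unsigned char b) { mem_uart_t *u = ctx; u->tx[u->tx_tail++] = b; }

/* Demo: a 5-byte request injected on the "PC" side must arrive unchanged
 * on the "K-line" side; returns the number of bytes forwarded, or -1. */
int brconvert_demo(void)
{
    static const unsigned char req[5] = { 0x68, 0x6A, 0xF1, 0x01, 0xC4 };
    mem_uart_t pc, kline;
    serial_port_t p = { stub_ready, stub_read, stub_write, &pc };
    serial_port_t k = { stub_ready, stub_read, stub_write, &kline };
    int moved;

    memset(&pc, 0, sizeof pc);
    memset(&kline, 0, sizeof kline);
    memcpy(pc.rx, req, 5);
    pc.rx_tail = 5;

    moved = brconvert_poll(&p, &k);
    return (moved == 5 && memcmp(kline.tx, req, 5) == 0) ? moved : -1;
}
```

Keeping RX and TX queues separate in the stub mirrors real UART hardware and prevents forwarded bytes from being echoed back as if they had been received.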
Figure 5.1: Baud Rate Converter Evaluation Architecture.
5.6 The Architecture, Environment and Scenarios
In the automotive context, the serial baud rate converter component is
primarily used when a user wants to interface a K-line bus with a notebook. In
the laboratory, where the use of a real vehicle is not practical, only a
stand-alone ECU is required to simulate the vehicle behavior. The ECU responds
to K-line messages exactly as a real vehicle would, using the same signal levels,
data packets and timing, although all data values are null.
Figure 5.2: Siemens ECU used to simulate the vehicle behavior.
5.6.1 The Architecture
The architecture proposed for the evaluation (Figure 5.1) is composed of
one Siemens ECU (Figure 5.2), one serial port enabled PC and a microcontroller
board (Figure 5.3), where the component will run and the baud rate conversion
will take place. ECU and PC will be connected to one UART port of the
microcontroller board.
Figure 5.3: Development board, model LPC-P2148 (Olimex).
5.6.2 The Environment
The matching of voltage levels between the PC and the microcontroller
will be done through an RS-232C transceiver, i.e., -12V and +12V (high and low
levels) converted to +3.3V and 0V (high and low levels, respectively). As
previously said, the K-line protocol requires different voltage levels and has
only one data line to receive and transmit. So the physical interface between
the microcontroller board and the ECU needs an intervention as well: the ECU
defines +12V as the high level. This conversion will be provided by a small
K-line interface board connected between the ECU and the microcontroller
board, as shown in Figure 5.4.
The microcontroller board proposed was manufactured by Olimex, model
LPC-P2148, embedded with the NXP LPC2148 microcontroller (Figure 5.3). The
component will run on this controller. The evaluator must connect the PC to the
UART0 of the board and the ECU to the UART1 of the same board.
Figure 5.4: K-line interface board used to connect ECU and microcontroller.
The PC must have a terminal emulator which sends and receives
characters at a given baud rate. This terminal should be able to save the received
characters in a file for later analysis.
5.6.3 The Scenario
The scenario defined was to convert the PC serial port baud rate of
115,200 bps to the K-line baud rate of 10,400 bps:
• Connect the PC to UART0 of the microcontroller board;
• Connect the K-line interface board to UART1 of the microcontroller
board;
• Connect the ECU to the K-line interface board;
• Configure the component with the specifications: UART0 at 115,200 bps
and UART1 at 10,400 bps;
• Compile, download and run the component on the microcontroller
board;
• Configure the terminal emulator of the computer at 115,200 bps;
• Send the request data string from the computer and wait for the reply;
• Save the contents of the data response received in a file.
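The configuration step above can be illustrated with a simple range check against the component's documented 75 to 921,600 bps operating range (Section 5.5). The structure and function names below are hypothetical and stand in for the real BRConvert configuration API, which is in Appendix C.

```c
/* Documented operating range of the component (Section 5.5). */
#define BR_MIN_BPS 75UL
#define BR_MAX_BPS 921600UL

/* Hypothetical configuration record for the two ports of the scenario. */
typedef struct {
    unsigned long uart0_baud;  /* PC side: 115,200 bps in this scenario     */
    unsigned long uart1_baud;  /* K-line side: 10,400 bps in this scenario  */
} brconvert_cfg_t;

/* Returns 1 if both rates fall inside the documented range, 0 otherwise. */
int brconvert_cfg_valid(const brconvert_cfg_t *cfg)
{
    return cfg->uart0_baud >= BR_MIN_BPS && cfg->uart0_baud <= BR_MAX_BPS &&
           cfg->uart1_baud >= BR_MIN_BPS && cfg->uart1_baud <= BR_MAX_BPS;
}
```

With the scenario's values, `{ 115200, 10400 }` passes the check, while a rate outside 75..921,600 bps would be rejected before the hardware is configured.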
5.7 The Instrumentation
Before the experiment can be executed, all experiment instruments must
be ready. This includes the evaluators, experiment objects, guidelines, forms
and instrumentation (equipment/tools). In this study, the questionnaire in
Appendix B was used, in conjunction with the papers about the methodology
and the step-by-step guide to carry out the evaluation.
5.8 The Execution
Experimental Environment. The experimental study was conducted
in an industrial environment, specifically at the C.E.S.A.R. (Recife Center for
Advanced Studies and Systems) embedded systems lab, from September to
December 2009. The experiment was composed of two evaluators, and the
whole evaluation process took about 160 hours. The evaluation process activity
was conducted using the baud rate converter (BRConverter) component, which
was developed by an embedded systems engineer at C.E.S.A.R.
Training. The evaluators who used the proposed process were trained
before the study began. The training took 12 hours, divided into 4 lectures of
three hours each. Before and after the training, each evaluator spent about 16
hours reading the papers about the methodology.
Evaluators. The evaluators were two embedded system engineers at
C.E.S.A.R. The first one (Engº 1) is an embedded software/firmware
development expert, with vast experience (more than 5 years) in embedded
system design. The second evaluator (Engº 2) is an embedded hardware
development expert, with more than 3 years in embedded system design. Engº 1
has participated in industrial projects involving some kind of software quality
activity. The evaluators are specialists in the ANSI C programming language
and in the µVision IDE (Integrated Development Environment), the same
language and environment selected to develop the components.
Costs. The experimental study was carried out outside the evaluators’
working hours. The environments for execution were the embedded labs at
C.E.S.A.R. and the evaluators’ homes, so the cost of the study was basically the
planning and execution of the evaluation.
5.9 The Analysis and Interpretation
Quantitative Analysis. The quantitative analysis covers the coverage
of the quality attributes proposed in the EQM and of the evaluation techniques
proposed in the EQL. The analyses were performed using descriptive statistics.
5.9.1 Coverage of the component quality attributes.
The component is used in the communication between a car and a
PC-based diagnosis system. Thus, EQL I was defined through the guidelines for
selecting the quality evaluation level presented in chapter 4, section 4. The
component was evaluated based on EQL I, but the evaluation staff added other
quality attributes from EQL II judged important for this specific component
evaluation. The component evaluation used the characteristics,
sub-characteristics and quality attributes presented in Table 5.1.
Table 5.1: Quality attributes selected based on EQL I.

Characteristic  | Sub-characteristic         | EQL Level | Quality Attribute                    | Importance
Functionality   | Real-Time                  | I         | Response time (Latency)              | 4
Functionality   | Real-Time                  | I         | Execution time                       | 4
Functionality   | Accuracy                   | I, II     | Precision                            | 5
Functionality   | Self-contained             | II        | Dependability                        | 2
Reliability     | Recoverability             | I         | Error Handling                       | 3
Reliability     | Safety                     | I, II     | Integrity                            | 3
Usability       | Configurability            | I         | Effort to configure                  | 4
Usability       | Configurability            | I         | Understandability                    | 3
Usability       | Attractiveness             | I         | Effort to operate                    | 3
Efficiency      | Resource Behavior          | I         | Peripheral utilization               | 3
Efficiency      | Energy consumption         | I         | Amount of Energy Consumption         | 3
Efficiency      | Data Memory Utilization    | I         | Amount of Data Memory Utilization    | 4
Efficiency      | Program Memory Utilization | I         | Amount of Program Memory Utilization | 3
Maintainability | Stability                  | I, II     | Modifiability                        | 4
Maintainability | Changeability              | I         | Change Effort                        | 3
Maintainability | Testability                | I, II     | Test suite provided                  | 4
Portability     | Deployability              | I         | Complexity level                     | 4
Portability     | Replaceability             | I         | Backward Compatibility               | 3
Portability     | Flexibility                | I         | Configuration capacity               | 3
Portability     | Flexibility                | II        | Mobility                             | 3
Portability     | Reusability                | I         | Architecture compatibility           | 3
The sub-characteristics presented above contain 21 possible quality
attributes to be evaluated in a component at EQL I (more details can be seen in
chapter 4, section 4, on the EQL). From all of them, the evaluation staff
selected 19 quality attributes to evaluate the component’s quality using EQL I.
Thus, 90.47% of all possible quality attributes were selected and, in this way,
H0’ was rejected.
5.9.2 Coverage of the evaluation techniques.
After defining the quality attributes, the evaluation staff must define
which evaluation techniques will be used to measure each quality attribute
proposed previously. Table 5.2 shows the evaluation techniques defined for
evaluating the component based on EQL I.
Table 5.2: Evaluation techniques selected by the evaluation staff.

Characteristic  | Sub-characteristic         | Quality Attribute                    | Evaluation Technique                                              | EQL
Functionality   | Self-contained             | Dependability                        | Dependency analysis                                               | II
Functionality   | Real-Time                  | Response time (Latency)              | Evaluation measurement (Time analysis)                            | I
Functionality   | Real-Time                  | Execution time                       | Evaluation measurement (Time analysis)                            | I
Functionality   | Accuracy                   | Precision                            | Precision analysis (Evaluation measurement)                       | I
Functionality   | Accuracy                   | Precision                            | Functional Tests (black-box)                                      | I
Functionality   | Accuracy                   | Precision                            | Structural Tests (white-box)                                      | II
Reliability     | Recoverability             | Error Handling                       | Code Inspection                                                   | I
Reliability     | Safety                     | Integrity                            | Code Inspection                                                   | I
Reliability     | Safety                     | Integrity                            | Algorithmic Complexity                                            | II
Usability       | Configurability            | Effort to configure                  | Effort to configure analysis                                      | I
Usability       | Configurability            | Effort to configure                  | Inspection of user interfaces                                     | I
Usability       | Configurability            | Understandability                    | Documentation analysis (user guide, architectural analysis, etc.) | I
Usability       | Attractiveness             | Effort to operate                    | Evaluation measurement                                            | I
Usability       | Attractiveness             | Effort to operate                    | Inspection of user interfaces                                     | I
Efficiency      | Resource Behavior          | Peripheral utilization               | Evaluation measurement                                            | I
Efficiency      | Energy consumption         | Amount of Energy Consumption         | Evaluation measurement                                            | I
Efficiency      | Data Memory Utilization    | Amount of Data Memory Utilization    | Evaluation measurement                                            | I
Efficiency      | Program Memory Utilization | Amount of Program Memory Utilization | Evaluation measurement                                            | I
Maintainability | Stability                  | Modifiability                        | Code metrics and programming rules                                | II
Maintainability | Stability                  | Modifiability                        | Documents Inspection                                              | II
Maintainability | Stability                  | Modifiability                        | Static Analysis                                                   | II
Maintainability | Changeability              | Change Effort                        | Changeability analysis                                            | I
Maintainability | Testability                | Test suite provided                  | Analysis of the test suite provided                               | I
Portability     | Deployability              | Complexity level                     | Deployment analysis                                               | I
Portability     | Replaceability             | Backward Compatibility               | Backward compatibility analysis                                   | I
Portability     | Flexibility                | Configuration capacity               | Configuration analysis                                            | I
Portability     | Flexibility                | Mobility                             | Mobility analysis                                                 | II
Portability     | Reusability                | Architecture compatibility           | Hardware/Software compatibility analysis                          | I
The quality attributes presented above admit 24 possible evaluation
techniques that could be used to measure the component quality at EQL I
(more details can be seen in chapter 4, section 4, on the EQL). From all of
them, the evaluation staff selected 21 evaluation techniques to evaluate the
component’s quality using EQL I. Thus, 87.50% of the evaluation techniques
were selected and, in this way, H0’’ was rejected.
As with H0’ for EQL I, H0’’ is also rejected, since the evaluation
techniques selected are the basic techniques for evaluating the quality
attributes selected previously (see Table 5.1). After selecting the quality
attributes and the evaluation techniques for EQL I, the evaluation staff should
define the metrics using the EML, the punctuation level and the tools to be used
for each quality attribute in order to execute the evaluation. All data generated
during the process are collected in order to be analyzed by the evaluation staff.
In this way, the evaluation staff measured the BRConverter quality using
the definitions of EQL I and EQL II, and the quality achieved is presented in
Figure 5.5. The results can be interpreted as: 0% <= x <= 100%, closer to 100%
being better.
Figure 5.5 shows the final result of each quality characteristic, obtained
by a weighted average in which the importance assigned to each
sub-characteristic is used as the weight. Figure 5.6 shows the scores of the
quality in use characteristics, which are obtained from users’ feedback after
using the component in a real environment, according to their expectations.
More details can be seen in Appendix C.
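The weighted average behind the Figure 5.5 scores can be sketched as follows: each sub-characteristic score in [0, 1] is weighted by its importance from Table 5.1. The function name is illustrative, and the sample values used in the usage note are placeholders, not the measured results of the evaluation.

```c
/* Weighted average of sub-characteristic scores (each in [0, 1]),
 * using the Table 5.1 importance values as weights. Returns the
 * characteristic's aggregate score, or 0.0 for an empty input. */
double characteristic_score(const double *scores, const int *weights, int n)
{
    double num = 0.0;
    int den = 0, i;
    for (i = 0; i < n; i++) {
        num += scores[i] * weights[i];  /* accumulate score * importance */
        den += weights[i];              /* accumulate total importance   */
    }
    return den > 0 ? num / den : 0.0;
}
```

For example, two sub-characteristics scoring 1.0 and 0.5, both with importance 4, aggregate to (1.0*4 + 0.5*4) / 8 = 0.75.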
Figure 5.5: Evaluation of quality characteristics: BRConverter. (Bar chart of
quality scores on a 0 to 1 scale for Functionality (EQL I), Reliability (EQL II),
Usability (EQL I), Efficiency (EQL I), Maintainability (EQL II) and
Portability (EQL I).)
Figure 5.6: Quality in use characteristics: BRConverter. (Bar chart of quality
scores: Productivity 73%, Satisfaction 82%, Security 68%, Effectiveness 73%.)
5.9.3 Difficulties in the Embedded Software Component Quality
Evaluation Process (EQP).
At the end of the study, the evaluators answered a questionnaire,
presented in Appendix C, reporting the main difficulties found while using the
process, as shown next.
• Difficulties in Establishing Evaluation Requirements.
When analyzing the evaluators’ answers about the difficulties in
establishing evaluation requirements, one evaluator found no
significant difficulties in this module due to the component’s
simplicity. According to this evaluator, “a possible improvement
should be a questionnaire to help the user to define
sub-characteristics importance”. The other evaluator recognized
the importance and the difficulty of correctly specifying a quality
model: “There resides the most important decisions of the whole
evaluation for me. If you select wrong characteristics or
sub-characteristics, you can make the evaluation incomplete,
inefficient or even invalid”. This evaluator agrees that the EQL is a
good starting point for the selection of the main characteristics,
but suggested that its chapter come before the EQP. The
evaluators reported some difficulties in the step “Define the
goals/scope of the evaluation”. In order to decrease this difficulty,
it would be interesting to store in a knowledge base the past
decisions about the goals/scope of evaluations already executed.
Thus, the evaluation staff could analyze past experience to help
them during these activities and steps, improving their definitions
according to the similarity or relevance of previous goals/scope
definitions.
• Difficulties in Specifying the Evaluation. Analyzing the
evaluators’ answers, the evaluators had no difficulties in choosing
the EQL for the evaluation, thanks to the guidelines. However, they
had considerable work in completely defining the Embedded
Quality Model (EQM), due to the large number of quality attributes
chosen and, consequently, of evaluation techniques. They
recommended developing a computational tool to support this
activity. According to one of the evaluators: “The use of a
framework tool could make it easier to find information about
each attribute”. Another suggestion from one evaluator was to
make a clear distinction between qualitative and quantitative
quality attributes; this way, qualitative attributes could be
evaluated separately. They reported some difficulties deciding
which characteristics, sub-characteristics and quality attributes
should be selected. The evaluators have little experience in
software quality, so they reported some difficulty in understanding
the characteristics, sub-characteristics and quality attributes in
order to select them during the process.
• Difficulties in Designing the Evaluation. After analyzing the
evaluators’ answers, it is apparent that they had some difficulties
in defining which evaluation technique should be used to measure
the quality attributes defined previously. After defining the
evaluation techniques, they found no difficulty in performing the
evaluation design. According to one evaluator: “Once we have
already decided which attributes to evaluate, the decision
regarding how to do it becomes much simpler. I see no difficulty in
this module”. One evaluator recognized that designing the
evaluation requires some knowledge of test planning.
• Difficulties in Executing the Evaluation. According to the
evaluators, they did not have any difficulty during this activity: the
evaluation plan was well defined and the whole process was very
well documented, which made it easy to execute the planned
activities. However, one evaluator suggested a computational tool
or spreadsheet to help in compiling the evaluation results.
Among the 26 steps of the component evaluation process, the evaluators
reported difficulty in only 3 steps, which means 11.5% of difficulty over the
whole process, and, in this way, H0’’’ was rejected.
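The three hypothesis metrics can be recomputed directly from the reported counts (19 of 21 quality attributes, 21 of 24 evaluation techniques, difficulty in 3 of 26 steps) and checked against the 80%/80%/20% thresholds from the planning. The helper names below are illustrative.

```c
/* Percentage of a part over a whole, e.g. pct(19, 21) = 90.47...%. */
double pct(int part, int whole)
{
    return 100.0 * part / whole;
}

/* Returns 1 when all three null hypotheses are rejected against the
 * thresholds defined in the planning section. */
int null_hypotheses_rejected(void)
{
    return pct(19, 21) >= 80.0 &&   /* H0':   90.47% attribute coverage  */
           pct(21, 24) >= 80.0 &&   /* H0'':  87.50% technique coverage  */
           pct(3, 26)  <= 20.0;     /* H0''': 11.5% of steps difficult   */
}
```

All three comparisons hold for the reported counts, matching the rejections of H0', H0'' and H0''' stated in the analysis.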
Conclusion. The experimental study shows that the methodology is
viable, with a complexity level that depends on the EQL chosen to measure
embedded software component quality. The component evaluation using the
methodology presented low complexity. Some aspects related to understanding
sub-characteristic goals and questions need to be reviewed and improved, and
a full reading of the methodology chapter is required in order to gain a
first-level understanding. With the results identified in the experiment, the
hypothesis values can be calibrated in a more accurate way. Nevertheless, most
of the problems identified by the evaluators in terms of difficulties are more
related to the training provided than to the process itself. Finally, developing a
computational tool would help users to easily customize their evaluation. A tool
could make it easier to add/exclude a characteristic/sub-characteristic and to
see detailed information on each characteristic (such as goals, questions, EQL
and metrics).
Qualitative Analysis. After concluding the quantitative analysis for the
experiment, the qualitative analysis was performed. This analysis was based on
the answers defined for the questionnaire presented in Appendix C.
Usefulness of the Methodology. The evaluators reported that the
methodology was useful to perform the embedded software component quality
evaluation. In the words of one evaluator: “I found it very useful, flexible and
specially related for embedded systems components domain”. However, the
evaluators indicated some improvements in activities of the process which
should be carefully considered and reviewed in order to improve the proposed
process.
5.10 Lessons Learned
After concluding the experimental study, some aspects should be
considered in order to repeat the experiment, since they were seen as
limitations of the first execution.
Training. Besides the improvements related to the lectures, both
evaluators highlighted that the training should include a complete and detailed
example covering the whole EQP.
Questionnaires. The questionnaires should be reviewed in order to
collect more precise data on the users' feedback and the evaluation process.
Moreover, a possible improvement is to collect this feedback after each
iteration during the project, avoiding the loss of useful information from the
evaluators.
Evaluators' Skills. The process does not define the skills necessary for
each role in the process. The evaluators did not have considerable experience in
the quality area; with more experienced evaluators, the results achieved could
be better and the methodology could be analyzed more accurately. The roles
were defined in an informal way, often allocating the evaluators to the roles
defined in their jobs. These issues should be reviewed in order to make the
process more systematic and to reduce risks.
Motivation. As the project was relatively long, it was difficult to keep
the evaluators motivated throughout the execution. Thus, this aspect should be
analyzed in order to try to control it. A possible solution is to define some
checkpoints during the project.
5.11 Summary
This chapter presented the definition, planning, instrumentation,
execution, analysis and interpretation of the experimental study that evaluated
the viability of the embedded software component quality evaluation
methodology. The study analyzed whether evaluators could use the process
together with the EQM and EQL proposed in this work, and also analyzed the
difficulties found during the use of the methodology and its activities, steps and
guidelines. The analysis has shown that the methodology is feasible and can be
used to evaluate embedded component quality, and it identified some
directions for improvement. However, one aspect should still be considered:
studies based on observation, in order to identify problems and points for
improvement.
The next chapter presents the conclusions of this work, its main
contributions and directions for future work.
6 Conclusions and future works
As stated at the beginning, the demands that companies place on
these electronic products include low production costs, short time to market
and high quality. The cost and time-to-market issues are addressed by means of
the rapidly emerging Component-Based Development (CBD) approach. In
CBSE, a proper search, selection and evaluation process for components is
considered the cornerstone of the development of any effective component-
based system. Thus, the assessment and evaluation of software components
have become a compulsory and crucial part of any CBSD lifecycle. Many
organizations fail in their attempts to select an appropriate approach for use in
Component-Based Software Development (CBSD), which is being used in a
wide variety of application areas, mostly in the embedded domain, where the
correct operation of the components is often critical for business success and,
in some cases, human safety. Thus, software component quality evaluation has
become an essential activity in order to bring reliability to (re)using software
components.
In this way, in order to properly enable the evaluation of embedded
software components, meeting the real needs of embedded design, namely
building systems fast, cheaply and with high quality, an Embedded Software
Component Quality Evaluation Methodology is mandatory.
6.1 Contributions
One of the main contributions of this work is to address the lack of
consistency between the standards ISO/IEC 9126, ISO/IEC 14598 and ISO/IEC
25000 for quality evaluation, also including the software component quality
context and extending it to the embedded domain. These standards provide
high-level definitions of characteristics and metrics for software products, but
do not provide ways to use them effectively, making it very difficult to apply
them without acquiring more knowledge from supplementary sources.
In this way, another significant point of this research is to demonstrate
that quality evaluation applied to embedded components is not only possible
and practically viable, but also directly applicable in the embedded systems
industry. Some evaluations have been envisioned in conjunction with industry
in order to bring trust and maturity to the proposed embedded software
component quality evaluation methodology.
Other contributions of the new methodology are listed below:
• Enabling acquirers of components to evaluate which component
best fits the system's requirements;
• Allowing an embedded component produced in a software reuse
environment to have its quality evaluated before being stored in a
repository system, especially in the robust framework for software
reuse (Almeida et al., 2004);
• Allowing the third-party quality evaluation required by companies
in order to achieve trust in their components in the embedded domain.
To achieve this objective, intermediate works were carried out, as listed
below:
• Research on the requirements, constraints and needs of embedded
systems design;
• A survey of the state of the art in embedded software component
quality, evaluation and certification research;
• Definition of a new quality model specifically for embedded
systems (EQM);
• Creation of a new set of evaluation quality levels (EQL);
• Definition of a set of metrics using the EMA;
• An experimental study to verify the viability and practicality of the
methodology and to demonstrate its usage.
6.2 Future Work
During the research, the need for a number of improvements was
perceived. Some of them are the following:
• Predicting system properties. A research challenge today is to
predict system properties from component properties. This is interesting
for system integration, in order to achieve predictability. The analysis of
many global properties from component properties is hindered by
inherent complexity issues, and efforts should be directed to finding
techniques for coping with this complexity.
• Embedded Quality Certification. The long-term plan could be to
achieve a degree of maturity that could serve as an embedded component
certification standard for the embedded design industry, making it
possible to create an Embedded Quality Certification. Through the
Brazilian projects in which the RiSE group is involved, this "dream" may
become reality through the maturation of the process and the reliability
of embedded software component reuse within it;
• Tool Support. Tool support is necessary in order to aid the usage of
the proposed methodology. It is therefore vitally important to develop a
tool that supports the whole evaluation process, its activities, steps, etc.,
since a lot of information is produced during the evaluation process that
could be lost without tool support; and
• Risk and Cost/Benefit Management Model. An interesting aspect
for the customer is the risk and cost/benefit management that
component quality assurance could bring to its business, in order to
analyze whether the costs related to component quality assurance are
acceptable or not (Keil & Tiwana, 2005). In this way, a Risk and
Cost/Benefit Management Model would be a very interesting
complement to the embedded software component quality evaluation
methodology and should be carefully designed to provide the real risks,
costs and possible benefits to the component customer.
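As a minimal illustration of what such an acceptability analysis could compute, the sketch below compares the cost of quality assurance against the expected loss it avoids. The function, figures and threshold are hypothetical assumptions of ours, not part of the model this work proposes.

```python
# Hypothetical sketch: a QA investment is acceptable when the expected
# loss it avoids (failure-probability reduction times failure cost)
# outweighs the cost of the quality assurance itself.
def qa_net_benefit(failure_cost, p_fail_without, p_fail_with, qa_cost):
    avoided_loss = (p_fail_without - p_fail_with) * failure_cost
    return avoided_loss - qa_cost

# Illustrative figures only: a 30% -> 5% failure-risk reduction on a
# $200,000 failure, bought with $20,000 of component QA.
benefit = qa_net_benefit(200_000, 0.30, 0.05, 20_000)
print(benefit)  # 30000.0, so the QA spending pays off in this scenario
```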
6.3 Academic Contributions
The knowledge developed during this work resulted in the following
publications:
• (Carvalho et al., 2009a) Carvalho, F.; Meira, S. R. L.; Eulino, J.;
Freitas, B. Embedded Software Component Quality and
Certification: a Preliminary Evaluation. In: 35th IEEE EUROMICRO
Conference on Software Engineering and Advanced Applications (SEAA),
2009, Patras, Greece.
• (Carvalho et al., 2009b) Carvalho, F.; Meira, S. R. L.; Eulino, J.;
Xavier, E.; Oliveira, D. A Research of Embedded Software
Component Quality and Certification. In: 11th Brazilian Workshop on
Real-Time and Embedded Systems (WTR), 2009, Recife.
• (Carvalho et al., 2009c) Carvalho, F.; Meira, S. R. L.; Freitas, B.;
Ellen, N. An Embedded Software Component Quality
Model - EQM. In: IEEE International Conference on Information
Reuse and Integration (IRI), 2009, Las Vegas.
• (Carvalho et al., 2009d) Carvalho, F.; Meira, S. R. L.; Freitas, B.;
Ellen, N. A Quality Model for Embedded Software
Component. In: 11th Brazilian Workshop on Real-Time and Embedded
Systems (WTR), 2009, Recife.
• (Carvalho et al., 2009e) Carvalho, F.; Meira, S. R. L.; Xavier, E.;
Eulino, J. An Embedded Software Component Maturity Model. In: 9th
International Conference on Quality Software (QSIC), 2009, Jeju.
• (Carvalho et al., 2009f) Carvalho, F.; Meira, S. R. L.; Silveira, J.
An Embedded Software Component Quality Maturity Model (EQM2). In:
Conferencia de Ingeniería de Requisitos y Ambientes de Software
(IDEAS), 2009, Medellín.
• (Carvalho et al., 2009g) Carvalho, F.; Meira, S. R. L. Towards an
Embedded Software Component Quality Verification Framework. In: 14th
IEEE International Conference on Engineering of Complex Computer
Systems (ICECCS), 2009, Potsdam.
7 References
(Åkerholm et al.,2004) M. Åkerholm; J. Fredriksson; K. Sandström; I.
Crnkovic. Quality Attribute Support in a Component Technology for
Vehicular Software. Fourth Conference on Software Engineering
Research and Practice in Sweden, Linköping, Sweden, 2004.
(Almeida et al., 2004) Almeida, E. S.; Alvaro, A.; Lucrédio, D.; Garcia, V. C.;
Meira, S. R. L. RiSE Project: Towards a Robust Framework for Software
Reuse, In: The IEEE International Conference on Information Reuse
and Integration (IRI), Las Vegas, USA, pp. 48-53, 2004.
(Almeida, 2007) Almeida, E.S. The RiSE Process for Domain Engineering
(RiDE), PhD Thesis, Federal University of Pernambuco, 2007.
(Alvaro et al., 2005) Alvaro, A.; Almeida, E. S.; Meira, S. R. L. A Software
Component Certification: A Survey, In: The 31st IEEE EUROMICRO
Conference on Software Engineering and Advanced Applications
(SEAA), Component-Based Software Engineering (CBSE) Track, Porto,
Portugal, 2005.
(Alvaro et al., 2006) Alvaro, A.; Almeida, E.S.; Meira, S. R. L. A Software
Component Quality Model: A Preliminary Evaluation, In: The 32nd
IEEE EUROMICRO Conference on Software Engineering and
Advanced Applications (SEAA), Component-Based Software
Engineering (CBSE) Track, Cavtat/Dubrovnik, Croatia, 2006.
(Alvaro, 2009) Alvaro, A. A Software Component Quality Framework, PhD.
Thesis, Federal University of Pernambuco, 2009.
(Andreou & Tziakouris, 2007) Andreou, A. S.; Tziakouris, M. A quality
framework for developing and evaluating original software
components. In: Information & Software Technology. Vol. 49, No. 02,
pp. 122-141, 2007.
(Atkinson et al., 2005) Atkinson, C.; Bunse, C.; Peper, C.; Gross, H. Component-
based software development for embedded systems: An overview of
current research trends. State-of-the-Art Survey, Lecture Notes in
Computer Science 3778, Berlin: Springer, pp. 1-7, 2005.
(Avižienis et al., 2001) Avižienis, A.; Laprie, J-C.; Randell, B. Fundamental
Concepts of Computer System Dependability, In: IARP/IEEE-RAS
Workshop on Robot Dependability: Technological Challenge of
Dependable Robots in Human Environments, 2001.
(Barbacci et al, 1995) Barbacci M., Klein M., Longstaff T., and Weinstock C. B.,
Quality Attributes, report CMU/SEI-95-TR-021, CMU/SEI, 1995.
(Barros, 2001) Barros, M. O. Project Management based on Scenarios: A
Dynamic Modeling and Simulation Approach (in Portuguese), PhD
Thesis, Federal University of Rio de Janeiro, December 2001, 249 pp.
(Basili & Rombach, 1988) Basili, V.R.; Rombach, H.D. The TAME Project:
Towards Improvement-Oriented Software Environments, In: IEEE
Transactions on Software Engineering, Vol. 14, No.6, pp.758-773, 1988.
(Basili & Selby, 1984) Basili, V.R.; Selby, R.W. Data Collection and Analysis in
Software Research and Management, In: Proceedings of the American
Statistical Association and Biomeasure Society, Joint Statistical
Meetings, Philadelphia, 1984.
(Basili & Weiss, 1984) Basili, V.R.; Weiss, D.M. A Methodology for Collecting
Valid Software Engineering Data, In: IEEE Transactions on Software
Engineering, Vol. 10, No. 06, pp. 728-738, 1984.
(Basili et al., 1994) Basili, V.R.; Caldiera, G.; Rombach, H.D. The Goal Question
Metric Approach, In: Encyclopedia of Software Engineering, Vol. II,
September, pp. 528-532, 1994.
(Basili, 1992) Basili, V.R. Software Modeling and Measurement: The Goal
Question Metric Paradigm, In: Computer Science Technical Report
Series, CS-TR-2956 (UMIACS-TR-92-96), University of Maryland,
College Park, MD, September 1992.
(Bass et al., 2000) Bass, L.; Buhman, C.; Dorda, S.; Long, F.; Robert, J.;
Seacord, R.; Wallnau, K. C. Market Assessment of Component-Based
Software Engineering, In: Software Engineering Institute (SEI),
Technical Report, Vol. I, May, 2000.
Chapter 7 - References 154
(Bertoa et al., 2002) Bertoa, M.; Vallecillo, A. Quality Attributes for COTS
Components, In: The 6th IEEE International ECOOP Workshop on
Quantitative Approaches in Object-Oriented Software Engineering
(QAOOSE), Spain, Vol. 01, No. 02, pp. 128-144, 2002.
(Bertoa et al., 2006) Bertoa, M.F.; Troya, J.M.; Vallecillo, A. Measuring the
Usability of Software Components. In: Journal of Systems and
Software, Vol. 79, No. 03, pp. 427-439, 2006.
(Bertolino & Mirandola, 2003) Bertolino, A.; Mirandola, R. Towards
Component-Based Software Performance Engineering, In: Proceedings
of 6th ICSE Workshop on Component-Based Software Engineering,
USA, 2003.
(Beugnard et al., 1999) Beugnard, A.; Jezequel, J.; Plouzeau, N.; Watkins, D.
Making component contract aware, In: IEEE Computer, Vol. 32, No.
07, pp. 38-45, 1999.
(Beus-Dukic et al., 2003) Beus-Dukic, L.; Boegh, J. COTS Software Quality
Evaluation, In: The 2nd International Conference on COTS-Based
Software System (ICCBSS), Lecture Notes in Computer Science (LNCS),
Springer-Verlag, Canada, 2003.
(Beydeda & Gruhn, 2003) Beydeda, S.; Gruhn, V. State of the art in testing
components, In: The 3rd IEEE International Conference on Quality
Software (ICQS), USA, 2003.
(Boegh et al., 1993) Boegh, J.; Hausen, H-L.; Welzel, D. A Practitioner's Guide
to Evaluation of Software, In: The IEEE Software Engineering Standards
Symposium, pp. 282-288, 1993.
(Boehm et al., 1976) Boehm, B.W.; Brown, J.R.; Lipow, M. Quantitative
Evaluation of Software Quality, In: The Proceedings of the Second
International Conference on Software Engineering, pp. 592-605, 1976.
(Boehm et al., 1978) Boehm, B.; Brown, J.R.; Lipow, H.; MacLeod, G. J.; Merrit,
M. J. Characteristics of Software Quality, Elsevier North Holland, 1978.
(Brinksma et al., 2001) Brinksma, E. et al. ROADMAP - Component-based
Design and Integration Platforms, W1.A2.N1.Y1, Project IST-2001-
34820, ARTIST - Advanced Real-Time Systems, 2001.
(Brown, 2000) Brown A. W., Large-Scale Component-Based Development,
Prentice Hall, 2000.
(Brownsword et al., 2000) Brownsword, L.; Oberndorf, T.; Sledge, C. A.:
Developing New Processes for COTS-Based Systems. In: IEEE
Software, July/August, pp. 48-55, 2000.
(Caldiera & Basili, 1991) Caldiera, G.; Basili, V. Identifying and Qualifying
Reusable Software Components, In: IEEE Computer, Vol. 24, No. 02,
pp. 61–71, 1991.
(Camposano & Wilberg, 1996) Camposano, R.; Wilberg, J. Embedded System
Design, In: Design Automation for Embedded Systems, Vol. 1, pp. 5-50,
January 1996.
(Carvalho et al., 2009a) Carvalho, F.; Meira, S. R. L.; Eulino, J.; Freitas, B.
Embedded Software Component Quality and Certification: a
Preliminary Evaluation, In: 35th IEEE EUROMICRO Conference on
Software Engineering and Advanced Applications (SEAA), Patras,
Greece, 2009.
(Carvalho et al., 2009b) Carvalho, F.; Meira, S. R. L.; Eulino, J.; Xavier, E.;
Oliveira, D. A Research of Embedded Software Component Quality and
Certification, In: 11th Brazilian Workshop on Real-Time and Embedded
Systems (WTR), Recife, 2009.
(Carvalho et al., 2009c) Carvalho, F.; Meira, S. R. L.; Freitas, B.; Ellen, N.
An Embedded Software Component Quality Model - EQM, In: IEEE
International Conference on Information Reuse and Integration (IRI),
Las Vegas, 2009.
(Carvalho et al., 2009d) Carvalho, F.; Meira, S. R. L.; Freitas, B.; Ellen, N.
A Quality Model for Embedded Software Component, In: 11th Brazilian
Workshop on Real-Time and Embedded Systems (WTR), Recife, 2009.
(Carvalho et al., 2009e) Carvalho, F.; Meira, S. R. L.; Xavier, E.; Eulino, J.
An Embedded Software Component Maturity Model, In: 9th
International Conference on Quality Software (QSIC), Jeju, 2009.
(Carvalho et al., 2009f) Carvalho, F.; Meira, S. R. L.; Silveira, J. An Embedded
Software Component Quality Maturity Model (EQM2), In: Conferencia
de Ingeniería de Requisitos y Ambientes de Software (IDEAS),
Medellín, 2009.
(Carvalho et al., 2009g) Carvalho, F.; Meira, S. R. L. Towards an Embedded
Software Component Quality Verification Framework, In: 14th IEEE
International Conference on Engineering of Complex Computer
Systems (ICECCS), Potsdam, 2009.
(Cechich et al., 2003) Cechich, A.; Vallecillo, A.; Piattini, M. Assessing
component based systems, In: Component Based Software Quality,
Lecture Notes in Computer Science (LNCS), pp. 1–20, 2003.
(Chen et al., 2005) Chen, S.; Liu, Y.; Gorton, I.; Liu, A. Performance Prediction
of Component-based Applications, In: Journal of Systems and
Software, Vol. 01, No. 07, pp. 35-46, 2005.
(Cho et al., 2001) Cho, E. S.; Kim, M. S.; Kim, S. D. Component Metrics to
Measure Component Quality, In: The 8th IEEE Asia-Pacific Software
Engineering Conference (APSEC), pp. 419-426, 2001.
(CMMI, 2000) CMMI Product Development Team. CMMI for Systems
Engineering/Software Engineering/Integrated Product and Process
Development/Supplier Sourcing, Version 1.1 Continuous
Representation, In: CMU/SEI-2002-TR-011, ESC-TR- 2002-011.
Carnegie Mellon University/Software Engineering Institute
(CMU/SEI), November, 2000.
(Comella-Dorda et al., 2002) Comella-Dorda, S.; Dean, J.; Morris, E.;
Oberndorf, P. A Process for COTS Software Product Evaluation, In: The
1st International Conference on COTS-Based Software System
(ICCBSS), Lecture Notes in Computer Science (LNCS), Springer-Verlag,
USA, 2002.
(Comella-Dorda et al., 2003) Comella-Dorda, S.; Dean, J.; Lewis, G.; Morris, E.;
Oberndorf, P.; Harper, E. A Process for COTS Software Product
Evaluation, In: Technical Report, CMU/SEI-2003-TR-017, 2003.
(Councill, 1999) Councill, W. T. Third-Party Testing and the Quality of Software
Components, In: IEEE Software, Vol. 16, No. 04, pp. 55-57, 1999.
(Councill, 2001) Councill, B. Third-Party Certification and Its Required
Elements, In: The 4th Workshop on Component-Based Software
Engineering (CBSE), Lecture Notes in Computer Science (LNCS),
Springer-Verlag, Canada, May, 2001.
(Crnkovic & Larsson, 2002) Crnkovic, I.; Larsson, M. Building Reliable
Component-Based Software Systems, Artech House, 2002.
(Crnkovic, 2005) Crnkovic, I. Component-based software engineering for
embedded systems, In: Proceedings of the 27th International Conference
on Software Engineering (ICSE'05), Missouri, USA, May 2005.
(Drouin, 1995) Drouin, J-N. The SPICE Project: An Overview. In: The Software
Process Newsletter, IEEE TCSE, No. 02, pp. 08-09, 1995.
(Fagan, 1976) Fagan, M. Design and Code Inspections to Reduce Errors in
Program Development. In: IBM Systems Journal, Vol. 15, No. 03, pp.
182-211, 1976.
(Frakes & Terry, 1996) Frakes, W.; Terry, C. Software Reuse: Metrics and
Models, In: ACM Computing Survey, Vol. 28, No. 02, pp. 415-435,
1996.
(Freedman, 1991) Freedman, R.S. Testability of Software Components, In: IEEE
Transactions on Software Engineering, Vol. 17, No. 06, June 1991.
(Gao et al., 2003) Gao, J.Z.; Jacob, H.S.J.; Wu, Y. Testing and Quality
Assurance for Component Based Software, Artech House, 2003.
(Georgiadou, 2003) Georgiadou, E. GEQUAMO-A Generic, Multilayered,
Customisable, Software Quality Model. In: Software Quality Journal,
Vol. 11, No. 04, pp. 313-323, 2003.
(Goulao et al., 2002a) Goulao, M.; Brito e Abreu, F. The Quest for Software
Components Quality, In: The 26th IEEE Annual International
Computer Software and Applications Conference (COMPSAC),
England, pp. 313-318, 2002.
(Goulão et al., 2002b) Goulão, M.; Abreu, F. B. Towards a Component Quality
Model, In: The 28th IEEE EUROMICRO Conference, Work in Progress
Section, Dortmund, Germany, 2002.
(Gui & Scott, 2007) Gui, G.; Scott, P.D. Ranking reusability of software
components using coupling metrics. In: Journal of Systems and
Software, Vol. 80, No. 09, pp. 1450-1459, 2007.
(Hall, 1990) Hall, A., Seven Myths of Formal Methods, In: IEEE Software, pp.
11-20, 1990.
(Hamlet et al., 2001) Hamlet, D.; Mason, D.; Woit. D. Theory of Software
Component Reliability, In: 23rd International Conference on Software
Engineering (ICSE), 2001.
(Heineman et al., 2001) Heineman, G. T.; Councill, W. T. Component-Based
Software Engineering: Putting the Pieces Together, Addison-Wesley,
USA, 2001.
(Hissam et al., 2003) Hissam, S. A.; Moreno, G. A.; Stafford, J.; Wallnau, K. C.
Enabling Predictable Assembly, In: Journal of Systems and Software,
Vol. 65, No. 03, pp. 185-198, 2003.
(Hyatt et al., 1996) Hyatt, L.; Rosenberg, L. A Software Quality Model and
Metrics for Risk Assessment, In: NASA Software Technology Assurance
Center (SATC), 1996.
(ISO/CD 8402-1, 1990) ISO/CD 8402-1, Quality Concepts and Terminology,
Part One: Generic Terms and Definitions, International Standards
Organisation, December 1990.
(ISO/IEC 1131-3,1995) IEC, Application and Implementation of IEC 1131-3, IEC
Geneva, 1995.
(ISO/IEC 12119, 1994) ISO/IEC 12119, Software Packages - Quality
Requirements and Testing, International Standard ISO/IEC 12119,
International Standard Organization (ISO), 1994.
(ISO/IEC 14598, 1998) ISO 14598, Information Technology – Software product
evaluation -- Part 1: General Guide, International Standard ISO/IEC
14598, International Standard Organization (ISO), 1998.
(ISO/IEC 15504-2, 2003) ISO/IEC 15504-2, Information technology. Software
process assessment. Part 2 : a reference model for processes and
process capability, International Standard ISO/IEC 15504-2,
International Standard Organization (ISO), 2003.
(ISO/IEC 25000, 2005) ISO/IEC 25000, Software product quality
requirements and evaluation (SQuaRE), Guide to SQuaRE,
International Standard Organization, July, 2005.
(ISO/IEC 61131-3,1995) IEC. Application and implementation of IEC 61131-3.
Technical report, IEC, Geneva, 1995.
(ISO/IEC 9126, 2001) ISO 9126, Information Technology – Product Quality –
Part1: Quality Model, International Standard ISO/IEC 9126,
International Standard Organization (ISO), 2001.
(Jahnke et al., 2000) Jahnke, J. H.; Niere, J.; Wadsack, J. Automated Quality
Analysis of Component Software for Embedded Systems, In: 8th
International Workshop on Program Comprehension (IWPC'00),
pp. 18, 2000.
(Jezequel et al., 1997) Jezequel, J. M.; Meyer, B. Design by Contract: The
Lessons of Ariane, In: IEEE Computer, Vol. 30, No. 02, pp. 129-130,
1997.
(Junior et al., 2004a) Júnior, M.N. O.; Carvalho, Fernando F.; Maciel, P. R. M.;
Barreto, R. S. A Software Power Cost Analysis based on Colored Petri
Net. In: 25th International Conference on Applications and Theory of
Petri Nets (ICATPN'04), in conjunction with the Workshop on Token
Based Computing (ToBaCo), July, 2004.
(Junior et al., 2004b) Júnior, M.N. O.; Carvalho, Fernando F.; Maciel, P. R. M.;
Barreto, R. S. Towards a Software Power Cost Analysis Framework
Using Colored Petri Net. In: 14th International Workshop PATMOS
2004, Volume 3254, Pages: 362-371, Springer-Verlag, August, 2004.
(Kalaoja et al., 1997) Kalaoja, J.; Niemelä, E.; Perunka, H. Feature Modelling of
Component-Based Embedded Software, In: 8th International Workshop
on Software Technology and Engineering Practice (STEP '97, including
CASE '97), pp. 444, 1997.
(Karlson, 2006) Karlson, D. Verification of Component-based Embedded
Systems Designs, Dissertation, Linköping University, 2006.
(Keil & Tiwana, 2005) Keil, M; Tiwana, A. Beyond Cost: The Drivers of COTS
Application Value. In: IEEE Software, Vol. 22, No. 03, pp. 64-69, 2005.
(Kocher et al., 2004) Kocher, P.; Ruby Lee; McGraw, G.; Raghunathan, A.; Ravi,
S. Security as a new dimension in embedded system design, DAC 2004,
June 7–11, 2004, San Diego, California, USA.
(Kogure & Akao, 1983) Kogure, M.; Akao, Y. Quality Function Deployment and
CWQC in Japan, In: Quality Progress, pp.25-29, 1983.
(Kopetz & Suri, 2003) Kopetz, H.; Suri, N. Compositional design of RT
systems: A conceptual basis for specification of linking interfaces, In:
Proc. 6th IEEE International Symposium on Object-oriented Real-Time
Distributed Computing (ISORC), Hokkaido, Japan, May 2003.
(Kotula, 1998) Kotula, J. Using Patterns To Create Component Documentation,
In: IEEE Software, Vol. 15, No. 02, March/April, pp. 84-92, 1998.
(Krueger, 1992) Krueger, C. W. Software Reuse, In: ACM Computing Surveys,
Vol. 24, No. 02, pp. 131-183, 1992.
(Larsson, 2000) Larsson M., Applying Configuration Management Techniques
to Component-Based Systems, Licentiate Thesis, Dissertation 2000-
007, Department of Information Technology Uppsala University.,
2000.
(Larsson, 2004) M. Larsson, Predicting Quality Attributes in Component-based
Software Systems, Phd Thesis, Mälardalen University Press, March
2004.
(Lethbridge et al., 2003) Lethbridge, T.; Singer, J.; Forward, A. How Software
Engineers use Documentation: The State of the Practice, In: IEEE
Software, Vol. 20, No. 06, pp. 35-39, 2003.
(Lucrédio et al., 2007) Lucrédio, D.; Brito, K.S.; Alvaro, A.; Garcia, V.C.;
Almeida, E.S.; Fortes, R.P.M.; Meira, S.R.L. Software Reuse:Brazilian
Industry Scenario, In: Journal of Systems and Software, Elsevier, 2007.
(Lüders et al., 2002) F. Lüders, I. Crnkovic, and A. Sjögren. Case study:
Componentization of an industrial control system. In Proc. 26th Annual
International Computer Software and Applications Conference -
COMPSAC 2002, Oxford, UK, Aug. 2002. IEEE Computer Society
Press.
(Mahmooda et al, 2005) Mahmood, S.; Lai, R.; Kim, Y.S.; Kim, J.H.; Park, S.C.;
Oh, H.S. A survey of component based system quality assurance
and assessment, In: Journal of Information and Software Technology,
Vol. 47, No. 10, pp. 693-707, 2005.
(McCall et al., 1977) McCall, J.A.; Richards, P.K.; Walters, G.F. Factors in
Software Quality, Griffiss Air Force Base, New York, Rome Air
Development Center (RADC) System Command, TR-77-369, Vols. I, II
and III, 1977.
(McGregor et al., 2003) McGregor, J. D.; Stafford, J. A.; Cho, I. H. Measuring
Component Reliability, In: The 6th Workshop on Component-Based
Software Engineering (CBSE), Lecture Notes in Computer Science
(LNCS), Springer-Verlag, USA, pp. 13-24, 2003.
(McIlroy, 1968) McIlroy, M. D. Mass Produced Software Components, In: NATO
Software Engineering Conference Report, Garmisch, Germany, pp. 79-
85, 1968.
(Merrit, 1994) Merrit, S. Reuse Library, In: Encyclopedia of Software
Engineering, J.J. Marciniak (editor), John Wiley & Sons, pp. 1069-
1071, 1994.
(Meyer, 2003) Meyer, B. The Grand Challenge of Trusted Components, In: The
25th IEEE International Conference on Software Engineering (ICSE),
USA, pp. 660–667, 2003.
(Mingins et al., 1998) Mingins, C., Schmidt, H., Providing Trusted Components
to the Industry. In: IEEE Computer, Vol. 31, No. 05, pp. 104-105, 1998.
(Müller et al., 2002) Müller, P. O.; Stich, C.M.; Zeidler, C. Component-Based
Embedded Systems, In: Building Reliable Component-Based Software
Systems, I. Crnkovic, M. Larsson (editors), Artech House, 2002.
(Ommering et al., 2000) R. van Ommering, F. van der Linden, and J. Kramer.
The Koala component model for consumer electronics software. IEEE
Computer, 33(3):78–85, March 2000.
(Ommering, 2002) van Ommering, R. Building product populations with
software components, In: Proceedings of the 24th International
Conference on Software Engineering (ICSE), ACM Press, 2002.
(Parnas & Lawford, 2003) Parnas, D.; Lawford, M. The Role of Inspection in
Software Quality Assurance. In: IEEE Transactions on Software
Engineering, Vol. 29, No. 08, pp. 674-676, 2003.
(Paulk et al., 1993) Paulk, M.; Curtis, B.; Chrissis, M.; Weber, C. Capability
Maturity Model for Software, Version 1.1, In: Software Engineering
Institute, Carnegie Mellon University, CMU/SEI-93-TR-24, DTIC
Number ADA263403, February, 1993.
(Poore et al., 1993) Poore, J.; Mills, H.; Mutchler, D. Planning and Certifying
Software System Reliability, In: IEEE Computer, Vol. 10, No. 01, pp.
88-99, 1993.
(Pressman, 2005) Pressman, R. Software Engineering: A Practitioner’s
Approach. McGraw-Hill. 6th Edition. 2005.
(Reussner, 2003) Reussner, R. H. Contracts and quality attributes of software
components, In: The 8th International Workshop on Component-
Oriented Programming (WCOP) in conjunction with the 17th ACM
European Conference on Object-Oriented Programming (ECOOP),
2003.
(Rohde et al., 1996) Rohde, S. L.; Dyson, K. A.; Geriner, P. T.; Cerino, D. A.
Certification of Reusable Software Components: Summary of Work in
Progress, In: The 2nd IEEE International Conference on Engineering of
Complex Computer Systems (ICECCS), Canada, pp. 120-123, 1996.
(Schmidt, 2003) Schmidt, H. Trustworthy components: compositionality and
prediction. In: Journal of Systems and Software, Vol. 65, No. 03, pp.
215-225, 2003.
(Schneider & Han, 2004) Schneider, J-G.; Han, J. Components — the Past, the
Present, and the Future, In: Proceedings of Ninth International
Workshop on Component-Oriented Programming (WCOP), Oslo,
Norway, June 2004.
(Simão et al., 2003) Simão, R. P. S.; Belchior, A. Quality Characteristics for
Software Components: Hierarchy and Quality Guides, Component-
Based Software Quality: Methods and Techniques, In: Lecture Notes in
Computer Science (LNCS) Springer-Verlag, Vol. 2693, pp. 188-211,
2003.
(Softex, 2007) Softex, Perspectivas de desenvolvimento e uso de componentes
na indústria brasileira de software e serviços, (in portuguese)
Campinas: SOFTEX, 2007. Available at:
http://golden.softex.br/portal/softexweb/uploadDocuments/Compone
ntes(2).pdf
(Solingen, 2000) Solingen, R.v. Product focused software process improvement:
SPI in the embedded software domain, PhD Thesis, Technische
Universiteit Eindhoven, 2000.
(Stafford et al., 2001) Stafford, J.; Wallnau, K. C. Is Third Party Certification
Necessary?, In: The 4th Workshop on Component-Based Software
Engineering (CBSE), Lecture Notes in Computer Science (LNCS)
Springer-Verlag, Canada, 2001.
(Szyperski et al. 1998) Szyperski, C. Component Software: Beyond Object-
Oriented Programming, ACM Press and Addison-Wesley, New York,
NY, 1998.
(Taulavuori et al., 2004) Taulavuori, A.; Niemela, E.; Kallio, P. Component
documentation — a key issue in software product lines, In: Journal
Information and Software Technology, Vol. 46, No. 08, June, pp. 535–
546, 2004.
(Tian, 2004) Tian, J. Quality-Evaluation Models and Measurements, In: IEEE
Software, Vol. 21, No.03, pp.84-91, May/June, 2004.
(Trass et al., 2000) Trass, V.; Hillegersberg, J. The software component market
on the Internet, current status and conditions for growth, In: ACM
Sigsoft Software Engineering Notes, Vol. 25, No. 01, pp. 114-117, 2000.
(Urting et al., 2001) Urting, D.; Van Baelen, S; Holvoet, T.; Berbers, Y.
Embedded software development: components and contracts.
Proceedings of the IASTED International Conference on Parallel and
Distributed Computing and Systems, ACTA Press, 2001, pp. 685-690.
(Van Ommering, 2002) Van Ommering R., "The Koala Component Model", in
Crnkovic I. and Larsson M. (editors): Building Reliable Component-
Based Software Systems, ISBN 1-58053-327-2, Artech House, 2002.
(Voas et al., 2000) Voas, J. M.; Payne, J. Dependability Certification of Software
Components, In: Journal of Systems and Software, Vol.52, No. 2-3 , pp.
165-172, 2000.
(Voas, 1998) Voas, J. M. Certifying Off-the-Shelf Software Components, In:
IEEE Computer, Vol. 31, No. 06, pp. 53-59, 1998.
(Voas, 2000) Voas, J. M. Usage-based software certification process, In: IEEE
Computer, Vol. 33, pp. 32-37, 2000.
(Wallin, 2002) Wallin, C. Verification and Validation of Software Components
and Component Based Software Systems, In: Building Reliable
Component-Based Systems, I. Crnkovic, M. Larsson (editors), Artech
House Publishers, July, pp. 29-37, 2002.
(Wallnau, 2003) Wallnau, K. C. Volume III: A Technology for Predictable
Assembly from Certifiable Components. In: Software Engineering
Institute (SEI), Technical Report, Vol. III, April, 2003.
(Wallnau, 2004) Wallnau, K. C. Software Component Certification: 10 Useful
Distinctions, In: Technical Note CMU/SEI-2004-TN-031, 2004.
Available at:
http://www.sei.cmu.edu/publications/documents/04.reports/04tn031.html
(Weber et al., 2002) Weber, K. C.; Nascimento, C. J.; Brazilian Software Quality
2002. In: The 24th IEEE International Conference on Software
Engineering (ICSE), EUA, pp. 634-638, 2002.
(Wijnstra, 2001) Wijnstra, J. G. Quality Attributes and Aspects of a Medical
Product Family, In: Proceedings of the 34th Hawaii International
Conference on System Sciences (HICSS), 2001.
(Wohlin & Regnell, 1998) Wohlin, C.; Regnell, B. Reliability Certification of
Software Components, In: The 5th IEEE International Conference on
Software Reuse (ICSR), Canada, pp. 56-65, 1998.
(Wohlin et al., 1994) Wohlin, C.; Runeson, P. Certification of Software
Components, In: IEEE Transactions on Software Engineering, Vol. 20,
No. 06, pp. 494-499, 1994.
Appendix A - Step-by-step instructions to perform the Embedded Quality Evaluation. 165
Appendix A.
Step-by-step instructions to perform the Embedded Quality Evaluation Process (EQP)
<Component Name>
Version <Document Version> | <Document date version>
Responsible: <Responsible Name>
Change History

Date        Version  Description      Author
18/06/2009  01.00d   Initial Version  Fernando Carvalho
Contents

1. Introduction ............................................... 168
   1.1 Overview of the Component ............................. 168
   1.2 Conventions, terms and abbreviations list ............. 168
2. Embedded Software Component Quality Evaluation ............ 168
   2.1 Establish Evaluation Requirements ..................... 168
       2.1.1 Establish the evaluation purpose ................ 169
       2.1.2 Identify components ............................. 169
       2.1.3 Specify embedded quality model .................. 170
   2.2 Specify the Evaluation ................................ 170
       2.2.1 Select metrics .................................. 170
       2.2.2 Establish the evaluation criteria ............... 171
       2.2.3 Establish rating levels for metrics ............. 172
   2.3 Design the Evaluation ................................. 172
       2.3.1 Produce evaluation plan ......................... 172
   2.4 Execute the Evaluation ................................ 173
       2.4.1 Measure Characteristics ......................... 174
       2.4.2 Compare with criteria ........................... 175
       2.4.3 Assess results .................................. 175
3. References ................................................ 177
1. Introduction

<This section should present a brief introduction to the component that will be submitted to the evaluation, along with the context and motivation for doing so.>
1.1 Overview of the Component

<This section will present a brief overview of the component: the problem that the component solves, the domain in which it works, information about its internal organization, and the architecture for which it was developed.>
1.2 Conventions, terms and abbreviations list

This section presents the abbreviations list used in this document.

Term   Description
OS     Operating System
UART   Universal Asynchronous Receiver Transmitter
2. Embedded Software Component Quality Evaluation

<This section presents the activities used to execute the Embedded Software Component Quality Evaluation Process (EQP) module described in this thesis, which is represented by Figure 1. The other modules will be used in some sections of this process as required. The next sections present the steps that should be followed to evaluate the component quality.>
Figure 1: The Embedded Software Component Quality Evaluation Process (EQP).
2.1 Establish Evaluation Requirements

<This module describes all the requirements that should be considered during the component evaluation.>
2.1.1 Establish the evaluation purpose

<This activity is performed following the next three steps.>

2.1.1.1 Select the evaluation staff

<This step presents the evaluation staff that will execute the component evaluation.>

Table 1. Evaluation staff.

Evaluation staff  Stakeholder role
(Engº 1)          Evaluation Responsible
(Engº 1)          Embedded Hardware/Software Development expert
(Engº 2)          Embedded Hardware/Software Development expert

2.1.1.2 Define evaluation requirements

<In this step the evaluation staff must determine which system requirements are legitimate requirements for the embedded component, e.g. architecture/interface constraints and the operational and support environment.>

2.1.1.3 Define scope, purpose and objectives

<This step should answer questions such as: (i) What does the evaluation expect to achieve?; (ii) What are the responsibilities of each member of the team?; (iii) When should the evaluation finish?; (iv) What constraints must the evaluation staff adhere to?; and (v) What is the risk of the component relative to its target domain? This step should contain the goals of the evaluation; the scope of the evaluation; the component and domain risk level; a statement of commitment from both stakeholder(s) and customer(s); and a summary of decisions that have already been made.>
2.1.2 Identify components

<This activity is performed following the next three steps.>

2.1.2.1 Define the components to be evaluated

<This step lists the component and the features which will be the subject of the quality evaluation.>

2.1.2.2 Specify the architecture and the environment

<This step describes, as precisely as possible, the whole environment used to evaluate the component. There are two options: (i) specify the system in which the component will work, in order to evaluate the quality of the component as executed in the systems provided by the customer; or (ii) specify a well-defined environment in which the component will be executed and analyzed. Additionally, the team members should answer the following questions: (i) How much effort will be spent to provide the whole infrastructure to evaluate the component? What is the complexity, and what are the constraints, of this environment?; (ii) What is the size of the selected systems (if available), and what are their target domains?; (iii) What is the impact of the software component on the selected system?; and (iv) What are the component dependencies?>

2.1.2.3 Define the scenarios

<In this step the evaluation team should describe the set of scenarios in which the component will be evaluated.>
2.1.3 Specify embedded quality model

<This activity is performed by the definition of the embedded quality model step.>

2.1.3.1 Define the embedded quality model (internal, external and quality in use characteristics)

<This step defines the quality characteristics and sub-characteristics that will be used to evaluate the component quality. In addition, the evaluation team should define the importance level of each characteristic according to this classification: 1-Not Important; 2-Indifferent; 3-Reasonable; 4-Important; 5-Very Important.>

Example:

Table 2. Characteristics and sub-characteristics defined.

Characteristics  Sub-Characteristics  Importance
Functionality    Accuracy             4
Functionality    Security             3
…                …                    …

<This step also describes any characteristics that are not present in the Embedded Component Quality Model (EQM), presented in Chapter 4, but that should be considered to evaluate quality aspects of the component. After defining such characteristics, it is useful to complement Table 2 with the new quality characteristics and to define their relevance to the component quality evaluation.>
2.2 Specify the Evaluation

<This activity describes how each quality attribute will be evaluated and which techniques and metrics will be used and collected.>
2.2.1 Select metrics

<This activity is performed following the next three steps.>

2.2.1.1 Define the EQL for the evaluation

<During this step the evaluation staff will define which level should be considered to evaluate the quality characteristics proposed earlier (the guidelines for selecting the evaluation level can help the evaluation staff in this task). Chapter 4, Section 4 presented the EQL and the correlation between the evaluation techniques and the quality attributes in the EQL. Thus, it can be useful to add another column to Table 2 in order to show which techniques are appropriate to evaluate the proposed quality attributes, as shown in Table 3. These techniques will be based on the level defined by the evaluation staff.>

Example:

Table 3. Evaluation techniques defined.

Characteristics  Sub-Characteristics  EQL Level  Importance
Functionality    Accuracy             I          4
Efficiency       Energy Consumption   II         3
…                …                    …          …
2.2.1.2 Select quality attributes

<This step complements Table 3 with quality attributes for each sub-characteristic, as shown in Table 4. The quality attributes can also be selected from the EQM. In this way, the quality aspects of the component are completely developed.>

Example:

Table 4. Quality attributes related to each characteristic.

Characteristics  Sub-Characteristics  EQL Level  Quality Attributes    Importance
Functionality    Accuracy             I          Precision             4
Efficiency       Energy Consumption   II         Mechanism Efficiency  3
…                …                    …          …                     …
2.2.1.3 Select evaluation techniques

<This step complements Table 4 with evaluation techniques, at least one for each quality attribute, as shown in Table 5. The evaluation techniques can also be selected from the EQL. In this way, the quality aspects of the component are completely developed.>

Example:

Table 5. Evaluation techniques related to each quality attribute.

Characteristics  Sub-Characteristics  EQL Level  Quality Attributes    Importance  Evaluation Techniques
Functionality    Accuracy             I          Precision             4           Precision analysis
Efficiency       Energy Consumption   II         Mechanism Efficiency  3           Evaluation Measurement
…                …                    …          …                     …           …
2.2.2 Establish the evaluation criteria

<This activity is performed by the specification of metrics on EMA step.>

2.2.2.1 Specify metrics on EMA

<This step defines all the metrics needed to collect the data and analyze it at the end of the evaluation process. At least one metric should be defined for each quality attribute proposed in Table 5, and one metric for each module of the methodology.>

Example:

Table 6. EMA example for the Precision quality attribute.

Characteristic        Functionality
Sub-Characteristic    Accuracy
EQL                   I
Quality Attribute     Precision
Evaluation Technique  Precision analysis
Goal                  Evaluate the percentage of the results that were obtained with precision
Question              Based on the number of tests executed, how many test results returned with precision?
Metric                Precision on results / Amount of tests
Interpretation        0 <= x <= 1; closer to 1 is better
2.2.3 Establish rating levels for metrics

<This activity is performed by the establishment of score levels for metrics step.>

2.2.3.1 Define scores for metrics

<Based on the previous step, this one defines the score level of each metric defined earlier, based on the interpretations defined for each metric.>

Example:

Table 7. Example of score levels in the EMA definition.

Characteristic        Functionality
Sub-Characteristic    Accuracy
EQL                   I
Quality Attribute     Precision
Evaluation Technique  Precision analysis
Goal                  Evaluate the percentage of the results that were obtained with precision
Question              Based on the number of tests executed, how many test results returned with precision?
Metric                Precision on results / Amount of tests
Interpretation        0 <= x <= 1; closer to 1 is better
Score Level           0.00 – 0.30: Not acceptable
                      0.31 – 0.60: Reasonable quality
                      0.61 – 1.00: Acceptable
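The metric and score levels above can be turned into a small computation. The sketch below is illustrative only: the function names are assumptions, and the 0.30/0.60 thresholds merely mirror the example table, not a normative part of the methodology.

```python
def precision_metric(precise_results: int, total_tests: int) -> float:
    """Precision on results / Amount of tests, as in the EMA example table."""
    if total_tests == 0:
        raise ValueError("at least one test must have been executed")
    return precise_results / total_tests

def score_level(x: float) -> str:
    """Map a metric value (0 <= x <= 1) to the example score levels."""
    if x <= 0.30:
        return "Not acceptable"
    if x <= 0.60:
        return "Reasonable quality"
    return "Acceptable"

# Example: 7 of 10 test executions returned precise results.
value = precision_metric(7, 10)   # 0.7
print(score_level(value))         # Acceptable
```

The same pattern applies to any metric whose interpretation is a ratio in [0, 1]; metrics measured in bytes or seconds would need their own interpretation and score mapping.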
2.3 Design the Evaluation

<This activity describes the whole configuration of the environment that will be used to evaluate the component quality.>
2.3.1 Produce evaluation plan

<This activity is performed following the next four steps.>

2.3.1.1 Design the evaluation plan

<During this step the evaluation staff will document the techniques used during the embedded component evaluation. The whole team must have knowledge of each technique proposed earlier, both to help during the documentation and to support the members who are less familiar with a given technique.>
2.3.1.2 Estimate resources and schedule for the evaluation

<This step provides the time and resources required to evaluate the embedded component quality, and the activities to be executed by each stakeholder defined in Section 2.1.1, as shown in Table 8.>

Example:

Table 8. Component evaluation schedule (01/11/2009 – 05/12/2009).

Activity                                             Responsible
Configure the environment                            (Engº 1)
Develop the black-box test cases                     (Engº 2)
Define the static analysis that will be considered   (Engº 2)
Analyze the source code                              (Engº 1)
Measure the whole process using the metrics defined  (Engº 1)
Generate the final report                            (Engº 2)

2.3.1.3 Select instrumentation (equipment/tools)

<This step defines how the evaluation staff will execute the evaluation techniques defined earlier. They can use equipment, an existing tool, a tool developed for the purpose, a specific method, a methodology, etc. in order to evaluate the quality attributes. After defining the tools/methods/techniques, it is useful to document them so that the whole team can better understand and use them. The team can extend Table 4 in order to document which tool/method/process will support each evaluation technique, as shown in Table 9.>

Example:

Table 9. Definition of the tools that should be used during the evaluation.

Characteristics  Sub-Characteristics  Quality Attributes  EQL  Evaluation Techniques   Importance  Tool used
Functionality    Accuracy             Correctness         II   Precision analysis      4           Tool A [1], Tool B [2]
Functionality    Security             Data Encryption     III  Evaluation Measurement  3           Tool C [3]
…                …                    …                   …    …                       …           …

[1] http://www.toolA.org
[2] http://www.toolB.org
[3] http://www.toolC.org

2.3.1.4 Define the configuration of the instrumentation

<In this step the evaluation staff should define the parameters to configure the equipment and software tools.>
2.4 Execute the Evaluation

<This activity executes the whole plan of the component evaluation. First the evaluation team configures the environment and sets up the instrumentation; after that, it executes the evaluation in order to analyze whether the component has the desired quality level or not.>
2.4.1 Measure Characteristics

<This activity is performed following the next three steps.>

2.4.1.1 Set up the instrumentation, environment and scenarios

<During this step the instrumentation is configured, the environment is prepared and the scenarios are built to execute the evaluation plan.>

2.4.1.2 Execute the evaluation plan

<The evaluation staff will apply the schedule given in the evaluation plan. Observations made during the evaluation process also have to be included in the lessons learned document, the recommendations document and the component evaluation report.>

2.4.1.3 Collect data

<During the execution of the evaluation (last section), all data provided is collected using the metrics defined in Section 2.2. A table should be used to store those values so that they can be further analyzed; an example is shown in Table 10. The interview with the component's users is done in this step, to collect the users' feedback about the quality in use characteristics (Productivity, Satisfaction, Security and Effectiveness) and to fill Table 11 with the average of the results. The last activity is to fill in the additional information table of the component (Table 12). This table contains technical, marketing and organizational information about the component.>

Example:
Table 10. Results obtained during the component evaluation.

Characteristics  Sub-Charact.  Quality Attributes  EQL  Imp.  Evaluation Techniques   Tool used       Result
Functionality    Accuracy      Correctness         II   4     Precision analysis      Tool A, Tool B  0.7
Functionality    Security      Data Encryption     III  3     Evaluation Measurement  Tool C          0.8
…                …             …                   …    …     …                       …               …
Table 11. Example of quality in use characteristics.

Productivity  Satisfaction  Security  Effectiveness
73.3%         82.4%         68.2%     73.0%
Table 12. Example of additional information about the component.

Technical Information:
  Component Version: 2.01
  Programming Language: ANSI C
  Design and Project Patterns: MISRA
  Operational Systems Supported: None
  Compiler Version: RealView MDK-ARM v.3.50
  Compatible Architecture: All
  Minimal Requirements: 2 x Serial Port (UART)

Organizational Information:
  CMMi Level: III
  Organization's Reputation: Nationally recognized
  Technical Support: C.E.S.A.R.

Marketing Information:
  Development time: 100h
  Cost: $2000
  Time to market: 2 Months
  Targeted market: All domains
  Affordability: Very Nice
  Licensing: uninformed
  Compliance: -----
2.4.2 Compare with criteria

<This activity is performed following the next three steps.>

2.4.2.1 Requirements vs. criteria

<In this step, the evaluation staff defines the criteria. The criteria selected determine whether the right questions are asked regarding the viability of the component in the system.>

2.4.2.2 Analyze the results

<During this step the evaluation staff analyzes all the data collected in order to determine the quality level of the component.>

2.4.2.3 Consolidate data

<Some adjustments can be made in this step, because some quality attributes can influence other quality attributes in a positive or negative way. Moreover, the evaluation staff should consider the importance level of each quality attribute, so that different weights can be applied to each result obtained. In this step the results should be summarized by quality characteristic, as described in Table 13, i.e., each characteristic will have only one final result, which corresponds to the quality level achieved by the component.>

<The final result of each quality characteristic is the sum of the value of each sub-characteristic multiplied by its importance, divided by the sum of the importances. The final result formula can be written as: QC = Σ(sub-characteristic × importance) / Σ(importance).>
Table 13. Final results of the component quality evaluation.

Characteristics  EQL  Final Results
Functionality    II   0.7
Reliability      II   0.8
…                …    …
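The consolidation formula QC = Σ(sub-characteristic × importance) / Σ(importance) is a weighted average; a minimal sketch is shown below. The helper name and the sample results are hypothetical, not part of the methodology.

```python
def consolidate(results):
    """Weighted average of sub-characteristic results, weighted by importance:
    QC = sum(result * importance) / sum(importance)."""
    weighted = sum(result * importance for result, importance in results)
    total_importance = sum(importance for _, importance in results)
    return weighted / total_importance

# Hypothetical Functionality sub-characteristics as (result, importance) pairs:
functionality = [(0.8, 4),   # Accuracy, importance 4
                 (0.6, 3)]   # Security, importance 3
qc = consolidate(functionality)
print(round(qc, 2))  # 0.71
```

Because the weights come from the 1–5 importance classification of Section 2.1.3, a sub-characteristic rated "Very Important" dominates the final result of its characteristic.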
2.4.3 Assess results

<This activity is performed following the next four steps.>

2.4.3.1 Write the component evaluation report

<In this step, the evaluator responsible for the embedded component evaluation develops a report that contains the information obtained during the previous steps, and the evaluation staff should provide comments so that the customer can improve the component. The evaluator should consider whether the component achieves the quality required at the level at which it was evaluated. This can be verified through the analysis of the score level of each metric defined during the execution of the embedded component evaluation process.>

2.4.3.2 Build the component quality label

<This step builds the embedded component quality label, which is a one-page summary of the component quality evaluation and can be attached, for example, to the component package on the shelf. Table 14 shows an example of an embedded component quality label.>
Table 14. Example of an embedded component quality label.

Summary of the embedded component quality evaluation

Quality Characteristics:
  Characteristics  EQL  Final Result
  Functionality    I    0.85
  Reliability      II   0.62
  Usability        I    0.78
  Efficiency       III  0.48
  Maintainability  II   0.79
  Portability      I    0.91

Quality in Use Characteristics:
  Productivity  Satisfaction  Security  Effectiveness
  0.62          0.75          0.49      0.78

Additional Information:
  Technical Information: Component Version: 2.01; Programming Language: ANSI C; Design and Project Patterns: MISRA; Operational Systems Supported: None; Compiler Version: RealView MDK-ARM v.3.50; Compatible Architecture: All; Minimal Requirements: 2 x Serial Port (UART)
  Organizational Information: CMMi Level: III; Organization's Reputation: Nationally recognized; Technical Support: C.E.S.A.R.
  Marketing Information: Development time: 100h; Cost: $2000; Time to market: 2 Months; Targeted market: All domains; Affordability: Very Nice; Licensing: uninformed; Compliance: -----

2.4.3.3 Record lessons learned

<At the end of the evaluation, the relevant aspects of the evaluation should be documented for future evaluations, such as challenges overcome, tools used, and problems found and solved.>

2.4.3.4 Make recommendations

<In this step, the evaluation staff makes recommendations to improve the component quality, in order to solve the quality issues found during the component evaluation.>
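The one-page quality label of step 2.4.3.2 can be assembled mechanically from the consolidated results. The sketch below is a hypothetical illustration: the function name, layout and sample figures are assumptions, not prescribed by the methodology.

```python
def build_quality_label(component, characteristics, quality_in_use):
    """Render a one-page quality label: each characteristic with its EQL and
    final result, followed by the quality in use values."""
    lines = [f"Quality label - {component}",
             "Characteristics (EQL / final result):"]
    for name, (eql, result) in characteristics.items():
        lines.append(f"  {name:<15} {eql:<4} {result:.2f}")
    lines.append("Quality in use:")
    for name, value in quality_in_use.items():
        lines.append(f"  {name:<15} {value:.2f}")
    return "\n".join(lines)

# Sample figures taken from the example in Table 14 (truncated for brevity).
label = build_quality_label(
    "BRConverter",
    {"Functionality": ("I", 0.85), "Reliability": ("II", 0.62)},
    {"Productivity": 0.62, "Satisfaction": 0.75},
)
print(label)
```

In practice the additional information of Table 12 would be appended to the same page, so that technical, organizational and marketing data travel with the quality results.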
3. References

<This section provides the references to the tools, processes, techniques and methods cited in this document, in the following format:>

[1] Authors, Title; Conference/Journal (if applicable); Date;
Appendix B - Evaluators feedback about the use of Embedded Software Component Quality Evaluation Methodology. 178
Appendix B.
Evaluators' feedback about the use of the Embedded Software Component Quality Evaluation Methodology

Subject: (Engº 1)
Company: C.E.S.A.R. - Recife Center of Advanced Study
Job: System engineer / embedded development specialist

Part 1: Regarding the use of the methodology to evaluate the quality of embedded software components, answer:

1.1) Which difficulties or obstacles did you find?
A: The difficulty was to understand all the parts of the methodology and, more importantly, how they relate to each other. A full reading of the methodology's chapter is required in order to have a first-level understanding. But I guess only when you go through a real evaluation do you get a full perspective of the complete methodology, how its small pieces interact, and their importance.

1.2) Which suggestions for improving the methodology do you indicate?
A: A brief chapter, or section, explaining in general terms the parts and connections of the methodology. This could be quite difficult given the completeness of this methodology.

1.3) How efficient is the methodology in precisely evaluating the quality of an embedded software component?
A: For me the efficiency is directly related to the number of chosen quality attributes and also to the level at which you need to evaluate them. The proposed process (EQP) gives an ideal path for evaluators and makes it easy not to fall into long and inefficient procedures. For such a deep and complete quality evaluation process, it is not easy to build a very fast evaluation.

1.4) Other comments, suggestions, criticisms, or reports about the Framework.
A: I found it very useful, flexible and especially suited to the embedded systems components domain.

Part 2:
2.1 EQP. Specifically about the modules of the Embedded Evaluation Process (EQP), answer the questions:

2.1.1) Which difficulties were found in the module Establish Evaluation Requirements, and which improvements are possible?
A: The tricky part of this module is the specification of the evaluation quality model. There reside the most important decisions of the whole evaluation, for me. If you select the wrong characteristics or sub-characteristics, you can make the evaluation incomplete, inefficient or even invalid. A selection of main characteristics based on the embedded component reliability level or domain (automotive, x-ray, entertainment) could be helpful. The EQL is a good starting point, but an additional chapter on this is suggested.

2.1.2) Which difficulties were found in the module Specify the Evaluation, and which improvements are possible?
A: Here the difficulties are related to the number of quality attributes chosen, because the evaluation could become too long. Besides that, a good setup of the metrics is very difficult due to the distinct nature of most of them. An improvement could be to make a clear distinction between qualitative and quantitative quality attributes, so the user can give a real score to the component based only on quantitative attributes. The qualitative attributes could then be evaluated separately.

2.1.3) Which difficulties were found in the module Design the Evaluation, and which improvements are possible?
A: Designing the evaluation requires some knowledge of test planning. Besides that, I can see no difficulty or improvement.

2.1.4) Which difficulties were found in the module Execute the Evaluation, and which improvements are possible?
A: There was no difficulty for me in this module. The only complaint was related to the compilation of the evaluation results: we had to do it one by one, manually. A tool, or even a spreadsheet, would help.

2.2 EQM.
2.2.1) Is the proposed Embedded Quality Model (EQM), with quality characteristics, sub-characteristics, quality attributes, quality in use characteristics and additional information, sufficient to evaluate the component quality and cover the key quality aspects?
A: Yes, more than sufficient. There are many evaluation aspects defined in the EQM. For me the author started from a good list of references and came out with a general quality model which can be applied to any embedded component. I guess all the important characteristics are presented there and, depending on the level and domain in which you want to evaluate your component, there are some key characteristics which cannot be left out of the evaluation.

2.3 EQL
2.3.1) Are the evaluation techniques based on quality levels (EQL) appropriate to evaluate components in different application domains and at different quality levels?
A: It is hard to tell. My first answer is yes, because it is presented in a flexible way. It can be used as a very good starting baseline, and the evaluators can adapt it and evolve it according to specific requirements. The selection of evaluation techniques is the key of this module, I think. If the evaluators select a wrong tool or technique, that characteristic can be under-evaluated. The suggested evaluation techniques are a good starting point as well.

2.4 Embedded Metrics Approach - EMA

2.4.1) Are the Embedded Metrics Approach (EMA) and the set of defined metrics sufficient and adequate to measure the proposed evaluation techniques?
A: Yes. You can use it even for reliability statistical analysis like MC/DC as defined in DO-178B. You just have to define your goal right and ask the right question.
Evaluator's feedback about the use of the Embedded Software Component Quality Evaluation Framework

Subject: (Engº 2)
Company: C.E.S.A.R. - Recife Center of Advanced Study
Job: System engineer / embedded development specialist

Part 1: Regarding the use of the methodology to evaluate the quality of embedded software components, answer:

1.1) Which difficulties or obstacles did you find?
A: Some sub-characteristic goals and questions were difficult to understand. The metrics' domains are not always a percentage; some of them are quantities in bytes, or seconds. Thus, it is impossible to calculate the final result for some of the functionalities.

1.2) Which suggestions for improving the methodology do you indicate?
A: Using the methodology to build a tool could help users to easily customize their evaluation. A tool could make it easier to add/exclude a characteristic/sub-characteristic and to see detailed information about each characteristic (like goals, questions, EQL, metrics).

1.3) How efficient is the methodology in precisely evaluating the quality of an embedded software component?
A: This question can only be answered with the repetition of the component evaluation. The comparison between our evaluation and others could help us to analyze the precision.

1.4) Other comments, suggestions, criticisms, or reports about the Framework.

Part 2:

2.1 EQP. Specifically about the modules of the Embedded Evaluation Process (EQP), answer the questions:

2.1.1) Which difficulties were found in the module Establish Evaluation Requirements, and which improvements are possible?
A: There were no significant difficulties in this module due to the component's simplicity. A possible improvement would be a questionnaire to help the user define the importance of the sub-characteristics. There would be fixed alternatives to each sub-characteristic's question, and each alternative would be related to a different level of importance.

2.1.2) Which difficulties were found in the module Specify the Evaluation, and which improvements are possible?
A: It is not hard to define the chosen EQL according to the sub-characteristic importance, but selecting the quality attributes to be analyzed is a bit more complicated, because some sub-characteristics have more than one attribute at the same EQL. The use of a methodology tool could make it easier to find information about each attribute.

2.1.3) Which difficulties were found in the module Design the Evaluation, and which improvements are possible?
A: Once we have already decided which attributes to evaluate, the decision regarding how to do it becomes much simpler. I see no difficulty in this module.

2.1.4) Which difficulties were found in the module Execute the Evaluation, and which improvements are possible?
A: There wasn't any difficulty in this module either.

2.2 EQM.

2.2.1) Is the proposed Embedded Quality Model (EQM), with quality characteristics, sub-characteristics, quality attributes, quality in use characteristics and additional information, sufficient to evaluate the component quality and cover the key quality aspects?
A: I think it is. It is an interesting model because the set of quality characteristics proposed is complete enough to give a good idea about the weak and strong aspects of the analyzed components.

2.3 EQL

2.3.1) Are the evaluation techniques based on quality levels (EQL) appropriate to evaluate components in different application domains and at different quality levels?
A: To better answer this question we should analyze more components. Anyway, the idea of using a higher EQL for characteristics with greater importance seems adequate to build a good component profile.

2.4 Embedded Metrics Approach - EMA

2.4.1) Are the Embedded Metrics Approach (EMA) and the set of defined metrics sufficient and adequate to measure the proposed evaluation techniques?
A: It is quite adequate.
Without EMA, it would be much more difficult to establish which metric would be used to each attribute.
Appendix C.
Embedded Software Component Quality Evaluation Framework
Serial-Serial Baud Rate Converter (BRConverter)
Version 01.00 | 16/01/2010
Responsible: (Engº 1)
Appendix C - BRConverter – Embedded Quality Evaluation 184
Change History

Date        Version    Description                                             Author
18/10/2009  01.00-D01  Initial version                                         Engº 1
14/11/2009  01.00-D02  Evaluation planning and design                          Engº 1
21/11/2009  01.00-D03  First measurements                                      Engº 2
22/11/2009  01.00-D04  Electrical measurements                                 Engº 2
16/01/2010  01.00      Final revision and first release of evaluation report   Engº 1
Contents
1 Introduction ................................................ 186
  1.1 Overview of the Component ............................... 186
  1.2 Conventions, terms and abbreviations list ............... 186
2 Embedded Software Component Quality Evaluation .............. 187
  2.1 Establish Evaluation Requirements ....................... 187
    2.1.1 Establish the evaluation purpose .................... 187
    2.1.2 Identify components ................................. 188
    2.1.3 Specify embedded quality model ...................... 189
  2.2 Specify the Evaluation .................................. 190
    2.2.1 Select metrics ...................................... 190
    2.2.2 Establish the evaluation criteria ................... 192
  2.3 Design the Evaluation ................................... 196
    2.3.1 Produce evaluation plan ............................. 196
  2.4 Execute the Evaluation .................................. 199
    2.4.1 Measure Characteristics ............................. 199
    2.4.2 Compare with criteria ............................... 200
    2.4.3 Assess results ...................................... 201
3 References .................................................. 203
Appendix ...................................................... 204
1 Introduction

The component to be submitted to quality evaluation is a Serial-Serial Baud Rate Converter. This component is used to connect two devices with different baud rates. One application example is a K-line bus gateway.

The serial vehicle diagnostics protocol known as K-line is defined in ISO 9141-2 [1]. It uses serial data communication very similar to RS-232, but with different voltage signal levels and only one bidirectional line. The defined serial data rate is 10,400 bps, a nonstandard baud rate that is therefore not available on PC RS-232C controllers.

The software component under evaluation resides in a serial gateway, which converts data from standard PC baud rates to the K-line bus and vice versa.
1.1 Overview of the Component

The component is a Serial-Serial (UART) RS-232 baud rate converter. It works in a range of 75 bps to 921,600 bps. It was developed in the C language and is compatible with any ANSI C compiler. It requires a system with two available serial ports. The baud rate conversion is done bidirectionally from one port to the other, and the actual baud rate limits depend on the specific hardware.
1.2 Conventions, terms and abbreviations list

This section presents the abbreviations used in this document.

Term     Description
K-line   One of the several OBD-II signal protocols. It is used on Brazilian popular vehicles for data communication and diagnostics.
OBD-II   On-Board Diagnostics, version 2. A set of specifications for automotive diagnostics.
OS       Operating System
PC       Personal Computer
UART     Universal Asynchronous Receiver/Transmitter. An integrated circuit which serializes data bytes according to a specific mode, baud rate and signal levels. Framing includes a start bit, a stop bit and optionally a parity bit.
2 Embedded Software Component Quality Evaluation.
Figure 1: The Embedded Software Component Quality Evaluation Process (EQP).
2.1 Establish Evaluation Requirements
2.1.1 Establish the evaluation purpose
2.1.1.1 Select the evaluation staff

Table 1. Evaluation Staff
Evaluation staff   Stakeholder
Engº 1             Evaluation Responsible / Embedded Firmware Engineer
Engº 2             Embedded Hardware Development expert

2.1.1.2 Define evaluation requirements
- The component needs 2 UART peripherals implemented in hardware or software.
- Operating system: none
- Programming language: ANSI C
- Compiler: RealView MDK-ARM v3.50
- Supported baud rates: 75 bps to 921,600 bps (hardware dependent)
- Supported environments: no environmental constraints.
- External dependency: a UART Driver software component consisting of at least the following functions: bool HasChar(void); bool SendChar(char); char GetChar(void);

2.1.1.3 Define scope, purpose and objectives
iv. goals of the evaluation:
   a. Detect any lost character;
   b. Identify whether it changes the input character;
   c. Measure the accuracy of the specified baud rate.
v. scope of the evaluation:
   a. Evaluate the component on the ARM7TDMI architecture;
   b. Verify the three objectives cited above using the specific baud rate conversion:
      i. 115,200 bps ↔ 10,400 bps (K-line);
vi. summary of factors that limit selection:
   a. Nowadays, the ARM7TDMI is still one of the most widely used microcontroller cores in embedded systems.
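The external dependency defined in 2.1.1.2 (HasChar/SendChar/GetChar) and the component description suggest a small polling core. The sketch below is only an illustration of that forwarding loop, not the component's actual source code: the in-memory uart_t stand-in and the explicit per-port parameter are our assumptions for simulation (the real driver interface takes no arguments, one driver instance per port).

```c
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical in-memory stand-ins for the two UART driver instances.
 * A real driver would talk to the UART0/UART1 hardware registers. */
typedef struct {
    char rx[64]; size_t rx_head, rx_tail;   /* data waiting to be read  */
    char tx[64]; size_t tx_len;             /* data already transmitted */
} uart_t;

static bool uart_has_char(uart_t *u)          { return u->rx_tail < u->rx_head; }
static char uart_get_char(uart_t *u)          { return u->rx[u->rx_tail++]; }
static bool uart_send_char(uart_t *u, char c) { u->tx[u->tx_len++] = c; return true; }

/* One pass of the converter's main loop: forward any pending character
 * in both directions, unchanged. The differing baud rates are handled
 * entirely by the underlying drivers, not by this loop. */
void brconverter_poll(uart_t *pc_side, uart_t *kline_side)
{
    if (uart_has_char(pc_side))
        uart_send_char(kline_side, uart_get_char(pc_side));
    if (uart_has_char(kline_side))
        uart_send_char(pc_side, uart_get_char(kline_side));
}
```

Because the loop never touches the character value, goal (b) reduces to verifying the driver layer; losses (goal a) can only occur if one side produces characters faster than the other side can transmit them.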
2.1.2 Identify components
2.1.2.1 Define the components to be evaluated

The component submitted to quality evaluation is a Serial-Serial Baud Rate Converter. This component is used to connect two devices with different baud rates. It relies on a low-level UART Driver implementation and keeps detecting and transmitting new characters on both sides at the specified mode and baud rates.

2.1.2.2 Specify the architecture and the environment
In the automotive context, the serial baud rate converter component is primarily used when a user wants to interface a K-line bus with a notebook. In the laboratory, where the use of a real vehicle is not practical, only a stand-alone ECU is required. The ECU responds to K-line messages exactly as a real vehicle would, using the same signal levels, data packets and timing, although all data values are null.
The architecture proposed for the evaluation – Figure 1 – is composed of one Siemens ECU, one serial port enabled PC and a microcontroller board, where the component will run and the baud rate conversion will take place. ECU and PC will be connected to one UART port of the microcontroller board.
The matching of voltage levels between the PC and the microcontroller will be done through an RS-232C transceiver, i.e., -12 V and +12 V (high and low levels) converted to +3.3 V and 0 V (high and low levels, respectively).
As previously said, the K-line protocol requires different voltage levels and has only one data line to receive and transmit. So the physical interface between microcontroller board and ECU needs an intervention as well: the ECU defines +12V as high level. This conversion will be provided by a small K-line interface board connected between ECU and microcontroller board.
The microcontroller board proposed was manufactured by Olimex, model LPC-P2148 [2], fitted with the NXP LPC2148 [3] microcontroller. The component will run on this controller.
The evaluator must connect the PC to UART0 of the board and the ECU to UART1 of the same board.
The PC must have a terminal emulator which sends and receives characters at a given baud rate. This terminal should be able to save the received characters in a file for later analysis.
Figure 1: Baud Rate Converter Evaluation Architecture
Processor: ARM7TDMI
Microcontroller: NXP LPC2148
Compiler: RealView MDK-ARM [4] version 3.50

2.1.2.3 Define the scenarios

1. Scenario: 115,200 bps ↔ 10,400 bps
a. Connect the PC to UART0 of the microcontroller board;
b. Connect the K-line interface board to UART1 of the microcontroller board;
c. Connect the ECU to the K-line interface board;
d. Configure the component with the specifications:
   i. UART0 at 115,200 bps;
   ii. UART1 at 10,400 bps;
e. Compile, download and run the component on the microcontroller board;
f. Configure the terminal emulator on the computer at 115,200 bps;
g. Send the request data string from the computer and wait for the reply;
h. Save the contents of the received data response to a file.
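Goal (c), measuring the accuracy of the specified baud rate, comes down to integer-divisor rounding inside the UART. As a worked example, the sketch below computes the deviation for a 16x-oversampling UART; the 60 MHz peripheral clock is an assumption for this LPC2148 setup, and the helper function is ours, not part of the component:

```c
/* Baud rate accuracy for a 16x-oversampling UART: the divisor
 * PCLK / (16 * baud) must be rounded to an integer, so the generated
 * rate deviates slightly from the nominal one. Returns the deviation
 * in percent. */
double baud_error_percent(double pclk_hz, double nominal_baud)
{
    long divisor  = (long)(pclk_hz / (16.0 * nominal_baud) + 0.5); /* nearest int */
    double actual = pclk_hz / (16.0 * divisor);
    return 100.0 * (actual - nominal_baud) / nominal_baud;
}
```

With an assumed 60 MHz PCLK, the nonstandard 10,400 bps K-line rate rounds to divisor 361 and lands within about 0.12% of nominal, well inside the few-percent tolerance asynchronous framing allows, while 115,200 bps (divisor 33) deviates by roughly 1.4%.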
2.1.3 Specify embedded quality model
2.1.3.1. Define the embedded quality model (internal, external and quality in use characteristics)
Table 2. Characteristics and Sub-Characteristics defined.

Characteristic   Sub-Characteristic           Importance
Functionality    Real-Time                    4
Functionality    Accuracy                     5
Functionality    Self-contained               2
Reliability      Recoverability               3
Reliability      Safety                       3
Usability        Configurability              4
Usability        Attractiveness               3
Efficiency       Resource Behavior            3
Efficiency       Energy consumption           3
Efficiency       Data Memory Utilization      4
Efficiency       Program Memory Utilization   3
Maintainability  Stability                    4
Maintainability  Changeability                3
Maintainability  Testability                  4
Portability      Deployability                4
Portability      Replaceability               3
Portability      Flexibility                  3
Portability      Reusability                  3
2.2 Specify the Evaluation
2.2.1 Select metrics
2.2.1.1 Define the EQL for evaluation

Table 3. Definition of the level of quality for the evaluation.

Characteristic   Sub-Characteristic           EQL   Importance
Functionality    Real-Time                    I     4
Functionality    Accuracy                     II    5
Functionality    Self-contained               II    2
Reliability      Recoverability               I     3
Reliability      Safety                       II    3
Usability        Configurability              I     4
Usability        Attractiveness               I     3
Efficiency       Resource Behavior            I     3
Efficiency       Energy consumption           I     3
Efficiency       Data Memory Utilization      I     4
Efficiency       Program Memory Utilization   I     3
Maintainability  Stability                    II    4
Maintainability  Changeability                I     3
Maintainability  Testability                  II    4
Portability      Deployability                I     4
Portability      Replaceability               I     3
Portability      Flexibility                  II    3
Portability      Reusability                  I     3
2.2.1.2 Select quality attributes
Table 4. The Quality Attributes selected to compose the EQM.
Characteristic   Sub-Characteristic           EQL   Quality Attribute                      Importance
Functionality    Real-Time                    I     Response time (Latency)                4
Functionality    Real-Time                    I     Execution time                         4
Functionality    Accuracy                     II    Precision                              5
Functionality    Self-contained               II    Dependability                          2
Reliability      Recoverability               I     Error Handling                         3
Reliability      Safety                       II    Integrity                              3
Usability        Configurability              I     Effort to configure                    4
Usability        Configurability              I     Understandability                      3
Usability        Attractiveness               I     Effort to operate                      3
Efficiency       Resource Behavior            I     Peripheral utilization                 3
Efficiency       Energy consumption           I     Amount of Energy Consumption           3
Efficiency       Data Memory Utilization      I     Amount of Data Memory Utilization      4
Efficiency       Program Memory Utilization   I     Amount of Program Memory Utilization   3
Maintainability  Stability                    II    Modifiability                          4
Maintainability  Changeability                I     Change Effort                          3
Maintainability  Testability                  II    Test suite provided                    4
Portability      Deployability                I     Complexity level                       4
Portability      Replaceability               I     Backward Compatibility                 3
Portability      Flexibility                  I     Configuration capacity                 3
Portability      Flexibility                  II    Mobility                               3
Portability      Reusability                  I     Architecture compatibility             3
2.2.2 Establish the evaluation criteria
2.2.2.1 Specify metrics on EMA Table 5. Metrics specified based in EMA approach.
Cha
Sub-Characteristics
Quality Attributes
Evaluation Techniques E
QL
Imp Goal Question Metric Interpretation Score Level Metrics Results
Final Results
Self-contained Dependability Dependency analysis II 2
Evaluates the ability of the component to provide itself all functions expected
How many functions does the component provide by itself?
Number of functions provided by itself / Number of specified functions
0 <= x <= 1; closer to 1 being better
0- 0.1: Not acceptable 0.11-0.6: Reasonable 0.61-1: Acceptable
2/8 = 0.25
Response time (Latency)
Evaluation measurement (Time analysis)
I 4 Evaluates the time taken since a character is received in one uart and starts transmitting on the other.
How is the average time between 5 samples?
(Σ Time taken between a random set of invocations) / Number of invocations
The lower the better.
>100us: Not acceptable 100: Reasonable < 100 us: Acceptable
119.32 us; 107.69 us; 64.03 us; 107,29 us; 55.60 us; 453,93us / 5 = 90,786 us = 0.85
Real-Time
Execution time Evaluation measurement (Time analysis)
I 4 Evaluates the time used by processor for execute the task
How is the average time of task execution?
(Σ Time task execution) / Number of execution
The lower the better.
>1us: Not acceptable 1us: Reasonable < 1 us: Acceptable
230 ns = 0,85
Precision analysis (Evaluation measurement) I Evaluates the percentage
of the results that were obtained with precision
Based on the amount of tests executed, how much test results return with precision?
Precision on results / Amount of tests
0 <= x <= 1; closer to 1 being better
0- 0.3: Not acceptable 0.31-0.6: Reasonable 0.61-1: Acceptable
1
Functional Tests (black box)
I
Validates required functional features and behaviors from an external view
How precise is are required functions and behaviors of the component?
Number of precise function and correct behavior / Number of function
0 <= x <= 1; closer to 1 being better
0- 0.3: Not acceptable 0.31-0.6: Reasonable 0.61-1: Acceptable
1
Fun
ctio
nalit
y
Accuracy Precision
Structural Tests (white-box) II
5
Validation of program structures, behaviors, and logic of component from an internal view
How well structured is the code and logical implementation of the component?
Number of function with good implementation (well structured and logical) / Number of function
0 <= x <= 1; closer to 1 being better
0- 0.3: Not acceptable 0.31-0.6: Reasonable 0.61-1: Acceptable
1
0.89
Reliability / Recoverability / Error Handling (Code inspection; EQL I; Imp 3)
  Goal: Verify that coding style guidelines are followed, comments in the code are relevant and of appropriate length, naming conventions are clear and consistent, and the code can be easily maintained.
  Question: How compliant is the component under a systematic examination of the source code?
  Metric: Number of functions compliant with the systematic approach / Number of specified functions.
  Interpretation: 0 <= x <= 1; closer to 1 is better.
  Score levels: 0-0.3 Not acceptable; 0.31-0.6 Reasonable; 0.61-1 Acceptable.
  Result: 8/8 = 1.

Reliability / Safety / Integrity (Code inspection; EQL I)
  Goal, question, metric, interpretation and score levels: as in the code inspection entry above.
  Result: 8/8 = 1.

Reliability / Safety / Integrity (Algorithmic complexity; EQL II; Imp 3)
  Goal: Quantifies how complex the component is in terms of the program, or set of algorithms, needed to implement its functionalities.
  Question: How complex is the program that implements the component's functionalities?
  Metric: Cyclomatic complexity M = E − N + 2P, where E is the number of edges of the graph, N the number of nodes and P the number of connected components.
  Interpretation: M >= 0; closer to 0 is better.
  Score levels: 0-10 Acceptable; 11-60 Reasonable; >60 Not acceptable.
  Result: 2; score 0.9.

Reliability final result: 0.96.

Usability / Configurability / Effort to configure (Effort-to-configure analysis; EQL I)
  Goal: Evaluates the time necessary to configure the component.
  Question: How much time is needed to configure the component so that it works correctly in a system?
  Metric: Time spent to configure the component correctly.
  Interpretation: the faster the configuration the better, but it depends on the component and environment complexity.
  Score levels: 0-0.3 Not acceptable; 0.31-0.6 Reasonable; 0.61-1 Acceptable.
  Result: 0.7.

Usability / Configurability / Effort to configure (Inspection of user interfaces; EQL I; Imp 4)
  Goal: Analyzes the ability to configure all provided and required functions.
  Question: How many configurations are provided by each interface?
  Metric: Number of configurations in all provided interfaces / Number of provided interfaces.
  Interpretation: 0 <= x <= 1; closer to 1 is better.
  Score levels: 0-0.3 Not acceptable; 0.31-0.6 Reasonable; 0.61-1 Acceptable.
  Result: 1.

Usability / Configurability / Understandability (Documentation analysis: user guide, architectural analysis, etc.; EQL I; Imp 3)
  Goal: Analyzes the availability of the documentation, its efficiency and its efficacy.
  Question: How many documents of sufficient quality are available to understand the component functions?
  Metric: Number of documents with quality / Number of documents provided.
  Interpretation: 0 <= x <= 1; closer to 1 is better.
  Score levels: 0-0.3 Not acceptable; 0.31-0.6 Reasonable; 0.61-1 Acceptable.
  Result: 0.

Usability final result (including the Attractiveness rows that follow): 0.61.
Usability / Attractiveness / Effort to operate (Evaluation measurement; EQL I)
  Goal: Analyzes the complexity of operating the functions provided by the component.
  Question: How many operations are provided by each interface?
  Metric: Number of provided interfaces / Number of operations in all provided interfaces.
  Interpretation: 0 <= x <= 1; closer to 1 is better.
  Score levels: 0-0.3 Not acceptable; 0.31-0.6 Reasonable; 0.61-1 Acceptable.
  Result: 1/2 = 0.5.

Usability / Attractiveness / Effort to operate (Inspection of user interfaces; EQL I; Imp 3)
  Goal: Analyzes the ability to operate all provided and required functions.
  Question: How much time is needed to operate the component?
  Metric: Σ usage time of each function / time to operate.
  Interpretation: the lower the better.
  Score levels: 0-0.3 Not acceptable; 0.31-0.6 Reasonable; 0.61-1 Acceptable.
  Result: 0.7.

Efficiency / Resource Behavior / Peripheral utilization (Evaluation measurement; EQL I; Imp 3)
  Goal: Analyzes the number of peripherals the component requires for its correct operation.
  Question: How many peripherals does the component need to work correctly?
  Metric: Number of peripherals required / Number of peripherals available.
  Interpretation: 0 <= x <= 1; closer to 0 is better.
  Score levels: 0-0.3 Acceptable; 0.31-0.6 Reasonable; 0.61-1 Not acceptable.
  Result: 2/11 = 0.18; score 0.82.

Efficiency / Energy consumption / Amount of Energy Consumption (Evaluation measurement; EQL I; Imp 3)
  Goal: Analyzes the amount of energy the component consumes to perform its task.
  Question: How much energy does the component consume to perform the task?
  Metric: Amount of energy necessary for the component to perform the task.
  Interpretation: x > 0; closer to 0 is better.
  Score levels: >100 mA Not acceptable; =100 mA Reasonable; <100 mA Acceptable.
  Result: 74 mA; score 0.85.

Efficiency / Data Memory Utilization / Amount of Data Memory Utilization (Evaluation measurement; EQL I; Imp 4)
  Goal: Analyzes the amount of data memory required for correct operation.
  Question: How much data memory does the component need to work correctly?
  Metric: Amount of data memory necessary for the component to work correctly.
  Interpretation: x > 0; closer to 0 is better.
  Score levels: 0-128 B Acceptable; 128-1024 B Reasonable; >1024 B Not acceptable.
  Result: 16 bytes of stack; score 0.875.

Efficiency / Program Memory Utilization / Amount of Program Memory Utilization (Evaluation measurement; EQL I; Imp 3)
  Goal: Analyzes the amount of program memory required for correct operation.
  Question: How much program memory does the component need to work correctly?
  Metric: Amount of program memory necessary for the component to work correctly.
  Interpretation: x > 0; closer to 0 is better.
  Score levels: 0-1 KB Acceptable; 1-16 KB Reasonable; >16 KB Not acceptable.
  Result: 54 bytes; score 0.94.

Efficiency final result: 0.87.

Maintainability / Stability / Modifiability (Code metrics and programming rules; EQL II)
  Goal: Analyzes whether the rules of the programming language were followed in the component implementation, by collecting a set of metrics.
  Question: How well does the component implementation follow the rules of the programming language?
  Metric: Number of functions that follow the programming rules / Number of functions in the component.
  Interpretation: 0 <= x <= 1; closer to 1 is better.
  Score levels: 0-0.3 Not acceptable; 0.31-0.6 Reasonable; 0.61-1 Acceptable.
  Result: 1.

Maintainability / Stability / Modifiability (Documents inspection; EQL II)
  Goal: Examines the documents in detail, based on a systematic approach, to assess the quality of the component documents.
  Question: What is the quality level of the component's documents?
  Metric: Number of documents with quality / Number of documents available.
  Interpretation: 0 <= x <= 1; closer to 1 is better.
  Score levels: 0-0.3 Not acceptable; 0.31-0.6 Reasonable; 0.61-1 Acceptable.
  Result: 0.

Maintainability / Stability / Modifiability (Static analysis; EQL II; Imp 4)
  Goal: Checks the component for errors, without compiling or executing it, through tools.
  Question: How many errors does the component have at design time?
  Metric: Number of errors found at design time.
  Interpretation: x >= 0; closer to 0 is better.
  Score levels: 0-3 Acceptable; 4-10 Reasonable; >10 Not acceptable.
  Result: 0; score 1.

Maintainability final result (including the Changeability and Testability rows that follow): 0.37.
Maintainability / Changeability / Change Effort (Changeability analysis; EQL I; Imp 3)
  Goal: Analyzes the customizable parameters that the component offers.
  Question: How many parameters are provided to customize each function of the component?
  Metric: Number of provided interfaces / Number of parameters to configure the provided interfaces.
  Interpretation: 0 <= x <= 1; closer to 1 is better.
  Score levels: 0-0.3 Not acceptable; 0.31-0.6 Reasonable; 0.61-1 Acceptable.
  Result: 1/2 = 0.5.

Maintainability / Testability / Test suite provided (Analysis of the test suite provided; EQL I; Imp 4)
  Goal: Analyzes the ability of the component to provide a test suite for checking its functions.
  Question: Is there any test suite? What is its coverage?
  Metric: Number of test suites provided / Number of functions.
  Interpretation: 0 <= x <= 1; closer to 1 is better.
  Score levels: 0-0.3 Not acceptable; 0.31-0.6 Reasonable; 0.61-1 Acceptable.
  Result: 0.

Portability / Deployability / Complexity level (Deployment analysis; EQL I; Imp 4)
  Goal: Analyzes how complex it is to deploy the component in its specific environment(s).
  Question: How much time does it take to deploy the component in its environment?
  Metric: Time taken to deploy the component in its environment.
  Interpretation: estimate the time first, then compare with the actual time taken to deploy the component.
  Score levels: 0-0.3 Not acceptable; 0.31-0.6 Reasonable; 0.61-1 Acceptable.
  Result: Not applied.

Portability / Replaceability / Backward Compatibility (Backward compatibility analysis; EQL I; Imp 3)
  Goal: Analyzes the compatibility with previous versions.
  Question: What is the compatibility with previous versions?
  Metric: Correct results / Set of identical invocations in different component versions.
  Interpretation: 0 <= x <= 1; closer to 1 is better.
  Score levels: 0-0.3 Not acceptable; 0.31-0.6 Reasonable; 0.61-1 Acceptable.
  Result: Not applied (NA).

Portability / Flexibility / Configuration capacity (Configuration analysis; EQL I; Imp 3)
  Goal: Analyzes the ability of the component to be transferred from one environment to another, considering the related changes.
  Question: How much effort is needed to adapt the component to a new environment?
  Metric: Analyze the component and environment constraints; deploy the component in the environment specified in the documentation; time taken to adapt the component to its specified environments.
  Interpretation: analyze the time taken to deploy the component in each defined environment.
  Score levels: 0-0.3 Not acceptable; 0.31-0.6 Reasonable; 0.61-1 Acceptable.
  Result: 0.8.

Portability / Flexibility / Mobility (Mobility analysis; EQL II; Imp 3)
  Goal: Analyzes the ability of the component to be transferred from one environment to another.
  Question: Can the component be transferred to another environment without any changes?
  Metric: Analyze the component/environment constraints and deploy the component in the environment specified in the documentation. Possible metric: Number of environments where the component works correctly / Number of environments described in its specification.
  Interpretation: 0 <= x <= 1; closer to 1 is better.
  Score levels: 0-0.3 Not acceptable; 0.31-0.6 Reasonable; 0.61-1 Acceptable.
  Result: 0.75.

Portability / Reusability / Architecture compatibility (Hardware/software compatibility analysis; EQL I; Imp 3)
  Goal: Analyzes the real compatibility of the component with the architectures listed in the documentation.
  Question: How compatible is the component with the architectures listed in the documentation?
  Metric: Number of architectures really compatible / Number of architectures listed as compatible.
  Interpretation: 0 <= x <= 1; closer to 1 is better.
  Score levels: 0-0.3 Not acceptable; 0.31-0.6 Reasonable; 0.61-1 Acceptable.
  Result: 1.

Portability final result: 0.85.
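The Integrity row above scores the component's cyclomatic complexity of M = 2 via M = E − N + 2P. As a sanity check on that formula, here is a sketch (our own helper, not part of the component's source):

```c
/* Cyclomatic complexity M = E - N + 2P, where E is the number of edges,
 * N the number of nodes and P the number of connected components of the
 * control-flow graph. For a single function, P = 1, and M equals the
 * number of binary decision points plus one. */
int cyclomatic_complexity(int edges, int nodes, int parts)
{
    return edges - nodes + 2 * parts;
}
```

For example, a function containing a single if/else has a control-flow graph with N = 4 nodes (entry, two branches, exit) and E = 4 edges, so with P = 1 the formula gives M = 2, the value reported for this component.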
2.3 Design the Evaluation
2.3.1 Produce evaluation plan
2.3.1.1 Design the evaluation plan

After a detailed analysis of all 21 quality attributes selected by this EQM, the following techniques are required:
- System tests (black-box and white-box)
- Code inspection
- Real-time analysis
- Code metrics analysis (complexity)
- Electrical profiling (current consumption)
System tests:

The very first analysis can be made using simple functional test cases. The idea here is to verify the component against its most basic specification requirements. The test case described below can be used for all the functional tests to be performed:

Test type: Functional
Execution type: Manual
Scenario: Request strings sent from the PC; ECU simulator responses saved and analysed.
Goal: Verify that data are transmitted and received, after baud rate conversion, without modifications or delays.
Pre-conditions: PC and boards connected, component code running.
Post-conditions: All response data saved on the PC.

Procedure and expected results:
1. Using the terminal emulator interface on the PC, the evaluator sends the request string.
   Expected: the data go through the baud rate converter component.
2. Wait for the data response from the ECU simulator.
   Expected: the ECU simulator receives the request command at the correct baud rate, interprets it, and sends the corresponding answer string back to the PC.
3. Verify that the response command arrived correctly and without modification.
   Expected: a file saved on the PC with all bytes answered by the ECU simulator.

This same test case can be used as a basis for the other required tests:
- Stability: repeat the test case for 10 minutes;
- Performance: run the test case while performing measurements on the serial signals;
- Stress: run step 1 repeatedly without waiting for an answer.
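The pass/fail decision for lost or altered characters (goals a and b) can be automated offline against the file saved in step 3. A minimal sketch; the helper name and buffer-based interface are our own, not part of the evaluation tooling:

```c
#include <stddef.h>

/* Compares the bytes the ECU simulator was expected to answer with the
 * bytes actually saved by the terminal emulator. Counts altered bytes
 * plus any length difference (lost or extra bytes). Returns 0 when the
 * response is intact. */
int count_byte_errors(const unsigned char *expected, size_t exp_len,
                      const unsigned char *received, size_t rec_len)
{
    size_t n = exp_len < rec_len ? exp_len : rec_len;
    int errors = (int)(exp_len > rec_len ? exp_len - rec_len
                                         : rec_len - exp_len);
    for (size_t i = 0; i < n; i++)
        if (expected[i] != received[i])
            errors++;
    return errors;
}
```

During the stability and stress variants the same check is simply applied to each captured response in turn.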
Code Inspection:

Most of the quality attributes selected in this particular EQM require some sort of source code analysis. Among the available techniques, code inspection suits them well. No special tool is required for code inspection here. The evaluation team should follow a simplified software code inspection, without going into rework on the source code. The simplified code inspection process requires these steps:

1. Inspection planning: the moderator, here represented by the person playing the role of Evaluation Responsible, plans the inspection, distributes the code, schedules an inspection meeting and, very importantly, explains the quality attributes the inspectors should look for;
2. Preparation: each inspector goes over the code and identifies possible problems for every designated quality attribute;
3. Inspection meeting: all inspectors gather to read the code. During the meeting, the roles of reader and scribe are played by the Evaluation Responsible. Each inspector points out their findings while the scribe records them in this document.
Real-time analysis:

For the quality attributes requiring real-time analysis, a digital oscilloscope is the only tool required. The evaluator should complete the basic setup described earlier in this section, connect the scope probes to the transmitting and receiving signals, send patterns and measure the results.

Code metrics analysis:

Since all the code is written in ANSI C, the tool chosen for code metrics analysis was the free and open-source CCCC - C and C++ Code Counter [5]. It has a command-line interface, and the results come out in an .html file. After downloading and installing CCCC, evaluators should open a command prompt, move to the source code folder and run:

>> cccc --lang=c *.c

The results are written to a sub-folder named '.cccc', in the file 'cccc.html'. All the metrics are there.

Electrical profiling:

Finally, for the electrical measurements, specifically current consumption, a multimeter with a milliampere-range ammeter will suffice. The microcontroller board does not drain any source current from the serial interfaces, so measuring the current at the power supply alone covers the complete system.

The current specific to the baud rate conversion component requires two measurements: first, evaluators should measure the current with the component running and performing conversions; the second measurement should be made with the firmware compiled and flashed without the component. The result is the difference between the two measurements.
2.3.1.2 Estimate resources and schedule for the evaluation

Table 6. Component Evaluation Schedule (Nov/2009 - Jan/2010).

Activity                                      Responsible
Design the evaluation plan                    Engº 1 / Engº 2
Configure the environment                     Engº 1 / Engº 2
Preparation for code inspection meeting       Engº 1 / Engº 2
Code inspection meeting                       Engº 1 / Engº 2
Performing structural tests                   Engº 1
Execute code metrics analysis tool (CCCC)     Engº 1
Performing real-time analysis                 Engº 2
Measure electrical consumption                Engº 2
Write the quality evaluation report           Engº 1 / Engº 2
2.3.1.3 Select instrumentation (equipment/tools)

Table 7. Definition of the tools that should be used during the evaluation.

Characteristic   Sub-Characteristic           Quality Attribute                      Evaluation Technique                     Instrumentation
Functionality    Accuracy                     Precision                              Functional tests (white-box)             -
Functionality    Real-Time                    Response time (Latency)                Evaluation measurement                   Oscilloscope
Functionality    Real-Time                    Execution time                         Real-time analysis                       Oscilloscope
Functionality    Self-contained               Dependability                          Code inspection                          Spreadsheet
Reliability      Recoverability               Error Handling                         Code inspection                          Spreadsheet
Reliability      Safety                       Integrity                              Code inspection; code metrics analysis   CCCC - C and C++ Code Counter
Usability        Configurability              Effort to configure                    Code inspection                          Spreadsheet
Usability        Configurability              Understandability                      Documents inspection                     Spreadsheet
Usability        Attractiveness               Effort to operate                      Code inspection                          Spreadsheet
Efficiency       Resource Behavior            Peripheral utilization                 Code inspection                          Spreadsheet
Efficiency       Energy consumption           Amount of Energy Consumption           Electrical profiling                     Multimeter
Efficiency       Data Memory Utilization      Amount of Data Memory Utilization      Code inspection (compiler output)        Build output
Efficiency       Program Memory Utilization   Amount of Program Memory Utilization   Code inspection (compiler output)        Build output
Maintainability  Stability                    Modifiability                          Code inspection; code metrics analysis   CCCC - C and C++ Code Counter
Maintainability  Changeability                Change Effort                          Code inspection                          Spreadsheet
Maintainability  Testability                  Test suite provided                    Code inspection                          Spreadsheet
Portability      Deployability                Complexity level                       Code inspection                          Spreadsheet
Portability      Replaceability               Backward Compatibility                 Code inspection                          Spreadsheet
Portability      Flexibility                  Configuration capacity                 Code inspection                          Spreadsheet
Portability      Flexibility                  Mobility                               Code inspection                          Spreadsheet
Portability      Reusability                  Architecture compatibility             Code inspection                          Spreadsheet
2.3.1.4 Define the configuration of the instrumentation
For all tools and instruments required for this evaluation, no special configuration is needed.
2.4 Execute the Evaluation
2.4.1 Measure Characteristics
2.4.1.1 Setup the instrumentation, environment and scenarios

During this evaluation, only three tools need to be configured: the multimeter, the oscilloscope and the terminal emulator on the PC.

The multimeter will be used for current measurement only. The currents in this component are very low, so it should be configured to measure current in the microampere (10^-6 A) range.

The oscilloscope will display the electrical signals in the time domain. The analysis setup puts the amplitude axis at 2 V/division and the time axis at 200 us/division.

The terminal emulator makes it possible to type string characters from the PC to the ECU. It shall be configured at a baud rate of 115,200 bps on the COM port connected to the LPC-P2148 board.

2.4.1.2 Collect data
Table 8. Quality in Use Characteristics.

Productivity   Satisfaction   Security   Effectiveness
80%            75%            100%       100%
Table 9. Additional information of the component.
Additional Information
1. Technical Information
a. Component Version: v1.0
b. Programming Language: C
c. Design and Project Patterns: None
d. Operating Systems Supported: None
e. Compiler Version: v 3.50
f. Compatible Architecture: microcontrollers in general
g. Minimal Requirements: 2 UARTs
h. Technical Support: www.cesar.org.br
i. Compliance: MISRA-C:2004
2. Organization Information
a. CMMi Level: 3
b. Organization’s Reputation: High
3. Market Information
a. Development time: 4 hours
b. Cost: NA
c. Time to market: NA
d. Targeted market: NA
e. Affordability: NA
f. Licensing: NA
2.4.2 Compare with criteria
2.4.2.1 Consolidate data
Table 10. Final results of the component quality evaluation.

Characteristics    EQL   Final Result
Functionality      II    0.89
Reliability        II    0.96
Usability          I     0.61
Efficiency         I     0.87
Maintainability    II    0.37
Portability        II    0.85
[Figure: bar chart of the six quality characteristic scores for BRConverter, each bar labelled with its EQL, on a 0 to 1 quality scale.]
Figure 2. Quality characteristics: BRConverter.
[Figure: bar chart of the quality in use characteristic scores (Productivity, Satisfaction, Security, Effectiveness) for BRConverter.]
Figure 3. Quality in use characteristics: BRConverter.
2.4.3 Assess results
2.4.3.1 Build the component quality label
Table 11. Component quality label.

Summary of embedded component quality evaluation

Quality Characteristics
Characteristics    EQL   Final Result
Functionality      II    0.89
Reliability        II    0.96
Usability          I     0.61
Efficiency         I     0.87
Maintainability    II    0.37
Portability        II    0.85

Quality in Use Characteristics
Productivity   Satisfaction   Security   Effectiveness
80%            75%            100%       100%
2.4.3.2 Record Lessons Learned
Since this was the first use of the evaluation methodology, some improvements can be made:
- At the beginning there was a misunderstanding about where to start. The training we received was very useful but not enough to proceed alone; a careful reading of the methodology documentation would also have helped;
- The selection and definition of metrics required more time than we expected. Next time we should plan more time for this activity.
2.4.3.3 Make Recommendations
Two recommendations were made for this component:
- It should reduce its dependency on the UART module;
- The next version should implement automatic configuration of the baud-rate parameters, avoiding the need to recompile the code for every parameter change.
3 References
[1] ISO 9141-2: 1994; Road Vehicles – Diagnostic System; http://www.iso.org/iso/catalogue_detail.htm?csnumber=16738
[2] Olimex LPC-P2148 Development Board; http://www.olimex.com/dev/lpc-p2148.html
[3] NXP LPC-2148 32-bit ARM-based microcontroller chip; http://www.nxp.com/pip/LPC2141_42_44_46_48_4.html
[4] MDK-ARM Microcontroller Development Kit; http://www.keil.com/arm/mdk.asp
[5] CCCC – C and C++ Code Counter; http://sourceforge.net/projects/cccc/
Appendix: A – Class Diagram; B – Wave Form; C – Source Code.

Class Diagram: [figure omitted]
Component Diagram: [figure omitted]
Wave Form: [figure omitted] 1 – UART 0 (10400 bps, K-Line); 2 – UART 1 (115200 bps, PC)
Source Code:

BRConverter.c

/************************************************************************
 * FILE NAME: BRConverter.c                                             *
 *                                                                      *
 * PURPOSE: This module provides baud-rate conversion functions         *
 *          between the two UARTs available in the environment          *
 *                                                                      *
 * REFERENCES TO OTHER FILES:                                           *
 *   Name           I/O  Description                                    *
 *   BRConverter.h   I   Module header                                  *
 *                                                                      *
 * EXTERNAL VARIABLES: NONE                                             *
 * NOTES: NONE                                                          *
 * REQUIREMENT/SPECIFICATION REFERENCE: NONE                            *
 *                                                                      *
 * HISTORY:                                                             *
 *   Date        Author         Version  Change Description             *
 *   13/01/2006  Daniel Thiago  1.0      Initial version of the code    *
 *   31/05/2006  Daniel Thiago  1.1      Comments added                 *
 ************************************************************************/

#include "BRConverter.h"

/************************************************************************
 *                   Global Variable Declarations                       *
 ************************************************************************/

/************************************************************************
 * FUNCTION NAME: VerifyUart0                                           *
 *                                                                      *
 * DESCRIPTION: Monitors character reception on Uart0. When characters  *
 *              are received on Uart0, they are sent through Uart1.     *
 *                                                                      *
 * ARGUMENTS: NONE                                                      *
 * RETURN: NONE                                                         *
 ************************************************************************/
void VerifyUart0( void )
{
    if ( UART0_HasChar( ) )
    {
        UART1_PutChar( UART0_GetChar( ) );
    }
}

/************************************************************************
 * FUNCTION NAME: VerifyUart1                                           *
 *                                                                      *
 * DESCRIPTION: Monitors character reception on Uart1. When characters  *
 *              are received on Uart1, they are sent through Uart0.     *
 *                                                                      *
 * ARGUMENTS: NONE                                                      *
 * RETURN: NONE                                                         *
 ************************************************************************/
void VerifyUart1( void )
{
    if ( UART1_HasChar( ) )
    {
        UART0_PutChar( UART1_GetChar( ) );
    }
}

.\BRConverter\BRConverter.h

/************************************************************************
 * FILE NAME: BRConverter.h                                             *
 *                                                                      *
 * PURPOSE: This module provides baud-rate conversion functions         *
 *          between the two UARTs available in the environment          *
 *                                                                      *
 * REFERENCES TO OTHER FILES:                                           *
 *   Name          I/O  Description                                     *
 *   Cabecalho.h    I   Type definitions                                *
 *   Uart.h         I   UARTs 0 and 1 are required by the module        *
 *                                                                      *
 * EXTERNAL VARIABLES: NONE                                             *
 * NOTES: NONE                                                          *
 * REQUIREMENT/SPECIFICATION REFERENCE: NONE                            *
 *                                                                      *
 * HISTORY:                                                             *
 *   Date        Author         Version  Change Description             *
 *   13/01/2006  Daniel Thiago  1.0      Initial version of the code    *
 *   31/05/2006  Daniel Thiago  1.1      Comments added                 *
 ************************************************************************/

#ifndef BRCONVERTER_H
#define BRCONVERTER_H

/************************************************************************
 *                       Include Declarations                           *
 ************************************************************************/
#include "Cabecalho.h"
#include "Uart.h"

/************************************************************************
 *                     Register Bit Definitions                         *
 ************************************************************************/

/************************************************************************
 *                      Constant Definitions                            *
 ************************************************************************/
#define UART0_BAUDRATE ( uint32_t ) 115200  /* Baud rate used on UART 0 */
#define UART1_BAUDRATE ( uint32_t ) 10400   /* Baud rate used on UART 1 */

/************************************************************************
 *                  Exported Variable Declarations                      *
 ************************************************************************/

/************************************************************************
 *                  Provided Function Declarations                      *
 ************************************************************************/

/************************************************************************
 * FUNCTION NAME: VerifyUart0                                           *
 * DESCRIPTION: Monitors character reception on Uart0. When characters  *
 *              are received on Uart0, they are sent through Uart1.     *
 * ARGUMENTS: NONE                                                      *
 * RETURN: NONE                                                         *
 ************************************************************************/
extern void VerifyUart0( void );

/************************************************************************
 * FUNCTION NAME: VerifyUart1                                           *
 * DESCRIPTION: Monitors character reception on Uart1. When characters  *
 *              are received on Uart1, they are sent through Uart0.     *
 * ARGUMENTS: NONE                                                      *
 * RETURN: NONE                                                         *
 ************************************************************************/
extern void VerifyUart1( void );

/************************************************************************
 *                  Required Function Declarations                      *
 ************************************************************************/

/************************************************************************
 * FUNCTION NAME: UARTX_Configure                                       *
 * DESCRIPTION: Configures UartX with the baud rate passed as parameter *
 * ARGUMENTS:                                                           *
 *   uint32_t - baudX - Baud rate to be used                            *
 * RETURN: NONE                                                         *
 ************************************************************************/
extern void UART0_Configure( uint32_t baud0 );
extern void UART1_Configure( uint32_t baud1 );

/************************************************************************
 * FUNCTION NAME: UARTX_HasChar                                         *
 * DESCRIPTION: Checks whether a new character has been received by     *
 *              UartX                                                   *
 * ARGUMENTS: NONE                                                      *
 * RETURN:                                                              *
 *   bool_t - Returns TRUE when there is a received, unread character   *
 ************************************************************************/
extern bool_t UART0_HasChar( void );
extern bool_t UART1_HasChar( void );

/************************************************************************
 * FUNCTION NAME: UARTX_PutChar                                         *
 * DESCRIPTION: Sends the character passed as parameter through UartX   *
 * ARGUMENTS:                                                           *
 *   char_t - c - Character to be transmitted over the serial port      *
 * RETURN:                                                              *
 *   bool_t - Indicates whether the character was inserted into the     *
 *            buffer; TRUE is returned when there is space in the       *
 *            buffer                                                    *
 ************************************************************************/
extern bool_t UART0_PutChar( char_t c );
extern bool_t UART1_PutChar( char_t c );

/************************************************************************
 * FUNCTION NAME: UARTX_GetChar                                         *
 * DESCRIPTION: Removes the oldest character from UartX's reception     *
 *              buffer                                                  *
 * ARGUMENTS: NONE                                                      *
 * RETURN:                                                              *
 *   char_t - Returns the received character                            *
 ************************************************************************/
extern char_t UART0_GetChar( void );
extern char_t UART1_GetChar( void );

#endif