
Page 1: [Advanced Information and Knowledge Processing] Meta-Programming and Model-Driven Meta-Program Development Volume 5 || Complexity Evaluation of Feature Models and Meta-Programs

Chapter 12 Complexity Evaluation of Feature Models and Meta-Programs

12.1 What Is Complexity?

Complexity is an inherent property of the systems being designed. The need for managing complexity is constantly growing, since systems per se are becoming more and more complex, mainly due to technological advances, increasing user requirements and market pressure. Complexity management can help to increase the quality and understandability of developed products, decrease the number of design errors [TZ81] and shorten development time. Managing complexity means, first of all, knowing how to measure it. Complexity measures allow reasoning about system structure, understanding system behaviour, comparing and evaluating systems, and foreseeing their evolution.

Researchers and practitioners have struggled with the complexity problem for more than three decades. Software engineers have long seen complexity as a major factor affecting design quality and productivity. The efforts to manage complexity have resulted in the introduction and study of such general principles as separation of concerns, information hiding, system decomposition and raising the abstraction level in system design [CHW98]. On the other hand, new design methodologies, which implement those principles combined with various design techniques (e.g. object-oriented design, generative programming [CW07]), have emerged and are still evolving. An evident example is product line engineering (PLE) [KLD02], which shifts from the design of a single system to the design of a family of related systems. The methodology widely exploits the model-driven approach, which puts high-level domain models at the focus.

Complexity is a difficult concept to define. Though the term 'complexity' is used in many of the over 25 roadmaps for software [BR00] and can, for example, be found in relation to software development, software metrics, software engineering for safety, reverse engineering, configuration management and empirical studies of software engineering [Vis05], so far there is no exact understanding of what is meant by complexity, and various definitions are still being proposed. High complexity of a system usually means that we cannot represent it in a short and comprehensive description. Briand et al. [BMB96] state that the complexity (of a modular software system) is a system property that depends on the relationships among elements and is not a property of any isolated element. IEEE Std. 610.12:1990 [IEEE90] defines software complexity as 'the degree to which a system or component has a design or implementation that is difficult to understand and verify'. Therefore, complexity relates both to comprehension complexity and to representation complexity.

V. Štuikys and R. Damaševičius, Meta-Programming and Model-Driven Meta-Program Development, Advanced Information and Knowledge Processing, DOI 10.1007/978-1-4471-4126-6_12, © Springer-Verlag London 2013

Another definition deals with the psychological complexity (also known as cognitive complexity) of programs, explaining that the 'true meaning of software complexity is the difficulty to maintain, change and understand software' [Zus91]. There are three specific types of psychological complexity that affect a programmer's ability to comprehend software: problem complexity, system design complexity and procedural complexity [CA88]. Problem complexity is a function of the problem domain. It is assumed that complex problem spaces are more difficult to comprehend than simple problem spaces.

Knowledge-based perception of software complexity is described in [RMWC04] as a process of 'translating' human-perceived complexity into numbers. The process starts with an experiment that involves human beings and provides data with embedded knowledge about the human perception of complexity. Data processing and analysis of data models lead to the discovery of simple rules that represent the human perception of software complexity.

From the organizational viewpoint, the complexity of a system is defined with respect to the number, dissimilitude and variety of states of system elements and the relationships between them [BAKC04]. These complexity variables enable the distinction between structural (static) and dynamic complexity. Structural complexity describes the system structure at a defined point in time, while dynamic complexity represents the change of the system configuration over time.

The aim of this chapter is to contribute towards research in software complexity measurement and management by defining complexity metrics specifically for feature models and meta-programs. The research is relevant because of the importance of ensuring meta-program testability and reliability and of developing effective meta-program testing procedures, to which meta-program complexity measures can contribute, similarly to how software metrics predict critical information about the reliability and maintainability of software systems using automatic analysis of source code.

12.2 Complexity Management

How can complexity be managed? Many factors influence the management of complexity. For example, from the cognitive complexity viewpoint, the major factor is understandability [MA08a]. One way to avoid exceeding cognitive constraints and creating cognitive overload is to reduce the amount of information that needs to be stored in short-term memory and to decrease the uncertainty of that information [MV95]. A common method to achieve this is to create new and useful abstractions. As a program is more than just the informative code, during the process of understanding, a programmer's level of expertise in a given domain, that is, domain knowledge, greatly affects program understanding, as does the programmer's general knowledge. The commonly recognized principles for managing complexity are reducing the amount of information, decomposing a system into modules, abstracting or hiding information and providing different levels of abstraction.

Software design complexity is also related to design quality. As complexity increases, design quality tends to decrease. To achieve the levels of quality needed in today's complex software designs, quality must be designed in, not tested in. Thus, the design-for-quality paradigm is becoming extremely important. In this context, Keating [Kea00] proposes simple software design partitioning rules as a basis for a quantitative measure of complexity: the number of modules at any level of hierarchy must be 7 ± 2.

The growth of complexity forces researchers to seek adequate means for better complexity management. A number of techniques have been identified and followed in software design practice that enforce higher program comprehensibility and reuse and ease complexity management. These include various lexical conventions, design style conventions and design process conventions. The primary tasks are understanding the complexity problem and finding relevant measures for evaluating software complexity. These issues may have a direct influence on the testability, performance, efficiency and other characteristics of the software systems to be designed.

Software metrics have always been strongly related to the programming paradigm used by the respective researchers. For example, McCabe's cyclomatic complexity [Cab76, SEI06] was proposed for measuring the testing effort of structured programs. For object-oriented programs, complexity metrics are based on special object-oriented (OO) features, such as the number of classes, the depth of the inheritance tree, the number of subclasses, etc. [SEI06].
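To make the idea concrete, here is a minimal Python sketch (our illustration, not code from the chapter) that approximates McCabe's cyclomatic complexity as 1 plus the number of decision points, a common practical simplification of V(G) = E − N + 2P:

```python
import ast

# Decision-point node types counted by this simplified metric.
DECISION_NODES = (ast.If, ast.While, ast.For, ast.IfExp,
                  ast.ExceptHandler, ast.BoolOp)

def cyclomatic_complexity(source: str) -> int:
    """Approximate V(G) as 1 + number of decision points."""
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, DECISION_NODES)
                   for node in ast.walk(tree))

code = """
def classify(x):
    if x < 0:
        return "negative"
    elif x == 0:
        return "zero"
    return "positive"
"""
print(cyclomatic_complexity(code))  # 3: two branches plus one
```

A straight-line function yields 1, matching the intuition that only branching adds paths to test.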

With the arrival of higher-level programming paradigms such as aspect-oriented programming, generic programming and meta-programming, new complexity metrics should be defined, because metrics applied to programs implemented in paradigms different from the one they were developed for may give false results [SPP06].

12.3 Complexity Metrics

Complexity measures allow reasoning about system structure, understanding system behaviour, comparing and evaluating systems or foreseeing their evolution. System design complexity addresses the complexity associated with mapping a problem space into a given representation. An overall rating of system complexity consists of the sum of the individual module complexities, which combine the module's connections to other modules (structural complexity) and the amount of work the module performs (data complexity) [CG90].


Structural complexity addresses the concept of coupling, that is, the interdependence of modules of source code. It is assumed that the higher the coupling between modules, the more difficult it is for a programmer to comprehend a given module. Data complexity addresses the concept of cohesion, that is, the intradependence of modules. In this case, it is assumed that the higher the cohesiveness, the easier it is for a programmer to comprehend a given module. The structural and data complexity measures are based on the module's fan-in, fan-out and number of input/output variables. These metrics address system complexity at the system and module levels. Procedural complexity is associated with the complexity of the logical structure of a program, assuming that the length of a program in lines of code (LOC) or the number of logical constructs such as sequences, decisions or loops determines the complexity of the program.
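The structural/data distinction can be sketched numerically. The following Python fragment is a hedged illustration in the spirit of the Card–Glass system complexity metric [CG90] (squared fan-out for structure, I/O variables damped by fan-out for data); the module figures are hypothetical, and the exact formulation in [CG90] differs in detail:

```python
def system_complexity(modules):
    """modules: list of (fan_out, io_variables) tuples, one per module.

    Structural complexity: mean of squared fan-outs.
    Data complexity: mean of io / (fan_out + 1) per module.
    System complexity: their sum.
    """
    n = len(modules)
    structural = sum(fan_out ** 2 for fan_out, _ in modules) / n
    data = sum(io / (fan_out + 1) for fan_out, io in modules) / n
    return structural + data

# Three hypothetical modules: (fan_out, number of I/O variables).
print(system_complexity([(2, 6), (1, 4), (0, 3)]))  # 4.0
```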

Rauterberg [Rau96] addresses a similar problem, that is, how to measure cognitive complexity in human-computer interaction. He proposes to derive cognitive complexity (CoC) from behaviour complexity (BC), system complexity (SC) and task complexity (TC) as: CoC = SC + TC − BC.

Sheetz et al. [STM91] address the complexity of an OO system at the application, object, method and variable levels and, at each level, propose measures to account for the cohesion and coupling aspects of the system. Complexity is presented as a function of the measurable characteristics of the OO system, such as fan-in, fan-out, number of I/O variables, fan-up, fan-down and polymorphism.

Cyclomatic complexity is one of the more widely accepted static software metrics [SEI06]. It is intended to be independent of the language and language format. Other metrics bring out other facets of complexity, including both structural and computational complexity: Halstead complexity measures [Hal77] identify algorithmic complexity, measured by counting operators and operands; Henry and Kafura metrics [HK81] indicate coupling between modules (parameters, global variables, calls); Bowles metrics [SEI06] evaluate module and system complexity and coupling via parameters and global variables; Troy and Zweben metrics [TZ81] evaluate modularity or coupling and the complexity of structure (maximum depth of a structure chart). Wang's cognitive complexity measure [Wan09] indicates the cognitive and psychological complexity of software as a human intelligence artefact. New complexity metrics have also been proposed for aspect-oriented programming (AspectJ) [PSP06] and generic programming (C++ STL) [PPP07].
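For instance, the Halstead measures mentioned above derive everything from four counts: distinct operators n1, distinct operands n2 and their total occurrences N1, N2. A small sketch with hypothetical counts (our illustration):

```python
import math

def halstead(n1, n2, N1, N2):
    """Halstead measures from operator/operand counts [Hal77]."""
    vocabulary = n1 + n2                      # n = n1 + n2
    length = N1 + N2                          # N = N1 + N2
    volume = length * math.log2(vocabulary)   # V = N * log2(n)
    difficulty = (n1 / 2) * (N2 / n2)         # D = (n1/2) * (N2/n2)
    effort = difficulty * volume              # E = D * V
    return volume, difficulty, effort

volume, difficulty, effort = halstead(n1=4, n2=5, N1=10, N2=12)
print(round(volume, 1), round(difficulty, 1))  # 69.7 4.8
```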

There have been efforts to describe formal properties of complexity metrics that could be used for the evaluation and theoretical validation of complexity measures. Weyuker [Wey88] introduces a set of syntactic software complexity properties as criteria and examines the strengths and weaknesses of the known complexity measures, which include statement count, cyclomatic number, effort measure and data flow complexity. Briand et al. [BMB96] provide a theoretical framework relating structural complexity, cognitive complexity and external quality attributes.


12.4 Complexity Measures of Feature Models as Meta-Programs Specifications

As system designs evolve under the pressure and demands for better quality, higher functionality and shorter time to market, the growth of complexity has a direct impact on design methods, approaches and paradigms. Complexity is an intrinsic attribute of systems and of the processes through which systems are created. One way to manage design complexity is to enhance reuse in the context of PLE, where requirements may evolve. What happens when we need to extend the scope of requirements beyond one system/component, or beyond a family of related systems/components, if there is some prediction of their possible usage in a wider context? It is easy to predict intuitively: the models we need to deal with become more and more complex. But to which limits can we let the complexity of the models grow, in terms of requirements prediction and implementation difficulties, and how do we manage this complexity at a higher abstraction level? The task is to understand the complexity issues and to learn to measure complexity quantitatively.

There are two different views on complexity [LG06]: complexity as 'difficulty to test' (i.e. the number of test cases needed to achieve full path coverage) and complexity as 'difficulty to understand a model'. The latter is also known as the cognitive complexity of a model. Cardoso et al. [CMNC06] also identify different types of complexity: computational complexity, psychological (cognitive) complexity and representational complexity. Cognitive complexity focuses on the analysis of how complicated a problem is from the perspective of the person trying to solve it. Cognitive complexity is related to short-term memory limitations, which vary depending on the individual and on what kind of information is being retained [Kin98]. For software designers, the ability to cope with the complexity of a domain model is a fundamental issue, which influences the quality of the final product. High cognitive complexity of a model leads to a higher risk of making design errors and may lead to lower than required quality of the developed product, such as decreased maintainability. We claim that the properties (such as structural complexity and size) of a feature model represented using Feature Diagrams (FDs) have an impact on its cognitive complexity.

In this context, it is useful to have a boundary for cognitive complexity. We rely on Miller's early work [Mil56] stating that human beings can hold 7 (±2) chunks of information in their short-term memory at one time. We also use Keating's rule, which is based on Miller's work as applied to the design domain: 'the number of modules at any level of hierarchy must be 7 ± 2' [Kea00]. Our empirical rule (Rule 1) for the boundary of cognitive complexity as applied to the feature model is as follows:

Rule 1. The number of variation points in an FD must be 7 ± 2 if a designer wants to avoid the consequences of high cognitive complexity. If the number of variation points is fewer than 5, the value of the model may be diminished due to the decreasing granularity level and too much information hiding. If the number of variation points is more than 9, the user needs to decompose the model into parts or levels so that each part remains within the limits of cognitive complexity.

Rule 2. The cognitive complexity of an FD is calculated as the maximal number of levels in a feature hierarchy or the maximal number of features in each level of a feature hierarchy.
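The two rules can be prototyped over a feature diagram encoded as a nested dictionary. The sketch below is our interpretation (in particular, it reads Rule 2 as taking the larger of the hierarchy depth and the widest level); the example tree is hypothetical:

```python
def features_per_level(fd, depth=0, acc=None):
    """Count features at each depth of a feature diagram (nested dict)."""
    acc = {} if acc is None else acc
    for _name, children in fd.items():
        acc[depth] = acc.get(depth, 0) + 1
        features_per_level(children, depth + 1, acc)
    return acc

def cognitive_complexity(fd):
    """Rule 2: the larger of the number of levels and the widest level."""
    per_level = features_per_level(fd)
    return max(max(per_level) + 1, max(per_level.values()))

def rule1_ok(num_variation_points):
    """Rule 1: the number of variation points should stay within 7 +/- 2."""
    return 5 <= num_variation_points <= 9

fd = {"Reservation": {"Type": {"Room": {}, "Car": {}, "Boat": {}},
                      "Dates": {"Start": {}, "End": {}}}}
print(cognitive_complexity(fd))  # 5: three levels, but 5 features on one level
```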

Rule 3 describes the structural (representational) complexity of a feature model. Rule 3 has some correlation with the cyclomatic number, a well-known measure for evaluating the complexity of a program [SEI06]. Each path in a program graph corresponds to an adequate sub-tree in the feature diagram, since the realization of the sub-tree can be seen as a program (path) with the syntax rules for correct implementation of a particular product instance.

Rule 3. The structural complexity of a feature model with variation points is evaluated by the number of sub-trees in which each variation point has only one selected variant. Each sub-tree is derived from the initial feature diagram as a generic model for a given domain.

Based on empirical research and practical implementations [FMM94], cyclomatic complexity has the following boundaries: from 1 to 10, the program is simple; from 11 to 20, it is slightly complex; from 21 to 50, it is complex; and above 50, it is over-complex (untestable).

Rule 4 states how the cognitive complexity and the structural complexity should be combined. It is based on the empirical Metcalfe's law and Keating's adaptation of the law for the complexity evaluation of a design partitioning [Kea00]. Metcalfe's law states that the 'power' of a network is equal to the square of the number of nodes on it, and the 'value' of the network is equal to the square of the number of branches on the network. Keating's measure is

C = M² + I². (12.1)

Here, C is the design complexity, M is the number of modules in a design and I is the number of interfaces among the modules.

As design complexity can be presented as a graph in which vertices represent modules and edges represent interfaces (Keating's model), we can apply this complexity measure to feature models. What is different in our case is that a feature diagram has different properties: its vertices and edges play different roles and should have different cognitive weights.

We define the cognitive weight of a feature as the degree of difficulty, or the relative time and effort, required to comprehend it; the total cognitive weight of a feature model represented as a feature diagram is then the sum of the cognitive weights of its graph elements. Following Shao and Wang [SW03], we define the weight of a sequential structure as 1, the weight of branching (if-else) as 2 and the weight of case selection as 3, and introduce these cognitive weights into Eq. 12.2 (see Table 12.1).


Table 12.1 Cognitive weights of FD elements

Feature Diagram element                                           Cognitive weight
Node (feature): <Concept>, <Context>                              1
Mandatory feature relationship (and-relationship)                 1
Optional feature relationship (or-relationship)                   2
Alternative feature relationship (case relationship)              3
Groupings of relationships (cardinality, e.g. [1..*])             3
Relationships among terminal nodes and constraint
relationships (<requires>, <excludes>), e.g. K xor F,
K requires F                                                      3

Rule 4. The compound complexity measure of a feature diagram (FD) is estimated by Eq. (12.2):

C_m = F² + (R_and² + 2R_or² + 3R_case² + 3R_gr² + 3R²)/9. (12.2)

Here, C_m is the compound complexity measure, F is the number of features (variation points and variants), R_and is the number of mandatory relationships, R_or is the number of optional relationships, R_case is the number of alternative relationships, R_gr is the number of relationship groupings, R is the number of relationships among terminal nodes including constraints, and the division coefficient is the sum of the cognitive weights, for equalizing the role of relationships.
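Equation (12.2) transcribes directly into code. In this sketch the feature-diagram counts passed in at the end are hypothetical, not the ones behind Fig. 12.2:

```python
def compound_complexity(F, R_and, R_or, R_case, R_gr, R):
    """Compound complexity measure of a feature diagram, Eq. (12.2)."""
    return F ** 2 + (R_and ** 2 + 2 * R_or ** 2 + 3 * R_case ** 2
                     + 3 * R_gr ** 2 + 3 * R ** 2) / 9

# Hypothetical counts: 10 features, 3 mandatory, 2 optional,
# 1 alternative relationship, no groupings, 1 constraint.
print(compound_complexity(F=10, R_and=3, R_or=2, R_case=1, R_gr=0, R=1))
```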

Example. Here we present an example of how the complexity of feature models can be calculated. Suppose we have a feature model of a reservation system (Fig. 12.1) that can be used to book different types of accommodation or vehicles (room, car, boat). Such systems are commonly used to demonstrate and validate new software development and modelling methods. The system manages information about reservations, customers and customer billing and provides functionality for making reservations, check-in and check-out. A customer may make, change or cancel reservations. When making a reservation, the customer provides his/her personal details and specifies the type of reservation (car, boat or room), the start date (date of arrival in case of a room, or beginning of usage in case of a car or boat) and the end date. In case of a room reservation, the customer can specify the room type (single, double, presidential, lux). In case of a car reservation, the customer can specify the car brand.

Fig. 12.1 Feature model of a reservation system

Fig. 12.2 Complexity of the reservation system's feature model

The results of the complexity calculation for this feature model are presented in Fig. 12.2.

12.5 Evaluation of Abstraction Levels

Abstraction is a basic property for understanding reality and managing the complexity of software systems. The abstraction level is the level of detail of a software system (model, component, program, etc.) [Dam06]. In this sense, abstraction is a primary concept in software engineering [Alb03]. The simplest interpretation of abstraction is the hiding of irrelevant details, though there are many different views on what 'irrelevant' means [LBL08]. Abstraction is a gradual increase in the level of representation of a software system, when existing detailed information is replaced with information that emphasizes certain aspects important to the developer while other aspects are hidden. Abstraction is primarily responsible for the evolution of programming languages, stimulating the adoption of higher-level mechanisms and constructs for programming. More abstract programming language mechanisms allow replacing complex and repetitive low-level operations. Better abstraction allows addressing complex problems with less code and fewer programming errors.

Though different layers of abstraction represent a qualitative leap in the level of abstraction that allows achieving higher productivity and faster development times, an interesting problem is to evaluate the level of abstraction in a software system quantitatively. The problem is not a trivial one, because the level of abstraction is related to the concepts of software complexity [Gla02] and information content [TCS04]. Indeed, a representation of a software system at a higher layer of abstraction contains less detail and usually has fewer source code lines than a corresponding representation at a lower layer of abstraction.

The problem considered in this chapter is how to evaluate quantitatively the raise of abstraction introduced by a higher-level language. We argue that it can be evaluated relatively, by comparing the information content at both layers of abstraction. Since some of the information is abstracted away at a higher layer of abstraction, we expect that the quantity of information directly represented at a higher level of abstraction should generally decrease, because much of it is hidden in the underlying tools (pre-processors, compilers, etc.) and software libraries used. However, the entire content of information required to solve a certain problem should remain the same, as stipulated by the law of conservation of information, which states that information in a closed system of natural causes remains constant or decreases [Dem99]. Therefore, the ratio between the content of information at a higher and a lower level of abstraction is a metric of abstraction.

We can estimate the increase/decrease of abstraction in software by measuring the content of information at different layers of abstraction. There are several methods to evaluate information content/complexity, such as computational complexity, Shannon entropy and topological complexity [Edm99]. We use the algorithmic information content metric known as Kolmogorov complexity [LV97].

Kolmogorov complexity is a measure of the randomness of strings based on their information content. We use a Kolmogorov complexity-based metric to estimate quantitatively the increase in the level of abstraction in meta-programs. Meta-programs are generic programs (or program generators) that encapsulate families of similar software components. We evaluate the level of abstraction in meta-programs as compared to families of domain programs by estimating and comparing the information content at the meta-level and the domain level of abstraction using a common compression algorithm.
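As a rough illustration of this scheme (our sketch, not the book's tooling), information content can be approximated by compressed size. Python's standard bz2 module uses the Burrows-Wheeler-based bzip2 format, echoing the BWT compressor used later in the chapter; the gate-like strings below are invented stand-ins for generated instances and their generic template:

```python
import bz2

def compressed_size(text: str) -> int:
    """Upper-bound estimate of information content via bz2 compression."""
    return len(bz2.compress(text.encode("utf-8")))

def abstraction_increase(family_members, meta_program):
    """Eq. (12.7): compressed family size over compressed meta-program size."""
    family = "".join(family_members)  # union of all family members
    return compressed_size(family) / compressed_size(meta_program)

FUNCTIONS = ["AND", "OR", "XOR", "NAND", "NOR", "XNOR"]
instances = [
    f"entity gate_{f.lower()}{n} is\n"
    f"  port (x : in bit_vector(1 to {n}); y : out bit);\n"
    f"end gate_{f.lower()}{n};\n"
    for f in FUNCTIONS for n in range(2, 17)   # 6 x 15 = 90 instances
]
template = ("entity gate_@f@n is\n"
            "  port (x : in bit_vector(1 to @n); y : out bit);\n"
            "end gate_@f@n;\n")

print(abstraction_increase(instances, template) > 1.0)  # True
```

The ratio exceeds 1 because the whole family, even compressed, must still encode what distinguishes its 90 members, while the template encodes that variability once.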


The main idea of Kolmogorov complexity is to measure the 'complexity' of an object by the length of the smallest program that generates it. In the general case, we have a domain object x and a description system (e.g. a programming language) φ that maps a description w (i.e. a program) to this object. The Kolmogorov complexity K(x) of an object x in the description system φ is the length of the shortest program in the description system φ capable of producing x on a universal computer such as a Turing machine:

K_φ(x) = min_w {‖w‖ : φ(w) = x}. (12.3)

Kolmogorov complexity is the ultimate lower bound among all measures of information content. Unfortunately, it cannot be computed in the general case [LV97]. Usually, universal compression algorithms are used to give an upper bound to Kolmogorov complexity. Suppose that we have a compression algorithm C_i. Then, the shortest compression of w in the description system φ will give an upper bound to the information content in x:

K_φ(x) ≤ min_i {C_i(w)}. (12.4)

As abstraction hides complexity, the abstraction of an object x in a description system φ can be defined as the inverse of the complexity of x estimated in terms of Kolmogorov complexity:

A_φ(x) = 1 / K_φ(x). (12.5)

The increase of the abstraction level between a program w that is a representation of x in a description system φ and a program that is a representation of x in a description system ψ at a higher level of abstraction can be defined as follows:

A(ψ | φ) = K_φ(x) / K_ψ(x). (12.6)

Bearing in mind Eq. 12.4 and that a meta-program MP is a concise representation of a component family Π, which is the union of all its members P_j, we estimate the increase of abstraction level A in a meta-program as compared to a domain program as follows:

A(MP_Π | P_Π) = min_i C_i(∪_j P_j^Π) / min_i C_i(MP_Π). (12.7)

Here, C_i is a compression algorithm.


Fig. 12.3 Generic gate described using Open PROMOL

The content of information in component families can be estimated using the compression-based information content metric. We use the BWT (Burrows-Wheeler transform) compression algorithm, because it currently achieves the best compression results for text-based information [Man99] and thus better approximates information content. The lowest size of the compressed components puts an upper limit on the estimated information quantity in the analysed component family.

Example. We develop a meta-program which describes a component family at a higher level of abstraction. The identified generic parameters and their values for the gate component family are as follows:

Gate function = {AND, OR, XOR, NAND, NOR, XNOR}
Gate inputs = {integer numbers from 2 to 16}

A meta-program (see Fig. 12.3) was developed using Open PROMOL as a meta-language. This meta-program describes a generic gate and covers a family of 90 different component instances that can be generated from it.

Then, we evaluate the content of information at the higher level (meta-level) of abstraction. We again compress the meta-program using a selected compression algorithm, which in our case is BWT. The lowest size of the compressed meta-program puts an upper limit on the estimated information content at the meta-level.

The increase of abstraction between the meta-level and the domain level is then the ratio of the estimated information content at the domain level to that at the meta-level, as stipulated in Eq. 12.7. The size of the meta-program given in Fig. 12.3 is 291 B. The size of the meta-program compressed using the BWT algorithm is 245 B, which is the estimated quantity of information at the meta-level.

Next, we generate all instances of this meta-program for all possible values of the generic parameters f and num. We obtain 90 different component instances (2 of them are given in Fig. 12.4a, b). The total size of these instances is 21,426 B when uncompressed and 726 B after compression.



Fig. 12.4 Instances of VHDL gate family: (a) 2-input AND gate and (b) 3-input OR gate

Next, we apply Eq. 12.8 to obtain the estimated abstraction increase for the gate component family:

A = \frac{K_{domain}(gate)}{K_{meta}(gate)} = \frac{726}{245} = 2.96. (12.8)

Thus, we estimate that the introduction of meta-programming for describing generic gate components, using VHDL as a domain language and Open PROMOL as a meta-language, increased abstraction by about 3 times.

We also have performed the experiments with the following VHDL component families and meta-programs: gate, RSA coding processor, serial multiplier, register, shift register, multiplexer and majority function for voting in fault-tolerant systems. In addition, we have performed the experiments with DSP algorithms implemented as embedded software in C, as follows: DCT, FFT, Romberg integration, Chebyshev approximation and Taylor series expansion of popular mathematical functions. The results are summarized in Table 12.2.

The statistical evaluation of the obtained results for abstraction increase (mean = 2.9; std. deviation = 0.992; std. error = 0.286) was performed using a one-sample Student's t-test. The mean lies within the 95% confidence interval.
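The reported 95% interval can be recomputed from the quoted summary statistics. The sample size is not stated; here it is inferred from std. deviation / std. error ≈ sqrt(12), so n = 12 (df = 11, two-sided t critical value 2.201) is an assumption, not the authors' stated procedure.

```python
# Reported summary statistics for the abstraction-increase results.
mean, std_error = 2.9, 0.286

# Assumed: n = 12 experiments (inferred from 0.992 / 0.286 ~= sqrt(12)),
# so df = 11 and the two-sided 95% Student's t critical value is 2.201.
t_crit = 2.201

lo, hi = mean - t_crit * std_error, mean + t_crit * std_error
print(f"95% CI: ({lo:.2f}, {hi:.2f})")
```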

The quantity of information has decreased by 2.9 times on average in meta-programs as compared with domain program families. This number varies depending upon the type and size of components, the number of component instances in a component family, the number of generic parameters in a meta-program, the similarity of components within a component family and the syntactic characteristics of the domain language and meta-language. In general, we can estimate that the level of abstraction in Open PROMOL meta-programs is about 3 times higher than the level of abstraction in domain programs.



Table 12.2 Summary of experiments

Component family      No. of     No. of meta-  Est. info. quantity  Est. info. quantity  Abstraction
                      instances  parameters    at domain level, B   at meta-level, B     increase
RSA                   32         2             72,701               27,295               2.66
Serial multiplier     10         1             3,497                1,827                1.91
Register              1,024      6             1,803                827                  2.18
Shift register        72         5             1,178                786                  1.50
Mux                   54         4             3,107                1,625                1.91
Majority              192        2             1,940                529                  3.67
DCT                   14         2             655                  469                  1.40
FFT                   6          1             2,112                394                  5.36
Romberg integration   5          1             1,713                313                  5.47
Taylor series         240        3             2,017                679                  2.97
Chebyshev approx.     8          1             912                  331                  2.76

12.6 Complexity of Meta-Programs and Meta-Programming Techniques

Meta-programming, as a paradigm for developing programs that create other programs, is a level of complexity above traditional programming paradigms. There are two types of meta-programming: homogeneous and heterogeneous meta-programming (see Chap. 4).

In the case of homogeneous meta-programming, we have two subsets of a domain language: one is dedicated to expressing domain functionality, and the other is used for managing variability at the meta-level (generic parameters, templates, etc.). The developer has to know only one programming language syntax, the meta-program is as readable as a domain program written in the same domain programming language, and the development flow uses the same development toolset. Therefore, the complexity of developing meta-programs using the homogeneous meta-programming technique is only slightly higher than the complexity of traditional programming.

In the case of heterogeneous meta-programming, we have two different languages: a domain language itself and a meta-language, which manipulates the source code of domain language programs. As a result, the cognitive complexity of heterogeneous meta-programs, expressed in terms of their readability and understandability, is significantly higher, because the developer must know, understand and use the syntactical constructs of two different languages in the same meta-specification. The development flow is significantly more complex: not only do two development environments have to be used, but the testing of meta-programs is also a significant and time-consuming problem. Therefore, the complexity of developing meta-programs using heterogeneous meta-programming techniques is considerably higher than the complexity of traditional programming.
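The heterogeneous setting is easy to illustrate with a toy generator in a general-purpose language: here Python plays the role of the meta-language and manipulates VHDL source text as data. The sketch is modelled loosely on the chapter's gate family; the entity structure and signal names are illustrative assumptions, not the book's Open PROMOL meta-program.

```python
# A toy heterogeneous meta-program: Python acts as the meta-language and
# manipulates VHDL (domain language) source code as plain text.
def gate_instance(func: str, num_inputs: int) -> str:
    inputs = [f"x{i}" for i in range(num_inputs)]
    ports = ", ".join(inputs)
    expr = f" {func.lower()} ".join(inputs)
    return (f"entity gate is\n"
            f"  port ({ports}: in bit; y: out bit);\n"
            f"end gate;\n"
            f"architecture behav of gate is\n"
            f"begin\n"
            f"  y <= {expr};\n"
            f"end behav;\n")

print(gate_instance("AND", 2))
```

Note that the generator never parses VHDL; from the meta-language's point of view, the domain program is just text, which is exactly why testing all generated instances becomes the bottleneck described above.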



Complexity measures may be helpful for reasoning about meta-program structure, understanding the relationships between different parts of meta-programs, and comparing and evaluating meta-programs. Here, we distinguish between:

1. First-order properties, or characteristics, which are derived directly from the meta-program description itself using simple mathematical actions such as counting, for example, program size (count of symbols in a file)

2. Second-order properties, or metrics, which cannot be derived directly from artefacts but are calculated from first-order properties

Meta-program complexity can be evaluated at several dimensions:

1. Information: Meta-program as a message (sequence of symbols) containing information with unknown syntax and structure.

2. Meta-language: Meta-program as annotated domain knowledge. Domain knowledge is expressed using a domain language, whereas domain variability is specified using a meta-language. Such separation of domain and meta-levels is a first step towards the creation of a meta-program.

3. Graph: Meta-program as a graph of execution paths, where the root is a meta-program, the nodes are the meta-language constructs, and the leaves are the domain program instances.

4. Algorithm: Meta-program as a high-level program specification (algorithm), which contains a collection of functional (structural) operations. An operation may have one or more operands specified as meta-program attributes (parameters).

5. Cognition: Meta-program as a number of different information units available for human cognition. A unit may represent a meta-language construct (macro, template, function, etc.), its argument or a meta-parameter.

12.7 Complexity Metrics of Heterogeneous Meta-Programs

We use the following metrics for evaluating complexity at different dimensions of a meta-program: relative Kolmogorov complexity (RKC), meta-language richness (MR), cyclomatic complexity (CC), normalized difficulty (ND), and cognitive difficulty (CD).

12.7.1 Information Dimension: Relative Kolmogorov Complexity

There are several methods to evaluate informational software complexity, such as Shannon entropy, computational complexity, network complexity and topological



complexity. We use the algorithmic complexity metric, also known as Kolmogorov complexity [LV97] (see Eq. 12.3). Kolmogorov complexity has been used earlier (under the name of generative software complexity) to measure the effectiveness of applying program generation techniques to software [Hee03]. Program generators were defined as compressed programs, and the shortest generator is assumed to have maximal generative complexity. Here, we evaluate the complexity of a meta-program M using the relative Kolmogorov complexity (RKC) metric, which is calculated using a compression algorithm C as follows:

RKC = \frac{\|C(M)\|}{\|M\|}. (12.9)

Here, ‖M‖ is the size of a meta-program M, and ‖C(M)‖ is the size of the compressed meta-program M.

A high value of RKC means that there is a high variability of text content, that is, high complexity. A low value of RKC means high redundancy, that is, the abundance of repeating fragments in meta-program code.
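A minimal sketch of Eq. 12.9, assuming bz2 as the compression algorithm C (its algorithm is built on the Burrows-Wheeler transform, the compressor used in this chapter's experiments); both sample strings are synthetic.

```python
import bz2
import hashlib

def rkc(meta_program: str) -> float:
    """Relative Kolmogorov complexity (Eq. 12.9): ||C(M)|| / ||M||."""
    data = meta_program.encode()
    return len(bz2.compress(data)) / len(data)

redundant = "y <= a and b;\n" * 200   # many repeating fragments -> low RKC
varied = "".join(hashlib.sha256(str(i).encode()).hexdigest()
                 for i in range(50))  # little repetition -> high RKC
print(rkc(redundant), rkc(varied))
```

For very small inputs the compressor's fixed header dominates, so RKC values are only meaningful for texts well above a few hundred bytes.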

12.7.2 Meta-language Dimension: Meta-language Richness

A meta-program M can be defined as a collection of domain language statements with corresponding annotations (metadata), expressed symbolically as M = ⟨(s, m) | s, m ∈ Σ*⟩, where s is a domain language statement, m is the metadata of s and Σ* is a string of symbols from the alphabet Σ.

For the evaluation of meta-program complexity at the meta-language dimension, we use the meta-language richness (MR) metric:

MR = \frac{\sum_{m \in M} \|m\|}{\|M\|}. (12.10)

Here, ‖M‖ is the size (length) of a meta-program M, and ‖m‖ is the size (length) of the meta-language constructs in a meta-program M.

A higher value of MR means that a meta-program contains more metadata and its description is more complex.
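Eq. 12.10 can be sketched once the meta-language constructs can be recognized syntactically. The regex below hypothetically treats every @-function with bracketed arguments as a meta-language construct; this is a simplification, as real Open PROMOL syntax is richer.

```python
import re

# Hypothetical recognizer for Open PROMOL-style meta-language constructs:
# an @-function with bracketed arguments (a simplification).
META_CONSTRUCT = re.compile(r"@\w+\[[^\]]*\]")

def meta_language_richness(meta_program: str) -> float:
    """Eq. 12.10: total size of meta-language constructs over total size."""
    meta_size = sum(len(m) for m in META_CONSTRUCT.findall(meta_program))
    return meta_size / len(meta_program)

mp = "entity gate is port(x: in bit_vector(@sub[num] downto 0)); @gen[body]"
print(f"MR = {meta_language_richness(mp):.2f}")
```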

12.7.3 Graph Dimension: Cyclomatic Complexity

Cyclomatic complexity (CC) [Cab76] of a program directly measures the number of linearly independent paths through a program's source code from the entrance to each exit. For meta-programs, CC is equal to the number of distinct domain program instances that can be generated from a meta-program.



A meta-program M can be defined as a function Φ(M): P → I that maps from a set of its parameters P to a set of its domain program instances I. Following this definition, the CC of a meta-program is equal to the cardinality of the set of distinct domain program instances described by a meta-program:

CC = |cod Φ| = |I|. (12.11)

Since Φ is an injective function, which associates distinct meta-program parameter values with distinct domain program instances, the cyclomatic complexity of a meta-program M can be computed using only the interface description of a meta-program. For independent parameters, the value of CC can be calculated as the product of the number of allowed parameter values for each parameter of a meta-program:

CC = \prod_{p \in P} |dom\, p|. (12.12)

A higher value of CC indicates a higher complexity of the meta-program's parameter set (meta-interface).
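For independent parameters, Eq. 12.12 reduces to one multiplication over the sizes of the parameter domains, illustrated here with the gate family of Sect. 12.5 (6 functions, input counts 2 to 16):

```python
from math import prod

def cyclomatic_complexity(domain_sizes: dict[str, int]) -> int:
    """Eq. 12.12: CC = product of |dom p| over independent meta-parameters."""
    return prod(domain_sizes.values())

# Gate family of Sect. 12.5: 6 gate functions x 15 input counts (2..16).
print(cyclomatic_complexity({"f": 6, "num": 15}))  # -> 90
```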

12.7.4 Algorithmic Complexity: Normalized Difficulty

A functional program specification S is a sequence of functions S = (f | f ∈ F), where f: (a, a ∈ A) → A is a specific function (operator) that may have a sequence of operands as its arguments, and A is a set of function operands. For meta-programs, we accept that operations are specified as meta-language functions, and operands are specified as meta-program parameters. For the evaluation of meta-program complexity at the algorithm dimension, we use the Halstead complexity metrics [Hal77]. From a meta-program, we derive the number of distinct operators n_1 = |F|, the number of distinct operands n_2 = |A|, the total number of operators N_1 = |S| and the total number of operands N_2 = \sum_{f \in S} |A_f|.

Halstead difficulty D indicates the cognitive difficulty of a program:

D = \left(\frac{n_1}{2}\right)\left(\frac{N_2}{n_2}\right). (12.13)

Halstead volume V measures the size of a program specification:

V = N \log_2 n, (12.14)

where N = N_1 + N_2 is the program length and n = n_1 + n_2 is the vocabulary size.

For evaluating meta-program complexity at the algorithm dimension, we propose the normalized difficulty (ND) metric, which is a normalized ratio of the cognitive difficulty and size metrics:



Table 12.3 Summary of meta-program complexity metrics

Metric                          Objects of measurement                                                Meaning for a meta-program
Relative Kolmogorov complexity  Object: meta-program; program: compressed meta-program                High variability of content
Meta-language richness          Data: domain language constructs; metadata: meta-language constructs  Complexity of description at meta-level
Cyclomatic complexity           Independent paths: number of distinct instances                       Complexity of a meta-interface
Normalized difficulty           Operators: meta-language functions; operands: meta-program parameters Algorithmic complexity of a meta-program
Cognitive difficulty            Meta-program parameters, meta-language functions, meta-language       Cognitive understandability of a meta-program
                                function arguments

ND = \frac{n_1 N_2}{(N_1 + N_2)(n_1 + n_2)}. (12.15)

The ND metric measures the complexity of a meta-program as an algorithm. A high ND value means that the meta-program is highly complex in terms of the time and effort required to understand it.
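Given the four Halstead counts, Eq. 12.15 is a one-line computation; the example counts below are the ones quoted for the gate meta-program in Sect. 12.10.1.

```python
def normalized_difficulty(n1: int, n2: int, N1: int, N2: int) -> float:
    """Eq. 12.15: ND = n1*N2 / ((N1 + N2) * (n1 + n2)).
    n1/n2: distinct operators/operands; N1/N2: total operators/operands."""
    return (n1 * N2) / ((N1 + N2) * (n1 + n2))

# Gate meta-program counts from Sect. 12.10.1: 2 distinct of 3 total
# meta-language functions, 3 distinct of 4 total arguments.
print(round(normalized_difficulty(2, 3, 3, 4), 2))  # -> 0.23
```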

12.7.5 Cognitive Complexity: Cognitive Difficulty

Following Miller [Mil56], who states that humans can hold 7 (±2) chunks of information in their short-term memory at one time, and Keating [Kea00], who claims that the number of modules at any level of software hierarchy must be 7 ± 2, we propose the cognitive difficulty (CD) metric for evaluating the complexity of meta-programs. Cognitive difficulty is calculated as the maximal number of meta-level units (meta-parameters P, meta-language constructs N_1 or their respective arguments N_2) in a meta-program:

CD = \max(P, N_1, N_2). (12.16)

The meta-program complexity metrics are summarized in Table 12.3.

12.8 Complexity of Homogeneous Meta-Programming

Inspired by the 'interface size' metric defined by [BVT03] for object-oriented programs, we define complexity metrics of the homogeneous meta-programs based on the complexity of their meta-interfaces as follows. The complexity of a



Table 12.4 Weights of meta-parameters

Type                                           Weight  Examples
None                                           0       void
Generic value                                  1       int, bool, String
Generic type                                   2       class T, typename V
Generic type parameterized with generic value  3       class T<5>
Generic type parameterized with generic type   4       class T<class U>

homogeneous meta-program is the sum of the complexities of its constituent parts, that is, meta-types and meta-functions:

C_{MP} = \sum_{MT} C_{MT} + \sum_{MF} C_{MF}. (12.17)

The complexity of the meta-type C_MT is defined as the number of meta-parameters of a meta-type plus the sum of the weights of the meta-parameters:

C_{MT} = |P_{MT}| + \sum_{p_i \in P_{MT}} \omega(p_i). (12.18)

Here, P_MT is the set of the meta-parameters of a meta-type, and ω(p_i) is the weight of the meta-parameter p_i.

The complexity of the meta-function C_MF is defined as the sum of the complexities of the types of its arguments plus the complexity of the type of the return value:

C_{MF} = \sum_{a_i \in A_{MF}} C_{MT}(a_i) + C_{MT}(r_{MF}). (12.19)

Here, A_MF is the set of the arguments of a meta-function, and r_MF is the type of the return value of a meta-function.

The weights of the meta-parameters used for the calculation of complexities are presented in Table 12.4.

Example of complexity calculation for a meta-function:

// Java. Determine if an object is in an array.
static <T, V extends T> boolean isIn(T x, V[] y) {
    for(int i = 0; i < y.length; i++)
        if(x.equals(y[i])) return true;
    return false;
}
Complexity = 1 + (2 + 2) = 5



Example of complexity calculation for a template class:

// C++. Generic Vector type
template<class T, int size>
class Vector {
private:
    T values[size];
};
Complexity = 2 + 1 = 3
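The weight table and the summation over a meta-interface can be sketched as follows; classifying each meta-parameter into a Table 12.4 category is done by hand here, whereas a real tool would parse the declaration.

```python
# Meta-parameter weights from Table 12.4.
WEIGHTS = {
    "none": 0,                   # void
    "generic_value": 1,          # int, bool, String
    "generic_type": 2,           # class T, typename V
    "type_with_value_param": 3,  # class T<5>
    "type_with_type_param": 4,   # class T<class U>
}

def weight_sum(param_kinds: list[str]) -> int:
    """Sum of meta-parameter weights for one meta-interface."""
    return sum(WEIGHTS[k] for k in param_kinds)

# C++ example above, Vector<class T, int size>: 2 + 1 = 3
print(weight_sum(["generic_type", "generic_value"]))  # -> 3
```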

12.9 Theoretical Validation of Complexity Metrics

Validation of software metrics is important to ensure that metrics are accepted by the scientific community and used properly. There are two metric validation methods: theoretical and empirical [Ema00].

Theoretical validation ensures that the metric is a proper numerical characterization of the software property it claims to measure. Empirical validation relates metrics with some important external attributes of software (such as the number of faults). While both types of validation are necessary, empirical validation requires much time and many contributing researchers, since many studies need to be performed to gather convincing evidence from many real-world libraries and applications that a metric is valid. Meta-program complexity research is not mature yet; therefore, while there are open meta-program libraries available for such research, there are currently not sufficient publicly available data on the external characteristics of such meta-programs, such as reliability or maintainability.

Therefore, we validate the proposed meta-program complexity metrics theoretically using Weyuker's properties [Wey88], a set of formal properties that can be used to evaluate any software metric.

Property 1 is satisfied when we can find two meta-programs of different complexity. All complexity metrics satisfy Property 1:

(\exists P)(\exists Q)(|P| \neq |Q|). (12.20)

Property 2 is satisfied when there are finitely many programs of complexity c, where c is a non-negative number. The property is not satisfied for complexity measures that are size independent.

Property 3 (Eq. 12.21) is satisfied if we can find two distinct meta-programs that have equal complexity. The property is satisfied by all proposed meta-program complexity metrics:

(\exists P)(\exists Q)(P \neq Q \;\&\; |P| = |Q|). (12.21)



Property 4 is satisfied if equivalent meta-programs of different complexity can be written. The property is not satisfied by the RKC and MR metrics:

(\exists P)(\exists Q)(P \equiv Q \;\&\; |P| \neq |Q|). (12.22)

Property 5 is satisfied if, after concatenating two meta-programs, the complexity of the merged meta-program increases beyond the individual complexities of the original meta-programs. The property is satisfied by all metrics except MR (because of averaging):

(\forall P)(\forall Q)(|P| \leq |P + Q| \;\&\; |Q| \leq |P + Q|). (12.23)

Property 6 is satisfied if the concatenation of two equally complex meta-programs with some other meta-program gives meta-programs of different complexity. The property is satisfied by all metrics (because meta-programs can have common meta-parameters but distinct meta-bodies):

(\exists P)(\exists Q)(\exists R)(|P| = |Q| \;\&\; |P; R| \neq |Q; R|). (12.24)

Property 7 is satisfied if, by permuting the order of statements in a meta-program, the complexity of the meta-program changes. The property is not satisfied by any of the meta-program complexity metrics except the RKC metric.

Property 8 is satisfied if renaming the symbols and variables of a meta-program does not change the complexity of the program. The property is satisfied by all meta-program complexity metrics except the RKC metric.

Properties 9a (Eq. 12.25) and 9b (Eq. 12.26) are satisfied when, for two (or more) concatenated meta-programs, the sum of the complexities of the original meta-programs is less than the complexity of the bigger meta-program. The property is satisfied by the RKC (because the concatenation provides more opportunities for compression), CC (because adding new meta-parameters leads to a geometric increase in the number of meta-program instances) and CD (because two meta-programs can have the same meta-parameters, meta-language constructs or their arguments) metrics. Properties 9a and 9b are not satisfied by the MR metric (because combining two meta-programs will not lead to their increased coupling). Only Property 9a is satisfied by the ND metric:

(\exists P)(\exists Q)(|P| + |Q| < |P; Q|), (12.25)

(\forall P)(\forall Q)(|P| + |Q| \leq |P; Q|). (12.26)

The results of the theoretical validation are summarized in Table 12.5. Note that Weyuker's properties were developed for procedural languages. Hence, it is possible that a proposed meta-program complexity measure does not satisfy all the properties but is still valid for the meta-programming domain, as is the case with object-oriented metrics [MA08b].



Table 12.5 Summary of meta-program complexity validation

                   Weyuker's property
Complexity metric  1  2  3  4  5  6  7  8  9
RKC                +  –  +  –  +  +  +  –  +
MR                 +  –  +  –  –  +  –  +  –
CC                 +  –  +  +  +  +  –  +  +
ND                 +  –  +  +  +  +  –  +  ±
CD                 +  –  +  +  +  +  –  +  +

12.10 Examples of Meta-Program Complexity Calculation

12.10.1 Complexity of Heterogeneous Meta-Programs

We demonstrate the complexity calculation of a heterogeneous meta-program developed for the hardware design domain. In that domain, a great number of similar domain entities exist. For example, the most widely used hardware library components are gates (see Fig. 12.4; in VHDL), which implement a particular logical function.

The hardware designer requires many different gate components implementing different functions and having a different number of inputs. All these components are very similar to each other both syntactically and semantically, and thus they constitute a component family.

Next, we develop a meta-program, which describes a gate component family. For example, the identified generic parameters and their values for the gate component family are as follows:

Gate function = {AND, OR, XOR, NAND, NOR, XNOR}
Gate inputs = {integer numbers from 2 to 8}

A gate meta-program (see Fig. 12.5) was developed using the Open PROMOL meta-language. The meta-program has two parameters and three meta-language functions, and its size is 291 B. It differs from the one given in Fig. 12.3 by one essential detail (i.e. by the number of inputs), which is important for the calculation in this context. Though the meta-body is the same in both figures, we have repeated it here for convenience of reading.

We calculate the RKC value using a BWT (Burrows-Wheeler transform) compression algorithm, because it currently achieves the best compression results for text-based information and thus better approximates information content. The size of the gate meta-program is 271 B. The size of the compressed meta-program puts the upper limit on its information content. After compression, we obtain 245 B; therefore, the RKC value of the gate meta-program is equal to 245/271 = 0.90.

We calculate the MR of the gate meta-program by calculating the size of its meta-interface and the length of its meta-language functions, which is equal to 139 B. Therefore, its MR value is equal to 139/271 = 0.51.



Fig. 12.5 Generic gate described using Open PROMOL meta-language

Table 12.6 Complexity measures of the gate meta-program

Complexity dimension  Complexity metric                     Value
Information           Relative Kolmogorov complexity (RKC)  0.90
Meta-language         Meta-language richness (MR)           0.51
Graph                 Cyclomatic complexity (CC)            42
Algorithm             Normalized difficulty (ND)            0.23
Cognitive             Cognitive difficulty (CD)             4

The cyclomatic complexity of a meta-program is the number of different program instances that can be generated from it. The metric can be calculated as the number of distinct meta-program parameter values. The parameters f and num are independent. Parameter f can have six different values, and parameter num can have seven values. The gate meta-program covers a family of 6 × 7 = 42 different component instances. Therefore, its CC value is 42.

The gate meta-program has 3 meta-language functions, 2 distinct functions (@gen, @sub), 4 meta-language function arguments and 3 distinct arguments (num, f, {@sub[f]}). Therefore, its ND is equal to 2 × 4 / ((3 + 4) × (2 + 3)) = 8/35 = 0.23. From the same values, we calculate that its CD is max(2, 3, 4) = 4.

The values of the calculated complexity metrics for the gate meta-program are summarized in Table 12.6.
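Only the arithmetic is automated in the sketch below; the byte sizes and Halstead counts are the ones quoted in this section for the gate meta-program.

```python
from math import prod

size, compressed, meta_constructs = 271, 245, 139  # sizes in bytes

rkc = compressed / size                 # Eq. 12.9
mr = meta_constructs / size             # Eq. 12.10
cc = prod([6, 7])                       # Eq. 12.12: |dom f| * |dom num|
n1, n2, N1, N2 = 2, 3, 3, 4             # Halstead counts for the gate
nd = n1 * N2 / ((N1 + N2) * (n1 + n2))  # Eq. 12.15
cd = max(2, 3, 4)                       # Eq. 12.16

print(round(rkc, 2), round(mr, 2), cc, round(nd, 2), cd)  # 0.9 0.51 42 0.23 4
```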

Based on the meta-program complexity metric values, we can make the following conclusions on the complexity of the gate meta-program. The RKC value is high; therefore, the meta-program has almost no repeating fragments, it is coded at the meta-level efficiently, and there is hardly room for any additional generalization without introducing new parameters or widening the scope of the meta-program. The MR value shows that meta-language constructs cover only about a half of the meta-program's size; therefore, its understandability and readability are good.

Frappier et al. [FMM94] introduce the following boundaries of the CC values, based on empirical research and practical implementations of large software systems: simple (1–10), slightly (moderately) complex (11–20), complex (21–50), over-complex and untestable (>50). Following these boundaries, we conclude that, due to the large parameter space of the meta-program, the exhaustive testing of its instances is complex.
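The Frappier et al. boundaries translate directly into a small classifier (the category names below loosely follow the labels used in Table 12.7):

```python
def cc_category(cc: int) -> str:
    """Frappier et al. [FMM94] boundaries for cyclomatic complexity."""
    if cc <= 10:
        return "simple"
    if cc <= 20:
        return "moderately complex"
    if cc <= 50:
        return "complex"
    return "over-complex and untestable"

print(cc_category(42))  # the gate meta-program of Table 12.6 -> "complex"
```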



Table 12.7 Complexity of Open PROMOL components of generic VHDL library

No.  Open PROMOL meta-program  RKC    MR    CC   ND     CD   Complexity
1    Serial multiplier         0.219  0.03  4    0.229  27   Simple
2    Trigger                   0.271  0.27  80   0.111  52   Over-complex
3    Gate                      0.502  0.49  181  0.026  26   Over-complex
4    Adder                     0.478  0.25  4    0.169  20   Simple
5    Register                  0.457  0.34  512  0.136  94   Over-complex
6    Multiplexer               0.507  0.36  32   0.051  22   Complex
7    Comparator                0.429  0.31  9    0.123  33   Simple
8    Shift register            0.392  0.31  18   0.091  121  Moderate
9    Subtractor                0.378  0.20  4    0.072  84   Simple
10   Parallel multiplier       0.328  0.38  96   0.092  126  Over-complex
11   Register file             0.358  0.24  36   0.084  255  Complex
12   Counter                   0.323  0.28  30   0.044  172  Complex
13   Multiplier                0.331  0.64  8    0.092  28   Simple
14   Divider                   0.527  0.38  30   0.096  48   Complex

The CD value is below the lower threshold (<5) for the short-term memorability of chunks of information as formulated by [MNA07]; therefore, the cognitive complexity of the meta-program is low.

Finally, we present complexity values calculated for Open PROMOL meta-programs created from Altera's library of OrCAD VHDL components (Table 12.7). Altera's library is a large collection of specific components, which are supposed to cover the entire circuit design domain (it contains 282 macro-functions and 73 primitives, i.e. 355 VHDL components in all). The components were generalized using the Open PROMOL meta-language to create a generic VHDL component library [Dam01].

We evaluate the results presented in Table 12.7 as follows. The most complex meta-programs are those that describe components with the largest variability in the domain, thus requiring a larger number of parameters for the selection of a specific instance and a larger number of meta-language functions to represent their variability (see the values of the CC and CD metrics). Such meta-programs are difficult to test and maintain. Their complexity can be decreased by introducing hierarchical decomposition at the meta-program level.

12.10.2 Complexity of Homogeneous Meta-Programs

As an example of complexity measurement of homogeneous meta-programs, we analyse the Boost C++ Libraries. Boost [AG04] is a collection of open-source libraries that extend the functionality of C++. To ensure efficiency and flexibility, Boost extensively uses C++ template meta-programming techniques. In C++, the template mechanism provides a rich facility for computation at compile time.



Fig. 12.6 An example of template function (fabs)

Here, we analyse the complexity of template functions in the Boost.Math library. This library contains several contributions in the domain of mathematics, such as complex number and special mathematical functions. An example of such a template function (a fragment) is presented in Fig. 12.6.

Template functions in the Boost.Math library are rather simple. They mostly have CC values of either 3, 16 or 19, meaning that each template function has a single template parameter, which can accept either 3 floating point, 16 integer or 19 floating point and integer C++ type values. Only 'common_factor_ct' has the template function 'static_lcm', whose template parameters are numbers of long type rather than types. All template functions also have the same ND value, because all template references are to the same template parameter class and have only one template parameter; therefore, the number of distinct meta-program operators and operands is equal to 1, and ND is equal to 0.25. The value of the CD metric is larger for components which have a larger number of template references. The values of the RKC and MR metrics are larger for smaller components, which have less domain language (C++ non-template) code. When evaluating the testability and maintainability of Boost.Math library components, the CD value could be interpreted using the boundaries proposed by Frappier et al. [FMM94].

12.11 Summary, Evaluation and Future Work

In this chapter:

1. We have analysed the information content in higher-level programs (meta-programs) and compared it with the information content in lower-level (domain) program families. We have proposed to estimate the abstraction level of a program as an inverse of its complexity as defined by the Kolmogorov complexity metric measured using a standard text compression algorithm. Based on the performed experiments, we estimate that meta-programming decreases the information content and thus increases the level of abstraction in the analysed domains by approximately 3 times.

2. We have proposed three measures for evaluating the complexity of feature diagrams (FDs). The measures are based on some properties of FDs, the empirical laws of Miller and Metcalf as well as on Keating's rules. The first measure evaluates the boundaries of cognitive complexity, which are expressed through



the property of 'magic seven' applied to variation points in the FD. The second measure evaluates the structural complexity expressed through the quantitatively identifiable number of adequate sub-trees in the FD. The measure correlates with the cyclomatic number that is used to evaluate program complexity. The third measure evaluates both the cognitive and structural aspects of feature model complexity.

3. We have proposed metrics for evaluating the complexity of meta-programs at several dimensions (information, meta-language, graph, algorithm, cognition) using a variety of measures adopted from information theory and the software engineering domain. Such metrics can be used to rank meta-programs based on their complexity values and to assess the testability and maintainability of meta-programs, and they can be used by reusable software library developers for evaluating the complexity of their work artefacts. Despite the lack of larger-scale empirical validation, we still expect that meta-program complexity metrics could be used to indicate poorly written or untestable meta-programs, when the metric values exceed predefined maximal or minimal boundaries.

The introduced complexity measures of feature models and meta-programs allow reasoning about the structure and behaviour of the system to be modelled at a higher abstraction level and allow comparing and evaluating system models or the complexity of their transformation into lower-level representations (e.g. into generic programs). The measures also allow reasoning about the granularity level, important reuse characteristics that are difficult to express quantitatively and the generic programs (components) to be derived from the feature model. As complexity is an inherent system property with multiple aspects, it is difficult to devise a unified measure reflecting all aspects of the model. The proposed complexity measures reflect different views on complexity and enable evaluation of the design complexity at the model level. Though the presented case study supports the theoretical assumptions, more empirical research is needed in order to better evaluate the measures and to reason about their value with a larger degree of certainty.

Quantitative evaluation of the complexity of models is a very important task for many reasons: (1) complexity in system design is continuously growing and, as a result, there is a great need to manage it; (2) designs are moving towards a higher abstraction level, and thus model-driven development is further strengthening its position; (3) complexity assessment of developed software systems in the early stages of the software lifecycle allows making cost-effective changes to the developed systems; (4) though software has many complexity measures (e.g. number of code lines, cyclomatic number, psychological complexity), the straightforward use of those measures is not always relevant at the model level; (5) how can we reason about the introduction of a new abstraction level (in order to manage complexity and, e.g., to avoid over-generalization in component design) without having quantitative measures?

The task of dealing with model complexity is hard because of the large variety of model types used to describe systems. We focus on a specific type of models described by FDs, which are very useful in the context of product line approaches and the use of generative technologies for implementing those approaches. Due to the number of factors that contribute to FD complexity, we cannot identify a single metric that measures all aspects of a feature model’s complexity. This situation is well known from the measurement of program source code complexity. A common solution is to use different measures within a metrics suite. Each individual measure can evaluate one aspect of the complexity, and together they can provide a more accurate estimation.
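The metrics-suite idea above, several one-aspect measures combined into a single estimate, can be sketched as a normalised average; the expected ranges are assumptions that a user of such a suite would have to calibrate for their own domain:

```python
def suite_score(measures: dict, ranges: dict) -> float:
    """Aggregate several complexity measures into one score in [0, 1].

    Each raw value is normalised to its expected (low, high) range,
    clamped, and the results are averaged with equal weights.
    """
    total = 0.0
    for name, value in measures.items():
        low, high = ranges[name]
        total += min(max((value - low) / (high - low), 0.0), 1.0)
    return total / len(measures)
```

Equal weighting is the simplest choice; a real suite would weight the individual measures according to their empirically established importance.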

12.12 Exercise Questions

12.1. Clarify what the complexity of a system is in general. Revisit the technological aspects of complexity considered in Chap. 1.

12.2. Why is the complexity of systems such an important attribute? Enumerate the software fields where complexity issues are in focus.

12.3. Analyze different views of the complexity problem and identify the types of software complexity.

12.4. What is complexity management? What are the commonly recognized principles for managing complexity?

12.5. What are complexity measures? How do they differ for each complexity type?

12.6. What is cognitive complexity and how does it relate to the ‘magic 7’ problem? Analyze Keating’s complexity measure more thoroughly.

12.7. Analyze cyclomatic complexity and complexity measures for object-oriented programming. Identify complexity evaluation problems with the arrival of new programming paradigms.

12.8. Clarify the coupling and cohesion of modules within a software system and how those features relate to complexity. How do they affect reusability?

12.9. What are the complexity measures to evaluate feature models?

12.10. Select some feature diagrams from previous chapters (e.g. Chap. 9 or 10) and, using the measures of Sect. 12.4, calculate their feature model complexity.

12.11. Investigate model complexity measures more thoroughly as a separate research topic: (a) for graphical models and (b) for abstract and formal models.

12.12. Compare and evaluate the measures given in Sect. 12.4 and devise new measures to evaluate the complexity of feature-based models.

12.13. What is an abstraction level in system design and how can its complexity be measured and evaluated?

12.14. Learn and explain Kolmogorov complexity as applied to measuring the increase/decrease of abstraction level.

12.15. Learn and explain meta-program complexity issues at the following dimensions: (a) information, (b) meta-language, (c) graph, (d) algorithm and (e) cognition.


12.16. Perform some experiments to evaluate the complexity metrics of heterogeneous meta-programs (using the metrics given in Sect. 12.7), if the paradigm is your research topic.

References

[AG04] Abrahams D, Gurtovoy A (2004) C++ template metaprogramming: concepts, tools, and techniques from Boost and beyond. Addison-Wesley Professional, Boston

[Alb03] Albin ST (2003) The art of software architecture: design methods and techniques. Wiley, Indianapolis

[BAK+04] Blecker T, Abdelkafi N, Kaluza B, Kreutler G (2004) A framework for understanding the interdependencies between mass customization and complexity. In: Proceedings of the 2nd international conference on business economics, management and marketing, Athens, Greece, 24–27 June 2004

[BMB96] Briand LC, Morasca S, Basili VR (1996) Property-based software engineering measurement. IEEE Trans Softw Eng 22(1):68–86

[BR00] Bennett KH, Rajlich V (2000) Software maintenance and evolution: a roadmap. In: Finkelstein AC (ed) Future of software engineering. ACM, New York, pp 73–87

[BVT03] Bandi RK, Vaishnavi VK, Turk DE (2003) Predicting maintenance performance using object-oriented design complexity metrics. IEEE Trans Softw Eng 29(1):77–87

[CA88] Card DN, Agresti WW (1988) Measuring software design complexity. J Syst Softw 8:185–197

[CG90] Card DN, Glass RL (1990) Measuring software design quality. Prentice Hall, Englewood Cliffs

[CHW98] Coplien J, Hoffman D, Weiss D (1998) Commonality and variability in software engineering. IEEE Softw 15:37–45

[CMN+06] Cardoso J, Mendling J, Neumann G, Reijers HA (2006) A discourse on complexity of process models. In: Eder J, Dustdar S (eds) Proceedings of Business Process Management BPM 2006 workshops, Vienna, Austria, 4–7 Sept 2006. LNCS, vol 4103. Springer, Berlin, pp 117–128

[CW07] Czarnecki K, Wasowski A (2007) Feature diagrams and logics: there and back again. In: 11th international software product line conference, SPLC 2007, 10–14 Sept 2007, Washington, USA, pp 23–34

[Dam01] Damasevicius R (2001) Scripting language Open PROMOL: extension, environment and application. MSc thesis, Kaunas University of Technology, Lithuania

[Dam06] Damasevicius R (2006) On the quantitative estimation of abstraction level increase in metaprograms. Comput Sci Inf Syst (ComSIS) 3(1):53–64

[Dem99] Dembski WA (1999) Intelligent design as a theory of information. InterVarsity Press, Downers Grove

[Edm99] Edmonds B (1999) Syntactic measures of complexity. Doctoral thesis, University of Manchester, Manchester

[Ema00] El Emam K (2000) A methodology for validating software product metrics. National Research Council of Canada, Ottawa, ON, Canada (NCR/ERC-1076)

[FMM94] Frappier M, Matwin S, Mili A (1994) Software metrics for predicting maintainability. Software metrics study: Technical memorandum 2. Canadian Space Agency, St-Hubert, Virginia Polytechnic, 2(3):129–143

[Gla02] Glass RL (2002) Sorting out software complexity. Commun ACM 45(11):19–21

[Hal77] Halstead MH (1977) Elements of software science. Elsevier, New York


[Hee03] Heering J (2003) Quantification of structural information: on a question raised by Brooks. ACM SIGSOFT Softw Eng Notes 28(3):6

[HK81] Henry SM, Kafura DG (1981) Software structure metrics based on information flow. IEEE Trans Softw Eng 7(5):510–518

[IEEE90] IEEE Computer Society (1990) IEEE standard glossary of software engineering terminology, IEEE Std 610.12-1990

[Kea00] Keating M (2000) Measuring design quality by measuring design complexity. In: Proceedings of the 1st international symposium on quality of electronic design (ISQED 2000), IEEE Computer Society, Washington, DC, pp 103–108

[Kin98] Kintsch W (1998) Comprehension: a paradigm for cognition. Cambridge University Press, New York

[KLD02] Kang K, Lee J, Donohoe P (2002) Feature-oriented product line engineering. IEEE Softw 19(4):58–65

[LBL08] Liu J, Basu S, Lutz R (2008) Generating variation-point obligations for compositional model checking of software product lines. Technical report 08-04, Computer Science, Iowa State University

[LG06] Laue R, Gruhn V (2006) Complexity metrics for business process models. In: Abramowicz W, Mayr HC (eds) Proceedings of the 9th international conference on business information systems (BIS 2006), LNI 85, Klagenfurt, Austria, pp 1–12

[LV97] Li M, Vitanyi P (1997) An introduction to Kolmogorov complexity and its applications. Springer, New York

[MA08a] Misra S, Akman I (2008) A model for measuring cognitive complexity of software. In: Proceedings of the 12th international conference on knowledge-based intelligent information and engineering systems (KES 2008), Zagreb, Croatia, 3–5 Sept 2008, Part II. LNCS, vol 5178. Springer, pp 879–886

[MA08b] Misra S, Akman I (2008) Applicability of Weyuker’s properties on OO metrics: some misunderstandings. Comput Sci Inf Syst (ComSIS) 5(1):17–24

[Man99] Manzini G (1999) The Burrows–Wheeler transform: theory and practice. In: Proceedings of the 24th international symposium on mathematical foundations of computer science (MFCS ’99). LNCS, vol 1672. Springer, London, pp 34–47

[Cab76] McCabe TJ (1976) A complexity measure. IEEE Trans Softw Eng SE-2(4):308–320

[Mil56] Miller G (1956) The magical number seven, plus or minus two: some limits on our capacity for processing information. Psychol Rev 63(2):81–97

[MNA07] Mendling J, Neumann G, van der Aalst WMP (2007) Understanding the occurrence of errors in process models based on metrics. In: Proceedings of the OTM conferences 2007, Part I. LNCS, vol 4803. Springer, Berlin/Heidelberg, pp 113–130

[MV95] von Mayrhauser A, Vans AM (1995) Program understanding: models and experiments. In: Yovits MC, Zelkowitz MV (eds) Advances in computers, vol 40. Academic, Troy, pp 1–38

[PPP07] Pataki N, Pocza K, Porkolab Z (2007) Towards a software metric for the generic programming paradigm. In: 16th IEEE international electrotechnical and computer science conference, 24–26 Sept 2007, Portoroz, Slovenia

[PSP06] Pataki N, Sipos A, Porkolab Z (2006) Measuring the complexity of aspect-oriented programs with a multiparadigm metric. In: Proceedings of the 10th ECOOP workshop on quantitative approaches in object-oriented software engineering (QAOOSE 2006), 3 July 2006, Nantes, France, pp 1–10

[Rau96] Rauterberg M (1996) How to measure cognitive complexity in human-computer interaction. In: Proceedings of the 13th European meeting on cybernetics and systems research, vol 2, Vienna, Austria, 9–12 April 1996, pp 815–820


[RMW+04] Reformat M, Musilek P, Wu V, Pizzi NJ (2004) Human perception of software complexity: knowledge discovery from software data. In: Proceedings of the 16th IEEE international conference on tools with artificial intelligence (ICTAI 2004), 15–17 Nov 2004, IEEE Computer Society, Washington, DC, pp 202–206

[SEI06] Software Engineering Institute (SEI) (2006) Cyclomatic complexity. In: Software technology roadmap. Online: http://www.sei.cmu.edu/str/descriptions/cyclomatic_body.html

[SPP06] Sipos A, Pataki N, Porkolab Z (2006) On multiparadigm software complexity metrics. Pure Math Appl 17(3–4):469–482

[STM91] Sheetz SD, Tegarden DP, Monarchi DE (1991) Measuring object-oriented system complexity. In: Proceedings of the first workshop on information technologies and systems, Cambridge, MA, pp 285–307

[SW03] Shao J, Wang Y (2003) A new measure of software complexity based on cognitive weights. Can J Elect Comput Eng 28(2):69–74

[TCS04] Taha W, Crosby S, Swadi K (2004) A new approach to data mining for software design. In: Proceedings of the international conference on computer science, software engineering, information technology, e-business, and applications (CSITeA’04), Cairo, Egypt

[TZ81] Troy DA, Zweben SH (1981) Measuring the quality of structured designs. J Syst Softw 2:113–120

[Vis05] Visscher B-F (2005) Exploring complexity in software systems. PhD thesis, University of Portsmouth

[Wan09] Wang Y (2009) On the cognitive complexity of software and its quantification and formal measurement. Int J Softw Sci Comput Intell 1(2):31–53

[Wey88] Weyuker EJ (1988) Evaluating software complexity measures. IEEE Trans Softw Eng 14(9):1357–1365

[Zus91] Zuse H (1991) Software complexity – measures and methods. De Gruyter, Berlin/New York