

Page 1: [IEEE 2011 International Joint Conference on Computer Science and Software Engineering (JCSSE) - Nakhon Pathom, Thailand (2011.05.11-2011.05.13)] 2011 Eighth International Joint Conference

2011 Eighth International Joint Conference on Computer Science and Software Engineering (JCSSE)

OPI Model: A Methodology for Development

Metric based on Outcome Oriented

Kaenchan Thammarak

Software Systems Engineering Laboratory

Department of Mathematics and Computer Science, Faculty

of Science, King Mongkut's Institute of Technology Ladkrabang, Bangkok, Thailand

[email protected]

Abstract- Metrics are extremely important for software development. A metric is a tool for measuring software in both quantitative and qualitative aspects, and it can also be used to determine whether goals have been achieved. Several problems make the design and development of software metrics difficult; among them, the number of software perspectives that metrics must handle and the way metric results are applied to support business outcomes are considered the most important. This paper proposes OPI, a model for implementing software metrics based on business outcomes. The model has three steps: defining outcomes (O), defining perspectives (P), and defining indicators (I). A case study has been developed to assess the use of the proposed model. Forty undergraduate students with basic knowledge of software metrics used the model to develop metrics from the case study. The preliminary evaluation is based on the usability aspect. The result shows that 85 percent of the students think that the model is easy to understand, while 8 and 7 percent say that it is moderate and hard, respectively.

Keywords: software metrics; outcome-oriented approach; software measurement; business process.

I. INTRODUCTION

Metric [9] is a standard term in software measurement. A metric is an extremely important tool for qualifying software. It is also used to understand, control, improve, and predict properties of products, processes, and resources in software development. Moreover, metrics can be used to determine the success of software development. Currently, many software metrics are available, for example size metrics [10][16][22], complexity metrics [15][16][17], cohesion and coupling metrics [13][19], and others [12][15][20][29].

Developing software metrics is considered a difficult task for the following two major reasons. Firstly, metric developers require considerable experience to define metrics that can support multiple perspectives. Novice developers tend to concentrate on some perspectives and ignore other required and important

Sarun Intakosum

Software Systems Engineering Laboratory

Department of Mathematics and Computer Science, Faculty

of Science, King Mongkut's Institute of Technology Ladkrabang, Bangkok, Thailand

[email protected]

perspectives. As a result, risk may increase if such metrics are used to evaluate software. Secondly, some results from existing metrics cannot support modern software development and business strategies based on the outcome concept [1][7][9][11][23][25].

To solve the above problems, this paper presents OPI, a new model for metric development based on the outcome concept. The rest of the paper is organized as follows. Section II reviews related work in the field of software metrics. Section III presents the OPI model and its processes. Section IV shows an example of how to use the model. Finally, conclusions and future work are presented in Section V.

II. RELATED WORKS

An outcome encourages an individual to focus on the end state of a result (e.g., increased productivity, improved quality, or a decreased defect rate) [26][27]. An outcome-oriented approach determines and evaluates the results of a process, plan, or program against the intended results, whereas a goal-oriented approach involves establishing specific, measurable, attainable, realistic, and time-targeted objectives to achieve a desired end state in some assumed course of development. Previous work has favored goal orientation over outcome orientation. In recent years, however, papers have increasingly applied outcome orientation in many areas, such as academia [27][33], business and government strategic planning [1][22][23], and science and technology [33]. In particular, an outcome-oriented approach is an important choice for the development and management of information technology. For example, OTFACT is a software-based data collection system for measuring assistive technology outcomes [31]. The open architecture of a unified metadata and service layer for making key educational resources available is presented in [27]. [9] presents the Commission on Accreditation of Rehabilitation Facilities (CARF) requirement that organizations have an active outcome-oriented program evaluation system. One proper way to support software development based on the outcome-oriented concept is metrics. A metric [10] is a standard term in software measurement that can be used to understand, control, improve, and predict software processes. However, the

978-1-4577-0687-5/11/$26.00 ©2011 IEEE 337


performance of metrics depends on a clear development framework. In recent years, the most widely used framework has been the Goal-Question-Metric (GQM) approach. In 1994, Basili et al. [28][30] published the GQM approach, which provides a method for defining metrics from a project's goals. The GQM approach has three steps: listing the major goals, deriving questions from each goal, and defining metrics based on the derived questions. However, this framework cannot link software measurement goals to higher-level goals. The GQM+Strategies approach was therefore produced to solve this problem. GQM+Strategies [36] was developed at the Fraunhofer CESE (Maryland, USA) and Fraunhofer IESE (Kaiserslautern, Germany) and is based on the GQM approach.
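The three GQM steps can be illustrated with a small data sketch. The goal, questions, and metrics below are hypothetical examples chosen for illustration, not taken from [28][30]:

```python
# A minimal, hypothetical GQM derivation: one goal is refined into
# questions, and each question is answered by concrete metrics.
gqm = {
    "goal": "Improve the reliability of the next release",
    "questions": {
        "What is the current defect rate?": ["defects per KLOC"],
        "How fast are defects repaired?": ["mean time to repair"],
    },
}

def metrics_for_goal(tree):
    """Collect every metric derived from the goal's questions."""
    return [m for metrics in tree["questions"].values() for m in metrics]

print(metrics_for_goal(gqm))  # -> ['defects per KLOC', 'mean time to repair']
```

The sketch makes the limitation noted above visible: nothing in the structure links the measurement goal to any higher-level business goal, which is the gap GQM+Strategies addresses.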

Figure 1. The GQM+Strategies Method [36]

The benefit of GQM+Strategies is its capability to link goals across various levels. Moreover, GQM+Strategies provides a mechanism and blueprint for development and management. Practical Software Measurement (PSM) [37] is another framework for defining metrics. PSM was presented by Florac et al. in 1997 and was created for evaluating and controlling the software development process. PSM concentrates on five aspects of software development: performance, stability, compliance, capability, and improvement and investment.

However, most metric development frameworks are based on goal orientation and cannot fully support outcome orientation. Therefore, this paper presents the OPI model as a novel framework for metric development with an outcome-oriented approach. The details of OPI are presented in the next section.

III. OPI MODEL

The OPI model is an outcome-oriented approach for developing software metrics. The principle of this model is to define a set of desired outcomes (O), related perspectives (P), and indicators (I) of measurement. This section presents all aspects of the OPI model in two subsections: the elements of OPI (A) and the operation framework of the OPI model (B).

A. Elements of OPI Model

This section presents the three primary elements of the OPI model: Outcome, Perspective, and Indicator, which are described in the following paragraphs.


Outcome (O) is an end result concerning an entity of interest, such as a reduced defect rate, reusable components, or customer satisfaction. Outcomes fall into three categories: process outcomes, product outcomes, and resource outcomes. An outcome may be a readiness outcome or an in-readiness outcome [24]. For example, an increase in the number of customers is a readiness outcome because it is measured directly from an attribute of the entity. In contrast, the honesty of a customer is an in-readiness outcome because it cannot be measured directly from an internal attribute of the entity. Outcomes can be extracted from the objectives and strategy of a project. In addition, an outcome should clearly define its attributes, such as feature, number, position, and time period.

Perspective (P) is an aspect of things that relate to outcomes. Perspectives can be derived from the environment of an outcome, such as stakeholders, risk, technical and technological factors, and other conditions. Kaplan and Norton [22][23] defined financial, customer, internal business processes, and learning and growth as four common perspectives. In the authors' view, these perspectives can be applied in all areas. Each perspective can be divided into sub-perspectives. Moreover, each perspective has its own result, and these results are used to produce a primary outcome. For example, a value-added outcome comes from a combination of the development cost in a financial perspective and the defect rate in a quality perspective.

Indicator (I): Fenton [10] defines an indicator as a combination of metrics that provides insight into the software process, project, or product itself, while a metric relates individual measures in some way. Most economic indicators can be expressed as numbers, indices, percentages, ratings, ratios, and rankings.
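Fenton's distinction between measures, metrics, and indicators can be made concrete with a small hypothetical example (the numbers and the banding rule below are invented for illustration, not from [10]):

```python
# Measures are raw values, a metric relates measures, and an indicator
# combines metrics into a form that gives insight. All values are
# hypothetical sample data.
defects_found = 12   # measure: count of defects
size_kloc = 4.0      # measure: size in thousands of lines of code

defect_density = defects_found / size_kloc  # metric relating two measures

def quality_indicator(density):
    """Indicator: an illustrative rating band built on the metric."""
    return "good" if density < 5 else "needs attention"

print(defect_density)                      # -> 3.0
print(quality_indicator(defect_density))   # -> good
```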

B. Operation Framework of OPI

This section presents the operation framework of the OPI model. It is driven by the three levels of the OPI model shown in Figure 2.

Figure 2. Framework of OPI

Figure 2 shows the framework of the OPI model, which has three levels: O (outcome), P (perspective), and I (indicator).

The initial phase starts at the outcome-definition level. At this level, the metric developer collects and analyzes related data to identify outcomes. Outcomes can be extracted from the objectives and strategy of a project. The documentation describing an outcome is shown in the table below.


TABLE I. OUTCOME DESCRIPTION

Outcome # <outcome number, e.g. 1, 1.1, 1.1.1>

Statement <outcome statement: feature, numbers, position, time period, etc.>

Strategy <method to produce the outcome>

Objective <project objective from which the outcome is determined>
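The Table I template can be captured as a simple record. The following sketch is illustrative only: the field names mirror Table I, and the sample values are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class OutcomeDescription:
    """Record mirroring the Table I template (sample values are hypothetical)."""
    number: str     # outcome number, e.g. "1", "1.1", "1.1.1"
    statement: str  # feature, numbers, position, time period, etc.
    strategy: str   # method to produce the outcome
    objective: str  # project objective from which the outcome is determined

o = OutcomeDescription(
    number="1.1",
    statement="Reduce the defect rate by 20% within two releases",
    strategy="Introduce peer code review",
    objective="Improve product quality",
)
print(o.number)  # -> 1.1
```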

Defining perspectives (P) is the perspective-definition level. This paper suggests the four perspectives of Kaplan and Norton [23] as initial perspectives; in addition, the metric developer can define other perspectives. The perspectives can be used to determine the attributes of metrics. One sensitive case that leads to complexity is defining too many perspectives. Therefore, the metric developer should have an effective tool for selecting primary perspectives, e.g., AHP or means-end chain analysis. At this level, six attributes must be described in the documentation, as shown in Table II.

TABLE II. PERSPECTIVE DESCRIPTION

Perspective # <perspective number, e.g. 1, 1.1, 1.1.1>

Name <perspective name>

Expected result <result that the perspective computes>

Main perspective <higher perspective>

Sub-perspective <lower perspectives>

Strategy <strategy/method to perform this perspective>

Indicator list <list of related indicators>

Assigning indicators (I) is the step at the indicator level. At this level, the developer should be concerned with the approach for identifying the relevant indicators. This paper presents four steps for assigning indicators:

1) Define the related indicators from the perspectives.

2) Describe the properties of each indicator: in this step, the properties of all defined indicators are described, as shown in the table below.

TABLE III. INDICATOR DESCRIPTION

Indicator ID <ordering number >

Indicator Name <name>

Perspective <support perspective>

Level <lead (direct), follow (indirect)>

Unit <measure unit>

Weight <weight of indicator>

Expected result <end state result of indicator >

Description <indicator description>

Metric <supporting metric list>

Data sources <source of data>

Evaluations <evaluation method indicating the success of the indicator>

3) Select primary indicators: outcome, cost, and data quality are the reasons why the developer cannot select all indicators. Therefore, selection is an important method for identifying suitable indicators. This paper presents three criteria for selecting primary indicators, as shown in Figure 3.

Figure 3. Criteria of indicator

Figure 3 shows the criteria in three dimensions, each of which has initial criteria for the decision. The first dimension is data quality: the quality of an indicator can be determined from the quality of the data collection, such as completeness, reliability, correctness, and currency. The second dimension is cost. This dimension aims to eliminate over-cost indicators that are difficult to implement. It has three criteria: the cost of data collection, the cost of selecting or creating the indicator, and the composition cost. The final dimension is the outcome. This dimension represents the relationship between outcome and indicator, and it consists of measurability, fit for the outcome's situation, impact on other outcomes, level of support (direct/indirect), and indicator composition. In addition, the effect of the indicator on other outcomes is included. This paper introduces a method for assessing the criteria with a scorecard completed by specialists. The specialists set an expected value for each dimension and then fill in a score for each criterion. Scores range from 5 down to 1. All scores are summed for each indicator, and the total score of each indicator is compared with the expected value. Only indicators that meet the expected value are selected.
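The scorecard selection described above can be sketched as follows. The criterion names, the individual scores, and the expected value of 10 are hypothetical examples; the paper specifies only the 5-to-1 scoring and the comparison against an expected value.

```python
# Sketch of the scorecard selection: specialists score each criterion
# from 1 to 5, the scores are summed per indicator, and only indicators
# whose total meets the expected value are selected.
EXPECTED_VALUE = 10  # hypothetical threshold set by the specialists

scores = {
    "number of activities": {"data quality": 5, "cost": 4, "outcome": 5},
    "honesty of customer":  {"data quality": 2, "cost": 3, "outcome": 2},
}

def select_indicators(scorecard, expected):
    """Return indicators whose summed criterion scores meet the expected value."""
    return [name for name, crits in scorecard.items()
            if sum(crits.values()) >= expected]

print(select_indicators(scores, EXPECTED_VALUE))  # -> ['number of activities']
```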

4) Match all selected indicators with suitable metrics. This paper presents a mechanism to help the metric developer match all indicators with available metrics or with newly created ones. The mechanism is called the value screening approach (VSA). VSA is adapted from the means-end chain model [2][4][8] and the IF...Cause form. The means-end chain can be used for matching alternatives with value outcomes. IF...Cause is a simple form for describing conditions; its main job is screening and matching the alternatives and the outcomes using the criteria. VSA has three elements: nodes, edges, and labels. A node has three types: alternative, criterion, and value outcome. An edge is a directed edge that shows the flow of screening and matching among alternatives, criteria, and value outcomes. A label represents the name of an alternative and the set of criteria values; each attribute is activated at each criterion node. The label is represented as l_i = {vc_1, vc_2, ..., vc_n}, where vc is the value of a criterion.
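The screening and matching that VSA performs can be sketched in code. This is a minimal illustration rather than the paper's implementation: the metric attributes, the two IF...Cause criteria (same scale, same variable count, drawn from the criteria described later in this section), and the sample data are all assumptions.

```python
# Minimal VSA-style screening sketch: each metric (alternative) flows
# through IF...Cause criteria and matches an indicator (value outcome)
# only when every criterion holds. Attributes and data are hypothetical.
metrics = [
    {"name": "LOC",         "scale": "ratio",    "variables": 1},
    {"name": "actor count", "scale": "absolute", "variables": 1},
]
indicator = {"name": "number of users", "scale": "absolute", "variables": 1}

criteria = [
    lambda m, i: m["scale"] == i["scale"],          # same scale and unit
    lambda m, i: m["variables"] == i["variables"],  # same variable count
]

def screen(alternatives, value_outcome, crits):
    """Return metrics that pass every IF...Cause criterion for the indicator."""
    return [m["name"] for m in alternatives
            if all(c(m, value_outcome) for c in crits)]

print(screen(metrics, indicator, criteria))  # -> ['actor count']
```

An indicator for which this list comes back empty corresponds to the case discussed under Figure 5, where the developer must create a new metric.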



Figure 4. Representation model of VSA

As shown in Figure 4, the VSA process consists of the following steps:

Step 1. Align outcomes: a value outcome is a desirable result. In this paper, the value outcomes are all available indicators.

Step 2. Identify alternatives: in this step, developers collect all alternatives. In this situation, the collectable metrics are the alternatives.

Step 3. Define criteria: criteria are the conditions used for mapping between alternatives and value outcomes. This paper introduces four basic criteria for screening and matching, as follows:

- Strategy is the primary criterion for matching a metric with an indicator. Strategy is the method for producing outcomes that leads to the definition of an indicator. Suitable metrics should have the same strategy. For example:

Strategy: reuse business process from use case description
Matching metrics: number of actors, number of use cases
Non-matching metrics: lines of code, number of comments

Figure 4. Strategy criteria example

The example shows a relationship between strategies and metrics. Reusing a business process from a use case is the strategy of this project; the developer needs to define the proper metrics for this strategy.

- Perspectives: a perspective is another criterion for matching metrics with indicators. First of all, the developer should consider the perspectives and sub-perspectives that the indicators support. A metric should be a member of some perspective. For example:

TABLE IV. PERSPECTIVES CRITERIA EXAMPLE.

Financial | User | Process | Infrastructure
ROI | LOC | Cost and effort | Repository performance
Cost of activated | Frequency of ... | ... | Frequency of use

- Scale and unit: this is an important criterion for matching metric and indicator. Most economic indicators can be expressed in various forms, such as numbers, indices, percentages, ratings, ratios, and rankings. These scales correspond to the five groups of metric scales: nominal scale, ordinal scale,


interval scale, ratio scale, and absolute scale. Therefore, the metric and the indicator should use the same scale.

- Variables: a variable is data used to compute the result of each indicator. For example, consider the number of process occurrences in one project per all projects: the variables in this indicator consist of the process occurrences of an individual project and of all related projects. In the same way, a suitable metric should have two variables to support this indicator.

Step 4. Follow the edge direction: in this step, the alternative's properties flow to each criterion. The edge direction is based on the result of testing the IF...Cause criteria.

Figure 5 shows the components of VSA used to design and develop the metrics. There are three parts: metric, criteria, and indicator.

Figure 5. VSA models for metric construction

Moreover, VSA can match alternatives with available metrics, and it can be used to identify new metrics. For example, in Figure 5, some indicators do not match any existing metric. In this case, the metric developer needs to create new metrics to support them. The criteria presented in this paper serve as guidance for defining the new metrics.

IV. OPI IMPLEMENTATION

This section proposes an example of how to use the model. The major purpose of the assessment in this paper is only the usability aspect of the model. This paper assumes that the OPI model can really be used to develop metrics for outcome-based software development. The example is to define size metrics that can support the business process use case [5][32]. The example starts by setting the outcome; the outcome in this case is a size metric. The next step is the perspective definition. The perspectives in this case are functional, behavioral, informational, and organizational. Each perspective can be assigned the indicators shown in Figure 6.

Figure 6. Case example to implement metric with OPI Model.

Figure 6 presents the details of the functional perspective. This perspective has four indicators: the number of activities, the number of users, the number of system activities, and the


number of sub-use-case activities. These indicators can compute four new metrics, which are shown in Figure 6.

The authors estimate the new size metrics by using them to identify the number of BP elements. The case study is a registration use case, which is part of a tendering system [6]. Certainly, it is possible to apply the proposed metrics to any common modelling method, such as BPMN, Petri nets, and EPC.
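The four functional-perspective indicators can be computed as simple counts over a structured use-case description. The field layout and sample data below are assumptions made for illustration; the paper's actual metric definitions appear in Figure 6.

```python
# Hypothetical sketch: computing the functional-perspective size counts
# from a structured use-case description (registration-style example).
use_case = {
    "actors": ["supplier", "system"],
    "steps": [
        {"actor": "supplier", "action": "submit registration form"},
        {"actor": "system",   "action": "validate form"},
        {"actor": "system",   "action": "store supplier record"},
    ],
    "sub_use_cases": ["verify credentials"],
}

def size_counts(uc):
    """Counts corresponding to the four functional-perspective indicators."""
    return {
        "activities": len(uc["steps"]),
        "users": len([a for a in uc["actors"] if a != "system"]),
        "system_activities": len([s for s in uc["steps"] if s["actor"] == "system"]),
        "sub_use_case_activities": len(uc["sub_use_cases"]),
    }

print(size_counts(use_case))
```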

Moreover, in order to evaluate the model, forty undergraduate students with fundamental knowledge of software metrics were asked to develop size metrics from the defined case study and fill in a questionnaire. The questionnaire has three parts:

- The first part gathers the users' skills and knowledge background. The result shows that the users know various metrics, e.g., size, complexity, FP, and OO metrics. In addition, all students had defined metrics with the GQM method before.

- The second part asks about the users' satisfaction with the OPI model. The result shows that 85 percent think that the model is easy, while 8 and 7 percent conclude that it is moderate and hard, respectively.

- The third part contains open-ended questions soliciting other opinions from the users. Many respondents said the model was easier than the other methods. However, some opinions suggested that the model should be improved with respect to supporting tools such as VSA and the templates.

V. CONCLUSION AND FUTURE WORKS

Metrics are extremely important tools for measuring software, and they can be used to determine the success of software development. However, developing software metrics is itself difficult for the following two major reasons. First of all, the development of software metrics depends on the developer's experience to properly handle multiple perspectives. In addition, some existing metrics' results cannot support the new outcome-oriented concept.

To solve these problems, this paper presents a new metric development model called OPI that can support the outcome concept. The principle of this model is to define sets of desired outcomes (O), related perspectives (P), and indicators (I) of measurement. To evaluate the model, forty undergraduate students with fundamental knowledge of software metrics were asked to develop size metrics from the defined case study and fill in a questionnaire. The major purpose of the assessment in this paper is only the usability aspect of the model. The result shows that 85 percent think that the model is easy, while 8 and 7 percent conclude that it is moderate and hard, respectively. The next step in this research is to improve the usability of the model and to test other important aspects, for example complexity and domain coverage. In addition, simulation and verification methods will be developed to support the improvement of the model's performance.

REFERENCES

[1] 2GC Limited, "2GC Balanced Scorecard Usage Survey 2009", 2009.

[2] Baker, D., Bridges, D., Hunter, R., Johnson, G., Krupa, J., Murphy, J. and Sorenson, K., "Guidebook to Decision-Making Methods", WSRC-IM-2002-00002, Department of Energy. Available: http://emi-web.inel.gov/Nissmg/Guidebook_2002.pdf

[3] Basili, Victor R. "Using Measurement to Build Core Competencies in Software", Seminar sponsored by Data and Analysis Center for Software, 2005.

[4] Bernard Wong, "The Appropriateness of Gutman's Means-End Chain Model in Software Evaluation", International Symposium on Empirical Software Engineering (ISESE'02), 2002, pp. 56-65.

[5] Business Process Management Initiative (BPMI), "Business Process Modeling Notation (BPMN)", 2004, Available: http://lamswww.epfl.ch/conference/bpmds08.

[6] CEN, "Business requirements specification - Cross industry e-Tendering process", 2007. Available: http://www.evs.ee/product/tabid/59/p-138330-cwa-156662007.aspx

[7] Corlane Barclay, "Towards an integrated measurement of IS project performance: The project performance scorecard", Information Systems Frontiers, Springer Science + Business Media, 2008, pp. 331-345.

[8] Dennis Baker, Donald Bridges, Regina Hunter, Gregory Johnson, Joseph Krupa, James Murphy, and Ken Sorenson, "Guidebook to Decision-Making Methods", developed for the Department of Energy, December 2001.

[9] D. L. Wilkerson, "Accreditation and the use of outcomes-oriented information systems", Archives of Physical Medicine and Rehabilitation, Vol. 78, No. 8, 1997, pp. S31-S35.

[10] Fenton NE and Pfleeger SL, "Software Metrics: A Rigorous and Practical Approach (2nd Edition)", International Thomson Computer Press, 1996.

[11] Francis Lau, Daniel Vincent, Don Fenna, Randy Goebel and Dennis Modry, "Designing an outcome-oriented computer decision-support system for cardiovascular ICU - A preliminary report", Journal of Medical Systems, Volume 15, pp. 359-377.

[12] Hafedh Mili, Ali Mili, Sherif Yacoub, Edward Addy, "Reuse Based Software Engineering: Techniques, Organizations, and Measurement", Wiley, 2001.

[13] H.A. Reijers, "A Cohesion Metric for the Definition of Activities in a Workflow Process", Proceedings EMMSAD, 2003, pp. 116-125.

[14] Irene Vanderfeesten, Jorge Cardoso, Hajo A. Reijers, "A weighted coupling metric for business process models", Proceedings of the CAiSE'07 Forum, 2007. pp. 41-44.

[15] Jan Mendling and Gustaf Neumann, "Error Metrics for Business Process Models", Proceedings of CAiSE Forum 2007, pp. 53-56, 2007.

[16] J. Cardoso, J. Mendling, G. Neumann, and H.A. Reijers, "A Discourse on Complexity of Process Models (Survey Paper)", BPM 2006 Workshops, LNCS 4103, Springer-Verlag Berlin Heidelberg, 2006, pp. 115-126.

[17] Jorge Cardoso, "Control-flow Complexity Measurement of Processes and Weyuker's Properties", 6th International Enformatika Conference. Transactions on Enformatika, Systems Sciences and Engineering, Vol. 8, Hungary, 2005, pp. 213-218.

[18] Jorge Cardoso, "Process control-flow complexity metric: An empirical validation", IEEE International Conference on Services Computing (SCC'06), 2006, pp. 167-173.

[19] Jorge Cardoso, "Approaches to Compute Workflow Complexity", Dagstuhl Seminar Proceedings 06291, The Role of Business Processes in Service Oriented Architectures, Germany, 2006, pp. 1-15.

[20] Jorge Claudio Cordeiro Pires Mascena, Eduardo Santana de Almeida, Silvio Romero de Lemos Meira, "A Comparative Study on Software Reuse Metrics and Economic Models from a Traceability Perspective", Information Reuse and Integration, IEEE International Conference, pp. 72-77, 2005.



[21] Ju-Ling Shih, Eric Zhi-Feng Liu, "A Preliminary Outcome-oriented Review of Game-based Learning Research", 2010 IEEE International Conference on Digital Game and Intelligent Toy Enhanced Learning, 2010.

[22] Kaenchan Dhammaraksa, Sarun Intakosum, "Measuring size of business process from use case description," iccsit, pp.600-604, 2009.

[23] Kaplan R S and Norton D P "The balanced scorecard: measures that drive performance", Harvard Business Review Jan - Feb pp. 71-80. 1992.

[24] Kaplan, R. S., & Norton, D. P. , "Measuring the strategic readiness of intangible assets". Harvard Business Review, 82(2), pp52-63, 2004.

[25] Kathe Callahan, Kathryn Kloby, "Moving Toward Outcome-Oriented Performance Measurement Systems", Managing for Performance and Results Series, IBM Center for The Business of Government, 2009.

[26] Maris Martinsons, Robert Davison, Dennis Tse, "The balanced scorecard: a foundation for the strategic management of information systems", Decision Support Systems 25, Elsevier Science B.V., 1999, pp. 71-88.

[27] Michael Totschnig, Michael Derntl, Israel Gutierrez, Jad Najjar, Roland Klemke, Joris Klerkx, Erik Duval, Franz Muller, "Repository services for outcome-based learning", Proceedings of SE@M10: Fourth International Workshop on Search and Exchange of E-le@rning Materials, volume 681, 2010, pp. 3-12.

[28] Muhammad Ilyas, Touseef Tahir, "Towards a More Structured Goal Definition and Prioritization Approach for an Effective Measurement Process", School of Computing Blekinge Institute of Technology, Sweden, 2009

[29] Philip E. Dennison, Kerry Q. Halligan and Dar A. Roberts," A comparison of error metrics and constraints for multiple endmember spectral mixture analysis and spectral angle mapper", Remote Sensing of Environment Volume 93, Issue 3, 15 November 2004, Pages 359-367


[30] Robert E. Park, Wolfhart B. Goethert and William A. Florac, "Goal-Driven Software Measurement - A Guidebook", Software Engineering Institute, Carnegie Mellon University, Pittsburgh, PA 15213, 1996.

[31] Roger O. Smith, "OTFACT: Multi-level performance-oriented software with an assistive technology outcomes assessment protocol", Journal of Technology and Disability, Volume 14, Number 3, 2002, pp. 133-139.

[32] Ryan K.L. Ko, Stephen S.G. Lee, Eng Wah Lee, "Business process management (BPM) standards: a survey", Business Process Management Journal Vol. 15 No. 5, pp. 744-791, 2009.

[33] Scott R. Millis, "Outcome oriented: Dealing With Missing Data", Center for Outcome Measurement in Brain Injury (COMBI), 2001.

[34] Sergiy Zlatkin and Roland Kaschek, "Towards Amplifying Business Process Reuse", IEEE International Conference on Services Computing (SCC'06), pp. 381-389,2006.

[35] Thomas J. Reynolds, Jerry C. Olson, "Understanding Consumer Decision Making: The Means-End Approach to Marketing and Advertising Strategy", Psychology Press, USA, 2001.

[36] Victor Basili, Jens Heidrich, Mikael Lindvall, Jurgen Munch, Myrna Regardie, Adam Trendowicz, "GQM+Strategies: Aligning Business Strategies with Software Measurement", Proceedings of the First International Symposium on Empirical Software Engineering and Measurement, pp. 488-490, September 20-21, 2007.

[37] William A. Florac, Robert E. Park, Anita D. Carleton, "Practical Software Measurement: Measuring for Process Management and Improvement", Software Engineering Institute, Carnegie Mellon University, Pittsburgh, PA 15213, 1997.