
OR52

EXTENDED ABSTRACTS

Edited by

José-Rodrigo Córdoba

School of Management, Royal Holloway

University of London

Nalan Gulpinar

Warwick Business School

University of Warwick

September 2010


TABLE OF CONTENTS

Exploring the perceived influence of new knowledge on absorptive capacity in a public sector organisation Merle St Clair Auguste, Ashok Jashapara, Edward Bernroider ... 6
From initial risk assessments to system risk evaluation and management for emerging technology: Model development E. Beauchamp, R Curran ... 12
Learning from all sides: Triple Task as a new approach to problem solving Simon Bell, Stephen Morse ... 19
Staffing of call centre queue using geometric discrete-time-modelling and iterative-staffing algorithm X Chen, D J Worthington ... 25
(Re)-conceptualising e-government: Studying and using patterns of practice José-Rodrigo Córdoba, Lizzie Coles Kemp, John Ahwere-Bafo ... 32
Collaborations in information systems: The role of boundary spanning José-Rodrigo Córdoba, Alberto Paucar-Caceres ... 39
A process-based cost-benefit analysis for digitising police suspect interviews John Cussons ... 51
Cause-effect analysis of failure of a sewage treatment plant tank roof structure Mirosław Dytczak, Grzegorz Ginda ... 58
Institutional learning and adaptation to global environmental change: A review of current practice from institutional, socio-ecological, and complexity approaches A Espinosa, G I Andrade, E. Wills ... 65
A polyhedral approach for solving two facility network design problem F Hamid ... 71
Optimisation of prices across a retail fuel network B Jenkins, T Liptrot, D McCaffrey ... 77
Information services for supporting quality and safety management P A Kostagiolas, G A Bohoris ... 84
Location selection for the construction of a casino in the region of Greater London Karim Lidouh ... 89
Multi-Actor, Multi-Criteria Analysis (MAMCA) for transport project appraisal Cathy Macharis ... 98
The use of MCDA in strategy and change management William Mayon-White, Ayleen Wisudha ... 105
The cornerstone of network enabled capability: defining agility and quantifying its benefit James Moffat ... 108
Scheduling of a production line using the AHP method: Optimization of the multicriteria weighting by genetic algorithm Karen Ohayon, Afef Denguir, Fouzia Ounnar, Patrick Pujo ... 114
A framework to assess current discourses in information systems: an initial survey of a sample of IS Journals (1999-2009) Alberto Paucar-Caceres ... 119
Wastewater treatment system selection using the analytical hierarchy process A Perez, M Mena, A Oddershede ... 130
Estimating preferences from pairwise comparisons using multi-objective optimization S Siraj, J A Keane, L Mikhailov ... 138
A method for providing sufficient strict individual rankings' consistency level while group decision-making with feedback Vitaliy Tsyganok ... 142
Multicriteria analysis of policy options scenarios to reduce the aviation climate impact - an application of the PROMETHEE based D-SIGHT software Annalia Bernardini, Quantin Hayez, Cathy Macharis, Yves De Smet ... 148
Analysis of the Brazilian demand for sugarcane, ethanol and sugar Luciano Jorge Carvalho, Elizabeth G Rojas, Rosemarie Bone, Eduardo Ribeiro ... 157
Using ELECTRE and MACBETH MCDA methods in an industrial performance improvement context Vincent Clivillé, Lamia Berrah, Gilles Mauris ... 162
Application of a multi-criteria decision model for the selection of the best sales force automatization technology alternative for a Colombian enterprise I Cortina, N Granados, M Castillo ... 169
Personnel-related decision making using ordinal expert estimates Sergey Kadenko ... 177
An application of the analytic hierarchy process to the foreign students' integration analysis Antonio Maturo, Rina Manuela Contini ... 184
Solving the VRP with probabilistic algorithms supported by constraint programming Daniel Riera, Javier Faulin ... 190
A model of support for students in mathematics Jon Warwick ... 194
A Taxonomy of ratio scales William C Wedley and Eng Ung Choo ... 199
Lagrangean metaheuristic for the travelling salesman problem Rosa Herrero, Juan José Ramos, and Daniel Guimarans ... 204
A multi-dimensional analysis of R&D performance in the pharmaceutical sector and its association with acquisition history Rupert Booth ... 211
Modelling Traffic Flows on Highways: A Hybrid Approach Salissou Moutari ... 218


Exploring the perceived influence of new knowledge on absorptive capacity in a public sector organisation

Merle St. Clair Auguste1, Ashok Jashapara, Edward Bernroider

School of Management, Royal Holloway, University of London (UK)

Abstract

This pilot study investigates how organisations learn in a social sector environment in the Caribbean. The purpose of this paper is two-fold. First, it explores the relationship between perceptions of new knowledge and exploratory learning. Second, it outlines emerging issues for further research. Exploratory learning is one of three concepts that comprise Absorptive Capacity (AC), along with transformative and exploitative learning. This qualitative study seeks to discover the nature of the relationships between these concepts in social organisations. Data was collected from the organisation using open-ended questions via telephone interviews. An emerging issue suggests that an ideas-based approach to new knowledge is associated with higher resistance to change and influences the exploratory learning stage of AC.

Keywords: Learning; Organisational Studies; Behaviour; Absorptive Capacity.

1. Introduction

The main aim of this empirical pilot study is to investigate how organisations learn. The paper has a dual purpose: it explores the relationship between new knowledge and exploratory learning, and it outlines emerging issues for further research. New knowledge can be internal or external to an individual or an organisation. Exploratory learning refers to the ability to recognise the value of, and to understand, new knowledge in relation to organisational strategic efforts. Cohen and Levinthal (1990) define AC as:

“the ability of a firm to recognise the value of new external information, assimilate it, and apply it to commercial ends...” (p. 128).

AC in this study is based on the model offered by Lane et al (2006), where AC consists of exploratory, transformative and exploitative learning. This theoretical contribution of the processes of AC seems to offer a neutral opportunity for further investigation.

                                                            1 Email: [email protected]  


Figure 1. Process Model of Absorptive Capacity - based on Lane et al (2006)

Prior related knowledge is a requirement for AC to occur. Within the scope of this paper, prior related knowledge is seen as education (informal or formal) and experience in relation to work processes by an individual or an organisation. Prior related knowledge may influence the significance and nature of new knowledge. This may be a cause of conflict if the new knowledge does not integrate easily with prior related knowledge within a group setting. Since AC has its roots in the Research and Development (R&D) literature, one can assume that prior related knowledge (Van den Bosch et al 1999) may be considered valuable. Unlike in private organisations, in public sector organisations prior related knowledge is not assigned a value unless someone attributes value to it. In social organisations exploratory learning can be triggered by new knowledge in the form of emerging ideas. The inclination in the exploratory stage of AC towards a tacit (idea-based) or explicit (evidenced-based) approach to new knowledge in this setting is also under consideration in this study. In shaping this investigation, the focus is on the individual (motivation, action, reaction) in relation to their context. Within this framework, there is also a focus on the relationships between individuals within the organisational units. As mentioned before, the transfer of context from competitive environments to a social context provides an opportunity to highlight alternative features of new knowledge and their link to prior related knowledge. The relationship between the two concepts may also provide the basis for new insights about the influences on AC.

2. Research Methods

The qualitative approach of this study sought to discover whether the relationships between the concepts that make up the AC construct exist in social organisations and how they work. Among the influences on the development of the questions were individual sense-making and the neutral questioning approach to reference interviews (Dervin and Dewdney 1986). The questions raised issues about the influences on AC. A case study was conducted focusing on the individual's perspective of their actions and interactions within the organisation. Data was collected from the organisation using semi-structured telephone interviews. Accordingly, the interview questions related to the three concepts that comprise AC. The following is a sample of the interview questions: What new knowledge do you encounter in the course of your duties? How do you use new knowledge? Describe the way in which you reflect on your work-related processes.


3. Data Analysis

The software package NVivo® was used to analyse the data. The initial data analysis was deductive where codes were structured according to the theoretical framework developed based on the literature and argument posed. The second form of analysis was inductive. The codes were developed from the terms used by the participants. This was followed by grouping terms that appeared to be similar in order to develop themes. The relationship between units (colleagues working more closely together) within the division investigated was also considered. In addition, the consistency of responses of each question from each respondent was analysed to determine similarities and differences. The entire transcript for each respondent was also analysed to ensure the trustworthiness, credibility, and consistency throughout each individual’s self-reports.

Key: - CEO - Heads of Unit - Officers

Figure 2. Groups of individual participants

4. Preliminary Findings and emerging Issues

For the purpose of this paper, the findings focus on new knowledge in relation to exploratory learning.

Role of New Knowledge: relationship between ideas-based and evidenced-based approaches

New knowledge is internal and external to the organisation and may be new to the individual who needs to work with it. Two general responses from units to the questions on the nature of new knowledge stand out. Individuals in groups A and C considered new knowledge first as something that is evidenced-based and idea-based, whereas groups B and D leaned more towards an idea-based view of new knowledge and tended to use the term ‘feel’ as the main way of describing their reaction to new knowledge related to their work. These responses relate to tacit (ideas-based) approaches, where participants speak of a ‘hunch’, and explicit (evidenced-based) approaches, where participants rely on documented knowledge within a public environment. The two groups (A and C) that speak of new internal knowledge lean more to an evidenced-based perception of new knowledge. An evidenced-based approach seems to point to an appropriate source for determining the potential value of new knowledge. Respondents from Group B (idea driven) indicate that it is difficult for new ideas to bring about change in the unit. There appears to be a relationship between the processes of AC related to new knowledge, internal or external, and the corresponding connection to the organisational structure. Notwithstanding this, groups A and D execute in terms of authority and technical know-how with respect to planning and implementation. Another observation concerns the groups' perceptions of timeliness in relation to new knowledge. The idea-driven groups are not as concerned about timing as the ‘evidenced-based’ groups, who find it critical.

Prior related knowledge: level of education and experience

Another focus is on the influence of prior related knowledge and the nature of new knowledge on AC. Prior related knowledge in the ideas-driven groups is grounded in the educator role, as opposed to the evidence-driven groups, who appear to perform more administrative type roles. This is a critical issue since prior related knowledge influences the ability to recognise the value in potential new knowledge.

Types of interactions and other emergent issues

There are indications that, in addition to seeking new knowledge from the Internet, informal interactions (e.g. email, one-to-one conversations) within work groups and more formal ones (e.g. meetings) within the units are some of the ways in which participants discover new knowledge. Some participants stated that, at times, new internal knowledge comes from outside the organisation. Although all participants indicated that new knowledge is critical, this is balanced by responses that new knowledge is sometimes inaccessible. In addition, one participant stated: “... change is always very difficult … and people are not generally compliant with change”. Furthermore, being well informed seems to be more prominent in the management group (C) than in the other groups. However, there is a distinction between new knowledge relating to technical work and new knowledge relating to organisational directions that may affect change from within the organisation. Findings also seem to suggest that there is a relationship between new knowledge and politics which affects the AC process. In particular, politics change the nature of the AC process in such a way that it links directly to exploitative learning, hence circumventing exploratory learning.

5. Discussion

For the purposes of this brief paper, the discussion is centred on the implications of the nature of new knowledge from an individual perspective. According to the majority of the participants, new knowledge may not be new to the world but is new to the individual user. Intellectual property issues are not a concern, since new knowledge appears to be shared between individuals or sought through common places such as the Internet. In addition, collaboration in the exchange of best practices and sharing of knowledge from similar experiences appear to be customary. However, there are conflicting responses about the sharing of new knowledge in public sector environments. Exploratory learning, which involves information seeking, may not be rigorous enough to ensure that the next step of transformative learning is justified. Prior related knowledge seems to be viewed with a few assumptions. The first is that prior related knowledge is ‘good’, and therefore without flaw or bias. It is also presupposed that prior related knowledge has been exposed to some rigor, since previous knowledge is also related to past experiences. In supporting the transformation of new knowledge, the organisation requires prior knowledge that is related to knowledge already known within the organisation (Nonaka and Takeuchi 1995). The findings indicated a tendency for ideas to be contemplated in some instances without evidenced-based characteristics. This suggests that there may be a tension between the two types of new knowledge (internal and external), the groups, and the roles the individuals represent in carrying out their organisational duties.


Also, if AC is influenced by past experience (Zahra and George 2002), then AC may also be affected by the high levels of tacit knowledge evident in one group as opposed to another, and the responsibilities and roles that they play within the organisation. This scenario seems to indicate a varied approach to new knowledge when it comes into specific units. Daghfous (2004) argues that prior related knowledge has a positive effect on AC, as it provides the ability to complete its three main activities, which are “determine the value of new knowledge, assimilate it and apply it to commercial ends” (Cohen and Levinthal 1990). Another finding in this environment is that change is difficult. The respondents indicated that many colleagues are comfortable in what they are doing and do not see the need to change. In this example, prior related knowledge from an individual perspective may not be as positive. Notwithstanding, experiences or ‘learning by doing’ still appear to be a significant factor in improving the organisation's ability to exploit new knowledge (Ahanotu 1998).

6. Conclusion and further research

This study raised a number of issues surrounding the literature that require further and deeper investigation. The paper explored the relationship between perceptions of new knowledge and exploratory learning. An emerging issue suggests that an ideas-based approach to new knowledge is associated with higher resistance to change and influences the exploratory learning stage of AC. This exploration has attempted to contribute to the literature by testing one element of the theory as defined by Lane et al (2006). This provides new insights on AC within a public sector environment. These insights suggest opportunities for further research in exploring the perceived influence of new knowledge on AC in public sector organisations.

7. References

1. Ahanotu N D (1998). A conceptual framework for modeling the conflict between product creation and knowledge development amongst production workers. Journal of Systemic Knowledge Management 1:

2. Cohen W M and Levinthal D (1990). Absorptive Capacity: A New Perspective on Learning and Innovation. Administrative Science Quarterly 35: 128-152.

3. Daghfous A (2004). Absorptive capacity and the implementation of knowledge-intensive best practices. S.A.M. Advanced Management Journal 69: 21-27.

4. Dervin B and Dewdney P H (1986). Neutral questioning: A new approach to the reference interview. Reference Quarterly 25: 506-513.

5. Easterby-Smith M, Graca M, Antonacopoulou E and Ferdinand, J (2008). Absorptive Capacity: A Process Perspective. Management Learning 39: 483-501

6. Inkpen A C and Pien W (2006). An examination of collaboration and knowledge transfer: China-Singapore Suzhou Industrial Park. Journal of Management Studies 43: 779-811.

7. Jansen J J P, Van Den Bosch F A J and Volberda H W (2005). Managing potential and realized absorptive capacity: How do organizational antecedents matter? Academy of Management Journal 48: 999-1015.

8. Jones O (2006). Developing absorptive capacity in mature organisations - The change agent's role. Management Learning 37: 355-376.

9. Lane P J, Koka B R and Pathak S (2006). The reification of absorptive capacity: A critical review and rejuvenation of the construct. Academy of Management Review 31: 833-863.

10. Lane P J, Salk J E and Lyles M A (2001). Absorptive capacity, learning, and performance in international joint ventures. Strategic Management Journal 22: 1139-1161.

11. Lenox M and King A (2004). Prospects for developing absorptive capacity through internal information provision. Strategic Management Journal 25: 331-345.


12. Merriam S B and Caffarella R S (1999). Learning in Adulthood: A Comprehensive Guide. Jossey-Bass: San Francisco.

13. Szulanski G (1996). Exploring internal stickiness: Impediments to the transfer of best practice within the firm. Strategic Management Journal 17: 27-43.

14. Todorova G and Durisin B (2007). Absorptive capacity: valuing a reconceptualization. Academy of Management Review 32: 774-786.

15. Van Den Bosch F A J, Volberda H W and De Boer M (1999). Coevolution of firm absorptive capacity and knowledge environment: Organizational forms and combinative capabilities. Organization Science 10: 551-568.

16. Zahra S A and George G (2002). Absorptive capacity: A review, reconceptualization, and extension. Academy of Management Review 27: 185-203.


From initial risk assessments to system risk evaluation and management for emerging technology: Model development

E. Beauchamp2

Aerospace Engineering

Air Transport & Operations Delft University of Technology

Kluyverweg 1 2629 HS Delft

The Netherlands

&

R. Curran

Abstract

Evaluation and management of non-event-based risks due to emerging technologies in air transport systems is crucial for maintaining system safety levels. A methodology for conducting such a system risk evaluation has to be developed, since at present risk assessments are not conducted using common tools or methodology. Risk assessments tend to be the responsibility of a small specialist team or individual, and safety risks are handled in a different way from business risks. To overcome the fragmentation in risk assessments perceived by Risk, Budget, Quality or Schedule Management functions, and to resolve potential conflicts between safety, efficiency and well-being, we propose to use the Analytic Hierarchy Process (AHP) methodology. Making explicit the ‘differences’ in the perception of new risks and their consequences (both positive and negative) when using AHP will allow us to visualise the diversity of judgments about safety and other criteria and to define the problem. Furthermore, transformation to an integral level, by applying AHP techniques for weighting risk factors within safety parameters when conducting integrated risk evaluation, should result in proposals for new system or process re-design.

Keywords: Air transport, Risk, Management, Decision support systems

1. Introduction

An airline company can be seen as a complex organisation, where different functional groups are affiliated to multiple organisations with different types of operational processes, visions and procedures for risk assessment. When decisions are made on strategies for innovation, triggered either by a desire for competitive advantage from new technology, or by safety incidents, it seems that security, environmental and commercial risks are all considered separately and are not included in the evaluation of safety implications and of risk at the system level.

                                                            2 Email: [email protected]  


Risk assessments tend to apply to isolated areas, processes and procedures and tend to be conducted only when needed for particular operational reasons. In addition, risks incurred by changes are rarely identified and managed systematically. The difficulty in conducting an integrated analysis at the system level arises from the fact that each ‘block’ of processes, operational or commercial, consists of different components which can be classified under four (TPEO) headings: (T) technology/aircraft; (P) people/individuals; (E) environment; (O) organisation(s). All of these different factors influence safety. Some of them, e.g. fuel, are relevant to commercial, operational and technical performance. There is a need for a formal mechanism for the transformation of operational concerns into a template for change, in order to prevent future failure and/or to reinforce a positive outcome, or for an evaluation of decisions on change which integrates all possible risk assessments, with a view to balancing safety and operational implications. The focus of this study is the decision-making process, modelling system risks and their management. This will provide the basis for risk-informed decisions in practice. This study is being carried out in close collaboration with a number of European airlines.

2. Current decision process: analysis of case studies

This study focuses on the inter-connections between the various processes within an airline company which have the most impact on in-flight safety, especially in their capacity to create change across the organisation. The case studies conducted show that there is a lack of information on threats and on the impact on safety performance in the case of organisational or commercial policy changes. Current decision-making processes on change are not structured and are based on subjective judgments from personal expertise. The whole process is not resilient, since knowledge may be easily lost when people leave a company. A non-structured decision-making process mainly generates a list of safety issues, but not a discussion of, or solutions on, how to treat those risks and safety issues. This process is missing a pro-active evaluation of unseen risks, i.e. risks that are not event-based. It also leaves no room for developing a shared picture of each other's criteria. Criteria on the cost of risks to safety are not present in financial considerations. Planning for change aims at resource re-allocation, not at re-design. The synthesis of information between departments takes place on the basis of informal contacts or inter-departmental meetings. The main disadvantages of the current decision process can be summarised as follows:

- The decision-making (DM) process is not structured; the DM process is based on discussion of risks seen before;
- Operational, environmental or commercial threats are all considered separately and are not included in the overall evaluation of safety implications, although they may be important in terms of safety;
- The DM process is based on subjective judgments from personal expertise; the whole process is not resilient, since knowledge may be easily lost when people leave a company;
- Tactical risk treatment is event-based and analysed in terms of regulatory requirements;
- The group strategic decision process currently generates a list of safety issues and some trend analysis, without discussion of or solutions on how to handle them;
- There is no formalised protocol to consider decisions from a multi-criteria point of view;
- There are no details of the cost of risks to safety;
- There is no residual risk assessment in terms of risk impact on safety performance as well as financial performance;
- There is no analysis of safety barriers for the future;
- Co-ordination on finding the best way for risk treatment takes the form of a discussion which does not force all post-holders to exchange their criteria.

We argue that supporting strategic decisions on emerging technology raises the question of whether a mechanism exists for measuring explicitly both the impact of safety on the quality of service to passengers, on operations quality and on ‘company-wide quality’, and the effects of such ‘quality’ actions on safety (Beauchamp-Akatova, 2007). It seems that there is no formal methodology for evaluating and supporting strategic decisions, and that the overall decision-making process does not necessarily guarantee the resolution of conflicts in order to establish what should be changed between relevant post-holders or departments, and in what way.

3. Requirements for enhancing the evaluation of strategic decisions

In order to resolve these problems, a special methodology needs to be introduced to bridge both the sharing of information on risks and capacities across the organization and the inclusion of them in the decision-making process.

This methodology needs to meet a number of requirements, such as (Beauchamp-Akatova, 2008):

- The willingness of a company's management to invest in safety and to allocate resources to safety improvement in a timely, proactive manner, despite pressures on production and efficiency, is the key factor in ensuring a resilient organisation;
- Creating an ability to continuously re-evaluate the assessments of risks (in terms of opportunities as well as of potential failures) in time;
- Creating an ability to re-examine management options/objectives in response to changing risks;
- Creating an ability to interact between diverse groups with diverse knowledge;
- Creating visualisations which capture the integral (system) picture and allow re-organising individual/group assessments in order to envisage multiple perspectives.

System risk evaluation and management includes risk-based decision-making, coupled with organisational learning, and requires i) a shared understanding of the need for change in a company or industry, and ii) the capacity and analytical skill for transforming existing experiences and solutions for new customer services in order to respond to challenges (from changes in the operational environment of airlines to the new opportunities stemming from technological and organisational innovations). A decision-making process needs to be pro-active, considering both the positive and negative implications of risk-induced change. Mechanisms for learning, and for the trade-offs between various stakeholders leading to appropriate action(s), need to be provided. The decision process might then be seen as an evaluation of the consequences of different alternatives, given the plurality of individual and group goals. On the other hand, the decision process should also generate answers on how to modify the system and how to provide dynamic stability at the same time. Understanding how different stakeholders judge the possible negative consequences of risks and their costs, as against the potential benefits from the associated opportunities arising from change, will allow us to visualise the diversity of judgments about safety and other criteria and to learn how to balance them.

4. Towards system risk evaluation and management: Model development

In order to better foresee any system safety implications in a changing environment, the variety of views from different groups of stakeholders, from Flight Operations to Company Management, needs to be examined not as a sum of isolated risk assessments but as system risk evaluation and management, taking into consideration risk interdependency. We aim to develop a methodology using multi-criteria analysis to enable “different stakeholders (including commercial departments) to simulate decisions (e.g. to get feedback about operational consequences of future decisions)”, including human factors and safety. In order to overcome poor system risk and multi-objective trade-off analysis related to a changing environment and to the added risks due to innovation, special attention is drawn to Multi-Criteria (multi-objective) Decision Analysis (MCDA). In the absence of a formal methodology for integrated decision support when envisaging and implementing innovation, MCDA methodology can be useful in overcoming the fragmentation in risk assessments perceived by Risk, Budget, Quality or Schedule Management functions, in order to resolve potential conflicts between safety, efficiency and well-being. The ‘questioning of existing organisational policies’ before making decisions about change(s) always requires a systematic evaluation of new (non-event-based) risks, and thus of safety implications. Decision Support Systems (DSS) may be helpful in the ‘process of reflection and questioning’, in order to build scenarios and to evaluate whether a system is fit for purpose, as well as in ‘re-framing’, in order to find what should be changed and in what way (Beauchamp-Akatova, 2007).

This work applies a particular technique for multi-criteria decision analysis, the Analytic Network Process (ANP), to system risk evaluation and management, usable by a company in its strategic decision making (Beauchamp-Akatova, 2009). We apply the AHP/ANP methodology (Saaty, 1980; 2001; 2004) to create an integrated picture of system risks, including the detection of potential conflicts as well as their resolution. ANP is a general case of the AHP (Analytic Hierarchy Process) and is recommended for helping to learn and to understand different points of view within a decision-making process. Both AHP and ANP are suitable for dealing with tangible and intangible criteria, for resolving conflicts and improving communication. However, AHP uses a unidirectional hierarchical relationship between levels in the decision model, while ANP allows for complex interactions between sub-models and between criteria inside a cluster. Thus, for our study we present here two examples of a generic framework (see Figures 1 and 2).

Figure 1. An example of AHP model/ framework for supporting a decision making process

Figure 2. An example of ANP model/framework for supporting a decision making process

Structured decision making (Failing et al., 2007) is an organised process for engaging multiple parties in a productive decision-oriented dialogue that considers both facts and values. It relies on the principles and tools of decision analysis, the core elements of which include defining objectives and measures of performance, identifying and evaluating alternatives, and making choices based on a clear understanding of uncertainties and trade-offs (Hammond et al., 1999).


We propose that a structured decision-making process for prioritising the system risk solutions and/or treatments against cost, resources etc. will require the following steps:

- Define the objectives and performance measures (per post-holder, e.g. Safety Department; Ground Services; Flight Operations; Engineering; Cabin Operations);
- Workshop together: exchange each other's objectives and criteria/measures and generate a set of possible alternatives for risk treatments;
- Evaluate and prioritise the consequences (risks; cost) of each alternative (individually or in a group per post-holder);
- Evaluate the system consequences and choose the best risk treatment, using AHP/ANP-based decision support for an Accountable Manager;
- Implement and monitor.

The model based on AHP/ANP is currently being developed. This model can help in conducting an integrated analysis of explicit objectives expressed by different stakeholders, as well as of ideal criteria for long-term airline company development. Subsequently it would be possible to build further models in order to define and select the best strategies, with a view to avoiding undesirable changes in safety or quality levels while making commercial decisions.
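To illustrate the kind of computation such AHP/ANP-based support involves, the sketch below derives priority weights from a single pairwise-comparison matrix using Saaty's principal-eigenvector method and reports the consistency ratio. The comparison values and the alternatives are hypothetical; this is only a minimal sketch of one building block, not the authors' model, which is still under development.

```python
import numpy as np

def ahp_weights(A):
    """Priority weights from a positive reciprocal pairwise-comparison matrix A,
    using Saaty's principal-eigenvector method, plus the consistency ratio."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    eigvals, eigvecs = np.linalg.eig(A)
    k = int(np.argmax(eigvals.real))                 # principal (largest) eigenvalue
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()                                     # normalised priority vector
    ci = (eigvals[k].real - n) / (n - 1)             # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32}.get(n, 1.45)  # Saaty's random index
    return w, ci / ri

# Hypothetical pairwise comparison of three risk-treatment alternatives on one criterion
A = [[1.0, 3.0, 5.0],
     [1 / 3, 1.0, 2.0],
     [1 / 5, 1 / 2, 1.0]]
weights, cr = ahp_weights(A)
print("weights:", weights.round(3), "consistency ratio:", round(cr, 3))
```

In a full AHP or ANP model, such local priority vectors would be combined across the hierarchy or supermatrix to rank the candidate risk treatments.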

5. Conclusion

The main contribution of this paper is an assessment of a model for the simulation of decisions on emerging technologies and system risk evaluation and management as applied to an air transport system. This model is being applied to concrete cases in order to evaluate the consequences of innovation or change; to choose courses of action on risk management and/ or to make design choices. The results of this study will contribute to the development of a usable version of MCDA at senior management level, using techniques for weighting risk factors within safety parameters when conducting risk evaluation at the system level. As such they are relevant both to the strategic loop of organisational learning, and to strategic decision-making.

6. References

1. Beauchamp-Akatova E (2007). Multi-criteria decision modeling as a contributor to strategies for change. Evaluating change with respect to the balance between commercial decisions and operational consequences for the sustainable future of an (airline) company. Paper presented at NetWork Workshop “Resolving multiple criteria in decision-making involving risk of accidental loss”. Steinhoefel, Germany, September 27-29, 2007.

2. Beauchamp-Akatova E (2008). Balancing Safety Issues with Operational and Commercial Decisions in an Airline Company, In Proceedings of the Third International Symposium on Resilience Engineering, October 28-30, 2008, Antibes-Juan-les-Pins, France, Mines ParisTech les Presses: Collection Sciences Economiques et Sociales, pp 3- 10.

3. Beauchamp-Akatova E (2009). Towards Integrated Decision-Making for Adaptive Learning: Evaluation of systems as fit for purpose, Journal of Risk Research, Vol. 12 (3-4): 361-373


4. Failing L, Gregory R, Harstone M (2007). Integrating science and local knowledge in environmental risk management: a decision-focused approach. Ecological economics, 64: 47-60.

5. Hammond J, Keeney R L, Raiffa H (1999). Smart Choices: A practical guide to making decisions. Harvard Business School Press, Cambridge, MA.

6. Saaty T L (1980). The Analytical Hierarchy Process. New York: McGraw-Hill.

7. Saaty T L (2001). Analytic Network Process: Decision Making with Dependence and Feedback. RWS Publications: 386 pp.

8. Saaty T L (2004). Fundamentals of the Analytic Network Process: dependence and feedback in decision-making with a single network. Journal of Systems Science and Systems Engineering, Vol. 13 (1): 1-35.


Learning from all sides: Triple Task as a new approach to problem solving

Simon Bell3

Communications and Systems Department, MCT Faculty, Open University Milton Keynes, MK7 6AA.

Telephone: +44 (0)1953 604594

Stephen Morse4 Department of Geography, University of Reading, Whiteknights, Reading, RG6 6AB.

Telephone +44 (0)118 3788736

Abstract

This paper introduces the rationale behind a new approach to problem solving – Triple Task (TT) – and discusses how this adds new dimensions to problem solving. TT provides a means for groups to engage together in purposeful work and, at the same time, for facilitators to understand what may be influencing the outputs generated by groups, in particular the role of the group dynamic. The latter should help with the process of facilitation but could also help groups appreciate their own functioning. TT thus moves away from envisioning problem solving only as a means to an output and towards a better understanding of the process that arrived at the output.

Keywords: Triple Task Method, problem solving, workshops

1. Background

Participatory research within the context of ‘Problem Structuring Methods’ (PSM) takes many forms, but these share an underlying philosophy of ensuring that all those involved - be they 'researcher' or 'researched' - are involved in the design of a research process as well as the interpretation of findings. However, most participatory methods stop at the point where outputs have been achieved, with little or no attempt to appreciate the dynamics that may have been at play within the group in arriving at those outputs. Yet experienced workshop facilitators can ‘tell’ when a workshop has worked well, whether some groups have been more insightful than others, whether the dynamics within some groups or the background of the individuals within those groups have hindered or helped their process of discovery, and so on. If there are problems with group dynamics then experienced facilitators will often try to intervene to help the group. Also, of course, the learning which the facilitator undertakes during workshops will help with the future facilitation of other workshops. However, to date there has been little or no attempt to formalise all of this and thus provide an aid for both facilitators and participants. Triple Task (TT) is a new participatory methodology for PSM which is designed not only to allow participants to undertake an analysis of problems or issues but also to allow for a degree of understanding of the group dynamics that have been involved and, ultimately, of how those dynamics may have influenced outputs. TT assumes that an understanding of this maelstrom of influence can help with an understanding of why insights were arrived at and thus help with an appreciation of variation that may be seen between groups.

3 Email: [email protected]  4 Email: [email protected]

This paper provides a brief outline of the TT process and an example of its application in practice via an EU Framework 7 funded project called POINT (Policy Influence of Indicators).

2. Triple Task Process

TT involves three tasks. Task 1 generates the group's systemic and reflective answers to research questions, while Tasks 2 and 3 are designed to explore the ways in which the groups function and how this influences their analysis in terms of what emerges under Task 1. From the perspective of participants they only experience Task 1; Tasks 2 and 3 are largely invisible to them. An outline of the three tasks is set out as follows:

Task 1: This is derived from Soft Systems Methodology (Checkland and Scholes, 1990; Bell 2000; Mingers 2001; Winter and Checkland 2003) blended with worked/practitioner approaches derived from Participatory Appraisal methods (Chambers 2002, Bell and Coudert 2005) and elements from the psycho-dynamic tradition (Bridger 2007). Task 1 is the main element of TT in the sense that it is the task which is apparent to the groups and provides the insights with regard to the research questions. Task 1 is subdivided into three main steps as set out below.

Scoping: A Rich Picture is employed as a means to capture ‘stories’ from participants. The Rich Picture is an important element of Task 1, and each group begins with a pictorial representation of the significant components and linkages of the system being explored in the research. Participants are then encouraged to draw out major tasks and issues which form a central concern to them. These are organised in terms of precedent and priority, and related tasks and issues are ‘clustered’ into indicative systems of challenge (SoCs).

Visions of Change (VoCs): This step encourages the groups to explore what changes are required in order to address the SoCs identified in the Scoping step. The emphasis is upon what the group deems to be more important and achievable.

Desired change: Groups are encouraged to set out what practical steps are required to bring about their VoC. This step is supplemented by activity planning and scenario setting. The latter employs another Rich Picture, providing a sort of 'before' and 'after' story when placed next to the Rich Picture that arose out of the Scoping step.

Task 2: This is an ‘outside in’ review of the group dynamic. In effect it is the researcher's assessment of the group process, using a matrix approach originally developed at the Open University and known as BECM (used in, for example, the Open University course 'Managing Complexity: a systems approach' (Open University 2000)). BECM stands for Being, Engaging, Contextualising and Management. BECM can be used as a form of socio-analysis and is related to both systems and psychoanalytic traditions.

Task 3: This is an ‘inside out’ review of the group dynamic - the stakeholders' assessment of their own group process. Task 3 employs Symlog (A SYstem for the Multiple Level Observation of Groups; www.symlog.com). Symlog has been applied in a wide range of situations (Nowack, 1987).

Tasks 2 and 3 represent different ways of looking at group behaviour. Previous studies have shown that such perspectives can overlap, although there are also points of difference (Isenberg and Ennis 1981).


3. Triple Task in action

The TT methodology was first used by the authors in the POINT (Policy Influence of Indicators) project, funded under the European Union Seventh Framework Programme (FP7/2007-2013), grant agreement n° 217207. POINT is a pan-European project involving researchers from across the Union. Its explicit objective is to: “Design a coherent framework of analysis and generate hypotheses on the use and influence of indicators, by pulling together the disparate strands of research and practical experience of indicator use and influence, focusing broadly on European policies, but with a special emphasis on fostering change towards sustainability.” (POINT project document; see point.pbworks.com). As a part of this project a number of TT workshops were held in Malta, Slovakia, Finland, Denmark, Belgium and the UK, where participants were divided into groups and asked to provide a broad analysis of the policy influence of indicators. In a paper as short as this it is not possible, or indeed desirable, to present the detailed results. Instead some findings from the three tasks are provided as Figures 1, 2 and 3 (all from the workshop held in Malta, which had two groups).

Rich Picture Observations

Group A
Disjointed rich picture, with individuals adding bits with little attempt to link those pieces into a coherent story. Some connecting lines added later. Good use of colour. Strong central theme on indicators (weather vane), but orbited by separate stories less obviously linked to the central theme. Some of the issues are more tangential and identified by individuals without apparent ‘buy in’ from others in the group, but are relevant nonetheless.

Group B
Stronger story than that of A, with the concept of indicator use as a ‘road’ with many pitfalls. The road runs through the whole picture and there is much connectivity. Very focussed upon the central theme of the workshop (indicator use), and much more so than that of A.

Figure 1. Rich pictures and some interpretive notes for the two groups (A and B) of the Malta workshop


Group Amoeba Observations

Group A
Indicates a great deal of fluctuation and change in the group over the four elements of BECM. The group does have a better middling stage but is generally conflictual.

Group B
Group B is a very stable group, with a great deal of good internal cohesion. It tends towards almost a ‘flat liner’ from the beginning of the second day.

Figure 2. BECM analysis of the two groups (A and B) in the Malta workshop. BECM results are presented as an ‘amoeba’ diagram, with the circumference representing time and the arms representing BECM scores (high values equate to poorer group function).

The Task 1 outputs of Group B certainly had features that were different to those of Group A. The rich picture of B has more coherence and focus than A, with a clear story of indicators having to travel along a road with many obstacles and feedback loops. As a result there is a theme of linkage and complexity. By way of contrast the rich picture of Group A was more fragmented with a number of almost entirely separate stories that were only later brought together – albeit loosely. The same differences persisted through the outputs of Task 1, with Group B consistently having a stronger and more coherent theme. Indeed if anything the story painted by Group B was perhaps a little too ‘mechanical’ in the sense that the issues they raised are very familiar ones – almost ‘text book’. However, while Group A did struggle a little with coherence and focus in their outputs from Task 1 they did – perhaps ironically – have some more novel insights than did Group B. Group A raised issues such as the importance of knowing what is meant by sustainable development before one can really speak of creating and using indicators, and how education at all levels is an important prerequisite for this.


Symlog Observations

Group A
Symlog field diagram (perception of group members regarding group function, Day 1 and Day 2): Perceptions rest mostly in the ‘liberal group work’ segment of the field diagram (group centred wing). This is probably to be expected given the nature of those invited to attend the workshop (public sector employees and some NGO personnel). One point (Day 1) is right in the ideal.
Deviation from ideal group profile (bars are the average scores for each group; lines are the ‘ideal’ profile for good group function): total deviation from ideal is 4.2 for Day 1 and 6.5 for Day 2. Broad agreement with the Symlog ideal profile for group work. Deviation from ideal is greater in Day 2 than in Day 1.

Group B
Symlog field diagram (perception of group members regarding group function, Day 1 and Day 2): Very similar profile to that of Group A. Points are mostly in the group centred wing (liberal teamwork).
Deviation from ideal group profile: total deviation from ideal is 3.4 for Day 1 and 7.2 for Day 2. As with Group A, there is reasonable agreement with the Symlog ‘ideal’ profile for good group function. Group function appears to have been better in Day 1 than in Day 2. Deviation for Group B is less than that of Group A for Day 1, a result broadly in agreement with the results of BECM.

Figure 3. Symlog analysis of the two groups (A and B) in the Malta workshop: placement of each group within the Symlog field diagram and deviation from the ideal Symlog profile


In terms of group function there does at first glance seem to be a link with the outputs from Task 1. The BECM scores for Group A were consistently poorer than those of Group B, indicating that Group A had problems with coherence. One individual in Group A was attempting to dominate proceedings and this clearly did not go down well with the others. Hence a rich picture which is fragmented, with individuals insisting on having their own say in the rich picture with little attempt to relate their thoughts to those of the others. This situation did change on the second day, and the dynamics of Group A improved. The Symlog results are less concrete than those of BECM, and there are some interesting similarities and contrasts. Members of both groups saw their profile as being ‘group centred’, in contrast to being ‘individual centred’, and this is perhaps not surprising given the makeup of the groups (mostly public sector employees, academics, students and NGO staff). There was no representation from the private sector. Not many of the points were in the ‘most effective teamwork core’ area of the field diagram. Thus the overall assessment is one of groups that were ‘friendly’ (group orientated) and on the border between accepting and opposing authority. For both groups it appears that the deviation from the Symlog ‘ideal’ group profile was greater in Day 2 than in Day 1. In Day 1 the deviation from ideal group function was greater for Group A than for Group B.

4. Conclusions

Pulling all the results together, it does indeed appear that the characteristics (quality?) of Task 1 outputs can at least partly be explained by the group dynamics at play. This was obviously not the only factor. The makeup of the two groups was also different, with arguably more homogeneity in Group B (all public sector employees) than in Group A, which had a greater mix of sectors. Nonetheless the opportunity to be able to link outputs to function is powerful.

5. Acknowledgements

The research leading to these results has received funding from the European Commission's Seventh Framework Programme (FP7/2007-2013) under the grant agreement n° 217207 (POINT project, www.point.pb-works.com).

6. References

1. Bell, S. (with Coudert E.) (2005). A Practitioner's Guide to “IMAGINE”: the Systemic and Prospective Sustainability Analysis - Guide d’Utilisation pour « IMAGINE » : l’Analyse de Durabilité Systémique et Prospective. Blue Plan for the Mediterranean, Paper No. 3. Sophia Antipolis, France, 51 pages. ISBN 2-912081-15-7

2. Bridger, H. (2007). The Consultant and the Consulting Process. London, The Bayswater Institute: Handout at the Midhurst Working Conference.

3. Chambers, R. (2002). Participatory Workshops: A sourcebook of 21 sets of ideas and activities. London, Earthscan.

4. Checkland, P. B. and J. Scholes (1990). Soft Systems Methodology in Action. Chichester, Wiley.

5. Isenberg, DJ and Ennis, JG (1981). Perceiving group members: A comparison of derived and imposed dimensions. Journal of Personality and Social Psychology 41(2): 293-305

6. Mingers, J. (2001). An Idea Ahead of its Time: The history and development of soft systems methodology. Systemic Practice and Action Research 13(6): 733-756.

7. Nowack, W. (1987). SYMLOG as an instrument of internal and external perspective taking - construct-validation and temporal change. International Journal of Small Group Research 3(2): 180 - 197.

8. Open University (2000). T306 Managing Complexity: a systems approach. Milton Keynes, Open University.

9. Winter, M. and P. Checkland (2003). Soft Systems: a fresh perspective for project management. Civil Engineering 156: 187 - 192.


Staffing of call centre queue using geometric discrete-time-modelling and iterative-staffing algorithm

X Chen5 DJ Worthington

Department of Management Science, Lancaster University

Abstract

This paper develops a method for using geometric discrete time modelling (DTM) in call centres and other multi-server queuing systems with time-varying arrival rates. One advantage of using geometric DTM is that it greatly reduces the state-space size and thus significantly reduces the computational requirement for analysing time-dependent queue behaviour. The geometric DTM can therefore be used with an iterative-staffing algorithm to determine appropriate staffing levels in a call centre for each time slot, so as to achieve various targeted time-stable performance measures.

Key words: call centre, time-varying queue, geometric discrete-time modelling, iterative-staffing algorithm

1. Introduction

According to Koole (2006) and Steckley et al. (2005), workforce management is about accurately translating demand for service into a demand for staff, and finding the optimal service-level-to-personnel trade-off. Accurate calculation of the base staff will lead to a cost-efficient balance between service requests and resources. It is crucial to use the right model to represent the call centre queuing characteristics and to apply an efficient algorithm to estimate staff requirements. However, the nature of call centre queuing characteristics makes them difficult and costly to analyse, and hence poses challenges, both theoretical and practical (Gans et al. 2003). The classic Erlang-C based staffing methods, such as the pointwise stationary approximation (PSA) and the stationary independent period-by-period approximation (SIPP), use a sequence of stationary Erlang-C models to approximate system performance. The system in each staffing schedule period is assumed to be in steady state. With relatively short service times and slowly changing arrival rates, PSA/SIPP can produce speedy and good results. But poor performance is observed in situations such as medium-to-long service times, temporary overload or other complicated features (Green et al. 2007), and as a consequence systems may be overstaffed or understaffed. Simulations are less problem-dependent and can be applied much more generally to model call centre queuing characteristics. However, simulation requires sizable computational effort, which directly impacts its effectiveness and practicality when used in staffing algorithms, especially in real-time staffing update applications (Green et al. 2007). This paper introduces an approach to modelling call centre time-dependent queuing characteristics. This approach can work with a staffing algorithm modified from the previous literature to provide solutions to call centre staffing problems. We believe that the combined approach can be used as a call centre analytical management tool, not only because of its improvements over the commonly used analytical formulae but also because of its relative simplicity and lower computational effort compared with simulation.

                                                            5 Email: [email protected]  
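The Erlang-C based PSA/SIPP approach referred to above can be illustrated with a short sketch: each scheduling period is treated as a stationary M/M/s queue, and the smallest number of agents meeting a service-level target is found from the Erlang C delay probability. The function names, the 80%-within-20-seconds target and the arrival and service rates below are illustrative assumptions, not values from the paper.

```python
import math

def erlang_c(s, a):
    """Erlang C probability that an arriving call must wait, with s agents and offered load a = lambda/mu."""
    if a >= s:
        return 1.0                       # unstable period: every call waits
    b = 1.0                              # Erlang B via the standard stable recursion
    for k in range(1, s + 1):
        b = a * b / (k + a * b)
    return s * b / (s - a * (1.0 - b))   # convert Erlang B to Erlang C

def sipp_staffing(rates, mu, target_sl=0.8, awt=20.0 / 3600.0):
    """SIPP-style staffing: treat each period as a stationary M/M/s queue and find the
    smallest s with P(wait <= awt) >= target_sl. rates are arrivals per hour, mu services per hour."""
    levels = []
    for lam in rates:
        a = lam / mu                     # offered load in Erlangs
        s = max(1, math.ceil(a))
        while True:
            pw = erlang_c(s, a)
            sl = 1.0 - pw * math.exp(-(s - a) * mu * awt)
            if sl >= target_sl:
                break
            s += 1
        levels.append(s)
    return levels

# Illustrative half-hourly arrival rates (calls per hour) with a 5-minute mean handling time
print(sipp_staffing([60, 120, 240, 180], mu=12.0))
```

It is exactly this period-by-period calculation whose accuracy degrades with longer service times or temporary overload, which motivates the time-dependent DTM approach developed in the following sections.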


2. Modelling call centre using Geometric DTM

The discrete time modelling (DTM) approach divides the time axis at equally spaced epochs (t). For each pair of adjoining epochs, the probability of having n customers in the system is calculated from the Markovian arrival probabilities and service completion probabilities in the previous epoch, where n can take any integer value. For our proposed geometric DTM algorithm, the following general assumptions are made:

The time of operation of the system is divided into a set of equal non-overlapping intervals, often referred to as slots. The epochs of each slot are labelled by the integers t = 0, 1, 2..., where 0 is the beginning of the operation and the length of each interval represents one unit of time. The system is only observed at each epoch.

Arrivals occur at a (possibly time-varying) random rate between time t and t+1. The probability distribution of the number of arrivals in a slot can be calculated and is independent of arrivals in other slots. Arrivals are assumed to enter the system at the end of the slot in which they arrive.

Service times have a geometric distribution with parameter g, which is the discrete analogue of the exponential distribution: there is no memory of the time already spent in service, and the probability of a service completion in the next time step is constant.

The arrival and service processes are independent: their joint probabilities between any epochs are the products of their individual probabilities.

There are s servers in the system.

The same customer cannot both arrive and complete service between two adjoining epochs.

There is an upper limit on the state space, i.e. an upper limit L on the number allowed in the system; any arrivals when the system is full are assumed to be lost. This limit operates only at epochs.

With the assumptions above, discrete time multi-server queues can be formulated as time-inhomogeneous Markov chains, using the following state notation. Let $N_t$ denote the state at time t, i.e. the number of customers in the system at time t, with $N_t \in \{0, 1, 2, \ldots, L\}$.

Let $p_n(t) = \mathrm{prob}(N_t = n)$ denote the probability of n customers in the queuing system at time t. We first add the possible number of arrivals into the state. Let $R_t$ be the number of arrivals between time t and time t+1, r = 0, 1, 2, ..., L, and let $\alpha_r(t)$ be the probability of r arrivals between time t and t+1, so that

$\alpha_r(t) = \mathrm{prob}(R_t = r), \quad r = 0, 1, 2, \ldots, L,$   (1)

which, for the Markovian arrivals assumed above, can be calculated from the arrival rate in the slot.

We can then update the number of customers in service at time t as $c_t = \min(N_t, s)$.


With the number of customers in service $c_t$, let $D_t$ be the number of service departures between time t and t+1, and let $\mathrm{prob}(D_t = d \mid c_t)$ be the probability of d service completions between t and t+1 conditional on $c_t$. We have

$\mathrm{prob}(D_t = d \mid c_t) = \dfrac{c_t!}{d!\,(c_t - d)!}\; g^{d} (1-g)^{c_t - d}, \quad d = 0, 1, \ldots, c_t.$   (2)

The number of customers at the beginning of time t+1 then becomes $N_{t+1} = \min(L,\, N_t + R_t - D_t)$.

For the probability of reaching state m at time t+1 from time t, we accumulate over all possible states at time t and the resulting state at time t+1:

$p_m(t+1) = \sum_{n} \sum_{r} \sum_{d} p_n(t)\, \alpha_r(t)\, \mathrm{prob}(D_t = d \mid c_t)$   (3)

where the sum runs over all combinations of n, r and d for which $\min(L,\, n + r - d) = m$, with $c_t = \min(n, s)$. Geometric DTM is a forward recurrence algorithm: we start from an initial state distribution at time 0 and repeat the calculation, accumulating over all possible states resulting from the previous state. In this way the queuing system's state distribution can be found at each epoch of the whole time period T. The forward recurrence algorithm can be described as:

a) Initialise all probability distributions and set the starting conditions:
Set $p_0(0) = 1$
For n = 1, 2, ..., L: set $p_n(0) = 0$; Next n

b) Clear all later probability vectors, so that each forward recurrence step can consider each state in turn at time t:
For t = 1, 2, ..., T
  For n = 0, 1, 2, ..., L: set $p_n(t) = 0$; Next n
Next t

c) Iterating forward through time, take each state n in turn and run through all the possible numbers of arrivals r and possible numbers of departures d. This builds up the state probabilities $p_m(t+1)$ for time t+1:
For t = 0, 1, 2, ..., T
  For n = 0, 1, 2, ..., L
    For r = 0, 1, 2, ..., L
      Set $\alpha_r(t)$, the probability of r arrivals
      Set c = min(n, s)
      For d = 0, 1, ..., c
        Set $\mathrm{prob}(D_t = d \mid c)$, the probability of d service completions between t and t+1 conditional on c
        Set m = min(L, n + r - d), updating the number in the system and adjusting for any overflow
        $p_m(t+1) := p_m(t+1) + p_n(t)\,\alpha_r(t)\,\mathrm{prob}(D_t = d \mid c)$
      Next d
    Next r
  Next n
Next t
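A minimal computational sketch of this forward recurrence is given below. It assumes Poisson arrivals with mean lam[t] in slot (t, t+1), which is our reading of the Markovian arrival assumption and equation (1), and uses the binomial service-completion probabilities of equation (2); the function and variable names (dtm_forward, lam, and so on) are ours and not from the original paper.

import math

def dtm_forward(lam, g, s, L, T):
    # Forward recurrence for the geometric DTM of a time-varying multi-server queue.
    # lam[t]: mean number of arrivals in slot (t, t+1); g: per-slot service completion
    # probability; s: number of servers; L: system capacity.
    # Returns p, where p[t][n] approximates prob(N_t = n).
    p = [[0.0] * (L + 1) for _ in range(T + 1)]
    p[0][0] = 1.0                                  # empty system at time 0
    for t in range(T):
        # arrival probabilities alpha_r(t); truncate the Poisson distribution at L
        alpha = [math.exp(-lam[t]) * lam[t] ** r / math.factorial(r) for r in range(L + 1)]
        alpha[L] += 1.0 - sum(alpha)               # lump the tail into the last cell
        for n in range(L + 1):
            if p[t][n] == 0.0:
                continue
            c = min(n, s)                          # customers in service
            for r in range(L + 1):
                for d in range(c + 1):
                    pd = math.comb(c, d) * g ** d * (1 - g) ** (c - d)   # equation (2)
                    m = min(L, n + r - d)          # new state; overflow above L is lost
                    p[t + 1][m] += p[t][n] * alpha[r] * pd               # equation (3)
    return p

For example, dtm_forward([5.0] * 60, g=0.2, s=8, L=40, T=60) would return the state distribution at each of 60 epochs for a constant arrival rate of 5 per slot.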

The geometric DTM can be extended to model call abandonment. In the M/M/s+M system, the time-to-abandon distribution is exponential; in geometric DTM we therefore take the time-to-abandon distribution to be geometric, with probability f of abandonment in the next epoch. The abandonment process is independent of the arrival and departure processes. The abandonment (patience) time has probability function

$\mathrm{prob}(\text{patience time} = t) = f\,(1-f)^{t-1}, \quad t = 1, 2, 3, \ldots$

The possible numbers of abandonments and their associated probabilities can be generated in a similar way to the service departures:

Let $A_t$ be the number of abandonments between time t and time t+1, and let $\mathrm{prob}(A_t = a \mid q_t)$ be the probability of a abandonments between time t and t+1 conditional on $q_t$, the number of customers waiting in the queue at time t. We have

$\mathrm{prob}(A_t = a \mid q_t) = \dfrac{q_t!}{a!\,(q_t - a)!}\; f^{a} (1-f)^{q_t - a}, \quad a = 0, 1, \ldots, q_t,$   (4)

and the forward recurrence becomes

$p_m(t+1) = \sum_{n} \sum_{r} \sum_{d} \sum_{a} p_n(t)\, \alpha_r(t)\, \mathrm{prob}(D_t = d \mid c_t)\, \mathrm{prob}(A_t = a \mid q_t)$   (5)

where the sum runs over all combinations with $\min(L,\, n + r - d - a) = m$, $c_t = \min(n, s)$ and $q_t = \max(0,\, n - s)$. Discretising the service time and time-to-abandon distributions as geometric greatly reduces the size of the state space required in a discrete time formulation of a queuing system, because the no-memory property means that it is no longer necessary to record the residual service or queuing times of the customers in service or in the queue. It is only necessary to record the probability of each number in the system in the state space. The system state transitions that occur between discrete time epochs are caused by the arrivals, service completions or abandonments that occur between those epochs.
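Under the same assumptions as the earlier sketch, the abandonment extension only changes the inner loop of the recurrence; the hypothetical function below adds the binomial abandonment probabilities of equation (4) and the modified state update of equation (5) (again, all names are ours).

import math

def dtm_forward_abandon(lam, g, f, s, L, T):
    # Geometric DTM forward recurrence with abandonment: f is the probability that a
    # waiting customer abandons in one slot; other arguments as in dtm_forward above.
    p = [[0.0] * (L + 1) for _ in range(T + 1)]
    p[0][0] = 1.0
    for t in range(T):
        alpha = [math.exp(-lam[t]) * lam[t] ** r / math.factorial(r) for r in range(L + 1)]
        alpha[L] += 1.0 - sum(alpha)
        for n in range(L + 1):
            if p[t][n] == 0.0:
                continue
            c = min(n, s)              # customers in service
            q = max(0, n - s)          # waiting customers, each of whom may abandon
            for r in range(L + 1):
                for d in range(c + 1):
                    pd = math.comb(c, d) * g ** d * (1 - g) ** (c - d)         # equation (2)
                    for a in range(q + 1):
                        pa = math.comb(q, a) * f ** a * (1 - f) ** (q - a)     # equation (4)
                        m = min(L, n + r - d - a)
                        p[t + 1][m] += p[t][n] * alpha[r] * pd * pa            # equation (5)
    return p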

3. Performance measures

For each epoch t we have the distribution of the number of customers in the system, $\mathrm{prob}(N_t = n) = p_n(t)$. The queue length observed at time t is therefore $Q_t = \max[0,\, N_t - s]$, with the average number of customers at time t

$E(N_t) = \sum_{n=0}^{L} n\, p_n(t)$

and the expected queue length at time t

$E(Q_t) = \sum_{n=s+1}^{L} (n - s)\, p_n(t).$
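As an illustration, these two measures can be read directly from the state probabilities produced by a forward recurrence such as the dtm_forward sketch above (the helper name is ours, not the paper's):

def mean_number_and_queue(p_t, s):
    # p_t[n] = prob(N_t = n) for one epoch t; s = number of servers.
    mean_n = sum(n * pn for n, pn in enumerate(p_t))                 # E(N_t)
    mean_q = sum((n - s) * pn for n, pn in enumerate(p_t) if n > s)  # E(Q_t)
    return mean_n, mean_q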

The target service factor $TSF_t(u)$ is defined as the probability that an imaginary customer arriving in the queuing system at instant t would be served within the service level target time u.


For a service level target time u > 0, an imaginary customer arriving between time t and t+1 would find $N_t$ customers in the system. Under a FIFO discipline, the probability that his waiting time is less than or equal to u time units is the probability that the number of departures between t and t+u is greater than the number of customers waiting in front of him at his arrival, i.e. $\mathrm{prob}(Q_t < D_{(t,t+u]})$, where $Q_t = \max[0,\, N_t - s]$.

The geometric DTM method is used to calculate this probability. The timeline between t and t+u is divided into a number of equal, finer time steps, and the discrete time epochs between t and t+u are indexed by the integers t' = t, t+1', ..., t+u. Let g' denote the service completion probability between one finer time step and the next for a customer in service at time t'. Similarly to equation (2), at time t', with $N_{t'}$ customers in the system and $c'_{t'} = \min(N_{t'}, s)$ customers in service before the imaginary customer, let $D'_{t'}$ be the number of service departures between time t' and the next finer step, and $\mathrm{prob}(D'_{t'} = d \mid c'_{t'})$ the probability of d service completions in that step conditional on $c'_{t'}$. We have

$\mathrm{prob}(D'_{t'} = d \mid c'_{t'}) = \dfrac{c'_{t'}!}{d!\,(c'_{t'} - d)!}\; (g')^{d} (1 - g')^{c'_{t'} - d}.$   (6)

By taking these finer steps forward, at time t', conditional on the imaginary customer who arrived at time t still being in the system, the probability distribution of the number of customers remaining in front of him can be estimated by a forward recurrence analogous to equation (3) applied over the finer steps (equation (7)). Similarly, when incorporating the abandonment process, we have

$\mathrm{prob}(A'_{t'} = a \mid q'_{t'}) = \dfrac{q'_{t'}!}{a!\,(q'_{t'} - a)!}\; (f')^{a} (1 - f')^{q'_{t'} - a}$   (8)

where $q'_{t'} = \max[0,\, N_{t'} - s]$ is the number waiting at time t' and f' is the abandonment probability per finer time step. The forward recurrence over the finer steps then includes abandonment in the same way as equation (5) (equation (9)), and $TSF_t(u)$ is equal to the probability, accumulated over the finer steps up to time t+u, that the departures (and abandonments) have cleared all of the customers who were waiting in front of the imaginary customer (equation (10)).

The virtual waiting time at time t is defined as the time that an imaginary customer would have to wait before service if they arrived in the queuing system at instant t. Let the probability of a virtual waiting time of x at epoch t be denoted $v_x(t)$.


With the estimate of the target service factor in an s-server discrete time system with the system states defined earlier, this probability can be expressed as

$v_x(t) = TSF_t(x) - TSF_t(x-1)$ for x ≥ 1, with $v_0(t) = TSF_t(0)$.   (11)

The mean virtual waiting time at time t is then given by

$E(W_t) = \sum_{x \ge 0} x\, v_x(t).$   (12)
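Assuming the reading of equations (11) and (12) given above, the virtual waiting time distribution and its mean can be recovered from the target service factor values as follows (a hypothetical helper of our own, not from the paper):

def waiting_time_measures(tsf):
    # tsf[u] = TSF_t(u) = prob(virtual waiting time <= u), for u = 0, 1, ..., U.
    v = [tsf[0]] + [tsf[x] - tsf[x - 1] for x in range(1, len(tsf))]   # equation (11)
    mean_wait = sum(x * vx for x, vx in enumerate(v))                  # equation (12)
    return v, mean_wait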

4. The Geometric DTM-Based Iterative Staffing Algorithm

The Iterative Staffing Algorithm (ISA) is a simulation-based staffing algorithm developed by Feldman et al. (2009). The algorithm starts from an infinite-server approximation, and then iteratively sets the staffing level for the next iteration as the least number of servers that meets the service-level constraints. The iteration ends when the change in the staffing levels is negligible. The performance of the ISA relies on the accuracy of the simulations used to estimate the probability distribution of the number in the system, which requires a considerable amount of computation time; for example, 5000 independent replications of the full planning period are needed per iteration in the example of Feldman et al. (2009). Given the nature of the geometric DTM, in which the probability distribution of the number in the system at every epoch can be obtained in a single run, we have modified the ISA so that it works with the geometric DTM. Furthermore, because the geometric DTM uses a forward recurrence algorithm to estimate the state probabilities at each epoch, the geometric DTM-based ISA can work on a forward rolling basis to determine the staffing level across the time epochs. This also removes a significant amount of repeated computation. The geometric DTM-based ISA is described as follows:

Let $s_t^{(i)}$ be the number of servers at time t in iteration i, and let $p_n^{(i)}(t)$ be the probability of n in the system at time t under staffing level $s_t^{(i)}$. The final iteration yields the ISA staffing $s_t^{*}$ and $p_n^{*}(t)$. For each t, the algorithm performs the following steps:

Step 1. Start the iteration with a large finite number of servers, e.g. the maximum allowed by the state space, $s_t^{(0)} = L$, together with the state probabilities carried forward from the previous epoch. Use the DTM forward recurrence algorithm to calculate $p_n^{(0)}(t)$ under $s_t^{(0)}$.

Step 2. Set $s_t^{(i+1)}$ to be the least number of servers such that the delay-probability constraint is met at time t, together with the state probabilities carried forward from the previous epoch. Use the DTM forward recurrence algorithm to calculate $p_n^{(i+1)}(t)$ under $s_t^{(i+1)}$.

Step 3. If the change in the staffing from iteration i to iteration i+1 is negligible, stop the iteration and set $s_t^{*} = s_t^{(i+1)}$. Otherwise, if the staffing levels are still changing, go back to Step 2.
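A minimal sketch of the forward-rolling, geometric DTM-based staffing search is given below. It assumes the delay-probability constraint is read as prob(all s servers busy) <= delay_target, reuses the single-step forward recurrence of the earlier sketch, and uses our own names throughout; on a rolling basis the per-epoch iteration of Steps 1 to 3 collapses to a direct search for the least feasible staffing level.

import math

def dtm_step(p_prev, lam_t, g, s, L):
    # One forward-recurrence step (equations (1)-(3)) under s servers.
    alpha = [math.exp(-lam_t) * lam_t ** r / math.factorial(r) for r in range(L + 1)]
    alpha[L] += 1.0 - sum(alpha)
    p_next = [0.0] * (L + 1)
    for n, pn in enumerate(p_prev):
        if pn == 0.0:
            continue
        c = min(n, s)
        for r in range(L + 1):
            for d in range(c + 1):
                pd = math.comb(c, d) * g ** d * (1 - g) ** (c - d)
                p_next[min(L, n + r - d)] += pn * alpha[r] * pd
    return p_next

def isa_staffing(lam, g, L, delay_target, T):
    # For each epoch: start from the largest staffing level (Step 1), find the least
    # level meeting the delay-probability constraint (Step 2), stop when the level no
    # longer changes (Step 3), then roll the state distribution forward under it.
    staffing = []
    p = [0.0] * (L + 1)
    p[0] = 1.0                                    # empty system at time 0
    for t in range(T):
        s_t = L
        for s in range(1, L + 1):
            p_try = dtm_step(p, lam[t], g, s, L)
            if sum(p_try[s:]) <= delay_target:    # delay probability prob(N >= s)
                s_t = s
                break
        staffing.append(s_t)
        p = dtm_step(p, lam[t], g, s_t, L)        # roll forward under the chosen staffing
    return staffing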


5. Conclusion

This paper presents a geometric-based discrete time modelling approach to analyse time-dependent M(t)/M/s(t) and M(t)/M/s(t)+M queuing systems and approximate their performance. Together with an iterative staffing algorithm, the geometric DTM can be used to provide staffing functions for typical call centre scenarios with time-dependent arrivals, in order to achieve various time-stable service level targets.

6. References

1. Chassioti, E., D. J. Worthington (2004). A new model for call centre queue management. Journal of the Operational Research Society 55: 1352-1357.

2. Feldman, Z., A. Mandelbaum, W. A. Massey, W. Whitt (2009). Staffing of time-varying queues to achieve time-stable performance. Management Science 55(9): 1499-1512.

3. Koole, G. (2006). Optimization of Business Processes: An Introduction to Applied Stochastic Modelling.

4. Green, L. V., P. J. Kolesar, W. Whitt (2007). Coping with time-varying demand when setting staffing requirements for a service system. Production and Operations Management 16(1): 13-39.

5. Gans, N., G. Koole, A. Mandelbaum (2003). Telephone call centers: Tutorial, review, and research prospects. Manufacturing & Service Operations Management 5(2): 79-141.

6. Steckley, S. G., S. G. Henderson, V. Mehrotra (2005). Performance measures for service systems with a random arrival rate. Proceedings of the Winter Simulation Conference.


(Re)-conceptualising e-government: Studying and using patterns of practice

José-Rodrigo Córdoba 6

Lizzie Coles Kemp John Ahwere-Bafo

Royal Holloway, University of London

Abstract Current research on e-government is reaching a potential stage of saturation in relation to how to conceive the next stage of development. Studies of adoption and implementation emphasise frameworks, models and issues related to the effective integration of e-government systems with administrative processes, as well as elements related to user satisfaction or acceptance. Despite a wealth of information from these studies (in different areas of e-government and geographical regions), a strong assumption held is that e-government is 'progressing' towards better citizen engagement and participation. Nor is there clarity as to the usefulness of theories in moving beyond 'explaining' or challenging e-government adoption. This paper proposes a conceptualisation of e-government development based on three different patterns of practice, which embed assumptions as to what is expected from e-government, how to implement it and how to make use of it. We revisit some practical experiences of e-government with a view to highlighting the existence of opportunities and challenges to make e-government more inclusive; these give us insights as to what a next stage for e-government development could be. We aim to inform future practice in e-government planning and formulation.

1. Introduction

E-government, as a set of practices that involve the use of information and communication technologies in public activities, continues to unfold. To date, many governments have completed a first stage of 'progress', in which they offer, at different levels, not only basic information deemed relevant to citizens but also possibilities to transact online in areas ranging from tax registration and payment to company set-up and even the management of medical appointments. The variety of manifestations of e-government has not translated fully into utilisation of such services, let alone citizens' engagement with new possibilities of interaction with other citizens and/or government organisations. There is a need to be more critical of the theoretical assumptions and orientations that have driven e-government projects and implementations so far. This is an area that requires inter-disciplinary and comparative analysis. In this paper we step back sufficiently from e-government developments to give us a view of how we can abstract ways of thinking about e-government. Reflecting on the limits of these ways of thinking can help us formulate what we believe is a next stage of e-government in terms of research and practice. We begin by demystifying e-government as a way of contextualising it as a body of knowledge.

6 Email: j.r.cordoba‐[email protected]


2. From myths to ways of thinking

Two myths can be attributed to e-government, given that its initial promises have not yet been delivered; this can lead us to consider that the truth about this body of knowledge might never be attained, whilst continuing to invite us to pursue it (Foucault, 1980). The first myth is that e-government is progressing towards governance. Models of e-government implementation that transit from information to participation and engagement (Layne and Lee, 2001) have become dominant and have been followed in several cases of e-government strategy formulation and project design. Full functionality and inter-operability of e-government systems might have scored a number of successes in countries and internationally (in terms of UN e-government indexes, for example), but deeper transformations are yet to be achieved (Banister, 2004). Far from developing forms of governance that enable people to take control over their own government affairs, we have witnessed a proliferation of e-government sites that offer many services but few possibilities for engagement. The second myth about e-government is the development of visions and initiatives that are citizen-centred. The discourse on citizens has, to many, been used as a façade to promote administrative transformations aimed at reducing transaction costs, speeding up service delivery and facilitating the management of information technology as an administrative asset. Technology has been underutilised, given that existing (hierarchical) structures are being maintained, to the detriment of better possibilities for interaction with citizens and other stakeholders (Dunleavy, Margetts, Bastow, and Tinkler, 2006). This has left government organisations and citizens impoverished in terms of how to evolve technological structures and configurations towards enabling new forms of governance to flourish. Within this pessimistic spectrum of possibilities, there is, we think, an opportunity to promote better ways of engaging with citizens that, although not necessarily sophisticated at a technological level, could help develop their capability to make their own decisions, making government and other institutions facilitators rather than directors. Technology continues to surprise us with for-profit and not-for-profit developments. To better appreciate how we can continue developing forms of e-government that are more inclusive and engaging, we propose the following three ways of conceiving of e-government. We describe them with their potentialities and limitations. We focus on exploring what people (could) make use of e-government for, rather than aspiring to produce all-encompassing definitions to be pursued.

3. Three ways of conceiving e-government

Córdoba (2009) and Córdoba and Orr (2009) have formulated three different ways to understand e-government as a manifestation of how the information society is unfolding. These ways are conceived of by looking at how the information systems supporting organisations are being implemented, and how systemic thinking (looking at different elements and impacts of a system) can help practitioners and other people involved make sense of the complexity of situations. A first way of conceiving e-government, which seems to dominate how organisations define and implement e-government, is idealist. A vision, followed by goals and indicators, is defined and translated into projects. Governments also create organisational structures to manage the e-government function as a combination of administrative, operational and technological processes. The vision and goals aim to implement citizen-centred discourses on e-government. Ideas and practices from the corporate world are transferred into public organisations.


This way of thinking can help people jointly define and pursue what they want to make of e-government. Often, visions, goals, projects and indicators mirror what other governments or organisations do. As a first step, this way of thinking has helped firm up collective will, commit resources and establish online presences. It has also helped raise issues and possibilities about maintaining and evolving information infrastructures. Where this vision does not help is in taking steps forward to develop governance from citizens' perspectives. A vision that is not grounded in citizens' concerns might find it difficult to be recognised as valid by them. It could also fall short of its transformative power if technology is subsumed under organisational structures and existing ways of governing (Dunleavy et al., 2006). A second way of thinking about e-government, derived from socio-technical perspectives and the mutual interaction between people, technology and their contexts, could conceive of e-government as strategic for engaging citizens (Heeks, 2006). The purpose is not to adopt and implement visions but to discuss and define them with stakeholders, facilitating participation and ownership of what we want e-government to look like. Methods of debate and discussion, followed by the definition of projects and monitoring mechanisms, can help maintain engagement and facilitate the improvement of e-government infrastructures and services. The limits of this perspective come from the limits of the forms of engagement that it wants to promote. Many countries see that their traditions of governance (reference) can play in favour of or against e-government initiatives, and it is necessary to identify these traditions, the relations that they reinforce or resist and the opportunities that they create for any possibility of improvement. A third way conceives that both visions and engagements have limits, many of which cannot be fully foreseen within the dynamics of society (intended or unintended impacts and consequences). This perspective would promote continuous exploration of constraints and possibilities within the relations that are affected or created by e-government discourses and initiatives. It would seek to continuously find both constraints and possibilities for human beings as subjects of power and ethics (Foucault, 1984). This perspective would then ask the question of who we want to become as subjects, as a way of encouraging reflection, strategic co-operation and critique of visions and engagements.

4. The patterns in ‘practice’

We now revisit several case studies to look at how these ways of thinking are embedded in experiences of e-government, and what we can learn if we want to promote engagement and ultimately empowerment in the contexts of these cases. We finish the discussion by looking at what and where the next stage of intervention on e-government could be, according to what we think is appropriate at the policy and practice level. Tables 1 and 2 detail some of the insights from using the above patterns to explore three different initiatives in relation to e-government. All of them have a degree of national impact, as they define and/or implement strategies to bring to life the use of ICTs in helping citizens and governments conduct or transact services. Although focused on different audiences, they all share elements of a vision that defines a state of affairs centred on the citizen, a state in which citizens will be empowered. Implicitly, this state of affairs also defines efficiencies at the government level, and it is clear that both citizens and governments are assumed to be seeking efficient and cost-effective services. The vision homogenises both types of perspective.


Case: The Ghana ICT for Accelerated Development Policy Framework and national Medium-Term Private Sector Development Strategy 2004-2008

Idealist pattern (vision, indicators, transformations):
- Definition of a broader 'vision' (citizen-centred) to cover most aspects of public sector reforms; efforts articulated around 'online government' via multiple channels
- Provision of architectures for IT, costing of services and implementation of services (i.e. outsourcing)
- Establishment of indicators of e-services use, internet penetration and cost reductions to achieve transparency and efficiency
- Centralised management of the e-government initiative at the national level

Strategic pattern (shaping technology, including people and their values):
- Three key strategies form the basis of the Ghanaian experience: (a) providing an enabling business environment (transparency and efficiency); (b) e-governance initiatives; (c) e-skill initiatives
- Government expects the private sector to lead these initiatives whilst it takes a passive or silent role
- Declared consultation with different sector organisations about the vision
- Some principles of the current strategy state that plans must respond to the 'needs' of citizens, and that value will be created for them; principles could be taken to practice via dialogue and consultation
- Different sectors (state, industry, the community) are considered in the definition of e-government projects
- Donor-dependent strategies; private-public sector initiatives and partnerships
- Public financial management reform programme (PUFMARP) to ensure accountability through rationalisation and modernisation
- Introduction of a personal identification number (PIN)
- Passage of legislative instruments, e.g. the electronic transaction bill
- Cross-country partnerships, e.g. the Kofi Annan ICT Centre of Excellence (KACE) and India
- MoU with Intel, CISCO, the NEPAD eSchools Initiative and Microsoft to promote technology transfer in Ghanaian universities and schools through affordable computers and software programmes
- Establishment of Centres for Information Technology (CIT) at local levels (pilot phase, 2005)

Power-based pattern (identifying consequences and possibilities/constraints for action):
- Sources of power that could be used to improve e-government policy and development: the co-existence of both institution- and citizen-centred cultures; some declarative aspects in principles, objectives and action axes (i.e. transparency, efficiency); some autonomy via one-stop online services (i.e. payments, certificates), e.g. the establishment of 23 customer service centres within the Ministry of Public Sector Reforms
- Promoting people's autonomy via services according to their own ethical purposes, e.g. passage of intellectual and physical property rights law, role of mentoring
- Developing the skills of public servants
- Tightening of controls and data management security
- Encouraging IT champions
- Database synchronisation, e.g. integrated payroll systems
- Punitive powers against public servant abuse
- ICT training for public servants
- Increasing ICT infrastructure
- Creation of the Ghana government portal (www.ghana.gov.gh) and other state department portals

Table 1. E-government strategy in Ghana


Case: Letsgo/Empowering Young People Pilot (Source: Sunderland City Council, Project evaluation report for the Letsgocard/Empowering Young People project, 2009)

Idealist pattern (vision, indicators, transformations):
- Vision from the Department for Children, Schools and Families: "Engagement in positive activities by disadvantaged children increases educational engagement and other positive outcomes."
- Sunderland City Council adapted the vision by providing an online platform to deliver a pilot of the scheme and to report on the uptake of positive activities (the online platform enabled 2001 disadvantaged children to purchase positive activities from 100 activity providers). The Sunderland scheme was known as "Letsgo".
- A shared vision of the importance of positive activities as a means of improving positive engagement at school was constructed between stakeholders.
- During the pilot, only indicators reflecting purchasing patterns of positive activities could be identified. These purchasing patterns demonstrated uptake of positive activities, but the pilot did not run long enough to establish the effect on engagement at school.

Strategic pattern (shaping technology, including people and their values):
- The principle of "empowerment" was pursued through dialogue and consultation with young people, who were included in the process of selecting positive activities.
- The scope of the online portal was re-defined in order to respond to the greater than expected range of contexts through which young people gained online access. This resulted in adjustments to the deployment of the service and an increase in the amount of engagement the Letsgo team had with young people.
- 100 activity providers were used to provide positive activities. Activity providers adapted their working practices in order to support children who were new to an activity.
- Aspects of registration took place using offline mechanisms to compensate for the perceived problem of verifying children's scheme eligibility online. This compensation resulted in a re-definition of the registration process and the delivery of the initiative.
- Critical juncture: the programme was delayed and, as a result, contact was lost with young people not attending school. This resulted in the decision to re-focus on participation through schools and focus less on non-attenders, as the resource required to re-establish contact was considered too great.
- Critical juncture: the scheme was extended to compensate for the delay in portal availability and the registration process. The implication of this critical juncture was that a larger number of young people could be included in the scheme.

Power-based pattern (identifying consequences and possibilities/constraints for action):
- In order to participate in the Letsgo scheme, consent from parents was not required; consent to participate therefore came directly from the children, empowering young people to make their own decisions about participation.
- Empowerment of young people to select and take part in positive activities required support from schools and from parents. This potentially constrained the level of empowerment afforded.
- An unintended consequence of the delay in registration was the resulting ineligibility of children who had previously been registered. Stakeholders therefore decided to increase the age limit in order to compensate for slower than expected registration.
- The ability to provide a working portal was delayed, resulting in an unexpected gap between registration and usage. In order to compensate for this unintended consequence, practitioners conducted a larger than anticipated amount of follow-up work to encourage use of activities.

Table 2. Letsgo card project at Sunderland City Council in the UK


In terms of engagement, we find that the mechanisms or approaches used to help shape initiatives are those that would ensure some degree of participation in the context at hand. Some of the declarations of participation have been taken forward; in particular instances (the case of the Letsgo card project), the dynamics between people and the technology used have also generated new forms of interaction to ensure that people's needs are catered for; furthermore, in this case there was feedback to the original vision, something we see as an interweaving of patterns, although this type of pattern 'unpacking' does not develop evenly in the other cases. In this respect, we recognise the diversity of cases. An issue that requires attention is how far participation develops (or is allowed to develop) in relation to what was intended or initially conceived, and how this could create possibilities or constraints for further action by those involved. Following from this, we also find some e-government 'critical junctures' (McLoughlin and Cornford, 2006), or 'windows of opportunity' (Monteverde, 2009): those moments in which both intended (often called empowerment) and unintended outcomes of the e-government strategy and its implementation in projects might offer additional opportunities for people to engage, in their own ways, in shaping the adoption and use of e-government services. The development of alternative forms of technology (social networking, open source software), the implementation of tighter privacy, identity and confidentiality control mechanisms and the corresponding user response to those mechanisms give us insights into what political and philosophical ideologies are being reinforced and what the consequences are. It is then important to review what limitations and chances emerging mechanisms generate, how influential and possibly detrimental strategies of shaping, participation and envisioning become for individuals, and how information technologies are effectively appropriated in relation to these strategies.

5. Re-conceptualisation possibilities

From the above analysis we put forward a number of possibilities to help us re-conceptualise e-government with a view to making it less homogeneous (in terms of its vision) and possibly more contextual. The issue of engagement needs further attention, not least because through it visions have been adopted on the ground, but also because it is generating new forms of interaction online and offline. Furthermore, engagement, initially conceived as a form of consultation, seems to be taking off where declarations of engagement have been put into practice. It may well be that we need to talk about e-engagement to account for the new, emerging and electronically mediated forms in which both governments and citizens develop their participation. Together with this re-definition of engagement in e-government, a more critical attitude towards the transition to e-democracy needs to be assumed. The apparent degree of homogeneity exhibited by the definition of ideals could well mean that democracy is not as strong an ideal to pursue alongside them, or that it has been compartmentalised as an issue to be dealt with in terms of electronic voting. However, the dynamics identified in the cases also seem to suggest that a new form of democracy could well be emerging and deriving from engagement. It might then be possible to re-define traditional notions in terms of their potential, emerging and electronically mediated forms of interaction.

6. Conclusion

This paper has provided a framework for analysing e-government initiatives with a view to identifying elements that could allow us to move away from traditional notions and towards a more dynamically oriented re-conceptualisation of e-government as a phenomenon. Using three different ways of conceiving of e-government, we have been able to elicit some key elements of analysis. The dynamics of how e-government has been taken forward suggest that the elements of vision, engagement and empowerment are present and intertwined in how they manifest themselves in the interaction between people and information and communication technologies. These elements should be re-conceptualised if e-government is to be more fully understood and managed for benefits to citizens other than efficiency. We will continue exploring the relationships between these elements, the patterns that help them stand out in e-government adoption and the possibilities for moving e-government towards better forms of governance by citizens.

7. References

1. Banister, F. 2004. "Deep E-government." in European Group of Public Administration (EGPA) Annual Conference, edited by EGPA. Ljubljana: University of Ljubljana.

2. Córdoba, J.R. 2009. Systems practice in the information society, Edited by S. Clarke. New York: Taylor and Francis (Routledge).

3. Córdoba, J.R. and Orr, K. 2009. "Three patterns to understand e-government: the case of Colombia." International Journal of Public Sector Management 22:532-554.

4. Dunleavy, P., Margetts, H., Bastow, S., and Tinkler, J. 2006. Digital Era Governance: IT Corporations, the State, and E-Government. Oxford: Oxford University Press.

5. Foucault, M. 1980. "Truth and power." Pp. 51-75 in The Foucault Reader: An Introduction to Foucault's Thought, edited by P. Rabinow. London: Penguin.

6. —. 1984. "The ethics of the concern of the self as a practice of freedom." Pp. 281-301 in Michel Foucault: Ethics Subjectivity and Truth: Essential Works of Foucault 1954-1984, edited by P. Rabinow. London: Penguin.

7. Heeks, R. 2006. Implementing and Managing Egovernment: An International Text. London: Sage.

8. Layne, K. and Lee, J. 2001. "Developing fully functional E-government: A four stage model." Government Information Quarterly 18:122-136.

9. McLoughlin, I. and Cornford, J. 2006. "Transformational change in the local state? Enacting e-government in English local authorities." Journal of Management and Organization 12:195-208.

10. Monteverde, F. 2009. "The Process of E-Government Public Policy Inclusion in the Governmental Agenda: A Framework for Assessment and Case Study." in Systems Thinking and e-Participation: ICT in the Governance of Society, vol. Forthcoming, edited by J. R. Córdoba and A. Ochoa-Arias. Hershey (PA): Idea Global.


Collaborations in information systems: The role of boundary spanning

José-Rodrigo Córdoba7

School of Management, Royal Holloway, University of London, Egham, Surrey, TW20 0EX, UK

Alberto Paucar-Caceres

Manchester Metropolitan University Business School, Aytoun Building, Aytoun Street, Manchester, M1 3GH, UK

Abstract

Information systems (IS) has become a 'broad church' that includes academics and practitioners in several areas of theory and application. Collaborations among them are hailed as a healthy sign for the discipline that could show the relevance of IS concepts, approaches and methods in practice. It is not yet clear, though, how collaborations are really contributing to firming up or maintaining the boundaries of IS as a relevant discipline. This paper aims to assess the extent to which collaborations in information systems have contributed, or could contribute, to the development of the IS field if not of IS as a discipline. We contextualise the notion of boundary spanning within the dynamics of disciplines, and use it to analyse collaborations in the period between 1999 and 2009 as reported in articles in a sample of 14 journals from the fields of General Management (2 journals), Information Systems (7 journals) and Management Science/Operational Research (5 journals). In this paper we report the results of a pilot survey involving three journals from the above fields: the European Journal of Information Systems (EJIS), the Journal of the Operational Research Society (JORS) and the Journal of Information Technology (JIT). We draw some implications from our analysis for the future development of collaborations. Keywords: Information systems; discipline; collaboration; relevance; boundary spanning; bibliometric analysis.

1. Introduction

It is now commonly accepted that the so-called information and communications revolution, which has brought radical transformations in businesses (Porter and Millar, 1985), has also brought changes in how individuals work with information systems professionals, and that a field of knowledge on how best to use ICT and the systems built on top of it is therefore required. Information systems (IS), as a field of theoretical and applied knowledge, is said to have an impact on many different areas, leading some to claim that the field of IS can better be seen as a knowledge reference discipline (Baskerville and Myers, 2002). Moreover, from being considered 'support' staff, IS practitioners now claim to be professionals who can and should engage in collaborations with other professionals. This in turn can contribute to producing and applying relevant knowledge to relevant problems. IS professionals in academia are encouraged to span the boundaries within and between academic and practitioner communities (Klein and Hirschheim, 2008). It is important now to assess how collaborations are contributing to spanning the boundaries of the discipline, and to assess the impact of such efforts.

                                                            7 Emails: [email protected]; [email protected]


In this paper we explore in more depth the meaning of collaboration in information systems. We contextualise it within the development of IS as a discipline, understood as a dynamic and often self-producing process within a system of professional disciplines (Abbott, 1988, 2001). We explore how collaborations relate to boundary spanning as an ideal activity for discipline development. In this paper we report the results of a pilot survey involving three journals from the above fields: the European Journal of Information Systems (EJIS), the Journal of the Operational Research Society (JORS) and the Journal of Information Technology (JIT), and we draw some implications from our analysis for the future development of collaborations. We intend to complete the study by assessing the impact of collaborations as reported by journal articles in the rest of the journals of our sample in the period between 1999 and 2009, and hope to draw some initial implications from the analysis that could inform future collaborations in the field. The paper is organised as follows: (a) in the next section, we outline some ideas around the current debate about information systems as a discipline; (b) in sections 3 and 4 we discuss boundary spanning as a disciplinary activity; (c) the methodology for surveying IS, MS/OR and general management journals publishing articles featuring boundary spanning and collaborations is outlined in section 5; and (d) the results of our pilot survey are discussed in section 6, and some initial conclusions are drawn in the final section.

2. Information systems as a reference discipline

To many, information systems (IS) is a body of knowledge that is now on an equal footing with many different disciplines (Baskerville and Myers, 2002; Klein and Hirschheim, 2008). The one-way style of discipline that imports concepts and ideas from sociology, philosophy and management, among others, could be re-defined as a two-way continuous process in which IS methods, techniques and approaches inform how other disciplines tackle problems in their domains. IS is now seen as a diverse set of communities driven by distinct paradigms: positivist, interpretive and critical seem to be the recognised sets of assumptions that guide research and scholarly activity in the area (Klein and Hirschheim, 2008). Many overlaps occur, and there are still issues related to the use of methods under each paradigm (McGrath, 2005), as well as to the openness of communities to validate and accept other communities' discourses (Introna, 2003). Different communities could benefit a great deal from talking to each other, in both academic and practical domains (Klein and Hirschheim, 2008). A possible way to continue developing the discipline is to recognise that there is knowledge that IS generates, manifested in applied knowledge but also in processes of planning, adopting and maintaining information systems (reference Hirschheim 2006). With this in mind, the purpose would be to facilitate communication and knowledge sharing between communities, with the idea that these communities are groups from different disciplines that come together to share and learn about knowledge objects and practices outside their own particular (formal) profession or work (Wenger, 1999).

3. Boundary Spanning

Implied in the above is a concept that has been proposed in the IS literature as boundary spanning (Klein and Hirschheim, 2008), and which is also seen in the management literature as a beneficial activity for generating rigorous but relevant knowledge through research (Bartunek, 2007; Gulati, 2007; Tushman, Fenollosa, McGrath, O'Reilly, and Kleinbaum, 2007). An early account of boundary spanning by Tushman and Scanlan (1981) describes a process in which information is imported into an organisation and exchanged with the organisation's external environment. This is a social process, which also helps individuals to interpret and continuously define their social world. Informational boundary spanning involves 1) obtaining information from outside organisational units and 2) disseminating this information to internal users. The process requires individuals able to establish and use internal and external networks of communication, using appropriate language in each of these. The link with professional associations is also established in this definition as a source of valuable knowledge; the condition of dissemination of information is also established. In boundary spanning, developing and sustaining linkages between communities can be accomplished by individuals with "technical training and cosmopolitan orientation, that is by those who are more professional" (Tushman and Scanlan, 1981). Boundary spanners are people who "do not identify themselves fully with either the academic or practitioner community and who have the courage and the interest to treat both groups as of value and as having something to contribute to the other" (Bartunek, 2007:1329). In the practical IS field, these sorts of individuals are regarded as ideal managers, people who have the skills to ensure project success through technical and business competence as well as leadership, client orientation and integrity (Napier, Keil, and Tan, 2007). At a more academic level, boundary spanners would need to go beyond facilitating inter-disciplinary communication, presence or even lobbying, as many authors suggest (Baskerville and Myers, 2002; Klein and Hirschheim, 2008); they would also need to work around, or deal with, a reward orientation in the field that does not value inter-disciplinary work (Baskerville and Myers, 2002). This means that boundary spanners would also need to deal with a diverse set of communities which would prefer to continue developing specialised knowledge on their own. Many would regard boundary spanning as a sign that a discipline is in regression or losing academic or practical relevance on its own (reference OR article), whereas others would see it as an opportunity to rethink the basic assumptions that drive research in the discipline (Ghoshal, 2005), to jointly negotiate and produce research whose outputs are relevant to different audiences (Bartunek and Louis, 1996), or to learn from each other's community (Klein and Hirschheim, 2008). There are thus a number of possibilities and hopes for boundary spanning and collaboration; however, in the IS field there needs to be a more grounded or in-depth perspective that goes beyond the idea of collaboration. We need to see how boundary spanning has unfolded so far, and how its different manifestations can be improved. In this paper we contextualise boundary spanning within a sociological perspective on disciplines (Abbott, 1988, 2001), which can help us understand which opportunities and challenges lie ahead, and where, for the future development of IS collaborations.

4. Boundary spanning as disciplinary activity

Abbott (2001) regards disciplines as comprising both academic and professional activities, and as having a social identity that recognises the need for society to have individuals with knowledge of problems in a particular domain. Through different stages of differentiation, conflict and absorption, disciplines gain, regain or lose ownership (jurisdiction) of their problems. For Abbott, disciplines emerge when a defined set of problems and solutions results in some individuals abstracting knowledge about them (Abbott, 1988); a discipline has a presence (and similar structuring) in both academic and practical domains (Abbott, 2001). For Abbott, academic and practical work contribute equally to the unfolding of a discipline; such unfolding goes through periods of consolidation, but these also constitute the seeds of change. Professionals in both camps engage in activities of diagnosing, treating and inferring about problems. In the academic camp the focus is on abstracting elements (concepts, methods) from their different manifestations in practice, as well as suggesting new ways of treating problems and inferring (researching) about them. Abbott's perspective does not consider knowledge production as an accumulative process. This is still an issue of contention in the IS field, because to some people there are core and shared knowledge assumptions, such as that information systems are socio-technical artefacts (Baskerville and Myers, 2002) which are shaped by both people and technology through time (Orlikowski, 1992), and which inform process and applied knowledge (Iivari, Hirschheim, and Klein, 2001). The lack of clarity on how knowledge disappears and emerges again, as well as the proposing of fundamental (academic) criticisms or engagements via practitioner communities (Klein and Hirschheim, 2008), leaves unchallenged the assumption that knowledge production is accumulative and somehow consensual. For Abbott, in developing a discipline (profession), the risk of generalising is that of losing jurisdiction over certain problems; this is also the risk that many professionals see when collaborating or conducting inter-disciplinary research. Abbott (1988) says:

“Redundancy [of same knowledge types] will increase efficacy and will thereby help a profession control its jurisdictions. Inconsistency between different ways of construing problems will lead to specialisation and possible differentiation in the profession” (p.56)

This indicates that there is still a need to understand in more depth how a discipline acquires its status as a fragmented and relational set of activities (Hassan, 2006), and how specialisation is a natural consequence of a discipline's dynamic development (Abbott, 2001), whose relation to collaboration needs to be better ascertained. For Abbott, the focus of analysis is the process by which disciplines get recognised. Moreover, disciplines do not exist in a vacuum but rather in a 'system' that is objectively seen but subjectively constructed (Abbott, 1988). In this system, disciplines and sub-disciplines continuously contest each other's areas and those left by others; similar structures among disciplines emerge in response to what society makes of them (or the status given to them) (Abbott, 2001). In this way, boundary spanning becomes subsidiary to a discipline's development, with collaboration forming part of it as a complex battlefield in which disciplines and sub-disciplines keep each other at bay. The ultimate result, as disciplines and sub-disciplines appear, disappear or emerge again, is the provision and maintenance of core knowledge for society that is socially (profitably) and culturally (in line with society's core values) recognised. In this paper we aim to explore the dynamics of boundary spanning by looking at how, within the IS discipline, inter-disciplinary collaborations have unfolded in the last few years; what roles can be assigned to them; and what implications they could have for IS. We want to identify different typologies of IS collaboration; how they fit within processes of differentiation between disciplines or sub-disciplines; how they reflect conflicts between them; and what impacts they generate in terms of new problems to tackle, new ways of diagnosing and treating them, or new inferences.

5. Survey of articles reporting collaborations and boundary spanning in information systems and management science

Although one can argue that, in general, more or less all fields in management exchange knowledge, it is interesting to explore IS's association with management science. Firstly, right from its origins, information systems has been associated with the research and practice of management science. It can be claimed that IS started as a branch or offshoot of management science, which is evidenced by the fact that many IS academics and professionals came from the MS ranks in the early stages of IS development as a field of knowledge and practice. Indicative of this association is the fact that in the US, under the umbrella of INFORMS (the Institute for Operations Research and the Management Sciences), the top MS and IS journals are published. A similar situation occurs in the UK, where the top OR journal (the Journal of the Operational Research Society) and a prominent IS journal, the European Journal of Information Systems (EJIS), are published by the Operational Research Society (ORS). Secondly, the other area that we think it is necessary to include in the survey is the field of General Management, for the obvious reason that in this field general models of management are explored and fed into all the other fields of management, including IS. Under the current boundary spanning trends discussed in the previous sections, we expect exchanges between general management journals and information systems journals. We limited our review to fourteen leading and influential academic journals from three fields: Information Systems (7), Management Science (5) and General Management (2). It is well known that there are a number of journal lists (in all areas of management). For the purposes of this research sample, we opted for the Academic Journal Quality Guide compiled by the Association of Business Schools (ABS), a UK-based organisation (ABS Academic Journal Quality Guide, March 2010; http://www.the-abs.org.uk/?id=257, retrieved on 12th June 2010). Table 1 presents our final sample of fourteen journals; this is a convenience sample that gives a good spectrum of journals, and we use it here as an initial point to gauge collaboration between the fields of IS, MS and General Management. To measure the influence of the journals within and across their disciplines, we have included the 2008 Impact Factor (footnote 8).

General Management journals (2)

For the purpose of illustrating US and European trends, we selected two journals for this part of the survey. We include a 4* ABS-listed US-based journal, the Academy of Management Journal, the flagship of the US Academy of Management; and from the UK we selected the British Journal of Management, a 4* journal (since 2009). A cursory review of these journals reveals that they have been publishing articles on collaboration between management fields, including boundary spanning.

MS/OR journals (5)

INFORMS (the Institute for Operations Research and the Management Sciences) publishes 12 scholarly journals, including the flagship American OR/MS journals. From these, we selected two journals: Management Science (MS) and Operations Research (OR). We also include Omega, the International Journal of Management Science, a journal of British origins but US-based since 1994. To assess the development of MS/OR discourses in the UK we included the Journal of the Operational Research Society (JORS), a well established OR/MS journal in the UK and in Europe. From continental Europe, we included the European Journal of Operational Research (EJOR), the flagship journal of the Association of European Operational Research Societies (EURO).

IS journals (7)

Since we want to explore the trends of collaborative work in the IS field, we selected a sample of seven mainstream IS journals. To balance US and European numbers we decided to include both US-based and EU-based journals. The three US-based journals are Management Information Systems Quarterly (MISQ) and Information Systems Research (ISR), both 4* journals, and the Journal of Management Information Systems (JMIS). We selected four 3* EU-based journals: the Information Systems Journal (ISJ) (the Journal of Information Systems until 1997); the Journal of Information Technology (JIT); the European Journal of Information Systems (EJIS); and Information and Organisation (IO) (AMIT until 2001). The Association for Information Systems (http://ais.affiniscape.com/displaycommon.cfm?an=1&subarticlenbr=432) produces MIS journal rankings (footnote 9). The positions of our IS journals in this ranking are as follows: the three US journals occupy top-five positions on the average rank: MISQ (1.11), ISR (2.67) and JMIS (4.86); the four Europe-based IS journals are amongst the top 30: EJIS (10.17), ISJ (18.17), IO (28.25) and JIT (31.50). Overall, our sample contains seven top IS journals from both sides of the Atlantic.

8 The impact factor is a measure of the frequency with which the "average article" in a journal has been cited in a particular year or period. The annual impact factor is a ratio between citations and recent citable items published. Thus, the impact factor of a journal is calculated by dividing the number of current-year citations to the source items published in that journal during the previous two years: http://thomsonreuters.com/products_services/science/free/essays/impact_factor/

Survey methodology

To identify IS articles reporting collaboration and boundary spanning, we formulated a group of typical keywords associated with the collaboration trends sketched in the previous sections. We assembled two sets of keywords: (1) keywords to search for applications of boundary spanning in Information Systems and Management Science journals; and (2) keywords to search for applications of boundary spanning in General Management journals. These keywords were:

(1) keywords to search IS and MS journals:

Boundary spanning; Collaborative Research; Consultancy; Joint project; Team research; Inter-disciplinary; Knowledge transfer; Multi-disciplinary; IS Research issues; Communities of practice; Networks

(2) keywords to search General Management journals: Boundary spanning; Information System; Communities of practice; Networks; Information Technology; e-commerce; knowledge management

The survey was based on searching articles that are available online. The websites of the fourteen journals were searched (21 June 2010) for the period from January 1999 to December 2009 (footnote 10). All websites featured articles available from January 1974 onwards, so the full range of our survey was covered. To ascertain the number of publications that appear to address collaboration and boundary spanning issues, the strategy was as follows:
(a) All journal website databases were queried for the occurrence of the words (as entire phrases) "collaboration", "boundary spanning", "knowledge networks" and "IS Research issues" in the title, abstract or keywords of the article.

(b) The initial list of papers was then filtered to weed out papers not focused on boundary spanning and collaborative applications (e.g. papers that mention those terms only in a casual form).

Modern journals give accepted papers an 'advance publication' status and make them available online. For articles in 'advance publication' or 'in press' status, we included only those already allocated an issue number, and consequently page numbers, for their future hard-copy publication. The survey considered only papers catalogued by the journals as full research articles; book reviews, editorials, letters and viewpoints were excluded, because authors in the field do not generally cite these documents. Five of the journals (MS, OR, ISJ, EJIS and IO) publish six issues a year; three journals (JORS, Omega and EJOR) have 12 issues a year (JORS started with six issues a year and became monthly in 1978); and five journals produce quarterly issues (BJM, MIS-Q, ISR, JMIS and JIT). Assuming an average of 8 articles per issue, a total of 7,200 articles11 constituted our final sampling frame. With this sampling frame as a target, the titles, abstracts and keywords of articles published over the 10-year period were queried for the occurrence of our set of keywords.

9 Average ranking points are calculated as (total of ranks across articles) / (number of articles in which a journal is ranked): http://ais.affiniscape.com/displaycommon.cfm?an=1&subarticlenbr=432
10 For the pilot study reported here, we searched only the following journals: European Journal of Information Systems (EJIS); Journal of the Operational Research Society (JORS); and the Journal of Information Technology (JIT), using the phrase "boundary spanning" as the keyword.

Journal (ABS list)                                       | Grade | Editorial Board Base | Issues per year | Impact Factor 2008
General Management (2)
Academy of Management Journal                            | 4* | US    | 6  | 2.1
British Journal of Management                            | 4* | UK    | 4  | -0.2
Operational Research / Management Sciences (5)
Management Sciences (MS)                                 | 4* | US    | 6  | 1.5
Operations Research (OR)                                 | 4* | US    | 6  | 0.3
European Journal Operational Research (EJOR)             | 3* | EU    | 12 | 0.5
Omega, The International Journal of Management Sciences  | 3* | US    | 12 | 1.3
Journal Operational Research Society (JORS)              | 3* | EU-UK | 12 | -0.6
Information Systems (7)
MIS Quarterly (MIS-Q)                                    | 4* | US    | 4  | 3.4
Information Systems Research (ISR)                       | 4* | US    | 4  | 0.5
Journal of Management Information Systems (JMIS)         | 3* | US    | 4  | 0.6
Information System Journal (ISJ)                         | 3* | EU    | 6  | 0.6
Journal of Information Technology (JIT)                  | 3* | EU    | 4  | 0.2
European Journal of Information Systems (EJIS)           | 3* | EU-UK | 6  | -0.6
Information and Organisation (IO)                        | 3* | EU    | 6  | NA

Table 1. Sample of General Management, Management Science/Operational Research and Information Systems journals

6. Pilot survey: Discussion of results

In the initial study reported here we searched for the occurrence of just one keyword, the phrase "boundary spanning", in only three journals: the European Journal of Information Systems (EJIS), the Journal of the Operational Research Society (JORS) and the Journal of Information Technology (JIT). Table 2 presents the main results of this initial search: there were 36 articles reporting the use of (or making some reference to) boundary spanning in information systems research and management science practice. For reasons of time, we have not yet analysed the detailed usage of the phrase in each article. The journals with the greatest number of articles featuring "boundary spanning" were the European Journal of Information Systems (EJIS) (20 articles) and the Journal of Information Technology (JIT) (12 articles), followed by the Journal of the Operational Research Society (4 articles).

                                                            11 Sampling frame of a total of 7,200 articles based on: 6 journals x 4 issues x 8 articles x 10 years = 1920 articles; 5 journals x 6 issues x 8 articles x 10 years = 2400 articles; and 3 journals x 12 issues x 8 articles x 10 years = 2880 articles


European Journal of Information Systems EJIS (20 papers)

1. Commentary on Wanda Orlikowski's ‘Material knowing: the scaffolding of human knowledgeability’, Robert D Galliers , European Journal of Information Systems 15, 470-472 (16 October 2006)

2. Interactive innovation of technology for mobile work, Jan Kietzmann , European Journal of Information Systems 17, 305-320 (31 July 2008)

3. Enabling agile adoption practices through network organizations, Dirk S Hovorka, Kai R Larsen, European Journal of Information Systems 15, 159-168 (15 May 2006)

4. Contextual influences on technology use mediation: a comparative analysis of electronic medical record systems, Elizabeth Davidson, Mike , European Journal of Information Systems 14, 6-18 (26 April 2005)

5. Enterprise agility and the enabling role of information technology, Eric Overby, Anandhi Bharadwaj, V Sambamurthy, European Journal of Information Systems 15, 120-131 (15 May 2006)

6. Explaining changes in learning and work practice following the adoption of online learning: a human agency perspective, Tsai-Hsin Chu, Daniel Robey, European Journal of Information Systems 17, 79-98 (25 February 2008)

7. The role of boundaries in knowledge processes, Y Merali, European Journal of Information Systems 11, 47-60 (8 March 2002)

8. The dynamics of IT boundary objects, information infrastructures, and organisational identities: the introduction of 3D modelling technologies into the architecture, engineering, and construction industry, Uri Gal, Kalle Lyytinen, Youngjin Yoo, European Journal of Information Systems 17, 290-304 (24 June 2008)

9. A genealogical study of boundary-spanning IS design , Susan Gasson, European Journal of Information Systems 15, 26-41 (28 February 2006)

10. The conundrum of IT management* , Joe Peppard , European Journal of Information Systems 16, 336-345 (24 October 2007)

11. Use of innovative content integration information technology at the point of sale, Claudia Loebbecke , European Journal of Information Systems 16, 228-236 (28 July 2007)

12. Potential of critical e-applications for engaging SMEs in e-business: a provider perspective, David H Brown, Nigel Lockett, European Journal of Information Systems 13, 21-34 (18 February 2004)

13. Knowledge management and the politics of knowledge: illustrations from complex products and systems, N Marshall, T Brady, European Journal of Information Systems 10, 99-112 (2 October 2001)

14. How organizations adopt information system process innovations: a longitudinal analysis, Erja Mustonen-ollila, Kalle Lyytinen , European Journal of Information Systems 13, 35-51 (18 February 2004)

15. The impacts of competence-trust and openness-trust on interorganizational systems, Mohammed Ibrahim, Pieter M Ribbers , European Journal of Information Systems 18, 223-234 (7 July 2009)

16. Another road to IT turnover: the entrepreneurial path, Gaëtan Mourmant, Michael J Gallivan (Mike), Michel Kalika , European Journal of Information Systems 18, 498-521 (30 November 2009) doi:10.1057/ejis.2009.37 Special Feature

17. Implementing packaged enterprise software in multi-site firms: intensification of organizing and learning, Paul C van Fenema, Otto R Koppius, Peter J van Baalen, European Journal of Information Systems 16, 584-598 (29 October 2007)

18. Reflecting on action in language, organisations and information systems, Pär J Ågerfalk, Göran Goldkuhl, Brian Fitzgerald, Liam Bannon, European Journal of Information Systems 15, 4-8 (28 February 2006)

19. Understanding e-Government project trajectories from an actor-network perspective, Richard Heeks, Carolyne Stanforth , European Journal of Information Systems 16, 165-177 (2 May 2007)

20. Questioning the IT artefact: user practices that can, could, and cannot be supported in packaged-software designs, M W Chiasson, L W Green ,European Journal of Information Systems 16, 542-554 (29 October 2007)

Journal of Information Technology (JIT) (12 papers)

1. Relationship building and the use of ICT in boundary-crossing virtual teams: a facilitator's perspective, David J Pauleen, Pak Yoong, Journal of Information Technology 16, 205-220 (1 December 2001)

2. Special Issue on Global Sourcing: IT Services, Knowledge and Social Capital, Ilan Oshri, Julia Kotlarsky, Journal of Information Technology 23, 1-2 (7 March 2008)

3. Operational capabilities development in mediated offshore software services models, Sirkka L Jarvenpaa, Ji-Ye Mao, Journal of Information Technology 23, 3-17 (7 March 2008)

4. Exploring knowledge exchange in electronic networks of practice, Eoin Whelan, Journal of Information Technology 22, 5-12 (19 December 2006)

5. Moments of governance in IS outsourcing: conceptualizing effects of contracts on value capture and creation, Shaila M Miranda, C Bruce Kavan, Journal of Information Technology 20, 152-169 (28 June 2005)

6. Successful knowledge transfer within offshore supplier networks: a case study exploring social capital in strategic alliances, Joseph W Rottman, Journal of Information Technology 23, 31-43 (7 March 2008)

7. Anxiety and psychological security in offshoring relationships: the role and development of trust as emotional commitment, Séamas Kelly, Camilla Noonan, Journal of Information Technology 23, 232-248 (17 December 2008)

8. Making organisations virtual: the hidden cost of distributed teams, Karin Breu, Christopher J Hemingway, Journal of Information Technology 19, 191-202 (13 July 2004)

9. Action in context and context in action: Modelling complexity in multimedia systems development, Brian Webb, Seamus Gallagher, Journal of Information Technology 24, 126-138 (13 January 2009)

10. Institutionalizing enterprise resource planning in the Saudi steel industry: A punctuated socio-technical analysis, Kalle Lyytinen, Mike Newman, Abdul-Rahman A Al-Muharfi, Journal of Information Technology 24, 286-304 (16 November 2009)

11. Visualization of interfirm relations in a converging mobile ecosystem, Rahul C Basole, Journal of Information Technology 24, 144-159 (28 May 2009)

12. Offshore middlemen: transnational intermediation in technology sourcing, Volker Mahnke, Jonathan Wareham, Niels Bjorn-Andersen , Journal of Information Technology 23, 18-30 (29 January 2008)

Journal of the Operational Research Society (JORS) (4 papers)

1. Scaling knowledge: how does knowledge accrue in systems? , J H Powell, J Swart, Journal of the Operational Research Society 59, 1633-1643 (3 October 2007)

2. Operations management of new project development: innovation, efficient, effective aspects, A H I Lee, H H Chen, H-Y Kang, Journal of the Operational Research Society 60, 797-809 (21 May 2008)

3. Data envelopment analysis with missing data, T Kuosmanen, Journal of the Operational Research Society 60, 1767-1774 (10 December 2008)

4. Operational knowledge management: identification of knowledge objects, operation methods, and goals and means for the support function , F Wijnhoven, Journal of the Operational Research Society 54, 194-203 (1 February 2003)

Table 2. A list of 36 articles mentioning ‘Boundary Spanning’ in EJIS, JIT and JORS

A quick review of the articles reveals that the issue of boundary spanning has been put forward by the IS community in a variety of IS applications and research contexts. The pilot survey of the three journals reveals no clear pattern or trend in the way these concepts are used. The range of usage varies from exploring the "effects of crossing organizational, cultural and time and distance boundaries" (Pauleen et al, 2001) to papers studying "systems definition and negotiation, explaining the situated rationalities underlying IS design as the co-design of business and IT systems" (Gasson, 2006). The next logical step in our study will be to go deeper into analysing the context in which collaboration and boundary spanning are claimed in IS research. For now we simply report our initial findings. A sample of the papers' orientations is shown in Table 3.


Title: Relationship building and the use of ICT in boundary-crossing virtual teams
Topic of collaboration (paper keywords): effects of crossing organizational, cultural and time and distance boundaries on relationship building in virtual teams
Example of application to boundary spanning: This paper reports on a field study of New Zealand-based virtual team facilitators working with boundary-spanning virtual teams. From a facilitator's perspective, boundary-crossing issues (organizational, cultural, language and time and distance) can affect relationship building in many important ways.
Author / Journal: David J Pauleen, Pak Yoong, Journal of Information Technology 16, 205-220 (1 December 2001)

Title: A genealogical study of boundary-spanning IS design
Topic of collaboration (paper keywords): information system design, actor-network theory, boundary objects, situated design
Example of application to boundary spanning: This study provides much needed rich insights into the complexities of systems definition and negotiation, explaining the situated rationalities underlying IS design as the co-design of business and IT systems. A fifth form of boundary object is suggested by this analysis, which is based on the need to align interests across a network of actors.
Author / Journal: Susan Gasson, European Journal of Information Systems 15, 26-41 (28 February 2006)

Title: How organizations adopt information system process innovations: a longitudinal analysis
Topic of collaboration (paper keywords): empirical research, IS development methods and tools, adoption decisions, IS process-innovations
Example of application to boundary spanning: This paper describes how three organizations adopted information system (IS) process innovations (ISPI) using a sample of over 200 adoptions over a period of four decades {..} Within the three organizations, the types and rates of ISPI adoptions varied significantly. These variations can be attributed to learning mechanisms, the influence of legacy platforms and differences in the boundary spanning activities.
Author / Journal: Erja Mustonen-ollila, Kalle Lyytinen, European Journal of Information Systems 13, 35-51 (18 February 2004)

Table 3. Some examples of Information Systems and Management Science articles on 'Boundary Spanning' and 'Collaborative Research'


7. Conclusions

In this paper we have discussed collaboration in information systems in the context of boundary spanning. To provide empirical evidence that collaboration between IS, MS/OR and other fields of management is taking place, a methodology to survey a sample of top journals in these fields was proposed. The paper reports on an initial pilot survey of three journals. From the results, it appears that the journals sampled have produced a healthy number of articles on the topics of collaboration and boundary spanning, corroborating the premise that this topic is well researched and advanced amongst the IS and MS/OR communities. Initial results from our pilot survey indicate a trend in IS research to reach out to other fields of management by opening collaboration via a variety of practical IS and IT applications and projects. Further research is needed to answer questions such as: (a) what sort of collaborations have been reported?; (b) why are these collaborations occurring? (is it because of the increasing 'relevance', or the 'decline', of IS in certain areas?); and, finally and perhaps most importantly, (c) what knowledge has been produced, and effectively 'generalised', as a result of these collaborations? In order to answer these questions, we expect to complete the study both by surveying the other eleven journals in our sample and by studying in detail the collaborations reported in these articles.

8. References

1. Abbott, A. 1988. The System of Professions: An Essay on the Division of Expert Labor. Chicago: University of Chicago Press.

2. —. 2001. Chaos of Disciplines. Chicago: University of Chicago Press.
3. Bartunek, J. 2007. "Academic-practitioner collaboration need not require joint or relevant research: Toward a relational scholarship of integration." Academy of Management Journal 50:1323-1333.

4. Bartunek, J. and Louis, M.R. 1996. Insider/Outsider Team Research. Thousand Oaks (Calif): Sage.

5. Baskerville, R. and Myers, M. 2002. "Information systems as a reference discipline." MIS Quarterly 26:1-14.

6. Ghoshal, S. 2005. "Bad management theories are destroying good management practices." Academy of Management Learning and Education 4:75-91.

7. Gulati, R. 2007. "Tent poles, tribalism, and boundary spanning: The rigor-relevance debate in management research." Academy of Management Journal 50:775-782.

8. Hassan, N. 2006. "Is information systems a discipline? A Foucauldian and Toulminian analysis." Pp. 425-440 in ICIS: 27th International Conference on Information Systems, edited by ICIS. Milwaukee: ICIS.

9. Iivari, J., Hirschheim, Rudy, and Klein, Heinz K. 2001. "Towards more professional information systems development: ISD as knowledge work." in 9th European Conference on Information Systems. Bled, Slovenia.

10. Introna, Lucas D. 2003. "Disciplining information systems: Truth and its regimes." European Journal of Information Systems 12:235-240.

11. Klein, Heinz K. and Hirschheim, Rudy. 2008. "The structure of the IS discipline reconsidered: Implications and reflections from a community of practice perspective." Information and Organization 18:280-302.

12. McGrath, K. 2005. "Doing critical research in information systems: A case of theory and practice not informing each other." Information Systems Journal 15:85-101.

13. Napier, N., Keil, M., and Tan, F. 2007. "IT project managers' construction of successful project management practice: a repertory grid investigation." Information Systems Journal 19:255-282.

14. Orlikowski, Wanda J. 1992. "The duality of technology: Rethinking the concept of technology in organizations." Organisation Science 3:398-427.


15. Porter, Michael E. and Millar, Victor E. 1985. "How information gives you competitive advantage." Pp. 149-160 in Harvard Business Review, vol. 63: Harvard Business School Publication Corp.

16. Tushman, M., Fenollosa, A., McGrath, D., O'Reilly, C., and Kleinbaum, A. 2007. "Relevance and rigor: Executive education as a lever in shaping practice and research." Academy of Management Learning and Education 6:345-362.

17. Tushman, M. and Scanlan, T. 1981. "Boundary spanning individuals: Their role in information transfer and antecedents." Academy of Management Journal 24:289-305.

18. Wenger, Etienne. 1999. Communities of Practice: Learning, Meaning, and Identity. Cambridge: Cambridge University Press.


A process-based cost-benefit analysis for digitising police suspect interviews

John Cussons12 and Ian J Seath13

Independent Consultants

Abstract

The technology to move to digital recording of audio and video interviews is now available in the marketplace and the question being asked is whether its implementation can deliver improvements in efficiency, effectiveness and cycle-time. In this case study, we answer that question. The conference presentation will be based on a piece of work looking at the cost-benefit of moving from tape-based Police interviewing processes to digital processing. This involved some process analysis, activity costing and also showed up some interesting "non-Lean" waste as well as proving the benefit of going digital.

Keywords: Cost Benefit, Costing.

1. Why invest in digital interviewing of suspect interviews?

The technology to move to digital recording of audio and video interviews is now available in the marketplace and the question being asked is whether its implementation can deliver improvements in efficiency, effectiveness and cycle-time. The pressures currently on the Police to improve performance in suspect, witness and victim interviewing processes include:

- budget constraints and challenges to demonstrate value for money and cost improvements
- the task of meeting the demands of Speedy, Simple, Summary Justice (SSSJ)
- tape-based technology that is in use being out-dated and prone to failure
- increasing costs of long-term storage of recorded media
- pressure on central transcribing teams who produce the typed records of taped interviews

A number of constabularies are considering investing in digital technology and a few are running pilots in some of their interview suites. The issue that many constabularies face is that they are unclear what their current interviewing processes cost them, what capacity they actually have within parts of the process, such as the transcribing team, and also whether there are potential cycle-time reductions that can be made which will improve their ability to meet the SSSJ obligations.

Without this understanding of cost, capacity and cycle-time opportunities it is difficult for senior managers to build a clear case for the move to new technology and thereby justify the investment of scarce budgets.

The objectives of the project described in this case study were to:

- Understand the present process for carrying out, recording and transcribing interviews
- Develop the "to be" process, with the adoption of a digital solution
- Cost both the present and potential future process
- Produce a short report showing the costs and benefits of moving to the new technology

                                                            12Email: [email protected]  13 Email: [email protected]


2. Methodology

The project took the following approach:

Define the scope and boundaries of the processes associated with interviewing suspects and witnesses.

Identify a set of performance information and costing data required to complete the analysis.

Conduct face-to-face interviews with key staff who operate those processes, mapping the processes "live" with them and establishing estimates of activity processing times.

Create costed process maps for the current, non-digital processes. Identify the specific ways in which a digital solution would change the processes.

– A semi-digital solution (use of CDs and DVDs) and a fully-digital solution (server-based) were included.

– Process maps, with activity timings were developed for the fully-digital processes only.

Compare the costs of the digital and non-digital processes.

Examples of tools and techniques used within the project included the following:

SIPOC: a framework for defining the boundaries and scope of a process (SIPOC is an abbreviation of Suppliers, Inputs, Processes, Outputs and Customers). We used this in discussion with the client to establish the boundaries for both Suspect and Witness Interviewing processes.

Process Measurement Framework: a model describing the three types of performance data required to manage any business process. It comprises “Internal KPIs” such as volume, time and cost, “Output KPIs” such as error rates and yield, and “Satisfaction KPIs” such as customer satisfaction. The framework was used as a basis for identifying relevant source data required in the project.

Diagnostic Interviewing: face-to-face interviews with key client staff in order to understand their processes, how well they work and opportunities for improvement.

Process Mapping: we used a tool called “Control 2007” [available from Nimbus Partners]. This is a process management tool which enables processes to be captured rapidly in “live workshops”, working with the staff who operate the processes. With its ability to conduct Activity Based Costing analysis and create multiple scenarios, Control 2007 is a simple, but powerful support tool.

Activity Based Costing: Control 2007 has Activity Based Costing functionality to help calculate "cost per transaction", "annual costs" and "Full Time Equivalents (FTEs)". We were able to build a Resource Library within Control 2007 so that staff hourly costs could be built into the process maps and used to calculate transaction costs (a minimal illustrative sketch of this type of calculation appears after this list).

The Seven Wastes: a tool from Lean Thinking which categorises non-value-adding activities into seven types of waste; people waiting, over-production, re-work/failures, people moving, over-processing, inventory and transport of materials.
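To illustrate the kind of activity-based calculation referred to above, the short sketch below computes a cost per interview and an annual staff-time cost from a list of process steps. All step names, timings, hourly rates and volumes are invented placeholders; they are not the client's data, and the sketch is not an implementation of Control 2007, which performed this arithmetic inside the process maps themselves.

    # Illustrative activity-based costing sketch; all figures below are hypothetical.

    steps = [
        # (step name, minutes per interview, hourly rate in GBP for the role performing it)
        ("Prepare tapes & docs",  10, 22.0),
        ("Conduct interview",     45, 22.0),
        ("Transcribe interview",  90, 14.0),
        ("Check transcription",   15, 14.0),
        ("Store and retrieve",    10, 12.0),
    ]

    ANNUAL_INTERVIEWS = 5000   # assumed annual volume

    cost_per_interview = sum(mins / 60.0 * rate for _, mins, rate in steps)
    annual_cost = cost_per_interview * ANNUAL_INTERVIEWS

    print(f"Cost per interview: £{cost_per_interview:.2f}")
    print(f"Annual staff-time cost: £{annual_cost:,.0f}")
    # FTE requirements follow from the same data: annual hours per role divided by
    # the productive hours available per full-time equivalent.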

3. Understanding today’s performance

The key aim of the project was to work with the client's staff to develop a clear benefits case for the move to the new technology. A secondary, but complementary, aim was to identify other improvement opportunities within the current process at the same time.


We started by creating a high-level “as is” process map such as the one shown in Figure 1 below. We defined the scope and boundaries of the processes with the senior management of the constabulary. We then met with front-line staff involved with the interviewing process to carry out a process analysis and develop a detailed set of process maps (using Control 2007 software) which capture the current way that steps are carried out. This analysis also included gaining an understanding of who carries out each step and the time taken.

[Figure 1. Example high-level Process Map for Suspect Interviewing. The map shows the end-to-end steps: (1) Prepare tapes & docs for interview (triggered by the suspect arriving at the interview room with a Custody Ref. No.); (2) Conduct interview; (3) Prepare for transcription; (4) Transcribe interview; (5) Check transcription (with corrections where required); (6) Prepare for storage; (7) Store working copy tape; (8) Store master & working copy tape; (9) Retrieve master & working copy for Court. Outputs include the master tape for storage, the police working copy, the defence working copy, the transcript for the file of evidence and the "Result Interview" for custody.]

Once the "as is" maps were produced, we worked with the constabulary to identify the key performance information and costing data they needed to provide to allow us to develop a fully costed model. These data included:

- Volumes of interviews, stratified by type: Major Crime, Serious Crime and Volume Crime
- Time spent in conducting interviews
- Proportions of interviews that required full, or partial, transcription
- Annual spend on tapes
- Hourly rates of staff involved in the process
- Numbers of staff (FTE) involved in transcription and storage/retrieval processes

As is often the case, the required data were not always readily available and, in some cases, different sources of data gave conflicting information. For example, Custody Suite records were used to quantify the volume of interviews, but were not always able to provide data on interview lengths. It also turned out not to be possible to get useable data on costs of storage and transport of tapes and therefore our cost-benefit analysis only included the direct, time-based costs of operating the interviewing processes.


Data gathering to identify how long each process step takes was largely carried out by interviewing relevant staff. However, this can obviously result in biased data and we therefore asked teams of staff to “validate” the data that had been collated by their colleagues. In an ideal world, a technique such as Activity Sampling would be used, but client time and budget pressures for this project meant we had to adopt a more pragmatic approach. Experience with other clients has shown the approach we adopted to be accurate enough for management to be able to draw reasonable conclusions.

We also met with senior managers to understand the wider needs of the constabulary and how digital interviewing technology could support their future strategic direction.

During the data-gathering phase of this work we reviewed the present processes from a “Lean Thinking” perspective to identify where there were potential areas of improvement to be gained prior to, and following, the implementation of digital interviewing technology. The Seven Wastes approach was used to help categorise the various non-value-adding activities that were found within the processes. Table 1 below summarises some examples.

It was clear from this part of our analysis that there were plenty of opportunities for improvement even without an investment in digital technology.

Waste | Examples in interviewing processes
People Waiting | Waiting time while tapes are retrieved from storage so they can be collected by an officer.
Over-production | Producing three copies of the interview tape even though defence solicitors virtually never ask for a copy.
Rework & Failures | Corrections of transcripts because the original tape was inaudible, broken, etc.
People Moving | Officers travelling to the ROTI team to deliver tapes (due to fear of loss) and any travel to/from tape storage.
Over-processing | Checking information for completeness when it arrives at the ROTI team (it should not be delivered incomplete). Checking transcripts after they have been returned from correction (why would they be wrong a second time?).
Inventory | All the storage of tapes, plus any temporary storage by officers at their desks. Also, the storage of blank tapes required for interviewing.
Transport of materials | All transport of tapes between police stations and ROTI and storage.

Table 1. Examples of waste identified in the interviewing processes, categorised using the "Seven Wastes" from Lean Thinking


4. Demonstrating the benefits of digital interviewing

Once the current situation had been captured, we developed a fully-costed model in Control 2007 which identified the total cost of the interviewing processes. For example, present costs for one hundred interviews may be £8,000 depending on the mix of major, serious and volume crime interviews that a particular constabulary has. This is based only on the cost of staff time in carrying out the process and there are additional costs of tapes, transport and storage (which we did not quantify). Figure 2 below shows the analysis of cost for each step in the end-to-end process. It highlights the fact that approximately half the total process cost is associated with Transcription activities.

Figure 2. Costs of each process step as a % of total process cost for Major Crime Suspect Interviewing

The process model also allowed us to identify how many full time equivalent (FTE) staff the present workload and process required. This could be compared to present staffing levels within the constabulary to demonstrate whether a “Lean Thinking” approach might offer opportunities for improvement even before new technology is applied. This approach also allowed us to identify where potential bottlenecks might exist in the present process.

It was clear from the analysis that the Transcription Team was staffed to a level based on peak demand, and that improvements to work prioritisation and workload balancing could allow staff reductions of perhaps 20%.
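The kind of arithmetic behind that finding can be illustrated as follows; the transcription workload, productive hours and current headcount used below are invented placeholders, not the constabulary's figures.

    # Illustrative FTE requirement vs. current staffing (all figures hypothetical).

    annual_interviews         = 5_000
    transcription_share       = 0.55    # proportion of interviews needing full or partial transcription
    hours_per_transcription   = 3.0     # average transcription plus checking time per transcribed interview
    productive_hours_per_fte  = 1_600   # productive hours per full-time equivalent per year
    current_transcription_fte = 6.5     # hypothetical current establishment, sized for peak demand

    required_fte = annual_interviews * transcription_share * hours_per_transcription / productive_hours_per_fte
    headroom = 1 - required_fte / current_transcription_fte

    print(f"FTE required for the average workload: {required_fte:.1f}")
    print(f"Indicative headroom over current staffing: {headroom:.0%}")
    # With these placeholder figures about 5.2 FTE are required against 6.5 in post,
    # i.e. roughly 20% headroom, which is the order of saving suggested by the analysis.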

The process model will drill down into a much greater level of detail such as that shown below (Figure 3). This example is of a major crime interview process stage prior to digitisation.


[Figure 3. Example "drilled-down" Process Map for conducting an interview (tape-based). Recoverable steps: (1) Interview suspect (tape running); (2) If > 45 mins, prepare & change additional tapes; (3) Continue to interview suspect; (4) Recap and cease interview; (5) Sign master tape; (6) Label working copies; (7) Result interview; (8) Seal master copy with label. Roles shown include the Interviewing Officers, Custody Officer, Defence Solicitor and Suspect; outputs include the police working copy, defence copy, master tape for storage and the "Result Interview" for custody; a 90%/10% branch split is indicated.]

Our next step was to develop the proposed “to be” processes. Based on our knowledge and understanding of the technology available and the direction in which the constabulary wanted to move we were able to develop a fully-costed “to be” process model. An example of what might happen to the above process if a server-based digital recording system was used is shown below (Figure 4).

[Figure 4. Example "drilled-down" Process Map for conducting an interview (digital-based). Recoverable steps: (1) Interview suspect (system running); (3) If > 45 mins, continue to interview suspect; (4) Recap and cease interview; (7) Result interview. The same roles appear (Interviewing Officers, Custody Officer, Defence Solicitor, Suspect), and a "potential delay 45 mins" note is shown alongside the 90%/10% branch split.]


The “to be” model allowed us to show potential savings that were available. For example, server-based digital recording could bring the cost of staff time in carrying out the process down to £6,600 per 100 interviews, again dependent on the mix of major, serious and volume crime interviews. As part of the report we identified the likely payback period which for this constabulary would be less than twelve months, based solely on staff time savings. We used cost data for a typical digital installation, including the costs of staff training and change implementation.
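The payback logic behind that statement can be sketched as follows. The £8,000 and £6,600 staff-time costs per 100 interviews come from the costed process models described above, but the annual interview volume and the installation cost used here are purely hypothetical placeholders, since the constabulary's actual figures are not reproduced in this abstract.

    # Illustrative payback calculation; volume and installation cost are hypothetical.

    cost_per_100_current = 8_000.0   # GBP, staff time, tape-based process (from the costed model)
    cost_per_100_digital = 6_600.0   # GBP, staff time, server-based digital process
    annual_interviews    = 9_000     # hypothetical annual volume for a constabulary
    installation_cost    = 95_000    # hypothetical cost of equipment, training and change implementation

    annual_saving = (cost_per_100_current - cost_per_100_digital) * annual_interviews / 100
    payback_months = installation_cost / annual_saving * 12

    print(f"Annual staff-time saving: £{annual_saving:,.0f}")
    print(f"Payback period: {payback_months:.1f} months")
    # With these placeholder figures the saving is £126,000 a year and payback is about 9 months.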

Additional benefits which we were not able to quantify in our analysis included:

- Downstream monitoring (cost, time)
- Transport (cost, time)
- Storage (cost)
- Security (media/data loss)
- End-to-end cycle-time reduction, which will support the achievement of SSSJ targets

Had these benefits been costed it is likely that a payback period of 6-7 months would have been possible.

The analysis also enabled us to make some recommendations on other areas of potential improvement, in addition to those directly related to digital interviewing technology:

More detailed analysis of the Transcription Team’s processes would confirm the scale of potential savings from productivity improvements

Using the data and workflow available in a server-based system would enable the Transcription Team's workload to be managed more effectively and move away from the current FIFO (first in, first out) system

Quantification of benefits not calculated in this work would add weight to the benefits case

The adoption of a server-based solution would open up opportunities to implement a balanced performance measurement system for the interviewing processes

5. Some challenges and learning points

Finding out exactly how many of each type of interview is done each year was not as simple as you might think

People will tell you the “worst case” processing time rather than the average – for accurate data you may need to use activity sampling and a trawl through a sample of custody suite records

The performance information needed to manage and improve these processes simply isn’t available routinely

There is a huge amount of unquantified waste in these processes and people regard it as “normal” working practice

Although the process time savings aren’t huge, the other benefits should make this type of investment a “no brainer”


Cause-effect analysis of failure of a sewage treatment plant tank roof structure

Mirosław Dytczak14

Warsaw University of Technology, Warsaw, Poland

Grzegorz Ginda15

Opole University of Technology Opole, Poland

Abstract

Contemporary building structures are complex technical objects, and their complexity stems from many different factors. When a failure occurs, this complexity makes it hard to identify the failure causes and mechanisms. A thorough cause-effect analysis of the factors that govern the effective utilisation of a structure is therefore necessary in such cases. This paper deals with the possibility of applying the DEcision MAking Trial and Evaluation Laboratory (DEMATEL) method to identify the components of the cause-effect chain. A sample failure analysis case is included to illustrate the approach.
Keywords: building, construction, engineering, practice of OR, reliability.

1. Introduction: Decision support with regard to structural failure analysis

The need to ensure the required technical performance of a building structure results in the application of appropriate structural and material solutions. All applied solutions should form a harmonious system which enables the building to function properly. On the other hand, numerous factors are present in the processes of design, erection and exploitation of buildings. These factors are of a technical, random and human nature, and they appear at different stages of the shaping of building objects. Their presence therefore makes it hard to identify the causes and mechanisms of a technical failure, and this complexity results in difficulty in the diagnosis of building problems. The application of appropriate decision support tools is necessary to cope with these difficulties in a proper way. Applying the DEMATEL method makes it possible to identify the components of the cause-effect chain of a structural building failure in an objective manner. It is hoped that the application of the proposed identification procedure will help to eliminate the causes of many structural failures in the future. It could be especially helpful in the case of innovative structural solutions and new kinds of building structures.

                                                            14 Email: [email protected] 15 Email: [email protected]

 


2. Description of a sample structure and the case of its failure

The considered structure is an innovative roof cover for an introductory sediment tank of a purification plant. It is a composite shell of revolution of variable thickness. Its failure took place during a snowy winter period. A local stability loss is considered a possible technical cause of the failure, and a brittle breach in a cross-section of the shell was also observed. Analysis of the assumptions made at the structural design stage did not reveal any errors in the mathematical model of the structure, and no differences between the design assumptions and the erection phase were observed either. Tests of samples of the applied material revealed, however, a value of the Young's modulus E for the composite that is much (more than two times) lower than expected. A repeated numerical analysis of the structure's behaviour, based on the correct value of E, confirmed that underestimation of this value could influence the structure's displacements dramatically. The increase in displacements could in turn cause a stability loss of the structure and its failure. Another possible technical cause of the failure is the non-uniformity of the composite's structure. The assumed values and schemes of structural load are also considered a potential cause of the failure. It would therefore be advantageous not only to find the most probable cause of the failure but also to identify the cause-effect relations that would help to avoid similar failures in the future. DEMATEL is one of the approaches that can be very helpful in this regard.

3. DEcision MAking Trial and Evaluation Laboratory (DEMATEL)

DEMATEL is one of the oldest contemporary decision support methods (Fontela and Gabus, 1976). It regained popularity only a few years ago, mainly due to applications in Far East Asian countries. The method has been utilised in many different social, management and technical applications, and it has also been developed considerably in the meantime. For example, Dytczak adapted it for the multi-attribute decision analysis case (Dytczak, 2008). DEMATEL uses a pair-wise comparison concept. The comparison deals with the considered factors, decision-making units and events, and it is related to the direct influence of the compared objects. A discrete ordinal 0–ND scale is applied to measure the intensity of the direct influence relation between the objects considered in pairs; such a scale makes it possible to include both tangible and intangible factors. The zero scale level denotes a lack of influence of the first compared object on the second one, the last scale level expresses extreme influence of the first object, and intermediate scale levels denote gradually increasing influence of the first object on the second one. The complete set of relations between the n considered objects is defined by a graph of direct influence (see Figure 1). The graph includes arcs which connect object vertices; the weights of the arcs denote the intensity and direction of the direct influence between the objects that constitute a pair. Feedback between two objects can also be included: it is expressed by a pair of arcs between the two objects pointing in opposite directions.


[Figure 1. The graph of direct influence: vertices represent the considered objects (T, I, D, C, E, L, S, B, F) and weighted arcs (weights 1-3) show the intensity and direction of direct influence between pairs of objects.]

A weighted incidence matrix representation of the graph, AD, is applied for the assessment of overall (both direct and indirect) influence. The overall influence is defined by the matrix of total influence T:

T = X (I - X)^{-1},    (1)

where I denotes the n × n identity matrix and X is the n × n normalised direct influence matrix. X is obtained by dividing AD by the largest of its row-wise sums (the largest column-wise sum can also be applied in this regard):

X = AD / ( max_{1 ≤ i ≤ n} Σ_{j=1}^{n} a_{ij} ),    (2)

where a_{ij} denotes the elements of AD.

The matrix of total influence makes it possible to define two indices for each object: the position (s+) and the relation (s-). The first expresses the overall role (activity) of an object in the pair-wise comparison process, while the second denotes the level of overall influence on other objects; the relation is thus a measure of an object's causality. The values of both indices for the i-th object result from the sums of the components of matrix T in the row (Ri) and the column (Ci) devoted to that object:

s_i^+ = R_i + C_i,    s_i^- = R_i - C_i.    (3)

Identification of cause-effect chain components of the failure

The components of the computational model of the analysis are presented below. The considered objects pertain to events related to:

1. Decisions of stakeholders which are engaged in a process of structure design and erection.


2. The surrounding environment (independent of the stakeholders).
3. Features and behaviour of the structure itself.

The first group consists of the decisions of three engaged parties:

1. Designer (D). 2. Investor (I). 3. Contractor (C).

The second group is represented by the innovative nature of the structure (T). The third group includes the following events:

1. More disadvantageous (than in plans) real values and distribution of structural load (L).
2. Insufficient structural stiffness (E).
3. Structural stability loss (S).
4. Exceeding of real bearing capacity of the structure (B).
5. The failure itself (F).

The direct influence intensity measurement scale parameter ND = 3 is assumed. The scale levels therefore have the following meaning:

0: a lack of direct influence of the first object on the second object of the considered pair;
1: slight influence of the first object;
2: considerable influence of the first object;
3: extreme influence of the first object.

Assumed levels of direct influence intensity are presented immediately below. Innovative nature of structure:

1. Influences designer’s decisions extremely (evaluation of direct influence intensity equal to 3).

2. Influences investor’s decisions slightly (evaluation equal to 1). 3. Influences contractor’s decisions considerably (evaluation equal to 2).

Decisions of the designer:

1. Influence the investor's decisions slightly (evaluation equal to 1). 2. Influence the contractor's decisions considerably (evaluation equal to 2). 3. Influence estimation of structural load level and scheme extremely (evaluation equal to 3). 4. Influence exceeding of structural bearing capacity considerably (evaluation equal to 2).

Decisions of the investor:

1. Influence the designer’s decisions considerably (evaluation equal to 2). 2. Influence the contractor’s decisions extremely (evaluation equal to 3). 3. Influence stiffness of the structure considerably (evaluation equal to 2).

Decisions of the contractor:

1. Influence the investor’s decisions slightly (evaluation equal to 1). 2. Influence stiffness of the structure considerably (evaluation equal to 2).


It is assumed that insufficient stiffness of the structure influences stability loss extremely (evaluation equal to 3). On the other hand, the level and distribution of structural load cause:

1. Structural stability loss considerably (evaluation equal to 2). 2. Exceeding of real bearing capacity of the structure slightly (evaluation equal to 1).

Structural stability loss influences:

1. Exceeding of bearing capacity of the structure slightly (evaluation equal to 1). 2. Failure occurrence extremely (evaluation equal to 3).

Exceeding of bearing capacity is slightly conducive to three events:

1. Reduction of structural stiffness. 2. Stability loss of the structure. 3. Failure occurrence.

Structural failure is the terminal event and, as such, is a result of all other events. The above assumptions allow the graph of direct influence (Figure 1) and the corresponding matrix of direct influence AD (Eqn 4) to be constructed. The number of matrix rows and columns is equal to the number of considered events, n = 9. Successive rows and columns of the matrix correspond to the considered events in the following order: T, I, D, C, E, L, S, B, F.

AD =
[ 0  1  3  2  3  0  0  0  0 ]
[ 0  0  2  3  2  0  0  0  0 ]
[ 0  1  0  2  3  3  0  2  0 ]
[ 0  1  0  0  2  2  0  2  0 ]
[ 0  0  0  0  0  0  3  0  0 ]
[ 0  0  0  0  0  0  2  1  0 ]
[ 0  0  0  0  0  0  0  1  3 ]
[ 0  0  0  0  0  0  1  0  1 ]
[ 0  0  0  0  0  0  0  0  0 ]    (4)
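For readers who wish to reproduce the calculations, the following minimal Python/NumPy sketch implements Eqns (1)-(3) and applies them to the matrix of Eqn (4) as reconstructed here; it reproduces, to rounding, the total-influence matrix of Eqn (5) and the indices of Table 1. It is our own illustration, not part of any DEMATEL software.

    # Minimal DEMATEL sketch: direct-influence matrix -> total influence and indices.
    import numpy as np

    def dematel(A_D):
        """Return total-influence matrix T, row sums R, column sums C and the
        position (s+) and relation (s-) indices for a direct-influence matrix."""
        A_D = np.asarray(A_D, dtype=float)
        X = A_D / A_D.sum(axis=1).max()        # Eqn (2): normalise by the largest row-wise sum
        n = A_D.shape[0]
        T = X @ np.linalg.inv(np.eye(n) - X)   # Eqn (1): T = X (I - X)^{-1}
        R, C = T.sum(axis=1), T.sum(axis=0)
        return T, R, C, R + C, R - C           # Eqn (3): s+ = R + C, s- = R - C

    labels = ["T", "I", "D", "C", "E", "L", "S", "B", "F"]   # row/column order of Eqn (4)
    A_D = [[0, 1, 3, 2, 3, 0, 0, 0, 0],   # T: innovative nature of the structure
           [0, 0, 2, 3, 2, 0, 0, 0, 0],   # I: investor's decisions
           [0, 1, 0, 2, 3, 3, 0, 2, 0],   # D: designer's decisions
           [0, 1, 0, 0, 2, 2, 0, 2, 0],   # C: contractor's decisions
           [0, 0, 0, 0, 0, 0, 3, 0, 0],   # E: insufficient stiffness
           [0, 0, 0, 0, 0, 0, 2, 1, 0],   # L: structural load
           [0, 0, 0, 0, 0, 0, 0, 1, 3],   # S: stability loss
           [0, 0, 0, 0, 0, 0, 1, 0, 1],   # B: exceeding of bearing capacity
           [0, 0, 0, 0, 0, 0, 0, 0, 0]]   # F: the failure itself

    T, R, C, s_plus, s_minus = dematel(A_D)
    for i, name in enumerate(labels):
        print(f"{name}: R={R[i]:.4f}  C={C[i]:.4f}  s+={s_plus[i]:.4f}  s-={s_minus[i]:.4f}")
    # e.g. T: R=1.6156  C=0.0000  s+=1.6156  s-=1.6156, matching the first row of Table 1.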

The resulting matrix of total influence T is presented in Eqn (5). It makes it possible to obtain the row-wise and column-wise sums of its components and the position and relation indices included in Table 1. A graphical illustration of the results is presented in Figure 2.

 

T =
[ 0  0.1431  0.2987  0.2752  0.4302  0.1315  0.1531  0.1302  0.0536 ]
[ 0  0.0464  0.1903  0.3200  0.3003  0.1101  0.1122  0.1130  0.0409 ]
[ 0  0.1124  0.0204  0.2162  0.3381  0.3176  0.1744  0.2696  0.0721 ]
[ 0  0.0951  0.0173  0.0291  0.2091  0.1918  0.1117  0.2178  0.0503 ]
[ 0  0       0       0       0       0       0.2750  0.0250  0.0773 ]
[ 0  0       0       0       0       0       0.1917  0.1083  0.0621 ]
[ 0  0       0       0       0       0       0.0083  0.0917  0.2833 ]
[ 0  0       0       0       0       0       0.0917  0.0083  0.1167 ]
[ 0  0       0       0       0       0       0       0       0      ]    (5)


i | Object | Ri     | Ci     | si+ = Ri + Ci | si- = Ri - Ci
1 | T      | 1.6156 | 0      | 1.6156        | 1.6156
2 | I      | 1.2330 | 0.3970 | 1.6300        | 0.8360
3 | D      | 1.5208 | 0.5267 | 2.0476        | 0.9941
4 | C      | 0.9223 | 0.8404 | 1.7627        | 0.0819
5 | E      | 0.3773 | 1.2777 | 1.6550        | -0.9005
6 | L      | 0.3621 | 0.7510 | 1.1131        | -0.3889
7 | S      | 0.3833 | 1.1181 | 1.5014        | -0.7348
8 | B      | 0.2167 | 0.9639 | 1.1806        | -0.7473
9 | F      | 0      | 0.7562 | 0.7562        | -0.7562

Table 1. Row-wise and column-wise sums of the components of T and the indices si+ and si-

[Figure 2. Final results of the analysis: a scatter plot of the relation index si- (vertical axis, "The Relation") against the position index si+ (horizontal axis, "The Position") for the objects T, I, D, C, E, L, S, B and F.]

The highest value of the relation confirms that the innovative nature of the applied structure (T) plays the role of primary failure cause. The value of the relation is also considerable for both the designer's decisions (D) and the investor's decisions (I), which indicates the important role of these stakeholders in the occurrence of the failure. The relation for the contractor's decisions (C) is roughly equal to zero, which suggests a neutral role of this stakeholder with regard to the failure: it is neither a cause nor an effect of the other considered objects. The negative values of the relation registered for the remaining objects confirm their role as effects of other objects. The highest value of the position is obtained for the designer's decisions (D); it may therefore be stated that this factor plays a key role among the considered factors. The failure event (F) plays the least important role in this respect, due to its low position value, while the other factors play a rather average role.


4. Conclusions

Reliable analysis of the course and causes of a building failure is a complex task, mainly because of the nature of building objects themselves and of the processes of their planning and erection. The stakeholders who take part in these processes can play an important role in this regard. This complexity can be addressed adequately through the application of appropriate decision support tools, and DEMATEL is undoubtedly such a tool. The results obtained for the sample analysis confirm the usefulness of the method for analysing the causes of building failures and their direct and indirect effects. The most important advantages of the approach include its methodological and numerical simplicity: it is easily implementable using an ordinary spreadsheet application. The approach is also flexible, as it can easily be combined with other suitable methods and concepts, such as fuzzy sets, sensitivity analysis and scenario analysis.

5. References

1. Fontela E. and Gabus A. (1976). The DEMATEL Observer. DEMATEL 1976 Report. Technical Report, Batelle Geneva Research Center: Geneva.

2. Dytczak M. (2008). Równoległe zastosowanie metod AHP i DEMATEL w wielokryterialnej analizie decyzji. In: Knosala R. (Ed.): Komputerowo zintegrowane zarządzanie. Vol.I, PTZP: Opole, pp 249-257.


Institutional learning and adaptation to global environmental change: A review of current practice from institutional, socio-ecological, and complexity approaches

A Espinosa16, Hull University Business School (UK) and Universidad de los Andes School of Management, Bogota, Colombia

G. I. Andrade17, E. Wills18 Universidad de los Andes School of Management, Bogota, Colombia

Abstract

This paper aims to contribute to current research on institutional change, organisational fitness and ecosystem management related to global environmental change and sustainability, by arguing for the need for a new multi-methodological OR approach to understand and observe socio-ecological and institutional systems and to support them in governing themselves in a sustainable way. We present an initial reflection on the ongoing construction of a conceptual framework, based upon a multi-paradigmatic soft OR approach, and an example of a preliminary identification of misfits between theory and practice in Colombia regarding community and institutional actions to mitigate global climate change risks.
Keywords: Socio-Ecological Systems, Resilience Thinking, Global Environmental Change, Adaptive Management, Implementation Misfits, Viable Systems, Social Networks, Institutional Theory.

1. Methodology/approach

Our research aim is to build an analytical framework for explaining how institutions and socio-economic systems adapt (or not) to global environmental changes triggered by climatic disruption: in particular we aim to observe and act upon internal vulnerabilities of socioeconomic and governance structures at the national, regional and local levels. We aim to investigate how some innovative concepts and tools suggested by researchers in the field could help people and institutions to move towards a scenario of climate change adaptation, such as: viability and resilience in socio-ecological systems; Complex Adaptive Systems; Adaptive and Multi-Level Governance; Institutional Diversity; and Institutional Isomorphism. Three theoretical approaches have inspired and focused this research. The first approach comes from institutional theory of change (Powell, 1991; Di Maggio & Powell, 1983; Scott, 1995; Fligstein, 2008). We propose that a jolt due to a major event such as climate change destabilizes established meanings, rules and practices (Greenwood, Suddaby, Hinings, 2002) at the local level. Jolts such as social upheavals (Tolbert & Zucker, 1996) or major environmental events (Meyer, Brooks, Goes, 1990) have been studied in previous institutional theory.

                                                            16 Email: [email protected] 17 Email: [email protected] 18 Email: [email protected]  


This jolt, for instance a major flooding or drought, may precipitate as a response the entrance of new players and actors, such as governmental agencies or NGOs, and the ascendance of new local entrepreneurs and organizations (Greenwood et al, 2002) who introduce new ideas and practices for collective action. New actors necessarily bring new perceptions, and usually during these processes problems are reframed and new questions arise. The social perception of environmental risks undergoes a significant change. In this process of change, new actors and organizations innovate, seeking new viable solutions to the global threat and the institutional misfits it creates. As a response, new practices and innovations in governance and organizational forms are tried. For these new practices to become widely adopted in the local community as new rules, norms and meanings, it is necessary that stakeholders get involved in a prior stage of sense-making (Weick, 1992) or "theorization" (Greenwood et al, 2002; Tolbert & Zucker, 1996) to understand what the implications of global change for local socio-ecological systems are. In this "theorization" phase, actors and stakeholders who are affected by an external threat propose and specify new categories and elaborate new meanings and chains of cause and effect of what is occurring (Suddaby, Greenwood, Hinings, 2002). Theorising is not a momentary act but one which requires sustained repetition to generate a shared understanding of the problem and of the proposed solution, especially when local responses are implemented. These new accounts of the changing reality, built through a process of collective sense-making, will simplify the "properties" of the new practices. Thus, this stage of "theorization" and negotiation requires a process of discussion and construction of new meanings by which localized deviations from prevailing rules and conventions are simplified and understood. At this stage, actors specify the institutional and organizational failures and propose a local response to overcome their vulnerabilities and strengthen their resilience. If the local community sees such new ideas and practices as more appropriate than previously existing ones, they may be re-institutionalized. According to Greenwood et al (2002), the proposed models must make "a transition from formulation to become a social movement at the local level to further develop an institutional imperative" (Strang & Meyer, 1993: 495). This re-institutionalization process requires new legitimacy support from stakeholders: not only the local community but also local and national governmental agencies, new NGOs that get involved, and traditional producers, among others. The new legitimacy base can be pragmatic (Suchman, 1995), due to the economic results, or moral, in the sense that it is the "right thing to do" in relation to environmental and social issues. If these new practices and organizations are successful they will diffuse in time and space to other local scenarios, creating an isomorphic effect. Full institutionalization at the national level will occur as the density of adoption of the new practices becomes cognitively institutionalized, that is to say the new practices become taken for granted by the relevant actors and decision-makers.
The second approach comes from socio-ecological management theories, especially those focusing on the resilience-thinking paradigm (Walker and Salt, 2006, Chapter 1), through which the social and ecological interdependences are revealed, as well as the misfits between theory and practice: these can be, for example, biophysical, located in the social systems and governance, or misfits of spatial or temporal scale (Cumming et al., 2006). Misfits are often institutional, that is, there is an inadequacy between the problems as identified or addressed at the policy and decision-making level and the response perceived or implemented by local actors. Moreover, the type of response will depend not only on the magnitude of the climate threat, but also on the socio-ecological system's own vulnerability, which may be intrinsic (determined by biophysical, institutional and social variables) or added (as a result of previous human intervention in the landscape) (Walker and Salt, 2006). In particular, responses may be affected by the socioeconomic and governance structures at the local level, and by the innovative or proactive abilities of the societies to transform the crisis. These ideas or new practices may be related to new organizational forms or new rules to extract resources, in response to the new contextual change.


contextual change. We state that this process of change occurs mainly at the local level, although it is influenced or triggered by up-scaled forces. The third approach comes from complexity and soft OR approaches. There has been an increased interest in holistic and complexity approaches to sustainability, and in particular in ideas from Complex Adaptive Systems and Viable Systems, to better explain issues of organisational transformation in socio-ecological systems (Paucar-Caceres & Espinosa, 2010). We have presented elsewhere our understanding of how complexity theory can support self-organisation and self-governance in communities, industries and governments within a socio-ecological system; we described analytical tools to support social transformations oriented towards achieving more sustainable governance in a SES, and offered multiple examples of applications of this approach to environmental management and sustainable development (Espinosa & Walker, 2010). By using such models and tools we can map the network of interacting agents, the rules of interaction and the different roles they take regarding sustainability of the SES, and learn about ways to make decisions and act upon them more effectively. We consider that by using this approach in an action research mode, we can raise the agents’ level of knowledge and awareness about misfits between theory and practice, and offer them criteria to jointly design more focused and effective responses to mitigate climate risks. We have also explained how, by using a multi-methodological approach based on action learning, we can support communities, industries and networks of people dealing with issues of sustainability (Espinosa & Porter, 2010), and in particular with issues of global climate change (Espinosa & Walker, 2010, Ch 6). We suggest that by redrawing organisational boundaries to enable institutions to respond to the main challenges in our socio-ecological systems, and by enabling clusters of self-organising units to work together as a coherent whole, with interactions based upon dynamic, co-evolving, rapid-response control loops (i.e. around critical global climate change risk responses), we can contribute to creating a more sustainable governance structure. In this research project, the action learning focuses on improving knowledge and information management, as well as group decision-making, by distributing information, promoting self-organisation and offering meta-systemic management tools for improved multi-level self-governance. Building on the strengths of these combined theories and analytical tools, we are in the process of detailing an interpretative framework to assess the vulnerability and resilience of socio-ecological systems, as well as the capability for adaptive responses from local, regional and national agents. The institutional approach to change helps us to explain the dynamics of interaction of social agents responding (or not) to climate risks; the socio-ecological approach explains the misfits between theory and practice that often explain the lack of effective social action. The complexity approach explains how social agents and institutions get organised to address the core risks and manage to act together more effectively and in a more timely manner.
Based on the theoretical perspectives inspiring us, our working hypothesis is that by strengthening cooperation and alternative self-management schemes that support the self-organization of community actors, these actors will be better equipped to deal with the risks to their socio-ecological system in a more effective and holistic way. The new framework combining these insights will help us to accomplish a better comprehension and management of socio-ecological systems in high-risk areas. The precise objective is to contribute to the design both of targeted interventions to reduce vulnerability to climate change and of regional and national policies for the mitigation of risks related to global climate change. More specifically, it will allow us to shape/diagnose the current socio-ecological system through:

Identification of the components of the socio-ecological system's vulnerability at the proper scales.


Institutional diagnosis of the environmental institutions involved either directly or indirectly: analysis of their capacity to react in a timely manner to the main issues of vulnerability.

Formal and informal networks of economic and social actors as well as interest groups, capable of participation in sustainable management projects.

Existing meta-systemic management mechanisms (e.g. decision-support systems, environmental management control systems; monitoring systems on global climate change risks).

Spaces for participation, dialogue and negotiation among interest groups and environmental authorities.

Having agreed on the elements of the organizational diagnosis and jointly identified possible courses of action, we will reflect from these new perspectives on the core situations of risk and vulnerability in Colombia and the required institutional and policy adjustments. We aim to test such a framework at different scales (local, regional, national) and produce recommendations for policy and institutional adaptation to encourage resilience at all levels.

In order to test our hypothesis and progress towards the development of a full application, we have started reflecting on a particular high-risk SES in Colombia, in the Andean eco-region. Lake Fúquene is one of the most relevant regions for the dairy industry in Colombia: its SES has suffered important changes due to global climate change and its survival is in clear danger. We reflect below on the first analysis and findings about this SES from our theoretical perspective, as well as on the nature of the misfits and required changes. We shall address the analysis at two spatial scales within the country: the national scale, at which public policy responses occur, and the local scale, focusing, as a start, on this specific SES. On the basis of this preliminary experience we aim to validate our analytical framework and apply it later to other SESs (e.g. regional ones), in order to experiment with innovative ways of dealing with major global climate change risks at different levels and scales.

2. Findings

Studies about environmental risk to socio-ecological systems at global and local scales abound, and usually offer information on the climate threat. The threat is usually assessed through modelling and projection of climate data (Gitay et al., 2001). In Latin America, the Andean region has been signalled as one with major risks. Climate change risks have been established for Colombia (Van der Hammen et al 2002) through downscaled climate projection models (IDEAM 2010; Pabón et al, 2010); Mulligan (2000) explains their impacts on hydrological processes. We consider, however, that risk studies should be complemented by the assessment of specific vulnerabilities (intrinsic and added) of socio-ecological systems. Nevertheless, methodologies to assess local vulnerabilities (social and institutional) are quite scarce and are currently under construction, especially regarding the relationships between social and ecological resilience. Following the suggestions of Adger (2000) in this respect, a model applied to Fúquene Lake is currently being constructed19 20. The preliminary analysis suggests that existing institutional arrangements and policies to prevent the increase of such risks are either inadequate or not operating as effectively and timely as they should. The case study analysis highlights the limitations of current management practice in the region (i.e. top-down approaches to deal with climate change risk management;

                                                            19 Franco, C L, Andrade, G I. (2010 – paper in progress). Linking biophysical variables and resilience proxies to address vulnerability to global environmental change in Lake Fúquene socio-ecological system. 20 A joint project is being carried out between Los Andes University – School of Management and the Wetlands Foundation in the Fúquene Lake and other high Andean wetlands, which supports an important dairy industry.  


dislocated views of ecological and social processes; inadequate understanding of the need for adaptation, etc.). In this first stage, the model is being used to explain the misfits that occur between the current institutional and governance arrangements at the local level in responding to the global threat of climate change, stressing the need for the development of innovative bottom-up approaches to cross-scale environmental management issues. Special focus is being given to making the best use of the resources, knowledge and understanding available, as well as to the identification of formal and informal networks of people and institutions (rules, norms and shared meanings) at each level. In a second stage the model will be used to explain how new local governance structures, organizations and practices emerge (or not) as new practices for collective action in response to the external threat. At this preliminary stage, we have identified some institutional misfits in policy implementation when countries and regions are aiming to prevent major risks of global climate change and decisions do not fit harmoniously with local response processes. This reinforces our hypothesis that a new approach to support institutional change is needed, although some limitations of the use of the concept of adaptation have been encountered (see Walker et al., 2004). These ongoing reflections confirm that more effective responses to climate change risks are urgently required in the context of our country (Colombia), and also that a change of approach is urgently needed. We have identified potential contributions from the institutional, socio-ecological and complexity approaches, especially for facing up to the misfits and inadequacies encountered. A trans-disciplinary approach using the above-mentioned theories has clear potential for understanding and analysing the process of institutional change and the network characteristics of contemporary social organisations dealing with global climate change risks and, by doing so, for improving their chances of success in overcoming current implementation misfits. Finally, a multi-level (multi-scale) adaptive-transformative management scheme is in the process of being defined, which will help to integrate the reciprocal interactions between national policy and sub-national environmental planning responses and local actions.

3. Research limitations/implications

This paper sets up the theoretical and methodological basis for designing a complex long-term research project that combines the power of institutional theory, complexity and cybernetic theory, and ecosystem management approaches. While institutional theory offers a clear way to address the dynamics of social interactions among key agents in a particular region, an eco-systemic management approach guides us in identifying the misfits between theory and practice, and the complexity approach offers criteria to support existing networks in improving their cohesiveness, synergies and self-governance for sustainability. A trans-disciplinary analytical framework is being constructed, and practical tools developed, aiming to intervene in current local management processes and in national policy-making for climate change scenarios. The paper does not yet explain or detail the practical implications of using these ideas, as the research project is still ongoing, but it clarifies the implications of moving to such an alternative framework for analysis and its usefulness for designing action research projects in the field.

4. References

1. Adger, N. (2000). Social and ecological resilience: are they related? Progress in Human Geography, 24, p. 347.

2. Andrade, G I, Franco, L. (2007). Lagunas de Fúquene, Cucunubá y Palacio. Un ecosistema estratégico bajo tensión. In: Franco, C L, Andrade, G I. (Eds.). Lagunas de Fúquene Cucunubá y Palacio: Conservación de la biodiversidad y manejo sostenible de un ecosistema lagunar andino. Instituto Humboldt. Bogota.

3. Cumming, G S, Cumming, D H M, Redman, C L. (2006). Scale mismatches in social-ecological systems: causes, consequences, and solutions, Ecology and Society, 11(1): 14. [online] URL: http://www.ecologyandsociety.org/vol11/iss1/art14/


4. Di Maggio, P, Powell, W W. (1983). The Iron cage revisited: Institutional isomorphism and collective rationality in organizational fields. American Sociological Review, 48, p. 147-160.

5. Espinosa, A, Walker, J. (2010 -in press). Complexity and Sustainability: Theory and Practice. Invited book, Book Series on Complexity, Imperial College. May 2010.

6. Espinosa, A., Porter, T (2010 - Forthcoming). Sustainability, Complexity and Learning: Insights from Complex Systems Frameworks. Invited paper, The Learning Organisation, Special Issue.

7. Fligstein, N. (2008). “Fields, Power and Social Skill: A Critical Analysis of the New Institutionalisms”, International Public Management Review, 9(11): 227-253.

8. Gitay, H, Brown, S, Easterling, W, Jallow, B. (2001). Ecosystems and their goods and services. In: McCarthy, J J, Canziani, O F, Leary, N A, Dokken D J, White K S (Eds.). Climate Change 2001: Impacts, adaptation, and vulnerability. Intergovernmental Panel on Climate Change (IPCC). Cambridge University Press, Cambridge.

9. Greenwood, R, Suddaby, R, Hinings, C R. (2002). Theorizing Change: The role of Professional Associations in the Transformation of Institutionalized Fields, Academy of management Journal, 45, 1, p. 58-80.

10. Mulligan, M. (2000). Downscaled Climate Change Scenarios for Colombia and their Hydrological Consequences. Advances in Environmental Monitoring and Modelling 1 (1), p. 3-35.

11. Paucar-Caceres, A., Espinosa, A. (2010). Management Science Methodologies In Environmental Management And Sustainability: Discourses And Applications. Accepted for publication May 2010, Journal of the Operational Research Society.

12. Powell, W W (1991). Expanding the scope of institutional analysis. In Powell W W & Di Maggio P J ( Eds.). The new institutionalism in organizational analysis. University of Chicago Press. Chicago.

13. Scott, R. (1995). Institutions and organizations. Thousand Oaks: Sage.

14. Strang, D, Meyer, J W (1993). Institutional conditions for diffusion. Theory and Society, 22, p. 487-511.

15. Suchman, M (1995). Managing legitimacy: Strategic and institutional approaches. Academy of Management Review, 20, pp. 571-611.

16. Tolbert, P S, Zucker, L G. (1996). Institutionalization of institutional theory. In: Clegg, S, Hardy, C and Nord, W. The handbook of organization studies, Thousand Oaks.

17. Van der Hammen, T, Pabón, J D, Gutiérrez H, Alarcón J C. (2002). Cambio Global en los ecosistemas de alta montaña en Colombia. In: C. Castaño. (Ed.). Paramos y Ecosistemas alto andinos de Colombia. Ministerio del Medio Ambiente / IDEAM / PNUD. Bogota.

18. Walker, B, Holling, C S, Carpenter S R, Kinzig A. (2004). Resilience, adaptability and transformability in social–ecological systems. Ecology and Society 9(2): 5. Available at [ http://www.ecologyandsociety.org/vol9/iss2/art5/]. Accessed on July 13th, 2010.

19. Walker, B, Salt, D. (2006). Living in a complex world. Island Press.

20. Weick, K (1995). Sensemaking in Organizations. Sage.


A polyhedral approach for solving two facility network design problem

F Hamid21 and Y K Agarwal

FPM Office Information Technology & Systems Department, Indian Institute of Management,

Prabandh Nagar, Off- Sitapur Road, Lucknow 226013, India  

Abstract

The paper studies the problem of designing telecommunication networks using transmission facilities of two different capacities. The point-to-point communication demands are met by installing a mix of facilities of both capacities on the edges at overall minimum cost. We consider 3-partitions of the original graph which results in smaller 3-node subproblems. The extreme points of this subproblem polyhedron are enumerated using a set of proposed theorems. We then compute the facets based on an approach called polarity theory. A theorem proposed in the literature is extended to translate the facets of the subproblem to those of the original problem. We have tested our approach on several randomly generated networks. The computational results show that 3-partition facets reduce the integrality gap, compared to that provided by 2-partition facets, by approximately 30-60%. Also there is a substantial reduction in the size of branch-and-bound tree if these facets are used.

Keywords: integer programming; polyhedral combinatorics; facets; telecommunications

1. Introduction

Since the last century telecommunications has penetrated every aspect of life. The nature of services and the volume of demand have changed with the introduction of new, sophisticated technologies. Private networks have gained an advantage over public switched networks in carrying the communications traffic of large organizations because of the economic and strategic benefits they offer (Magnanti et al, 1995). Customers lease private lines from the telephone company for their exclusive use and have the flexibility to reconfigure them. Customers select transmission facilities from a small set of alternatives (DS0, DS1, OC3, OC12, etc.). The tariffs of these facilities are complex and offer strong economies of scale. Economies of scale induce the use of a higher capacity (HC) facility rather than an equivalent number of lower capacity (LC) facilities to meet the flow requirement of an edge. The network design problem becomes “hard” to solve when more than one type of facility is involved, due to the complexity of the cost structure. The rest of this paper is organized as follows. The next section presents a formal description and mathematical formulation of the two-facility network design problem (TFNDP). Thereafter, we briefly review the literature on similar problems. The following section is devoted to the solution strategy. The computational results are presented subsequently. Finally we end with concluding remarks and the scope for future research.

2. Problem definition and formulation

We consider the network design problem (NDP) or network loading problem (NLP) which involves determining the mix of facilities of two capacities (high and low) on the edges of a given

                                                            21 E‐mail: [email protected]. Phone: +919005372677, Fax: +915222734025 


graph in order to satisfy the point-to-point demands at minimum cost. Applications of this problem and its variants arise frequently in the telecommunications industry, for both service providers and their customers. The multicommodity capacitated NDP we consider is defined on an undirected network

$G = (V, E)$, where $V$ is the set of nodes and $E$ the set of edges. The communication demands between origin-destination pairs are represented by the set of commodities $K$. Each commodity $k \in K$ has a demand $d^{k}$ that must flow between the origin $O(k)$ and the destination $D(k)$. Installing one LC facility on edge $(i,j)$ provides 1 unit of capacity at a cost $a_{ij}$, whereas installing a single HC facility on edge $(i,j)$ provides $C$ units of capacity at a cost $b_{ij}$. The TFNDP is formulated mathematically as the following mixed-integer programming model:

Minimize

$$\sum_{(i,j) \in E} \left( a_{ij} x_{ij} + b_{ij} y_{ij} \right) \qquad (1)$$

subject to

$$\sum_{j \in V} f_{ij}^{k} - \sum_{j \in V} f_{ji}^{k} =
\begin{cases}
d^{k} & \text{if } i = O(k) \\
-d^{k} & \text{if } i = D(k) \\
0 & \text{otherwise}
\end{cases}
\qquad \forall i \in V,\; k \in K \qquad (2)$$

$$\sum_{k \in K} \left( f_{ij}^{k} + f_{ji}^{k} \right) \le x_{ij} + C\, y_{ij} \qquad \forall (i,j) \in E \qquad (3)$$

$$x_{ij},\, y_{ij} \ge 0 \text{ and integer} \qquad \forall (i,j) \in E$$

$$f_{ij}^{k},\, f_{ji}^{k} \ge 0 \qquad \forall (i,j) \in E,\; k \in K$$

In the above formulation there are two kinds of variables: integral capacity variables $x_{ij}$ and $y_{ij}$, which define the number of LC and HC facilities loaded on the edge $(i,j)$, and continuous flow variables $f_{ij}^{k}$, which model the flow of commodity $k$ on edge $(i,j)$ in the direction $i$ to $j$. Constraints (2) correspond to the flow conservation constraints for each commodity at each node. Capacity constraints (3) model the requirement that the total flow (in both directions) on an edge cannot exceed the capacity installed on that edge.
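As an illustration of formulation (1)-(3), the following is a minimal sketch in Python using the PuLP modelling library. The paper's own implementation uses Visual C++ and CPLEX; the instance data, cost values and the capacity $C$ below are invented purely for illustration.

```python
import pulp

# Toy data (illustrative only): 4 fully connected nodes, two commodities.
V = [0, 1, 2, 3]
E = [(i, j) for i in V for j in V if i < j]
C = 6                                      # units of capacity per HC facility
a = {e: 10 for e in E}                     # cost of one LC facility on edge e
b = {e: 40 for e in E}                     # cost of one HC facility on edge e
demand = {(0, 3): 5, (1, 2): 7}            # (O(k), D(k)) -> d^k

def nbrs(i):
    return [j for (u, j) in E if u == i] + [u for (u, j) in E if j == i]

prob = pulp.LpProblem("TFNDP", pulp.LpMinimize)
x = pulp.LpVariable.dicts("x", E, lowBound=0, cat="Integer")   # LC facilities per edge
y = pulp.LpVariable.dicts("y", E, lowBound=0, cat="Integer")   # HC facilities per edge
f = pulp.LpVariable.dicts(
    "f", [(k, i, j) for k in demand for i in V for j in nbrs(i)], lowBound=0)

# Objective (1): minimise total installation cost.
prob += pulp.lpSum(a[e] * x[e] + b[e] * y[e] for e in E)

# Flow conservation (2) for each commodity at each node.
for k, d in demand.items():
    for i in V:
        rhs = d if i == k[0] else (-d if i == k[1] else 0)
        prob += (pulp.lpSum(f[(k, i, j)] for j in nbrs(i))
                 - pulp.lpSum(f[(k, j, i)] for j in nbrs(i)) == rhs)

# Capacity (3): total flow in both directions bounded by installed capacity.
for (i, j) in E:
    prob += (pulp.lpSum(f[(k, i, j)] + f[(k, j, i)] for k in demand)
             <= x[(i, j)] + C * y[(i, j)])

prob.solve()
print(pulp.LpStatus[prob.status], pulp.value(prob.objective))
```

Solving this toy instance returns an optimal mix of LC and HC facilities per edge; the polyhedral machinery described next strengthens the LP relaxation of exactly this model.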

3. Literature survey

To solve the NP-hard capacitated NDPs, the approaches in the literature can be broadly classified into the following classes: relaxations, simplex-based cutting plane methods, polyhedral-based approaches and approximation algorithms/heuristics. We will focus here only on polyhedral-based approaches, since this is the approach we have followed. However, for a comprehensive survey on NDPs the reader can refer to any of these papers: (Minoux, 1989; Balakrishnan et al, 1997; Gendron et al, 1999).

In (Magnanti et al, 1993), the authors consider single facility NDP, develop facets and completely characterize the convex hulls of the feasible solutions for two subproblems. The polyhedral properties of single facility NDP are also studied by (Agarwal, 2006; Agarwal, 2009). (Magnanti et al, 1995) study polyhedral properties of the two-facility undirected NDP and introduce three


basic classes of valid inequalities: cut inequalities, 3-partition inequalities and arc residual capacity inequalities. (Atamtürk, 2002) discusses the polyhedral properties of the flow formulation for a special network loading problem over a “cutset". (Bienstock et al, 1998) compare cutting plane algorithms based on the flow and capacity formulation. (Günlük, 1999) proposes new families of mixed-integer rounding inequalities. (Barahona, 1996) presents a cutting plane algorithm based on cut inequalities.

(Bienstock and Günlük, 1996) study polyhedra based on directed demand, flow costs, and existing capacities. In (Avella et al, 2007), a new class of tight metric inequalities is introduced, that completely characterize the convex hull of the integer feasible solutions of the NDP.

A detailed discussion on polyhedral terminology can be found in (Nemhauser and Wolsey, 1988).

4. Solution approach and strategy

Our solution approach is motivated by a theorem proposed in (Agarwal, 2006) according to which a facet inequality of the k -node subproblem resulting from a k -partition translates into a facet of the original problem for single facility NDP. We extend this theorem for the TFNDP and use it to translate the facets of 3-node TFNDP to those of the original TFNDP.

The strategy that we adopt to generate the facets of the TFNDP is summarized as follows:

Step 1. Shrink the original NDP graph/problem into a 3-node graph/subproblem.

Step 2. Characterize the polyhedron of the 3-node subproblem and enumerate the extreme points.

Step 3. Compute the facets of the polyhedron using polarity theory.

Step 4. Translate the facets of the subproblem to the facets of the original NDP using the extended theorem mentioned above.

Steps 1-3 are discussed in detail below.
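Before turning to the details, the demand aggregation behind Step 1 can be illustrated with a short sketch; the function name and the toy instance are ours, not from the paper.

```python
# A short sketch (assumed toy data) of how point-to-point demands are aggregated
# when the node set is partitioned into three subsets, giving a 3-node subproblem.
def shrink_demands(partition, demands):
    """partition: list of three node subsets; demands: {(u, v): d} on original nodes.
    Returns aggregated demands between the three super-nodes 0, 1, 2."""
    block = {v: idx for idx, subset in enumerate(partition) for v in subset}
    agg = {}
    for (u, v), d in demands.items():
        bu, bv = block[u], block[v]
        if bu != bv:                        # intra-subset demand disappears after merging
            key = (min(bu, bv), max(bu, bv))
            agg[key] = agg.get(key, 0) + d
    return agg

# Example: 6-node instance collapsed by the partition {0,1}, {2,3}, {4,5}.
partition = [{0, 1}, {2, 3}, {4, 5}]
demands = {(0, 2): 4, (1, 5): 3, (0, 1): 9, (3, 4): 2}
print(shrink_demands(partition, demands))   # {(0, 1): 4, (0, 2): 3, (1, 2): 2}
```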

Shrinking the original NDP graph

We shrink the original NDP graph by considering a partition of the set of nodes into three subsets. Nodes within the same subset of the partition are merged into a single node. The demand between pairs of nodes that are in different subsets is aggregated. Each partition gives a different 3-node subproblem of the same original NDP. Therefore, the number of 3-node subproblems increases exponentially with the number of nodes, since the number of partitions grows exponentially. Presently, we consider all the partitions of the given NDP graph to generate violated facets. However, this approach will not be practically feasible for problems of larger size, as searching an exponential number of partitions will drastically reduce the efficiency of our approach. To avoid this, we plan, in future, to find only those partitions that will produce violated facets, and this will be the basis for solving the separation problem.

Enumeration of the extreme points of the polyhedron

We propose an algorithm that systematically enumerates all the extreme points of the 3-node subproblem polyhedron. Using a set of theorems we can identify whether a given solution is an extreme point or not. Some of these theorems are presented below. It is found that there are at most 108 extreme points of this polyhedron, depending on the demands between the three nodes.

Theorem 1 A non minimal solution cannot be an extreme point.

Theorem 2 Simultaneous diversion of traffic from two or more edges will not produce an extreme point.


Theorem 3 Let $\theta_{ij} = \max(s_{ij}, x_{ij})$, where $s_{ij}$ is the amount of spare capacity available on edge $(i,j)$ of the 3-node problem and $x_{ij}$ is the number of LC facilities installed on the edge. A solution will not be an extreme point solution if $\theta_{ij} > 0$ for some $(i,j)$.

5. Computation of facets of the subproblem

We use polarity theory to find only the most violated facets of the 3-node subproblem polyhedron, P , avoiding generation of all the facets for a partition. The extreme points of the polar polyhedron

give the facets of polyhedron P and vice-versa (Nemhauser and Wolsey, 1988). This was done by solving the following subproblem:

$$\text{Minimize } z_p = \sum_{i=1}^{6} \pi_i \bar{x}_i
\quad \text{subject to} \quad
\sum_{i=1}^{6} \pi_i x_i^{k} \ge 1 \;\; \text{for each extreme point } x^{k} \text{ of } P,
\qquad \pi_i \ge 0, \;\; i = 1, \ldots, 6,$$

where $\bar{x}$ is the aggregated capacity across the subsets of a given partition (3-node problem), $\pi$ is the violated facet obtained, and $x^{k}$ is the $k$-th extreme point of the polyhedron $P$. The constraints correspond to the extreme points of the polyhedron $P$ obtained in the previous step. This is a simple linear programming problem with a maximum of 108 constraints over six variables. If $z_p < 1$, a violated facet is found for the given partition with respect to the given capacity aggregation. The algorithm to solve the problem can therefore be summarized as follows:

Step 1. Solve LP relaxation

Step 2. Generate 3-node subproblem

Step 3. List extreme points

Step 4. Compute violated facet

Step 5. Add it to the LP problem and re-solve

Step 6. Go back to Step 2 to generate a new 3-node subproblem. If all the partitions have been scanned for generating violated facets, stop.
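A minimal sketch of the separation step (Step 4 of this loop) is given below, assuming the extreme points have already been enumerated and using SciPy's LP solver rather than the authors' implementation; all numerical data are illustrative.

```python
# Given the extreme points of the 3-node subproblem polyhedron and the aggregated
# fractional capacities from the current LP solution, solve the small polar LP over
# the six facet coefficients pi.
import numpy as np
from scipy.optimize import linprog

def most_violated_facet(extreme_points, agg_capacity):
    """extreme_points: array (n_points, 6); agg_capacity: length-6 vector x-bar.
    Returns (pi, z_p); pi.x >= 1 is violated by x-bar whenever z_p < 1."""
    X = np.asarray(extreme_points, dtype=float)
    c = np.asarray(agg_capacity, dtype=float)          # objective: minimise pi . x-bar
    res = linprog(c, A_ub=-X, b_ub=-np.ones(len(X)),    # pi . x^k >= 1 for every k
                  bounds=[(0, None)] * X.shape[1], method="highs")
    return res.x, res.fun

# Illustrative call with a handful of made-up extreme points.
pts = [[1, 0, 1, 0, 1, 0], [0, 1, 0, 1, 0, 1], [2, 0, 0, 1, 1, 0]]
pi, z_p = most_violated_facet(pts, [0.4, 0.2, 0.6, 0.1, 0.5, 0.3])
print(pi, z_p)   # if z_p < 1, pi defines a violated inequality for this partition
```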

6. Computational study

We have tested our approach on several randomly generated networks of sizes 4 to 10 nodes with fully connected topology. The algorithm was implemented in Visual C++ 6.0 and callable library of CPLEX 7.0 was used for optimization. The value of C, ratio of HC to LC facility cost and the demand between node-pairs were treated as parameters and their values were varied across experiments. We obtained the lower bound values for: (1) LP relaxation (LP), (2) LP relaxation with 2-partition facets (LP2F), and (3) LP relaxation with 3-partition facets (LP3F). Our objective of this study is to observe the effectiveness of the 3-partition facets in getting tighter lower bound. The performance measures used are:

1. $Z_P$ %: the optimal objective function value of problem $P$ expressed as a percentage of that of the IP problem $= (Z_P / Z_{IP}) \times 100$.

2. Gap reduction (GR) %: the reduction in the integrality gap between LP2F and IP due to the introduction of the 3-partition facets $= ((Z_{LP3F} - Z_{LP2F}) / (Z_{IP} - Z_{LP2F})) \times 100$.

3. Node count reduction %: the reduction in the node count of the branch-and-bound (B&B) tree of IP2F due to the 3-partition facets $= ((BB_{IP2F} - BB_{IP3F}) / BB_{IP2F}) \times 100$.
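As a quick worked check of these measures, using the values from row 1 of Table 1:

```python
# Z-values are already expressed as percentages of Z_IP, so Z_IP corresponds to 100.
def gap_reduction(z_lp2f_pct, z_lp3f_pct):
    return (z_lp3f_pct - z_lp2f_pct) / (100.0 - z_lp2f_pct) * 100.0

def node_count_reduction(bb_ip2f, bb_ip3f):
    return (bb_ip2f - bb_ip3f) / bb_ip2f * 100.0

print(round(gap_reduction(78.7, 96.2), 1))        # 82.2, matching the table
print(round(node_count_reduction(5, 1), 1))       # 80.0, matching the table
```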

The computational results (Table 1) show that 3-partition facets reduce the integrality gap, compared to that provided by 2-partition facets, by approximately 30-60%. Also there is a substantial reduction in the size of the branch-and-bound tree if these facets are used.

| S. No. | Nodes | C | HC/LC | Demand Min | Demand Max | ZIP | ZLP % | ZIP2F % | ZIP3F % | Gap Redc. % | B&B nodes IP2F | B&B nodes IP3F | Node Redc. % |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 4 | 6 | 3.0 | 1 | 2 | 2965 | 63.5 | 78.7 | 96.2 | 82.2 | 5 | 1 | 80.0 |
| 2 | 5 | 10 | 3.0 | 1 | 4 | 4620 | 64.2 | 85.9 | 96.7 | 76.6 | 10 | 3 | 70.0 |
| 3 | 6 | 10 | 3.0 | 1 | 5 | 7170 | 75.1 | 91.6 | 97.6 | 71.4 | 36 | 16 | 55.5 |
| 4 | 7 | 10 | 3.5 | 1 | 3 | 8498 | 67.2 | 86.1 | 92.7 | 47.5 | 129 | 46 | 64.3 |
| 5 | 7 | 10 | 3.5 | 1 | 4 | 9905 | 71.5 | 87.8 | 94.8 | 57.4 | 217 | 59 | 72.8 |
| 6 | 8 | 6 | 3.5 | 1 | 3 | 15440 | 81.5 | 88.2 | 92.9 | 39.5 | 438 | 675 | -54.1 |
| 7 | 8 | 8 | 3.5 | 1 | 3 | 12240 | 77.1 | 86.6 | 92.0 | 40.6 | 370 | 191 | 48.4 |
| 8 | 8 | 10 | 3.5 | 1 | 3 | 10843 | 69.6 | 84.5 | 92.3 | 50.1 | 1057 | 491 | 53.5 |
| 9 | 9 | 10 | 3.5 | 1 | 3 | 13057 | 73.6 | 85.0 | 91.9 | 45.7 | 2231 | 1385 | 37.9 |
| 10 | 9 | 10 | 5.0 | 1 | 3 | 19205 | 74.9 | 84.7 | 91.6 | 44.9 | 7591 | 3154 | 58.4 |
| 11 | 9 | 10 | 6.5 | 1 | 3 | 23946 | 74.7 | 83.0 | 88.4 | 31.8 | 5485 | 4331 | 21.0 |
| 12 | 10 | 20 | 8.0 | 1 | 5 | 31375 | 70.8 | 84.8 | 91.0 | 40.8 | 10094 | 4312 | 57.3 |

Table 1. Average performance measures

7. Conclusion

In this paper we addressed the problem of network design with facilities of two different capacities. We fully characterized and enumerated all the extreme points of the 3-node subproblem polyhedron. A new approach for computing facets is introduced and a new family of facets is identified. The 3-partition based facets strengthen the linear programming formulation to a great extent. Computational results show that these facets significantly reduce the integrality gap and also the size of the branch-and-bound tree. Thus our approach provides both a very good lower bound and a starting point for branch-and-bound. The results ensure that our approach can be an effective tool for solving real life problems. The future work is to solve the separation problem, computational study on larger networks and develop viable heuristic to solve larger real life problems. It is also worthwhile to explore whether 4-partition based facets can further tighten the lower bound.

8. References

1. Agarwal Y K (2006). k-Partition-based facets of the network design problem. Networks 47: 123-139.

2. Agarwal Y K (2009). Polyhedral structure of the 4-node network design problem. Networks 54: 139-149.

3. Atamtürk A (2002). On capacitated network design cut-set polyhedra. Math Program 92: 425-437.

4. Avella P, Mattiab S and Sassanob A (2007). Metric inequalities and the network loading problem. Discrete Optim 4: 103-114.

5. Balakrishnan A, Magnanti T L and Mirchandani P (1997). Network design. In: Dell'Amico M, Maffioli F and Martello S (eds). Annotated Bibliographies in Combinatorial Optimization. John Wiley and Sons: New York, pp 311-334.

6. Barahona F (1996). Network design using cut inequalities. SIAM J Optim 6: 823-837.

7. Bienstock D, Chopra S, Günlük O and Tsai C (1998). Minimum cost capacity installation for multicommodity network flows. Math Program 81: 177-199.

8. Bienstock D and Günlük O (1996). Capacitated network design – polyhedral structure and computation. INFORMS J Comput 8: 243-259.

9. Gendron B, Crainic T G and Frangioni A (1999). Multicommodity capacitated network design. In: Sansò B and Soriano P (eds). Telecommunications Network Planning. Kluwer Academic Publishers: Massachusetts, pp 1-19.

10. Günlük O (1999). A branch-and-cut algorithm for capacitated network design problems. Math Program 86: 17-39.

11. Magnanti T L, Mirchandani P and Vachani R (1993). The convex hull of two core capacitated network design problems. Math Program 60: 233-250.

12. Magnanti T L, Mirchandani P and Vachani R (1995). Modeling and solving the two-facility capacitated network loading problem. Oper Res 43: 142-157.

13. Minoux M (1989). Network synthesis and optimum network design problems: models, solution methods and applications. Networks 19: 313-360.

14. Nemhauser G L and Wolsey L A (1988). Integer and Combinatorial Optimization. Wiley: New York.


Optimisation of prices across a retail fuel network

B. Jenkins, T. Liptrot and D. McCaffrey

KSS Ltd, Manchester, UK.

Abstract

Optimisation opportunities exist across a network of fuel retail sites by trading off one site against another. However, operational factors make it difficult for retailers to use optimisation at this level. We set out a methodology by which network optimisation can be implemented and show how this sits as a supervisory level on top of a system which is already performing site level price optimisation.

1. Introduction

Retail price optimisation is well established in the automotive fuel market. Commercial systems such as KSS PriceNet, as well as in-house developed systems, are in day-to-day operational use by a large number of fuel retailers around the world. These systems help the retailer to set better prices in the deregulated, highly competitive fuel market.

Such systems consist of three main components: a demand model to forecast sales volume as a function of price; a rules engine to evaluate pricing strategies in the form of differential ranges to be satisfied by own prices relative to competitor prices; and an optimisation engine to identify own prices which maximise some objective (typically gross profit) subject to meeting a minimum sales target and satisfying the price constraints specified by the rules engine.

Price optimisation opportunities are present at different levels in a spatial and temporal hierarchy of pricing decisions. For a given site, one can firstly trade off periods of time and secondly fuel grades with differing levels of price sensitivity against one another. Thirdly across a network of sites, one can trade off sites of differing price sensitivity against one another. In all three cases, the aim is to transfer volume from time periods, grades and sites with lower price sensitivity to those with higher price sensitivity, where this volume can be ‘purchased’ (via price discounting) at lower cost in margin. If this is done systematically then the retailer can achieve the same overall volume sales at higher gross profit.

Sites are usually priced independently of one another, with price changes on a given site being triggered by changes in the prices at nearby competitor sites. The first two optimisation opportunities, trading off time periods and fuel grades against one another, are site specific and can be implemented directly, using existing price optimisation systems, through site level price changes in response to short term changes in the external data.

The third one, trading off one site against another, requires coordination of prices across multiple sites and cannot be implemented through direct site level price changes, since it is generally not acceptable for price changes at one site to immediately affect prices at other sites across the network.

This paper sets out a methodology by which optimisation across sites can be implemented. This sits at a supervisory level on top of a system which is already performing site level price optimisation.


2. Site optimisation methodology

In this section we give a simplified overview of the methodology of optimisation for fuel pricing.

Suppose that a given retailer has $S$ sites, indexed $s = 1, \ldots, S$, and that there are $R$ sites belonging to other retailers, indexed $r = 1, \ldots, R$. The sites belonging to the given retailer will be referred to as own sites, and those belonging to other retailers as competitor sites. Each own site faces competition from some subset of the competitor sites. This subset may include other own sites. Suppose there are $G$ grades of fuel for sale at own and competitor sites, indexed $g = 1, \ldots, G$.

Let the sales volume in time period $t$ at site $s$ for grade $g$ be denoted $v_{tsg}$. Then we can model sales in the form

$$v_{tsg} = f\big(v_{(t-1)sg},\, p_{tsg},\, \mathbf{q}_{tsg}\big),$$

where $v_{(t-1)sg}$ represents previous sales (i.e. sales in period $t-1$) for the same grade and site, $p_{tsg}$ represents the current price of grade $g$ at site $s$ and $\mathbf{q}_{tsg}$ represents the current prices of the same and competing grades at competing sites. See e.g. references (Araman and Caldentey, 2009), (Bitran et al, 1998), (Bitran and Caldentey, 2003), (Gallego and van Ryzin, 1997), (Krasteva et al, 1994), (Singh and Bennavail, 1989 and 1993), (Singh, 1990 and 1991) for details of the form and estimation of such models. Typically the function $f$ is a log-log or log-linear model in which the coefficients of the price terms are called elasticities. The coefficient of the own price term is the direct elasticity and measures the response of volume to a change in own price. The coefficients of the competitor price terms are called cross elasticities and measure the response of volume to changes in competitor prices.
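For concreteness, a log-log specification of the kind referred to above could be written as follows; the notation here is ours, as the abstract does not give the exact model:

```latex
\ln v_{tsg} = \alpha_{sg} + \lambda_{sg}\,\ln v_{(t-1)sg}
            + \beta_{sg}\,\ln p_{tsg}
            + \sum_{r,h} \gamma_{sg,rh}\,\ln q_{trh}
```

where $\beta_{sg}$ is the direct (own-price) elasticity and the $\gamma_{sg,rh}$ are cross elasticities with respect to grade $h$ at competing site $r$.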

Let the direct sales costs (i.e. cost of product, transport and storage) in time period $t$ at site $s$ for grade $g$ be denoted $c_{tsg}$, and let the applicable sales tax rate be denoted $\tau_g$. Then the gross profit generated from the sale of grade $g$ at site $s$ in time period $t$ can be modelled as

$$\pi_{tsg} = v_{tsg}\left(\frac{p_{tsg}}{1+\tau_g} - c_{tsg}\right).$$

The retailer usually wishes to maintain a certain price position relative to competitor brands, and also to maintain certain minimum margin levels. These are modelled as a set of maximum and minimum own price limits, specified in terms of current costs and competitor prices. They define allowed ranges for own prices which change when competitor prices or costs change. For a given site and grade we assume that there are $n_{sg}$ such constraints, index them by $l$ and model them as

$$g_{tsgl}\big(p_{tsg},\, c_{tsg},\, \mathbf{q}_{tsg}\big) \le 0, \qquad l = 1, \ldots, n_{sg},$$

where $g_{tsgl}$ is some linear function of the own price, the cost and the competing prices for site $s$ and grade $g$.


We assume that the objective is to maximise gross profit, in which case the retailer usually specifies a minimum sales volume target by site. To simplify the discussion, we assume that there is a single minimum volume target $T_{s}$ for sales over all grades in period $t$ at site $s$. This is modelled in the form

$$\sum_{g} v_{tsg} \ge T_{s}.$$

Then we can solve the following optimisation problem to maximise gross profit:

$$\max_{p} \; \sum_{s}\sum_{g} \pi_{tsg}
\quad \text{subject to} \quad
\sum_{g} v_{tsg} \ge T_{s} \;\; \forall s,
\qquad
g_{tsgl}\big(p_{tsg},\, c_{tsg},\, \mathbf{q}_{tsg}\big) \le 0 \;\; \forall s, g, l.$$

This is solved using non-linear optimisation techniques; see e.g. (Bazaraa et al, 1993), (Bhatti, 2000), (Gill et al, 1981) and (Nocedal and Wright, 1999) for further details. A similar optimisation can be formulated to maximise revenue or volume, with a maximum volume constraint or a minimum gross profit constraint respectively. The discussion in the next section about optimising across sites applies equally to these alternative optimisation objectives. The above description is typical of current implementations of fuel price optimisation. The main point to note is the following. Apart from exceptional cases where one own site appears in the set of competing sites for a second own site, the main linkage between own prices lies in the volume constraints, which are set on a site-by-site basis. So prices for different grades at the same site are dependent on one another, but prices at different sites are generally set independently by the optimiser, in the sense that prices at one site can be changed without impacting the optimum prices at other sites.
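A compact sketch of this site-level optimisation for a single site with two grades is given below, using a log-log demand curve and SciPy's SLSQP solver; every number and the demand form are assumed for illustration and are not taken from the paper.

```python
import numpy as np
from scipy.optimize import minimize

cost = np.array([1.05, 1.12])          # direct cost per litre for two grades (assumed)
comp = np.array([1.32, 1.40])          # reference competitor prices (assumed)
tax = 0.20                             # sales tax rate (assumed)
base = np.array([20000.0, 8000.0])     # reference daily volumes at p = comp (assumed)
elasticity = np.array([-8.0, -6.0])    # own-price elasticities (assumed)
target = 26000.0                       # minimum total daily volume for the site

def volume(p):
    # log-log demand around the competitor reference price
    return base * (p / comp) ** elasticity

def neg_profit(p):
    return -np.sum(volume(p) * (p / (1 + tax) - cost))

cons = [{"type": "ineq", "fun": lambda p: np.sum(volume(p)) - target}]
bounds = [(c - 0.05, c + 0.05) for c in comp]   # stay within 5p of each competitor price

res = minimize(neg_profit, x0=comp.copy(), bounds=bounds, constraints=cons,
               method="SLSQP")
print(res.x, -res.fun)    # optimal prices and the corresponding gross profit
```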

This is optimal on a site-by-site basis, but sub-optimal for the whole network. A set of network optimal prices can be calculated by adding one additional constraint to the above problem, namely

$$\sum_{s}\sum_{g} v_{tsg} \ge V_{\text{net}},$$

where $V_{\text{net}}$ is some minimum network sales target (necessarily greater than the sum of the site targets $\sum_{s} T_{s}$).

This bigger problem could be solved using existing systems. However, its output would result in prices at one site changing in response to competitor price changes at another site. This is not acceptable to most fuel retailers, in that it conflicts with their desire to maintain a stable price image relative to the local competitors at a site. It also requires a big increase in management effort if changes in data at one site require propagation of price changes to other sites. Lastly, fuel sales data is very noisy, so to be robust a network-wide optimisation requires a greater level of noise filtering than that which is required for day-to-day price setting.


3. Network optimisation methodology

In this section we outline an approach to network optimisation which addresses the above three objections and which is consistent with the existing approach to site level optimisation. This sits as a supervisory level on top of the site level optimisation and works as follows.

At regular intervals, say monthly, calculate average competitor prices and costs over the recent past, e.g. over the past two weeks. Denote these as $\bar{q}_{sg}$ and $\bar{c}_{sg}$ respectively. We can then use the existing demand model to estimate how average sales $\bar{v}_{sg}$ and gross profit $\bar{\pi}_{sg}$ will vary as a function of variations in average own prices $\bar{p}_{sg}$, given these recent average levels of competitor prices and costs. These estimates take the form

$$\bar{v}_{sg} = f\big(\bar{p}_{sg},\, \bar{q}_{sg}\big), \qquad \bar{\pi}_{sg} = \bar{v}_{sg}\left(\frac{\bar{p}_{sg}}{1+\tau_g} - \bar{c}_{sg}\right).$$

We can use the existing price constraint relationships to set limits on the allowed range of average own prices in terms of the recent average levels of competitor prices and costs. These can be represented in the form

$$g_{sgl}\big(\bar{p}_{sg},\, \bar{c}_{sg},\, \bar{q}_{sg}\big) \le 0, \qquad l = 1, \ldots, n_{sg}.$$

We calculate a total network volume target $T_{\text{net}}$ as the sum of the current site level volume targets:

$$T_{\text{net}} = \sum_{s} T_{s}.$$

Note that generally each of the site level volume targets is active and so will equal the current average volume per unit time period being achieved at that site. Thus $T_{\text{net}}$ equals the current total volume per unit time period being achieved over the whole network.

Now calculate, across all sites and grades in the network, the set of average own prices which maximise average gross profit subject to meeting the total network volume target $T_{\text{net}}$. This takes the form

$$\max_{\bar{p}} \; \sum_{s}\sum_{g} \bar{\pi}_{sg}
\quad \text{subject to} \quad
\sum_{s}\sum_{g} \bar{v}_{sg} \ge T_{\text{net}},
\qquad
g_{sgl}\big(\bar{p}_{sg},\, \bar{c}_{sg},\, \bar{q}_{sg}\big) \le 0 \;\; \forall s, g, l.$$

Note that we have dropped the site level volume targets from this, so the average prices at each site are free to move within the ranges given by the price constraints.


Denote the solution to this network level optimisation of average prices as $\bar{p}^{*}_{sg}$ and calculate the corresponding set of average sales volumes per unit time period, site and grade at this set of optimal average prices. These are given by

$$\bar{v}^{*}_{sg} = f\big(\bar{p}^{*}_{sg},\, \bar{q}_{sg}\big).$$

Now the total volume constraint is always active (for any practical set of price constraints), so

$$\sum_{s}\sum_{g} \bar{v}^{*}_{sg} = T_{\text{net}}.$$

Now set new site level volume targets

$$T^{\text{new}}_{s} = \sum_{g} \bar{v}^{*}_{sg}.$$

It follows that the sum of these new site level volume targets equals the sum of the previous set:

$$\sum_{s} T^{\text{new}}_{s} = \sum_{s} T_{s} = T_{\text{net}}.$$

Now use these new site level volume targets within the daily optimisation of actual (rather than average) own prices over the next month.

The process of calculating a new set of site level volume targets is repeated every month, or at whatever frequency the process is being implemented. At each monthly network optimisation step, the total network volume target is not changed from what was planned; what changes is the allocation of that total target volume between the site level volume targets.
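The supervisory loop just described can be sketched as follows for a toy three-site, single-grade network; all data, the log-log average demand model and the solver choice are illustrative assumptions rather than details taken from the paper.

```python
import numpy as np
from scipy.optimize import minimize

# Recent two-week averages for an assumed 3-site, single-grade network.
comp = np.array([1.30, 1.34, 1.28])           # average competitor price per site
cost = np.array([1.04, 1.06, 1.03])           # average direct cost per site
base = np.array([15000.0, 9000.0, 22000.0])   # average volume at p = comp
elast = np.array([-9.0, -4.0, -7.0])          # own-price elasticity per site
tax = 0.20
old_targets = base.copy()                     # current (active) site volume targets
T_net = old_targets.sum()                     # fixed total network volume target

def avg_volume(p):
    return base * (p / comp) ** elast

def neg_network_profit(p):
    return -np.sum(avg_volume(p) * (p / (1 + tax) - cost))

cons = [{"type": "ineq", "fun": lambda p: np.sum(avg_volume(p)) - T_net}]
bounds = [(c - 0.04, c + 0.04) for c in comp]   # price-position limits per site

res = minimize(neg_network_profit, x0=comp.copy(), bounds=bounds,
               constraints=cons, method="SLSQP")
new_targets = avg_volume(res.x)               # volume reallocated towards elastic sites
print(np.round(new_targets), round(new_targets.sum()), round(T_net))
```

The printout confirms that the new targets still sum to the fixed network total, while volume is shifted towards the more price-sensitive sites, which is the kind of behaviour reported in Figure 1.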

4. Results

In tests, comparing forecast gross profit at network optimised site volume targets versus forecast gross profit at existing non-optimised volume targets, the above methodology yields between 0.5% and 2% improvement in gross profit across different fuel retail networks and at different points in time. This is in addition to the gross profit improvement already being obtained from site level optimisation. For one particular network at which this methodology has been applied Figure 1 shows the change in volume target against elasticity by site for one month.


[Figure 1: chart “Elasticity vs volume target change”, plotting site elasticity against the % change in site volume target.]

Figure 1: The change in site volume target suggested by network volume optimisation as a function of the site elasticity.

It can be seen that within a fixed total network volume target, the site volume target is generally being increased at higher (i.e. more negative) elasticity sites and decreased at lower elasticity sites. The other factors being taken into account by the network optimiser are the recent average margin at each site and the recent average position of each site within its allowed price range relative to competitors.

5. Conclusions

At each monthly interval, the network optimiser compares price sensitivity, costs and competitive price position across sites and transfers volume target to sites where volume can be obtained by the site level optimisation with the smallest possible reductions in gross profit. It balances this within the fixed total network volume target by removing volume target from sites where it can be given up in return for the largest possible increase in gross profit. The net result is the same total network volume delivered at higher gross profit.

This approach of optimising the site volume targets and then feeding them down to be used in daily optimisation of site level prices addresses the three objections to network optimisation outlined above. The prices on different sites are only linked via optimisation once a month and it is the site volume targets rather than the prices which are changed in a coordinated fashion across the whole network. The timing of site level price changes is still controlled by the site level optimisation in response to changes in the data at that particular site. Also, the use of fortnightly average data in the generation of volume targets increases the robustness of the solution to noisy data.

6. References

1. Araman, V. and R. Caldentey (2009). Dynamic pricing for non-perishable products with demand learning. Operations Research, 57:1169-1188


2. Bazaraa, M. S., H.D. Sherali and C.M. Shetty (1993). Nonlinear Programming: theory and algorithms, 2nd ed., John Wiley & Sons

3. Bhatti, M.A. (2000). Practical Optimisation Methods with Mathematical Applications. Springer-Verlag

4. Bitran, G., R. Caldentey and S. Mondschien (1998). Coordinating clearance markdown sales of seasonal products in retail chains. Operations Research, 46:609-624

5. Bitran, G. and R. Caldentey (2003). An overview of pricing models for revenue management. Manufacturing & Service Operations Management, 5:203-229

6. Gallego, G. and G. van Ryzin (1997). A multiproduct dynamic pricing problem and its application to network yield management. Operations Research, 45:24-41

7. Gill, P.E., W. Murray and M.H. Wright (1981). Practical Optimisation, Academic Press

8. Krasteva, E., M. Singh, G. Sotirov, J.-C. Bennavail and N. Mincoff (1994). Model building for pricing decision making in an uncertain environment. Proc. IEEE International Conference on Systems, Man and Cybernetics, San Antonio

9. Nocedal, J. and S.J. Wright (1999). Numerical Optimisation, Springer-Verlag

10. Singh, M.G. and J.-C. Bennavail (1989). Knowledge support systems for managerial decision making. In: V. B. Kelley and A. Rector (eds.). R and D in Expert Systems, pp 319-330, Cambridge University Press

11. Singh, M.G. (1990). Knowledge Support Systems for Smarter Tactical Decision for Pricing and Resource Allocation. Proc. IEEE SMC Conference, Los Angeles

12. Singh, M.G. (1991). Decision Technologies for implementing Business Strategies in a competitive Environment. In: M.G. Singh & L. Trave-Massuyes (eds.), Decision Support Systems and Qualitative Reasoning, pp 15-28, North Holland

13. Singh, M.G. and J.-C. Bennavail (1993). Experiments in the use of a knowledge support system for the pricing of gasoline products. Information & Decision Technologies, 18:427-442


Information services for supporting quality and safety management

Kostagiolas P.A22

Department of Archive & Library Science, Ionian University, Greece, Ioannou Theotoki 72, Corfu 49100, Tel. +30-26610-87402;

Bohoris G.A23

Department of Business Administration, University of Piraeus, Karaoli & Dimitriou 80, GR-185 34 Piraeus, Greece, Tel : +30 210 4142253 &

Abstract

Both the theoretical and practical frameworks for safety, quality and reliability have been, to a great extent, established prior to the current information revolution. Safety has been and is increasingly important in all aspects of day to day life, including the safe utilization of the information available. The developments in information and communication technologies offer new opportunities for quality and safety management systems implementation. This paper, taking these issues into account, addresses the issue: “What is likely to be the contribution of information services in the implementation of quality and safety standards, frameworks, good practices, etc.?” The analysis includes widely accepted approaches such as the ISO 9000 and ISO 14000. Keywords: Information theory, Libraries, Management, quality, safety.

1. Introduction

Quality, safety, and increased availability of and access to information have always been a central concern of both economies and societies. Nowadays, these issues are more important than ever before, including in comparison with the time when most of the relevant quality notions were developed (Martínez-Lorente et al, 1998; Maguad, 2006). Over the years, relevant knowledge has been broadened through the systematic deployment of the prevailing quality approaches. In fact, “ … without a theory one would have nothing to revise, nothing to learn … ” (Deming, 1982). In that respect, quality management practices and related standards and frameworks are based on the principle of “managing based on facts”, i.e. on concrete, relevant, information-containing data. In addition, other systems that are to a great extent based on quality standards, such as risk assessment and management systems (environmental, health & safety, food safety, information security and others), are centred round the identification, assessment and management of potential risk areas, and thus round re-active management. Furthermore, quality and safety management systems are all based on information retrieval and utilization. To this end, quality tools and techniques, such as Failure Mode Effect & Criticality Analysis (FMEA), fault trees, and others, may even be based on information and data from similar applications in relevant industries (in the absence of data/information within a particular, home application). On the other hand, as the quality sciences evolve (from the industrial to the information age), an increasing level of deployment of quality principles, methods and approaches to information and knowledge management may appear (Maguad, 2006). Indicative of this diffusion are the ISO/IEC 27000 family of standards (and in particular the “Information Security Management System”), the Capability Maturity Model and its developments, etc. Nowadays, “the ongoing renaissance of information services (IS), together with the burgeoning of novel information and communication technologies (ICT), put forward the need for ‘new’ approaches to quality management” (e.g. Srdoc, et al, 2005). Meanwhile, safety-related concerns, including environmental and security risks, have also been increasing and have received attention from both academics and professionals. Frameworks with a proactive core, integrating safety and quality management through constant risk identification within service operations, are increasing in emphasis and in number. Investments in information technologies are rapidly growing, novel services are added and/or the production practices of “old” services are re-evaluated. The use of the internet as well as of digital information resources, and advances in ICT, offer new opportunities. In that framework, “ … management approaches should start to comprehend the meanings of data, information and knowledge …” (PD 7504:2005) and “ … sustain transparent decision-making procedures responsive to changes based on an information intensive rather globalized environment …” (ISO 10012:2003). This work addresses the following research issue: “What is likely to be the contribution of information services in the implementation of quality and safety standards, frameworks, good practices, etc.?” The analysis provides a study of the data and information requirements of widely accepted frameworks such as the ISO 9000 and ISO 14000 series. The design of information services for the support of quality and safety implementation is an interesting and innovative area of research.

                                                            22 Email:[email protected] 23 Email: [email protected]


ISO 9001:2008

CLAUSES DATA REQUIREMENTS INFORMATION REQUIREMENTS

5.6.2 Review input

“The input to management review should include information on internal and external to organization sources (e.g. audits, customer feedback, process performance and product conformity, maintenance, etc.).”

6.3 Infrastructure “In infrastructure supporting services maybe included , such as communication and/or information systems.”

7.2.2 Review of requirements related to the product

“The review should cover relevant product information such as products/services catalogues and/or advertising material”

7.2.3 Customer communication

“The organization should communicate with its customers and clients in relation to product information, etc.”

7.3.2 Design and development inputs

“The Inputs should include information derived from previous similar designs”

7.3.3 Design and development outputs

“The outputs should provide information for purchasing, production and service provision etc”

7.4.2 Purchasing information

“Purchasing information should describe the product to be purchased, including requirements for product approval, personnel qualification, and quality management. The organization should ensure the above prior to their communication to the supplier.”

7.4.3 Verification of purchased product

“The organization performs verification based on the purchasing information.”

7.5.1 Control of production and service provision

“Controlled conditions should include the availability of information that describes the characteristics of the product, etc.”

7.5.4 Customer property

“Customer property can include intellectual property and personal data.”

8.2.1 Customer satisfaction

“Monitoring customer should include customer data on delivered product quality, etc.”

“The organization should monitor information relating to customer perception as to whether the organization has met customer requirements. The methods for obtaining and using this information shall be determined.”

8.4 Analysis of data

“This shall include data generated as a result of monitoring and measurement and from other relevant sources. The analysis of data shall provide information relating to a) customer satisfaction, b) conformity to product requirements, c) characteristics and trends of processes and products, including opportunities for preventive action, and d) suppliers”

8.5.1 Continual improvement

“The organization shall continually improve the effectiveness of the quality management system through the analysis of relevant data, etc.”

Table 1. Requirements analysis for data and information within the ISO 9001:2008 clauses

2. Data and information requirements in quality

A systematic approach to studying information services for supporting quality and safety management depends on developing meaningful usages related to daily routines and/or specific organizational goals. Indeed, the appreciation and understanding of the potential of information services is gradually maturing, based on issues such as the following:

The information services should emerge from people’s needs and be developed through their interaction with daily practices and processes.

The information services should be comprehended as a potential source of communication and skills development.

The information services should account for distinct cultural and behavioural patterns.

The support of quality and safety systems implementation should be based on a continuum of three interrelated levels of interaction: data, information and knowledge. These cognitive elements, defined by Ackoff (1989), are essential for management decisions (PD 7504:2005). Furthermore, the quality of data is a necessary but not sufficient condition for assuring the quality of information (Bevan, 1999). By extension, information quality is a necessary but not sufficient condition for knowledge quality (PD 7506:2005). The management of information services and the quality of information (Salaun & Flores, 2001) are key aspects of quality and safety systems. Information scientists may play an important role in developing and testing such approaches and processes. Below, an interrelation of IS with the data and information needs included within some of the most well-known quality management standards is attempted. ISO 14001:2004 is intended to provide organizations with the elements of an effective environmental management system (EMS). ISO 14001 is compatible with the ISO 9001 standard, hence interrelating safety/environmental and other quality goals. The standard can be implemented by all types and sizes of organization and can accommodate diverse geographical,


cultural and social conditions (ISO 14001:2004). In the requirement clauses 4.5.1 Monitoring and measurement and 4.5.5 Internal audit, specific mention of data and information is also made (regarding significant environmental aspects). Furthermore, as stated within the ISO 14001 standard, an organization should consider the need to retain information about its significant environmental aspects for historical purposes. Indeed, in the informative Annex A “Guidance on the use of this International Standard” of ISO 14001:2004, in paragraph A.5 Checking (A.5.1 Monitoring and measurement), an interesting reference to the complete organizational structure is made for the EMS, as follows: “Data collected from monitoring and measurement can be analysed to identify patterns and obtain information. Knowledge gained from this information can be used to implement corrective and preventive action.”

3. Conclusions: Information services for quality and safety management

In the current fast-developing environment the traditional role of information specialists must evolve and adjust. A literature review of user studies and information needs within the context of information science has been conducted by Wilson (2006). The value of information services is defined as a measure of evidence of these contributions and may be supported by both qualitative and quantitative research (Usherwood, 2002). Over the last decade, the viewpoint that quality management is dynamically linked with information has gradually been introduced (Srdoc et al., 2005). The study of data and information requirements within specific standards in order to manage information services is an interesting and innovative area of research (e.g. Kostagiolas, 2006). This work did not intend to address in depth all the research issues arising from the interrelation of quality and safety management with information services development. However, a number of considerations have been made towards an approach to quality and safety management based on their information requirements. Data and information are deeply embedded in quality practices; an overall information service management strategy is therefore both important and required.

4. References

1. Ackoff, R.L., (1989) “From Data to Wisdom”, Journal of Applied Systems Analysis, 16:3-9.

2. Bevan, N. (1999), “Quality in use: Meeting user needs for quality”, The Journal of Systems and Software, 49: 89-96.

3. BS 7799-3:2006, "Information Security Management Systems – Guidelines for Information Security Risk Management".

4. Chowdhury, G., Poulter, A. & McMenemy, D. (2006), "At the Sharp End Public Library 2.0. Towards A New Mission for Public Libraries as a Network of Community Knowledge", Online Information Review, 30 (4): 454-460.

5. Deming, W.E. (1982), Quality, Productivity and Competitive Position, Cambridge: MIT Press.

6. ISO 2789:2003, "Information and Documentation – International Library Statistics".

7. ISO 9001:2008, "Quality management systems – Requirements".

8. ISO 10012:2003, "Measurement management systems – Requirements for measurement processes and measuring equipment".

9. ISO/IEC 27001:2005, "Information technology – Security techniques – Information security management systems – Requirements".

10. ISO 11620:1998, "Information and Documentation – Library Performance Indicators".

11. ISO 14001:2004, "Environmental management systems – Requirements with guidance for use".

12. Kostagiolas, P.A. (2006), "Information services for supporting quality management in Healthcare", Journal on Information Technology in Healthcare, 4(3): 137-146.


13. Maguad, B.A. (2006), "The Modern Quality Movement: Origins, Development and Trends", Total Quality Management, 17 (2): 179-203.

14. Martínez-Lorente, A.R., Dewhurst, F. & Dale, B.G. (1998), "Total quality management: origins and evolution of the term", The TQM Magazine, 10(5): 378-386.

15. PD 7504:2005, Knowledge Management in the Public Sector: A Guide to Good Practice.

16. PD 7506:2005, Linking Knowledge Management with other Organizational Functions and Disciplines: A Guide to Good Practice.

17. Salaun, Y. & Flores, K. (2001), "Information quality: meeting the needs of the consumer", International Journal of Information Management, 21: 21-37.

18. Sila, I. & Ebrahimpour, M. (2002), "An investigation of the total quality management survey based research published between 1989 and 2000: A literature review", International Journal of Quality and Reliability Management, 19(7): 902-970.

19. Srdoc, A., Sluga, A. & Bratko, I. (2005), "A quality management model based on the 'deep quality concept'", International Journal of Quality and Reliability Management, 22: 278-302.

20. Usherwood, B. (2002), "Demonstrating impact through qualitative research", Performance Metrics and Measures, 3(3): 117-122.

21. Wilson, T.D. (2006), "60 years of the best in information research. On user studies and information needs", Journal of Documentation, 62 (6): 658-670.


Location selection for the construction of a casino in the region of Greater London

Karim Lidouh24, Alessio Ishizaka, Philippe Nemery

Abstract

In this work, we present a market study with the aim of choosing a suitable borough in the region of Greater London for the construction of a large casino. Currently 17 of the 26 large casinos in London are located in the borough of Westminster, which is known to generate the highest revenue in tourist spending. However, in 2006, when proposals were submitted to the Casino Advisory Panel (CAP) by several boroughs to obtain a casino license, the borough of Newham was recommended as a suitable area instead of Westminster. By taking two viewpoints into consideration (one focused on profitability, the other on social benefits), we evaluate the alternatives using the weighted sum and the PROMETHEE method. Finally, the results are compared to the proposals submitted to the CAP to validate our approach.

1. Introduction

Site selection is a strategic problem that is often encountered in management or marketing studies. Decisions have long-term consequences and should be backed by rigorous analysis and techniques. In practice, nevertheless, because gathering and analyzing all the required data is a tedious and expensive process, most decisions rely on intuition based on past experience. The aim of this paper is to tackle a site selection problem. We do this by detailing the process of gathering data and selecting the relevant criteria to be taken into account. We then use decision aid methods to elicit the weights of these criteria according to the objectives set, and finally evaluate the alternatives. The entire approach is then validated by comparing the results obtained with the decision that was taken by experts. The particular problem that we deal with is the selection of a borough in the region of Greater London for the construction of a new large casino. There are already 26 large casinos in London and 17 of them are located in the borough of Westminster. As the borough with the most commercial streets, it naturally attracts tourists and expenditure there is high. However, the presence of a casino can generate several social and regional benefits such as employment, taxes, and additional visits by tourists (AGA, 2003). Westminster is not the borough most in need of such advantages, and we might therefore have to consider another alternative. This same situation was encountered by the Casino Advisory Panel (CAP), established in 2006 to hand out casino licenses to the most suitable boroughs and cities in England. We will study this decision while considering two different viewpoints: one focused on social benefits, the other on profitability. The data will be analyzed using a weighted sum approach and the PROMETHEE outranking method (Brans et al., 2005).

2. The model

Our first objective is to determine where potential prospects can be found. We therefore need to study the profile of gamblers and thereby select criteria that can help define our target. Our second objective is to identify potential social advantages and evaluate their impact on the different boroughs.

                                                            24 Email : [email protected]  


Profile of a gambler

In this section we define the segment of customers that we are targeting. This step is important as many preconceived ideas about gamblers can prove to be false. Establishing a profile will allow us to identify the correct criteria for our study. Several surveys of American gamblers throughout the United States indicate that most gamblers are likely to be working older couples or parents in their 40s or 50s (Harrah's, 2006; AGA, 2003). Studies show that ethnicity plays an important role when comparing gamblers to non-gamblers: even though only 69% of the US population is white, 92% of gamblers are (Harrah's, 2006). It would therefore be interesting to target boroughs with a high white population. Gamblers also tend to travel more than the average person (Harrah's, 2006), so areas with a high tourist visitation rate should be targeted. To this we should add that the median household income of players is almost US$8,000 higher than the national median. Gamblers are indeed more likely to have pursued education beyond high school and to hold a white-collar job (Harrah's, 2006; AGA, 2003). They are attracted to new technologies and like interacting with other people. Unsurprisingly, religion plays a greater role in the lives of non-gamblers. Perhaps surprisingly, according to the surveys gender does not seem to affect gambling behaviour when considering all the customers of casinos; in fact the number of gamblers of each sex follows national statistics (48% male and 52% female). Even though gender and other criteria can affect behaviour in cases of pathological gambling (Ohtsuka et al., 1997), we will not take them into account as our aim is to target all potential customers.

Social or ethical criteria

To all the criteria mentioned above, we should add the impact of a casino on the borough that will host it. It would be better to consider a borough that has few or no casinos: aside from the fact that strong competition would endanger the project, adding a casino to a neighbourhood that already has many will have a very small impact. A casino can offer numerous job opportunities (Harrah's, 2000a; AGA, 2003; Andersen, 1996; Rose et al., 1998):
- Gaming operations: machine technicians, cashiers, dealers, table games supervisors.
- Casino services: security, food and beverages, retail, purchasing, maintenance and facilities specialists.
- Marketing: public relations, market research, advertising professionals.
- Human resources: employee relations, compensation, staffing, training specialists.
- Finance and administration: lawyers, audit, payroll, income control.
A casino in one of the London boroughs would therefore easily generate about 300 additional jobs (Newham Council, 2006). As the casino will provide training for future employees, its presence will have the greatest impact on a borough where the inhabitants have low-paid jobs. Another thing to consider is the possibility of synergies with revitalization projects planned by the authorities for the most deprived areas of Greater London (Greater London Authority, 2009). We could thereby benefit from improved transport, improved health and security, the presence of other attractions in the borough, etc.


Contrary to what one might think when talking about problem gambling and crime issues related to casinos, most gamblers are actually responsible adults who go out and have fun on occasion without any serious consequences (Namrata et al., 2002; Lee et al., 2006). Several studies have also shown that a casino does not increase crime, because of its high level of security (Harrah's, 2000; NORC, 1999).

Criteria hierarchy

By taking our different objectives into account we established a criteria hierarchy (see Figure 1) that we used to elicit weights for our two decision maker profiles:

Locating prospects
  o Local prospects: age structure, ethnicity, no religion, spending in social activities
  o Tourists: visitations, spending
Obtaining regional or social benefits
  o Fewer competitors
  o Poor or low-paid population
  o Regeneration areas
  o Potential employment

Figure 1. Criteria hierarchy


Table 1 presents the data collected for this decision problem. The weights will be elicited using an approach similar to the Analytical Hierarchy Process (Saaty, 2005).

Table 1. Evaluations for the 32 London boroughs

Criteria are grouped into "Capacity to attract prospects" (local prospects and tourists) and "Regional and social benefits". Columns, in order:
Borough | Age structure (45 and over) | Ethnicity (white) | No religion | Socializing (spend from domestic, 2007) | Visitation (day trips, 2007) | Spending (spend from overseas, 2007) | Competitors (number of large casinos) | Pay inequalities (hourly pay of top quartile) | Regeneration (regeneration areas) | Potential employment (unemployed aged 16-74)

Barking & Dag. 55866 132566 25075 10 1029 35 0 15 1 1777

Barnet 112494 188301 40320 37 6695 162 0 21 0 47715

Bexley 85798 191947 32147 22 2237 81 0 18 0 82203

Brent 82114 76893 26252 25 2711 97 0 16 1 52543

Bromley 120217 255618 48279 31 3978 130 0 22 0 80008

Camden 57800 104390 43609 162 11616 589 0 26 1 70871

Croydon 114942 210573 48615 42 5084 162 0 19 0 81336

Ealing 95724 135139 40436 42 5824 191 0 20 1 81014

Enfield 96507 167394 33777 38 2850 135 0 18 3 76449

Greenwich 70000 151291 41365 21 2999 78 0 19 2 62209

Hackney 52947 89490 38607 18 1864 65 0 18 4 67639

Hammersm. & F. 46820 95909 29148 62 6911 183 0 24 1 46769

Haringey 60147 98028 43249 18 2345 75 0 18 3 66968

Harrow 77398 103207 18674 21 1993 91 0 20 0 52755

Havering 94812 206365 29567 33 3041 121 0 18 0 56786

Hillingdon 86529 176244 32486 80 4314 326 0 18 0 58601

Hounslow 67774 118421 28576 24 3245 111 0 17 0 53783

Islington 49325 99784 41691 45 3800 144 1 24 2 55806

Kensington & Ch. 55277 79594 24240 240 12863 986 6 38 1 49298

Kingston up. Th. 51539 111810 26506 25 3335 111 0 23 0 34165

Lambeth 67260 131939 57751 65 6428 148 0 20 1 73240

Lewisham 72190 141814 50780 17 1701 61 0 18 1 68552

Merton 62714 120378 31100 17 3650 68 0 20 0 44533

Newham 61811 82390 21978 46 2286 133 0 15 5 83840

Redbridge 86243 137097 22952 20 2183 82 0 20 1 65186

Richmond upon Th. 63779 135655 33667 27 2609 89 0 27 0 38133

Southwark 66771 127752 45325 55 7277 162 0 20 2 75621

Sutton 65799 150515 29971 15 1904 61 0 19 0 38885

Tower Hamlets 46810 84151 27823 76 5824 176 0 21 4 69491

Waltham Forest 67325 121694 33541 15 2017 59 0 18 1 61264

Wandsworth 68933 168665 52042 28 4185 115 0 24 0 63456

Westminster 59260 87938 29300 762 46591 2927 17 31 0 57005

TOTAL 2325815 4287861 1130616 2203 181007 8194 24 665 35 1981198

MAX 120217 255618 57751 762 46591 2927 17 38 5 83840

MIN 2890 4909 1767 10 1029 35 0 15 0 1777

Direction: max max max max max max min min max max


By giving more importance to the economic criteria we obtained a weight distribution that can be attributed to a casino operator (see Table 2), and by giving more influence to the social criteria we obtained the viewpoint of a political actor (see Table 3).

Locating a suitable borough
  Prospects 0.6
    Domestic 0.3: Age 0.25, Ethnicity 0.25, Religion 0.25, Socializing 0.25
    Tourists 0.7: Visits 0.5, Spending 0.5
  Social and regional benefits 0.4
    Competitors 0.4, Pay 0.2, Regeneration 0.2, Employment 0.2
  Global weights: Age 0.045, Ethnicity 0.045, Religion 0.045, Socializing 0.045, Visits 0.21, Spending 0.21, Competitors 0.16, Pay 0.08, Regeneration 0.08, Employment 0.08

Table 2. Weight distribution for the casino operator

Locating a suitable borough
  Prospects 0.3
    Domestic 0.5: Age 0.2, Ethnicity 0.2, Religion 0.4, Socializing 0.2
    Tourists 0.5: Visits 0.5, Spending 0.5
  Social and regional benefits 0.7
    Competitors 0.15, Pay 0.25, Regeneration 0.35, Employment 0.25
  Global weights: Age 0.03, Ethnicity 0.03, Religion 0.06, Socializing 0.03, Visits 0.075, Spending 0.075, Competitors 0.105, Pay 0.175, Regeneration 0.245, Employment 0.175

Table 3. Weight distribution for the political actor
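The global weights in the bottom rows of Tables 2 and 3 are simply the product of the local weights along each branch of the criteria hierarchy (e.g. 0.6 x 0.7 x 0.5 = 0.21 for tourist visits under the operator view). A minimal Python sketch of that multiplication is shown below; the nested-dictionary layout and the function name global_weights are our own illustrative choices rather than anything from the paper, and the political-actor weights of Table 3 are obtained in the same way by swapping in its local weights.

```python
# Sketch: deriving the global criterion weights of Table 2 by multiplying
# local weights down the criteria hierarchy of Figure 1. The nested-dict
# structure is illustrative only, not the authors' implementation.

operator_view = {  # casino operator viewpoint (Table 2)
    "Prospects": (0.6, {
        "Domestic": (0.3, {"Age": 0.25, "Ethnicity": 0.25,
                           "Religion": 0.25, "Socializing": 0.25}),
        "Tourists": (0.7, {"Visits": 0.5, "Spending": 0.5}),
    }),
    "Benefits": (0.4, {"Competitors": 0.4, "Pay": 0.2,
                       "Regeneration": 0.2, "Employment": 0.2}),
}

def global_weights(node, factor=1.0):
    """Flatten a weight hierarchy into {criterion: global weight}."""
    out = {}
    for name, value in node.items():
        if isinstance(value, tuple):        # intermediate node: (weight, children)
            w, children = value
            out.update(global_weights(children, factor * w))
        else:                               # leaf criterion: local weight
            out[name] = factor * value
    return out

print(global_weights(operator_view))
# {'Age': 0.045, ..., 'Visits': 0.21, 'Spending': 0.21, 'Competitors': 0.16,
#  'Pay': 0.08, 'Regeneration': 0.08, 'Employment': 0.08} (up to rounding)
```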

3. Results and Comparison

In this section we first evaluate the alternatives using two different methods. The results we obtain are then compared to the decision taken by the CAP for the London boroughs that participated in the evaluation in 2006.

Evaluation of the boroughs

By applying a weighted sum to the normalized evaluations with these two weight distributions, we obtain two rankings for this problem (see Figure 2). We can see that in both cases Westminster is ranked as the most suitable borough, while the rest of the ranking changes greatly as the weights vary. This seems to be due to certain criteria where Westminster has exceptionally high performances, namely domestic and tourist spending and visitations. Indeed, Westminster is known for its high density of shops that attract many tourists. Therefore, with a compensatory method such as the weighted sum, we naturally obtain a ranking in which that particular alternative is highly placed even though its other performances are low. To overcome this pitfall, the next step will involve a partially compensatory method that values overall good profiles instead of profiles with a few high performances. A sketch of the weighted-sum step follows Figure 2 below.


Figure 2. Rankings obtained with the weighted sum
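To make the weighted-sum step concrete, the sketch below (Python) scores a small excerpt of Table 1 under the casino-operator weights of Table 2. The paper does not state the normalisation it uses, so the sketch assumes a simple min-max normalisation with the minimisation criteria (competitors, pay) reversed; it illustrates the mechanics rather than reproducing the authors' exact calculation, and only three boroughs are included for brevity.

```python
# Sketch of a weighted-sum ranking on an excerpt of Table 1, assuming
# min-max normalisation and reversal of criteria to be minimised.

boroughs = {  # columns: age 45+, white, no religion, domestic spend, day trips,
              # overseas spend, large casinos, top-quartile pay, regen. areas, unemployed
    "Westminster": [59260,  87938, 29300, 762, 46591, 2927, 17, 31, 0, 57005],
    "Newham":      [61811,  82390, 21978,  46,  2286,  133,  0, 15, 5, 83840],
    "Camden":      [57800, 104390, 43609, 162, 11616,  589,  0, 26, 1, 70871],
}
directions = ["max"] * 6 + ["min", "min", "max", "max"]    # direction row of Table 1
weights = [0.045, 0.045, 0.045, 0.045, 0.21, 0.21, 0.16, 0.08, 0.08, 0.08]  # Table 2

def weighted_sum_ranking(data, weights, directions):
    cols = list(zip(*data.values()))
    lo, hi = [min(c) for c in cols], [max(c) for c in cols]
    scores = {}
    for name, row in data.items():
        total = 0.0
        for v, w, d, mn, mx in zip(row, weights, directions, lo, hi):
            norm = (v - mn) / (mx - mn) if mx > mn else 0.0
            if d == "min":                 # reverse criteria to be minimised
                norm = 1.0 - norm
            total += w * norm
        scores[name] = total
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

for borough, score in weighted_sum_ranking(boroughs, weights, directions):
    print(f"{borough:12s} {score:.3f}")
```

Even in this three-borough excerpt, Westminster comes out on top under the operator weights, driven almost entirely by the spending and visitation criteria, which is consistent with the compensatory behaviour discussed above.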

The advantage of using a partially or non-compensatory method is that it values profiles with average performances across the board above profiles with some very good performances alongside very bad ones. With such a method, we hope that the impact of the economic criteria on the position of Westminster will be reduced and that the advantages of other boroughs will be highlighted. The method we use is PROMETHEE (Brans et al., 2005). Using the same two weight distributions, with thresholds set equal to the greatest and smallest observed differences, we obtain the results presented in Figure 3. We can see that in the first case Westminster is still considered the best alternative from an economic point of view. However, in the second case, where we value social and ethical criteria, Westminster's position drops sharply (to 25th). It is also interesting to notice that the position of Newham remains very stable from one method to the other in both rankings; it is therefore a good choice to consider. Figure 4 also indicates the changes in positions between the two methods and the two scenarios. These differences can be used to identify alternatives whose profiles are strongly positioned on only a few criteria. A sketch of the net-flow computation follows Figure 4 below.


Figure 3. Rankings obtained with PROMETHEE

Figure 4. Differences between the four rankings
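For completeness, a Python sketch of the PROMETHEE II net-flow calculation is shown below on the same three-borough excerpt, this time with the political-actor weights of Table 3. The paper says only that the thresholds equal the greatest and smallest differences; the sketch assumes a linear (type V) preference function per criterion, with the smallest non-zero pairwise difference as the indifference threshold and the greatest as the preference threshold. This is our interpretation, not necessarily the authors' exact settings.

```python
# Sketch of a PROMETHEE II net-flow ranking (Brans et al., 2005) on an
# excerpt of Table 1, assuming a linear (type V) preference function.

boroughs = {
    "Westminster": [59260,  87938, 29300, 762, 46591, 2927, 17, 31, 0, 57005],
    "Newham":      [61811,  82390, 21978,  46,  2286,  133,  0, 15, 5, 83840],
    "Camden":      [57800, 104390, 43609, 162, 11616,  589,  0, 26, 1, 70871],
}
directions = ["max"] * 6 + ["min", "min", "max", "max"]
weights = [0.03, 0.03, 0.06, 0.03, 0.075, 0.075, 0.105, 0.175, 0.245, 0.175]  # Table 3

def promethee_ii(data, weights, directions):
    names = list(data)
    # negate minimisation criteria so that larger evaluations are always better
    ev = {a: [v if d == "max" else -v for v, d in zip(row, directions)]
          for a, row in data.items()}
    q, p = [], []
    for j in range(len(weights)):
        diffs = [abs(ev[a][j] - ev[b][j]) for a in names for b in names if a != b]
        diffs = [d for d in diffs if d > 0] or [0.0]
        q.append(min(diffs))               # indifference threshold
        p.append(max(diffs))               # preference threshold

    def pref(d, qj, pj):                   # linear (type V) preference function
        if d <= qj:
            return 0.0
        if d >= pj:
            return 1.0
        return (d - qj) / (pj - qj)

    phi = {a: 0.0 for a in names}
    for a in names:
        for b in names:
            if a == b:
                continue
            pi_ab = sum(w * pref(ev[a][j] - ev[b][j], q[j], p[j])
                        for j, w in enumerate(weights))
            pi_ba = sum(w * pref(ev[b][j] - ev[a][j], q[j], p[j])
                        for j, w in enumerate(weights))
            phi[a] += (pi_ab - pi_ba) / (len(names) - 1)   # net outranking flow
    return sorted(phi.items(), key=lambda kv: kv[1], reverse=True)

for borough, flow in promethee_ii(boroughs, weights, directions):
    print(f"{borough:12s} {flow:+.3f}")
```

In this small excerpt, Newham obtains the highest net flow and Westminster the lowest under the political-actor weights, echoing the drop in Westminster's position reported above.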


The decision of the CAP

In this section we consider the proposals sent to the CAP by the London boroughs of Westminster and Newham (LB Westminster Council, 2006; LB Newham Council, 2006). These two proposals were the only ones that originated from London boroughs willing to establish casino premises on their territory; the others came from other cities in England or requested the opening of a regional casino. Westminster's proposal focuses on the strong points of the borough, such as the high revenue generated by tourism, the high proportion of people in the highest socio-economic categories, the presence of London's iconic attractions and 40% of London's hotel floorspace. The presence of 17 casinos, which represent 75% of the casinos in London and 14% of the casinos in the UK, makes it a good location as the inhabitants are used to that type of premises. Also, the social impact of a casino in such an environment would likely be very small. With these criteria in mind, the borough of Westminster requested a license for two additional large casinos. There are even a few areas in the borough, far from the commercial centre, that need some regeneration (see Figure 5), although probably less than other places in London.

Figure 5. 20% most deprived areas in Greater London

Newham's proposal included the results of several studies and statistics to reinforce its credibility. The report focused on the facts that Newham is a key regeneration area located within the Thames Gateway (identified as a national priority for regeneration); that it is London's best connected borough, being accessible by road, rail and tube; and that it has significant visitor potential. Figure 5 shows that Newham lies at the heart of the areas in need of regeneration. The Council is committed to reducing poverty and stated that it would work to ensure that residents of Newham benefit from the job opportunities generated by the casino. The CAP's decision in 2007 was to recommend Newham as an area in which a large casino should be licensed. Yet even though expert opinion stated that a casino would be less harmful than other forms of gambling and would definitely bring benefits to the borough, local residents strongly opposed the casino and had to be convinced before any construction plans could be made. To date there is still no casino in the London borough of Newham.


4. Conclusion

In this paper we presented a site selection study for a particular problem: the establishment of a casino in the region of Greater London. We first conducted research with the aim of segmenting and targeting the people most likely to visit casinos. We then added the social and regional benefits that we would like the project to deliver, such as potential employment and tax revenues for the selected borough. We used the collected data to apply two decision aiding methods and compared the results for different decision maker profiles. To complete our study we compared it with the proposals submitted by the London boroughs to obtain a license for a large casino. The reports were concordant with our results and led to the same decision by the Casino Advisory Panel.

5. References

1. American Gaming Association (2003). State of the States: The AGA Survey of Casino Entertainment. American Gaming Association, Washington, D.C.

2. Andersen A (1996). Economic Impacts of Casino Gaming in the United States, Volume 1: Macro Study. American Gaming Association, 7.

3. Brans J P and Mareschal B (2005). PROMETHEE Methods. In: Figueira J, Greco S and Ehrgott M (eds). Multiple Criteria Decision Analysis: State of the Art Surveys. Springer.

4. Greater London Authority (2009). The London Plan: Spatial Development Strategy for Greater London. Mayor of London.

5. Harrah’s Entertainment Group (2006). Profile of an American Gambler: Survey 2006. Harrah’s License Company.

6. Harrah’s Entertainment Inc. (2000). Casinos and Crime. Issue Papers, Harrah’s Entertainment Inc.

7. Harrah’s Entertainment Inc. (2000a). Economic Impacts of Casino Gaming. Issue Papers, Harrah’s Entertainment Inc.

8. Lee C-K, Lee Y-K, Bernhard B J, and Yoon Y-S (2006). Segmenting casino gamblers by motivation: a cluster analysis of Korean gamblers. Tourism Management, 27, 856-866.

9. London Borough of Newham Council (2006). Proposal document. UK Government Web Archive.

10. London Borough of Westminster Council (2006). Proposal for further casinos to the Casino Advisory Panel. UK Government Web Archive.

11. Namrata R and Oei T P (2002). Pathological gambling: A comprehensive review. Clinical Psychology Review, 22, 1009-1061.

12. National Opinion Research Center (1999). Gambling Impact and Behavior Study. Report to the National Gambling Impact Study Commission, 70-71.

13. Ohtsuka K, Bruton E, DeLuca L, and Borg V (1997). Sex differences in pathological gambling using gaming. Psychological Reports, 80, 1051-1057.

14. Rose A and Associates (1998). The Regional Economic Impacts of Casino Gambling: Assessment of the Literature and Establishment of a Research Agenda. Report prepared for the National Gambling Impact Study Commission, 22.

15. Saaty T L (2005). The analytic hierarchy and analytic network processes for the measurement of intangible criteria and for decision-making. In: Figueira J, Greco S and Ehrgott M (eds). Multiple Criteria Decision Analysis: State of the Art Surveys. Springer.


Multi-Actor, Multi-Criteria Analysis (MAMCA) for transport project appraisal

Cathy Macharis25

Vrije Universiteit Brussel, Department MOSI-Transport and Logistics, Pleinlaan 2, 1050 Brussels, Belgium

Abstract

In this contribution the multi-actor multi-criteria analysis (MAMCA) method for evaluating transport projects is presented. This evaluation methodology specifically focuses on the inclusion of the different actors involved in the project, the so-called stakeholders. Like traditional multi-criteria analysis, it allows qualitative as well as quantitative criteria to be included with their relative importance, but within MAMCA they represent the goals and objectives of the multiple stakeholders, which allows the stakeholders to be included in the decision process. The theoretical foundation of the MAMCA method is presented together with several applications in the field of transport appraisal.

Keywords: transport project appraisal; multi-actor; multi-criteria analysis

1. Introduction

Several types of evaluation methods can be used for the evaluation of transport projects. The most commonly used are private investment analysis, cost-effectiveness analysis (CEA), economic-effects analysis (EEA), social cost-benefit analysis (SCBA) and multi-criteria decision analysis (MCDA). Nowadays, alongside economic effects, the ecological, spatial and social aspects of a project are increasingly gaining importance in the light of finding more sustainable solutions. As the first three methods only take economic effects into account, or only consider a single goal against the financial costs, the latter two are attracting more and more attention and are more frequently used because they allow other aspects besides the economic ones to be included. What is still missing in the existing evaluation methods is the explicit inclusion of the actors that are involved, the so-called stakeholders. The importance of these stakeholders within the context of transport and mobility policy is widely recognized (Walker, 2000). Including them in the decision-making process is a crucial element in the successful implementation of a measure. Citizens, private companies and different policy levels have a large impact on the implementation of a project (see for example the discussion about the construction of the "Lange Wapper" bridge in Antwerp this year). If the interests of the stakeholders are not taken into account, the study or analysis will be ignored by policymakers or attacked by the stakeholders. The methodology we propose integrates these stakeholders explicitly from the start of the appraisal process. We call this a multi-actor, multi-criteria approach, or MAMCA for short. It is an extension of traditional multi-criteria decision analysis and was developed by Macharis (2000, 2004). The methodology accommodates the manifold dimensions and multidisciplinary aspects required by socio-economic studies. Moreover, MAMCA explicitly includes the stakeholders' opinions in the evaluation of different policy measures. As such, MAMCA is able to support the decision maker in the final decision, as the inclusion of the different points of view leads to a general prioritisation of the proposed policy measures. The

                                                            25 Email: [email protected]


methodology is further explained in the next section. In section three, some applications are described.

2. The Multi-Actor Multi-Criteria Analysis (MAMCA)

The methodology consists of 7 steps (see Figure 1). The first step is the definition of the problem and the identification of the alternatives. These alternatives can take different forms according to the problem situation: different technological solutions, different policy measures, long-term strategic options, etc. Next, the relevant stakeholders are identified (step 2). Stakeholders are people who have an interest, financial or otherwise, in the consequences of any decisions taken. Thirdly, the key objectives of the stakeholders are identified and given a relative importance or priority (weights) (step 3). Fourthly, for each criterion one or more indicators are constructed (step 4), e.g. direct quantitative indicators such as money spent, number of lives saved or reductions in CO2 emissions achieved, or scores on an ordinal indicator such as high/medium/low for criteria whose values are difficult to express in quantitative terms. The measurement method for each indicator is also made explicit (for instance willingness to pay, or quantitative scores based on macroscopic computer simulation). This permits the performance of each alternative to be measured in terms of its contribution to the objectives of specific stakeholder groups.

[Figure not reproduced: schematic of the seven MAMCA steps — alternatives, stakeholder analysis, criteria and weights per stakeholder, indicators and measurement methods, overall analysis (MCA), results, and implementation, with mitigation and deployment scenarios.]

Figure 1. Methodology for a multi-actor, multi-criteria analysis (MAMCA).

Source: Macharis (2004).

Steps 1 to 4 can be considered as mainly analytical; they precede the "overall analysis", which takes into account the objectives of all stakeholder groups simultaneously and is more "synthetic" in nature. The fifth step is the construction of the evaluation matrix. The alternatives are further described and translated into scenarios which also describe the contexts in which the policy options will be implemented. The different scenarios are then scored on the objectives of each stakeholder group. For each stakeholder an MCDA is performed. The different points of view are then brought together in a multi-actor view. This multi-actor, multi-criteria analysis yields a ranking of the various alternatives and reveals their strengths and weaknesses (step 6). The stability of the ranking can be assessed through a sensitivity analysis. The last stage of the methodology (step 7) is the actual implementation. Based on the


insights of the analysis, an implementation path can be developed, taking the wishes of the different actors into account.

3. Applications

The MAMCA methodology has already been applied to several transport-related decision problems. It has been used for an intermodal terminal location decision problem (Macharis, 2000), to evaluate advanced driver assistance technologies (Macharis et al., 2004), for a study of the choice between waste transport alternatives in the Brussels region (Macharis and Boel, 2004), for the location choice of a new high speed train terminal (Meeus et al., 2004) and for the evaluation of policy measures in the framework of Flanders in Action (VIA; Macharis et al., 2010). Currently it is being used in the BIOSES project to evaluate different biofuel scenarios, in the project "Night Deli" for the evaluation of different night distribution scenarios (Verlinde et al., 2009), in the project Spatialist to evaluate different implementation scenarios for a Spatial Data Infrastructure in Flanders, and in the Policy Research Centre Mobility & Public Works, track Traffic Safety, where it has been adapted to evaluate road safety measures. The latest application concerns the different possibilities for solving the mobility problems around Antwerp (Macharis and Januarius, 2010). This Oosterweel connection is the largest infrastructure project ever in Flanders. It has been a point of discussion for several years, especially during the last two. Several alternatives have been put forward, by the government but also by action groups and other stakeholders. There are essentially five possibilities: the BAM route, the initial alternative suggested by the Flemish Government with the contested "Lange Wapper" bridge; the ArupSum alternative or AS route, which proposes a tunnel variant; the third and fourth alternatives, which are respectively the optimization of existing tunnels, such as the Liefkenshoek tunnel, and the optimization of the road tax in the Kennedy tunnel; and finally the going concern scenario, the continuation of the current situation. Figure 2 shows the main stakeholders and their criteria.


[Figure not reproduced: decision tree of the Oosterweel connection, linking the stakeholders (Flemish Government, Port community, Construction firms, City of Antwerp) to their criteria, including financial feasibility, duration, traffic safety, environmental effect, efficiency of traffic flows, profitability, building and exploitation risk, direct access, economic development, air quality, noise effects, barrier formation, nature, job certainty, competitive position, experience & know-how, and mobility.]

Figure 2. Decision tree of the Oosterweel connection.

Source: Macharis and Januarius, 2010.

After consultation of these actors and elicitation of the weights, the different alternatives were evaluated on the basis of the existing studies. For this application the software Expert Choice was used. This software uses the Analytical Hierarchy Process (AHP) (Saaty, 1982) as its MCDA technique and offers the advantage that the outcomes can be visualized; it also allows an extra layer representing the stakeholders to be included. The MAMCA thus gives a view of the standpoint of each actor (see for example Figure 3 for the point of view of the Flemish Government) as well as a multi-actor view, providing insight into the problem situation (see Figure 4). A small sketch of this kind of pairwise-comparison weight derivation is given below.
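As a hedged illustration of how AHP-style weights can be obtained from pairwise comparisons, the Python sketch below uses the geometric-mean (row) approximation of the principal eigenvector. The three criteria and the comparison matrix are invented for illustration only; they are not taken from the Oosterweel study or the Expert Choice model.

```python
# Sketch of deriving criterion weights from an AHP-style pairwise comparison
# matrix (Saaty, 1982), using the geometric-mean approximation of the
# principal eigenvector. The matrix below is a hypothetical example.

import math

criteria = ["Financial feasibility", "Traffic safety", "Duration"]
# pairwise[i][j] = how much more important criterion i is than criterion j
pairwise = [
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 3.0],
    [1/5, 1/3, 1.0],
]

geo_means = [math.prod(row) ** (1.0 / len(row)) for row in pairwise]
total = sum(geo_means)
weights = [g / total for g in geo_means]

for name, w in zip(criteria, weights):
    print(f"{name:22s} {w:.3f}")
# approx: Financial feasibility 0.637, Traffic safety 0.258, Duration 0.105
```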


[Figure not reproduced: scores of the alternatives on the Flemish Government's criteria — environmental impact, financial feasibility, efficiency of traffic flows, traffic safety and duration — and overall.]

Figure 3. Graph for the Flemish Government. Source: own setup in Expert Choice.

[Figure not reproduced: multi-actor view showing the scores of the alternatives for each stakeholder — Construction firms, Flemish Government, City of Antwerp and Port community — and overall.]

Figure 4. Multi-actor view of the Oosterweelconnection. Source: own setup in Expert Choice.


On the horizontal axis, the different criteria/objectives of the actor are displayed. The rectangular bars at the bottom and the corresponding values on the left axis indicate the weights given to these criteria. The values on the right axis represent the scores of the different alternatives under consideration. On the 'OVERALL' axis, a general prioritization of the proposed alternatives is given over all criteria. For this actor, we can see that the proposed alternatives all have positive and negative points. The BAM route, for example, scores well on its impact on traffic safety and the efficiency of the traffic flows. The Going Concern also gets a high score, due to its performance on financial feasibility; however, this scenario performs much worse on the other criteria, which indicates that it is not an option for the Flemish Government. In the multi-actor view the different actors are brought together in one view. The different stakeholders are given the same weight in order to show that every point of view is equally important. The analysis indicates that the BAM route is the best alternative for the Oosterweel project, followed by the AS route. The short-term alternatives (Liefkenshoek and Kennedy tunnel) are respectively 3rd and 4th, and the Going Concern ends up as the worst-case scenario. A minimal sketch of this aggregation step is given below.
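The overall multi-actor view is then just an equal-weight combination of the single-actor results. The Python sketch below illustrates the mechanics; the per-stakeholder priority scores are invented placeholders, not figures from the Oosterweel study or the Expert Choice model.

```python
# Sketch of the MAMCA multi-actor aggregation: combine each stakeholder's
# single-actor priorities with equal stakeholder weights. All scores below
# are hypothetical placeholders used only to show the calculation.

alternatives = ["BAM route", "AS route", "Liefkenshoek", "Kennedy tunnel", "Going concern"]

stakeholder_scores = {  # hypothetical priorities per stakeholder (each row sums to 1)
    "Flemish Government": [0.30, 0.25, 0.15, 0.15, 0.15],
    "City of Antwerp":    [0.25, 0.30, 0.20, 0.15, 0.10],
    "Port community":     [0.35, 0.25, 0.20, 0.10, 0.10],
    "Construction firms": [0.35, 0.30, 0.15, 0.10, 0.10],
}

n_actors = len(stakeholder_scores)
overall = [sum(scores[i] for scores in stakeholder_scores.values()) / n_actors
           for i in range(len(alternatives))]

for alt, score in sorted(zip(alternatives, overall), key=lambda kv: kv[1], reverse=True):
    print(f"{alt:15s} {score:.3f}")
```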

4. Conclusions

Several stakeholders are involved and several criteria have to be included in the evaluation of transport projects. The proposed methodology allows these points of view and criteria to be incorporated in the analysis. The methodology has been applied in a variety of projects, ranging from the evaluation of infrastructure projects to the evaluation of new technologies. The MAMCA method makes the objectives of the various relevant stakeholders explicit, thereby leading to a better understanding of these objectives by all parties concerned. Second, the MAMCA method shows the essential trade-offs made by all stakeholders, and makes these stakeholders more aware of the dynamic and spatial aspects of the societal decision-making process. Including stakeholders in the analysis takes more time at the beginning, but it improves the likelihood of acceptance of the proposed solution in the end. The case of the Oosterweel connection is a good example of how difficult it can be for a government to decide on mega-projects, and even on how to implement a decision. A good insight into what is at stake for the different stakeholders is crucial to arrive at a successful implementation path. The MAMCA methodology can help to achieve this.

5. References

1. Macharis, C. (2000). Strategische modellering voor intermodale terminals: Socio-economische evaluatie van de locatie van binnenvaart/weg terminals in Vlaanderen, Brussels: Vrije Universiteit Brussel.

2. Macharis, C. (2004). The importance of stakeholder analysis in freight transport: The MAMCA methodology, European Transport/Transporti Europei, 25-26: 114-126.

3. Macharis, C.; De Witte, A., and J. Ampe, 2009, “The multi-actor, multi-criteria analysis methodology (MAMCA) for the evaluation of transport projects: theory and practice”, Journal of Advanced Transportation, vol.43, nr. 2, pp.183-202.

4. Macharis, C and B. Januarius, 2010, “The multi-actor, multi-criteria analysis (MAMCA) for the evaluation of “difficult” transport projects the case of the oosterweel connection.” Full paper presented at the 12th WCTR, July 11-15, 2010 – Lisbon, Portugal

5. Macharis, C.; Verbeke, A. and K. De Brucker, 2004, “The strategic evaluation of new technologies through multi-criteria analysis: the advisors case”, in: E. Bekiaris and Y. J. Nakanishi (Eds.), Economic Impacts of Intelligent Transportation Systems. Innovations and case studies, Elsevier, Amsterdam, pp. 439-460.

Page 104: OR52 EXTENDED ABSTRACTS

     

104 

6. Macharis, C. and B. Boel, 2004, "BRUGARWAT: Brussels Garbage by water", Vervoerslogistieke werkdagen, 4-5 November, Hoeven, The Netherlands. Published in Ruijgrok, C.J. and R.H.J. Rodenburg, Bijdragen vervoerslogistieke werkdagen, pp. 229-242.

7. Macharis, C., De Witte, A. and L. Turcksin, 2010, “The multi-actor multi-criteria analysis (MAMCA): Application in the Flemish long term decision making process on mobility and logistics”, accepted for publication in Transport Policy

8. Saaty, T. (1982). Decision making for leaders, Wadtsworth, Belmont: Lifetime Learning Publications.

9. Verlinde, S.; Macharis, C.; Debauche, W.; Heemeryck, A.; Van Hoeck, E. And F. Witlox, 2009, “Night-time delivery as a potential option in Belgian urban distribution: a stakeholder approach”, Vervoerslogistieke werkdagen: Bijdragen Deel 1, pp. 153-172.

10. Walker, W.E. (2000). Policy Analysis: A systematic Approach to Supporting Policymaking in the Public Sector, Journal of multi-criteria decision analysis, 9: 11-27.


The use of MCDA in strategy and change management

William Mayon-White26

Institute of Social Psychology – LSE

Ayleen Wisudha27
Business Psychology Centre - University of Westminster

Keywords: MCDA; soft systems; systems thinking; creativity; creative problem solving; facilitation.

1. Introduction

This paper presents an account of the authors' use of simplified MCDA tools in a variety of settings. Their experience suggests that many traditional uses of MCDA are limited by an over-emphasis on analytical techniques and insufficient attention to the softer aspects of any decision-making process. This supports the findings of others working in the field, such as Brown (2005) and Humphreys and Jones (2006). Since the early 1980s, both authors have worked extensively with a variety of formal MCDA approaches. Through this experience, they now advocate the use of a flexible MCDA approach that is embedded in a systems-based framework for the management of change and inter-personal engagement. The approach triggers processes that promote a range of perspectives on the decision problem, ensuring that they are introduced at an early stage in the diagnostic phase of problem structuring. These processes complement the structured methodology of MCDA and help to generate a decision audit trail that can be revisited at any time to provide a reminder of the rationale for the components that contribute to decisions. In this way, process-centred MCDA provides a platform for effective communication and a navigation route for teams and groups. Structuring is an explorative framework, neutral in its formation, yet providing access to differentiation of components in terms of both relationships and relative importance (Vennix, 1996). It also aids the construction of a common framework of understanding for the stakeholders to engage with when coming to terms with conflicting demands and objectives. All of the above will be illustrated using material from case studies generated by the authors. MCDA processes work on information that is generated by a group of decision makers. While structuring facilitates the recognition of component relationships, it is the form of information gathering and the available capacity for iterative input and evaluation that prepare the basis for quality control. Individuals participating in this process are subject to biases from a range of cognitive and emotional elements (Hogarth, 1987) as well as processing styles (Leonard et al., 1999). Focusing on this, Hammond et al. (2006) describe decision-making traps that hinder effective decision making. The fuller benefit that should follow from structuring requires a robust interpersonal basis for evaluation and for generating opportunities to challenge and explore. In short, the quality of information which drives and populates the structured framework needs to be aligned with a supporting behavioural process that delivers the capacity for information generation, evaluation and participants' engagement (Bentley, 2000; Vennix, 1996).

                                                            26 Email: [email protected] 27 Email: [email protected]


In order to deliver this robust form of evaluation, effort needs to be put into building the quality of engagement. Hence, known inter-personal blocks to effective information exchange, such as personality and learning styles, also need to be subject to a form of structuring. This area of work relates to the recognition of the misperception and miscommunication characteristic of inter-personal disengagement (Clegg and Birch, 2006). Having established ways to recognise the interpersonal blocks active within the process of information gain and sharing, steps can be taken to highlight countermeasures to these biases. Facilitation frameworks such as the World Cafe (Brown, 2005) and Future Search (Weisbord and Janoff, 2000) attend effectively to learning preferences. They help to uncover misunderstanding and encourage the sharing of information. Together with participative processes such as that advocated by the Institute of Cultural Affairs (Spencer, 1989), and enquiry processes such as that underlying the Appreciative Inquiry movement (Lewis, 2007), they help to enhance collective commitment to action.

2. Corporate problem solving

Examples of the use of this approach with client groups in solving organisational problems show that this line of development is both robust and adapts well to different settings. This paper takes one example from a technology strategy and business change setting with an international media company, and a second example from a strategy assignment with a central government department. In both cases, the engagement lasted a considerable period of time, allowing a series of workshops to develop options and subsequently to engage with the clients' formal decision-making processes. A common theme running through both is the role of the external consultant as both expert and process facilitator, ensuring that responsibility for the decision and its outcomes is left with the client's stakeholders.

3. Simulating group decision-making

In addition to its use with corporate clients, the approach has been used with small teams of postgraduate students in a teaching and experimental mode. These include problem settings that are simulated, with the team being provided with a variety of information about the decision problem and invited to role-play. One example involves roles in a community-based decision over options for "planning gain". Another directs the group to examine resources provided by the institution and to develop their own prioritised and reasoned agenda for change.

4. Next steps

The authors are now applying the process to health choices and to decision-making in health care delivery.

5. References

1. Bentley, T. (2000) Facilitation: Providing Opportunities for Learning, 2nd edition. Stroud: Space Between.

2. Brown, J. (2005) The World Cafe: Shaping Our Futures Through Conversations That Matter. Berrett-Koehler.


3. Brown, R.V. (2005) The Operation Was a Success but the Patient Died: Aider Priorities Influence Decision Aid Usefulness. Interfaces, 35(6), November-December 2005, pp. 511-521.

4. Brown, R.V. (2009) Working with Policy Makers on Their Choices: A Decision Analyst Reminisces. Decision Analysis, March 2009, 6(1), pp. 14-24.

5. Clegg, B. and Birch, P. (2006) disOrganization. Pitman Publishing.

6. Hammond, J.S., Keeney, R.L. and Raiffa, H. (2006) The Hidden Traps in Decision Making. Harvard Business Review, January 2006, 84(1), pp. 118-126.

7. Hogarth, R.M. (1987) Judgement and Choice: The Psychology of Decision Making. Wiley.

8. Humphreys, P.C. and Jones, G.A. (2006) The evolution of group support systems to enable collaborative authoring of outcomes. World Futures, 62, 1-30.

9. Jones, G. and Lyden-Cowan, C. (2002) The Bridge: Activating Potent Decision Making in Organizations. In F. Adam, P. Brezillon, P. Humphreys and J-C Pomerol (Eds.) Decision Making and Decision Support in the Internet Age. Cork, Ireland: Oaktree Press.

10. Leonard, N.H., Scholl, R.W. and Kowalski, K.B. (1999) Information processing style and decision making. Journal of Organizational Behavior, May 1999, 20(3), p. 407 (14 pp.).

11. Lewis, S., Passmore, J. and Cantore, S. (2007) Appreciative Inquiry for Change Management: Using AI to Facilitate Organizational Development, Kogan Page.

12. Vennix, J.A. (1996) Group Model Building: Facilitating Team Learning Using System Dynamics, Wiley.

13. Weisbord, M. and Janoff, S. (2000) Future Search: Action Guide to Finding Common Ground in Organizations and Communities, McGraw-Hill.


The cornerstone of network enabled capability: defining agility and quantifying its benefit

James Moffat28

Senior Fellow, Defence Science and Technology Laboratory (Dstl), UK

1. Introduction

The development of Network Enabled Capability (NEC) is one of the key priorities for the UK Ministry of Defence. It falls into the category of difficult and highly important problems for our customer community. I have thus been working on issues related to NEC (and precursor concepts such as Digitisation of the Battlespace) for a number of years. In this paper, I describe how these strands of research have influenced both the policy level and work across Dstl. My research has split fairly naturally into two parts: developing a conceptual understanding and approach (thus assisting the development of doctrine), and developing quantified Operational Analysis29 (OA) models to allow assessment of the cost-benefit of such an approach.

2. Quantifying the benefit of agility

In my research, I took a new approach to the representation of Command and Control, based on the understanding from Complexity Theory that simpler agents, joined in a network or hierarchy, can represent the key effects of higher-level Command and Control (Moffat, 2002). This key insight (which actually pre-dated the formal development of Complexity Theory) has led to the Deliberate and Rapid Planning processes, and to the development of a new generation of simulation models which now underpin Dstl's ability to analyse balance of investment across the Equipment Programme (including sensors and command systems), future force structures, and the implications of Defence policy (Taylor and Lane, 2004). This approach has also stimulated other developments such as the JOCASTS wargame used by the Joint Services Command and Staff Course (JSCSC at the Defence Academy), and the HiLOCA model (now owned by QinetiQ). The approach I have adopted is to strike out on a new path and exploit such novel ideas from complexity mathematics in order to create a representation of the Command process which is elegant, yet still transparent. This avoids the use of extensive sets of special expert-system rules (the previously available approach to such issues), and has led to models which are the key element of Dstl studies and which are relevant to our current need to give authoritative and timely advice. Examples of the models either developed or under development are:

- The COMAND campaign-level Maritime, Air and Land model. This is a Command and Control (C2) centred model based on the Rapid and Deliberate Planning processes. It is the key component of Dstl studies looking at joint balance of capability (including C2) across the defence budget.

- The DIAMOND model, which represents non-warfighting scenarios at a joint level (including the effects of non-military entities such as refugees or aid agencies), exploits the agent architecture developed as part of our research. This model is now in use in studies, and has been given to a number of other countries, including the US.

                                                            28 Email: [email protected] 29 Operational Analysis is the Defence application of Operational Research. 


- The CLARION campaign-level Land/Air model is due to incorporate the Rapid Planning process. CLARION is the main model within Dstl for analysis of Land/Air force structure trade-offs across the equipment budget.

- The SIMBAT model (providing underpinning analysis at the tactical level) is a pure instantiation of the Rapid Planning process, and is used to support high-level analysis as well as lower-level tactical studies.

- The SIMBRIG model spans the gap between SIMBAT and CLARION. It has been developed using elements of the Rapid Planning process to drive the manoeuvre units.

- The SIMMAIR model is currently under development as a system-level model to bridge the gap between tactical naval models and the COMAND model. It is driven by the Rapid Planning process.

- The WISE formation-level wargame comprises a number of military players at up to Divisional and Brigade level, underpinned by a simulation 'engine'. This engine is driven by the Rapid Planning process. I have recently developed a prototype closed-form simulation version of WISE, incorporating the Deliberate Planning process. The gaming structure is now in study use, and has been key to consideration of future Army operational-level force structures.

Figure 1 indicates how these various models fit together to form a hierarchy for application to analysis across the spectrum of requirements, and how these are shared and owned by the various Systems Departments in Dstl: Policy and Capability Studies (PCS), Air and Weapons Systems (AWS), Land Battlespace Systems (LBS) and Naval Systems (NS).

Figure 1. How these models fit together across the systems departments of Dstl.

[Figure not reproduced: hierarchy of models across the Dstl systems departments (PCS, AWS, LBS, NS) — campaign level: joint warfighting (COMAND), peacekeeping (DIAMOND) and land warfighting (CLARION); system level: air/maritime (SIMMAIR), land warfighting (SIMBRIG, SIMBAT) and the land wargame/simulation for peacekeeping/warfighting (WISE).]


3. Validation of the Approach

As part of transitioning such models to the study programme, they have to undergo a rigorous validation process. This includes both detailed scrutiny of the model assumptions and behaviour by military officers and, where possible, comparison of the model behaviour with historical conflicts of relevance. For example, as part of the process of commissioning the COMAND model, a detailed comparison was made between COMAND and the Falklands conflict of 1982. Since COMAND is a stochastic model, this comparison was between the single actual outcome of the 1982 conflict and the 'fan' of results from 160 replications of the COMAND model (Moffat, Campbell and Glover, 2004). Three main types of agent decision making were represented in this comparison:
a) In terms of the (Deliberate) campaign plan for each side's maritime assets, the plan consisted of a string of missions. At various points, triggers were built into the plan, allowing it to branch to a new string of missions depending on the situation at the trigger point (for example, the sinking or not of a major warship).
b) In terms of Rapid Planning, maritime missions could be adapted to reflect local circumstances. For example, a UK ship in transit to a patrol area could mount an attack of opportunity if its sensors detected such a threat and the attack was likely to succeed.
c) Air missions were developed and prosecuted as a function of the sensor information on targets. For example, all Argentinean air missions attacking the UK task force were created by the model in response to sensor information (mainly from Maritime Patrol Aircraft (MPA) and sensors based on the Falkland Islands).
Entity/group missions are the building blocks of the scenario and are the key to COMAND's representation of human intelligence, as expressed in the decisions made by the various commanders and the emergent effect of these decisions. Broadly, it was possible to represent all types of mission: for example, the retreat of the Argentinean navy to port following the loss of one of their ships, or the regrouping of the various ships into a single amphibious landing force and its subsequent passage to San Carlos. In terms of overall campaign outcome, we performed a number of comparisons of casualties (actual versus predicted by the model). Such casualties are the product of the number of engagements (driven by the Command process) and the effectiveness in each engagement. The effectiveness per engagement was scaled back to 1982 levels in the model based on the historical records and log books, so this comparison was a true test of the validity of our representation of the Command process. Many detailed comparisons were carried out (Moffat, Campbell and Glover, 2004). Just one of these is shown here (Figure 2), comparing the actual historical record of the number of UK ships sunk or operationally rendered incapable with that predicted by the model. The result is convincingly close; a minimal sketch of how such a confidence band can be built from replication data is given after Figure 2.


[Figure not reproduced: number of UK ships sunk or rendered operationally incapable (vertical axis, 0 to 10) by campaign day (15 April to 14 June 1982), showing the COMAND prediction with upper and lower 95% confidence intervals against the actual historical record.]

Figure 2. Comparison of the Falklands Conflict with the COMAND simulation.
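COMAND itself is not publicly available, but the construction of such a 'fan' is straightforward: for each campaign day, take the mean of the replications and a 95% confidence band around it, and overlay the single historical trajectory. The Python sketch below illustrates this with purely synthetic replication data; the loss process it simulates is a placeholder, not the COMAND model, and the numbers it prints are meaningless beyond showing the mechanics.

```python
# Illustrative sketch: build a replication 'fan' (mean and 95% confidence
# band per campaign day) of the kind plotted in Figure 2. The replication
# data are synthetic placeholders, not COMAND output.

import random
import statistics

random.seed(1)
DAYS = 61                                  # 15 April - 14 June 1982
REPLICATIONS = 160                         # as used in the COMAND comparison

def one_replication():
    """Synthetic cumulative UK ship losses over the campaign (placeholder)."""
    losses, total = [], 0
    for _ in range(DAYS):
        total += 1 if random.random() < 0.08 else 0
        losses.append(total)
    return losses

runs = [one_replication() for _ in range(REPLICATIONS)]

for day in range(0, DAYS, 10):             # print a few sample days
    sample = [run[day] for run in runs]
    mean = statistics.mean(sample)
    half = 1.96 * statistics.stdev(sample) / (REPLICATIONS ** 0.5)
    print(f"day {day:2d}: mean {mean:4.1f}, 95% band [{mean - half:4.1f}, {mean + half:4.1f}]")
```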

4. The Way Forward

These simulation models represent a significant monetary and intellectual investment by the MoD, and are sufficient to take us along the journey from our current capabilities to what is termed the ‘NEC Transition’ epoch of NEC (Ministry of Defence, 2005) as shown in Figure 3.

[Figure not reproduced: the NEC journey, running from a baseline of Traditional C2 with organic sources, through Intermediate C2 with information sharing and shared awareness (NEC Initial, NEC Transition), towards naturally synchronised Mature NEC, against increasing situational awareness, increasing command agility and equipment capability ('doing better things').]

Figure 3. The NEC journey

However, our understanding of the Mature stage of Network Enabled Capability (NEC), envisaged towards the end of the NEC journey, is still not sufficient for the full development of tools and methods by which it can be modelled. The key aspect which characterises it is agility, which would come about through the adaptivity of task-organised force units. I am thus continuing to work on enhancing both our conceptual understanding and the model set to capture this. For example, the closed-form simulation version of WISE (mentioned earlier)


complements the gaming version, allowing the investigation of a much wider set of assumptions – a key aspect of agility. Finally, Figure 4 gives a sense of my development of conceptual understanding in relation to agile task-organised force units. Here, 'Comprehensive Task Groups' (CTGs) are formed at the joint level. The horizontal links imply sharing not just of information but of resources; this requires a high level of shared situational awareness. The 'bubble' represents the shared understanding of the intent of the Joint Force Commander across these CTGs. Similarly, each CTG can cascade this structure down to its 'Units of Action' (UA). A lengthier discussion of this approach can be found in Moffat, Grainger et al (2006).

Figure 4. Conceptual model of agile mission grouping. [The figure shows PJHQ and JFHQ above three Comprehensive Task Groups (CTG1, CTG2, CTG3) linked within the JF Commander's Intent, with Units of Action (UA1, UA2, UA3) linked within the TG1 Commander's Intent below.]

5. The International dimension

It is one of the key tenets of the UK approach to defence doctrine that it should be aligned with that of our close allies, in particular the United States (US). Over the last decade, a key driver of US doctrine has been the development of the ideas of Network Centric Warfare (NCW). Leading the thinking in the USA has been Dr David Alberts (Director of Research, Office of the Assistant Secretary of Defence (Networks and Information Infrastructure), the Pentagon). He was the lead author of the seminal work 'Network Centric Warfare: Developing and Leveraging Information Superiority' (Alberts, Garstka and Stein, 1999). I have worked jointly with Dr Alberts to help develop this conceptual thinking, through the forum of the NATO Research and Technology Organisation (RTO). This has led to a number of products of great benefit to the UK, including the NATO Code of Best Practice for Command and Control (C2) assessment (Moffat et al, 2002), endorsed by UK MoD as the guide which should be used for all studies related to C2 (including, therefore, NEC-related studies). We also jointly developed a new reference model of Command and Control fit for the Information Age (NATO, 2006). In particular, we developed the concept of agility (in terms of



both the agility of the force, and the agility of the Command process) and this is discussed at length in two books (Moffat, 2003; Atkinson and Moffat, 2005). The concept of agility now underpins both the US doctrine of NCW and our own High Level Operational Concept (Ministry of Defence, 2010). It thus has a direct influence on the way forward for Defence in both the UK and the USA.

6. Conclusions

As a consequence of my research, Dstl now has a much more coherent approach to the modelling and analysis of Network Enabled Capability, and thus a coherent response to customers in this important domain. My work has also aided a strategic shift (in both the UK and USA) in conceptual thinking (and thus doctrine) towards the notion of agility as the fundamental driver of force structure.

7. References

1. Alberts D, Garstka J and Stein F (1999). Network Centric Warfare: Developing and Leveraging Information Superiority. US DoD CCRP, Washington DC, USA (reprinted 2003).

2. Atkinson S and Moffat J (2005). The Agile Organization. US DoD CCRP, Washington DC, USA.

3. Ministry of Defence (2005). The JCB NEC Delivery Strategy, Final Version.

4. Ministry of Defence (2010). The High Level Operational Concept. DCDC.

5. Moffat J (2002). Command and Control in the Information Age: Representing its Impact. The Stationery Office, London, UK.

6. Moffat J et al (2002). NATO Code of Best Practice for C2 Assessment. US DoD CCRP, Washington DC, USA.

7. Moffat J (2003). Complexity Theory and Network Centric Warfare. US DoD CCRP, Washington DC, USA.

8. Moffat J, Campbell I and Glover P (2004). Validation of the Mission-Based Approach to Representing Command and Control in Simulation Models of Conflict. J Opl Res Soc 55, 340-349.

9. Moffat J, Grainger P et al (2006). Modelling Agile Mission Grouping – Problem Structuring. Dstl Unpublished Report.

10. NATO (2006). Exploring New Command and Control Concepts and Capabilities. Final Report prepared for NATO, Jan 2006. Available at www.dodccrp.org, accessed 20 April 2010.

11. Taylor B and Lane A (2004). Development of a Novel Family of Military Campaign Simulation Models. J Opl Res Soc 55, 333-339.


Scheduling of a production line using the AHP method: Optimization of the multicriteria weighting by genetic algorithm.

Karen Ohayon, Afef Denguir, Fouzia Ounnar30, Patrick Pujo

LSIS, Aix Marseille University, Av Esc. Normandie Niémen,

13397 Marseille cedex 20, France

Abstract

In this paper, we first present a literature review of metaheuristics, followed by a description of the metaheuristic chosen to optimize the multicriteria decision aid method AHP. Second, we consider a case study to propose a schedule for a production line using the AHP method. Finally, we apply the chosen metaheuristic to optimize the configuration phase of AHP and compare the two sets of results. Keywords: Analytic Hierarchy Process, Genetic Algorithm, Multicriteria Decision Making, Optimization

1. Introduction

New workshop scheduling techniques are required to satisfy industrial production conditions that are becoming ever tighter. The AHP method [Saaty, 1980] is a decision support tool for multicriteria problems in which both qualitative and quantitative aspects can be considered. In the context of controlling a production system, AHP is regularly used to circumvent problems of machine breakdown [Ounnar, 1999], to choose outsourcing policies [Ounnar et al., 2003] or to evaluate suppliers in a self-organized logistics network partnership [Ounnar et al., 2007] [Ounnar et al., 2009]. The multicriteria decision aid method AHP (Analytic Hierarchy Process) is also used to optimize production workshop scheduling [Ounnar and Pujo, 2009]. Optimization with a single criterion is often not sufficient to reflect reality; indeed, quite often, multicriteria optimization problems are reduced to single-criterion optimization problems. This approach is difficult to apply when the criteria are expressed in different ways, and it is more interesting to find a solution to the problem by considering the different criteria together. In this work, the AHP method is applied to the scheduling of a cardiovascular prosthesis production line.

We propose to implement AHP in two main phases: configuration and exploitation. To rank alternatives (solutions), an expert needs to set the relative importance of the criteria and their associated indicators during the configuration phase. This configuration leads to pairwise comparisons between the different criteria and indicators to highlight their importance in the decision-making process. This 'static' phase of the algorithm must be validated by a mathematical verification of consistency. The exploitation phase corresponds to the 'dynamic' part of the AHP algorithm: it consists in ranking the alternatives with respect to the overall objective. Thus, AHP uses a set of parameters that is estimated by an expert with some margin of error, and the resulting schedule will therefore not necessarily be optimal. Using a metaheuristic such as a genetic algorithm can reduce the subjectivity of AHP's result by improving this parameterization, which is achieved by exploring other solutions. The objective is to improve the outcome of the decision support and of the scheduling.

                                                            30 Corresponding author. E-mails: {karen.ohayon, afef.denguir, fouzia.ounnar, patrick.pujo}@lsis.org
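As a rough illustration of the configuration step described above, the short sketch below derives a priority vector and Saaty's consistency ratio from a single pairwise comparison matrix; the judgement values are purely illustrative assumptions, not those elicited for the case study.

```python
import numpy as np

# Saaty's random consistency index (RI) for matrix sizes 1..9.
RANDOM_INDEX = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12,
                6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45}

def ahp_priorities(matrix):
    """Return (priority vector, consistency ratio) for a pairwise matrix."""
    a = np.asarray(matrix, dtype=float)
    n = a.shape[0]
    eigvals, eigvecs = np.linalg.eig(a)
    k = np.argmax(eigvals.real)                  # principal eigenvalue
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()                                 # normalised priorities
    ci = (eigvals[k].real - n) / (n - 1)         # consistency index
    cr = ci / RANDOM_INDEX[n] if n > 2 else 0.0  # consistency ratio
    return w, cr

# Hypothetical expert judgements comparing three scheduling criteria.
pairwise = [[1,   3,   5],
            [1/3, 1,   2],
            [1/5, 1/2, 1]]

weights, cr = ahp_priorities(pairwise)
print("criteria weights:", np.round(weights, 3), "CR:", round(cr, 3))
```

A consistency ratio below the usual 0.1 threshold would validate this 'static' configuration before the exploitation phase ranks the alternatives.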


In this context, we first present a literature review of metaheuristics, followed by a description of the metaheuristic chosen to optimize the multicriteria decision aid method AHP. Second, we present a case study in which a schedule for a production line is proposed using the AHP method. Finally, we apply the chosen metaheuristic to optimize the configuration phase of AHP and compare the two sets of results.

2. Literature review

The configuration phase of AHP requires the choice of all the weighting parameters of the pairwise comparisons. As the number of these parameters is large, it becomes impossible to explore all possibilities for the AHP configuration. Most often, to configure AHP, an expert gives a solution which can contain errors or uncertainties. Moreover, if the AHP configuration problem is put to several experts, different proposals for the AHP configuration appear. These observations lead us to optimize the configuration phase of the AHP method by means of a metaheuristic. A metaheuristic is an iterative stochastic algorithm which guides the exploration of the search space in order to find an optimal solution. It can be seen as a search procedure that looks for a global extremum of an objective function without exploring all the solutions of the search space, which gives it a reasonable response time. Metaheuristics can be divided into three categories. The first handles only one solution and relies typically on local search methods. The second category handles a population of solutions. The last category is the hybrid one, which combines the two previous approaches [Sevaux, 2004]. Population-based metaheuristics give a good solution but require more search time; they offer the best compromise between search efficiency and genericity of the search strategy. Among the most commonly used population-based metaheuristics, we opted for genetic algorithms [Holland, 1975] [Holland, 1992], which, on the one hand, use a more generic strategy to find the 'best' solution than the ant colony metaheuristic [Dorigo et al., 1991] [Dorigo, 1992] and, on the other hand, match our problem structure better (the encoding of the AHP settings is easily expressed as a chromosome).

3. AHP Optimization by a genetic algorithm

In genetic algorithms, every problem is considered as an environment in which a population of potential solutions evolves [Michalewicz, 1999] [Sevaux, 2004]. Each solution is equivalent to an individual in the population and is represented by its chromosome structure. The general principle of genetic algorithms is to construct new solutions from existing solutions in an iterative and pseudorandom way; each new solution is then evaluated for its performance. This process is realized by creating new individuals from a generated and evaluated population (the initial population) using genetic operators (crossover, mutation, ...). New individuals are inserted into the next population only when their performance is superior to that of the least efficient solution of the current one. The implementation of a genetic algorithm requires adapting the algorithm to the problem to be solved. This consists in encoding the problem data (defining a structure that represents the AHP configuration data in a chromosome) and parameterizing the variables and functions of the genetic algorithm: setting the population size, estimating the execution rates of the genetic operators (crossover and mutation), choosing a method to generate an initial population, choosing a fitness function to measure how well an individual is adapted to its environment, and finally establishing a stopping criterion for the execution of the algorithm [Goldberg, 1987]; [Goldberg, 1989]; [Goldberg, 1991].
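A minimal, generic sketch of this loop is given below (truncation selection, one-point crossover and per-gene mutation on a toy fitness function); it is an illustrative skeleton under those assumptions, not the implementation used in this work.

```python
import random

def run_ga(fitness, chrom_len, pop_size=30, generations=100,
           cx_rate=0.8, mut_rate=0.01):
    """Minimal genetic algorithm over real-valued genes in [0, 1]."""
    pop = [[random.random() for _ in range(chrom_len)] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        parents = scored[:pop_size // 2]            # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            p1, p2 = random.sample(parents, 2)
            if random.random() < cx_rate:           # one-point crossover
                cut = random.randint(1, chrom_len - 1)
                child = p1[:cut] + p2[cut:]
            else:
                child = p1[:]
            child = [random.random() if random.random() < mut_rate else g
                     for g in child]                # per-gene mutation
            children.append(child)
        pop = parents + children                    # replacement
    return max(pop, key=fitness)

# Toy fitness: prefer chromosomes whose genes sum close to a target value.
best = run_ga(lambda c: -abs(sum(c) - 3.0), chrom_len=6)
print([round(g, 2) for g in best])
```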


To optimize an AHP configuration, the chromosome must represent the AHP preference matrices. Considering only the upper triangular part of each matrix is sufficient to encode all the information in the AHP preference matrices. Each preference matrix then corresponds to a gene and each elementary preference to an allele. If n is the dimension of an AHP preference matrix, the size of the corresponding gene is n(n-1)/2. The first gene represents the preference matrix between criteria, and the following ones represent the preferences between the indicators of each criterion.

To set the initial population, we defined two methods. The first consists in considering the settings proposed by different experts. The second exploits the doubts and uncertainties of a single expert; it requires a mechanism allowing the expert to express his uncertainty about one or more preference values, from which all possible combinations of configurations can subsequently be generated.

New individuals are created from existing individuals in the current population. Generally, two individuals are drawn and a genetic operator is applied to them. The easiest genetic operator to implement is crossover: a first child is constructed with the beginning of the first parent and the end of the second parent, and a second child with the beginning of the second parent and the end of the first. The position of the cutoff point (beginning/end within the chromosome) is randomly determined at each draw of parents. Another classical operator is mutation, which consists in randomly changing a gene or an allele of a chromosome; it can easily be preceded by the crossover operator. The relative proportions in which these operators are applied affect the speed and convergence of the algorithm and the quality of the solution found.

The fitness function measures the performance of the genetic algorithm's solutions; it calculates the phenotype of a chromosome from its genes. In our case, this consists in running a workshop scheduling with a set of predetermined data, and the observation of the results makes the evaluation of performance possible. This calculation is particularly costly in terms of response time. Different fitness functions can be defined: for example, a fitness function can minimize the average completion date and/or the average number of delays, or maximize the overall equipment effectiveness (OEE). The fitness function allows us to compare two individuals on the basis of evaluation criteria that depend on the problem to be optimized. Finally, the stopping criterion ends the execution of the genetic algorithm; the iterations can be stopped after a fixed maximum number of generations or when a convergence limit is reached.

Beyond the general principle of implementation of this genetic algorithm, AHP itself can significantly improve the efficiency of the algorithm, in two respects. First, it is very interesting to test the viability of a new individual before calculating its fitness function, especially when obtaining its value is costly; AHP allows a solution to be pre-validated (or not) by calculating its consistency coefficient. Second, to avoid the generation of inconsistent solutions by the crossover operator, we proposed a new implementation of this operator: because the cutoff point is located only between the genes, that is, between the AHP preference matrices, the new crossover operator can only generate consistent chromosomes from an initially consistent population. Consistency therefore has to be checked only in the case of a mutation.
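The encoding and the modified crossover can be sketched as follows; the chromosome below assumes one 3x3 criteria matrix and two 3x3 indicator matrices with invented preference values, and the consistency test uses the standard Saaty consistency ratio as a stand-in for the consistency coefficient mentioned above.

```python
import numpy as np

RI = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32}  # Saaty random index

def to_matrix(gene, n):
    """Rebuild a full reciprocal preference matrix from its upper triangle."""
    m = np.ones((n, n))
    values = iter(gene)
    for i in range(n):
        for j in range(i + 1, n):
            v = next(values)
            m[i, j], m[j, i] = v, 1.0 / v
    return m

def consistent(gene, n, threshold=0.1):
    """Check Saaty's consistency ratio against the usual 0.1 threshold."""
    m = to_matrix(gene, n)
    lam = np.max(np.linalg.eigvals(m).real)
    cr = ((lam - n) / (n - 1)) / RI[n]
    return cr < threshold

# Chromosome = one gene per AHP matrix; each gene holds n(n-1)/2 alleles.
# Here: a 3x3 criteria matrix followed by two 3x3 indicator matrices.
sizes = [3, 3, 3]
parent_a = [[3, 5, 2], [2, 4, 2], [1, 3, 3]]
parent_b = [[2, 4, 2], [3, 6, 2], [2, 2, 1]]

# Modified crossover: the cut falls only on a gene boundary, so each child
# keeps whole (already consistent) matrices from its parents.
cut = 1
child = parent_a[:cut] + parent_b[cut:]

print("child genes:", child)
print("all genes consistent:",
      all(consistent(g, n) for g, n in zip(child, sizes)))
```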

4. Testing and analysis of results

To evaluate the performance of the solution (optimization of AHP by a genetic algorithm), many tests were carried out on a case study that considers three machines (M1, M2, M3) and three


different types of products (P1, P2, P3); the fitness function to minimize represents the average total waiting time of products in the workshop. These tests were carried out while varying the crossover operator parameter (random cutoff point versus cutoff point between matrices) and the mutation frequency (1% and 0.1%). We were able to evaluate the overall behaviour of the genetic algorithm and to estimate, for each test, the convergence characteristics, the evolution of the fitness function values and the execution time. Finally, we obtained an improved configuration of the AHP parameters, with a 10% improvement in the fitness function. The search strategy that tests the viability of a new individual before calculating its fitness function yields a 71% performance gain for the algorithm.

5. Conclusion

The Analytic Hierarchy Process (AHP) method includes a phase based on expert judgement which establishes the parameters of the AHP algorithm. However, several experts may be involved in this setup process, each with a different perception, which generates different settings to reconcile; moreover, the setup proposed by an expert is not necessarily the optimal one. These findings led us to optimize the configuration phase of AHP by implementing a metaheuristic, and more particularly a genetic algorithm. This metaheuristic tests settings similar to those proposed by one or more experts in order to find the optimal parameter setting and thus an optimal schedule. To this end, after reviewing the existing metaheuristics and justifying the choice of genetic algorithms, we adapted the genetic algorithm method to the multicriteria decision aid method AHP. The implementation of this adaptation was tested on a case study consisting in scheduling production tasks. First results show a significant improvement of 10% in the fitness function. This work represents our first study on the optimization of the AHP method by genetic algorithm.

6. References

1. Dorigo M., Maniezzo V., Colorni A. (1991). Positive feedback as a search strategy. Technical Report 91-016, Politecnico di Milano, Italy.

2. Dorigo M. (1992). Optimization, Learning and Natural Algorithms. PhD thesis, Politecnico di Milano, Italy.

3. Goldberg D. E. (1987). Genetic algorithms with sharing for multimodal function optimization. In Proceedings of the Second International Conference on Genetic Algorithms.

4. Goldberg D. E. (1989). Genetic Algorithms in Search: Optimisation and Machine Learning. Addison-Wesley.

5. Goldberg D. E. (1991). Genetic Algorithms in Search. Addison-Wesley.

6. Holland J. H. (1975). Adaptation in Natural and Artificial Systems. Technical Report, University of Michigan, Ann Arbor.

7. Holland J. H. (1992). Adaptation in Natural and Artificial Systems: An Introductory Analysis with Applications to Biology, Control, and Artificial Intelligence. MIT Press/Bradford Books, Cambridge.

8. Michalewicz Z. (1999). Genetic Algorithms + Data Structures = Evolution Programs. Springer, Berlin.

9. Ounnar F. (1999). Prise en Compte des Aspects Décision dans la Modélisation par Réseaux de Petri des Systèmes Flexibles de Production. PhD thesis, Institut National Polytechnique de Grenoble – Laboratoire d'Automatique de Grenoble, Grenoble, France.


10. Ounnar F., Bouchriha H., Pujo P., Ladet P., D'Amours S. (2003). Faire ou faire-faire dans un réseau logistique auto-organisé. 5e Congrès International de Génie Industriel, Québec, Canada.

11. Ounnar F., Pujo P., Mekaouche L., Giambiasi N. (2007). Customer-supplier relationship management in an intelligent supply chain network. Production Planning & Control, 18(5), 377-387.

12. Ounnar F., Pujo P. (2009). Pull control for Job Shop: Holonic Manufacturing System approach using multicriteria decision-making. Journal of Intelligent Manufacturing, DOI 10.1007/s10845-009-0288-4.

13. Ounnar F., Pujo P., Mekaouche L., Giambiasi N. (2009). Integration of a flat holonic form in an HLA environment. Journal of Intelligent Manufacturing, Vol. 70, pp. 91-111.

14. Saaty T. L. (1980). The Analytic Hierarchy Process. McGraw-Hill, New York.

15. Sevaux M. (2004). Métaheuristiques: Stratégies pour l'optimisation de la production de biens et services. HDR report, Université de Valenciennes et du Hainaut-Cambrésis, France.


A framework to assess current discourses in information systems: an initial survey of a sample of IS Journals (1999-2009)

Alberto Paucar-Caceres31

Manchester Metropolitan University Business School,

Aytoun Building, Aytoun Street, Manchester, M1 3GH, UK

Abstract

We propose a framework to reflect on the development of four management science (MS) paradigms, arguing that the field of information systems (IS) has followed a similar path. The framework contains four IS paradigms/discourses: (1) positivist/normative; (2) soft/interpretive; (3) critical/pluralistic; and (4) constructivist/2nd-order cybernetics. The paper characterizes these approaches to IS by using four key terms (System, Organisation, Management and Information), exploring the way these concepts are perceived through the lens of the four paradigms. The paper reports findings on current IS trends from an initial survey of six top IS journals, identifying articles adhering to the interpretive, critical and constructivist paradigms published between 1999 and 2009. The results seem to indicate that IS is moving towards a practice in which interpretive, critical and constructivist discourses are used. Conclusions based on the proposed framework and publication trends, together with some points for further research, are offered. Keywords: Philosophy of information systems; interpretive; critical systems; constructivism; SSM; survey.

1. Introduction

The debate about, and critique of, the goal-seeking positivistic model of classical Management Science/Operational Research (MS/OR) has been prominent, at least amongst the systems, management science and information systems communities in the UK. The results of this debate have contributed to the establishment of a strong soft systems tradition amongst British OR and systems practitioners. From the late 1970s, the development of this tradition has enriched the MS/OR landscape with a number of methodologies developed initially by Checkland (1981, 1999) and then continued by a number of systems and OR authors such as Eden et al (1983); Friend et al (1977, 1997); Flood and Jackson (1991); Jackson (1991); and Mingers (1997a). In this context, we deliberately borrow a framework from MS/OR to explore the development of similar discourses in the field of information systems. We argue that similar debates have been taking place in the field of information systems, resulting in what we refer to as the theoretical and practical positions of four discourses in IS: (1) hard/normative; (2) soft/interpretive; (3) critical/pluralistic; and (4) constructivist/2nd-order cybernetics. To further distinguish the characteristics of each paradigm, we consider approaches to management science by using four key terms (System, Organisation, Management and Information) and explore the way the four different MS discourses see these concepts (Jackson, 2003; Olave-Caceres and Gomez-Florez, 2001). A list of established management science systemic methodologies representing each of these paradigms is suggested. Of the four discourses proposed here, the first one, the hard/normative, is the most commonly used and widely known, as we explain later; for this reason we decided to concentrate on surveying the latter three: the interpretive, critical and constructivist paradigms in IS. A sample of six top journals publishing research in information

                                                            31 Email: [email protected]


systems (IS) is then surveyed to assess the use of the systemic methodologies mapped into the framework. Using a set of keywords associated with each methodology, the databases of the selected journals were queried for IS articles published over the 10-year period between 1999 and 2009. By identifying articles reporting applications of (or referring to) systemic methodologies in the different aspects of IS, the paper aims to raise awareness amongst information systems and systemic practice researchers of the benefits of further exchange and conversation between management scientists and the academics and practitioners of these two fields of management. The paper is organised as follows: (1) the development of management science methodologies is mapped onto the four paradigms mentioned above, suggesting that these paradigms can be replicated to map the development of information systems; (2) the methodological orientation of each paradigm, and the way four key management concepts (System, Organisation, Management and Information) are seen by each paradigm, is discussed; (3) the methodology used in the survey is presented; and (4) based on the sample of six IS journals that have published articles using these approaches, some initial IS trends are suggested, together with some tentative conclusions and topics for further research.

2. Mapping the development of information systems discourses

To a great extent, the field of IS has followed a path similar to that of Management Science (or Operational Research). To illustrate this path we borrow a framework that depicts the development of MS/OR discourses over the last decades. The framework, adapted from Jackson (2003), categorises MS/OR methodologies into four discourses and has been used extensively in management science. Information systems can be considered a field adjacent to management science, and therefore it can be argued that we can transport this framework to mirror the development of IS discourses. There are certainly clear parallels between MS and IS (some will argue that IS is an offshoot or a branch of MS), so in the rest of this paper we refer to the paradigms existing in MS and, by extension, treat them as also present in IS. The main MS/OR methods and methodologies attached to these four discourses are briefly surveyed in the next section. These discourses are: (a) the 'hard'/functionalistic/normative paradigm; (b) the interpretive paradigm; (c) the critical/multi-methodological paradigm; and (d) the constructivist/2nd-order cybernetics/organisational cybernetic paradigm. There is no space in this paper to describe the philosophical underpinnings of each paradigm; a useful summary of some features of each paradigm is offered in Figure 1. A full account of the key characteristics (ontological, epistemological and methodological claims) of these paradigms can be found elsewhere (see for example Jackson, 2003; Paucar-Caceres, 2010; Paucar-Caceres and Pagano, 2009).

3. Survey of articles using systemic methodologies in information systems

We started by assembling a sample of journals dedicated to publishing academic and practitioner research in information systems. It is well known that there are a number of journal ranking lists (in all areas of management). To select our sample, we opted for the 'Academic Journal Quality Guide' compiled by the Association of Business Schools (ABS), a UK-based organisation (ABS Academic Journal Quality Guide, 2009; http://www.the-abs.org.uk/files//abs_web_subject.pdf; subject area listing (INFO MAN), pages 15-16; retrieved on 7th May, 2009). This guide is, in fact, a 'list of lists', but it also contains its own ranking, and this is the ranking we have used in our selection. Table 1 presents our final sample of six journals: one 4*, four 3* and one 2* journal. This is a convenience sample that gives a good spectrum of IS journals, and we use it here as an initial point from which to gauge the use of non-normative paradigms in IS and to explore current trends amongst IS journals; we are planning to expand the sample to include a wider range of journals. The six journals selected are well known and recognised in the field of information systems on both


sides of the Atlantic. Although we have used the 2009 ranking, and ABS has announced a new list to be released on 20th March 2010, we do not expect the ranking of these well established journals to change significantly.

Journal | Publisher | ABS Journal Guide 2009 rank (1* to 4*)
Information Systems Research (ISR) | Informs | 4
Journal of Information Technology (JIT) | Palgrave | 3
Information Systems Journal (ISJ) | Wiley | 3
Journal of Strategic Information Systems (JSIS) | Elsevier | 3
European Journal of Information Systems (EJIS) | Palgrave | 3
International Journal of Information Management (IJIM) | Elsevier | 2

Table 1. Information Systems Journals

To reveal information systems articles reporting the use of systemic methodologies, we assembled a group of typical keywords associated with the set of systems methodologies sketched in the previous section. We decided to assemble a set of keywords that would best describe the IS paradigms as portrayed in Figure 1. In assembling these keywords, we concentrated on the best-known methodologies from each paradigm (e.g. Checkland's SSM for the interpretative paradigm) and searched for the appearance of these keywords in the title, abstract or indeed the keywords of each article. Previous studies (see, for example, Schultze and Leidner, 2002) have indicated that the normative paradigm (embodying a positivistic epistemology and an objectivistic ontology) has prevailed in information systems research. This general attachment of information systems to a positivistic/normative paradigm, which seems to be a 'solitary paradigm', has been questioned (Goles and Hirschheim, 2000). Although there is evidence of shifts occurring, this paradigm is still the predominant one. We intend to measure how far IS has gone in using other paradigms, and that is the reason why we decided not to survey this paradigm but the other three, that is, the interpretive, critical and constructivist. The keywords assembled are shown in Table 2.


Hard/Normative:
- Methodological orientation: discovery of law-like relations amongst variables; 'deep' structures and patterns.
- Research intention: optimisation; problem-solving.
- Ontology: realism. Epistemology: positivism and structuralism.
- System: systems exist in and are part of the real world; they can be engineered. 'Analysis of the problem situation is conducted in systems terms'; 'quantitative analysis can be useful since systems obey laws'.
- Organisation: socio-technical system; elements combined in a structured manner to achieve known goals efficiently.
- Management: rational process of decision-making; the decision-maker acts in full possession of 'bounded rationality'.
- Metaphors: classical OR/MS and normative IS: machines; System Dynamics: flux and transformation; Management Cybernetics: brain.
- Information: implies three related concepts: the probability of occurrence of a signal; the degree of order of the elements of information; and an effective data organisation.

'Soft'/Interpretive:
- Methodological orientation: learn from the intervention and understand perceptions and people's purposes; improve problematic situations.
- Research intention: bring consensus or accommodation between stakeholders' views.
- Ontology: interpretivism. Epistemology: anti-positivism.
- System: separation of the real world and the systems-thinking world; systemicity is in the process. 'Analysis of the problem situation is designed to be creative and may not be conducted in systems terms'; 'quantitative analysis is unlikely to be useful'.
- Organisation: social unities produced by the interaction between actors.
- Management: effort to maintain relations through negotiation and the evaluation of different courses of action.
- Metaphors: cultural systems; political systems.
- Information: a human process that attributes meaning to data in a particular cognitive, spatial and temporal context.

Critical/Pluralistic:
- Methodological orientation: enlighten/empower stakeholders, especially those disadvantaged.
- Research intention: reform the social order; systems thinking should concentrate on the issue of inequality among the participants.
- Ontology: interpretivism. Epistemology: anti-positivism.
- System: refers to the totality of elements: ethical, political, ideological and metaphysical. 'Analysis of the problem situation is designed to reveal who is disadvantaged by current systemic arrangements'; 'quantitative analysis may be useful, especially to capture particular biases in existing systemic arrangements'.
- Organisation: a coercive context in which the social and organisational world is oppressive and unequal.
- Management: dedicated to ensuring fairness; systems are to be designed by supporting disadvantaged stakeholders, encouraging them to make a full contribution to systems design.
- Metaphors: cultural system; psychic prison; instruments of domination.

Constructivist/2nd Order Cybernetics:
- Methodological orientation: the focus of the analysis is to observe the ability of the organisational system to handle complexity.
- Research intention: holistic, systemic.
- Ontology: nominalist. Epistemology: anti-positivism.
- System: a distinction in language of a set of elements that conserve relations and that is accepted in a particular domain of reality and explication.
- Organisation: a community constituted as a system of co-ordinations in language, that is, a network of conversations under certain emotions; if that emotion is mutual acceptance, the community constitutes a human social system.
- Management: seen as the development of relations of acceptance in a human social system that allow the system to realise itself continually.
- Information: resonance in language that allows the interlocutor's emotions/moods to develop from doubt or ignorance to satisfaction or a sense of action.

Figure 1. Methodological orientation, research intention and some management concepts of IS discourses, adapted from Jackson (2003) and Olave-Caceres and Gomez-Florez (2001)


1. Hard/Normative methods (*)
Methodologies: classical database-oriented methods.
Keywords used in survey: (not surveyed).

2. Interpretative methodologies
Methodologies: Soft Systems Methodology, SSM (Checkland); Interactive Planning, IP (Ackoff); Strategic Assumption Surfacing and Testing, SAST (Mason and Mitroff); Strategic Choice Approach, SCA (Friend); Social Systems Design, SSD (Churchman); Cognitive Mapping, SODA, JOURNEY (Eden and Ackermann).
Keywords used in survey: Soft Systems Methodology; Interactive Planning; Strategic Assumption Surfacing and Testing; Strategic Choice Approach; Social Systems Design; Cognitive Mapping, SODA, JOURNEY; Problem Structuring Methods.

3. Critical/pluralistic methodologies
Methodologies: Critical Systems Heuristics; Total Systems Intervention; Critical Systems Thinking; Critical Pluralism; Multi-methodology.
Keywords used in survey: Critical Systems Heuristics; Total Systems Intervention; Critical Systems Thinking; Critical Pluralism; Multi-methodology.

4. Complex systems methodologies
Methodologies: Organisational Cybernetics - Viable Systems Model; Team Syntegrity (Beer); Complexity Theory.
Keywords used in survey: Organisational Cybernetics - Viable Systems Model; Team Syntegrity; Complex Adaptive Systems (CAS); Social Network Analysis (SNA).

Table 2. Main set of systemic methodologies and keywords used in querying the abstracts and titles of articles

(*) Because the normative paradigm is widely used in information systems research (Schultze and Leidner, 2002), this paradigm is not surveyed in this paper.

The list considered journals published in English in the UK and the USA. We started by surveying the journals directly using the set of keywords: the titles and abstracts of articles published in the six selected journals (Table 1) were queried for the occurrence of the keywords above. All the journals' websites were searched on 26 February 2010. The ten-year period (1999 to 2009) was considered a reasonable span of time over which to assess the exposure of systemic methodologies in IS articles. The survey considered only IS papers that have been catalogued by the journals as articles or academic papers; hence book reviews, editorials, letters, etc. were not included, because these documents are not generally cited by authors in the field.
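To make the querying step concrete, the sketch below shows how a keyword list such as that in Table 2 can be matched against article titles and abstracts and tallied by journal and paradigm; the three records are invented placeholders, since the actual survey was run through the journals' own search facilities.

```python
# Abridged keyword sets for the three surveyed paradigms (see Table 2).
KEYWORDS = {
    "interpretive": ["soft systems methodology", "interactive planning",
                     "cognitive mapping", "problem structuring methods"],
    "critical": ["critical systems", "multi-methodology", "critical pluralism"],
    "constructivist": ["viable systems model", "complexity theory",
                       "complex adaptive systems", "social network analysis"],
}

# Invented placeholder records standing in for journal search results.
articles = [
    {"journal": "EJIS", "text": "Using Soft Systems Methodology in IS planning"},
    {"journal": "JIT",  "text": "Complexity theory and organisational change"},
    {"journal": "ISJ",  "text": "A critical systems view of e-government"},
]

# Tally one hit per (journal, paradigm) whenever any keyword appears.
counts = {}
for art in articles:
    text = art["text"].lower()
    for paradigm, words in KEYWORDS.items():
        if any(w in text for w in words):
            key = (art["journal"], paradigm)
            counts[key] = counts.get(key, 0) + 1

for (journal, paradigm), n in sorted(counts.items()):
    print(f"{journal:5s} {paradigm:15s} {n}")
```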


4. Discussion of results

In Table 3, the main results of the search are presented, including the number of information systems articles classified by journal, by IS paradigm and by the particular methodologies used. As seen in this table, there were 51, 48 and 46 articles reporting the use of (or making some reference to) methodologies in information systems adhering to the interpretive, critical and constructivist approaches respectively. The journals with the greatest number of such articles were the Journal of Information Technology (JIT) (50 articles), followed by the European Journal of Information Systems (EJIS) (47 articles). Information Systems Research (ISR) and the Journal of Strategic Information Systems (JSIS) reported fewer articles (13 and 3 respectively). Interestingly, the Information Systems Journal (ISJ) has published work using interpretative methodologies (largely Checkland's SSM), but these articles appeared before 1999 and were not counted in this survey. Although the results are unclear, some patterns seem to emerge. In general, there are signs of the interpretive methodologies being used reasonably well (51 articles); of these, 27 (more than half) were published in EJIS. Both the critical and the constructivist paradigms were used in a similar number of articles (48 and 46 respectively), with JIT and EJIS hosting most of the articles using these paradigms. In terms of the specific methodologies used within these paradigms, a general critical approach was prominent (14 articles in EJIS), and within the constructivist camp complexity theory seems to be used reasonably often (17 articles in JIT). The other interesting pattern emerging from the journals surveyed is that there are some general trends regarding the paradigm they favour, which seem to be in line with the journals' aims and scope as expressed on their websites. It can be seen, for example, that the positivistic orientation of a journal such as ISR (a top, 4*-ranked journal in the ABS list) has limited the appearance of articles using paradigms other than the normative. On the other hand, if we take a journal such as EJIS (a 3* journal in the ABS list), we can see that its wider orientation has given space to non-positivistic papers.

5. Conclusions

In this paper, we have borrowed a framework used in the study of management science to assess developments in information systems. Positioning the development of management science methodologies in the context of information systems, the paper has: (1) mapped a set of established systemic methodologies onto four information systems discourses/paradigms: normative, interpretivist, critical and constructivist; and (2) surveyed a sample of six well established information systems journals to give an initial picture of which methodologies adhering to the latter three (non-normative) discourses were used during the period between 1999 and 2009. Although the results of the initial survey presented here have significant limitations (we have surveyed only 6 journals amongst dozens, and we have only identified and counted the number of articles), they nevertheless seem to indicate that information systems research is moving away from the normative/positivistic paradigm and its associated hard-oriented methodologies. There is a total of 145 articles using interpretative, critical and constructivist associated methodologies, which is a healthy sign that information systems is exploring non-normative approaches. Although the analysis presented here has contributed towards understanding the current trends in information systems discourses, further research is necessary, studying both a larger set of IS journals and a sample of exemplar papers representing these discourses, in order to ascertain fully a change of paradigm orientation in information systems.


Management Science/Information Systems discourse | ISR (13) | JIT (50) | ISJ (20) | JSIS (3) | EJIS (45) | IJIM (14)

Interpretivist
Soft Systems Methodology, SSM | 3 | 4 | - | 3 | 25 | 2
Interactive Planning, IP | - | 2 | 1 | - | 2 | -
Social Systems Design | - | 1 | - | - | - | -
Cognitive Mapping, SODA, JOURNEY | - | 8 | - | - | - | -
Total of articles published (51) | 3 | 15 | 1 | 3 | 27 | 2

Critical
Critical systems | 4 | 7 | 10 | - | 14 | 6
Multi-methodology/pluralism | 1 | 1 | 1 | - | 4 | -
Total of articles published (48) | 5 | 8 | 11 | - | 18 | 6

Constructivist/2nd Order Cybernetics
Constructivist approach | 1 | 7 | 1 | - | - | -
Organisational Cybernetics; Viable Systems Model | - | 3 | - | - | - | 2
Complexity Theory | 4 | 17 | 7 | - | - | 4
Total of articles published (46) | 5 | 27 | 8 | - | - | 6

(Column headings give the journal abbreviation and, in brackets, its total number of articles identified.)

Table 3. Articles in information systems journals published applying Interpretive, Critical and Constructivist approaches (1999-2009)

 


6. References

1. Association of Business Schools (ABS) (2009). Academic Journal Quality Guide, edited by Aidan Kelly, Huw Morris, Michael Rowlinson and Charles Harvey (version 3), March 2009 [as revised 21.4.2009]. http://www.the-abs.org.uk/files//abs_web_subject.pdf; subject area listing (INFO MAN), pages 15-16; retrieved on 7th May, 2009.

2. Ackoff, R. (1981). Creating the Corporate Future. Wiley, New York.

3. Ackoff, R. (1993). The Art and Science of Mess Management. In Mabey, C. and Mayon-White, B., Managing Change, pp. 47-54. Paul Chapman Publishing, London.

4. Checkland, P. B. (1981, 1999). Systems Thinking, Systems Practice. Wiley.

5. Beer, S. (1966). Decision and Control: The Meaning of Operational Research and Management Cybernetics. John Wiley & Sons, Chichester.

6. Beer, S. (1979). The Heart of the Enterprise. Wiley, Chichester.

7. Beer, S. (1981). Brain of the Firm. Wiley, Chichester.

8. Beer, S. (1994). Beyond Dispute: The Invention of Team Syntegrity. John Wiley & Sons, Chichester.

9. Eden, C., Jones, S. and Sims, D. (1983). Messing About in Problems. Pergamon, Oxford.

10. Flood, R. and Jackson, M. (1991). Creative Problem Solving: Total Systems Intervention. Wiley.

11. Friend, J. K. and Hickling, A. (1997). Planning Under Pressure: The Strategic Choice Approach, 2nd ed. Butterworth-Heinemann, Oxford.

12. Friend, J. (2006). Labels, methodologies and strategic decision support. Journal of the Operational Research Society, 57, 772-775.

13. Goles, T. and Hirschheim, R. (2000). The paradigm is dead, the paradigm is dead… long live the paradigm: the legacy of Burrell and Morgan. Omega, 28(3), 249-268.

14. Jackson, M. (1982). The nature of soft systems thinking: the work of Churchman, Ackoff and Checkland. Journal of Applied Systems Analysis, 9:17.

15. Jackson, M. C. (1991). Systems Methodology for the Management Sciences. Plenum Press, New York.

16. Jackson, M. C. (1999). Towards coherent pluralism in management science. Journal of the Operational Research Society, 50, 12-22.

17. Jackson, M. C. (2003). Systems Thinking: Holism for Managers. Wiley, Chichester.

18. Klein, H. K. and Hirschheim, R. (1987). A comparative framework of data modelling paradigms and approaches. Computer Journal, 30(1), pp. 8-15.

19. Lewis, P. (1994). Information-Systems Development. Pitman.

20. Maturana, H. and Varela, F. (1980). Autopoiesis and Cognition: The Realization of the Living. Reidel, Dordrecht.

21. Maturana, H. and Varela, F. (1988). The Tree of Knowledge: The Biological Roots of Human Understanding. Shambhala, Boston.

22. Mingers, J. (1984). Subjectivism and soft systems methodology - a critique. Journal of Applied Systems Analysis, 11:85.

23. Mingers, J. (1997a). Multi-paradigm multimethodology. In Mingers and Gill (eds.), Multimethodology. Wiley.

24. Mingers, J. (1997b). Towards critical pluralism. In Mingers and Gill (eds.), Multimethodology. Wiley.

25. Mingers, J. and Brocklesby, J. (1996). Multimethodology: towards a framework for critical pluralism. Systemist, 18(3), pp. 101-132.

26. Morgan, G. (1997). Images of Organisations, second edition. Sage, Beverly Hills.

27. Olave-Caceres, Y. A. and Gomez-Florez, L. C. (2001). Una Reflexión Sistémica sobre los Fundamentos Conceptuales para Sistemas de Información. Grupo de Investigación en Sistemas y Tecnología de Información, Universidad Industrial de Santander, Colombia.

28. Pask, G. (1961). An Approach to Cybernetics. Harper & Brothers Publishers, New York.


29. Paucar-Caceres, A. (2010). Mapping the changes in Management Science: a review of 'soft' OR/MS articles published in Omega (1973-2008). Omega, 38, 46-56.

30. Paucar-Caceres, A. and Pagano, R. (2009). Systems thinking and the use of systemic methodologies in knowledge management. Systems Research and Behavioural Science, 26(3), May/June 2009, pp. 343-355. Wiley.

31. Simon, H. A. (1947). Administrative Behaviour. Macmillan, New York.

32. Schultze, U. and Leidner, D. (2002). Studying knowledge management in information systems research: discourses and theoretical assumptions. MIS Quarterly, 26(3), pp. 213-242.

33. Ulrich, W. (1994). Critical Heuristics of Social Planning: A New Approach to Practical Philosophy. Second ed. John Wiley & Sons, New York and London.

34. Ulrich, W. (1987). Critical heuristics of social systems design. European Journal of Operational Research, 31: 276-83.

35. Wood-Harper, A. T. and Fitzgerald, G. (1982). A taxonomy of current approaches to systems analysis. The Computer Journal, 25(1), pp. 12-16.


Wastewater treatment system selection using the analytical hierarchy process

A Perez and M Mena32

Civil Engineering Department, Hydraulic Systems, University of Chile, Chile.

A Oddershede33 Industrial Engineering Department

University of Santiago of Chile, Chile

Abstract

This paper presents a framework for evaluating alternative technologies to assist the decision-making process of selecting a domestic wastewater treatment system. A case study is explored involving the achievement of multiple objectives for a small city in Chile. The problem has been addressed through a multi-criteria decision model built according to the principles of the Analytical Hierarchy Process (AHP). The results indicate that, due to the isolated location of the city studied, a simple wastewater treatment system that is easy to operate, has low maintenance and operation costs and generates little sludge is required. The AHP methodology proved to be a useful tool for structuring and managing the decision problem, identifying the issues that directly affect the selection of alternatives. The study provides a basis for setting priorities and making decisions on wastewater treatment for any locality. Keywords: wastewater treatment technology; AHP; multi-objective; decision support.

1. Introduction

Nowadays, wastewater treatment technology (WTT) is a central concern as a contribution to environmental protection and rural development. Several efforts have been made in water pollution control to prevent human waste from reaching drinking water supplies (EPA-US, 2008). There currently exists a variety of WTT systems that are innovative and keep pace with technological advances; the EPA-US (2008) document offers a list of innovative and emerging WTTs with different operational aspects, performance and features. However, it has become a problem for township decision makers (DMs) to find the best feasible technology at low cost while satisfying national regulations. Sperling (1996) presents a comparison among the systems most frequently used in developing countries, and Metcalf & Eddy (2003) give guidance for treatment and reuse. In this context it is essential to identify a WTT system appropriate to the region's requirements. Thus, this study examines, through a multi-criteria approach, the proper WTT system for a new city, Chaitén, located in an isolated zone of Chile. The purpose of the investigation is to develop a new decision-making model based upon expert judgments for identifying high-priority dimensions concerning the WTT. The results provide a starting point for analysing those WTT system parameters considered most relevant for performing the service. The proposal is analysed from a technology feasibility perspective using the AHP methodology (Saaty, T.L., 1997).

32 E-mail: [email protected] and [email protected]
33 E-mail: [email protected]


From a technology feasibility perspective, the available WTTs offer different characteristics that are not always in conformity with the region's needs. AHP orders and structures the problem under study, formulates a hierarchy and follows a comparison procedure to analyse and prescribe a course of action, helping DMs to find the alternative that best meets the identified needs. AHP has been proposed in the recent literature as an emerging approach to large, dynamic and complex real-world multiple-criteria decision-making problems (Saaty, T.L., 1998). To achieve the objective, the characteristics of the area where the new city will be built were studied, as well as the needs and most important aspects of the old town of Chaitén. The method is used to state criteria and rank the main factors. Different alternatives for wastewater treatment were analysed, considering their particularities, operational aspects, advantages and disadvantages. Seven feasible WTTs are compared using the resulting AHP decision model, which includes the most important aspects. The results indicate the three most appropriate technologies for this new city project, which are then analysed from a cost perspective. Section 2 of this paper describes the problem faced by the DM. Section 3 introduces the WTT systems. Section 4 presents a simplified AHP decision model based upon human experts' knowledge and experience. The results, given in Section 5, generate information that is not currently available. Finally, in Section 6, the conclusions of the study are provided.

2. System description

In May 2008, Chaitén, a Chilean town located in the Los Lagos Region, was devastated by a volcanic eruption. The government decided to relocate the town to Santa Barbara, a small bay 10 km to the north, where a new city will be constructed. The idea is to transform this disaster into an opportunity to build a model town: sustainable, safe and designed to meet and improve on all the requirements of the community. This new city project includes setting up a suitable WTT, where the chosen technology should be reliable and in conformity with the concept of sustainability, which plays an essential role. Thus, this study involves the characterisation of the territory, the description and analysis of the available technology systems, and the identification of the main factors, in order to propose a methodology for selecting a wastewater treatment system for the new city.

Area characteristics. Santa Barbara is located at latitude 42º 51' S and longitude 72º 47' W. The new city will be installed in a publicly owned area, located at an average height of 17 m above sea level, and covers an area of 90 hectares. The town is basically a small bay, protected on the east by the Chaitén ridge. The area has little flat land, which is considered a restriction. The area has a mild, rainy maritime climate, with greater rainfall during the winter season (May to July). Average annual rainfall is greater than 4,000 mm and in winter months exceeds 500 mm/month; the average temperature does not exceed 10 °C, and the prevailing winds come from the west. Due to its geographical characteristics, the town of Chaitén is very strongly isolated. The existing road is the main Southern Highway (Route CH-7), which connects, from north to south, Santa Barbara, Caleta Gonzalo, Chaitén, Puerto Cárdenas, Villa Santa Lucia and the southern boundary with Region XI of Aysén. None of these routes is paved, except for some sections near the town of Chaitén. Currently, it is estimated that 48% of the families displaced from Chaitén intend to relocate to Santa Barbara, equivalent to 2,900 inhabitants. In the future, once the new city is completed, this percentage is expected to increase. This population would add to the population of the town by 2010, which is estimated at about 400 people. These characteristics indicate that, due to the isolation of the studied location and its population size, a simple wastewater treatment system that is easy to operate, has low maintenance and operation costs and generates little sludge is required.


Because the main pollutants of domestic sewage are organic matter, solids and nutrients, the alternative treatment technologies presented here are mainly processes aimed at removing the organic matter and suspended solids present in the water; physicochemical processes and tertiary treatment for the removal of inorganic compounds, metals, specific elements, etc., are therefore not included in this analysis. In addition, this study aims to select the treatment that best meets the needs of the town of Santa Barbara while complying with current Chilean regulations regarding the discharge of sewage effluent and sludge disposal or reuse.

3. Wastewater Treatment Systems

The pre-selection of alternatives is performed on the basis of the requirements described above and the objectives established for the treatment in general, selecting the processes that have proven their ability to meet them. With these considerations, the choices are:

- Worm Filter: a process suitable for small populations (fewer than 5,000 inhabitants), with simple operation and low operating costs.
- Trickling Filter: a relatively simple process, applicable to small communities and of proven efficiency.
- Rotating Biological Contactor: an easy-to-operate system with low sludge production and low energy requirements.
- Extended Aeration: selected for being reliable and suitable for small populations, with low production of already stabilised sludge, in addition to being the usual preference, mainly in the southern regions of the country.
- Chemically Enhanced Primary Treatment: chosen for its easy operation and low operating costs.
- Integrated Fixed-Film Activated Sludge: a treatment system with capacity for low and high loads, low sludge production and good operation in cold climates.
- Marine Outfall: chosen because Santa Barbara is located on the waterfront. Moreover, this system is simple to operate and maintain, and requires only limited prior removal of contaminants.

4. Analytic Hierarchy Process Application

A model with a five-level hierarchical structure was developed for the general problem of selecting a technology for treating wastewater. The first level is the desired global objective, in this case: 'Select a system of wastewater treatment for Santa Barbara'. The next level contains the benchmark criteria, which are the main aspects and interests taken into account when selecting a technology. The third and fourth levels contain the sub-criteria that influence each of the criteria above them. The final level holds the alternatives, in this case the different technologies for treating wastewater. The hierarchical structure is thus built from the most important aspects associated with the overall objective in question, which allows the problem to be decomposed and analysed according to each of them. This is very interesting, because in this way it is possible to perform a sensitivity analysis to see how much, and in what way, changing the preferences for one or more aspects affects the final outcome. The following comparative criteria were selected based on the engineers' experience and the recommendations of von Sperling's investigation (Sperling, 1996):

- Environmental Risks: refers to the possible proliferation of odors, vectors and noise of the different treatment options.

- Stability of the process: the stability of the organic matter removal process may be affected by changes in flow, quality and toxic loads in the influent to the treatment plant. This criterion compares the performance of the technologies in the face of these changes.


Climate: The different treatment processes work differently according to the climate in which they are. This criterion assesses how they behave technologies in cold rainy weather, cold, temperate and warm.

Requirements: This criterion refers to the land requirements and energy of the different alternatives, categorized into high or low need.

Sludge Generation: refers to the high or low production of sludge, from either primary or secondary treatment, for each alternative.

Sludge Treatment: This criterion evaluates the need for sludge treatment.

Population: some technologies are more suitable for low flows and others for high flows, which can be translated into the population size of the locality in question. This criterion compares the performance of the different processes for small, medium and large towns.

BOD removal required: the required BOD removal varies with the final disposal of the treated effluent, so it will differ depending on whether the discharge is to rivers, rivers with dilution capacity, lakes or the sea. This difference is expressed as a removal percentage, assessing in which range each alternative lies.

Required removal of nutrients: as with BOD removal, this criterion depends on the final disposal of the treated effluent, so the required nutrient removal will differ depending on whether the discharge is to rivers, rivers with dilution capacity, lakes or the sea.

Proximity and accessibility: refers to the dependence of each technology on proximity to supply centres.

The hierarchy was built using these criteria and decomposing them into sub-criteria, as shown in Figure 1.
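As an illustration of the underlying computation, the sketch below shows how criterion weights of this kind could be derived from a pairwise comparison matrix using the principal-eigenvector method and checked with Saaty's consistency ratio; the 3x3 matrix is an illustrative assumption, not data from this study.

```python
import numpy as np

def ahp_priorities(A):
    """Derive priority weights from a reciprocal pairwise comparison matrix A
    via the principal eigenvector, and report Saaty's consistency ratio."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)                 # principal eigenvalue
    w = np.abs(eigvecs[:, k].real)
    w = w / w.sum()                             # normalised priority vector
    lambda_max = eigvals[k].real
    ci = (lambda_max - n) / (n - 1)             # consistency index
    # Saaty's random indices for n = 1..9
    ri = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45}
    cr = ci / ri[n] if ri[n] > 0 else 0.0       # consistency ratio
    return w, cr

# Hypothetical comparison of three criteria (e.g. Population vs Climate vs Costs)
A = [[1, 2, 3],
     [1/2, 1, 2],
     [1/3, 1/2, 1]]
weights, cr = ahp_priorities(A)
print(weights, cr)   # weights sum to 1; CR <= 0.1 is usually deemed acceptable
```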

5. Results

The most important criteria for the analysis are those that obtained the highest values in the allocation of priorities; the other criteria have low priority because the experts judged them to be less important for this case. This does not mean that the chosen system should not meet these criteria, but that they have less emphasis and less influence in the comparative analysis. In this case, as shown in Figure 2, the experts assigned the greatest importance to the Population criterion, with 13.9% of the total weight, followed by Climate (13.6%), Sludge Generation (10.6%), Sludge Treatment (10.6%), Proximity and Accessibility (10.4%) and required BOD removal (10.3%). Treatment options that better meet these requirements gain weight, showing a better fit to the stated goal than the others. The criterion preferences shown above depend entirely on the personal opinion of the experts questioned, who answered based on their own experience and the characteristics of the locality. The local results, i.e. the results showing how the alternatives behave under each comparison criterion, indicate that the marine outfall is preferred in most aspects: "Environmental Risks", "Stability of the Process", "Climate", "Requirements", "Sludge Generation" and "Sludge Treatment". For the criterion "Population" the preferred system is extended aeration; for the criteria "BOD removal" and "removal of nutrients" the preference is IFAS; and for the criterion "Proximity and Accessibility" the worm filter is preferred. As global results, Figure 3 shows that the best alternative for the treatment of the wastewater of Santa Barbara, according to the expert opinions, proved to be the marine outfall, as it is the option that received the highest share of the global synthesis (20.4%). This is because this system better fits the majority of the criteria. All the other alternatives obtained similar preferences, with chemically enhanced primary treatment (CEPT) in last place with 12.4% preference.


Figure 1. The Hierarchic Structure

Figure 2. Criteria Comparison


Figure 3. Global Result

To compare the costs of the three best alternatives, the equivalent annual cost (EAC) of each process was calculated, considering the investment and the operation and maintenance costs over an evaluation period of 20 years at a 10% discount rate, as shown in Figures 4 and 5.
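A minimal sketch of how such an equivalent annual cost could be computed is given below, assuming the EAC is the investment annualised with the capital recovery factor plus the yearly operation and maintenance cost; the cost figures are placeholders, not values from the study.

```python
def equivalent_annual_cost(investment, annual_om, rate=0.10, years=20):
    """Equivalent annual cost: investment annualised with the capital
    recovery factor, plus the yearly operation and maintenance cost."""
    crf = rate * (1 + rate) ** years / ((1 + rate) ** years - 1)
    return investment * crf + annual_om

# Hypothetical figures for two alternatives (currency units are arbitrary)
print(equivalent_annual_cost(investment=1_000_000, annual_om=50_000))
print(equivalent_annual_cost(investment=600_000, annual_om=120_000))
```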

Figure 4. Investment Costs of Alternatives

Figure 5. Operation and Maintenance Costs of Alternatives


The result shown in Figure 6 indicates that, although the marine outfall has the higher relative investment cost, the equivalent annual cost of this system is lower than that of the extended aeration alternative. The worm filters, although in third place in technical preference, rank first in costs, with an important difference with respect to the other two systems, which adds an additional attraction to this option.

Figure 6. Equivalent Annual Cost of the Alternatives

6. Conclusions

The AHP proved to be a useful tool to structure and organize the decision problem, identifying the issues that directly affect the selection of wastewater treatment technology (WTT) alternatives. The model proposed here can be refined to obtain a more detailed analysis of what is required, by modifying the criteria and/or adding or removing options. The model represents a basis for those who must set priorities and make decisions to select a WTT in Santa Barbara or in any other location. Given its own characteristics and the isolation of the area, Santa Barbara requires a WTT system that is simple, easy to operate, with low maintenance and operating costs and low sludge generation. The outfalls were found to be an attractive alternative solution, as they provide a safe and efficient technology; however, from a cost perspective they are not the most advantageous option. The decision will then depend on how much one is willing to sacrifice on costs.

7. References

1. Barañao, P., Tapia L. (2004). “Tratamiento de las Aguas Servidas: Situación en Chile”. Revista Ciencia & Trabajo, Número 13, pag 111-117.

2. Carrasco, C. (2007). “Tratamiento Físico Químico de Aguas Residuales”. Memoria para optar al título de Ingeniero Civil, Facultad de Ciencias Físicas y Matemáticas. Universidad de Chile.

3. EPA-US. (2008). “Emerging Technologies for Wastewater Treatment and In-Plant Wet Weather Management”. EPA 832-R-06-006, Fairfax, Virginia.

4. Germain, E., Bancroft, L., Dawson, A., Hinrichs, C., Fricker, L., & Pearce, P. (2007). “Evaluation of Hybrid Processes for nitrification by comparing MBBR/AS and IFAS configurations”. Water Science & Technology, 55 (8-9), 43-49.

5. Iwai, S., Oshino, Y., & Tsukada, T. (1990). “Design and Operation of Small Wastewater Treatment Plants by the Microbial Film Process”. Wat. Sci. Tech., 22 (3/4), 139-144.

6. Metcalf & Eddy, Inc. (2003). “Wastewater engineering, Treatment and Reuse “(Fourth ed.). NewYork: Mc Graw Hill.

7. Oddershede, A., Arias, A., & Cancino, H. (2007). “Rural development decision support using the Analytic Hierarchy Process”. Journal of Mathematical and Computer Modeling, 46 (7-8), 1107-1114.


8. Saaty, T.L. (1997). “Toma de Decisiones para Líderes: El proceso analítico jerárquico la toma de decisiones en un mundo complejo”. (M.Escudey, E.Martinez, & L.Vargas, Trans.) Pittsburgh: RWS Publications.

9. Saaty, T.L. (1998). “Fundamentals of Decision Making & Priority Theory”. RWS Publications.

10. Santibañez, J. (2002). “Estudio de Plantas Piloto de Aguas Servidas a base de Tecnologías No Convencionales”. Memoria para optar al título de Ingeniero Civil, Facultad de Ciencias Físicas y Matemáticas. Universidad de Chile.

11. Sperling, v. M. (1996). “Comparison among the most frequently used systems for wastewater treatment in developing countries”. Water Science and Technology, 33 (3), 59-62.

12. WEF/ASCE. (1998).”Design of Municipal Wastewater Treatment Plants” (Fourth edition.Vol.1). USA.


Estimating preferences from pairwise comparisons using multi-objective optimization

S. Siraj34

J.A. Keane School of Computer Science, The University of Manchester, UK

L. Mikhailov Manchester Business School, The University of Manchester, UK.

Abstract

Human errors, biases and uncertainties often pollute decisions in the process of selecting the best alternative from available options. The preference structure in widely used decision-making techniques involves pairwise comparisons. The problem of identifying the most suitable prioritization method remains open. Most prioritization methods for pairwise comparisons are based on single-objective optimization. No method exists that minimizes deviations from both direct and indirect judgments whilst estimating the priority vector. This paper proposes an approach to simultaneously minimize deviations from direct and indirect judgments. It also analyzes how intransitivity affects the estimated priority vectors. The results, calculated from thousands of random pairwise comparison matrices using Monte-Carlo simulation, confirm a relationship between the cyclic judgments and the average minimum number of violations. This leads us to propose minimizing the number of violations along with the other two objectives. A prototype application has been developed to generate all non-dominated solutions using a multi-objective genetic algorithm. The new approach has been applied to several examples and the generated priority vectors are compared with existing methods. The new approach is shown to offer multiple Pareto-optimal solutions giving the user more flexibility than all other tested methods.
Keywords: Pairwise Comparisons, Decision Support Systems, Optimization, Simulation.

1. Introduction

The pairwise comparison (PwC) method is often used when a decision agent (DA) is unable to directly assign the criteria weights or the scores of available options. Once a PwC is acquired from the DA, the main challenge is the presence of inconsistency among the judgments constituting the PwC. The prioritization problem is to calculate preferences from the acquired PwC (whether consistent or inconsistent). There are several methods to estimate preferences from PwC. Different methods perform differently and no method outperforms other methods in all situations. For each pair of elements compared in PwC, an indirect (latent) judgment can also be captured with the help of other judgments present in the PwC. These indirect judgments are only different in the presence of inconsistency. Considering optimization-based prioritization, no method exists that formulates indirect judgments explicitly, and minimizes deviations from both direct and indirect judgments. We propose to minimize these deviations using a multi-objective evolutionary approach. This newly proposed method offers an interactive selection from a set of non-dominated solutions, according to the DA's requirements.

                                                            34 Correspondence: S. Siraj, Room 2.58 Kilburn Building, The University of Manchester, M13 9PL, UK. E-mail: [email protected]


2. Problem definition

This section formulates the prioritization problem in multiple-criteria decision-making, based on pairwise comparisons. It defines ordinal and cardinal consistency, based on the reciprocal and transitive properties of pairwise ratio judgments. Ordinal inconsistency represents a circular triad of preferences (Kendall and Smith 1940), or a three-way cycle. Saaty (1977) developed a measure of cardinal consistency, the Consistency Ratio (CR), based on the properties of positive reciprocal matrices. If the value of CR is smaller than or equal to 0.1 (or 10%), the inconsistency is considered acceptable. The prioritization problem is to determine a priority weight vector from the given judgments in the PwC. The weights for the decision elements are calculated from the PwC using several mathematical techniques. Most of these methods are based on optimization. A summary of prioritization methods is given in (Choo and Wedley, 2004), where 18 different methods are analyzed and compared. There remains no consensus on the evaluation criteria to be used for comparing these methods. The most widely used criteria are the total deviation, TD, the mean absolute deviation, MAD, and the number of priority violations, NV (Golany and Kress, 1993; Mikhailov and Singh, 1999; Srdjevic, 2005). Mikhailov (2006) introduced a new approach that directly uses the two criteria, TD and NV, as two objectives to be minimized simultaneously.
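To make these evaluation criteria concrete, the sketch below shows one plausible way to compute the total deviation TD and the number of priority violations NV for a priority vector against a pairwise comparison matrix; the exact formulas vary across the cited studies, so this is an illustrative reading rather than the definitive one.

```python
import numpy as np

def total_deviation(A, w):
    """TD: sum of squared differences between judgments a_ij and the ratios w_i/w_j."""
    A = np.asarray(A, dtype=float)
    ratios = np.outer(w, 1.0 / np.asarray(w, dtype=float))
    return float(np.sum((A - ratios) ** 2))

def priority_violations(A, w):
    """NV: number of pairs where the judgment a_ij and the estimated weights disagree in order."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    nv = 0
    for i in range(n):
        for j in range(i + 1, n):
            if (A[i, j] > 1 and w[i] < w[j]) or (A[i, j] < 1 and w[i] > w[j]):
                nv += 1
    return nv

# Illustrative 3x3 reciprocal matrix and a candidate priority vector
A = [[1, 2, 4], [1/2, 1, 3], [1/4, 1/3, 1]]
w = [0.56, 0.32, 0.12]
print(total_deviation(A, w), priority_violations(A, w))
```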

3. Revealing the indirect judgments

This section discusses the concept of indirect judgments and its mathematical formulation. If the DA’s judgments are cardinally inconsistent, then we obtain at least one indirect judgment incongruent with the one provided by the DA. We formulate the total number of indirect judgments possible from the complete set of direct judgments in the PwC. The concept of indirect judgments has been analyzed for widely used methods. No method exists that simultaneously minimizes deviations from both direct and indirect judgments. In order to estimate preferences, it is sensible to consider both the acquired judgments and those latent in the DA's mind. We propose here a technique to minimize the deviation from both types of judgments.
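As a concrete illustration, a second-order indirect judgment between elements i and j can be obtained through an intermediate element k as a_ik * a_kj; the sketch below enumerates these indirect judgments and measures how far they are from a candidate priority vector. It is a sketch of the idea under our own assumptions, not the authors' exact formulation.

```python
import numpy as np

def indirect_judgments(A):
    """Second-order indirect judgments a_ik * a_kj for every pair (i, j), i != j,
    and every intermediate element k distinct from i and j."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    out = {}
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            out[(i, j)] = [A[i, k] * A[k, j] for k in range(n) if k not in (i, j)]
    return out

def deviation_from_indirect(A, w):
    """Sum of squared differences between the indirect judgments and the ratios w_i/w_j."""
    w = np.asarray(w, dtype=float)
    total = 0.0
    for (i, j), vals in indirect_judgments(A).items():
        total += sum((v - w[i] / w[j]) ** 2 for v in vals)
    return total

# For a fully consistent matrix all indirect judgments coincide with the direct ones
A = [[1, 2, 4], [1/2, 1, 3], [1/4, 1/3, 1]]
print(deviation_from_indirect(A, [0.56, 0.32, 0.12]))
```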

4. Transitivity and violations

This section justifies the consideration of the criterion of priority violations in our approach. In order to justify the need to minimize NV, we investigate the effects of transitivity on priority violations. Different rankings are produced when the comparisons are intransitive. Hartvigsen (2005) shows the possibility of obtaining different rankings, even in the absence of intransitive judgments. To analyze the effects of transitivity on the estimated priority vectors, Monte-Carlo simulations have been used. This technique has been used to analyze PwCs (Herman and Koczkodaj, 1996; Koczkodaj, 1998; Ishizaka and Lusti, 2006). An enumeration algorithm has been implemented to calculate the number of three-way cycles, L, by testing all possible combinations of any three judgments. All the optimization methods were implemented using a genetic algorithm in Java. The results show that the greater the number of cycles present, the more the violations while estimating preferences. The experiment also shows that the existing methods generate a higher number of violations than the minimum possible value. The idea of minimizing NV has already been proposed in (Mikhailov, 2006); here we extend this approach to minimize three objectives, as discussed in the next section.
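A simple version of such an enumeration is sketched below: every triple of elements is tested for a circular preference pattern in the ordinal reading of the matrix. This is a minimal illustration of the idea, not the prototype implementation described in the paper.

```python
from itertools import combinations
import numpy as np

def count_three_way_cycles(A):
    """Count circular triads: triples (i, j, k) with i preferred to j,
    j preferred to k, and k preferred to i (or the reverse cycle)."""
    A = np.asarray(A, dtype=float)
    pref = A > 1  # ordinal reading: i is preferred to j when a_ij > 1
    cycles = 0
    for i, j, k in combinations(range(A.shape[0]), 3):
        if (pref[i, j] and pref[j, k] and pref[k, i]) or \
           (pref[j, i] and pref[k, j] and pref[i, k]):
            cycles += 1
    return cycles

# An intransitive example: 1 beats 2, 2 beats 3, but 3 beats 1
A = [[1, 2, 1/2], [1/2, 1, 2], [2, 1/2, 1]]
print(count_three_way_cycles(A))  # -> 1
```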

5. Prioritization using indirect judgments (PrInT)

In order to estimate preferences, we propose to estimate a consistent PwC that minimizes the deviations from indirect judgments along with the direct ones, simultaneously.


The objective function for indirect judgments is formulated in this section. The results of the Monte-Carlo experiment suggested that priority violations, NV, should be included as the third objective to minimize along with the direct and indirect deviations. It is shown that the importance of indirect judgments may vary according to the types of judgmental error, and the level of uncertainty in the DA's mind. Two approaches are proposed to assign variable importance to direct and indirect judgments. The first one is the pre-prioritization approach: to aggregate all objectives into one function by assigning different weights to individual ones. The second technique, post-prioritization, requires the generation of all possible Pareto-optimal solutions using multi-objective optimization. The post-prioritization approach gives greater flexibility to DAs; however, it also demands more computation time given the generation of all non-dominated solutions.

6. Illustrative examples

This section investigates the benefits of the proposed technique and compares it to existing methods. To illustrate the use of the proposed technique involving indirect judgments, we consider various examples of PwCs, starting from fully consistent, through slightly inconsistent, to ones with intransitive judgments. The weights are first estimated for all the PwCs using existing methods. The Pareto-optimal front is then generated using PrInT with the post-prioritization approach. To compare the weights generated by the different methods, we calculate TD, NV and the deviation from second-order indirect judgments (TD2). The results for these examples show that the new approach offers a wide range of non-dominated solutions, providing flexibility to select one according to the requirements.

7. Conclusion

The problem of obtaining preferences from PwC has been explored by considering indirect judgments. The effect of intransitive judgments on the priority violations has been investigated. The results, calculated from thousands of random PwCs using Monte-Carlo simulation, confirm a relationship between the cyclic judgments and the average minimum number of violations. A new method based on multi-objective optimization has been proposed that minimizes deviation from both direct and indirect judgments, along with priority violations. The approach offers multiple solutions giving the user increased flexibility compared with all other tested methods. This approach outperforms all other methods for intransitive PwCs, giving minimum violations and remaining as close as possible to the direct and indirect judgments.

8. References

1. Choo, E., Wedley, W., 2004. A common framework for deriving preference values from pairwise comparison matrices. Comput. Oper. Res. 31 (6), 893–908.

2. Golany, B., Kress, M., 1993. A multicriteria evaluation of methods for obtaining weights from ratio-scale matrices. European Journal of Operational Research 69 (2), 210–220.

3. Hartvigsen, D., 2005. Representing the strengths and directions of pairwise comparisons. European Journal of Operational Research 163 (2), 357–369.

4. Herman, M., Koczkodaj, W., 1996. A monte carlo study of pairwise comparison. Information Processing Letters 57 (1), 25–29.

5. Ishizaka, A., Lusti, M., 2006. How to derive priorities in AHP: a comparative study. Central European Journal of Operations Research 14 (4), 387–400.

6. Kendall, M., Smith, B., 1940. On the method of paired comparisons. Biometrika 31 (3/4), 324–345.

7. Koczkodaj, W., Jun. 1998. Testing the accuracy enhancement of pairwise comparisons by a monte carlo experiment. Journal of Statistical Planning and Inference 69 (1), 21–31.


8. Mikhailov, L., 2006. Multiobjective prioritisation in the analytic hierarchy process using evolutionary computing. In: Applications of Soft Computing: Recent Trends. Vol. 36. Springer, pp. 321–330.

9. Mikhailov, L., Singh, M., 1999. Comparison analysis of methods for deriving priorities in the analytic hierarchy process. In: Systems, Man, and Cybernetics, IEEE International Conference on. Vol. 1. pp. 1037–1042.

10. Srdjevic, B., Jul. 2005. Combining different prioritization methods in the analytic hierarchy process synthesis. Computers & Operations Research 32 (7), 1897–1919.


A method for providing sufficient strict individual rankings’ consistency level while group decision-making with feedback

Vitaliy Tsyganok35

National Academy of Sciences of Ukraine, Institute for Information Recording,

Kyiv, Ukraine.

Abstract

A way of determining a sufficient consistency level of strict individual rankings is suggested. It is assumed that the absence of cycles in the final preference relationship, built according to the Condorcet method based on individual ordinal pair comparison matrices, can be considered an adequate indicator of sufficient consistency. A method is developed for achieving sufficient estimate consistency, targeted at minimizing the number of times the experts in the group are addressed.
Keywords: ordinal expert estimation, sufficient rankings' consistency, feedback with the expert

1. Introduction

Ordinal (rank) expert evaluation is used for decision-making in many weakly structured areas of human activity (business, state government, scientific planning, etc.). Ordinal expert estimates are extremely useful when experts, while building pairwise comparisons, find it difficult to estimate the degree of dominance of one alternative over another. In such cases, instead of forcing experts to provide cardinal estimates, we can invite them to express ordinal preferences between alternatives, i.e. to build alternative rankings. Presumably, experts possess sufficient information to differentiate the relative importance of any pair of alternatives, so individual expert rankings will be strict. The problem of adequate and “fair” aggregation of individual rankings remains topical in the context of group expert evaluation of alternatives. A sufficient consistency level of the individual rankings is the decisive criterion enabling the decision-maker to aggregate them into a generalized group ranking, which influences the final choice of decision variants. It is shown in (Tsyganok and Kadenko, 2010) that the concordance and rank correlation coefficients introduced by (Kendall, 1962) do not depend monotonically on the minimal number of permutations needed to bring a ranking to an a priori given group ranking, so they cannot be used as measures of sufficient consistency of individual rankings; a set of individual rankings is considered consistent enough for aggregation if the aggregate preference relation built from it using the (Condorcet, 1785) method is transitive, i.e. represents a rank order relation. The aggregate relation is built as a dominance matrix (see the example below). If this matrix is intransitive, aggregation of the individual rankings is considered inadmissible, and the experts should be offered the chance to change their rankings so that the aggregate relation becomes transitive. Obtaining a strict aggregate ranking can be an additional (but non-compulsory) condition when organizing feedback with experts and aggregating individual estimates. This requirement can arise from the specificity of a given problem or from the fact that the given individual rankings are also strict.

                                                            35 Email: [email protected]


We should stress that any expert must be free to agree or disagree with suggestions concerning changes in his/her ranking.

2. Problem Statement

Given: a set of n alternatives evaluated by m experts; strict expert alternative rankings; and an aggregate relation (in the general case, intransitive) given as a matrix built by aggregating the individual rankings with the (Condorcet, 1785) method. We need to find a way of organizing feedback with the experts in order to transform the aggregate relation into a transitive one, i.e. to define successively which experts should be addressed and which alternatives they should swap in their individual rankings. Minimizing the number of times the experts are addressed is chosen as the feedback quality criterion.

3. Solution Idea

Since the number of questions posed to the experts regarding changes of their initial opinions depends on their answers (consent or refusal), it would be impossible to minimize this number using traditional approaches. So, besides the overall number of questions posed to the expert group, we suggest taking the probability of positive answers into consideration, since during feedback negative answers (refusals to change previously given opinions) lead to a further search for other ways of transforming the aggregate relation into a transitive one and result in new suggestions concerning changes of the initial individual rankings. Let us take the following presumptions, characterizing the expert’s logic, as guidance: 1) An expert more willingly changes previous estimates which, to his mind, are less important. For example, an expert will more likely swap the 5th and 6th alternatives than the 1st and 2nd ones in his/her ranking (provided alternatives are ranked in order of decreasing importance). 2) The probability of getting a negative answer from an expert grows with the number of suggestions made. So, the more changes an expert is offered to make, the smaller the chance that (s)he will agree with all of them. Thus, an expert will more probably agree to swap the 3rd alternative with the 4th than with the 6th, because in the second case the “former” 4th and 5th alternatives will also have to change places. Using these points, we form a feedback organization strategy that minimizes the number of suggestions made to the experts while transforming a set of individual expert rankings that does not allow building a strict aggregate ranking into one that does. Thus, we suggest organizing a search among all strict expert ranking variants. The search aims to find the set of rankings closest to the given one in terms of the quantity and quality of changes (and, consequently, to maximize the probability of the experts' agreement to such changes) that will form a strict rank order relation after aggregation. We suggest using a genetic algorithm (GA) (Holland, 1994), which allows acceptable solutions of multi-variable extremum search problems to be found. To solve the problem using a GA, a specific utility function must be formed and solution variants from the admissible area must be coded. The suggested algorithm searches for a solution variant corresponding to the minimum of the utility function. In this case a solution variant represents a set of strict expert rankings whose aggregation also yields a strict ranking. Moreover, it is possible to transform the initial expert rankings' set into this solution variant using the minimal number of change suggestions to the experts.

initial expert rankings’ set into this solution variant, using the minimal number of change suggestions to the experts. We suggest building the utility function, based on the aforementioned presumptions about expert’s logic and equivalence of permutations in the rankings. Utility function F can be represented as a sum of components uF :

.1

m

uuFF (1)

Each component corresponds to an expert ranking, and looks as follows:

,)1)(12( 2dhndFu (2)

where n is a quantity of alternatives in the ranking, h is the smaller of two alternative ranks, involved in the permutations, d is the “distance” between swapped alternatives.

,21 rrd

21,rr are the ranks of swapped alternatives in the initial expert ranking. Equation (2) considers the number of so called elementary permutations of alternatives, i.e., swaps of neighbouring alternatives ),1( d necessary for introducing given changes into expert ranking. The equation also includes the weight of every elementary permutation, depending on alternatives’ remoteness from the ranking’s end (alternative with rank n ). So, permutation of alternatives which are less important to the experts, has smaller weight, and, consequently, is more likely to be suggested to the expert as a necessary change. Besides, equation (2) has one more property: the weight of swapping non-neighbouring alternatives equals the sum of respective consequent elementary permutation weights. Let’s consider the issue of coding GA solution variants (called “individuals”). In the suggested GA variant, an individual represents a set of strict expert alternative rankings, whose aggregation results in a strict ranking. It is suggested to transform the initial expert rankings set into this set. It is possible to calculate a utility function value for each individual. Let’s establish a unique correspondence between individuals ( n alternatives’ ranking variants) and respective natural values. The power of such set equals the number of combinations of n elements: !.nPn
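The sketch below illustrates this weighting scheme as reconstructed in formula (2): the weight of a suggested swap equals the sum of the weights (n - k) of the elementary (neighbouring) swaps it requires, so changes near the end of the ranking are cheap and changes near the top are expensive.

```python
def swap_weight(n, r1, r2):
    """Weight of suggesting to swap alternatives ranked r1 and r2 in a ranking
    of n alternatives: (2(n - h) - d + 1) d / 2, with h = min(r1, r2), d = |r1 - r2|."""
    h, d = min(r1, r2), abs(r1 - r2)
    return (2 * (n - h) - d + 1) * d / 2

def utility(n, suggestions_per_expert):
    """F: sum over the experts of the weights of all swaps suggested to each expert.
    suggestions_per_expert is a list (one entry per expert) of (r1, r2) pairs."""
    return sum(swap_weight(n, r1, r2)
               for swaps in suggestions_per_expert
               for r1, r2 in swaps)

# In a ranking of 6 alternatives, swapping the items ranked 5 and 6 costs less
# than swapping the items ranked 1 and 2, reflecting presumption 1).
print(swap_weight(6, 5, 6), swap_weight(6, 1, 2))  # -> 1.0  5.0
```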

Thus the feedback organization algorithm is a search (choice) among all possible ranking variants, conducted according to these conditions: 1) the rankings in the set must be strict; 2) aggregation of these rankings must result in a strict ranking; 3) transforming the initial expert ranking set into the target set must require minimal trade-offs from the experts and should aim to maximize the probability of the experts' consent to introducing changes into their previously built rankings. After that, the experts are offered the chance to change their initial rankings into the required ones by swapping the respective alternatives. The algorithm ends when the experts agree to all suggested ranking changes. Presumably, the experts can be addressed simultaneously. In the process of addressing the experts, the information about their answers is stored. If some expert refuses to sanction a suggested alternative pair inversion, further questioning is terminated and a new target ranking set is sought; during further questioning the previous answer history is taken into consideration.


So, if an expert refused to swap a certain pair of alternatives, he is not asked the same question again. If an expert has already agreed to swap a suggested pair of alternatives, this swap is not suggested to him again and its weight is not considered when forming the further selection criterion (utility function). Besides, if an expert agrees to swap a pair of non-neighbouring alternatives, he presumably agrees to all the respective elementary permutations of neighbouring alternative pairs. Thus the algorithm is designed to find a series of alternative pair permutations that achieves sufficient consistency of the alternative ranking set. If, due to multiple refusals of the experts to change their opinions, no acceptable ranking set is found, we conclude that the expert group is unable to achieve a consistent aggregate opinion and suggest conducting a new expertise.

4. Step-by-step algorithm: An example

Say, a group of 4 experts ranked 4 alternatives. The individual rankings are R1 = (3, 4, 2, 1), R2 = (1, 3, 2, 4), R3 = (1, 2, 3, 4), R4 = (3, 4, 1, 2). This means that the first expert assigned rank "3" to the first alternative, rank "4" to the 2nd, etc. The second expert assigned rank "1" to the first alternative, etc. Steps 1-3 of the algorithm check whether the expert rankings' consistency level is sufficient.

Step 1. Define the domination matrices based on the expert rankings. These matrices are built as follows: if alternative A1 dominates over A2 (its rank is smaller, r1 < r2), then the respective matrix element d12 = 1, otherwise d12 = -1. Since the matrices are reciprocal (d_ij = -d_ji), we confine ourselves to representing only the elements above the principal diagonal:

D1: d12 = 1, d13 = -1, d14 = -1; d23 = -1, d24 = -1; d34 = -1.
D2: d12 = 1, d13 = 1, d14 = 1; d23 = -1, d24 = 1; d34 = 1.
D3: d12 = 1, d13 = 1, d14 = 1; d23 = 1, d24 = 1; d34 = 1.
D4: d12 = 1, d13 = -1, d14 = -1; d23 = -1, d24 = -1; d34 = 1.

Step 2. Build the aggregate matrix D based on the matrices obtained in the previous step, using Condorcet's method. The method envisions summing the initial matrices D_i, i = 1, ..., m, followed by taking the sign of the respective element sums:

D = sign(D_1 + D_2 + ... + D_m).

We should note that the aggregate matrix can include non-diagonal zero elements; these may appear if the number of experts in the group is even. Thus, for this example:

D: d12 = 1, d13 = 0, d14 = 0; d23 = -1, d24 = 0; d34 = 1.
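The construction in Steps 1 and 2 can be expressed compactly as below: each strict ranking is turned into a ±1 dominance matrix, the matrices are summed, and the element-wise sign gives the aggregate relation; this is a minimal sketch of the procedure described above.

```python
import numpy as np

def dominance_matrix(ranking):
    """+1 if alternative i has a smaller rank than j (i dominates j), -1 otherwise, 0 on the diagonal."""
    r = np.asarray(ranking)
    n = len(r)
    D = np.zeros((n, n), dtype=int)
    for i in range(n):
        for j in range(n):
            if i != j:
                D[i, j] = 1 if r[i] < r[j] else -1
    return D

def condorcet_aggregate(rankings):
    """Element-wise sign of the sum of the individual dominance matrices."""
    return np.sign(sum(dominance_matrix(r) for r in rankings))

# The four rankings from the example (rank of each alternative, per expert)
rankings = [(3, 4, 2, 1), (1, 3, 2, 4), (1, 2, 3, 4), (3, 4, 1, 2)]
D = condorcet_aggregate(rankings)
print(D)  # off-diagonal zeros signal that the aggregate relation is not a strict order
```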


Step 3. Check whether relation D includes non-diagonal zero elements and, if there are none, check whether it is transitive. The existence of at least one non-diagonal zero element indicates that the relation is not a strict order. Numerous sources (Kendall and Smith, 1940; Litvak, 1982; Iida, 2009) show that a transitivity violation can be detected by the presence of 3-cycles (circular triads) in the relation. Three arbitrary alternatives (A1, A2, A3) form a cycle if the following relation is fulfilled: A1 ≻ A2 ≻ A3 ≻ A1, where "≻" denotes dominance. The aggregate relation is transitive if and only if it has no cycles in it. Cycles in a relation's matrix can be easily detected algorithmically, or by checking whether the number of cycles equals zero:

Cl = C(n, 3) - Σ_{i=1..n} C(S_i, 2) = 0,

where S_i = Σ_{j=1..n} d_ij are the row sums of matrix D, and d_ij = -d_ji.
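A small sketch of this check is given below. It counts, for each alternative, the number of alternatives it dominates and plugs these counts into the circular-triad formula; this win-count reading of the row sums is an assumption on our part, chosen so that the expression reduces to the classical Kendall-Smith formula.

```python
from math import comb
import numpy as np

def is_strict_order(D):
    """True if the aggregate relation D (+1/-1 off-diagonal, 0 diagonal) is a strict order:
    no off-diagonal zeros and no circular triads."""
    D = np.asarray(D)
    n = D.shape[0]
    off_diag = ~np.eye(n, dtype=bool)
    if np.any(D[off_diag] == 0):
        return False
    wins = (D == 1).sum(axis=1)              # S_i read as the number of dominated alternatives
    cycles = comb(n, 3) - sum(comb(int(s), 2) for s in wins)
    return cycles == 0

# A transitive relation on 3 alternatives (1 beats 2 and 3, 2 beats 3)
D = np.array([[0, 1, 1], [-1, 0, 1], [-1, -1, 0]])
print(is_strict_order(D))  # -> True
```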

So, we conclude that D does not satisfy the posed requirements (it includes non-diagonal zero elements), and proceed to the next step to organize feedback with the experts.

Step 4. Using the GA, we conduct a targeted enumeration of expert ranking variants, which are coded as individuals in the population as described above. An individual represents a set of m numbers from 1 to n!; each value corresponds to a strict ranking variant for n alternatives. The initial population is generated randomly and new individuals "are born", "mutate", "cross over" and "die off", while only the fittest ones survive. The fitness of each individual is inversely proportional to the value calculated according to (1). The GA search ends when, after a given number of iterations (generations), the fittest individual stabilizes; it is selected as the search result. Before calculating the value according to (1), each individual is checked for a sufficient ranking consistency level by performing steps 1-3. Thus, using the GA, we find an individual (12, 3, 1, 9) (each value is the number of the respective ranking vector), corresponding to the set of rankings R1* = (2, 4, 3, 1), R2* = (1, 3, 2, 4), R3* = (1, 2, 3, 4), R4* = (2, 3, 1, 4), where the utility function F reaches its minimum.

Step 5. Successively offer the experts to change the previously built rankings R into R* by introducing one or more alternative swaps. In case the experts accept all suggestions, the algorithm ends and, based on the rankings R* (obtained from R through feedback), which are sufficiently consistent, the aggregate group ranking can be built.

Thus, based on the comparison of rankings R1 and R1*, a suggestion for expert #1 is formulated: "do you agree to swap alternatives #1 and #3 (ranked "3" and "2")?" Say, the expert agrees. Then, after making sure that after one permutation the rankings R1 and R1* are already identical, we proceed to analyzing the rankings of expert #2. Since R2 = R2* and R3 = R3*, we proceed to analyzing the rankings built by expert #4. Based on R4 and R4*, a suggestion for the 4th expert is formulated: "do you agree to swap alternatives #1 and #4 (ranked "3" and "2")?" Say, expert #4 refuses, so we search for a new expert ranking set (step 4), taking the obtained experts' answers into consideration. The result of step 4 is an individual (12, 3, 1, 15), representing the rankings' set R1* = (2, 4, 3, 1), R2* = (1, 3, 2, 4), R3* = (1, 2, 3, 4), R4* = (3, 2, 1, 4), where F reaches its minimum (the summand corresponding to R1 and R1* is not considered while calculating F, because the question of swapping alternatives #1 and #3 is unnecessary: the expert has already answered positively). At step 5, while analyzing the rankings of expert #1, the question of swapping alternatives #1 and #3 is not posed, because the expert has already given a positive answer.


While analyzing the rankings of expert #4, based on R4 and R4*, a new suggestion for the expert is formulated: "do you agree to swap alternatives #2 and #4 (ranked "4" and "2")?" In case the expert agrees to swap the alternatives, R4 = R4*, and at this point the feedback algorithm ends. As a result, the set of expert rankings R is transformed into the set R*, which is sufficiently consistent for aggregating them into a group ranking.

5. Conclusions

An algorithm for organizing feedback with experts while building a group ranking has been suggested. The problem solved is extremely topical in the areas of expert examinations and the development of decision-making support systems, so the suggested algorithm can become an important element of their mathematical software. Experimental research on the algorithm testifies to its high efficiency. The key direction of future research lies in extrapolating the approach to the cases of differing expert competence and non-strict alternative rankings.

6. References

1. Marquis de Condorcet (1785). Essai sur l'application de l'analyse á la probabilité des décisions rendues á la pluralité des voix: http://gallica.bnf.fr/ark:/12148/bpt6k417181 accessed 31 May 2010.

2. Holland, J.H. (1994). Adaptation in natural and artificial systems. An introductory analysis with application to biology, control, and artificial intelligence. Bradford book edition: London.

3. Iida, Y. (2009). The number of circular triads in a pairwise comparison matrix and a consistency test in AHP. J Ops Res Soc of Japan 52: 174-185.

4. Kendall, M.G. and Smith, B.B. (1940). On the method of pairwise comparisons. Biometrika 31: 324-345.

5. Kendall, M.G. (1962). Rank Correlation Methods. Charles Griffin and Co.: London.

6. Litvak, B.G. (1982). Expert Information: Methods for Obtaining and Analysis (in Russian). Radio i svjaz: Moscow.

7. Tsyganok, V.V. and Kadenko, S.V. (2010). On Sufficient Consistency Level of Group Ordinal Estimates (in Russian). Problemy Upravlenija I Informatiki 4: 107-112.


Multicriteria analysis of policy options scenarios to reduce the aviation climate impact - an application of the PROMETHEE

based D-SIGHT software

Annalia Bernardini1, Quantin Hayez2, Cathy Macharis1, Yves De Smet2

1Department MOSI-T, Vrije Universiteit Brussel, Pleinlaan 2 - 1050 Brussels, Belgium

2CoDE-SMG, Engineering Faculty, Université Libre de Bruxelles, Boulevard du Triomphe 210/10, 1050 Brussels, Belgium

Abstract

The Aviation and the Belgian Climate Policy Project analyses the different climate policy options aimed at reducing the climate change impact of the aviation sector. In order to compare the effectiveness of the different alternative policy scenarios, the PROMETHEE GAIA Multi-criteria Analysis (MCA) method was chosen. By applying an MCA, it was possible to some extent to delineate an appropriate platform for future compromises. The PROMETHEE based D-Sight software application has proven to be a valuable tool for a complex decision problem in sustainable transport policy.
Keywords: Aviation, Climate Change, Multi-criteria Decision Aid, PROMETHEE, D-Sight, Software, Scenarios, Criteria Hierarchy

1. Introduction

Since the publication of the Intergovernmental Panel on Climate Change special report on aviation (IPCC, 1999), the international scientific community has become aware of the significance of the impacts of emissions from the aviation sector on climate change. So far, the Kyoto Protocol commitment does not include emissions from international aviation. To limit global warming to about 2°C above pre-industrial temperatures, it is necessary to take the emissions from all sectors into account and strongly reduce the total (Marbaix et al., 2008). In addition to CO2, NOx emissions, condensation trails and cirrus cloud formation contribute to this impact. Different policy options may thus be considered for the inclusion and integration of international air transport into climate policy. In this context, the Aviation and the Belgian Climate Policy (ABC) Project provides an analysis and assessment of various policy options from a technical, an environmental and an economic point of view. Belgium is situated in the so-called FLAP zone (Frankfurt-London-Amsterdam-Paris). This zone includes the four busiest airports in Europe in terms of both passengers and movements. Furthermore, the surrounding area includes other airports close to Belgium (Eindhoven, Düsseldorf, Köln, Luxembourg, Lille, Maastricht ...). In order to assess the effectiveness of the identified policy options it was decided to use a multi-criteria analysis (MCA). Furthermore, this analysis also makes it possible to consider the viewpoints of different groups of stakeholders (political agents, airlines, transportation users).
__________________
1Corresponding author Email: [email protected]


The PROMETHEE GAIA multi-criteria analysis method was chosen to carry out the assessment of the identified policy options to reduce the aviation climate impact. The multi-disciplinary ABC project research consortium provided the qualitative scores of the performance matrix (considering the performance value of each alternative in relation to the identified criteria). The policies were grouped into: “Technology R&D/Investments”, “Operational efficiencies/Infrastructures” and “Market based measures”. They were evaluated in relation to ten criteria, regrouped into several appropriate categories: “Climate performance”, “Feasibility”, “Large scale implementation” and “Socio-economic impact”. This allowed the best compromise alternatives to be selected with the PROMETHEE method (Brans and Mareschal, 1994). For several alternatives there was common agreement within the ABC consortium to set two different scores (high and low), making it possible to establish the optimistic and pessimistic scenarios used for this purpose. In fact the scenario method aims at a better understanding of the problem perspectives concerning the studied system, taking into account a set of assumptions. However, scenarios are not forecasts, but a structured look at possible future developments to generate knowledge on uncertain futures (Godet, 1987).

2. PROMETHEE analysis with-D-Sight

D-Sight is a recent software package that implements the PROMETHEE GAIA methods. It offers the possibility to analyse the research consortium's assessment matrix. The PROMETHEE method requires the decision-maker to identify the different alternatives and objectives of his problem. The objectives are expressed in the form of criteria associated with preference functions chosen by the decision-maker. Weights are also specified for each criterion and represent the relative importance that the decision-maker allocates to them. In practice the pair-wise comparison procedure has proven to be very useful for this purpose (Macharis et al., 2003). In the present application, the weights were obtained through an interactive discussion with the ABC consortium. These weights were then introduced in both the optimistic and pessimistic D-Sight scenarios. The weight elicitation step is of course crucial. Nevertheless, the D-Sight software offers a tool called “Walking Weights” that allows a sensitivity analysis to be performed. On the basis of this tool, the analyst may investigate whether some weights have to be elicited with a high degree of precision and/or whether the ranking results change significantly when the weights are modified. For practical reasons, a qualitative scale was used to express the experts’ evaluations. A qualitative scale consists of a list of semantic values and a list of corresponding numerical values.

Semantic values Numerical values

Bad 1

Bad to Neutral 2

Neutral 3

Neutral to Good 4

Good 5

Good to Very good 6

Very good 7

Very good to Excellent 8

Excellent 9
Source: Own setup, 2010

Table 1. Qualitative scale in D-Sight


Each semantic value is associated with its corresponding numerical value, which is actually used for the computations. For this assessment a 1 to 9 point scale (Table 1), going from bad to excellent effectiveness, was applied (e.g. how effective is the policy option ‘alternative fuels’ in contributing to a reduction of ‘CO2 emissions’). For several relations with dual effectiveness properties it was commonly agreed by the ABC consortium to set two different scores (high and low), making it possible to establish the two optimistic and pessimistic scenarios used for this purpose. For each criterion, a specific preference function has to be defined to compute the degree of preference associated with the best alternative in the pair-wise comparison process (cf. Brans and Mareschal, 1994). For this assessment the preference function ‘usual shape’ (basic type, without any threshold) was applied.
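To make the subsequent ranking step concrete, the sketch below computes PROMETHEE net preference flows for a small evaluation table using the ‘usual’ preference function (preference 1 whenever one alternative strictly beats another on a criterion, 0 otherwise); the evaluation scores are illustrative placeholders, not the project's data.

```python
import numpy as np

def net_flows(scores, weights):
    """PROMETHEE II net preference flows with the 'usual' preference function.
    scores: (alternatives x criteria) evaluation table; weights sum to 1."""
    scores = np.asarray(scores, dtype=float)
    weights = np.asarray(weights, dtype=float)
    m = scores.shape[0]
    phi = np.zeros(m)
    for a in range(m):
        for b in range(m):
            if a == b:
                continue
            # aggregated preference of a over b minus preference of b over a
            pref_ab = weights @ (scores[a] > scores[b]).astype(float)
            pref_ba = weights @ (scores[b] > scores[a]).astype(float)
            phi[a] += (pref_ab - pref_ba) / (m - 1)
    return phi

# Three hypothetical alternatives evaluated on the four criterion categories
scores = [[7, 5, 4, 6],
          [6, 6, 5, 5],
          [4, 7, 6, 4]]
weights = [0.525, 0.137, 0.148, 0.19]   # category weights from Table 4
print(net_flows(scores, weights))       # higher net flow = better compromise
```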

3. Application outcome

For each group of possible policy options (“Technology R&D/Investments”, “Operational efficiencies/Infrastructures” and “Market based measures”) a group of sub policy options has been defined (Table 2).  

Alternative groups Alternatives Shortnames

Technology R&D/Investments

Energy efficiency a1

Alternative fuels a2

Operational efficiencies/Infrastructures

Air Traffic Management a3

Operational improvements a4

Market based measures

Carbon tax (CO2) a5

Emission Trading (CO2) a6

Extended Emission Trading a7

EU-Emission Trading a8
Source: Own setup, 2010

Table 2. Set of alternatives for the D-Sight assessment used within this study

Those policy options (alternatives) were assessed for each identified criterion (Table 3).

Criteria groups Criteria

Climate performance CO2

Aircraft Induced Cloudiness (AIC)

Total Climate Impact

Feasibility Technical

Economic

Large scale implementation Acceptance

Swift Implementation

Socio-economic impact Aviation Sector

Other Transports

Others
Source: Own setup, 2010

Table 3. Set of criteria for the D-Sight assessment used within this study


These criteria are gathered in four different categories with their respective weights that give their importance (Table 4). These weights were based on the pair-wise comparisons worked out by the ABC consortium.

Criterion Categories Shortnames Weight
Climate performance CP 52,5%
Feasibility F 13,7%
Large scale implementation LSI 14,8%
Socio-economic impact SEI 19,0%
Source: Own setup, 2010

Table 4. Criterion categories weights

For the sake of simplicity, all the criteria in each category have been equally weighted. We actually considered two different scenarios, an “Optimistic” one and a “Pessimistic” one. All that information allowed us to construct a hierarchy tree, in which the “-O” expression stands for Optimistic and the “-P” stands for Pessimistic (Figure 1).

Source: Own setup with D-Sight, 2010

Figure 1. Hierarchy tree with the “-O” Optimistic and the “-P” Pessimistic scenario


The PROMETHEE II ranking based on the net preference flow (Figure 2) showed that two alternatives, a6 and a7, can be clearly identified as the best ones (with respective scores of 0,32 and 0,36).

Source: Own setup with D-Sight, 2010

Figure 2. PROMETHEE II ranking based on the net preference flow of the analysed alternatives (actions) in D-Sight

If we want a still global but more descriptive view of the problem, we can analyze the GAIA plane for the two scenarios (Figure 3). In this plane, the alternatives are represented by points and the scenarios by axes. The axis with no label is the decision stick, which is obtained by using the weights given by the decision-maker. It indicates the direction of the best alternatives. In this case it represents the compromise between the two scenarios, which are equally weighted. This explains the central position of the decision stick between the “Pessimistic” axis and the “Optimistic” axis. The a6 and a7 alternatives both dominate all the other alternatives with regard to the aggregated optimistic score and the aggregated pessimistic score. This means that whatever the compromise (i.e. the weights) between the two scenarios may be, a6 and a7 will always be the most preferred alternatives. As a consequence, this reinforces our previous conclusions.


Source: Own setup with D-Sight, 2010

Figure 3. PROMETHEE GAIA plane regarding the two scenarios within D-Sight

For each alternative, optimistic and pessimistic scores were associated (Table 5).

Optimistic Pessimistic
Energy efficiency a1 -0,306 -0,183
Alternative fuels a2 -0,381 -0,706
Air Traffic Management a3 0,225 0,217
Operational improvements a4 -0,32 0,018
Carbon tax (CO2) a5 0,152 0,021
Emission Trading (CO2) a6 0,408 0,231
Extended Emission Trading a7 0,387 0,336
EU-Emission Trading a8 -0,166 0,066
Source: Own setup, 2010

Table 5. Optimistic and Pessimistic scores of the alternatives


Now that two alternatives have been identified as the best ones, it is interesting to further investigate these two by comparing their inner scores in a spider web chart (Figure 4).

Source: Own setup with D-Sight, 2010

Figure 4. D-Sight Spider Web Chart for comparing alternatives' profiles

The a6 alternative, which actually has a global score of 0,32, has a more “globally” good profile than the highest-scored a7. The latter has the advantage of being highly scored on the most important category, “Climate Performance”, in both the optimistic and the pessimistic scenario. This partially explains the large stability intervals for the scenarios. Those intervals, represented in Table 6, indicate the range in which each scenario's weight can be changed without affecting the first rank of a7. The Optimistic scenario can thus be weighted from 0% (the totally pessimistic case) up to 83% and “Extended Emission Trading” (a7) will remain the most preferred alternative under the current parameters.

Scenario Min Weight Current Weight Max Weight
Optimistic 0% 50% 83%
Pessimistic 17% 50% 100%
Source: Own setup, 2010

Table 6. Stability intervals for the scenarios
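A stability interval of this kind can be approximated by scanning the scenario weight and recording where the top-ranked alternative changes, as in the sketch below; the two score columns are taken from Table 5, and the uniform weight grid is an assumption of this illustration.

```python
import numpy as np

def top_alternative_by_weight(optimistic, pessimistic, names, steps=101):
    """Scan the weight given to the optimistic scenario from 0 to 1 and report
    which alternative is ranked first for each weight."""
    optimistic = np.asarray(optimistic)
    pessimistic = np.asarray(pessimistic)
    results = []
    for w in np.linspace(0.0, 1.0, steps):
        combined = w * optimistic + (1.0 - w) * pessimistic
        results.append((round(float(w), 2), names[int(np.argmax(combined))]))
    return results

names = ["a1", "a2", "a3", "a4", "a5", "a6", "a7", "a8"]
opt = [-0.306, -0.381, 0.225, -0.320, 0.152, 0.408, 0.387, -0.166]
pes = [-0.183, -0.706, 0.217, 0.018, 0.021, 0.231, 0.336, 0.066]
scan = top_alternative_by_weight(opt, pes, names)
# a7 stays first for low optimistic weights; a6 takes over near the upper end
print(sorted({name for _, name in scan}))
```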


4. Conclusions

This evaluation has been carried out in the context of the ABC Impacts project with the objective of informing political decision-makers about the environmental, political and socio-economic implications for Belgium of integrating (or not) the international aviation transport sector into climate policy. The MCA method was chosen by the project evaluators to assess the effectiveness of a set of policy options systematically, according to different scenarios. Most economic, industrial, financial or political decision problems are multi-criteria. Usually there is no optimal solution; no alternative is the best one on every criterion. Compromise solutions have to be considered (cf. Brans and Mareschal, 1994). Numerous evaluation methodologies can be employed for an assessment, but the majority of them entail a monetarisation of the different elements considered and do not allow the decision problem to be analysed in a subjective and qualitative way. The evaluation considered the pair-wise assessments resulting from the ABC consortium and also took into account the viewpoints of different groups of stakeholders (political agents, airline associations, etc.). The D-Sight software allowed the use of qualitative scales instead of the default numerical scale for the values taken by the criteria. The multi-disciplinary ABC consortium provided the qualitative scores of the performance matrix, making it possible to establish the two optimistic and pessimistic scenarios used for this purpose. The sensitivity analysis in D-Sight showed that, whatever the compromise between the two scenarios may be, the two dominating alternatives, a6 and a7, will always be the best. The ABC Impacts project is currently in its second phase. The presented ranking of the policy measures is still under discussion within the ABC consortium. For each policy measure, a description of the “pros and cons” from the project experts will justify the most satisfactory compromise. The results will be analysed in the second phase report in July 2010. By applying an MCA it is possible to some extent to delineate an appropriate platform for future compromises. By taking the different scenarios into account, we can evaluate the consequences of previously decided orientations and, with the aid of multi-criteria methods, deduce the priority strategic actions to be taken in order to exploit the expected changes, thus helping to devise the strategic plan (Godet, 1987). The D-Sight software application has proven to be a valuable tool for a complex decision problem in sustainable transport policy. It supports the decision-maker in making his final decision by pointing out, for each expert and stakeholder, which elements have a clearly positive or a clearly negative impact on the sustainability of the considered alternatives.

5. References

1. Brans, J. P., (1982). L’ingénierie de la décision. Elaboration d’instruments d’aide à la décision. Méthode PROMETHEE. In: Nadeau, R., Landry, M. (Eds.), L’aide à la décision: Nature, instruments et perspectives d’avenir, Presses de l’Université Laval, Quebec, Canada.

2. Brans, J.P. and B. Mareschal, (1994). The PROMCALC & GAIA decision support system for MCDA. Decision Support Systems, vol. 12, pp. 297-310.

3. Decision Lab, (1999). Decision Lab 2000–Getting Started Guide. Visual Decision Inc., Montreal, Canada.

4. Godet, M., (1987). Scenarios and strategic management. In: Butterworth Scientific.

5. IPCC (1999). Aviation and the Global Atmosphere, Special Report of IPCC Working Groups I and III in collaboration with the Scientific Assessment Panel to the Montreal Protocol on Substances that Deplete the Ozone Layer. Cambridge University Press, UK.

6. Macharis, C., Brans, J.P. and B. Mareschal, (1998). The GDSS Promethee procedure. Journal of Decision Systems, vol. 7, pp. 283-307.


7. Macharis, C.; Springael, J.; De Brucker, K. and A. Verbeke (2003). PROMETHEE and AHP: the design of operational synergies in multicriteria-analysis. Strengthening PROMETHEE with ideas of AHP. European Journal of Operational Research, Volume 153, Issue 2, 1 March 2003, Pages 307-317.

8. Macharis, C., Verbeke, A. and K. De Brucker, (2004). The strategic evaluation of new technologies through multi-criteria analysis: the advisors case. In: E. Bekiaris and Y. J. Nakanishi (Eds.), Economic Impacts of Intelligent Transportation Systems. Innovations and case studies, Elsevier, Amsterdam, pp. 439-460.

9. Marbaix, P., Ferrone, A., Matthews, B., (2008). Inclusion of non-CO2 effects of aviation in the ETS: a summary. <www.climate.be/abci> accessed 1 June 2010.

10. Meyer, S., Matheys, J., Van Lier, T., Marbaix, P., Ferrone, A., Matthews, B., (2008). Aviation and the Belgian climate policy: integration options and impacts. ABC-Impacts, Final Report – Phase I. Brussels: Belgian Science Policy 2008, (Research Programme Science for a Sustainable Development).


Analysis of the Brazilian demand for sugarcane, ethanol and sugar

Luciano Jorge Carvalho, Elizabeth G. Rojas, Rosemarie Bone, Eduardo Ribeiro

FIPSE (U.S. Fund for the Improvement of Post-secondary Education), University of Florida (UF), Universidade Federal do Rio de Janeiro (UFRJ), CADE

 

Abstract

Energy dependence on fossil fuels, changes in global climate and competition to supply the growing demand for renewable sources of energy have raised the importance of Brazilian ethanol in the international context. This study was performed in an effort to understand the logistics of Brazilian ethanol and to estimate the respective demand curves for sugarcane, sugar and ethanol. To achieve this objective, annual data from 1997 to 2008 are used to estimate the parameters of the econometric models, understanding that sugarcane behaves as the industry's feedstock, while its by-products behave as competing goods. The results correspond to the precepts of economic theory and serve to clarify the dynamics of the sugar-ethanol industry and the relationship between the quantities demanded and the variables used to explain them. Income elasticity plays a pivotal role in explaining the quantities demanded for all of the industry's goods, and the results indicate that the Brazilian consumer is income elastic with respect to the quantities demanded for such goods. Sugar is not considered an inferior good, as most studies suggest, because it is an essential element of the Brazilian basket of goods and does not represent a significant portion of the consumer's budget. It is also noted that the demand for ethanol does not react to its own price and is inelastic to the price of gasoline, although the two are blended together. For sugarcane, the results show that demand is inelastic to the industrial production of its by-products because of the delay attending an increase in demand.
Keywords: Brazil, Demand, Econometrics, Regression, Sugar-ethanol industry

1. Introduction

The emerging discussion regarding the use of renewable sources of energy has been placed on the agendas of most political leaders around the world. The strong dependence on fossil fuels, climate change and competition for the development of alternative sources of energy have put pressure on developed countries like the U.S., Canada and Sweden to include clean energy production models in their respective energy portfolios. Meanwhile, Brazil has taken a vanguard position with regard to these issues. The South American country has played a key role in the industry's international development through its production and commercialization model for ethanol, a biofuel produced from sugarcane. Brazil is a fierce competitor in the sugar and ethanol industries. In 2007, it was the world's largest producer of sugarcane and sugar, responsible for 33.1% of world sugarcane production (515.8 million tons) and 20% of world sugar production (33.2 million tons). Moreover, Brazil was the second largest ethanol producer and the number one exporter, supplying 38% (23 million cubic meters) of world anhydrous and hydrous ethanol production, behind the U.S. with 49.60%, and exported 3.5 billion liters of ethanol (half of total world exports) (MAPA, 2009).


In this context, the objective of the research is to estimate the demand curves for Brazilian sugarcane, sugar and ethanol for the period 1997 to 2008, utilizing an econometric analysis and understanding that sugarcane behaves as the industry’s feedstock, while its by-products behave as competing goods. Furthermore, it elucidates the relationship between the quantity demanded of these products and the variables used to explain them to understand the dynamics of the market and apply these results to future endeavors.

2. Economic model

The model specification establishes, in accordance with consumer choice theory, that the quantity demanded of sugarcane is a function of the level of industrial processing of products derived from sugarcane, the price of this input and the national income per capita.

QSC = f (IP, PSC, YPC) (1)

where:
IP = Industrial production of sugarcane by-products (average 2002 = 100). Source: Brazilian Institute of Geography and Statistics (IBGE) (http://www.ibge.gov.br/home/estatistica/indicadores/industria/pimpfagro_nov accessed 2 February 2010)
PSC = Average price received by the Brazilian producer of sugarcane (R$/ton). Source: Brazilian Institute for Applied Economic Research (IPEA) (http://ipeadata.gov.br/ accessed 2 February 2010)
YPC = Brazilian GDP per capita at 2008 price levels (thousands of R$). Source: IPEA (http://ipeadata.gov.br/ accessed 2 February 2010)

It is expected that the quantity demanded of sugarcane reacts positively to the industrial production of its by-products and to income per capita. The quantity demanded of sugarcane is negatively affected by increases in its price, due to a perfectly competitive market. As to the quantity demanded of sugar, the proposed model includes as explanatory variables the price of sugar and the Brazilian income per capita.

QS = f (PS, YPC) (2)

where:
QS = Quantity of sugar consumption (thousands of tons). Source: (MAPA, 2009).
PS = Average price of crystal sugar traded in São Paulo State (R$/50 kg). Source: Brazilian Center for Advanced Studies on Applied Economics (CEPEA) (http://www.cepea.esalq.usp.br/xls/SACUCAR.XLS accessed 2 February 2010)

It is expected that an increase in sugar prices will be associated with a reduction in the quantity demanded of sugar and that an increase in income will lead to increases in sugar consumption. The last model proposes that gasoline and ethanol prices, in addition to income, are the constraints on the quantity demanded of ethanol:

QALC = f (PALC, PGAS, YPC) (3)

where:
QALC = Final energy consumption of ethanol (thousands of m3). Source: (MME, 2009).
PALC = Average price of hydrated ethanol to the Brazilian consumer (R$/m3). Source: (ANP, 2009).
PGAS = Average price of unleaded gasoline to the Brazilian final consumer (R$/m3). Source: (ANP, 2009).


It is expected that the quantity demanded for ethanol is negatively correlated with ethanol and gasoline prices, because Brazilian fuel blends contain high levels of ethanol (currently 25% ethanol and 75% gasoline). Additionally, it is expected to react positively to increases in income. Note that prices and income per capita are deflated to constant 2008 prices using the Brazilian General Price Index for Internal Availability (IGP-DI), calculated by the Getúlio Vargas Foundation (FGV).

3. Econometric Model

The demand equations for sugarcane, sugar and ethanol are estimated using the log-log functional form, which directly provides the elasticities of demand with respect to the explanatory variables. The model parameters are estimated by the Ordinary Least Squares (OLS) method, and the existence of an autocorrelation pattern is evaluated with the Durbin-Watson statistic, which should have a value close to two. An autocorrelation pattern indicates that successive error terms are correlated, which affects the interpretation of the estimated parameters and leads to an underestimation of the level of statistical significance and hence to flawed tests. The econometric analysis is performed with the software Gretl. Annual data provide twelve observations covering the period between 1997 and 2008. The parameters β1 to β8 represent the partial slope coefficients; for example, β1 shows the response of the quantity demanded of sugarcane to a change of one unit in industrial production, keeping the other variables constant. These coefficients also represent the elasticities of the demands for the corresponding goods with respect to the explanatory variables.

ln QSC_t = α1 + β1 ln IP_t + β2 ln PSC_t + β3 ln YPC_t + ε_t (4)

ln QS_t = α2 + β4 ln PS_t + β5 ln YPC_t + ε_t (5)

ln QALC_t = α3 + β6 ln PALC_t + β7 ln PGAS_t + β8 ln YPC_t + ε_t (6)

In the above equations, the variables are transformed into logarithms to obtain the log-log functional form. The α1, α2 and α3 terms represent the intercepts of the curves with the vertical axis and, in particular, capture demand derived from variables outside the model.
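As an illustration of this estimation step (a sketch using Python's statsmodels rather than the Gretl workflow reported by the authors), the snippet below fits one log-log demand equation by OLS and reports the Durbin-Watson statistic; the data arrays are hypothetical placeholders, not the study's series.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.stattools import durbin_watson

# Hypothetical annual observations for 1997-2008 (placeholders, not the paper's data):
# quantity of sugarcane demanded, industrial production index and GDP per capita.
qsc = np.array([290, 300, 310, 320, 340, 355, 370, 390, 420, 450, 480, 510], float)
ip  = np.array([ 95,  97,  99, 100, 103, 106, 110, 115, 121, 128, 136, 145], float)
ypc = np.array([11.0, 11.2, 11.3, 11.5, 11.6, 11.9, 12.2, 12.6, 13.0, 13.5, 14.0, 14.6])

# Log-log specification: ln QSC_t = a + b1*ln IP_t + b2*ln YPC_t + e_t
X = sm.add_constant(np.column_stack([np.log(ip), np.log(ypc)]))
fit = sm.OLS(np.log(qsc), X).fit()

print(fit.params)                 # intercept and elasticity estimates
print(fit.rsquared)               # coefficient of determination
print(durbin_watson(fit.resid))   # should be close to 2 if there is no autocorrelation
```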

4. Results

This section presents the estimates of the demand equations of the sugar-energy sector using OLS.

Demand for sugarcane

The statistically significant variables explaining the demand for sugarcane are GDP per capita and the industrial production of the sugar-ethanol market. The regression results based on model (4) are presented below:

ln QSC_t = 11.8958 + 0.210060 ln IP_t + 2.65502 ln YPC_t,  R2 = 0.97456 (7)
se = (0.7524) (0.0902) (0.3007),  F(2,9) = 173.77
t = (24.06)*** (2.42)*** (9.35)***

Significant p-values are indicated with asterisks: a p-value below 1% is indicated with three asterisks, a p-value between 1% and 5% with two asterisks and a p-value between 5% and 10% with one asterisk.


The sugarcane price variable is not statistically significant (p-value = 0.57). The other independent variables (IP and YPC) are statistically significant. The non-existence of autocorrelation was confirmed through the Durbin-Watson statistic. The evidence is consistent with the predictions. Analysing the variables individually, a change of one percent in industrial production is associated with a less than proportional change of 0.21% in the same direction in the demand for sugarcane. Regarding national income per capita, an increase of 1% is followed by a more than proportional increase of 2.6% in the quantity demanded of sugarcane; thus, the income elasticity is greater than the elasticity with respect to industrial production. Finally, the coefficient of determination (R2) indicates that 97% of the variation in the quantity demanded of sugarcane is explained by variations in Brazilian income per capita and the industrial production of the sugar-ethanol market.

Demand for sugar

The results indicate that the only statistically significant variable affecting the demand for sugar is the income of the Brazilian consumer.

ln QS_t = 4.92913 + 1.60615 ln YPC_t,  R2 = 0.503405 (8)
se = (1.3233) (0.5045),  F(1,10) = 10.13713
t = (3.725)** (3.184)***

The coefficient of determination (R2) of 0.503 indicates that Brazilian GDP per capita explains more than half of the changes in sugar consumption. Furthermore, the coefficient of the sugar price variable is not statistically significant in explaining the demand for sugar in Brazil. Thus, an increase in income leads to increases in demand, but decreases in sugar prices do not affect the demand for sugar. It is recommended to adjust the model by adding exogenous variables such as the level of sugar exports and its price on the international market, which are suitable variables given sugar's character as a commodity in the international market. However, this model could not include such variables because the study was restricted to Brazil's domestic market.

Demand for ethanol

The coefficients of the adjusted model comply with the expected results. The gasoline price and income per capita variables are statistically significant at the 1% level, but the ethanol price is not.

ln QALC_t = 5.6766 + 3.0204 ln YPC_t − 0.53125 ln PGAS_t,  R2 = 0.847035 (9)
se = (1.223) (0.4297) (0.14831),  F(2,9) = 24.92
t = (4.643)*** (7.029)*** (−3.852)***

The results show that income has the greatest effect on the quantity demanded of ethanol. The income elasticity of demand indicates that an increase of 1% in the income of Brazilian consumers is accompanied by an increase of 3% in the quantity demanded of ethanol, ceteris paribus. An increase of 10% in gasoline prices produces a decrease of approximately 5% in the volume of ethanol fuel sold in the Brazilian domestic market. Therefore, the demand for ethanol fuel is inelastic to the price of gasoline and elastic to income. Ethanol reacts more than proportionally to changes in income and is less responsive to changes in gasoline prices


because even with the flexible-fuel vehicles, consumers still opt for gasoline in most cases, even when prices show it would be more advantageous to consume ethanol.

5. Conclusions

The results for the demand for sugarcane show that it is inelastic to the industrial production of its by-products because of the delay in responding to greater demand from the sugar-ethanol industry. This slow response is due to the size of the investments in plants and distilleries and the time required for them to mature. The quantities demanded for sugar do not react to changes in price, mostly because sugar is an essential good in the Brazilian basic basket and represents a small share of the consumers’ budget. The price of ethanol is also not a significant variable in explaining the changes in its quantity demanded. This is because during most of the period considered in this study (1997-2003) the flex-fuel technology was not available and the sales of vehicles that ran exclusively on ethanol were in sharp decline. Therefore, if sugarcane by-products are price inelastic, sugarcane is likewise price inelastic. Ethanol demand is also inelastic to changes in the price of gasoline, although there is a federal mandate to blend 25% of this good with gasoline. This can be explained by consumers’ tolerance of gasoline price increases, since they face transportation needs, there is a lack of alternative fuels for light vehicles, and by the proportion of the budget that Brazilian consumers allocate to gasoline spending.

Income elasticity plays a pivotal role in explaining the quantities demanded for all of the industry’s goods. Economic growth in Brazil spurs income per capita, which in turn increases the capacity for domestic consumption of these goods and drives the expansion of the industry. The Brazilian consumer, thus, is income elastic with respect to the quantities demanded for such goods. Furthermore, comparing the two by-products, ethanol has the highest income elasticity of demand. This can be explained by an established sugar consumption pattern that dates back through the country’s history, in contrast to the fairly recent ethanol fuel boom. This study also concludes that white sugar is income elastic and is not, contrary to what most studies suggest, an inferior good in the Brazilian market. Sugar is a normal good because its demand does not decrease with the increase of income per capita. This conclusion is supported by the econometric model, which showed the income elasticity of demand for sugar to be greater than one. Consumption patterns and the economic history of Brazil also correlate with these findings, as noted above. Sugar is a basic food need for most Brazilians.

It is recommended that for future studies the sugar econometric model be adjusted by including variables exogenous to the Brazilian market, which was beyond the objective of this study. It is also recommended to consider the introduction of flex-fuel vehicles, which occurred halfway through the period of this study. Adding monthly data beginning in 2004 would help to understand the dynamics of prices and quantities established with this new technology and would help to clarify the different phases that the ethanol industry has faced for nearly three decades in Brazil.

6. References

1. Alves L, Carvalheiro E, Shikida P and Souza E. Uma análise econométrica preliminar das ofertas de açúcar e álcool paranaenses. Revista de economia agrícola 54: 21-32.

2. Brazil. Automotive Industry Association – ANFAVEA (2009). Brazilian Automotive Industry Yearbook. Anfavea: São Paulo.

3. Brazil. Ministry of Agriculture, Livestock and Supply – MAPA (2009). Anuário Estatístico da Agroenergia. Mapa: Brasília.

4. Brazil. Ministry of Mines and Energy – MME (2009). Brazilian Energy Balance. Epe: Rio de Janeiro.


5. Brazil. National Agency of Petroleum, Natural Gas and Biofuels – ANP (2009). Oil, Natural Gas and Biofuels Statistical Yearbook. Anp: Rio de Janeiro.

6. Gujarati D (2004). Basic Econometrics. McGraw-Hill: New York.


Using ELECTRE and MACBETH MCDA methods in an industrial performance improvement context

Vincent Clivillé, Lamia Berrah and Gilles Mauris

LISTIC, Polytech’Annecy-Chambéry, BP 80439, 74944 Annecy Le Vieux cedex

Abstract

According to the control loop principle, the so-called performance expressions identify the objective satisfaction, thus reflecting the impact of the launched improvement actions on the objective achievement. In a multicriteria performance and action context, the selection of the right action to launch is a cumbersome task. Industrial decision-makers thus require information about the satisfaction of the fixed objectives on the one hand and the potential actions to launch on the other hand. In this sense, they carry out more or less formal decision mechanisms using their knowledge. In this study some reflections concerning the relevance of the use of MCDA methods for the selection of action plans with regard to the performances are proposed. More precisely, the widely used ELECTRE and MACBETH methods are considered. These approaches are analysed, as they deal with human knowledge, in the particular case of an industrial improvement process handled by the Fournier Company. Practical considerations are then highlighted by the application of these methods in the kitchen and bathroom furniture manufacturing area.
Keywords: Multi Criteria Decision Aid, ELECTRE, MACBETH, industrial performance, manufacturing case study

1. Introduction

In the current context of financial crisis and economic globalisation, continuous performance improvement requires a strong synergy between the company’s strategy, the defined objectives and the launched actions. This is the purpose of improvement philosophies such as Kaizen (Imai, 1988), Lean Manufacturing (Womack et al., 1990) and 6 Sigma (Pyzdek, 2001). Given the multicriteria aspect of performance, identifying and choosing the improvement actions with regard to the objectives is a difficult multicriteria problem. In this sense, using MCDA methods allows Decision-Makers (DMs) to base their decisions on rigorous mechanisms (Figueira et al., 2004) (Saaty, 1977) (Roy, 1985). Among the numerous MCDA methods available in an industrial context, DMs can mainly use two method families (Guitouni and Martel, 1999) (Bouyssou et al., 2002) (Eom and Kim, 2005):

- the outranking methods family (Roy, 2004);
- the performance aggregation methods family (Dyer, 2004).

The DM defines his problem by:

- the set of criteria, which contribute to legitimating the decision;
- the set of actions, which can be described according to the set of criteria.

The outranking methods rank the actions by comparing them according to each criterion, while the aggregation methods synthesize the performances associated with each criterion into an overall expression.


The aim of this study is to consider two methods of interest for the industrial context, one from each family: the outranking ELECTRE III method (Roy, 1991) and the aggregation MACBETH method (Bana e Costa et al., 1997). The application of these methods is considered through an industrial case study provided by the Fournier Company. The problem concerns the global increase of the turnover by improving the retailers’ network. In the next section, a global description of the ELECTRE III and MACBETH methods and the way they are generally used by industrial DMs is given. The following section presents the case study and the application of these methods. By considering the industrial results of the application, some concluding discussion and problems to be considered in the future are finally pointed out.

2. The ELECTRE III and MACBETH methods

The comparison principle of ELECTRE III is: an action A is better than an action B if A is better than B according to a majority of criteria without being much worse according to a minority of criteria, the other criteria being indifferent or incomparable. The criterion comparison is made by comparing the values of each action defined on ordinal scales. Two indexes are then computed according to the weights (expressed on an integer scale, e.g. 1 to 5) generally associated by the DMs to the considered criteria:

- the concordance index, ranging from 0 to 1, which reflects the arguments to favour A instead of B; these can be a strict preference (i.e. the difference between the values of A and B is higher than a threshold p defined by the DMs) or a weak preference (i.e. the difference between the values of A and B is higher than a threshold q defined by the DMs and lower than p, with q < p);

- reciprocally, the discordance index, ranging from 0 to 1, which reflects the arguments to favour B instead of A.

Finally the credibility index, ranging from 0 to 1, informs on the confidence of the comparison between A and B. It is expressed from the previous indexes by reducing the concordance index according to the discordance one (Roy, 2004). A small numerical sketch of these index computations is given at the end of this section.

The use of MACBETH involves two different aspects:

- the expression of the expected elementary performances, ranging from 0 to 1, with regard to the considered criteria;

- the determination of the aggregation operator parameters, for expressing the overall performance, ranging from 0 to 1.

In this sense, the MACBETH principle is the transformation of qualitative comparisons of the considered actions (in the form of preferences and strengths of preference given by the DMs (Vansnick, 1984)) into numerical performance expressions (Bana e Costa et al., 2004).
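The following sketch illustrates, in simplified form, the pairwise index computation described above for ELECTRE III; the action scores are hypothetical conversions of linguistic values to integers, the weights and weak-preference thresholds are illustrative, and the credibility value is a crude reduction of concordance by discordance rather than the exact ELECTRE III expression.

```python
# Integer weights and weak-preference thresholds per criterion (illustrative values).
weights = {"c1": 3, "c2": 4, "c3": 3, "c4": 3, "c5": 4, "c6": 5}
q = {c: 0 for c in weights}

# Hypothetical integer scores of two actions (higher = more satisfactory),
# mimicking the conversion of linguistic values into integers.
A = {"c1": 4, "c2": 3, "c3": 3, "c4": 2, "c5": 4, "c6": 4}
B = {"c1": 2, "c2": 3, "c3": 1, "c4": 2, "c5": 4, "c6": 4}

def concordance(x, y):
    """Share of the total weight carried by criteria on which x is preferred to y."""
    total = sum(weights.values())
    return sum(w for c, w in weights.items() if x[c] - y[c] > q[c]) / total

c_ab = concordance(A, B)              # arguments to favour A over B
d_ab = concordance(B, A)              # reciprocally, arguments to favour B over A
# Crude illustration only: ELECTRE III uses a more elaborate credibility formula.
credibility = c_ab * (1 - d_ab)
print(round(c_ab, 2), round(d_ab, 2), round(credibility, 2))
```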

3. The Industrial case study

The case study comes from the Fournier Company, an SME that produces kitchens, bathrooms and storage closets. The goal of the company is to increase its business turnover continuously. More particularly, since the beginning of the financial crisis in 2008, the company has focused on the management system of its retailers’ network. Indeed, while in the previous period the major objective was associated with the productivity and performance of the factory, it is necessary today for the company to go beyond the industrial cost, quality and delay criteria, by considering indirect activities such as logistics, marketing and advertising.


In this new context for the Fournier Company, the Quality Manager (QM) has expressed his need for formal tools to support his decisions in this domain. ELECTRE III and MACBETH have been considered for this formalization.

Context description

Regarding the budget, the resources and the feasibility, a set of potential actions is considered by the QM:
A1: To redesign the stores in keeping with the brand concept.
A2: To open new stores.
A3: To recruit new salespeople.
A4: To carry out direct marketing.
A5: To advertise.
A6: To turn out the store’s human resources.
A7: To reorganize the marketing processes.
A8: To develop the environmental and social responsibility aspects of the company.

The impact of these actions is essentially on the following criteria:
c1: Cost (of the improvement action)
c2: Delay (duration of the action)
c3: Technical feasibility
c4: Resource needs
c5: Employees’ conviction
c6: Efficacy (of the improvement action on the business turnover increase)

The QM now has to identify the most relevant actions with regard to their impact on the turnover increase. Table 1 gives the universe of discourse of the impact of the actions’ execution on each criterion. The QM has chosen to use linguistic as well as numeric values.

c1 Cost: Weak, Acceptable, Moderate, High, Very High, Unacceptable
c2 Delay (months): from 1 to 30
c3 Technical feasibility: Applicable, Simple, Complex, Unknown
c4 Resource needs (months-man): from 1 to 150
c5 Employees’ conviction: Unanimously shared, Strong and partially shared, Weak and partially shared, Isolated, Absent
c6 Efficacy: Very effective, Effective enough, Not very effective, Ineffective

(For each criterion, the values are listed from maximal to minimal satisfaction.)

Table 1: Criteria universe of discourse


Then, in Table 2 the impact of the action execution with regard to the considered criteria is gathered in a matrix form.

     c1          c2 (months)  c3          c4 (months-man)  c5                           c6
A1   Acceptable  24           Applicable  120              Unanimously shared           Very effective
A2   High        24           Complex     120              Unanimously shared           Very effective
A3   Moderate    8            Simple      12               Unanimously shared           Very effective
A4   Acceptable  5            Complex     6                Strong and partially shared  Effective enough
A5   High        3            Complex     1                Weak and partially shared    Not very effective
A6   Acceptable  12           Simple      12               Weak and partially shared    Not very effective
A7   Very High   10           Unknown     24               Weak and partially shared    Effective enough
A8   Weak        4            Simple      6                Isolated                     Ineffective

Table 2: Action/Criterion matrix

Note that from this matrix it is not possible to compare the actions directly.

Applying ELECTRE III

ELECTRE III first computes the concordance and discordance indexes. For this purpose the linguistic values must be converted into quantitative ones. The QM converts the linguistic values into integer values increasing with the satisfaction level; for instance, for criterion c5, a unanimously shared employees’ conviction is valued as 4 while an absent one is valued as 1. In addition, the QM easily gives the values of the criteria weights. Giving the preference thresholds is not so easy, so it is necessary to define what the strict and weak preferences are. Finally the thresholds are fixed according to the quantification of Table 3.

                                 c1  c2  c3  c4  c5  c6
Weight                            3   4   3   3   4   5
Threshold q (weak preference)     0   0   0   0   0   0
Threshold p (strong preference)   1   3   1   3   1   1

Table 3: Weights and thresholds quantification


The different indexes can then be computed by ELECTRE, which provides the action outranking (see Table 4). For example, A1 is preferred to A2, A5, A6 and A7; A1 is incomparable to A8; and A3 and A4 are preferred to A1.

Legend: I: indifference between actions; P: row action preferred to column action; P*: row action not preferred to column action; R: row action incomparable to column action.

Table 4: ELECTRE outranking

Applying MACBETH

Using MACBETH, the QM expresses strengths of preference for pairwise comparisons of actions, and the corresponding elementary expressions are then computed. As an example, Table 5 presents the expression mechanism for the c3 and c6 criteria.

The technical feasibility of A8 (sustainable development) is judged “moderately better” than that of A1 (stores in accordance with the brand concept). A very effective (VE) action is preferred (without specifying the strength) to an effective enough (EE) action.

Table 5: The elementary performance expressions for the c3 and c6 criteria

Knowing that MACBETH essentially uses the weighted mean for aggregating criteria performances, it is necessary to determine the weights of the different criteria. The QM has to compare actions pairwise once again. In this case, for the sake of simplicity, he can consider characteristic actions (possibly virtual), i.e. actions for which one criterion is totally satisfied and the other ones not at all, as shown in Table 6.


Legend: C: Cost, D: Delay time, TF: Technical feasibility, RN: Resource needs, C: Employees’ conviction, E: Efficacy.

Table 6: The weight determination

The overall performance is then computed.

 

Table 7: The overall performance of the set of improvement actions
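A minimal sketch of the weighted-mean aggregation behind this overall performance computation, assuming hypothetical elementary performances and criteria weights (the actual MACBETH values are derived from the pairwise judgements described above):

```python
# Hypothetical elementary performances (0 to 1) of three improvement actions on the
# six criteria c1..c6, and hypothetical criteria weights summing to one.
weights = [0.10, 0.15, 0.10, 0.10, 0.20, 0.35]
elementary = {
    "A1": [0.7, 0.4, 1.0, 0.3, 1.0, 1.0],
    "A3": [0.5, 0.8, 0.8, 0.9, 1.0, 1.0],
    "A8": [1.0, 0.9, 0.8, 0.9, 0.2, 0.0],
}

# Overall performance = weighted mean of the elementary performances.
overall = {a: sum(w * p for w, p in zip(weights, perf)) for a, perf in elementary.items()}
for action, score in sorted(overall.items(), key=lambda kv: -kv[1]):
    print(action, round(score, 3))
```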

4. Discussion and conclusion

In the considered context, MACBETH is more demanding in terms of information, because strengths of preference are asked for, as well as comparisons of characteristic actions. Moreover, poor information on this point can lead to the non-existence of a weighted average. ELECTRE is based only on the considered actions but requires the definition of the thresholds, and it seems easier to carry out than MACBETH. Both methods allow the DMs to rank the considered actions according to their interest. In addition, MACBETH gives a more refined description of the actions, which enables the DMs to evaluate their impact on the overall performance. Concerning the legitimisation of the decision, the QM appreciates the two MCDA formalisations (list of criteria and actions, criteria weights and thresholds, pairwise strengths of preference between actions). Moreover, the traceability of the information is very useful to validate or to reconsider the obtained results. It also allows the DMs to modify the method parameters interactively thanks to the associated software. It seems that ELECTRE III is better adapted to the strategic level, at which ordinal information is sufficient, while the MACBETH method is very interesting at the operational level, at which quantification is useful.


An additional functionality requested by the QM concerns the ability to evaluate subsets of actions and not only single actions. Sensitivity analysis concerning the preferences and weights would also be of great interest to increase confidence in the final decision. We thank Christian Farat, Fournier Quality Manager, for his availability and his expertise, which made this work possible.

5. References

1. Bana e Costa CA and Vansnick JC (1997). Applications of the MACBETH approach in the framework of an additive aggregation model. Journal of Multi-Criteria Decision Analysis 6 (2): 107-114.

2. Bana e Costa CA De Corte JM Vansnick JC (2004). On the mathematical foundations of Macbeth in MCDA. Multiple Criteria Decision Analysis. Figueira J Greco S and Ehrgott M(eds). Kluwer Academic Publishers: 409-442.

3. Bouyssou D. et al. (2002). Aiding decisions with multiple criteria: essays in honor of Bernard Roy. (eds). Kluwer Academic Publishers,

4. Dyer JS (2004). MAUT – Multiattribute Utility Theory. In: Figueira J, Greco S and Ehrgott M (eds). MCDA. Multiple Criteria Decision Analysis. Kluwer Academic Publishers: 265-295.

5. Eom S and Kim E. (2005) A survey of decision support system applications (1995–2001). Journal of the Operational Research Society 57: 1264-1278.

6. Imai M (1988). Kaizen: Key to Japan's Competitive Success.  Random House USA Inc: 259p.

7. Guitouni A and Martel JM (1998). Tentative guidelines to help choosing an appropriate MCDA method. European Journal of Operational Research 109 (2): 501-521.

8. Figueira J Greco S and Ehrgott M (2004). MCDA. Multiple Criteria Decision Analysis State of the Art Surveys. Kluwer Academic Publishers: 1045 p.

9. Pyzdek T (2001). The six sigma handbook. McGraw-Hill: New York.
10. Roy B (1985). Méthodologie multicritère d’aide à la décision. Economica: 423 p.
11. Roy B (1991). The outranking approach and the foundations of ELECTRE methods. Theory and Decision 31: 49-73.
12. Roy B (2004). Paradigms and challenges. In: Figueira J, Greco S and Ehrgott M (eds). MCDA. Multiple Criteria Decision Analysis. Kluwer Academic Publishers: 3-24.
13. Saaty T (2004). The analytic hierarchy and the analytic network processes for the measurement of intangible criteria and for decision making. In: Figueira J, Greco S and Ehrgott M (eds). MCDA. Multiple Criteria Decision Analysis. Kluwer Academic Publishers: 345-407.
14. Vansnick JC (1984). Strength of preference: theoretical and practical aspects. In: Brans JP (ed). Operational Research IFORS 84. North Holland: Amsterdam: 367-381.
15. Womack JP, Jones DT and Roos D (1990). The machine that changed the world. Free Press: New York: 339 p.


Application of a multi-criteria decision model for the selection of the best sales force automatization technology alternative for a Colombian enterprise

I Cortina, N Granados and M Castillo*

Departamento de Ingeniería Industrial, Universidad de los Andes, Cr 1ª No 18ª-10, Bogotá, Colombia.

Abstract

A Colombian enterprise needs to select a technology for sales force automatization in order to optimize the work of the commercial area, which involves orders, portfolio recovery, information for customers regarding features and availability of products, and customer orders that have not been delivered yet, among others. This paper presents a multi-criteria decision model, using qualitative and quantitative variables, for the selection of the best sales force automatization technology alternative. The Analytic Hierarchy Process (AHP) model built takes into consideration the benefits and costs of each alternative evaluated, using a mathematical expression that incorporates these two aspects. In order to construct the costs hierarchy, it was necessary to calculate the expected value of the PV of the cost of each alternative using a Monte Carlo simulation model. Based on the results produced by the model built in this work, the company decided to implement the alternative that presented the best global behavior.
Keywords: Information systems, decision analysis, simulation

1. Introduction

Sales Force Automatization is a key component of CRM systems and has recently experienced a major boom due to its benefits regarding cost reduction, improved customer service, support for decision making by sales representatives, speed in the sales process and feedback to the company. Stanton, Etzel and Walker (2004, pp. 603-604) define Sales Force Automatization (SFA) as the ability to use electronic tools to combine company and customer information in real time. It makes it possible to take full advantage of technological advances in communications and to combine them with the wide variety of possibilities for incoming and outgoing information available on the market. Companies, therefore, are able to manage customer information and make it available to sales representatives, who can manage it efficiently. Ultimately, this leads to customer satisfaction. Decisions related to sales force automatization are considered strategic. Organizations are constantly faced with this kind of decision, which has primarily an economic impact. Such decision-making processes are not always based on a structured methodology, nor do they always use the appropriate tools to analyze and solve problems involving risk and uncertainty. Such was the case of the IT department of a Colombian company that is a world leader in the manufacturing of construction materials, decoration products and disposable packaging. This

                                                             E-mail: [email protected] 


company had to select the technology for automating the sales force that would allow them to optimize the tasks associated with the commercial area. This paper illustrates a decision model designed under a structured methodology to support this decision. The model evaluates the qualitative and quantitative variables of interest.

2. Literature Review

There is scarce literature on applications of multi-criteria selection models to choose technologies to automate a sales force. Nevertheless, there is literature that provided useful information concerning the minimum features that sales force automatization software must have in order to be considered adequate. That information was used to define the qualitative factors to be taken into account, as well as how to obtain the necessary information to evaluate them. Cepeda (2000), for example, illustrated a guide that is developed through a comparative analysis of some of the most representative suppliers. Additionally, it presents a series of evaluation factors developed through the formulation of key questions that allow greater success in the process of making technology purchase decisions. Méndez (2002) investigated the use of wireless technology to carry out the process of automating sales forces. His document presents the strategic preparation to be carried out by an organization in order to implement a sales force automatization model using wireless technology. Additionally, it illustrates the parameters for the evaluation of this type of project. This literature generated further understanding of aspects related with mobile devices and operating systems. Avila & Diaz (2008), for instance, developed a mobile application to provide advanced business services to members of an organization who remain outside the office for extended periods of time. The methodology designed by Castillo (2006) was used to develop the model of this article. To implement it, a case presented by Castillo (2006) was taken as reference. The objective of the case was to structure and analyze a decision-making process to select the IT infrastructure of a financial company. In the case, an influence diagram was developed to calculate the expected value of the usefulness of each alternative and then an AHP model was used to select the best performing alternative.

3. Methodology

The methodology designed by Castillo (2006) was used to develop a model for selecting the technology for a sales force automatization. Figure 1 presents the methodology:

Figure 1. Methodology used: description of the current situation; literature review; problem structuring; design of the specific methodology (identification of the model to be used, data collection, model building); application of the specific methodology; analysis of results and recommendations.


Problem Structuring

In order to structure the problem, the system had to be delimited and the main aspects of the problem, the most relevant actors and the relations between them had to be identified. With this information, a list was made of the relevant variables of the problem. Finally, the decision alternatives were defined by the decision makers and company experts on the matter. In the context of this decision, two main factors were considered: quantitative factors and qualitative factors. The first are represented mainly by the costs associated with the different alternatives. Qualitative factors, on the other hand, were grouped into three types: functional, technical expertise, and technical support and experience. At this stage, four alternatives were identified to be analyzed:

Alternative 1: "SW End to End". In this alternative, the company would continue with the same provider but with a new application (mSeries), which is now marketed under its new name, Spring Wireless. This alternative involves a new End to End scheme, which would be hired as a full service, rather than simply purchasing software licenses. In this case, the supplier would be responsible for hardware, software and the direct support for the sales representatives.

Alternative 2: "Telefónica-Insitu". In this alternative, the company would hire a new solutions provider (Telefónica), which also offers an End to End scheme. Telefónica would work with its strategic partner Insitu Mobile Software SA.

Alternative 3: "Telefónica-WM". As in Alternative 2 (Telefónica-Insitu), this alternative includes the hiring of services from Telefónica as a solutions integrator, but in this case the manufacturer of the software would be WM Wireless & Mobile SA.

Alternative 4: "SW Current Scheme". This alternative is equal to Alternative 1 (SW End to End) in terms of software and application provider, but its operation scheme is different: it would not work with the End to End model, but would rather continue with the current model in which the company itself is responsible for hardware, software and direct support of sales representatives.

Figure 2 presents the differences among the alternatives.

Model Identification

Two AHP models were built to determine the weight of the alternatives regarding benefits and costs. Those results were combined in a mathematical expression proposed by Saaty (2010). The expression took into account the level of importance that the decision maker assigned to each of those aspects. To fill in the pairwise comparison matrices required to build the costs hierarchy, a Monte Carlo simulation was developed in Crystal Ball to calculate the expected value of the PV of costs.


Set-up:
- Alternative 1 (SW End to End): includes project management, consulting, development and testing.
- Alternative 2 (Telefónica-Insitu): software acquisition, integration with the ERP, user and technician training.
- Alternative 3 (Telefónica-WM): development, training and supervision.
- Alternative 4 (SW Current Scheme): project management, consulting, development and testing.

Implementation time:
- Alternative 1: 2,672 hours (four months).
- Alternative 2: 3,500 hours (six months).
- Alternative 3: NA (four months).
- Alternative 4: 2,672 hours (four months).

Cost:
- Alternative 1: the cost of implementation is deferred for three years and paid through leasing; engineer hourly cost is US $50 for a total of US $133,600.
- Alternative 2: engineer hourly cost is US $22 for a total of US $77,000.
- Alternative 3: US $11,495 (paid just once).
- Alternative 4: the cost of implementation is deferred for three years and paid through leasing; engineer hourly cost is US $50 for a total of US $133,600.

Software:
- Alternative 1: mSeries license update at US $3 per license.
- Alternative 2: US $13,845 for software acquisition for 200 sales representatives.
- Alternative 3: US $172 per license (200 licenses).
- Alternative 4: mSeries license update at US $3 per license.

Mobile device (PDA/smartphone):
- Alternative 1: a mobile device per sales representative at a monthly fee of US $14.76 for three years.
- Alternative 2: a mobile device per sales representative at US $327; the actual cost of the device is higher, but Telefónica subsidizes a fraction, subject to the data plan taken.
- Alternative 3: a mobile device per sales representative at US $327; the actual cost of the device is higher, but Telefónica subsidizes a fraction, subject to the data plan taken.
- Alternative 4: a mobile device per sales representative at a monthly cost of US $14.76 for three years.

Monthly operational cost:
- Alternative 1: US $67.77 per vendor, including hosting, service desk, service management and wireless communication.
- Alternative 2: service desk includes an operator of the company for centralization, document and scale requirements at US $1,500, plus US $2 per salesperson for support and maintenance by Insitu; wireless communication includes an unlimited data plan per salesperson at US $31.
- Alternative 3: service desk includes an operator of the company for centralization, document and scale requirements at US $1,500, plus US $17 per salesperson for support and maintenance by WM; wireless communication includes an unlimited data plan per salesperson at US $31.
- Alternative 4: the IT Director’s projection determined an operational cost of US $51 per salesperson, including direct support by the company’s helpdesk team, annual fee and maintenance support for each salesperson, communications and all costs associated with data and server maintenance.

Device provisioning:
- Alternative 1: includes review, re-installation and replacement logistics at US $29 per event.
- Alternative 2: not included; contracted directly by the company at US $15.
- Alternative 3: not included; contracted directly by the company at US $15.
- Alternative 4: includes review, re-installation and replacement logistics at US $15 per event.

Figure 2. Alternatives Comparison Summary


Data Collection

The bidders provided their proposals with all the necessary information. Later, experts used the information to make the pairwise comparison matrices to determine the relative importance of the issues involving the alternatives. Then, historical information was collected to determine the probability distributions of the random variables included in the model to run the Monte Carlo simulation. An evaluation team was created in order to fill out the pairwise comparison matrices. The team included the CEO, the Sales Managers of all of the business units and the IT Manager. An initial meeting took place and the group was told about the current project stage and the role they would play. Subsequently, individual interviews were conducted with each of them to obtain the information needed through the use of surveys. Then, the Decision Analysts filled in the matrices. The consistency of each of the comparison matrices was checked using the consistency ratio calculated by Expert Choice for that purpose. This ratio should not exceed 10% so that the matrix can be considered consistent (Saaty, 2001).
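The consistency check mentioned above can be reproduced outside Expert Choice. As a sketch, the snippet below derives the priority vector of a hypothetical 3x3 pairwise comparison matrix from its principal eigenvector and computes the consistency ratio using Saaty's random index values.

```python
import numpy as np

# Hypothetical 3x3 pairwise comparison matrix on Saaty's 1-9 scale (not from the study).
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
priorities = np.abs(eigvecs[:, k].real)
priorities = priorities / priorities.sum()        # normalized principal eigenvector

n = A.shape[0]
lambda_max = eigvals[k].real
CI = (lambda_max - n) / (n - 1)                    # consistency index
RI = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32}[n]   # Saaty's random indices
CR = CI / RI                                       # consistency ratio, should not exceed 0.10
print(priorities, round(CR, 3))
```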

Building the Model

Quantitative model: Initially, a Monte Carlo simulation was run using Crystal Ball to calculate the expected value of the PV of the cost associated with each alternative. Table 1 summarizes the PV of the cost of the alternatives. The cost comparison matrix was built by assigning a higher score to the alternative with the highest PV of cost, because a lower cost represents better performance in the mathematical expression selected for the general model.
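A minimal sketch of this kind of Monte Carlo estimation (using numpy in place of Crystal Ball), with hypothetical cost distributions, discount rate and sales-force size rather than the study's inputs:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 10_000                       # Monte Carlo trials
rate = 0.12                      # hypothetical annual discount rate
years = 3
salespeople = 200

# Hypothetical uncertain cost components for one alternative (not the study's figures):
setup = rng.triangular(100_000, 130_000, 170_000, N)      # upfront implementation cost
monthly_op = rng.normal(60, 8, (N, years))                 # monthly cost per salesperson

pv_cost = setup.copy()
for t in range(years):
    yearly_cost = monthly_op[:, t] * 12 * salespeople
    pv_cost += yearly_cost / (1 + rate) ** (t + 1)         # discount each year's cost

print(pv_cost.mean())            # expected value of the PV of cost for this alternative
```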

Table 1. PV of the cost of alternatives

Qualitative models: For each of the AHPs defined (Benefit AHP and Cost AHP), the conceptual model was developed and built using the Expert Choice software. Figures 3 and 4 illustrate the hierarchies.


Figure 3. Benefit AHP conceptual model

Figure 4. Cost AHP conceptual model

General Model (Cost vs. Benefit): The general model combines the results in the mathematical expression shown below. In order to establish the performance value of each alternative, the expression involves the benefits and costs of the decision as follows:

Performance = WB*Bi – WC*Ci

where:
WB = benefits’ global weight
Bi = standardized performance of alternative i regarding the benefits
WC = costs’ global weight
Ci = standardized performance of alternative i regarding the costs
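A minimal sketch of this benefit-cost combination, using the 80%/20% importance split reported in the Results section and illustrative (not the study's) standardized priorities:

```python
# Illustrative standardized priorities for four alternatives from the Benefit AHP and
# Cost AHP (hypothetical numbers, not the study's results); WB/WC follow the 80/20 split.
benefit = [0.35, 0.10, 0.20, 0.35]    # Bi: higher is better
cost    = [0.50, 0.12, 0.08, 0.30]    # Ci: higher means a costlier alternative
WB, WC = 0.8, 0.2

performance = [WB * b - WC * c for b, c in zip(benefit, cost)]
best = max(range(len(performance)), key=lambda i: performance[i])
print([round(p, 3) for p in performance], "best: Alternative", best + 1)
```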


4. Results obtained

The performance of each alternative is shown in Table 2. The results of the hierarchies depended on the level of importance assigned to each factor by the decision maker: 80% for benefits and 20% for costs.

Table 2. Alternatives performance

In the Benefits hierarchy, the best alternative is Alternative 4, with a score of 39.1%, followed by Alternative 1, with a close score of 38.3%. Alternatives 3 and 2 had distant scores of 15.2% and 7.5%, respectively. Alternative 4 had the best overall score because it was the best alternative in each of the three main aspects involved in this model: the functional, the technical and the support/experience aspects. Although this alternative has the same supplier and implementation as Alternative 1, its performance in the support/experience aspect is better. Specifically, in the opportunity criterion the score is higher because first-level support response times improve considerably when support is given directly by company staff, unlike the End to End model in which this service is provided by the supplier. Table 2 shows that Alternative 2 has the lowest performance regarding benefits: 7.5%. This is due to the fact that it has the lowest score in most of the criteria, except compatibility.

Regarding the costs hierarchy, the best alternative for the overall goal – to determine the best financial solution for automating the company’s sales force – is Alternative 3 with a score of 8.5%, followed by Alternative 2 with a close score of 12.8%. Alternatives 4 and 1 had distant scores of 29.0% and 49.7%, respectively. This behavior in the costs model is due to the analysis and comparison of each alternative relative to the others, based on the descriptive statistics produced by the simulation in Crystal Ball and the risk profile of the decision makers.

When comparing the performance of each alternative in the general model, the best one is Alternative 4, with a value of 68.3%. In second place is Alternative 1, with a performance of 58.4%. The performance of these two alternatives is similar because they come from the same provider. The global performance of Alternatives 2 and 3 is also similar because both involve working with the same mobile solution integrator, which makes them share most of the total cost items, such as mobile device cost, wireless communication (voice/data) and helpdesk service. Moreover, the benefit component is influenced by the software developer, Insitu or WM, and not by the integrator Telefónica.


5. Conclusions and recommendations

The application of the fundamentals of decision theory benefited the company because it offers a structure to analyze alternatives. This structure provides methodology, mathematical modeling and computational tools to improve the evidence and the quality of decisions. Designing a model that involves easily identifiable factors helped the interpretation of their interactions, which motivated the participation of different stakeholders in the collection of the information required. Following a methodology helped the integration of the most important aspects of the decision-making process and enabled more accurate judgments to be made. Although the methodology was developed in order to meet the goal of a particular project, it is clear that it can be used in other decision-making problems involving IT, with the appropriate adjustments.

In the costs model, the analysis of descriptive statistics such as the expected value, variance, coefficient of variability and percentiles, among others, is recommended. Also, it is important to take into account the risk profile of the decision makers, rather than simply comparing absolute values. Although Alternative 4 is one of the most expensive, it has the best performance because it generates the highest benefit. Its performance is superior because benefits are much more important than cost for the decision makers. Therefore, the recommendation given to the company was to choose Alternative 4, i.e. continuing the current operation scheme with the same provider but with the new mobile solution they offer to automate the sales force. The company is currently implementing Alternative 4.

6. References

1. Avila H and Diaz N (2008). Implementación de una fuerza de ventas móvil. Bogotá, Universidad de los Andes.

2. Castillo M (2006). Toma de decisiones en las empresas: entre el arte y la técnica. Bogotá: Universidad de los Andes.

3. Cepeda S (2000). Método de selección y evaluación para propuestas de adquisición e implementación de Sistemas Integrados de Gestión Empresarial ERP (Enterprise Resource Planning). Bogotá: Universidad de los Andes.

4. Clemen (1996). Making Hard Decisions. Duxbury Press.
5. Evans (2007). Statistics, Data Analysis and Decision Models. Pearson-Prentice Hall.
6. Keeney-Raiffa (1993). Decisions with Multiple Objectives: Preferences and Value Tradeoffs. Cambridge University Press.
7. Kirkwood (1997). Strategic Decision Making. Wadsworth: Cengage Learning.
8. Méndez R (2002). Automatización de fuerza de ventas, empleando tecnología inalámbrica. Bogotá: Universidad de los Andes.
9. Saaty T (2001). Decision making with dependence and feedback: the Analytic Network Process. Pittsburgh: RWS Publications.
10. Saaty T (2005). Theory and Applications of the Analytic Network Process. RWS Publications.
11. Saaty T (2010). Principia Mathematica Decernendi. Pittsburgh: RWS Publications.
12. Stanton W, Etzel M and Walker B (2004). Fundamentos de Marketing. McGraw Hill.
13. Winston (2006). Modelos Financieros con Simulación y Optimización. Palisade Latinoamérica.


Personnel-related decision making using ordinal expert estimates

Sergey Kadenko36

National Academy of Sciences of Ukraine, Institute for Information Recording,

Kyiv, Ukraine.

Abstract

A method for selecting candidates applying for a vacancy in an organization is suggested. A manager (decision-maker) is supposed to build a hierarchy of criteria influencing the candidates’ (and working employees’) skill level. In case it is problematic for him/her to define the importance (weight) of each of these criteria and to cardinally estimate employees’ and applicants’ skills (or to build respective pair comparison matrices), (s)he can at least try to build their rankings and calculate criterion weights based on them. A manager can rank working employees according to their global skill level and according to all criteria which influence it. Based on these rankings, single-criterion weights can be calculated using the method developed by the author. Candidates can then be ranked according to their global skill level through aggregation (weighted summing) of their single-criterion ranks, which, in turn, can be obtained through interviews and questionnaires.
Keywords: Multi-criteria decision making, ordinal expert estimation, ranking, criteria hierarchy, criterion weight

1. Introduction

Assessment of candidates applying for vacancies remains a topical problem for many organizations. The issue of job-related decision-making has been tackled by many researchers (Atherton, 1998), (Adamus, 2009). Depending on the specific situation in which job candidates should be estimated, several approaches can be used. Direct estimation of candidates’ skill levels is the simplest and most obvious of them. In case candidates’ skill level is defined on the basis of their estimates according to several value-independent criteria (Keeney and Raiffa, 1993) in some kind of ratio scale, it is possible to construct an additive value function reflecting their ratings, i.e. to calculate these ratings as weighted sums of single-criterion estimates. If criteria are intangible and it is difficult to estimate candidates’ skills and criterion weights directly, it is possible to apply AHP/ANP (Saaty, 2008) or alternative approaches (Tavana, 2006). In this paper we analyze the case when, while assessing new candidates applying for a vacancy, we can learn from previous experience of employee skills estimation. Generally known and commonly used experience-based learning approaches include the least squares method, regression models, the group method of data handling, neural-network-based algorithms, the linear extrapolation method, etc. Yet all these approaches focus on using cardinal values as the source of data for learning. We suggest an approach which allows using ordinal estimation experience for criterion weight calculation in case it is problematic or impossible to obtain cardinal estimates of employees’ and/or candidates’ skills and of estimation criterion weights. It should be stressed that, in contrast to cardinal estimates, ordinal estimates (or rankings) do not bear any specific information about the quantitative relation between alternatives, allowing only to order alternatives according to a given criterion.

                                                            36 E-mail: [email protected] 


2. Strict problem formulation

Let us formulate the exact statement of a problem, which can require ranking of new candidates applying for a vacancy. We have:

1) A set of employees already working in the organization: $E_1, E_2, \ldots, E_m$.

2) A set of value-independent criteria, i.e. critical components influencing employees’ general skills level: $C_1, C_2, \ldots, C_n$. Let us define the general skills level as G (see Figure 1).

Figure 1. Problem structure

3) A strict ranking of employees according to their skills level G: $g_1, \ldots, g_m$.

4) A set of ordinal estimates (rank scores) of employees according to the estimation criteria: $\{r_{ij}\}$, $i = 1, \ldots, m$, $j = 1, \ldots, n$, where $r_{ij}$ is the rank score, or ordinal estimate, of the i-th employee according to the j-th criterion. As already noted, these estimates can be obtained through questionnaires, interviews or direct assessment of candidates.

5) A set of candidates applying for a vacancy (potential new employees): $E_{m+1}, E_{m+2}, \ldots, E_{m+p}$.

We should find:

1) Weights of all estimation criteria $\{w_j\}$, $j = 1, \ldots, n$, with $\sum_{j=1}^{n} w_j = 1$ and $w_j \ge 0$, allowing to preserve the global ranking of employees, calculated as the ranking of the weighted sums of their single-criterion rank scores.

2) Ordinal estimates (rank scores) of candidates according to the global criterion (general skills level) G: $g_{m+1}, \ldots, g_{m+p}$.


3. Solution algorithm idea

Presumably, employees’ and candidates’ ranking is built as ranking of weighted sums of their single-criterion rank scores (Totsenko, 2005). This means that in order to find the most stable criterion weights’ vector (the one which allows to preserve the global ranking of weighted sums of employees’ single-criterion rank scores even after perturbations), we should find the center of the area where all the inequalities from the following system are fulfilled:

$\sum_{j=1}^{n} a_{ij} w_j > 0, \quad i = 1, \ldots, m-1$
$\sum_{j=1}^{n} w_j = 1, \quad w_j \ge 0, \quad j = 1, \ldots, n$    (1)

where $a_{ij} = r_{kj} - r_{lj}$, $k, l = 1, \ldots, m$, $k \ne l$, and $i = 1, \ldots, m(m-1)/2$ if we suppose that the global ranking transitivity requirement may be unfulfilled (it is supposed that employees in the ranking are arranged in order of increasing skills level). Each inequality corresponds to a pair comparison of two employees $E_i$ and $E_j$ in an ordinal scale. In the general case the system may be redundant and can possibly contain incompatible inequalities. As the region of acceptability is limited by a system of linear constraints (inequalities), if the region is not empty it is convex, and its closure is a compact set (see Figure 2, illustrating a 3-dimensional case). So, the necessary and sufficient condition of the region’s existence is the existence of a finite number of its extreme points. Any convex combination of the region’s extreme points will lie within the region. In our case the region’s extreme points are the points where the simplex ($\sum_{j=1}^{n} w_j = 1$, $w_j \ge 0$, $j = 1, \ldots, n$) and the hyper-planes given by the inequalities’ left-hand sides intersect. Hence, we can find the extreme points as solutions of systems of linear equations: each system will consist of n − 1 hyper-plane equations and the simplex equation. Each system will contain n equations and n unknown variables (weights).

Figure 2. Illustration of a 3-dimensional case.


After enumerating all the equation systems we can either define the weights’ acceptability range (i.e., its extreme points and “center”), or learn that it is empty.

If the weights’ acceptability range is empty, we propose to turn to the one(s) who built the global ranking of employees according to their general skills level and ask him/her/them to change it in such a way that the criterion weights’ acceptability range is not empty. In order to do this, we must know what pairs of employees should be swapped in the global ranking. We suggest looking through all the weight vectors obtained through solving the equation systems described above and choosing the vectors on which the minimal Kemeny distance (Kemeny and Snell, 1973) between the initial global ranking and the ranking of weighted sums of local ordinal estimates is reached. The Kemeny distance between rankings is 4 times larger than the number of unfulfilled inequalities from system (1) (Kadenko, 2008a); since these values are directly related, we choose the Kemeny distance and not other coefficients (such as Kendall’s rank correlation value (Abdi, 2007) or the concordance value) as an error indicator. Among these vectors we recommend choosing the vector on which the minimal absolute residual is reached:

$\sum_{j=1}^{n} a_{ij} w_j + u_i \ge 0, \quad i = 1, \ldots, m(m-1)/2$
$w_j \ge 0, \quad j = 1, \ldots, n; \qquad u_i \ge 0$
$\sum_{i=1}^{m(m-1)/2} u_i \rightarrow \min$    (2)

where u_i is a non-negative residual value for the i-th inequality. In this case the feedback will demand the smallest number of changes (i.e. permutations) in the initial global ranking of employees from the decision-maker (or manager), and he or she will be most likely to make them. If the one who ranked the employees agrees to make changes in their global ranking, a non-empty weights' acceptability area (its extreme points and center {w_j}, j = 1, ..., n; \sum_{j=1}^{n} w_j = 1, w_j \ge 0) will be obtained. After that, single-criterion rankings of the candidates

can be defined through the aforementioned procedures (questionnaires, interviews, direct assessment): {r_{m+i,j}}, i = 1, ..., p, j = 1, ..., n, where r_{m+i,j} is a rank score (an ordinal estimate) of the i-th candidate according to the j-th criterion. Then, putting the newly calculated weight values (reflecting the decision-maker's priorities under the circumstances and the specificity of the situation) into the weighted sum formula, we can build the candidates' ranking according to the global criterion representing their general skill level. This will be the ranking of weighted sums of their ordinal estimates according to all the components influencing their general skill level:

  \sum_{j=1}^{n} w_j r_{m+i,j} \ge \sum_{j=1}^{n} w_j r_{m+k,j}  \Rightarrow  g_{m+i} \ge g_{m+k},   i, k = 1, ..., p        (3)

4. Step-by-step solution procedure

Step 1: Make the transition from the rankings to the system of inequalities (1) and add to it the inequalities deriving from the coordinate planes' equations:

  \sum_{j=1}^{n} a_{ij} w_j > 0,   i = 1, ..., m(m-1)/2;
  w_j \ge 0,   j = 1, ..., n,   where a_{ij} = r_{kj} - r_{lj},   k, l = 1, ..., m        (4)


Step 2: Take the first n - 1 inequalities from the system and change them to equations. Then we add the simplex equation \sum_{j=1}^{n} w_j = 1 to the system. Thus we get a system of n equations with n unknown variables:

  \sum_{j=1}^{n} a_{ij} w_j = 0,   i = 1, ..., n-1;
  \sum_{j=1}^{n} w_j = 1        (5)

Step 3: Check whether the system's determinant equals 0, i.e. whether the system's equations are linearly dependent. If the system's determinant equals 0, form another (next) system and return to Step 2. Otherwise proceed to Step 4.

Step 4: If the equation system's determinant does not equal 0, we find the system's solution, which is a point in n-dimensional space.

Step 5: Substitute the weight values obtained from system (5) into all the inequalities of system (4). If at least one of the inequalities is not (strictly) fulfilled, the weight values are not memorized. If the solution of system (5) satisfies all the inequalities, it is one of the extreme points of the weights' acceptability region, and we should store it in memory.

Step 6: Enumerate all the systems of n linear equations which can be formed from the inequality system (4) and the simplex equation, repeating Steps 2 to 5 for the remaining possible n-dimensional systems. Thus we get the set of extreme points of the acceptability region of system (4). The existence of extreme points, if any, testifies to the existence of a non-empty acceptability region. Extreme points lie on the intersection of the simplex with the hyper-planes specified by the left sides of the inequalities from system (4), including the coordinate hyper-planes.

Step 7a: Calculate the optimal criterion weight vector (i.e. the one most tolerant to possible perturbations) as the simple average of the coordinates of the extreme points of the solution area of system (4) (see Figure 2).

Step 7b: If after Step 5 none of the obtained equation systems' solutions satisfies all the inequalities of system (1), we can conclude that the acceptability range is empty. In this case we should search for the weight vectors for which the minimal number of inequalities from system (4) is unfulfilled. Among them we should choose the one for which the minimal residual value is achieved:

  \sum_{i=1}^{m(m-1)/2} u_i \to \min;
  \sum_{j=1}^{n} a_{ij} w_j + u_i \ge 0,   u_i \ge 0,   i = 1, ..., m(m-1)/2;
  w_j \ge 0,   j = 1, ..., n        (6)

where u_i is a non-negative residual value for the i-th inequality. Then we should invert all the unfulfilled inequalities. This operation will require swapping the respective employees in their ranking according to general skill level. After Step 7b we should repeat Steps 1 to 7a to obtain a non-empty weights' acceptability area and its center.
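The enumeration in Steps 1-7 can be sketched in a few lines of code. The fragment below is a minimal illustration (our own sketch using NumPy, not the authors' implementation): it forms every n x n system from n-1 hyper-planes plus the simplex equation, keeps the solutions that satisfy all inequalities, and averages them to obtain the center of the acceptability area.

```python
# Minimal sketch of Steps 1-7: enumerate candidate extreme points and average them.
from itertools import combinations
import numpy as np

def acceptability_centre(A, tol=1e-9):
    """A: matrix of coefficients a_ij = r_kj - r_lj, one row per employee pair."""
    n = A.shape[1]
    rows = np.vstack([A, np.eye(n)])               # add coordinate hyper-planes w_j >= 0
    simplex = np.ones(n)
    extreme_points = []
    for idx in combinations(range(rows.shape[0]), n - 1):
        M = np.vstack([rows[list(idx)], simplex])  # Step 2: n-1 hyper-planes + simplex
        b = np.zeros(n)
        b[-1] = 1.0                                # hyper-planes = 0, weights sum to 1
        if abs(np.linalg.det(M)) < tol:            # Step 3: skip singular systems
            continue
        w = np.linalg.solve(M, b)                  # Step 4: candidate point
        if np.all(rows @ w >= -tol):               # Step 5: all inequalities satisfied
            extreme_points.append(w)
    if not extreme_points:                         # empty area: Step 7b feedback is needed
        return None, []
    return np.mean(extreme_points, axis=0), extreme_points   # Step 7a: centre
```

For the example in Section 5 below, A would contain one row per pair of employees in the global ranking, with entries r_kj - r_lj taken from Table 1.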


Step 8: Build the single-criterion rankings of the new candidates. Then obtain their global ranking as the ranking of weighted sums of their ordinal estimates according to all the components influencing their general skill level:

  \sum_{j=1}^{n} w_j r_{m+i,j} \ge \sum_{j=1}^{n} w_j r_{m+k,j}  \Rightarrow  g_{m+i} \ge g_{m+k},   i, k = 1, ..., p        (7)

5. Numerical example

Table 1 represents a hypothetical example, where one of three candidates applying for a vacancy in a department already consisting of five employees must be chosen. Suppose the number of criteria which influence the employees' general skill level is 3.

Employees' rank scores on the skills evaluation criteria C1, C2, C3 and their global ranking G:

  Employee   C1   C2   C3   G
  E1          3    2    2   5
  E2          2    5    1   4
  E3          1    4    4   3
  E4          4    3    3   2
  E5          5    1    5   1

In order to get a non-empty weights' acceptability area, the decision-maker should put employee E1 between employees E4 and E5 in the global ranking.

Ranking after permutations:

  Employee   C1   C2   C3   G
  E2          2    5    1   5
  E3          1    4    4   4
  E4          4    3    3   3
  E1          3    2    2   2
  E5          5    1    5   1

Extreme points of the criterion weights' acceptability area:

  (0, 0.75, 0.25), (0.25, 0.6875, 0.0625), (0.25, 0.75, 0), (0, 1, 0)

Acceptability area center: (0.125, 0.7969, 0.0781)

Candidates' rankings:

  E6   3  1  1   1   (3*0.125 + 1*0.7969 + 1*0.0781 = 1.25)
  E7   2  2  3   2   (2*0.125 + 2*0.7969 + 3*0.0781 = 2.0781)
  E8   1  3  2   3   (1*0.125 + 3*0.7969 + 2*0.0781 = 2.6719)

Table 1. A hypothetical example

Under these circumstances, candidate E6 is the best one according to the decision-maker's priority system.
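As a quick check of the arithmetic in Table 1, the candidates' weighted sums can be recomputed directly (a small illustrative snippet using the center weights reported above):

```python
# Recomputing the candidates' weighted sums from Table 1 with the centre weights.
weights = [0.125, 0.7969, 0.0781]
candidates = {"E6": [3, 1, 1], "E7": [2, 2, 3], "E8": [1, 3, 2]}

scores = {name: round(sum(w * r for w, r in zip(weights, ranks)), 4)
          for name, ranks in candidates.items()}
print(scores)  # {'E6': 1.25, 'E7': 2.0781, 'E8': 2.6719} -> E6 takes rank 1
```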

6. Conclusions

An approach to using ordinal estimates as a data source for ranking candidates who apply for a vacancy has been described. Because ordinal estimates are less informative than cardinal ones, comparing the approach with other experience-based methods is not entirely appropriate. Nevertheless, the results of such comparisons are quite encouraging: the described method returns results which are compatible with those of the least squares, group data handling, minimal residual and linear extrapolation methods (Kadenko, 2009). Although no procedure for aggregating multiple rankings is immune to paradoxes and drawbacks, the problem remains topical. Moreover, in the light of the described situation, where comparing the candidates applying for a vacancy is a crucial task, the given approach, illustrating the solution of the inverse problem, proves to be a useful decision-making support instrument. The method can be used in cases where it is problematic or impossible


to obtain any estimates of employees and candidates other than ordinal ones, and the criterion weights are unknown. While building rankings, decision-makers and experts are usually asked much simpler questions than the ones posed to them when they have to make direct estimations or cardinal pairwise comparisons. That is why, even if all other estimates are unavailable, ordinal ones may still be obtainable. In the general case the criteria hierarchy or network for employees' assessment may have a complex multi-level structure (Kadenko, 2008b, 2009). At present, the mathematical algorithm for solving the problem involves a laborious enumeration of potential extreme points, so the method is targeted at relatively small numbers of comparable alternatives (in our case, employees and candidates). Even in its present form it can be integrated into both new and existing decision-making support software.

7. References

1) Abdi, H. (2007). Kendall rank correlation. In N.J. Salkind (Ed.), Encyclopedia of Measurement and Statistics (pp. 508-510). Thousand Oaks, CA: Sage.
2) Adamus, W. (2009). A New Method of Job Evaluation. Proceedings of the Tenth International Symposium on the Analytical Hierarchy Process (ISAHP 2009). Retrieved August 11, 2009, from http://isahp.org
3) Arrow, K. J. (1963). Social Choice and Individual Values, 2nd ed. Wiley, New York.
4) Atherton, E. (1998). Just the Job. OR Insight, 11, 6-13.
5) Kadenko, S. V. (2008a). Modified Method of Relative Weights' Determination Based on Ordinal Estimates (in Ukrainian). Data Recording, Storage and Processing, 10(1), 137-149.
6) Kadenko, S. V. (2008b). Determination of Parameters of Criteria of "Tree" Type Hierarchy on the Basis of Ordinal Estimates. Journal of Automation and Information Sciences, 40(8), 7-15.
7) Kadenko, S. V. (2009). Experience-based Decision Support Methods Using Ordinal Expert Estimates. Proceedings of the Tenth International Symposium on the Analytical Hierarchy Process (ISAHP 2009). Retrieved August 11, 2009, from http://isahp.org
8) Keeney, R. L., & Raiffa, H. (1993). Decisions with Multiple Objectives: Preferences and Value Tradeoffs. Cambridge University Press, New York.
9) Kemeny, J. G., & Snell, L. (1973). Mathematical Models in the Social Sciences. MIT Press, Cambridge, MA.
10) Saaty, T. L. (2008). The Analytic Hierarchy/Network Process. RACSAM, 102(2), 251-318.
11) Tavana, M. (2006). A priority assessment multi-criteria decision model for human spaceflight mission planning at NASA. JORS, 57, 1197-1215.
12) Totsenko, V. G. (2005). Method of Determination of Group Multi-criteria Ordinal Estimates with Account of Expert Competence. Journal of Automation and Information Sciences, 37(10), 19-23.


An application of the analytic hierarchy process to the foreign students’ integration analysis

Antonio Maturo, Rina Manuela Contini37

Department of Social Sciences, University of Chieti–Pescara, Italy

Abstract

We propose an application of the AHP to analyze the complex and multidimensional phenomenon of foreign students' school integration. A case study is presented in which, based on inquiries conducted through interviews with teachers, an order of priorities is established among n alternatives. The alternatives are ranked with Saaty's AHP procedure. Starting from the definition of the general objective, foreign students' school integration, the general objective has been divided into two sub-objectives, interpersonal communication and the degree of scholastic profit, and a weight has been determined for each sub-objective with respect to the general objective. Furthermore, with Saaty's method, fuzzy weights of the variables that represent the phenomenon have been determined. The synthesis of the assessments made by the teachers was carried out by considering, for each pair of variables, the geometric mean of the assessments and an index of multiplicative uncertainty of the assessments, for which a suitable formula has been determined. The group evaluations have therefore been represented by triangular fuzzy numbers.

1. “Foreign students” integration

The aim of this study is to suggest the use of the AHP and of fuzzy logic to study complex social problems, in order to define new models of sociological analysis. An important example of a complex social phenomenon is the scholastic integration of children and adolescents with a migration background. This issue is particularly relevant to Western societies, which are undergoing significant changes due to international migrations and globalisation processes (Bauman, 1998; Besozzi, 2001; Sassen, 2007). In the international debate, different interpretative paradigms of integration processes are recognized, and these underlie different applications in the political sphere: the assimilationist paradigm (Park and Burgess, 1924); multiculturalist theories; neo-assimilation theory (Alba and Nee, 1997; Brubaker, 2001); the segmented assimilation theory (Portes and Rumbaut, 2001). The school is at the core of the contemporary debate on integration and on the quality of future coexistence in a multiethnic society. The educational challenge is placed within the larger objective of social cohesion: failure in the scholastic performance of new generations of immigrants can generate difficulties in integration and impede the development of social bonds and feelings of belonging (Portes and Hao, 2002). The formative and socialization processes at school provide the basis for accessing public space and social resources (Nesse network, 2008). The school is the symbolic place where the evolution of complex societies is vividly represented; to study the school means to study society. Research carried out on the problem of the scholastic integration of foreigners highlights that the concept of scholastic integration is complex, multidimensional, gradual, and contextual. In fact, the

37 E-mail: [email protected], [email protected]


integration process has to do both with the acquisition of abilities, competences and knowledge, and with the relationships between adults and peers at school and outside school.

2. Analysis of the “foreign students’ scholastic integration” problem with the AHP method and fuzzy numbers

Our aim is to analyse the complex phenomenon of scholastic integration. The mathematical tools are the AHP procedure and fuzzy arithmetic. Our purpose is to elaborate methods that allow us to represent the complexity of the phenomenon and to highlight aspects that are not detectable at an intuitive and qualitative level. The complex phenomenon of foreign students' scholastic integration is analyzed with the Analytic Hierarchy Process. The AHP provides a method of analysis useful in the economic and social sciences, in particular in situations in which it is necessary to decide the order of priority in which to place n alternatives that satisfy prescribed choice criteria to different degrees. In our research we start from the idea that the general objective (GO) of the analysis is "the foreign students' scholastic integration" and that, referring to the scientific literature on the topic, it is divided into two particular sub-objectives:

A1 = interpersonal communication; A2 = the degree of scholastic profit.

Moreover, a set of variables has been determined which allows us to give an implicit definition of the general objective. The variables considered are:

B1 = interaction and relations in class with peers; B2 = relations in the extra-curricular time; B3 = interaction and relations in class with teachers; B4 = relations with the belonging environment; B5 = linguistic-expressive abilities; B6 = logic-mathematical abilities; B7 = manual skills; B8 = group activities; B9 = sport skills.

The purpose of our research is to determine the critical variables; in other words, the variables that have the greatest effect on the degree of scholastic integration. A group of twelve motivated teachers, T(r), r = 1, 2, ..., 12, supported the initiative by expressing their own ideas. The teachers involved are very expert and interested in the problem, and can therefore be considered privileged observers. In the first phase we assign, with the AHP procedure, a weight to each sub-objective with respect to the general objective and a weight to each variable with respect to every sub-objective. In the second phase we calculate the weight of every variable with respect to the general objective, called the scores of the variables, and then identify the critical variables, i.e., the most important variables for the phenomenon studied. In the third phase we introduce a measure of uncertainty and replace the weights with fuzzy weights, with the aim of controlling the uncertainty arising from the different opinions of the teachers interviewed. A questionnaire was given to a group of 12 teachers in a primary school in Pescara, where the problem of foreign students' scholastic integration is particularly felt. Every teacher T(r), r = 1, 2, ..., 12, is requested to compile three matrices of pairwise comparisons:


1. the matrix A(r) of the sub-objectives A1 and A2;
2. the matrix M(r) of the variables B1, B2, ..., B9, with respect to sub-objective A1;
3. the matrix N(r) of the variables B1, B2, ..., B9, with respect to sub-objective A2.

The approach is the one considered in the Analytic Hierarchy Process (Saaty, 1980; Saaty and Peniwati, 2007). Let X1, X2, ..., Xm be the objects to compare (e.g., sub-objectives or variables). Every teacher T who considers Xi preferred or indifferent to Xj is requested to estimate the importance of Xi with respect to Xj using one of the following judgments: indifference, weak preference, preference, strong preference, absolute preference. The judgment chosen is said to be the linguistic value associated with the pair (Xi, Xj). The linguistic values are then expressed as numerical values following Saaty's fundamental scale:

indifference = 1, weak preference = 3, preference = 5, strong preference = 7, absolute preference = 9.

The scores 2, 4, 6, 8 are used for intermediate valuations. If the object Xi has one of the above numbers assigned to it when compared with object Xj, then Xj has the reciprocal value when compared with Xi. A pairwise comparison matrix A = (a_{ij}) with m rows and m columns is then associated with the m-tuple (X1, X2, ..., Xm), where a_{ij} is the number assigned to Xi when compared with Xj. Let Y(r) = (y_{ij}^{(r)}) denote the generic element of the set {A(r), M(r), N(r)}. The synthesis of the teachers' opinions is made by considering, for each pair (i, j) of indices, the geometric mean G_{ij}^Y of the opinions of the teachers, defined by the formula:

  G_{ij}^Y = (y_{ij}^{(1)} y_{ij}^{(2)} ... y_{ij}^{(12)})^{1/12}.        (1)

The choice of G_{ij}^Y is justified by the following interesting properties:

(G1) min_r y_{ij}^{(r)} \le G_{ij}^Y \le max_r y_{ij}^{(r)}, hence 1/9 \le G_{ij}^Y \le 9;
(G2) G_{ij}^Y = 1 / G_{ji}^Y.

Then the matrix G^Y = (G_{ij}^Y) has the same internality and reciprocity properties as the matrices Y(r) = (y_{ij}^{(r)}), and the Saaty procedure can be applied to G^Y. The weights associated with the elements represented by the rows of G^Y are the components of the normalized positive eigenvector E^Y associated with the principal eigenvalue \lambda^Y of the matrix G^Y. Moreover, if n is the number of objects submitted to pairwise comparisons, Saaty suggests considering the following consistency index:

  \mu^Y = (\lambda^Y - n) / (n - 1),        (2)

and assumes the judgments are strongly coherent if the index \mu^Y is not greater than 0.1. As to the matrix G^A, of order 2, the consistency is evident, and we have obtained: \lambda^A = 2, \mu^A = 0, E^A = (0.874, 0.126). For the matrices G^M and G^N we have obtained: \lambda^M = 10.344, \mu^M = 0.168, \lambda^N = 10.306, \mu^N = 0.163,


with eigenvectors

  E^M = (0.303, 0.211, 0.164, 0.042, 0.090, 0.058, 0.039, 0.057, 0.031),
  E^N = (0.302, 0.107, 0.165, 0.055, 0.124, 0.104, 0.054, 0.059, 0.025).

The lack of strong coherence is explained by the differences among the opinions of the 12 teachers. Let E^{M,N} be the matrix having E^M and E^N as rows. The matrix product W = E^A E^{M,N} is the vector of the scores of the variables. We obtain:

  W = (0.30287, 0.19790, 0.16413, 0.04364, 0.09428, 0.06380, 0.04089, 0.05725, 0.03024).

From the previous analysis we can underline that, in order to obtain foreign students' scholastic integration, in the opinion of the group of teachers the most important variables are, in order of preference: B1 = interaction and relations in class with peers; B2 = relations in the extra-curricular time; B3 = interaction and relations in class with teachers. The sum of the corresponding scores is 0.6649. The fourth variable in order of importance is B5 = linguistic-expressive abilities. The global score of {B1, B2, B3, B5} is 0.6649 + 0.094284 = 0.759184. The remaining variables play a marginal role: B4 = relations with the belonging environment; B6 = logic-mathematical abilities; B7 = manual skills; B8 = group activities; B9 = sport skills.

In order to control the uncertainty due to the different opinions of the teachers, we propose to replace the weights with fuzzy weights, obtained by introducing, for every Y = (y_{ij}) belonging to the set {A, M, N}, a matrix U^Y = (U_{ij}^Y), where the number U_{ij}^Y, called the index of multiplicative uncertainty of the opinions, is defined by the formula:

  U_{ij}^Y = exp( (1/12) \sum_{r=1}^{12} [ ln( y_{ij}^{(r)} / G_{ij}^Y ) ]^2 ).        (3)

The choice of U_{ij}^Y is justified by the following interesting properties:

(U1) U_{ij}^Y ≥ 1, and U_{ij}^Y = 1 iff y_{ij}^{(1)} = y_{ij}^{(2)} = ... = y_{ij}^{(12)};
(U2) U_{ij}^Y = U_{ji}^Y.

These properties, and the fact that U_{ij}^Y is a pure number, allow us to assume as the group opinion of the teachers on the pair (Xi, Xj) of objects the triangular fuzzy number

  F_{ij}^Y = (G_{ij}^Y / U_{ij}^Y,  G_{ij}^Y,  G_{ij}^Y U_{ij}^Y),

with core the geometric mean G_{ij}^Y and endpoints obtained by multiplying the core by the reciprocal positive numbers 1/U_{ij}^Y and U_{ij}^Y, respectively. From the matrix U^Y = (U_{ij}^Y) a vector V^Y = (V_i^Y) is calculated as the geometric mean of the columns of U^Y.


For every object Xi, we define the fuzzy number associated with Xi as the triangular fuzzy number

  T_i^Y = (E_i^Y / V_i^Y,  E_i^Y,  E_i^Y V_i^Y),        (4)

where E_i^Y is the i-th component of the eigenvector E^Y. We have obtained:

  V^A = (1.027, 1.027),
  V^M = (1.193, 1.289, 1.248, 1.357, 1.323, 1.308, 1.308, 1.312, 1.250),
  V^N = (1.244, 1.392, 1.259, 1.382, 1.340, 1.322, 1.355, 1.329, 1.240).

Then the fuzzy numbers associated with A1 and A2 are:

  T_1^A = (0.874/1.027, 0.874, 0.874*1.027) = (0.851, 0.874, 0.898);
  T_2^A = (0.126/1.027, 0.126, 0.126*1.027) = (0.123, 0.126, 0.129),

respectively. It is interesting to note that the maximum left and right spreads are 0.023 and 0.024, respectively. This can be interpreted as an acceptable level of agreement among the interviewed teachers about the judgments on interpersonal communication and the degree of scholastic profit. The fuzzy numbers associated with B1, B2, ..., B9 with respect to the sub-objective A1 are, respectively, the triangular fuzzy numbers having as cores the components of E^M and as left and right endpoints:

  E^M / V^M = (0.2540, 0.1637, 0.1314, 0.0310, 0.0680, 0.0443, 0.0298, 0.0434, 0.0248),
  E^M * V^M = (0.3615, 0.2720, 0.2047, 0.0570, 0.1191, 0.0759, 0.0510, 0.07478, 0.03875).

The fuzzy numbers associated with B1, B2, ..., B9 with respect to the sub-objective A2 are, respectively, the triangular fuzzy numbers having as cores the components of E^N and as left and right endpoints:

  E^N / V^N = (0.2428, 0.0769, 0.1311, 0.0398, 0.0925, 0.0787, 0.0399, 0.0444, 0.0202),
  E^N * V^N = (0.37569, 0.1489, 0.2077, 0.0760, 0.16616, 0.1375, 0.0732, 0.0784, 0.031).

Finally, we obtain the fuzzy scores s(B_I) of the variables B_I, I = 1, 2, ..., 9, by the formula:

  s(B_I) = T_1^A T_I^M + T_2^A T_I^N,        (5)

where the addition is the Zadeh addition of fuzzy numbers, based on the Extension Principle (Zadeh, 1975), while the multiplication is the triangular approximation of the Zadeh multiplication, i.e., we replace the Zadeh product with the triangular fuzzy number having the same core and the same support.
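The aggregation pipeline of this section can be sketched compactly. The fragment below is a minimal illustration with synthetic judgement matrices (the function name and the use of NumPy are ours, not the authors'); it reproduces the geometric-mean synthesis (1), the eigenvector and consistency index (2), the multiplicative uncertainty index (3) as reconstructed above, and the triangular fuzzy weights (4).

```python
# Sketch of the geometric-mean / eigenvector / fuzzy-weight aggregation.
import numpy as np

def aggregate(judgements):
    """judgements: array of shape (teachers, m, m) of reciprocal Saaty-scale comparisons."""
    G = np.exp(np.mean(np.log(judgements), axis=0))                    # geometric means G_ij
    U = np.exp(np.mean(np.log(judgements / G) ** 2, axis=0))           # uncertainty indices U_ij
    eigvals, eigvecs = np.linalg.eig(G)
    k = np.argmax(eigvals.real)
    lam = eigvals.real[k]
    E = np.abs(eigvecs[:, k].real)
    E = E / E.sum()                                                    # normalised eigenvector
    m = G.shape[0]
    mu = (lam - m) / (m - 1) if m > 1 else 0.0                         # consistency index
    V = np.exp(np.mean(np.log(U), axis=0))                             # geometric mean of columns of U
    fuzzy = [(E[i] / V[i], E[i], E[i] * V[i]) for i in range(m)]       # triangular fuzzy weights
    return E, mu, fuzzy

# Two teachers comparing the sub-objectives A1 and A2 (illustrative data only).
J = np.array([[[1, 7], [1/7, 1]],
              [[1, 6], [1/6, 1]]])
E, mu, fuzzy = aggregate(J)
print(E, mu, fuzzy)
```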

3. References

1. Alba, R., & Nee, V. (1997). Rethinking assimilation theory for a new era of immigration. International Migration Review, 31(4), 826-874.
2. Bauman, Z. (1998). Globalization: The Human Consequences. Cambridge-Oxford: Polity Press, Blackwell.
3. Besozzi, E. (2001). L'incontro tra culture e la possibile convivenza. Studi di sociologia, XXXVIII(1), 64-81.
4. Bourdieu, P. (1980). Le capital social. Notes provisoires. Actes de la Recherche en Sciences Sociales, 31, 2-3.
5. Brubaker, R. (2001). The Return of Assimilation? Changing Perspectives on Immigration and Its Sequels in France, Germany and the United States. Ethnic and Racial Studies, 24(4), 531-548.
6. Coleman, J. S. (1988). Social Capital in the Creation of Human Capital. American Journal of Sociology, 94, 95-121.
7. Council of Europe (2008). White Paper on Intercultural Dialogue: Living Together as Equals in Dignity. Strasbourg: www.coe.int/dialogue.
8. Farley, R., & Alba, R. (2002). The new second generation in the United States. International Migration Review, 36(3), 669-701.
9. Klir, G., & Yuan, B. (1995). Fuzzy Sets and Fuzzy Logic: Theory and Applications. New Jersey: Prentice Hall.
10. Nesse network (2008). Education and Migration: Strategies for integrating migrant children in European schools and societies. Brussels: Education & Culture DG. www.nesse.fr.
11. Park, R. E., & Burgess, E. W. (1924). Introduction to the Science of Sociology. Chicago: The University of Chicago Press.
12. Portes, A., & Hao, L. (2002). The price of uniformity: Language, family and personality adjustment in the immigrant second generation. Ethnic and Racial Studies, 25, 889-912.
13. Portes, A., & Rumbaut, R. G. (2001). Legacies: The Story of the Immigrant Second Generation. Berkeley-New York: University of California Press-Russell Sage Foundation.
14. Rumbaut, R. G. (1997). Assimilation and its discontents: Between rhetoric and reality. International Migration Review, XXXI(4), 923-960.
15. Saaty, T. L. (1980). The Analytic Hierarchy Process. New York: McGraw-Hill.
16. Saaty, T. L., & Peniwati, K. (2007). Group Decision Making: Drawing Out and Reconciling Differences. Pittsburgh, PA: RWS Publications.
17. Sassen, S. (2007). A Sociology of Globalization. New York: Norton & Company.
18. Zadeh, L. (1975). The Concept of a Linguistic Variable and its Application to Approximate Reasoning I and II. Information Sciences, 8, 199-249; 301-357.


Solving the VRP with probabilistic algorithms supported by constraint programming

Daniel Riera38, Angel A. Juan38

Studies of Computer Science, Multimedia and Telecommunication Open University of Catalonia, Barcelona, Spain

Javier Faulin39

Department of Statistics and OR, Public University of Navarre, Pamplona, Spain

Abstract

Many human problems involve the exploration of a large number of feasible and infeasible candidate solutions. They have traditionally been tackled by constructing problem-specific strategies, using techniques arising from several areas. These techniques can be complete or incomplete, have a mathematical or a more computational background, and be single techniques, greedy, constructive, etc. Most of them have proven to work well when they are tailor-made solutions. As a consequence, the solution for one problem is usually of little use for a different one, or even for the same problem with new or different constraints. In this position paper we claim that a hybrid methodology combining probabilistic algorithms with a complete technique might be useful in order to separate the optimisation engine from the validation tool. This makes it possible to have a general optimisation tool, requiring only minimal fine tuning, which includes neither the problem model nor the validation work.

1. Introduction

Throughout history, people have dealt with problems where the decisions to be taken imply certain benefits and costs. Several of these problems, due either to their size or to their complexity, are not easily tractable. One of the most studied is the Vehicle Routing Problem (VRP) (Golden et al., 2008; Laporte, 2007; Toth and Vigo, 2002). Optimisation is a field in which a great deal of work has been done in the last few decades. The problems introduced in the previous paragraph have been faced by means of different techniques, each with its advantages and weaknesses: complete or incomplete, generic or tailor-made, based on mathematical foundations, algorithms, graphical models, etc. Most of them have been successful for certain problems but have fallen short for others. Some successful current optimisation algorithms are based on the generation of solutions combining statistical distributions and simulation features. In short, a new solution is built by evolving a previous one in the following way: possible improvements are analysed, probabilities are distributed among these alternative improvements, and a simulation is run. The result of the simulation becomes the new solution if it is feasible. While these methods work very well with a small tuning effort, changes in the model – new constraints, changes of certain conditions, etc. – may penalise the feasibility-checking phase. In order to deal with this, and applying it to the already mentioned VRP, we propose to use Constraint Programming (CP) to build the model. CP has proved to be a powerful and flexible tool for

38 Email: {drierat,ajuanp}@uoc.edu   39 Email: [email protected]


modelling all kinds of constraints. Furthermore, its inclusion frees (or at least lightens) the optimisation part from the model construction and adaptation effort.

2. Search: Probabilistic algorithms

Although the literature provides a very wide range of approaches to combinatorial problems, when facing real scenarios with huge numbers of clients, entities, or whatever is to be served/optimised, incomplete algorithms based on the evolution of previous complete or incomplete solutions seem to perform better than complete techniques. On the one hand, there are methods which build a single solution by adding parts of it incrementally. These often make choices based on certain probabilities, which normally favour greedy (in terms of the cost function) selections. On the other hand, there are methodologies which create a set of solutions and generate new ones by crossing features of these. This evolution also depends on certain probabilities. Although these techniques are not complete, the gap between the solution they (quickly) find and the best known or optimal one is usually very tight. Hence, if the aim is not to get the optimal solution but a good solution in a limited time, these techniques can be extremely useful.
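As a minimal illustration of such probability-biased choices (our own toy example, not the authors' implementation), a candidate move can be drawn from a cost-sorted list with a geometric-like bias, so that the greedy option is the most likely but worse options keep a non-zero chance:

```python
import random

def biased_choice(moves, p=0.3):
    """Pick from a cost-sorted list of candidate moves; position k gets a
    geometric-style weight, so earlier (cheaper) moves are favoured."""
    weights = [p * (1 - p) ** k for k in range(len(moves))]
    return random.choices(moves, weights=weights, k=1)[0]

# Candidate moves sorted by the cost of the solution they would produce (toy data).
moves = [("merge routes 3+7", 812.4), ("relocate customer 12", 818.9), ("swap 4 and 9", 825.0)]
print(biased_choice(moves))
```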

3. Modelling and validation: Constraint programming

Constraints arise in most areas of human endeavour. A constraint is simply a logical relation among several unknowns (or variables), each taking a value in a given domain. The constraint thus restricts the possible values that variables can take. Constraint Programming (CP) is the study of computational systems based on constraints. The main idea is to solve problems by stating constraints (requirements) about the problem area and, consequently, finding a solution satisfying all the constraints. CP combines ideas from a number of fields including Artificial Intelligence, Combinatorial Algorithms, Computational Logic, Discrete Mathematics, Neural Networks, Operations Research, Programming Languages and Symbolic Computation. The problems solved using CP are called Constraint Satisfaction Problems (CSP) (Tsang, 1993). A CSP is defined as:

- a set of variables X = {x_1, x_2, ..., x_n},
- for each variable x_i, a finite set D_i of possible values (its domain), and
- a set of constraints restricting the values that the variables can simultaneously take.

CP is based on exhaustive search supported by consistency algorithms which allow paths in the search tree to be discarded in advance. This paradigm takes advantage of the propagation of decisions in order to avoid exploring infeasible solutions. CP can be used as a validation tool since it can be called with a complete solution; an answer is returned immediately, stating whether or not the solution is consistent with the model.
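A tiny sketch of a CSP used purely as a validator may help fix ideas (the class and names below are illustrative and are not the CP library used by the authors): a complete candidate assignment is checked against the domains and the constraints, and a yes/no answer comes back immediately.

```python
from typing import Callable, Dict, List

Assignment = Dict[str, int]

class SimpleCSP:
    """A toy CSP: variables with finite domains plus constraints over complete assignments."""
    def __init__(self, domains: Dict[str, List[int]],
                 constraints: List[Callable[[Assignment], bool]]):
        self.domains = domains
        self.constraints = constraints

    def is_consistent(self, solution: Assignment) -> bool:
        """True iff every value lies in its domain and every constraint holds."""
        return (all(solution[v] in dom for v, dom in self.domains.items())
                and all(c(solution) for c in self.constraints))

# Toy capacity-style model: two route loads, each at most 10, jointly serving a demand of 14.
csp = SimpleCSP(
    domains={"load1": list(range(11)), "load2": list(range(11))},
    constraints=[lambda s: s["load1"] + s["load2"] == 14],
)
print(csp.is_consistent({"load1": 8, "load2": 6}))    # True
print(csp.is_consistent({"load1": 11, "load2": 3}))   # False: 11 is outside the domain
```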

4. Methodology - General structure

As said before, the main idea we are working with is the separation of the search engine from the model itself. We must state that the former is not totally independent of the model since, as seen in the algorithm below, the first and second steps require a constructive heuristic (or perhaps a metaheuristic) containing certain knowledge of the problem. This is not a major issue because even the simplest method can be used to generate the solutions.


Algorithm 1: General view of the optimisation methodology

1. Generation of an initial solution by means of a well-known constructive greedy technique

2. Evolution of the current solution by means of a probabilistic algorithm

3. Validation of the new solution by means of the CP model

a. If the solution is verified, it becomes the current solution

b. Else it is refused

4. If the final condition is met, then end; else go to step 2

In a very simple way, Algorithm 1 shows how the proposed methodology works. For the first step, when solving the VRP, the classical Clarke and Wright (C&W) algorithm (Clarke and Wright, 1964) can be used. This gives an initial solution which feeds the rest of the algorithm. For the second step we have chosen the main idea from the SR-GCWS-CS algorithm (Juan et al., 2010). In this case, following the basis of the C&W algorithm, probabilistic choices are included in the search in order to evolve the current solution without falling into local minima. Finally, the third step of the algorithm is implemented, for the VRP, by using a CP library we have constructed for this family of problems (Riera et al., 2009) in order to speed up the creation of new VRP models. We are currently carrying out the initial tests of the methodology and hope to obtain results very soon.
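A schematic version of Algorithm 1, with the problem-specific pieces left as callables, could look as follows (a hedged sketch: the function names and the toy feasibility test are ours, and a real implementation would plug in the randomised C&W step and the CP model instead):

```python
import random

def hybrid_search(initial, perturb, cost, is_feasible, max_iter=1000):
    """Schematic Algorithm 1: evolve the current solution with a probabilistic
    operator and accept a candidate only if the validator (standing in for the
    CP model) declares it feasible."""
    current = best = initial
    for _ in range(max_iter):
        candidate = perturb(current)           # step 2: probabilistic evolution
        if not is_feasible(candidate):         # step 3: validation against the model
            continue                           # step 3b: refused
        current = candidate                    # step 3a: accepted
        if cost(candidate) < cost(best):
            best = candidate
    return best

# Toy usage: a "solution" is a visiting order of customers; feasibility simply
# demands that every customer appears exactly once.
customers = list(range(8))
best = hybrid_search(
    customers,
    perturb=lambda s: random.sample(s, len(s)),
    cost=lambda s: sum(abs(a - b) for a, b in zip(s, s[1:])),
    is_feasible=lambda s: sorted(s) == list(range(8)),
)
print(best)
```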

5. Conclusions and current/future work

For the solution of combinatorial problems we have proposed separating the modelling part from the search engine as much as possible. Work done so far shows that tailor-made solutions fall short when there are changes in the problem to solve, and hence new solutions built from scratch are needed. On the other hand, metaheuristics work well for any problem, but parameter tuning is not very intuitive and/or modelling might be quite complicated. In this paper we advocate the separation of modelling and search in order to free metaheuristics (or other methods) as much as possible from the modelling effort. For this we propose CP for the modelling part, because it is a straightforward tool for modelling and validation, and probabilistic algorithms for the search part. These are able to evolve solutions by means of their statistical/probabilistic features in order to avoid local minima while moving towards the optimum. Thus, we consider that the use of CP gives the methodology the following advantages:

- A very powerful modelling tool: it accepts a wide range of constraints (linear, non-linear, suspensions, etc.)
- Models are easily modified/extended
- Fast validation performance

The main drawback we expect to meet is the following: since we have split the search engine and the validation tool, a communication penalty between them has been introduced. In the current tests we are working to minimise this overhead in order to make both tools work as a single one (as far as possible).


Although we are working with the VRP family, the proposal can be applied to any problem. In this case, the main points are: the choice of a simple methodology to generate an initial solution, the modelling of the problem by means of CP and the incorporation of a probabilistic behaviour into a methodology to generate solutions for that problem (it can be the same as the one used for the initial solution). We plan to test the proposed methodology with problems coming from scheduling, planning, timetabling, etc. in order to generalise it as much as possible.

6. Acknowledgements

The research of this paper has been partially sponsored by the Internet Interdisciplinary Institute (IN3) and the Knowledge Community on Hybrid Algorithms for solving Real-life rOuting, Scheduling and Availability problems (HAROSA).

7. References

1. Clarke G and Wright J (1964). Scheduling of Vehicles from a Central Depot to a Number of Delivery Points. Operations Research, 12, 568-581.

2. Golden B, Raghavan S and Wasil EE (eds.) (2008). The Vehicle Routing Problem: Latest Advances and New Challenges. Springer.

3. Juan A, Faulin J, Jorba J, Riera D, Masip D and Barrios B (2010). On the Use of Monte Carlo Simulation, Cache and Splitting Techniques to Improve the Clarke and Wright Savings Heuristics, J Opl Res Soc, doi:10.1057/jors.2010.29.

4. Laporte G (2007). What you should know about the Vehicle Routing Problem. Naval Research Logistics, 54: 811-819.

5. Riera D, Juan A, Guimarans D, Pagans E (2009) A Constraint Programming-Based Library for the Vehicle Routing Problem. In: Proceedings of the 21st European Modeling and Simulation Symposium (EMSS 2009), pp. 261-266. Puerto Cruz, Tenerife, Spain.

6. Toth P and Vigo D (2002). The Vehicle Routing Problem. SIAM Monographs on Discrete Mathematics and Applications. SIAM.

7. Tsang, E (1993). Foundations of Constraint Satisfaction. Academic Press.


A Model of support for students in mathematics

Jon Warwick

Faculty of Business London South Bank University

London SE1 0AA

1. Introduction

Higher Education (HE) in the United Kingdom (UK) is now entering a period of great economic uncertainty and the Government is likely, over the next five years, to be demanding even greater educational efficiency so that standards are maintained while central funding is reduced. Many UK HE institutions are adopting an outcomes driven ethos so that measurement against key performance indicators (relating to, for example, retention, progression, achievement, and employability) is becoming commonplace as institutions seek to explore which courses and/or departments remain (in the eyes of senior management) fit-for-purpose. Among those HE institutions that pursue a widening participation agenda, the retention of students has become the subject of much concern as this impacts not only on current and likely future funding, but also on university league table positions. One area of great concern is the level of mathematical competence shown by incoming students and not just among the traditional STEM subjects (Smith, 2004). Students studying, for example, nursing and other health related subjects must demonstrate the quantitative knowledge and skills necessary for dosage calculations in what may be truly matters of life or death. Unfortunately, admissions tutors have noticed enormous variability in the mathematical abilities of students even though they will all have met the minimum entry qualifications for their chosen course. This then presents a challenge as to how these students can be taught and supported during their first year of study so that such mathematical weaknesses can be identified as early as possible and then remedied. This paper reports on the outcomes of a study undertaken over a three year period within the Faculty of Business at London South Bank University. The study explored how students undertaking a range of computing courses could best be supported as they worked on an underpinning mathematics unit taken as part of their first year of study. Although the students were specific to a particular domain of study, it is believed that the outcomes of the research will have an impact on mathematics support across a range of disciplines.

2. The Study

One common way of dealing with variable knowledge and skills sets among students is to undertake a series of diagnostic tests so that gaps in knowledge can be identified and then filled. Unfortunately, studies have shown that mathematical testing (even if purely diagnostic) can have the effect of raising levels of mathematical anxiety within students and it was felt that this sort of approach would not necessarily be conducive to a good first year student experience and may even have the effect of worsening student engagement and retention. As a consequence of these arguments, this research took a rather different approach and was concerned more with exploring the mathematical and other educational experiences of the students prior to university entry, their expectations of study, anxieties towards mathematics, and other qualitative influences that may contribute to a lack of engagement with the mathematics unit and hence poor levels of achievement. It was felt that by exploring these issues, it may be possible to collect sufficient evidence on each student so as to indicate how a student might be best supported mathematically without introducing a regime of diagnostic testing and evaluation.


The project was divided into three phases and these are now briefly described. Phase 1 As reported in Warwick (2010a), London South Bank University has, in common with many other similar institutions, traditionally provided resources for a centralised mathematics support centre. The centre provides assistance to students (often on a one-to-one basis) but on a drop-in and voluntary basis, i.e. the student has to make a request for assistance. In common with other such centres (Mac an Bhaird et al, 2009) it was found that computing and business IT students were not making consistent use of the centre even when directed to it and many students who we had considered at risk of failure in the mathematics unit were not seeking additional help at all. As a response to this, in 2006 the mathematics unit team were able to bring the mathematics support ‘in-house’ so that faculty resources were used to provide compulsory support sessions for all students as part of their course timetable. Thus the idea of suggesting that students attend centralised university support sessions that were voluntary was removed and instead students were timetabled to attend the support classes until they were able to demonstrate that support was no longer required. This they would show by passing a multiple choice quiz (not contributing to the summative assessment of the unit) that covered a number of fundamental mathematical areas (known colloquially as the mathematics ‘driving test’). There was no compulsion for students to attempt this quiz. If a student would prefer to continue attending the support sessions and never attempt the driving test then this was permitted. Additionally, we were keen to find ways of working with students so as to reduce their mathematical anxiety and enhance their mathematical self-efficacy - a type of personal cognition defined as “people’s judgements of their capabilities to organise and execute courses of action required to attain designated types of performance” (Bandura, 1986). Specifically, self-efficacy judgements are made on the basis of four main sources of evidence. Firstly, previous success or failure in mathematical performance so that repeated success in mathematical assessments for example will strengthen self-efficacy beliefs whereas repeated failure will weaken them. Secondly, comparison of oneself with peers, colleagues, classmates etc. Favourable comparison with others strengthens self-efficacy beliefs. Thirdly, comments made by others (usually in a position of authority such as a teacher or parent) regarding the person’s ability to complete a task with self-efficacy being enhanced as others express confidence in ability. Finally, self-efficacy relates to inner feelings of anxiety, worry, tension etc. that might be provoked by having to undertake mathematical tasks. Lowering levels of anxiety and stress, for example, would tend to enhance self-efficacy. Enhancing self-efficacy can motivate students to engage meaningfully with their studies. Phase 2 With a support process in place, phase two of the research project was concerned with developing a qualitative model of student learning that illustrated the positive and negative feedback structures that can either build enthusiasm for study, or lead to disengagement of the student from their studies. A sequence of detailed interviews with students produced quite a large model, the core of which is shown in Figure 1.


Figure 1. Core Feedback Process with Support (Warwick, 2010a). [The figure is a causal loop diagram linking Mathematical Knowledge and Understanding, Rate of Knowledge Acquisition, Level of Engagement with Learning, Assessment & Classroom Experience, Mathematical Self-efficacy, Actual Base Knowledge, Required Basic Knowledge, the Knowledge Gap, Additional Support Time and Support Session Experience.]

Similar feedback processes are, of course, occurring in all units that a student is studying, and so feedback processes that occur in individual units are also linked together. For example, additional time spent studying in one unit necessarily reduces the time spent on other units, with consequent changes to engagement etc. The full model has been described in Warwick (2010a). The purpose of the model (in its entirety) was two-fold. Firstly, it helped us to understand from the student perspective the cause-and-effect relationships that shaped their engagement with all their taught units, and to identify known system archetypes (e.g. accidental adversaries, drifting goals etc.). Secondly, it was suggested that we could use the model to work with new students to help them understand the possible consequences of poor engagement, lack of time spent studying etc.

Phase 3: In this final stage of the project we wanted to address a number of additional questions, among which were:

i) What can we say about the initial expectations of students when they begin the mathematics unit? This would help us to organise introductory material in an appropriate way;

ii) What do students tend to believe about their own mathematical abilities? More specifically,

do the students believe that any mathematics difficulties they may have can be overcome, given that a fixed mind-set of 'I'm no good at mathematics' is likely to make attempts at support and development difficult?

In order to address these questions, a cluster sample of students were asked to complete a questionnaire at the beginning of the mathematics unit. This Student Expectations survey required students to indicate their level of agreement with a series of statements using a 5-point Likert scale (1 indicating no agreement and 5 indicating complete agreement). The statements were designed to assess levels of agreement with a number of statements of the following type:

i) I expect to pass this mathematics unit
ii) I have always done well in mathematics

iii) I was expecting to study mathematics as part of my course



iv) I do not enjoy studying mathematics
v) I can change the type of person that I am with regard to studying
vi) I can change my ability to succeed in mathematics
vii) I am generally confident in carrying out mathematical calculations correctly
viii) I can judge whether my answers to calculations seem correct or not
ix) I am able to carry out (specific) mathematical calculations
x) I am good at mental arithmetic
xi) I generally need to work with a calculator when studying mathematics

Statements relating to item (ix) make specific reference to basic topics covered in the support sessions and driving tests. Relating the results of this questionnaire to an individual student's driving test scores (students attempt the tests as and when they wish) allowed the identification of those questionnaire statements which were significantly correlated with attainment (Warwick, 2010b). This allowed the development of an Expectations Questionnaire which contained a reduced set of questions, primarily those which significantly correlated with subsequent student performance in the driving tests. It was encouraging to note that students did, in the main, believe that their mathematics could be improved even if prior experience had been bad, and the results confirmed that the most anxiety-inducing elements of mathematics study invariably revolved around the assessment process. The results of this third stage of the research seem to indicate that we could work with students in particular ways to help support them in their mathematical studies.
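By way of illustration, the item-selection step can be sketched as follows (hypothetical data and column names; Spearman correlation is used here simply as one ordinal-friendly choice, since the paper does not specify the statistical test):

```python
import pandas as pd
from scipy.stats import spearmanr

# Hypothetical Likert responses (1-5), one row per student, plus later driving-test scores.
responses = pd.DataFrame({
    "expect_to_pass":   [4, 5, 2, 3, 5, 1],
    "always_done_well": [3, 4, 1, 2, 5, 1],
    "enjoy_maths":      [2, 4, 2, 3, 4, 1],
})
test_score = pd.Series([65, 80, 40, 55, 90, 30])

retained = []
for item in responses.columns:
    rho, p_value = spearmanr(responses[item], test_score)
    if p_value < 0.05:                 # keep only items that correlate with attainment
        retained.append((item, round(rho, 2)))
print(retained)
```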

3. Discussion

The transition from school or college to higher education is one which a number of students find quite difficult, particularly in an age where widening participation is emphasised and where it is almost impossible to describe a 'typical' university student. This places an emphasis on universities to look carefully at their first year curricula to ensure that the units offered ease this transitional process and address the difficult educational problems posed by what is, for many universities, a mixed intake of students. These issues seem particularly important in subjects that require mathematics as an underpinning subject since mathematics seems to engender reactions from students which are extreme, and the weaker students often complain of having had poor educational experiences in mathematics prior to university entry. Certainly at LSBU the teaching of mathematics to computing students has raised a number of challenges, and at a time when the academic performance of lecturers is being harnessed to key performance indicators (including those relating to student retention and progression) these challenges must be overcome. The outcomes of this programme of research have been very encouraging since, in terms of key performance indicators, the mathematics unit performs as well as any other unit taken by the cohort of students and this represents a much improved position. Specifically, the research has produced a mathematics teaching approach that now centres around the following key areas:

Firstly, at the start of the mathematics unit all students are given the Expectations Questionnaire that will help teaching staff identify those students who will require particular attention during the support sessions, and also those who will be expected (and encouraged) to attempt the mathematics driving test as early as possible, since support is not likely to be required. The advantage of this type of questionnaire is that it is not a traditional form of mathematical diagnostic testing and so is not anxiety-inducing among the weaker students. Secondly, we use the large qualitative model developed through phase 2 of this research to illustrate to students how their learning behaviour across the different units can influence their learning outcomes. We try to use it to give the students a feel for the complexities of study in higher


education and discuss with them what system archetypes can mean to them in terms of their learning behaviour. It is a way of improving the students' pedagogic literacy and understanding. Thirdly, the lecture material is delivered to students in a broken lecture style based around the Kolb learning cycle (Kolb, 1984) so that students are exposed to a mixture of learning styles. Through working on their own on some tasks but also with peers and the tutor (when needed), students have sources of evidence available through which self-efficacy judgements may be made. Fourthly, the timetabled support sessions also have a mixture of tutor-led activities as well as one-to-one working with students. As the better able students work their way out of the sessions via the mathematics driving test, the time available to work with the weaker students increases. We have found that working with students during the support sessions encourages students to believe that they can improve their mathematics, reduces anxiety and enhances self-efficacy, all of which improves engagement with the unit and enhances student achievement as measured by unit pass rates and attendance registers.

4. Conclusion

Our experience of working with students in this way has been very positive. Although the research we have conducted has been restricted to working only with those students studying mathematics as part of their undergraduate computing curriculum, there is no reason to suppose that similar results may not be achieved with students studying mathematics as an underpinning subject in other academic disciplines as well. This might include courses in business, management, or even nursing and other health studies. The next phase of work within the university will be to see whether these results can be extended to other students working within the faculty on business and management courses.

5. References

1. Bandura A (1986). Social Foundations of Thought and Action: A Social Cognitive Theory. Prentice-Hall Inc: New Jersey.

2. Kolb D A (1984). Experiential Learning: Experience as the Source of Learning and Development. Prentice Hall: Englewood Cliffs NJ.

3. Mac an Bhaird C, Morgan T and O’Shea A (2009). The Impact of the Mathematics Support Centre on the Grades of First Year Students at the National University of Ireland Maynooth. Teaching Mathematics & its Applications 28: 117-122.

4. Pell G and Croft A (2008). Mathematics Support – Support for All? Teaching Mathematics & its Applications 27(4): 167-173.

5. Smith A (2004). Making Mathematics Count – The Report of Professor Adrian Smith’s Enquiry into Post-14 Mathematics Education. February 2004, The Stationery Office, 2/04 937764.

6. Starkings S (2004). Maths for Business and Computing Students. MSOR Connections 4(2): 22-24.

7. Warwick J (2010a). A Developing Qualitative Model Diagnosing the Learning Behaviour of Undergraduate Computing Students. PRIMUS 20(4): 275-298.

8. Warwick J (2010b). Exploring Student Expectations in Mathematics Learning and Support. Teaching Mathematics & its Applications 29: 14-24.


A Taxonomy of ratio scales

William C. Wedley and Eng Ung Choo

Faculty of Business Administration, Simon Fraser University

Burnaby, B. C. Canada

Abstract

Stevens classified scale types as nominal, ordinal, interval and ratio, but did not make finer distinctions within those scale types. Ratio scales, in particular, can be expressed in many different ways. A similarity transform (multiplication by a positive constant) changes the values but not the ratios. This paper looks at different expressions of ratio scales and comments on their use in Multiple Criteria Decision Making. It investigates the use of ratio scales in the Analytic Hierarchy/Network Processes and suggests mechanisms to achieve commensurate aggregation.

Keywords: Analytic Hierarchy/Network Process, Ratio scales, Commensurability

1. Introduction

Ratio scales have been defined as the estimation or discovery of the ratio between a magnitude of a continuous quantity and a unit magnitude of the same kind of property (Michell, 1999). A magnitude for an object can be thought of as a level of intensity starting at absolute zero and extending to some level higher on the scale. The fact that unit value can be any intensity level means that there are numerous ways to define the scale. The Analytic Hierarchy (AHP) and Network (ANP) Processes are popular decision methods that use ratio scales (Saaty, 2008). Since it was first introduced (Saaty, 1977), the prime criticism of AHP has been that it does not adhere to the interval scale tenets of classical utility theory. More correctly, however, the origin of AHP/ANP and its normative foundations are in the theory of psychological measurement that evolved out of the pioneering work of Stevens (1946). Understanding this distinction, Bernasconi, Choirat & Seri (2010) have updated AHP with the newer advancements that have occurred in psychological measurement. While they bridge the gap between AHP and the theory of measurement, they do not emphasize that one of the main distinctions of Saaty's approach is the use of redundant ratio questioning and the multifaceted mathematics associated with such redundancy. Stevens (1946) classified scale types as nominal, ordinal, interval and ratio, but did not make finer distinctions within those scale types. This paper analyzes ratio scales and classifies them into various sub-types according to the clarity of the unit that is used. It then looks at the sub-types used by AHP/ANP and makes observations on methods of aggregation that respect commensurability.

2. Types of Ratio Scales

Because the unit of measurement is arbitrary, ratio scales can be expressed in many different ways. Herein, six types of ratio scale are identified. In order to understand them, it is useful to refer to Figure 1 which depicts each type for the weight of 4 different objects. Regular ratio scales – These are the most well recognized scales. They have an accepted and

well-established unit of measure that is fixed for all objects. In Figure 1, an object weighing 1 kilogram is the unit. Object 3 has that weight.


Defined ratio scales – These scales have the magnitude of a specified object as the unit of measure. With n objects being considered, there can be as many defined ratio scales as there are objects. Object 2 is the defined unit of measure in Figure 1.

Relative ratio scales – Multiplication by a positive constant (b>0, b≠1) changes the values of a

ratio scale and the object that represents the unit of measure. While ratios between object values remain the same, it is possible for no specific object to have unit value. A relative ratio scale is such a scale – it displays relative ratios, but not a defined unit of measure. In Figure 1, an object somewhere between Objects 2 and 3 has unit value. Since this unit object is unidentified, the unit of measure is unclear.

Unit sum scales – These scales are a special type of relative ratio scale where the sum of object values equals unity. Any ratio scale of a discrete number of objects can be made relative by dividing each value by the sum of values. Such normalization is equivalent to making an abstract object the unit of measure. Notice in Figure 1 that unit value on the scale will be an object that is slightly less than twice the weight of Object 1. That abstract object will possess the sum of weights of all 4 objects.

Linked ratio scales – Two different ratio scales of the same property can be made into one scale if some object common to both of them can be used to re-express one scale in terms of the other. In Figure 1, Object 5 is introduced. This object is also a member of a second scale of different objects which are not shown. By making both scales defined ratio scales with Object 5 as the unit of measure, they can be joined into one. The common object in both scales is the link between them.

Chained ratio scales – Chained ratio scales are linked scales, but with multiple binary links. They occur as a consequence of only 2 magnitudes being necessary to define a ratio scale. If we have several such binary ratio scales that are linked in terms of information between them, then we can chain them together to aggregate their information.

In Figure 1, the binary ratios Object1/Object2, Object2/Object3 and Object3/Object4 form a minimal spanning tree. We can convert that information to a defined scale by assigning unit value to any object and then figuring out the values for the other objects. For example, specify Object 2 as the unit by giving it a value of 1. This means that Object 1 has a value of 2 (since Object1/Object2 = 2), Object 3 has a value of 1/3 (since Object2/Object3 = 3), and Object 4 has a value of 1/6 (since Object3/Object4 = 2). That these are the correct values can be verified by looking at the defined scale in Figure 1.
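The chaining computation above is purely mechanical, so a short sketch may help. The following Python fragment is our own illustration (the object names, the ratios dictionary and the chain_to_defined_scale helper are not from the paper); it simply propagates a spanning tree of binary ratios from a chosen unit object.

```python
# A minimal sketch: convert a spanning tree of binary ratios into a defined
# ratio scale by fixing one object as the unit and propagating the ratios.
# The data reproduce the Figure 1 example; names and helper are illustrative.

def chain_to_defined_scale(ratios, unit_object):
    """ratios: dict mapping (a, b) -> value of a/b along a spanning tree."""
    values = {unit_object: 1.0}
    neighbours = {}
    for (a, b), r in ratios.items():
        neighbours.setdefault(a, []).append((b, 1.0 / r))  # value(b) = value(a) * (1/r)
        neighbours.setdefault(b, []).append((a, r))        # value(a) = value(b) * r
    frontier = [unit_object]                               # simple breadth-first propagation
    while frontier:
        current = frontier.pop()
        for other, factor in neighbours[current]:
            if other not in values:
                values[other] = values[current] * factor
                frontier.append(other)
    return values

ratios = {("Obj1", "Obj2"): 2.0, ("Obj2", "Obj3"): 3.0, ("Obj3", "Obj4"): 2.0}
print(chain_to_defined_scale(ratios, unit_object="Obj2"))
# {'Obj2': 1.0, 'Obj1': 2.0, 'Obj3': 0.333..., 'Obj4': 0.1666...}
```

Choosing a different unit object merely rescales all values by a common factor, leaving the ratios unchanged.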

                    Regular   Defined   Relative   Unit Sum   Linked   Chained
Object 1            6 kg      2         3          0.571      3.3      Obj1/Obj2 = 2/1
Object 2            3 kg      1         1.500      0.286      1.65     Obj2/Obj3 = 3/1
Object 3            1 kg      0.333     0.500      0.095      0.55     Obj3/Obj4 = 2/1
Object 4            0.5 kg    0.167     0.250      0.048      0.275
Object 5                                                      1

Type of Scale       Regular       Defined        Relative        Unit Sum           Linked             Chained
Unit of Measure     fixed         specified      undefined       undefined          multiple defined   variable
Magnitude of Unit   standard = 1  an object = 1  some object = 1 sum of objects = 1 linked object = 1  several objects = 1
Clarity of measure  very clear    clear          unclear         unclear            obscure            obscure

Figure 1. Comparison of different types of ratio scales.


Looking across Figure 1, we can see that the scale becomes more obscure as the unit is left undefined and as more than one object takes unit value.

3. Uses of Ratio Scales by AHP/ANP

In different ways, AHP/ANP methodologies make use of most scale types. The distributive mode of AHP and ANP normalizes priorities to unit sum scales, while the ideal mode uses defined scales for alternatives (the best alternative takes unit value) (Saaty, 1993). Both modes use unit sum scales for criteria. Linking Pin AHP is similar, except that any alternative can be the defined unit and that alternative is the referent (link) for making criteria comparisons (Schoner, Wedley & Choo, 1993). Defined scales are also used for anchoring strategic criteria when analyzing Benefits, Opportunities, Costs and Risks.

Linked scales are commonly used in AHP to bridge across orders of magnitude. AHP methodology recommends that any set of objects be within one order of magnitude. To extend the scale across two orders of magnitude, just include an object from the lower order in the next higher order. That way, a priority is established for the same object in each cluster (Wedley, Schoner & Choo, 1993). From there, it is routine to normalize both orders to the unit of the common object and combine the results. By successively using this technique over many orders, a scale can be established that spans many orders of magnitude.

Chained scales are used for establishing starting rules for partial (incomplete) matrices. When making comparisons, priorities can be determined from just the first n−1 comparisons that form a spanning tree. All subsequent comparisons are redundant. A relevant research question that is still largely unresolved is which combination of n−1 comparisons provides the best starting platform for making subsequent comparisons (Harker, 1987; Wedley, Schoner & Tang, 1993; Carmone, Kara & Zanakis, 1997; Ishizaka & Lusti, 2004).
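As a small illustration of the linking step just described, the following hypothetical example (our own numbers and variable names) re-expresses a higher-order cluster in the unit of the common object and merges the two clusters into one scale.

```python
# A small illustration (our own example) of linking two clusters that each span
# one order of magnitude: the common object appears in both clusters, so both
# priority vectors can be re-expressed in its unit and then combined.
cluster_low = {"A": 1.0, "B": 3.0, "C": 9.0}      # priorities within the lower cluster
cluster_high = {"C": 1.0, "D": 4.0, "E": 10.0}    # the common object C re-appears here

link = "C"
factor = cluster_low[link] / cluster_high[link]    # re-express the high cluster in the low cluster's unit
combined = dict(cluster_low)
combined.update({name: value * factor for name, value in cluster_high.items() if name != link})
print(combined)   # one scale spanning both orders of magnitude: A=1, B=3, C=9, D=36, E=90
```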

4. Aggregating AHP/ANP scales

Saaty (2004) has observed that no mathematical definition says that a scale must have a unit or an origin with zero value. Adopting that perspective, he contends that relative and unit sum scales do not need them. He is correct that we do not know absolute zero or a unit of measure beforehand, nor do we need them to derive a scale. During the comparison process, we use each object as a temporary and non-specific unit. In effect, each comparison defines a binary scale that can be chained together across all objects.

Relative and unit sum scales do not create problems when only one property is being considered. But in multi-criteria situations, problems can arise during aggregation if we add or delete alternatives or normalize in some different manner. Re-normalization with a changed set of objects implies a change in the unit of measure for each criterion. If unrecognized, with no offsetting modifications, such changes in norms can yield incorrect priorities and problems such as rank reversal. A basic premise of additive synthesis of ratio scales is that they be commensurate. To determine commensurability, it is necessary to recognize and understand the units of the scales that are being aggregated. Unfortunately, relative and unit sum scales do not facilitate this requirement.

The distributive and ideal modes of AHP both use unit sum scales for criteria but different scale types for alternatives (Saaty, 1993). In the ideal mode, priorities of alternatives are in the unit of the best alternative (defined scale). In the distributive mode, alternative priorities are in the unit of an abstract object (unit sum scale) that has the totality of the criterion possessed by the alternatives. Since each criterion is in a different unit of measure, it is the criteria weights that provide the proportional transformation that achieves commensurability before additive aggregation. And, since the distributive and ideal modes have different types of units, we would expect different criteria weights to be necessary to achieve commensurability (Choo, Schoner & Wedley, 1999). In practice, however, the same unit sum criteria weights are used for both modes. This results in two different sets of composite priorities that are both supposed to be correct aggregate results. The fact that both cannot be correct indicates that AHP methodology has imperfections.

In using AHP/ANP for benefit-cost analysis, similar errors can be observed. Using unit sum scales for Benefits, Opportunities, Costs or Risks can result in B/C or BO/CR ratios that do not match baseline results obtained with regular ratio scales (Bernhard & Canada, 1990; Wedley, Choo & Schoner, 2001). The results can even indicate profitability whereas in reality the costs exceed the benefits. Using examples with monetary values (regular ratio scales) for B, O, C and R, Wijnmalen (2007) has shown that the unit sum priorities for each component must be adjusted to a common unit before the BO/CR ratios give correct results. Methods that do pay attention to the commensurability of scales include Referenced AHP (Schoner & Wedley, 1989), Linking Pin AHP (Schoner, Wedley & Choo, 1993), Benchmark Measurement (Wedley, Choo & Schoner, 1996) and Dominant AHP (Kinoshita, Sekitani & Shi, 2002).
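The commensurability issue can be made concrete with a toy calculation. The sketch below uses hypothetical scores, weights and a function name of our own; it synthesises the same data with the distributive and ideal modes using identical criteria weights, and the two composite priority vectors differ and can even rank the alternatives differently, which is the behaviour the text attributes to unadjusted weights.

```python
# A minimal sketch (our own illustration, not from the paper): the same raw
# alternative scores and the same criteria weights are synthesised with the
# distributive mode (unit-sum normalisation per criterion) and the ideal mode
# (division by the best alternative per criterion).  The two modes generally
# give different composite priorities unless the criteria weights are adjusted.

raw_scores = {            # rows: alternatives, columns: criteria C1, C2
    "A1": [6.0, 1.0],
    "A2": [3.0, 4.0],
    "A3": [1.0, 5.0],
}
criteria_weights = [0.5, 0.5]

def synthesise(scores, weights, mode):
    columns = list(zip(*scores.values()))
    if mode == "distributive":
        norms = [sum(col) for col in columns]   # unit sum scale per criterion
    else:                                       # "ideal"
        norms = [max(col) for col in columns]   # defined scale: best alternative = 1
    return {name: sum(w * v / n for w, v, n in zip(weights, vals, norms))
            for name, vals in scores.items()}

print(synthesise(raw_scores, criteria_weights, "distributive"))
print(synthesise(raw_scores, criteria_weights, "ideal"))
```

In this example the distributive mode ties A1 and A2 while the ideal mode prefers A2, so at least one of the two aggregations must be resting on incommensurate units.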

5. Conclusion

Ignoring the unit of measure when aggregating ratio scales can lead to erroneous results. This is particularly so with relative ratio scales and unit sum scales that display values for objects but not the unit in which they are measured. A consequence is that aggregation proceeds without recognition that the components are incommensurate. Although sometimes obscure, all ratio scales have a unit of measure. Through a proportional transformation, that unit can always be changed to the magnitude of some other object. The classification scheme recommended herein is useful for identifying the type of scale being used and the nature of its unit of measure. Such knowledge is crucial for determining criteria weights that achieve commensurability before aggregation takes place.

6. References

1. Bernasconi, M., Choirat, C. & Seri, R. (2010), The analytic hierarchy process and theory of measurement, Management Science, 56(4), 699-711.
2. Bernhard, R.H. & Canada, J.R. (1990), Some problems in using benefit/cost ratios with the Analytic Hierarchy Process, The Engineering Economist, 36(1), 56-65.
3. Carmone, F.J., Kara, A. & Zanakis, S.H. (1997), A Monte Carlo investigation of incomplete pairwise comparison matrices in AHP, European Journal of Operational Research, 102, 538-553.
4. Choo, E.U., Schoner, B. & Wedley, W.C. (1999), Interpretation of criteria weights in multi-criteria decision making, Computers and Industrial Engineering, 37, 527-541.
5. Harker, P.T. (1987), Shortening the comparison process in the AHP, Mathematical Modelling, 8, 139-141.
6. Ishizaka, A. & Lusti, M. (2004), An expert module to improve the consistency of AHP matrices, International Transactions in Operational Research, 11, 97-105.
7. Kinoshita, E., Sekitani, K. & Shi, J. (2002), Mathematical properties of dominant AHP and concurrent convergence method, Journal of the Operations Research Society of Japan, 45(2), 198-213.
8. Michell, J. (1999), Measurement in Psychology, Cambridge University Press.
9. Saaty, T.L. (1977), A scaling method for priorities in hierarchical structures, Journal of Mathematical Psychology, 15, 234-281.
10. Saaty, T.L. (1993), Rank preservation and reversal: the AHP ideal and distributive modes, Mathematical and Computer Modelling, 17(6), 1-16.
11. Saaty, T.L. (2004), Scales from measurement – not measurement from scales, Proceedings of the 17th International Conference on Multiple Criteria Decision Making, Whistler, B.C., Canada, August 6-11 (http://www.bus.sfu.ca/events/mcdm/Proceedings/MCDM2004%20SUM.htm), accessed May 29, 2010.
12. Saaty, T.L. (2008), Relative measurement and its generalization in decision-making – Why pairwise comparisons are central in mathematics for the measurement of intangible factors: The Analytic Hierarchy/Network Process, RACSAM, Revista de la Real Academia de Ciencias Serie A: Matemáticas, 102(2), 251-318.
13. Schoner, B. & Wedley, W.C. (1989), Ambiguous criteria weights in AHP: Consequences and solutions, Decision Sciences, 20(3), 462-475.
14. Schoner, B., Wedley, W.C. & Choo, E.U. (1993), A unified approach to AHP with linking pins, European Journal of Operational Research, 64, 384-392.
15. Stevens, S.S. (1946), On the theory of scales of measurement, Science, 103, 677-680.
16. Wedley, W.C., Schoner, B. & Tang, T.S. (1993), Starting rules for incomplete comparisons in the Analytic Hierarchy Process, Mathematical Modelling, 17(4/5), 151-161.
17. Wedley, W.C., Schoner, B. & Choo, E.U. (1993), Clustering, dependence and ratio scales in AHP: Rank reversals and incorrect priorities with a single criterion, Journal of Multi-Criteria Decision Analysis, 2, 145-158.
18. Wedley, W.C., Choo, E.U. & Schoner, B. (1996), Benchmark measurement: Between relative and absolute, Proceedings of the Fourth International Symposium on the Analytic Hierarchy Process, Burnaby, B.C., Canada, 335-345.
19. Wedley, W.C., Choo, E.U. & Schoner, B. (2001), Magnitude adjustment for AHP benefit/cost ratios, European Journal of Operational Research, 133, 342-351.


Lagrangean Metaheuristic for the Travelling Salesman Problem

Rosa Herrero, Juan José Ramos, and Daniel Guimarans40

Departament de Telecomunicació i Enginyeria de Sistemes

Universitat Autònoma de Barcelona, Spain

Abstract
This paper presents a metaheuristic methodology based on the Lagrangean Relaxation technique, applied to the Travelling Salesman Problem. The presented approach combines the Subgradient Optimization algorithm with a heuristic to obtain a feasible primal solution from a dual solution. Moreover, a parameter has been introduced to improve algorithm convergence. The main advantage is the iterative evolution of both upper and lower bounds on the optimal cost, providing a feasible solution in a reasonable number of iterations with a tight gap between the primal and the optimal cost. Keywords: Lagrangean relaxation, Metaheuristic, Travelling salesman, Subgradient optimization.

1. Introduction

The Travelling Salesman Problem (TSP) is probably the best known combinatorial problem: "A salesman is required to visit once and only once each of n different customers starting from a depot, and returning to the same depot. What path minimises the total distance travelled by the salesman?" (Bellman, 1962). The TSP belongs to the class of NP-hard optimization problems (Savelsbergh, 1985). Lagrangean Relaxation (LR) is a well-known method to solve large-scale combinatorial optimization problems. It works by moving hard-to-satisfy constraints into the objective function, associating a penalty with them in case they are not satisfied. An excellent introduction to the whole topic of LR can be found in Fisher (1981). A metaheuristic "refers to a master strategy that guides and modifies other heuristics to produce solutions beyond those that are normally generated in a quest for local optimality" (Glover and Laguna, 1997). Compared with classical heuristics, Lagrangean metaheuristics provide both an upper and a lower bound (UB and LB), and thus a posterior quality check of the solution obtained. Furthermore, they reduce the search space, as dual information can be used to prune decision variables. The proposed LR-based method uses the Subgradient Optimization algorithm combined with a heuristic. Aiming to improve the algorithm's convergence on the optimal solution, a heuristic is introduced in order to obtain a feasible solution from the dual variables. Indeed, this method tries to improve the upper bound with the values of these feasible solutions, so better convergence is obtained. The present paper is structured as follows: the next section introduces the notation and presents a formulation of the problem. The proposed LR-based method is described afterwards. The following section presents some results of tests on common benchmark instances. Finally, some conclusions and further research topics are presented.

                                                            40 Emails: rosa.herrero,juanjose.ramos,[email protected]  


2. Problem formulation

The symmetric TSP can be defined on a complete undirected graph G = (I, E), connecting the customer set I through a set of undirected edges E. Each edge e = (i, j) in E has an associated travel cost c_e, supposed to be the lowest cost of a route connecting node i to node j. Solving the TSP consists of determining a route whose total travel cost is minimised and such that each customer is visited exactly once and the route starts and ends at the depot (i = 1). The classical formulation requires defining the binary variable x_e to denote whether the edge e is used in the route: x_e = 1 if customer j is visited immediately after i, and x_e = 0 otherwise. Thus, the TSP can be mathematically formulated as follows:

min Σ_{e∈E} c_e x_e        (1)

subject to

Σ_{e∈δ(i)} x_e = 2,   ∀ i ∈ I        (2)

Σ_{e∈E(S)} x_e ≤ |S| − 1,   ∀ S ⊂ I, 2 ≤ |S| ≤ |I| − 1        (3)

where δ(i) = {e ∈ E : ∃ j ∈ I, e = (i, j) or (j, i)} represents the set of edges whose starting or ending node is i, and E(S) = {e = (i, j) ∈ E : i, j ∈ S} represents the set of edges whose nodes are in the subset S of vertices. Constraint (2) states that every node must be visited once, that is, every customer must have two incident edges. The subtour elimination constraint (3) states that the route must be a Hamiltonian cycle, so it cannot have any subcycle.

3. Lagrangean Relaxation method

LR exploits the structure of the problem, so it reduces the problem complexity considerably. However, finding optimal Lagrangean multipliers is often a major issue. The most commonly used algorithm is Subgradient Optimization (SO). The main difficulty of this algorithm lies in choosing a correct step-size in order to ensure the algorithm's convergence (Reinelt, 1994). In order to face this limitation, the proposed method combines the SO algorithm with a heuristic to obtain a feasible solution from a dual solution. It can obtain a better upper bound, so it improves the convergence on the optimal solution, starting from an initial UB obtained with a Nearest Neighbour Heuristic. Although optimality cannot always be reached, the proposed method is able to provide a feasible solution with a tight gap between the primal and the optimal cost in a reasonable number of iterations.

4. Lagrangean dual problem

The proposed LR relaxes constraint set (2), which requires that all customers must be served, weighting it with a multiplier vector u, since all subcycles can be avoided by constructing the solution x as a 1-tree. Actually, a feasible solution of the TSP is a 1-tree having two incident edges at each node (Held and Karp, 1971). The advantage is that finding a minimum 1-tree is relatively easy. The Lagrangean dual problem obtained is max_u L(u), where

L(u) = min_{x is a 1-tree} [ Σ_{e∈E} c_e x_e + Σ_{i∈I} u_i (2 − Σ_{e∈δ(i)} x_e) ],

defining the i-th component of the subgradient at iteration k as 2 − Σ_{e∈δ(i)} x_e^k.


5. Lagrangean Metaheuristic

The proposed LR-based method can be considered a specification of the Lagrangean Metaheuristic presented in Boschetti and Maniezzo (2009). A heuristic obtains a feasible solution from the dual variables, so it tries to improve the UB and better convergence is obtained. Eventually, this feasible solution may be provided as the best solution if the method is stopped. The stopping criterion is based on a maximum number of iterations and on the step-size becoming numerically negligible. The proposed LR-based method algorithm is shown in Table 1.

0   Initialization
1      Initialize parameters
2      Obtain an UB applying the Nearest Neighbour Heuristic
3      Initialize the multipliers
4   Iteration k
5      Solve the Lagrangean function
6      Check the subgradient
7      if the subgradient is zero then the optimal solution is found → EXIT
8      if the 1-tree is nearly a Hamiltonian path then apply a heuristic to improve the UB
9      Check the parameter
10     Calculate the step-size
11     Update the multiplier
12

Table 1. The proposed LR-based method algorithm.
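For readers who want to see the mechanics of the iteration loop in Table 1, the following Python sketch implements a bare-bones version under our own assumptions: a Held-Karp style 1-tree lower bound, the degree-based subgradient, and a simple diminishing step-size. The instance data and helper names are invented, and the paper's UB-repair heuristic and adaptive step-size parameter are deliberately omitted.

```python
# A minimal, illustrative sketch (not the authors' code) of the Lagrangean lower
# bound with a basic subgradient update.  The instance, the helper names and the
# fixed diminishing step rule are our own assumptions.
import math

def minimum_one_tree(cost, u):
    """Minimum 1-tree w.r.t. penalised costs c_ij - u_i - u_j (node 0 is the special node)."""
    n = len(cost)
    mod = [[cost[i][j] - u[i] - u[j] for j in range(n)] for i in range(n)]
    in_tree, edges = {1}, []                       # Prim's algorithm on nodes 1..n-1
    while len(in_tree) < n - 1:
        i, j = min(((i, j) for i in in_tree for j in range(1, n) if j not in in_tree),
                   key=lambda e: mod[e[0]][e[1]])
        in_tree.add(j)
        edges.append((i, j))
    two = sorted(range(1, n), key=lambda j: mod[0][j])[:2]   # two cheapest edges at node 0
    edges += [(0, two[0]), (0, two[1])]
    value = sum(mod[i][j] for i, j in edges) + 2 * sum(u)    # L(u)
    return edges, value

def subgradient_loop(cost, iterations=100, step=1.0):
    n = len(cost)
    u = [0.0] * n
    best_lb = -math.inf
    for k in range(iterations):
        edges, lb = minimum_one_tree(cost, u)
        best_lb = max(best_lb, lb)
        degree = [0] * n
        for i, j in edges:
            degree[i] += 1
            degree[j] += 1
        g = [2 - d for d in degree]                # subgradient: 2 minus node degree
        if all(gi == 0 for gi in g):               # the 1-tree is already a tour: optimal
            return best_lb, edges
        u = [ui + step / (k + 1) * gi for ui, gi in zip(u, g)]  # simple diminishing step
    return best_lb, edges

# Tiny symmetric instance (coordinates -> Euclidean costs), for illustration only.
pts = [(0, 0), (0, 2), (2, 2), (2, 0), (1, 3)]
cost = [[math.dist(p, q) for q in pts] for p in pts]
print("best lower bound:", round(subgradient_loop(cost)[0], 3))
```

In the full method, the UB produced by the repair heuristic and the intermediate estimate between LB and UB would replace the fixed step rule used here.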

The proposed heuristic to improve the UB is applied when the 1-tree is nearly a Hamiltonian path, that is, when the subgradient condition of step 8 is satisfied. As any solution is a 1-tree, this criterion means that the solution has few vertices without two incident edges. The heuristic replaces an edge incident to a vertex j that has extra edges with an edge incident to a vertex l that has a single edge, minimising the cost of the exchange. In the presented approach, two different moves have been defined (Figure 1): (a) the unlimited move, and (b) the limited move, which only replaces edges whose vertices i and l are both connected to the same vertex j. First, the heuristic iteratively applies the unlimited move, taking care that it could produce an infinite loop. Then, it iteratively applies the limited move until a feasible solution is found.


Figure 1. Movements of the heuristic: (a) the unlimited move; (b) the limited move.

A good estimation of the parameter ξ avoids increasing the computation time. Its value may initially be large, for instance the value obtained at the first iteration, but it should be updated whenever a better feasible solution is found. If this parameter is not correctly updated, the heuristic becomes computationally expensive. Eventually, the heuristic could find the optimal solution without detecting it, so the method would continue iterating until the stopping criterion is met.

As mentioned, the algorithm's convergence is critically influenced by the step-size. This value relies on either the LB or the UB, which are normally unknown or badly estimated. Therefore, convergence may not be assured in all cases. In order to overcome this limitation, the use of a new parameter, taking a value between the LB and the UB, is proposed. By definition, this parameter is a better estimation of the optimal value L* than either bound, and the step-size is calculated from it instead of from the LB or the UB.

Convergence is guaranteed if the step-size term tends to zero. In turn, convergence efficiency improves as the new parameter gets closer to the (unknown) optimal solution. The main idea is very simple: as the algorithm converges on the solution, better lower bounds become known and better upper bound estimations can be obtained by using the heuristic designed to get feasible solutions. Therefore, the intermediate parameter is initialized from the initial bounds and updated whenever an improved lower bound is found or the heuristic produces a better upper bound.

Finally, the scaling parameter used in the step-size is initialized to the value 2 and is updated as Zamani and Lau (2010) suggest: if the lower bound is not improved, it is decreased; on the other hand, if the lower bound is improved, then its value is increased.

6. Results

The methodology described in the present paper has been implemented in Java. All tests have been performed on a non-dedicated server with an Intel Xeon quad-core processor at 2.66GHz and 16GB RAM. In general, different processes were launched to solve different problems, while external applications were active at the same time. For this reason, CPU time is provided just to give a rough idea of the algorithm's computational performance. A total of 59 symmetric TSP instances have been used to test the efficiency of the proposed approach. They have been obtained from the TSPLIB library (http://www.iwr.uni-heidelberg.de/groups/comopt/software/TSPLIB95/, updated 6 August 2008), a reference site with a large number of instances for the TSP and related problems, from various sources and of various types.


The experiments have been conducted using the distance function specified in the library for each instance, and the number of iterations (300) used by Reinelt (1994), without specific tuning of the parameters. Table 2 presents the number of problems solved, ordered by size, as well as the average gaps of the obtained UB, LB and intermediate estimate from the best known value (BKS). The gap was calculated as the percentage deviation from the BKS; therefore, if the gap is negative, the obtained value is smaller than the best known solution. Thus, the intermediate parameter is a good estimation compared with an unknown lower bound or an initial upper bound obtained with the Nearest Neighbour Heuristic.

Table 2. Summary of results obtained

 

Table 3. Results obtained for some representative problems

Table 3 presents representative results comparing the obtained values to the best known solution. The columns show: UB, the best feasible value; the percentage distance from optimality of the UB; tUB, the CPU time in seconds to find the best feasible solution; the percentage distance from optimality of the LB; and tFinal, the final CPU time in seconds.


Figure 2. Convergence of the LB, the intermediate estimate, and the UB in problem pr439.

Figure 2 shows the evolution of the upper and lower bounds on one run (problem pr439). As can be seen, the intermediate parameter is updated according to the conditions explained. The LB, the intermediate estimate and the UB converge on the BKS with respective gaps of 1.76%, 0.36% and 4.60%. The initial UB obtained with the Nearest Neighbour heuristic has a gap of 24%, so the gap was substantially reduced.

7. Conclusions and future work

The present paper has presented a metaheuristic methodology based on the Lagrangean Relaxation technique. This scheme has been used to tackle the Travelling Salesman Problem. The proposed LR-based method uses the SO algorithm combined with a heuristic. Aiming to improve the algorithm's convergence on the optimal solution, the heuristic is introduced in order to obtain a feasible solution from the dual variables. On the one hand, the method provides both UB and LB, and thus a posterior quality check of the solution obtained. On the other hand, it reduces the search space, as dual information can be used to prune decision variables. Finally, although optimality cannot always be reached, the proposed method is able to provide a feasible solution with a tight gap between the primal and the optimal cost in a reasonable number of iterations.

It should be remarked that several lines for future research are open. First, the parameters ρ and ξ must be adjusted with fine-tuning processes. Second, since the intermediate estimate is good, the heuristic used to obtain a feasible solution from a dual one is to be improved in order to reduce its computation time. Finally, the presented algorithm has been included into a Variable Neighbourhood Search framework to tackle the Capacitated Vehicle Routing Problem, showing very good results both in terms of solution quality and in terms of computational efficiency (Guimarans et al, 2010).

8. References
1. Bellman, R. (1962), Dynamic programming treatment of the travelling salesman problem, Journal of the ACM (1), 61-63.
2. Boschetti, M. and Maniezzo, V. (2009), Benders decomposition, Lagrangean relaxation and metaheuristic design, Journal of Heuristics, 15, 283-312.
3. Fisher, M. (1981), The Lagrangean relaxation method for solving integer programming problems, Management Science, 27, 1-18.
4. Glover, F. and Laguna, M. (1997), Tabu Search, Kluwer Academic Publishers: London.
5. Guimarans, D., Herrero, R., Riera, D., Juan, A. and Ramos, J. (2010), Combining constraint programming, Lagrangean relaxation and probabilistic algorithms to solve the vehicle routing problem, in Proceedings of the 17th RCRA Workshop on Experimental Evaluation of Algorithms for Solving Problems with Combinatorial Explosion, Bologna, Italy.
6. Held, M. and Karp, R. (1971), The travelling salesman problem and minimum spanning trees: Part II, Mathematical Programming, 1, 6-25.
7. Reinelt, G. (1994), The traveling salesman: computational solutions for TSP applications, Lecture Notes in Computer Science 840.
8. Savelsbergh, M. (1985), Local search in routing problems with time windows, Annals of Operations Research, 4, 285-305.
9. Zamani, R. and Lau, S. (2010), Embedding learning capability in Lagrangean relaxation: An application to the travelling salesman problem, European Journal of Operational Research, 201(1), 82-88.


A multi-dimensional analysis of R&D performance in the pharmaceutical sector and its association with acquisition history

Rupert Booth41

Doctoral Researcher, Warwick Business School, The University of Warwick, UK.

Abstract

The efficiency of the pharmaceutical R&D process was measured, using the Resource Based View (RBV) as the basis for the choice of measures. A merger typology was populated with the acquisitions of the Top 50 companies in the sector over 12 years. Six contributions to knowledge arose: (i) a set of RBV-based principles for the design of a Multi-Dimensional Measurement Framework from outside the firm; (ii) a DEA model of the pharmaceutical R&D process; (iii) a demonstration of decreasing returns to scale; (iv) statistical testing of the association between acquisition history and R&D efficiency for acquisitions in aggregate; (v) examination of diversification effects by testing for cross-border and cross-sector acquisitions only; (vi) repeating the entire analysis using Return on Assets, to compare with the results for efficiency. The last exercise provided an explanation for the merger paradox, namely why managers continue to undertake acquisitions despite their disappointing outcomes.

Keywords: Data envelopment analysis, Mergers and Acquisitions, Research and Development, Resource Based View, Accounting Measures

1. Introduction
Research into firms' performance has been continuing for over 30 years, and Cartwright and Schoenberg (2006) noted that "the failure rates of mergers and acquisitions have remained consistently high". However, research findings have varied, and Schoenberg (2006) suggested that one explanation for the variation is the approach to measurement. The line of enquiry in this research has therefore been to examine the performance of acquisitions using a contemporary approach to performance measurement, using multiple performance measures and paying particular attention to the role of intangibles, with the Resource Based View (RBV) of the firm as the theoretical basis for the choice of measures. The findings have also shed light on the 'merger paradox', whereby acquisitions continue despite their disappointing outcomes (Schenk, 2008).

Six contributions to literature have arisen from this research, each summarised below.

2. Application of the RBV to Measurement
The RBV has evolved into the dominant theory of strategic management since its inception by Wernerfelt (1984) and its consolidation by Barney (1991) and Peteraf (1993), though there have since been constructive criticisms, e.g. by Porter (1994), commenting that it was "overly introspective" for paying too little attention to external factors. However, this was seen as a source of potential advantage for the purpose of this research: if an introspective theory could be used to develop a strategy, based mainly on a view of internal resources, could the same theory start with the strategy and then infer the internal resources requiring measurement by an external evaluator, without access to the internal workings of the firm? This question has now been answered in the affirmative.

A review of the literature of the RBV was undertaken and the implications of the main arguments for measurement were identified; they are summarised in Table 1. These were supplemented by a further reading of the literature on Multi-Dimensional Measurement Frameworks (MDMFs), especially the Balanced Scorecard. However, most of this literature presumes access to the internal workings of the firm and is concerned with the selection of measures for internal decision makers, and Lebas & Euske (2002) noted that the needs of internal and external evaluators are different.
41 Email: [email protected]


Given the absence of previous work in this area, a set of principles was developed for the design of a framework specifically by and for an external evaluator; this is given in Table 2.

Phase          Author                      Contribution to Measurement
Early          Wernerfelt (1984)           a) Original definition of resources with examples
               Rumelt (1984)               b) Isolating mechanisms with examples
               Prahalad and Hamel (1990)   c) Link to practitioner concept of "core competencies"
Consolidation  Barney (1991)               d) VRIN tests: Valuable; Rare; Imperfectly imitable; Non-substitutable
               Peteraf (1994)              e) Link to value and rent generation (in practice this may reduce to considering efficiency)
Dynamic        Dierickx and Cool (1989)    f) Importance of deployment as opposed to possession
               Amit and Schoemaker (1993)  g) Capabilities
               Teece et al (1997)          h) Paths, Positions & Processes

Table 1. Contributions of the major RBV authors to performance measurement


Category: Scope and Structure
1. The multi-dimensional measurement framework must recognise the stakeholders which can affect the firm.
2. The dimensions and parameters should include at least one example of a leading measure, in order to understand future competitive advantage.
3. Exposure of the firm to risk should be measured.

Category: Resources and Barriers
4. Resources are candidates for measurement. To qualify as a resource, an asset of the firm must provide competitive advantage and must be VRIN.
5. The measures should include the factors which limit the size and growth of the firm.
6. Tangible and intangible resources need to be distinguished and the means of sustenance of intangible resources need to be considered.
7. Once a resource has been identified, its associated barrier to imitation by potential competitors should be identified.
8. If a weakening barrier to imitation exists, it is important to measure the effects of the weakening barrier, i.e. the emerging competition, and the rate of change.
9. Resources act together to produce effects and there are links between them, and measures should capture this holistic view, rather than measuring resources individually.

Category: Dynamics
10. Resources need to be deployed to be useful and this requires capabilities and also business processes, which offer more potential for measurement.
11. Positions should be measured as well as processes.
12. Process flows change resource levels (though in different ways for tangible and intangible resources) and offer an opportunity to measure "action variables".
13. Ideally resources should be linked to economic rent, though in practice it may only be possible to distinguish inputs from outputs to measure comparative efficiency.

Category: Benchmarking & Comparison
14. The size of the firm should be measured to understand the effect of scale.
15. Forming ratios, e.g. between levels of resources and associated flows, or to allow for the effects of scale, is useful for tracking performance over time.

Table 2. Design Principles for an MDMF for an External Evaluator

The principles in Table 2 were applied to the design of a set of performance measures for the pharmaceutical R&D process, which differed from those used earlier in the literature.

Pharmaceutical DEA Model

The MDMF was an improvement on previous measurement approaches for R&D efficiency. For example, Graves and Langowitz (1993) used a simple unit cost, examining approved compounds produced per unit of R&D expenditure; this does not account for the multiple outputs of the R&D process, which may be traded in licensing deals (a firm may prioritise its outputs for development, and therefore the calculation of a simple unit cost for a selected output is not a reliable measure).

The multiple measures provided by the MDMF have advantages but do require the use of a comparative efficiency approach, such as DEA, to analyse the multiple inputs and outputs.


DEA was selected not only for its ability to analyse multiple inputs and outputs but also because it is a non-parametric approach that does not require the form of the production function to be specified, a form which is difficult to specify for the R&D process. However, to date, DEA has not been extensively used to measure the efficiency of the R&D process. This may be because the outputs of the process (i.e. a design or discovery) are intangible and therefore quantification is difficult (one of the few exceptions is the pharmaceutical industry, where there is a clear definition of what constitutes a product that may be sold).

A further challenge of pharmaceutical R&D is the duration of the R&D pipeline, with inputs today committed in the expectation of outputs some years hence. The time-lags have been identified and R&D expenditure for a number of years, the greater part of the duration of the pipeline, has been collected, so there will always be some correspondence between input and output.

Applying DEA to measure efficiency, where there is a longitudinal dimension, is a novel application and its application to pharmaceutical R&D is an extension of the relevant literature.
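As an indication of how such a multi-input, multi-output efficiency score is obtained, the sketch below solves the input-oriented, constant-returns-to-scale DEA envelopment problem with scipy for a small invented data set; it illustrates the DEA mechanics only and is not the measurement model used in this research.

```python
# A minimal sketch (our own illustration, not the thesis model) of an
# input-oriented, constant-returns-to-scale DEA efficiency score.  The tiny
# data set (two inputs, two outputs for five firms) is invented purely to show
# how multiple inputs and outputs are handled.
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(inputs, outputs, o):
    """Efficiency of DMU o under the CCR envelopment model (arrays are DMU x dimension)."""
    n_dmu, n_in = inputs.shape
    n_out = outputs.shape[1]
    c = np.zeros(n_dmu + 1)                       # decision variables: [theta, lambda_1..lambda_n]
    c[0] = 1.0                                    # minimise theta
    A_ub, b_ub = [], []
    for i in range(n_in):                         # sum_j lambda_j x_ij <= theta * x_io
        A_ub.append(np.r_[-inputs[o, i], inputs[:, i]])
        b_ub.append(0.0)
    for r in range(n_out):                        # sum_j lambda_j y_rj >= y_ro
        A_ub.append(np.r_[0.0, -outputs[:, r]])
        b_ub.append(-outputs[o, r])
    bounds = [(None, None)] + [(0, None)] * n_dmu
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub), bounds=bounds, method="highs")
    return res.x[0]

rd_spend = np.array([[100, 40], [250, 60], [400, 150], [900, 300], [1500, 420]], float)
approvals = np.array([[3, 10], [5, 14], [6, 20], [9, 28], [11, 30]], float)
for firm in range(len(rd_spend)):
    print(f"firm {firm}: efficiency = {ccr_efficiency(rd_spend, approvals, firm):.3f}")
```

Adding the convexity constraint on the λ weights gives the variable-returns-to-scale (VRS) score, whose ratio to the CRS score is the scale efficiency examined below.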

Decreasing Returns to Scale

The use of DEA to measure the efficiency of the R&D process for pharmaceutical firms of a variety of sizes provided the opportunity to determine if there are increasing or decreasing returns to scale in the R&D process, or neither. The question is significant for several reasons, outlined below.

The relationship between scale efficiency, measured by the ratio of CRS and VRS efficiency scores) and size of R&D expenditure in 2006 was first examined graphically, as shown in Figure1:

[Scatter plot of ln(CRS/VRS) scale efficiency against ln(RD06), with an exponential trend line, omitted here.]

Figure 1. The relationship between scale efficiency and size of R&D expenditure in 2006

The graph appeared to show a consistent relationship between scale efficiency and the size of R&D expenditure. However, this does not constitute a statistical test, and therefore the sample was bisected on the basis of scale efficiency: the mean R&D expenditure for the sub-group with upper-median scale efficiency was $551m and the mean R&D expenditure for the sub-group with lower-median scale efficiency was $3,096m. At a p-level of 1% in a two-sided t-test, the null hypothesis of constant returns to scale was rejected.


The upper-median sub-group was examined with the Anderson-Darling test and found to be non-normal; a non-parametric test of significance for the difference between the medians of the two sub-groups ($395m and $2,661m) was therefore undertaken. Again, the null hypothesis was rejected at a p-level of 1%.

The alternative hypothesis of variable returns to scale is therefore accepted as statistically highly significant.
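The bisection-and-test procedure can be reproduced in a few lines; the sketch below uses synthetic data (not the thesis sample) and substitutes a Mann-Whitney test for the non-parametric comparison of medians reported above.

```python
# A minimal sketch (illustrative data, not the thesis sample) of the two tests
# described above: bisect firms by scale efficiency, then compare the R&D spend
# of the two sub-groups with a two-sided t-test and a non-parametric alternative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
scale_eff = rng.uniform(0.3, 1.0, 48)                        # CRS/VRS scale efficiency
rd_spend = 3000 * (1 - scale_eff) + rng.gamma(2.0, 150, 48)  # spend rises as efficiency falls

median_eff = np.median(scale_eff)
upper = rd_spend[scale_eff >= median_eff]   # firms with upper-median scale efficiency
lower = rd_spend[scale_eff < median_eff]    # firms with lower-median scale efficiency

t_stat, t_p = stats.ttest_ind(upper, lower, equal_var=False)
u_stat, u_p = stats.mannwhitneyu(upper, lower, alternative="two-sided")
print(f"mean spend: upper-median {upper.mean():.0f}, lower-median {lower.mean():.0f}")
print(f"t-test p = {t_p:.4f}, Mann-Whitney p = {u_p:.4f}")
```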

Previous academic studies had employed a variety of methodologies and given rise to conflicting findings, e.g. Graves & Langowitz (1993) found decreasing returns to scale while Jensen (1987) found “Firm size does not significantly affect the marginal productivity of research expenditure”. Decreasing returns to scale have now been shown for pharmaceutical R&D, when two methodological issues are addressed, namely the multiple outputs of the R&D process and the need to consider R&D over a substantial period of the pipeline.

Association between Acquisition History and Efficiency

The central focus of the research was to identify if there was an association between acquisition history and efficiency, with the objective of making an additional contribution to this well-researched field by the application of contemporary measurement techniques.

The efficiency of the pharmaceutical firms as measured by the DEA model was associated with their merger history. The outcome was inconclusive in aggregate, which was counter to the finding of Koenig & Mezick (2004), who found that "pharmaceutical companies that merged were able to achieve more favorable post-merger productivity scores than were attained prior to their merger". However, that work also suffers from the limitation of using a single-input/single-output measurement technique, and there were sampling issues, including the consideration of an early merger wave with a small sample size, in which the mergers themselves could be characterised as horizontal mergers between pairs of global companies intended to realise cost savings. Compared to the work of Koenig & Mezick (2004), this study has considered a larger number of firms, a full range of acquisition motivations and the full range of outputs of the R&D process.

Diversification

The research also examined whether acquisitions made to acquire new products (i.e. outside the pharmaceutical sector) or access new markets (i.e. of a company in a different nation) showed different characteristics to acquisitions as a whole. The results for the cross-sector acquisitions showed that firms with lower-median efficiency had a higher measure of central tendency for historic normalised deal value. This compares with prior findings in the diversification literature, e.g. Shelton (1988) reported "Multivariate regression analysis shows that acquisitions which permit the bidder access to new but related markets create the most value with the least variance". However, there are methodological differences, the most significant of which is the use of a parametric technique to relate input to output; furthermore, that study was not specific to the pharmaceutical sector.

The results for the cross-border analysis were inconclusive and this may reflect the typology used to define the degree of inter-relatedness. This is a particularly difficult area for the pharmaceutical sector, which is global, with research activity undertaken in a common language and linked by electronic information systems. Furthermore where mergers occur, they are often between culturally similar firms (e.g. US/UK, continental European and Japanese). Given the simple typology used (i.e. an acquisition in a foreign nation), which was necessarily simple to preserve sample sizes in each sub-group, it is not clear if a ‘diversified’ cross-border acquisition represents a significant diversification and this may be the reason for the inconclusive result.


ROA Comparison: an Emerging Pattern

Using ROA, all three hypotheses, namely acquisitions in aggregate, cross-sector and cross-border, showed a statistically significant association between above-median ROA and a higher historic normalised deal value. When the technical and financial efficiency results are considered together a pattern emerges, summarised in Table 3.

In Table 3, "strong" implies at least one (parametric or non-parametric) statistically significant result (α < 10%) and "weak" at least one statistical result of some significance (10% ≤ α < 20%). One pattern that emerges is that in the pharmaceutical sector, a history of cross-sector acquisitions is consistently associated with lower performance than other types. This observation is consistent with received opinion within the industry (Datamonitor, 2009).

Type           Technical       Financial
Aggregate      Inconclusive    Strong For
Cross-Border   Inconclusive    Strong For
Cross-Sector   Weak Against    Inconclusive

Table 3. Pattern of Hypothesis Tests

Furthermore, financial efficiency (as measured in the short term) is consistently more positively associated with merger activity than technical efficiency. A divergence between efficiency and ROA has been noted before, e.g. Fadzlan et al (2008) provide an acquisition-specific comparison between measurement by DEA and measurement by financial ratios, in that particular case finding that mergers do not increase the profitability of Singaporean banking groups but do increase efficiency. This is in contrast to the pharmaceutical sector finding. The contrary finding of this research does, however, provide a potential explanation for the merger paradox.

3. Merger Paradox

Given the frequency of mergers in the pharmaceutical sector and their influence on industry structure, any further understanding of the merger paradox, namely why mergers persist if they damage long-term productivity, would be a useful contribution to the academic literature. ROA is monitored by market analysts, and in making an acquisition the effect on ROA would be a prime consideration. Therefore, managers may be proceeding with acquisitions that lower efficiency and hence the long-term performance of the firm, possibly in the correct expectation that ROA would increase. Nonetheless, in the words of Danzon et al (2007), "Thus mergers may be a response to trouble but they are not a solution".

4. References

1. Cartwright, S. & Schoenberg, R. (2006), "Thirty years of mergers and acquisitions research: Recent advances and future opportunities", British Journal of Management, 17(S1), S1-S5.
2. Schoenberg, R. (2006), Measuring the Performance of Corporate Acquisitions: An Empirical Comparison of Alternative Metrics, British Journal of Management, 17(4), 361-370.
3. Schenk, H. (2008), The Merger Paradox: Determinants and Effects, in 21st Century Management. A Reference Handbook, Vol. I, Los Angeles: Sage, 345-354.
4. Wernerfelt, B. (1984), "A Resource-Based View of the Firm", Strategic Management Journal, 5, 171-180.
5. Barney, J.B. (1991), "Firm Resources and Sustained Competitive Advantage", Journal of Management, 17, 99-120.
6. Peteraf, M.A. (1993), "The Cornerstones of Competitive Advantage: A Resource-Based View", Strategic Management Journal, 14, 179-191.
7. Porter, M.E. (1994), Toward a dynamic theory of strategy, in R.P. Rumelt, D.E. Schendel and D.J. Teece (Eds), Fundamental Issues in Strategy, Harvard Business School Press, Boston, MA.
8. Lebas, M. & Euske, K. (2002), "A conceptual and operational delineation of performance", in Neely, A. (Ed.), Business Performance Measurement: Theory and Practice.
9. Graves, S.B. & Langowitz, N.S. (1993), Innovative Productivity and Returns to Scale in the Pharmaceutical Industry, Strategic Management Journal, 14(8), 593-605.
10. Jensen, E.J. (1987), Research expenditures and the discovery of new drugs, Journal of Industrial Economics, 36, 83-95.
11. Koenig & Mezick (2004), Impact of mergers & acquisitions on research productivity within the pharmaceutical industry, Scientometrics, 59(1).
12. Shelton, L.M. (1988), Strategic business fits and corporate acquisition: Empirical evidence, Strategic Management Journal, 9(3), 279-287.
13. Datamonitor (2009), "Mapping the Healthcare Landscape: Bringing Pharmaceuticals into Focus".
14. Fadzlan, S., Majid, A., Zulkhibri, M. & Razali, H. (2008), "Efficiency and Bank Merger in Singapore: A Joint Estimation of Non-Parametric, Parametric and Financial Ratios Analysis", MPRA Paper No. 12129, http://mpra.ub.uni-muenchen.de/12129/.
15. Danzon, P.M., Epstein, A. & Nicholson, S. (2007), Mergers and Acquisitions in the Pharmaceutical and Biotech Industries, Managerial and Decision Economics, 28, 307-328.


Modelling Traffic Flows on Highways: A Hybrid Approach

Salissou Moutari

Queen’s University Belfast

 

Please see PDF file


Ranking Alternatives on the Basis of Intensity of Dominance and Fuzzy Logic within MAUT

Antonio Jiménez1, Alfonso Mateos1 and Pilar Sabio2

1Departamento de Inteligencia Artificial, Facultad de Informática,

Universidad Politécnica de Madrid, Campus de Montegancedo s/n, 28660 Madrid, Spain.

2Departmento de Economía Aplicada, Facultad de Ciencias Legales y Sociales,

Universidad Rey Juan Carlos, 28933 Madrid, Spain

E-mail: [email protected], [email protected], [email protected]

Abstract. We introduce a dominance measuring method to derive a ranking of alternatives to deal with incomplete information in multi-criteria decision-making problems on the basis of multi-attribute utility theory (MAUT) and fuzzy sets theory. We consider the situation where the alternative performances are represented by intervals and there exists imprecision concerning the decision-makers' preferences, leading to classes of component utility functions and trapezoidal fuzzy weights. An additive multi-attribute utility model is used to evaluate the alternatives under consideration. The approach we propose is based on the dominance values between pairs of alternatives, which can be computed by linear programming. These values are then transformed into dominance intensities, from which a dominance intensity measure is derived. The performance of the proposed method is analyzed using Monte Carlo simulation techniques.

Keywords: Decision Analysis, Fuzzy Sets, Environmental Studies.

INTRODUCTION

The additive model is considered a valid approximation in most real decision-making problems (Raiffa, 1982; Stewart, 1996). The functional form of the additive model is

u(A_i) = Σ_{j=1..n} w_j u_j(x_ij),

where x_ij is the performance in the attribute X_j of alternative A_i, u_j(x_ij) is the utility associated with the value x_ij, and w_j are the weights of each attribute, representing their relative importance in decision-making.

Most complex decision-making problems involve imprecise information. For example, it is impossible to predict the exact performance of each alternative under consideration, since performances are often derived from statistical methods. At the same time, it is often not easy to elicit precise weights, which are then represented by intervals. DMs may find it difficult to compare criteria or may not want to reveal their preferences in public. Furthermore, the decision may be taken within a group, where the imprecision of the preferences is the result of a negotiation process.

A lot of work on MAUT has dealt with incomplete information. Sarabando and Dias (2009) give a brief overview of approaches proposed by different authors within the MAUT and MAVT (multi-attribute value theory) framework to deal with incomplete information.

Eum et al (2001) provided linear programming characterizations of dominance and potential optimality for decision alternatives when information about performances and/or weights is incomplete; the approach was extended to hierarchical structures (Lee et al, 2002), and the concepts of weak and strong potential optimality were developed (Park, 2004). More recently, Mateos and Jiménez (2009) and Mateos et al (2009) considered the more general case where imprecision, described by means of fixed bounds, appears in alternative performances, as well as in weights and utilities.

At the same time, a number of studies have been conducted concerning imprecision using fuzzy sets theory. These studies feed off the advances of research into arithmetic and logical operators of fuzzy numbers, like Tran and Duckstein's study proposing the comparison of fuzzy numbers by a fuzzy measure of distance (Tran and Duckstein, 2002).

Following this research line, we consider a decision-making problem with m alternatives, A_i, i = 1, ..., m, and n attributes, X_j, j = 1, ..., n, where incomplete information about the input parameters has been incorporated into the decision-making process:

• Alternative performances are described under uncertainty by intervals [x^L_ij, x^U_ij], i = 1, ..., m, j = 1, ..., n, where x^L_ij and x^U_ij are the lower and upper performances of the attribute X_j for the alternative A_i, respectively.

• Component utilities are described by functions u_j(·) belonging to classes of utility functions [u^L_j(·), u^U_j(·)], j = 1, ..., n, where u^L_j(·) and u^U_j(·) are the lower and upper utility functions of the attribute X_j.

• Imprecise weights are represented by trapezoidal fuzzy numbers w̃_j, j = 1, ..., n.

One possibility described in the literature for dealing with imprecision attempts to eliminate inferior alternatives based on the concept of pairwise dominance. Given two alternatives A_k and A_l, the alternative A_k dominates A_l if D_kl ≥ 0, D_kl being the optimum value of the optimization problem

D_kl = min { u(A_k) − u(A_l) } = min { Σ_{j=1..n} w̃_j u_j(x_kj) − Σ_{j=1..n} w̃_j u_j(x_lj) }
s.t.  x^L_kj ≤ x_kj ≤ x^U_kj,  j = 1, ..., n
      x^L_lj ≤ x_lj ≤ x^U_lj,  j = 1, ..., n
      u^L_j(x_kj) ≤ u_j(x_kj) ≤ u^U_j(x_kj),  j = 1, ..., n
      u^L_j(x_lj) ≤ u_j(x_lj) ≤ u^U_j(x_lj),  j = 1, ..., n.

The objective function in the above optimization problem can also be represented by D_kl = min Σ_{j=1..n} w̃_j [u_j(x_kj) − u_j(x_lj)]. Therefore, its resolution is equivalent to solving (Mateos et al, 2007)

D_kl = Σ_{j=1..n} w̃_j z*_klj,        (1)

where z*_klj is the optimum value of the following optimization problem:

min z_klj = u_j(x_kj) − u_j(x_lj)
s.t.  x^L_kj ≤ x_kj ≤ x^U_kj
      x^L_lj ≤ x_lj ≤ x^U_lj
      u^L_j(x_kj) ≤ u_j(x_kj) ≤ u^U_j(x_kj)
      u^L_j(x_lj) ≤ u_j(x_lj) ≤ u^U_j(x_lj).        (2)

The optimal solution of problem (2) can be determined in a very simple way for certain types of utility functions (Mateos et al, 2007). If the utility function is monotonically increasing, then z*_klj = u^L_j(x^L_kj) − u^U_j(x^U_lj); if it is monotonically decreasing, then z*_klj = u^L_j(x^U_kj) − u^U_j(x^L_lj).
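A brief sketch may clarify how (1) and (2) combine when the component utilities are monotonically increasing: each z*_klj comes straight from interval endpoints, and the trapezoidal weights turn D_kl into a trapezoidal fuzzy number. The data, the helper names and the sorting of the four products (a practical detail needed when z*_klj is negative, which the text leaves implicit) are our own.

```python
# A minimal sketch (our notation and data, not the authors' code) of the
# endpoint shortcut for problem (2) with monotonically increasing utilities,
# followed by the construction of the trapezoidal D_kl of equation (1).

def z_star(u_low, u_up, xk_interval, xl_interval):
    """z*_klj for a monotonically increasing utility class [u_low, u_up]."""
    xk_lo, _ = xk_interval
    _, xl_up = xl_interval
    return u_low(xk_lo) - u_up(xl_up)

def fuzzy_dominance(weights, z_values):
    """D_kl = sum_j w~_j z*_klj with trapezoidal weights w~_j = (w1, w2, w3, w4)."""
    components = [0.0, 0.0, 0.0, 0.0]
    for (w1, w2, w3, w4), z in zip(weights, z_values):
        parts = sorted((w1 * z, w2 * z, w3 * z, w4 * z))  # keep components non-decreasing when z < 0
        components = [c + p for c, p in zip(components, parts)]
    return tuple(components)

# Two attributes; linear lower/upper utilities on [0, 10] purely as an illustration.
u_low = lambda x: 0.08 * x
u_up = lambda x: 0.10 * x
z = [z_star(u_low, u_up, (6.0, 8.0), (3.0, 5.0)),   # attribute 1
     z_star(u_low, u_up, (2.0, 4.0), (5.0, 7.0))]   # attribute 2
weights = [(0.3, 0.4, 0.5, 0.6), (0.4, 0.5, 0.6, 0.7)]
print("z* values:", z)
print("D_kl as a trapezoidal fuzzy number:", fuzzy_dominance(weights, z))
```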

A recent approach is to use information about each alternative's intensity of dominance; such approaches are known as dominance measuring methods. Ahn and Park (2008) compute both dominating and dominated measures, from which they derive a net dominance. This is used as a measure of the strength of preference in the sense that a greater net value is better.

In the next section we introduce a dominance measuring method accounting for fuzzy weights. Section 3 analyzes the performance of the proposed method using Monte Carlo simulation techniques. Finally, some conclusions are provided in Section 4.

MEASUREMENT METHOD FOR FUZZY DOMINANCE

The proposed dominance measuring method adapts the proposal by Mateos and Jiménez (2009) and Mateos et al (2009) to account for fuzzy weights, making use of work by Tran and Duckstein (2002) on distances between fuzzy numbers based on the generalization of left and right fuzzy numbers (GLRFN) (Dubois and Prade, 1980; Bárdossy and Duckstein, 1995).

A fuzzy set ã = (a1, a2, a3, a4) is called a GLRFN when its membership function is defined as

μ_ã(x) = L((a2 − x)/(a2 − a1))   if a1 ≤ x ≤ a2,
μ_ã(x) = 1                       if a2 ≤ x ≤ a3,
μ_ã(x) = R((x − a3)/(a4 − a3))   if a3 ≤ x ≤ a4,
μ_ã(x) = 0                       otherwise,

where L and R are strictly decreasing functions defined on [0, 1] and satisfying the conditions L(x) = R(x) = 1 if x ≤ 0 and L(x) = R(x) = 0 if x ≥ 1. For a2 = a3, we recover Dubois and Prade's classic definition of left and right fuzzy numbers (Dubois and Prade, 1980). Trapezoidal fuzzy numbers are a special case of GLRFN with L(x) = R(x) = 1 − x. A GLRFN is denoted as ã = (a1, a2, a3, a4)_{La-Ra}, and an α-cut of ã is defined as

ã(α) = (a_L(α), a_R(α)) = (a2 − (a2 − a1) L_a^{-1}(α), a3 + (a4 − a3) R_a^{-1}(α)).

Tran and Duckstein (2002) define the distance between two GLRFN fuzzy numbers ã and b̃ as

D²(ã, b̃, f) = ∫₀¹ { [ (a_L(α) + a_R(α))/2 − (b_L(α) + b_R(α))/2 ]² + (1/3) [ ((a_R(α) − a_L(α))/2)² + ((b_R(α) − b_L(α))/2)² ] } f(α) dα  /  ∫₀¹ f(α) dα.

The function f(α), which serves as a weighting function, is positive and continuous on [0, 1], the distance being computed as a weighted sum of distances between two intervals along all of the α-cuts from 0 to 1. The presence of the function f permits the DM to participate in a flexible way. For example, when the DM is risk-neutral, f(α) = α seems to be reasonable. A risk-averse DM might want to put more weight on information at a higher α level by using other functions, such as f(α) = α² or a higher power of α. A constant (f(α) = 1), or even a decreasing function f, can be utilized for a risk-prone DM.


For the particular case of the distance of a trapezoidal fuzzy number ã = (a1, a2, a3, a4) to a constant (specifically 0), we have:

1. If f(α) = α then
d(ã, 0)² = ((a2 + a3)/2)² + (1/3)·((a2 + a3)/2)·[(a4 − a3) − (a2 − a1)] + (1/3)·((a3 − a2)/2)² + (1/9)·((a3 − a2)/2)·[(a4 − a3) + (a2 − a1)] + (1/18)·[(a4 − a3)² + (a2 − a1)²] − (1/18)·(a2 − a1)(a4 − a3).

2. If f(α) = 1 then
d(ã, 0)² = ((a2 + a3)/2)² + (1/2)·((a2 + a3)/2)·[(a4 − a3) − (a2 − a1)] + (1/3)·((a3 − a2)/2)² + (1/6)·((a3 − a2)/2)·[(a4 − a3) + (a2 − a1)] + (1/9)·[(a4 − a3)² + (a2 − a1)²] − (1/9)·(a2 − a1)(a4 − a3).
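These closed-form expressions can be cross-checked numerically; the following sketch (our own helper names) approximates the α-cut integral defining d(ã, 0, f) with a midpoint rule for the two weighting functions.

```python
# A small numerical sketch (illustrative, not from the paper): approximate the
# Tran-Duckstein distance d(a~, 0, f) for a trapezoidal fuzzy number by summing
# the alpha-cut integrand on a fine grid, for the two weighting functions above.
def trapezoid_alpha_cut(a, alpha):
    a1, a2, a3, a4 = a
    return a1 + (a2 - a1) * alpha, a4 - (a4 - a3) * alpha   # (a_L(alpha), a_R(alpha))

def distance_to_zero(a, f, steps=20000):
    num = den = 0.0
    for k in range(steps):
        alpha = (k + 0.5) / steps                           # midpoint rule on [0, 1]
        lo, hi = trapezoid_alpha_cut(a, alpha)
        mid, half = (lo + hi) / 2.0, (hi - lo) / 2.0
        num += (mid ** 2 + half ** 2 / 3.0) * f(alpha)
        den += f(alpha)
    return (num / den) ** 0.5

a = (0.0, 1.0, 3.0, 4.0)
print("f(alpha)=alpha :", distance_to_zero(a, lambda alpha: alpha))
print("f(alpha)=1     :", distance_to_zero(a, lambda alpha: 1.0))
```

For the trapezoid (0, 1, 3, 4) this gives approximately 2.147 for f(α) = α and 2.186 for f(α) = 1, in agreement with the closed forms.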

As trapezoidal fuzzy numbers are used to represent weights, the objective function in (1) can now be represented by

D_kl = d̃_kl = Σ_{j=1..n} w̃_j z*_klj = Σ_{j=1..n} (w_j1, w_j2, w_j3, w_j4) z*_klj = (d_kl1, d_kl2, d_kl3, d_kl4).

The first step in the proposed method, then, consists of computing the above trapezoidal fuzzy numbers. Consequently, the strength of dominance of alternative A_k can be defined as

d̃_k = (d_k1, d_k2, d_k3, d_k4) = Σ_{l≠k} d̃_kl = ( Σ_{l≠k} d_kl1, Σ_{l≠k} d_kl2, Σ_{l≠k} d_kl3, Σ_{l≠k} d_kl4 ).

Next, a dominance intensity, DI_k, for each alternative A_k is computed from the proportion of the positive part of the fuzzy number d̃_k and the distance of the fuzzy number to zero. Specifically, the dominance intensity for alternative A_k is computed according to the location of d̃_k as follows:

1. If d̃_k is completely located to the left of zero, then DI_k is minus the distance of d̃_k to zero, because there is no positive part in d̃_k.

2. If d̃_k is completely located to the right of zero, then DI_k is the distance of d̃_k to zero, because there is no negative part in d̃_k.

3. If d̃_k includes zero in its base, then the fuzzy number has a part to the right of zero, which we denote d̃_k^R, and another part to the left of zero, which we denote d̃_k^L. DI_k is the proportion that d̃_k^R represents with respect to d̃_k, multiplied by the distance of d̃_k to zero, less the proportion that d̃_k^L represents with respect to d̃_k, multiplied by the distance of d̃_k to zero.

Next, we analyze each one of these cases in more detail.

• If d_k4 < 0 (see Figure 1a), then the dominance intensity of alternative A_k is defined as DI_k = −d(d̃_k, 0, f).

- Figure 1 -

• If d_k1 > 0 (see Figure 1b), then the dominance intensity of alternative A_k is defined as DI_k = d(d̃_k, 0, f).

• If d_k1 < 0 and d_k2 > 0 (see Figure 1c), the corresponding trapezoidal fuzzy number is divided by the vertical axis (at zero) into two parts. The left part d̃_k^L represents the proportion
\[
\frac{\dfrac{1}{2}\,(-d_{k1})\,\dfrac{-d_{k1}}{d_{k2}-d_{k1}}}{\dfrac{d_{k4}+d_{k3}-d_{k2}-d_{k1}}{2}}
=\frac{(d_{k1})^{2}}{(d_{k4}+d_{k3}-d_{k2}-d_{k1})(d_{k2}-d_{k1})},
\]
whereas the right part d̃_k^R represents the proportion
\[
\frac{\dfrac{d_{k4}+d_{k3}-d_{k2}-d_{k1}}{2}-\dfrac{(-d_{k1})(-d_{k1})}{2(d_{k2}-d_{k1})}}{\dfrac{d_{k4}+d_{k3}-d_{k2}-d_{k1}}{2}}
=\frac{d_{k2}(-d_{k2}+d_{k3}+d_{k4})-d_{k1}(d_{k3}+d_{k4})}{(d_{k2}-d_{k1})(d_{k4}+d_{k3}-d_{k2}-d_{k1})}.
\]
The dominance intensity of alternative A_k is defined as
\[
DI_{k}=\frac{d_{k2}(-d_{k2}+d_{k3}+d_{k4})-d_{k1}(d_{k3}+d_{k4})}{(d_{k2}-d_{k1})(d_{k4}+d_{k3}-d_{k2}-d_{k1})}\,d(\tilde d_{k},0,f)
-\frac{(d_{k1})^{2}}{(d_{k4}+d_{k3}-d_{k2}-d_{k1})(d_{k2}-d_{k1})}\,d(\tilde d_{k},0,f).
\]

• If d_k3 < 0 and d_k4 > 0 (see Figure 1d), the corresponding trapezoidal fuzzy number is again divided by the vertical axis into two parts, d̃_k^L and d̃_k^R, represented by the proportions
\[
\frac{\dfrac{d_{k4}+d_{k3}-d_{k2}-d_{k1}}{2}-\dfrac{1}{2}\,d_{k4}\,\dfrac{d_{k4}}{d_{k4}-d_{k3}}}{\dfrac{d_{k4}+d_{k3}-d_{k2}-d_{k1}}{2}}
=\frac{-d_{k4}(d_{k2}+d_{k1})-d_{k3}(d_{k3}-d_{k2}-d_{k1})}{(d_{k4}+d_{k3}-d_{k2}-d_{k1})(d_{k4}-d_{k3})},
\]
and
\[
\frac{\dfrac{1}{2}\,d_{k4}\,\dfrac{d_{k4}}{d_{k4}-d_{k3}}}{\dfrac{d_{k4}+d_{k3}-d_{k2}-d_{k1}}{2}}
=\frac{(d_{k4})^{2}}{(d_{k4}-d_{k3})(d_{k4}+d_{k3}-d_{k2}-d_{k1})},
\]
respectively, and the dominance intensity of alternative A_k is
\[
DI_{k}=\frac{(d_{k4})^{2}}{(d_{k4}-d_{k3})(d_{k4}+d_{k3}-d_{k2}-d_{k1})}\,d(\tilde d_{k},0,f)
-\frac{-d_{k4}(d_{k2}+d_{k1})-d_{k3}(d_{k3}-d_{k2}-d_{k1})}{(d_{k4}+d_{k3}-d_{k2}-d_{k1})(d_{k4}-d_{k3})}\,d(\tilde d_{k},0,f).
\]

• If d_k2 < 0 and d_k3 > 0 (see Figure 1e), d̃_k^R and d̃_k^L are
\[
\frac{\dfrac{d_{k4}-d_{k3}}{2}+d_{k3}}{\dfrac{d_{k4}+d_{k3}-d_{k2}-d_{k1}}{2}}=\frac{d_{k4}+d_{k3}}{d_{k4}+d_{k3}-d_{k2}-d_{k1}},
\]
and
\[
\frac{\dfrac{d_{k2}-d_{k1}}{2}-d_{k2}}{\dfrac{d_{k4}+d_{k3}-d_{k2}-d_{k1}}{2}}=\frac{-d_{k2}-d_{k1}}{d_{k4}+d_{k3}-d_{k2}-d_{k1}},
\]
respectively, and the dominance intensity of alternative A_k is
\[
DI_{k}=\frac{d_{k4}+d_{k3}}{d_{k4}+d_{k3}-d_{k2}-d_{k1}}\,d(\tilde d_{k},0,f)
-\frac{-d_{k2}-d_{k1}}{d_{k4}+d_{k3}-d_{k2}-d_{k1}}\,d(\tilde d_{k},0,f).
\]
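The five configurations can be handled uniformly by computing the proportion of the area of d̃_k lying to the right of zero, as the following sketch illustrates (an illustrative Python implementation written for this summary, not code from the paper; the distance is approximated numerically as before).

```python
import numpy as np

def distance_to_zero(d, f, n=2000):
    """Tran-Duckstein distance of the trapezoid d = (d1, d2, d3, d4) to 0."""
    d1, d2, d3, d4 = d
    num = den = 0.0
    for a in np.linspace(1e-6, 1.0, n):
        lo = d2 - (d2 - d1) * (1.0 - a)
        hi = d3 + (d4 - d3) * (1.0 - a)
        num += (((lo + hi) / 2.0) ** 2 + ((hi - lo) / 2.0) ** 2 / 3.0) * f(a)
        den += f(a)
    return np.sqrt(num / den)

def positive_proportion(d):
    """Proportion of the area of the trapezoid lying to the right of zero."""
    d1, d2, d3, d4 = d
    total = (d4 + d3 - d2 - d1) / 2.0
    if d4 <= 0:                       # Figure 1a: entirely to the left of zero
        return 0.0
    if d1 >= 0:                       # Figure 1b: entirely to the right of zero
        return 1.0
    if d2 > 0:                        # Figure 1c: zero cuts the rising side
        left = 0.5 * (-d1) * (-d1) / (d2 - d1)
        return (total - left) / total
    if d3 < 0:                        # Figure 1d: zero cuts the falling side
        right = 0.5 * d4 * d4 / (d4 - d3)
        return right / total
    right = d3 + (d4 - d3) / 2.0      # Figure 1e: zero lies under the plateau
    return right / total

def dominance_intensity(d, f=lambda a: a):
    """DI_k = (positive proportion - negative proportion) * d(d_k, 0, f)."""
    p = positive_proportion(d)
    return (2.0 * p - 1.0) * distance_to_zero(d, f)

if __name__ == "__main__":
    # Invented strength-of-dominance trapezoids for three alternatives.
    for d in [(-0.6, -0.4, -0.3, -0.1), (-0.2, 0.1, 0.3, 0.5), (0.05, 0.1, 0.2, 0.4)]:
        print(d, round(dominance_intensity(d), 4))
```

A weighting function other than the risk-neutral default can be passed in through the argument f.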


Once the dominance intensity DI_k has been computed for each alternative A_k, the alternatives are ranked: the best (rank 1) is the alternative with the greatest DI_k, and the worst is the alternative with the smallest DI_k.

PERFORMANCE ANALYSIS BASED ON MONTE CARLO SIMULATION TECHNIQUES

We carried out a simulation study of the above method to analyze its performance. Four different levels of alternatives (m = 3, 5, 7, 10) and five different levels of attributes (n = 3, 5, 7, 10, 15) were considered in order to validate the results. Also, 5000 trials were run for each of the 20 design elements.

We used two measures of efficacy, hit ratio and rank-order correlation (Ahn and Park, 2008; Mateos et al., 2009). The hit ratio is the proportion of all cases in which the method selects the same best alternative as in the TRUE ranking. Rank-order correlation represents how similar the overall ranking structures of the alternatives are in the TRUE ranking and in the ranking derived from the method. It is calculated using Kendall's τ (Winkler and Hays, 1985): τ = 1 − 2 × (number of pairwise preference violations)/(total number of pair preferences).
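For example (an illustrative snippet, not from the paper), Kendall's τ between the TRUE ranking and the ranking produced by a method follows directly from the pairwise-violation count:

```python
from itertools import combinations

def kendall_tau(rank_true, rank_method):
    """tau = 1 - 2 * (pairwise preference violations) / (total number of pairs).
    Rankings are dicts mapping an alternative to its rank position (1 = best)."""
    pairs = list(combinations(rank_true, 2))
    violations = sum(1 for i, j in pairs
                     if (rank_true[i] - rank_true[j]) * (rank_method[i] - rank_method[j]) < 0)
    return 1.0 - 2.0 * violations / len(pairs)

# The method swaps the second and third alternatives: one violation out of three pairs.
print(kendall_tau({"A": 1, "B": 2, "C": 3}, {"A": 1, "B": 3, "C": 2}))  # 0.333...
```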

First, component utilities for each alternative in each attribute are randomly generated from a uniform distribution on (0, 1), leading to an m × n matrix. The columns of this matrix are normalized to make the smallest value 0 and the largest 1, and dominated alternatives are removed. Next, attribute weights representing their relative importance are generated. Note that these weights are the TRUE weights, and the derived ranking of alternatives will be denoted as the TRUE ranking. Those weights are then transformed into triangular fuzzy numbers applying a 5% deviation from the original weight. Finally, the ranking of alternatives is computed and compared with the TRUE ranking. Table 1 exhibits the measures of efficacy for each of the 20 design elements for the cases of a risk-prone and a risk-neutral DM.
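A single trial of this procedure can be sketched as follows (a hypothetical outline written for this summary: the exact fuzzification of the TRUE weights and the ranking routine, here a centroid-based stand-in, are assumptions standing in for the dominance intensity method described above).

```python
import numpy as np

rng = np.random.default_rng(0)

def one_trial(m, n, rank_with_method, deviation=0.05):
    """One simulation trial; returns whether the method hits the TRUE best alternative."""
    # 1. Random component utilities, normalised per attribute so each column spans [0, 1].
    u = rng.uniform(size=(m, n))
    u = (u - u.min(axis=0)) / (u.max(axis=0) - u.min(axis=0))

    # 2. Remove dominated alternatives (no better on any attribute, worse on some).
    keep = [i for i in range(m)
            if not any((u[j] >= u[i]).all() and (u[j] > u[i]).any()
                       for j in range(m) if j != i)]
    u = u[keep]

    # 3. TRUE weights and the TRUE ranking of the remaining alternatives.
    w = rng.uniform(size=n)
    w /= w.sum()
    true_rank = np.argsort(-(u @ w))

    # 4. Fuzzify the weights with a 5% deviation (assumed symmetric) and re-rank.
    fuzzy_w = [(wj * (1 - deviation), wj, wj, wj * (1 + deviation)) for wj in w]
    method_rank = rank_with_method(u, fuzzy_w)
    return true_rank[0] == method_rank[0]

if __name__ == "__main__":
    def centroid_rank(u, fuzzy_w):
        # Stand-in for the dominance intensity ranking: score with trapezoid centroids.
        w = np.array([sum(t) / 4.0 for t in fuzzy_w])
        return np.argsort(-(u @ w))

    hits = sum(one_trial(5, 7, centroid_rank) for _ in range(5000))
    print("hit ratio:", hits / 5000)
```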


- Table 1 -

We can conclude that the hit ratio and the correlation coefficient are very similar for the cases of a risk-prone and a risk-neutral DM; the hit ratio is greater than 78% for all the design elements, whereas Kendall's τ is greater than 86%. Both decrease slightly as the number of attributes increases. Moreover, whereas the hit ratio decreases when the number of alternatives is increased, Kendall's τ increases.

CONCLUSIONS

Dominance measuring methods are becoming widely used in a decision-making context with incomplete information, and they have been shown to outperform other approaches, such as most surrogate weighting methods or the modification of classical decision rules to encompass an imprecise decision context.

In this paper a new dominance measuring method has been proposed, in which the weights representing the relative importance of decision-making criteria are described by fuzzy numbers. The method is based on the distance of a fuzzy number to a constant, where the generalization of the left and right fuzzy numbers, GLRFN, is used.

The application of Monte Carlo simulation techniques has demonstrated that the proposed method performs well in terms of two measures of efficacy: hit ratio and rank-order correlation.

Acknowledgments. This paper was supported by the Spanish Ministry of Education and Innovation project TIN 2008-06796-C04-02 and by the Madrid Regional Government project S2009/ESP-1594.

References.

Ahn BS and Park KS (2008). Comparing methods for multiattribute decision making with ordinal weights. Comput. Operations Res. 35: 1660-1670.

Bárdossy A and Duckstein L (1995). Fuzzy rule-based modeling with applications to geophysical, biological and engineering systems. CRC Press: Boca Raton.

Dubois D and Prade H (1980). Fuzzy sets and systems: Theory and applications. Academic Press: New York.

Eum Y, Park KS and Kim H (2001). Establishing dominance and potential optimality in multi-criteria analysis with imprecise weights and values. Comput. Operations Res. 28: 397-409.

Lee K, Park KS and Kim H (2002). Dominance, potential optimality, imprecise information, and hierarchical structure in multi-criteria analysis. Comput. Operations Res. 29: 1267-1281.

Mateos A and Jiménez A (2009). A trapezoidal fuzzy numbers-based approach for aggregating group preferences and ranking decision alternatives in MCDM. In: Evolutionary Multi-Criterion Optimization, LNCS 5467: 365-379.

Mateos A, Jiménez A and Blanco JF (2009). Ranking methods based on dominance measures accounting for imprecision. In: Algorithmic Decision Theory, LNAI 5783: 328-339.

Mateos A, Ríos-Insua S and Jiménez A (2007). Dominance, potential optimality and alternative ranking in imprecise decision making. J. Oper. Res. Soc. 58: 326-336.

Park K (2004). Mathematical programming models for characterizing dominance and potential optimality when multicriteria alternative values and weights are simultaneously incomplete. IEEE Trans. Sys. Man and Cybernet. 34: 601-614.

Raiffa H (1982). The art and science of negotiation. Harvard University Press: Cambridge, MA.

Sarabando P and Dias LC (2009). Simple procedures of choice in multicriteria problems without precise information about the alternatives' values. Comput. Operations Res. (in review); INESC-Coimbra technical report: 1-24.

Stewart TJ (1996). Robustness of additive value function method in MCDM. J. Multi-Criteria Decision Anal. 5: 301-309.

Tran L and Duckstein L (2002). Comparison of fuzzy numbers using a fuzzy distance measure. Fuzzy Sets and Systems 130: 331-341.

Winkler RL and Hays WL (1985). Statistics: Probability, inference and decision. Holt, Rinehart & Winston: New York.


Figure 1


Table 1

Alternatives  Attributes  Hit ratio (risk-prone)  Hit ratio (risk-neutral)  Kendall's τ (risk-prone)  Kendall's τ (risk-neutral)
 3             3          90.78                   90.72                     88.90                     88.85
 3             5          91.42                   91.34                     88.77                     88.66
 3             7          91.02                   91                        87.97                     87.97
 3            10          90.9                    90.96                     87.08                     87.15
 3            15          89.78                   89.78                     86.24                     86.27
 5             3          85.02                   84.98                     89.8                      89.79
 5             5          85.8                    85.8                      89.60                     89.59
 5             7          85.16                   85.12                     89.29                     89.28
 5            10          83.64                   83.66                     88.11                     88.18
 5            15          83.74                   83.66                     86.72                     86.70
 7             3          84.4                    84.38                     92.16                     92.15
 7             5          83.72                   83.66                     91.48                     91.47
 7             7          83.48                   83.34                     91.07                     91.04
 7            10          82.98                   82.94                     90.21                     90.26
 7            15          81.14                   81.16                     88.66                     88.67
10             3          85.22                   85.08                     93.89                     93.87
10             5          83.08                   82.96                     93.51                     93.49
10             7          82.28                   82.18                     93.01                     93.01
10            10          82.74                   82.66                     92.08                     92.12
10            15          78.36                   78.54                     90.41                     90.45


Captions of Figures and Tables

Figure 1: Computing dominance intensities

Table 1. Measures of efficacy


Modelling traffic flow on highways: A Hybrid approach

Salissou Moutari
CenSSOR, David Bates Building, Queen's University Belfast, University Road, Belfast BT7 1NN, United Kingdom. Email: s.moutari@qub.ac.uk

Abstract

This paper treats the problem of vehicular traffic simulation in road networks, using macroscopic type models. More precisely, the paper introduces a new hybrid model which combines the mathematical continuum model in [7, 8] with the nonparametric regression method, in order to simulate traffic flow in road networks. Some numerical simulations have been used to highlight the potential of the new model to reproduce satisfactorily traffic dynamics along highways as well as around intersections. Furthermore, the model offers an appropriate trade-off between accuracy and computational complexity, and therefore it is suitable for real-time application.

Keywords: traffic flow, highways, macroscopic, continuum model, nonparametric regression, hybrid.

1 Introduction

Over the last fifty years, a wide range of methods and techniques have been developed to address the problem of modelling traffic flow in road networks. These methods consist of either microscopic models, which describe the process of the supposed mechanisms of one vehicle following another, e.g. [5], or macroscopic models, which describe the dynamics of traffic flow through macroscopic variables (e.g. density, average velocity and flow-rate) in space and time, e.g. [7, 8, 2, 6]. In order to simulate traffic flow on crowded roads such as highways, microscopic models appear to be non-competitive for computational reasons, thus the work in this paper is exclusively dedicated to macroscopic models. Macroscopic models of traffic flow include models based on historical data, also called knowledge-based models, e.g. [4, 10, 9, 11], and models which are either based on perturbations of the isentropic gas dynamics models, e.g. [7, 8], or derived from microscopic type models, e.g. [2, 6]; these types of models are also termed continuum models or hydrodynamic models. However, each of the aforementioned classes of macroscopic models has strengths and weaknesses, since the models have been designed rather to describe traffic dynamics for a specific area of a road network. Despite their ability to forecast traffic volume at the specific locations where historical observations have been collected (e.g.


entries/exits of intersections), a key concern regarding knowledge-based models is their limitation in reproducing traffic dynamics (in space and time) along a stretch of road. On the other hand, although they describe satisfactorily traffic dynamics along a stretch of road, continuum models are rather crude when it comes to describing traffic patterns in the vicinity of intersections. Therefore, neither knowledge-based models nor continuum models, used separately, can describe effectively traffic dynamics in a road network. The purpose of this paper is to introduce a methodology which uses the past sequences of traffic flow patterns (historical observations) to predict future traffic states around intersections via a knowledge-based model, whereas a continuum model serves to describe traffic dynamics along road sections.

The rest of the paper is organised as follows. Section 2 presents a brief discussion of the macroscopic models which will be coupled to construct the hybrid model. More precisely, these models consist of the Lighthill-Whitham and Richards model [7, 8] (LWR model, in short) and the nonparametric regression method. Section 3 introduces the new hybrid model. Furthermore, in order to show remarkable insights of the new hybrid approach, some experiments have been carried out on a case study, and the resulting numerical results are presented. Finally, a conclusion is drawn in Section 4.

2 Macroscopic modelling of traffic flow

Macroscopic traffic models describe the overall average behaviour of traffic flow rather than the interactions between individual vehicles. Therefore they involve the traffic density, the average velocity and the flow-rate. In this section the two macroscopic models which will be coupled in the next section for the hybrid modelling purpose are presented. These models are the LWR model and the nonparametric regression method.

2.1 A continuum macroscopic model: The LWR model

Continuum traffic flow models are governed by mass conservation or continuity equation(s), and they are either based on perturbations of the isentropic gas dynamics models or derived from car-following models. The first continuum model of vehicular traffic flow was introduced in the fifties by Lighthill & Whitham [7] and Richards [8], and this model is commonly referred to as the LWR model. The main dependent variables used in the LWR model to describe road traffic dynamics mathematically are the density of cars ρ = ρ(x, t) and the average velocity v = v(x, t), respectively, at location x and time t. From these quantities another important traffic parameter is derived, namely the flow-rate q = q(x, t) = ρv, which is of great interest for both theoretical and experimental purposes. The LWR model consists of the single continuity equation (2.1) below, hence it is also called the first order model:

\[
\frac{\partial}{\partial t}\,\rho(x,t)+\frac{\partial}{\partial x}\,q(x,t)=0, \qquad (2.1)
\]

where q(x, t) = ρ(x, t)v(x, t).


Since equation (2.1) simultaneously involves two variables, ρ and v, it does not give rise by itself to a self-consistent mathematical model. Therefore, in order to complete the model, one needs to devise a suitable closure relation which, for example, expresses the flow rate q as a function of the density ρ. Such a closure relation can be obtained by reproducing the typical behaviour of the experimental flow-density relationship, through which the flow q can be expressed as a function of the density ρ. Such a relation of the form q = f(ρ) is commonly termed the fundamental diagram (see Figure 1). The function f is often required to be monotonically increasing from ρ = 0 up to a certain critical density value in (0, ρ_max), decreasing from that critical density up to ρ_max, and concave with a unique maximum point at that critical density in the interval [0, ρ_max]. The parameter ρ_max denotes the maximum density of vehicles allowed along the road, corresponding to a bumper-to-bumper traffic jam. The elementary flow-density relationship used in the LWR model [7, 8] is

\[
f(\rho)=\rho\,v_{e}(\rho), \qquad (2.2)
\]

where

\[
v_{e}(\rho)=v_{max}\left(1-\frac{\rho}{\rho_{max}}\right). \qquad (2.3)
\]

The parameter v_max (> 0) denotes the maximum average velocity, which may be observed by vehicles on an almost empty road.
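The abstract does not state which numerical scheme is used to integrate (2.1); as one hedged possibility, a standard Godunov (equivalently demand-supply, or cell-transmission) update for the concave fundamental diagram (2.2)-(2.3) can be sketched as follows, using the values v_max = 90 km/h and ρ_max = 140 veh/km quoted in the case study below.

```python
import numpy as np

V_MAX = 90.0            # km/h, maximum average velocity (case-study value)
RHO_MAX = 140.0         # veh/km, maximum (jam) density (case-study value)
RHO_C = RHO_MAX / 2.0   # critical density of the Greenshields diagram (2.3)

def flux(rho):
    """Fundamental diagram (2.2)-(2.3): f(rho) = rho * v_max * (1 - rho / rho_max)."""
    return rho * V_MAX * (1.0 - rho / RHO_MAX)

def godunov_flux(rho_left, rho_right):
    """Godunov (demand/supply) flux at a cell interface for the concave flux above."""
    demand = flux(np.minimum(rho_left, RHO_C))    # what the upstream cell can send
    supply = flux(np.maximum(rho_right, RHO_C))   # what the downstream cell can absorb
    return np.minimum(demand, supply)

def lwr_step(rho, dx, dt, rho_in, rho_out):
    """One explicit conservative update of (2.1); rho_in/rho_out are ghost-cell densities."""
    padded = np.concatenate(([rho_in], rho, [rho_out]))
    f = godunov_flux(padded[:-1], padded[1:])     # fluxes at all cell interfaces
    return rho - dt / dx * (f[1:] - f[:-1])

if __name__ == "__main__":
    dx, dt = 1.0, 0.01                 # 1 km cells, 0.01 h steps (CFL: dt <= dx / V_MAX)
    rho = np.full(30, 20.0)            # a 30 km section carrying light traffic
    rho[10:15] = 120.0                 # with a dense platoon in the middle
    for _ in range(200):
        rho = lwr_step(rho, dx, dt, rho_in=20.0, rho_out=20.0)
    print(np.round(rho, 1))
```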


2.2 A knowledge-based macroscopic model: The nonparametric regression (NPR) method

Statistical parametric models, including linear and non-linear regression, ARIMA models, etc., are perhaps the most well-known methods, but other techniques such as nonparametric regression, e.g. [4], neural or fuzzy-neural networks, e.g. [10, 11], Bayesian networks, e.g. [9], etc., are commonly used for traffic flow forecasting. In this work the nonparametric regression technique will be used to forecast traffic flow at intersections. Some of the advantages of the nonparametric method over alternatives include its ability to explicitly use the multivariate nature of traffic states to produce forecasts, its intuitive features, as well as its simplicity to implement. Nonparametric regression is a data-driven heuristic forecasting technique which does not require any rigid assumptions or any prior knowledge about the statistical structure of the data; only sufficiently large quantities of data representing the underlying system are needed. Furthermore, nonparametric regression is a "flexible, powerful method for estimating an unknown function" [1]. The method searches a collection of historical observations for records similar to the current conditions. These records, also called the nearest neighbours to the current conditions, are then used to generate forecasts. The quality of the neighbourhood, i.e. the set of nearest neighbours found by the search procedure, directly impacts the accuracy of the estimates, and a fundamental challenge of nonparametric regression is to define the number of nearest neighbours to be used to generate forecasts. There are two basic approaches to defining the neighbourhood in nonparametric regression: the nearest neighbour based approach and the kernel based approach. Nearest neighbour nonparametric regression uses a fixed number of neighbours to generate forecasts, while kernel nonparametric regression uses any number of neighbours that lie within a fixed distance of the current conditions. One advantage of the nearest neighbour approach, compared to the kernel based approach, is that it will always generate a forecast. Hence the choice of nearest neighbour nonparametric regression in this work.

3 A hybrid model: Coupling the LWR model with the NPR method

Continuum macroscopic models are designed to describe traffic dynamics along road sections, whereas knowledge-based models focus on traffic flow forecasting at some specific locations in a road network where historical data are available. Hence neither of these two approaches, used separately, is able to capture effectively traffic dynamics in road networks. Rather than using either a continuum model or a knowledge-based model solely to model traffic flow in a road network, the work in this paper introduces an alternative approach which consists of coupling both types of models. The main advantage of this hybrid approach is its ability to describe satisfactorily traffic dynamics in each area of a road network. As such, this hybrid model remedies the deficiency of both continuum models and knowledge-based models in describing traffic dynamics in road networks, and provides a trade-off between accuracy and computational complexity. An additional noteworthy feature of the model is its simplicity to implement and to operate, which makes it suitable for real-time application. The hybrid model is structured and operates as shown in the diagram in Figure 2; a schematic coupling step is sketched below.

In order to exemplify this new approach, in the next section we present an application on a case study.
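The following sketch (illustrative only, with an invented synthetic database; it is not the author's code) shows one possible realisation of the loop in Figure 2: a nearest-neighbour lookup in the historical junction records forecasts the flow passing through a junction, and that forecast would then serve as a boundary condition for the LWR update of the downstream road section, as in the scheme sketched in Section 2.1.

```python
import numpy as np

def knn_forecast(history, current, k=10):
    """Nearest-neighbour nonparametric regression: average the targets of the k
    historical records whose observed state is closest to the current conditions.
    history has shape (T, s + t); the first s columns are the state, the rest the target."""
    s = current.shape[0]
    distances = np.linalg.norm(history[:, :s] - current, axis=1)
    nearest = np.argsort(distances)[:k]
    return history[nearest, s:].mean(axis=0)

if __name__ == "__main__":
    rng = np.random.default_rng(1)

    # Synthetic stand-in for the junction database (the real one holds 2,160 hourly
    # records): columns are [inflow A12-A120, inflow A120, outflow A12] in veh/h.
    inflows = rng.uniform(200.0, 1800.0, size=(2160, 2))
    history = np.column_stack([inflows, inflows.sum(axis=1)])   # toy join junction

    # One step of the hybrid loop: the current boundary flows delivered by the LWR
    # sections (fixed numbers here) are matched against the history, and the NPR
    # forecast of the junction outflow becomes the upstream boundary condition of
    # the downstream A12 section at the next time step.
    current_inflows = np.array([900.0, 400.0])
    print("forecast outflow on A12 [veh/h]:", knn_forecast(history, current_inflows))
```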


3.1 Case study

For this study we consider a part of the East Anglia (England) road network which consists of the following roads: A12-A120, A120, A12 and A14. We consider only unidirectional traffic flow on each road. Therefore, as shown in Figure 3, the studied area consists of two junctions: junction A12-A120/A120 → A12 (a join junction) and junction A12 → A14-North/A14-South (a fork junction). At each junction, flow data are collected at the sites denoted by diamonds in Figure 3. At the join junction A12-A120/A120 → A12, the flow data sets came from site 6534 on A12/A120, site V30013383=5 on A120, and site V30015327=3366 on A12; whereas at the fork junction A12 → A14-North/A14-South, the flow data from site 30013374=5 on A12, site 9923 on A14-North, and site V30013396=406 on A14-South were considered. From these sites, matching databases of three months of aggregate 1 hour traffic flow-rates were assembled from January 1st to March 30th (2008), resulting in a database of 2,160 observations. The previously described hybrid approach has been applied to the road network in the encircled area in Figure 3. Initially all the roads are assumed to be empty. The flow data sets collected at each junction serve as the historical observations in which records similar to the current conditions are searched by the nonparametric regression method. For the LWR model the following parameter values have been considered: v_max = 90 km/h and ρ_max = 140 veh/km.

However, the length of the road section A12 between the two junctions is about 30 km, and therefore the distance can be travelled in less than 1 hour under free flow traffic conditions. Hence, in order to capture traffic dynamics along this road section, the time step to be used in the discretisation of the LWR model should be less than 1 hour. On the other hand, only hourly flow data are available at the junctions for the nonparametric regression method. Since the same time step is required for both models in order to couple them, in the simulation a 1 hour time step has been considered for both models and the lengths of the road sections have been scaled up to about 10 times the real length. More precisely, we assumed that the length of the road section A12 between the two junctions is 300 km, whereas for A12-A120, A120, A14-North and A14-South a road section of 200 km from the junction was considered.

The numerical simulation of the hybrid model for 200 hours (about an 8 day time period), using the inflows on A12-A120 and A120 (at about 200 km upstream of the join junction) depicted in Figure 4(a), has produced the following results: the outflows on A14-North and A14-South (at about 200 km downstream of the fork junction) are depicted in Figure 4(b). In Figure 5 the flow-rates passing over the sites where data have been collected at the junctions are plotted. Figure 6 and Figure 7 show the traffic density on each road throughout the simulation time.

The key advantage of this hybrid approach, compared to classical traffic models, is its ability to reproduce traffic dynamics in a road network by taking into account the stochasticity of traffic patterns around intersections. For example, for this case study, it can be observed (see Figure 6(b)) that for certain values of the inflows on highways A12-A120 and A120, a traffic jam builds up on A120 around the join junction. Most interestingly, the model makes it possible to obtain an outline of the way the traffic jam is likely to propagate along A120 upstream of the join junction (see Figure 6(b)).

4 Conclusion and outlook

This paper has introduced a new hybrid model for traffic flow, based on a coupling of the LWR model [7, 8] and the nonparametric regression method. Some interesting features of the model, including a reasonable trade-off between accuracy and computational effort, have been highlighted through numerical experiments. For future research, it would be worthwhile to investigate the potential of this new hybrid model for simulating traffic flow in large road networks.

Acknowledgment

The author is grateful to the Highways Agency for providing the data used in this paper.

References

[1] Altman N S (1992). An introduction to kernel and nearest neighbor nonparametric regression. The American Statistician 46: 175-185.


[2] Aw A and Rascle M (2000). Resurrection of second order models of traffic flow? SIAM Journal on Applied Mathematics 60: 916-938.

[3] Cheslow M, Hatcher S G and Patel V M (1992). An initial evaluation of alternative intelligent vehicle highway systems architectures. MITRE Report.

[4] Davis G A and Nihan N L (1991). Nonparametric regression and short-term freeway traffic forecasting. Journal of Transportation Engineering 117: 178-188.

[5] Gazis D C, Herman R and Rothery R W (1961). Nonlinear follow-the-leader models of traffic flow. Operations Research 9: 545-567.

[6] Kerner B S (2004). The Physics of Traffic. Springer: Berlin.

[7] Lighthill M and Whitham J (1955). On kinematic waves. Proceedings of the Royal Society A229: 317-345.

[8] Richards P I (1956). Shock waves on the highway. Operations Research 4: 42-51.

[9] Sun S, Zhang C and Yu G (2006). A Bayesian network approach to traffic flow forecasting. IEEE Transactions on Intelligent Transportation Systems 7: 124-132.

[10] Taylor C E and Meldrum D R (1995). Freeway traffic data prediction using neural networks. In: Proceedings of the Pacific Rim TransTech Conference 1995, Seattle, Washington, pp. 225-230.

[11] Yin H B, Wong S C, Xu J M and Wong C K (2002). Urban traffic flow prediction using a fuzzy-neural approach. Transportation Research Part C 10: 85-98.



Figures

Figure 1: Fundamental diagram: flow-density relationship (flow ρ v_e(ρ) against density ρ, with maximum density ρ_max and maximum velocity v_max).

Figure 2: Operational structure of the hybrid model. The LWR model block is initialised with the initial and boundary traffic conditions for each road; its inputs are the current traffic conditions at the boundaries of each road and the traffic states on each road during the previous time step, and its outputs are the current traffic states on each road. The NPR model block takes as inputs the flow data set collected at each junction and the current traffic states around each junction, and outputs the forecasted flows passing through each junction.


Figure 3: Case study (schematic map showing the traffic flow direction and the flow data collection sites). Only unidirectional traffic flow is considered on each road. The diamond on each road at the junction denotes the site where flow data are collected.

Figure 4: Flow-rates (in veh/h, against time in hours) at the boundaries of the case study road network: (a) inflows on A12-A120 and A120 at about 200 km upstream of the join junction; (b) outflows on A14-North and A14-South at about 200 km downstream of the fork junction.


Figure 5: Flow-rates (in veh/h, against time in hours) at the junctions of the case study road network: (a) inflows on A12-A120 and A120 and outflow on A12 at the join junction; (b) inflow on A12 and outflows on A14-North and A14-South at the fork junction.

Figure 6: Traffic dynamics along road sections of the case study road network (contour plots of traffic density, space in units of 100 km against time in hours, shaded from 0 to ρ_max): (a) contour plot of the traffic density along a section of 200 km on A12-A120 upstream of the join junction; (b) contour plot of the traffic density along a section of 200 km on A120 upstream of the join junction; (c) contour plot of the traffic density along A12 (about 300 km) downstream of the join junction.


Figure 7: Traffic dynamics along road sections of the case study road network (contour plots of traffic density, space in units of 100 km against time in hours, shaded from 0 to ρ_max): (a) contour plot of the traffic density along a section of 200 km on A14-North downstream of the fork junction; (b) contour plot of the traffic density along a section of 200 km on A14-South downstream of the fork junction.