
Approaches to Benchmarking: the case of 'framework conditions' and ICT-Os

by

Professor John Bessant and Professor Howard Rush

Centre for Research in Innovation Management
University of Brighton

paper prepared for the
Institute for Prospective Technological Studies
European Commission Joint Research Centre

February 1998


Foreword

Effective exploitation of new ICTs to economically productive ends, as much as their efficient diffusion, requires major changes and innovation in the organisational arrangements employed by firms and institutions, both internally and in their external networking practices. This ICT-O coupling, or mutual dependence of ICTs and organisational arrangements, arguably constitutes one of the most formidable generic technological challenges presently facing all sectors of business. It is also a further indication of a new competitiveness regime taking hold in industrialised countries whereby knowledge and the efficiency of its manipulation are increasingly the sources of competitive advantage and the drivers of new markets and new business opportunities, and thus of new employment. In this way, ICT-Os constitute a critical component of the knowledge infrastructure and resource on which this new competitiveness regime is being built.

This present paper has been prepared as a background, and input, to the discussions around the setting up of a pilot benchmarking exercise in the field of ICT-Os. The exercise aims to test the applicability of benchmarking in the public policy arena, by identifying the links between Framework Conditions and best practice in the field of ICT-Os, as well as the means to improve policies to make framework conditions more propitious to the attainment of best practice. It is one of four pilot exercises being conducted across several EU Member States in the frame of an initiative of DG III and the Directorates General for Industry from the Member States.

The paper traces the origins and development of benchmarking, describes a number of examples, and gives a scheme or typology with which to classify different types of benchmarking activities.

Most importantly, the paper includes a checklist of some very practical considerations regarding how to actually conduct a benchmarking exercise. In light of this checklist, the paper also reflects upon the applicability of a benchmarking approach to 'framework conditions' in general, and in relation to the ICT-O issue in particular. It thereby provides valuable food for thought, of relevance as much to the ICT-O benchmarking exercise currently underway as to any other similar exercise.

James P. Gavigan, IPTS
20 February 1998


Approaches to Benchmarking: the case of 'framework conditions' and ICT-Os

John Bessant and Howard Rush

Benchmarking is not a new concept. The term originates from the approach used by civil engineers in construction to measure against a standard but has now become widely used to indicate some form of structured comparison. In particular it can be used to measure against higher standards with a view to enabling learning about how to close the gap. This paper presents some ideas on how such an approach might be used in the case of benchmarking 'framework conditions' which influence the diffusion and use of information and communication technologies and organisational techniques. Before addressing the benchmarking of policy frameworks and implementation mechanisms which are seen as underlying innovation and diffusion, the paper outlines some of the key historical influences on benchmarking, explores the range of areas to which benchmarking has been applied, and presents a set of principles and a generic model of benchmarking.


1: The evolution of benchmarking

1.1 Early use

In a business context, the origin of the word goes back to the experience of the Xerox Corporation in the 1980s and is well described by Camp (1989). Xerox were facing a decline in market share in Japan and instituted a structured comparison of different players in that market against their own performance. The results were disturbing; on a variety of indicators - including product design and performance, speed and quality of service, cost, etc. - Xerox measured poorly against its major rivals. The resulting shock led to a systematic process of comparison against factual data on many dimensions of both performance (how big is the gap?) and practice (what are they doing differently - and better - than us?). Some of this involved work on products - the kind of 'reverse engineering' approach which many companies have used before. But much of it was on the relatively new area of process - how competitors actually created and delivered better products faster, more flexibly, with higher quality, etc.

The impact of this approach on Xerox is widely held to have been critical. It forced the company to rethink its strategy and to pay attention to major new areas of development - and helped turn around its fortunes. More importantly, it did so by warning of a crisis before that crisis actually overtook and incapacitated the firm. Benchmarking is thus a powerful way of providing early warning of the nature of gaps to be closed. But it also has a valuable role in enabling learning - in opening up a company to new and better ways of doing things. It is this learning aspect of benchmarking which has made it such a powerful business tool.


1.2 Benchmarking and business process re-engineering (BPR)

The emergence of benchmarking took place at approximately the same time as another powerful idea began to take hold - that of process re-engineering (Davenport, 1992). This concept, with its origins in work study and systems analysis, involved organisations in stripping down their thinking about business to a few essential 'core' processes which could be systematically mapped, measured and improved. Benchmarking offered a powerful way of doing this, particularly in terms of inter-firm comparisons of how common core processes were carried out, and became an important part of the BPR toolkit.

Process improvement methodologies of this kind became very fashionable during the late 1980s and their diffusion was accelerated by their adoption and promotion by many management consultants. BPR was initially triggered by experiences in banking and insurance where IT had failed to deliver the expected gains in productivity and where it was recognised that computerising inefficient or ineffective processes would simply compound the problem (Hammer, 1990). Hence the need to rethink from a 'back to basics' perspective, and the role for learning tools such as benchmarking within it. BPR's concerns moved from IT support to overall process mapping and improvement, and spread from a focus on overall core processes to application within sub-processes which might be carried out by particular parts or divisions of the organisation.

1.3 Benchmarking and total quality management (TQM)

A second stream was also developing in parallel which represented fertile ground for the benchmarking approach. The concept of 'Total Quality Management' (TQM) had originated in Japan in the late 1960s and had been experimented with by many Western firms (Oakland, 1989). But the initial attempts to change culture had often failed to deliver and there was considerable disenchantment with the approach and with particular tools such as quality circles. Work in the USA on developing a national quality award similar to the Deming Prize in Japan led to the emergence of a more holistic view of TQM which integrated themes like leadership, strategic direction, external and internal linkages and the processes which needed to be put in place to ensure that quality became embedded in the day-to-day activities of the company. Importantly, this approach laid stress on measuring and on using regular monitoring of performance to drive the process of quality improvement; the Baldrige award (named after the US Secretary of Commerce who promoted it) offered a template for 'best practice' which many companies began using as a framework for benchmarking their quality activities (Garvin, 1991).

This model was refined and extended by others - for example, in Europe the European Foundation for Quality Management (EFQM) developed its own framework which looked at both the enablers and the results of quality activities and tried to put measures in place for both of these. This was important in the development of benchmarking because it moved the emphasis from simple performance comparisons to include the dimension of practice - and in doing so opened up the possibility for extensive learning between firms about how to achieve better performance. The value of this approach was enhanced by the growing pressure on firms to demonstrate quality capability by acquiring ISO 9000 certification; in order to achieve this standard it was necessary for firms to define quality processes, measure them and then improve them. Benchmarking against these frameworks provided a powerful route to learning how to do this.

1.4 Benchmarking and ‘best practice’

The concept of 'best practice' began to enter the vocabulary of managers and policy-makers in the early 1990s, with the recognition that the benchmarking approach could be used to help motivate and promote change across industry. Industrial policies which had relied on awareness-raising and demonstration projects showing what could be done could now be complemented by a powerful approach which showed firms the urgent reasons why something should be done - because of the costs of not closing the gap with 'world class' or other models of best practice. It is an approach which is increasingly used - across sectors, amongst different groupings of firms (for example regionally or by size) - and which offers a framework for the development of a series of improvement activities.

Figure 1 shows the evolution of the benchmarking approach in outline, together with some examples of application. 'Probe', 'Prism' and 'Microscope' are all names of particular benchmarking approaches, whilst 'Benchmarking 2000' implies the emerging state of the art at the end of the decade.

Figure 1: Evolution of Approaches to Benchmarking

[Figure: a timeline of benchmarking's evolution, running from reverse engineering of products and the Xerox work on the Japanese market, through total quality management frameworks (Baldrige etc.), the European Quality Award model, business process re-engineering and 'best practice' approaches, towards self- and facilitator-driven benchmarking as a tool for starting the change process (Probe, Prism, Microscope), benchmarking as a learning aid, international policy benchmarking and 'Benchmarking 2000'. Application domains shown include manufacturing, services, government and R&D institutions.]

2: Application and Typology of Benchmarks

2.1 Example applications of benchmarking


Since its early development in Xerox and other large US corporations the practice of benchmarking has evolved and has found increasingly widespread application. Some examples (which range from manufacturing through to the service sector, as well as exercises conducted at the activity, the firm and the sector level) highlight its potential for use as an aid to learning in a variety of situations:

• In the UK there has been an attempt to provide a structured benchmarking approach to help firms position themselves against a generalised model of good performance and practice and then to focus on particular development plans suggested by the comparisons.[1] This exercise, known as the PROBE model, is being actively promoted by the Confederation of British Industry and evolved from work done at London Business School with IBM Consulting Group. Having developed a range of relevant measures of both performance and practice, the first phase of this work was to promote awareness of the models and the gaps which they implied. The follow-up phase was to offer a diagnostic service whereby firms would measure themselves using the benchmarking framework and then identify action plans. The feedback from this was framed against the overall national position and then against 'firms like them' - by sector, size, etc.; the process was facilitated by an expert and supported by software which allowed rapid interpretation and display of the relevant benchmarks. It is a cumulative benchmark - the more firms use the service the more data goes into the database to sharpen the model (a short sketch of this cumulative idea follows these examples).

• An alternative process - the UK Benchmarking Index - is being run on behalf of the UK Department of Trade and Industry by PERA Consulting and offered as a service through the Business Link infrastructure to SMEs. The approach is very similar but the dataset is more oriented towards SMEs. A variant on the Probe theme has also been developed - 'Microscope' - which is tailored more to the needs of SMEs. In both cases the emphasis is firmly on using the benchmark as a motivator to learning and development and as a way into various kinds of support and promotional activity offered as part of government programmes.

• The UK DTI has also commissioned a benchmarking study of how the UK is developing within the Information Society (Spectrum/DTI, 1996). This study focused on the factors which were perceived as 'drivers' behind the relevant activities in nine countries. The work monitored the UK's progress against the eight selected countries as a means of identifying areas of strength and weakness. A follow-up study (Spectrum/DTI, 1997) focused on the ownership of Information and Communications Technologies (ICTs) in five nations as a means of identifying inhibiting factors behind company usage of ICTs. This study was undertaken via telephone interviews of 500 companies in the UK and a sample of 200 companies in the US, Japan, France and Germany, followed by large surveys of consumers and internet users. The work was supplemented by case studies to provide additional qualitative information.

• The Ministry of Economic Affairs in The Netherlands (1995) conducted a wide-ranging exercise entitled Benchmarking the Netherlands which it described as a 'competitiveness test'. The exercise examined the performance of The Netherlands in a number of infrastructural themes including monetary and fiscal stability, technology and education, the physical and the fiscal infrastructure, capital and labour markets and the market for goods and services. Performance was compared with data from Germany, Belgium and Denmark (which are seen as sharing a number of characteristics with The Netherlands) as well as the United States and Japan (which are perceived as having very different institutions). Although the study is biased towards measures of performance rather than towards assessing the institutional practices and policy mechanisms applied within any of the 'infrastructures', the intention was to identify areas where "economic potential can be strengthened" and to learn. According to the Ministry, the outcomes provided a research agenda for comparisons of international policy.

• Benchmarking Research and Technology Institutes (RTIs) - Rush et al. (1996) analysed in depth some of the most successful RTIs from nine nations in order to show precisely what they do, how they do it, how they are funded and how they work with industry. The intention was to illustrate practical, best-practice strategies by showing how leading RTIs manage and organise themselves within their national systems of innovation. The results have been used to facilitate learning in less successful RTIs.

• An alternative to this approach of benchmarking RTIs can be found in the work of the World Association of Industrial and Technological Research Organisations (WAITRO), which has conducted a detailed survey of the practices used in the daily operations of over sixty RTIs and locates each institute against a previously developed model of best practice (Grier, 1996).

• The Higher Education Funding Council Executive has been at the forefront of benchmarking in a variety of academic areas in the UK education system (Fielding, 1997). Examples include a practice benchmarking study of the teaching of history, another concerned with ways of assessing student performance, and a third related to the procurement process. (The latter is more about a range of administrative practices than education but was used to suggest ways of both saving money and improving services.)

• The provision of health care is one of the most recent areas for the application of benchmarking. Although relative newcomers in this field (in comparison with manufacturing), medical practitioners and administrators have nevertheless become among the most active in instigating benchmarking activities.[2] Examples include clinical practices (Maxwell, 1996), primary paediatric services (Morris, 1997), laparoscopic cholecystectomy services (Campbell, 1994) and facilities management (Wagstaff, 1995).

• An on-going exercise commissioned by DG III of the European Commission is aimed at benchmarking the diffusion and utilisation of ICT-Os. The first phase of this work was completed at the end of 1997 and collated available secondary data on the use of ICT-Os across the European Community and the United States. Proposals for the second phase of this project include selected case studies and the benchmarking of those 'framework conditions' which are perceived as influencing rates of diffusion.

[1] Reports such as 'Made in Britain' (Voss, 1994) and 'Made in France' (Taddei and Coriat, 1993) suggested that whilst a small number of firms were at the level of performance and practice which justified their being labelled 'world class', there were many others who were in danger of lagging behind and being displaced in terms of their competitiveness.

[2] The UK's National Health Service has, for example, set up the Clinical Benchmarking Company (a joint venture between the NHS Trust Federation and Newchurch & Company) (Phillips, 1995).
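
The cumulative character of schemes like PROBE or the Benchmarking Index - each participating firm adds its scores to the database against which later firms are positioned - can be made concrete with a short sketch. This is a minimal illustration in Python; the metric name, figures and percentile logic are our own invention, not the actual PROBE instrument or its data.

    from bisect import bisect_right

    # Minimal sketch of cumulative percentile benchmarking (illustrative only;
    # the metric name and figures below are invented, not PROBE data).
    peer_db: dict[str, list[float]] = {
        "on_time_delivery_pct": [82.0, 85.0, 88.0, 91.0, 94.0, 97.0],  # higher is better
    }

    def add_firm(metric: str, score: float) -> None:
        """Each new participant's score enriches the reference set."""
        peer_db.setdefault(metric, []).append(score)

    def percentile(metric: str, score: float) -> float:
        """Share of peers scoring at or below `score`, as a percentage."""
        ranked = sorted(peer_db[metric])
        return 100.0 * bisect_right(ranked, score) / len(ranked)

    add_firm("on_time_delivery_pct", 90.0)
    print(f"position vs peers: {percentile('on_time_delivery_pct', 90.0):.0f}th percentile")

The more firms contribute, the finer-grained the comparison against 'firms like them' becomes, which is exactly the claimed advantage of the cumulative design.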

2.2 Types of benchmarking

From the examples provided above, it is clear that benchmarking can be applied in a variety of different ways.[3] One way to classify these might be by objective, as shown in Table 1:

Table 1: Benchmarking by Objectives

Competitive: comparison of the organisation's performance against one of its competitors.

Process: measurement and comparison of a specific process against the similar process of the organisations known to be best at that process.

Functional: a variation of process benchmarking that compares a function of the organisation to the same function in other organisations.

Generic: a variation of process benchmarking that compares like processes of two or more organisations without limitation to competitors or the same industry.

Industry: comparison of processes within organisations in the same industry, but not necessarily competitors.

Performance: comparison of the performance attributes of one company's product against the attributes of another company's corresponding product.

Strategic: an approach to strategic business planning based on study and adaptation of strategies from organisations known to be best at the processes supporting those strategies.

Tactical: a variation of process benchmarking involving the comparison of short-term processes as differentiated from longer-term processes.

[3] See Voss et al. (1997) for the results of a survey which looked at how manufacturing companies used benchmarking.

Classification by objective is an approach which can be applied both to products/services - the things which an organisation offers - and also to the processes whereby those offerings are created and delivered. Whilst much of the early focus has been on firm-level benchmarking, there is no reason why the underlying approach cannot be deployed in different contexts, and there is now some activity around benchmarking of, for example, government policies and the processes for implementing them.

Following this, a second approach to classification would be by the opportunities for deploying benchmarking in different contexts, as illustrated in Table 2.

Table 2: Benchmarking by Activity

Activity level
  Process: comparisons of how different parts of the same organisation carry out a common activity.
  Examples: different production cells making the same part, or different administrative groups carrying out similar activities.

Intra-firm processes
  Process: comparisons of how different divisions or units of a business carry out similar processes.
  Examples: different car plants within the same company with different levels of productivity - how are they doing it? Different branches of a bank processing cheques with different error levels - how?

Inter-firm processes
  Product: reverse engineering of comparable products from different manufacturers.
  Process: comparisons of firms within the same sector carrying out similar processes.
  Examples: the IMVP study of 70+ car assembly plants (Womack et al., 1990); sector-wide comparisons organised by trade associations; international studies of sector performance - e.g. the auto components sector, UK vs. Japan.

Out-of-industry benchmarking
  Product: functionality comparison in different product types.
  Process: comparisons of different firms from totally different sectors but with a common basic process.
  Examples: hospitals comparing their asset utilisation practices (beds, operating theatres, etc.) against those of industry; manufacturers comparing logistics against those of fast-moving consumer goods retailers; airlines comparing changeover practices (air-to-air time at airports) with fast set-up time manufacturing.


3: Applying a benchmarking approach

3.1 Key principles of benchmarking

As the previous section indicates, there is a wide variation in applications of benchmarking. However, all benchmarking is fundamentally a structured approach to comparison. Underlying any successful benchmarking activity are some important basic principles. These need to be taken into account if the exercise is to be worthwhile. They include:

• it requires a clear focus - the tighter the definition of the core process being studied, the more valuable and focused the learning opportunities. A useful aid to this is the development of a flow-chart representing all the stages in the process (see Figure 2).

Figure 2: Flow chart for process benchmarking

[Figure: a cross-functional flow chart of an order-handling process spanning the customer, delegation, credit department, order management, data processing and plant. The customer places an order; the delegation receives it; the credit department carries out credit checking, looping to resolve any credit conflict before the order is sent on; order management receives and forwards the order; data processing checks stock and, where needed, draws up a production plan; the plant defines priorities and completes assembly and delivery. A key distinguishes activities, decisions and process flow.]

Source: Temaguide (1998)
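
One practical way to work with such a flow-chart is to capture it as data, so that each stage can later carry the indicators discussed in the next principle. The sketch below, in Python, takes its stage names from the figure; the simplified linear structure and the timings are our own illustration, not part of the Temaguide method.

    # Sketch: the Figure 2 stages captured as data (a simplified, linear view;
    # the decision loops of the figure are noted only as comments).
    order_process = [
        ("customer", "place order"),
        ("delegation", "receive order"),
        ("credit department", "credit check"),  # decision: ok? if not, resolve credit conflict
        ("order management", "send order"),
        ("data processing", "stock check"),     # decision: ok? if not, draw up production plan
        ("plant", "define priorities"),
        ("plant", "assembly and delivery"),
    ]
    for owner, activity in order_process:
        print(f"{owner}: {activity}")

    # Once the process is documented, indicators can be attached per stage;
    # the hours here are invented purely to show the idea.
    stage_hours = {"credit check": 6.0, "stock check": 2.5, "assembly and delivery": 48.0}
    total = sum(stage_hours.values())
    for stage, hours in stage_hours.items():
        print(f"{stage}: {hours}h ({100 * hours / total:.0f}% of measured lead time)")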

• it requires measurement - the more developed the measures, the more effective the learning becomes. In the case of the above figure we can begin to document the process and try to identify which are the key indicators that actually show the performance of the process.

• it requires differentiation between the dimensions of performance and practice - for example, knowing that Motorola has a quality level on average of x parts per million defects (ppm) whereas IBM has one of y ppm tells us something about their relative performance and identifies the gap to be closed. But it tells us nothing about how each firm does it. We need to look at the practices within each and then to ask two questions: do they both have the same practices (or are some missing) and, if so, is one carrying them out more effectively than the other - and how? (A small sketch of this distinction follows the list.)

• it is primarily an aid to learning - providing inputs of new ideas for improving practices within organisations. Whilst the framework may be used to facilitate the award of prizes or to identify 'best in class', this is only a motivator of learning, not an end in itself.

• it works best when the gaps involved are not too large - for example, a small engineering firm benchmarking against 3M or IBM may find that there is too big a gap in both performance and practice - and thus dismiss the potential for learning. Equally, that same firm benchmarking itself against other SMEs within the same geographical region would be strongly motivated to change because it is comparing itself to 'people like us'. (This is a weakness in early approaches which set up 'world class' as the benchmark. Firms often saw the gap as so big and beyond their aspirations that they did nothing in response to the benchmarking.)

• it can be incorporated into a more integrated framework - for example, the EFQM/Business Excellence model, the Baldrige award, Probe, etc. However, in each case the overall model needs to have clear component dimensions with clear measures, as indicated above.


• it can be applied across sectors and internationally - although the risk is that in broadening the scope the focus is lost. Cross-country comparisons need to be well designed to ensure that like-for-like comparisons are being made and that the influence of external contextual factors is minimised or else allowed for in interpreting the results.

3.2 A generic model of benchmarking

By applying the principles described above, it is possible to conceive of a general seven-step model. As depicted in Figure 3, the activity begins with the decision of where to apply benchmarking - which might be decided through consultation with one's customers. This is followed by the actual benchmarking exercise, with a plan for improvement, implementation, monitoring and continuous improvement.

Steps two through four represent basic activities in a formal process of benchmarking. On their own, however, they result in little more than an indicator of where something stands in relation to others - providing a league table or performance indicator. As a tool for learning, however, the results need to be communicated, recommendations formulated, implementation plans devised (including timescales and resources required), and they need to be continually monitored and updated. Furthermore, it is important to keep in mind that best practice is a dynamic concept and that what is being benchmarked against is unlikely to have stood still.


Figure 3: Typical generic approach to benchmarking

[Figure: a cycle of seven steps.]

1. Decision on where to apply benchmarking - ask customers about their requirements for the product; self-assessment on the critical factors identified by the customer.
2. Understanding of the subject.
3. Identification and understanding of 'best-in-class' and data gathering.
4. Analysis and comparison of results.
(Steps 2 to 4 are the particular and key steps in benchmarking.)
5. Improvement plan to overcome the best-in-class - communication of benchmarking results; justification of the recommendations in terms of resources; determination of how much the organisation could improve; development of action plans.
6. Implementation and monitoring of the action plans - the action plans should be clearly expressed in terms of time and resources; the implementation should be guided by flexibility.
7. Continuous improvement - constant updating and follow-up of the results from the analysis; it is especially important not to forget the possible improvements achieved by competitors.

Source: Temaguide (1998)


4: Benchmarking Policy Initiatives

4.1 Benchmarking as a tool for the policymaker?

It is easy to conceive of such a model being widely applied in industry. Theoretically, the model depicted in Figure 3 should also apply equally well for any type of comparison. But, although the term is often used loosely in the context of its application in, for example, the area of government policy design and implementation, there is still relatively little experience of its systematic deployment in this and other spheres of activity.

In principle benchmarking has much to offer policy-makers, since it could facilitate learning and diffusion of good practice. There is some evidence (Dodgson and Bessant, 1996) that learning of this kind is increasingly common and that there is convergence of policy and implementation mechanisms, particularly across the EU countries. However, making use of benchmarking in the policy arena requires consideration of several points.

First is the question of focus. Making a systematic comparison of several support schemes with a similar focus would be possible, but comparing something broad and multi-faceted like education policy would be extremely difficult. Similarly, comparing core processes carried out by similar departments in different countries has some potential, but a comparison at the level of whole Ministries would be fraught with difficulty.

Closely related is the second question of measurement. Whilst most policymakers are required to be accountable for their work and have thus introduced various kinds of performance measure, these are not always directly comparable. Measurement of take-up or of effective implementation of a policy would be appropriate, but care would be needed to ensure like-for-like comparisons. Downstream measures of performance, such as the overall adoption/diffusion of a particular technology which was the subject of a government promotion policy, would be much more difficult to undertake. First there is the need to separate out what proportion of the diffusion could be directly attributed to the influence of government policy (as opposed to other forces) and second, how much and in what ways was the policy influential?

That said, suitable performance measures can be developed which at least permit comparisons to identify where things appear to be working better, and this can be used to focus attention on possible learning points in the practice area. It is here - the how of designing and implementing policy - where the main benefits of benchmarking are likely to arise. Whilst it is unlikely that sufficiently accurate performance measurements can be devised to allow for an accurate comparison, broad-brush measures can help show up where there are differences, and this can aid learning and the flow of new ideas about how to enhance performance through adoption of new or improved practices.

For example, take the case of a widely used policy on awareness raising about a particular technology. Many countries are now trying to do this and are using a wide variety of methods - from media campaigns, through roadshows, exhibitions and demonstration site visit programmes, to various kinds of facilitated diagnosis and advice. These programmes are all trying to achieve the same kind of performance measure - a higher level of awareness amongst a target population of firms. And they are all doing broadly similar things, such that benchmarking would be useful to effect a structured comparison between what different countries are doing (is someone doing something extra or different to our suite of mechanisms?) and how they are doing it (are there different modes of implementation which are more or less successful?).

The challenge is thus to define the focus tightly enough to enable meaningful comparisons to be made, and to emphasise the practice dimension in addition to the performance one, since it is here that most opportunities for learning and development of 'best' practice arise.


4.2 Benchmarking of ‘Framework conditions’

With the above comments in mind it is possible to explore the potential for using benchmarking to investigate what have been called 'framework conditions'. At the outset it is useful to specify what we understand by the term; we see it as covering the national or even supra-national framework of policies and their implementation mechanisms. In many ways there is considerable overlap between the concept of framework conditions and that of 'national systems of innovation' (Lundvall, 1992), in that both are concerned with the context within which industrial change takes place. Elements might include:

• the science and technology infrastructure and its connections (or lack of them) with industry;

• the education system (formal, vocational, etc.);

• the financial system;

• the industrial relations system;

• the sectoral support infrastructure (trade associations, technical bodies, etc.);

• specific industrial policy mechanisms at both national and regional level; etc.

Each of these areas contains a sub-set of specific policies and mechanisms which have a bearing on how easily change can take place. For example, the adoption of a new manufacturing method might be facilitated by the availability of information emerging from the science and technology infrastructure, and the mechanisms which enable it to be transferred. It might benefit or be retarded by the availability of appropriate skill sets emerging from the education system. And investment in it can be enabled or retarded by the operation of the financial system. Policy mechanisms can accelerate and direct change through a mixture of advice, subsidy and information. And so on.

At this high level of aggregation it would be extremely difficult to apply a benchmarking approach, but as we move down towards more specific areas, so the potential increases. As suggested in the previous section, it would be possible to benchmark different policy approaches to raising awareness of new techniques, for example. Equally, it would be possible to look at that part of the vocational education system which identified and delivered appropriate skills to support a particular type of new technology, and benchmark its operational practices against those in place in other countries.

The challenges in applying benchmarking to framework conditions are thus those of (a short illustrative sketch follows the list):

• focus - disaggregating the framework conditions sufficiently to provide a 'core process';

• clarity - deciding on the approach to benchmarking being employed - between activities, between processes, out-of-industry, etc.;

• measurement - defining relevant, feasible and appropriate measures of both performance and practice;

• purpose - clarifying whether the primary purpose is historical performance comparison (in which case emphasis will be on performance benchmarking) or learning and continuous improvement of policies and their implementation (in which case emphasis will be on practice benchmarking).
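
As a minimal illustration of how these four challenges might be pinned down before an exercise starts, they can be treated as fields in a short specification which any proposed exercise must complete. The sketch below is our own construction; the field names and example values are illustrative, not part of any established benchmarking methodology.

    from dataclasses import dataclass, field

    @dataclass
    class BenchmarkingSpec:
        """A proposed exercise should fill in all four elements before it begins."""
        focus: str                                      # the disaggregated 'core process'
        clarity: str                                    # type of comparison being employed
        performance_measures: list[str] = field(default_factory=list)
        practice_measures: list[str] = field(default_factory=list)
        purpose: str = "learning"                       # or "performance comparison"

    spec = BenchmarkingSpec(
        focus="awareness raising for a named technology",
        clarity="inter-country comparison of policy mechanisms",
        performance_measures=["share of target firms aware of the technology"],
        practice_measures=["mechanisms deployed", "frequency of review and revision"],
    )
    print(spec)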

4.3 The case of ICT-O

It will be useful to continue this discussion of the appropriateness of benchmarking with reference to a specific example - the case of the diffusion of information and communication technology/organisational techniques. We can consider these under the four headings mentioned above.


4.3.1 Focus

The first difficulty raised here is the high level of aggregation in the topic. There are certainly difficulties in clustering ICTs, never mind also linking in organisational techniques. From the company-level perspective the issue is less one of shopping for a particular technique than one of trying to make strategic improvements to performance using the bundle of approaches loosely grouped under labels like 'world class manufacturing'.

If our aim is to understand how particular framework conditions affect the diffusion of a particular technology within this bundle, then that will be different from looking at those conditions which address the promotion of a broad set of the technologies. Depending on which approach we take, we will need to devise different measures for the performance end. For example, the former case would aim to count how many firms have adopted that specific technology whereas the latter would look to broader measures of take-up - perhaps based on a checklist.

There is also the question of separating out measures of diffusion from those of performance improvement as a result of using these technologies. In the former case it is relatively easy to measure the spread of particular technologies (especially those embodied in particular pieces of equipment). But there is widespread evidence to indicate that simply adopting a new technology or technique may have no impact (or even a negative impact) on firm performance.

The existence of a 'productivity paradox' has, for example, been well documented by, among others, Roach (1994) for IT investment and by Rush and Bessant (1992) in examining the impact of advanced manufacturing techniques. High levels of capital investment clearly failed to have the anticipated impact on productivity. In some cases this may have been due to inappropriate implementation, such as failing to integrate activities controlled by digital technologies or not recognising the importance of adopting complementary organisational techniques. Clearly measuring 'ownership' is not the same as measuring 'use' or the 'intensity of use', and if we wish to measure the improvement in performance (productivity etc.) as a result of diffusion of these technologies it becomes much more difficult (Ferraz et al., 1996).

The third area of difficulty here is in the aggregation of what we mean by framework conditions in this context. Clearly there are many features which might have a bearing on diffusion rates - as suggested in the previous section. Possible framework conditions might include both general 'background' conditions (such as the general business infrastructure or the legal and regulatory framework) and specific policy-relevant conditions which can be developed and changed as a result of learning through benchmarking. We need to be clear about which ones we are focusing on - for example, industrial promotion policies rather than education policies.

These are not insurmountable problems but they do indicate the care with which benchmarking exercises need to be designed if they are to be meaningful. In this case we might envisage a series of parallel benchmarking exercises, each dealing with a disaggregated element, rather than trying to conduct an over-arching and largely meaningless comparison of the whole. For example, if the core process we are interested in is awareness raising, then the practices which might be benchmarked are a range of different mechanisms such as exhibitions, demonstration visits, consultant advice, media publicity, seminars, etc. Alternatively, performance measures might be the number of firms which report awareness of the bundle of ICT-Os selected. At the least, some form of structured methodology will be needed, identifying (as above) the core process and the performance and practice areas and measures.


4.3.2 Clarity

The type of benchmarking selected will affect the choice of measures, etc. If we are to go for an inter-country comparison of practices then this will probably mean a different level and type of data collection than if we are to look at benchmarking diffusion within a sector or between sectors within a national economy. If the purpose is to look internationally then care needs to be taken to ensure that local conditions are allowed for; this is less of an issue in sectoral or national-level studies.

4.3.3 Measurement

The main issue here is one of identifying suitable measures for the performance and practice variables which we want to look at. A secondary question is then going to involve the availability of data and/or the development of suitable methods to collect it.

In terms of identifying suitable measures, this argues strongly for disaggregation. For example, in the performance area it is much easier to collect data on the number of robots installed than on the overall diffusion of a bundle of different ICT-Os. A number of sources exist to help supply this information from the supply side, and surveys and other instruments have been used to collect data from the demand side. (Examples include the Benchmark survey carried out annually within the UK by Works Management magazine, or the database managed by the UN ECE group on engineering industries.)

For organisational innovations it may be necessary to construct measures, or to measure using a structured checklist of examples. In either case what can be collected is only data about the adoption, not the effective use, of the technologies.


On the practices side the first measure is simply one of presence or absence: does one country deploy the same policy mechanisms as another, for example? Beyond that it is likely to be difficult to assess the extent to which practices are well or badly implemented; this is likely to require the development of simple scales which are then gradually refined. For example, we can construct a simple scale based on bad-OK-excellent and then begin to define characteristics which we would expect to see at each point on the scale. The following is a simple example of such a scale:

Assume the practice area is awareness raising.

First-pass measure: which of the following mechanisms do you use (yes/no answers)?

  Exhibitions, TV, radio, newspapers and magazines, consultant advisors, roadshows, demonstration projects, site visits, video services, etc.

Second-pass measure: for each of the mechanisms used, indicate the extent to which the practice is developed:

  1 = never actually implemented, although technically part of our range of mechanisms
  2 = occasionally used but not reviewed or improved since we started
  3 = used from time to time and reviewed annually
  4 = used regularly and reviewed at monthly intervals; the practice is modified on a monthly basis in response to this feedback
  5 = used frequently and with a high degree of user feedback, which is taken frequently and forms a regular input to updating the practice

In other words, we can begin to develop simple scales and then continuously refine and sharpen them.
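
A sketch of how such a two-pass measure might be scored is given below. The mechanism names follow the example above; the country label, the scores and the crude aggregate are invented for illustration, and any real exercise would refine them iteratively.

    MATURITY = {
        1: "part of the range but never actually implemented",
        2: "occasionally used, never reviewed or improved",
        3: "used from time to time, reviewed annually",
        4: "used regularly, reviewed and modified monthly",
        5: "used frequently, continuous user feedback drives updates",
    }

    # First pass: which mechanisms are used at all? Second pass: a 1-5
    # maturity score for those present (0 = not used at all).
    country_a = {
        "exhibitions": 4,
        "roadshows": 2,
        "demonstration projects": 5,
        "site visits": 0,
        "consultant advisors": 3,
    }

    for mechanism, score in country_a.items():
        print(f"{mechanism}: {score} - {MATURITY.get(score, 'not used')}")

    # A crude aggregate for cross-country comparison, to be refined iteratively.
    used = [s for s in country_a.values() if s > 0]
    print(f"coverage: {len(used)}/{len(country_a)}, mean maturity: {sum(used) / len(used):.1f}")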

This kind of approach has been used in a number of benchmarking projects where the variables are not easily measured. For example, in the Software Engineering Institute's Capability Maturity Model (Paulk, 1995) assessments are made of different software development practices through a scaled scoring system of this kind, as they are in the CIRCA model of continuous improvement (Bessant and Caffyn, 1997).


4.3.4 Purpose

In many ways this is the most significant question in designing benchmarking studies. If the purpose is primarily one of performance comparison, then the focus is likely to be strongly on the performance dimension. The basic question is how well we are doing compared to someone else, and it is answered largely on the basis of historical data. In many ways this is the stuff of conventional industrial economics - and whilst there are major questions about the quality and availability of data, the main output remains the same: being able to say that one region is better than another in terms of measured performance data.

This approach may act as a motivator to change because it highlights the gap between what the best performers achieve and the rest. What it does not say is how that gap arose or how it might be closed; this can only be answered with reference to data on comparative practices and the degree to which they are well or badly implemented. Some hypotheses can be offered as to why the performance gaps exist, but in order to move to an understanding of what might be done we need to focus more on the practices dimension.

In many ways much of what has been called benchmarking at the international level is actually international performance comparison. It serves to demonstrate that there are gaps - for example, between Europe and the US or Japan, or between different European states. And it raises serious questions about the quality and availability of accurate diffusion data. But too much preoccupation with that means that we are likely to get benchmarking of information systems (statistical collections, available data sources, etc.) and not of some of the other things which contribute to good framework conditions or policies. There is a need to have some better idea of the diffusion - in some ways this is the performance measure side of the reference model. But even if we just take a crude 'excellent-good-bad' scale for that axis, it will still provide some indication of the gap to be closed. An important theme here is the idea of 'iterative measure development' in which each round refines the measurement framework.

If the purpose is one of learning and continuous improvement, then emphasis shifts to the practices side. Here we are concerned with trying to understand the how rather than dimensionalise the what. In the context of benchmarking framework conditions for ICT-Os the main issue is to try and learn from different players about the variety of possible practices and also about the relative effectiveness of different mechanisms for their implementation.


5: Conclusions

Since its initial conception as an engineering tool, benchmarking has gained widespread acceptance as a systematic means for making structured comparisons in a broad range of managerial and industrial operations. Although primarily employed in manufacturing activities, benchmarking has also been applied in the service sectors and, most recently, in the public sector (e.g. education, healthcare, transport). The range of activities which are now considered appropriate for benchmarking is so wide, however, that the term is in danger of being watered down into just another buzzword in the management literature. Therefore, in this paper, we have tried to classify the different types of benchmarking exercises by objective and activity and to differentiate two major dimensions of benchmarking - those exercises that compare performances and those which are concerned with practices.

Many of the benchmarking exercises which have been conducted to date would probably fall into the former category. While there is certainly merit in locating one's own performance among a relevant sample, it is in the practice dimension (or phase of a benchmarking activity) that we believe the best opportunity for learning and continuous improvement lies.

However, benchmarking should not be seen as a universal panacea. Some commentators have suggested that it can be a waste of time for some firms in that it can breed complacency amongst brand leaders, distract 'laggard' firms from improving, and require effort and time better invested elsewhere (Womack and Jones, 1996). While these criticisms are no doubt valid for a number of firms, we suspect that for the majority benchmarking holds more advantages than disadvantages. Nevertheless, it has to be recognised that benchmarking exercises can be resource intensive and are not necessarily easy. Indeed, we have pointed out that there are particular difficulties associated with attempting to benchmark the impact of ICT-Os. While it is possible to calculate rates of diffusion on a fairly superficial level (e.g. ownership), measuring the 'intensity' or 'quality' of use is much more difficult. Even so, in-depth case studies of impact at the individual firm level provide ample evidence of the beneficial consequences of appropriate adoption of ICT-Os, and there are many instances in which a simplified approach would be suitable.

Benchmarking of 'framework conditions' is not likely to be any easier and needs to be approached in a disaggregated fashion - identifying the range of 'conditions' within the framework and the practices or mechanisms which make up each one. While still difficult, the diversity of approaches across Europe provides an excellent opportunity to compare and assess the practices employed in different countries through what might be called a 'multiple laboratories' approach. Such a benchmarking exercise might conceivably start with a checklist of framework conditions, the identification of which ones can be found within each 'national system of innovation', and an assessment of how well each is done in comparison with others who are using similar mechanisms. Like any good benchmarking exercise it must contain the four elements discussed in this paper: focus, clarity, measurability and purpose. Furthermore, it must recognise that 'best practice' is a dynamic concept or a 'moveable feast' as we continue to innovate and extend the possible.[4]

[4] This is certainly the case when benchmarking against other firms. It can be argued, however, that one cannot better an idealised model - such as zero defects.


Bibliography

Bessant, J., and Caffyn, S., (1997), High-involvement through continuous improvement, International Journal of Technology Management, 14(1).

Camp, R., (1989), Benchmarking: the search for industry best practices that lead to superior performance, Milwaukee, Quality Press.

Campbell, A. B., (ed.), (1994), Benchmarking in health care: models for improvement, Joint Commission Journal on Quality Improvement, 20(5).

Davenport, T., (1992), Process Innovation: Re-engineering Work through Information Technology, Boston, Harvard University Press.

Dodgson, M., and Bessant, J., (1996), Effective Innovation Policy, London, International Thomson Business Press.

Ferraz, J., Rush, H., and Miles, I., (1996), Development, Technology and Flexibility: Brazil Faces the Industrial Divide, London, Routledge.

Fielding, J., (1997), Benchmarking - why bother?, Council Briefing, HEFCE.

Garvin, D., (1991), How the Baldrige award really works, Harvard Business Review, November-December, 80-93.

Grier, D., (1996), Best Practices for Management of Research and Technology Organisations, WAITRO Report.

Hammer, M., (1990), Don't automate, obliterate, Harvard Business Review, July-August.

Lundvall, B. A., (ed.), (1992), National Systems of Innovation: Towards a Theory of Innovation and Interactive Learning, London, Pinter Publishers.

Maxwell, M., (1996), Clinical benchmarking: results into practice, International Journal of Health Care Quality Assurance, 9(4), 20-23.

Ministry of Economic Affairs, The Netherlands, (1995), Benchmarking The Netherlands: a test of Dutch competitiveness.

Morris, J. M., (1997), Paediatric benchmarking: a review of its development, Nursing Standard, 12(2), 43-46.

Paulk, M. C., (1995), The evolution of the SEI's Capability Maturity Model for software, Software Process: Improvement and Practice, Pilot Issue.

Phillips, S., (1995), Benchmarking: providing the direction for excellence, British Journal of Health Care Management, 1(14), 705-707.

Roach, S., (1994), Lessons of the productivity paradox, in Gillin, P., (ed.), The Productivity Payoff: The 100 Most Effective Users of Information Technology, Computer World, September 19, Section 2:55.

Roach, S., (1996), The hollow ring of the productivity revival, Harvard Business Review, November-December, 81-91.

Rush, H., and Bessant, J., (1992), Revolution in three-quarters time: lessons from the diffusion of advanced manufacturing technologies, Technology Analysis and Strategic Management, 4(2).

Spectrum/DTI, (1996), The Development of the Information Society: An International Analysis, DTI's Information Society Initiative, The Stationery Office.

Spectrum/DTI, (1997), Moving into the Information Society: An International Benchmarking Study, DTI's Information Society Initiative, The Stationery Office.

Taddei, D., and Coriat, B., (1993), Made in France: l'industrie française dans la compétition mondiale, Hachette, Le Livre de Poche.

Temaguide, (1998), A Guide to Technology Management, Socintec Consultants, Bilbao, Spain.

Voss, C., (1994), Made in Britain, London Business School.

Voss, C. A., Åhlström, P., and Blackmon, K., (1997), Benchmarking and operational performance: some empirical results, International Journal of Operations & Production Management, 17(9-10).

Wagstaff, T., (1995), Weighing up your performance, Hospital Development, 26(5).

Womack, J., and Jones, D., (1996), Lean Thinking: Banish Waste and Create Wealth in Your Corporation, New York, Simon & Schuster.

Womack, J., Jones, D., and Roos, D., (1990), The Machine that Changed the World, New York, Rawson Associates.