
‘Working-to-rules’: a case of Taylor-made expert systems

Peter Holden

Innovation and Technology Assessment Unit, Cranfield Institute of Technology, Cranfield, Bedford MK43 0AL, UK. Tel: (0234) 750111

Taylorism popularised the view that through the fragmentation of manual tasks into specialised, repeatable constituent activities, and the removal of tacit knowledge and discernment from the shopfloor worker, there would be significant increases in productivity. Where Taylorism mechanised and downgraded manual tasks, this paper argues that a similar ‘machine-centred’ approach towards the development of expert systems will degrade the characteristics of human knowledge through an emphasis upon the automation of expertise. In order to move away from an automation focus, it is necessary to move towards systems which augment rather than replace the qualities of human tacit knowledge. Problem identification is a critical first stage in the development lifecycle because it is here that the decision to enhance or replace expertise is made. It is a process of understanding the problem from a number of perspectives; under Taylorism, however, the emphasis is upon the technical dimension, with a disregard for the organisational and human factors that are central to the identification process. The multiple-perspective concept embraces the technical, organisational and personal perspectives in a wider framework of enquiry. It should help those responsible for introducing expert systems into an organisation to recognise the limitations of a purely technical perspective and to choose the right combination of elements to match the specific needs of the organisation.

Keywords: Taylorism, expert systems, development framework, scientific management, technical perspective, multiple-perspective

One of the main reasons expert systems fail in industry is a lack of definition of their purpose and role in the organisation. A company’s understanding of expert systems is often uncertain and notional. This confusion can give rise to a number of potential problems: notably, a poor specification of the problem; the wrong choice of expert system for the application; the wrong reasons for justifying this choice; and a technically motivated development programme.

It is in this climate of development that organisations may be prone to machine-centred arguments for introducing expert systems, which look towards automating expertise in a similar manner to the way in which Taylorism mechanised shopfloor activities at the turn of this century. This paper begins, therefore, by looking at the development of Taylorism as a management philosophy and its emphasis upon the scientific approach to work. Management took control over the knowledge of production in order to plan and coordinate the precise motions of workers through time and motion study. The ‘rationality’ of the worker centred on the desire to earn as much money as possible, and this allowed Taylor to introduce piecework whereby pay was directly linked to output.

The significance of Taylorism is that it rejected the tacit or practical knowledge of the worker in favour of precise scientific methods. The emphasis upon explicit knowledge in the automation of expertise is covered later in this paper. It is argued that, besides the technical difficulties, systems that attempt to replace the expert fail to acknowledge the value of the contribution offered by the user to the decision-making process.

The section after that looks at the potential benefits of expert systems for the worker. Expert systems allow workers to cut across traditional job boundaries and offer the potential to provide the worker with increased responsibility and a more complete and varied job content. Equally, however, they can downgrade expertise and reduce the competence and confidence of the worker to undertake mental tasks. Similarly, the following section looks at the possible use of expert systems by management as a mechanism to maintain its monopoly of knowledge over the workforce or as a means of more effective and widespread delegation. This section also describes how expert systems have begun to replace positions within management itself, especially lower management functions.

Taylorism is part of a wider pervading ‘scientific culture’ which remains strong in technological development today. The reasons why Taylorism remains so strong despite its evident shortfalls are described, and alternatives to a ‘scientific approach’ in the development of expert systems are considered. Although formal analysis techniques are appropriate to conventional programming, the need to incorporate human knowledge, which by definition is uncertain and qualitative, requires more than investigation from a single technical perspective.

The section entitled ‘An approach towards identification’ looks specifically at the identification phase of the development life-cycle as the most important in preventing a machine-centred approach, and describes the importance of enhancing scientific models through consideration of human and organisational factors. These are embraced within a multiple-perspective framework which is described in the following section. The emphasis of the framework is on enquiry rather than on a specific problem-solving methodology, and it is intended for use by those considering introducing expert systems into an organisation.


Principles of Taylorism

F.W. Taylor was an American management consultant and chief exponent of the scientific management movement which arose during the first two decades of this century. He was initially concerned with achieving higher productivity in the steel industry where his methods were responsible for raising production by almost 400% per day (Taylor, 1912). His work concentrated on establishing management control over the knowledge necessary for production. He believed that the physical movements of the worker could be regarded ‘scientifically’ as those of a machine. For this reason, the label ‘machine theory’ is sometimes given to this school of thought.

Taylor set out to reduce what he considered to be the conscious and deliberate restriction of output by operators. He observed that few operators worked at the speed at which they were capable; he attributed this to systematic soldiering (Taylor, 1903, p 30). This was caused, not through basic motivational issues as later schools of thought would advocate (see, for example, the human relations school (Herzberg et al., 1959)), but because of poor management controls which made it easier for each worker to work slowly in order to protect perceived self-interests. As Taylor notes, the system allowed workers to exercise their own discretion and waste their efforts on ‘inefficient and untested rules of thumb’ (Taylor, 1912). The objective of Taylor’s scientific management was to plan and coordinate shopfloor workers’ movements in the quickest and most efficient possible way. He furthered this end by introducing what is now known as ‘work study’ or ‘Organisation and Methods’. The techniques he developed included the methodical timing of any job by analysing it into constituent or elementary motions, and an emphasis on piecework (linking an individual’s pay directly to his output) so that workers had incentives to produce as much as possible in a given period. The simplification which made Taylor’s work possible was that workers could be studied, and their work controlled, as if they were machines. This was possible because, in Taylor’s words, ‘. . . every single act of every workman can be reduced to a science’ (Taylor, 1912). It was his intention to remove from workers all control over their work in order to subject them to the control of the manager.

The essence of Taylorism was an emphasis upon explicit ‘scientifically authenticated knowledge, at the expense of tacit knowledge’ (Rosenbrock, 1985). The skills and abilities of the worker were discouraged in favour of a specified routine where all contingencies could be anticipated (although this is a dangerous premise to make in the ‘real world’). As Taylor (1906) points out, ‘under our system, the workman is told minutely just what to do and how he is to do it: any improvement which he makes upon the orders given to him is fatal to success’. A consequence of this doctrine is that tasks are performed in ignorance of the purpose for which they are required. Braverman (1974) discusses the dissociation of the labour process from the skills of the workers, which, as Taylor himself observed, means that ‘the managers assume . . . the burden of gathering together all the traditional knowledge which in the past has been possessed by the workmen and then of classifying, tabulating and


reducing this knowledge to rules, laws and formulae’ (Taylor, 1967). Braverman notes a further consequence of Taylorism, the way in which management maintains a monopoly over knowledge to control each step of the labour process and its mode of execution. He argues that the most important part of this process was not the written instructions, which described in detail the tasks that the worker was to accomplish as well as the means to be used in doing the work, but the systematic preplanning and precalculation of the labour process. This took away from workers the responsibility for conceiving, planning and initiating their work tasks, therefore leaving the imaginative task of creation to management.

Braverman’s argument, although simplistic in claiming that management attempt to control the labour process in its entirety, is useful in highlighting the possible consequences of management adopting a machine-centred approach. He suggests that the Taylorist process of deskilling and degrading work exists today in modern industry among manual and white-collar workers, and that its extent is hidden by the categories used to classify skill levels. The designation of occupations as ‘skilled’ or ‘unskilled’ is frequently arbitrary, and a rise or fall in either category may simply be the result of changing the standard or method of classification. It is here that the potential of expert systems may influence the notion of skill, knowledge and expertise; for example, expert systems may allow ‘unskilled’ workers to perform contemporary semiskilled jobs within a short period of time, but the actual level of skill and expertise acquired may be small. The confusion over types of skill and expertise should be addressed before the implications of expert systems are discussed, because perceptions of what constitutes ‘expertise’ will shape the objectives for introducing the system and will influence the design and development process.

The nature of knowledge and expertise

A characteristic of Taylorism is the emphasis upon the scientific and explicit knowledge of management, with a disregard for workers’ skills and practical or tacit knowledge. Behind this process lies the philosophy that human expertise is predictable and can be logically described in a formalised language (Gullers, 1988, pp 31-37), and that the machine will therefore be more effective than humans in decision-making. Expert systems viewed in this light would provide a means of enforcing formal knowledge, and of removing workers’ discretion and heuristics. However, it is precisely these nonexplicit forms of knowledge which make up expertise. Hayes-Roth et al. (1983) refer to expertise as being approximate, discretionary, incomplete and inconsistent. Dreyfus and Dreyfus (1986), moreover, describe how experts have difficulty in rationalising the nature of their expertise in scientific terms. Josefson (1987) talks of the difficulties faced by nurses in attempting to articulate their expertise in facts and rules alone. Furthermore, an emphasis upon developing scientific knowledge for the computer, Josefson argues, overlooks the need for and importance of practical knowledge. Thus, there is agreement among these authors that expertise requires more than explicit knowledge, but how is this defined and qualified? Goranzon (1988, p 16) distinguishes between three types of


knowledge. Propositional or theoretical knowledge is expressed in theories, methods, models and concepts and is acquired from a theoretical study of an activity; this constitutes the explicit knowledge and formalised methods of scientific management. A second type is skill or practical knowledge, built up from experience gained by actually undertaking a particular activity or practice; this equates to the manual skills of the worker. A third type of knowledge, mutually interdependent with skills, is knowledge of familiarity. This is the ability to deal with uncertainty or unique situations through a deep understanding of the field based on experience; it amounts to judgement and a sense of being correct gained by applying heuristics or ‘rules of thumb’ and ‘know-how’. As Gullers notes, ‘Even an old, retired craftsman can act as an instructor, that is to say he can still make assessments, demonstrate and correct faulty technique and methods. He retains his eye for the job, his feeling for form and his knowledge of materials used.’ (Gullers, 1988, p 36).

A second issue to consider is how these types of knowledge should be distributed, and what proportions of each make up expertise. Taylorism as a philosophy clearly placed too great an emphasis upon propositional knowledge while overlooking the potential value to be gained from discussing the nature of the practical knowledge held by the worker. This requires a two-way exchange between the theories and scientific regulations of management and the skills and familiarity gained by undertaking work. A number of authors (Johannessen, 1988; Cooley, 1988a; 1988b; Rosenbrock, 1988; Gill, 1986) argue that the ‘machine-centred’ approach, implicit in Taylorism, is prevalent in industry today. Gill speaks of the present focus on propositional knowledge (technical design, computational methodologies and formal rules, for example) in the development and use of expert systems. Furthermore, it is this exclusive attention to ‘explicit knowledge’ which leads to the view that the primary function of expert systems is to automate human expertise. Again like Taylorism, expert systems that focus upon automation disregard the human contribution offered by the user. Deskilling under Taylorism was a result of practical knowledge going unused and the sense of know-how becoming redundant. Where Taylorism characterised a division of labour, expert systems now have the potential to impose a division of knowledge, because workers, as users of the expert system, may become alienated from their own tacit knowledge and experiences (Ostberg, 1988, p 177). The implication is that such expert systems are not expert as such, because they fail to capture ‘expertise’, where expertise constitutes a mixture of all three types of knowledge. It is only when interdependency exists between practical and propositional knowledge that expert systems can depart from the machine-centred, automation focus and look towards enhancing the role of the user.

Cooley questions not only the legitimacy of the automation of expertise as an objective, but also the feasibility of representing human knowledge and skills within expert systems and new technologies in general. Rather than focus upon replacing the ‘human dimension’, therefore, expert systems should be viewed as an appendage to the human: what Cooley calls a ‘human-machine symbiosis’. Cooley argues that without a human-centred approach towards expert systems, we lose the practical skills, ‘know-how’ and applied creativity


that are necessary for technical development in the first instance. This is supported by Gullers, who reports on a Swedish study of large engineering companies showing that without a high degree of human availability and human participation, with the discretion to use and develop practical or tacit knowledge, new technology fails to function effectively (Gullers, 1988, pp 32-34).

The next section looks specifically at the impact that Taylorism has had upon the shopfloor worker, and notes the possible similar repercussions that face users of expert systems that fail to recognise the importance of this human dimension.

Expert systems, Taylorism, and the worker

Where expert systems have replaced expertise, they have just as much scope to enhance the role of the user. Replacement can be conceived of as unskilled workers automating the propositional knowledge of management or, according to the machine model, as management replacing the skills of the shopfloor worker. This section highlights the fact that the way in which expert systems are used very often depends upon perceptions of their function and role in the organisation.

Polyvalence (Littler, 1985, p 125) denotes a situation in which workers are able to perform a range of tasks that cut across or extend traditional job boundaries and skills. It is brought about through the removal of skill demarcations or through enlarging the task competences of the worker in a job requiring some expertise; this is an area where expert systems may be of benefit. They offer the potential to provide the worker with increased responsibility for a more complete set of tasks. However, their use can also make it easier for management to identify accountability for substandard performance. Expert systems facilitate performance measurement and may thereby make a transition from direct personal supervision of the labour process to a ‘responsible autonomy format’ that allows for greater management control. In effect, expert systems can substitute supervision at a distance for supervision directly in the workplace.

A second potential application of expert systems for the worker is intelligent tutorial systems and other learning support environments. These systems make rapid and effective training and retraining feasible and cost-effective. However, they can also lead to what Smith (1987) calls the ‘human EPROM’, or throwaway workforce, in which a form of expertise is endowed upon the user for a specific function and then erased and reprogrammed as demand for expertise requires. Here, the worker is reduced to a form of ‘automatic telephone exchange’, because there is no opportunity to exercise control over the decision-making process and no mechanism by which to contribute personal skills and experience.

A consequence of Taylorism was that workers’ jobs were designed as narrowly and precisely as possible. Each task was made a job in itself. Complex tasks such as maintenance were therefore fragmented and assigned to specialised maintenance operators; similarly, inspection would have specialised inspector roles. Expert systems could be used to perpetuate this demarcation by preventing the user from learning anything other than highly specialised tasks. Management would actively dissuade ‘learning’ by workers from the system; this could be achieved by preventing access to the system’s reasoning and explanation facilities. This would reduce the transparency of the expertise held within the system and expose only the product of the system’s expertise. A ‘proficient user’, therefore, would be one who is efficient in activating the right expertise for the correct problem, rather than one who understands the essence of the knowledge upon which the solution is based. Indeed, extending this machine-centred argument further, the selection of suitable candidates as ‘users’, as with the selection of suitable workers under Taylorism, would be based on the suitability of the worker to do the tasks specified by the system, rather than necessarily on his personal ability, intellect or skill to solve the problem or do the task in its entirety.

A consequence of this ‘division of knowledge’ is that where Taylorism led to a loss of competence in the people that operated the machinery, expert systems that always handle the most frequently occurring situations may result in workers losing their own practical knowledge or skills. Thus, as Reason (1987) notes, if people are required to step in, they are no longer able to do so. Taking responsibility in this way requires that one is able to make the decision. If the decision-maker becomes used to following the advice of the system, he will gradually become less able to make the decision himself. More and more, the responsibility is thereby taken over by the system. The user may be unaware of this process or, as Hollnagel (1987) points out, may maintain the ‘illusion’ of competence, and hence responsibility.

The issue of responsibility is one of understanding the information and knowledge that describes the situation correctly. This points towards the possible dilemma between information presentation and decision-making, since the way in which a problem or case is presented may heavily influence the way in which it is understood, and therefore the way in which decisions are made. This is an argument against the ‘neutrality’ of expert systems; although they may be perceived as simply ‘tools’, they will still have an impact upon the decision or the decision-making process.

Parallel with issues of responsibility, the use of expert systems in a working environment raises issues of acceptability that transcend purely technical credibility. Bell and Hardiman (1989) and Mumford (1987a) observe that user acceptance is a major hurdle to the success of expert systems. A Taylorist approach to development, however, would not recognise this importance, because it would assume that the financially motivated rationale of the worker would be enough to gain acceptance. Expert systems are thus acceptable as a mechanism in achieving greater returns for the work undertaken: work content is not considered a precondition of acceptability. In practice, without justification and explanation of the lines of reasoning, the user is unlikely to accept the system. The ‘trust relationship’ may deteriorate to the extent that the system might not be used at all.

Under Taylor’s perception of work, the mechanisation of skills was a visible and tangible occurrence resulting in increased fragmentation, specialisation


and repetition of tasks. The impact upon the worker was also concrete: he simply attended to fewer tasks and assumed a more simplified role in the workplace. By contrast, the ‘automation of expertise’ is incomprehensible to the worker, according to Ostberg. The representation of ‘abstract’ functions cannot be understood unless the worker is provided with a suitable model in which to interpret the contents and context of such functions. This requires us to depart from the dichotomous ‘worker-do’, ‘management-think’ model of Taylorism and to develop expert systems which not only transfer knowledge, but do so in a way that is understood and is capable of being used by the worker. Following Ostberg’s argument further, this requires not only effective dialogue between the user and the expert system, but also that the system assume a ‘persona’ based upon the precise individual needs of the end user (Ostberg, 1988, p 179).

The principles of scientific management referred primarily to workers on the shopfloor involved in manual tasks. White-collar workers were viewed as being closer to management than manual labour, and so received certain rights of autonomy that allowed limited scope for discretionary activities. However, the expansion of administrative functions has led to a redefinition of the role of the white-collar worker, whereby the prospects of automating decision-making in these areas, if not more feasible, are certainly more visible now than in ‘blue-collar’ work. Reed (1987) envisages a large role for expert systems, not as a means of job enrichment or training, but as a means of mechanising white-collar work.

Expert systems for the ‘scientific’ manager

Under Taylorism, two roles are envisaged for expert systems. The first looks towards maintaining management’s control of knowledge over workers; the second towards improving the efficiency with which management generates and distributes ‘expertise’ to the shopfloor. These are discussed separately.

Expert systems as a control function

A Taylorist approach to expert systems would regard their purpose as enabling management to retain a monopoly over knowledge and decision-making, to control each step of the labour process. Taylor argued that manual and managerial work should be clearly separated in the interests of efficiency. This division of labour relies on the assumption that experts are necessary to handle the ‘complex’ tasks of achieving effective organisational control. The role of expert systems in this process would be less to instil knowledge in the worker than to sustain management control over it.

The principal control mechanism for Taylor was Time and Motion study (Taylor, 1903, p 58). The basis of this was to measure what workers actually did, according to the time taken, and to develop the ‘one best way’ of working.

A specific goal of time study was to identify a set of basic activities, and unit times for each of them, so that standard times for more complex activities could be constructed by analysing them into their basic components and adding up their unit times. A similar philosophy might be adopted in the study of users engaged in consultation with expert systems. In this case, decisions may be conceived as ‘basic activities’ and therefore have associated unit times, such that the time required to complete a sequence of decisions is considered equal to the sum of the units. Ostberg draws a comparison between the way in which methods time measurement techniques (a modern variation of work study) have been used to code and classify manual movements, and the way in which knowledge engineering has been used with white-collar workers in the financial services. He describes how an expert system developed by Arthur Andersen for mortgage loan evaluation adopted a similar work study bias to codify mental activities (Ostberg, 1988, pp 173-174).
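As a minimal sketch of the time-study arithmetic just described, the fragment below sums assumed unit times over a sequence of ‘basic activities’ in an expert system consultation. The activity names and unit times are hypothetical illustrations, not figures from this paper.

    # Illustrative only: standard time for a sequence of decisions taken as
    # the sum of assumed unit times for its constituent basic activities.

    UNIT_TIMES_MINUTES = {          # hypothetical 'basic activities' and unit times
        "retrieve_case": 1.5,
        "answer_prompt": 0.5,
        "review_advice": 2.0,
        "record_decision": 1.0,
    }

    def standard_time(activities):
        """Time-study style standard time: sum of the unit times of the
        constituent basic activities."""
        return sum(UNIT_TIMES_MINUTES[a] for a in activities)

    # A consultation modelled as a sequence of decisions/activities.
    consultation = ["retrieve_case", "answer_prompt", "answer_prompt",
                    "review_advice", "record_decision"]
    print(standard_time(consultation))  # 5.5 minutes under these assumed figures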

Time and motion study, planning, and the routing of materials are examples of the systemisation of work knowledge which enables management to specify precisely the way in which a job is to be done. However, individuals will reconcile such regimentation and control in their ordinary lives in different and unpredictable ways. Workers have responded passively to some forms of mechanisation and yet violently against demarcation: both can affect the progress of work. It is not enough, therefore, to assume that time and motion studies reflect the rates of work actually being achieved. A more appropriate measure would indicate the amount of effort that is actually contributed by the worker. Presently, there is a tacit agreement on the level of effort which will be accepted by both management and workforce as a ‘reasonable’ equivalent for a given rate of piecework. This forms the basis of an ‘effort rating’ decided by the collective bargaining process (Clegg, 1979, p 133).

Expert systems, with their potential to monitor and supervise the user continually, could formalise the measurement of effort. They give management the chance to set rates of productivity for workers and to measure the quality of effort in return. The technology of Taylor’s day allowed ‘slackening’ and ‘soldiering’, because the machines in use allowed the worker deliberately to avoid working to management’s standards. The use of expert systems by management today could conceivably minimise such point-of-production resistance through online supervision and measurement of progress.

Expert systems and managerial restructuring

As well as acting as a control mechanism, expert systems would also redefine roles within management. Taylorism saw the average manager as being incompetent and ‘irrational’, and therefore lacking a legitimate basis for his authority. Taylor’s ‘mental revolution’ required a change by management as well as workers, to accept the replacement of their ‘arbitrary powers over workers’ (Taylor, 1903, p 50). Managerial expert systems, too, would therefore seek to ‘scientificate’ previously discretionary techniques and ensure that management undertook planning and coordination functions in a systematic way. The implication is that managerial functions are not immune to the ‘automation focus’. The logic behind this is made clear by Reed: ‘. . . in all likelihood, expert systems will be used in conjunction with workers of slightly less skill and seniority than those whose positions may be eliminated’ (Reed, 1987, p 141). Wells looked at the role of the design supervisor with the advent of computer-aided design. He found that with the introduction of the CAD system, the design supervisor’s tasks were dispersed between the computer and lower-level draughtsmen and design specialists, and that higher-level decisions were sent up to the next level of management (Wells, 1987, pp 27-35). In effect, the lower management function of the design supervisor became unnecessary as a result of computerisation; tasks were absorbed by the improved capabilities of the draughtsmen, with discretion and technical expertise ascending to higher levels of management. A similar use of expert systems within management may obviate the need for lower-middle management functions. The emphasis upon redefining roles may accentuate the division between thinking and doing to a state where a small ‘managerial elite’ of highly skilled managerial, technical and professional jobs, with strategic responsibility and discretionary use of expertise, is able to control an enlarged workforce of unskilled service jobs without the need for intermediary supervision.

This process is reconciled in Taylorism as rationalising ‘managerial irrationality’ and improving the efficiency by which formal expertise is communicated to the worker. It would also make time and motion methods applicable to an increased number of staff in an organisation, and thereby link performance directly with productivity. Previously, rather poorly defined and nebulous managerial functions made it difficult to evaluate ‘a fair day’s pay for a fair day’s work’. However desirable or feasible this scenario is, it does highlight the potential for expert systems to replace expertise beyond the shopfloor and into the realms of middle management. Mumford (1987b) describes how Digital Equipment’s XCON expert system was designed to replace the technical editor responsible for the configuration of computer equipment, and Feigenbaum (1988) describes systems that have actually begun to replace professional jobs.

Expert systems clearly have a role in management, but it is important that this role is understood by the organisation and that the motives for introducing them are not misplaced. Expert systems should be seen as a means of improving and enhancing the role of the manager, what Fish (1988) calls a ‘cooperative facilitator’, and certainly not of displacing it. Managers embarking on expert systems projects should be aware of the difficulties of achieving knowledge sharing as a real organisational objective; the perspective should therefore be one of building around the requirements of the user.

New patterns of relationships are inevitable. Rather than try to suppress these, they should be accommodated within the design process. Management should be aware that implementing expert systems may require a relocation of expertise and a possible devolution of some control responsibilities to lower levels in the managerial decision-making structure. Although such devolution is possible, and requires an organisation sufficiently flexible to accommodate the readjustment, the prospect of the managerial function being replaced entirely is unlikely. This is because management is necessarily multidisciplinary and involves a complex set of technological, organisational and socioeconomic factors that do not translate readily and completely into a declarative program. To perform such a translation would probably fragment and divide necessary relationships and relinquish a holistic understanding of the problem. By introducing measures of accountability as a distorted justification for payment-by-results, an attempt is made to establish a paradigm of decision-making. This simplifies and blinkers the problem into


narrow and definable tasks. Finally, and perhaps most important of all, the notion of ‘good managerial practice’ would probably argue against the use of wholly computerised systems in many cases, because of the strong emphasis upon a personalised approach.

The impact of Taylorism upon technical development

Taylorism has been analysed in general terms as a rather unsuccessful management practice (Rose, 1973), as an expression of the logic of capitalism (Braverman, 1974), and as a crude management ideology (Drucker, 1968). Whatever Taylorism is perceived to be, it shares with all theories of organisations an implicit or explicit model of human behaviour: a conception of how people behave in organisations. Taylorism was a machine model which saw only the instrumental aspects of human behaviour. The question of whether to automate or enhance human knowledge depends upon the perceived value of human worth. The ‘net worth’ of the workers was seen in economic terms as units of production, and they could be treated as such under the laws of scientific management. Taylor overlooked the psychological and social variables which affected organisational behaviour. For example, ‘cooperative partnership’ (Taylor, 1912, p 26) amounted to the worker relinquishing to management his control and his discretionary use of heuristics, in exchange for higher wages through increased productivity.

In discussing alternatives to Taylorism, it is first necessary to question why, despite its evident shortfalls as a management technique, ‘Taylor’s vision remains unhindered’ (Rosenbrock, 1985) in expert systems and other new or leading technological developments. Rosenbrock offers a partial explanation, consistent with the arguments put forward by Cooley (1988) and Gill (1986) in the section on the nature of knowledge and expertise, by referring to the presence of a ‘scientific culture’ which persists in UK industry. This culture can only fully accept that knowledge which is explicit and gained through scientific analysis. Taylorism does not abolish tacit/practical knowledge, which constitutes the backbone of expert systems, but divides it into small bundles of ‘inferred knowledge’ that can be acquired in a short length of time. We might observe an evident contradiction, therefore: expert systems use rules of thumb, or heuristics, and knowledge which, by Taylor’s definition, is manifestly ‘unscientific’, and yet their implementation may engender typically Taylorist effects upon the organisation and the individual. A possible reason for this is that, intentionally or because of poor design of the interface, explanation or maintenance facilities, the system builds in an inflexibility to respond to change or to the unexpected by rejecting the human initiative that is needed to meet them. Thus expert systems designed to automate the decision-making process fail to make use of the contributions offered by the user, whether as a manual worker, a white-collar worker or a manager. Without the contribution of new insights and newly acquired knowledge, there is an increased likelihood of the expert system becoming inflexible and context insensitive.

Merton (1947) offered a second explanation of why Taylorism prevails in management. He argued that technology is used as a political tool by


management to increase the discipline in work and therefore gain tighter control over the activities of workers. If management increases task specialisation and reduces the level of skill required in a job, then it can offer lower wages. Tighter control by management over the workforce also means less discretion for the worker over work practices. Braverman (1974) regards this process as a means of adapting labour to the needs of capitalism.

A further argument, by Clegg and Dunkerley (1980), claims that Taylorism is ‘self-perpetuating’. The ‘adverse’ human reaction to specialised, repetitive work may simply confirm management’s feeling that tight control over shopfloor workers and work practices is necessary to produce goods and services efficiently. In this way, scientific management becomes a self-justifying technique: a circuit which can only be broken by a change in managers’ perceptions.

Alternatives to the ‘scientific culture’ in the development of expert systems

Hoos (1979) argues that Taylorism is present, but as one aspect of a larger, more pervading and influential ‘scientific culture’ that dominates technological development. Operations research, systems analysis and decision analysis are all part of the same all-embracing approach. She notes, ‘in our technological era, the dominant paradigm is so technically orientated that most of our problems are defined as technical in nature and assigned the same treatment’.

It appears that despite, or because of, the increased sophistication of technology and the application of various theorems and techniques since Taylor, the underlying rationale of scientific management still prevails. If this is so, then it suggests that this rationale is an essential part of the development process. Conventional programming techniques are congruent with this technical perspective because the problems they address are usually well-structured and boundable, such that it is possible to achieve optimal solutions. Linstone (1978) talks of current analysis techniques being applicable to these ‘rational and traditional systems’, and the fact that a ‘best solution’ to a problem exists means that it is possible to quantify it through cost-benefit or similar analyses. The process of defining the system in this case is an analytical process made up of logical sequences.

However, the scientific emphasis that so dominates conventional programming is less appropriate for expert systems because of major differences in their function. The single most important difference is that with expert systems the concern is with a human expert; with conventional programming, it is with procedures. For this reason, the development lifecycle of expert systems is much less defined, because of the need to accommodate human ‘irrationality’ at the individual and organisational level. The concept of ‘systems analysis’ for expert systems becomes inaccurate because the structured development of conventional programming is replaced by an overlapping and integrated cycle made up of multidisciplinary activities.

In developing expert systems, it should be clear that the approach taken must


be sensitive to individual needs as well as business and organisational requirements. By recognising this, it becomes evident not only that a one-dimensional approach is unsuitable, but that no single methodology, tool or technique is likely to satisfy all dimensions of the problem (Smith and Tranfield, 1988). Adopting a Taylorist approach would impose a misdirected rationality on the development process and simplify what is a complex set of factors and multidimensional relationships. It would also precipitate a development environment in which there is ‘dissonance’, a conflict and wearing-down effect, between the various mechanisms of development. A consequence of dissonance is that organisations fail to change with the technology (Den Hertog, 1982): an example would be a situation in which an expert system is implemented without the user being appraised on the basis of the new service, and without consideration of the new relationships being formed.

Expert systems have potential for organisational redesign; an underlying premise of this proactive approach is that expert systems can be used as an opportunity to restructure the organisation rather than attempting to adapt the technology to it (Klein, 1988). Whether or not expert systems are used in this way, however, rests on how they are deployed and controlled in the organisation. Giving the user more responsibility is a shallow promise if the expert system is used as a Taylorist mechanism to institute rigid rules and instructions on what, where, and how things should be done.

There are further related reasons why expert systems development should not follow a purely scientific approach, supported by Linstone’s work on the unsuitability of the ‘scientific paradigm’ in technology assessment (Linstone, 1981). Foremost is that expert systems lack a sense of objectivity because, by definition, their role is to capture a personalised and highly individual process of solving problems. The claim that the properties of the observer must not enter into the description of that person’s observations is therefore clearly unattainable. The implicit understanding in expert systems development is that the individual view, as the expert, user or both, is of central importance. With the desire to maintain objectivity, however, the scientific approach ignores the individual, losing him or her to an ‘aggregated view’.

Linstone points out that where decision-making is involved, there are immediate and intrinsic relationships to both the individual and the organisation. Expert systems, like technology assessment, should therefore be ‘. . . taken out of the singular perspective and become all encompassing of all perspectives’. Despite the possible benefits arising from this approach, current expert systems development appears to overemphasise computational methodologies and techniques (Whitaker and Ostberg, 1988). Gullers (1988) talks of a ‘machine-centred’ approach, and Martinez and Sobol (1988) describe the use of various analysis techniques for implementing expert systems. The development process is viewed in terms of ‘scientific experimentation’ (Kelly, 1986), and the temptation to use methodologies, tools and techniques has now been accompanied by project management techniques (Probst and Worlitzer, 1988). The large inventory of such tools, models and methodologies can be useful and, in some areas, necessary. However, the apparent preoccupation with choice of


technique may overlook the real issue of providing useful systems for the organisation.

In an attempt to ameliorate the effects of Taylorism and the implications of a machine-centred approach, it is first necessary to understand where, in the development lifecycle of expert systems, Taylorism has the strongest impact. Skingle (1987) identified three phases in this cycle. Identification, the first phase, includes application suitability and feasibility assessment. The second phase is that of development, which involves specification, design and implementation of the system. The final, operational phase includes operational use of the system and maintenance. As the first phase, identification is significant in that the prevalence of a scientific/machine-centred approach will not only influence the statement of objectives for introducing expert systems into the organisation, but will shape the course of design and development and the eventual utilisation of the system. It is in this area, therefore, that a greater focus on human and organisational issues can help to reduce the impact of Taylorism.
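Purely as an illustration, and not as a methodology prescribed by Skingle or by this paper, the three phases can be written down as an ordered structure; the sub-activity names simply echo the text above.

    # Skingle's three-phase lifecycle, recorded as plain data for illustration.
    LIFECYCLE = [
        ("identification", ["application suitability", "feasibility assessment"]),
        ("development", ["specification", "design", "implementation"]),
        ("operational", ["operational use", "maintenance"]),
    ]

    for phase, activities in LIFECYCLE:
        print(f"{phase}: {', '.join(activities)}")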

An approach towards identification

Clegg (1988) refers to the idea of ‘appropriate technology’, in which information technology can be operated and managed in ways appropriate for its users and for the organisation: ‘appropriate technology . . . is concerned with ways in which people have the opportunity to take ownership of, and some measure of control over, the design and management of the technology’. Clegg (1988, p 133) argues that organisational and human issues are addressed only retrospectively in technically driven projects. Often, default options are taken whereby jobs and organisational structures are designed as they were under Taylorism, with minimum levels of skill and responsibility and with highly differentiated structures. Despite this, there is tremendous pressure upon organisations to develop new technology. This is reflected in the ‘automate or liquidate’ imperative (Ingersoll, 1987). There is an implicit assumption that artificial intelligence and computer-integrated manufacturing are always necessary, when in fact business problems may call for more ‘appropriate’ solutions that may not require this technology.

Deciding whether expert systems are appropriate for the organisation is one aspect of problem identification. There should be no imperative to develop; rather, a reasoned judgement should be reached based on those issues that are important in defining the desirability, feasibility and impact of any such future systems. Expert systems may be one of a number of possible solutions, alongside restructuring, the use of conventional techniques, automation, rationalisation and simplification, for example. It is therefore limiting to rely on models and methodologies, because the purpose is one of awareness and conceptualisation, of understanding the problem context, rather than of formalising specific problems.

The importance of attending to management’s perceptions of technology and work organisation, if Taylorism is to be eradicated or at least minimised, was indicated earlier (Clegg and Dunkerley, 1980, section seven). The importance of expert systems development as a management issue (Sell, 1987) suggests that


management perceptions are clearly important and influential in the direction taken in development. It is thus necessary to control the expectations of management, and ensure that the objectives and requirements of the proposed system are clarified and understood. The objectives should be organisationally motivated not in a Taylorist sense, but by the need to improve the efficiency and effectiveness of decision-making by making better use of the knowledge in the organisation. Without defining their need and use, there is the danger, especially in the speculative, high profile market-led environment in which management is often first exposed to expert systems technology, that the reasons for development become clouded and the system, by default, may end up replacing expertise and providing expensive solutions to the wrong problems.

The importance of the identification phase is not, however, widely appreciated by developers of expert systems. As Sell observes, ‘The problem identification phase suffers badly from lack of credibility, with a consequent lack of funds, support, commitment, and freedom of action that are essential for success; and it frequently adopts procrustean measures of fitting the problem to the solution’ (Sell, 1987, p 402). ‘Analysis’ very often begins during the knowledge acquisition phase, with little attention to problem definition and preproject feasibility (Diaper, 1988). Stow et al. (1986) go some way towards recognising the absence of an approach for identifying business applications suitable for expert systems techniques. However, they resolve the problem from a technical perspective by adopting a business analysis approach. This focuses on the decision process in the organisation, but fails to recognise the individualised requirements of the decision maker.

The multiple-perspective concept

The above approach to problem identification suggests the need for a process which impresses upon those concerned with introducing expert systems into the organisation the importance of identifying the right problems for expert systems technology; of defining the objectives of the system; and of assessing the human and organisational implications, as well as the implications for existing technology. There should be an understanding that there is no ‘one-best-way’ match between technology and problems (Cooley, 1988a); rather, there should be a dialogue between business and organisational needs and solution requirements.

Soft systems methodology (SSM) is a possible approach (see Checkland, 1981) to defining problems in the organisation from a number of viewpoints and prescribing an approach to solving these problems. This approach is typically applied to unstructured problem situations and is a process of enquiry that considers organisational structure, functions, processes, and the climate or environment in which the problem resides. It is an organisational model with the flexibility to accommodate human issues, and allows problem identification to become a ‘purposeful activity’ without the use of ‘hard’ scientific analysis techniques. A preferred approach, similar to SSM, is the multiple-perspective concept (MPC) of Linstone (1984). This is not a theorem or model, but a


conceptual framework in which to address the identification phase of expert systems development from a number of perspectives.

MPC is particularly suited to problem identification because, as Linstone notes, when ‘. . . the domain is ill-structured, there is a significant decision-analysis content and there are significant human aspects involved’ (Linstone, 1981). MPC has also been used widely for technology assessment in information technology (Linstone, 1978).

There are three primary perspectives in the MPC framework. The technological (T) perspective is used to study the technical element; the organisational (O) perspective addresses the organisational element; and the personal (P) perspective looks at the individual element. However, the essence of MPC is that any perspective may illuminate any element. Although a T perspective is essential to an understanding of expert systems, the O and P perspectives add important insights that amplify the developer’s understanding of the problem.

There are numerous dimensions to MPC, according to the interfaces between the perspectives; these are highlighted in Figure 1. The technical aspect (setting (1), Figure 1) represents, for example, the choice of expert systems technology based upon an assessment of tools and problem characteristics. It is also concerned with the physical setting; that is, the physical conditions under which the system will operate. For example, the expert system may require special protective casing to operate in a factory environment, or may require a particular locational setting in the production process. An understanding of technical requirements is an important stage in the identification phase.

Figure 1. Diagrammatic representation of the multiple-perspective concept. The figure relates the technical, organisational and personal/individual aspects through six settings: (1) technology/physical environment; (2) sociotechnical setting; (3) technopersonal setting; (4) organisational setting; (5) individual actors; (6) political action; a shaded area marks the decision focus.

The sociotechnical setting (2) is significant in that it allows for consideration of how expert systems may alter the decision-making process and relationships between superior and subordinate, and suggests that some forms of expert systems may be more appropriate than others for particular types of organisation. It recognises that by changing the technology, there may be intended or unintended organisational and business restructuring. Cost-benefit analyses and critical success factors, for example, are the technical means of evaluating some of these interactions.

The technopersonal setting (3) further dispels the fallacious assumption that technology is ‘neutral’ by showing that technology affects, and is affected by, the individual. Expert systems have the potential either to enhance or to mechanise the role of the user; conversely, poorly motivated experts are not conducive to well-designed or effective expert systems. This setting is important because it draws attention to the crucial role that the individual plays in the development process and in the system’s impact upon the organisation.

The organisational setting (4) cannot be overlooked as a factor in the success of expert systems. The structure of the organisation is likely to determine the extent to which expert systems can conceivably be integrated into the business, and influence which areas are feasible or acceptable for development.

The individual setting (5) recognises that experts, users, knowledge engineers, project leaders and so on, as individuals, will behave and react differently when alone than when cooperating as members of an organisation. The interplay of leadership and management will shape the success of the project and response to it. Similarly, the political element (6), which reflects the function of company politics and power structure, represents the interplay between the organisation and the individual. The political structure will determine the outcome of a decision to adopt expert systems technology (every project should have a ‘champion’) and will shape the process of implementation.

The decision focus signifies the culmination of settings and perspectives, and focuses upon the boundaries within which decision-making and assessment should ideally take place (represented by the shaded area in Figure 1). It is in this area that the multiple-perspective concept takes form. However, the focus of perspective will vary according to the type of problem: for structured problems, the dominant perspective is technological and the focus will shift to point A in Figure 1. For less structured (or fuzzy) problems, where the decision style may be judicial or intuitive, organisational and personal perspectives play a more important role in understanding the problem and defining needs, and so the focus of perspective will shift to somewhere near point B in Figure 1. The focus of perspective will also shift according to the short- and long-term horizons of both the user and the business. Individuals and organisations inevitably change working patterns and outlook over time, according to changes in personal needs and external influences. By understanding that perspectives can change, it also becomes evident that the assessment of expert systems should be an ongoing and iterative process.
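Purely as a minimal sketch of the idea just described, and not as part of Linstone’s framework or the author’s proposal, the fragment below records the T, O and P perspectives as weights and shifts a hypothetical ‘decision focus’ according to how structured the problem is; the weight values are illustrative assumptions only.

    # Illustrative only: the decision focus as a weighting over the three MPC
    # perspectives, shifting between point A (technical) and point B
    # (organisational/personal) depending on problem structure.
    from dataclasses import dataclass

    @dataclass
    class DecisionFocus:
        technical: float        # T perspective
        organisational: float   # O perspective
        personal: float         # P perspective

    def focus_for(problem_structure: str) -> DecisionFocus:
        """Structured problems pull the focus towards T (near point A);
        fuzzy problems pull it towards O and P (near point B)."""
        if problem_structure == "structured":
            return DecisionFocus(technical=0.7, organisational=0.2, personal=0.1)
        if problem_structure == "fuzzy":
            return DecisionFocus(technical=0.2, organisational=0.4, personal=0.4)
        # Default: no single dominant perspective.
        return DecisionFocus(technical=1 / 3, organisational=1 / 3, personal=1 / 3)

    print(focus_for("fuzzy"))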


Concern over the balance of perspectives, and how they should interrelate, takes second place to the concept of enlarging the capacity of expert systems developers to see any situation from alternative points of view. The O and P perspectives help to compensate for the ‘deterministic’ approach, particularly of the expert systems market, and for the ‘novelty factor’ that any recent innovation is likely to impart. Most of all, however, the O and P perspectives help developers to see the limitations and gaps in the T perspective, and help to redefine ‘systems analysis’ away from purely scientific endeavours into the realms of human-sociotechnical systems.

Conclusions

Expertise has the notable characteristic of being expensive. The ‘rational’ response, therefore, is for the organisation to reduce its reliance upon expert personnel by fragmenting knowledge into its simplest elements and by restricting access to special knowledge and training to a select few. The growth of Taylorism and the wider principles of ‘scientific management’ may be seen as a process in which workers have gradually lost control or possession of their skills to management. The development of expert systems from a similar perspective can be seen as an attempt to subordinate human expertise to machines.

The subordination of people to machine, however, is not complete, because of the significance of tacit and practical knowledge as a component of expertise. In Taylorism, formal and scientific methods were substituted for heuristics or ‘rules of thumb’, necessarily rejecting the contribution of the worker. Similarly, expert systems attempt to formalise expertise into declarative statements in a way that oversimplifies the tacit dimension of knowledge. A preoccupation with replacing expertise diverts attention from the more useful exercise of achieving a more productive marriage between user and expert system.

Expert systems, though varied in purpose and construction, should have the common objective of enhancing the role of the user. Nor should this objective necessarily be exclusive to expert systems. Indeed, as with many technology-led developments, there is a retrospective awareness of qualitative issues as it becomes clear that technological success need not equate with business success. Such sensibility reflects an apparent maturing from Taylorism towards a more ‘systems-conscious’, multi-perspective approach. As increased prominence is placed on human and organisational issues, particularly during problem identification, a more complete and rounded understanding of the problem allows a more appropriate match between requirements and solutions.

Acknowledgements

The author is grateful for the intellectual input of Bob Muller, Patricia Rees, Martin Colbert, Dan Diaper, Anna Hart, Enid Mumford, Martyn Cordey-Hayes, John Towriss and Karamjit Gill, and is also grateful to Ian Senior for the excellent computing facilities.


Note

The author is currently undertaking research for a large engineering company within GEC to provide a framework for identifying and developing ‘meaningful’ knowledge-based systems. This paper reflects the thoughts and concerns of the author during the preliminary stages of a three-year study into the role of knowledge-based systems in manufacturing industries in the UK.

References

Bell, J. and Hardiman, R.J. (1989) ‘The third role: the naturalistic knowledge engineer’ in Diaper, D. (ed.) Knowledge elicitation: principles, techniques and applications Ellis Horwood, Chichester, UK, 49-85

Braverman, H. (1974) Labour and monopoly capital Monthly Review Press, New York, USA

Checkland, P. (1981) Systems thinking, systems practice Wiley, New York, USA

Clegg, C. (1988) ‘Appropriate technology for humans and organisations’ Inf. Technol. 3, 3, 133-145

Clegg, H.A. (1979) The changing system of industrial relations in Great Britain Basil Blackwell, Oxford, UK

Clegg, S. and Dunkerley, D. (1980) Organisation, class and control Routledge and Kegan Paul, New York, USA

Cooley, M. (1988a) ‘Creativity, skill and human-centred systems’ in Goranzon, B. and Josefson, I. (eds) Knowledge, skill and artificial intelligence Springer-Verlag, Berlin, FRG, 127-137

Cooley, M. (1988b) ‘The human use of expert systems’ Aries at City Quarterly Review 2, 1-15

Den Hertog, J.F. (1982) ‘The role of information and control systems in the process of organisational renewal’ in Lockett, M. and Spear, R. (eds) Organisations as systems Oxford University Press, Oxford, UK, 112

Diaper, D. (1988) ‘The promise of POMESS’ paper presented at the joint ICL/Ergonomics Society conference on the Human and Organisational Issues of Expert Systems (Stratford-Upon-Avon) (to be published)

Dreyfus, H.L. and Dreyfus, S.E. (1986) Mind over machine Blackwell, Oxford, UK

Drucker, P. (1968) The practice of management Pan, London, UK

Feigenbaum, E.A. (1988) The rise of the expert company Macmillan Press, New York, USA

Fish, A.N. (1988) ‘Expertise in the human-computer system: dispensing with the expert systems metaphor’ paper presented at the joint ICL/Ergonomics Society conference on the Human and Organisational Issues of Expert Systems (Stratford-Upon-Avon) (to be published)

Gill, K.S. (1986) ‘Knowledge based machine: issues of knowledge transfer’ in Gill, K.S. (ed.) AI for Society Wiley, Chichester, UK

Goranzon, B. (1988) ‘The practice of the use of computers. A paradoxical encounter between different traditions of knowledge’ in Goranzon, B. and Josefson, I. (eds) Knowledge, skill and artificial intelligence Springer-Verlag, Berlin, FRG, 9-18

Gullers, P. (1988) ‘Automation-skill-practice’ in Goranzon, B. and Josefson, I. (eds) Knowledge, skill and artificial intelligence Springer-Verlag, Berlin, FRG


Hayes-Roth, F., Waterman, D.A. and Lenat, D.B. (1983) Building expert systems Addison-Wesley, Reading, MA, USA

Herzberg, F., Mausner, B. and Snyderman, B. (1959) The motivation to work (1983) Wiley, New York, USA

Hollnagel, E. (1987) ‘Information and reasoning in intelligent decision-support systems’ Int. J. Man-Mach. Stud. 27

Hoos, I.R. (1979) ‘Societal aspects of technology assessment’ Technological Forecasting and Social Change 13, 3, 191-202

Ingersoll (1987) Technology in manufacturing Ingersoll, Rugby, UK

Johannessen, K.S. (1988) ‘Rule following and tacit knowledge’ AI and Society 2, 4, 287-302

Josefson, I. (1987) ‘The nurse as an engineer’ AI and Society 1, 2, 115-126

Kelly, B. (1986) ‘Systems development methodology: defining approaches’ in KBS 1985 Online Publications, Pinner, UK

Klein et al. (1988) ‘Organisational structure: the implications of expert systems’ paper presented at the joint ICL/Ergonomics Society conference on the Human and Organisational Issues of Expert Systems (Stratford-Upon-Avon) (to be published)

Linstone, H.A. et al. (1978) ‘The use of structured modeling for technology assessment’ Report 78-1 Futures Research Institute, Portland State University, Portland, OR, USA, 132

Linstone, H.A. et al. (1981) ‘The multiple perspective concept’ Technological Forecasting and Social Change 20, 275-325

Linstone, H.A. (1984) Multiple perspectives for decision-making North Holland, Amsterdam, The Netherlands

Littler, C. (1985) ‘Taylorism, Fordism and job design’ in Knights, D., Willmott, H. and Collinson, D. (eds) Job redesign Gower Publishing, 10-29

Martinez, D. and Sobol, M. (1988) ‘Structured analysis techniques for the implementation of expert systems’ Inf. Softw. Technol. 30, 2, 81-88

Merton, R.K. (1947) ‘The machine, the worker and the engineer’ Science 105, 79-84

Mumford, E. (1987a) ‘The successful design of expert systems - are means more important than ends?’ paper presented at Oxford P.A. conference, Templeton College, Oxford, UK (unpublished)

Mumford, E. (1987b) ‘Managing complexity - the design and implementation of expert systems’ (unpublished) (reference quoted in Ostberg (1988, p 183))

Ostberg, O. (1988) ‘Applying expert systems technology: division of labour and division of knowledge’ in Goranzon, B. and Josefson, I. (eds) Knowledge, skill and artificial intelligence Springer-Verlag, Berlin, FRG, 169-183

Probst, A-R. and Worliber, J. (1988) ‘Project management and expert systems’ Project Management 6, 1

Reason, J. (1987) ‘Cognitive aids in process environments: prostheses or tools’ Int. J. Man-Mach. Stud. 27

Reed, E.S. (1987) ‘Artificial intelligence or the mechanisation of work’ AI and Society 1, 2, 138-143

Rose, M. (1975) Industrial behaviour: theoretical developments since Taylor Allen Lane, UK

Rosenbrock, H.H. (1985) ‘Can human skill survive microelectronics?’ in Rhodes, E. and Wield, D. (eds) Implementing new technologies Basil Blackwell and Open University, Oxford, UK


Rosenbrock, H.H. (1988) ‘Engineering as an art’ AI and Society 2, 4, 315-320

Sell, P. (1987) ‘Strategic issues in introducing knowledge based systems’ in KBS 1987 Online Publications, Pinner, UK, 401

Skingle, B. (1987) ‘The validation of knowledge-based systems’ in KBS 1987 Online Publications, Pinner, UK, 27-36

Smith, D. (1987) ‘AI and the human EPROM’ AI and Society 1, 2, 146-150

Smith, S. and Tranfield, D. (1988) ‘Managing rapid change’ Management Decisions 26, 1

Stow, R. et al. (1986) ‘How to identify business applications of expert systems’ 2nd Int. Expert Systems Conf. Learned Information, London, UK

Taylor, F.W. (1903) ‘Shop management’ in Taylor, F.W. (ed.) Scientific management Harper, New York, USA

Taylor, F.W. (1906) ‘On the art of cutting metals’ Trans. Am. Soc. Mech. Eng. 28, 5

Taylor, F.W. (1912) ‘Principles of scientific management’ in Taylor, F.W. (ed.) Scientific management Harper, New York, USA

Taylor, F.W. (1967) The principles of scientific management Harper and Row, New York, USA

Wells, C.S. (1987) ‘The design supervisor - a changing role with CAD?’ IMechE I. C369/87, 27-35

Whitaker, R. and Ostberg, O. (1988) ‘Channelling knowledge: expert systems as communication media’ AI and Society 2, 3
