
 THE PROCESS OF SCHEMA EMERGENCE: ASSIMILATION, DECONSTRUCTION, UNITIZATION AND THE PLURALITY OF ANALOGIES

STEVEN KAHL University of Chicago Booth School of Business

5807 South Woodlawn Avenue Chicago, IL 60637

Tel: (773) 834-0810 Email: [email protected]

CHRISTOPHER B. BINGHAM Kenan-Flagler Business School

The University of North Carolina at Chapel Hill 4200 Suite, McColl Building

Chapel Hill, NC 27599 Tel: (919) 923-8413

Email: [email protected]


THE PROCESS OF SCHEMA EMERGENCE: ASSIMILATION, DECONSTRUCTION, UNITIZATION AND THE PLURALITY OF ANALOGIES

Abstract

Schemas are a key concept in strategy and organization theory. While much is known about the value of schemas and their use and structure, direct examinations of how new schemas emerge are lacking. Our study of the life insurance industry's development of the business computer schema from 1945 to 1975 addresses this gap. We find that schema emergence involves three key processes: (1) assimilation into an existing schema, (2) deconstruction of the existing schema, and (3) unitization of the new schema into a single cognitive unit. Intriguingly, our study also shows that these three processes are shaped by the interplay of multiple analogies over time. More broadly, beyond contributing to the psychological foundations of strategy and organization by setting forth the processes of emergence, these findings have important implications for the process of change and managerial cognition.


 A longstanding finding in strategy and organization theory is that managing environmental

change is difficult. Research suggests that it may not be the environmental changes per se that create

difficulties; instead, it may be executives' cognitive assessments of the change (Kaplan, 2008).

Environmental changes like technological innovations make it challenging for organizations to

effectively engage in coordinated action since the meaning of the innovations may be unclear.

Consequently, the management literature increasingly has focused on the cognitive aspects of and

implications for market behavior (Kaplan & Tripsas, 2008; Tripsas & Gavetti, 2000).

Research suggests that schemas are central to the ways organizational members deal with these

cognitive challenges. Schemas are defined as knowledge structures that contain categories of

information and relationships among them (Dane, 2010; DiMaggio, 1997; Elsbach, Barr, & Hargadon,

2005; Fiske & Dyer, 1985; Gick & Holyoak, 1983). Schemas are important because they help give

meaning to environmental changes and so help stimulate and shape action (Elsbach et al., 2005;

Nadkarni & Narayanan, 2007; Ruef & Patterson, 2009). A primary agenda for firm research on schemas is

examining their impact (Schminke et al., 1997), change (Elsbach et al., 2005), or structural attributes

such as size, complexity or focus (Dane, 2010; Nadkarni & Narayanan, 2007). Surprisingly, however,

there is very little understanding in strategy and organization theory about how schemas emerge.

Work in psychology does provide some understanding of schema emergence. It points to the role

of analogies where individuals think back to some situation they experienced or heard about and then

apply lessons from that situation to the current one. This research typically relies on lab studies where

individuals are carefully provided an unambiguous analog that exactly fits the focal situation (Gick &

Holyoak, 1983). Yet, firms rarely operate in such controlled settings with single analogies. So while it

appears likely that analogies play a role in the formation of a collective schema, there are no studies that

explicitly address how this may occur. Overall, although managerial cognition scholars have continued


to call for research that moves beyond understanding the content of important cognitive concepts like

schema, there is very little understanding about schema emergence. As Gavetti and colleagues (2005:

709) stated, “Especially intriguing is the question of where cognitive representations come from.”

Our study addresses this research gap by asking “how does a new collective schema emerge over

time?” Through content analysis, we empirically study how life insurance firms developed a new

schema for the business computer from 1945 to 1975. The business computer, commercially introduced

in 1954, was a significant new technology and created the opportunity for potential consumers

to develop a new schema to interpret it.

Our central contribution is a theoretical framework that helps open the “black box” of schema

emergence. We identify three novel processes. First, assimilation appears important in schema

emergence. We find that insurance firms faced multiple analogies for the computer, but largely adopted

the one most related to prior technological solutions. While the related analogy helped the new

technology gain traction, it had the effect of making the emerging new schema get assimilated into an

existing schema. Deconstruction is another key process since we find that through ongoing use of the

computer, the existing schema gets deconstructed. The categories and relations of the central analogy

become more general and less valid whereas the categories and relations of a marginal analogy become

more specific and more valid. Finally, we find support for unitization as a process related to schema

emergence. Data reveal that categories and relations related to the marginal analogy become

increasingly connected over time so that they become a conceptually distinct stand-alone cognitive unit.

Overall, like existing research in psychology, our study reveals the relevance of analogies in schema

emergence. However, unlike existing research in psychology, our study uncovers a much more nuanced

role of analogies that underscores their number, nature, and continued interplay. Besides contributing to


the literature on managerial cognition, our study contributes fresh insights to change theory and to the

increasingly influential psychological foundations of strategy and organization.

THEORETICAL BACKGROUND

As scholars have become increasingly interested in the cognitive underpinnings of markets,

technological change, and strategic decision-making, they have begun to characterize the cognitive

concepts that influence how market participants interpret their environment. This research often

highlights schema. Schemas are defined as the representations of the categories associated with a

concept as well as the relations among those categories (Dane, 2010; Hargadon & Fanelli, 2002). As

such, schemas act as cognitive frameworks that simplify information processing (DiMaggio, 1997).

They enable firm leaders to order and interpret their environment and so promote efficiency as ongoing

activities are cast into relatively stable patterns (Misangyi, Weaver, & Elms, 2008).

The schema concept is related to a variety of cognitive concepts, including frames, industry

recipes, categories, knowledge structures, dominant logics, and interpretive schemes. While these

concepts all vary slightly, they share the assumption that any given experience may be understood by a

group of individuals or firms in different ways (Bartunek, 1984). Importantly, many of these concepts

also share definitional underpinnings with the schema concept. For example, several scholars

use the term “frame” (Benford & Snow, 2000; Kaplan, 2008; Kaplan & Tripsas, 2008) to describe how

organizational members understand the world around them. Yet, much empirical work on frames (e.g.,

Kaplan, 2008; Kaplan & Tripsas, 2008) relies on and explicitly references Goffman’s (1974/1986: 21)

original conceptualization of frames as “schemata of interpretation”. Other organizational scholars

focus on interpretive schemes, which are defined as “cognitive schemata that map experience of the

world” (Bartunek, 1984: 355), or the term “knowledge structures”, which is used interchangeably with

the term schema (Walsh, 1995: 285). Thus, we focus on the schema concept not only because it is


central in the organizations literature but also since other frequently invoked cognitive concepts share

conceptual underpinnings with the schema concept.

Focusing on schemas allows us to more concretely define and measure what we mean by

emergence. Because schemas include categories and their relations, we can analyze emergence in terms

of the formation and development of both of these elements. Looking at categories alone may not

provide a complete understanding because this approach does not address the relations among elements

that also appear to impact interpretation and action. For example, Siggelkow (2001) found that the

apparel giant Liz Claiborne had difficulty adapting to changes in its environment largely because of the

many interdependent relationships among its business activities. Likewise, Henderson and Cockburn

(1996) argued that many pharmaceutical firms are slow to develop science driven R&D partly because

of “complementarities” among existing practices.

Further, thinking of schemas as categories and their relations (and not just categories) is

important since it enables scholars to address several key questions related to schema such as their

structure, use or change. For example, some studies focus on the structural characteristics of schema, a

central characteristic of which is complexity (Eden et al., 1992; Nadkarni & Narayanan, 2005, 2007).

Complexity refers to both the number of distinct categories in a schema as well as the degree of

connectedness (i.e., number of relations) among those categories (Walsh, 1995). Theorists argue that

complex schemas allow organizational members to accommodate a greater variety of distinct strategy

solutions in decision making (Nadkarni & Narayanan, 2007). Other studies focus on schema change.

They suggest that with the accumulation of experience and understanding, a given schema will become

more stable. That is, the categories comprising schemas, in addition to the relations linking categories to

other categories, may become harder to change (Dane, 2010; Fiske & Taylor, 1984; Goldstein &

Chance, 1980). An important insight in this work is that within market settings, schemas may exist at a


collective level, and not just an individual level. As support, firm and population studies emphasizing

the role of knowledge argue that schemas exist at collective levels that come to represent the varied

understanding among the participants (DiMaggio & Powell, 1983; Grant, 1996; Johnson & Hoopes,

2003; Nadkarni & Narayanan, 2007; Weick, 1979). Johnson and Hoopes, for example, investigate

“whether intraindustry competitors hold a common pattern of beliefs (or schema).” Similarly, Hargadon

and Fanelli (2002: 294) note that because schemas are influenced by the surrounding social context, firm

members often come to share the same schema.

Yet, while extant organizations research has been quite explicit about the definitions of schema

as well as the structure, use, change and collective-level relevance of schema, it has been far less explicit

about schema emergence. Hence, empirical studies use a variety of assessment procedures to identify the

presence, evolution, and nature of a schema, but provide little in-depth understanding about how that

schema comes to exist.

Fortunately, psychology research provides some insight into schema emergence. It highlights the

role of analogies and analogical transfer. Analogical transfer is defined as the transfer of knowledge

from one situation or concept to another by a process of mapping – the construction of correspondences

(often incomplete) between elements of a source analog and those of a target (Gick & Holyoak, 1983;

Holyoak & Thagard, 1989). Experimental evidence (Gick & Holyoak, 1983; Novick & Holyoak, 1991;

Schustack & Anderson, 1979) and computational analysis (Winston, 1980) suggest a strong relationship

between the processing of concrete analogs and the formation of a general schema (e.g., learning the

general concept of “pump” by comparing hearts and water pumps)1.

1 For example, in one of the earliest studies investigating the induction of a problem schema from concrete analogs, Gick and Holyoak (1983: 3) asked subjects to solve Duncker's (1945) "radiation problem": "You are a doctor faced with a patient who has a malignant tumor in his stomach. It is impossible to operate on the patient, but unless the tumor is destroyed the patient will die. There is a kind of ray that can be used to destroy the tumor. If the rays reach the tumor all at once at a sufficiently high intensity, the tumor will be destroyed. Unfortunately, at this intensity the healthy tissue that the rays pass through on the way to the tumor will also be destroyed. At lower intensities the rays are harmless to healthy tissue, but they will not affect the tumor either. What type of procedure might be used to destroy the tumor with the rays, and at the same time avoid destroying the healthy tissue?" Before having subjects try to solve the radiation problem, Gick and Holyoak had them read stories about analogous problems and their solutions. In one story, "The General", a general wants to capture a fortress situated in the middle of the land. There are several roads emitting outward from the fortress. But each contains mines, so passage is only safe with a small force of men, as a large force will set off the mines. This makes a large-force direct attack impossible. The general solves the problem by splitting the army into small groups, positioning each at the start of a different road, and having all groups converge at the same time on the fortress. Other stories ("The Red Adair", "The Fire Chief", "The Commander") provided by the authors also offer analogous solutions to the radiation problem, so that subjects generally induced a "convergence" schema. This emergent schema helped subjects solve the radiation problem – i.e., the doctor could have several low-intensity rays directed at the tumor from different directions at the same time, thereby preserving the healthy tissue but having the effects of the low-intensity rays combine to eliminate the tumor.


Although these general theoretical arguments linking analogies to the emergence of a new

individual-level schema are well supported in psychology, unresolved issues remain in the

organizations literature about how a collective level schema emerges. First, while the organizations

literature discusses analogical transfer, it does not directly and explicitly show how it relates to schema

emergence. Rather, it links analogies to important organizational outcomes such as innovation

(Hargadon & Sutton, 1997), learning (Neustadt & May, 1986), competitive positioning (Gavetti,

Levinthal & Rivkin, 2005), problem solving (Schon, 1993), creativity (Hargadon & Fanelli, 2002), and

institutional designs (Etzion & Ferraro, 2010). For example, in their empirical study on technology

innovation at the design firm IDEO, Hargadon and Sutton (1997: 739) found that designers made

analogies between past solutions and current problems. In one case, designers trying to power a door

opener on an electric vehicle charger came up with a solution when they remembered an analogous

action in the pistons that open the rear window of a station wagon. However, while these empirical

studies provide much understanding about the value of analogical transfer in organizations, they do not

directly address how it influences schema emergence.

A second unresolved issue is that schema emergence in organizations is likely to be more

complicated and dynamic than psychology theory suggests. Psychology studies on schema emergence

generally focus on lab studies where individuals face clear, concise and concrete analogs that directly

map to target problems (Gick & Holyoak, 1983). Yet, decision makers in organizations rarely face such



clear-cut situations and solutions. Moreover, psychology studies show that in order for a schema to

emerge through analogical transfer, many subjects needed an explicit hint to help them see the

correspondence between the elements of a source analog and those of a target (Novick & Holyoak,

1991). It is hard to understand how (if at all) the giving of “hints” might occur in organizations. Finally,

the nature of analogies is likely to be different at the organizational level. Studies in psychology find that

individuals usually draw on one analogy at a time to generate a new schema. Serial processing of analogies

takes place where each is not in direct competition with another (Gick & Holyoak, 1983). By contrast, in

organizations individuals may face multiple competing analogies that must be processed simultaneously,

not serially. Further, these analogies may be of varying familiarity – some more incremental and some

more radical – thereby making it less clear which analogies might be adopted and why. Together, these

points suggest that existing psychological theory for schema emergence at the individual level may be

imprecise for schema emergence at the collective level.

Similarly, the nature of an organization complicates understanding of how a collective (vs.

individual level) schema might emerge. On the one hand, it seems likely that the number of potential

relationships between categories might increase combinatorially with the number of new categories

added over time by different users, thereby permitting a vast and unpredictable range of different

potential schema structures to emerge. Yet, on the other hand, having a large number of relations and

categories may be insufficient to guarantee schema emergence; many of the relations or categories may

be less relevant or salient. Indeed, it may be the case that certain emergent categories and relations can

inhibit schema formation by drawing a disproportionate amount of users’ attention and so “drown out”

other important categories and their relations. Existing organizational literature on categories provides

some tentative support for these arguments. It highlights the importance of the differentiation between

categories in order to establish a distinct category (Hannan, Polos, & Carroll, 2007). Yet because this


work focuses on only one component of schemas, categories, and overlooks relations among categories,

it is not clear if what happens at the category level happens at the schema level.2 In sum, while the

organizations literature is generally clear that collective-level schemas exist and are central to the way

firm members come together and have a shared sense of belonging (Bartunek, 1984; Johnson & Hoopes,

2003), this literature is generally unclear about how a collective-level schema comes to exist.

2 Much research on the schema concept in psychology began in parallel to theoretical developments on categorization – i.e., how people categorize objects. Categorization research highlights the concept of prototypes, an ideal instance of a category (e.g., a robin is a better prototype of the category "bird" than an ostrich). Individuals decide whether a new instance is a member of the category by determining the similarity of its features to those of the prototype. The more features it shares, the more consistently it will be viewed as a typical category member. Although research on categories and schema both share a concern with the way knowledge shapes understanding, they are different in a number of ways. For example, category research helps shed light on how a label is applied. Schema research extends this view by helping explain what effect the label has on a category and how relations link it to other categories. Further, the categorization literature discusses how prototype categories often have typical features (size, color, shape) and most attributes are known, even if actual category members differ so extensively on those features that they become irrelevant for identifying category members (Anderson, 1980). In contrast to the concept of categories, the concept of schema lets some features remain unspecified. Due to this flexibility, research suggests that a schema might be a more efficient cognitive representation than a prototype category since it requires fewer specifics and is more focused on the essence of category membership (Mandler, 1979).

Overall, these unresolved issues suggest a lack of specific understanding regarding how a

collective schema emerges. So, while existing literature suggests that the development of a new schema

involves the identification of new categories and relations, there is little in-depth understanding about

how this developmental process occurs. In this paper, we address this gap. In what follows, we detail our

methods, and then set forth our empirically grounded theoretical framework for schema emergence.

DATA AND METHODS

Given the general lack of research on schema development, we combined theory elaboration

(Lee, 1999) and theory generation (Eisenhardt, 1989) in our analysis. Thus, we were aware of the extant

literature on schema and so examined data for the relevance of constructs such as categories and relations

among categories. But, we also looked for unexpected types of processes by which those categories and

relations developed over time and became institutionalized as a new schema.

Case Selection and Data Sources



Our setting is the life insurance industry's development of a new cognitive schema to interpret

the business computer from 1945-1975. By business computer, we mean those computers used to

process and manage transactions as opposed to more computationally focused computers, which were

used in the military as well as in business. In this case, the insurance industry used business

computers to process and issue new applications as well as manage the paying of premiums. For

simplicity's sake, throughout the paper we will use the term computer to refer to these business

computers. As a radical new technology, computers did not exist in a previous schema and offer an

opportunity to examine how a new schema gets developed.

We focus on the insurance industry for several reasons. First, as Yates (2005) notes, insurance

was one of the first and largest users of the computer. Thus, understanding how this industry developed

its understanding of the computer is important to our general understanding of the commercialization of

the computer within industry. Second, the large financial and organizational commitment to purchase a

computer left a rich archival record of public discussion about the computer. Last, certain

characteristics of the insurance industry’s history allow us to address changes within the schema. The

collective schema we are interested in is the schema of insurance companies, not the broader market or

the shared schema between the various market participants. Thus, this is the schema that consumers

used to evaluate the new technology. Yates (2005) notes that three professional and trade associations

played the prominent role in shaping how insurance firms interpreted and used the computer: The

Society of Actuaries (SOA), Life Office Management Association (LOMA), and the Insurance Accounting

and Statistical Association (IASA). SOA is the main professional society for actuaries, a key occupation

within the insurance industry. LOMA and IASA were trade associations that focused on general

issues related to insurance, and in particular, the use of office technology.

At the field level, these associations created committees to investigate the computer, held


conferences, and as will be discussed in SOA’s case, distributed an influential report on the computer

before its commercial release. The occupational groups who participated within these associations -

accounting, administration, actuary, and the systems group responsible for developing and managing

office technology - also represented the main occupational groups who investigated and introduced the

computer within the insurance organization. Thus, we take these associations to represent the core

collective discourse about the computer for insurance firms.

Consequently, our archival efforts focused on the proceedings of these three associations, their

electronics committee meetings/reports, as well as an influential book by an insurance professional

associated with this group. The proceedings documents included detailed discussions about how

members planned to use or were currently using the computer – in many cases with

detailed procedures of how the system worked, as well as other important topics such as organizational

and career issues of computer-related professionals and how to purchase and evaluate computers. We

collected 399 articles, reports, and books from insurance representatives that discussed the preexisting

technologies as well as the computer from 1945–1975.3

3 As noted, the SOA developed important early documents, which we include in our study. But, since we are interested in the business computer and actuaries also worked on computational computers, the bulk of these articles throughout this time period come from IASA and, to a lesser extent, LOMA. Actuaries were well-represented presenters at the IASA and LOMA meetings. In addition, while computer manufacturers presented at these meetings, we privileged the insurance representatives' accounts because we are interested in their schema. Yates (2005) notes how insurance played an important role in shaping the computer manufacturers' perspective.

While the computer was not commercially

released until 1954, we collected works dating back to 1945 to get a sense of the pre-existing schema as

well as early interpretations of the computer before it was actually used. A unique feature of this case is

that insurance firms developed a schema prior to the commercial release of the computer, and their use of

the computer helps us assess the influence of experience in modifying a schema. While the computer

and its schema certainly continue to evolve even today, we stop in the mid-1970s because by then the

computer schema had fully emerged as an independent knowledge structure – our primary concern in


this paper.

Coding Methodology

Following an established research stream, we take written discourse to represent the cognitive

schemas (Barr, Stimpert, & Huff, 1992; Tsoukas, 2009). This approach is particularly relevant to the

insurance case. As noted in the previous section, written discourse in the form of these reports and

proceedings was a primary source of communication and learning between insurance companies.

Methodologically we are interested in using these texts to identify and measure the changes in the

insurance firms’ collective schema. By defining schemas in terms of their categories and relations, this

process entails identifying the various categories and how they are related within each text.

Grammatically, we interpret nouns as categories and verbs as relations, which means capturing all of the

nouns within the text as well as the verbs and the nouns which the verbs connect. Some nouns may not

be connected through relational verbs, so it is important to identify categories and relations separately.

Conventional discourse analysis tools are not well equipped to identify these relational

structures, but Carley (1993, 1997) has identified a semi-automatic process, called cognitive mapping,

that measures cognitive schemas. Cognitive mapping is a form of content analysis that involves

processing sentences within a text to isolate the nouns and verbs. Carley and colleagues (1993, 1997)

have developed software, called Automap, to assist in this process. We adopted her general procedure

for coding, and used Automap to help define the verbs and nouns. More specifically, we scanned each

original document and used OCR technology to convert the images to text files.4 We input each text into

the Automap software, which identified the instances of all the words, including the parts of speech.

4 Eleven documents were either unscannable or the OCR results were of too poor quality to use. For these cases, we did not use the Automap software, but coded the category and relational structure by reading through the text.

Carley notes that an important part of this process involves considering which nouns and verbs to

capture within the text. We focused on nouns and verbs that discussed the computer and its operations,



choosing to exclude extraneous information introducing the topic or describing the company. We

processed the nouns and verbs separately so the identified categories are not dependent upon the

relational structure. While this software can also map the relations, we found the texts to be written in a

style that made it difficult to accurately capture this in an automated fashion. Therefore, with the

generated list of nouns and verbs, we read through each text capturing what categories each verb

connects.

To give a sense of this process, consider the following statement from an article presented at an

association meeting: “The cards are then mechanically calculated and punched with the unpaid number

of weeks and the unpaid portion of the first-year's premium and commission to be withdrawn” (Beebe,

1947). Following this procedure, the categories include “cards”, “unpaid number of weeks and portion

of the premium”, and “the unpaid number of weeks and portion of the commission”. The relations

include “calculate” and “punch”. The full relational statements connect card with each of the two data

elements. We coded each verb in its root form. For example, in this case, "calculated" would be coded

as “calculate”. However, with nouns, we captured each in its form as represented in the text in order to

identify how it changed over time as well as identify new emergent concepts. Carley (1993, 1997) also

recognizes that the analyst must decide how to code frequency – either as each occurrence or whether it

occurs in the text at all. Since many of these texts were procedural in nature, they had many occurrences

of the same concepts and relations. Consequently, we only identified whether they were present in the

text. We completed this procedure for each text for each year, generating 4,330 unique categories and

451 unique relations.5

5 The total number of relations appears lower than the total number of categories. This relates to our coding strategy. The number of categories is high because we coded categories at the lowest level of analysis – that is, how the terms were exactly presented in the text (e.g., IBM_650 or IBM_650_digital_computer). The number of relations is lower since when coding the verbs we focused on relational verbs that connect categories.

Given the inherent subjectivity of this process, we randomly selected ten articles and had two graduate students not associated with the project code each. 92% of the categories,


relations, and relational connections of one coder were included in our coding, and 88% of the other.
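To make the per-text coding step concrete, here is a minimal sketch of the procedure just described. It is our illustration, not the authors' Automap pipeline: we assume an off-the-shelf part-of-speech tagger (NLTK) as a stand-in, and the function and variable names (code_text, manual_relations) are hypothetical. Relation coding – which categories each verb connects – was done by hand in the study, so it appears here as a manually supplied list.

```python
# Sketch of the per-text coding step, assuming NLTK (with its punkt,
# tagger, and wordnet data installed) as a stand-in for the
# Automap-assisted workflow described in the text.
from nltk import pos_tag, word_tokenize
from nltk.stem import WordNetLemmatizer

lemmatizer = WordNetLemmatizer()

def code_text(text, manual_relations):
    """Return presence-coded categories and relations for one text.

    Categories are nouns kept in their surface form; relations are verbs
    reduced to their root form (e.g., "calculated" -> "calculate").
    Presence/absence is recorded rather than frequency, so sets are used.
    """
    tagged = pos_tag(word_tokenize(text))
    categories = {word for word, tag in tagged if tag.startswith("NN")}
    relations = {lemmatizer.lemmatize(word.lower(), pos="v")
                 for word, tag in tagged if tag.startswith("VB")}
    # manual_relations: hand-coded (verb, source_noun, target_noun) triples,
    # since the texts' style resisted automated relation extraction.
    return categories, relations, set(manual_relations)

cats, rels, triples = code_text(
    "The cards are then mechanically calculated and punched.",
    [("calculate", "card", "unpaid_premium")])
```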

The collective schema for each year represents the aggregate of all texts for a given year.

Appendices 1 and 2 provide a summary of the most frequently occurring categories (Appendix 1) and

relations (Appendix 2) for the collective computer schema from 1947 (corresponding to the first

discussion of the computer) through 1975. To avoid the idiosyncrasies of year to year fluxuation in the

level of discourse, we collapse the initial discussions of the computer prior to commercial introduction

(1947- 1954) together and group by three year increments after its commercial introduction in 1954. We

adopt this group for much of our temporal analysis. Each cell represents the total number of articles that

used a particular category or relation. The total number of articles for each time period is represented at

the top of each table. These frequency counts over time show how certain categories and relations

become more or less prevalent over time and so help assess schema emergence. We refer to these

Appendices throughout the paper. Finally, it should be noted that throughout our narration of the story

we try to use the terms used at the time, but often use the term “computer”. This is not intended to reify

the computer. However, we do note that Appendix 1 shows that the term “computer” was used

frequently early on. Yet, in these texts it is also associated with other terms. We capture this variance

and changes throughout the history of the schema emergence.
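As an illustration of the temporal grouping and article counts just described, the sketch below bins per-article presence sets into the periods used in the appendices (1947-1954 collapsed, then three-year increments). The names `period` and `article_counts` are our own illustrative choices, not part of the authors' toolchain.

```python
from collections import Counter

def period(year):
    """Map a year to the temporal bins used for the frequency tables:
    pre-commercial discussion (1947-1954) collapsed into one period,
    then three-year increments after the 1954 commercial introduction."""
    if year <= 1954:
        return "1947-1954"
    start = 1955 + 3 * ((year - 1955) // 3)
    return f"{start}-{start + 2}"

def article_counts(coded_texts):
    """coded_texts: iterable of (year, category_set) pairs, one per article.
    Returns, per period, how many articles used each category (presence
    per article, not raw occurrence, per the coding strategy above)."""
    counts = {}
    for year, cats in coded_texts:
        counts.setdefault(period(year), Counter()).update(cats)
    return counts

tables = article_counts([(1952, {"machine", "card"}), (1956, {"computer"})])
```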

Following Kaplan and Tripsas’ (2008) call for more historical work on the cognitive

processes of technological change, we extend Carley’s (1997) method by considering the historical

context and the actual use of computers. Our analysis of the associations’ materials revealed the

importance of technological changes in the computer, how the insurance industry used the computer,

and broader trends about computer technology such as the development of management information

systems (MIS) in the late 1960s and 1970s. Therefore, we supplement our analysis of association

proceedings with historical analysis of technological changes, research on the life insurance industry’s


uses of the computer (Yates, 2005), and the general history of these broader movements (Haigh, 2001).

HISTORICAL CONTEXT AND SUMMARY OF THE SCHEMA EMERGENCE PROCESS

Life insurance firms provide policyholders coverage against potential loss in exchange for

premiums. Beyond the actuarial and investment analysis, much of the work in insurance firms is

routine and clerical in nature: preparing and processing policy records, determining the premiums paid,

dividends, and agents’ commissions, as well as notifying and collecting policyholders’ premium

payments, and the accompanying accounting procedures to record these transactions (Adams, 1946).

Historically, insurance firms invested substantially in clerical workers and technology to efficiently

manage the policy and accounting processes (Yates, 1989). By the 1940s, virtually all insurance firms

used a combination of business machines – tabulating machines, sorters, verifiers, calculators, and

addressing machines – to sort and calculate information represented as punched holes in cards. These

punch cards became the record on which policy and accounting information was stored. IBM historians

have noted that “Insurance companies were among the largest and most sophisticated business users of

punched-card machines in the late 1940s and 1950s” (Bashe, Johnson, Palmer, & Pugh, 1986).

During World War II, industries in general experienced a clerical labor shortage, which was

exacerbated in insurance by a post-war life insurance boom (Bureau of Labor Statistics, 1955). As a

result, the insurance industry became increasingly interested in new technologies that aided in

information processing. One promising, radically new technology that emerged from the war was the

computer. The U.S. government in World War II used primitive computers mainly for computational

purposes such as calculating missile trajectories. After the war, technology manufacturers in search of a

commercial market began developing what became known as business computers, intended not solely

for computational work but also for managing business processes such as processing premiums. Part of


the process of adopting the computer involved developing interpretations of what the computer was –

i.e., a schema that captured understanding about the computer.

Our historical analysis surfaced three distinct temporally phased processes shaping the

insurance firms’ development of the computer schema: assimilation, deconstruction, and unitization.

Unexpectedly, we found that each process relates to analogical transfer, but in a more nuanced and

dynamic way than what is described in the extant psychology literature. First, our study shows that

before the computer was even commercially available in 1954, insurance companies faced two

competing but fundamentally distinct analogies for the new technology – “machine” and “brain”.

Insurance companies assimilated the machine analogy into an existing office machine schema, treating it

as a standard piece of mechanized equipment. We call this stage assimilation. While assimilation helped

initiate action to purchase the computer, the novel aspects of the computer, such as programming and

decision-making related to the brain analogy, received little attention and so were largely ignored.

After gaining some experience with the computer into the 1960s, insurance companies

began to reflect more deeply about how they had assimilated the computer. We call this reflective

process deconstruction because it led to the weakening of some existing categories and relations in the

existing schema and facilitated the incorporation of novel categories and relations related to the brain

analogy. Thus, the prior analogy of computer as a brain that was largely ignored was made more

relevant while the machine analogy that was assimilated was made less relevant. Finally, the last

process, unitization, served to strengthen the more novel categories and relations related to the brain analogy

so that what emerges is a new stand-alone schema that is dissociated from the existing schema. Overall,

a key insight is that schema emergence reflects the ongoing interplay of multiple distinct analogies,

rather than the quick selection of one analogy over others as suggested in the literature. In what follows,

we discuss the details of each process.


THE ASSIMILATION PROCESS

The starting point of the emergence process is that the new object (the computer in this

study) must be cognitively recognized. The existing literature argues that making analogies between

past technological solutions and new technology is useful because it increases the recognition and

ultimately the legitimization of the new technology (Hargadon & Sutton, 1997). Our data, however,

reveal that such grounded analogies can initially have negative, not positive, effects since they divert

attention away from other relevant analogies that spotlight what is truly unique about the new

technology. Specifically, we found that prior to the commercial release of the computer in 1954

insurance companies faced two distinct analogies for the new technology: (1) “machine” and (2)

“brain”. Because the machine analogy was grounded in past technological solutions (i.e., tabulating

machines), insurance companies largely adopted this analogy and not the brain analogy. The result of

using the machine analogy was that insurance firms assimilated the new technology (computer) by

fitting it directly into their existing schema for office machines. This assimilation, however, came at the

cost of pushing into the background the analogy with the human brain that emphasized the novelty and

potentiality of the computer.

Showing this assimilation process requires identifying the schemas of the proposed analogies

as well as the existing schema used to evaluate these alternatives. Yates’ (2005) historical analysis

reveals that insurance firms began to substantially investigate and publicly discuss the computer in the

mid-1940s. The initial discourse on the computer included presentations at conferences from insurers

and computer manufacturers, commissioned reports to study how the computer could be used, and even

books on the subject.6

6 Studies of cognitive practices often focus on the media (Rosa et al., 1999). In this case, the general media did not start actively discussing the computer until closer to its commercial release in 1954.

Therefore, our initial analysis focused on 1945 through the commercial introduction of the computer in 1954. We identified 67 texts, 41 of which addressed how insurance


firms used the existing office technologies, such as tabulating machines, collators, and printers to

process insurance work. The following passage about how Prudential used various tabulating machines

to prepare important documents illustrates how insurers thought about the current technology:

From approved new business applications, policy writing and beneficiary cards are key punched and verified. The policy writing cards are mechanically sorted and matched with master cards (by collator) to insure the accuracy of the age, kind, premium and amount of insurance in each case. The policy writing and beneficiary cards are next mechanically merged in policy number order and used to write the policies on a bill feed tabulator. After completing the listing of the agents' register sheets, the new business, reinstatement and life transfer cards are reproduced to in-force file cards. (Beebe, 1947: 191-2).

This passage highlights the focus on the punch cards as the primary unit of information, the machine

doing the work, and the functions through which it gets processed, manipulated, and calculated.

The remaining 26 texts developed two dominant analogies for the new technology: one

comparing the computer to the “brain” and another with a “machine”. Edmund Berkeley (see Yates

(1997) for more detailed discussion of Berkeley), an executive from Prudential who had extensive

interactions with computer manufacturers, developed the human brain analogy. At industry

meetings, he discussed the computer as a “mechanical brain” and developed the analogy in his

1949 popular book, Giant Brains. He opens the book with the basic construct of the analogy: “Recently,

there has been a good deal of news about strange giant machines that can handle information with vast

speed and skill. They calculate and they reason … These machines are similar to what a brain would be if

it were made of hardware and wire instead of flesh and nerves” (Berkeley, 1949: 1).

Where Berkeley’s brain analogy focused on the new technology’s ability to think and reason

much like a human, others within the industry developed a different analogy. Most importantly, the

three societies also commissioned committees to generate reports about the computer and its potential

use in the insurance industry. In 1952, the Society of Actuaries’ Committee on New Recording Means

and Computing Devices issued a major and influential report, and both IASA and LOMA hosted panels

at their conferences about potential use of computers and in 1953 co-sponsored an Electronics


Symposium focused exclusively on this topic. Unlike Berkeley’s work, these reports characterized the

computer as an electronic version of existing technology, often using the analogy “information

processing machine” or “data processing machine” to describe the new technology. The 1952 Society of

Actuaries’ report stated:

These new machines have been called computers because they were developed primarily for mathematical work. It is a mistake, however, to think of them today as purely computing machines capable only of a large amount of arithmetic. In recent years, some very important improvements have converted them into machines capable of a wide variety of operations. Nowadays we must think of them as information processing machines with computing representing just a part of their total capabilities. (Davis, Barber, Finelli, & Klem, 1952: 5).

While qualitatively different, both analogies tried to make sense of the new technology by

invoking aspects of the pre-existing schema. We measure the extent of assimilation by the degree to

which the categories and relations of the analogy are the same as those of the existing schema. If many

categories and relations are the same, the analogy is more assimilated within the existing schema. To

assess the degree of assimilation, we identified the categories and relations for the preexisting schema,

the brain, and the machine analogies using the aforementioned coding procedure. This generated 334,

322, and 152 unique categories and 103, 83, and 77 unique relations for the pre-existing schema, machine

analogy and brain analogy respectively. Figure 1 shows that of the total categories in the pre-existing

schema, almost 35% of them were shared with the computer as machine analogy vs. 12% with the

computer as brain analogy. Likewise, Figure 1 shows that of the total relations in the pre-existing

schema, almost 50% of them were shared with the computer as machine analogy vs. 26% with the

computer as brain analogy.

********************** Insert Figure 1 About Here **********************
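As a rough illustration of the overlap measure just described, the sketch below computes the fraction of the pre-existing schema's elements that an analogy shares (the calculation is the same for category sets and relation sets). The function name and the sets shown are our own invented stand-ins, not the actual coded data.

```python
def shared_fraction(existing, analogy):
    """Fraction of the pre-existing schema's elements that also appear in
    an analogy's schema, the overlap measure used to assess assimilation.
    Works for either category sets or relation sets."""
    return len(existing & analogy) / len(existing)

# Illustrative stand-in sets, not the actual coded data.
existing_categories = {"card", "clerk", "machine", "policy", "tabulator"}
machine_categories = {"card", "machine", "computer", "data"}
brain_categories = {"computer", "data", "problem", "operation"}

print(shared_fraction(existing_categories, machine_categories))  # higher overlap
print(shared_fraction(existing_categories, brain_categories))    # lower overlap
```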

However, simply sharing categories and relations is not a complete measure of assimilation. It

is possible that the shared categories and relations are not part of the central schema. In these cases, the


shared categories and relations may be with parts of the existing schema that are not very meaningful.

Table 1 addresses this issue by comparing which categories and relations associated with the machine or

brain analogy are shared with the central categories and relations of the pre-existing schema. We define

central in terms of the high-frequency categories and relations within the pre-existing schema. Table 1

and Figure 1 show that the machine analogy shares many of the core categories and relations of the

existing schema. In contrast, the brain analogy does not (see also Table 1 and Figure 1). Hence, the

machine analogy is well assimilated, but the brain analogy is not.

********************** Insert Table 1 About Here **********************

While the texts on the existing technologies invoked a wide variety of categories and relations,

Table 1 shows a concentration around categories such as clerks, punch cards, policy, tabulators and

machines. Clerks interacted with the various kinds of machines according to the listed relations to

manipulate punch cards to complete a business process. The relations between these categories centered

on creating or inputting information on cards through the action of a “punch”, making computations

related to the business process (“tabulate,” “calculate”), processing the punch cards (“sort,” “file,”

“merge,” “matching”) and writing out the output (“print,” “list,” “post”). Consistent with Beebe’s (1947)

passage, generally the main unit of information was the punch card – the relations were between the

machine and the punch card and not the data found on it. Yet “check” and “verify” indicate the need

for verification of the information actually punched on the cards. Verification involved both mechanical

and manual processes and our analysis revealed that clerks often validated the results from one machine

before passing them along to a new machine. “Reproduction” – or the creation of new punch cards with the

same information – was also frequently required because cards were used for different purposes. Some

estimated that life insurance firms had as many as 10 different versions of the same card, making

verification and consistency between the cards hard to manage.


In particular, Table 1 shows the strong similarities between the pre-existing schema and the

“machine” analogy (vs. “brain” analogy). For the machine analogy, the core categories still focus on

machines and punch cards. Relationally, the machine analogy focused on similar kinds of transaction-

oriented actions: entering information (“read,” “punch in”), computing (“compute,” “add”),

processing information (“sort”) and writing results (“write”). In contrast, the brain analogy uses

categories and relations that generally are not shared with either the pre-existing schema or the computer

as a machine analogy. While there are similarities, such as “data”, “computer”, and “add”, the brain

analogy concentrates on categories and relations associated with decision-making and thinking. New

categories, such as “problem” and “operation”, and relations, such as “look-up”, “solve”, “store”,

“remember”, and “think”, reflect the brain-like processes associated with decision-making.

The insurance industry quickly converged on the machine analogy and so largely characterized

the computer as such. Of the 26 texts that discussed the computer from 1947-1954, 21 adhered to the

computer as a machine analogy. Many rejected the brain analogy outright. E.F. Cooley of Prudential

Life argued: “I might use the term ‘giant brains’ to tie in my subject with the more or less popular

literature on this subject. But I hate to use that term since there are false implications in it, implications

that these machines can think, reason and arrive at logical conclusions, and I don't agree that this is

true” (Cooley, 1953: 355). Computer manufacturers also emphasized the machine analogy during this

time period. They presented 32 texts, ranging from documented question and answer periods in society

meetings to exhibits to prepared remarks. Those that addressed the computer used the machine

language. IBM even adopted the label “electronic data processing machine” to describe the computer,

which quickly became common after the commercial release of computers in 1954.

The large-scale adoption of the machine analogy helped develop the initial conceptual schema of

the computer. The high percent of shared categories and relations between thinking of the computer as a


machine and the existing schema indicate that the machine analogy was assimilated within the

existing schema. Interestingly, the categories and relations associated with the brain analogy

represented some of the more novel features of the new computer technology and helped differentiate it

from existing technologies. Berkeley (1949) made this differentiation explicit in his Giant Brains book,

where he used a table to show how the computer differs in thinking ability from previous technologies.

Consequently, these categories and relations related to the brain analogy were not completely ignored,

but faded into the background as the insurance industry focused on the more machine-like qualities and

functions of the computer. Therefore, much of the novelty of the new technology received less recognition

and development because of the strong preference for the machine analogy.

In 1954, the computer was commercially released and the insurance industry shifted from

discussing what it could do to developing uses for it. Yet, thinking of the computer in terms of a

machine maintained an influence on how insurance firms initially used the computer. A series of surveys

conducted by the Controllership Foundation in 1954-57 provides support. It reveals that 68% of firms

purchasing computers purchased IBM’s 650 – a smaller computer that more closely resembled a

tabulating machine and could be wheeled in to replace a tabulating machine. The surveys also reveal

that rather than create new business processes, life insurance firms converted existing office

applications, such as premium billing and accounting, over to the computer (this did not vary by kind of

machine). Yates (2005) notes that this incremental machine-like use of the computer persisted

throughout the 1950s and into the 1960s.

Overall, we find that the initial development of a new schema involves the use of analogies.

Why are analogies related to schema development? One reason is that analogies pervade human thought.

Studies find that making what is new seem familiar is a fundamental aspect of human intelligence that

relies on analogical reasoning (Hargadon & Sutton, 1997). Another reason relates to our context.


Empirical research suggests that the use of analogies is especially likely when what is new is sufficiently

novel and challenging (Gick & Holyoak, 1983). Thus, individuals are more likely to use an analogy if

situations are ill-defined. The lack of clarity about the functions of the new (computer) technology

during this time period (1945-1954) addresses this point.

Another question is why the machine and brain analogies surfaced. As noted earlier, at the heart

of analogical reasoning is the process of mapping – finding a set of correspondences (albeit rough and

incomplete) between what is known and what is unknown (Farjoun, 2008). During this mapping process

individuals will often search for alternate views as they try to find a cognitive representation that

maximizes the degree of correspondence between what is known and what is unknown. Since both the

machine analogy and brain analogy offer one-to-one correspondence with the new (computer)

technology being introduced (e.g., card, clerk, and sort for machine or memory, think, and process for

brain) it is reasonable that some references were made to a machine while others were made to a brain.

A related question is why the machine analogy was adopted more than the brain analogy. This

answer relates to the similar/dissimilar nature of analogies, a generally overlooked feature of analogies

in the existing organizations literature. We find that although insurance industry members

simultaneously faced multiple analogies that varied greatly in their frame of reference (machine vs.

brain), analogies with more familiar frames of reference appeared to gain greater traction than those with

less familiar frames of reference. This is because memory search is frequently shaped by existing

schema, and so an analog from a distinct domain (i.e., human physiology) lacks ready

resemblance to what is known (Tversky, 1977). Thus, it appeared easier for insurance firms to

conceptualize the computer as a machine than as a brain since the former was more similar to existing

technological solutions (e.g., the tabulating machine) that were commonly understood and broadly used

throughout the industry. As mentioned, many insurance firms learned about the computer through


attending presentations at industry association meetings and reading the popular press (Yates, 2005).

The industry associations allowed members to present their different ideas at their conferences;

however, such members largely adopted the computer as machine analogy as it appeared to be the most

readily understood and so legitimated within their associations. For example, as noted earlier, the

influential SOA 1952 report adopted the machine analogy.

Yet, while the machine analogy helped better anchor the new technology within a more familiar

reference, this familiarity caused emerging conceptions of the computer to be assimilated into current

conceptions of office machines. Surprisingly, therefore, we find that the beginnings of a new schema

may be assimilated into an existing schema. This assimilation relates to the power of well-defined existing schemas. The existing schema used by the insurance industry was well established in both its categories and the relationships among them. This made it hard for the insurance community to conceive of a new office-related technology as anything other than a machine, and so to accommodate it

as a novel schema. Indeed, insurance firms had been long-time users of tabulating machines and had

developed common routines and uses for these machines (Yates, 2005). These common uses helped

establish consensus around the categories of clerks, machines, and punch cards, and the transaction-

oriented relations (See Table 1), and maintain consensus around them over time – i.e., they were just as

likely to appear in a text after the computer was introduced in 1947 as they were before. Overall, like work in psychology, we find that analogies are relevant for seeding new schemas. However, unlike work in psychology, we find that firms face multiple analogies and that their familiarity can unexpectedly

foster assimilation into a pre-existing schema and so prevent the new schema from becoming a stand-

alone cognitive structure. Collectively, these observations lead to the following propositions:

Proposition 1: When firms face multiple analogies, they tend to select the one most similar to prior technological solutions.


Proposition 2: More emphasis on similar analogies can lead an emerging schema for a new technology to get assimilated into the pre-existing schema for the prior technology.

THE DECONSTRUCTION PROCESS

Our first finding suggests that assimilation may be the starting point of schema emergence.

However, assimilation presents a challenge and tension for ongoing schema development: while the machine analogy facilitates familiarity with the existing schema, for a new schema to emerge it needs to be seen as conceptually distinct. So, how does a schema further develop? One possible way

would be the gradual development of the initial schema so that the machine analogy extends beyond the

initial schema. If this were the case, we would expect to see persistence of the core machine-related

categories and relations identified in Table 1 along with the gradual addition of new machine-related

categories and relations. But Figures 2 and 3 reveal that, rather than persisting, the initial machine-related categories and relations diminish over time. Our data thus suggest a different pattern. Specifically, we find that brain-related categories and relations, which were less prevalent in the late 1950s, resurface and grow markedly over time. Collectively, Figures 2 and 3 reveal that a

new schema further emerges as the influence of a familiar analogy fades out while the influence of

another, less familiar analogy fades in. We now explain how this fading out and in occurs.

Although assimilation of the computer into the existing office machine schema remained

dominant, the schema began to gradually and subtly change in the 1960s. In particular, the idea of the punch card as the central unit of information processing expanded to include the data within the card. Also, the computer-as-machine analogy began to weaken as the significance of programming became more apparent. Finally, attention to decision-making identified new users of the computer beyond clerks. This also helped develop new relations that tied to the computer-as-brain analogy. We call this

process deconstruction to emphasize the way existing categorical boundaries and relations begin to break down. An important point of this phase is that previously overlooked analogies (i.e., brain) get revisited and become more relevant while, at the same time, previously dominant analogies (i.e., machine) become less relevant.

Figure 2 compares the frequency of the initial categories for the machine-based schema

versus the brain-like schema that eventually emerged. Figure 3 makes the same comparison at the

relational level. Collectively, they illustrate the deconstruction process at the level of changes in

categories and relations. Figure 2 shows that the core machine-related categories persisted into the 1960s but then began to fade in frequency throughout the 1960s and into the 1970s (see Appendix 1 for more detail on the exact categories that faded). Likewise, Figure 3 shows that machine-related relations

also began to fade in frequency over the same period of time (see Appendix 2 for more detail on

relations). In contrast, Figure 2 shows that brain-related categories began to increase in number and

frequency from when the computer was commercially introduced to the early 1970s. Figure 3 depicts a

related pattern. Relations related to the brain also increased dramatically during the 1960s and early

1970s.

********************** Insert Figure 2 About Here **********************

********************** Insert Figure 3 About Here **********************

********************** Insert Table 2 About Here **********************

Table 2 shows the most frequent brain-like categories and relations prevalent at the end of the

observation period and their frequencies in prior periods. Unlike the machine related schema, this

schema included many categories and relations associated with decision-making and brain-like

activities, such as working with data and management. Over time, the new categories and relations


related to the brain conveyed more specific information about how the computer would perform activities related to the brain. In particular, the ideas surrounded systems and their interface with “users” to process data and information in order to aid in making management decisions. Table 2 also

shows an increased attention to the activities around programming which instruct and guide the

computer to make these decisions. Even shared terms with the machine analogy changed their meaning

through their relations. For example, in the 1950s, texts used the relation “verify” to highlight how clerks would need to check the output of the machines; however, in the 1960s texts, “verify” is also used as an internal process that a computer can perform to check its own work. Thus, verification becomes more of

a brain-like activity that computers could do which further differentiated them from other office

machines. Similarly, relations such as “produce” and “prepare” originally referred to processing punch

cards, but they increasingly became associated with “report”, “management”, and “user” (which all

increased in the later periods of the sample). Thus, the conception of computers shifted from machines

in which “information” was “put through” to process transactions to computers and programs that

“develop” and “prepare” information for analysis and decision-making.

While Appendices 1 and 2 reveal that the categories and relations related to the brain analogy

often became more specific, they also reveal that in many cases the categories and relations related to the

machine analogy became more general. Initially, the texts of early computer as machine operations

detailed the processes of clerks and operators interacting with machines to process punch-cards. This

detail helps explain the initial high frequency of such terms as “clerk”, “punch”, and “punch card” seen

in the Appendices. One of the reasons that these terms decreased over time is that insurance firms began to replace these detailed descriptions with broader terms that more generally captured the overall process.

For example, the details of punch card operations got replaced with the more general term “punch card

system” (which had 1 occurrence in the first period and increased to 12 occurrences over the next 15


years after the introduction of the computer). Similarly, the detailed specification of the clerical work

was replaced by more general descriptions such as “clerical operation” or “clerical work”. Overall, by

expanding the conceptual focus to include more detailed and novel brain-like aspects of the computer,

the deconstruction phase reflects the shifting of relevance from the machine analogy to the brain

analogy.

A key question is why the brain analogy increased in relevance in the 1960s and not during the initial commercialization of the computer in the 1950s. From the technical perspective, solid-state

computers with disk drives increased data storage and enabled random access of data (Ceruzzi, 1998),

processing power increased to support more real-time processing, and terminals improved access

(Chandler, 1997). The rise of categories such as “data base” and “terminal” in Table 2 reflect the

commercial introduction of these kinds of technology. In addition, these new categories were related to

other categories through relations such as “store”, “update”, “verify” to indicate that insurance firms

associated specific technological improvements in disk space, processing, and computer access with

advances toward using the computer to make more effective decisions. These observations are

consistent with Kaplan and Tripsas’ (2008) prediction about the interplay between cognitive structures

and cognitive development.

Figures 2 and 3 also indicate that the deconstruction process was not temporally abrupt, but developed throughout the 1960s. However, they also show the beginning of a sharp increase in brain-like terms during the mid-1960s. This coincides with management consultants beginning to release reports about the ineffectiveness of computer use. One of the most cited is McKinsey’s report, which attributed poor returns on early computing investments to computers being used only as “super accounting machines”. This criticism helped associate negative connotations with machine-like uses and further encouraged the search for more brain-like functions.


Related to these technical changes and evaluation of current uses, insurance companies further

learned about unforeseen issues through using the computer. Before actually using the computer, programming was not viewed as something significantly different from managing operations with

tabulating machines. In fact, the influential Society of Actuaries’ report defined programming through a

comparison with the physical process of wiring tabulating machines together (Davis et al., 1952).

However, when it came to programming the computer, insurance firms learned that it was significantly

more important and complicated than anticipated. Many insurance firms underestimated programming

costs, which ran from 33%-100% of the hardware rental and in some cases caused companies to

question the benefit of using computers (Yates, 2005). Table 2 reflects this learning process through the

increasing occurrence of “program”.

Learning about programming introduced new categories of users that furthered deconstruction of

the machine schema. To illustrate, consider the description of programming by C. A. Marquardt of State Farm: “Electronic Data Processing equipment is a tool and not an electronic brain, or any other kind of brain. … It is nothing more than a super-speed “moron” acting on detailed instructions. The programmers who direct its operations are the real ‘brains’” (Marquardt, 1960: 255). Marquardt did not completely abandon the classification of the computer as a piece of equipment, but his description also highlighted how computers did not completely fit the pre-existing machine classification either. Computers follow “detailed instructions”; whereas machines are operated upon. Moreover, Marquardt

recognized a new kind of user, the programmer, who interfaces with the computer in a more technical

sense as opposed to clerks who operate the machines. Through their programming experiences,

insurance firms began to realize that computers did not operate the same way as tabulating machines and

interfaced with new kinds of people.


To summarize, while machine, clerk, and punch card and their transaction-oriented relations still

persisted, they became noticeably weaker, as demonstrated by their lower frequency counts, as insurers

expanded their conception of the new technology. Because the newer categories and relations did not

replace the assimilated schema from the previous period, the conceptual focus of the schema actually

expanded to include categories and relations from both the machine and brain analogies. That is, categories and relations related to the computer-as-machine analogy became less frequently referenced while those related to the computer-as-brain analogy became more frequently referenced.

Why is deconstruction useful in new schema development? One reason is that it helps un-assimilate the new schema from the existing schema. Our study shows that when insurance firms (and other

industry constituents such as consultants) began to use the computer, the validity of the machine analogy

weakened. This destabilizing of the previously strong conceptual boundaries between categories and

relations in the existing schema helped shift attention to previously ignored categories (e.g., information)

and relations (e.g., programming, decision making, and thinking) related to the brain analogy and that

focused on what was truly novel about the computer. In general, this finding highlights a more dynamic and process-oriented view of analogies than what exists in the literature. That is, prior organizational literature does not discuss how multiple analogies interplay over time. Rather, it largely

assumes a cross-sectional view where an analogy is shown to shape decision-making at one moment in

time. Yet, we find that multiple analogies not only exist, but also persist. Thus, the brain analogy did not

simply fade away when the machine analogy became more dominant. Rather, it remained embedded in

memory. Through use of the new technology, firms in the insurance industry began to more frequently

articulate how the functions of computer could conceptually map to the functions of a brain. This is

consistent with research noting that if something novel can be connected to prior knowledge in such a


way that its relevant features are considered at a more abstract level (e.g., the “thinking” or “decision making” of the computer), then what is novel (i.e., the computer) has the potential to be related to what at the surface level might appear to be an unfit analog (e.g., brain) (Chi, Feltovich, & Glaser, 1981).

Besides suggesting that deconstruction happens, our study also reveals insight into the logic of

how deconstruction happens. Although the logic of our first finding, assimilation, relates to the

similar/dissimilar nature of analogies, we find that the logic of our second finding, deconstruction,

relates to the general/specific nature of analogies. This was not a simple matter of replacement; instead, we find that deconstruction occurred over time as categories and relations related to the machine analogy

became more general while the categories and relations relating to the brain analogy became more

specific. For example, the previously more specific categories of “punch card”, “collator”, or “addressograph” began to be referred to as the more general “card” or “machine”. In contrast, the more general brain-related terms such as “giant brain” began to be referred to with more specific terms such as “memory”, “program”, or “instruction”. In our data, making categories and relations more general appeared to have a weakening effect on the power of the analogy since the removal of specific terms left only a few general terms, such as “machine”, that conveyed less information and so less fervor about how the new technology related to a machine. It also made contrast and differentiation cognitively easier. That is, contrasting with “punch card system” is cognitively easier than differentiating

from a detailed relational description of these systems. Making categories and relations more specific,

alternatively, appeared to have the effect of strengthening an analogy since the addition of more specific

terms added new discrete linkages to the analogy which in turn strengthened its overall relevance.
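This general/specific logic can be expressed as a simple measure. The sketch below (again our illustration, with hypothetical term lists and counts) tracks, for each analogy, the share of its mentions in a period that use specific rather than general terms; on our account this share falls for the machine analogy and rises for the brain analogy as deconstruction proceeds.

```python
# Illustrative sketch: how specific is an analogy's vocabulary in a period?
# Term lists and counts are hypothetical stand-ins, not the study's data.

SPECIFIC = {"machine": {"collator", "addressograph", "punch card"},
            "brain": {"memory", "program", "instruction"}}
GENERAL = {"machine": {"machine", "card"},
           "brain": {"giant brain"}}

def specificity(analogy, counts):
    """Fraction of an analogy's mentions that use its specific terms."""
    spec = sum(counts.get(t, 0) for t in SPECIFIC[analogy])
    gen = sum(counts.get(t, 0) for t in GENERAL[analogy])
    return spec / (spec + gen) if spec + gen else 0.0

# Hypothetical counts for a single late-1960s period:
period_counts = {"collator": 2, "machine": 14, "memory": 9, "giant brain": 1}
print(specificity("machine", period_counts))  # low share: analogy generalized
print(specificity("brain", period_counts))    # high share: analogy specified
```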

Taken together, these observations lead to our next proposition:

Proposition 3: Categories and relations are more likely to become deconstructed when they become more general and less specific.


THE UNITIZATION PROCESS

While deconstruction helped make the novel categories and relations associated with the brain

analogy more frequently invoked, it did not establish these categories and relations as a new schema.

Creation is not enough for emergence as what is created must also persist. We observe that categories

and relations became a new schema through unitization in the 1970s. Unitization surfaced from our

data. Like Hayes-Roth (1977), we define unitization as the process through which a group of categories

and relations get connected in such a way that they are recalled as a unit as opposed to individually –

much like the clerk, machine, punch card relationship of the initial schema. We apply unitization to

schema emergence to describe how new categories and relations break ties with the existing schema and

become a single unit in memory. As noted in the previous section, we find that the new emergent

schema ends up reflecting a structure that supports the original brain analogy. Thus, in the emergence of

a collective schema we see evidence for the revival and replacement of analogies: the influence of the

machine analogy gradually shrinks while the influence of the brain analogy gradually swells.

One way to measure unitization is by the concentration of certain groupings of categories and relations referenced in documents: the greater the concentration of a grouping, the higher the level of unitization. Table 2 shows an increasing concentration in categories and relations associated with processing information to make management decisions: categories such as “system”, “information”, “management”, and “program”, and relations such as “provide”, “determine”, and “update”. Beyond

concentration, another way is to consider the level of connectedness between categories. Highly

connected terms suggest that they function more as a unit and are thus invoked as such (vs.

individually). To get a sense of the connectedness, we took a representative category of the machine-

like schema, “machine”, and a representative category of the brain-like schema, “system” (as in

information and management systems) and compared the number of connections they had within the


schema over time. Table 3 shows that initially “machine” was well connected with the current schema,

but over time it became less connected. The decline in the occurrence of machine as a category (shown

by the article counts) explains some of this increasing disconnectedness. In contrast, “system” became

more connected within the existing schema over time. Consequently, the machine category not only

became less prevalent it became less connected as well. By contrast, the more brain-like system category

became more integrated as a unit. Beyond the level of connectedness is what these categories connect

to. Looking at the machine relationships reveals concentration among “punch card” and the operations

associated with processing transactions. In contrast, “systems” is affiliated with a broader array of

categories, many of which are associated with decision-making, such as “evaluation team”,

“calculation”, and even “decision making” itself.

********************** Insert Table 3 About Here **********************
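A minimal sketch of this connectedness count follows. It assumes the relations were extracted from the texts as subject-verb-object triples, which is an assumed data layout rather than the study's exact coding scheme; the triples shown are hypothetical.

```python
# Minimal sketch of the connectedness measure behind Table 3: count how many
# distinct terms a focal category is linked to by a verb in each period.
# The triples are hypothetical stand-ins for the coded texts.

triples = {
    "1947-1954": [("clerk", "operate", "machine"),
                  ("machine", "process", "punch card")],
    "1973-1975": [("system", "provide", "information"),
                  ("manager", "use", "system")],
}

def connectivity(term):
    """Distinct verb-connected partners of the focal term, per period."""
    counts = {}
    for period, relations in triples.items():
        partners = {subj if obj == term else obj
                    for (subj, verb, obj) in relations if term in (subj, obj)}
        counts[period] = len(partners)
    return counts

print(connectivity("machine"))  # {'1947-1954': 2, '1973-1975': 0}
print(connectivity("system"))   # {'1947-1954': 0, '1973-1975': 2}
```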

The increasing connectedness shows that a unit structure has emerged around the categories and relations associated with decision-making and servicing, which are distinct from the original assimilated

schema in the 1950s. This process had the effect of simplifying the broader conceptual space exhibited

during deconstruction (where both computer as machine and computer as brain were appropriate

conceptions) around fewer categories and relations linking to computer as brain.

A key question is why the schema unitized around the computer-as-brain analogy. After all,

the deconstructed schema was broad, with categories and relations that both stressed the relevance of

machine and brain analogies. One important factor was the development of database technology to

manage data throughout the organization. In 1973, Robert Shafto of New England Mutual Life

Insurance explained that a data base was “that set of data-fields related to the ‘complete’ performance of

all required transactions regardless of how those data-fields are organized. … but to truly complete the


performance it must also include those needed to provide ‘management information’” (Shafto, 1973). This view of data residing in its own “base”, as emphasized by the splitting of the word “database” into two (common in this time period), represents a shift away from thinking of data as tied to a punch card or file in order to perform a specific transaction. By freeing data from the transactions it supported, data could now be used for a variety of purposes, including processing “management

information”. Indeed, Table 2 shows the increasing use of the terms “data base” and “store”.

Related to the growth of database technology, the management information systems (MIS)

movement of the early 1970s also facilitated the unitization process. Howard Arner of American

Bankers Life defined MIS as “the orderly combination of facts for management” (Arner, 1972: 15).

More specifically, MIS was intended to leverage the computer within the organization to process

information to support managerial decisions. The historian Thomas Haigh (2001) notes that at a broad

level the MIS movement is closely tied to the systems men occupational group’s desire to expand their

jurisdictional control within the organization. Systems men were responsible for business process design

and forms control within organizations and were very prominent in the insurance industry given its

emphasis on clerical processes. In fact, systems men authored many of the computer articles published

in association proceedings. MIS received significant attention in the insurance industry, having separate

sessions at the IASA proceedings in the early 1970s. While MIS had limited success in terms of

actually being implemented, it had a profound impact on unitizing the emerging computer schema

within the insurance industry. The MIS perspective made more explicit how the developing categories of information and manager (the information user) connected through decision-making relations.

Why is the process of unitization related to the development of a new schema? First,

unitization of new schemas signals separation from existing schemas. In other words, unitization helps

separate new categories and relations from those in existing schemas and establishes them as a separate


and holistic cognitive unit. Studies in human perception find that ongoing experience with specific

configurations of cognitive components and relations can result in the formation of a single image-like

representation (Goldstone, 2000). Similar to this view, we found that from the late 1960s into the early

1970s, insurance companies began to evoke a consistent image of the computer that simultaneously

integrated the novel categories (e.g., programmer, manager, information) and relations (e.g., assist in

deciding, prepare, give service) that seemed to share more similarities with the original brain analogy than with the machine analogy. That is, the data show strong support for statements where brain-related categories were connected with other brain-related categories through brain-related relations. This suggests that the process of unitization follows a logic of connectedness. Statements from users became centered around a few “core” brain-related categories and relations that were used together and that became associated with a conceptually distinct unit of knowledge and thus a new schema. In contrast, machine-related categories became increasingly unconnected from other categories in users’ statements. As a consequence, the machine analogy was pushed to the periphery over time as it came to carry less meaning. Such a push to the periphery also facilitated schema emergence since there were now clearer conceptual differences between the pre-existing schema emphasizing relations to a machine and the new emergent schema emphasizing relations to a brain. The fact that such

connectedness/unconnectedness among categories and relations requires time to occur may be one

reason unitization did not occur in our study as the first process of schema development but instead as

the last process. Overall, these observations lead to our last proposition:

Proposition 4: The more a group of categories and relations get connected, the more they are treated as a stand-alone cognitive unit.

DISCUSSION


Schemas are a central concept within the strategy and organizations literatures. They help firm

members build industry boundaries, foster relational capital, and fortify competitive positioning. Yet,

while much is known about schema structure and schema use, very little is known about schema

emergence. The purpose of this paper is to help address this gap. Specifically, we use 30 years of in-

depth qualitative and quantitative data to explore how the insurance industry developed a collective

schema for the computer.

Schema emergence process

A primary contribution is an emergent theoretical framework for how shared schemas emerge

over time. First, new schemas begin through assimilation. We find that while firms face multiple

analogies to help them understand new technologies, they select the analogy that is most similar to prior

technological solutions. However, while prior literature suggests that similar analogies can be positive

since they improves legitimacy for what is new (Hargadon & Douglas, 2001), we find that it can be

negative since it triggers an existing schema and makes this existing schema (and its associated

categories and relations) the conceptual focus, not the new schema. Consequently, while assimilation of

new schema into an existing schema helped initiate action to purchase the computer, it negatively

impacted the development of the new schema because the more novel aspects of the computer (e.g.,

programming and decision making) became overlooked and underappreciated. Creation of a new

independent schema, therefore, seems to first reflect dependence on an old existing schema.

Second, new schemas further develop through deconstruction of existing schemas. Schemas are

meant to provide simplified representations of the world that enable more efficient processing of

complex information (Fiske & Taylor, 1984). To the extent that they do not accurately represent the

world or filter out too much information, they are not useful. That is, a schema must be valid to a

certain degree. However, most research that invokes schemas does not directly address their validity


(Walsh, 1995). But validity is a significant issue in schema development because of the novelty and uncertainty involved. A contribution of our study is revealing how initial use of the computer served as a means

to validate the initial schema. Data show that over time insurance firms began to better understand the

uses of the technology. This, in turn, lowered the descriptive power of the initial “machine” analogy as

evidenced by the decreasing frequency and increasing generalization of the categories and relations

supporting it, and strengthened the power of the previously minimized “brain” analogy as evidenced by

the increasing frequency and specificity of the categories and relations supporting it. The overall effect

of deconstruction is a broadening of the categories and relations in the conceptual focus of users. Yet,

while important, this process still does not result in a new schema since the categories and relations

within the conceptual focus still maintain ties to the pre-existing schema.

A final process of schema development is unitization. As technological advancements and use of

the computer increased over time, the “machine” analogy weakened as a conceptual understanding for

the computer; categories related to the “machine” analogy were less frequently used and so became less

connected with each other. By contrast, the categories related to the “brain” analogy became more

frequently used and more connected to each other. As a result, categories and relations associated with

the original brain analogy came to constitute a distinct and single unit in memory, one that had almost no ties to the categories and relations of other schemas and was thus theoretically equivalent to a novel schema (in this case a computer schema). Therefore, a key point is that new schemas may not emerge as a stand-alone cognitive unit until much later in their developmental process. Indeed, almost 20 years had passed since the commercial introduction of the computer before the categories and relations comprising the emergent computer schema were invoked in an “all or none” fashion.

In summary, our theoretical framework identifies how a collective level schema develops over

time. A new schema (1) begins by being assimilated into an existing schema, (2) continues to develop


through deconstruction of the existing schema and (3) emerges as a novel cognitive structure through

unitization. Overall, this theoretical framework highlights the importance of analogies and provides a

critical complement to prior research examining schema use, schema structure and schema change by

offering insights on the equally important question of schema emergence.

Implications for managerial cognition

Broadly, our study contributes to the managerial cognition literature by setting forth a

pluralistic view of analogies. Much research on analogical transfer argues that, facing a new situation,

individuals think back to some previous situation they know about and then they apply the lessons from

the previous situation to the new one (Gavetti et al., 2005; Gick & Holyoak, 1983). While this literature

acknowledges that individuals may have different analogies, there is little if any discussion about how

these different analogies might be reconciled since it assumes that individuals process their own

analogies sequentially and focus on one that fits the new situation best.

In contrast, we highlight a pluralistic view. A counter-intuitive insight is that the use of

multiple analogies from different sources may be both more efficient and more effective for schema development than the use of a single analogy. Why is this the case? One reason is that the value of one

analogy can be difficult to assess in isolation. Studies on collective decision making find that having

more than one alternative helps executives determine each alternative’s benefits and detriments (e.g.,

Schweiger et al., 1986; Eisenhardt, 1989). Thus, multiple analogies lead to the efficient debate of ideas

in a deeper, relative way. The use of multiple analogies also reduces the escalation of commitment to

any one. When organizations entertain multiple analogies they are less likely to become cognitively

entrenched with any one analogy and thus can shift to a new analogy (e.g., brain) if their ongoing experience does not support the current one (e.g., the machine analogy). Much psychology research on

analogies misses these advantages because it puts individuals into non-natural laboratory contexts,


where there is no interaction with others and explicit “hints” are given to identify unambiguous analogs.

Organizations research misses these advantages because it largely focuses attention on the attributes of a single analogy or how an innovation was intimately related to one analogy (Gavetti et al., 2005). In other

words, by not exploring how multiple analogies interplay over time, research provides a limited and

fairly static view of analogical transfer that glosses over the comparative benefits of multiple analogies.

By contrast, in firms, single analogies may not only be less accurate but also less efficient for

schema development. For example, psychology studies examining whether subjects can foster the

development of a schema from a single story analog (through means of summarization instructions,

verbal statements of the underlying principle, or a diagrammatic representation of it) find no evidence of

success (Gick & Holyoak, 1983). Likewise, organizational research suggests that because many analogies are based on superficial features of problems (Gavetti et al., 2005), the use of a single analogy may

make firms likely to overweight or overgeneralize similarities from one context to another. The more

fundamental implication is that while pluralism is so readily apparent in organizations, it is often

missing from academic theories on managerial cognition (for an exception see Kaplan, 2008). Thus,

many organizational theorists studying cognition import individual-level psychological theories without fully adjusting them to reveal how they differ at higher, collective levels of analysis

where issues such as power, politics, and different interpretations are common. A key contribution of

this paper, therefore, is complementing the many studies examining the individual origins of collective

constructs (e.g., capabilities and competencies) by examining the opposite, but largely overlooked, direction: how individual-level theories (e.g., analogical reasoning) might play out differently at a

collective level. Succinctly stated, our study suggests that models of analogical reasoning from single

analogies may be elegant and allow parsimony, but they are often untrue in the context of organizations

and come at the expense of empirical validity.


Implications for change

Our study also contributes to change theory. In particular, we highlight the management of

paradox, an increasingly important theme in work on change (Eisenhardt, 2000). Paradox is the

simultaneous existence of two inconsistent states. Two prominent inconsistent states in our study are

familiarity and unfamiliarity. Our study shows that rather than manage technical change by splitting

between similar (machine) and dissimilar (brain) analogies, firms in the insurance industry were aware

of and utilized both. Firms first appeared to manage technical change by anchoring new ideas to familiar

work. Most insurance companies learned about the computer through attending presentations and asking

questions about it at industry association meetings. Many discussed the new computer technology in the

context of familiar business solutions and practices. For instance, the 1952 Society of Actuaries’ report focused on the potential business processes the computer could economically perform. It showed that

insurance firms identified four potential applications—actuarial investigations, policy settlement work,

file keeping, and regular policy service—and ultimately recommended focusing on policy service. The

rest of the report was a detailed analysis of how to apply IBM’s Card-Programmed Calculator or CPC (a

small system consisting of electronic tabulating devices and referred to as a punched-card computer) to

perform these functions, providing even the data requirements, calculations, and functions.

In contrast, dialogue of the computer that invoked the brain analogy did not as effectively anchor

ideas in familiar work practices. While Berkeley’s 1947 presentations at the three insurance associations

identified a list of potential applications, they included no business process diagrams or data

requirements. Instead, as mentioned, Berkeley focused on showing that a computer could think the way

humans do by isolating the various abstract functions of thinking. As a result, his presentations lacked

the same level of familiar detail about the use of the computer that members of the associations were

accustomed to, presumably making his analysis harder to understand. His book also provided very little


discussion of the actual application of the computer to solve any contemporary business problems.

However, our study suggests that while anchoring new ideas within a familiar context appears

useful to start change, it may be less useful to sustain change. For sustaining change, accentuating the

unfamiliar appears key. In particular, the analogy of the computer as a machine helped users quickly

integrate the computer into the existing schema. But, this quick integration seemed to come at the cost

of losing sight of computer’s most novel aspects. Consequently, while anchoring new ideas within the

familiar provides utility, organizations must overcome the threat of having those ideas becoming too

assimilated. By contrast, not anchoring new ideas within a familiar context may be more useful to

sustain change. Thus, while the brain analogy did not provide any significant categories or relations to

anchor it in the existing schema (thereby making it unable to get traction earlier), it did help distinguish

the computer from the existing technology so that the computer’s more novel features could gain

attention later. This suggests a tension: although what is new must be made familiar enough to garner short-term attention, it must also be seen as distinct to avoid longer-term assimilation.

Overall, the management of technical change appears to hinge on exploiting the duality between familiar

and unfamiliar in a way that captures both states, thereby capitalizing on the inherent pluralism within

the duality.

Methodological Implications

While the main focus of this paper is on the emergence process and its implications for

cognitive development, our network-based approach contributes to a growing interest in the relational

aspects of meaning making. Our data show that understanding the nature of relations between the

categories within the schema matters significantly to their meaning. As we showed with terms such as “verify”, what a term connects with can change over time, significantly altering its meaning and role in the cognitive schema. Verification went from humans validating machine processes to a distinguishing, brain-like function of the computer itself, based on changes in these connections.

Analytically, engaging in these relational aspects is important to developing a more robust explanation

of the cognitive schemas.

Limitations and future research

Like all research, ours has limitations that suggest opportunities for future work. The cost of using a single case is that it may include some peculiarities that prevent generalization. We identified three distinctive processes involved in schema emergence and presented them in sequential order because that is how they unfolded in the insurance case; however, future research is needed to validate this order and to consider the conditions under which it may not hold. Research is also needed to better understand the nature of the assimilation, deconstruction, and unitization processes

themselves. For example, assimilation in our study occurred in a social context in which insurers shared

information through society meetings where committees interpreted the computer as a machine. This

suggests that the strength of the existing schema as well as the social environment can influence which

analogy is selected and so how assimilation takes place. Our research also suggests that assimilation,

deconstruction, and unitization each relate to the use of analogies. Although we believe that the plurality

of analogies is a widely relevant phenomenon influencing managerial cognition, it may be that in a

different context the number, nature and interplay of analogies have different effects. Finally, in this

study we have addressed general aspects of network structures such as the level of connectedness. Future work is needed to apply network-analytic tools to more robustly explain the processes associated with schema emergence. Overall, an important next step is submitting our emergent findings to rigorous

empirical validation.

CONCLUSION


Our central contribution is a theoretical framework for schema emergence. Like prior work in

psychology, our study suggests that schema emergence relates to the use of analogies. However, in

contrast to work in psychology, our study reveals a pluralistic view of analogies where the relevance of

varied analogies rises and falls over time. There are several findings. First, assimilation appears

important in schema emergence. Firms concurrently face varied competing analogies for the computer.

However, firms largely converge on the one most related to prior technological solutions. While this

convergence helps the new technology gain traction, it also comes at a cost since an emerging new

schema gets assimilated into an existing schema where the novelty of the new technology is largely

overlooked. Second, deconstruction appears important in schema emergence. As firms use new

technology, existing schema get deconstructed thereby weakening the relevance of the similar analogy

and strengthening the validity of what originally appeared to be a dissimilar analogy. Finally,

unitization appears important in schema emergence. We find that the categories and relations related to the dissimilar analogy get strengthened so that it becomes central while the categories and relations related to the similar analogy fade so that it becomes peripheral. Overall, by exploiting the unique features

of in-depth historical analysis, we find that schema emergence relates to a more nuanced role of

analogies that extends beyond their traditional view in the literature – one that underscores not only their

overlooked quantity and qualities but also their unanticipated but critical temporal interchange.


References

Adams, W. J. 1946. A Progress Report of the Association Costs Committee. Paper presented at the 1946 LOMA Annual Proceedings.
Anderson, J. R. 2000. Learning and memory: An integrated approach. New York: Wiley.
Arner, H. 1972. What is MIS. Proceedings of the Insurance Accounting and Statistical Association: 15-17.
Axelrod, R. 1976. The Cognitive Mapping Approach to Decision Making. In R. Axelrod (Ed.), Structure of Decision: 221-250. Princeton, NJ: Princeton University Press.
Barr, P. S., Stimpert, J. L., & Huff, A. S. 1992. Cognitive Change, Strategic Action, and Organizational Renewal. Strategic Management Journal, 13: 15-36.
Bartunek, J. M. 1984. Changing Interpretive Schemes and Organizational Restructuring: The Example of a Religious Order. Administrative Science Quarterly, 29(3): 355-372.
Bashe, C. J., Johnson, L. R., Palmer, J. H., & Pugh, E. W. 1986. IBM's Early Computers. Cambridge: MIT Press.
Beebe, F. 1947. Application of Punched Card Accounting Machines. Paper presented at the 1947 LOMA Annual Proceedings.
Benford, R. D., & Snow, D. A. 2000. Framing Processes and Social Movements: An Overview and Assessment. Annual Review of Sociology, 26: 611-639.
Berkeley, E. C. 1949. Giant Brains: or, Machines that Think. New York: John Wiley and Sons.
Bureau of Labor Statistics. 1955. The Introduction of an Electronic Computer in a Large Insurance Company. In U.S. Department of Labor (Ed.), Studies of Automatic Technology, no. 2. Washington, D.C.: U.S. Government Printing Office.
Carley, K. 1993. Coding Choices for Textual Analysis: A Comparison of Content Analysis and Map Analysis. Sociological Methodology, 23: 75-126.
Carley, K. 1997. Extracting Team Mental Models through Textual Analysis. Journal of Organizational Behavior, 18: 533-558.
Ceruzzi, P. 1998. A History of Modern Computing. Cambridge: MIT Press.
Chandler, A. 1997. The Computer Industry: The First Half-Century. In D. Yoffie (Ed.), Competing in the Age of Digital Convergence: 37-122. Boston: Harvard Business School Press.
Chi, M. T. H., Feltovich, P. J., & Glaser, R. 1981. Categorization and representation of physics problems by experts and novices. Cognitive Science, 5: 121-152.
Cooley, E. F. 1953. Electronic Machines - Their Use by Life Companies. Proceedings of the Life Office Management Association: 355-363.
Dane, E. 2010. Reconsidering the Trade-Off Between Expertise and Flexibility: A Cognitive Entrenchment Perspective. Academy of Management Review, 35(4): 579-603.
Davis, M. E., Barber, W. P., Finelli, J. J., & Klem, W. 1952. Report of Committee on New Recording Means and Computing Devices. Society of Actuaries.
DiMaggio, P. 1997. Culture and Cognition. Annual Review of Sociology, 23: 263-287.
Duncker, K. 1945. On problem solving. Psychological Monographs, 58(5).
Eden, D., & Shani, A. B. 1982. Pygmalion goes to boot camp: Expectancy, leadership, and trainee performance. Journal of Applied Psychology, 67(2): 194-199.
Eisenhardt, K. M. 1989. Building Theories from Case Study Research. Academy of Management Review, 14(4): 532-550.
Eisenhardt, K. M. 2000. Paradox, spirals, ambivalence: The new language of change and pluralism. Academy of Management Review, 25(4): 703-705.
Elsbach, K. D., Barr, P. S., & Hargadon, A. B. 2005. Identifying Situated Cognition in Organizations. Organization Science, 16(4): 422-433.
Etzion, D., & Ferraro, F. 2010. The Role of Analogy in the Institutionalization of Sustainability Reporting. Organization Science, 21(5): 1092-1107.
Farjoun, M. 2008. Strategy making, novelty and analogical reasoning - commentary on Gavetti, Levinthal, and Rivkin (2005). Strategic Management Journal, 29(9): 1001-1016.
Fiske, S. T., & Dyer, L. M. 1985. Structure and Development of Social Schemata: Evidence From Positive and Negative Transfer Effects. Journal of Personality and Social Psychology, 48(4): 839-852.
Fiske, S. T., & Taylor, S. E. 1984. Social Cognition. Reading, MA: Addison-Wesley.
Gavetti, G., Levinthal, D. A., & Rivkin, J. W. 2005. Strategy making in novel and complex worlds: the power of analogy. Strategic Management Journal, 26(8): 691-712.
Gick, M. L., & Holyoak, K. J. 1983. Schema induction and analogical transfer. Cognitive Psychology, 15(1): 1-38.
Goffman, E. 1974. Frame analysis. Cambridge, MA: Harvard University Press.
Goldstone, R. L. 2000. Unitization during category learning. Journal of Experimental Psychology, 26(1): 86-112.
Grant, R. M. 1996. Toward a Knowledge-Based Theory of the Firm. Strategic Management Journal, 17(Winter Special Issue): 109-122.
Haigh, T. 2001. Inventing Information Systems: The Systems Men and the Computer, 1950-1968. Business History Review, 75(1): 15-61.
Hannan, M., Polos, L., & Carroll, G. 2007. Logics of Organization Theory: Audiences, Codes, and Ecologies. Princeton, NJ: Princeton University Press.
Hargadon, A., & Douglas, Y. 2001. When Innovations Meet Institutions: Edison and the Design of the Electric Light. Administrative Science Quarterly, 46: 476-501.
Hargadon, A., & Fanelli, A. 2002. Action and possibility: Reconciling dual perspectives of knowledge in organizations. Organization Science, 13(3): 290-302.
Hargadon, A., & Sutton, R. I. 1997. Technology brokering and innovation in a product development firm. Administrative Science Quarterly, 42: 716-749.
Hayes-Roth, B. 1977. Evolution of cognitive structures and processes. Psychological Review, 84(3): 260-278.
Henderson, R., & Cockburn, I. 1996. Scale, Scope, and Spillovers: The Determinants of Research Productivity in Drug Discovery. RAND Journal of Economics, 27(1): 32-59.
Holyoak, K. J., & Thagard, P. 1989. Analogical mapping by constraint satisfaction. Cognitive Science, 13(3): 295-355.
Johnson, D. R., & Hoopes, D. G. 2003. Managerial Cognition, Sunk Costs, and the Evolution of Industry Structure. Strategic Management Journal, 24(10): 1057-1068.
Kaplan, S. 2008a. Cognition, Capabilities, and Incentives: Assessing Firm Response to the Fiber-Optic Revolution. Academy of Management Journal, 51(4): 672-695.
Kaplan, S. 2008b. Framing Contests: Strategy Making Under Uncertainty. Organization Science, 19(5): 729-752.
Kaplan, S., & Tripsas, M. 2008. Thinking about technology: Applying a cognitive lens to technical change. Research Policy, 37(5): 790-805.
Koch, C. 1999. Biophysics of Computation: Information Processing in Single Neurons. New York: Oxford University Press.
Lee, T. W. 1999. Using qualitative methods in organizational research. Thousand Oaks, CA: Sage Publications.
Marquardt, C. A. 1960. Maintaining Programs, Making Changes, Etc. - Periodic Audits to Determine Accuracy of Performance and Adherence to Prescribed Routines Processed by the Computer. Proceedings of the Insurance Accounting and Statistical Association: 255-256.
Misangyi, V. F., Weaver, G. R., & Elms, H. 2008. Ending Corruption: The Interplay Among Institutional Logics, Resources, and Institutional Entrepreneurs. Academy of Management Review, 33(3): 750-770.
Nadkarni, S., & Narayanan, V. K. 2007. Strategic schemas, strategic flexibility, and firm performance: the moderating role of industry clockspeed. Strategic Management Journal, 28(3): 243-270.
Neustadt, R. E., & May, E. R. 1986. Thinking in Time: The Uses of History for Decision-Makers. New York, NY: Free Press.
Novick, L. R., & Holyoak, K. J. 1991. Mathematical problem solving by analogy. Journal of Experimental Psychology: Learning, Memory and Cognition, 17(3): 398-415.
Rosa, J. A., Porac, J. F., Runser-Spanjol, J., & Saxon, M. S. 1999. Sociocognitive Dynamics in a Product Market. Journal of Marketing, 63: 64-77.
Ruef, M., & Patterson, K. 2009. Credit and classification: The impact of industry boundaries in nineteenth-century America. Administrative Science Quarterly, 54(3): 486-520.
Schminke, M., Ambrose, M., & Noel, T. 1997. The effects of ethical frameworks on perceptions of organizational justice. Academy of Management Journal, 40: 1190-1207.
Schon, D. 1993. Generative metaphor: A perspective on problem-setting in social policy. In A. Ortony (Ed.), Metaphor and Thought: 137-163. Cambridge: Cambridge University Press.
Schustack, M. W., & Anderson, J. R. 1979. Effects of analogy to prior knowledge on memory for new information. Journal of Verbal Learning and Verbal Behavior, 18(5): 565-583.
Schweiger, D. M., Sandberg, W. R., & Rechner, P. L. 1989. Experiential Effects of Dialectical Inquiry, Devil's Advocacy, and Consensus Approaches to Strategic Decision Making. Academy of Management Journal, 32(4): 745-772.
Shafto, R. A. 1973. Building the Data Base. Proceedings of the Insurance Accounting and Statistical Association: 250-253.
Shepherd, G. M. 1991. Foundations of the Neuron Doctrine. New York: Oxford University Press.
Siggelkow, N. 2001. Change in the Presence of Fit: The Rise, the Fall, and the Renascence of Liz Claiborne. Academy of Management Journal, 44(4): 838-857.
Tripsas, M., & Gavetti, G. 2000. Capabilities, Cognition and Inertia: Evidence from Digital Imaging. Strategic Management Journal, 21(10/11): 1147-1162.
Tsoukas, H. 2009. A Dialogical Approach to the Creation of New Knowledge in Organizations. Organization Science, 20(6): 941-957.
Tversky, A. 1977. Features of similarity. Psychological Review, 84: 327-352.
Tyrrell, J. 1972. Policyholders Service and Automated Aids. Proceedings of the Insurance Accounting and Statistical Association: 1-3.
Walsh, J. P. 1995. Managerial and Organizational Cognition: Notes from a Trip Down Memory Lane. Organization Science, 6(3): 280-321.
Weick, K. E. 1979. Cognitive Processes in Organizations. In B. Staw (Ed.), Research in Organizational Behavior, Vol. 1: 41-74. Greenwich, CT: JAI Press.
Winston, P. H. 1980. Learning and reasoning by analogy. Communications of the ACM, 23(12): 689-703.
Yates, J. 1989. Control Through Communication: The Rise of System in American Management. Baltimore, MD: Johns Hopkins University Press.
Yates, J. 1997. Early Interactions Between the Life Insurance and Computing Industries: The Prudential's Edmund C. Berkeley. Annals of the History of Computing, 19(3): 60-73.
Yates, J. 2005. Structuring the Information Age: Life Insurance and Technology in the 20th Century. Baltimore: Johns Hopkins University Press.


Table 1: Categories and Relations of the existing schema, 1945 – 1954


Table 2: History of Brain-Like Categories and Relations

!"#$%&'( )*+,-.-)*/+ )*//-.-)*/, )*/0-.)*12 )*1)-.-)*13 )*1+-.-)*11 )*1,-.-)*1* )*,2-.-)*,4 )*,3-.-)*,/ 5&#"65&#"6-7'#896$: /3 31 12 /+ 30 /2 +* +/ 30/!"#$%&'( )* )) +, -+ +- -) +, +* ).-/0/&'# . )* +* +* )1 ). +1 +- )23456"(#7&4"5 +) ++ -) -8 )+ +3 )1 )* )1+%/'( ) - )8 ),$("9(7# * +- -- +8 ). ), )- )- )8,$("9(7##'( * 1 )- , - )* 2 )- 13:7&7 )+ )2 )- +) . )+ )* )) )3.

#7579'#'5& 8 * )- - 2 )8 )) )) 21&'(#457; - . )) +-/"6&<7(' - 8 )) ),&(75/7!&4"5 ) 2 )* )2 )+ , . , 1*:7&7=$("!'//459 ) + 1 8 2 2 , -8:7&7=>7/' - 1 , ),?@A 0 ) )) 2 )+ . * 1 *)!"#$%&'(=/0/&'# + * * , . 2 1 8+!"5&("; - ) + )* - - - 1 -1B';7&4"5 )*+,-.-)*/+ )*//-.-)*/, )*/0-.)*12 )*1)-.-)*13 )*1+-.-)*11 )*1,-.-)*1* )*,2-.-)*,4 )*,3-.-)*,/ 5&#"6

$("C4:' ) ) + . , )- -8:'&'(#45' * ) * 8 ) ) 2 )) -8%$:7&' - )- )3 1 1 . )3 *.$('$7(' 8 )8 )- )8 , , )+ . )3,$("!'// - . )3 1 1 * . . 28!D'!E )3 +- ), )- 8 1 . , )++$(":%!' 2 , )8 )* . 1 )2 , ,,9'5'(7&' + 8 - )) , +,('C4'< ) + 8 - 1 )1!"#$;'&' ) ) + ) 1 )+!7;!%;7&' 1 )1 ), )3 1 - )3 2 .3D75:;' )8 )3 2 1 8 - 2 2 2+/&"(' . * . , 2 , - 2 *8C'(460 - * 2 8 8 2 + 2 *3'5&'( - 8 + 2 8 + , 2 82(%5F , , 8 + . 2 2 8-:'C';"$ + + - 2 , 1 2 -*!"5&("; - - 8 8 8 , 2 -+#7E' ) ) ) - * ) - 2 +)"$'(7&' 8 + + + ) 2 ),4:'5&460 ) + ) 2 )3


Table 3: The Connectivity of Terms over Time

This table counts the total number of relations that “machine” (machine-like connectivity) and “system” (brain-like connectivity) have in each time period. A relation is measured as a connection to another term through a verb; we counted connections for every term the verb connected.

!"#$%&%!"'# !"''%&%!"'$ !"'(%&!")* !")!%&%!")+ !")#%&%!")) !")$%&%!")" !"$*%&%!"$, !"$+%&%!"$#-./0123%45223/617168 !"# $% &' ( " ' $ !-./0123%9:61/;3< !" !& !( # ' ) ' '

=8<63>%45223/617168 & ! " &" !( && %' )'=8<63>%9:61/;3< # &) !) !) &* &# !* !$


Figure 1: Level of overlap between existing schema and analogies, 1945-1954

                       

[Bar chart comparing the computer-as-machine and computer-as-brain analogies on the percentage of shared categories and shared relations with the existing schema (y-axis: 0% to 60%).]


Figure 2: Frequency of Categories over time, 1947 - 1975  

To calculate the frequencies for the machine-like schema, we took all the categories within the schema during the initial period (1947–1954) and summed their frequencies in each time period. For the brain-like schema, we took all the categories within the schema during the ending period (1973–1975; see Table 2 for detail on the more frequent categories) and summed their frequencies in each time period.

[Figure 2 shows a line chart of total category frequency (vertical axis from 0 to 1,000) for the Machine-like and Brain-like schemas across the eight periods from 1947 – 1954 through 1973 – 1975.]
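A minimal sketch of this aggregation appears below; it applies in the same way to the relation frequencies plotted in Figure 3. The period-by-category counts and the schema membership sets are invented for illustration and would in practice come from the coded data behind Table 2.

```python
# freq[period][category] = number of articles using the category in that period.
# These counts are illustrative placeholders, not the paper's data.
freq = {
    "1947-1954": {"machine": 25, "card": 12, "computer": 19, "system": 8},
    "1973-1975": {"machine": 3, "card": 2, "computer": 49, "system": 23},
}

machine_like = {"machine", "card"}    # categories in the initial-period schema
brain_like = {"computer", "system"}   # categories in the ending-period schema

def schema_frequency(freq, schema):
    """Sum a schema's category frequencies within each time period."""
    return {period: sum(counts.get(c, 0) for c in schema)
            for period, counts in freq.items()}

print(schema_frequency(freq, machine_like))  # Machine-like totals per period
print(schema_frequency(freq, brain_like))    # Brain-like totals per period
```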


Figure 3: Frequency of Relations over time, 1947 - 1975

To calculate the frequencies for the Machine-like schema, we identified all the relations belonging to that schema in the initial period (1947 – 1954) and then summed those relations' frequencies within each time period. For the Brain-like schema, we identified all the relations belonging to that schema in the ending period (1973 – 1975; see Table 2 for details of the more frequent relations) and likewise summed their frequencies within each time period. (The calculation sketch following Figure 2 applies here in the same way.)

[Figure 3 shows a line chart of total relation frequency (vertical axis from 0 to 500) for the Machine-like and Brain-like schemas across the eight periods from 1947 – 1954 through 1973 – 1975.]


APPENDIX 1 Most Frequent Categories over time

Each cell reports the total number of articles that used the category during that time period. The total number of articles in each time period appears at the top of the table. To be included in the table, a category needed to appear in approximately 10% of the articles.

!"#$%&' ()*+,-,().* ()..,-,().+ ()./,-()01 ()0(,-,()02 ()0*,-,()00 ()0+,-,()0) ()+1,-,()+3 ()+2,-,()+* 4"'564"'56,78'9$6%: .2 20 01 .* 2/ .1 *) *. 2/.!"#$%&'( )* )) +, -+ +- -) +, +* ).-/01"(#2&/"0 +) ++ -) -3 )+ +4 )5 )* )5+676&'# . )* +* +* )5 ). +5 +- )84$("9(2# * +- -- +3 ). ), )- )- )3,$%0!:;!2(< )3 -) -5 +3 * 5 * * )+,<2&2 )+ )8 )- +) . )+ )* )) )4.!2(< )+ +, +3 )5 5 , * + )4-#2!:/0' +4 +) +, . 3 * 3 3 .*$("9(2##/09 )4 )8 +- )3 5 , 5 * .41/=' )) +4 +4 )5 8 3 ) * ,3&(2062!&/"0 ) 8 )* )8 )+ , . , 5*$("9(2##'( * 5 )- , - )* 8 )- 54!='(> )+ )- ++ )) ) + * 3 54$"=/!7 , )5 )8 )4 5 8 3 + 54&2$' )+ . )8 )3 8 8 3 ) 8,#2029'#'0& 3 * )- - 8 )3 )) )) 85('!"(< )) , )* )) 8 5 * 3 85:"#';"11/!' 5 )4 )8 3 5 . )4 - 88?@A;)34) 0 0 + +4 )) ), 5 8 83?@A;8*4 3 +3 +- 3 0 - ) ) 84'B%/$#'0& )- )+ )- 8 ) - + * **#290'&/!;&2$' )4 )* . , - - 3 + *3CDE 0 ) )) 8 )+ . * 5 *)"$'(2&/"0 )4 )) )) )) ) - 0 3 *)2$$=/!2&/"0 + )) )- , + ) 3 8 35!"##/66/"0 - )* )* 5 + ) ) - 35F"G 8 )3 )- 8 - 3 ) 35("%&/0' , . 5 , - 5 + ) 3*!"0H'(6/"0 ) )- ). * - - 33!"#$%&'(;676&'# + * * , . 8 5 3+('$"(& - 8 , * * . + 3 3+#26&'(;1/=' + 8 8 )+ * 3 + 3 3)29'0& - - + - 5 , )) + -.$"=/!7:"=<'( , 5 )) 3 + * + -.!"0&("= - ) + )* - - - 5 -5


APPENDIX 2 Most Frequent Relations over time

Each cell reports the total number of articles that used the relation during that time period. The total number of articles in each time period appears at the top of the table. To be included in the table, a relation needed to appear in approximately 10% of the articles. (A sketch of this inclusion filter follows the table.)

!"#$ !"#$%&%!"'# !"''%&%!"'$ !"'(%&!")* !")!%&%!")+ !")#%&%!")) !")$%&%!")" !"$*%&%!"$, !"$+%&%!"$# -./01-./01%23/45167 #" +) '+ #" +) #) #' #, +')%&"%' () *+ (, (+ - . / , (**012%& * *) *+ (3 . (( 3 ( ((/0#"04#" - (- (+ (- , , (* / (),%45%1546" . (. (, () . + () 3 /)0#781%" 3 , (- (9 / . (3 , ,,:7#6 / (( , (( 9 9 . * ,9;#<6" , / (- . 3 (+ - - ./0#7%":: + / () . . 9 / / 3-&4285" (- () 3 . - + 3 3 3*0#<26 , + / () , . 9 * 3(10846" + (+ () . . / () 9/%72="#6 + (( (- (+ 9 - 9 + 9/><5" 9 , 9 * ( * * * 9.:67#" / 9 / , 3 , + 3 9-?"#@" - (+ / . * + ( * 9-="#<>A + 9 3 - - 3 * 3 9)0155 . / 9 * + 9 * -/5<:6 ( , - / ( + ( * -."26"# + - * 3 - * , 3 -3#12B , , - * / 3 3 -+#"0#781%" + , - + -+%7?04#" 3 9 + - ( 9 + -*0#7@#4? - , (( 9 * - * 9 -(?46%& + - . - + - * * -)#"48 (+ . * * - - ( + +/488 . * 9 + * * * + +,#12B6&#71@& - 3 , ( * * ( +.8"="570 * * + 3 , . 3 +9