Strategic Notes

Strategy

A Strategy is a long term plan of action designed to achieve a particular goal, most often

"winning." Strategy is differentiated from tactics or immediate actions with resources at

hand by its nature of being extensively premeditated, and often practically rehearsed.

Strategies are used to make the problem easier to understand and solve.

The word derives from the Greek word stratēgos, which derives from two words: stratos

(army) and ago (ancient Greek for leading). Stratēgos referred to a 'military commander'

during the age of Athenian Democracy.

Contents

1 Interpretation
2 Noted texts on strategy
3 See also
4 References
5 External links

Interpretation

Strategy is about choice, which affects outcomes. Organizations can often survive -- indeed

do well -- for periods of time in conditions of relative stability, low environmental turbulence

and little competition for resources. Virtually none of these conditions prevail in the modern

world for great lengths of time for any organization or sector, public or private. Hence, the

rationale for strategic management. The nature of the strategy adopted and implemented

emerges from a combination of the structure of the organization (loosely coupled or tightly

coupled), the type of resources available and the nature of the coupling it has with

the environment and the strategic objective being pursued.[1] Strategy is adaptable by nature rather than a rigid set of instructions. In some situations it takes the form of an emergent strategy. The simplest explanation of this is the analogy of a sports scenario. If a football

team were to organize a plan in which the ball is passed in a particular sequence between

specifically positioned players, their success depends on each of those players being present at the exact location, remembering exactly when, from whom and to whom the ball is to be passed, and on no interruption to the sequence occurring. By


comparison, if the team were to simplify this plan to a strategy where the ball is passed in

the pattern alone, between any of the team, and at any area on the field, then their

vulnerability to variables is greatly reduced, and the opportunity to operate in that manner

occurs far more often. This manner of playing is a strategy.

In the field of business administration it is possible to speak of "strategic consistency."

According to Arieu (2007), "there is strategic consistency when the actions of an

organization are consistent with the expectations of management, and these in turn are with

the market and the context."

Classic texts such as Sun Tzu's The Art of War, written in China 2,500 years ago, the political

strategy of Niccolo Machiavelli's The Prince, written in 1513, or Carl von Clausewitz's On

War, published in 1832, are still well known and highly influential. In the twentieth century,

the subject of strategic management has been particularly applied to organisations, most

typically to business firms and corporations.

The nature of historic texts differs greatly from area to area, and given the nature of

strategy itself, there are some potential parallels between various forms of strategy (noting,

for example, the popularity of The Art of War as a business book). Each domain

generally has its own foundational texts, as well as more recent contributions to new

applications of strategy. Some of these are:

Political strategy

o The Prince, published in 1532 by Niccolò Machiavelli
o Arthashastra, written in the 4th century BC by Chanakya
o The Book of the Courtier, by Baldassare Castiglione

Military strategy:

o The Art of War, written in the 6th century BC by Sun Tzu
o Strategikon, written in the 6th century AD by the Byzantine emperor Maurice
o Taktikon, by the Byzantine emperor Leo VI the Wise
o On War, by Carl von Clausewitz (19th century)
o Strategy, by Basil Liddell Hart
o On Guerrilla Warfare, by Mao Zedong
o The Influence of Sea Power upon History, by Alfred Thayer Mahan
o The Air Campaign, by Colonel John A. Warden, III
o Makers of Modern Strategy, edited by Peter Paret
o Strategy, by Edward N. Luttwak

Economic strategy

o The General Theory of Employment, Interest and Money, published in 1936 by John Maynard Keynes

Business strategy

o Competitive Strategy, by Michael Porter
o Strategy Concept I: Five Ps for Strategy and Strategy Concept II: Another Look at Why Organizations Need Strategies, by Henry Mintzberg
o Winning In FastTime, by John A. Warden, III and Leland A. Russell, 2002

General strategy

o Strategy Safari, by Henry Mintzberg, Bruce Ahlstrand and Joseph Lampel
o Political Strategy and Tactics, by Laure Paquette

Strategic theory

o Science, Strategy and War: The Strategic Theory of John Boyd, by Frans Osinga
o Strategy generative, by Jean-Paul Charnay
o Strategy and Ethnic Conflict, by Laure Paquette

Others

o Marcel Détienne and Jean-Pierre Vernant, Les Ruses de l'intelligence, Paris: Flammarion, 1993 (on the role of the Greek Metis)

Strategic management


Contents

1 Processes
o 1.1 Strategy formulation
o 1.2 Strategy implementation
o 1.3 Strategy evaluation
1.3.1 Suitability
1.3.2 Feasibility
1.3.3 Acceptability
2 General approaches
3 The strategy hierarchy
4 Historical development of strategic management
o 4.1 Birth of strategic management
o 4.2 Growth and portfolio theory
o 4.3 The marketing revolution
o 4.4 The Japanese challenge
o 4.5 Gaining competitive advantage
o 4.6 The military theorists
o 4.7 Strategic change
o 4.8 Information and technology driven strategy
5 The psychology of strategic management
6 Reasons why strategic plans fail
7 Limitations of strategic management
o 7.1 The Linearity Trap

Strategic management is the art, science and craft of formulating, implementing and

evaluating cross-functional decisions that will enable an organization to achieve its long-

term objectives.[1] It is the process of specifying the organization's mission, vision and

objectives, developing policies and plans, often in terms of projects and programs, which are

designed to achieve these objectives, and then allocating resources to implement the

policies and plans, projects and programs. Strategic management seeks to coordinate and

integrate the activities of the various functional areas of a business in order to achieve long-

term organizational objectives. A balanced scorecard is often used to evaluate the overall

performance of the business and its progress towards objectives.
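
The text mentions the balanced scorecard only in passing, so a minimal sketch may help. The four perspectives below follow Kaplan and Norton's common formulation, which the passage above does not spell out; the measures and targets are hypothetical, and every measure is framed so that higher is better.

    # A minimal balanced-scorecard sketch. Perspectives follow Kaplan and
    # Norton's common formulation; measures and targets are hypothetical.
    scorecard = {
        "financial":         {"measure": "operating margin",        "target": 0.15},
        "customer":          {"measure": "annual retention rate",   "target": 0.90},
        "internal process":  {"measure": "on-time shipment rate",   "target": 0.95},
        "learning & growth": {"measure": "training hours/employee", "target": 40},
    }

    def off_target(actuals):
        """Return the perspectives whose actual value misses the target."""
        return [p for p, row in scorecard.items() if actuals[p] < row["target"]]

    print(off_target({"financial": 0.12, "customer": 0.93,
                      "internal process": 0.97, "learning & growth": 35}))
    # ['financial', 'learning & growth']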

Strategic management is the highest level of managerial activity. Strategies are typically

planned, crafted or guided by the Chief Executive Officer, approved or authorized by the

Board of directors, and then implemented under the supervision of the organization's top

management team or senior executives. Strategic management provides overall direction to

the enterprise and is closely related to the field of Organization Studies. In the field of

business administration it is useful to talk about "strategic alignment" between the

organization and its environment or "strategic consistency". According to Arieu (2007),

"there is strategic consistency when the actions of an organization are consistent with the

expectations of management, and these in turn are with the market and the context."

“Strategic management is an ongoing process that evaluates and controls the business and

the industries in which the company is involved; assesses its competitors and sets goals and

strategies to meet all existing and potential competitors; and then reassesses each strategy

annually or quarterly [i.e. regularly] to determine how it has been implemented and whether

it has succeeded or needs replacement by a new strategy to meet changed circumstances,

new technology, new competitors, a new economic environment, or a new social, financial,

or political environment.” (Lamb, 1984:ix)[2]

Processes

Strategic management is a combination of three main processes, which are as follows:

Strategy formulation


Performing a situation analysis, self-evaluation and competitor analysis: both internal

and external; both micro-environmental and macro-environmental.

Concurrent with this assessment, objectives are set. These objectives should be set along a timeline: some are short-term and others long-term. This

involves crafting vision statements (long term view of a possible future), mission

statements (the role that the organization gives itself in society), overall corporate

objectives (both financial and strategic), strategic business unit objectives (both

financial and strategic), and tactical objectives.

These objectives should, in the light of the situation analysis, suggest a strategic

plan. The plan provides the details of how to achieve these objectives.

This three-step strategy formulation process is sometimes referred to as determining where

you are now, determining where you want to go, and then determining how to get there.

These three questions are the essence of strategic planning. Industrial organization (I/O) economics typically informs the analysis of the external factors, and the resource-based view (RBV) the analysis of the internal factors.

Strategy implementation

Allocation and management of sufficient resources (financial, personnel, time,

technology support)

Establishing a chain of command or some alternative structure (such as cross

functional teams)

Assigning responsibility for specific tasks or processes to specific individuals or groups

It also involves managing the process. This includes monitoring results, comparing to

benchmarks and best practices, evaluating the efficacy and efficiency of the process,

controlling for variances, and making adjustments to the process as necessary.

When implementing specific programs, this involves acquiring the requisite

resources, developing the process, training, process testing, documentation, and

integration with (and/or conversion from) legacy processes.

In order for a policy to work, there must be a level of consistency from every person in the organization, including from management. This consistency is needed at the tactical level of management as well as at the strategic level.


Strategy evaluation

Measuring the effectiveness of the organizational strategy. It is extremely important to conduct a SWOT analysis to identify the strengths and weaknesses (internal factors) and the opportunities and threats (external factors) facing the entity in question. The findings may require taking certain precautionary measures or even changing the entire strategy.
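
As a minimal sketch of the SWOT step just described, the skeleton below classifies findings by origin (internal or external) and effect (helpful or harmful); the factors listed are hypothetical examples.

    # A minimal SWOT skeleton: each factor is filed into a quadrant by
    # origin (internal/external) and effect (helpful/harmful).
    QUADRANT = {("internal", "helpful"): "strengths",
                ("internal", "harmful"): "weaknesses",
                ("external", "helpful"): "opportunities",
                ("external", "harmful"): "threats"}

    swot = {"strengths": [], "weaknesses": [], "opportunities": [], "threats": []}
    for factor, origin, effect in [
            ("experienced sales force", "internal", "helpful"),
            ("ageing product line",     "internal", "harmful"),
            ("growing export market",   "external", "helpful"),
            ("new low-cost entrant",    "external", "harmful")]:
        swot[QUADRANT[(origin, effect)]].append(factor)

    print(swot["threats"])  # ['new low-cost entrant']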

In corporate strategy, Johnson and Scholes present a model in which strategic options are

evaluated against three key success criteria:

Suitability (would it work?)

Feasibility (can it be made to work?)

Acceptability (will they work it?)

Suitability

Suitability deals with the overall rationale of the strategy. The key point to consider is

whether the strategy would address the key strategic issues underlined by the

organisation's strategic position.

Does it make economic sense?

Would the organisation obtain economies of scale, economies of scope or experience-curve economies?

Would it be suitable in terms of environment and capabilities?

Tools that can be used to evaluate suitability include:

Ranking strategic options (see the sketch after this list)

Decision trees

What-if analysis
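
A minimal sketch of the first tool, ranking strategic options, as a weighted-scoring exercise. The criteria echo the suitability questions above; the weights and 1-5 scores are invented for illustration and would in practice come out of the situation analysis.

    # Weighted scoring for "ranking strategic options". Criteria, weights,
    # and scores are hypothetical.
    weights = {"addresses key issues":        0.4,
               "economic sense":              0.3,
               "environment/capability fit":  0.3}

    options = {
        "enter export market": {"addresses key issues": 4,
                                "economic sense": 3,
                                "environment/capability fit": 4},
        "launch budget line":  {"addresses key issues": 3,
                                "economic sense": 4,
                                "environment/capability fit": 2},
    }

    def score(name):
        return sum(weights[c] * s for c, s in options[name].items())

    for name in sorted(options, key=score, reverse=True):
        print(f"{name}: {score(name):.2f}")
    # enter export market: 3.70
    # launch budget line: 3.00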

Feasibility

Feasibility is concerned with whether the resources required to implement the strategy are available or can be developed or obtained. Resources include funding, people, time and information.

Tools that can be used to evaluate feasibility include:


cash flow analysis and forecasting

break-even analysis (see the sketch after this list)

resource deployment analysis
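
Of the tools just listed, break-even analysis is the most mechanical, so a small worked sketch follows; all figures are hypothetical.

    # Break-even analysis: the sales volume at which total revenue first
    # covers total costs. All figures are hypothetical.
    fixed_costs = 120_000.0          # per period
    price_per_unit = 25.0
    variable_cost_per_unit = 10.0

    contribution_per_unit = price_per_unit - variable_cost_per_unit
    break_even_units = fixed_costs / contribution_per_unit
    print(break_even_units)          # 8000.0 units per period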

Acceptability

Acceptability is concerned with whether the expected performance outcomes of a strategy (its return, risk and stakeholder reactions) meet the expectations of the identified stakeholders (mainly shareholders, employees and customers).

Return deals with the benefits expected by the stakeholders (financial and non-

financial). For example, shareholders would expect an increase in their wealth,

employees would expect improvement in their careers and customers would expect

better value for money.

Risk deals with the probability and consequences of failure of a strategy (financial

and non-financial).

Stakeholder reactions deal with anticipating the likely reactions of stakeholders.

Shareholders could oppose the issuing of new shares, employees and unions could

oppose outsourcing for fear of losing their jobs, customers could have concerns over

a merger with regard to quality and support.

Tools that can be used to evaluate acceptability include:

what-if analysis

stakeholder mapping

General approaches

In general terms, there are two main approaches to strategic management, which are opposite but complement each other in some ways:

The Industrial Organizational Approach

o based on economic theory — deals with issues like competitive rivalry,

resource allocation, economies of scale

o assumptions — rationality, self-disciplined behaviour, profit maximization

The Sociological Approach


o deals primarily with human interactions

o assumptions — bounded rationality, satisficing behaviour, profit sub-

optimality. An example of a company that currently operates this way is

Google

Strategic management techniques can be viewed as bottom-up, top-down, or collaborative

processes. In the bottom-up approach, employees submit proposals to their managers who,

in turn, funnel the best ideas further up the organization. This is often accomplished by a

capital budgeting process. Proposals are assessed using financial criteria such as return on

investment or cost-benefit analysis. Cost underestimation and benefit overestimation are

major sources of error. The proposals that are approved form the substance of a new

strategy, all of which is done without a grand strategic design or a strategic architect. The

top-down approach is the most common by far. In it, the CEO, possibly with the assistance of

a strategic planning team, decides on the overall direction the company should take. Some

organizations are starting to experiment with collaborative strategic planning techniques

that recognize the emergent nature of strategic decisions.
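
The capital budgeting screen described above reduces to a few lines of arithmetic. In the sketch below the proposals and the 15% hurdle rate are hypothetical and, as the text warns, cost underestimation and benefit overestimation are the main sources of error in such screens.

    # Bottom-up screening of proposals by simple return on investment,
    # against a hypothetical hurdle rate.
    proposals = [
        {"name": "new packaging line", "cost": 400_000, "benefit": 520_000},
        {"name": "regional warehouse", "cost": 900_000, "benefit": 990_000},
    ]

    HURDLE_RATE = 0.15

    def roi(p):
        return (p["benefit"] - p["cost"]) / p["cost"]

    approved = [p["name"] for p in proposals if roi(p) >= HURDLE_RATE]
    print(approved)   # ['new packaging line']  (ROI 0.30 vs 0.10)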

The strategy hierarchy

In most (large) corporations there are several levels of management. Strategic management

is the highest of these levels in the sense that it is the broadest - applying to all parts of the

firm - while also incorporating the longest time horizon. It gives direction to corporate

values, corporate culture, corporate goals, and corporate missions. Under this broad

corporate strategy there are typically business-level competitive strategies and functional

unit strategies.

Corporate strategy refers to the overarching strategy of the diversified firm. Such a

corporate strategy answers the questions of "in which businesses should we compete?" and

"how does being in these business create synergy and/or add to the competitive advantage

of the corporation as a whole?"

Business strategy refers to the aggregated strategies of a single business firm or a strategic

business unit (SBU) in a diversified corporation. According to Michael Porter, a firm must

formulate a business strategy that incorporates either cost leadership, differentiation or

focus in order to achieve a sustainable competitive advantage and long-term success in its

chosen arenas or industries.


Functional strategies include marketing strategies, new product development strategies,

human resource strategies, financial strategies, legal strategies, supply-chain strategies,

and information technology management strategies. The emphasis is on short and medium

term plans and is limited to the domain of each department’s functional responsibility. Each

functional department attempts to do its part in meeting overall corporate objectives, and

hence to some extent their strategies are derived from broader corporate strategies.

Many companies feel that a functional organizational structure is not an efficient way to

organize activities so they have reengineered according to processes or SBUs. A strategic

business unit is a semi-autonomous unit that is usually responsible for its own budgeting,

new product decisions, hiring decisions, and price setting. An SBU is treated as an internal

profit centre by corporate headquarters.

An additional level of strategy called operational strategy was encouraged by Peter

Drucker in his theory of management by objectives (MBO). It is very narrow in focus and

deals with day-to-day operational activities such as scheduling criteria. It must operate

within a budget but is not at liberty to adjust or create that budget. Operational level

strategies are informed by business level strategies which, in turn, are informed by

corporate level strategies.

Since the turn of the millennium, some firms have reverted to a simpler strategic structure

driven by advances in information technology. It is felt that knowledge management

systems should be used to share information and create common goals. Strategic divisions

are thought to hamper this process. This notion of strategy has been captured under the

rubric of dynamic strategy, popularized by Carpenter and Sanders's textbook.[1] This work builds on that of Brown and Eisenhardt as well as Christensen and portrays firm

strategy, both business and corporate, as necessarily embracing ongoing strategic change,

and the seamless integration of strategy formulation and implementation. Such change and

implementation are usually built into the strategy through the staging and pacing facets.

Historical development of strategic management

Birth of strategic management

Strategic management as a discipline originated in the 1950s and 60s. Although there were

numerous early contributors to the literature, the most influential pioneers were Alfred D.

Chandler, Jr., Philip Selznick, Igor Ansoff, and Peter Drucker.


Alfred Chandler recognized the importance of coordinating the various aspects of

management under one all-encompassing strategy. Prior to this time the various functions

of management were separate with little overall coordination or strategy. Interactions

between functions or between departments were typically handled by a boundary position,

that is, there were one or two managers that relayed information back and forth between

two departments. Chandler also stressed the importance of taking a long term perspective

when looking to the future. In his 1962 groundbreaking work Strategy and Structure,

Chandler showed that a long-term coordinated strategy was necessary to give a company

structure, direction, and focus. He says it concisely, “structure follows strategy.”[3]

In 1957, Philip Selznick introduced the idea of matching the organization's internal factors

with external environmental circumstances.[4] This core idea was developed into what we

now call SWOT analysis by Learned, Andrews, and others at the Harvard Business School

General Management Group. Strengths and weaknesses of the firm are assessed in light of

the opportunities and threats from the business environment.

Igor Ansoff built on Chandler's work by adding a range of strategic concepts and inventing a

whole new vocabulary. He developed a strategy grid that compared market penetration

strategies, product development strategies, market development strategies and horizontal

and vertical integration and diversification strategies. He felt that management could use

these strategies to systematically prepare for future opportunities and challenges. In his

1965 classic Corporate Strategy, he developed the gap analysis still used today in which we

must understand the gap between where we are currently and where we would like to be,

then develop what he called “gap reducing actions”.[5]
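
Ansoff's grid and his gap analysis are both simple enough to sketch. The 2x2 layout below is the conventional rendering of the grid (the paragraph above names the strategies but not the axes), and all revenue figures in the gap calculation are hypothetical.

    # Ansoff's strategy grid in its conventional 2x2 form.
    ansoff_grid = {
        ("existing market", "existing product"): "market penetration",
        ("existing market", "new product"):      "product development",
        ("new market",      "existing product"): "market development",
        ("new market",      "new product"):      "diversification",
    }
    print(ansoff_grid[("new market", "existing product")])  # market development

    # Gap analysis in Ansoff's sense: the distance between where the
    # current strategy is heading and where we would like to be.
    target_revenue = 50.0       # desired position, $m (hypothetical)
    projected_revenue = 38.0    # projection under the current strategy, $m
    gap = target_revenue - projected_revenue   # 12.0, to be closed by
                                               # "gap reducing actions"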

Peter Drucker was a prolific strategy theorist, author of dozens of management books, with

a career spanning five decades. His contributions to strategic management were many but

two are most important. Firstly, he stressed the importance of objectives. An organization

without clear objectives is like a ship without a rudder. As early as 1954 he was developing a

theory of management based on objectives.[6] This evolved into his theory of management

by objectives (MBO). According to Drucker, the procedure of setting objectives and

monitoring your progress towards them should permeate the entire organization, top to

bottom. His other seminal contribution was in predicting the importance of what today we

would call intellectual capital. He predicted the rise of what he called the “knowledge

worker” and explained the consequences of this for management. He said that knowledge

work is non-hierarchical. Work would be carried out in teams with the person most

knowledgeable in the task at hand being the temporary leader.


In 1985, Ellen-Earle Chaffee summarized what she thought were the main elements of

strategic management theory by the 1970s:[7]

Strategic management involves adapting the organization to its business

environment.

Strategic management is fluid and complex. Change creates novel combinations of

circumstances requiring unstructured non-repetitive responses.

Strategic management affects the entire organization by providing direction.

Strategic management involves both strategy formation (she called it content) and

also strategy implementation (she called it process).

Strategic management is partially planned and partially unplanned.

Strategic management is done at several levels: overall corporate strategy, and

individual business strategies.

Strategic management involves both conceptual and analytical thought processes.

Growth and portfolio theory

In the 1970s much of strategic management dealt with size, growth, and portfolio theory.

The PIMS study was a long term study, started in the 1960s and lasted for 19 years, that

attempted to understand the Profit Impact of Marketing Strategies (PIMS), particularly the

effect of market share. Started at General Electric, moved to Harvard in the early 1970s, and

then moved to the Strategic Planning Institute in the late 1970s, it now contains decades of

information on the relationship between profitability and strategy. Their initial conclusion

was unambiguous: The greater a company's market share, the greater its rate of

profit. The high market share provides volume and economies of scale. It also provides

experience and learning curve advantages. The combined effect is increased profits.[8] The

studies conclusions continue to be drawn on by academics and companies today: "PIMS

provides compelling quantitative evidence as to which business strategies work and don't

work" - Tom Peters.

The benefits of high market share naturally lead to an interest in growth strategies. The

relative advantages of horizontal integration, vertical integration, diversification, franchises,

mergers and acquisitions, joint ventures, and organic growth were discussed. The most

appropriate market dominance strategies were assessed given the competitive and

regulatory environment.


There was also research that indicated that a low market share strategy could also be very

profitable. Schumacher (1973),[9] Woo and Cooper (1982),[10] Levenson (1984),[11] and later

Traverso (2002)[12] showed how smaller niche players obtained very high returns.

By the early 1980s the paradoxical conclusion was that high market share and low market

share companies were often very profitable but most of the companies in between were not.

This was sometimes called the “hole in the middle” problem. This anomaly would be

explained by Michael Porter in the 1980s.

The management of diversified organizations required new techniques and new ways of

thinking. The first CEO to address the problem of a multi-divisional company was Alfred

Sloan at General Motors. GM was decentralized into semi-autonomous “strategic business

units” (SBUs), but with centralized support functions.

One of the most valuable concepts in the strategic management of multi-divisional

companies was portfolio theory. In the previous decade Harry Markowitz and other

financial theorists developed the theory of portfolio analysis. It was concluded that a broad

portfolio of financial assets could reduce specific risk. In the 1970s marketers extended the

theory to product portfolio decisions and managerial strategists extended it to operating

division portfolios. Each of a company’s operating divisions was seen as an element in the

corporate portfolio. Each operating division (also called a strategic business unit) was treated

as a semi-independent profit center with its own revenues, costs, objectives, and strategies.

Several techniques were developed to analyze the relationships between elements in a

portfolio. B.C.G. Analysis, for example, was developed by the Boston Consulting Group in the

early 1970s. This was the theory that gave us the wonderful image of a CEO sitting on a

stool milking a cash cow. Shortly after that, the GE multifactor model was developed by

General Electric. Companies continued to diversify until the 1980s when it was realized that

in many cases a portfolio of operating divisions was worth more as separate completely

independent companies.

The marketing revolution

The 1970s also saw the rise of the marketing oriented firm. From the beginnings of

capitalism it was assumed that the key requirement of business success was a product of

high technical quality. If you produced a product that worked well and was durable, it was

assumed you would have no difficulty selling it at a profit. This was called the production

orientation and it was generally true that good products could be sold without effort,

encapsulated in the saying "Build a better mousetrap and the world will beat a path to your


door." This was largely due to the growing numbers of affluent and middle class people that

capitalism had created. But after the untapped demand caused by the second world war was

saturated in the 1950s it became obvious that products were not selling as easily as they

had been. The answer was to concentrate on selling. The 1950s and 1960s are known as the

sales era and the guiding philosophy of business of the time is today called the sales

orientation. In the early 1970s Theodore Levitt and others at Harvard argued that the sales

orientation had things backward. They claimed that instead of producing products then

trying to sell them to the customer, businesses should start with the customer, find out what

they wanted, and then produce it for them. The customer became the driving force behind

all strategic business decisions. This marketing orientation, in the decades since its

introduction, has been reformulated and repackaged under numerous names including

customer orientation, marketing philosophy, customer intimacy, customer focus, customer

driven, and market focused.

The Japanese challenge

By the late 70s people had started to notice how successful Japanese industry had become.

In industry after industry, including steel, watches, ship building, cameras, autos, and

electronics, the Japanese were surpassing American and European companies. Westerners

wanted to know why. Numerous theories purported to explain the Japanese success

including:

Higher employee morale, dedication, and loyalty;

Lower cost structure, including wages;

Effective government industrial policy;

Modernization after WWII leading to high capital intensity and productivity;

Economies of scale associated with increased exporting;

Relatively low value of the Yen leading to low interest rates and capital costs, low

dividend expectations, and inexpensive exports;

Superior quality control techniques such as Total Quality Management and other

systems introduced by W. Edwards Deming in the 1950s and 60s.[13]

Although there was some truth to all these potential explanations, there was clearly

something missing. In fact by 1980 the Japanese cost structure was higher than the


American. And post WWII reconstruction was nearly 40 years in the past. The first

management theorist to suggest an explanation was Richard Pascale.

In 1981 Richard Pascale and Anthony Athos in The Art of Japanese Management claimed that

the main reason for Japanese success was their superior management techniques.[14] They

divided management into 7 aspects (also known as the McKinsey 7S Framework):

Strategy, Structure, Systems, Skills, Staff, Style, and Supraordinate goals (which we would

now call shared values). The first three of the 7 S's were called hard factors and this is

where American companies excelled. The remaining four factors (skills, staff, style, and

shared values) were called soft factors and were not well understood by American

businesses of the time (for details on the role of soft and hard factors see Wickens P.D.

1995.) Americans did not yet place great value on corporate culture, shared values and

beliefs, and social cohesion in the workplace. In Japan the task of management was seen as

managing the whole complex of human needs, economic, social, psychological, and spiritual.

In America work was seen as something that was separate from the rest of one's life. It was

quite common for Americans to exhibit a very different personality at work compared to the

rest of their lives. Pascale also highlighted the difference between decision making styles;

hierarchical in America, and consensus in Japan. He also claimed that American business

lacked long term vision, preferring instead to apply management fads and theories in a

piecemeal fashion.

One year later The Mind of the Strategist was released in America by Kenichi Ohmae, the

head of McKinsey & Co.'s Tokyo office.[15] (It was originally published in Japan in 1975.) He

claimed that strategy in America was too analytical. Strategy should be a creative art: It is a

frame of mind that requires intuition and intellectual flexibility. He claimed that Americans

constrained their strategic options by thinking in terms of analytical techniques, rote

formula, and step-by-step processes. He compared the culture of Japan in which vagueness,

ambiguity, and tentative decisions were acceptable, to American culture that valued fast

decisions.

Also in 1982 Tom Peters and Robert Waterman released a study that would respond to the

Japanese challenge head on.[16] Peters and Waterman, who had several years earlier

collaborated with Pascale and Athos at McKinsey & Co., asked “What makes an excellent

company?”. They looked at 62 companies that they thought were fairly successful. Each was

subject to six performance criteria. To be classified as an excellent company, it had to be

above the 50th percentile in 4 of the 6 performance metrics for 20 consecutive years. Forty-

three companies passed the test. They then studied these successful companies and


interviewed key executives. They concluded in In Search of Excellence that there were 8

keys to excellence that were shared by all 43 firms. They are:

A bias for action — Do it. Try it. Don’t waste time studying it with multiple reports and

committees.

Customer focus — Get close to the customer. Know your customer.

Entrepreneurship — Even big companies act and think small by giving people the

authority to take initiatives.

Productivity through people — Treat your people with respect and they will reward

you with productivity.

Value-oriented CEOs — The CEO should actively propagate corporate values

throughout the organization.

Stick to the knitting — Do what you know well.

Keep things simple and lean — Complexity encourages waste and confusion.

Simultaneously centralized and decentralized — Have tight centralized control while

also allowing maximum individual autonomy.

The basic blueprint on how to compete against the Japanese had been drawn. But as J.E.

Rehfeld (1994) explains, it is not a straightforward task due to differences in culture.[17] A

certain type of alchemy was required to transform knowledge from various cultures into a

management style that allows a specific company to compete in a globally diverse world. He

says, for example, that Japanese style kaizen (continuous improvement) techniques,

although suitable for people socialized in Japanese culture, have not been successful when

implemented in the U.S. unless they are modified significantly.

Gaining competitive advantage

The Japanese challenge shook the confidence of the western business elite, but detailed

comparisons of the two management styles and examinations of successful businesses

convinced westerners that they could overcome the challenge. The 1980s and early 1990s

saw a plethora of theories explaining exactly how this could be done. They cannot all be

detailed here, but some of the more important strategic advances of the decade are

explained below.


Gary Hamel and C. K. Prahalad declared that strategy needs to be more active and

interactive; less “arm-chair planning” was needed. They introduced terms like strategic

intent and strategic architecture.[18][19] Their most well known advance was the idea of

core competency. They showed how important it was to know the one or two key things that

your company does better than the competition.[20]

Active strategic management required active information gathering and active problem

solving. In the early days of Hewlett-Packard (H-P), Dave Packard and Bill Hewlett devised an

active management style that they called Management by Walking Around (MBWA).

Senior H-P managers were seldom at their desks. They spent most of their days visiting

employees, customers, and suppliers. This direct contact with key people provided them

with a solid grounding from which viable strategies could be crafted. The MBWA concept was

popularized in 1985 by a book by Tom Peters and Nancy Austin.[21] Japanese managers

employ a similar system, which originated at Honda, and is sometimes called the 3 G's

(Genba, Genbutsu, and Genjitsu, which translate into “actual place”, “actual thing”, and

“actual situation”).

Probably the most influential strategist of the decade was Michael Porter. He introduced

many new concepts including: 5 forces analysis, generic strategies, the value chain,

strategic groups, and clusters. In 5 forces analysis he identifies the forces that shape a firm's

strategic environment. It is like a SWOT analysis with structure and purpose. It shows how a

firm can use these forces to obtain a sustainable competitive advantage. Porter modifies

Chandler's dictum about structure following strategy by introducing a second level of

structure: Organizational structure follows strategy, which in turn follows industry structure.

Porter's generic strategies detail the interaction between cost minimization strategies,

product differentiation strategies, and market focus strategies. Although he did not

introduce these terms, he showed the importance of choosing one of them rather than

trying to position your company between them. He also challenged managers to see their

industry in terms of a value chain. A firm will be successful only to the extent that it

contributes to the industry's value chain. This forced management to look at its operations

from the customer's point of view. Every operation should be examined in terms of what

value it adds in the eyes of the final customer.
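
For reference, the five forces themselves, which the paragraph above does not enumerate, are conventionally given as in the sketch below; the 1-5 pressure ratings are hypothetical.

    # Porter's five forces as conventionally enumerated. Ratings are
    # hypothetical: 1 = weak pressure on industry profits, 5 = strong.
    five_forces = {
        "rivalry among existing competitors": 4,
        "threat of new entrants":             2,
        "threat of substitutes":              3,
        "bargaining power of buyers":         4,
        "bargaining power of suppliers":      2,
    }
    strongest = max(five_forces, key=five_forces.get)
    print(strongest)   # rivalry among existing competitors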

In 1993, John Kay took the idea of the value chain to a financial level claiming “Adding value

is the central purpose of business activity”, where adding value is defined as the difference

between the market value of outputs and the cost of inputs including capital, all divided by

the firm's net output. Borrowing from Gary Hamel and Michael Porter, Kay claims that the

role of strategic management is to identify your core competencies, and then assemble a


collection of assets that will increase value added and provide a competitive advantage. He

claims that there are 3 types of capabilities that can do this: innovation, reputation, and

organizational structure.
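
Kay's added-value measure reduces to simple arithmetic. In the sketch below the figures are hypothetical, and "net output", which the passage leaves undefined, is taken simply as the market value of output for convenience.

    # John Kay's added-value measure as defined above: market value of
    # output minus the cost of all inputs (including capital), divided
    # by net output. All figures are hypothetical.
    market_value_of_output = 10_000_000.0
    cost_of_inputs = 9_200_000.0    # materials, labour, and cost of capital
    net_output = 10_000_000.0       # taken as gross output here for simplicity

    added_value = market_value_of_output - cost_of_inputs   # 800,000
    added_value_ratio = added_value / net_output            # 0.08
    print(added_value_ratio)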

The 1980s also saw the widespread acceptance of positioning theory. Although the theory

originated with Jack Trout in 1969, it didn’t gain wide acceptance until Al Ries and Jack Trout

wrote their classic book “Positioning: The Battle For Your Mind” (1979). The basic premise is

that a strategy should not be judged by internal company factors but by the way customers

see it relative to the competition. Crafting and implementing a strategy involves creating a

position in the mind of the collective consumer. Several techniques were applied to

positioning theory, some newly invented but most borrowed from other disciplines.

Perceptual mapping, for example, creates visual displays of the relationships between

positions. Multidimensional scaling, discriminant analysis, factor analysis, and conjoint

analysis are mathematical techniques used to determine the most relevant characteristics

(called dimensions or factors) upon which positions should be based. Preference regression

can be used to determine vectors of ideal positions and cluster analysis can identify clusters

of positions.
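
As a small illustration of the raw material these techniques work from, the sketch below computes inter-brand distances in attribute space; multidimensional scaling would project such distances onto the two-dimensional perceptual map described above. The brands and ratings are hypothetical.

    # Raw material for a perceptual map: distances between brands in
    # attribute space (hypothetical 1-7 survey ratings).
    from math import dist

    ratings = {               # (perceived quality, perceived price)
        "Brand A": (6.1, 5.8),
        "Brand B": (4.0, 2.9),
        "Brand C": (5.9, 3.2),
    }

    brands = list(ratings)
    for i, a in enumerate(brands):
        for b in brands[i + 1:]:
            print(a, "<->", b, round(dist(ratings[a], ratings[b]), 2))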

Others felt that internal company resources were the key. In 1992, Jay Barney, for example,

saw strategy as assembling the optimum mix of resources, including human, technology,

and suppliers, and then configuring them in unique and sustainable ways.[22]

Michael Hammer and James Champy felt that these resources needed to be restructured.[23]

This process, that they labeled reengineering, involved organizing a firm's assets around

whole processes rather than tasks. In this way a team of people saw a project through, from

inception to completion. This avoided functional silos where isolated departments seldom

talked to each other. It also eliminated waste due to functional overlap and

interdepartmental communications.

In 1989 Richard Lester and the researchers at the MIT Industrial Performance Center

identified seven best practices and concluded that firms must accelerate the shift away

from the mass production of low cost standardized products. The seven areas of best

practice were:[24]

Simultaneous continuous improvement in cost, quality, service, and product

innovation

Breaking down organizational barriers between departments


Eliminating layers of management, creating flatter organizational hierarchies.

Closer relationships with customers and suppliers

Intelligent use of new technology

Global focus

Improving human resource skills

The search for “best practices” is also called benchmarking.[25] This involves determining

where you need to improve, finding an organization that is exceptional in this area, then

studying the company and applying its best practices in your firm.

A large group of theorists felt the area where western business was most lacking was

product quality. People like W. Edwards Deming,[26] Joseph M. Juran,[27] A. Kearney,[28] Philip

Crosby,[29] and Armand Feigenbaum[30] suggested quality improvement techniques like Total

Quality Management (TQM), continuous improvement, lean manufacturing, Six Sigma, and

Return on Quality (ROQ).

An equally large group of theorists felt that poor customer service was the problem. People

like James Heskett (1988),[31] Earl Sasser (1995), William Davidow,[32] Len Schlesinger,[33] A.

Parasuraman (1988), Len Berry,[34] Jane Kingman-Brundage,[35] Christopher Hart, and

Christopher Lovelock (1994), gave us fishbone diagramming, service charting, Total

Customer Service (TCS), the service profit chain, service gaps analysis, the service

encounter, strategic service vision, service mapping, and service teams. Their underlying

assumption was that there is no better source of competitive advantage than a continuous

stream of delighted customers.

Process management uses some of the techniques from product quality management and

some of the techniques from customer service management. It looks at an activity as a

sequential process. The objective is to find inefficiencies and make the process more

effective. Although the procedures have a long history, dating back to Taylorism, the scope

of their applicability has been greatly widened, leaving no aspect of the firm free from

potential process improvements. Because of the broad applicability of process management

techniques, they can be used as a basis for competitive advantage.

Some realized that businesses were spending much more on acquiring new customers than

on retaining current ones. Carl Sewell,[36] Frederick F. Reichheld,[37] C. Gronroos,[38] and Earl

Sasser[39] showed us how a competitive advantage could be found in ensuring that


customers returned again and again. This has come to be known as the loyalty effect after

Reichheld's book of the same name, in which he broadens the concept to include employee

loyalty, supplier loyalty, distributor loyalty, and shareholder loyalty. They also developed

techniques for estimating the lifetime value of a loyal customer, called customer lifetime

value (CLV). A significant movement started that attempted to recast selling and marketing

techniques into a long term endeavor that created a sustained relationship with customers

(called relationship selling, relationship marketing, and customer relationship management).

Customer relationship management (CRM) software (and its many variants) became an

integral tool that sustained this trend.
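
A minimal customer-lifetime-value calculation in the spirit just described: each year's margin is weighted by the probability the customer is still active, then discounted. The parameters are hypothetical, and published CLV formulas vary in detail.

    # Minimal CLV sketch: discounted margins weighted by survival odds.
    annual_margin = 200.0    # contribution per active customer per year
    retention = 0.80         # probability of remaining a customer each year
    discount = 0.10          # annual discount rate
    horizon = 10             # years

    clv = sum(annual_margin * retention**t / (1 + discount)**t
              for t in range(1, horizon + 1))
    print(round(clv, 2))     # ~511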

James Gilmore and Joseph Pine found competitive advantage in mass customization.[40]

Flexible manufacturing techniques allowed businesses to individualize products for each

customer without losing economies of scale. This effectively turned the product into a

service. They also realized that if a service is mass customized by creating a “performance”

for each individual client, that service would be transformed into an “experience”. Their

book, The Experience Economy,[41] along with the work of Bernd Schmitt convinced many to

see service provision as a form of theatre. This school of thought is sometimes referred to as

customer experience management (CEM).

Like Peters and Waterman a decade earlier, James Collins and Jerry Porras spent years

conducting empirical research on what makes great companies. Six years of research

uncovered a key underlying principle behind the 18 successful companies that they studied:

They all encourage and preserve a core ideology that nurtures the company. Even though

strategy and tactics change daily, the companies, nevertheless, were able to maintain a

core set of values. These core values encourage employees to build an organization that

lasts. In Built To Last (1994) they claim that short term profit goals, cost cutting, and

restructuring will not stimulate dedicated employees to build a great company that will

endure.[42] In 2000 Collins coined the term “built to flip” to describe the prevailing business

attitudes in Silicon Valley. It describes a business culture where technological change

inhibits a long term focus. He also popularized the concept of the BHAG (Big Hairy

Audacious Goal).

Arie de Geus (1997) undertook a similar study and obtained similar results. He identified

four key traits of companies that had prospered for 50 years or more. They are:

Sensitivity to the business environment — the ability to learn and adjust

Cohesion and identity — the ability to build a community with personality, vision, and

purpose


Tolerance and decentralization — the ability to build relationships

Conservative financing

A company with these key characteristics he called a living company because it is able to

perpetuate itself. If a company emphasizes knowledge rather than finance, and sees itself as

an ongoing community of human beings, it has the potential to become great and endure for

decades. Such an organization is an organic entity capable of learning (he called it a

“learning organization”) and capable of creating its own processes, goals, and persona.

The military theorists

In the 1980s some business strategists realized that there was a vast knowledge base

stretching back thousands of years that they had barely examined. They turned to military

strategy for guidance. Military strategy books such as The Art of War by Sun Tzu, On War by

von Clausewitz, and The Little Red Book by Mao Zedong became instant business classics. From

Sun Tzu they learned the tactical side of military strategy and specific tactical prescriptions.

From Von Clausewitz they learned the dynamic and unpredictable nature of military

strategy. From Mao Zedong they learned the principles of guerrilla warfare. The main

marketing warfare books were:

Business War Games by Barrie James, 1984

Marketing Warfare by Al Ries and Jack Trout, 1986

Leadership Secrets of Attila the Hun by Wess Roberts, 1987

Philip Kotler was a well-known proponent of marketing warfare strategy.

There were generally thought to be four types of business warfare theories. They are:

Offensive marketing warfare strategies

Defensive marketing warfare strategies

Flanking marketing warfare strategies

Guerrilla marketing warfare strategies


The marketing warfare literature also examined leadership and motivation, intelligence

gathering, types of marketing weapons, logistics, and communications.

By the turn of the century marketing warfare strategies had gone out of favour. It was felt

that they were limiting. There were many situations in which non-confrontational

approaches were more appropriate. The “Strategy of the Dolphin” was developed in the mid

1990s to give guidance as to when to use aggressive strategies and when to use passive

strategies. A variety of aggressiveness strategies were developed.

In 1993, J. Moore used a similar metaphor.[43] Instead of using military terms, he created an

ecological theory of predators and prey (see ecological model of competition), a sort of

Darwinian management strategy in which market interactions mimic long term ecological

stability.

Strategic change

In 1970, Alvin Toffler in Future Shock described a trend towards accelerating rates of

change.[44] He illustrated how social and technological norms had shorter lifespans with each

generation, and he questioned society's ability to cope with the resulting turmoil and

anxiety. In past generations periods of change were always punctuated with times of

stability. This allowed society to assimilate the change and deal with it before the next

change arrived. But these periods of stability are getting shorter and by the late 20th

century had all but disappeared. In 1980 in The Third Wave, Toffler characterized this shift

to relentless change as the defining feature of the third phase of civilization (the first two

phases being the agricultural and industrial waves).[45] He claimed that the dawn of this new

phase will cause great anxiety for those that grew up in the previous phases, and will cause

much conflict and opportunity in the business world. Hundreds of authors, particularly since

the early 1990s, have attempted to explain what this means for business strategy.

In 1997, Watts Wacker and Jim Taylor called this upheaval a "500 year delta."[46] They claimed

these major upheavals occur every 5 centuries. They said we are currently making the

transition from the “Age of Reason” to a new chaotic Age of Access. Jeremy Rifkin (2000)

popularized and expanded this term, “age of access” three years later in his book of the

same name.[47]

In 1968, Peter Drucker (1969) coined the phrase Age of Discontinuity to describe the way

change forces disruptions into the continuity of our lives.[48] In an age of continuity attempts

to predict the future by extrapolating from the past can be somewhat accurate. But


according to Drucker, we are now in an age of discontinuity and extrapolating from the past

is hopelessly ineffective. We cannot assume that trends that exist today will continue into

the future. He identifies four sources of discontinuity: new technologies, globalization,

cultural pluralism, and knowledge capital.

In 2000, Gary Hamel discussed strategic decay, the notion that the value of all strategies,

no matter how brilliant, decays over time.[49]

In 1978, Derek Abell (Abell, D. 1978) described strategic windows and stressed the

importance of the timing (both entrance and exit) of any given strategy. This has led some

strategic planners to build planned obsolescence into their strategies.[50]

In 1989, Charles Handy identified two types of change.[51] Strategic drift is a gradual

change that occurs so subtly that it is not noticed until it is too late. By contrast,

transformational change is sudden and radical. It is typically caused by discontinuities (or

exogenous shocks) in the business environment. The point where a new trend is initiated is

called a strategic inflection point by Andy Grove. Inflection points can be subtle or

radical.

In 2000, Malcolm Gladwell discussed the importance of the tipping point, that point where

a trend or fad acquires critical mass and takes off.[52]

In 1983, Noel Tichy recognized that because we are all beings of habit we tend to repeat

what we are comfortable with.[53] He wrote that this is a trap that constrains our creativity,

prevents us from exploring new ideas, and hampers our dealing with the full complexity of

new issues. He developed a systematic method of dealing with change that involved looking

at any new issue from three angles: technical and production, political and resource

allocation, and corporate culture.

In 1990, Richard Pascale (Pascale, R. 1990) wrote that relentless change requires that

businesses continuously reinvent themselves.[54] His famous maxim is “Nothing fails like

success” by which he means that what was a strength yesterday becomes the root of

weakness today. We tend to depend on what worked yesterday and refuse to let go of what

worked so well for us in the past. Prevailing strategies become self-confirming. In order to

avoid this trap, businesses must stimulate a spirit of inquiry and healthy debate. They must

encourage a creative process of self renewal based on constructive conflict.


In 1996, Art Kleiner (1996) claimed that to foster a corporate culture that embraces change,

you have to hire the right people: heretics, heroes, outlaws, and visionaries.[55] The

conservative bureaucrat that made such a good middle manager in yesterday’s hierarchical

organizations is of little use today. A decade earlier Peters and Austin (1985) had stressed

the importance of nurturing champions and heroes. They said we have a tendency to

dismiss new ideas, so to overcome this, we should support those few people in the

organization that have the courage to put their career and reputation on the line for an

unproven idea.

In 1996, Adrian Slywotsky showed how changes in the business environment are reflected in

value migrations between industries, between companies, and within companies.[56] He

claimed that recognizing the patterns behind these value migrations is necessary if we wish

to understand the world of chaotic change. In “Profit Patterns” (1999) he described

businesses as being in a state of strategic anticipation as they try to spot emerging

patterns. Slywotsky and his team identified 30 patterns that have transformed industry after

industry.[57]

In 1997, Clayton Christensen (1997) took the position that great companies can fail precisely

because they do everything right, since the capabilities of the organization also define its

disabilities.[58] Christensen's thesis is that outstanding companies lose their market

leadership when confronted with disruptive technology. He called the approach to

discovering the emerging markets for disruptive technologies agnostic marketing, i.e.,

marketing under the implicit assumption that no one - not the company, not the customers -

can know how or in what quantities a disruptive product can or will be used before they

have experience using it.

A number of strategists use scenario planning techniques to deal with change. Kees van der

Heijden (1996), for example, says that change and uncertainty make “optimum strategy”

determination impossible. We have neither the time nor the information required for such a

calculation. The best we can hope for is what he calls “the most skillful process”.[59] The way

Peter Schwartz put it in 1991 is that strategic outcomes cannot be known in advance so the

sources of competitive advantage cannot be predetermined.[60] The fast changing business

environment is too uncertain for us to find sustainable value in formulas of excellence or

competitive advantage. Instead, scenario planning is a technique in which multiple

outcomes can be developed, their implications assessed, and their likeliness of occurrence

evaluated. According to Pierre Wack, scenario planning is about insight, complexity, and

subtlety, not about formal analysis and numbers.[61]


In 1988, Henry Mintzberg looked at the changing world around him and decided it was time

to reexamine how strategic management was done.[62][63] He examined the strategic process

and concluded it was much more fluid and unpredictable than people had thought. Because

of this, he could not point to one process that could be called strategic planning. Instead he

concludes that there are five types of strategies. They are:

Strategy as plan - a direction, guide, course of action - intention rather than actual

Strategy as ploy - a maneuver intended to outwit a competitor

Strategy as pattern - a consistent pattern of past behaviour - realized rather than

intended

Strategy as position - locating of brands, products, or companies within the

conceptual framework of consumers or other stakeholders - strategy determined

primarily by factors outside the firm

Strategy as perspective - strategy determined primarily by a master strategist

In 1998, Mintzberg developed these five types of management strategy into 10 “schools of

thought”. These 10 schools are grouped into three categories. The first group is prescriptive

or normative. It consists of the informal design and conception school, the formal planning

school, and the analytical positioning school. The second group, consisting of six schools, is

more concerned with how strategic management is actually done, rather than prescribing

optimal plans or positions. The six schools are the entrepreneurial, visionary, or great leader

school, the cognitive or mental process school, the learning, adaptive, or emergent process

school, the power or negotiation school, the corporate culture or collective process school,

and the business environment or reactive school. The third and final group consists of one

school, the configuration or transformation school, a hybrid of the other schools organized

into stages, organizational life cycles, or “episodes”.[64]

In 1999, Constantinos Markides also wanted to reexamine the nature of strategic planning

itself.[65] He describes strategy formation and implementation as an on-going, never-ending,

integrated process requiring continuous reassessment and reformation. Strategic

management is planned and emergent, dynamic, and interactive. J. Moncrieff (1999) also

stresses strategy dynamics.[66] He recognized that strategy is partially deliberate and

partially unplanned. The unplanned element comes from two sources: emergent strategies (which result from the emergence of opportunities and threats in the environment) and strategies in action (ad hoc actions by many people from all parts of the organization).

Some business planners are starting to use a complexity theory approach to strategy.

Complexity can be thought of as chaos with a dash of order. Chaos theory deals with

turbulent systems that rapidly become disordered. Complexity is not quite so unpredictable.

It involves multiple agents interacting in such a way that a glimpse of structure may appear.

Axelrod, R.,[67] Holland, J.,[68] and Kelly, S. and Allison, M.A.,[69] call these systems of multiple

actions and reactions complex adaptive systems. Axelrod asserts that rather than fear

complexity, business should harness it. He says this can best be done when “there are many

participants, numerous interactions, much trial and error learning, and abundant attempts to

imitate each other's successes”. In 2000, E. Dudik wrote that an organization must develop

a mechanism for understanding the source and level of complexity it will face in the future

and then transform itself into a complex adaptive system in order to deal with it.[70]

Information and technology driven strategy

Peter Drucker had theorized the rise of the “knowledge worker” back in the 1950s. He

described how fewer workers would be doing physical labor, and more would be applying

their minds. In 1984, John Naisbitt theorized that the future would be driven largely by information: companies that managed information well could obtain an advantage; however, the profitability of what he called the “information float” (information that the company had

and others desired) would all but disappear as inexpensive computers made information

more accessible.

Daniel Bell (1985) examined the sociological consequences of information technology, while

Gloria Schuck and Shoshana Zuboff looked at psychological factors.[71] Zuboff, in her five

year study of eight pioneering corporations made the important distinction between

“automating technologies” and “informating technologies”. She studied the effect that both

had on individual workers, managers, and organizational structures. She largely confirmed

Peter Drucker's predictions three decades earlier, about the importance of flexible

decentralized structure, work teams, knowledge sharing, and the central role of the

knowledge worker. Zuboff also detected a new basis for managerial authority, based not on

position or hierarchy, but on knowledge (also predicted by Drucker) which she called

“participative management”.[72]

In 1990, Peter Senge, who had collaborated with Arie de Geus at Dutch Shell, borrowed de

Geus' notion of the learning organization, expanded it, and popularized it. The underlying

theory is that a company's ability to gather, analyze, and use information is a necessary

requirement for business success in the information age. (See organizational learning.) In

order to do this, Senge claimed that an organization would need to be structured such that:[73]

People can continuously expand their capacity to learn and be productive,

New patterns of thinking are nurtured,

Collective aspirations are encouraged, and

People are encouraged to see the “whole picture” together.

Senge identified five components of a learning organization. They are:

Personal responsibility, self reliance, and mastery — We accept that we are the

masters of our own destiny. We make decisions and live with the consequences of

them. When a problem needs to be fixed, or an opportunity exploited, we take the

initiative to learn the required skills to get it done.

Mental models — We need to explore our personal mental models to understand the

subtle effect they have on our behaviour.

Shared vision — The vision of where we want to be in the future is discussed and

communicated to all. It provides guidance and energy for the journey ahead.

Team learning — We learn together in teams. This involves a shift from “a spirit of

advocacy to a spirit of enquiry”.

Systems thinking — We look at the whole rather than the parts. This is what Senge

calls the “Fifth discipline”. It is the glue that integrates the other four into a coherent

strategy. For an alternative approach to the “learning organization”, see Garratt, B.

(1987).

Since 1990 many theorists have written on the strategic importance of information,

including J.B. Quinn,[74] J. Carlos Jarillo,[75] D.L. Barton,[76] Manuel Castells,[77] J.P. Lieleskin,[78]

Thomas Stewart,[79] K.E. Sveiby,[80] Gilbert J. Probst,[81] and Shapiro and Varian[82] to name just

a few.

Thomas A. Stewart, for example, uses the term intellectual capital to describe the

investment an organization makes in knowledge. It comprises human capital (the

knowledge inside the heads of employees), customer capital (the knowledge inside the

heads of customers who decide to buy from you), and structural capital (the knowledge that

resides in the company itself).

Manuel Castells describes a network society characterized by: globalization, organizations

structured as a network, instability of employment, and a social divide between those with

access to information technology and those without.

Stan Davis and Christopher Meyer (1998) have combined three variables to define what they

call the BLUR equation. The speed of change, Internet connectivity, and intangible

knowledge value, when multiplied together, yield a society's rate of BLUR. The three

variables interact and reinforce each other making this relationship highly non-linear.

Regis McKenna posits that life in the high tech information age is what he called a “real time

experience”. Events occur in real time. To ever more demanding customers “now” is what

matters. Pricing will increasingly become variable, changing with each transaction and often exhibiting first-degree price discrimination. Customers expect immediate service,

customized to their needs, and will be prepared to pay a premium price for it. He claimed

that the new basis for competition will be time based competition.[83]

Geoffrey Moore (1991) and R. Frank and P. Cook[84] also detected a shift in the nature of

competition. In industries with high technology content, technical standards become

established and this gives the dominant firm a near monopoly. The same is true of

networked industries in which interoperability requires compatibility between users. An

example is word processor documents. Once a product has gained market dominance, other

products, even far superior products, cannot compete. Moore showed how firms could attain

this enviable position by using E.M. Rogers' five-stage adoption process and focusing on one

group of customers at a time, using each group as a base for marketing to the next group.

The most difficult step is making the transition between visionaries and pragmatists (See

Crossing the Chasm). If successful, a firm can create a bandwagon effect in which the momentum builds and its product becomes a de facto standard.

Evans and Wurster describe how industries with a high information component are being

transformed.[85] They cite Encarta's demolition of the Encyclopedia Britannica (whose sales

have plummeted 80% since their peak of $650 million in 1990). Many speculate that

Encarta’s reign will be short-lived, eclipsed by collaborative encyclopedias like Wikipedia

that can operate at very low marginal costs. Evans also mentions the music industry which

is desperately looking for a new business model. The upstart information savvy firms,

unburdened by cumbersome physical assets, are changing the competitive landscape,

redefining market segments, and disintermediating some channels. One manifestation of

this is personalized marketing. Information technology allows marketers to treat each

individual as its own market, a market of one. Traditional ideas of market segments will no

longer be relevant if personalized marketing is successful.

The technology sector has provided some strategies directly. For example, from the

software development industry, agile software development provides a model for shared

development processes.

Access to information systems has allowed senior managers to take a much more comprehensive view of strategic management than ever before. The most notable of the comprehensive systems is the balanced scorecard approach developed in the early 1990s by Drs. Robert S. Kaplan (Harvard Business School) and David Norton (Kaplan, R. and Norton, D. 1992). It measures several factors (financial, marketing, production, organizational development, and new product development) in order to achieve a 'balanced' perspective.

The psychology of strategic management

Several psychologists have conducted studies to determine the psychological patterns

involved in strategic management. Typically senior managers have been asked how they go

about making strategic decisions. A 1938 treatise by Chester Barnard, based on his own experience as a business executive, sees the process as informal, intuitive, non-routinized, and involving primarily oral, two-way communication. Barnard says, “The process

is the sensing of the organization as a whole and the total situation relevant to it. It

transcends the capacity of merely intellectual methods, and the techniques of discriminating

the factors of the situation. The terms pertinent to it are “feeling”, “judgement”, “sense”,

“proportion”, “balance”, “appropriateness”. It is a matter of art rather than science.”[86]

In 1973, Henry Mintzberg found that senior managers typically deal with unpredictable

situations so they strategize in ad hoc, flexible, dynamic, and implicit ways. He says, “The

job breeds adaptive information-manipulators who prefer the live concrete situation. The

manager works in an environment of stimulus-response, and he develops in his work a

clear preference for live action.”[87]

In 1982, John Kotter studied the daily activities of 15 executives and concluded that they

spent most of their time developing and working a network of relationships from which they

gained general insights and specific details to be used in making strategic decisions. They

tended to use “mental road maps” rather than systematic planning techniques.[88]

Daniel Isenberg's 1984 study of senior managers found that their decisions were highly

intuitive. Executives often sensed what they were going to do before they could explain why.[89] He claimed in 1986 that one of the reasons for this is the complexity of strategic

decisions and the resultant information uncertainty.[90]

Shoshana Zuboff (1988) claims that information technology is widening the divide between

senior managers (who typically make strategic decisions) and operational level managers

(who typically make routine decisions). She claims that prior to the widespread use of

computer systems, managers, even at the most senior level, engaged in both strategic

decisions and routine administration, but as computers facilitated (she said “deskilled”) routine processes, these activities were moved further down the hierarchy, leaving senior management free for strategic decision making.

In 1977, Abraham Zaleznik identified a difference between leaders and managers. He

describes leaders as visionaries who inspire and who care about substance, whereas managers are said to care about process, plans, and form.[91] He also claimed in 1989

that the rise of the manager was the main factor that caused the decline of American

business in the 1970s and 80s. Lack of leadership is most damaging at the level of strategic

management where it can paralyze an entire organization.[92]

According to Corner, Kinicki, and Keats,[93] strategic decision making in organizations occurs

at two levels: individual and aggregate. They have developed a model of parallel strategic

decision making. The model identifies two parallel processes both of which involve getting

attention, encoding information, storage and retrieval of information, strategic choice,

strategic outcome, and feedback. The individual and organizational processes are not

independent, however; they interact at each stage of the process.

Reasons why strategic plans fail

There are many reasons why strategic plans fail, especially:

Failure to understand the customer

o Why do they buy?

o Is there a real need for the product?

o Inadequate or incorrect marketing research

Inability to predict environmental reaction

o What will competitors do?

Fighting brands

Price wars

o Will government intervene?

Over-estimation of resource competence

o Can the staff, equipment, and processes handle the new strategy?

o Failure to develop new employee and management skills

Failure to coordinate

o Reporting and control relationships not adequate

o Organizational structure not flexible enough

Failure to obtain senior management commitment

o Failure to get management involved right from the start

o Failure to obtain sufficient company resources to accomplish task

Failure to obtain employee commitment

o New strategy not well explained to employees

o No incentives given to workers to embrace the new strategy

Under-estimation of time requirements

o No critical path analysis done

Failure to follow the plan

o No follow through after initial planning

o No tracking of progress against plan

o No consequences for above

Failure to manage change

o Inadequate understanding of the internal resistance to change

o Lack of vision on the relationships between processes, technology and

organization

Poor communications

o Insufficient information sharing among stakeholders

o Exclusion of stakeholders and delegates

Limitations of strategic management

Although a sense of direction is important, it can also stifle creativity, especially if it is rigidly

enforced. In an uncertain and ambiguous world, fluidity can be more important than a finely

tuned strategic compass. When a strategy becomes internalized into a corporate culture, it

can lead to groupthink. It can also cause an organization to define itself too narrowly. An

example of this is marketing myopia.

Many theories of strategic management tend to undergo only brief periods of popularity. A

summary of these theories thus inevitably exhibits survivorship bias (itself an area of

research in strategic management). Many theories tend either to be too narrow in focus to

build a complete corporate strategy on, or too general and abstract to be applicable to

specific situations. Populism or faddishness can affect a particular theory's life cycle and may lead to its application in inappropriate circumstances. See business philosophies

and popular management theories for a more critical view of management theories.

In 2000, Gary Hamel coined the term strategic convergence to explain the limited scope

of the strategies being used by rivals in greatly differing circumstances. He lamented that

strategies converge more than they should, because the more successful ones get imitated

by firms that do not understand that the strategic process involves designing a custom

strategy for the specifics of each situation.[94]

Ram Charan, aligning with a popular marketing tagline, believes that strategic planning

must not dominate action. "Just do it!", while not quite what he meant, is a phrase that

nevertheless comes to mind when combating analysis paralysis.

The Linearity Trap

It is tempting to think that the elements of strategic management – (i) reaching consensus

on corporate objectives; (ii) developing a plan for achieving the objectives; and (iii)

marshalling and allocating the resources required to implement the plan – can be

approached sequentially. It would be convenient, in other words, if one could deal first with

the noble question of ends, and then address the mundane question of means.

But in the world in which strategies have to be implemented, the three elements are

interdependent. Means are as likely to determine ends as ends are to determine means.[95]

The objectives that an organization might wish to pursue are limited by the range of feasible

approaches to implementation. (There will usually be only a small number of approaches

that will not only be technically and administratively possible, but also satisfactory to the full

range of organizational stakeholders.) In turn, the range of feasible implementation

approaches is determined by the availability of resources.

And so, although participants in a typical “strategy session” may be asked to do “blue sky”

thinking where they pretend that the usual constraints – resources, acceptability to

stakeholders, administrative feasibility – have been lifted, the fact is that it rarely makes

sense to divorce oneself from the environment in which a strategy will have to be

implemented. It’s probably impossible to think in any meaningful way about strategy in an

unconstrained environment. Our brains can’t process “boundless possibilities”, and the very

idea of strategy only has meaning in the context of challenges or obstacles to be overcome.

It’s at least as plausible to argue that acute awareness of constraints is the very thing that

stimulates creativity by forcing us to constantly reassess both means and ends in light of

circumstances.

The key question, then, is, "How can individuals, organizations and societies cope as well as

possible with ... issues too complex to be fully understood, given the fact that actions

initiated on the basis of inadequate understanding may lead to significant regret?"[96]

The answer is that the process of developing organizational strategy must be iterative. It

involves toggling back and forth between questions about objectives, implementation

planning and resources. An initial idea about corporate objectives may have to be altered if

there is no feasible implementation plan that will meet with a sufficient level of acceptance

among the full range of stakeholders, or because the necessary resources are not available,

or both.

Even the most talented manager would no doubt agree that "comprehensive analysis is

impossible" for complex problems[97]. Formulation and implementation of strategy must thus

occur side-by-side rather than sequentially, because strategies are built on assumptions

which, in the absence of perfect knowledge, will never be perfectly correct. Strategic

management is necessarily a "repetitive learning cycle [rather than] a linear progression

towards a clearly defined final destination."[98] While assumptions can and should be tested

in advance, the ultimate test is implementation. You will inevitably need to adjust corporate objectives, your approach to pursuing outcomes, your assumptions about required resources, or all three. Thus a strategy will get remade during implementation because "humans rarely

can proceed satisfactorily except by learning from experience; and modest probes, serially

modified on the basis of feedback, usually are the best method for such learning."[99]

It serves little purpose (other than to provide a false aura of certainty sometimes demanded

by corporate strategists and planners) to pretend to anticipate every possible consequence

of a corporate decision, every possible constraining or enabling factor, and every possible

point of view. At the end of the day, what matters for the purposes of strategic management

is having a clear view – based on the best available evidence and on defensible assumptions

– of what it seems possible to accomplish within the constraints of a given set of

circumstances. As the situation changes, some opportunities for pursuing objectives will

disappear and others arise. Some implementation approaches will become impossible, while

others, previously impossible or unimagined, will become viable.

The essence of being “strategic” thus lies in a capacity for "intelligent trial and error"[100] rather than linear adherence to finely honed and detailed strategic plans. Strategic management will add little value -- indeed, it may well do harm -- if organizational strategies are designed to be used as detailed blueprints for managers. Strategy should be seen, rather, as laying out the general path - but not the precise steps - by which an organization intends to create value.[101] Strategic management is a question of interpreting, and

continuously reinterpreting, the possibilities presented by shifting circumstances for

advancing an organization's objectives. Doing so requires strategists to think simultaneously

about desired objectives, the best approach for achieving them, and the resources implied

by the chosen approach. It requires a frame of mind that admits of no boundary between

means and ends.

Maris Martinsons (1993). Strategic Innovation. Management Decision. Mark Schacter (2008). Interpreting the Possible: A Guide to Strategic Management in Public Service Organizations.

External links

The Journal of Business Strategies

Strategic Planning Society

The Association of Internal Management Consultants - The nationwide network of Strategic Management and Planning professionals

Business model

Business plan

Cost overrun

Hoshin Kanri

Integrated business planning

Marketing

Marketing plan

Marketing strategies

Management

Management consulting

Military strategy

Morphological analysis

Overall Equipment Effectiveness

Proximity mapping

Revenue shortfall

Strategic planning

Strategy visualization

Value migration

Strategic planning

Strategic planning is an organization's process of defining its strategy, or direction, and

making decisions on allocating its resources to pursue this strategy, including its capital and

people. Various business analysis techniques can be used in strategic planning, including

SWOT analysis (Strengths, Weaknesses, Opportunities, and Threats) and PEST analysis

(Political, Economic, Social, and Technological analysis) or STEER analysis involving Socio-

cultural, Technological, Economic, Ecological, and Regulatory factors.

Contents

1 Introduction

2 Vision, mission and values

3 Methodologies

4 Situational analysis

5 Goals, objectives and targets

6 Mission statements and vision statements

Introduction

Strategies are different from tactics in that:

1. They are proactive and not reactive, as tactics are.

2. They are internal in source, and the business venture has absolute control over their

application.

3. Strategy can only be applied once; after that, it is a process of application with no unique element remaining.

4. The outcome is normally a strategic plan which is used as guidance to define functional

and divisional plans, including Technology, Marketing, etc.

Strategic planning is the formal consideration of an organization's future course. All strategic

planning deals with at least one of three key questions:

1. "What do we do?"

2. "For whom do we do it?"

3. "How do we excel?"

In business strategic planning, the third question is better phrased "How can we beat or

avoid competition?". (Bradford and Duncan, page 1).

In many organizations, this is viewed as a process for determining where an organization is

going over the next year or more, typically 3 to 5 years, although some extend their vision

to 20 years.

In order to determine where it is going, the organization needs to know exactly where it

stands, then determine where it wants to go and how it will get there. The resulting

document is called the "strategic plan".

Strategic planning may be a tool for effectively plotting the direction of a company; however, strategic planning itself cannot foretell exactly how the market will evolve or what issues will surface in the coming days. Therefore, strategic innovation and continual tinkering with the 'strategic plan' have to be cornerstones of an organization's ability to survive a turbulent business climate.

Vision, mission and values

Vision: Defines the desired or intended future state of a specific organization or enterprise in

terms of its fundamental objective and/or strategic direction.

Mission: Defines the fundamental purpose of an organization or an enterprise, basically

describing why it exists.

Values: Beliefs that are shared among the stakeholders of an organization. Values drive an

organization's culture and priorities.

Methodologies

There are many approaches to strategic planning but typically a three-step process may be

used:

Situation - evaluate the current situation and how it came about.

Target - define goals and/or objectives (sometimes called ideal state)

Path - map a possible route to the goals/objectives

One alternative approach is called Draw-See-Think

Draw - what is the ideal image or the desired end state?

See - what is today's situation? What is the gap from ideal and why?

Think - what specific actions must be taken to close the gap between today's

situation and the ideal state?

Plan - what resources are required to execute the activities?

An alternative to the Draw-See-Think approach is called See-Think-Draw

See - what is today's situation?

Think - define goals/objectives

Draw - map a route to achieving the goals/objectives

In other terms strategic planning can be as follows:

Vision - Define the vision and set a mission statement with hierarchy of goals

SWOT - Analysis conducted according to the desired goals

Formulate - Formulate actions and processes to be taken to attain these goals

Implement - Implementation of the agreed upon processes

Control - Monitor and get feedback from implemented processes to fully control the

operation

Situational analysis

When developing strategies, it is important to analyze the organization and its environment both as they are at the moment and as they may develop in the future. The analysis has to be executed at an internal level as well as an external level, to identify the opportunities and threats of the external environment as well as the strengths and weaknesses of the organization.

There are several factors to assess in the external situation analysis:

1. Markets (customers)

2. Competition

3. Technology

4. Supplier markets

5. Labor markets

6. The economy

7. The regulatory environment

It is rare to find all seven of these factors having critical importance. It is also uncommon to

find that the first two - markets and competition - are not of critical importance. (Bradford

"External Situation - What to Consider")

Analysis of the external environment normally focuses on the customer. Management should

be visionary in formulating customer strategy, and should do so by thinking about market

environment shifts, how these could impact customer sets, and whether those customer

sets are the ones the company wishes to serve.

Analysis of the competitive environment is also performed, often based on the framework suggested by Michael Porter.

Goals, objectives and targets

Strategic planning is a very important business activity. It is also important in public sector areas such as education, and is practiced widely, both informally and formally. Strategic

planning and decision processes should end with objectives and a roadmap of ways to

achieve those objectives.

The following terms have been used in strategic planning: desired end states, plans, policies,

goals, objectives, strategies, tactics and actions. Definitions vary, overlap and fail to achieve

clarity. The most common of these concepts are specific, time bound statements of intended

future results and general and continuing statements of intended future results, which most

models refer to as either goals or objectives (sometimes interchangeably).

One model of organizing objectives uses hierarchies. The items listed above may be

organized in a hierarchy of means and ends and numbered as follows: Top Rank Objective

(TRO), Second Rank Objective, Third Rank Objective, etc. From any rank, the objective in a lower rank answers the question "How?" and the objective in a higher rank answers the question "Why?" The exception is the Top Rank Objective (TRO): there is no answer to the "Why?" question. That is how the TRO is defined.

People typically have several goals at the same time. "Goal congruency" refers to how well

the goals combine with each other. Does goal A appear compatible with goal B? Do they fit

together to form a unified strategy? "Goal hierarchy" consists of the nesting of one or more

goals within other goal(s).

One approach recommends having short-term goals, medium-term goals, and long-term

goals. In this model, one can expect to attain short-term goals fairly easily: they stand just

slightly above one's reach. At the other extreme, long-term goals appear very difficult,

almost impossible to attain. Strategic management jargon sometimes refers to "Big Hairy

Audacious Goals" (BHAGs) in this context. Using one goal as a stepping-stone to the next

involves goal sequencing. A person or group starts by attaining the easy short-term goals,

then steps up to the medium-term, then to the long-term goals. Goal sequencing can create

a "goal stairway". In an organizational setting, the organization may co-ordinate goals so

that they do not conflict with each other. The goals of one part of the organization should

mesh compatibly with those of other parts of the organization.

Mission statements and vision statements

Organizations sometimes summarize goals and objectives into a mission statement and/or

a vision statement:

While the existence of a shared mission is extremely useful, many strategy specialists

question the requirement for a written mission statement. However, there are many models

of strategic planning that start with mission statements, so it is useful to examine them

here.

A Mission statement tells you the fundamental purpose of the organization. It

concentrates on the present. It defines the customer and the critical processes. It

informs you of the desired level of performance.

A Vision statement outlines what the organization wants to be. It concentrates on

the future. It is a source of inspiration. It provides clear decision-making criteria.

Many people mistake a vision statement for a mission statement. The Vision describes a future

identity while the Mission serves as an ongoing and time-independent guide. The Mission

describes why it is important to achieve the Vision. A Mission statement defines the purpose

or broader goal for being in existence or in the business and can remain the same for

decades if crafted well. A Vision statement is more specific in terms of both the future state

and the time frame. Vision describes what will be achieved if the organization is successful.

A mission statement can resemble a vision statement in a few companies, but that can be a

grave mistake. It can confuse people. The vision statement can galvanize the people to

achieve defined objectives, even if they are stretch objectives, provided the vision is SMART

(Specific, Measurable, Achievable, Relevant and Timebound). A mission statement provides

a path to realize the vision in line with its values. These statements have a direct bearing on

the bottom line and success of the organization.

Which comes first? The mission statement or the vision statement? That depends. If you

have a new start-up business, a new program, or plan to re-engineer your current services,

then the vision will guide the mission statement and the rest of the strategic plan. If you

have an established business where the mission is established, then many times, the

mission guides the vision statement and the rest of the strategic plan. Either way, you need

to know your fundamental purpose - the mission, your current situation in terms of internal

resources and capabilities (strengths and/or weaknesses) and external conditions

(opportunities and/or threats), and where you want to go - the vision for the future. It's

important that you keep the end or desired result in sight from the start.

Features of an effective vision statement include:

Clarity and lack of ambiguity

Vivid and clear picture

Description of a bright future

Memorable and engaging wording

Realistic aspirations

Alignment with organizational values and culture

To become really effective, an organizational vision statement must (the theory states)

become assimilated into the organization's culture. Leaders have the responsibility of

communicating the vision regularly, creating narratives that illustrate the vision, acting as

role-models by embodying the vision, creating short-term objectives compatible with the

vision, and encouraging others to craft their own personal vision compatible with the

organization's overall vision.

Mission statement

A mission statement is a brief statement of the purpose of a company, organization, or

group. Companies sometimes use their mission statement as an advertising slogan, but the

intention of a mission statement is to keep members and users aware of the organization's

purpose. In the case of public commercial companies, the primary purpose must always be

to uphold the interests of shareholders, whatever the mission statement.

Contents

1 Structure of a mission statement

2 Stakeholder conflict resolution

3 See also

4 References

Structure of a mission statement

The following elements can be included in a mission statement. Their sequence can be

different. It is important, however, that some elements supporting the accomplishment of

the mission be present and not just the mission as a "wish" or dream.

Purpose and values of the organization

The business the organization is in (products or services, market) or the organization's primary "clients" (stakeholders)

The responsibilities of the organization towards these "clients"

The main objectives supporting the company in accomplishing its mission

Stakeholder conflict resolution

The mission statement can be used to resolve differences between business stakeholders.

Stakeholders include: employees including managers and executives, stockholders, board of

directors, customers, suppliers, distributors, creditors, governments (local, state, federal,

etc.), unions, competitors, NGO's, and the general public. Stakeholders affect and are

affected by the organization's strategies. According to Vern McGinis, a mission should:

Define what the company is

Define what the company aspires to be

Be limited enough to exclude some ventures

Be broad enough to allow for creative growth

Distinguish the company from all others

Serve as a framework to evaluate current activities

Be stated clearly so that it is understood by all

Chief executive officer

"Chief Executive" redirects here. For other uses, see Chief Executive (disambiguation).

"CEO" and "CEOs" redirect here. For the island, see Ceos.

A Chief Executive Officer (CEO) or Chief Executive is typically the highest-ranking

corporate officer (executive) or administrator in charge of total management of a

corporation, company, organization, or agency, reporting to the board of directors. In

internal communication and press releases, many companies capitalize the term and those

of other high positions, even when they are not proper nouns.

Contents

1 International Use

2 Structure

International Use

In some European Union countries, there are two separate boards, one executive board for

the day-to-day business and one supervisory board for control purposes (elected by the

shareholders). In these countries, the CEO presides over the executive board and the

chairman presides over the supervisory board, and these two roles will always be held by

different people. This ensures a distinction between management by the executive board

and governance by the supervisory board. This allows for clear lines of authority. The aim is

to prevent a conflict of interest and too much power being concentrated in the hands of one

person. There is a strong parallel here with the structure of government, which tends to

separate the political cabinet from the management civil service.

In other parts of the world, such as Asia, it is possible to have two or three CEOs in charge of

one corporation. In the UK, many charities and government agencies are headed by a chief

executive who is answerable to a board of trustees or board of directors. In the UK, the chair

(of the board) in public companies is more senior than the chief executive (who is usually

known as the managing director). Most public companies now split the roles of chair and

chief executive.

In France, a CEO/MD is known as the "PDG" (French: président directeur général); in

Sweden, the CEO/MD is known as "VD" (Swedish: verkställande direktör); in Australia, the

CEO can be known as the "MD" (managing director); in Spain, the usual name is "director

general"; while in Italy, the position is called "AD" (Italian: amministratore delegato). In

Denmark and Norway the CEO is known as the "administrerende direktør", abbr. adm.dir.

In the US, and in business, the executive officers are the top officers of a corporation, the

chief executive officer (CEO) being the best-known type. The definition varies; for instance,

the California Corporate Disclosure Act defines "executive officers" as the five most highly-

compensated officers not also sitting on the board of directors. In the case of a sole

proprietorship, an executive officer is the sole proprietor. In the case of a partnership, an

executive officer is a managing partner, senior partner, or administrative partner. In the

case of a limited liability company, an executive officer is any member, manager, or officer.

In the airline industry, the Executive Officer, more commonly known as the First Officer, is

the second in command of the aircraft. In a fixed wing aircraft the First Officer sits in the

right-hand seat but in a rotary wing aircraft they sit on the left.

Structure

Typically, a CEO has several subordinate executives, each of whom has specific functional

responsibilities.

Common associates include a chief financial officer (CFO), chief operating officer (COO),

chief technical officer (CTO), chief marketing officer (CMO), chief information officer (CIO),

and a director, or Vice-President of human resources.

Board of directors

A board of directors is a body of elected or appointed persons who jointly oversee the

activities of a company or organization. The body sometimes has a different name, such as

board of trustees, board of governors, board of managers, or executive board. It is often

simply referred to as "the board."

A board's activities are determined by the powers, duties, and responsibilities delegated to it

or conferred on it by an authority outside itself. These matters are typically detailed in the

organization's bylaws. The bylaws commonly also specify the number of members of the

board, how they are to be chosen, and when they are to meet.

In an organization with voting members, e.g. a professional society, the board acts on behalf

of, and is subordinate to, the organization's full assembly, which usually chooses the

members of the board. In a stock corporation, the board is elected by the stockholders and

is the highest authority in the management of the corporation. In a nonstock corporation

with no general voting membership, e.g. a university, the board is the supreme governing

body of the institution.[1]

Typical duties of boards of directors include:[2][3]

Governing the organization by establishing broad policies and objectives

Selecting, appointing, supporting and reviewing the performance of the chief

executive

Ensuring the availability of adequate financial resources

Approving annual budgets

Accounting to the stakeholders for the organization's performance

The legal responsibilities of boards and board members vary with the nature of the

organization, and with the jurisdiction within which it operates. For public corporations,

these responsibilities are typically much more rigorous and complex than for those of other

types.

Typically the board chooses one of its members to be the chair or chairperson of the board

of directors, traditionally also called chairman or chairwoman.

Contents

1 Corporations

2 Classification

3 History

4 Election and removal

5 Exercise of powers

6 Duties

o 6.1 Acting bona fide

o 6.2 "Proper purpose"

o 6.3 "Unfettered discretion"

o 6.4 "Conflict of duty and interest"

6.4.1 Transactions with the company

6.4.2 Use of corporate property, opportunity, or information

6.4.3 Competing with the company

o 6.5 Common law duties of care and skill

o 6.6 Remedies for breach of duty

o 6.7 The future

7 Failures

8 Sarbanes-Oxley Act

Corporations

Theoretically, the control of a company is divided between two bodies: the board of

directors, and the shareholders in general meeting. In practice, the amount of power

exercised by the board varies with the type of company. In small private companies, the

directors and the shareholders will normally be the same people, and thus there is no real

division of power. In large public companies, the board tends to exercise more of a

supervisory role, and individual responsibility and management tends to be delegated

downward to individual professional executive directors (such as a finance director or a

marketing director) who deal with particular areas of the company's affairs.

Another feature of boards of directors in large public companies is that the board tends to

have more de facto power. Between the practice of institutional shareholders (such as

pension funds and banks) granting proxies to the board to vote their shares at general

meetings and the large numbers of shareholders involved, the board can comprise a voting

bloc that is difficult to overcome. However, there have been moves recently to try to

increase shareholder activism amongst both institutional investors and individuals with small

shareholdings. [4] [5] A board-only organization is one whose board is self-appointed, rather

than being accountable to a base of members through elections; or in which the powers of

the membership are extremely limited.

It is worth noting that in most cases, serving on a board is not a career unto itself. Inside

directors are not usually paid for sitting on a board in its own right, but the duty is instead

considered part of their larger job description. Outside directors on a board likewise are

frequently unpaid for their services and sit on the board as a volunteer in addition to their

other jobs.

Classification

A board of directors is a group of people elected by the owners of a business entity who

have decision-making authority, voting authority, and specific responsibilities which in each

case are separate and distinct from the authority and responsibilities of owners and managers

of the business entity. The precise name for this group of individuals depends on the law

under which the business entity is formed.

Directors are the members of a board of directors. Directors must be individuals. Directors

can be owners, managers, or any other individual elected by the owners of the business

entity. Directors who are owners and/or managers are sometimes referred to as inside

directors, insiders or interested directors. Directors who are managers are sometimes

referred to as executive directors. Directors who are not owners or managers are sometimes

referred to as outside directors, outsiders, disinterested directors, independent directors, or

non-executive directors.

Boards of directors are sometimes compared to an advisory board or board of advisors

(advisory group). An advisory group is a group of people selected (but not elected) by the

person wanting advice. An advisory group has no decision-making authority, no voting

authority, and no responsibility. An advisory group does not replace a board of directors; in

other words, a board of directors continues to have authority and responsibility even with an

advisory group.

The role and responsibilities of a board of directors vary depending on the nature and type

of business entity and the laws applying to the entity (see types of business entity). For

example, the nature of the business entity may be one that is traded on a public market

(public company), not traded on a public market (a private, limited or closely held

company), owned by family members (a family business), or exempt from income taxes (a

non-profit, not-for-profit, or tax-exempt entity). There are numerous types of business entities

available throughout the world such as a corporation, limited liability company, cooperative,

business trust, partnership, private limited company, and public limited company.

Much of what has been written about boards of directors relates to boards of directors of business entities actively traded on public markets.[6] More recently, however, material is

becoming available for boards of private and closely held businesses including family

businesses.[7]

History

The development of a separate board of directors to manage the company has occurred

incrementally and indefinitely over legal history. Until the end of the nineteenth century, it

seems to have been generally assumed that the general meeting (of all shareholders) was

the supreme organ of the company, and the board of directors was merely an agent of the

company subject to the control of the shareholders in general meeting.[8]

By 1906 however, the English Court of Appeal had made it clear in the decision of Automatic

Self-Cleansing Filter Syndicate Co v Cunningham [1906] 2 Ch 34 that the division of powers

between the board and the shareholders in general meeting depended upon the

construction of the articles of association and that, where the powers of management were

vested in the board, the general meeting could not interfere with their lawful exercise. The

articles were held to constitute a contract by which the members had agreed that "the

directors and the directors alone shall manage."[9]

The new approach did not secure immediate approval, but it was endorsed by the House of

Lords in Quin & Axtens v Salmon [1909] AC 442 and has since received general acceptance.

Under English law, successive versions of Table A have reinforced the norm that, unless the

directors are acting contrary to the law or the provisions of the Articles, the powers of

conducting the management and affairs of the company are vested in them.

The modern doctrine was expressed in Shaw & Sons (Salford) Ltd v Shaw [1935] 2 KB 113 by

Greer LJ as follows:

"A company is an entity distinct alike from its shareholders and its directors. Some of

its powers may, according to its articles, be exercised by directors, certain other

powers may be reserved for the shareholders in general meeting. If powers of

management are vested in the directors, they and they alone can exercise these

powers. The only way in which the general body of shareholders can control the

exercise of the powers vested by the articles in the directors is by altering the articles, or, if

opportunity arises under the articles, by refusing to re-elect the directors of whose

actions they disapprove. They cannot themselves usurp the powers which by the

articles are vested in the directors any more than the directors can usurp the powers

vested by the articles in the general body of shareholders."

It has been remarked that this development in the law was somewhat surprising at the time,

as the relevant provisions in Table A (as it was then) seemed to contradict this approach

rather than to endorse it.[10]

Election and removal

In most legal systems, the appointment and removal of directors is voted upon by the

shareholders in general meeting.[11]

Directors may also leave office by resignation or death. In some legal systems, directors

may also be removed by a resolution of the remaining directors (in some countries they may

only do so "with cause"; in others the power is unrestricted).

Some jurisdictions also permit the board of directors to appoint directors, either to fill a

vacancy which arises on resignation or death, or as an addition to the existing directors.

In practice, it can be quite difficult to remove a director by a resolution in general meeting.

In many legal systems the director has a right to receive special notice of any resolution to

remove him;[12] the company must often supply a copy of the proposal to the director, who is

usually entitled to be heard by the meeting.[13] The director may require the company to

circulate any representations that he wishes to make.[14] Furthermore, the director's contract

of service will usually entitle him to compensation if he is removed, and may often include a

generous "golden parachute" which also acts as a deterrent to removal.

Exercise of powers

The exercise by the board of directors of its powers usually occurs in meetings. Most legal

systems provide that sufficient notice has to be given to all directors of these meetings, and

that a quorum must be present before any business may be conducted. Usually a meeting

which is held without notice having been given is still valid so long as all of the directors

attend, but it has been held that a failure to give notice may negate resolutions passed at a

meeting, as the persuasive oratory of a minority of directors might have persuaded the

majority to change their minds and vote otherwise.[15]

In most common law countries, the powers of the board are vested in the board as a whole,

and not in the individual directors.[16] However, in some instances an individual director may still

bind the company by his acts by virtue of his ostensible authority (see also: the rule in

Turquand's Case).

Duties

Because directors exercise control and management over the company, but companies are

run (in theory at least) for the benefit of the shareholders, the law imposes strict duties on

directors in relation to the exercise of their duties. The duties imposed upon directors are

fiduciary duties, similar in nature to those that the law imposes on those in similar positions

of trust: agents and trustees.

In relation to director's duties generally, two points should be noted:

1. the duties of the directors are several (as opposed to the exercise by the directors of

their powers, which must be done jointly); and

2. the duties are owed to the company itself, and not to any other entity.[17] This doesn't

mean that directors can never stand in a fiduciary relationship to the individual

shareholders; they may well have such a duty in certain circumstances.[18]

Acting bona fide

Directors must act honestly and bona fide. The test is a subjective one—the directors

must act in "good faith in what they consider—not what the court may consider—is in the

interests of the company..."[19] However, the directors may still be held to have failed in this

duty where they fail to direct their minds to the question of whether in fact a transaction

was in the best interests of the company.[20]

Difficult questions can arise when treating the company too much in the abstract. For

example, it may be for the benefit of a corporate group as a whole for a company to

guarantee the debts of a "sister" company,[21] even though there is no ostensible "benefit" to

the company giving the guarantee. Similarly, conceptually at least, there is no benefit to a

company in returning profits to shareholders by way of dividend. However, the more

pragmatic approach illustrated in the Australian case of Mills v Mills (1938) 60 CLR 150

normally prevails:

"[directors are] not required by the law to live in an unreal region of detached

altruism and to act in the vague mood of ideal abstraction from obvious facts which

must be present to the mind of any honest and intelligent man when he exercises his

powers as a director."

"Proper purpose"

Directors must exercise their powers for a proper purpose. While in many instances an

improper purpose is readily evident, such as a director looking to feather his or her own nest

or divert an investment opportunity to a relative, such breaches usually involve a breach of

the director's duty to act in good faith. Greater difficulties arise where the director, while

acting in good faith, is serving a purpose that is not regarded by the law as proper.

The seminal authority in relation to what amounts to a proper purpose is the Privy Council

decision of Howard Smith Ltd v Ampol Ltd [1974] AC 832. The case concerned the power of

the directors to issue new shares.[22] It was alleged that the directors had issued a large

number of new shares purely to deprive a particular shareholder of his voting majority. An

argument that the power to issue shares could only be properly exercised to raise new

capital was rejected as too narrow, and it was held that it would be a proper exercise of the

director's powers to issue shares to a larger company to ensure the financial stability of the

company, or as part of an agreement to exploit mineral rights owned by the company.[23] If so, the mere fact that an incidental result (even if it was a desired consequence) was that a shareholder lost his majority, or a takeover bid was defeated, would not itself make the share issue improper. But if the sole purpose was to destroy a voting majority, or block a

takeover bid, that would be an improper purpose.

Not all jurisdictions recognise the "proper purpose" duty as separate from the "good faith" duty, however.[24]

"Unfettered discretion"

Directors cannot, without the consent of the company, fetter their discretion in relation to

the exercise of their powers, and cannot bind themselves to vote in a particular way at

future board meetings.[25] This is so even if there is no improper motive or purpose, and no

personal advantage to the director.

This does not mean, however, that the board cannot agree to the company entering into a

contract which binds the company to a certain course, even if certain actions in that course

will require further board approval. The company remains bound, but the directors retain the

discretion to vote against taking the future actions (although that may involve a breach by

the company of the contract that the board previously approved).

"Conflict of duty and interest"

As fiduciaries, the directors may not put themselves in a position where their interests and

duties conflict with the duties that they owe to the company. The law takes the view that

good faith must not only be done, but must be manifestly seen to be done, and zealously

patrols the conduct of directors in this regard; and will not allow directors to escape liability

by asserting that their decisions were in fact well founded. Traditionally, the law has divided

conflicts of duty and interest into three sub-categories.

Transactions with the company

By definition, where a director enters into a transaction with a company, there is a conflict

between the director's interest (to do well for himself out of the transaction) and his duty to

the company (to ensure that the company gets as much as it can out of the transaction).

This rule is so strictly enforced that, even where the conflict of interest or conflict of duty is

purely hypothetical, the directors can be forced to disgorge all personal gains arising from it.

In Aberdeen Ry v Blaikie (1854) 1 Macq HL 461 Lord Cranworth stated in his judgment that:

"A corporate body can only act by agents, and it is, of course, the duty of those

agents so to act as best to promote the interests of the corporation whose affairs

they are conducting. Such agents have duties to discharge of a fiduciary nature

towards their principal. And it is a rule of universal application that no one, having

such duties to discharge, shall be allowed to enter into engagements in which he has,

or can have, a personal interest conflicting or which possibly may conflict, with the

interests of those whom he is bound to protect... So strictly is this principle adhered

to that no question is allowed to be raised as to the fairness or unfairness of the

contract entered into..." (emphasis added)

However, in many jurisdictions the members of the company are permitted to ratify

transactions which would otherwise fall foul of this principle. It is also largely accepted in

most jurisdictions that this principle should be capable of being abrogated in the company's

constitution.

In many countries there is also a statutory duty to declare interests in relation to any

transactions, and the director can be fined for failing to make disclosure.[26]

Use of corporate property, opportunity, or information

Directors must not, without the informed consent of the company, use for their own profit

the company's assets, opportunities, or information. This prohibition is much less flexible than the prohibition against transactions with the company, and attempts to circumvent

it using provisions in the articles have met with limited success.

In Regal (Hastings) Ltd v Gulliver [1942] All ER 378 the House of Lords, in upholding what

was regarded as a wholly unmeritorious claim by the shareholders,[27] held that:


"(i) that what the directors did was so related to the affairs of the company that it can

properly be said to have been done in the course of their management and in the

utilisation of their opportunities and special knowledge as directors; and (ii) that what

they did resulted in profit to themselves."

And accordingly, the directors were required to disgorge the profits that they made, and the

shareholders received their windfall.

The decision has been followed in several subsequent cases,[28] and is now regarded as

settled law.

Competing with the company

Directors cannot, clearly, compete directly with the company without a conflict of interests

arising. Similarly, they should not act as directors of competing companies, as their duties to

each company would then conflict with each other.

Common law duties of care and skill

Traditionally, the level of care and skill which has to be demonstrated by a director has been

framed largely with reference to the non-executive director. In Re City Equitable Fire

Insurance Co [1925] Ch 407, it was expressed in purely subjective terms, where the court

held that:

"a director need not exhibit in the performance of his duties a greater degree of skill

than may reasonably be expected from a person of his knowledge and experience."

(emphasis added)

However, this decision was based firmly in the older notions (see above) that prevailed at

the time as to the mode of corporate decision making, and effective control residing in the

shareholders; if they elected and put up with an incompetent decision maker, they should have no recourse to complain.

However, a more modern approach has since developed, and in Dorchester Finance Co v

Stebbing [1989] BCLC 498 the court held that the rule in Equitable Fire related only to skill,

and not to diligence. With respect to diligence, what was required was:


"such care as an ordinary man might be expected to take on his own behalf."

This was a dual subjective and objective test, and one deliberately pitched at a higher level.

More recently, it has been suggested that both the tests of skill and diligence should be assessed objectively and subjectively; in the United Kingdom, the statutory provisions relating to directors' duties in the new Companies Act 2006 have been codified on this basis.[29]

Remedies for breach of duty

In most jurisdictions, the law provides for a variety of remedies in the event of a breach by

the directors of their duties:

1. injunction or declaration

2. damages or compensation

3. restoration of the company's property

4. rescission of the relevant contract

5. account of profits

6. summary dismissal

The future

Historically, directors' duties have been owed almost exclusively to the company and its

members, and the board was expected to exercise its powers for the financial benefit of the

company. However, more recently there have been attempts to "soften" the position, and

provide for more scope for directors to act as good corporate citizens. For example, in the

United Kingdom, the Companies Act 2006, not yet in force, will require a director of a UK

company "to promote the success of the company for the benefit of its members as a

whole", but sets out six factors to which a director must have regards in fulfilling the duty to

promote success. These are:


the likely consequences of any decision in the long term

the interests of the company’s employees

the need to foster the company’s business relationships with suppliers, customers

and others

the impact of the company’s operations on the community and the environment

the desirability of the company maintaining a reputation for high standards of

business conduct, and

the need to act fairly as between members of a company

This represents a considerable departure from the traditional notion that directors' duties

are owed only to the company. Previously in the United Kingdom, under the Companies Act

1985, protections for non-member stakeholders were considerably more limited (see e.g.

s.309 which permitted directors to take into account the interests of employees but which

could only be enforced by the shareholders and not by the employees themselves). The changes have therefore been the subject of some criticism.[30]

Failures

While the primary responsibility of boards is to ensure that the corporation's management is

performing its job correctly, actually achieving this in practice can be difficult. In a number

of "corporate scandals" of the 1990s, one notable feature revealed in subsequent

investigations is that boards were not aware of the activities of the managers that they

hired, and the true financial state of the corporation. A number of factors may be involved in

this tendency:

Most boards largely rely on management to report information to them, thus allowing

management to place the desired 'spin' on information, or even conceal or lie about

the true state of a company.

Boards of directors are part-time bodies, whose members meet only occasionally and

may not know each other particularly well. This unfamiliarity can make it difficult for

board members to question management.

CEOs tend to be rather forceful personalities. In some cases, CEOs are accused of

exercising too much influence over the company's board.


Directors may not have the time or the skills required to understand the details of

corporate business, allowing management to obscure problems.

The same directors who appointed the present CEO oversee his or her performance.

This makes it difficult for some directors to dispassionately evaluate the CEO's

performance.

Directors often feel that the judgement of a manager, particularly one who has

performed well in the past, should be respected. This can be quite legitimate, but

poses problems if the manager's judgement is indeed flawed.

All of the above may contribute to a culture of "not rocking the boat" at board

meetings.

Because of this, the role of boards in corporate governance, and how to improve their

oversight capability, has been examined carefully in recent years. New legislation in a number of jurisdictions, and an increased focus on the topic by boards themselves, have seen changes implemented to try to improve their performance.

Sarbanes-Oxley Act

In the United States, the Sarbanes-Oxley Act (SOX) has introduced new standards of

accountability on the board of directors for U.S. companies or companies listed on U.S. stock

exchanges. Under the Act members of the board risk large fines and prison sentences in the

case of accounting crimes. Internal control is now the direct responsibility of directors. This

means that the vast majority of public companies have now hired internal auditors to ensure that the company adheres to the highest standards of internal control. Additionally, these internal auditors are required by law to report directly to the audit committee, a group of board members of whom more than half are from outside the company, at least one of the outside members being an accounting expert.

Balanced scorecard

The Balanced Scorecard (BSC) is a performance management tool which began as a

concept for measuring whether the smaller-scale operational activities of a company are

aligned with its larger-scale objectives in terms of vision and strategy.

By focusing not only on financial outcomes but also on the operational, marketing and

developmental inputs to these, the Balanced Scorecard helps provide a more


comprehensive view of a business, which in turn helps organizations act in their best long-

term interests.

Organisations were encouraged to measure—in addition to financial outputs—what

influenced such financial outputs. For example, process performance, market share /

penetration, long term learning and skills development, and so on.

The underlying rationale is that organisations cannot directly influence financial outcomes,

as these are "lag" measures, and that the use of financial measures alone to inform the

strategic control of the firm is unwise. Organisations should instead also measure those

areas where direct management intervention is possible. In so doing, the early versions of

the Balanced Scorecard helped organisations achieve a degree of "balance" in selection of

performance measures. In practice, early Scorecards achieved this balance by encouraging

managers to select measures from three additional categories or perspectives: "Customer,"

"Internal Business Processes" and "Learning and Growth."

Contents

1 History

2 Use

o 2.1 Original methodology

o 2.2 Improved methodology

o 2.3 Popularity

o 2.4 Variants, Alternatives and Criticisms

3 The Four Perspectives

4 Key Performance Indicators

o 4.1 Financial

o 4.2 Customer

o 4.3 Internal Business Processes

o 4.4 Learning & Growth

History


In 1992, Robert S. Kaplan and David P. Norton began publicizing the Balanced Scorecard

through a series of journal articles. In 1996, they published the book The Balanced

Scorecard.

Since the original concept was introduced, Balanced Scorecards have become a fertile field

of theory, research and consulting practice. The Balanced Scorecard has evolved

considerably from its roots as a measure selection framework. While the underlying

principles were sound, many aspects of Kaplan & Norton's original approach were

unworkable in practice. Both in firms associated with Kaplan & Norton (Renaissance

Solutions Inc. and BSCOL), and elsewhere (Cepro in Sweden, and 2GC Active Management in

the UK), the Balanced Scorecard has changed so that there is now much greater emphasis

on the design process than previously. There has also been a rapid growth in consulting

offerings linked to Balanced Scorecards at the level of branding only. Kaplan & Norton

themselves revisited Balanced Scorecards with the benefit of a decade's experience since

the original article.

The Balanced Scorecard is a performance planning and measurement framework, with principles similar to those of Management by Objectives, which was publicized by Robert S. Kaplan and David P. Norton in the early 1990s.

Use

Implementing Balanced Scorecards typically includes four processes:

1. Translating the vision into operational goals;

2. Communicating the vision and linking it to individual performance;

3. Business planning;

4. Feedback and learning, and adjusting the strategy accordingly.

The Balanced Scorecard is a framework, or what can be best characterized as a “strategic

management system” that claims to incorporate all quantitative and abstract measures of

true importance to the enterprise. According to Kaplan and Norton, “The Balanced Scorecard

provides managers with the instrumentation they need to navigate to future competitive

success”.


Many books and articles referring to Balanced Scorecards confuse the design process

elements and the Balanced Scorecard itself. In particular, it is common for people to refer to

a “strategic linkage model” or “strategy map” as being a Balanced Scorecard.

Although it helps focus managers' attention on strategic issues and the management of the

implementation of strategy, it is important to remember that the Balanced Scorecard itself

has no role in the formation of strategy. In fact, Balanced Scorecards can comfortably co-

exist with strategic planning systems and other tools.

Original methodology

The earliest Balanced Scorecards comprised simple tables broken into four sections -

typically these "perspectives" were labeled "Financial", "Customer", "Internal Business

Processes", and "Learning & Growth". Designing the Balanced Scorecard required selecting

five or six good measures for each perspective.
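To make this concrete, the sketch below (in Python) shows an early-style Scorecard as nothing more than a table of four perspectives, each holding a handful of measures. The measure names are hypothetical examples chosen for illustration, not measures prescribed by Kaplan and Norton.

# A minimal sketch of an early-style ("1st generation") Balanced Scorecard:
# four perspectives, each holding a handful of chosen measures. All measure
# names here are hypothetical examples.
scorecard = {
    "Financial": ["Cash flow", "Revenue growth", "Return on capital employed"],
    "Customer": ["On-time delivery rate", "Customer satisfaction", "Customer retention"],
    "Internal Business Processes": ["Defect rate", "Cycle time", "Equipment effectiveness"],
    "Learning & Growth": ["Training hours per employee", "Employee turnover"],
}

for perspective, measures in scorecard.items():
    print(perspective)
    for measure in measures:
        print("  -", measure)

Nothing in the structure itself records why these measures were chosen over any others, which is precisely the design weakness discussed below.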

Many authors have since suggested alternative headings for these perspectives, and also

suggested using either additional or fewer perspectives. These suggestions were notably

triggered by a recognition that different but equivalent headings would yield alternative sets

of measures. The major design challenge faced with this type of Balanced Scorecard is

justifying the choice of measures made. "Of all the measures you could have chosen, why

did you choose these?" This common question is hard to ask using this type of design

process. If users are not confident that the measures within the Balanced Scorecard are well

chosen, they will have less confidence in the information it provides. Although less common,

these early-style Balanced Scorecards are still designed and used today.

In short, early-style Balanced Scorecards are hard to design in a way that builds confidence

that they are well designed. Because of this, many are abandoned soon after completion.

Improved methodology

In the mid 1990s, an improved design method emerged. In the new method, measures are

selected based on a set of "strategic objectives" plotted on a "strategic linkage model" or

"strategy map". With this modified approach, the strategic objectives are typically

distributed across a similar set of "perspectives", as is found in the earlier designs, but the

design question becomes slightly less abstract.

Managers have to identify five or six goals within each of the perspectives, and then

demonstrate some inter-linking between these goals by plotting causal links on the diagram.


Having reached some consensus about the objectives and how they inter-relate, the

Balanced Scorecard is devised by choosing suitable measures for each objective. This type

of approach provides greater contextual justification for the measures chosen, and is

generally easier for managers to work through. This style of Balanced Scorecard has been

commonly used since 1996 or so.
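As an illustration of this 2nd-generation design (all objective names, links and measures below are invented for the example), a strategic linkage model can be sketched as objectives grouped by perspective, causal links plotted between them, and one measure then chosen per objective:

# Illustrative sketch of a 2nd-generation design: strategic objectives sit in
# perspectives, causal links are plotted between them, and a measure is then
# chosen for each objective. All names are hypothetical examples.
objectives = {
    "Learning & Growth": ["Improve staff skills"],
    "Internal Business Processes": ["Reduce order errors"],
    "Customer": ["Increase customer satisfaction"],
    "Financial": ["Grow revenue"],
}

# Causal links on the strategy map: (cause, effect) pairs.
links = [
    ("Improve staff skills", "Reduce order errors"),
    ("Reduce order errors", "Increase customer satisfaction"),
    ("Increase customer satisfaction", "Grow revenue"),
]

# One suitable measure per objective, chosen once the map is agreed.
measures = {
    "Improve staff skills": "Training hours per employee",
    "Reduce order errors": "Order error rate (%)",
    "Increase customer satisfaction": "Customer satisfaction score",
    "Grow revenue": "Year-on-year revenue growth (%)",
}

for perspective, objs in objectives.items():
    for obj in objs:
        print(f"[{perspective}] {obj}: measured by {measures[obj]}")
for cause, effect in links:
    print(f"{cause} -> {effect}")

The causal chain provides the contextual justification that the 1st-generation table lacked: each measure can be traced to an objective and to the objectives it is believed to influence.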

Several design issues still remain with this enhanced approach to Balanced Scorecard

design, but it has been much more successful than the design approach it superseded.

In the late 1990s, the design approach evolved yet again. One problem with the "2nd generation" design approach described above was that the plotting of causal links amongst twenty or so medium-term strategic goals was still a relatively abstract activity. In practice it ignored the fact that opportunities to intervene and to influence strategic goals are, and need to be, anchored in the "now": in current and real management activity. Secondly, the need to "roll forward" and test the impact of these goals necessitated the creation of an additional design instrument: the Vision or Destination Statement. This device was a statement of what "strategic success", or the "strategic end-state", looked like. It was quickly realised that if a Destination Statement was created at the beginning of the design process, it was much easier to select strategic Activity and Outcome objectives to respond to it. Measures and targets could then be selected to track the achievement of these objectives. Destination Statement-driven, or 3rd Generation, Balanced Scorecards represent the current state of the art in Scorecard design.

Popularity

Kaplan and Norton found that companies are using Balanced Scorecards to:

Drive strategy execution;

Clarify strategy and make strategy operational;

Identify and align strategic initiatives;

Link budget with strategy;

Align the organization with strategy;

Conduct periodic strategic performance reviews to learn about and improve strategy.

In 1997, Kurtzman found that 64 percent of the companies questioned were measuring

performance from a number of perspectives in a similar way to the Balanced Scorecard.


Balanced Scorecards have been implemented by government agencies, military units,

business units and corporations as a whole, non-profit organizations, and schools.

Many examples of Balanced Scorecards can be found via Web searches. However, adapting

one organization's Balanced Scorecard to another is generally not advised by theorists, who

believe that much of the benefit of the Balanced Scorecard comes from the implementation

method. Indeed, it could be argued that many failures in the early days of Balanced

Scorecard could be attributed to this problem, in that early Balanced Scorecards were often

designed remotely by consultants. Managers did not trust, and so failed to engage with and use, measure suites created by people lacking knowledge of the organisation and of management responsibility.

Variants, Alternatives and Criticisms

Since the late 1990s, various alternatives to the Balanced Scorecard have emerged, such as

The Performance Prism, Results Based Management and Third Generation Balanced

Scorecard. These tools seek to solve some of the remaining design issues, in particular

issues relating to the design of sets of Balanced Scorecards to use across an organization,

and issues in setting targets for the measures selected.

Applied Information Economics (AIE) has been researched as an alternative to Balanced

Scorecards. In 2000, the Federal CIO Council commissioned a study to compare the two

methods by funding studies in side-by-side projects in two different agencies. The Dept. of

Veterans Affairs used AIE and the US Dept. of Agriculture applied Balanced Scorecards. The

resulting report found that while AIE was much more sophisticated, AIE actually took slightly

less time to utilize. AIE was also more likely to generate findings that were newsworthy to

the organization, while the users of Balanced Scorecards felt it simply documented their

inputs and offered no other particular insight. However, Balanced Scorecards are still much more widely used than AIE.

A criticism of Balanced Scorecards is that the scores are not based on any proven economic

or financial theory, and therefore have no basis in the decision sciences. The process is

entirely subjective and makes no provision to assess quantities (e.g., risk and economic

value) in a way that is actuarially or economically well-founded.

Another criticism is that the Balanced Scorecard does not provide a bottom line score or a

unified view with clear recommendations: it is simply a list of metrics [1].


Some people also claim that positive feedback from users of Balanced Scorecards may be

due to a placebo effect, as there are no empirical studies linking the use of Balanced

Scorecards to better decision making or improved financial performance of companies.

The Four Perspectives

The grouping of performance measures in general categories (perspectives) is seen to aid in

the gathering and selection of the appropriate performance measures for the enterprise.

Four general perspectives have been proposed by the Balanced Scorecard:

Financial Perspective;

Customer Perspective;

Internal process Perspective;

Innovation & Learning Perspective.

The financial perspective examines whether the company’s implementation and execution of its

strategy are contributing to the bottom-line improvement of the company. It represents the

long-term strategic objectives of the organization and thus it incorporates the tangible

outcomes of the strategy in traditional financial terms. The three possible stages as

described by Kaplan and Norton (1996) are rapid growth, sustain and harvest. Financial

objectives and measures for the growth stage will stem from the development and growth of

the organization which will lead to increased sales volumes, acquisition of new customers,

growth in revenues etc. The sustain stage on the other hand will be characterized by

measures that evaluate the effectiveness of the organization to manage its operations and

costs, by calculating the return on investment, the return on capital employed, etc. Finally,

the harvest stage will be based on cash flow analysis with measures such as payback

periods and revenue volume. Some of the most common financial measures that are

incorporated in the financial perspective are EVA, revenue growth, costs, profit margins,

cash flow, net operating income etc.

The customer perspective defines the value proposition that the organization will apply in

order to satisfy customers and thus generate more sales to the most desired (i.e. the most

profitable) customer groups. The measures that are selected for the customer perspective

should measure both the value that is delivered to the customer (the value proposition), which may involve time, quality, performance, service and cost, and the outcomes that come as a result of this value proposition (e.g., customer satisfaction, market share). The value

proposition can be centered on one of the three: operational excellence, customer intimacy

or product leadership, while maintaining threshold levels at the other two.

The internal process perspective is concerned with the processes that create and deliver

the customer value proposition. It focuses on all the activities and key processes required in

order for the company to excel at providing the value expected by the customers both

productively and efficiently. These can include both short-term and long-term objectives as

well as incorporating innovative process development in order to stimulate improvement. In

order to identify the measures that correspond to the internal process perspective, Kaplan

and Norton propose using certain clusters that group similar value creating processes in an

organization. The clusters for the internal process perspective are operations management

(by improving asset utilization, supply chain management, etc), customer management (by

expanding and deepening relations), innovation (by new products and services) and

regulatory & social (by establishing good relations with the external stakeholders).

The Innovation & Learning Perspective is the foundation of any strategy and focuses on

the intangible assets of an organization, mainly on the internal skills and capabilities that

are required to support the value-creating internal processes. The Innovation & Learning

Perspective is concerned with the jobs (human capital), the systems (information capital),

and the climate (organization capital) of the enterprise. These three factors relate to what

Kaplan and Norton claim is the infrastructure that is needed in order to enable ambitious

objectives in the other three perspectives to be achieved. This of course will be in the long

term, since an improvement in the learning and growth perspective will require certain

expenditures that may decrease short-term financial results, whilst contributing to long-term

success.

Key Performance Indicators

For each perspective of the Balanced Scorecard, a number of KPIs can be used, such as:

Financial

Cash flow

ROI

Financial Result


Return on capital employed

Return on equity

Customer

Delivery Performance to Customer - by Date

Quality Performance to Customer - by Quality

Customer satisfaction rate

Customer Loyalty

Customer retention

Internal Business Processes

Number of Activities

Opportunity Success Rate

Accident Ratios

Overall Equipment Effectiveness

Learning & Growth

Investment Rate

Illness rate

Internal Promotions %

Employee Turnover

Gender Ratios

Further lists of general and industry-specific KPIs can be found in the case studies and

methodological articles and books presented in the references section.

Resource-based view

The resource-based view (RBV) is an economic tool used to determine the strategic

resources available to a firm. The fundamental principle of the RBV is that the basis for a

competitive advantage of a firm lies primarily in the application of the bundle of valuable


resources at the firm’s disposal (Wernerfelt, 1984, p172; Rumelt, 1984, p557-558). To

transform a short-run competitive advantage into a sustained competitive advantage

requires that these resources are heterogeneous in nature and not perfectly mobile (Barney,

1991, p105-106; Peteraf, 1993, p180). Effectively, this translates into valuable resources

that are neither perfectly imitable nor substitutable without great effort (Hoopes, 2003,

p891; Barney, 1991, p117). If these conditions hold, the firm’s bundle of resources can assist the firm in sustaining above-average returns.

Contents

1 Concept

2 Definitions

o 2.1 What constitutes a "resource"?

o 2.2 What constitutes "competitive advantage"?

3 History of the resource-based view

o 3.1 Barriers to imitation of resources

o 3.2 Developing resources for the future

o 3.3 Complementary work

4 Criticisms

Concept

The key points of the theory are:

1. Identify the firm’s potential key resources.

2. Evaluate whether these resources fulfill the following (VRIN) criteria:

o Valuable - A resource must enable a firm to employ a value-creating strategy,

by either outperforming its competitors or reducing its own weaknesses

(Barney, 1991, p99; Amit and Shoemaker, 1993, p36). Relevant in this

perspective is that the transaction costs associated with the investment in the

resource cannot be higher than the discounted future rents that flow out of

the value-creating strategy (Mahoney and Prahalad, 1992, p370; Conner,

1992, p131).


o Rare - To be of value, a resource must be by definition rare. In a perfectly

competitive strategic factor market for a resource, the price of the resource

will be a reflection of the expected discounted future above-average returns

(Barney, 1986a, p1232-1233; Dierickx and Cool, 1989, p1504; Barney, 1991,

p100).

o Inimitable - If a valuable resource is controlled by only one firm, it could be a

source of a competitive advantage (Barney, 1991, p107). This advantage

could be sustainable if competitors are not able to duplicate this strategic

asset perfectly (Peteraf, 1993, p183; Barney, 1986b, p658). The term isolating

mechanism was introduced by Rumelt (1984, p567) to explain why firms

might not be able to imitate a resource to the degree that they are able to

compete with the firm having the valuable resource (Peteraf, 1993, p182-183;

Mahoney and Pandian, 1992, p371). An important underlying factor of

inimitability is causal ambiguity, which occurs if the source from which a firm’s

competitive advantage stems is unknown (Peteraf, 1993, p182; Lippman and

Rumelt, 1982, p420). If the resource in question is knowledge-based or

socially complex, causal ambiguity is more likely to occur as these types of

resources are more likely to be idiosyncratic to the firm in which they reside

(Peteraf, 1993, p183; Mahoney and Pandian, 1992, p365; Barney, 1991,

p110). Conner and Prahalad go so far as to say knowledge-based resources

are “…the essence of the resource-based perspective” (1996, p477).

o Non-substitutable - Even if a resource is rare, potentially value-creating and

imperfectly imitable, an equally important aspect is lack of substitutability

(Dierickx and Cool, 1989, p1509; Barney, 1991, p111). If competitors are able

to counter the firm’s value-creating strategy with a substitute, prices are

driven down to the point that the price equals the discounted future rents

(Barney, 1986a, p1233; Conner, 1991, p137), resulting in zero economic

profits.

3. Care for and protect resources that possess these characteristics.

The characteristics mentioned under 2 are individually necessary, but not sufficient

conditions for a sustained competitive advantage (Dierickx and Cool, 1989, p1506; Priem

and Butler, 2001a, p25). Within the framework of the resource-based view, the chain is only as strong as its weakest link, and therefore depends on the resource displaying each of the


four characteristics to be a possible source of a sustainable competitive advantage (Barney,

1991, 105-107).
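In other words, the test is a logical AND over the four criteria. The sketch below illustrates that reasoning with an invented resource and invented ratings; it is a toy illustration of the framework, not an empirical assessment.

# Hypothetical sketch of the VRIN test: a resource can be a possible source
# of sustained competitive advantage only if it satisfies all four criteria
# (the "weakest link" logic above). The resource and ratings are invented.
def passes_vrin(resource):
    criteria = ("valuable", "rare", "inimitable", "non_substitutable")
    return all(resource[criterion] for criterion in criteria)

brand_reputation = {
    "valuable": True,            # enables a value-creating strategy
    "rare": True,                # few competitors hold a comparable reputation
    "inimitable": True,          # path-dependent and causally ambiguous
    "non_substitutable": False,  # suppose heavy advertising could substitute
}

print(passes_vrin(brand_reputation))  # False: one failed criterion rules it out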

Definitions

What constitutes a "resource"?

Jay Barney (1991, p101) referring to Daft (1983) says: "...firm resources include all assets,

capabilities, organizational processes, firm attributes, information, knowledge, etc;

controlled by a firm that enable the firm to conceive of and implement strategies that

improve its efficiency and effectiveness (Daft,1983)."

A subsequent distinction made by Amit & Schoemaker (1993, p35) is that the encompassing

construct previously called resources can be split up into resources and capabilities. In this

respect resources are tradable and non-specific to the firm, while capabilities are firm-

specific and used to utilize the resources within the firm, such as implicit processes to

transfer knowledge within the firm (Makadok, 2001, p388-389; Hoopes, Madsen and Walker,

2003, p890). This distinction has been widely adopted throughout the resource-based view

literature (Conner and Prahalad, 1996, p477; Makadok, 2001, p338; Barney, Wright and

Ketchen, 2001, p630-31).

What constitutes "competitive advantage"?

A competitive advantage can be attained if the current strategy is value-creating, and not

currently being implemented by present or possible future competitors (Barney, 1991,

p102). Although a competitive advantage has the ability to become sustained, this is not

necessarily the case. A competing firm can enter the market with a resource that has the

ability to invalidate the prior firm's competitive advantage, which results in reduced (read:

normal) rents (Barney, 1986b, p658). Sustainability, in the context of a sustainable competitive advantage, is independent of the time-frame. Rather, a competitive

advantage is sustainable when the efforts by competitors to render the competitive

advantage redundant have ceased (Barney, 1991, p102; Rumelt, 1984, p562). When the

imitative actions have come to an end without disrupting the firm’s competitive advantage,

the firm’s strategy can be called sustainable. This is contrary to other views (e.g. Porter, 1985) that a competitive advantage is sustained when it provides above-average returns in the long run.


History of the resource-based view

Some aspects of theories are thought of long before they are formally adopted and brought

together into the strict framework of an academic theory. The same could be said with

regards to the resource-based view.

While this influential body of research within the field of Strategic Management was named

by Birger Wernerfelt in his article A Resource-Based View of the Firm (1984), the origins of

the resource-based view can be traced back to earlier research. Retrospectively, elements

can be found in works by Coase (1937), Selznick (1957), Penrose (1959), Stigler (1961),

Chandler (1962, 1977), and Williamson (1975), where emphasis is put on the importance of resources and their implications for firm performance (Conner, 1991, p122; Rumelt, 1984, p557; Mahoney and Pandian, 1992, p263; Rugman and Verbeke, 2002). This paradigm shift from the narrow neoclassical focus to a broader rationale, and the coming closer of different academic fields (industrial organization economics and organizational economics being most prominent), was a particularly important contribution (Conner, 1991, p133; Mahoney and

Pandian, 1992).

Two publications closely following Wernerfelt’s initial article came from Barney (1986a,

1986b). Even though Wernerfelt was not referred to, the statements made by Barney about

strategic factor markets and the role of expectations can, looking back, clearly be seen

within the resource-based framework as later developed by Barney (1991). Other concepts

that were later integrated into the resource-based framework have been articulated by

Lippman and Rumelt (uncertain imitability, 1982), Rumelt (isolating mechanisms, 1984) and

Dierickx and Cool (inimitability and its causes, 1989). Barney’s framework proved a solid foundation for others to build on, and was given a stronger theoretical background by Conner (1991), Mahoney and Pandian (1992), Conner and Prahalad (1996) and Makadok (2001), who positioned the resource-based view with regard to various other research fields. More practical approaches were provided by Amit and Shoemaker (1993), while later criticism came from, among others, Priem and Butler (2001a, 2001b) and Hoopes, Madsen and Walker (2003).

The resource-based view has been of common interest to management researchers, and numerous writings on it can be found. The resource-based view explains a firm's ability to reach a sustainable competitive advantage when different resources are employed and these resources cannot be imitated by competitors, which ultimately creates a competitive barrier (Mahoney and Pandian 1992, cited by Hooley and Greenley 2005, p.96; Smith and Rupp 2002, p.48). The RBV explains that a firm's sustainable competitive advantage is reached by virtue of unique resources, these resources having the characteristics of being rare, valuable, inimitable, non-tradable and non-substitutable, as well as firm-specific (Barney 1999, cited by Finney et al. 2004, p.1722; Makadok 2001, p.94). These authors write that a firm may reach a sustainable competitive advantage through the unique resources it holds, resources that cannot easily be bought, transferred or copied, that simultaneously add value to the firm, and that are rare. The RBV also highlights the fact that not all resources of a firm may contribute to its sustainable competitive advantage.

Varying performance between firms is a result of heterogeneity of assets (Lopez 2005, p.662; Helfat and Peteraf 2003, p.1004), and the RBV is focused on the factors that cause these

differences to prevail (Grant 1991, Mahoney and Pandian 1992, Amit and Shoemaker 1993,

Barney 2001 cited by Lopez 2005, p.662).

The fundamental similarity in these writings is that unique value-creating resources will generate a sustainable competitive advantage to the extent that no competitor has the ability to use the same type of resources, either through acquisition or imitation. The major concern of the RBV is the ability of the firm to maintain a combination of resources that cannot be possessed or built up in a similar manner by competitors. Such writings further provide the basis for understanding that the sustainability strength of a competitive advantage depends on the ability of competitors to use identical or similar resources that have the same implications for a firm's performance. A firm's ability to prevent imitation of its resources should therefore be analysed in depth to understand the sustainability strength of its competitive advantage.

Barriers to imitation of resources

Resources are the inputs or factors available to a company which help it to perform its operations or carry out its activities (Amit and Shoemaker 1993, Black and Boal 1994, Grant 1995, cited by Ordaz et al. 2003, p.96). These authors also state that resources, if considered as isolated factors, do not result in productivity; hence the coordination of resources is important. The ways a firm can create a barrier to imitation are known as “isolating mechanisms” and are reflected in the aspects of corporate culture, managerial capabilities, information asymmetries and property rights (Hooley and Greenley 2005, p.96; Winter 2003, p.992). Further, they mention that except for legislative restrictions created through property rights, the other three aspects are direct or indirect results of managerial practices.


King (2007, p.156) mentions that inter-firm causal ambiguity may result in a sustainable competitive advantage for some firms. Causal ambiguity is the continuum that describes the degree to which decision makers understand the relationship between organisational inputs and outputs (Ghinggold and Johnson 1998, p.134; Lippman and Rumelt 1982, cited by King 2007, p.156; Matthyssens and Vandenbempt 1998, p.46). The argument is that the inability of competitors to understand what causes the superior performance of another (inter-firm causal ambiguity) helps the firm presently performing at a superior level to reach a sustainable competitive advantage. What creates this inability to understand the cause of a firm's superior performance? Is it the intended consequence of the firm's actions? Hooley and Greenley (2005, p.96) state that the social context of certain resource conditions acts as an element creating isolating mechanisms, and further mention three characteristics of certain resources that underpin the causal ambiguity scenario: tacitness (accumulated skill-based resources acquired through learning by doing), complexity (a large number of inter-related resources being used) and specificity (dedication of certain resources to specific activities). Ultimately, these three characteristics result in a competitive barrier.

Referring back to the definitions stated previously, which hold that superior performance is correlated with the resources of the firm (Christensen and Fahey 1984, Kay 1994, Porter 1980, cited by Chacarbaghi and Lynch 1999, p.45), and consolidating the writings of King (2007, p.156) stated above, we may derive that inter-firm causal ambiguity regarding resources will generate a competitive advantage at a sustainable level. Further, the extent to which competitors understand which resources underpin the superior performance will determine the sustainability strength of a competitive advantage. Even where a firm is able to overcome inter-firm causal ambiguity, this does not necessarily result in the imitation of resources. According to Johnson (2006, p.02) and Mahoney (2001, p.658), even after recognising a competitor's valuable resources, a firm may not imitate them, due to the social context of these resources or the availability of more attractive alternatives to pursue. Certain resources, like company reputation, are path-dependent, accumulated over time, and a competitor may not be able to imitate them perfectly (Zander and Zander 2005, p.1521; Santala and Parvinen 2007, p.172).

They argue on the basis that certain resources, even if imitated, may not bring the same impact, since their maximum impact is achieved over longer periods of time; hence such imitation will not be successful. Considering reputation as a resource, does this imply that a first mover to a market always holds a competitive advantage? Can a late entrant exploit any opportunity for a competitive advantage? Kim and Park (2006, p.45) mention three reasons why late entrants may be outperformed by early entrants. First, early


entrants have technological know-how which helps them to perform at a superior level. Secondly, early entrants have developed capabilities over time that enhance their ability to perform above late entrants. Thirdly, the switching costs customers would incur if they decided to migrate help early entrants to dominate the market, denying late entrants the opportunity to capture market share. Customer awareness and loyalty are further benefits that early entrants enjoy (Liberman and Montgomery 1988, Porter 1985, Hill 1997, Yoffie 1990, cited by Ma 2004, p.914; Agrawal et al. 2003, p.117).

However, first-mover advantage operates in evolutionary technological transitions, i.e. technological innovations based on previous developments (Kim and Park 2006, p.45; Cottam et al. 2001, p.142). The same authors further argue that revolutionary technological changes (changes that significantly disturb the existing technology) will eliminate the advantage of early entrants. Such writings elaborate that though early entrants enjoy certain resources by virtue of their longer time in the market, rapidly changing technological environments may make those resources obsolete and curtail the firm's dominance. Late entrants may embrace technological innovation and increase the pressure of competition, seeking a competitive advantage by making the existing competences and resources of early entrants invalid or outdated. In other words, technological innovation can significantly change the landscape of the industry and the market, minimising the early mover's advantage. However, in a market where technology does not play a dynamic role, early-mover advantage may prevail.

Analysing the framework developed above for the resource-based view, it reflects a distinctive feature: sustainable competitive advantage is achieved in an environment where competition does not exist. According to the characteristics of the resource-based view, rival firms may not perform at a level that could be identified as considerable competition for the incumbents of the market, since they do not possess the resources required to perform at a level that creates a threat and hence creates competition. Through barriers to imitation, incumbents ensure that rival firms do not reach a level at which they could perform in a similar manner. In other words, the sustainability of the winning edge is determined by the strength with which other firms are prevented from competing at the same level. The moment competition becomes active, the competitive advantage becomes ineffective, since two or more firms begin to perform at a superior level, excluding the possibility of single-firm dominance; hence no firm will enjoy a competitive advantage. Ma (2003, p.76) agrees, stating that by definition the sustainable competitive advantage discussed in the resource-based view is anti-competitive: such sustainable competitive advantage could exist only in a world of no competitive imitation (Barney 1991, Peteraf 1993, cited by Ma 2003, p.77; Ethiraj et al. 2005, p.27).

Developing resources for the future

Based on the writings stated above, the RBV provides the understanding that certain unique existing resources will result in superior performance and ultimately build a competitive advantage. The sustainability of such an advantage will be determined by the ability of competitors to imitate those resources. However, the existing resources of a firm may not be adequate to meet future market requirements, owing to the volatility of contemporary markets. There is a vital need to modify and develop resources in order to meet future market competition. An organisation should exploit existing business opportunities using its present resources while generating and developing a new set of resources to sustain its competitiveness in future market environments; hence an organisation should be engaged in resource management and resource development (Chaharbaghi and Lynch 1999, p.45; Song et al. 2002, p.86). Their writings explain that, in order to sustain a competitive advantage, it is crucial to develop resources that will strengthen the firm's ability to continue its superior performance. Any industry or market reflects high uncertainty, and in order to survive and stay ahead of the competition, new resources become highly necessary. Morgan (2000, cited by Finney et al. 2004, p.1722) agrees, stating that the need to update resources is a major management task, since all business environments reflect highly unpredictable market and environmental conditions. The existing winning edge needs to be developed, since various market dynamics may make existing value-creating resources obsolete ("Achieving a sustainable competitive advantage in the IT industry through hybrid business strategy: A contemporary perspective", Tharinda Jagathsiri, MBA, University of East London).

Complementary work

Building on the RBV, Hoopes, Madsen & Walker (2003) suggest a more expansive discussion

of sustained differences among firms and develop a broad theory of competitive

heterogeneity. “The RBV seems to assume what it seeks to explain. This dilutes its

explanatory power. For example, one might argue that the RBV defines, rather than

hypothesizes, that sustained performance differences are the result of variation in resources

and capabilities across firms. The difference is subtle, but it frustrates understanding the

RBV’s possible contributions” (Hoopes et al., 2003: 891).


“The RBV’s lack of clarity regarding its core premise and its lack of any clear boundary

impedes fruitful debate. Given the theory’s lack of specificity, one can invoke the definition-

based or hypothesis-based logic any time. Again, we argue that resources are but one

potential source of competitive heterogeneity. Competitive heterogeneity can obtain for

reasons other than sticky resources (or capabilities)” (Hoopes et al 2003: 891). Competitive

heterogeneity refers to enduring and systematic performance differences among close

competitors (Hoopes et al, 2003: 890).

Criticisms

Priem and Butler (2001) made four key criticisms:

The RBV is tautological, or self-verifying. Barney has defined a competitive

advantage as a value-creating strategy that is based on resources that are, among

other characteristics, valuable (1991, p106). This reasoning is circular and therefore

operationally invalid (Priem and Butler, 2001a, p31). For more on the tautology, see also Collins (1994)

Different resource configurations can generate the same value for firms and thus

would not constitute a competitive advantage

The role of product markets is underdeveloped in the argument

The theory has limited prescriptive implications

However, Barney (2001) provided counter-arguments to these points of criticism.

Further criticisms are:

It is perhaps difficult (if not impossible) to find a resource which satisfies all of Barney's VRIN criteria.

There is the assumption that a firm can be profitable in a highly competitive market

as long as it can exploit advantageous resources, but this may not necessarily be the

case. It ignores external factors concerning the industry as a whole; Porter’s Industry Structure Analysis ought also to be considered.

Long-term implications that flow from its premises: A prominent source of sustainable

competitive advantages is causal ambiguity (Lippman & Rumelt, 1982, p420). While

this is undeniably true, this leaves an awkward possibility: the firm is not able to

manage a resource it does not know exists, even if a changing environment requires


this (Lippman & Rumelt, 1982, p420). Through such an external change the initial

sustainable competitive advantage could be nullified or even transformed into a

weakness (Priem and Butler, 2001a, p33; Peteraf, 1993, p187; Rumelt, 1984, p566).

Premise of efficient markets: Much research hinges on the premise that markets in

general or factor markets are efficient, and that firms are capable of precisely pricing

in the exact future value of any value-creating strategy that could flow from the

resource (Barney, 1986a, p1232). Dierickx and Cool argue that purchasable assets

cannot be sources of sustained competitive advantage, just because they can be

purchased. Either the price of the resource will increase to the point that it equals the

future above-average return, or other competitors will purchase the resource as well

and use it in a value-increasing strategy that diminishes rents to zero (Peteraf, 1993,

p185; Conner, 1991, p137).

The concept ‘rare’ is obsolete: Although prominently present in Wernerfelt’s original

articulation of the resource-based view (1984) and Barney’s subsequent framework

(1991), the concept that resources need to be rare to be able to function as a

possible source of a sustained competitive advantage is unnecessary (Hoopes,

Madsen and Walker, 2003, p890). Because of the implications of the other concepts (valuable, inimitable and non-substitutable), any resource that displays those characteristics is inherently rare.

Sustainable: The lack of exact definition with regards to the concept sustainable

makes its premise difficult to test empirically. Barney’s statement (1991, p102-103)

that the competitive advantage is sustained if current and future rivals have ceased

their imitative efforts is versatile from the point of view of developing a theoretical

framework, but a disadvantage from a more practical point of view as there is no

explicit end-goal.

Michael Porter

Michael Eugene Porter (born 1947) is a University Professor at Harvard Business School,

with academic interests in management and economics. He is one of the founders of The

Monitor Group. Porter's main academic objectives focus on how a firm or a region can build

a competitive advantage and develop competitive strategy. He is also a Fellow Member of

the Strategic Management Society. Porter graduated from Princeton University in 1969,

where he was an outstanding intercollegiate golfer. He has two daughters and is divorced. One of his most significant contributions is the five forces framework.


Porter's strategic system consists primarily of:

Porter's Five Forces Analysis

strategic groups (also called strategic sets)

the value chain

the generic strategies of cost leadership, product differentiation, and focus

the market positioning strategies of variety based, needs based, and access based

market positions.

Porter's clusters of competence for regional economic development

Contents

1 Key Work

2 Criticisms

3 See also

4 External links

Key Work

Porter, M. (1979) "How Competitive Forces Shape Strategy", Harvard Business Review, March/April 1979.

Porter, M. (1980) Competitive Strategy, Free Press, New York, 1980.

Porter, M. (1985) Competitive Advantage, Free Press, New York, 1985.

Porter, M. (1987) "From Competitive Advantage to Corporate Strategy", Harvard

Business Review, May/June 1987, pp 43-59.

Porter, M. (1996) "What Is Strategy?", Harvard Business Review, Nov/Dec 1996.

Porter, M. (1998) On Competition, Boston: Harvard Business School, 1998.

Porter, M. (1990, 1998) "The Competitive Advantage of Nations", Free Press

Porter, M. (2001) "Strategy and the Internet", Harvard Business Review, March 2001,

pp. 62-78.


Porter, Michael E. & Stern, Scott (2001) "Innovation: Location Matters", MIT Sloan

Management Review, Summer 2001, Vol. 42, No. 4, pp. 28-36.

Porter, Michael E. and Kramer, Mark R. (2006) "Strategy and Society: The Link

Between Competitive Advantage and Corporate Social Responsibility", Harvard

Business Review, December 2006, pp. 78-92.

Porter, M. & Elizabeth Olmsted Teisberg (2006) "Redefining Health Care: Creating

Value-Based Competition On Results", Harvard Business School Press

Criticisms

Porter has been criticised by some academics for inconsistent logical argument in his

assertions.[1] Critics have also labelled Porter's conclusions as lacking in empirical support

and as justified with selective case studies.

Gap analysis

In business and economics, gap analysis is a business resource assessment tool enabling a

company to compare its actual performance with its potential performance. At its core are

two questions:

Where are we?

Where do we want to be?

If a company or organization is under-utilizing its current resources or is forgoing investment

in capital or technology, then it may be producing or performing at a level below its

potential. This concept is similar to the base case of being below one's production

possibilities frontier.

The goal of gap analysis is to identify the gap between the optimized allocation and

integration of the inputs, and the current level of allocation. This helps provide the company

with insight into areas which could be improved.

The gap analysis process involves determining, documenting and approving the variance

between business requirements and current capabilities. Gap analysis naturally flows from

benchmarking and other assessments. Once the general expectation of performance in the

industry is understood, it is possible to compare that expectation with the level of

performance at which the company currently functions. This comparison becomes the gap


analysis. Such analysis can be performed at the strategic or operational level of an

organization.

'Gap analysis' is a formal study of what a business is doing currently and where it wants to

go in the future. It can be conducted, in different perspectives, as follows:

1. Organization (e.g., human resources)

2. Business direction

3. Business processes

4. Information technology

Gap analysis provides a foundation for measuring investment of time, money and human

resources required to achieve a particular outcome (e.g. to turn the salary payment process

from paper-based to paperless with the use of a system).
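As a hypothetical illustration of documenting that variance (all areas and scores below are invented, on an arbitrary 0-10 capability scale), the comparison can be tabulated per perspective:

# Hypothetical sketch of a capability gap analysis: compare the required
# capability level against the current one per area and report any shortfall.
# Areas and 0-10 scores are invented examples.
required = {"Organization": 8, "Business direction": 7,
            "Business processes": 9, "Information technology": 8}
current = {"Organization": 6, "Business direction": 7,
           "Business processes": 5, "Information technology": 4}

for area, target in required.items():
    gap = target - current[area]
    verdict = f"gap of {gap}" if gap > 0 else "no gap"
    print(f"{area}: required {target}, current {current[area]} -> {verdict}")

Areas showing the largest shortfall are the natural candidates for the investment of time, money and human resources described above.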

Note that 'GAP analysis' has also been used as a means for classification of how well a

product or solution meets a targeted need or set of requirements. In this case, 'GAP' can be

used as a ranking of 'Good', 'Average' or 'Poor'. This terminology does appear in the

PRINCE2 project management publication from the OGC.

Contents

1 Gap analysis and new products

o 1.1 Usage gap

o 1.2 Market potential

o 1.3 Existing usage

o 1.4 Product Gap

o 1.5 Competitive gap

2 Market gap analysis

Gap analysis and new products

The need for new products or additions to existing lines may have emerged from portfolio

analyses, in particular from the use of the Boston Consulting Group Growth-share matrix, or it may have emerged from the regular process of following trends in the requirements

of consumers. At some point a gap will have emerged between what the existing products

offer the consumer and what the consumer demands. That gap has to be filled if the

organization is to survive and grow.

To identify a gap in the market, the technique of gap analysis can be used. Thus, comparing the profits forecast for the organization as a whole with where the organization (in particular its shareholders) 'wants' those profits to be reveals what is called the 'planning gap': this shows what is needed from new activities in general and from new products in particular.

The planning gap may be divided into four main elements:

Usage gap

This is the gap between the total potential for the market and the actual current usage by all

the consumers in the market. Clearly two figures are needed for this calculation:

market potential

existing usage

Market potential

The most difficult estimate to make is that of the total potential available to the whole market, including all segments covered by all competitive brands. It is often obtained by determining the maximum potential individual usage and multiplying this by the maximum number of potential consumers. This is inevitably a judgment rather than a scientific extrapolation, but some of the macro-forecasting techniques may assist in making this estimate more soundly based.

The maximum number of consumers available will usually be determined by market

research, but it may sometimes be calculated from demographic data or government

statistics. Ultimately there will, of course, be limitations on the number of consumers. For

guidance one can look to the numbers using similar products. Alternatively, one can look to

what has happened in other countries. It is often suggested that Europe follows patterns set

in the USA, but after a time-lag of a decade or so. The increased affluence of all the major

Western economies means that such a lag can now be much shorter.


The maximum potential individual usage, or at least the maximum attainable average usage

(there will always be a spread of usage across a range of customers), will usually be

determined from market research figures. It is important, however, to consider what lies

behind such usage.

Existing usage

The existing usage by consumers makes up the total current market, from which market

shares, for example, are calculated. It is usually derived from marketing research, most

accurately from panel research such as that undertaken by A.C. Nielsen but also from 'ad

hoc' work. Sometimes it may be available from figures collected by government

departments or industry bodies; however, these are often based on categories which may

make sense in bureaucratic terms but are less helpful in marketing terms.

The 'usage gap' is thus:

usage gap = market potential – existing usage
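As a minimal sketch of this calculation, the following Python fragment computes a usage gap from an estimate of market potential (maximum individual usage extrapolated across the maximum number of potential consumers) and a figure for existing usage; every name and number in it is an invented illustration, not data from any real study.

# Usage-gap sketch. Market potential is estimated as maximum individual
# usage multiplied by the maximum number of potential consumers; all
# figures here are hypothetical.

max_individual_usage = 24      # units per consumer per year (a judgment call)
potential_consumers = 500_000  # e.g. from demographic data or market research

market_potential = max_individual_usage * potential_consumers  # 12,000,000
existing_usage = 7_800_000     # total current market, e.g. from panel research

usage_gap = market_potential - existing_usage                  # 4,200,000

print(f"Market potential: {market_potential:,}")
print(f"Existing usage:   {existing_usage:,}")
print(f"Usage gap:        {usage_gap:,}")

On these invented figures the market could absorb 4,200,000 more units per year: the headroom a brand leader might weigh when deciding whether to invest in expanding the total market.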

This is an important calculation to make. Many, if not most, marketers accept the 'existing' market size, suitably projected over the timescales of their forecasts, as the boundary for

their expansion plans. Although this is often the most realistic assumption, it may

sometimes impose an unnecessary limitation on their horizons. The original market for

video-recorders was limited to the professional users who could afford the high prices

involved. It was only after some time that the technology was extended to the mass market.

In the public sector, where the service providers usually enjoy a 'monopoly', the usage gap will probably be the most important factor in the development of the activities. Persuading more 'consumers' to take up family benefits, for example, will probably be more important to the relevant government department than opening more local offices.

The usage gap is most important for the brand leaders. If any of these has a significant

share of the whole market, say in excess of 30 per cent, it may become worthwhile for the

firm to invest in expanding the total market. The same option is not generally open to the

minor players, although they may still be able to profitably target specific offerings as market extensions.

All other 'gaps' relate to the difference between the organization's existing sales (its market share) and the total sales of the market as a whole. This difference is the share held by competitors. These 'gaps' will, therefore, relate to competitive activity.


Product Gap

The product gap, which could also be described as the segment or positioning gap,

represents that part of the market from which the individual organization is excluded

because of product or service characteristics. This may have come about because the

market has been segmented and the organization does not have offerings in some

segments, or it may be because the positioning of its offering effectively excludes it from

certain groups of potential consumers, because there are competitive offerings much better

placed in relation to these groups.

This segmentation may well be the result of deliberate policy. Segmentation and positioning

are very powerful marketing techniques; but the trade-off, to be set against the improved

focus, is that some parts of the market may effectively be put beyond reach. On the other

hand, it may frequently be by default; the organization has not thought about its positioning,

and has simply let its offerings drift to where they now are.

The product gap is probably the main element of the planning gap in which the organization

can have a productive input; hence the emphasis on the importance of correct positioning.

Competitive gap

What is left represents the gap resulting from the competitive performance. This

competitive gap is the share of business achieved among similar products, sold in the same

market segment, and with similar distribution patterns - or at least, in any comparison, after

such effects have been discounted. Needless to say, it is not a factor in the case of the

monopoly provision of services by the public sector.

The competitive gap represents the effects of factors such as price and promotion, both the

absolute level and the effectiveness of its messages. It is what marketing is popularly

supposed to be about.

Market gap analysis

In the type of analysis described above, gaps in the product range are looked for. Another perspective (essentially taking the 'product gap' to its logical conclusion) is to look for gaps in the 'market' (in a variation on 'product positioning', and using multidimensional 'mapping'), which the company could profitably address, regardless of where its current products stand.


Many marketers would, indeed, question the worth of the theoretical gap analysis described

earlier. Instead, they would immediately start proactively to pursue a search for a

competitive advantage.

Types of integration

Horizontal integration

In microeconomics and strategic management, the term horizontal integration describes

a type of ownership and control. It is a strategy used by a business or corporation that seeks

to sell a type of product in numerous markets. Horizontal integration in marketing is much

more common than vertical integration in production. Horizontal integration occurs when a firm is taken over by, or merged with, another firm that is in the same industry and at the same stage of production, e.g. a car manufacturer merging with another car manufacturer: both companies are at the same stage of production and in the same industry.

A monopoly created through horizontal integration is called a horizontal monopoly.

A closely related term is horizontal expansion: the expansion of a firm within an industry in which it is already active, with the purpose of increasing its share of the market for a particular product or service.

Contents

1 Benefits of horizontal

integration

2 Media terms

3 References

4 See also

Benefits of horizontal integration

Horizontal integration allows:

Economies of scale


Economies of scope

Economies of stocks

Strong presence in the reference market

Media terms

Media critics, such as Robert McChesney, have noted that the current trend within the

entertainment industry has been toward the increased concentration of media ownership

into the hands of a smaller number of transmedia and transnational conglomerates.[1] Media outlets nowadays tend to be in the hands of those rich enough to buy them, such as Rupert Murdoch.

Horizontal integration, that is, the consolidation of holdings across multiple industries, has

displaced the old vertical integration of the Hollywood studios. [2] The idea of owning many

media outlets, which run almost the same content, is considered to be very productive,

since it requires only minor changes of format and information to use in multiple media

forms. For example, within a conglomerate, content used in television broadcasting may be used in radio broadcasting as well, or content from the hard copy of a newspaper may also be used on the newspaper's website.

What emerged are new strategies of content development and distribution designed to increase the "synergy" between the different divisions of the same company. Studios seek content that can move fluidly across media channels.[3]

Vertical integration

In microeconomics and management, the term vertical integration describes a style of

management control. Vertically integrated companies are united through a hierarchy with a

common owner. Usually each member of the hierarchy produces a different product or

service, and the products combine to satisfy a common need. It is contrasted with horizontal

integration. Vertical integration is one method of avoiding the hold-up problem. A monopoly

produced through vertical integration is called a vertical monopoly, although it might be

more appropriate to speak of this as some form of cartel. Andrew Carnegie introduced the idea of vertical integration, and his example led other businessmen to use the system to promote better financial growth and efficiency in their companies and businesses.

Contents


1 Three types

2 Examples

o 2.1 Carnegie Steel

o 2.2 American Apparel

o 2.3 Oil industry

3 Problems and benefits

o 3.1 Static technology

o 3.2 Dynamic technology

4 Vertical expansion

Three types

Vertical integration is the degree to which a firm owns its upstream suppliers and its

downstream buyers. Contrary to horizontal integration, which is a consolidation of many

firms that handle the same part of the production process, vertical integration is typified by

one firm engaged in different aspects of production (e.g. growing raw materials,

manufacturing, transporting, marketing, and/or retailing).

There are three varieties: backward (upstream) vertical integration, forward (downstream) vertical integration, and balanced vertical integration (a combination of the two).

In backward vertical integration, the company sets up subsidiaries that produce

some of the inputs used in the production of its products. For example, an

automobile company may own a tire company, a glass company, and a metal

company. Control of these three subsidiaries is intended to create a stable supply of

inputs and ensure a consistent quality in their final product. It was the main business

approach of Ford and other car companies in the 1920s, who sought to minimize

costs by centralizing the production of cars and car parts.

In forward vertical integration, the company sets up subsidiaries that distribute or

market products to customers or use the products themselves. An example of this is

a movie studio that also owns a chain of theaters.

In balanced vertical integration, the company sets up subsidiaries that both

supply them with inputs and distribute their outputs.


For example, a hamburger producer that owns the farms that raise the cattle and chickens and grow the potatoes and wheat, as well as the factories that process these agricultural products, is practising backward vertical integration. Forward vertical integration would mean that it also owned the regional distribution centers and the shops or fast-food restaurants where the hamburgers are sold. Balanced vertical integration would mean that it owned all of these components.

Carnegie Steel

One of the earliest, largest and most famous examples of vertical integration was the

Carnegie Steel company. The company controlled not only the mills where the steel was

manufactured but also the mines where the iron ore was extracted, the coal mines that

supplied the coal, the ships that transported the iron ore and the railroads that transported

the coal to the factory, the coke ovens where the coal was coked, etc. The company also

focused heavily on developing talent internally from the bottom up, rather than importing it

from other companies.[1] Later on, Carnegie even established an institute of higher learning

to teach the steel processes to the next generation.

American Apparel

American Apparel is a fashion retailer and manufacturer that advertises itself as a vertically integrated industrial company.[2][3] The brand is based in downtown Los Angeles,

where from a single building they control the dyeing, finishing, designing, sewing, cutting,

marketing and distribution of the company's product.[4][3][5] The company shoots and

distributes its own advertisements, often using its own employees as subjects. [6][2] It also

owns and operates each of its retail locations as opposed to franchising.[7] According to the

management, the vertically integrated model allows the company to design, cut, distribute

and sell an item globally in the span of a week.[8] The original founder Dov Charney has

remained the majority shareholder and CEO. [9] Since the company controls both the

production and distribution of its product, it is an example of a balanced vertically

integrated corporation.

Oil industry

Oil companies, both multinational (such as ExxonMobil, Royal Dutch Shell, ConocoPhillips or

BP) and national (e.g. Petronas) often adopt a vertically integrated structure. This means

that they are active all the way along the supply chain from locating crude oil deposits,

drilling and extracting crude, transporting it around the world, refining it into petroleum


products such as petrol/gasoline, to distributing the fuel to company-owned retail stations,

where it is sold to consumers.

Problems and benefits

There are internal and external (e.g. society-wide) gains and losses due to vertical

integration. They will differ according to the state of technology in the industries involved,

roughly corresponding to the stages of the industry lifecycle.

Static technology

This is the simplest case, where the gains and losses have been studied extensively.

Internal gains:

Lower transaction costs

Synchronization of supply and demand along the chain of products

Lower uncertainty and higher investment

Ability to monopolize market throughout the chain by market foreclosure

Internal losses:

Higher monetary and organizational costs of switching to other suppliers/buyers

Benefits to society:

Better opportunities for investment growth through reduced uncertainty

Losses to society:

Monopolization of markets

Rigid organizational structure, having much the same shortcomings as a socialist economy (cf. John Kenneth Galbraith's works), and so on


Monopoly on intermediate components (with opportunity for price gouging) leads to a

throwaway society

Dynamic technology

Some argue that vertical integration will eventually hurt a company because when new

technologies are available, the company is forced to reinvest in its infrastructures in order to

keep up with competition. Some say that today, when technologies evolve very quickly, this can force a company to invest in new technologies only to reinvest in even newer technologies later, at significant financial cost. However, a benefit of vertical

integration is that all the components that are in a company product will work harmoniously,

which will lower downtime and repair costs.

Vertical expansion

Vertical expansion, in economics, is the growth of a business enterprise through the

acquisition of companies that produce the intermediate goods needed by the business or

help market and distribute its final goods. Such expansion is desired because it secures the

supplies needed by the firm to produce its product and the market needed to sell the

product. The result is a more efficient business with lower costs and more profits.

Related is lateral expansion, which is the growth of a business enterprise through the

acquisition of similar firms, in the hope of achieving economies of scale.

Vertical expansion is also known as vertical acquisition. Vertical expansions or acquisitions can also be used to increase scale and to gain market power. The acquisition of DirecTV by News Corporation is an example of a vertical acquisition: DirecTV is a satellite TV company through which News Corporation can distribute more of its media content (news, movies, and television shows).

Franchising

Franchising refers to the methods of practicing and using another person's philosophy of

business. The franchisor grants the independent operator the right to distribute its products,

techniques, and trademarks for a percentage of gross monthly sales and a royalty fee.

Various tangibles and intangibles such as national or international advertising, training, and

other support services are commonly made available by the franchisor. Agreements

typically last five to twenty years, with premature cancellations or terminations of most

contracts bearing serious consequences for franchisees.


Contents

1 Overview

2 History

3 Businesses for which franchising works best

4 Advantages

o 4.1 For franchisors

4.1.1 Expansion

4.1.2 Legal considerations

4.1.3 Operational considerations

o 4.2 For franchisees

4.2.1 Quick start

4.2.2 Expansion

4.2.3 Training

5 Disadvantages

o 5.1 For franchisors

5.1.1 Limited pool of viable franchisees

5.1.2 Control

o 5.2 For franchisees

5.2.1 Control

5.2.2 Price

5.2.3 Conflicts

6 Legal aspects

o 6.1 Australia


o 6.2 United States

o 6.3 Russia

o 6.4 UK

o 6.5 Kazakhstan

7 Social franchises

8 Event franchising

Overview

The term "franchising" is used to describe business systems which may or may not fall into

the legal definition provided above. For example, a vending machine operator may receive a

franchise for a particular kind of vending machine, including a trademark and a royalty, but

no method of doing business. This is called "product franchising" or "trade name

franchising".

A franchise agreement will usually specify the given territory the franchisee retains

exclusive control over, as well as the extent to which the franchisee will be supported by the

franchisor (e.g. training and marketing campaigns).

The franchisor typically earns royalties on the gross sales of the franchisee. [1] In such cases,

franchisees must pay royalties whether or not they are realizing profits from their franchised

business.
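As a minimal sketch of this point, with every rate and figure below invented for illustration (actual royalty rates vary by system), the fees owed to the franchisor can be computed from gross sales alone, independent of the franchisee's profit:

# Royalty sketch: franchisor fees accrue on gross sales, not profit.
# All rates and figures are hypothetical.

gross_monthly_sales = 85_000.0
royalty_rate = 0.06          # 6% of gross sales (illustrative)
ad_fund_rate = 0.02          # national advertising contribution
franchisee_costs = 84_000.0  # rent, wages, supplies, etc.

royalty = gross_monthly_sales * royalty_rate          # 5,100
ad_contribution = gross_monthly_sales * ad_fund_rate  # 1,700

# The franchisee owes these fees even if the remaining margin is negative.
franchisee_profit = (gross_monthly_sales - franchisee_costs
                     - royalty - ad_contribution)

print(f"Royalty owed:      {royalty:,.2f}")
print(f"Ad fund owed:      {ad_contribution:,.2f}")
print(f"Franchisee profit: {franchisee_profit:,.2f}")  # -5,800.00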

Cancellations or terminations of franchise agreements before the completion of the contract

have serious consequences for franchisees. If the franchise is canceled or terminated for any reason before the expiration of the entire term of the contract, first-owner franchisees, who build out the branded physical units and lease the branded name, marks, and business plan from the franchisor, typically lose their sunk costs. (Item 15 of the Federal Trade Commission's Franchise Rule requires disclosure of the terms that cover termination of the franchise agreement, and those terms substantiate this statement.)

History


Franchising dates back to at least the 1850s; Isaac Singer, who made improvements to an

existing model of a sewing machine, wanted to increase the distribution of his sewing

machines. His effort, though unsuccessful in the long run, was among the first franchising

efforts in the United States. A later example of franchising was John S. Pemberton's

successful franchising of Coca-Cola.[2] Early American examples include the telegraph

system, which was operated by various railroad companies but controlled by Western

Union [3] , and exclusive agreements between automobile manufacturers and operators of

local dealerships.[4] Earlier models of product franchising collected royalties or fees on a

product basis and not on the gross sales of the business operations of the franchisees.

Modern franchising came to prominence with the rise of franchise-based food service

establishments. This trend started before 1933 with quick service restaurants such as A&W

Root Beer.[5] In 1935, Howard Deering Johnson teamed up with Reginald Sprague to establish

the first modern restaurant franchise. [6] [7] The idea was to let independent operators use

the same name, food, supplies, logo and even building design in exchange for a fee.

The growth in franchises picked up steam in the 1930s when such chains as Howard

Johnson's started franchising motels.[8] The 1950s saw a boom of franchise chains in

conjunction with the development of the U.S. interstate highway system.[9] Fast food

restaurants, diners and motel chains exploded. In regard to contemporary franchise chains,

McDonalds is unarguably the most successful worldwide with more restaurant units than any

other franchise network.

According to Franchising in the Economy, 1991-1993, a study done by the University of Louisville, franchising helped to lead America out of its economic downturn at the time.[1]

Franchising is a business model that has encouraged the growth of franchised chain-formula units because the franchisors collect royalties on the gross sales of these units and not on the profits. Conversely, when good jobs are lost in the economy, franchising picks up because potential franchisees are looking to buy jobs and to earn profits from the purchase of franchise rights. The manager of the United States Small Business Administration's Franchise Registry concludes that franchising continues to grow in the national economy.[10]

Franchising is a business model used in more than 70 industries and that generates more

than $1 trillion in U.S. sales annually.[11]

Businesses for which franchising works best


Businesses for which franchising is said to work best have the following characteristics:

Businesses with a good track record of profitability.

Businesses built around a unique or unusual concept.

Businesses with broad geographic appeal.

Businesses which are relatively easy to operate.

Businesses which are relatively inexpensive to operate.

Businesses which are easily duplicated.

Advantages

For franchisors

Expansion

Franchising is one of the few means available to access investment capital without the need

to give up control in the process. After their brand and formula are carefully designed and

properly executed, franchisors are able to expand rapidly across countries and continents

using the capital and resources of their franchisees, and can earn profits commensurate with

their contribution to those societies.

Additionally, the franchisor may choose to leverage the franchisee to build a distribution

network.

Legal considerations

The franchisor is relieved of many of the mundane duties necessary to start a new outlet,

such as obtaining the necessary licences and permits. In some jurisdictions, certain permits

(especially liquor licenses) are more easily obtained by locally based, owner-operator type

applicants while companies based outside the jurisdiction (and especially if they originate in

another country) find it difficult if not impossible to get such licences issued to them directly.

For this reason, hotel and restaurant chains that sell liquor often have no viable option but

to franchise if they wish to expand to another state or province.


Operational considerations

Franchisees are said to have a greater incentive than direct employees to operate their

businesses successfully because they have a direct stake in the operation. The need of

franchisors to closely scrutinize the day to day operations of franchisees (compared to

directly-owned outlets) is greatly reduced.

For franchisees

Quick start

As practiced in retailing, franchising offers franchisees the advantage of starting up a new

business quickly based on a proven trademark and formula of doing business, as opposed to

having to build a new business and brand from scratch (often in the face of aggressive

competition from franchise operators). A well run franchise would offer a turnkey business:

from site selection to lease negotiation, training, mentoring and ongoing support as well as

statutory requirements and troubleshooting.

Expansion

With the expertise provided by the franchisor, franchisees are able to take their franchise business to a level they would not have been able to reach without that expert guidance.

Training

Franchisors often offer franchisees significant training, which is not available for free to

individuals starting their own business. Although training is not free for franchisees, it is both

supported through the traditional franchise fee that the franchisor collects and tailored to

the business that is being started.

Disadvantages

For franchisors


Limited pool of viable franchisees

In any city or region there will be only a limited pool of people who have both the resources

and the desire to set up a franchise in a certain industry, compared to the pool of individuals

who would be able to competently manage a directly-owned outlet.

Control

Successful franchising necessitates a much more careful vetting process when evaluating

the limited number of potential franchisees than would be required to hire a direct

employee. An incompetent manager of a directly-owned outlet can easily be replaced, while removing an incompetent franchisee is much more difficult, regardless of the local laws and agreements in place. Incompetent franchisees can easily damage the public's goodwill

towards the franchisor's brand by providing inferior goods and services. If a franchisee is

cited for legal violations, (s)he will probably face the legal consequences alone but the

franchisor's reputation could still be damaged.

For franchisees

Control

For franchisees, the main disadvantage of franchising is a loss of control. While they gain the

use of a system, trademarks, assistance, training, and marketing, the franchisee is required to follow the system and obtain approval for changes from the franchisor. For these reasons,

franchisees and entrepreneurs are very different. The United States Office of Advocacy of

the SBA indicates that a franchisee "is merely a temporary business investment where he

may be one of several investors during the lifetime of the franchise. In other words, he is

"renting or leasing" the opportunity, not "buying a business for the purpose of true

ownership."[12] Additionally, "A franchise purchase consists of both intrinsic value and time value. A franchise is a wasting asset due to its finite term; unless the franchisor chooses to contractually obligate itself, it is under no obligation to renew the franchise."[13]

Price

Starting and operating a franchise business carries expenses. In choosing to adopt the

standards set by the franchisor, the franchisee often has no further choice as to signage,

shop fitting, uniforms etc. The franchisee may not be allowed to source less expensive

alternatives. Added to that is the franchise fee and ongoing royalties and advertising

contributions. The contract may also bind the franchisee to such alterations as demanded by


the franchisor from time to time. (As required to be disclosed in the state disclosure

document and the franchise agreement under the FTC Franchise Rule)

Conflicts

The franchisor/franchisee relationship can easily cause conflict if either side is incompetent

(or acting in bad faith). An incompetent franchisor can destroy its franchisees by failing to

promote the brand properly or by squeezing them too aggressively for profits. Franchise

agreements are unilateral contracts or contracts of adhesion wherein the contract terms

generally are advantageous to the franchisor when there is conflict in the relationship. [14]

Additionally, the legal publishing website Nolo.com listed the "Lack of Legal Recourse" as

one of Ten Good Reasons Not to Buy a Franchise:

“ As a franchisee, you have little legal recourse if you're wronged by the franchisor.

Most franchisors make franchisees sign agreements waiving their rights under federal

and state law, and in some cases allowing the franchisor to choose where and under

what law any dispute would be litigated. Shamefully, the Federal Trade Commission

(FTC) investigates only a small minority of the franchise-related complaints it

receives.[15] ”

Legal aspects

Australia

In Australia, franchising is regulated by the Franchising Code of Conduct, a mandatory code

of conduct made under the Trade Practices Act 1974.

The Code requires franchisors to produce a disclosure document which must be given to a

prospective franchisee at least 14 days before the franchise agreement is entered into.

The Code also regulates the content of franchise agreements, for example in relation to

marketing funds, a cooling-off period, termination and the resolution of disputes by

mediation.

United States

In the United States, franchising falls under the jurisdiction of a number of state and federal

laws. Franchisors are required by the Federal Trade Commission to provide a Uniform


Franchise Offering Circular (UFOC) to disclose essential information to potential franchisees

about their purchase. States may require the UFOC to contain specific requirements but the

requirements in the State disclosure documents must be in compliance with the Federal

Rule that governs federal regulatory policy.[16] There is no private right of action under the

FTC Rule for franchisor violation of the rule but fifteen or more of the States have passed

statutes that provide a private right of action to franchisees when fraud can be proved under

these special statutes.

The franchise agreement is a standard part of franchising. It is the essential contract signed

by the franchisee and the franchisor that formalizes and specifies the terms of the business

arrangement, as well as many issues discussed in less detail in the UFOC. Unlike the UFOC,

the franchise agreement is a fluid document, crafted to meet the specific needs of the

franchise, with each having its own set of standards and requirements. But much like a

lease, there are elements commonly found in every agreement.[16] "There is a difference

between a discrete contract and a relational contract, and franchise contracts are a distinct

subset of relational contracts." Franchise contracts form a unique and ongoing relationship between the parties. "Unlike a traditional contract, franchise contracts establish a relationship where the stronger party can unilaterally alter the fundamental nature of the obligations of the weaker party..."[17]

There is no federal registry of franchises or any federal filing requirements for information.

States are the primary collectors of data on franchising companies, and enforce laws and

regulations regarding their presence and their spread in their jurisdictions. In response to

the soaring popularity of franchising, an increasing number of communities are taking steps

to limit these chain businesses and reduce displacement of independent businesses through

limits on "formula businesses."[18]

The majority of franchisors have inserted mandatory arbitration clauses into their

agreements with their franchisees. Since 1980, the U.S. Supreme Court has dealt with cases

involving direct franchisor/franchisee conflicts at least four times, and three of those cases

involved a franchisee who was resisting the franchisor's motion to compel arbitration. Two of

the latter cases involved large, well-known restaurant chains (Burger King in Burger King v. Rudzewicz and Subway in Doctor's Associates, Inc. v. Casarotto, 517 US 681 (1996)); the third involved Southland Corporation, the parent company of 7-Eleven, in Southland Corp. v. Keating, 465 US 1 (1984).

Russia


In Russia, under ch. 54 of the Civil Code (passed 1996), franchise agreements are invalid

unless written and registered, and franchisors cannot set standards or limits on the prices of

the franchisee’s goods. Enforcement of laws and resolution of contractual disputes is a

problem: Dunkin' Donuts chose to terminate its contract with Russian franchisees that were

selling vodka and meat patties contrary to their contracts, rather than pursue legal

remedies.

UK

In the United Kingdom, there are no franchise-specific laws; franchises are subject to the

same laws that govern other businesses. For example, franchise agreements are produced under regular contract law and do not have to conform to any further legislation or guidelines. There is some self-regulation through the British Franchise Association (BFA). However, there are many franchise businesses which do not become members, and many businesses that refer to themselves as franchisors do not conform to these rules. There are several people and organisations in the industry calling for the creation of a framework to help reduce the number of "cowboy" franchises and help the industry clean up its image.

On 22 May 2007, hearings were held in the UK Parliament concerning citizen initiated

petitions for special regulation of franchising by the UK government, due to losses suffered by citizens who had invested in franchises. The Minister of Industry, Margaret Hodge,

conducted hearings but resisted any government regulation of franchising with the advice

that government regulation of franchising might lull the public into a false sense of security.

The Minister of Industry indicated that if due diligence were performed by the investors and

the banks, the current laws governing business contracts in the UK offered sufficient

protection for the public and the banks.[19]

Kazakhstan

Until 2002, franchising in Kazakhstan was governed by Chapter 45 of the Kazakh Civil Code (CC). Measures of state support for franchising have generally been included in business-support programmes; measures to promote franchising were provided in paragraph 2.4.1 of the state programme for small business development and support for 1999-2000. The key provisions of Chapter 45, as well as rules governing franchise relations in more detail, entered into the law "On integrated business license (franchising)" of 24 June 2002, No. 330-II. Notably, amongst the Commonwealth of Independent States, Kazakhstan was one of the first countries to introduce a legal definition of franchising in a special law. Kazakhstan serves as a gateway for foreign franchises entering the markets of Central Asia. The Kazakhstan franchising market is estimated at about US$0.5 billion annually.[20]

Social franchises

In recent years, the idea of franchising has been picked up by the social enterprise sector,

which hopes to simplify and expedite the process of setting up new businesses. A number of

business ideas, such as soap making, wholefood retailing, aquarium maintenance, and hotel

operation, have been identified as suitable for adoption by social firms employing disabled

and disadvantaged people.

The most successful example is probably the CAP Markets, a steadily growing chain of some

50 neighborhood supermarkets in Germany. Other examples are the St. Mary's Place Hotel

in Edinburgh and the Hotel Tritone in Trieste.

Event franchising

Event franchising is the duplication of public events in other geographical areas, while

retaining the original brand (logo), mission, concept and format of the event.[21] As in classic

franchising, event franchising is built on precisely copying successful events. A good example of event franchising is the World Economic Forum, also known as the Davos forum, which has regional event franchises in China, Latin America, etc.

Mergers and acquisitions

The phrase mergers and acquisitions (abbreviated M&A) refers to the aspect of

corporate strategy, corporate finance and management dealing with the buying, selling and

combining of different companies that can aid, finance, or help a growing company in a

given industry grow rapidly without having to create another business entity.

Contents

1 Overview

2 Acquisition

o 2.1 Types of acquisition

3 Merger


o 3.1 Classifications of mergers

o 3.2 Distinction between Mergers and Acquisitions

4 Business valuation

5 Financing M&A

o 5.1 Cash

o 5.2 Financing

o 5.3 Hybrids

o 5.4 Factoring

6 Specialist M&A advisory firms

7 Motives behind M&A

8 Effects on management

9 M&A marketplace difficulties

10 The Great Merger Movement

o 10.1 Short-run factors

o 10.2 Long-run factors

11 Cross-border M&A

12 Sovereign Wealth Funds set up a shared corporate

acquisitions database

13 Major M&A in the 1990s

14 Major M&A from 2000 to present

Overview

A merger is a tool used by companies for the purpose of expanding their operations often

aiming at an increase of their long term profitability. There are 15 different types of actions

that a company can take when deciding to move forward using M&A. Usually mergers occur


in a consensual (occurring by mutual consent) setting where executives from the target

company help those from the purchaser in a due diligence process to ensure that the deal is

beneficial to both parties. Acquisitions can also happen through a hostile takeover by

purchasing the majority of outstanding shares of a company in the open market against the

wishes of the target's board. In the United States, business laws vary from state to state

whereby some companies have limited protection against hostile takeovers. One form of

protection against a hostile takeover is the shareholder rights plan, otherwise known as the

"poison pill".

Historically, mergers have often failed (Straub, 2007) to add significantly to the value of the

acquiring firm's shares (King, et al., 2004). Corporate mergers may be aimed at reducing

market competition, cutting costs (for example, laying off employees, operating at a more

technologically efficient scale, etc.), reducing taxes, removing management, "empire

building" by the acquiring managers, or other purposes which may or may not be consistent

with public policy or public welfare. Thus they can be heavily regulated, for example, in the

U.S. requiring approval by both the Federal Trade Commission and the Department of

Justice.

The U.S. began regulating mergers in 1890 with the implementation of the Sherman Act, which was meant to prevent any attempt to monopolize or to conspire to restrict trade. However, based on the loose interpretation of the "Rule of Reason" standard, it was up to

the judges in the U.S. Supreme Court whether to rule leniently (as with U.S. Steel in 1920) or

strictly (as with Alcoa in 1945).

Acquisition

An acquisition, also known as a takeover, is the buying of one company (the ‘target’) by

another. An acquisition may be friendly or hostile. In the former case, the companies

cooperate in negotiations; in the latter case, the takeover target is unwilling to be bought or

the target's board has no prior knowledge of the offer. Acquisition usually refers to a

purchase of a smaller firm by a larger one. Sometimes, however, a smaller firm will acquire

management control of a larger or longer established company and keep its name for the

combined entity. This is known as a reverse takeover.

Types of acquisition

The buyer buys the shares, and therefore control, of the target company being

purchased. Ownership control of the company in turn conveys effective control over


the assets of the company, but since the company is acquired intact as a going

business, this form of transaction carries with it all of the liabilities accrued by that

business over its past and all of the risks that company faces in its commercial

environment.

The buyer buys the assets of the target company. The cash the target receives from

the sell-off is paid back to its shareholders by dividend or through liquidation. This

type of transaction leaves the target company as an empty shell if the buyer buys out all of the assets. A buyer often structures the transaction as an asset purchase

to "cherry-pick" the assets that it wants and leave out the assets and liabilities that it

does not. This can be particularly important where foreseeable liabilities may include

future, unquantified damage awards such as those that could arise from litigation

over defective products, employee benefits or terminations, or environmental

damage. A disadvantage of this structure is the tax that many jurisdictions,

particularly outside the United States, impose on transfers of the individual assets,

whereas stock transactions can frequently be structured as like-kind exchanges or

other arrangements that are tax-free or tax-neutral, both to the buyer and to the

seller's shareholders.

The terms "demerger", "spin-off" and "spin-out" are sometimes used to indicate a situation

where one company splits into two, generating a second company separately listed on a

stock exchange.

Merger

In business or economics a merger is a combination of two companies into one larger

company. Such actions are commonly voluntary and involve stock swap or cash payment to

the target. Stock swap is often used as it allows the shareholders of the two companies to

share the risk involved in the deal. A merger can resemble a takeover but result in a new

company name (often combining the names of the original companies) and in new branding;

in some cases, terming the combination a "merger" rather than an acquisition is done purely

for political or marketing reasons.

Classifications of mergers

Horizontal mergers take place where the two merging companies produce similar products in the same industry.

Vertical mergers occur when two firms, each working at different stages in the

production of the same good, combine.


Congeneric mergers occur where two merging firms are in the same general industry,

but they have no mutual buyer/customer or supplier relationship, such as a merger

between a bank and a leasing company. Example: Prudential's acquisition of Bache &

Company.

Conglomerate mergers take place when the two firms operate in different industries.

A unique type of merger called a reverse merger is used as a way of going public without

the expense and time required by an IPO.

The contract vehicle for achieving a merger is a "merger sub".

The occurrence of a merger often raises concerns in antitrust circles. Devices such as the

Herfindahl index can analyze the impact of a merger on a market and what, if any, action

could prevent it. Regulatory bodies such as the European Commission, the United States

Department of Justice and the U.S. Federal Trade Commission may investigate anti-trust cases for monopoly dangers, and have the power to block mergers.
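As an illustration of how such a device works, here is a minimal sketch of a Herfindahl-Hirschman Index calculation for a hypothetical merger; the firms and market shares are invented. With shares expressed in percent, the index runs from near 0 (fragmented market) to 10,000 (pure monopoly), and regulators look at both the post-merger level and the increase the deal causes.

# Herfindahl-Hirschman Index (HHI) sketch for a hypothetical merger.

def hhi(shares_percent):
    """Sum of squared market shares, with shares expressed in percent."""
    return sum(s ** 2 for s in shares_percent)

pre_merger = {"A": 30.0, "B": 20.0, "C": 20.0, "D": 15.0, "E": 15.0}
post_merger = {"A": 30.0, "B+C": 40.0, "D": 15.0, "E": 15.0}  # B and C merge

pre, post = hhi(pre_merger.values()), hhi(post_merger.values())
print(f"Pre-merger HHI:  {pre:.0f}")         # 2150
print(f"Post-merger HHI: {post:.0f}")        # 2950
print(f"Increase:        {post - pre:.0f}")  # 800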

Accretive mergers are those in which an acquiring company's earnings per share (EPS) increase. A common rule of thumb is that an all-stock deal tends to be accretive when a company with a high price-to-earnings ratio (P/E) acquires one with a low P/E.

Dilutive mergers are the opposite, whereby the acquiring company's EPS decreases; this typically occurs when a company with a low P/E acquires one with a high P/E.
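A minimal numeric sketch of this rule of thumb for an all-stock deal follows; every company figure in it is invented for illustration.

# Accretion/dilution sketch for an all-stock acquisition.
# A high-P/E acquirer buying a low-P/E target tends to be accretive.

acq_earnings, acq_shares, acq_price = 100.0, 50.0, 40.0  # EPS 2.0, P/E 20
tgt_earnings, tgt_shares, tgt_price = 50.0, 25.0, 20.0   # EPS 2.0, P/E 10

deal_value = tgt_shares * tgt_price  # 500: target bought at market value
new_shares = deal_value / acq_price  # 12.5 acquirer shares issued

standalone_eps = acq_earnings / acq_shares  # 2.00
combined_eps = (acq_earnings + tgt_earnings) / (acq_shares + new_shares)

print(f"Standalone EPS: {standalone_eps:.2f}")
print(f"Combined EPS:   {combined_eps:.2f}")  # 2.40 -> accretive

Swapping the two P/E ratios in this sketch pushes the combined EPS below the standalone figure, i.e. a dilutive deal.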

The completion of a merger does not ensure the success of the resulting organization;

indeed, many mergers (in some industries, the majority) result in a net loss of value due to

problems. Correcting problems caused by incompatibility, whether of technology, equipment, or corporate culture, diverts resources away from new investment, and these

problems may be exacerbated by inadequate research or by concealment of losses or

liabilities by one of the partners. Overlapping subsidiaries or redundant staff may be allowed

to continue, creating inefficiency, and conversely the new management may cut too many

operations or personnel, losing expertise and disrupting employee culture. These problems

are similar to those encountered in takeovers. For the merger not to be considered a failure,

it must increase shareholder value faster than if the companies were separate, or prevent

the deterioration of shareholder value more than if the companies were separate.

Distinction between Mergers and Acquisitions


Although they are often uttered in the same breath and used as though they were

synonymous, the terms merger and acquisition mean slightly different things.

When one company takes over another and clearly establishes itself as the new owner, the

purchase is called an acquisition. From a legal point of view, the target company ceases to

exist, the buyer "swallows" the business and the buyer's stock continues to be traded.

In the pure sense of the term, a merger happens when two firms, often of about the same

size, agree to go forward as a single new company rather than remain separately owned and

operated. This kind of action is more precisely referred to as a "merger of equals". Both

companies' stocks are surrendered and new company stock is issued in its place. For

example, both Daimler-Benz and Chrysler ceased to exist when the two firms merged, and a

new company, DaimlerChrysler, was created.

In practice, however, actual mergers of equals don't happen very often. Usually, one

company will buy another and, as part of the deal's terms, simply allow the acquired firm to

proclaim that the action is a merger of equals, even if it is technically an acquisition. Being

bought out often carries negative connotations; therefore, by describing the deal

euphemistically as a merger, deal makers and top managers try to make the takeover more

palatable.

A purchase deal will also be called a merger when both CEOs agree that joining together is

in the best interest of both of their companies. But when the deal is unfriendly - that is,

when the target company does not want to be purchased - it is always regarded as an

acquisition.

Whether a purchase is considered a merger or an acquisition really depends on whether the

purchase is friendly or hostile and how it is announced. In other words, the real difference

lies in how the purchase is communicated to and received by the target company's board of

directors, employees and shareholders. It is quite normal, though, for M&A deal communications to take place in a so-called 'confidentiality bubble' whereby information

flows are restricted due to confidentiality agreements (Harwood, 2005).

Business valuation

The five most common ways to value a business are:

asset valuation,

historical earnings valuation,

future maintainable earnings valuation,

relative valuation (comparable company & comparable transactions),

discounted cash flow (DCF) valuation.
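As one illustration of these methods, here is a minimal discounted-cash-flow sketch; the cash-flow forecast, discount rate, and terminal growth rate are all invented assumptions rather than a prescribed methodology.

# Minimal DCF sketch: present value of forecast free cash flows plus
# a Gordon-growth terminal value; all inputs are hypothetical.

def dcf_value(cash_flows, rate, terminal_growth):
    """Discount each forecast year, then add a discounted terminal value."""
    pv = sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))
    terminal = cash_flows[-1] * (1 + terminal_growth) / (rate - terminal_growth)
    return pv + terminal / (1 + rate) ** len(cash_flows)

forecast = [12.0, 13.5, 15.0, 16.2, 17.0]  # free cash flow, $m per year
print(f"Estimated value: {dcf_value(forecast, 0.10, 0.02):.1f} $m")  # ~189.5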

Professionals who value businesses generally do not use just one of these methods but a combination of them, as well as possibly others not mentioned above, in order to obtain a more accurate value. These values are determined for the most part by looking at a company's balance sheet and/or income statement and extracting the appropriate information. The information in the balance sheet or income statement is obtained by one of three accounting measures: a Notice to Reader, a Review Engagement or an Audit.

Accurate business valuation is one of the most important aspects of M&A as valuations like

these will have a major impact on the price that a business will be sold for. Most often this

information is expressed in a Letter of Opinion of Value (LOV) when the business is being

valued for interest's sake. There are other, more detailed ways of expressing the value of

a business. These reports generally get more detailed and expensive as the size of a

company increases, however, this is not always the case as there are many complicated

industries which require more attention to detail, regardless of size.

Financing M&A

Mergers are generally differentiated from acquisitions partly by the way in which they are

financed and partly by the relative size of the companies. Various methods of financing an

M&A deal exist:

Cash

Payment by cash. Such transactions are usually termed acquisitions rather than mergers

because the shareholders of the target company are removed from the picture and the

target comes under the (indirect) control of the bidder's shareholders alone.

A cash deal makes more sense during a downward trend in interest rates. Another advantage of using cash for an acquisition is that there is less chance of EPS dilution for the acquiring company. A caveat, however, is that paying cash places constraints on the cash flow of the company.


Financing

Financing capital may be borrowed from a bank, or raised by an issue of bonds.

Alternatively, the acquirer's stock may be offered as consideration. Acquisitions financed

through debt are known as leveraged buyouts if they take the target private, and the debt

will often be moved down onto the balance sheet of the acquired company.

Hybrids

An acquisition can involve a combination of cash and debt, or a combination of cash and

stock of the purchasing entity.

Factoring

Factoring can provide the extra capital needed to make a merger or sale work.

Specialist M&A advisory firms

Although at present the majority of M&A advice is provided by full-service investment banks,

recent years have seen a rise in the prominence of specialist M&A advisers, who only

provide M&A advice (and not financing). To perform these services in the US, an advisor

must be a licensed broker dealer, and subject to SEC (FINRA) regulation. More information

on M&A advisory firms is provided at corporate advisory.

Motives behind M&A

The dominant rationale used to explain M&A activity is that acquiring firms seek improved

financial performance. The following motives are considered to improve financial

performance:

Synergies : This refers to the fact that the combined company can often reduce its

fixed costs by removing duplicate departments or operations, lowering the costs of

the company relative to the same revenue stream, thus increasing profit margins.

Increased revenue/Increased Market Share: This assumes that the buyer will be

absorbing a major competitor and thus increase its market power (by capturing

increased market share) to set prices.


Cross selling : For example, a bank buying a stock broker could then sell its banking

products to the stock broker's customers, while the broker can sign up the bank's

customers for brokerage accounts. Or, a manufacturer can acquire and sell

complementary products.

Economies of Scale : For example, managerial economies such as the increased

opportunity of managerial specialization. Another example is purchasing economies due to increased order size and associated bulk-buying discounts.

Taxes: A profitable company can buy a loss maker and use the target's losses to reduce its own tax liability. In the United States and many other

countries, rules are in place to limit the ability of profitable companies to "shop" for

loss making companies, limiting the tax motive of an acquiring company.

Geographical or other diversification: This is designed to smooth the earnings results

of a company, which over the long term smooths the company's stock price,

giving conservative investors more confidence in investing in the company. However,

this does not always deliver value to shareholders (see below).

Resource transfer: resources are unevenly distributed across firms (Barney, 1991)

and the interaction of target and acquiring firm resources can create value through

either overcoming information asymmetry or by combining scarce resources.[1]

Vertical integration : Vertical Integration occurs when an upstream and downstream

firm merge (or one acquires the other). There are several reasons for this to occur.

One reason is to internalise an externality problem. A common example of such an externality is double marginalization, which occurs when both the upstream and downstream firms have monopoly power: each firm reduces output from the competitive level to the monopoly level, creating two deadweight losses. By merging, the vertically integrated firm can eliminate one of these distortions by setting the internal transfer price at the competitive level. This increases profits and consumer surplus. A merger that creates a vertically integrated firm can therefore be profitable.[2] (A numeric sketch follows.)
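The following minimal numeric sketch of double marginalization assumes linear demand P = a - Q and a constant upstream marginal cost c; both values are invented for illustration.

# Double-marginalization sketch with demand P = a - Q and upstream cost c.

a, c = 10.0, 2.0

# Integrated monopolist: choose Q to maximize (a - Q - c) * Q.
q_int = (a - c) / 2               # 4.0
p_int = a - q_int                 # 6.0
profit_int = (p_int - c) * q_int  # 16.0

# Separate monopolists: downstream takes wholesale price w and picks
# Q = (a - w) / 2; upstream then maximizes (w - c) * (a - w) / 2.
w = (a + c) / 2                   # 6.0
q_sep = (a - w) / 2               # 2.0
p_sep = a - q_sep                 # 8.0
profit_sep = (w - c) * q_sep + (p_sep - w) * q_sep  # 8 + 4 = 12

print(f"Integrated: Q={q_int}, P={p_int}, total profit={profit_int}")
print(f"Separate:   Q={q_sep}, P={p_sep}, total profit={profit_sep}")
# Integration raises output, lowers price, and raises total profit.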

However, on average and across the most commonly studied variables, acquiring firms’

financial performance does not positively change as a function of their acquisition activity. [3]

Therefore, additional motives for merger and acquisition that may not add shareholder value

include:


Diversification: While this may hedge a company against a downturn in an individual

industry it fails to deliver value, since it is possible for individual shareholders to

achieve the same hedge by diversifying their portfolios at a much lower cost than

those associated with a merger.

Managers' hubris: managers' overconfidence about expected synergies from M&A, which results in overpayment for the target company.

Empire building : Managers have larger companies to manage and hence more power.

Manager's compensation: In the past, certain executive management teams had

their payout based on the total amount of profit of the company, instead of the profit

per share, which would give the team a perverse incentive to buy companies to

increase the total profit while decreasing the profit per share (which hurts the owners

of the company, the shareholders); although some empirical studies show that

compensation is linked to profitability rather than mere profits of the company.

Effects on management

A study published in the July/August 2008 issue of the Journal of Business Strategy suggests

that mergers and acquisitions destroy leadership continuity in target companies’ top

management teams for at least a decade following a deal. The study found that target

companies lose 21 percent of their executives each year for at least 10 years following an

acquisition – more than double the turnover experienced in non-merged firms.[4]

M&A marketplace difficulties

No marketplace currently exists in many states for the mergers and acquisitions of privately

owned small to mid-sized companies. Market participants often wish to maintain a level of

secrecy about their efforts to buy or sell such companies. Their concern for secrecy usually

arises from the possible negative reactions a company's employees, bankers, suppliers,

customers and others might have if the effort or interest to seek a transaction were to

become known. This need for secrecy has thus far thwarted the emergence of a public

forum or marketplace to serve as a clearinghouse for this large volume of business. In some

states, a Multiple Listing Service (MLS) of small businesses for sale is maintained by

organizations such as Business Brokers of Florida (BBF). Another MLS is maintained by

International Business Brokers Association (IBBA).

At present, the process by which a company is bought or sold can prove difficult, slow and

expensive. A transaction typically requires six to nine months and involves many steps.


Locating parties with whom to conduct a transaction forms one step in the overall process

and perhaps the most difficult one. Qualified and interested buyers of multimillion dollar

corporations are hard to find. Even more difficulties attend bringing a number of potential

buyers forward simultaneously during negotiations. Potential acquirers in an industry simply

cannot effectively "monitor" the economy at large for acquisition opportunities even though

some may fit well within their company's operations or plans.

An industry of professional "middlemen" (known variously as intermediaries, business

brokers, and investment bankers) exists to facilitate M&A transactions. These professionals

do not provide their services cheaply and generally resort to previously-established personal

contacts, direct-calling campaigns, and placing advertisements in various media. In servicing

their clients they attempt to create a one-time market for a one-time transaction. Stock

purchase or merger transactions involve securities and require that these "middlemen" be

licensed broker-dealers under FINRA (SEC) in order to be compensated as a percentage of the deal.

Generally speaking, an unlicensed middleman may be compensated on an asset purchase

without being licensed. Many, but not all, transactions use intermediaries on one or both

sides. Despite best intentions, intermediaries can operate inefficiently because of the slow

and limiting nature of having to rely heavily on telephone communications. Many phone

calls fail to reach the intended party. Busy executives tend to be impatient when

dealing with sales calls concerning opportunities in which they have no interest. These

marketing problems typify any privately negotiated market. Because of these and similar problems, brokers who deal with small to mid-sized companies often operate under much more strenuous conditions than other business brokers. Mid-sized business brokers have an average life-span of only 12-18 months and rarely grow beyond 1 or 2

employees. Exceptions to this are few and far between. Some of these exceptions include

The Sundial Group, Geneva Business Services and Robbinex.

The market inefficiencies can prove detrimental for this important sector of the economy.

Beyond the intermediaries' high fees, the current process for mergers and acquisitions has

the effect of causing private companies to initially sell their shares at a significant discount

relative to what the same company might sell for were it already publicly traded. An

important and large sector of the entire economy is held back by the difficulty in conducting

corporate M&A (and also in raising equity or debt capital). Furthermore, it is likely that since

privately held companies are so difficult to sell they are not sold as often as they might or

should be.

Previous attempts to streamline the M&A process through computers have failed to succeed

on a large scale because they have provided mere "bulletin boards" - static information that


advertises one firm's opportunities. Users must still seek other sources for opportunities just

as if the bulletin board were not electronic. A multiple listings service concept was

previously not used due to the need for confidentiality but there are currently several in

operation. The most significant of these are run by the California Association of Business

Brokers (CABB) and the International Business Brokers Association (IBBA). These organizations have effectively created a type of virtual market without compromising the

confidentiality of parties involved and without the unauthorized release of information.

One part of the M&A process which can be improved significantly using networked computers is access to "data rooms" during the due diligence process, though this applies only to larger transactions. For small and medium-sized businesses, these data rooms serve little purpose and are generally not used. The reasons for the frequent failure of M&A were analyzed by Thomas Straub in "Reasons for frequent failure in mergers and acquisitions - a comprehensive analysis", DUV Gabler Edition, 2007.

The Great Merger Movement

The Great Merger Movement was a predominantly U.S. business phenomenon that

happened from 1895 to 1905. During this time, small firms with little market share

consolidated with similar firms to form large, powerful institutions that dominated their

markets. It is estimated that more than 1,800 of these firms disappeared into consolidations,

many of which acquired substantial shares of the markets in which they operated. The

vehicle used was the so-called trust. To appreciate how large this movement was: in 1900 the value of firms acquired in mergers was 20% of GDP; in 1990 the value was only 3%, and from 1998 to 2000 it was around 10-11% of GDP. Organizations that commanded the

greatest share of the market in 1905 saw that command disintegrate by 1929 as smaller

competitors joined forces with each other. However, there were companies that merged

during this time, such as DuPont, Nabisco, US Steel, and General Electric, that have been able to keep their dominance in their respective sectors to this day, thanks to the growing technological sophistication of their products, their patents, and brand recognition among their customers. The companies that merged were consistently mass producers of homogeneous goods that could exploit the efficiencies of large-volume production. Companies with specialty products, like fine writing paper, earned their profits on high margins rather than volume and took no part in the Great Merger Movement.[citation needed]

Short-run factors


One of the major short-run factors that sparked the Great Merger Movement was the

desire to keep prices high. That is, with many firms in a market, supply of the product

remains high. During the panic of 1893, the demand declined. When demand for the good

falls, as illustrated by the classic supply and demand model, prices are driven down. To

avoid this decline in prices, firms found it profitable to collude and manipulate supply to

counter any changes in demand for the good. This type of cooperation led to widespread

horizontal integration amongst firms of the era. Focusing on mass production allowed firms

to reduce unit costs substantially. These firms were usually capital-intensive and had high fixed costs. Because new machines were mostly financed through bonds, interest payments on bonds remained high after the panic of 1893, yet no firm was willing to accept a reduction in quantity during this period.[citation needed]

Long-run factors

In the long run, the desire to keep costs low made it advantageous for firms to merge and reduce their transportation costs, producing and transporting from one location rather than from the various sites of different companies, as in the past. This resulted in shipment

directly to market from this one location. In addition, technological changes prior to the

merger movement within companies increased the efficient size of plants with capital

intensive assembly lines allowing for economies of scale. Thus improved technology and

transportation were forerunners to the Great Merger Movement. In part due to competitors

as mentioned above, and in part due to the government, however, many of these initially

successful mergers were eventually dismantled. The U.S. government passed the Sherman

Act in 1890, setting rules against price fixing and monopolies. Starting in the 1890s with

such cases as U.S. versus Addyston Pipe and Steel Co., the courts attacked large companies

for strategizing with others or within their own companies to maximize profits. The prohibition of price fixing with competitors created a greater incentive for companies to unite and merge under one name, so that they were no longer competitors and technically not engaged in price fixing.

Cross-border M&A

In a study conducted in 2000 by Lehman Brothers, it was found that, on average, large M&A

deals cause the domestic currency of the target corporation to appreciate by 1% relative to

the acquirer's. For every $1-billion deal, the currency of the target corporation increased in

value by 0.5%. More specifically, the report found that in the period immediately after the

deal is announced, there is generally a strong upward movement in the target corporation's

domestic currency (relative to the acquirer's currency). Fifty days after the announcement,

the target currency is then, on average, 1% stronger.[5]


The rise of globalization has exponentially increased the market for cross border M&A. In

1996 alone there were over 2000 cross border transactions worth a total of approximately

$256 billion. This rapid increase has taken many M&A firms by surprise because the majority

of them never had to consider acquiring the capabilities or skills required to effectively

handle this kind of transaction. In the past, the market's lack of significance and a more

strictly national mindset prevented the vast majority of small and mid-sized companies from

considering cross border intermediation as an option which left M&A firms inexperienced in

this field. This same reason also prevented the development of any extensive academic

works on the subject.

Due to the complicated nature of cross border M&A, the vast majority of cross border actions

have unsuccessful results. Cross border intermediation has many more levels of complexity

to it than regular intermediation, since corporate governance, the power of the average employee, company regulations, political factors, customer expectations, and countries' cultures are all crucial factors that could spoil the transaction.[6][7] However, with the weak

dollar in the U.S. and soft economies in a number of countries around the world, we are

seeing more cross-border bargain hunting as top companies seek to expand their global

footprint and become more agile at creating high-performing businesses and cultures across

national boundaries.[8]

Even mergers of companies with headquarters in the same country are very much of this type (cross-border mergers). After all, when Boeing acquires McDonnell Douglas, the two American companies must integrate operations in dozens of countries around the world. This is just as true for other supposedly "single country" mergers, such as the $27 billion merger of Swiss drug makers Sandoz and Ciba-Geigy (now Novartis).

Sovereign Wealth Funds set up a shared corporate acquisitions database

A number of Western government officials have expressed concern over commercial information on corporate acquisitions being gathered by sovereign governments and state enterprises.


An ad hoc group of SWF investment directors and managers has now established a database called SWF Investments; this database provides shared acquisition information to the SWFs.

The SWF website is restricted and it states: "SWF Investments are a resource which has

been established by a number of sovereign wealth funds and state enterprises to produce

acquisition and investment databases and forecasting tools for potential acquisition targets.

Subscription to SWF Investments is by invitation only, and is restricted to government

organisations or state enterprises." [9]

The database seems to be initially concentrating on London Stock Exchange listed

companies; however it is believed that the database will in a matter of weeks be extended

to include all the companies listed on the stock exchanges of most of the developed

countries.

Western governments are now in a difficult position: public opinion and the trade unions prefer the protection and domestic ownership of national companies; however, the reality of the present economic situation suggests that an injection of capital into many of the target companies may in fact save those companies from bankruptcy.

Major M&A in the 1990s

Top 10 M&A deals worldwide by value (in mil. USD) from 1990 to 1999:

Rank | Year | Purchaser                  | Purchased               | Transaction value (in mil. USD)
1    | 1999 | Vodafone Airtouch PLC [10] | Mannesmann              | 183,000
2    | 1999 | Pfizer [11]                | Warner-Lambert          | 90,000
3    | 1998 | Exxon [12][13]             | Mobil                   | 77,200
4    | 1999 | Citicorp                   | Travelers Group         | 73,000
5    | 1999 | SBC Communications         | Ameritech Corporation   | 63,000
6    | 1999 | Vodafone Group             | AirTouch Communications | 60,000
7    | 1998 | Bell Atlantic [14]         | GTE                     | 53,360
8    | 1998 | BP [15]                    | Amoco                   | 53,000
9    | 1999 | Qwest Communications       | US WEST                 | 48,000
10   | 1997 | Worldcom                   | MCI Communications      | 42,000

Major M&A from 2000 to present

Top 9 M&A deals worldwide by value (in mil. USD) since 2000:[16]

Rank | Year | Purchaser                                 | Purchased                      | Transaction value (in mil. USD)
1    | 2000 | Fusion: America Online Inc. (AOL)[17][18] | Time Warner                    | 164,747
2    | 2000 | Glaxo Wellcome Plc.                       | SmithKline Beecham Plc.        | 75,961
3    | 2004 | Royal Dutch Petroleum Co.                 | Shell Transport & Trading Co   | 74,559
4    | 2006 | AT&T Inc.[19][20]                         | BellSouth Corporation          | 72,671
5    | 2001 | Comcast Corporation                       | AT&T Broadband & Internet Svcs | 72,041
6    | 2004 | Sanofi-Synthelabo SA                      | Aventis SA                     | 60,243
7    | 2000 | Spin-off: Nortel Networks Corporation     | -                              | 59,974
8    | 2002 | Pfizer Inc.                               | Pharmacia Corporation          | 59,515
9    | 2004 | JP Morgan Chase & Co[21]                  | Bank One Corp                  | 58,761



Modern portfolio theory


[Figure: Capital Market Line]

Modern portfolio theory (MPT) proposes how rational investors will use diversification to

optimize their portfolios, and how a risky asset should be priced. The basic concepts of the

theory are Markowitz diversification, the efficient frontier, capital asset pricing model, the

alpha and beta coefficients, the Capital Market Line and the Securities Market Line.

MPT models an asset's return as a random variable, and models a portfolio as a weighted

combination of assets so that the return of a portfolio is the weighted combination of the

assets' returns. Moreover, a portfolio's return is a random variable, and consequently has an

expected value and a variance. Risk, in this model, is the standard deviation of return.


Contents

1 Risk and return
  o 1.1 Mean and variance
      1.1.1 Mathematically
  o 1.2 Diversification
  o 1.3 Capital allocation line
  o 1.4 The efficient frontier
2 The risk-free asset
  o 2.1 Mathematically
  o 2.2 Portfolio leverage
  o 2.3 The market portfolio
  o 2.4 Capital market line
3 Asset pricing
  o 3.1 Systematic risk and specific risk
  o 3.2 Security characteristic line
  o 3.3 Capital asset pricing model
  o 3.4 Securities market line
4 Applications to project portfolios and other "non-financial" assets
5 Applications of Modern Portfolio Theory in Other Disciplines
6 Comparison with arbitrage pricing theory
7 References
8 See also
  o 8.1 Contrasting investment philosophy
9 External links

Risk and return

The model assumes that investors are risk averse, meaning that given two assets that offer

the same expected return, investors will prefer the less risky one. Thus, an investor will take

on increased risk only if compensated by higher expected returns. Conversely, an investor

who wants higher returns must accept more risk. The exact trade-off will differ by investor

based on individual risk aversion characteristics. The implication is that a rational investor

will not invest in a portfolio if a second portfolio exists with a more favorable risk-return

profile – i.e., if for that level of risk an alternative portfolio exists which has better expected

returns.

Mean and variance

It is further assumed that the investor's risk/reward preferences can be described via a quadratic utility function. The effect of this assumption is that only the expected return and the volatility (i.e., mean return and standard deviation) matter to the investor. The investor is indifferent to other characteristics of the distribution of returns, such as its skew (a measure of the asymmetry of the distribution) or kurtosis (a measure of the thickness of the tails, the so-called "fat tails").
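A one-line derivation of why this follows (a standard argument, not spelled out in the source): take a generic quadratic utility U(W) = W - (b/2)W², with b > 0. Then

    E[U(W)] = E[W] - (b/2) E[W²] = E[W] - (b/2) (Var(W) + E[W]²),

so expected utility depends on the return distribution only through its mean and variance.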

Note that the theory uses a parameter, volatility, as a proxy for risk, while return is an

expectation on the future. This is in line with the efficient market hypothesis and most of the

classical findings in finance, such as Black and Scholes European option pricing (the martingale measure, which, roughly speaking, means that the best forecast for tomorrow is the price of today).

Recent innovations in portfolio theory, particularly under the rubric of Post-Modern Portfolio

Theory (PMPT), have exposed several flaws in this reliance on variance as the investor's risk

proxy:

The theory uses a historical parameter, volatility, as a proxy for risk, while return is

an expectation about the future. (It is noted, though, that this is in line with the efficient market hypothesis and most of the classical findings in finance, such as Black and Scholes,

which make use of the martingale measure, i.e. the assumption that the best

forecast for tomorrow is the price of today).


The statement that "the investor is indifferent to other characteristics" seems not to

be true given that skewness risk appears to be priced by the market[citation needed].

Under the model:

Portfolio return is the proportion-weighted combination of the constituent assets'

returns.

Portfolio volatility is a function of the correlation ρ of the component assets. The

change in volatility is non-linear as the weighting of the component assets changes.

Mathematically

In general:

Expected return:

    E(Rp) = Σi wi E(Ri)

where Ri is return and wi is the weighting of component asset i.

Portfolio variance:

    σp² = Σi Σj wi wj σi σj ρij,

where i ≠ j. Alternatively the expression can be written as:

    σp² = Σi Σj wi wj σij,

where ρij = 1 for i = j and σij = σi σj ρij is the covariance of assets i and j.

Portfolio volatility:

    σp = √(σp²)

For a two-asset portfolio:

Portfolio return:

    E(Rp) = wA E(RA) + wB E(RB) = wA E(RA) + (1 − wA) E(RB)

Portfolio variance:

    σp² = wA² σA² + wB² σB² + 2 wA wB σA σB ρAB

Matrices are preferred for calculations of the efficient frontier. In matrix form, for a given "risk tolerance" q ∈ [0, ∞), the efficient frontier is found by minimizing the following expression:

    wᵀ Σ w − q Rᵀ w

where w is a vector of portfolio weights (each wi ≥ 0 and Σ wi = 1), Σ is the covariance matrix of the asset returns, and R is the vector of expected returns. The frontier is calculated by repeating the optimization for various values of q. The optimization can, for example, be conducted by quadratic programming, which is available in many software packages, including Microsoft Excel, Matlab and R.
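The following is a minimal sketch of that minimization (illustrative only: the expected returns and covariances are invented, and a brute-force grid search over the weight simplex stands in for a proper quadratic-programming solver):

    import numpy as np

    R = np.array([0.08, 0.12, 0.15])             # assumed expected returns
    Sigma = np.array([[0.040, 0.006, 0.012],     # assumed covariance matrix
                      [0.006, 0.090, 0.018],
                      [0.012, 0.018, 0.160]])

    # Candidate weight vectors on the simplex: w_i >= 0, sum(w) = 1.
    grid = np.linspace(0.0, 1.0, 101)
    candidates = [np.array([x, y, 1.0 - x - y])
                  for x in grid for y in grid if x + y <= 1.0]

    # For each risk tolerance q, minimize  w' Sigma w - q * R' w.
    for q in (0.0, 0.5, 1.0):
        w = min(candidates, key=lambda w: w @ Sigma @ w - q * (R @ w))
        ret, vol = R @ w, np.sqrt(w @ Sigma @ w)
        print(f"q={q:.1f}  w={np.round(w, 2)}  return={ret:.3f}  vol={vol:.3f}")
    # As q grows, the optimizer accepts more variance in exchange for
    # expected return, tracing out the efficient frontier.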

Diversification

An investor can reduce portfolio risk simply by holding combinations of instruments which are not perfectly correlated. In other words, investors can reduce their exposure to individual asset risk by holding a diversified portfolio of assets. Diversification may allow for the same portfolio return with reduced risk.

If all the assets of a portfolio have a correlation of 1, i.e., perfect correlation, the portfolio volatility (standard deviation) will be equal to the weighted sum of the individual asset volatilities. Hence the portfolio variance is equal to the square of the total weighted sum of the individual asset volatilities.

If all the assets have a correlation of 0, i.e., perfectly uncorrelated, the portfolio variance is the sum of the individual asset weights squared times the individual asset variance (and volatility is the square root of this sum).

If correlation is less than zero, i.e., the assets are inversely correlated, the portfolio variance and hence volatility will be less than if the correlation is 0.
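A small numeric check of the three cases above (numbers hypothetical): two assets with 20% volatility each, held in equal weights.

    import numpy as np

    w = np.array([0.5, 0.5])
    sigma = np.array([0.20, 0.20])
    for rho in (1.0, 0.0, -0.5):
        cov = np.outer(sigma, sigma) * np.array([[1.0, rho], [rho, 1.0]])
        vol = float(np.sqrt(w @ cov @ w))
        print(f"rho={rho:+.1f}  portfolio vol={vol:.4f}")
    # rho=+1.0 -> 0.2000 (the weighted sum of volatilities)
    # rho= 0.0 -> 0.1414 (square root of the sum of squared weighted volatilities)
    # rho=-0.5 -> 0.1000 (lower still when the assets are inversely correlated)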

Capital allocation line

The capital allocation line (CAL) is the line of expected return plotted against risk (standard deviation) that is obtained by combining the risk-free asset with a risky portfolio; its slope, the reward-to-variability ratio, gives the expected excess return per unit of risk.

Growth-share matrix

The BCG matrix (aka B.C.G. analysis, BCG-matrix, Boston Box, Boston Matrix, Boston Consulting Group analysis) is a chart created by Bruce Henderson for the

Boston Consulting Group in 1970 to help corporations with analyzing their business units or

product lines. This helps the company allocate resources and is used as an analytical tool in

brand marketing, product management, strategic management, and portfolio analysis.

Contents

1 Chart
2 Practical Use of the BCG Matrix
  o 2.1 Relative market share
  o 2.2 Market growth rate
  o 2.3 Alternatives
3 Other uses

Chart

[Figure: BCG Matrix]

To use the chart, analysts plot a scatter graph to rank the business units (or products) on

the basis of their relative market shares and growth rates.

Cash cows are units with high market share in a slow-growing industry. These units

typically generate cash in excess of the amount of cash needed to maintain the

business. They are regarded as staid and boring, in a "mature" market, and every

corporation would be thrilled to own as many as possible. They are to be "milked"

continuously with as little investment as possible, since such investment would be

wasted in an industry with low growth.

Dogs, or more charitably called pets, are units with low market share in a mature,

slow-growing industry. These units typically "break even", generating barely enough

cash to maintain the business's market share. Though owning a break-even unit

provides the social benefit of providing jobs and possible synergies that assist other

business units, from an accounting point of view such a unit is worthless, not

generating cash for the company. They depress a profitable company's return on

assets ratio, used by many investors to judge how well a company is being managed.

Dogs, it is thought, should be sold off.

Question marks (also known as problem child) are growing rapidly and thus

consume large amounts of cash, but because they have low market shares they do


not generate much cash. The result is a large net cash consumption. A question

mark has the potential to gain market share and become a star, and eventually a

cash cow when the market growth slows. If the question mark does not succeed in

becoming the market leader, then after perhaps years of cash consumption it will

degenerate into a dog when the market growth declines. Question marks must be

analyzed carefully in order to determine whether they are worth the investment

required to grow market share.

Stars are units with a high market share in a fast-growing industry. The hope is that

stars become the next cash cows. Sustaining the business unit's market leadership

may require extra cash, but this is worthwhile if that's what it takes for the unit to

remain a leader. When growth slows, stars become cash cows if they have been able

to maintain their category leadership, or they move from brief stardom to dogdom.

As a particular industry matures and its growth slows, all business units become either cash

cows or dogs. The natural cycle for most business units is that they start as question marks,

then turn into stars. Eventually the market stops growing thus the business unit becomes a

cash cow. At the end of the cycle the cash cow turns into a dog.

The overall goal of this ranking was to help corporate analysts decide which of their business

units to fund, and how much; and which units to sell. Managers were supposed to gain

perspective from this analysis that allowed them to plan with confidence to use money

generated by the cash cows to fund the stars and, possibly, the question marks. As the BCG

stated in 1970:

Only a diversified company with a balanced portfolio can use its strengths to truly

capitalize on its growth opportunities. The balanced portfolio has:

stars whose high share and high growth assure the future;

cash cows that supply funds for that future growth; and

question marks to be converted into stars with the added funds.

Practical Use of the BCG Matrix


For each product or service, the 'area' of the circle represents the value of its sales. The BCG

Matrix thus offers a very useful 'map' of the organization's product (or service) strengths and

weaknesses, at least in terms of current profitability, as well as the likely cashflows.

The need which prompted this idea was, indeed, that of managing cash-flow. It was

reasoned that one of the main indicators of cash generation was relative market share, and

one which pointed to cash usage was that of market growth rate.

Derivative forms of the matrix can also be used to create a 'product portfolio' analysis of services; so information-system services, for example, can be treated accordingly.

Relative market share

This indicates likely cash generation, because the higher the share the more cash will be

generated. As a result of 'economies of scale' (a basic assumption of the BCG Matrix), it is

assumed that these earnings will grow faster the higher the share. The exact measure is the

brand's share relative to its largest competitor. Thus, if the brand had a share of 20 percent,

and the largest competitor had the same, the ratio would be 1:1. If the largest competitor

had a share of 60 percent; however, the ratio would be 1:3, implying that the organization's

brand was in a relatively weak position. If the largest competitor only had a share of 5

percent, the ratio would be 4:1, implying that the brand owned was in a relatively strong

position, which might be reflected in profits and cash flows. If this technique is used in

practice, this scale is logarithmic, not linear.
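As a rough sketch of the arithmetic above (the 1.0 relative-share threshold and the 10% growth cut-off follow the conventions discussed in this article; the brands and numbers are invented):

    def bcg_quadrant(share, rival_share, growth, share_cut=1.0, growth_cut=0.10):
        # Relative market share: own share divided by the largest competitor's.
        relative = share / rival_share
        if growth >= growth_cut:
            return "star" if relative >= share_cut else "question mark"
        return "cash cow" if relative >= share_cut else "dog"

    portfolio = {                      # share, largest rival's share, market growth
        "Brand A": (0.20, 0.20, 0.15), # ratio 1:1 in a fast market  -> star
        "Brand B": (0.20, 0.60, 0.02), # ratio 1:3, weak position    -> dog
        "Brand C": (0.20, 0.05, 0.01), # ratio 4:1 in a slow market  -> cash cow
    }
    for name, (s, r, g) in portfolio.items():
        print(name, bcg_quadrant(s, r, g))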

On the other hand, exactly what is a high relative share is a matter of some debate. The

best evidence is that the most stable position (at least in FMCG markets) is for the brand

leader to have a share double that of the second brand, and triple that of the third. Brand

leaders in this position tend to be very stable—and profitable; the Rule of 123.[1]

The reason for choosing relative market share, rather than just profits, is that it carries more

information than just cashflow. It shows where the brand is positioned against its main

competitors, and indicates where it might be likely to go in the future. It can also show what

type of marketing activities might be expected to be effective.

Market growth rate

Rapidly growing products in rapidly growing markets are what organizations strive for; but, as we have seen, the penalty is that they are usually net cash users - they require investment. The reason for this is often that the growth is being 'bought' by the high investment, in the


reasonable expectation that a high market share will eventually turn into a sound

investment in future profits. The theory behind the matrix assumes, therefore, that a higher

growth rate is indicative of accompanying demands on investment. The cut-off point is

usually chosen as 10 per cent per annum. Determining this cut-off point, the rate above

which the growth is deemed to be significant (and likely to lead to extra demands on cash) is

a critical requirement of the technique; and one that, again, makes the use of the BCG

Matrix problematical in some product areas. What is more, the evidence,[1] from FMCG

markets at least, is that the most typical pattern is of very low growth, less than 1 per cent

per annum. This is outside the range normally considered in BCG Matrix work, which may

make application of this form of analysis unworkable in many markets.

Where it can be applied, however, the market growth rate says more about the brand

position than just its cash flow. It is a good indicator of that market's strength, of its future

potential (of its 'maturity' in terms of the market life-cycle), and also of its attractiveness to

future competitors. It can also be used in growth analysis.

The matrix ranks only market share and industry growth rate, and only implies actual

profitability, the purpose of any business. (It is certainly possible that a particular dog can be

profitable without cash infusions required, and therefore should be retained and not sold.)

The matrix also overlooks other elements of industry. With this or any other such analytical

tool, ranking business units has a subjective element involving guesswork about the future,

particularly with respect to growth rates. Unless the rankings are approached with rigor and

scepticism, optimistic evaluations can lead to a dot com mentality in which even the most

dubious businesses are classified as "question marks" with good prospects; enthusiastic

managers may claim that cash must be thrown at these businesses immediately in order to

turn them into stars, before growth rates slow and it's too late. Poor definition of a

business's market will lead to some dogs being misclassified as cash cows.

As originally practiced by the Boston Consulting Group,[1] the matrix was undoubtedly a

useful tool, in those few situations where it could be applied, for graphically illustrating

cashflows. If used with this degree of sophistication its use would still be valid. However,

later practitioners have tended to over-simplify its messages. In particular, the later

application of the names (problem children, stars, cash cows and dogs) has tended to

overshadow all else—and is often what most students, and practitioners, remember.

This is unfortunate, since such simplistic use contains at least two major problems:


'Minority applicability'. The cashflow techniques are only applicable to a very limited number

of markets (where growth is relatively high, and a definite pattern of product life-cycles can

be observed, such as that of ethical pharmaceuticals). In the majority of markets, use may

give misleading results.

'Milking cash cows'. Perhaps the worst implication of the later developments is that the (brand leader) cash cows should be milked to fund new brands. This is not what research

into the FMCG markets has shown to be the case. The brand leader's position is the one,

above all, to be defended, not least since brands in this position will probably outperform

any number of newly launched brands. Such brand leaders will, of course, generate large

cash flows; but they should not be `milked' to such an extent that their position is

jeopardized. In any case, the chance of the new brands achieving similar brand leadership

may be slim—certainly far less than the popular perception of the Boston Matrix would

imply.

Perhaps the most important danger[1] is, however, that the apparent implication of its four-

quadrant form is that there should be balance of products or services across all four

quadrants; and that is, indeed, the main message that it is intended to convey. Thus, money

must be diverted from `cash cows' to fund the `stars' of the future, since `cash cows' will

inevitably decline to become `dogs'. There is an almost mesmeric inevitability about the

whole process. It focuses attention, and funding, on to the `stars'. It presumes, and almost

demands, that `cash cows' will turn into `dogs'.

The reality is that it is only the `cash cows' that are really important—all the other elements

are supporting actors. It is a foolish vendor who diverts funds from a `cash cow' when these

are needed to extend the life of that `product'. Although it is necessary to recognize a `dog'

when it appears (at least before it bites you) it would be foolish in the extreme to create one

in order to balance up the picture. The vendor, who has most of his (or her) products in the

`cash cow' quadrant, should consider himself (or herself) fortunate indeed, and an excellent

marketer, although he or she might also consider creating a few stars as an insurance policy

against unexpected future developments and, perhaps, to add some extra growth.

Alternatives

As with most marketing techniques, there are a number of alternative offerings vying with

the BCG Matrix although this appears to be the most widely used (or at least most widely

taught—and then probably 'not' used). The next most widely reported technique is that

developed by McKinsey and General Electric, which is a three-cell by three-cell matrix—


using the dimensions of `industry attractiveness' and `business strengths'. This approaches

some of the same issues as the BCG Matrix but from a different direction and in a more

complex way (which may be why it is used less, or is at least less widely taught). Perhaps

the most practical approach is that of the Boston Consulting Group's Advantage Matrix,

which the consultancy reportedly used itself though it is little known amongst the wider

population.

Other uses

The initial intent of the growth-share matrix was to evaluate business units, but the same

evaluation can be made for product lines or any other cash-generating entities. This should

only be attempted for real lines that have a sufficient history to allow some prediction; if the

corporation has made only a few products and called them a product line, the sample

variance will be too high for this sort of analysis to be meaningful.

G. E. multi factoral analysis

The GE matrix is an alternative technique used in brand marketing and product

management to help a company decide what product(s) to add to its product portfolio, and

which market opportunities are worthy of continued investment. Also known as the

'Directional Policy Matrix,' the GE multi-factor model was first developed by General Electric

in the 1970s.

Conceptually, the GE Matrix is similar to the Boston Box as it is plotted on a two-

dimensional grid. In most versions of the matrix:

the Y-Axis comprises industry attractiveness measures, such as Market

Profitability, Fit with Core Skills etc. and

the X-Axis comprises business strength measures, such as Price, Service

Levels etc.

Each product, brand, service, or potential product is mapped as a pie chart onto this industry attractiveness/business strength space. The diameter of each pie chart is proportional to the volume or revenue accruing to each opportunity, and the solid slice of each pie represents the share of the market enjoyed by the planning company.

The planning company should invest in opportunities that appear to the top left of the

matrix. The rationale is that the planning company should invest in segments that are both

attractive and in which it has established some measure of competitive advantage.


Opportunities appearing in the bottom right of the matrix are those that are unattractive to the planning company and in which it is competitively weak. At best, these are candidates for

cash management; at worst candidates for divestment. Opportunities appearing 'in between'

these extremes pose more of a problem, and the planning company has to make a strategic

decision whether to 'redouble its efforts' in the hopes of achieving market leadership,

manage them for cash, or cut its losses and divest.

Marketing

The General Electric Business Screen was originally developed to help marketing managers

overcome the problems that are commonly associated with the Boston Matrix (BCG), such as

the problems with the lack of credible business information, the fact that BCG deals

primarily with commodities, not brands or Strategic Business Units (SBUs), and the fact that cashflow is often a more reliable indicator of position than market growth/share.

The GE Business Screen introduces a three by three matrix, which now includes a medium

category. It utilizes industry attractiveness as a more inclusive measure than BCG's market

growth and substitutes competitive position for the original's market share.

So in come Strategic Business Units (SBU's). A large corporation may have many SBU's,

which essentially operate under the same strategic umbrella, but are distinctive and

individual. A loose example would refer to Microsoft, with SBU's for operating systems,

business software, consumer software and mobile and Internet technologies.

Growth/share are replaced by competitive position and market attractiveness. The point is that successful SBU's will do well in attractive markets because they add value that customers will pay for, while weak companies do badly for the opposite reasons. To help break

down decision-making further, you then consider a number of sub-criteria:

For market attractiveness:

* Size of market.

* Market rate of growth.

* The nature of competition and its diversity.

* Profit margin.

* Impact of technology, the law, and energy efficiency.

* Environmental impact.


. . . and for competitive position:

* Market share.

* Management profile.

* R & D.

* Quality of products and services.

* Branding and promotions success.

* Place (or distribution).

* Efficiency.

* Cost reduction.

At this stage the marketing manager adapts the list above to the needs of his strategy. The

GE matrix has 5 steps:

* One - Identify your products, brands, experiences, solutions, or SBU's.

* Two - Answer the question, What makes this market so attractive?

* Three - Decide on the factors that position the business on the GE matrix.

* Four - Determine the best ways to measure attractiveness and business position.

* Five - Finally rank each SBU as either low, medium or high for business strength, and

low, medium and high in relation

to market attractiveness.
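A minimal sketch of steps four and five (the factors, weights, 1-9 scoring scale and band thresholds are all hypothetical choices, since the text leaves them to the analyst):

    def rate(scores, weights):
        # Weighted average of factor scores on a 1-9 scale, banded low/medium/high.
        avg = sum(s * w for s, w in zip(scores, weights)) / sum(weights)
        return "low" if avg < 4 else "medium" if avg < 7 else "high"

    # Market attractiveness factors: size, growth rate, competition, profit margin.
    attractiveness = rate([7, 8, 5, 6], weights=[0.3, 0.3, 0.2, 0.2])   # "medium"
    # Business strength factors: market share, R&D, quality, distribution.
    strength = rate([8, 6, 7, 7], weights=[0.4, 0.2, 0.2, 0.2])         # "high"
    print(attractiveness, strength)  # a medium/high cell -> invest selectively or grow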

Now follow the usual words of caution that go with all boxes, models and matrices. Yes, the GE matrix is superior to the Boston Matrix, since it uses several dimensions as opposed to BCG's two. However, problems or limitations include:

* There is no research to prove that there is a relationship between market attractiveness

and business position.

* The interrelationships between SBU's, products, brands, experiences or solutions are not taken into account.

* This approach does require extensive data gathering.

* Scoring is personal and subjective.

* There is no hard and fast rule on how to weight elements.

* The GE matrix offers a broad strategy and does not indicate how best to implement it.

Porter 5 forces analysis


[Figure: A graphical representation of Porter's Five Forces]

Porter's 5 forces analysis is a framework for industry analysis and business strategy development, created by Michael E. Porter of Harvard Business School in 1979. It uses

concepts developed in Industrial Organization (IO) economics to derive 5 forces that

determine the competitive intensity and therefore attractiveness of a market. Attractiveness

in this context refers to the overall industry profitability. An "unattractive" industry is one

where the combination of forces acts to drive down overall profitability. A very unattractive

industry would be one approaching "pure competition". Porter referred to these forces as the

micro environment, to contrast it with the more general term macro environment. They

consist of those forces close to a company that affect its ability to serve its customers and

make a profit. A change in any of the forces normally requires a company to re-assess the

marketplace. The overall industry attractiveness does not imply that every firm in the

industry will return the same profitability. Firms are able to apply their core competences,

business model or network to achieve a profit above the industry average. A clear example

of this is the airline industry. As an industry, profitability is low, and yet individual companies, by applying unique business models, have been able to make a return in excess of the

industry average.

Strategy consultants occasionally use Porter's five forces framework when making a

qualitative evaluation of a firm's strategic position. However, for most consultants, the

framework is only a starting point or 'check-list' they might use. Like all general frameworks,

an analysis that uses it to the exclusion of specifics about a particular situation is considered

naive.

Porter's Five Forces include three forces from 'horizontal' competition: threat of substitute

products, the threat of established rivals, and the threat of new entrants; and two forces

from 'vertical' competition: the bargaining power of suppliers, bargaining power of

customers.


Contents

1 Model/framework
2 References
3 See also
4 External links

Model/framework

The threat of substitute products

The existence of close substitute products increases the propensity of customers to switch

to alternatives in response to price increases (high elasticity of demand).

buyer propensity to substitute

relative price performance of substitutes

buyer switching costs

perceived level of product differentiation

The threat of the entry of new competitors

Profitable markets that yield high returns will draw firms. This results in many new entrants,

which will effectively decrease profitability. Unless the entry of new firms can be blocked by

incumbents, the profit rate will fall towards a competitive level (perfect competition).

the existence of barriers to entry (patents, rights, etc.)

economies of product differences

brand equity

switching costs or sunk costs

capital requirements

access to distribution

absolute cost advantages

learning curve advantages

expected retaliation by incumbents

government policies


The intensity of competitive rivalry

For most industries, this is the major determinant of the competitiveness of the industry.

Sometimes rivals compete aggressively and sometimes rivals compete in non-price

dimensions such as innovation, marketing, etc.

number of competitors

rate of industry growth

intermittent industry overcapacity

exit barriers

diversity of competitors

informational complexity and asymmetry

fixed cost allocation per value added

level of advertising expense

Economies of scale

Sustainable competitive advantage through improvisation

The bargaining power of customers

Also described as the market of outputs: the ability of customers to put the firm under pressure, which also affects the customer's sensitivity to price changes.

buyer concentration to firm concentration ratio

bargaining leverage, particularly in industries with high fixed costs

buyer volume

buyer switching costs relative to firm switching costs

buyer information availability

ability to backward integrate

availability of existing substitute products

buyer price sensitivity

differential advantage (uniqueness) of industry products



The bargaining power of suppliers

Also described as market of inputs. Suppliers of raw materials, components, and services

(such as expertise) to the firm can be a source of power over the firm. Suppliers may refuse

to work with the firm, or e.g. charge excessively high prices for unique resources.

supplier switching costs relative to firm switching costs

degree of differentiation of inputs

presence of substitute inputs

supplier concentration to firm concentration ratio

threat of forward integration by suppliers relative to the threat of backward

integration by firms

cost of inputs relative to selling price of the product

This 5 forces analysis is just one part of the complete Porter strategic models. The other

elements are the value chain and the generic strategies.
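One way to see how the five forces jointly determine attractiveness is a toy scoring sketch (entirely illustrative: Porter's framework is qualitative, and these ratings and the averaging rule are invented):

    def industry_attractiveness(forces):
        # forces: force name -> intensity rating from 1 (weak) to 5 (strong).
        # Stronger forces squeeze profitability, so attractiveness moves inversely.
        avg_intensity = sum(forces.values()) / len(forces)
        return 6 - avg_intensity        # 1 (very unattractive) .. 5 (very attractive)

    airline_like = {"substitutes": 4, "new entrants": 3, "rivalry": 5,
                    "buyer power": 4, "supplier power": 4}
    print(industry_attractiveness(airline_like))   # 2.0 -> an unattractive industry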

Competitive advantage

Competitive advantage is a position a firm occupies against its competitors.

The two forms of competitive advantage are cost advantage and differentiation advantage.[1] Cost advantage occurs when a firm delivers the same services as its competitors but at a lower cost. Differentiation advantage occurs when a firm delivers greater services for the same price as its competitors. They are collectively known as positional advantages because they denote the firm's position in its industry as a leader in either superior services or cost.

Many forms of competitive advantage cannot be sustained indefinitely because the promise

of economic rents invites competitors to duplicate the competitive advantage held by any

one firm.

A firm possesses a sustainable competitive advantage when other firms have been unable to duplicate or imitate its value-creating processes and position.[2]


Sustainable competitive advantage results, according to the Resource-based View theory, in

the creation of above-normal (or supranormal) rents in the long run.

Analysis of competitive advantage is the subject of numerous theories of strategy, including

the five forces model pioneered by Michael Porter of the Harvard Business School.

The primary factors of competitive advantage are innovation, reputation and relationships.

SWOT analysis

SWOT Analysis is a strategic planning method used to evaluate the Strengths,

Weaknesses, Opportunities, and Threats involved in a project or in a business venture. It

involves specifying the objective of the business venture or project and identifying the

internal and external factors that are favourable and unfavourable to achieving that

objective. The technique is credited to Albert Humphrey, who led a research project at

Stanford University in the 1960s and 1970s using data from Fortune 500 companies.

Contents

1 Strategic

and Creative

Use of SWOT

Analysis

o 1.1

Strateg

ic Use:

Orienti

ng

SWOTs

to An

Objecti

ve

o 1.2

Creativ

e Use

of

SWOTs:

Genera

Strategic and

Creative Use of

SWOT Analysis

Strategic Use:

Orienting SWOTs to

An Objective

Illustrative diagram

of SWOT analysis

If a SWOT analysis

Weaknesses

Page 134: Strategic Notes

ting

Strateg

ies

2 Matching

and

converting

o 2.1

Evidenc

e on

the Use

of

SWOT

3 Internal and

external

factors

4 Use of

SWOT

Analysis

5 SWOT-

landscape

analysis

6 Corporate

planning

o 6.1

Marketi

ng

7 See also

8 References

9 External

links

does not start with

defining a desired

end state or

objective, it runs

the risk of being

useless. A SWOT

analysis may be

incorporated into

the strategic

planning model. An

example of a

strategic planning

technique that

incorporates an

objective-driven

SWOT analysis is

SCAN analysis.

Strategic Planning,

including SWOT and

SCAN analysis, has

been the subject of

much research.

Strengt

hs:

attribut

es of

the

organiz

ation

that

are

helpful

to

achievi

ng the

Page 135: Strategic Notes

objecti

ve.

Weakn

esses:

attribut

es of

the

organiz

ation

that

are

harmful

to

achievi

ng the

objecti

ve.

Opport

unities:

externa

l

conditi

ons

that

are

helpful

to

achievi

ng the

objecti

ve.

Threats

:

externa

l

conditi

Page 136: Strategic Notes

ons

which

could

do

damag

e to the

busines

s's

perfor

mance.

Identification of

SWOTs is essential

because subsequent

steps in the process

of planning for

achievement of the

selected objective

may be derived

from the SWOTs.

First, the decision

makers have to

determine whether

the objective is

attainable, given

the SWOTs. If the

objective is NOT

attainable a

different objective

must be selected

and the process

repeated.

Creative Use of

SWOTs: Generating

Page 137: Strategic Notes

Strategies

If, on the other

hand, the objective

seems attainable,

the SWOTs are used

as inputs to the

creative generation

of possible

strategies, by

asking and

answering each of

the following four

questions, many

times:

How

can we

Use

each

Strengt

h?

How

can we

Improv

e each

Weakn

ess?

How

can we

Exploit

each

Opport

unity?

How

Page 138: Strategic Notes

can we

Mitigat

e each

Threat?

Ideally a cross-

functional team or a

task force that

represents a broad

range of

perspectives should

carry out the SWOT

analysis. For

example, a SWOT

team may include

an accountant, a

salesperson, an

executive manager,

an engineer, and an

ombudsman.

Matching and

converting

Another way of

utilizing SWOT is

matching and

converting.

Matching is used to

find competitive

advantages by

matching the

strengths to

opportunities.

Converting is to

apply conversion

Page 139: Strategic Notes

strategies to

convert threats or

weaknesses into

strengths or

opportunities. [1]

An example of

conversion strategy

is to find new

markets.

If the threats or

weaknesses cannot

be converted a

company should try

to minimize or avoid

them.[2]

Evidence on the Use

of SWOT

SWOT analysis may

limit the strategies

considered in the

evaluation. "In

addition, people

who use SWOT

might conclude that

they have done an

adequate job of

planning and ignore

such sensible things

as defining the

firm's objectives or

calculating ROI for

alternate

strategies." [3]

Page 140: Strategic Notes

Findings from

Menon et al. (1999) [4] and Hill and

Westbrook (1997) [5]

have shown that

SWOT may harm

performance. As an

alternative to

SWOT, J. Scott

Armstrong

describes a 5-step

approach

alternative that

leads to better

corporate

performance.[6]

These criticisms are

addressed to an old

version of SWOT

analysis that

precedes the SWOT

analysis described

above under the

heading "Strategic

and Creative Use of

SWOT Analysis."

This old version did

not require that

SWOTs be derived

from an agreed

upon objective.

Examples of SWOT

analyses that do not

state an objective

are provided below

Page 141: Strategic Notes

under "Human

Resources" and

"Marketing."

Internal and

external factors

The aim of any

SWOT analysis is to

identify the key

internal and

external factors

that are important

to achieving the

objective. These

come from within

the company's

unique value chain.

SWOT analysis

groups key pieces

of information into

two main

categories:

Interna

l

factors

– The

strengt

hs and

weakne

sses

interna

l to the

organiz

ation. -

Use a

Page 142: Strategic Notes

PRIMO-

F

analysi

s to

help

identify

factors

Externa

l

factors

– The

opport

unities

and

threats

present

ed by

the

externa

l

environ

ment to

the

organiz

ation. -

Use a

PEST or

PESTLE

analysi

s to

help

identify

factors

The internal factors

may be viewed as

strengths or

Page 143: Strategic Notes

weaknesses

depending upon

their impact on the

organization's

objectives. What

may represent

strengths with

respect to one

objective may be

weaknesses for

another objective.

The factors may

include all of the

4P's; as well as

personnel, finance,

manufacturing

capabilities, and so

on. The external

factors may include

macroeconomic

matters,

technological

change, legislation,

and socio-cultural

changes, as well as

changes in the

marketplace or

competitive

position. The results

are often presented

in the form of a

matrix.

SWOT analysis is

just one method of

categorization and

Page 144: Strategic Notes

has its own

weaknesses. For

example, it may

tend to persuade

companies to

compile lists rather

than think about

what is actually

important in

achieving

objectives. It also

presents the

resulting lists

uncritically and

without clear

prioritization so

that, for example,

weak opportunities

may appear to

balance strong

threats.

It is prudent not to

eliminate too

quickly any

candidate SWOT

entry. The

importance of

individual SWOTs

will be revealed by

the value of the

strategies it

generates. A SWOT

item that produces

valuable strategies

is important. A

Page 145: Strategic Notes

SWOT item that

generates no

strategies is not

important.

Use of SWOT

Analysis

The usefulness of

SWOT analysis is

not limited to profit-

seeking

organizations.

SWOT analysis may

be used in any

decision-making

situation when a

desired end-state

(objective) has been

defined. Examples

include: non-profit

organizations,

governmental units,

and individuals.

SWOT analysis may

also be used in pre-

crisis planning and

preventive crisis

management.

SWOT-landscape

analysis

Page 146: Strategic Notes

The SWOT-

landscape grabs

different managerial

situations by

visualizing and

foreseeing the

dynamic

performance of

comparable objects

according to

findings by Brendan

Kitts, Leif Edvinsson

and Tord Beding

(2000).[7]

Changes in relative

performance are

continuously

identified. Projects

(or other units of

measurements) that

could be potential

risk or opportunity

objects are

highlighted.

SWOT-landscape

also indicates which

underlying

strength/weakness

factors that have

Page 147: Strategic Notes

had or likely will

have highest

influence in the

context of value in

use (for ex. capital

value fluctuations).

Corporate planning

As part of the

development of

strategies and plans

to enable the

organization to

achieve its

objectives, then

that organization

will use a

systematic/rigorous

process known as

corporate planning.

SWOT alongside

PEST/PESTLE can be

used as a basis for

the analysis of

business and

environmental

factors.[8]

Set

objecti

ves –

definin

g what

the

organiz

ation is

Page 148: Strategic Notes

going

to do

Environ

mental

scannin

g

o I

n

t

e

r

n

a

l

a

p

p

r

a

i

s

a

l

s

o

f

t

h

e

o

r

g

a

n

i

z

Page 149: Strategic Notes

a

ti

o

n

'

s

S

W

O

T

,

t

h

i

s

n

e

e

d

s

t

o

i

n

c

l

u

d

e

a

n

a

s

s

e

s

s

Page 150: Strategic Notes

m

e

n

t

o

f

t

h

e

p

r

e

s

e

n

t

s

it

u

a

ti

o

n

a

s

w

e

ll

a

s

a

p

o

r

t

f

o

Page 151: Strategic Notes

li

o

o

f

p

r

o

d

u

c

t

s

/

s

e

r

v

i

c

e

s

a

n

d

a

n

a

n

a

l

y

s

i

s

o

f

t

Page 152: Strategic Notes

h

e

p

r

o

d

u

c

t

/

s

e

r

v

i

c

e

li

f

e

c

y

c

l

e

Analysi

s of

existin

g

strateg

ies,

this

should

determ

ine

relevan

Page 153: Strategic Notes

ce from

the

results

of an

interna

l/extern

al

apprais

al. This

may

include

gap

analysi

s which

will

look at

environ

mental

factors

Strateg

ic

Issues

defined

– key

factors

in the

develo

pment

of a

corpora

te plan

which

needs

to be

addres

sed by

Page 154: Strategic Notes

the

organiz

ation

Develo

p

new/re

vised

strateg

ies –

revised

analysi

s of

strateg

ic

issues

may

mean

the

objecti

ves

need to

change

Establi

sh

critical

success

factors

– the

achieve

ment of

objecti

ves and

strateg

y

implem

entatio

Page 155: Strategic Notes

n

Prepar

ation of

operati

onal,

resourc

e,

project

s plans

for

strateg

y

implem

entatio

n

Monitor

ing

results

mappin

g

against

plans,

taking

correcti

ve

action

which

may

mean

amendi

ng

objecti

ves/str

ategies

Page 156: Strategic Notes

.[9]

Marketing

In many competitor

analyses, marketers

build detailed

profiles of each

competitor in the

market, focusing

especially on their

relative competitive

strengths and

weaknesses using

SWOT analysis.

Marketing

managers will

examine each

competitor's cost

structure, sources

of profits, resources

and competencies,

competitive

positioning and

product

differentiation,

degree of vertical

integration,

historical responses

to industry

developments, and

other factors.

Marketing

management often

finds it necessary to

invest in research

Page 157: Strategic Notes

to collect the data

required to perform

accurate marketing

analysis.

Accordingly,

management often

conducts market

research

(alternately

marketing research)

to obtain this

information.

Marketers employ a

variety of

techniques to

conduct market

research, but some

of the more

common include:

Qualita

tive

marketi

ng

researc

h, such

as

focus

groups

Quantit

ative

marketi

ng

researc

h, such

as

Page 158: Strategic Notes

statisti

cal

surveys

Experi

mental

techniq

ues

such as

test

market

s

Observ

ational

techniq

ues

such as

ethnog

raphic

(on-

site)

observ

ation

Marketi

ng

manag

ers

may

also

design

and

overse

e

various

environ

mental

Page 159: Strategic Notes

scannin

g and

compet

itive

intellig

ence

process

es to

help

identify

trends

and

inform

the

compan

y's

marketi

ng

analysi

s.

Using SWOT to

analyse the market

position of a small

management

consultancy with

specialism in HRM.[10]

Strengths
Expertise at partner level in HRM consultancy
Track record – successful assignments
Reputation in marketplace
Well established position with a well defined market niche

Weaknesses
Shortage of consultants at operating level rather than partner level
Unable to deal with multi-disciplinary assignments because of size or lack of ability

Opportunities
Identified market for consultancy in areas other than HRM
Large consultancies operating at a minor level

Threats
Other small consultancies looking to invade the marketplace

Strengths may be market related (product quality, packaging, advertisement, service, distribution channel), finance related (optimum debt/equity ratio, number of shareholders, inventory size, optimum use of the financial resources, low cost of borrowings, proper investment of the financial products), operational related (low cost, higher productivity, excellent quality, modernized technology), research and development related, or HR related.
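A minimal sketch of how such a SWOT profile might be held as a simple data structure for reporting (the Python representation below is an illustration, not part of the SWOT method itself; the entries are abridged from the table above):

    from dataclasses import dataclass, field

    @dataclass
    class SWOT:
        # Internal appraisal of the organization
        strengths: list[str] = field(default_factory=list)
        weaknesses: list[str] = field(default_factory=list)
        # External appraisal of the environment
        opportunities: list[str] = field(default_factory=list)
        threats: list[str] = field(default_factory=list)

    consultancy = SWOT(
        strengths=["Expertise at partner level in HRM consultancy",
                   "Track record - successful assignments"],
        weaknesses=["Shortage of consultants at operating level"],
        opportunities=["Identified market for consultancy in areas other than HRM"],
        threats=["Other small consultancies looking to invade the marketplace"],
    )

    # Print the profile quadrant by quadrant.
    for quadrant, items in vars(consultancy).items():
        print(quadrant.capitalize())
        for item in items:
            print(" -", item)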

Porter generic strategies

Michael Porter has described a category scheme consisting of three general types of strategies that are commonly used by businesses to achieve and maintain competitive advantage. These three generic strategies are defined along two dimensions: strategic scope and strategic strength. Strategic scope is a demand-side dimension (Porter was originally an engineer, then an economist, before he specialized in strategy) and looks at the size and composition of the market you intend to target. Strategic strength is a supply-side dimension and looks at the strength or core competency of the firm. In particular, he identified two competencies that he felt were most important: product differentiation and product cost (efficiency).

He originally ranked each of the three dimensions (level of differentiation, relative product cost, and scope of target market) as either low, medium, or high, and juxtaposed them in a three-dimensional matrix. That is, the category scheme was displayed as a 3 by 3 by 3 cube. But most of the 27 combinations were not viable.
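As a rough illustration of that cube, the sketch below enumerates the 27 combinations and tags the corners that correspond to the three generic strategies described in the next section (the tuple encoding and corner choices are a hypothetical simplification for illustration, not Porter's own notation):

    from itertools import product

    LEVELS = ("low", "medium", "high")

    # Each combination is (differentiation, relative product cost, market scope).
    cube = list(product(LEVELS, repeat=3))
    print(len(cube))  # 27 combinations in total

    # Corners of the cube read as the three generic strategies
    # (an illustrative mapping only):
    generic = {
        ("low", "low", "high"): "cost leadership",        # low cost, broad scope
        ("high", "high", "high"): "differentiation",      # distinctive, broad scope
        ("high", "medium", "low"): "focus / segmentation", # narrow scope
    }

    for combo in cube:
        label = generic.get(combo, "not viable")
        print(combo, "->", label)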

Porter's Generic Strategies

In his 1980 classic Competitive Strategy: Techniques for Analysing Industries and Competitors, Porter simplifies the scheme by reducing it down to the three best strategies. They are cost leadership, differentiation, and market segmentation (or focus). Market segmentation is narrow in scope while both cost leadership and differentiation are relatively broad in market scope.

Empirical research on the profit impact of marketing strategy indicated that firms with a high market share were often quite profitable, but so were many firms with low market share. The least profitable firms were those with moderate market share. This was sometimes referred to as the hole-in-the-middle problem. Porter's explanation of this is that firms with high market share were successful because they pursued a cost leadership strategy, and firms with low market share were successful because they used market segmentation to focus on a small but profitable market niche. Firms in the middle were less profitable because they did not have a viable generic strategy.

Combining multiple strategies is successful in only one case. Combining a market segmentation strategy with a product differentiation strategy is an effective way of matching your firm's product strategy (supply side) to the characteristics of your target market segments (demand side). But combinations like cost leadership with product differentiation are hard (but not impossible) to implement due to the potential for conflict between cost minimization and the additional cost of value-added differentiation.

Since that time, some commentators have made a distinction between cost leadership, that is, low cost strategies, and best cost strategies. They claim that a low cost strategy is rarely able to provide a sustainable competitive advantage. In most cases firms end up in price wars. Instead, they claim a best cost strategy is preferred. This involves providing the best value for a relatively low price.

Contents

1 Cost Leadership Strategy

2 Differentiation Strategy

3 Focus Strategy

4 Recent developments

5 Criticisms of generic strategies

Cost Leadership Strategy

This strategy emphasizes efficiency. By producing high volumes of standardized products,

the firm hopes to take advantage of economies of scale and experience curve effects. The

product is often a basic no-frills product that is produced at a relatively low cost and made

available to a very large customer base. Maintaining this strategy requires a continuous

search for cost reductions in all aspects of the business. The associated distribution strategy


is to obtain the most extensive distribution possible. Promotional strategy often involves

trying to make a virtue out of low cost product features.

To be successful, this strategy usually requires a considerable market share advantage or

preferential access to raw materials, components, labour, or some other important input.

Without one or more of these advantages, the strategy can easily be mimicked by

competitors. Successful implementation also benefits from:

process engineering skills

products designed for ease of manufacture

sustained access to inexpensive capital

close supervision of labour

tight cost control

incentives based on quantitative targets
ensuring that costs are always kept at the minimum possible level.

Examples include low-cost airlines such as EasyJet and Southwest Airlines, and

supermarkets such as KwikSave.

When a firm designs, produces and markets a product more efficiently than competitors

such a firm has implemented a cost leadership strategy (Allen et al 2006, p.25). Cost

reduction strategies across the activity cost chain will represent low cost leadership (Tehrani

2003, p.610, Beheshti 2004, p. 118). Attempts to reduce costs will spread through the whole

business process from manufacturing to the final stage of selling the product. Any processes

that do not contribute towards minimization of cost base should be outsourced to other

organisations with the view of maintaining a low cost base (Akan et al 2006, p.48). Low costs

will permit a firm to sell relatively standardised products that offer features acceptable to

many customers at the lowest competitive price and such low prices will gain competitive

advantage and increase market share (Porter 1980 cited by Srivannboon 2006, p.88; Porter

1979;1987;1986, Bauer and Colgan 2001; Hyatt 2001; Anon 1988; Davidson 2001; Cross

1999 cited by Allen and Helms 2006, p.435). These writings explain that cost efficiency gained in the whole process enables a firm to set a price lower than the competition, which ultimately results in high sales, since competitors cannot match such a low cost base. If the low cost base can be maintained over longer periods of time, it will ensure a consistent increase in market share and stable profits, hence resulting in superior performance. However, all writings direct us to the understanding that sustainability of the

competitive advantage reached through low cost strategy will depend on the ability of a

competitor to match or develop a lower cost base than the existing cost leader in the

market.

A firm attempts to maintain a low cost base by controlling production costs, increasing its capacity utilization, controlling material supply or product distribution, and minimizing other costs, including R&D and advertising (Prajogo 2007, p.70). Mass production, mass distribution, economies of scale, technology, product design, learning curve benefit, a work force dedicated to low cost production, a reduced sales force, and less spending on marketing will further help a firm to maintain a low cost base (Freeman 2003, p.86; Trogovicky et al 2005,

p.18). Decision makers in a cost leadership firm will be compelled to closely scrutinise the

cost efficiency of the processes of the firm. Maintaining the low cost base will become the

primary determinant of the cost leadership strategy. For low cost leadership to be effective a

firm should have a large market share (Robinson and Chiang 2000, p.857; Hyatt 2001 cited

by Allen and Helms 2006, p.435). New entrants or firms with a smaller market share may

not benefit from such strategy since mass production, mass distribution and economies of

scale will not make an impact on such firms. Low cost leadership becomes a viable strategy

only for larger firms. Market leaders may strengthen their positioning by advantages

attained through scale and experience in a low cost leadership strategy. But is there any superiority in a low cost strategy over other strategic typologies? Can a firm that adopts a low cost strategy outperform another firm with a different competitive strategy? If a firm's costs are low enough, it may remain profitable even in a highly competitive scenario; hence the low cost base becomes a defensive mechanism against competitors (Kim et al 2004, p.21). Further, they mention

that such low cost may act as entry barriers since new entrants require huge capital to

produce goods or services at the same or lesser price than a cost leader. As discussed in the

academic framework of competitive advantage, raising barriers for competition will result in sustainable competitive advantage, and in consolidation with the above writings we may establish that a low cost competitive strategy may generate a sustainable competitive advantage.

Further, consider the factors mentioned above that facilitate a firm in maintaining a low cost base: some, such as technology, may be developed through innovation (described as creative accumulation in Schumpeterian innovation), and some may even be resources developed by the firm, such as long term healthy relationships built with distributors to maintain cost effective distribution channels or supply chains (the inimitable, unique, valuable, non-transferable resources mentioned in the RBV). Similarly, economies of scale may be an ultimate result of a commitment made by a firm, such as capital investments for expansions

(as discussed in the commitment approach). Also raising barriers for competition by virtue of

the low cost base that enables the low prices will result in strong strategic positioning in the

market (discussed in the IO structural approach). These significant strengths align with the

four perspectives of sustainable competitive advantage mentioned in the early parts of this

literature review. Low cost leadership could be considered as a competitive strategy that will

create a sustainable competitive advantage.

However, low cost leadership carries a disadvantage: less customer loyalty (Vokurka and Davis 2004, p.490; Cross 1999 cited by Allen and Helms 2006, p.436). Relatively low prices will result in creating a negative attitude towards the quality of the product in the mindset of the customers (Priem 2007, p.220). Customers' impressions of such products will enhance the tendency to shift towards a product which might be higher in price but projects an image of quality. On an analytical, in-depth view of the low cost strategy, it reflects the capability to generate a competitive advantage, but the development and maintenance of a low cost base becomes a vital, decisive task.

Differentiation Strategy

Differentiation is aimed at the broad market and involves the creation of a product or service that is perceived throughout its industry as unique. The company or business unit may then charge a premium for its product. This specialty can be associated with design, brand image, technology, features, dealers, network, or customer service. Differentiation is a viable strategy for earning above average returns in a specific business because the resulting brand loyalty lowers customers' sensitivity to price. Increased costs can usually be passed on to the buyers. Buyer loyalty can also serve as an entry barrier: new firms must develop their own distinctive competence to differentiate their products in some way in

order to compete successfully. Examples of the successful use of a differentiation strategy

are Hero Honda, Asian Paints, HLL, Nike athletic shoes, Apple Computer, and Mercedes-Benz

automobiles. Research does suggest that a differentiation strategy is more likely to generate

higher profits than is a low cost strategy because differentiation creates a better entry

barrier. A low-cost strategy is more likely, however, to generate increases in market share.

Focus Strategy

In this strategy the firm concentrates on a select few target markets. It is also called a focus

strategy or niche strategy. It is hoped that by focusing your marketing efforts on one or two

narrow market segments and tailoring your marketing mix to these specialized markets, you


can better meet the needs of that target market. The firm typically looks to gain a

competitive advantage through effectiveness rather than efficiency. It is most suitable for

relatively small firms but can be used by any company. As a focus strategy it may be used

to select targets that are less vulnerable to substitutes or where competition is weakest, to

earn above-average return on investment.

Recent developments

Michael Treacy and Fred Wiersema (1993) have modified Porter's three strategies to

describe three basic "value disciplines" that can create customer value and provide a

competitive advantage. They are operational excellence, product leadership, and customer

intimacy.

Criticisms of generic strategies

Several commentators have questioned the use of generic strategies claiming they lack

specificity, lack flexibility, and are limiting.

In particular, Miller (1992) questions the notion of being "caught in the middle". He claims

that there is a viable middle ground between strategies. Many companies, for example, have

entered a market as a niche player and gradually expanded. According to Baden-Fuller and

Stopford (1992) the most successful companies are the ones that can resolve what they call

"the dilemma of opposites".

A popular post-Porter model was presented by W. Chan Kim and Renée Mauborgne in their

1999 Harvard Business Review article "Creating New Market Space". In this article they

described a "value innovation" model in which companies must look outside their present

paradigms to find new value propositions. Their approach fundamentally goes against

Porter's concept that a firm must focus either on cost leadership or on differentiation. They

later went on to publish their ideas in the book Blue Ocean Strategy.

An up-to-date critique of generic strategies and their limitations, including Porter, appears in

Bowman, C. (2008) Generic strategies: a substitute for thinking?, 360° The Ashridge Journal,

Spring, pp. 6 - 11 [1]


From the three generic business strategies, Porter stresses the idea that only one strategy should be adopted by a firm, and that failure to do so will result in a "stuck in the middle" scenario (Porter 1980 cited by Allen et al 2006, Torgovicky et al 2005). He discusses the idea that practising more than one strategy will lose the entire focus of the organisation, hence a clear direction for the future trajectory could not be established. The argument is based on the fundamental premise that differentiation will incur costs to the firm, which clearly contradicts the basis of a low cost strategy, and, on the other hand, relatively standardised products with features acceptable to many customers will not carry any differentiation (Panayides 2003, p.126); hence, cost leadership and differentiation strategy will be mutually exclusive (Porter 1980 cited by Trogovicky et al 2005, p.20). The two focal objectives of low cost leadership and differentiation clash with each other, resulting in no proper direction for a firm.

However, contrary to the rationalisation of Porter, contemporary research has shown evidence of firms practising such a "hybrid strategy". Hambrick (1983 cited by Kim et al 2004, p.25) identified successful organisations that adopt a mixture of low cost and differentiation strategy. Research writings of Davis (1984 cited by Prajogo 2007, p.74) state that firms employing the hybrid business strategy (low cost and differentiation strategy) outperform the ones adopting one generic strategy. Sharing the same viewpoint, Hill (1988 cited by Akan et al 2006, p.49) challenged Porter's concept regarding the mutual exclusivity of low cost and differentiation strategy, and further argued that a successful combination of those two strategies will result in sustainable competitive advantage. According to Wright and others (1990 cited by Akan et al 2006, p.50), multiple business strategies are required to respond effectively to any environmental condition. In the mid to late 1980s, when environments were relatively stable, there was no requirement for flexibility in business strategies, but survival in the rapidly changing, highly unpredictable present market contexts will require flexibility to face any contingency (Anderson 1997, Goldman et al 1995, Pine 1993 cited by Radas 2005, p.197). Eleven years later, Porter revised his thinking and accepted the fact that a hybrid business strategy could exist (Porter cited by Prajogo 2007, p.70), writing in the following manner:

…Competitive advantage can be divided into two basic types: lower costs than rivals, or the

ability to differentiate and command a premium price that exceeds the extra costs of doing

so. Any superior performing firm has achieved one type of advantage, the other or both

(1991, p.101).

Though Porter had a fundamental rationalisation in his concept about the invalidity of hybrid

business strategy, the highly volatile and turbulent market conditions will not permit survival

of rigid business strategies since long term establishment will depend on the agility and the

quick responsiveness towards market and environmental conditions. Market and environmental turbulence will have drastic implications for the very foundations of a firm.

If a firm’s business strategy could not cope with the environmental and market

contingencies, long term survival becomes unrealistic. Diverging the strategy into different

avenues with the view to exploit opportunities and avoid threats created by market

conditions will be a pragmatic approach for a firm.

Critical analysis done separately for cost leadership strategy and differentiation strategy

identifies elementary value in both strategies in creating and sustaining a competitive

advantage. Consistent performance, superior to the competition, could be reached on stronger foundations in the event a "hybrid strategy" is adopted. Depending on the market and competitive conditions, the hybrid strategy should be adjusted regarding the extent to which each generic strategy (cost leadership or differentiation) should be given priority in practice.

Value chain

[Figure: a popular visualization of the value chain]

The value chain, also known as value chain analysis, is a concept from business

management that was first described and popularized by Michael Porter in his 1985 best-

seller, Competitive Advantage: Creating and Sustaining Superior Performance.

A value chain is a chain of activities. Products pass through all activities of the chain in order

and at each activity the product gains some value. The chain of activities gives the products

more added value than the sum of added values of all activities. It is important not to mix

the concept of the value chain with the costs occurring throughout the activities. A diamond

cutter can be used as an example of the difference. The cutting activity may have a low

cost, but the activity adds much of the value of the end product, since a rough diamond is

significantly less valuable than a cut diamond.
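A small sketch of that distinction between cost and value added, using the diamond example (the activities and figures are invented for illustration):

    # Each link in the chain: (activity, cost to perform, value added).
    chain = [
        ("mining (inbound logistics)", 500.0, 800.0),
        ("cutting (operations)", 50.0, 2000.0),   # cheap activity, most of the value
        ("marketing and sales", 150.0, 400.0),
    ]

    total_cost = sum(cost for _, cost, _ in chain)
    total_value = sum(value for _, _, value in chain)

    for activity, cost, value in chain:
        print(f"{activity}: cost {cost:.0f}, value added {value:.0f}")

    # The margin is total value created minus total cost incurred.
    print(f"total cost {total_cost:.0f}, total value {total_value:.0f}, "
          f"margin {total_value - total_cost:.0f}")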


The value chain categorizes the generic value-adding activities of an organization. The

"primary activities" include: inbound logistics, operations (production), outbound logistics,

marketing and sales (demand), and services (maintenance). The "support activities" include:

administrative infrastructure management, human resource management, information

technology, and procurement. The costs and value drivers are identified for each value

activity. The value chain framework quickly made its way to the forefront of management

thought as a powerful analysis tool for strategic planning. Its ultimate goal is to maximize

value creation while minimizing costs.

The concept has been extended beyond individual organizations. It can apply to whole

supply chains and distribution networks. The delivery of a mix of products and services to

the end customer will mobilize different economic factors, each managing its own value

chain. The industry wide synchronized interactions of those local value chains create an

extended value chain, sometimes global in extent. Porter terms this larger interconnected

system of value chains the "value system." A value system includes the value chains of a

firm's suppliers (and their suppliers all the way back), the firm itself, the firm's distribution

channels, and the firm's buyers (and presumably extended to the buyers of their products,

and so on).

Capturing the value generated along the chain is the new approach taken by many

management strategists. For example, a manufacturer might require its parts suppliers to

be located nearby its assembly plant to minimize the cost of transportation. By exploiting

the upstream and downstream information flowing along the value chain, a firm may try to bypass intermediaries, creating new business models, or in other ways create improvements in its value system.

The Supply-Chain Council, a global trade consortium with over 700 member companies, governmental, academic, and consulting groups participating over the last 10 years, manages the de facto universal reference model for the supply chain, covering Planning, Procurement, Manufacturing, Order Management, Logistics, Returns, and Retail; Product and Service Design, including Design Planning, Research, Prototyping, Integration, Launch and Revision; and Sales, including CRM, Service Support, Sales, and Contract Management. These are congruent with the Porter framework. The "SCOR" framework has been adopted by

hundreds of companies as well as national entities as a standard for business excellence,

and the US DOD has adopted the newly-launched "DCOR" framework for product design as a

standard to use for managing their development processes. In addition to process elements,

these reference frameworks also maintain a vast database of standard process metrics


aligned to the Porter model, as well as a large and constantly researched database of

prescriptive universal best practices for process execution.

Value Reference Model

A Value Reference Model (VRM) developed by the global not for profit Value Chain Group

offers an open source semantic dictionary for value chain management encompassing one

unified reference framework representing the process domains of product development,

customer relations and supply networks.

The integrated process framework guides the modeling, design, and measurement of

business performance by uniquely encompassing the plan, govern and execute

requirements for the design, product, and customer aspects of business.

The Value Chain Group claims VRM to be next generation Business Process Management

that enables value reference modeling of all business processes and provides product

excellence, operations excellence, and customer excellence.

Six business functions of the Value Chain:

Research and Development

Design of Products, Services, or Processes

Production

Marketing & Sales

Distribution

Customer Service

Business process management



Business process management (BPM) is a method of efficiently aligning an organization

with the wants and needs of clients. It is a holistic management approach that promotes

business effectiveness and efficiency while striving for innovation, flexibility and integration

with technology. As organizations strive for attainment of their objectives, BPM attempts to

continuously improve processes - the process to define, measure and improve your

processes – a ‘process optimization' process.

Roughly speaking, the idea of a (business) process is as traditional as the concepts of tasks, departments, production, and outputs. On the other hand, the current management and improvement approach, with formal definitions and technical modeling, has been around since the early 1990s (see business process modeling). However, in recent years there has been common confusion in the IT community, where the term 'business process' is often used as a synonym for the management of middleware processes or for integrating application software tasks. This must be kept in mind when reading software engineering papers that use this term or 'business process modeling'.

Contents

1 Overview

o 1.1 Technology

2 Business process management life-cycle

o 2.1 Design

o 2.2 Modelling

o 2.3 Execution

o 2.4 Monitoring


o 2.5 Optimisation

3 Future developments

4 Business process management in

practice

o 4.1 Use of software

5 Standardization

6 See also

7 Bibliography

8 External links

9 Notes

Overview

A business process is a collection of related, structured activities that produce a service or product that meets the needs of a client. These processes are critical to any organization as

they generate revenue and often represent a significant proportion of costs.

BPM articles and pundits often discuss BPM from one of two viewpoints: people and

technology.

Technology

BPM System (BPMS) is sometimes seen as the whole of BPM. Some see that information

moves between enterprise software packages and immediately think of Service Oriented

Architecture (SOA); while others believe that modeling is the only way to create the ‘perfect’

process, so they think of modeling as BPM.

Both of these concepts go into the definition of Business Process Management. For instance,

the size and complexity of daily tasks often requires the use of technology to model

efficiently. Bringing the power of technology to staff is part of the BPM credo. Many think of BPM as the bridge between Information Technology (IT) and Business.


A BPMS can be industry-specific and can be driven by software such as Agilent OpenLAB BPM. Other products may focus on Enterprise Resource Planning and warehouse management. Validation of a BPMS is another technical issue which vendors and users need to be aware of if regulatory compliance is mandatory [1], [2], [3], [4]. The task can be performed either by an authenticated third party or by users themselves. Either way, validation documentation needs to be generated. The validation document can usually either be published officially or retained by users [5], [6], [7], [8].

Business process management life-cycle

The activities which constitute business process management can be grouped into five

categories: design, modeling, execution, monitoring, and optimization.

Design

Process Design encompasses both the identification of existing processes and designing the

"to-be" process. Areas of focus include: representation of the process flow, the actors within

it, alerts & notifications, escalations, Standard Operating Procedures, Service Level

Agreements, and task hand-over mechanisms.

Good design reduces the number of problems over the lifetime of the process. Whether or

not existing processes are considered, the aim of this step is to ensure that a correct and

efficient theoretical design is prepared.

The proposed improvement could be in human to human, human to system, and system to

system workflows, and might target regulatory, market, or competitive challenges faced by

the businesses.

Modelling


Modelling takes the theoretical design and introduces combinations of variables, for

instance, changes in the cost of materials or increased rent, that determine how the process

might operate under different circumstances.

It also involves running "what-if analysis" on the processes: What if I have 75% of resources

to do the same task? What if I want to do the same job for 80% of the current cost?
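A back-of-the-envelope sketch of such a what-if run (the baseline figures and the linear scaling rule are invented purely to illustrate the two questions above):

    def process_output(resources: float, cost: float,
                       baseline_resources: float = 100.0,
                       baseline_cost: float = 100.0,
                       baseline_output: float = 1000.0) -> float:
        # Naive linear model: output is limited by the scarcer input.
        return baseline_output * min(resources / baseline_resources,
                                     cost / baseline_cost)

    # "What if I have 75% of resources to do the same task?"
    print(process_output(resources=75.0, cost=100.0))   # 750.0
    # "What if I want to do the same job for 80% of the current cost?"
    print(process_output(resources=100.0, cost=80.0))   # 800.0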

Execution

One way to automate processes is to develop or purchase an application that executes the

required steps of the process; however, in practice, these applications rarely execute all the

steps of the process accurately or completely. Another approach is to use a combination of

software and human intervention; however, this approach is more complex, making the process difficult to document.

As a response to these problems, software has been developed that enables the full

business process (as developed in the process design activity) to be defined in a computer

language which can be directly executed by the computer. The system will either use

services in connected applications to perform business operations (e.g. calculating a

repayment plan for a loan) or, when a step is too complex to automate, will message a

human requesting input. Compared to either of the previous approaches, directly executing

a process definition can be more straightforward and therefore easier to improve. However,

automating a process definition requires flexible and comprehensive infrastructure which

typically rules out implementing these systems in a legacy IT environment.

Business rules have been used by systems to provide definitions for governing behavior, and

a business rule engine can be used to drive process execution and resolution.

Monitoring

Monitoring encompasses the tracking of individual processes so that information on their

state can be easily seen and statistics on the performance of one or more processes

provided. An example of the tracking is being able to determine the state of a customer

order (e.g. ordered arrived, awaiting delivery, invoice paid) so that problems in its operation

can be identified and corrected.

In addition, this information can be used to work with customers and suppliers to improve

their connected processes. Examples of the statistics are the generation of measures on

how quickly a customer order is processed or how many orders were processed in the last


month. These measures tend to fit into three categories: cycle time, defect rate and

productivity.
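A minimal sketch of how two of these measures might be computed from tracked order events (the event format and field names are assumptions for illustration):

    from datetime import datetime

    # Hypothetical tracked orders: (order id, state, ordered at, completed at).
    orders = [
        ("A-1", "invoice paid",      datetime(2008, 5, 1), datetime(2008, 5, 4)),
        ("A-2", "awaiting delivery", datetime(2008, 5, 2), None),
        ("A-3", "invoice paid",      datetime(2008, 5, 3), datetime(2008, 5, 9)),
    ]

    completed = [(start, end) for _, state, start, end in orders if end is not None]

    # Cycle time: how quickly a completed order is processed, on average.
    cycle_days = [(end - start).days for start, end in completed]
    print("average cycle time (days):", sum(cycle_days) / len(cycle_days))

    # Productivity: how many orders were completed in the period.
    print("orders completed:", len(completed))

    # Defect rate would relate faulty orders to total orders; no defect
    # information is tracked in this toy event format.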

The degree of monitoring depends on what information the business wants to evaluate and

analyze and how the business wants it to be monitored, in real time or ad hoc. Here, business activity monitoring (BAM) extends and expands the monitoring tools generally provided by a BPMS.

Process mining is a collection of methods and tools related to process monitoring. The aim of

process mining is to analyze event logs extracted through process monitoring and to

compare them with an 'a priori' process model. Process mining allows process analysts to

detect discrepancies between the actual process execution and the a priori model as well as

to analyze bottlenecks.
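A toy sketch of the conformance idea behind process mining, comparing observed event traces with an 'a priori' model of the allowed step order (the traces and model below are invented):

    # A priori model: the expected order of steps in the process.
    model = ["ordered", "arrived", "awaiting delivery", "invoice paid"]

    # Event traces extracted through process monitoring (hypothetical logs).
    traces = {
        "case-1": ["ordered", "arrived", "awaiting delivery", "invoice paid"],
        "case-2": ["ordered", "awaiting delivery", "arrived", "invoice paid"],
    }

    def conforms(trace: list[str], model: list[str]) -> bool:
        # A trace conforms if its steps appear in the order the model prescribes.
        positions = [model.index(step) for step in trace if step in model]
        return positions == sorted(positions)

    for case, trace in traces.items():
        status = "conforms" if conforms(trace, model) else "discrepancy detected"
        print(case, "->", status)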

Optimisation

Process optimisation includes retrieving process performance information from the modeling or monitoring phase, identifying potential or actual bottlenecks and opportunities for cost savings or other improvements, and then applying those enhancements in the design of the process, thus continuing the value cycle of business process management.

Future developments

Although the initial focus of BPM was on the automation of mechanistic business processes,

it has since been extended to integrate human-driven processes in which human interaction

takes place in series or parallel with the mechanistic processes. A common form is where

individual steps in the business process which require human intuition or judgment to be

performed are assigned to the appropriate members of an organization (as with workflow

systems).

More advanced forms, such as human interaction management, address the complex interaction between human workers performing a workgroup task. In this case, many people and systems interact in structured, ad hoc, and sometimes completely dynamic ways to complete one or many transactions.

BPM can be used to understand organizations through expanded views that would not

otherwise be available to organize and present. These views include the relationships of

processes to each other which, when included in the process model, provide for advanced


reporting and analysis that would not otherwise be available. BPM is regarded by some as

the backbone of enterprise content management.

Business process management in practice

Whilst the steps can be viewed as a cycle, economic or time constraints are likely to limit

the process to one or more iterations.

In addition, organizations often start a BPM project or program with the objective to optimize

an area which has been identified as an area for improvement.

In the financial sector, BPM is critical for making sure the system delivers quality service while regulatory compliance is not compromised [9].

Use of software

Some say that not all activities can be effectively modeled with BPMS, and so some

processes are best left alone. Taking this viewpoint, the value in BPMS is not in automating

very simple or very complex tasks, it is in modeling processes where there is the most

opportunity.

The alternate view is that a complete process modeling language, supported by a BPMS, is

needed; the purpose is not purely automation to replace manual tasks, but to enhance

manual tasks with computer assisted automation. In this sense, the argument over whether

BPM is about replacing human activity with automation or simply analyzing for greater

understanding of process is a sterile debate; all processes modeled using BPMS must be

executable in order to bring to life the software application that the human users interact

with at run time.

Standardization

Currently, the international standards for the task are limited to applications in the IT sector, and ISO/IEC 15944 covers the operational aspects of the business. However, some corporations with a culture of good business practices do use standard operating procedures to regulate their operational processes [10].

Benchmarking


Benchmarking is the process of comparing the cost, time or quality of what one

organization does against what another organization does. The result is often a business

case for making changes in order to make improvements.

Also referred to as "best practice benchmarking" or "process benchmarking", it is a process

used in management and particularly strategic management, in which organizations

evaluate various aspects of their processes in relation to best practice, usually within their

own sector. This then allows organizations to develop plans on how to make improvements

or adopt best practice, usually with the aim of increasing some aspect of performance.

Benchmarking may be a one-off event, but is often treated as a continuous process in which

organizations continually seek to challenge their practices.

Contents

1 Collaborative benchmarking

2 Procedure

3 Cost of benchmarking

4 Technical benchmarking or Product Benchmarking

5 Types of Benchmarking

6 See also

7 References

Collaborative benchmarking

Benchmarking, originally invented as a formal process by Rank Xerox, is usually carried out

by individual companies. Sometimes it may be carried out collaboratively by groups of

companies (e.g. subsidiaries of a multinational in different countries). One example is that of the Dutch municipally-owned water supply companies, which have carried out a voluntary collaborative benchmarking process since 1997 through their industry association. Another example is the UK construction industry, which has carried out benchmarking since the late 1990s, again through its industry association and with financial support from the UK Government.


Procedure

There is no single benchmarking process that has been universally adopted. The wide

appeal and acceptance of benchmarking has led to various benchmarking methodologies

emerging. The most prominent methodology is the 12 stage methodology by Robert Camp

(who wrote the first book on benchmarking in 1989)[1].

The 12 stage methodology consisted of:
1. Select subject ahead
2. Define the process
3. Identify potential partners
4. Identify data sources
5. Collect data and select partners
6. Determine the gap
7. Establish process differences
8. Target future performance
9. Communicate
10. Adjust goal
11. Implement
12. Review/recalibrate
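Stage 6 of that methodology, determining the gap, is simple arithmetic once comparable measures have been collected; a minimal sketch with invented figures:

    # Invented measure: average days to process an order.
    own_baseline = 12.0
    partners = {"Firm X": 7.0, "Firm Y": 9.5, "Firm Z": 6.0}

    best = min(partners, key=partners.get)  # best-performing partner
    gap = own_baseline - partners[best]

    print(f"benchmark: {best} at {partners[best]} days")
    print(f"gap to close: {gap} days ({gap / own_baseline:.0%} of current cycle time)")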

The following is an example of a typical shorter version of the methodology:

1. Identify your problem areas - Because benchmarking can be applied to any

business process or function, a range of research techniques may be required. They

include: informal conversations with customers, employees, or suppliers; exploratory

research techniques such as focus groups; or in-depth marketing research,

quantitative research, surveys, questionnaires, re-engineering analysis, process

mapping, quality control variance reports, or financial ratio analysis. Before

embarking on comparison with other organizations it is essential that you know your

own organization's functions and processes; baselining performance provides a point

against which improvement effort can be measured.

2. Identify other industries that have similar processes - For instance, if one were interested in improving hand-offs in addiction treatment, he/she would try to identify other fields that also have hand-off challenges. These could include air traffic control,

cell phone switching between towers, transfer of patients from surgery to recovery

rooms.

3. Identify organizations that are leaders in these areas - Look for the very best

in any industry and in any country. Consult customers, suppliers, financial analysts,

trade associations, and magazines to determine which companies are worthy of

study.

4. Survey companies for measures and practices - Companies target specific

business processes using detailed surveys of measures and practices used to identify

business process alternatives and leading companies. Surveys are typically masked

to protect confidential data by neutral associations and consultants.


5. Visit the "best practice" companies to identify leading edge practices -

Companies typically agree to mutually exchange information beneficial to all parties

in a benchmarking group and share the results within the group.

6. Implement new and improved business practices - Take the leading edge

practices and develop implementation plans which include identification of specific

opportunities, funding the project and selling the ideas to the organization for the

purpose of gaining demonstrated value from the process.

Cost of benchmarking

Benchmarking is a moderately expensive process, but most organizations find that it more

than pays for itself. The three main types of costs are:

Visit Costs - This includes hotel rooms, travel costs, meals, a token gift, and lost labor

time.

Time Costs - Members of the benchmarking team will be investing time in

researching problems, finding exceptional companies to study, visits, and

implementation. This will take them away from their regular tasks for part of each

day so additional staff might be required.

Benchmarking Database Costs - Organizations that institutionalize benchmarking into

their daily procedures find it is useful to create and maintain a database of best

practices and the companies associated with each best practice.

The cost of benchmarking can substantially be reduced through utilizing the many internet

resources that have sprung up over the last few years. These aim to capture benchmarks

and best practices from organizations, business sectors and countries to make the

benchmarking process much quicker and cheaper.

Technical benchmarking or Product Benchmarking

The technique initially used to compare existing corporate strategies with a view to

achieving the best possible performance in new situations (see above), has recently been

extended to the comparison of technical products. This process is usually referred to as

"Technical Benchmarking" or "Product Benchmarking". Its use is particularly well developed

within the automotive industry ("Automotive Benchmarking"), where it is vital to design

products that match precise user expectations, at minimum possible cost, by applying the

best technologies available worldwide. Many data are obtained by fully disassembling


existing cars and their systems. Such analyses were initially carried out in-house by car

makers and their suppliers. However, as they are expensive, they are increasingly

outsourced to companies specialized in this area. Indeed, outsourcing has enabled a drastic

decrease in costs for each company (by cost sharing) and the development of very efficient

tools (standards, software).

Types of Benchmarking

Process benchmarking - the initiating firm focuses its observation and

investigation of business processes with a goal of identifying and observing the best

practices from one or more benchmark firms. Activity analysis will be required where

the objective is to benchmark cost and efficiency; increasingly applied to back-office

processes where outsourcing may be a consideration.

Financial benchmarking - performing a financial analysis and comparing the

results in an effort to assess your overall competitiveness.

Performance benchmarking - allows the initiator firm to assess their competitive

position by comparing products and services with those of target firms.

Product benchmarking - the process of designing new products or upgrades to

current ones. This process can sometimes involve reverse engineering which is

taking apart competitors products to find strengths and weaknesses.

Strategic benchmarking - involves observing how others compete. This type is

usually not industry specific meaning it is best to look at other industries.

Functional benchmarking - a company will focus its benchmarking on a single

function in order to improve the operation of that particular function. Complex

functions such as Human Resources, Finance and Accounting and Information and

Communication Technology are unlikely to be directly comparable in cost and

efficiency terms and may need to be disaggregated into processes to make valid

comparison.

Scenario planning

Scenario planning [or scenario thinking or scenario analysis] is a strategic planning

method that some organizations use to make flexible long-term plans. It is in large part an

adaptation and generalization of classic methods used by military intelligence.


The original method was that a group of analysts would generate simulation games for

policy makers. The games combine known facts about the future, such as demographics,

geography, military, political, industrial information, and mineral reserves, with plausible

alternative social, technical, economic, environmental, educational, political and aesthetic

(STEEEPA) trends which are key driving forces.

In business applications, the emphasis on gaming the behavior of opponents was reduced

(shifting more toward a game against nature). At Royal Dutch/Shell for example, scenario

planning was viewed as changing mindsets about the exogenous part of the world, prior to

formulating specific strategies.

Scenario planning shines especially if it includes systems thinking, which recognizes that

many factors may combine in complex ways to create sometimes surprising futures (due to

non-linear feedback loops). The method also allows the inclusion of factors that are difficult

to formalize, such as novel insights about the future, deep shifts in values, unprecedented

regulations or inventions. Systems thinking used in conjunction with scenario planning leads

to plausible scenario story lines because the causal relationship between factors can be

demonstrated. In these cases, when scenario planning is integrated with a systems thinking approach to scenario development, it is sometimes referred to as structural dynamics.

Contents

1 Crafting scenarios

o 1.1 Zero-sum game scenarios

2 How military scenario planning or scenario thinking is done

o 2.1 Scenario planning in military applications

3 Development of scenario analysis in business organizations

4 History of use by academic and commercial organizations

o 4.1 Critique of Shell's use of scenario planning

5 General Limitations of Scenario Planning


6 Use of scenario planning by managers

7 Process

o 7.1 Step 1 - decide assumptions/drivers for change

o 7.2 Step 2 - bring drivers together into a viable framework

o 7.3 Step 3 - produce initial (seven to nine) mini-scenarios

o 7.4 Step 4 - reduce to two or three scenarios

o 7.5 Step 5 - write the scenarios

o 7.6 Step 6 - identify issues arising

8 Use of scenarios

9 Scenario planning compared to other techniques

10 Companies Offering Scenario Planning Services [16]

Crafting scenarios

These combinations and permutations of fact and related social changes are called

"scenarios." The scenarios usually include plausible, but unexpectedly important situations

and problems that exist in some small form in the present day. Any particular scenario is

unlikely. However, future studies analysts select scenario features so they are both possible

and uncomfortable. Scenario planning helps policy-makers to anticipate hidden weaknesses

and inflexibilities in organizations and methods.

When disclosed years in advance, these weaknesses can be avoided or their impacts

reduced more effectively than if similar real-life problems were considered under the duress of

an emergency. For example, a company may discover that it needs to change contractual

terms to protect against a new class of risks, or collect cash reserves to purchase

anticipated technologies or equipment. Flexible business continuity plans with "PREsponse

protocols" help cope with similar operational problems and deliver measurable future value-

added.

Zero-sum game scenarios


Strategic military intelligence organizations also construct scenarios. The methods and

organizations are almost identical, except that scenario planning is applied to a wider

variety of problems than merely military and political problems.

As in military intelligence, the chief challenge of scenario planning is to find out the real

needs of policy-makers, when policy-makers may not themselves know what they need to

know, or may not know how to describe the information that they really want.

Good analysts design wargames so that policy makers have great flexibility and freedom to

adapt their simulated organizations. Then these simulated organizations are "stressed" by

the scenarios as a game plays out. Usually, particular groups of facts become more clearly

important. These insights enable intelligence organizations to refine and repackage real

information more precisely to better-serve the policy-makers' real-life needs. Usually the

games' simulated time runs hundreds of times faster than real life, so policy-makers

experience several years of policy decisions, and their simulated effects, in less than a day.

The chief value of scenario planning is that it allows policy-makers to make and learn from

mistakes without risking career-limiting failures in real life. Further, policymakers can make

these mistakes in a safe, unthreatening, game-like environment, while responding to a wide

variety of concretely-presented situations based on facts. This is an opportunity to "rehearse

the future," an opportunity that does not present itself in day-to-day operations where every

action and decision counts.

How military scenario planning or scenario thinking is done

1. Decide on the key question to be answered by the analysis. By doing this, it is

possible to assess whether scenario planning is preferred over the other methods. If

the question is based on small changes or a very few number of elements, other

more formalized methods may be more useful.

2. Set the time and scope of the analysis. Take into consideration how quickly changes

have happened in the past, and try to assess to what degree it is possible to predict

common trends in demographics, product life cycles et al. A usual timeframe can be

five to 10 years.

3. Identify major stakeholders. Decide who will be affected and have an interest in the

possible outcomes. Identify their current interests, whether and why these interests

have changed over time in the past.


4. Map basic trends and driving forces. This includes industry, economic, political,

technological, legal and societal trends. Assess to what degree these trends will

affect your research question. Describe each trend, how and why it will affect the

organisation. In this step of the process, brainstorming is commonly used, where all

trends that can be thought of are presented before they are assessed, to capture

possible group thinking and tunnel vision.

5. Find key uncertainties. Map the driving forces on two axes, assessing each force on an uncertain/(relatively) predictable and an important/unimportant scale. All driving forces that are considered unimportant are discarded. Important driving forces that are relatively predictable (e.g. demographics) can be included in any scenario, so the scenarios should not be based on these. This leaves you with a number of important and unpredictable driving forces (a classification illustrated in the sketch after this list). At this point, it is also useful to assess whether any linkages between driving forces exist, and rule out any "impossible" scenarios (e.g. full employment and zero inflation).

6. Check for the possibility to group the linked forces and if possible, reduce the forces

to the two most important. (To allow the scenarios to be presented in a neat xy-

diagram)

7. Identify the extremes of the possible outcomes of the (two) driving forces and check

the dimensions for consistency and plausibility. Three key points should be assessed:

1. Time frame: are the trends compatible within the time frame in question?

2. Internal consistency: do the forces describe uncertainties that can construct probable scenarios?

3. Vs the stakeholders: are any stakeholders currently in disequilibrium

compared to their preferred situation, and will this evolve the scenario? Is it

possible to create probable scenarios when considering the stakeholders? This

is most important when creating macro-scenarios where governments, large

organisations et al. will try to influence the outcome.

8. Define the scenarios, plotting them on a grid if possible. Usually, 2 to 4 scenarios are

constructed. The current situation does not need to be in the middle of the diagram

(inflation may already be low), and possible scenarios may keep one (or more) of the

forces relatively constant, especially if using three or more driving forces. One

approach can be to put all positive elements into one scenario and all negative elements (relative to the current situation) into another scenario, then refine these. In

the end, try to avoid pure best-case and worst-case scenarios.

9. Write out the scenarios. Narrate what has happened and what the reasons can be for

the proposed situation. Try to include good reasons why the changes have occurred

as this helps the further analysis. Finally, give each scenario a descriptive (and

catchy) name to ease later reference.

10. Assess the scenarios. Are they relevant for the goal? Are they internally consistent?

Are they archetypical? Do they represent relatively stable outcome situations?

11. Identify research needs. Based on the scenarios, assess where more information is

needed. Where needed, obtain more information on the motivations of stakeholders,

possible innovations that may occur in the industry and so on.

12. Develop quantitative methods. If possible, develop models to help quantify

consequences of the various scenarios, such as growth rate, cash flow etc. This step

does of course require a significant amount of work compared to the others, and may

be left out in back-of-the-envelope-analyses.

13. Converge towards decision scenarios. Retrace the steps above in an iterative process

until you reach scenarios which address the fundamental issues facing the

organization. Try to assess upsides and downsides of the possible scenarios.
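The classification in step 5 above can be sketched as a simple filter that separates predictable trends from the uncertainties that become scenario axes (the driving forces and scores below are invented for illustration):

    # Hypothetical driving forces scored 0-1: (name, importance, uncertainty).
    forces = [
        ("demographic shift", 0.9, 0.1),
        ("oil price", 0.8, 0.9),
        ("election outcome", 0.7, 0.8),
        ("fashion in packaging", 0.2, 0.7),
    ]

    important = [f for f in forces if f[1] >= 0.5]      # discard unimportant forces
    predictable = [f for f in important if f[2] < 0.5]  # include in every scenario
    uncertain = sorted((f for f in important if f[2] >= 0.5),
                       key=lambda f: f[1], reverse=True)

    print("trends common to all scenarios:", [n for n, _, _ in predictable])
    # The two most important uncertainties become the scenario axes.
    print("scenario axes:", [n for n, _, _ in uncertain[:2]])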

Scenario planning in military applications

Scenario planning is also extremely popular with military planners. Most states' departments

of war maintain a continuously-updated series of strategic plans to cope with well-known

military or strategic problems. These plans are almost always based on scenarios, and often

the plans and scenarios are kept up-to-date by war games, sometimes played out with real

troops. This process was first carried out by (and arguably the method was invented by) the Prussian general staff of the mid-19th century.

Development of scenario analysis in business organizations

In the past, strategic plans have often considered only the "official future," which was

usually a straight-line graph of current trends carried into the future. Often the trend lines

were generated by the accounting department, and lacked discussions of demographics, or

qualitative differences in social conditions.


These simplistic guesses are surprisingly good most of the time, but fail to consider

qualitative social changes that can affect a business or government. Scenarios focus on the

joint effect of many factors. Scenario planning helps us understand how the various strands

of a complex tapestry move if one or more threads are pulled. When you just list possible

causes, as for instance in fault tree analysis, you may tend to discount any one factor in

isolation. But when you explore the factors together, you realize that certain combinations

could magnify each other’s impact or likelihood. For instance, an increased trade deficit may

trigger an economic recession, which in turn creates unemployment and reduces domestic

production. Schoemaker offers a strong managerial case for the use of scenario planning in

business, and his work has had wide impact.[1]

Scenario planning starts by dividing our knowledge into two broad domains: (1) things we

believe we know something about and (2) elements we consider uncertain or unknowable.

The first component – trends – casts the past forward, recognizing that our world possesses

considerable momentum and continuity. For example, we can safely make assumptions

about demographic shifts and, perhaps, substitution effects for certain new technologies.

The second component – true uncertainties – involves indeterminables such as future interest

rates, outcomes of political elections, rates of innovation, fads and fashions in markets, and

so on. The art of scenario planning lies in blending the known and the unknown into a

limited number of internally consistent views of the future that span a very wide range of

possibilities.

Numerous organizations have applied scenario planning to a broad range of issues, from

relatively simple, tactical decisions to the complex process of strategic planning and vision

building. [2][3][4] The power of scenario planning for business was originally established by

Royal Dutch/Shell, which has used scenarios since the early 1970s as part of a process for

generating and evaluating its strategic options.[5][6] Shell has been consistently better in its

oil forecasts than other major oil companies, and saw the overcapacity in the tanker

business and Europe’s petrochemicals earlier than its competitors.[2] But ironically, the

approach may have had more impact outside Shell than within, as many other firms and

consultancies started to benefit as well from scenario planning. Scenario planning is as

much art as science, and prone to a variety of traps (both in process and content) as

enumerated by Schoemaker.[1]

History of use by academic and commercial organizations

Though the concept was first introduced, as 'La Prospective', by Berger in 1964 [7] and the

word 'scenario' itself was reportedly first used by Herman Kahn in 1967, the theoretical

foundations of scenario forecasting were mainly developed in the 1970s, especially by Godet

(between 1974 and 1979). By the early 1980s these approaches had developed into a

sophisticated forecasting technique which was primarily recommended for the integration of

the output from other sophisticated (qualitative) approaches to long-range forecasting.

Although it was inevitably based upon judgmental forecasts, its use typically revolved

around forecasting techniques which brought together groups of experts in order to reduce

the risk involved. These included Delphi and, especially in the context of scenarios, Cross-

Impact Matrices, which were popular at that time.

Possibly as a result of these very sophisticated approaches, and of the difficult techniques

they employed (which usually demanded the resources of a central planning staff),

scenarios earned a reputation for difficulty (and cost) in use. Even so, the theoretical

importance of the use of alternative scenarios, to help address the uncertainty implicit in

long-range forecasts, was dramatically underlined by the widespread confusion which

followed the Oil Shock of 1973. As a result many of the larger organizations started to use

the technique in one form or another. Indeed, just ten years later, in 1983 Diffenbach

reported that 'alternate scenarios' were the third most popular technique for long-range

forecasting - used by 68% of the large organizations he surveyed.[8]

Practical development of scenario forecasting, to guide strategy rather than for the more

limited academic uses which had previously been the case, was started by Wack in 1971 at

the Royal Dutch Shell group of companies - and it, too, was given impetus by the Oil Shock

two years later. Shell has, since that time, led the commercial world in the use of scenarios -

and in the development of more practical techniques to support these. Indeed, in common with most forms of long-range forecasting, the use of scenarios has (during the depressed trading conditions of the last decade) shrunk to only a handful of private-sector organisations, and Shell remains almost alone amongst them in keeping the technique at the forefront of forecasting.

There has only been anecdotal evidence offered in support of the value of scenarios, even as

aids to forecasting; and most of this has come from one company - Shell. In addition, with so

few organisations making consistent use of them - and with the timescales involved

reaching into decades - it is unlikely that any definitive supporting evidence will be

forthcoming in the foreseeable future. For the same reasons, though, a lack of such proof

applies to almost all long-range planning techniques. In the absence of proof, but taking

account of Shell's well documented experiences of using it over several decades (where, in

the 1990s, its then CEO ascribed its success to its use of such scenarios), there may be significant benefit to be obtained from extending the horizons of managers' long-range forecasting in the way that the use of scenarios uniquely does.[9]

Critique of Shell's use of scenario planning

In the 1970s, many energy companies were surprised by both environmentalism and the

OPEC cartel, and thereby lost billions of dollars of revenue by misinvestment. The dramatic

financial effects of these changes led at least one organization, Royal Dutch Shell, to

implement scenario planning. The analysts of this company publicly estimated that this

planning process made their company the largest in the world.[2] However other observers of

Shell's use of scenario planning have suggested that few if any significant long term

business advantages accrued to Shell from the use of scenario methodology. Whilst the

intellectual robustness of Shell's long term scenarios was seldom in doubt their actual

practical use was seen as being minimal by many senior Shell executives. A Shell insider has

commented "The scenario team were bright and their work was of a very high intellectual

level. However neither the high level "Group scenarios" nor the country level scenarios

produced with operating companies really made much difference when key decisions were

being taken".

The use of scenarios was audited by Arie de Geus's team in the early 1980s and they found

that the decision making processes following the scenarios were the primary cause of the

lack of strategic implementation, rather than the scenarios themselves. Many practitioners

today spend as much time on the decision making process as on creating the scenarios

themselves.

General limitations of scenario planning

Although scenario planning has gained many adherents in industry, its subjective and

heuristic nature leaves many academics uncomfortable. How do we know if we have the

right scenarios? And how do we go from scenarios to decisions? These concerns are

legitimate and scenario planning would gain in academic standing if more research were

conducted on its comparative performance and underlying theoretical premises. A collection

of chapters by noted scenario planners[10] failed to contain a single reference to an academic

source! In general, there are few academically validated analyses of scenario planning (for a

notable exception, see Schoemaker).[11] The technique was born from practice and its

appeal is based more on anecdotal than scientific evidence. Furthermore, significant

misconceptions remain about its intent and claims. Above all, scenario planning is a tool for

collective learning, reframing perceptions and preserving uncertainty when the latter is

pervasive. Too many decision makers want to bet on one future scenario, falling prey to the

seductive temptation of trying to predict the future rather than to entertain multiple futures.

Another trap is to take the scenarios too literally as though they were static beacons that

map out a fixed future. In actuality, their aim is to bound the future but in a flexible way that

permits learning and adjustment as the future unfolds.

One criticism of the two-by-two technique commonly used is that the resulting matrix results

in four somewhat arbitrary scenario themes. If other key uncertainties had been selected, it

might be argued, very different scenarios could emerge. How true this is depends on

whether the matrix is viewed as just a starting point to be superseded by the ensuing

blueprint or is considered as the grand architecture that nests everything else. In either

case, however, the issue should not be which are the “right” scenarios but rather whether

they delineate the range of possible futures appropriately. Any tool that tries to simplify a

complex picture will introduce distortions, whether it is a geographic map or a set of

scenarios. Seldom will complexity decompose naturally into simple states. But it might.

Consider, for example, the behavior of water (the molecule H2O) which, depending on

temperature and pressure, naturally exists in just one of three states: gas, liquid or ice. The

art of scenarios is to look for such natural states or points of bifurcation in the behavior of a

complex system.
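
To make the two-by-two technique referred to above concrete, here is a minimal Python sketch: two key uncertainties, each reduced to a pair of extreme outcomes, are crossed to give the four scenario themes of the matrix. The uncertainties and labels are invented for illustration, which is precisely the critique's point: a different pair of uncertainties would yield a different set of four themes.

from itertools import product

# Two hypothetical key uncertainties and their extreme outcomes.
uncertainties = {
    "energy price": ("low", "high"),
    "regulation": ("loose", "strict"),
}

# Crossing the extremes yields the four scenario themes of the 2x2 matrix.
for outcomes in product(*uncertainties.values()):
    theme = ", ".join(f"{name}: {value}"
                      for name, value in zip(uncertainties, outcomes))
    print("Scenario theme ->", theme)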

Apart from some inherent subjectivity in scenario design, the technique can suffer from

various process and content traps.[12] These traps mostly relate to how the process is

conducted in organizations (such as team composition, role of facilitators, etc.) as well as

the substantive focus of the scenarios (long vs. short term, global vs. regional, incremental

vs. paradigm shifting, etc.). One might think of these as merely challenges of

implementation, but since the process component is integral to the scenario experience,

they can also be viewed as weaknesses of the methodology itself. Limited safeguards exist

against political derailing, agenda control, myopia and limited imagination when conducting

scenario planning exercises within real organizations. But, to varying extents, all forecasting

techniques will suffer from such organizational limitations. The benchmark to use is not

perfection, especially when faced with high uncertainty and complexity, or even strict

adherence to such normative precepts as procedural invariance and logical consistency, but

whether the technique performs better than its rivals. And to answer this question fairly,

performance must be carefully specified. It should clearly include some measures of

accuracy as well as a cost-benefit analysis that considers the tradeoff between effort and

accuracy. In addition, legitimation criteria may be important to consider as well as the ability

to refine and improve the approach as more experience is gained.

A third limitation of scenario planning in organizational settings is its weak integration into

other planning and forecasting techniques. Most companies have plenty of trouble dealing

with just one future, let alone multiple ones. Typically, budgeting and planning systems are

predicated on single views of the future, with adjustments made as necessary through

variance analysis, contingency planning, rolling budgets, and periodic renegotiations. The

weaknesses of these traditional approaches were very evident after the tragic attack of

Sept. 11, 2001, when many companies became paralyzed and quite a few just threw away

the plan and budget. Their strategies were not future-proof and they lacked organized

mechanisms to adjust to external turmoil. In cases of crisis, leadership becomes important

but so does some degree of preparedness. Once the scenarios are finished, the real work starts: crafting flexible strategies and appropriate monitoring systems.[13] Managers

need a simple but comprehensive compass to navigate uncertainty from beginning to end.

Scenario planning is just one component of a more complete management system. The

point is that scenario thinking needs to be integrated with the existing planning and

budgeting system, as awkward as this fit may be. The reality is that most organizations do

not handle uncertainty well and that researchers have not provided adequate answers about

how to plan under conditions of high uncertainty and complexity.

Use of scenario planning by managers

The basic concepts of the process are relatively simple. In terms of the overall approach to

forecasting, they can be divided into three main groups of activities (which are, generally

speaking, common to all long range forecasting processes):[9]

1. Environmental analysis

2. Scenario planning

3. Corporate strategy

The first of these groups quite simply comprises the normal environmental analysis. This is

almost exactly the same as that which should be undertaken as the first stage of any serious

long-range planning. However, the quality of this analysis is especially important in the

context of scenario planning.

The central part represents the specific techniques - covered here - which differentiate the

scenario forecasting process from the others in long-range planning.

The final group represents all the subsequent processes which go towards producing the

corporate strategy and plans. Again, the requirements are slightly different but in general

they follow all the rules of sound long-range planning.

Process

The part of the overall process which is radically different from most other forms of long-

range planning is the central section, the actual production of the scenarios. Even this,

though, is relatively simple, at its most basic level. As derived from the approach most

commonly used by Shell, it follows six steps:

1. Decide drivers for change/assumptions

2. Bring drivers together into a viable framework

3. Produce 7-9 initial mini-scenarios

4. Reduce to 2-3 scenarios

5. Draft the scenarios

6. Identify the issues arising
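
To make the shape of this workflow concrete, the following is a minimal, hypothetical Python sketch of the bookkeeping behind these six steps. All driver names are invented for illustration, and steps 2-4 (clustering mini-scenarios down to two or three) are in reality the human, Post-It-Notes part of the process, mimicked here by a fixed grouping.

drivers = {
    # name: (important?, uncertain?)
    "demographic shift": (True, False),   # predictable: goes in the preamble
    "energy price": (True, True),
    "regulation": (True, True),
    "fashions in markets": (False, True), # not important enough to keep
}

# Step 1: keep only drivers that are both important and genuinely uncertain.
in_scope = [name for name, (imp, unc) in drivers.items() if imp and unc]
preamble = [name for name, (imp, unc) in drivers.items() if imp and not unc]

# Steps 2-4 collapsed: every in-scope driver must be covered by each of the
# two (or three) final scenarios.
scenarios = {"Scenario A": list(in_scope), "Scenario B": list(in_scope)}

print("Spelled out in the introduction:", preamble)
for title, contents in scenarios.items():
    print(title, "covers", contents)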

Step 1 - decide assumptions/drivers for change

The first stage is to examine the results of environmental analysis to determine which are

the most important factors that will decide the nature of the future environment within

which the organisation operates. These factors are sometimes called 'variables' (because

they will vary over the time being investigated, though the terminology may confuse

scientists who use it in a more rigorous manner). Users tend to prefer the term 'drivers' (for

change), since this terminology is not laden with quasi-scientific connotations and reinforces

the participant's commitment to search for those forces which will act to change the future.

Whatever the nomenclature, the main requirement is that these will be informed

assumptions.

This is partly a process of analysis, needed to recognise what these 'forces' might be.

However, it is likely that some work on this element will already have taken place during the

preceding environmental analysis. By the time the formal scenario planning stage has been

reached, the participants may have already decided - probably subconsciously rather

than formally - what the main forces are.

In the ideal approach, the first stage should be to carefully decide the overall assumptions

on which the scenarios will be based. Only then, as a second stage, should the various

drivers be specifically defined. Participants, though, seem to have problems in separating

these stages.

Perhaps the most difficult aspect, though, is freeing the participants from the preconceptions

they take into the process with them. In particular, most participants will want to look at the

medium term, five to ten years ahead, rather than the required longer term, ten or more

years ahead. However, a time horizon of anything less than ten years often leads

participants to extrapolate from present trends, rather than consider the alternatives which

might face them. When, however, they are asked to consider timescales in excess of ten

years they almost all seem to accept the logic of the scenario planning process, and no

longer fall back on that of extrapolation. There is a similar problem with expanding

participants' horizons to include the whole external environment.

Brainstorming

In any case, the brainstorming which should then take place, to ensure that the list is

complete, may unearth more variables - and, in particular, the combination of factors may

suggest yet others.

A very simple technique, which is especially useful at this brainstorming stage and for handling scenario planning debates in general, derives from its use in Shell, where this type of approach is common. An especially easy approach, it only requires a conference room with a bare wall and copious supplies of 3M Post-It Notes!

The six to ten people ideally taking part in such face-to-face debates should be in a

conference room environment which is isolated from outside interruptions. The only special

requirement is that the conference room has at least one clear wall on which Post-It notes

will stick. At the start of the meeting itself, any topics which have already been identified

during the environmental analysis stage are written (preferably with a thick magic marker,

so they can be read from a distance) on separate Post-It Notes. These Post-It Notes are then,

at least in theory, randomly placed on the wall. In practice, even at this early stage the

participants will want to cluster them in groups which seem to make sense. The only

requirement (which is why Post-It Notes are ideal for this approach) is that there is no bar to

taking them off again and moving them to a new cluster.

A similar technique - using 5" by 3" index cards - has also been described (as the 'Snowball Technique') by Backoff and Nutt, for grouping and evaluating ideas in general.[14]

As in any form of brainstorming, the initial ideas almost invariably stimulate others. Indeed,

everyone should be encouraged to add their own Post-It Notes to those on the wall.

However it differs from the 'rigorous' form described in 'creative thinking' texts, in that it is

much slower paced and the ideas are discussed immediately. In practice, as many ideas

may be removed, as not being relevant, as are added. Even so, it follows many of the same

rules as normal brainstorming and typically lasts the same length of time - say, an hour or

so only.

It is important that all the participants feel they 'own' the wall - and are encouraged to move

the notes around themselves. The result is a very powerful form of creative decision-making

for groups, which is applicable to a wide range of situations (but is especially powerful in the

context of scenario planning). It also offers a very good introduction for those who are

coming to the scenario process for the first time. Since the workings are largely self-evident,

participants very quickly come to understand exactly what is involved.

Important and uncertain

This step is, though, also one of selection - since only the most important factors will justify

a place in the scenarios. The 80:20 Rule here means that, at the end of the process,

management's attention must be focused on a limited number of the most important issues.

Experience has proved that offering a wider range of topics merely allows them to select

those few which interest them, and not necessarily those which are most important to the

organisation.

In addition, as scenarios are a technique for presenting alternative futures, the factors to be

included must be genuinely 'variable'. They should be subject to significant alternative

outcomes. Factors whose outcome is predictable, but important, should be spelled out in the

introduction to the scenarios (since they cannot be ignored). The Important Uncertainties

Matrix, as reported by Kees van der Heijden of Shell, is a useful check at this stage.[3]

At this point it is also worth pointing out that a great virtue of scenarios is that they can

accommodate the input from any other form of forecasting. They may use figures, diagrams

or words in any combination. No other form of forecasting offers this flexibility.

Step 2 - bring drivers together into a viable framework

The next step is to link these drivers together to provide a meaningful framework. This may

be obvious, where some of the factors are clearly related to each other in one way or

another. For instance, a technological factor may lead to market changes, but may be

constrained by legislative factors. On the other hand, some of the 'links' (or at least the

'groupings') may need to be artificial at this stage. At a later stage more meaningful links

may be found, or the factors may then be rejected from the scenarios. In the most

theoretical approaches to the subject, probabilities are attached to the event strings. This is

difficult to achieve, however, and generally adds little - except complexity - to the outcomes.

This is probably the most (conceptually) difficult step. It is where managers' 'intuition' - their

ability to make sense of complex patterns of 'soft' data which more rigorous analysis would

be unable to handle - plays an important role. There are, however, a range of techniques

which can help; and again the Post-It-Notes approach is especially useful:

Thus, the participants try to arrange the drivers, which have emerged from the first stage,

into groups which seem to make sense to them. Initially there may be many small groups.

The intention should, therefore, be to gradually merge these (often having to reform them

from new combinations of drivers to make these bigger groups work). The aim of this stage

is eventually to make 6-8 larger groupings: 'mini-scenarios'. Here the Post-It Notes may be

moved dozens of times over the length - perhaps several hours or more - of each meeting.

While this process is taking place the participants will probably want to add new topics - so

more Post-It Notes are added to the wall. In the opposite direction, the unimportant ones are

removed (possibly to be grouped, again as an 'audit trail', on another wall). More importantly, the 'certain' topics are also removed from the main area of debate - in this case they must be grouped in a clearly labelled area of the main wall.

As the clusters - the 'mini-scenarios' - emerge, the associated notes may be stuck to each

other rather than individually to the wall; which makes it easier to move the clusters around

(and is a considerable help during the final, demanding stage of reducing the scenarios to

two or three).

The great benefit of using Post-It Notes is that there is no bar to participants changing their

minds. If they want to rearrange the groups - or simply to go back (iterate) to an earlier

stage - then they strip them off and put them in their new position.

Step 3 - produce initial (seven to nine) mini-scenarios

The outcome of the previous step is usually between seven and nine logical groupings of

drivers. This is usually easy to achieve. The 'natural' reason for this may be that it

represents some form of limit as to what participants can visualise.

Having placed the factors in these groups, the next action is to work out, very approximately

at this stage, what is the connection between them. What does each group of factors

represent?

Step 4 - reduce to two or three scenarios

The main action, at this next stage, is to reduce the seven to nine mini-scenarios/groupings

detected at the previous stage to two or three larger scenarios. The challenge in practice

seems to come down to finding just two or three 'containers' into which all the topics can be

sensibly fitted. This usually requires a considerable amount of debate - but in the process it

typically generates as much light as it does heat. Indeed, the demanding process of

developing these basic scenario frameworks often, by itself, produces fundamental insights

into what are the really important (perhaps life and death) issues affecting the organisation.

During this extended debate - and even before it is summarised in the final reports - the

participants come to understand, by their own involvement in the debate, what the most

important drivers for change may be, and (perhaps even more important) what their peers

think they are. Based on this intimate understanding, they are well prepared to cope with

such changes - reacting almost instinctively - when they actually do happen; even without

recourse to the formal reports which are eventually produced!

There is no theoretical reason for reducing to just two or three scenarios, only a practical

one. It has been found that the managers who will be asked to use the final scenarios can

only cope effectively with a maximum of three versions! Shell started, more than three

decades ago, by building half a dozen or more scenarios - but found that the outcome was

that their managers selected just one of these to concentrate on. As a result the planners reduced the number to three, which managers could handle easily but from which they could no longer so easily justify selecting only one! This is the number now recommended most

frequently in most of the literature.

Complementary scenarios

As used by Shell, and as favoured by a number of the academics, two scenarios should be

complementary; the reason being that this helps avoid managers 'choosing' just one,

'preferred', scenario - and lapsing once more into single-track forecasting (negating the

benefits of using 'alternative' scenarios to allow for alternative, uncertain futures). This is,

however, a potentially difficult concept to grasp, where managers are used to looking for

opposites; a good and a bad scenario, say, or an optimistic one versus a pessimistic one -

and indeed this is the approach (for small businesses) advocated by Foster. In the Shell

approach, the two scenarios are required to be equally likely, and between them to cover all

the 'event strings'/drivers. Ideally they should not be obvious opposites, which might once

again bias their acceptance by users, so the choice of 'neutral' titles is important. For

example, Shell's two scenarios at the beginning of the 1990s were titled 'Sustainable World'

and 'Global Mercantilism'[xv]. In practice, we found that this requirement, much to our

surprise, posed few problems for the great majority, 85%, of those in the survey, who easily

produced 'balanced' scenarios. The remaining 15% mainly fell into the expected trap of

'good versus bad'. We have found that our own relatively complex (OBS) scenarios can also be made complementary to each other, without any great effort needed from the teams involved, and the resulting two scenarios are both developed further by all involved, without unnecessary focusing on one or the other.

Testing

Having grouped the factors into these two scenarios, the next step is to test them, again, for

viability. Do they make sense to the participants? This may be in terms of logical analysis,

but it may also be in terms of intuitive 'gut-feel'. Once more, intuition often may offer a

useful - if academically less respectable - vehicle for reacting to the complex and ill-defined

issues typically involved. If the scenarios do not intuitively 'hang together', why not? The

usual problem is that one or more of the assumptions turns out to be unrealistic in terms of

how the participants see their world. If this is the case then you need to return to the first

step - the whole scenario planning process is above all an iterative one (returning to its

beginnings a number of times until the final outcome makes the best sense).

Step 5 - write the scenarios

The scenarios are then 'written up' in the most suitable form. The flexibility of this step often

confuses participants, for they are used to forecasting processes which have a fixed format.

The rule, though, is that you should produce the scenarios in the form most suitable for use

by the managers who are going to base their strategy on them. Less obviously, the

managers who are going to implement this strategy should also be taken into account. They

will also be exposed to the scenarios, and will need to believe in these. This is essentially a

'marketing' decision, since it will be very necessary to 'sell' the final results to the users. On

the other hand, a not inconsiderable factor may be to use the form the author also

finds most comfortable. If the form is alien to him or her the chances are that the resulting

scenarios will carry little conviction when it comes to the 'sale'.

Most scenarios will, perhaps, be written in word form (almost as a series of alternative essays about the future), especially as they will almost inevitably be qualitative - hardly surprising, since managers and their audiences will probably use them in their day-to-day communications. Some, though, use an expanded series of lists, and some enliven their reports by adding a fictional 'character' to the material - perhaps taking literally the idea that they are stories about the future - though they are still clearly intended to be factual.

On the other hand, they may include numeric data and/or diagrams - as those of Shell do

(and in the process gain by the acid test of more measurable 'predictions').

Step 6 - identify issues arising

The final stage of the process is to examine these scenarios to determine the most critical outcomes: the 'branching points' relating to the 'issues' which will have the greatest

impact (potentially generating 'crises') on the future of the organisation. The subsequent

strategy will have to address these - since the normal approach to strategy deriving from

scenarios is one which aims to minimise risk by being 'robust' (that is, it will safely cope with

all the alternative outcomes of these 'life and death' issues) rather than aiming for

performance (profit) maximisation by gambling on one outcome.

Use of scenarios

It is important to note that scenarios may be used in a number of ways:

a) Containers for the drivers/event strings

Most basically, they are a logical device, an artificial framework, for presenting the individual

factors/topics (or coherent groups of these) so that these are made easily available for

managers' use - as useful ideas about future developments in their own right - without

reference to the rest of the scenario. It should be stressed that no factors should be

dropped, or even given lower priority, as a result of producing the scenarios. In this context,

which scenario contains which topic (driver), or issue about the future, is irrelevant.

b) Tests for consistency

At every stage it is necessary to iterate, to check that the contents are viable and make any

necessary changes to ensure that they are; here the main test is to see if the scenarios

seem to be internally consistent - if they are not then the writer must loop back to earlier

stages to correct the problem. Though it has been mentioned previously, it is important to

stress once again that scenario building is ideally an iterative process. It usually does not

just happen in one meeting - though even one attempt is better than none - but takes place

over a number of meetings as the participants gradually refine their ideas.

c) Positive perspectives

Perhaps the main benefit deriving from scenarios, however, comes from the alternative

'flavours' of the future their different perspectives offer. It is a common experience, when

the scenarios finally emerge, for the participants to be startled by the insight they offer as to what the general shape of the future might be. At this stage it is no longer a theoretical exercise but becomes a genuine framework (or rather a set of alternative frameworks) for dealing with that future.

Scenario planning compared to other techniques

Scenario planning differs from contingency planning, sensitivity analysis and computer

simulations.[13]

Contingency planning is a "what if" tool that takes into account only one uncertainty, whereas scenario planning considers combinations of uncertainties in each scenario.

Planners also try to select especially plausible but uncomfortable combinations of social

developments.

Sensitivity analysis analyzes changes in one variable only, which is useful for simple

changes, while scenario planning tries to expose policy makers to significant interactions of

major variables.

While scenario planning can benefit from computer simulations, scenario planning is less

formalized, and can be used to make plans for qualitative patterns that show up in a wide

variety of simulated events.

During the past five years, computer-supported morphological analysis has been employed as an aid in scenario development by the Swedish National Defence Research Agency in

Stockholm.[15] This method makes it possible to create a multi-variable morphological field

which can be treated as an inference model – thus integrating scenario planning techniques

with contingency analysis and sensitivity analysis.
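
As a rough illustration of what such a morphological field might look like, the Python sketch below builds a field as the cross-product of a few parameters and prunes inconsistent combinations. The parameters, values and consistency rule are invented for illustration and are not taken from the agency's actual method.

from itertools import product

# A hypothetical morphological field: each parameter has a set of values.
field = {
    "threat level": ["low", "medium", "high"],
    "warning time": ["days", "hours"],
    "response": ["monitor", "mobilise"],
}

def consistent(config):
    # Example cross-consistency rule: a high threat with only hours of
    # warning cannot be met by mere monitoring.
    return not (config["threat level"] == "high"
                and config["warning time"] == "hours"
                and config["response"] == "monitor")

configs = [dict(zip(field, values)) for values in product(*field.values())]
scenarios = [c for c in configs if consistent(c)]
print(f"{len(scenarios)} consistent configurations out of {len(configs)}")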

Companies offering scenario planning services[16]

The Applied Futures - http://www.exploit-the-future.com

Battelle - http://www.battelle.org

Decision Strategies International, Inc. - http://www.thinkdsi.com

The Foundation for Our Future - http://www.foff.org

The Futures Strategy Group - http://www.futurestrat.com

GBN Global Business Network - http://www.gbn.com

SAMI Consulting - http://www.samiconsulting.co.uk

Scenarios + Vision - http://www.scenarios-vision.com

SRI Consulting Business Intelligence - http://www.sric-bi.com

Scenario Management International - http://www.scmi.de

Strategy dynamics

The word ‘dynamics’ appears frequently in discussions and writing about strategy, and is

used in two distinct, though equally important senses.

The dynamics of strategy and performance concerns the ‘content’ of strategy –

initiatives, choices, policies and decisions adopted in an attempt to improve performance,

and the results that arise from these managerial behaviors. The dynamic model of the

strategy process is a way of understanding how strategic actions occur. It recognizes that

strategic planning is dynamic, that is, strategy-making involves a complex pattern of actions

and reactions. It is partially planned and partially unplanned.

A literature search shows the first of these senses to be both the earliest and most widely used meaning

of ‘strategy dynamics’, though that is not to diminish the importance of the dynamic view of the strategy

process.

Early research

For instance, early strategy and organizational theorists emphasized complexity-like thinking, including:

Herbert Simon's interest in decomposable systems and computational complexity

Karl Weick's loose coupling theory and interest in causal dependencies

Burns and Stalker's contrast between organic and mechanistic structures

Charles Perrow's interest in the link between complex organization and catastrophic accidents

James March's contrast between exploration and exploitation, which owes a debt to complexity theorist John Holland

Later research

More recently, work by organizational scholars and their colleagues has added greatly to our understanding of how concepts from the complexity sciences can be used to understand strategy and organizations. The work of Dan Levinthal, Jan Rivkin, Nicolaj Siggelkow, Kathleen Eisenhardt, Nelson Repenning, Phil Anderson and their research groups has been influential in their use of ideas from the complexity sciences in the fields of strategic management and organizational studies.

Chaos theory

[Figure: a plot of the Lorenz attractor for r = 28, σ = 10, b = 8/3]

In mathematics, chaos theory describes the behavior

of certain dynamical systems – that is, systems whose

states evolve with time – that may exhibit dynamics

that are highly sensitive to initial conditions (popularly

referred to as the butterfly effect). As a result of this

sensitivity, which manifests itself as an exponential

growth of perturbations in the initial conditions, the

behavior of chaotic systems appears to be random.

This happens even though these systems are

deterministic, meaning that their future dynamics are

fully defined by their initial conditions, with no random

elements involved. This behavior is known as

deterministic chaos, or simply chaos.

Chaotic behaviour is also observed in natural systems,

such as the weather. This may be explained by a

chaos-theoretical analysis of a mathematical model of

such a system, embodying the laws of physics that

are relevant for the natural system.

Contents

1 Overview

2 History

3 Chaotic dynamics

o 3.1 Attractors

o 3.2 Strange attractors

o 3.3 Minimum complexity of a chaotic system

o 3.4 Mathematical theory

4 Distinguishing random from chaotic data

5 Applications

6 See also

7 References

8 Scientific Literature

o 8.1 Articles

o 8.2 Textbooks

o 8.3 Semi-technical and popular works

9 External links

Contents

1 Static Models of Strategy and Performance

2 The need for a Dynamic Model of Strategy and

Performance

3 A Possible Dynamic Model of Strategy and

Performance

4 The Static Model of the Strategy Process

5 The Dynamic Model of the Strategy Process

6 Criticisms of Dynamic Strategy Process Models

Overview

Chaotic behavior has been observed in the laboratory in a variety of systems including

electrical circuits, lasers, oscillating chemical reactions, fluid dynamics, and mechanical and

magneto-mechanical devices. Observations of chaotic behavior in nature include the

dynamics of satellites in the solar system, the time evolution of the magnetic field of

celestial bodies, population growth in ecology, the dynamics of the action potentials in

neurons, and molecular vibrations. Everyday examples of chaotic systems include weather

and climate.[1] There is some controversy over the existence of chaotic dynamics in plate

tectonics and in economics.[2][3][4]

Systems that exhibit mathematical chaos are deterministic and thus orderly in some sense;

this technical use of the word chaos is at odds with common parlance, which suggests

complete disorder. A related field of physics called quantum chaos theory studies systems

that follow the laws of quantum mechanics. Recently, another field, called relativistic chaos,[5] has emerged to describe systems that follow the laws of general relativity.

This section tries to describe limits on the degree of disorder that computers can model with

simple rules that have complex results. For example, the Lorenz system pictured is chaotic,

but has a clearly defined structure. Bounded chaos is a useful term for describing models of

disorder.

History

[Figure: fractal fern created using the chaos game. Natural forms (ferns, clouds, mountains, etc.) may be recreated through an iterated function system (IFS).]

The first discoverer of chaos was Henri Poincaré. In 1890, while studying the three-body

problem, he found that there can be orbits which are nonperiodic, and yet not forever

increasing nor approaching a fixed point.[6] In 1898 Jacques Hadamard published an

influential study of the chaotic motion of a free particle gliding frictionlessly on a surface of

constant negative curvature.[7] In the system studied, "Hadamard's billiards," Hadamard was

able to show that all trajectories are unstable in that all particle trajectories diverge

exponentially from one another, with a positive Lyapunov exponent.

Much of the earlier theory was developed almost entirely by mathematicians, under the

name of ergodic theory. Later studies, also on the topic of nonlinear differential equations,

were carried out by G.D. Birkhoff,[8] A.N. Kolmogorov,[9][10][11] M.L. Cartwright and J.E.

Littlewood,[12] and Stephen Smale.[13] Except for Smale, these studies were all directly

inspired by physics: the three-body problem in the case of Birkhoff, turbulence and

astronomical problems in the case of Kolmogorov, and radio engineering in the case of

Cartwright and Littlewood.[citation needed] Although chaotic planetary motion had not been

observed, experimentalists had encountered turbulence in fluid motion and nonperiodic

oscillation in radio circuits without the benefit of a theory to explain what they were seeing.

Despite initial insights in the first half of the twentieth century, chaos theory became

formalized as such only after mid-century, when it first became evident for some scientists

that linear theory, the prevailing system theory at that time, simply could not explain the

observed behaviour of certain experiments like that of the logistic map. What had beforehand been excluded as measurement imprecision and simple "noise" was considered by chaos theorists to be a full component of the studied systems.

The main catalyst for the development of chaos theory was the electronic computer. Much

of the mathematics of chaos theory involves the repeated iteration of simple mathematical

formulas, which would be impractical to do by hand. Electronic computers made these

repeated calculations practical, while figures and images made it possible to visualize these

systems. One of the earliest electronic digital computers, ENIAC, was used to run simple

weather forecasting models.

[Figure: turbulence in the tip vortex from an airplane wing.] Studies of the critical point beyond which a system creates turbulence were important for chaos theory, analyzed for example by the Soviet physicist Lev Landau, who developed the Landau-Hopf theory of turbulence. David Ruelle and Floris Takens later predicted, against Landau, that fluid turbulence could develop through a strange attractor, a main concept of chaos theory.

An early pioneer of the theory was Edward Lorenz whose interest in chaos came about

accidentally through his work on weather prediction in 1961.[14] Lorenz was using a simple

digital computer, a Royal McBee LGP-30, to run his weather simulation. He wanted to see a

sequence of data again and to save time he started the simulation in the middle of its

course. He was able to do this by entering a printout of the data corresponding to conditions

in the middle of his simulation which he had calculated last time.

To his surprise the weather that the machine began to predict was completely different from

the weather calculated before. Lorenz tracked this down to the computer printout. The

computer worked with 6-digit precision, but the printout rounded variables off to a 3-digit

number, so a value like 0.506127 was printed as 0.506. This difference is tiny and the

consensus at the time would have been that it should have had practically no effect.

However Lorenz had discovered that small changes in initial conditions produced large

changes in the long-term outcome.[15] Lorenz's discovery, which gave its name to Lorenz

attractors, proved that meteorology could not reasonably predict weather beyond a weekly

period (at most).
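
Lorenz's rounding accident is easy to reproduce numerically. The sketch below integrates the Lorenz equations, with the conventional parameters σ = 10, r = 28, b = 8/3, from a 'full-precision' state and from the same state rounded to three digits; the step size, horizon and starting point are illustrative choices, not values from Lorenz's original run.

def lorenz_step(state, dt=0.005, sigma=10.0, r=28.0, b=8.0 / 3.0):
    # One Euler step of dx/dt = sigma*(y - x), dy/dt = x*(r - z) - y,
    # dz/dt = x*y - b*z.
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (r - z) - y),
            z + dt * (x * y - b * z))

full = (0.506127, 1.0, 1.0)     # six-digit state (illustrative)
rounded = (0.506, 1.0, 1.0)     # the same state as a three-digit printout

for step in range(1, 6001):
    full, rounded = lorenz_step(full), lorenz_step(rounded)
    if step % 1000 == 0:
        gap = sum((u - v) ** 2 for u, v in zip(full, rounded)) ** 0.5
        print(f"t = {step * 0.005:4.1f}  separation = {gap:.6f}")

The initial difference of 0.000127 grows until the two trajectories are entirely unrelated, which is the behaviour Lorenz observed.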

The year before, Benoît Mandelbrot found recurring patterns at every scale in data on cotton

prices.[16] Beforehand, he had studied information theory and concluded noise was patterned

like a Cantor set: on any scale the proportion of noise-containing periods to error-free

periods was a constant – thus errors were inevitable and must be planned for by

incorporating redundancy.[17] Mandelbrot described both the "Noah effect" (in which sudden

discontinuous changes can occur, e.g., in a stock's prices after bad news, thus challenging

normal distribution theory in statistics, aka Bell Curve) and the "Joseph effect" (in which

persistence of a value can occur for a while, yet suddenly change afterwards).[18][19] In 1967,

he published "How long is the coast of Britain? Statistical self-similarity and fractional

dimension," showing that a coastline's length varies with the scale of the measuring

instrument, resembles itself at all scales, and is infinite in length for an infinitesimally small

measuring device.[20] Observing that a ball of twine appears to be a point when viewed from far away (0-dimensional), a ball when viewed from fairly near (3-dimensional), or a curved strand (1-dimensional), he argued that the dimensions of an object are relative to the observer and may be fractional. An object whose irregularity is constant over different scales

("self-similarity") is a fractal (for example, the Koch curve or "snowflake", which is infinitely

long yet encloses a finite space and has fractal dimension circa 1.2619, the Menger sponge and the Sierpiński gasket). In 1975 Mandelbrot published The Fractal Geometry of

Nature, which became a classic of chaos theory. Biological systems such as the branching of

the circulatory and bronchial systems proved to fit a fractal model.

Chaos was observed by a number of experimenters before it was recognized; e.g., in 1927

by van der Pol[21] and in 1958 by R.L. Ives.[22][23] However, Yoshisuke Ueda seems to have

been the first experimenter to have identified a chaotic phenomenon as such by using an

analog computer on November 27, 1961. The chaos exhibited by an analog computer is a

real phenomenon, in contrast with those that digital computers calculate, which has a

different kind of limit on precision. Ueda's supervising professor, Hayashi, did not believe in

chaos, and thus he prohibited Ueda from publishing his findings until 1970.[24]

In December 1977 the New York Academy of Sciences organized the first symposium on

Chaos, attended by David Ruelle, Robert May, James Yorke (coiner of the term "chaos" as

used in mathematics), Robert Shaw (a physicist, part of the Eudaemons group with J. Doyne

Farmer and Norman Packard who tried to find a mathematical method to beat roulette, and

then created with them the Dynamical Systems Collective in Santa Cruz, California), and the

meteorologist Edward Lorenz.

The following year, Mitchell Feigenbaum published the noted article "Quantitative

Universality for a Class of Nonlinear Transformations", where he described logistic maps.[25]

Feigenbaum had applied fractal geometry to the study of natural forms such as coastlines.

Feigenbaum notably discovered universality in chaos, permitting the application of chaos

theory to many different phenomena.

In 1979, Albert J. Libchaber, during a symposium organized in Aspen by Pierre Hohenberg,

presented his experimental observation of the bifurcation cascade that leads to chaos and

turbulence in convective Rayleigh–Bénard systems. He was awarded the Wolf Prize in

Physics in 1986 along with Mitchell J. Feigenbaum "for his brilliant experimental

demonstration of the transition to turbulence and chaos in dynamical systems".[26]

Then in 1986 the New York Academy of Sciences co-organized with the National Institute of

Mental Health and the Office of Naval Research the first important conference on Chaos in

biology and medicine. Bernardo Huberman thereby presented a mathematical model of the

eye tracking disorder among schizophrenics.[27] Chaos theory thereafter renewed physiology

in the 1980s, for example in the study of pathological cardiac cycles.

In 1987, Per Bak, Chao Tang and Kurt Wiesenfeld published a paper in Physical Review

Letters [28] describing for the first time self-organized criticality (SOC), considered to be one of

the mechanisms by which complexity arises in nature. Alongside largely lab-based

approaches such as the Bak–Tang–Wiesenfeld sandpile, many other investigations have

centred around large-scale natural or social systems that are known (or suspected) to

display scale-invariant behaviour. Although these approaches were not always welcomed (at

least initially) by specialists in the subjects examined, SOC has nevertheless become

established as a strong candidate for explaining a number of natural phenomena, including:

earthquakes (which, long before SOC was discovered, were known as a source of scale-

invariant behaviour such as the Gutenberg–Richter law describing the statistical distribution

of earthquake sizes, and the Omori law [29] describing the frequency of aftershocks); solar

flares; fluctuations in economic systems such as financial markets (references to SOC are

common in econophysics); landscape formation; forest fires; landslides; epidemics; and

biological evolution (where SOC has been invoked, for example, as the dynamical

mechanism behind the theory of "punctuated equilibria" put forward by Niles Eldredge and

Stephen Jay Gould). Worryingly, given the implications of a scale-free distribution of event

sizes, some researchers have suggested that another phenomenon that should be

considered an example of SOC is the occurrence of wars. These "applied" investigations of

SOC have included both attempts at modelling (either developing new models or adapting

existing ones to the specifics of a given natural system), and extensive data analysis to

determine the existence and/or characteristics of natural scaling laws.

The same year, James Gleick published Chaos: Making a New Science, which became a best-

seller and introduced general principles of chaos theory as well as its history to the broad

public. At first the domain of a few isolated individuals, chaos theory progressively

emerged as a transdisciplinary and institutional discipline, mainly under the name of

nonlinear systems analysis. Alluding to Thomas Kuhn's concept of a paradigm shift exposed

in The Structure of Scientific Revolutions (1962), many "chaologists" (as some self-

nominated themselves) claimed that this new theory was an example of such a shift, a

thesis upheld by J. Gleick.

The availability of cheaper, more powerful computers has broadened the applicability of chaos

theory. Currently, chaos theory continues to be a very active area of research, involving

many different disciplines (mathematics, topology, physics, population biology, biology,

meteorology, astrophysics, information theory, etc.).

Chaotic dynamics

For a dynamical system to be classified as chaotic, it must have the following properties:[30]

[Figure: an escape-time fractal, generated by iterating z → z² − z̄ + c for each pixel c and counting the cycles until |z| exceeds two; the resulting map meets the second condition below but threatens to fail the third.]

1. it must be sensitive to initial conditions,

2. it must be topologically mixing, and

3. its periodic orbits must be dense.

Sensitivity to initial conditions means that each point in such a system is arbitrarily closely

approximated by other points with significantly different future trajectories. Thus, an

arbitrarily small perturbation of the current trajectory may lead to significantly different

future behaviour.

Sensitivity to initial conditions is popularly known as the "butterfly effect", so called because

of the title of a paper given by Edward Lorenz in 1972 to the American Association for the

Advancement of Science in Washington, D.C. entitled Predictability: Does the Flap of a

Butterfly’s Wings in Brazil set off a Tornado in Texas? The flapping wing represents a small

change in the initial condition of the system, which causes a chain of events leading to

large-scale phenomena. Had the butterfly not flapped its wings, the trajectory of the system

might have been vastly different.

Sensitivity to initial conditions is often confused with chaos in popular accounts. It can also

be a subtle property, since it depends on a choice of metric, or the notion of distance in the

phase space of the system. For example, consider the simple dynamical system produced by

repeatedly doubling an initial value (defined by iterating the mapping on the real line that

maps x to 2x). This system has sensitive dependence on initial conditions everywhere, since

any pair of nearby points will eventually become widely separated. However, it has

extremely simple behaviour, as all points except 0 tend to infinity. If instead we use the

bounded metric on the line obtained by adding the point at infinity and viewing the result as

a circle, the system no longer is sensitive to initial conditions. For this reason, in defining

chaos, attention is normally restricted to systems with bounded metrics, or closed, bounded

invariant subsets of unbounded systems.
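
A few lines of Python make the doubling-map point concrete (the starting values are arbitrary): the gap between two nearby orbits doubles at every step, yet both orbits simply run off to infinity.

x, y = 0.1000, 0.1001          # two nearby starting points (illustrative)
for step in range(1, 11):
    x, y = 2 * x, 2 * y        # the doubling map on the real line
    print(f"step {step:2d}: x = {x:9.4f}  gap = {y - x:.4f}")
# Exponential separation (the gap doubles each step) combined with
# trivial long-run behaviour: sensitivity alone is not chaos.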

Even for bounded systems, sensitivity to initial conditions is not identical with chaos. For

example, consider the two-dimensional torus described by a pair of angles (x,y), each

ranging between zero and 2π. Define a mapping that takes any point (x,y) to (2x, y + a),

where a is any number such that a/2π is irrational. Because of the doubling in the first

coordinate, the mapping exhibits sensitive dependence on initial conditions. However,

because of the irrational rotation in the second coordinate, there are no periodic orbits, and

hence the mapping is not chaotic according to the definition above.

Topologically mixing means that the system will evolve over time so that any given region or

open set of its phase space will eventually overlap with any other given region. Here,

"mixing" is really meant to correspond to the standard intuition: the mixing of colored dyes

or fluids is an example of a chaotic system.

Linear systems are never chaotic; for a dynamical system to display chaotic behaviour it has

to be nonlinear. Also, by the Poincaré–Bendixson theorem, a continuous dynamical system

on the plane cannot be chaotic; among continuous systems only those whose phase space is

non-planar (having dimension at least three, or with a non-Euclidean geometry) can exhibit

chaotic behaviour. However, a discrete dynamical system (such as the logistic map) can

exhibit chaotic behaviour in a one-dimensional or two-dimensional phase space.

Attractors

Some dynamical systems are chaotic everywhere (see e.g. Anosov diffeomorphisms) but in

many cases chaotic behaviour is found only in a subset of phase space. The cases of most

interest arise when the chaotic behaviour takes place on an attractor, since then a large set

of initial conditions will lead to orbits that converge to this chaotic region.

An easy way to visualize a chaotic attractor is to start with a point in the basin of attraction

of the attractor, and then simply plot its subsequent orbit. Because of the topological

transitivity condition, this is likely to produce a picture of the entire final attractor.

[Figure: phase diagram for a damped driven pendulum, with double-period motion]

For instance, in a system describing a pendulum, the phase space might be two-

dimensional, consisting of information about position and velocity. One might plot the

position of a pendulum against its velocity. A pendulum at rest will be plotted as a point, and

one in periodic motion will be plotted as a simple closed curve. When such a plot forms a

closed curve, the curve is called an orbit. Our pendulum has an infinite number of such

orbits, forming a pencil of nested ellipses about the origin.

Strange attractors

While most of the motion types mentioned above give rise to very simple attractors, such as

points and circle-like curves called limit cycles, chaotic motion gives rise to what are known

as strange attractors, attractors that can have great detail and complexity. For instance, a

simple three-dimensional model of the Lorenz weather system gives rise to the famous

Lorenz attractor. The Lorenz attractor is perhaps one of the best-known chaotic system

diagrams, probably because not only was it one of the first, but it is one of the most complex

and as such gives rise to a very interesting pattern which looks like the wings of a butterfly.

Another such attractor is the Rössler map, which experiences period-two doubling route to

chaos, like the logistic map.

Strange attractors occur in both continuous dynamical systems (such as the Lorenz system)

and in some discrete systems (such as the Hénon map). Other discrete dynamical systems

have a repelling structure called a Julia set which forms at the boundary between basins of

attraction of fixed points - Julia sets can be thought of as strange repellers. Both strange

attractors and Julia sets typically have a fractal structure.

The Poincaré-Bendixson theorem shows that a strange attractor can only arise in a

continuous dynamical system if it has three or more dimensions. However, no such

restriction applies to discrete systems, where strange attractors can occur in two or even one dimension.

The initial conditions of three or more bodies interacting through gravitational attraction

(see the n -body problem ) can be arranged to produce chaotic motion.

Minimum complexity of a chaotic system

[Figure: bifurcation diagram of the logistic map, displaying chaotic behaviour past a threshold]

Simple systems can also produce chaos without relying on differential equations. An

example is the logistic map, which is a difference equation (recurrence relation) that

describes population growth over time. Another example is the Ricker model of population

dynamics.
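
The logistic map is simple enough to run in a few lines. The sketch below, with illustrative initial values, iterates the map x → r·x·(1 − x) at r = 4, where it is chaotic, for two almost identical starting points.

def logistic_orbit(x0, r=4.0, n=20):
    # Iterate the logistic map x -> r*x*(1 - x) n times.
    xs = [x0]
    for _ in range(n):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_orbit(0.200000)
b = logistic_orbit(0.200001)   # perturbed by one part in 200,000
for n in (0, 5, 10, 15, 20):
    print(f"n = {n:2d}: {a[n]:.6f} vs {b[n]:.6f}")
# By n = 20 the two orbits bear no resemblance to each other.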

Even the evolution of simple discrete systems, such as cellular automata, can heavily

depend on initial conditions. Stephen Wolfram has investigated a cellular automaton with

this property, termed by him rule 30.

A minimal model for conservative (reversible) chaotic behavior is provided by Arnold's cat

map.

Mathematical theory

Sarkovskii's theorem is the basis of the Li and Yorke (1975) proof that any one-dimensional

system which exhibits a regular cycle of period three will also display regular cycles of every

other length as well as completely chaotic orbits.

Mathematicians have devised many additional ways to make quantitative statements about

chaotic systems. These include: fractal dimension of the attractor, Lyapunov exponents,

recurrence plots, Poincaré maps, bifurcation diagrams, and transfer operator.

Distinguishing random from chaotic data

It can be difficult to tell from data whether a physical or other observed process is random or

chaotic, because in practice no time series consists of pure 'signal.' There will always be

some form of corrupting noise, even if it is present as round-off or truncation error. Thus any

real time series, even if mostly deterministic, will contain some randomness.[31]

All methods for distinguishing deterministic and stochastic processes rely on the fact that a

deterministic system always evolves in the same way from a given starting point.[32][31] Thus,

given a time series to test for determinism, one can:

1. pick a test state;

2. search the time series for a similar or 'nearby' state; and

3. compare their respective time evolutions.

Define the error as the difference between the time evolution of the 'test' state and the time

evolution of the nearby state. A deterministic system will have an error that either remains

small (stable, regular solution) or increases exponentially with time (chaos). A stochastic

system will have a randomly distributed error.[33]
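Here is a minimal Python sketch of the test just described, under the simplifying assumption of a scalar time series and a one-dimensional state (no embedding): pick a test point, find its nearest neighbour elsewhere in the series, and compare how the two segments evolve. All names and parameter values are illustrative.

```python
def divergence(series, test_index, horizon=20, exclude=5):
    # find the index whose value is closest to the test state,
    # skipping points too close in time to the test index itself
    candidates = [i for i in range(len(series) - horizon)
                  if abs(i - test_index) > exclude]
    nearest = min(candidates, key=lambda i: abs(series[i] - series[test_index]))
    # error between the two time evolutions over the horizon
    return [abs(series[test_index + k] - series[nearest + k])
            for k in range(horizon)]

# deterministic example: the chaotic logistic map
xs = [0.4]
for _ in range(2000):
    xs.append(4.0 * xs[-1] * (1.0 - xs[-1]))
print(divergence(xs, test_index=1000))   # error starts small, grows roughly exponentially
```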

Essentially all measures of determinism taken from time series rely upon finding the closest states to a given 'test' state (e.g., correlation dimension, Lyapunov exponents, etc.). To define the state of a system one typically relies on phase space embedding methods.[34] Typically one chooses an embedding dimension and investigates the propagation of the error between two nearby states. If the error looks random, one increases the dimension; if the error can be made to look deterministic by increasing the dimension, the analysis is done. Though this may sound simple, in practice it is not. One complication is that as the dimension increases, the search for a nearby state requires much more computation time and much more data (the amount of data required grows exponentially with the embedding dimension) to find a suitably close candidate. If the embedding dimension (the number of measures per state) is chosen too small (less than the 'true' value), deterministic data can appear random; choosing the dimension too large, however, poses no problem in principle, and the method will still work.

Applications


Chaos theory is applied in many scientific disciplines: mathematics, biology, computer

science, economics, engineering, finance, philosophy, physics, politics, population dynamics,

psychology, and robotics.[35]

One of the most successful applications of chaos theory has been in ecology, where

dynamical systems such as the Ricker model have been used to show how population

growth under density dependence can lead to chaotic dynamics.

Chaos theory is also currently being applied to medical studies of epilepsy, specifically to the

prediction of seemingly random seizures by observing initial conditions.[36]

Systems thinking

Systems thinking is any process of estimating or inferring how local policies, actions, or changes influence the state of the neighboring universe. It is an approach to problem solving that views "problems" as parts of an overall system, rather than reacting to present outcomes or events and potentially contributing to further development of the undesired issue or problem.[1] Systems thinking is a framework based on the belief that the

component parts of a system can best be understood in the context of relationships with

each other and with other systems, rather than in isolation. The only way to fully understand

why a problem or element occurs and persists is to understand the part in relation to the

whole.[2] Standing in contrast to Descartes's scientific reductionism and philosophical

analysis, it proposes to view systems in a holistic manner. Consistent with systems

philosophy, systems thinking concerns an understanding of a system by examining the

linkages and interactions between the elements that compose the entirety of the system.

Systems thinking attempts to illustrate that events are separated by distance and time and

that small catalytic events can cause large changes in complex systems. Acknowledging

that an improvement in one area of a system can adversely affect another area of the

system, it promotes organizational communication at all levels in order to avoid the silo

effect. Systems thinking techniques may be used to study any kind of system — natural,

scientific, engineered, human, or conceptual.

Contents


1 The concept of a system

2 The systems approach

3 Applications

The concept of a system

Both systems thinkers and futurists consider that:

a "system" is a dynamic and complex whole, interacting as a structured functional

unit;

energy, material and information flow among the different elements that compose the system;

a system is a community situated within an environment;

energy, material and information flow from and to the surrounding environment via semi-permeable membranes or boundaries;

systems are often composed of entities seeking equilibrium but can exhibit

oscillating, chaotic, or exponential behavior.

A holistic system is any set (group) of interdependent or temporally interacting parts. Parts

are generally systems themselves and are composed of other parts, just as systems are

generally parts or holons of other systems.

Systems and the application of systems thinking have been grouped into three categories

based on the techniques used to tackle a system:

Hard systems — involving simulations, often using computers and the techniques of operations research. Useful for problems that can justifiably be quantified. However, it cannot easily take into account unquantifiable variables (opinions, culture, politics, etc.), and may treat people as passive, rather than as having complex motivations.

Soft systems — For systems that cannot easily be quantified, especially those

involving people holding multiple and conflicting frames of reference. Useful for

understanding motivations, viewpoints, and interactions and addressing qualitative

as well as quantitative dimensions of problem situations. Soft systems is a field that draws on foundational methodological work developed by Peter Checkland, Brian Wilson and their colleagues at Lancaster University. Morphological analysis is a

complementary method for structuring and analysing non-quantifiable problem

complexes.


Evolutionary systems — Bela H. Banathy developed a methodology that is applicable

to the design of complex social systems. This technique integrates critical systems

inquiry with soft systems methodologies. Evolutionary systems, similar to dynamic systems, are understood as open, complex systems, but with the capacity to evolve

over time. Banathy uniquely integrated the interdisciplinary perspectives of systems

research (including chaos, complexity, cybernetics), cultural anthropology,

evolutionary theory, and others.

The systems approach

The systems thinking approach incorporates several tenets:[3]

Interdependence of objects and their attributes - independent elements can never

constitute a system

Holism - emergent properties not possible to detect by analysis should be possible to

define by a holistic approach

Goal seeking - systemic interaction must result in some goal or final state

Inputs and Outputs - in a closed system inputs are determined once and constant; in

an open system additional inputs are admitted from the environment

Transformation of inputs into outputs - this is the process by which the goals are

obtained

Entropy - the amount of disorder or randomness present in any system

Regulation - a method of feedback is necessary for the system to operate predictably

Hierarchy - complex wholes are made up of smaller subsystems

Differentiation - specialized units perform specialized functions

Equifinality - alternative ways of attaining the same objectives (convergence)

Multifinality - attaining alternative objectives from the same inputs (divergence)

Some examples:

Rather than trying to improve the braking system on a car by looking in great detail

at the material composition of the brake pads (reductionist), the boundary of the

braking system may be extended to include the interactions between the:


brake disks or drums

brake pedal sensors

hydraulics

driver reaction time

tires

road conditions

weather conditions

time of day

Using the tenet of "Multifinality", a supermarket could be considered to be:

a "profit making system" from the perspective of management and owners

a "distribution system" from the perspective of the suppliers

an "employment system" from the perspective of employees

a "materials supply system" from the perspective of customers

an "entertainment system" from the perspective of loiterers

a "social system" from the perspective of local residents

a "dating system" from the perspective of single customers

As a result of such thinking, new insights may be gained into how the supermarket

works, why it has problems, how it can be improved or how changes made to one

component of the system may impact the other components.

Applications

Systems thinking is increasingly being used to tackle a wide variety of subjects in fields such

as computing, engineering, epidemiology, information science, health, manufacturing,

management, and the environment.

Some examples:


Organizational architecture

Job Design

Team Population and Work Unit Design

Linear and Complex Process Design

Supply Chain Design

Business continuity planning with FMEA protocol

Critical Infrastructure Protection via FBI Infragard

Delphi method — developed by RAND for USAF

Futures studies — Thought leadership mentoring

Leadership development

Oceanography — forecasting complex systems behavior

Quality function deployment (QFD)

Quality management — Hoshin planning methods

Quality storyboard — StoryTech framework (LeapfrogU-EE)

Software quality

Programme Management


Business philosophies and popular management theories

A business philosophy or popular management theory is any of a range of approaches

to accounting, marketing, public relations, operations, training, labor relations, executive

time management, investment, and/or corporate governance claimed (by their proponents,

and sometimes only by their proponents and selected clients) to improve business

performance in some measurable or otherwise provable way.

These management theories often have their own vocabulary (jargon). They sometimes

depend on the business insights of a single guru. They rarely have the sophistication or


internal consistency to qualify as a school of philosophy in the conventional sense - some (branded "biz-cults") resemble cult religions. They tend to have in common high consulting fees for access to the "business gurus" who created the "philosophy".

Only rarely do such schools transmit to any trusted students the capacity to teach others -

one of the key requirements of any legitimate non-esoteric school of thought or academic

discipline.

Most of these theories tend to experience a limited period of popularity (about 5 to 10

years). Then they disappear from the popular consciousness. Occasionally one has lasting

value and gets incorporated into textbooks and into academic management thought. For

every theory that gets incorporated into strategic management textbooks about a hundred

remain forgotten. Many theories tend either to have too narrow a focus to build a complete

corporate strategy on, or appear too general and abstract for applicability to specific

situations. The management-talk circuit fuels the low success rate: in that circuit hundreds

of self-appointed gurus queue in turn to sell their books and to explain their "revolutionary"

and "groundbreaking" theories to audiences of business executives for phenomenal fees.

Note too, however, that management theories often undergo testing in the real world.

Disciples apply or attempt to apply such theories, and find them sometimes consistently

applicable over time, sometimes merely an "idea du jour". The relevant and valuable

principles become recognized, and in this way may get incorporated into academic

management thought.

The static assessment of strategy and performance, and its associated tools and frameworks, dominates research, textbooks and practice in the field. This stems from a presumption dating back to before the 1980s that market and industry conditions determine how firms in a sector perform on average, and the scope for any firm to do better or worse than that average. For example, the airline industry is notoriously unprofitable, but some firms are spectacularly profitable exceptions.

The ‘industry forces’ paradigm was established most firmly by Michael Porter (1980) in his seminal book ‘Competitive Strategy’, the ideas of which still form the basis of strategy analysis in many consulting firms and investment companies. Richard Rumelt (1991) was amongst the first to challenge this presumption of the power of ‘industry forces’, and it has since become well understood that business factors are more important drivers of performance than industry factors – in essence, this means you can do well in difficult


industries, and struggle in industries where others do well. Although the relative importance

of industry factors and firm-specific factors continues to be researched, the debate is now

essentially over – management of strategy matters.

The increasing interest in how some businesses in an industry perform better than others

led to the emergence of the ‘resource-based view’ (RBV) of strategy (Wernerfelt, 1984; Barney, 1991; Grant, 1991), which seeks to discover the firm-specific sources of superior performance – a research interest that has increasingly come to dominate research in the field.

The need for a Dynamic Model of Strategy and Performance

The debate about the relative influence of industry and business factors on performance, and the RBV-based explanations for superior performance, both pass over a more serious problem, however. This concerns exactly what the ‘performance’ is that management seeks to

improve. Would you prefer, for example, (A) to make $15m per year indefinitely, or (B)

$12m this year, increasing by 20% a year, starting with the same resources?

Nearly half a century ago, Edith Penrose (1959) pointed out that superior profitability (e.g.

return on sales or return on assets) was neither interesting to investors – who value the

prospect of increasing future cash flows – nor sustainable over time. Profitability is not

entirely unimportant – it does after all provide the investment in new resources to enable

growth to occur. More recently, Rugman and Verbeke (2002) have reviewed the implications

of this observation for research in strategy. Richard Rumelt (2007) has again raised the

importance of making progress with the issue of strategy dynamics, describing it as still ‘the

next frontier … underresearched, underwritten about, and underunderstood’.

The essential problem is that tools explaining why firm A performs better than firm B at a

point in time are unlikely to explain why firm B is growing its performance more rapidly than

firm A.

This is not just of theoretical concern, but matters to executives too – efforts by the

management of firm B to match A’s profitability could well destroy its ability to grow profits,

for example. A further practical problem is that many of the static frameworks do not

provide sufficiently fine-grained guidance on strategy to help raise performance. For

example, an investigation that identifies an attractive opportunity to serve a specific market

segment with specific products or services, delivered in a particular way is unlikely to yield

fundamentally different answers from one year to the next. Yet strategic management has

much to do from month to month to ensure the business system develops strongly so as to


take that opportunity quickly and safely. What is needed is a set of tools that explain how

performance changes over time, and how to improve its future trajectory – i.e. a dynamic

model of strategy and performance.

A Possible Dynamic Model of Strategy and Performance

To develop a dynamic model of strategy and performance requires components that explain

how factors change over time. Most of the relationships on which business analysis is based are static and stable over time. For example, "profits = revenue minus costs" or "market share = our sales divided by total market size" are relationships that are true by definition. Static strategy tools seek to solve the strategy problem by

extending this set of stable relationships, e.g. “profitability = some complex function of

product development capability”. Since a company’s sales clearly change over time, there

must be something further back up the causal chain that makes this happen. One such item

is ‘customers’ – if the firm has more customers now than last month, then (everything else

being equal), it will have more sales and profits.

The number of ‘Customers’ at any time, however, cannot be calculated from anything else.

It is one example of a factor with a unique characteristic, known as an ‘asset-stock’. The critical feature is that it accumulates over time, so "customers today = customers yesterday +/- customers won and lost". This is not a theory or statistical observation, but is axiomatic of the way the world works. Other examples include cash (changed by cash-in and cash-out flows), staff (changed by hiring and attrition), capacity, product range and dealers. Many

intangible factors behave in the same way, e.g. reputation and staff skills. Dierickx and Cool

(1989) point out that this causes serious problems for explaining performance over time:

Time compression diseconomies - it takes time to accumulate resources.

Asset mass efficiencies - 'the more you have, the faster you can get more'.

Interconnectedness of asset stocks - building one resource depends on other resources already in place.

Asset erosion - tangible and intangible assets alike deteriorate unless effort and expenditure are committed to maintaining them.

Causal ambiguity - it can be hard to work out, even for the firm that owns a resource, why exactly it accumulates and depletes at the rate it does.

The consequence of these features is that relationships in a business system are

highly non-linear. Statistical analysis will not, then, be able meaningfully to confirm

any causal explanation for the number of customers at any moment in time. If that is

true then statistical analysis also cannot say anything useful about any performance

that depends on customers or on other accumulating asset-stocks – which is always

the case.

Fortunately, a method known as system dynamics captures both the math of asset-

stock accumulation (i.e. resource- and capability-building), and the interdependence

between these components (Forrester, 1961; Sterman, 2000). The asset-stocks

relevant to strategy performance are resources [things we have] and capabilities

[things we are good at doing]. This makes it possible to connect back to the resource-

based view, though with one modification. RBV asserts that any resource which is

clearly identifiable, and can easily be acquired or built, cannot be a source of

competitive advantage, so only resources or capabilities that are valuable, rare, hard

to imitate or buy, and embedded in the organization [the ‘VRIO’ criteria] can be

relevant to explaining performance, for example reputation or product development

capability. Yet day-to-day performance must reflect the simple, tangible resources

such as customers, capacity and cash. VRIO resources may be important also, but it

is not possible to trace a causal path from reputation or product development

capability to performance outcomes without going via the tangible resources of

customers and cash.

Warren (2002, 2007) brought together the specification of resources [tangible and

intangible] and capabilities with the math of system dynamics to assemble a

framework for strategy dynamics and performance with the following elements:

Performance, P, at time t is a function of the quantity of resources R1 to Rn,

discretionary management choices, M, and exogenous factors, E, at that time

(Equation 1).

(1) P(t) = f{R1(t), ..., Rn(t), M(t), E(t)}


The current quantity of each resource Ri at time t is its level at time t-1 plus or minus

any resource-flows that have occurred between t-1 and t (Equation 2).

(2) Ri(t) = Ri(t-1) +/- Ri(t-1...t)

The change in quantity of Ri between time t-1 and time t is a function of the quantity of resources R1 to Rn at time t-1, including that of resource Ri itself, of management choices, M, and of exogenous factors, E, at that time (Equation 3).

(3) Ri(t-1...t) = f{R1(t-1), ..., Rn(t-1), M(t-1), E(t-1)}
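To make equations (1)-(3) concrete, here is a minimal Python sketch for a single resource, 'customers', where the resource-flow depends on the current stock and performance is read off the stock. All parameter values are illustrative assumptions, not taken from the text.

```python
customers = 1000.0           # R1 at t = 0 (illustrative)
win_rate = 0.05              # customers won per period, per existing customer
loss_rate = 0.03             # customers lost per period, per existing customer
profit_per_customer = 20.0   # links the resource level to performance

for t in range(1, 13):
    # Equation (3): the resource-flow is a function of the stock at t-1
    flow = customers * (win_rate - loss_rate)
    # Equation (2): stock today = stock yesterday + flow
    customers += flow
    # Equation (1): performance is a function of current resource levels
    profit = customers * profit_per_customer
    print(t, round(customers), round(profit))
```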

This set of relationships gives rise to an ‘architecture’ that depicts, both graphically

and mathematically, the core of how a business or other organization develops and

performs over time. To this can be added other important extensions, including:

the consequence of resources varying in one or more qualities or ‘attributes’ [e.g.

customer size, staff experience]

the development of resources through stages [disloyal and loyal customers, junior

and senior staff]

rivalry for any resource that may be contested [customers clearly, but also possibly

staff and other factors]

intangible factors [e.g. reputation, staff skills]

capabilities [e.g. product development, selling]

The Static Model of the Strategy Process

According to many introductory strategy textbooks, strategic thinking can be divided

into two segments: strategy formulation and strategy implementation. Strategy

formulation is done first, followed by implementation.

Strategy formulation involves:


Doing a situation analysis: both internal and external; both micro-environmental and macro-environmental.

Concurrent with this assessment, objectives are set. This involves crafting vision statements (long term), mission statements (medium term), overall corporate objectives (both financial and strategic), strategic business unit objectives (both financial and strategic), and tactical objectives.

These objectives should, in the light of the situation analysis, suggest a strategic plan. The plan provides the details of how to obtain these goals.

This three-step strategy formation process is sometimes referred to as determining where you are now, determining where you want to go, and then determining how to get there.

The next phase, according to this linear model, is the implementation of the strategy. This involves:

Allocation of sufficient resources (financial, personnel, time, computer system support)

Establishing a chain of command or some alternative structure (such as cross-functional teams)

Assigning responsibility for specific tasks or processes to specific individuals or groups

Managing the process, including monitoring results, comparing to benchmarks and best practices, evaluating the efficacy and efficiency of the process, controlling for variances, and making adjustments to the process as necessary

When implementing specific programs, acquiring the requisite resources, developing the process, training, process testing, documentation, and integration with (and/or conversion from) legacy processes

The Dynamic Model of the Strategy Process


Several theorists have recognized a problem with this static model of the strategy process: it is not how strategy is developed in real life. Strategy is actually a dynamic and interactive process. Some of the earliest challenges to the planned strategy approach came from Lindblom in the 1960s and Quinn in the 1980s.

Charles Lindblom (1959) claimed that strategy is a fragmented process of serial and incremental decisions. He viewed strategy as an informal process of mutual adjustment with little apparent coordination.

James Brian Quinn (1978) developed an approach that he called "logical incrementalism". He claimed that strategic management involves guiding actions and events towards a conscious strategy in a step-by-step process. Managers nurture and promote strategies that are themselves changing. In regard to the nature of strategic management he says: "Constantly integrating the simultaneous incremental process of strategy formulation and implementation is the central art of effective strategic management." (?page 145). Whereas Lindblom saw strategy as a disjointed process without conscious direction, Quinn saw the process as fluid but controllable.

Joseph Bower (1970) and Robert Burgelman (1980) took this one step further. Not only are strategic decisions made incrementally rather than as part of a grand unified vision but, according to them, this multitude of small decisions is made by numerous people in all sections and levels of the organization.

Henry Mintzberg (1987) made a distinction between deliberate strategy and emergent strategy. Emergent strategy originates not in the mind of the strategist, but in the interaction of the organization with its environment. He claims that emergent strategies tend to exhibit a type of convergence in which ideas and actions from multiple sources integrate into a pattern. This is a form of organizational learning; in fact, on this view, organizational learning is one of the core functions of any business enterprise (see Peter Senge's The Fifth Discipline (1990)).

Constantinos Markides (1999) describes strategy formation and implementation as an on-going, never-ending, integrated process requiring continuous reassessment and reformation.


A particularly insightful model of strategy process dynamics comes from J. Moncrieff (1999). He recognized that strategy is partially deliberate and partially unplanned, though whether the resulting performance is better for being planned or not is unclear. The unplanned element comes from two sources: "emergent strategies" result from the emergence of opportunities and threats in the environment, and "strategies in action" are ad hoc actions by many people from all parts of the organization. These multitudes of small actions are typically not intentional, not teleological, not formal, and not even recognized as strategic. They are emergent from within the organization, in much the same way as "emergent strategies" are emergent from the environment. However, it is again not clear whether, or under what circumstances, strategies would be better if more planned.

In this model, strategy is both planned and emergent, dynamic, and interactive. Five general processes interact. They are strategic intention, the organization's response to emergent environmental issues, the dynamics of the actions of individuals within the organization, the alignment of action with strategic intent, and strategic learning.

[Figure: Moncrieff model of strategy dynamics]

The alignment of action with strategic intent (the top line in the diagram) is the blending of strategic intent, emergent strategies, and strategies in action to produce strategic outcomes. The continuous monitoring of these strategic outcomes produces strategic learning (the bottom line in the diagram). This learning comprises feedback into internal processes, the environment, and strategic intentions. Thus the complete system amounts to a triad of continuously self-regulating feedback loops. Strictly, quasi-self-regulating is a more appropriate term, since the feedback loops can be ignored by the organization. The system is self-adjusting only to the extent that the organization is prepared to learn from the strategic outcomes it creates. This requires effective leadership and an agile, questioning corporate culture. In this model, the distinction between strategy formation and strategy implementation disappears.

Criticisms of Dynamic Strategy Process Models


Some detractors claim that these models are too complex to teach; no one will understand the model until they see it in action. Accordingly, the two-part linear categorization scheme is probably more valuable in textbooks and lectures.

Also, there are some implementation decisions that do not fit a dynamic model, such as specific project implementations. In these cases implementation is exclusively tactical and often routinized. Strategic intent and dynamic interactions influence the decision only indirectly.

Complexity theory and strategy

Complexity theory has been used extensively in the field of strategic management and organizational studies, sometimes called 'complexity strategy' or 'complex adaptive organization' on the internet or in the popular press. Broadly speaking, complexity theory is used in these domains to understand how organizations or firms adapt to their environments. The theory treats organizations and firms as collections of strategies and structures. When the organization or firm shares the properties of other complex adaptive systems - often defined as consisting of a small number of relatively simple and partially connected structures - it is more likely to adapt to its environment and, thus, survive. Complexity-theoretic thinking has been present in strategy and organizational studies since their inception as academic disciplines.

Contents

1 Early research

2 Later research

3 External links

4 References