The Six Pillars of a Successful Testing Program

This year, Cyber Monday became the largest shopping day of the year in the United States. According to Internet Retailer, total revenues closed above $3.45 billion, an increase of over 12 percent from 2015, and beat out the 2016 single-day revenue on Black Friday by 3.3 percent. For years, digital sales have been considered as much a brand play as an actual driver of revenue. As the total share of sales has shifted from traditional channels, however, digital has transformed from an opportunity to assist consumer decision-making to a critical point of conversion.

Online retailers are not the only businesses feeling the impact of this transition. In a 2013 industry survey conducted by Adobe, 76 percent of the digital marketers questioned responded that they felt “marketing had changed more in the past two years than in the previous 50.” Since then, the dramatic pace of change has only become more acute. Beyond the sizable and still-growing share of business that digital marketing leaders are now responsible for, they must also manage an increasingly complex portfolio of tools and technologies, understand emerging research explaining the ever-evolving behavior of consumers in an omnichannel world, and drive toward lofty goals that can be measured, modeled, and projected quarters into the future. At the same time, the very nature of many organizations is changing. Reorgs are regular occurrences and the underlying culture of business has shifted.

Where once experience and assigned authority served as evidence in favor of opinions, data is now the coin of the realm. What that data means, not to mention how we acquire and compile it, remains a persistent challenge. In the 2016 Econsultancy Adobe Digital Trends Report, 77 percent of respondents identified data-driven marketing as a first or second priority for their organizations. This same survey, however, found that most businesses still struggle to make sense of the data available to them: 46 percent reported that just getting access to relevant data was “difficult” or “very difficult,” 42 percent said the same about their process for ensuring a data-driven strategy was carried out effectively, and 41 percent felt technology related to data-driven marketing presented the same challenge. In 2011, McKinsey declared big data “the next frontier for innovation, competition, and productivity.” Today, many businesses are still wondering when this big data will yield big profits.

Applying data to business strategy and decisions is surprisingly difficult. Sometimes, it seems, people, process, and technology all conspire to preserve a status quo based on intuition, opinion, and gut instinct. At Brooks Bell, we have found that implementing a performance marketing program based on A/B or multivariate testing is the most practical, effective means of building a culture that embraces data as a tool for decision making. At its core, testing is about asking challenging business questions and seeking answers that are based on observed results and validated by actual in-market consumer behavior data. It is an experiment-based method that allows marketing leaders to understand, expand, and activate data from multiple databases and across digital channels. Testing is a powerful tool for actualizing the promise of “Big Data.” But, as useful and practical as testing is, there are still many challenges inherent in the adoption and success of a comprehensive testing program. These challenges combine to create a “utilization gap”: a discrepancy between the opportunity presented by testing at an organization and the realized return of the testing program, if one exists.
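To make the mechanics concrete, here is a minimal sketch of the analysis behind a simple A/B test: traffic is split between a control and a variant, conversions are counted, and a two-proportion z-test indicates whether the observed difference is likely to be real. The visitor and conversion counts below are hypothetical, and the code illustrates the general technique rather than any specific Brooks Bell methodology.

```python
import math

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test comparing conversion rates of control (A) and variant (B)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)                   # pooled rate under the null
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))                 # two-sided p-value
    return p_a, p_b, z, p_value

# Hypothetical counts: 10,000 visitors per experience
p_a, p_b, z, p = two_proportion_z_test(conv_a=410, n_a=10_000,
                                        conv_b=465, n_b=10_000)
print(f"control {p_a:.2%}  variant {p_b:.2%}  z = {z:.2f}  p = {p:.3f}")
```

In this hypothetical run the variant’s lift would not quite clear a conventional 0.05 significance threshold, which is exactly the kind of result a disciplined process keeps from being over-interpreted.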

We have had the opportunity to work directly with dozens of A/B testing and optimization programs and, thanks to events like Click Summit, have built relationships with many more. This means we have gained insight into hundreds of testing programs spanning businesses of all sizes, across all industries. From this data, we have identified six key factors that predict or determine the success of a testing program. These six pillars (culture, team, process, strategy, performance, and technology) combine with one another in different ways in different organizations, but the effective implementation and management of each is critical if a testing program is to be successful.

These six pillars fall into three groups:

People: Forming a high-performing, experienced team is often the first challenge a testing program faces; building a culture that fully embraces experimentation and data-informed decision-making is a perennial hurdle.

Process: To ensure testing is executed in a consistent, reliable way, a formal, standardized process must be designed, implemented, and enforced; generating ideas and refining them into test strategies requires a systematic approach; a plan for measuring, reporting, and improving performance over time is necessary not only for managing a testing program but also for illustrating its contribution to the business.

Technology: The tools and systems used for testing must be implemented and integrated effectively, address the immediate needs of a business, produce valid, reliable results, and provide an opportunity for growth.

People

When it comes to building a successful testing program, one of the immediate challenges is that of resources. Securing enough support to design, develop, QA, and analyze a test can seem impossible, especially when there is little support for or buy-in to the essential idea of testing. Communicating the potential value of the process is critical at this stage and, though it is a bit of a chicken-and-egg paradox, the program must produce tests to support the argument for more testing.

Whether the testing program is completely new or struggling to grow, there are two important factors that must be managed: team and culture.

Team

Most testing programs will rely on resources drawn from across various teams in an organization, including those that may not be direct reports. The complex, matrixed nature of such an organization is one of the challenges of building a team that will be able to successfully develop and execute testing plans. A group of individuals, after all, cannot become a high-performing team while their priorities, goals, and responsibilities compete and conflict with one another.

To address this dilemma, identify people within the organization who show a desire and enthusiasm for testing. These members can help to communicate testing strategies and methodologies across the organization. Help them to share the understanding that excellence in testing leads to an excellent customer experience, and that this requires all teams to participate. These interdepartmental resources will also help you to scale the testing program from a single team to a company-wide priority, improving both the speed of implementation and the velocity of the testing process.

Developing this team within the organization doesn’t have to be a complex challenge. Training that explores the basics of testing can help educate individuals unfamiliar with testing strategies while also helping to identify resources who may be advocates of your testing ecosystem. A basic training program is essential, of course, for building competencies and capabilities across the organization. Training is also critical for building engaged partners and advocates of testing, which directly supports the establishment and growth of a testing culture.

Start training programs by discussing the following questions:

• Why test? Explain how the business is impacted by positive or negative results, as well as the risks of not testing at all.

• What to test? Outline which variables and hypotheses are the foundation of the testing program. Ask the group to guess how past tests performed to spark the conversation.

• How to test? Explain how tests can scale in complexity and duration. Not every test is quick and easy, and it’s not always the most complex tests that yield the most interesting results.

Culture

A successful testing program must include teams and business units throughout the organization. Teams must understand the potential impact and influence testing can have, and how testing can be used to achieve their goals. By identifying areas where the importance of testing may not be known or is undervalued, then developing an ongoing plan to share testing hypotheses and results, it’s possible to increase awareness, interest, and support.

Building such organization-wide support, however, is not always easy. Testing can appear to be more work, a challenge to established authority or processes, and a conflict with existing goals. To avoid these hurdles, it’s important to:

• Define Priorities: Many testing programs fail because strategic priorities were not clearly defined. Team members may not understand the importance of the test to the business or how the results will impact their work.

• Encourage Participation: When a testing program is new, it often seems that keeping it small, quiet, and isolated will help to avoid conflicts and allow for growth. While testing can be scary for some organizations, it’s important to encourage participation early and continuously. Establish a method by which individuals can submit ideas, invite people to training and reporting meetings, and offer opportunities to engage with the growing community of test practitioners.

• Identify Testing Leaders: Engaged individuals capable of contributing to the organization-wide testing program or leading testing within their business units will become critical resources for the growth of testing and advocates for the practice more generally.

• Communicate Results: Analyzing the results of a test is exciting. This momentum can lead to changes in marketing strategy that will impact many teams within the organization. Always communicate results to teams that participated in a test. The results will not only help to improve future testing ideation, but will also help to nurture teams invested in testing efforts.

Process

A successful testing program must be able to answer several important questions. What should we test? How do we run a test? What will success look like? These questions point to a broader necessity: having a standard process that is enforced and adopted widely. A standard approach to testing is essential for ensuring tests are designed and executed consistently, proper safeguards are in place, and results are analyzed and interpreted in a valid, reliable way. Ultimately, this process should drive the program toward a set of clearly defined goals, allowing for a consistent measure of performance over time.

To satisfy these requirements, a testing program must actively manage three things: strategy, process, and performance.

Strategy

If people know anything about testing, it’s a case study about changing button colors to produce a huge lift in engagement. It’s great if such stories can generate interest and excitement around testing, but a successful program needs more than a few simple, superficial, best-practice-based wins. Indeed, successful testing that grows in prominence and effectiveness over time needs an intentional, standardized approach to developing test strategies.

In general, a successful experiment, one that produces valuable insight regardless of whether it wins or loses, must:

• Utilize data: Valuable test ideas start with an analysis of existing data and are derived from observed patterns of user behavior. This data can come from an analytics platform, focus group results, customer surveys, or any combination of these and other sources.

• Answer a question: If the outcome of an experiment is going to be useful, it must answer, or contribute to the answer of, an overarching business question. Why do customers only purchase one item at a time? What is the most effective positioning for a product? How do adjacent prices influence purchase behavior? These are just a few examples.

• Solve a problem: Good tests identify a user goal and the potential problems limiting a user’s ability to accomplish that goal, then propose potential solutions. The result of the experiment, when constructed in such a way, indicates whether the proposed solution was effective.

• Test a hypothesis: Every experiment is designed to test a hypothesis: an informed assumption associated with a specific outcome. While the hypothesis is not always correct, having one at the outset is critical for generating insight.

• Define success: The measure of success must be determined before the experiment is designed and launched. Moreover, most methods of analysis require a predetermined sample size and duration to achieve statistical significance, as the sketch following this list illustrates.
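As a concrete illustration of defining success up front, the sketch below derives the visitors needed per variant from a baseline conversion rate and a minimum detectable lift, using the standard two-proportion sample size formula at a two-sided significance level of 0.05 and 80 percent power. The baseline rate and lift are hypothetical, and this is one common approach rather than a prescribed calculation.

```python
import math

def required_sample_size(baseline_rate, relative_lift):
    """Visitors needed per variant to detect `relative_lift` over `baseline_rate`
    with a two-sided two-proportion test (alpha = 0.05, power = 0.80)."""
    p1 = baseline_rate
    p2 = baseline_rate * (1 + relative_lift)
    z_alpha = 1.96      # critical value for two-sided alpha = 0.05
    z_beta = 0.8416     # critical value for power = 0.80
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n = ((z_alpha + z_beta) ** 2 * variance) / (p2 - p1) ** 2
    return math.ceil(n)

# Hypothetical: 4% baseline conversion, detect a 10% relative lift
n = required_sample_size(baseline_rate=0.04, relative_lift=0.10)
print(f"{n:,} visitors per variant")   # roughly 40,000 in this scenario
# Duration follows from traffic: (n * number of variants) / eligible daily visitors
```

Committing to the resulting sample size and duration before launch is what keeps the success criterion from drifting once results start to come in.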

Process

Even when testing is executed entirely by a single person, the process can become chaotic without forethought and planning. Developing a standard approach to strategy, we have seen, leads to consistent, quality ideas. But the need for a systematic process doesn’t end there. From development to QA, analysis to communication, a testing program needs a clearly defined outline for test execution. Such a plan ensures consistency over time, reduces risk, and increases the reliability of results.

An effective test process includes:

• A task map: Charting each task involved in the development and launch of a test is the first priority in designing an effective process. Such a process flow diagram helps identify necessary resources and potential bottlenecks, and helps estimate timelines for each test. It also helps to communicate what is required for the launch of a test.

• A RACI: The process must outline the roles and responsibilities for each stage of test execution. Defining exactly who is responsible, accountable, consulted, and informed at each step is important.

• A QA checklist: Testing can be a high-visibility, high-risk proposition. Since an experiment alters customer-facing code in a live environment, there is always the possibility something might break. To minimize this risk, it is critical that a consistent quality assurance process is followed prior to the launch of every test. The checklist outlines the areas of focus during this stage of the process.

• Reporting standards: Whether it’s a presentation deck or an automated dashboard, there must be an established standard for what results reports will include and what form they will take.

• Communication plans: The process must outline what information is shared and with whom. Ideally, test communications will be tailored to three levels: practitioners immediately involved in the test, managers and others interested in the general pace of progress, and business leaders interested in a summary of the results and insights.

• Implementation guidelines: It is important to have a plan of action that can be quickly utilized when a test produces a winner. Moving toward implementation is often the best option, and having a clearly defined process or set of guidelines for this helps with everything from contacting the necessary teams to creating a user story and request ticket.

Performance

A successful testing program doesn’t simply launch experiments; it grows in maturity and contribution to the total business over time. This growth must be monitored, measured, and reported consistently, using a standard framework for the assessment. Without this kind of consistent measurement, it’s impossible to effectively manage a program. To paraphrase Peter Drucker: “What gets measured, gets managed.”

Popular program measures include:

• Volume: A measure of the number of tests launched in a period.

• Velocity: The speed at which a test moves from initial idea to final launch.

• Win rate: The percentage of winning tests or variations out of the total.

• Learn rate: The percentage of tests or variations that produce valuable insights.

• Bust rate: The percentage of tests that break or fail after launch.

Ultimately, the proper measure of program performance must be determined by each organization, based on its priorities and culture. Some combination of measures that capture operational efficiency, test quality, and test volume is typically the most useful.
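A lightweight way to operationalize these measures is to keep a simple log of tests and summarize it each period. The sketch below assumes a hypothetical log format; the field names and sample records are illustrative, not a standard schema.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Test:
    idea_date: str          # ISO date the idea was logged
    launch_date: str        # ISO date the test went live
    won: bool               # a variation beat control at the agreed threshold
    produced_insight: bool  # the test yielded a documented learning
    busted: bool            # the test broke or was invalidated after launch

def program_measures(tests):
    """Summarize volume, velocity, win rate, learn rate, and bust rate."""
    n = len(tests)
    days_to_launch = [
        (date.fromisoformat(t.launch_date) - date.fromisoformat(t.idea_date)).days
        for t in tests
    ]
    return {
        "volume": n,
        "avg_velocity_days": sum(days_to_launch) / n,
        "win_rate": sum(t.won for t in tests) / n,
        "learn_rate": sum(t.produced_insight for t in tests) / n,
        "bust_rate": sum(t.busted for t in tests) / n,
    }

# Hypothetical quarter of tests
log = [
    Test("2016-10-03", "2016-10-20", won=True,  produced_insight=True,  busted=False),
    Test("2016-10-12", "2016-11-04", won=False, produced_insight=True,  busted=False),
    Test("2016-11-01", "2016-11-15", won=False, produced_insight=False, busted=True),
]
print(program_measures(log))
```

Whatever form the log takes, the point is that each measure is computed the same way every period, so trends in program maturity are comparable over time.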

Technology

The tools and systems used for testing form the critical foundation on which a program rests. Without a properly implemented tool, after all, no testing is possible. Beyond the testing tool itself, however, an experimentation program relies on analytics platforms, tag management systems, databases, mouse tracking, survey, and user testing technologies. Often, each of these tools is owned and managed by a different group, and rarely are they all integrated in a way that can be quickly and easily activated for testing.

When it comes to technology, the most important considerations are that the tools are:

• Producing reliable data: Above all else, the data captured by the testing and analytics platforms must be reliable and trusted by the organization.

• Integrated effectively: Certainly, the testing and analytics platforms must be tightly integrated. In addition, adding functionality from a tag management system, additional data from a DMP or other database, or additional insights from a mouse tracking tool can help accelerate the growth in maturity of a testing program.

• Positioned for growth: Utilizing every available capability of a tool stack is great in terms of resource efficiency, but it does not provide room for growth over time. When evaluating the technology ecosystem for testing, it’s important to think about how the platforms will support the program as it grows and matures.

Managing an effective, growing testing program has never been more important. For businesses striving to utilize an increasing pool of consumer and user behavior data to generate more unique, satisfying experiences, there is no tool more powerful than testing and experimentation. That said, achieving operational and strategic excellence remains a challenge as testing teams work against entrenched practices and a quickening stream of demands. Indeed, the goals have never been loftier and the pace of innovation never faster.

In this rapidly changing environment, the organizations that produce the most effective testing programs will focus on six pillars (culture, team, process, strategy, performance, and technology) spread across people, process, and technology. In doing so, they will achieve more than a mature testing program. They will build an innovative culture that will provide a competitive advantage in the marketplace.

Brooks Bell is the leader in scaling world-class A/B testing programs. Learn more! Visit BrooksBell.com