How to Conduct a Survey - eBook


DESCRIPTION

We’ve been conducting research for businesses for over 30 years. From employee engagement surveys to market research, NBRI has covered every inch of the survey data spectrum. As you can imagine, we’ve compiled a lot of benchmarking data and derived a lot of knowledge from it. Our motto is Research for Knowledge, Knowledge for Power, and what better way to empower than to share what we’ve learned? Available to you now is our e-book, How to Conduct a Survey. Inside you’ll find many of the tips and insights researchers use to elicit the most useful responses during surveys.

Sure, you can create a simple survey yourself, but it’s much more difficult to create questions guaranteed to provide clear and actionable responses. At NBRI, we have teams of organizational psychologists and professional research consultants who design targeted questions for psychological research. Don’t let that phrase overwhelm you; it’s still surveying. But our scientific approach to surveying gives you answers that allow measurement of attributes such as “attitude” and “belief.” A few of the other topics covered in the e-book include:

• Confidence Levels
• Mean Scores
• Descriptive Statistics

They say you can’t put a price on knowledge, so we didn’t! To download our free e-book, follow the link below. Crammed inside are quizzes, research, tables, and much more. How to Conduct a Survey is a great primer for any business that is contemplating scientific survey research. Download it today at http://www.nbrii.com/blog/get-our-new-e-book-how-to-conduct-a-survey-free/

TRANSCRIPT


NATIONAL BUSINESS RESEARCH INSTITUTE, INC. · 15305 Dallas Parkway 3rd Floor, Addison, TX 75001 · (800) 756-6168 · www.NBRII.com

A PRIMER ON SURVEY RESEARCH

BY DR. JAN G. WEST, PH.D.

How to Conduct a Survey


You’ve been tasked with conducting a survey of your customers and/or employees, so the first thing you’re doing is searching the internet for ‘how to’ information. Or, like most people, you’ve always thought that building a survey meant pulling questions out of the air, slapping them on a piece of paper, and calling it a ‘survey!’ But now that you’re trying to build one yourself, you’ve run into your first stumbling block: the survey questions themselves. What do I ask? How do I ask them? You’ve come to the right place! Not only will you learn ‘how’ to conduct survey research here, but you will also learn ‘why.’ When the ‘why’ rings true with good, common sense, you will have great confidence in how you run your study, and you will be able to defend your actions and recommendations to others. Most importantly, you will be obtaining fact — and not fiction — from your research, so that stakeholders who use your information can be confident when basing important decisions and actions upon it.

The goal of great research is to drive business outcomes: improve Customer Satisfaction, Customer Intent to Return or Willingness to Recommend, and Employee Engagement in order to improve Financial Performance. Survey Research without Return on Investment may provide interesting information, but is truly a waste of time and money. For over 3 decades, the National Business Research Institute (“NBRI”) has helped business leaders understand the psychological constructs of attitude, opinion, and belief that determine employee and customer behavior, and drive profitability. In this eBook, we share our knowledge of the following aspects of survey research:

1. SURVEY CONTENT — IS YOUR SURVEY VALID?

a. Topics

b. Questions

c. The Scale

2. DEPLOYMENT METHODOLOGIES

a. Paper

b. Online

c. Telephone

d. In Person

3. REPRESENTATIVE SAMPLE — IS YOUR DATA VALID?

a. Confidence Levels and Sampling Error

b. Sample Size Chart

4. REPORTING & THE STATISTICS OF SURVEY RESEARCH

a. Percent Favorable and Top Box Scores

b. Mean Scores

c. Benchmarking Scores

5. TARGETED MARCHING ORDERS

a. Descriptive Statistics

b. Inferential Statistics

6. TURNING DATA INTO ACTION


TEST YOUR KNOWLEDGE OF SURVEY RESEARCH! Thinking about a recent survey project or a survey currently in deployment, answer True or False for each of the following statements. Each statement is worth 5 points.

1. My survey is ‘valid’ — represents the issues facing the target audience. T / F

2. My survey questions have been standardized through testing on millions of individuals. T / F

3. My data is ‘valid’ — represents the thinking of the target population. T / F

4. I have achieved Representative Sample at a minimum of 95% Confidence and 5% Error at the Total Population Level and every segregation thereof. T / F

5. The response scale I use does not bias my data artificially. T / F

6. Percent Favorable Scores are context, and are not used to interpret results or drive decision-making. T / F

7. Mean Scores are context, and are not used to interpret results or drive decision-making. T / F

8. I only base decisions on Benchmarked Scores. T / F

9. Qualitative Data is only used to drive survey content. T / F

10. My survey questions are pure, clean, and actionable. T / F

11. One of the primary root causes of Customer ‘Intent to Return’ is Employee ‘Engagement.’ T / F

12. One of the primary root causes of Financial Performance is Customer ‘Intent to Return.’ T / F

13. My data tells me the root causes of my Customers’ Intent to Return. T / F

14. My data tells me the root causes of my Employees’ Engagement. T / F

15. My Survey Vendor takes me from reporting my survey data to implementing solutions in 3 weeks. T / F

16. My Survey Vendor has the ability to benchmark my data against my industry’s data. T / F

17. I receive targeted action items following the survey. T / F

18. My Survey Vendor assists with communications to my target population. T / F

19. I have a project team to guide me through the survey process. T / F

20. My Survey Vendor provides Synergistic Research for my organization. T / F

BONUS: The integrity of my data is absolute, so I have confidence tying improvement goals to management bonus plans. T / F

In Best in Class Research Studies, such as those provided by NBRI, the answer to every question above is ‘True!’

A = 90-100 points
B = 80-85 points
C = 70-75 points
D = 60-65 points
F = 55 or fewer points


1 • SURVEY CONTENT The first step, of course, is deciding on survey content. Do you decide this alone? Do you ask co-workers or other management personnel for their input? Do you ask the audience what they think you should be asking them? Never has it been more true than in survey research that ‘garbage in = garbage out.’ This can be a frightening and daunting task, as you realize the critical nature of customer and employee opinion as driving forces on your organization’s financial performance. What if customers do not intend to return to your business? What if employees are apathetic, and are actually running off customers? This makes it imperative that each study provide management with pure, valid data — fact — and not fiction. While it is easy to conduct survey research improperly, it is just as easy to conduct it properly by adhering to a few, simple principles.

There are really only three sources of survey content: you, us, and them. You, most likely an Executive of a Major Corporation; Us, Survey Research Consultants; and, Them, Your Target Audience. “From the horse’s mouth …” comes to mind about now because who better to tell you what you should ask on your survey than those who will be responding to it? Your employees and customers know very well what you should include in your survey, as they live the issues the survey will address every day. Clearly, your employees and customers are the best source of input on what the survey should cover.

‘Great,’ you think. Now I have to do research before I can even begin the survey. And how does one go about conducting Preliminary Research?

TOPICS

Beginning in 1982 and for over 3 decades now, NBRI has been conducting one-on-one telephone interviews of small, stratified, random samples of individuals in employee and customer populations within virtually every industry sector. As we have done so, we have discovered the issues that are important to all such populations. Thirty years ago, we called these ‘organizational dynamics’ because that’s exactly what they were, and there were no other names for them. Today, we call them ‘topics.’ Topics are simply broad categories of survey question types. Some of the most common Customer and Employee Survey Topics are shown below.


CUSTOMER SURVEY TOPICS

Brand Engagement, Communications, Company Behavior, Company Image, Competitive Position, Control Systems, Cost & Value, Creativity, Customer Loyalty, Customer Service, Decision Making, Employee Behavior, Ethics, Financing, Friendliness & Helpfulness, Invoicing & Statements, Job Knowledge, Pricing, Product Delivery, Products, Professional Conduct, Project Management, Quality, Safety, Sales Assistance, Sales Process, Service & Support, Services, Teamwork, Technical Expertise, Technical Support, Values, Wait Time, Warranties, Website

EMPLOYEE SURVEY TOPICS

Autonomy, Benefits, Career Development, Change Management, Climate, Communications, Company Image, Compensation, Competitive Position, Control Systems, Corporate Performance, Creativity, Culture, Customer Service, Developing People, Diversity, Employee Commitment, Employee Values, Engagement, Ethics, Flexibility, Fostering Relationships, Handling Ambiguity, Health, Human Resources, Integrity, Interpersonal Skills, Job Satisfaction, Job Training, Leadership, Leading Change, Learning Agility, Life Balance, Management Style, Managing Performance, Mergers & Acquisitions, Morale, Organization, Organizational Change, Organizational Structure, Performance Evaluations, Planning, Policies, Productivity, Professional Conduct, Profit Improvement, Project Management, Quality, Recognition, Respect, Results Driven, Safety, Sexual Harassment, Short & Long Term Goals, Social Activities, Social Responsibility, Strategic Agility, Supervision, Teamwork, Technology, Values, Vision, Work Life, Working Relationships


QUESTIONS

Also beginning in 1982 and for over 3 decades now, NBRI has been writing the questions to assess those topics; questions that have become the very foundation of the Survey Research Industry. So, as you see, you do not need to interview your customers and employees first! You do not need to discover the issues of importance to them. And you do not need to write the survey questions. That has all been done for you — by NBRI.

It is important to reiterate that you are conducting psychological research on your customers and employees. “What? I just wanted to survey them!” Right! And assessing the psychological constructs of attitude, opinion, and belief, those human perceptions that drive behavior, is, by definition, psychological research! As such, how each survey question is phrased is very important. The wording of each question has the potential to bias responses artificially, either to the positive or negative.

Consider, for example:

“I can always talk to my supervisor when I need to”

versus

“My supervisor is available when I need him/her”

People will typically zero in on the word ‘always,’ and that’s not really the point of the question, is it? The word ‘always’ causes the respondent to drift toward a negative rating because no one is ‘always’ available. Since the intent is to determine if supervisors are available when needed, the latter question will gather clean, pure data, whereas the former question will generate data that is artificially skewed to the negative. The result: management may believe there is a problem when there isn’t one. There will be no ‘red flags’ warning about the corrupting influence of the wording on the data.

Consider another example:

“My Account Executive is friendly and knowledgeable”

This is what is commonly called a ‘double-barrel.’ It is seeking to assess two issues in one question. When — not if — you receive less than 100% positive feedback, the next question Management will ask you is: ‘Which issue were they rating — friendliness or knowledge?’ This is best broken out into two separate questions.

And there are many more examples of poor survey question wording, such as vague wording that provides no meaningful, actionable data, and even harmful wording that disseminates a negative connotation about the Company to the audience.

Today, it is NBRI questions that are used by thousands of small and mid-sized organizations, as well as one-fourth of the largest corporations in the United States, and numerous organizations around the world. Each question you may select for inclusion in your study is a standardized, scientific, psychological research instrument that has been tested on millions of individuals. Or would you rather ‘reinvent the wheel’ and risk conducting a survey that delivers misinformation?

The order in which questions are presented may also bias responses. It is important to randomize the order of questions on employee surveys, but not customer surveys. Customers should be taken through the survey in a thoughtful, logical order, with questions grouped by topic. But employees may have biases against certain topics, so the order of survey questions to employees should be randomized to eliminate biasing the data.
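To make the randomization concrete, here is a minimal Python sketch of one way per-respondent randomization might be implemented. The question bank, topic names, and function names are illustrative assumptions, not NBRI's actual instrument.

    import random

    # Illustrative question bank (topic, question text); not NBRI's actual instrument.
    QUESTIONS = [
        ("Supervision", "My supervisor is available when I need him/her."),
        ("Morale", "My morale at work is good."),
        ("Recognition", "I receive recognition for a job well done."),
        ("Teamwork", "My coworkers cooperate to get the job done."),
    ]

    def employee_order(questions, respondent_id):
        """Employee surveys: give each respondent their own random question order."""
        rng = random.Random(respondent_id)   # seeding per respondent keeps orders reproducible
        shuffled = list(questions)
        rng.shuffle(shuffled)
        return shuffled

    def customer_order(questions):
        """Customer surveys: keep a fixed, logical order, grouped by topic."""
        return sorted(questions, key=lambda item: item[0])

    print(employee_order(QUESTIONS, respondent_id=101))   # order differs per employee
    print(customer_order(QUESTIONS))                      # same thoughtful order for every customer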

THE SCALE

Even the scale may cause your data to be more positive or negative than data collected with an unbiased scale. Consider, for example, a scale of ‘Excellent, Extremely Good, Very Good, Good, Fair.’ This type of scale will produce data that is artificially biased to the positive, rendering the results worse than meaningless — the data will actually be misleading.

The scale typically used in psychological research is a 6-point, balanced scale with 3 points of positive and 3 points of negative. This way, the respondent has high, medium, and low options of both the positive and negative. And, there is no neutral point. After all, the intent of the survey is to gather opinions. If your survey is valid, defined as representative of the issues facing your audience, then as psychologists, we know that an opinion exists for each and every survey question. Yes, it is a psychological fact that if you have knowledge of an issue, you will necessarily have an opinion about it. Even if it is as simple as ‘vanilla ice cream.’ If you have experienced it, you have an opinion. The goal is to gather opinions.

THE NORMATIVE SCALE

Strongly Agree

Moderately Agree

Slightly Agree

Slightly Disagree

Moderately Disagree

Strongly Disagree

• 6-point scale

• Balanced

• Unbiased to the positive or negative

• 100 years of psychological research

• No neutral point: force the opinion

When NBRI has imported data for analysis from organizations that previously conducted their own research and used a neutral point, we have found that respondents mark the neutral point about 20% of the time. This data must be discarded. It is meaningless. This means that 20% of the survey dollars were just spent on no data at all! When a respondent wants to skip a question, they can and they will. Why would anyone want to offer a ‘no opinion’ option on every single survey question?


2 • DEPLOYMENT METHODOLOGIES

PAPER

Paper surveys, while far less popular since the inception of the internet, remain a valuable methodology for reaching employees and customers without computer access, email addresses, or telephone numbers. When deploying a paper survey, outbound and inbound postage must be considered, as well as ink, paper, printing, folding, stuffing, sealing, etc., and the labor cost to do so. Furthermore, one can expect only about a 10% response rate from customers asked to respond to a paper survey. If this is your only option, be sure to check out Section 3 of this eBook to learn about the validity of data and determine whether or not you will be able to have confidence in your results.

ONLINE

Online surveys have become the most popular means of deploying a survey because of their time- and cost-effectiveness. In addition, surveyors can quickly respond to poor scores and negative feedback from the respondent submitting the survey. This is critical, indeed, as customers who experience issues are much more likely to patronize an organization again if issues are dealt with promptly and satisfactorily, while those whose issues are ignored or dealt with poorly tend to spread the word and cost the organization not only their own business but that of many others, as well.

Because of the relatively low cost of online surveys, and the higher customer response rate of 20% as compared to paper surveys, many ‘Do-It-Yourself’ options have sprung up on the internet in the last decade. Companies like Zoomerang and Survey Monkey make millions of dollars per year by offering basic survey programming for the uneducated who would create their own surveys. Completely ignored by these tools are considerations of biases in the wording of the questions, the validity of the survey and the validity of the data gathered, reaching representative sample, employing appropriate statistical analyses and data interpretation, benchmarking, and assistance with responding to survey results. This places every user in the precarious situation of basing important business decisions on misinformation. Users of these ‘tools’ generally have no idea that their data is worse than worthless — it is actually misleading.

There are no warning signs in research. You either know how to conduct research properly, or you don’t. Most college-educated business people do not learn how to conduct psychological research unless they have participated in doctoral-level courses that emphasize research methods. Companies like Zoomerang and Survey Monkey perpetuate the myth that a survey can be created by pulling questions out of the air and sending it to a large number of people. Research that drives your business — when done properly — will provide a return on investment that is huge compared to the cost of a study done well. While cost is an important consideration, the choice between spending a few hundred dollars for junk data versus a few thousand dollars for hard, objective data should clearly be no choice at all.


TELEPHONE

Telephone deployment carries the highest labor cost, but it also achieves the highest response rate of any customer survey methodology. An online survey is still programmed, but instead of deploying it online, telephone interviewers read the survey to participants over the phone and enter the data online for them.

Some survey research organizations offer IVR (Interactive Voice Response) as an inexpensive alternative to live telephone interviewers, but beware of the insult this may cause to your valued customers! NBRI’s research shows that recorded telephone interviewers often cause ill will, and NBRI highly recommends against using them. On the other hand, customer loyalty actually increases when you put your customer feedback in the hands of a highly skilled researcher. Deeper exploration of opinions and beliefs is only possible between two humans, not between a human and a machine, and customers are keenly aware of whether or not you feel they are ‘worth it.’

Because interviewers can call a phone number numerous times if no one is reached at first, and can do so without upsetting anyone, response rates of customer surveys deployed via telephone are typically around 30%! Caller ID and leaving a call-back number also increase participation.

IN PERSON

The rarest of all survey deployments is ‘in person.’ Due to the high labor cost (interviewing is usually done out of the office and often out of town, requiring transportation and accommodations), it is unusual to find a situation that is not better handled via paper, online, or telephone. Indeed, In Person Deployment should only be considered when extremely high response rates are required due to the small size of the population. With proper planning and scheduling, however, customer response rates of 100% are possible through this methodology!

The response rates for employee feedback do not differ by deployment methodology: one can expect an 85% to 95% response rate when Best Practices are followed.

The response rates for customer surveys differ greatly by deployment methodology as discussed above.


3 • REPRESENTATIVE SAMPLE Just as a ‘valid’ survey truly represents the issues facing the target population, ‘valid’ data truly represents the thinking of the target population. The validity of psychological data is easily determined by measuring the Confidence Level and Sampling Error.

Important! All surveys have Confidence Levels and Sampling Error, including one in which 100% of the population participates. It is critical to measure and report these statistics in any form of survey research in order to prove the validity of the data.

CONFIDENCE LEVELS AND SAMPLING ERROR

Let’s define Confidence and Error. They are both probabilities. Confidence is the probability that the data does, in fact, reflect the thinking of the target population, as if every person had participated. This is an important consideration since gathering data from 100% of any population, even an employee population, is typically impossible. Someone is on vacation; someone refuses to participate; someone is out sick; etc. The issue is even more complex with customers, particularly if you have hundreds of thousands of them and don’t want to spend the money surveying each and every one.

CONFIDENCE LEVEL

The probability that the sample represents the whole.

SAMPLING ERROR

The probability that there are differences in the sample from the whole.

So, if you have a population of 100 employees or customers, and only 10 complete the survey, does your data represent how all 100 people think? What if you have 10,000 employees or customers, and 1,000 complete the survey? Will a 10% response rate from any size population provide us with the confidence to claim that our data is valid and truly represents the thinking of the entire population?

Sampling Error is basically the opposite of Confidence Level. Sampling Error is the probability that we have a preponderance of people in our data that feel differently than the majority, such as extremely pleased or displeased people. Data with large amounts of Error in it misrepresents the thinking of the population as a whole, and is not valid. Error, then, must be kept to a minimum.
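As a rough illustration of how Sampling Error might be measured for an achieved sample, the sketch below applies the standard finite-population margin-of-error formula at a chosen Confidence Level. It is a textbook approximation for illustration only, with assumed z-values and the worst-case p = 0.5, not necessarily the exact method NBRI uses.

    import math

    # Two-sided z-values for common Confidence Levels (an assumption of this sketch).
    Z = {0.90: 1.645, 0.95: 1.960, 0.99: 2.576}

    def sampling_error(population_size, completed_surveys, confidence=0.95):
        """Margin of error (as a proportion) for a sample from a finite population,
        using the worst case p = 0.5 and a finite population correction."""
        z = Z[confidence]
        n, big_n = completed_surveys, population_size
        fpc = math.sqrt((big_n - n) / (big_n - 1)) if big_n > 1 else 0.0
        return z * math.sqrt(0.25 / n) * fpc

    # Example: 79 completed surveys from a population of 100 at 95% Confidence.
    print(f"{sampling_error(100, 79):.1%}")   # roughly 5% Sampling Error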

Reminder! All surveys have Sampling Error, including one in which 100% of the population participates, and since it is critical to report this statistic in research, you need to know how to measure it. Do we pull numbers out of the air? Do we hope, think, wish our way to 50% being o.k. … 75% being o.k. … 90% being o.k.? Certainly not! This is hard, objective, scientific research upon which our Clients often base multi-million dollar decisions. We have to know, and so do you!

Based on 100 years of psychological research, NBRI depends on a proven Sample Size Chart!

SAMPLE SIZE CHART

Across the top of the chart shown below are Confidence Levels, from 99% to 90%. Under the Confidence Levels are Sampling Error Levels from 5% to 3%. Confidence can be as high as 99.99% or very low. Generally, Confidence below 80% is considered ‘junk’ data. Sampling Errors can be as low as less than 1% or very high. Generally, Sampling Error above 7% is considered ‘junk’ data. The left column shows various Population Sizes.

Population Size | 99% CL, 5% SE | 95% CL, 5% SE | Response Rate (at 95% CL, 5% SE) | 99% CL, 3% SE | 95% CL, 3% SE | 90% CL, 3% SE
50 | 46.5 | 44.2 | 88.4% | 48.6 | 47.7 | 46.8
100 | 86.9 | 79.3 | 79.3% | 94.8 | 91.4 | 88.2
500 | 285.1 | 217.2 | 43.4% | 393.3 | 340.4 | 300.2
1,000 | 381.9 | 269.2 | 26.9% | 604.7 | 488.2 | 409.5
2,000 | 498.2 | 322.2 | 16.1% | 959.2 | 695.8 | 546.3
10,000 | 622.2 | 369.9 | 3.7% | 1556.3 | 964.2 | 699.1
50,000 | 654.8 | 381.2 | 0.7% | 1777.7 | 1044.8 | 740.5
100,000 | 659.2 | 382.6 | 0.4% | 1809.9 | 1055.8 | 746.0

(Table entries are the number of completed surveys required; CL = Confidence Level, SE = Sampling Error. The Response Rate column corresponds to the 95% CL, 5% SE column.)

The 95% Confidence Level and 5% Sampling Error column deserves particular attention because these levels are perfectly acceptable in business and science. Note that with a Population Size of 1,000, 269.2 completed surveys are required to reach a 95% Confidence Level and 5% Sampling Error, but 604.7 completed surveys are required to reach a 99% Confidence Level and 3% Sampling Error. It is not necessary to spend money collecting and analyzing more data than is needed without an imperative to do so. Many of NBRI’s studies are submitted to governmental agencies, such as the Food and Drug Administration (FDA), and a 99.9% CL with less than 1% SE is required. In general, however, NBRI recommends 95%/5% to Clients because of the high validity of the data and the cost savings.

So, does 10% from populations of 100 or 1,000 obtain valid data for us? As the chart shows, a 79.3% Response Rate, or 79.3 completed surveys, is required to obtain valid data at a 95% Confidence Level and 5% Sampling Error from a total population of 100 individuals, so 10% would be ‘junk’ data. For a total population of 1,000 individuals, a 26.9% Response Rate would be required for a 95% & 5% result, so 10% would be insufficient in this case, as well. However, 10% would be more than enough to reach 95% & 5% for 10,000 people and 100,000 people, as only 3.7% and 0.4% Response Rates are needed for these populations, respectively.

It is important to note that valid data must be obtained at the total population level and at each level of segregation thereof. For example, if your total population in your Market Research Study is 1,000 people, and you want to learn how the thinking of 500 males and 500 females differs, you will need a 43.4% Response Rate, or 217.2 completed surveys from each subgroup. Likewise, valid data based on the Confidence Level and Sampling Error you select must be obtained for each division, business unit, or department of a Company in an Employee Survey if you want to compare the thinking of such groups.
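For readers who prefer a formula to a chart, the values above closely track a standard textbook calculation: Cochran's sample-size formula with a finite population correction. The sketch below is an approximation for illustration (the z-values and the worst-case p = 0.5 are assumptions), not NBRI's exact method.

    Z = {0.90: 1.645, 0.95: 1.960, 0.99: 2.576}   # two-sided z-values (assumed)

    def required_sample(population_size, confidence=0.95, error=0.05):
        """Completed surveys needed for a given Confidence Level and Sampling Error,
        via Cochran's formula (worst case p = 0.5) with a finite population correction."""
        z = Z[confidence]
        n0 = (z ** 2) * 0.25 / (error ** 2)        # infinite-population sample size
        return n0 / (1 + n0 / population_size)     # finite population correction

    for population in (50, 100, 500, 10_000, 100_000):
        print(population, round(required_sample(population), 1))
    # e.g. 100 -> 79.3 completed surveys (a 79.3% response rate); 10,000 -> 369.9 (3.7%)

    # Each segregation needs its own representative sample, e.g. 500 males and 500 females:
    print(round(required_sample(500), 1))          # about 217 completed surveys per subgroup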

Clearly, conducting Scientific Psychological Research that results in pure, clean, hard data that truly represents the thinking of the population and its subgroups is best done by those with extensive education, training, and experience in the field. Sources of error abound and are typically overlooked by the untrained eye. Defending the quality and purity of one’s research is a basic requirement for Doctoral Degrees, and represents a key value that Ph.D.s at NBRI bring to every Client’s Survey Research Study.


4 • REPORTING & THE STATISTICS OF SURVEY RESEARCH There are basically only 3 statistics used in Survey Research: percent favorable (or ‘top box’), mean scores, and benchmarking scores. These are known as ‘Descriptive’ Statistics because they describe the body of data in terms of its strengths and weaknesses. But these statistics cannot tell you the most important issues to work on. This will be addressed later employing ‘Inferential’ Statistics.

When reporting results, it is best to report all 3 types of descriptive statistics so that the report user has the full context of the information from which all scores are derived. Consider the report format shown below.

In the upper left corner is the Topic: “Employee Behavior.” Then, there are 3 Survey Questions assessing this Topic:

• “Company personnel deliver what is promised.”

• “Company personnel exhibit professionalism.”

• “Company personnel are well trained.”

Beneath each Survey Question are demographic segregations of the data:

• “Total Customer Base — 2013.”

• “Total Customer Base — 2012” for historical comparison, also known as internal benchmarking.

• “Customer Type 1, 2, 3, and 4.”


The 2nd column shows the N, or Number Responding. The 3rd column shows the Mean Scores. The 4th column shows the Benchmark Scores. Benchmark Scores are graphed and color-coded in the 5th column. Finally, the Scale and entire Distribution of Responses (DOR) are shown in the last 6 columns.

In this way, the report user has all of the information in one place. This shows exactly how many individuals answer in exactly which way, and rolls up all of the information to Total Customer Base benchmarked against the Industry level via the Benchmark Score.

Now let’s look at the 3 demographics outlined in red: Customer Types 2, 3, and 4, and let’s consider each statistic individually.

PERCENT FAVORABLE AND TOP BOX SCORES

As scientific researchers who always choose the objective and absolute over the subjective and debatable, NBRI warns Clients about Percent Favorable and Top Box Scores, as we believe them to be misleading.

First, one must define what constitutes Percent Favorable and Top Box Scores. Bob may feel that it should only be those who Strongly Agree. Sally may want to define it as those who Strongly and Moderately Agree. And Ted may feel it should include all respondents who agree whether Strongly, Moderately, or Slightly. This has allowed subjectivity to enter into what was otherwise an objective research study, which greatly weakens it. How one defines Percent Favorable or Top Box is clearly open to debate.

Next, let’s say they decide that Percent Favorable should include all respondents who marked ‘Agree,’ including Strongly, Moderately, and Slightly. In the example above, Customer Types 2, 3, and 4 outlined in red all equal 100% Percent Favorable! If they report the single statistic of 100% for each group, their readers will believe there were no differences between these groups in this instance. Clearly, this would be false information.
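The sketch below illustrates this with hypothetical response distributions on the 6-point scale: under the 'any Agree' definition all three groups report 100% favorable, yet the stricter definitions (and the underlying distributions) differ considerably. The counts are invented for illustration, not the report's actual data.

    # Hypothetical response counts on the 6-point scale (6 = Strongly Agree ... 1 = Strongly Disagree).
    DISTRIBUTIONS = {
        "Customer Type 2": {6: 10, 5: 25, 4: 65, 3: 0, 2: 0, 1: 0},
        "Customer Type 3": {6: 55, 5: 35, 4: 10, 3: 0, 2: 0, 1: 0},
        "Customer Type 4": {6: 70, 5: 28, 4: 2, 3: 0, 2: 0, 1: 0},
    }

    def percent_favorable(counts, favorable_points):
        """Share of respondents whose answer falls in the chosen 'favorable' scale points."""
        return 100.0 * sum(counts[p] for p in favorable_points) / sum(counts.values())

    for group, counts in DISTRIBUTIONS.items():
        print(group,
              percent_favorable(counts, {6}),         # Bob: Strongly Agree only
              percent_favorable(counts, {6, 5}),      # Sally: Strongly + Moderately Agree
              percent_favorable(counts, {6, 5, 4}))   # Ted: any Agree -> 100% for every group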

Finally, how does one decide if a Percent Favorable or Top Box Score is high or low, good or bad? The only option is to compare the score to a scale of 1 to 100. Unfortunately, the scale does not represent reality, and results in false information! The same is true with Mean Scores, so we discuss this in more detail below.

MEAN SCORES

In our example above, it is apparent that there are differences between the Customer Types 2, 3, and 4, with Mean Scores of 4.77, 5.45, and 5.67, respectively. The Mean Score is the average of all scores, and is calculated by weighting the six response choices. It is clear from the Mean Scores that true differences exist between these groups, but what do the Mean Scores mean? Is Customer Type 4 significantly higher than Customer Type 3 and 2? And are these ‘good,’ ‘average,’ or ‘poor’ scores? Does the Vice President of Customer Type 4 qualify for a bigger bonus than the Vice President of Customer Type 2?
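A minimal sketch of the Mean Score calculation, weighting each response by its point value on the 6-point scale; the response counts here are hypothetical.

    # The 6-point Normative Scale, weighted 6 (Strongly Agree) down to 1 (Strongly Disagree).
    SCALE_POINTS = {"Strongly Agree": 6, "Moderately Agree": 5, "Slightly Agree": 4,
                    "Slightly Disagree": 3, "Moderately Disagree": 2, "Strongly Disagree": 1}

    def mean_score(response_counts):
        """Average of all responses, weighting each scale label by its point value."""
        total = sum(response_counts.values())
        weighted = sum(SCALE_POINTS[label] * n for label, n in response_counts.items())
        return weighted / total

    # Hypothetical distribution of responses to one question:
    print(round(mean_score({"Strongly Agree": 70, "Moderately Agree": 28, "Slightly Agree": 2}), 2))  # 5.68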

Many people compare the Mean Score to the Scale itself to determine whether it is high or low, good or bad. Hint: the mean of a 6-point scale is 3.5 (not 3, see below). Comparing the Mean Score to the scale (whether it is above or below the mean of the scale, for instance) results in false information because the scale does not represent reality! That is, the average of all responses to all questions from all groups of all people is not 3.5! Consider the two Customer Survey questions in the picture below, both of which receive a Mean Score of 3.99, for example.

BENCHMARKING SCORES

What if you knew how people normally answer any given question? NBRI does! This is called ‘normative’ or ‘benchmarking’ data.

[Figure: On the 6-point scale (Strongly Agree = 6 through Strongly Disagree = 1), the questions “I would recommend doing business with the Company to others” and “Sales Representatives are responsive to my needs” both receive a Mean Score of 3.99. Is this a high score or a low score?]

NBRI compares the data from your respondents with data from people in your industry who have answered the same survey questions about their places of work (employees) or about your competitors (customers)! Benchmarking Data is a mapping of mean scores from thousands of individuals to a Scale of the 1st to 100th percentiles. The Average of all mean scores is at the 50th percentile; the 75th percentile represents Stretch Performance; and the 90th to 100th percentiles represent Best in Class Performance. These are the Companies with the highest Employee Engagement, Customer Loyalty, and Financial Performance!

When we look at Benchmarking Data, we may find that a Mean Score of 3.99 is a poor score for one survey question, but a great score for a different survey question! The charts below demonstrate this principle.

For the survey question, “I would recommend doing business with the Company to others,” a mean score of 3.99 is a poor score at the 38th percentile of this particular benchmarking database, with the Industry Average at 4.56 (50th), Stretch Performance at 5.18 (75th), and Best in Class Performance at 5.64 (90th).


[Chart: Benchmarking distribution for “I would recommend doing business with the Company to others” on the 6-point scale, with the mean score of 3.99 at the 38th percentile.]

For another survey question, “Sales representatives are responsive to my needs,” a mean score of 3.99 is a Best in Class score at the 96th percentile of the benchmarking database!

[Chart: Benchmarking distribution for “Sales representatives are responsive to my needs” on the 6-point scale: the mean score of 3.99 falls at the 96th percentile, with the Industry Average at 2.26 (50th percentile), Stretch Performance at 2.78 (75th), and Best in Class at 3.64 (90th).]

In short, it is impossible to understand the meaning of any score — percent favorable, top box, mean score, or any other — without benchmarking data.
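Conceptually, benchmarking looks up where a mean score falls in the distribution of mean scores that other organizations achieve on the same question. The sketch below interpolates a percentile from a few anchor points taken from the two example questions above; the interpolation approach and helper names are illustrative assumptions, and real normative databases are far richer.

    import bisect

    def benchmark_percentile(mean, anchors):
        """Interpolate a benchmarking percentile from (mean score, percentile) anchor points,
        sorted by mean score; clamps outside the anchor range."""
        scores = [s for s, _ in anchors]
        percentiles = [p for _, p in anchors]
        if mean <= scores[0]:
            return percentiles[0]
        if mean >= scores[-1]:
            return percentiles[-1]
        i = bisect.bisect_left(scores, mean)
        lo_s, hi_s, lo_p, hi_p = scores[i - 1], scores[i], percentiles[i - 1], percentiles[i]
        return lo_p + (hi_p - lo_p) * (mean - lo_s) / (hi_s - lo_s)

    # Anchor points taken from the two example questions (mean score, percentile).
    RECOMMEND = [(3.99, 38), (4.56, 50), (5.18, 75), (5.64, 90)]    # "I would recommend ..."
    RESPONSIVE = [(2.26, 50), (2.78, 75), (3.64, 90), (3.99, 96)]   # "... responsive to my needs"

    print(round(benchmark_percentile(4.20, RECOMMEND)))    # about the 42nd percentile: a Weakness
    print(round(benchmark_percentile(4.20, RESPONSIVE)))   # 96th (clamped at the top anchor): Best in Class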

Another powerful benefit of benchmarking data is the ability to group results by quartile into a ‘SWOT Analysis’: Strengths, Opportunities, Weaknesses, and Threats. When the Total Company receives a benchmark score, each Business Unit, Division, and Department receives benchmark scores, each Customer Type or other Customer Demographic Group receives a benchmark score, and every Topic and Question on the survey receives a benchmark score, it is immediately evident how each score ranks against the Industry (external benchmarking) and against the other scores (internal benchmarking). In addition, because a difference of 5 or more benchmarking percentiles is statistically significant (defined as ‘true, factual, not due to chance’), you know immediately whether differences between scores in the present study are significant, and whether differences between the present and past studies are significant.

The goal of great research is to drive business outcomes: improve Customer Satisfaction, Customer Intent to Return or Willingness to Recommend, and Employee Engagement in order to improve Financial Performance. Survey Research consistently provides interesting information about an organization, but it wastes time and money if no action is taken on the results to effect a return on investment. When actions are taken, it is important to measure their effects. Benchmarking data provides an immediate understanding of the differences between scores.

SWOT ANALYSIS

Category | Benchmarking Range
Strength | 75th to 100th Percentile
Opportunity | 50th to 74th Percentile
Weakness | 25th to 49th Percentile
Threat | 1st to 24th Percentile

• Immediate understanding of results
• 5 or more percentiles = statistical significance
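A minimal sketch of the quartile grouping and the significance rule from the table above; the item benchmark scores are taken from examples elsewhere in this eBook and are illustrative.

    def swot_category(percentile):
        """Map a benchmarking percentile to its SWOT category (ranges from the table above)."""
        if percentile >= 75:
            return "Strength"
        if percentile >= 50:
            return "Opportunity"
        if percentile >= 25:
            return "Weakness"
        return "Threat"

    def significantly_different(pct_a, pct_b):
        """5 or more benchmarking percentiles apart = a statistically significant difference."""
        return abs(pct_a - pct_b) >= 5

    # Benchmark scores drawn from examples elsewhere in this eBook (illustrative).
    items = {
        "Job promotions are fair.": 36,
        "My morale at work is good.": 53,
        "Sales representatives are responsive to my needs.": 96,
    }

    for text, pct in items.items():
        print(f"{pct:>3}  {swot_category(pct):<11}  {text}")

    print(significantly_different(53, 36))   # True: a significant gap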


5 • TARGETED MARCHING ORDERS

DESCRIPTIVE STATISTICS

In the Chapter above, we discussed the 3 basic statistics of Survey Research: percent favorable (or ‘top box’), mean scores, and benchmarking scores. These are known as ‘Descriptive’ Statistics because they describe the body of data in terms of its strengths and weaknesses. In particular, employing benchmarking scores puts us in the strong position of seeing exactly where the organization is performing well and where it needs intervention by region, or department, or customer type, or any other demographic, as well as by topic and question, and as an organization as a whole.

The chart below shows typical results for improvements in survey question scores from an Employee Survey that has been deployed annually for 3 years.

ITEM SWOT ANALYSIS

In the First Annual Assessment in 2011, there was a preponderance of Threats and Weaknesses, shown in green. In the Second Assessment in 2012, the distribution of Item Scores was more evenly spread from Threats to Strengths, shown in light blue. And in the Third Assessment in 2013, there was a preponderance of Strengths and Opportunities, shown in dark blue. Moving the scores at the item or survey question level ensures Total Company improvement, as in the example below.

[Chart: Number of survey items falling in each SWOT category (Strength: 100th to 75th, Opportunity: 75th to 50th, Weakness: 50th to 25th, Threat: 25th to 1st percentile) for the 2011, 2012, and 2013 assessments.]


TOTAL COMPANY SCORE

[Chart: ABC Company's Total Company benchmarking percentile by year: 63rd in 2011, 72nd in 2012, and 79th in 2013, shown against the Industry Average (50th percentile), Stretch Performance (75th), and Best in Class (90th).]

In the chart above, ‘ABC Company’ moved from the 63rd percentile of their industry in 2011 to the 72nd percentile in 2012, and to the 79th percentile in 2013. This is a good example of the amount of improvement typically seen by NBRI Clients. As important as it is to conduct survey research properly and obtain hard, valid data, it is simply an exercise in futility without targeted action that dramatically improves the organization. You have two choices: work on low-scoring items, which may improve only those items; or work on the root cause perceptions that, when improved, will effect dramatic and immediate widespread improvement across the organization.

INFERENTIAL STATISTICS

Those who work with psychological data on a regular basis, such as College Professors and NBRI Organizational Psychologists, apply advanced analytics to the raw data in order to pull the root causes, or primary perceptions, out of the thinking of the survey respondents. As important as the descriptive statistics described above are, they cannot tell us the key factors driving customer and employee behavior. So, we put those aside for now, and return to the Clients’ raw scores.

Most Survey Research Consultants offer their Clients ‘correlations,’ which they call ‘key factors’ or ‘key drivers.’ This is in large part a misnomer, as shown below. A very famous correlation is that of Ice Cream Sales and Swimming Accidents. The numbers of both increase and decrease together: as one goes up, so does the other, and as one comes down, so does the other. This is the essence of a correlation: items that move together.

CORRELATION

[Figure: Ice Cream Sales and Swimming Accidents move up and down together.]



If your Survey Research Consultant were to tell you that these are your Key Factors or Key Drivers, what would you work on? Unfortunately, the vast majority of Consultants simply lack the in-depth training and knowledge required to properly analyze psychological data.

NBRI applies two additional analyses to Client raw data: regressions and path analyses. Following our example above, when we analyze the data with a Regression Analysis, we find the Key Factor or Key Driver that is, in fact, causing the Correlation is the Weather: Hot Summer Days cause both scores (Ice Cream Sales and Swimming Accidents) to increase, and Cold Winter Days cause both scores to decrease.

REGRESSION

[Figure: Weather is the common driver: Hot Summer Days push both Ice Cream Sales and Swimming Accidents up, and Cold Winter Days push both down.]
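To make the ice-cream example concrete, the sketch below generates synthetic data in which weather drives both variables, then shows that the two outcomes correlate strongly with each other while simple regressions reveal weather as the common driver. It uses plain NumPy and is purely illustrative; it is not NBRI's analysis, which also includes path analyses.

    import numpy as np

    rng = np.random.default_rng(0)
    days = 365
    temperature = rng.uniform(0, 35, days)                            # the hidden driver: weather
    ice_cream = 20 + 3.0 * temperature + rng.normal(0, 10, days)      # daily ice cream sales
    accidents = 1 + 0.2 * temperature + rng.normal(0, 1.0, days)      # daily swimming accidents

    # The two outcomes correlate strongly with each other ...
    print(round(float(np.corrcoef(ice_cream, accidents)[0, 1]), 2))

    # ... but regressing each on temperature shows the weather is the common driver.
    def fit(x, y):
        slope, intercept = np.polyfit(x, y, 1)
        residuals = y - (slope * x + intercept)
        r_squared = 1 - residuals.var() / y.var()
        return round(float(slope), 2), round(float(r_squared), 2)

    print(fit(temperature, ice_cream))    # slope near 3.0, high R squared
    print(fit(temperature, accidents))    # slope near 0.2, high R squared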

Armed with this information, we now have a shortcut to improve a vast number of variables, from ice cream sales to flip flop sales to sunglasses sales, etc. The same principle should be applied to psychological data from customer and employee surveys. An example is shown below.

A major metropolitan city in the northeastern United States was experiencing high turnover, and employees were dissatisfied on several levels, so NBRI was called in to pinpoint the source of the dissatisfaction. We found the following correlation.

CORRELATION

[Figure: ‘I Trust Management’ and ‘My Morale is Good’ rise and fall together.]

Had NBRI stopped here, Management would have been faced with a decision to work on ‘employee trust in management’ and/or ‘employee morale.’ But advanced analytics, a regression analysis and path analyses conducted on the raw data, provided Management with keen insight into the thinking of the employee population.



It was found that the employees’ overriding perception concerned job promotions being unfair. This was their Cold Winter Days, driving down the scores of employee trust and morale.

REGRESSION

[Figure: ‘Job promotions are fair’ is the negative Root Cause driving down both ‘I Trust Management’ and ‘My Morale is Good.’]

Indeed, the perception that job promotions were unfair was driving down the scores of well over 40% of all survey questions, having far-reaching effects into perceptions of Discrimination, Diversity, Policies, Culture, and more.

ROOT CAUSE MODEL

Item # | Topic | Item Text | Norm
24 | Culture | Job promotions are fair. | 36
2 | Morale | My morale at work is good. | 53
7 | Management | I can trust what management says. | 43
14 | Discrimination | The Company is free of discrimination. | 34
16 | Diversity | The Company treats employees equally regardless of gender. | 46
25 | Policies | Policies are carried out in a consistent manner. | 43
43 | Culture | If I do a good job, I have a better chance of getting ahead. | 42
54 | Culture | I feel free to express my opinions without worrying about negative results. | 37

When Management addressed the issue of favoritism that had contaminated the process of job promotions for many years, the score of the ‘Root Cause’ increased dramatically. More importantly, the scores of all items driven by this Root Cause increased dramatically as well, which in turn effected huge gains in Total Organization Scores, Employee Satisfaction, and Employee Engagement, greatly reducing turnover and improving productivity. Clearly, the research had provided huge ROI and paid for itself many times over.

Once you have built a valid survey, gathered valid, representative sample data, described the body of data with descriptive statistics, and ‘cut to the chase’ with inferential statistics to pinpoint the most important issues to address, it is time to turn FACT into ACT, with Action Planning!


6 • TURNING DATA INTO ACTION In ‘the olden days,’ it was Senior Management’s responsibility to digest the survey data, debate which of the low-scoring issues should be addressed, develop their own list of initiatives to address the issues, and then consider the budget, manpower, and timing requirements of each before implementation. This process typically required months, all while the data was aging and benefits from improvements were on hold. More importantly, it has since been proven that simply working on low-scoring items is not the key to effecting change in drivers or root causes of customer loyalty and employee engagement, so minimal improvement typically resulted. While the burden is still on Management to provide final approvals of recommended interventions to root causes, the Action Planning Process described below minimizes the time required of Management, while vastly improving the efficacy of the process, and turning data into action in about 3 weeks!

The basic concept of Action Planning is circular:

1. The research is conducted.
2. The Root Causes are determined.
3. All employee recommendations to improve the Root Causes are submitted to small Action Planning Teams.
4. The teams develop the short list.
5. The short list is submitted to Senior Management for approvals.
6. Implementation occurs.

In this manner, all minds are focused on improving the mission-critical findings of the research study. For Customer Surveys, the solutions are proposed by all employees, including those closest to the customer, who are rarely Senior Management. For Employee Surveys, the solutions are proposed by all employees as well. After all, it is they who experienced and reported the issues in the first place, and who will be responsible for implementing the solutions.

The Action Planning Teams should be composed of middle-management employees. Because there may be up to five (5) Root Causes, there should be up to five (5) Action Planning Teams established. Each team should have an odd number of members, such as 3, 5, or 7 individuals. The sole purpose of each team is to sift through the recommendations and develop a short list of solutions that will go to Senior Management for approval.

Each Action Planning Team selects one member from among themselves to broadcast to all employees that he or she is the single point of contact to receive suggestions to improve [their particular root cause survey question]. Offering prizes and rewards is, of course, optional. The goal of the communication is to create excitement and buzz among the employees to make sound, inventive recommendations for improving the all-important root causes.


The Action Planning Teams should also be creative about ways to build excitement and collect ideas. Sending reminder emails and counting the days to the close, putting up posters in hallways and break rooms, putting balloons on suggestion drop boxes, sponsoring departmental contests with movie or dinner ticket awards, and other fun, celebratory ideas can be highly motivational to involve the workforce.

At the end of two weeks, each Action Planning Team meets individually. These meetings typically require no more than a morning. When the meeting is over, it is typically not rescheduled, and a hard stop should be agreed to in advance. Each Single Point of Contact will bring their ‘long list’ of all recommendations received to improve the Root Cause assigned to them, with a copy for every team member. The Single POC will not remove any suggestions made to him or her without approval of their Action Planning Team.

The team goes through the list one item at a time. As each item is read, the team votes to place it in one of the following categories:

1. unanimously agreed to be discarded, or

2. unanimously agreed to go on the ‘short list’ for Senior Management approval, or

3. requires further discussion.

The team members then debate, defend, and discuss the remaining items ‘requiring discussion’ in an effort to gain consensus regarding their disposition. Depending on the number of items remaining on the list after the initial vote, and the amount of time remaining for the meeting, teams may set a time limit of 5 to 10 minutes per item for discussion. At the end of the discussions, the final vote takes place, and majority rules on any additional items to be placed on the short list. The final short list is then ready to be presented to Senior Management.

Employees will provide innovative and valuable recommendations for Senior Management’s consideration. Recommendations from the Action Planning Teams should be given high priority, and approvals from Senior Management should be provided as swiftly as possible.

Implementation of solutions to root cause issues should occur quickly. How it is done varies by subject matter, organizational structure, culture, manpower availability, and a variety of other factors. Senior Management, working closely with the Action Planning Teams, can lend guidance to the most efficient and effective means of implementing these important solutions.

Regular surveying and continuous improvement will ensure that your organization reaches Best in Class performance!
