
A guide to running

StarT surveys Denise Rain, Acuity

Vicki Howe, HouseMark

April 2015

© HouseMark 2017

1 Introduction
1.1 The need for a review
1.2 The HouseMark review
1.3 Review findings
2 Measuring customer satisfaction in social housing
2.1 How transactional surveys fit into social housing research practices
2.2 Star and transactional surveys
2.3 The use of core Star questions in transactional surveys
2.4 Transactional surveys
2.5 Research methods
2.6 Surveys in Wales and Scotland
2.7 Star and StarT benchmarking service
3 Transactional surveys - the essentials
3.1 Who to survey
3.2 Building your survey
3.3 Core questions
3.4 Survey responses and rating scales
3.5 Survey methods
3.6 Timing and frequency
3.7 Representative sample sizes and statistical reliability
4 StarT questions
4.1 Introduction
4.2 Responsive repairs
4.3 Complaints
4.4 Anti-social behaviour
4.5 Lettings
4.6 Filtering
4.7 Open-ended questions
4.8 Follow-up questions
4.9 Other surveys
5 Achieving reliable and representative results from transactional surveys
5.1 Census or sample
5.2 Sampling
5.3 Deciding on the sample size
5.4 Response rates and sample size
5.5 Checking representativeness, quotas and weighting
5.6 Confidentiality, anonymity and data protection
6 Good practice in designing customer satisfaction surveys
6.1 Planning and reviewing surveys
6.2 Introducing new surveys and questions
6.3 Survey aims
6.4 Managing the surveys
6.5 Avoiding survey fatigue
6.6 Incentives
6.7 Branding and design of surveys
6.8 Budgets and resources
6.9 Internal or external research
6.10 Strategic overview
7 Good practice in conducting customer satisfaction surveys
7.1 Question sets
7.2 Representative sample sizes and statistical reliability
7.3 Survey methods
7.4 Conducting the survey
7.4.1 Running a postal self-completion survey
7.4.2 Running telephone interviews
7.4.3 Running face-to-face interviews
7.4.4 Running online surveys
7.4.5 Running SMS/text surveys
7.4.6 Running PDA surveys
7.4.7 Running IVR surveys
7.5 Data input and checking
7.6 Survey retention
8 Good practice in analysing satisfaction surveys
8.1 Analysis and interpretation
8.2 Rolling or moving averages and year-to-date (YTD)
8.3 Trend analysis and seasonal trends
8.4 Reporting and action on the results
9 HouseMark and seeking help and advice
9.1 Sharing good practice
9.2 HouseMark validation and statistical help
10 Appendix 1 – Transactional surveys in the sector
10.1 Guidance on carrying out responsive repairs surveys
10.2 Guidance on carrying out complaints surveys
10.3 Guidance on carrying out ASB complaints surveys
10.4 Guidance on carrying out lettings surveys


Star and StarT, and the intellectual property rights in them, are owned by HouseMark Ltd (“HouseMark”). We give

express permission for anyone who is a social landlord, or their subcontractor, to use, store and transmit Star and

StarT. This includes the right to copy the Star and StarT questions and the approach set out in written guidance

provided by HouseMark. Star and StarT surveys and questionnaires do not have to be acknowledged to

HouseMark. The Star logo (available at www.housemark.co.uk/star) may be reproduced on questionnaires, reports

and other materials based on Star on the basis that you use it in the same way that HouseMark uses it, including

without limitation, its format, colours and proportions. You are expressly prohibited from using the Star logo in a

way that is detrimental to the reputation of HouseMark or Star or in a way that is not in keeping with the way that

HouseMark uses the logo.

You acknowledge that Star and StarT have not been developed to meet your individual requirements. Whilst care

has been taken to ensure the accuracy and completeness of the questions, we exclude all conditions, warranties,

representations or other terms, so far as is permitted by law, concerning Star and StarT, including without

limitation that relating to fitness for purpose, accuracy, completeness or intellectual property. We hereby disclaim

all liability and responsibility for your use of Star and StarT, or any part thereof, so far as it is permitted by law.

1 Introduction


This guide provides the information needed to design, run and benchmark

transactional surveys. The guidance sits alongside the Star (Survey of Tenants and

Residents) framework and introduces StarT for transactional surveys (such as

repairs, lettings, ASB, complaints).

The distinction between the two survey types of Star and StarT is not the questions

they ask but the trigger for asking them. A transactional survey, such as StarT,

follows an interaction or event, and is conducted as soon as possible afterwards,

while the experience is still fresh in the person’s mind. A general perception survey,

such as Star, deals with customers’ ongoing relationships with an organisation and

may be carried out at any time.

HouseMark’s new benchmarking service for transactional surveys was developed

following an extensive review carried out at the end of 2014. A brief summary of the

review is provided below. More details about the review can be found in the report

Benchmarking customer experience: findings from review of HouseMark members

and the accompanying summary report New thinking on benchmarking customer

experience: the role of Star and transactional surveys.

This guide to running StarT surveys is intended for housing practitioners

responsible for managing transactional surveys. An overview of StarT for non-

specialists can also be found in the summary report.

The content for this guide has been based on the evidence obtained during our

2014 review. A lead-in time is necessary before introducing a new benchmarking

system for satisfaction surveys, in this case for transactional surveys. During

2015/16, HouseMark subscribers can use this guide to adapt their survey methods

and questions to meet our new StarT framework for repairs, lettings, ASB and

complaints surveys if they wish to take up our offer of a new transactional

satisfaction benchmarking service. From 2016, the new online benchmarking

systems will be available to HouseMark subscribers to input their survey data and

see how they compare.

Good practice in measuring customer satisfaction in social housing is also included

in this guide following requests from subscribers in the review.

If you have any feedback or queries on this guide, please contact

[email protected].

1.1 The need for a review

HouseMark recognises the need to adapt its data collection and reporting systems

in response to changes in the housing sector operating environment. As part of

this process, HouseMark undertook a review of the use of transactional surveys in

the sector and of the demand for benchmarking customer satisfaction data

from such surveys. The review also sought to understand what guidance the sector

might require on transactional surveys and their relationship to perception

customer satisfaction (Star) surveys.

In 2011, HouseMark launched the Star framework for a periodic perception-based

survey of customer satisfaction that provides social housing landlords with the

means to compare results. Some 350 landlords now regularly collect and

benchmark this information using HouseMark services. Many now carry out


surveys annually or even more frequently; some have moved away from standalone

surveys altogether, instead repeating the same survey several times a year to track satisfaction.

Transactional surveys follow an event or interaction with a resident, such as a

repair, a letting or a complaint. They can be an invaluable way of measuring the

customer experience at an operational level, obtaining immediate feedback while

the experience is still fresh in the customer’s mind.

There are many reasons for conducting a transactional survey. Most commonly,

they are carried out as a means of identifying dissatisfied customers and putting

problems right quickly. Transactional surveys can also provide an organisation with

a better understanding of how customers perceive the quality of services being

provided. While individual surveys are tailored to each organisation, many will

incorporate questions from the wider Star question set. In addition, customer

satisfaction questions from transactional surveys have historically been collected

alongside operational data across HouseMark’s various benchmarking

modules and toolkits.

A review of the extent to which transactional surveys are used in the sector, and an

assessment of the demand for expanded benchmarking of the customer

satisfaction they capture, were clearly needed. Guidance

in conducting transactional surveys was required to introduce a level of

consistency for organisations wishing to benchmark their results and to define the

use of transactional surveys alongside the Star framework. HouseMark carried out

such a review at the end of 2014.

This guidance document aims to help members who wish to amend the

transactional surveys they conduct in 2015/16, with a view to beginning data

submission in 2016/17. It includes methodology and guidance to facilitate a

transactional satisfaction survey data benchmarking system.

This guidance document focuses on transactional surveys. A further update, based

on a review of Star itself, will be issued in July 2015 and will determine whether any

changes are required to Star surveys.

1.2 The HouseMark review

Following a roundtable discussion with a cross section of members in October

2014, HouseMark commissioned Acuity to assist with the review. They designed a

consultation exercise, based on two online member surveys and ten in-depth

telephone interviews. The online surveys were split into two parts:

• Part A – Measuring customer satisfaction, Star, benchmarking and guidance

• Part B – Transactional surveys in detail.

Members were also asked to submit examples of their Star and transactional surveys.

A detailed draft report on the research findings was circulated to a consultation

group to determine the scope of the good practice guidance and any further

clarification and consultation that may be required. The research findings are

available in two formats – the summary report New thinking on benchmarking

customer experience: the role of Star and transactional surveys and the full

findings report Benchmarking customer experience: findings from review of

HouseMark members.


1.3 Review findings

Transactional surveys

Virtually all the organisations that took part in the review carry out transactional

surveys. The review identified over 30 different types, including responsive repairs

surveys (conducted by 94% of those responding), lettings, complaints and ASB

surveys. Fewer than half (44%) carry out customer service contact surveys.

Two-thirds of organisations said they would like to see a set of core and optional

questions developed for transactional surveys similar to the Star framework.

The majority said they would like guidance and advice on different survey methods,

advice on sample sizes, sampling and sampling errors, information on the latest

survey techniques, and advice on checking representativeness and weighting.

Benchmarking transactional surveys

Over half of the organisations that took part in the review said they would like to

see additional satisfaction measures added to cover contact centres, asset

management, ASB, complaints and development. All of the respondents who took

part in the in-depth telephone interviews could see the merits in benchmarking

transactional surveys, with strong support for benchmarking lettings and

responsive repairs found in the online survey. Organisations did not feel that

satisfaction benchmarking needed to be reported more frequently than annually

(or quarterly in some of the mini-modules).

There were some reservations about benchmarking transactional surveys. A few

organisations pointed out that the purpose of their transactional surveys is to

focus on the reasons behind scoring less than 100% satisfaction, or that the value

lies in monitoring their rolling performance, which makes benchmarking these

scores with other landlords superfluous in their view. Others cited the fact that

organisations run their day-to-day services very differently from each other and

there is a wide range of surveying approaches, which would make consistency and

comparability difficult. Small sample sizes were an issue for some organisations,

while others valued the ability to adapt and tailor the surveys to their business

needs.

HouseMark recognises that these concerns are all valid and should be carefully

considered in any benchmarking system. In developing the new StarT system we

have sought to create a set of standards that facilitate meaningful comparison

while allowing sufficient flexibility to meet different operational needs and

organisational size and structure. Our belief remains, however, that despite the

potential problems, there is merit in adjusting the questions previously asked to

conform to a national set to facilitate the benchmarking of transactional surveys.


2 Measuring customer satisfaction in social housing

2.1 How transactional surveys fit into social housing research

practices

Social landlords carry out a wide range of research activities, some of which are

aimed at measuring customer satisfaction and the customer experience.

The benefits of measuring the customer experience and satisfaction are numerous

and well documented. Organisations that took part in the review valued the ability

to measure current performance, gain greater customer insight, tailor service

delivery and deliver value for money. Organisations have many varied approaches

to customer satisfaction measurement and what is right for one may not suit

another.

The various surveys can be grouped into three main classes: perception satisfaction

surveys, transactional surveys and other surveys/research.

Perception satisfaction surveys

A perception satisfaction survey is often a large-scale survey aimed at measuring

all¹ customers’ views, impressions and opinions about their landlord and the

services it delivers. The Star framework is one such example. Historically,

perception satisfaction surveys were carried out at specific points in time, typically

every two or three years. Today, many landlords carry out their perception surveys

as trackers by running the same survey at set intervals throughout the year.

Transactional surveys

For the purpose of this guide, we define transactional surveys as those that collect

customer feedback about an interaction or event. They enable an organisation to

gain a better understanding of its customers’ experience and perception of

service quality while a service interaction is still fresh in customers’ minds.

Examples include complaints surveys and responsive repairs surveys.

Other research practices

For the purpose of simplicity we have grouped all other surveys into a third group.

We recognise, however, that organisations carry out a broad range of research and

customer engagement activities. Some of these, such as mystery shopping, estate

walkabouts or journey mapping, are also designed to measure the customer

experience. Residents also provide organisations with their opinions and views

through a range of involvement activities.

2.2 Star and transactional surveys

In practice, there has not always been a clear distinction between the questions

used in perception satisfaction and transactional surveys. Some landlords have

asked perception questions in both types of survey – such as asking about

satisfaction with overall services at the end of a responsive repairs survey. A

¹ To capture ‘all’ views, the landlord will either conduct a census survey or draw a carefully constructed sample to ensure

statistically valid representation.


perception satisfaction survey, such as Star, can also include transactional

questions – for example, if the wider set of questions is used then questions about

the last contact or last repair are more transactional in their nature.

For the purpose of clarity, our definition of the distinction between the two survey

types is not the questions asked but the trigger for asking them. A transactional

survey follows an interaction or an event, while a perception survey does not.

2.3 The use of core Star questions in transactional surveys

One of the remits for the review was to quantify and evaluate the use of core Star

questions in transactional surveys. The review found the use of core and optional

Star questions to be commonplace in transactional surveys, suggesting that the

sector values the framework.

It is highly likely that core Star questions asked in transactional surveys are subject

to numerous influences on the scores, including differences in survey methods, the

time of year etc. In addition, responses to core Star questions often produce higher

scores in transactional surveys than in general perception surveys, one example

being the repairs and maintenance questions. Because of this, HouseMark does

not accept into its Star benchmarks any Star questions that have been collected

from transactional surveys.

2.4 Transactional surveys

For many organisations, transactional surveys form a large part of their housing

research activities. In addition to Star surveys, the review identified over 30

different transactional surveys being carried out by organisations. The list below is

not definitive:

Community-based surveys

• Community development
• Neighbourhood surveys
• Resident involvement surveys
• Safer communities surveys

Complaints

• ASB complaints
• General complaints

Asset management

• Aids and adaptations
• Asset management/Planned works/Major works
• Decent homes
• Gas and electric surveys
• Gas service and repairs
• Responsive repairs
• General repairs

Rent and service charges

• Rent arrears/Income recovery/Income maximization
• Service charges

Services

• Care services/Care and support
• Communal garden and grounds maintenance
• Contact centre
• Emergency call system
• Estate services
• Families unit
• Handyman services
• Housing management visit
• Out of hours’ service
• Reception service
• Sheltered housing
• Supported housing
• Tenancy support

Development

• New-build development
• Design surveys

Homeowners

• Shared owners
• Leaseholders

Lettings and voids

• Exit and voids
• New tenancy/New lettings

In recent years, some organisations have reduced the number of questions asked

in transactional surveys and indeed the number of transactional surveys

themselves, as there was a feeling that (a) they did not provide valuable

information, and (b) they were causing survey fatigue and possibly increasing

dissatisfaction. The transactional approach has many advantages, however, and

should not be dismissed without careful consideration.

2.5 Research methods

Organisations make use of a wide range of survey methods, with telephone and

postal surveys among the most popular. Many organisations use mystery

shoppers, focus groups, face-to-face surveys, online surveys, drop cards, SMS

text surveys, journey mapping, PDAs (Personal Digital Assistant) and IVR

(Interactive Voice Response) surveys. Other survey methods include tenant

inspectors, email panels, interactive voting and online focus groups.

2.6 Surveys in Wales and Scotland

The guidance on transactional surveys applies equally to organisations in England

and in Wales. Transactional surveys are currently being reviewed in Scotland and

the information from this guide is feeding into that review.


2.7 Star and StarT benchmarking service

Benchmarking is the process of comparing performance in order to seek best

practice. This is often done by using simple performance indicators as ‘can

openers’ to help focus on underlying processes that make for good or poor

performance. Benchmarking with other organisations allows you to:

• Share information about performance and practice

• Understand why performance varies between providers

• Identify those aspects of service provision that are contributing to under-

performance

• Identify processes and practices that will help improve performance.

HouseMark’s core benchmarking service goes further, by combining core Star

satisfaction data with cost, performance and contextual data to help you inform

value for money judgements through comparison with others.

The standardised nature of the Star framework introduced in July 2011 made it a

powerful resource for benchmarking satisfaction, continuing some of the data

trends set up by STATUS. The introduction of benchmarking transactional surveys,

now housed in one place via StarT, will greatly enhance this service. HouseMark’s

benchmarking services allow you to create your own peer groups for comparison.

The Star and StarT benchmarking service will let you compare your results using a

wide range of contextual filters.
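Comparing results against a self-selected peer group can be sketched in a few lines of code. The example below is purely illustrative: the function, the scores and the quartile banding are our own assumptions for demonstration, not part of HouseMark's actual benchmarking service.

```python
# Illustrative sketch of peer-group comparison: given a satisfaction
# score and a peer group of scores, report which quartile the
# organisation falls into (1 = top quartile). The data and banding
# here are hypothetical, not drawn from HouseMark's benchmarks.

def quartile_position(own_score, peer_scores):
    """Return the quartile (1 = best) of own_score within the group."""
    ranked = sorted(peer_scores + [own_score], reverse=True)
    rank = ranked.index(own_score)          # 0-based, highest score first
    group_size = len(ranked)
    return int(rank * 4 // group_size) + 1  # band ranks into four quartiles

# Example: an 87% repairs satisfaction score against seven peers
peers = [92.0, 88.0, 85.0, 83.0, 80.0, 78.0, 75.0]
print(quartile_position(87.0, peers))  # 2 (second quartile)
```

In practice a benchmarking service would also apply the contextual filters mentioned above (survey method, organisation size and so on) before ranking, which this sketch omits.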


3 Transactional surveys - the essentials

This section outlines the requirements for StarT – HouseMark’s benchmarking

service for transactional surveys.

3.1 Who to survey

Transactional surveys can be used to survey all residents who receive the relevant

services and interactions regardless of tenure. As with any survey, however,

consideration needs to be given to the suitability of the survey method to each

resident group, as some approaches may be more successful than others. The

level of support to complete the survey may also differ between resident groups.

For transactional surveys, include all property tenures and types for which the

surveying landlord is responsible for delivering the service.

3.2 Building your survey

As with Star, the StarT framework encourages organisations to design their own

surveys and to ask as many questions as they wish. When compiling a question set

an organisation might consider the following:

1. Review any existing survey to assess what questions worked well but also to

identify those where the results have not been used

2. Review the core StarT questions for the transactional survey

3. Review the Star and StarT optional questions and decide whether to include any of

them

4. Identify and include any additional questions of your own, ensuring that they do not

duplicate or re-write any Star/StarT questions that are also in your survey

5. Adopt the appropriate rating scale for each question

6. Consider including open-ended questions to probe further – for example, where

there are higher levels of dissatisfaction and/or as a general open question around

improving a service

7. Determine the most appropriate order for the survey questions

8. Consider adding questions to ask residents whether they agree to waive their

anonymity and/or to be re-contacted by the organisation. Note that anonymity is

always implicitly assumed in surveys of this type, so you need to collect their opt-in

to be identified, rather than their opt-out from being identified.
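The steps above can be captured in a simple data structure. The sketch below shows only one possible way of organising a question set in code; the question wording, identifiers and consent phrasing are placeholders, not the actual StarT questions.

```python
# One possible in-code representation of a transactional question set.
# Wording and ids below are placeholders, not the real StarT questions.

FIVE_POINT_SCALE = [
    "Very satisfied",
    "Fairly satisfied",
    "Neither satisfied nor dissatisfied",
    "Fairly dissatisfied",
    "Very dissatisfied",
]

repairs_survey = [
    {"id": "core_1", "type": "core", "scale": FIVE_POINT_SCALE,
     "text": "How satisfied or dissatisfied are you with the repair completed?"},
    {"id": "opt_1", "type": "optional", "scale": FIVE_POINT_SCALE,
     "text": "How satisfied or dissatisfied are you with the time taken?"},
    {"id": "open_1", "type": "open", "scale": None,
     "text": "Is there anything we could do to improve the service?"},
    # Step 8: collect an explicit opt-in to be identified, never an opt-out
    {"id": "consent_1", "type": "consent", "scale": ["Yes", "No"],
     "text": "Are you happy for us to contact you about your answers?"},
]

# Step 4: guard against duplicated or re-worded questions in the same survey
texts = [q["text"] for q in repairs_survey]
assert len(texts) == len(set(texts)), "duplicate question text in survey"
```

Keeping the question set as data rather than hard-coding it makes it straightforward to reorder questions (step 7) or swap rating scales (step 5) without rebuilding the survey.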

3.3 Core questions

Individual StarT core satisfaction questions for four transactional surveys can be

found in section 4 under each survey type. If organisations wish to benchmark

these questions they must use the associated survey responses and rating scales

exactly as listed, or an acceptable alternative (see below). In addition to the data,

contextual information will also be collected at organisational level, including the

survey methodology, frequency of surveying, the time/date of the survey, the

number of responses and statistical validity to allow the filtering of the results.

3.4 Survey responses and rating scales

The majority of Star and StarT questions are phrased ‘How satisfied or dissatisfied


are you …’ and have a five-point descriptive rating scale (i.e. very satisfied; fairly

satisfied; neither; fairly dissatisfied; very dissatisfied). There has always been a

great deal of debate on rating scales and differing opinions remain. Some people

prefer long numerical scales, which offer increased differentiation of responses

and greater scope in analysis. However, when the Star methodologies were

developed, residents overwhelmingly said that they preferred five-point

descriptive scales. The issue was revisited by HouseMark in 2012 and again in

2014 as part of the consultation exercise for developing transactional surveys.

Numerical scales

We recognise that many organisations carry out telephone surveys and some have

reported that they find surveys with numerical scales easy to conduct on the

telephone. HouseMark will, therefore, accept results from organisations that have

converted the rating scale from a five-point descriptive scale to a five-point

numerical scale (for example, where five equals very satisfied and one equals very

dissatisfied).

We also recognise that, in exceptional cases, some organisations have invested in

establishing surveys based on an alternative rating scale. Some of these scales are

more compatible than others. Data from organisations using a ten-point numeric

scale will be accepted into HouseMark benchmarking systems for StarT questions

when converted correctly.
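The guide does not spell out the conversion itself, so the sketch below assumes one common approach: pairing adjacent points on the ten-point scale (1–2 mapping to very dissatisfied, 9–10 to very satisfied). The function name and mapping are illustrative, not HouseMark's published rule; confirm the correct conversion before submitting data.

```python
# Illustrative only: pair adjacent points on a 1-10 scale onto a 1-5
# scale (1-2 -> 1, 3-4 -> 2, ..., 9-10 -> 5). This mapping is an
# assumption, not HouseMark's published conversion.

def ten_to_five(score: int) -> int:
    """Convert a ten-point numeric response to a five-point one."""
    if not 1 <= score <= 10:
        raise ValueError(f"score out of range: {score}")
    return (score + 1) // 2

responses_10pt = [10, 9, 7, 5, 2, 8, 6]
responses_5pt = [ten_to_five(s) for s in responses_10pt]
# On the converted scale, 4 and 5 correspond to fairly/very satisfied
pct_satisfied = sum(1 for s in responses_5pt if s >= 4) / len(responses_5pt)
print(responses_5pt, f"{pct_satisfied:.0%}")   # [5, 5, 4, 3, 1, 4, 3] 57%
```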

We will continue to flag such data so that other benchmarking users are aware that

a different rating scale was used. Scales that are not compatible with a base

number of five (e.g. four-point, seven-point or 11-point zero-to-ten scales) will not be

accepted into HouseMark benchmarking for Star or StarT. There is nothing

intrinsically wrong with the other scales; it is simply the need for consistency for

comparison purposes.

We have included the ‘neither’ option since this provides respondents with an

opportunity to state that they are neither satisfied nor dissatisfied and encourages

a response to the question in all cases. Without this option, respondents may

choose not to answer certain questions because they feel they are not being

provided with an option relevant to them.

The middle ground

The 2014 review found that just over half of the organisations contacted wanted to

retain the neutral middle ground in satisfaction questions (sometimes seen as the

‘neither’ option in descriptive scales), while a third would like to see it dropped.

Given that slim majority, coupled with the need to maintain consistency and allow

for trend analysis, the new StarT framework includes a middle ground response.

Nonetheless, in recognition of the preferences of some organisations to have the

‘neither’ proportions included in comparison data, HouseMark will introduce an

extra calculation of ‘% dissatisfied’ (and its inverse, ‘% non-dissatisfied’) to sit

alongside the standard ‘% satisfied’ in online benchmarking reports.

3.5 Survey methods

Transactional surveys are undertaken using a variety of survey methods and,

unlike Star, the StarT framework does not prohibit any survey method. As


long as the requirements for a statistically robust and representative survey are

met, HouseMark will accept transactional data from surveys collected by postal

self-completion questionnaires and drop cards, telephone interviews, face-to-face

interviews, online surveys, text/SMS surveys, IVR and PDA.

Some of the survey methods are more likely than others to introduce survey bias

due to the likelihood that the sample would not be representative of the population

who have used a particular service or interaction. We recommend that

representativeness checks are made and other methods used to supplement the

survey.

3.6 Timing and frequency

The best time to carry out a transactional survey will vary depending on the

transaction.

Some transactional surveys should be carried out as soon as possible after the

service has been received as part of the operational process, in order to implement

an immediate response if there is service failure (for example a complaints survey).

A second benefit of the survey is the ability to carry out analysis and track trends in

order to learn from the residents’ experience and so help to implement service

improvements.

In deciding when to carry out transactional surveys, organisations need to balance

the capacity to run the survey at the desired frequency, such as weekly or monthly,

with the optimum time to interview the resident, such as immediately after an

interaction.

A resident’s view and recollection of an interaction may change over time. Even if

administratively it is not possible to survey the resident within the optimum window

of opportunity, it is recommended that organisations aim to conduct each survey at

the same time wherever possible (for example within a two- or four-week period)

for consistency and comparability.

In addition, an organisation may need to take into account other considerations:

• Timeliness – value to the organisation in having data that is up-to-date and current,

such as in real-time versus weekly, quarterly, six-monthly or annual data, which may

be easier to collect

• Validity – whether the volume of transactions and the ability to measure customer

satisfaction data makes the sample size robust and suitable for reviewing

• Contractual obligations – such as those built into contracts with repair contractors

• Strategic value – i.e. whether the satisfaction data remains useful in informing

business decisions and action to improve services

• Cost of collection – i.e. whether expenditure of resources on satisfaction

measurement represents good value for money.

A further factor to take into account is the value in continuing to carry out

transactional surveys that consistently report very high satisfaction scores.

Surveys in such areas could be scaled back or carried out less frequently.

We are aware that some organisations will choose to conduct their surveys more

frequently than others and at this stage we are proposing to collect quarterly

submissions for responsive repairs and/or annual submissions for other


transactional surveys.

3.7 Representative sample sizes and statistical reliability

For StarT transactional surveys, the margins of error for results reported to

HouseMark are set out in the table below.

A minimum number of replies are necessary to achieve the required margin of

error. This minimum varies according to the population size of each transactional

survey, which can vary considerably depending on the size of the organisation.

Table 1: Minimum number of responses required for HouseMark benchmarking

Number of interactions | Required minimum margin of error (annual) | Desired minimum number of annual responses | Percentage of users
Under 100        | ±10% | 49             | 49%+
100 to 199       | ±10% | 49 to 65       | 49% to 33%
200 to 499       | ±8%  | 86 to 116      | 43% to 23%
500 to 999       | ±6%  | 174 to 211     | 35% to 21%
1,000 to 2,999   | ±6%  | 211 to 245     | 21% to 8%
3,000 to 4,999   | ±5%  | 341 to 357     | 11% to 7%
5,000 to 9,999   | ±4%  | 536 to 566     | 11% to 6%
10,000 to 14,999 | ±4%  | 566 to 577     | 6% to 4%
15,000 to 19,999 | ±4%  | 577 to 583     | 4% to 3%
20,000 to 49,999 | ±3%  | 1,013 to 1,045 | 5% to 2%
50,000 to 99,999 | ±3%  | 1,045 to 1,056 | 2% to 1%
Over 100,000     | ±3%  | Over 1,056     | Approx. 1%
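The figures in Table 1 can be reproduced with the standard margin-of-error formula at the 95% confidence level, using p = 0.5 (the most conservative assumption) and a finite population correction. A minimal sketch for checking a survey result against the table:

```python
import math

def margin_of_error(responses: int, population: int,
                    p: float = 0.5, z: float = 1.96) -> float:
    """Margin of error at the 95% confidence level with the finite
    population correction (which is why small populations need a high
    percentage of users to respond)."""
    se = z * math.sqrt(p * (1 - p) / responses)
    fpc = math.sqrt((population - responses) / (population - 1))
    return se * fpc

# 278 responses from 1,000 service users gives roughly +/-5%
print(round(margin_of_error(278, 1_000) * 100, 1))     # 5.0
# 1,013 responses from 20,000 interactions gives roughly +/-3%
print(round(margin_of_error(1_013, 20_000) * 100, 1))  # 3.0
```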

Organisations that are unable to meet the required margin of error

Organisations may struggle to meet the required margin of error in transactional

surveys for any service or interaction with fewer than 200 to 300 users in a year, as

this requires responses from a very high percentage of users. Organisations can

still submit their results for benchmarking providing that there are more than 20

responses in the required time period (i.e. 20 per quarter for quarterly benchmarks,

20 per year for annual benchmarks). HouseMark’s benchmarking services will allow

organisations the choice of whether to include the results from these surveys when

comparing themselves with others.

Subgroups

Note that these minimum numbers of replies only apply for an analysis across the

group of service users covered by a survey. If you want to analyse your results at

the level of sub-groups while retaining the desired margin of error the number

would have to be increased. If you analyse the results by sub-groups you need to

ensure a minimum number of responses in each sub-group and you may need to

check the representativeness and calculate the statistical reliability of each of the

sub-groups.
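As a rough planning aid, the responses needed per sub-group for a chosen margin of error can be sketched with the same formula, here ignoring the finite population correction (a conservative simplification, so real requirements may be slightly lower):

```python
import math

def responses_for_margin(margin: float, p: float = 0.5,
                         z: float = 1.96) -> int:
    """Responses needed for a given margin of error at the 95%
    confidence level, ignoring the finite population correction."""
    return math.ceil((z / margin) ** 2 * p * (1 - p))

# Each sub-group analysed at +/-10% needs about 97 responses of its own,
# so several sub-groups can need far more surveys than one overall figure.
for e in (0.10, 0.08, 0.05):
    print(f"+/-{e:.0%}: {responses_for_margin(e)} responses")
```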

4 StarT questions

4.1 Introduction

HouseMark has decided to expand its benchmarking service for transactional


surveys in response to the demand from members. Four types of transactional

surveys – responsive repairs, anti-social behaviour, complaints and all lettings –

have been chosen as they were the most popular services identified by the

organisations participating in the review and HouseMark already collects customer

satisfaction measures for them. Further information collected in the review

regarding how individual landlords carry out these four transactional surveys can

be found in the appendix.

For each of the four areas, we outline the transactional questions that are available

for benchmarking in StarT Questions, which are available here. Organisations do

not have to ask all of the questions in each service area to participate in

benchmarking. They can expand the question set from other sources, such as Star,

the optional StarT questions and their own questions to tailor the survey to meet

their exact needs.

For ease of reference, the StarT questions that are available for benchmarking are

listed here for each of the four service areas. Only the questions are listed here, but

using the accompanying required response options is equally important if you wish

to benchmark the results. The full set of StarT questions, including the required

response options, can be found in our StarT Questions Excel spreadsheet.

4.2 Responsive repairs

StarT code Question only

Trep1 Thinking about the last repair completed, how satisfied or

dissatisfied were you with the following?

Trep1a The ease of reporting the repair

Trep1b The worker’s overall performance in terms of their attitude,

treatment of your home and tidying up after the work

Trep1c The overall quality of work

Trep2 How good or poor do you feel [your social housing provider] was at

keeping you informed throughout the repairs process?

Trep3 Overall, how satisfied or dissatisfied are you with the repairs service

you received on this occasion?

Trep4 How likely would you be to recommend the repairs service to other

residents on a scale of 1 to 10, where 1 is not at all and 10 is

extremely likely?

4.3 Complaints

StarT code Question only

Tcom1 How easy was it to make your complaint?

Tcom2 How satisfied or dissatisfied were you with the following aspects of the

complaints service?


Tcom2a The level of customer service from staff who dealt with your complaint

Tcom2b The information and advice provided by staff

Tcom3 How well were you kept informed about the progress of your complaint?

Tcom4 Overall, how satisfied or dissatisfied are you with the final outcome of

your complaint?

Tcom5 Overall, how satisfied or dissatisfied are you with the way your complaint

was handled by [your social housing provider]?

Tcom6 How willing would you be to make a complaint to [your social housing

provider] in the future?

Tcom7 How likely would you be to recommend the complaints service to other

people who may need to use the service, on a scale of 1 to 10, where 1 is

not at all and 10 is extremely likely?

4.4 Anti-social behaviour

StarT code Question only

Tasb1 At the beginning, how easy or difficult was it to contact a member of staff

to report your anti-social behaviour complaint?

Tasb2 How would you rate how quickly you were initially interviewed about your

complaint (either in person or over the phone)?

Tasb3 How satisfied or dissatisfied were you with the following aspects of the

anti-social behaviour service?

Tasb3a The level of customer service from staff who dealt with your complaint

Tasb3b The support provided by staff

Tasb3c The speed with which your anti-social behaviour case was dealt with

overall

Tasb3d The final outcome of your anti-social behaviour complaint

Tasb4 How well were you kept up to date with what was happening throughout

your anti-social behaviour case?

Tasb5 Overall, how satisfied or dissatisfied are you with the way your anti-social

behaviour complaint was handled by [your social housing provider]?

Tasb6 How willing would you be to report any anti-social behaviour to [your

social housing provider] in the future?

Tasb7 How likely would you be to recommend the anti-social behaviour

complaints service to other people who may need to use the service, on a

scale of 1 to 10, where 1 is not at all and 10 is extremely likely?

4.5 Lettings

The questions are designed to be used for all lettings, whether these are re-lets of

existing properties or the first letting of a newly-built property.

StarT code Question only

Tlet1 Thinking about the lettings service, how satisfied or dissatisfied were

you with the following?


Tlet1a The information and advice provided

Tlet1b The helpfulness of staff dealing with your new tenancy

Tlet2 How satisfied or dissatisfied were you with the overall condition of

your home at the time of letting?

Tlet3 How good or poor do you feel [your social housing provider] was at

keeping you informed throughout the lettings process?

Tlet4 Overall, how satisfied or dissatisfied are you with the lettings

process?

Tlet5 How likely would you be to recommend the lettings process to other

people who may need to use the service, on a scale of 1 to 10, where

1 is not at all and 10 is extremely likely?

4.6 Filtering

In addition to data from the StarT questions that can be benchmarked,

organisations will need to provide additional contextual information to facilitate

filtering the results when StarT benchmarking goes live in April 2016. The filters will

differ for each survey.

4.7 Open-ended questions

In addition to the benchmarking questions, we recommend including a number of

open-ended questions. When residents answer ‘neither’ or ‘dissatisfied’ to a

question, organisations can collect the underlying reason(s) by asking questions

such as ‘If you answered neither or fairly/very dissatisfied, could you explain your

reasons for this?’

At the end of the survey, organisations should consider adding any additional

questions to capture a general view on how to improve the service, such as ‘Is

there anything we could have done better?’ or ‘How could we improve the service?’

4.8 Follow up questions

Provided the survey is not anonymised, a question asking whether the resident

would like a follow-up call may be appropriate.

4.9 Other surveys

Provided the launch of StarT questions is successful and there is substantial

take-up and interest in benchmarking transactional surveys in 2016, HouseMark may

expand its benchmarking services to other transactional surveys.

5 Achieving reliable and representative results from

transactional surveys

One of the fundamental issues organisations have with transactional surveys is the

balance between cost and sample size. Understanding statistical significance and

margins of error can lead to improved cost effectiveness.

The sampling stage can be quite technical in its nature, although the guidance


included here should be sufficient for your organisation to carry out its own

sampling and reliability tests. If there are any doubts you should consider further

reading, seeking outside help or contracting out this area of work. A basic

understanding of statistics is key to deciding how many residents to survey and

whether the responses are representative and robust enough to inform business

decisions.

5.1 Census or sample

One of the first steps you need to undertake prior to carrying out a transactional

survey is to decide whether to undertake a census or a sample survey. A census is

the entire population of residents who have received the service, whereas a sample

is a sub-set of the population.

For some transactional surveys, such as anti-social behaviour or complaints, the

total number of transactions in a year may be very low, and a decision to undertake

a census is relatively straightforward. At some organisations, however, the volume

for even the complaints service could be in the thousands, in which case the

organisation has a choice to make.

The key factors in determining whether to carry out a sample survey often include:

• Importance of the results to your organisation – is it a key service area and/or one

where you are reviewing the service and need to collect more reliable information

for a period of time?

• The reliability requirements for the survey

• The likely response to the survey – if you know from past experience that the

survey suffers from a low response rate you may need to include more residents in

the sample, and once the sample size is close to the total population then a census

should be undertaken to avoid any residents feeling excluded

• Budget implications

• The potential for survey fatigue

Some transactional surveys will be measuring high-volume interactions – such as

responsive repairs, gas servicing and customer contact. A different approach is

needed for these surveys, where it is more likely that sample surveys are

undertaken.

The table below highlights some of the common approaches adopted by

organisations that took part in the review and is intended to inform the reader

rather than set a hard and fast rule.

Table 2: Sampling frames and survey methods from organisations who took part in the review

                                               | Repairs                       | Lettings                                | Complaints            | ASB
Percentage of organisations undertaking survey | 96%                           | 83%                                     | 73%                   | 86%
Census or sample                               | 73% sample                    | 65% census                              | 67% census            | 80% census
Survey method                                  | 54% phone, 22% postal, 8% PDA | 50% phone, 27% face-to-face, 23% postal | 74% phone, 21% postal | 60% phone, 38% postal

If you have a small number of service users and you are carrying out a census, you

will not have to worry about devising a sampling frame. However, you will still need

to maximise your response rate and take statistical reliability into account.

5.2 Sampling

Sampling is a process that selects a proportion of your residents to be surveyed.

Accurate sampling and a good response rate can generate reliable feedback at a

fraction of the cost of a survey of all of your residents. If sampling is carried out

properly, according to set rules, you can be reassured that the sample used for the

survey is as representative as possible of your residents as a whole.

The greater the number of residents who reply, or who are interviewed in the

survey, the more confident you may be that the views of the sample are

representative of the views of all residents.

For transactional surveys, the ‘sampling frame’, or the list of residents whose views

you want to explore, is simply all of those who have had a transaction or used the

particular service within the timeframe in which you are carrying out the surveys.

Information on this group of residents is used to select the sample if a census

is not being conducted.

5.3 Deciding on the sample size

If you have decided not to undertake a census, you will need to decide the size of

the sample. The starting point is to work out how many completed questionnaires

or interviews you will need for the population being surveyed. This is important

because:

• Generally, the larger the number of responses, the more accurate the results

• The larger the number of responses that are achieved, the more detailed analysis

you can do on sub-groups (for example by area or repair trade type)

• The smaller the number of residents who use the service, the higher the

percentage of responses needed.

When deciding the sample size, you will need to weigh up the accuracy of the

results (the more responses, the better) against the available budget and other

resources available for the survey.

For example, if you have 1,000 service users in any one year you would require 278

responses as a minimum to achieve a margin of error of ±5%; with 2,000 service

users, this would increase to 322 minimum responses; and 5,000 service users

would require 357 minimum responses. The service with the largest number of

users (5,000 users) requires only 79 more responses than the smallest service

(1,000 users) to meet the same level of statistical reliability.
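The worked example above follows from the finite-population sample-size formula at the 95% confidence level with p = 0.5; rounding to the nearest whole response reproduces the guide's figures:

```python
def required_responses(population: int, margin: float,
                       p: float = 0.5, z: float = 1.96) -> int:
    """Minimum responses for a given margin of error (95% confidence),
    with the finite population correction. Rounded to the nearest whole
    response to match the worked example; some practitioners round up."""
    numerator = population * z ** 2 * p * (1 - p)
    denominator = margin ** 2 * (population - 1) + z ** 2 * p * (1 - p)
    return round(numerator / denominator)

for users in (1_000, 2_000, 5_000):
    print(users, "service users ->", required_responses(users, 0.05))
# 1,000 -> 278, 2,000 -> 322 and 5,000 -> 357 responses at +/-5%
```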

Guide to margin of error

It is not uncommon in social housing for organisations to work with a margin of

error for surveys of between 4% and 8% at the 95% confidence level. Larger


surveys sometimes work to a lower margin of error, such as 2%, while a margin of

error of up to 10% is often acceptable for subgroups.

It is up to individual organisations to determine what level of reliability is acceptable

for their surveys. With transactional surveys that are repeated regularly it often

does not take long to determine whether the results are useful or fluctuate so

widely that it is hard to understand exactly what is happening and the usefulness of

the survey is rightly questioned. At this point an organisation has the choice to

increase the number of responses, or look to combine surveys results into longer

periods (such as analysing data quarterly rather than monthly), or abandon the

survey.

5.4 Response rates and sample size

The response rate to any survey will vary according to the method used and the

number of attempts made to encourage residents to respond.

For any postal survey achieving a response rate below 30%, it is worth asking

whether the right survey methodology is being used. However, there are a number of surveys that

typically achieve much lower response rates, such as complaints, anti-social

behaviour and rent arrears/income management (around 15% to 25% from a postal

survey).
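Expected response rates translate directly into the number of residents to contact: divide the target number of responses by the anticipated rate. A minimal sketch (the 15% and 30% rates are illustrative, taken from the postal ranges quoted above):

```python
import math

def contacts_needed(target_responses: int, response_rate: float) -> int:
    """Residents to contact to expect a given number of responses."""
    return math.ceil(target_responses / response_rate)

target = 278  # e.g. a +/-5% margin of error from 1,000 service users
print(contacts_needed(target, 0.15))  # 1854 at a 15% postal response rate
print(contacts_needed(target, 0.30))  # 927 at 30%
# If the contacts needed approach the whole population, a census is the
# practical choice, as noted in section 5.1.
```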

For telephone surveys, the response rates are different, as quotas are usually set

on the number of calls to be achieved for each survey and group of residents.

However, it is still important to measure the response rates and reasons for

non-response.

Table 3: Response rates by survey type and survey method from the review (excluding telephone surveys)

Survey method | Survey type                                          | Response rate
Postal        | Gas servicing, complaints                            | 10% to 20%
Postal        | Responsive repairs, ASB complaints                   | 10% to 30%
Postal        | Asset management                                     | 10% to 60%
Postal        | New build                                            | 20% to 80%
Postal        | New lettings                                         | 25% to 50%
Face-to-face  | New build                                            | 95% to 100%
Face-to-face  | New lettings                                         | 50% to 100%
IVR           | Customer services                                    | 1% to 5%
Online        | New build                                            | 10%
PDA           | Responsive repairs                                   | 25% to 30%
Text/SMS      | Responsive repairs, gas servicing, customer services | Up to 20%

5.5 Checking representativeness, quotas and weighting

Once the fieldwork has been completed and the reliability of the responses

determined, organisations may still need to check that the results are

representative of the service users, even for transactional surveys.

One of the advantages of telephone and face-to-face interviews for surveys that

cover a large number of interactions is that quotas can be set to ensure that the

results of the survey represent the views of all service users. If conducted

successfully there is no need to check that the response is representative.

Postal and online surveys, on the other hand, will appeal to particular groups of

residents and consequently their views can be over-represented in the results.

Even with a transactional survey, if it is based on a relatively small number of

service users you will need to ensure that it is not just one particular group of

residents (for example older residents) who respond.

To correct for this, responses can be weighted to make them representative. The

response is checked using one or more criteria that are known in advance for all

residents, such as the number of bedrooms, property type or management area.

The decision on which criteria to use is a technical one.

The weights applied are a series of multipliers that adjust the results from particular

categories of respondents so as to reflect the population that the sample has been

drawn from. If no adjustment is required, the weight is 1.0. If a category has twice as

many respondents as it should have to be representative, the weight applied is 0.5,

and so on.
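The weight calculation described above is the population share of a category divided by its respondent share. A minimal sketch with made-up figures (property type as the checking criterion):

```python
# Illustrative counts only: houses are under-represented among the
# respondents, flats over-represented.
population = {"house": 6_000, "flat": 4_000}   # all service users
respondents = {"house": 150, "flat": 250}      # survey returns
satisfied = {"house": 90, "flat": 175}         # satisfied responses

pop_total = sum(population.values())
resp_total = sum(respondents.values())

# Weight = population share / respondent share (1.0 means no adjustment)
weights = {c: (population[c] / pop_total) / (respondents[c] / resp_total)
           for c in population}

unweighted = sum(satisfied.values()) / resp_total
weighted = sum(weights[c] * satisfied[c] for c in weights) / resp_total
print(weights)                                   # house ~1.6, flat ~0.64
print(round(unweighted, 4), round(weighted, 4))  # 0.6625 vs 0.64
```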

Survey bias

It is important to recognise that each survey method carries its own bias – not just

in how individual questions are answered but also in terms of how different

demographic groups respond to different methodologies.

For example, postal surveys can include a disproportionately high response from

older residents, who tend to be more satisfied. This will have a greater impact in

surveys with lower overall response rates. There is some evidence to suggest that

telephone surveys can produce lower satisfaction scores than postal surveys, a

likely reason being that they reach a more representative group of residents. In

general, higher response rates tend to increase the proportion of ‘non-extreme’


scorers, leading to a lower mean satisfaction rating, so it may be that it is the

response rate rather than survey method that accounts for the observed

difference.

We would expect organisations to take these factors into account when analysing

results and, if possible, correct any survey bias where applicable. However,

HouseMark recognises that this may be difficult to carry out in practice.

Where bias has been introduced through differing response rates, with

transactional surveys it is unlikely that extra surveys could be conducted for the

under-represented subgroup. You may therefore need to consider applying

weightings to correct the bias.

Online surveys will have their own survey bias as the type of resident who

completes a survey on the internet may differ from the overall survey population. If

the results from an online survey are added to those collected using a different

survey method, the characteristics of the two populations need to be compared

and, if required, weighting applied to correct any under- or over-representation.

For transactional surveys where the total population is less than a few hundred, if

the response rate is high enough there is still some merit in checking to see if there

is any bias in the response. However, it is less likely that anything can be done to

correct this other than to add a note in the findings.

Most software survey packages can carry out weighting functions and produce

tables analysing your findings using weighted and unweighted data.

5.6 Confidentiality, anonymity and data protection

Individuals undertaking surveys should be clear in their own minds and be able to

clarify to others the difference between ‘confidentiality’ and ‘anonymity’.

• Confidentiality means following good practice to ensure that no personal

information about individual residents, or very small groups of residents, is released

into the public domain, or within your organisation, except among a very small

group of data controllers.

• Anonymity means that information provided to you or your contractors, in the form

of survey responses and views expressed by the respondent, cannot be tracked

back to an individual resident.

The majority of transactional surveys are confidential rather than anonymous

because:

• You need to be able to relate survey data to information held on individuals or

properties

• Some survey methodologies may require the use of reminders or booster surveys,

so you need to know who has responded

• You may want to link residents to other characteristics of their homes when

carrying out further analysis

• You may wish to re-contact the residents (subject to permission) about an issue

raised in the survey.

Carrying out transactional surveys is categorised as market research and is

defined under the Data Protection Act (DPA) 1998 as ‘classic research’. In


summary, the key principles of these guidelines are as follows:

• The data collected should not identify individual respondents, unless they have

given clear and express permission for this

• The data should not be processed to support measures or decisions with respect

to the particular individuals

• The data should not be processed in such a way that substantial damage or

substantial distress is caused to any data subject

• Unless expressly stated to respondents, the information cannot be used to

generate further research exercises. This means that the data cannot be used to

identify, for example, respondents who are dissatisfied with a particular service with

a view to surveying this group subsequently or inviting them to a focus group,

unless the respondents explicitly agree to this. For example, they could give

permission by ticking a box in a postal survey agreeing to this, or by agreeing over

the telephone or in a face-to-face interview. It is essential to seek their active

agreement to any ‘release clause’. You cannot assume they have agreed because

they failed to tick a box to say they disagree.
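In practice, the active-agreement rule means any re-contact list must be filtered on an explicit positive flag; a blank or missing answer counts as no consent. A minimal sketch (the field names are illustrative):

```python
# Build a follow-up list strictly from explicit opt-in: absence of an
# opt-out is never treated as agreement.
responses = [
    {"id": 101, "dissatisfied": True,  "recontact_opt_in": True},
    {"id": 102, "dissatisfied": True,  "recontact_opt_in": None},   # no answer
    {"id": 103, "dissatisfied": False, "recontact_opt_in": False},
]

recontact = [r["id"] for r in responses
             if r["recontact_opt_in"] is True and r["dissatisfied"]]
print(recontact)   # [101] -- the only explicit opt-in who was dissatisfied
```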

The consequence of these requirements and guidelines is that any data collected

during a transactional survey should not be held on your customer database in

such a way that staff or others might have full access to it. While the data may be

stored in the database, this should only be accessible to a small number of ‘data

controllers’ and the key identifying fields (e.g. property identifier) should not be

released in a way that would enable other staff to connect the survey responses

with individuals.

It is important that organisations abide by these requirements as breaches of these

protocols can result in heavy penalties by the Information Commissioner. Each

organisation should already be registered with the Information Commissioner’s

Office.

Where an organisation is passing over details of individuals to an external agency,

this is permitted under Section 29 of the DPA 1998, which allows for exemptions on

data protection for market research purposes.

6 Good practice in designing customer satisfaction

surveys

The next three sections give an overview of good practice that may be helpful to

organisations when conducting or reviewing customer satisfaction surveys. The

advice could apply to any sort of survey, but is written with Star and StarT surveys

in mind. It is worth re-emphasising that the right sort of survey method for one

organisation may well be different to that for another, and that the ‘right’ survey

methodology is one that works best to meet the research needs of your

organisation.

Organisations might find some of the options put forward in this section worth

considering before either embarking on any new transactional surveys or changing


current practices to include some of the StarT transactional questions.

6.1 Planning and reviewing surveys

The majority of organisations carry out transactional surveys. Many organisations

find themselves in the position where their range of such surveys has evolved over

time with little or no coordination. Some organisations may have carried out an

internal audit or review at some stage to measure the extent to which they survey

their residents, while others may feel that such a review is long overdue.

Ideally, organisations should periodically carry out systematic reviews either on

individual surveys or collectively as part of a strategic review of customer

satisfaction measurement. The timing of such reviews may be part of a coordinated

long-term strategy with reviews at set intervals over time, or undertaken as and

when staff feel there is a need to overhaul the current surveys.

Organisations following best practice are likely to have a clear strategy for

reviewing surveys, not in isolation but collectively with all other ongoing and one-

off surveys, in the short- to medium-term. Any review should also consider how the

data collected can help to inform and improve the business.

6.2 Introducing new surveys and questions

Most organisations hold a wealth of information about their customers on their

housing management systems. Customer information is routinely collected at

various points throughout the organisation whenever a resident makes contact.

Providing those internal systems are kept up to date, all relevant information is

captured and made available to those who need it. Customer relationship

management (CRM) systems have the capacity to enhance this process and

capture local-level information. Before embarking on any new survey or adding

questions to existing surveys, staff should consider:

• whether the data is already available

• whether suitable proxy data is available

• whether this survey is the most appropriate point of collection

• whether the information could be collected elsewhere

• the true cost of collecting the data.

Fit for purpose

In addition to this, any survey question should be relevant and useful, rather than

simply ‘nice to know’. Be very clear on how you will use the information and make

sure you understand the difference between, on the one hand, questions that are

very specific, measuring tangible events, and, on the other hand more general

perception questions. Don’t ask questions you already know the answer to (such as

whether the repair worker turned up on time if this is captured at the operational

level) or that will not elicit a measurable answer due to differing perceptions (such

as ‘Was the repair completed to your satisfaction?’).

6.3 Survey aims

A fundamental part of any good survey is to define and be very clear about what

you are trying to measure and how the results will be used. Another consideration


for some surveys is that if appropriate they should be aligned with, and possibly

even measure, any corporate aims. The surveys should help to promote the vision

of the organisation by asking questions in relevant areas, where applicable. This is

even more important for transactional surveys, which may be measuring

performance against published service standards and performance targets.

At the outset of any new survey, or as part of a review of that survey, it is important

to set some parameters to help define what you hope to achieve. These can be

derived by asking questions such as:

• Why are we carrying out the survey?

• Are there clear aims and objectives?

• What are the critical pieces of information we want?

• How does this survey relate to our performance management and resident

involvement strategies?

• What is the most effective survey method to use?

• How will the information captured in the survey feed into our current information

systems?

• What reporting outputs are needed, and by what deadlines?

• Who needs to see the results, and how quickly?

• What resources are needed? Do we have them?

• Does the survey represent value for money?

Resident commitment

If you are confident that you fully understand the purpose of the survey it is much

more likely that you can explain this to a resident. This is a vital point for any survey

but is often overlooked. You do need to tell the resident what the survey is for, why

you are carrying it out and what you will do with the results, if you want them to

participate.

6.4 Managing the surveys

Some organisations have a central point from which all customer surveys are

coordinated. These surveys are often of high quality as staff are likely to be trained

in market research methods, and will have gained experience in terms of what

works and what doesn’t at the organisation. The surveys can also be scheduled to

ensure that residents are not overloaded with surveys within a given time period.

Having a coordinated approach means that the surveys are more likely to have a

consistent feel and look, and to match the tone and voice of the organisation. A

central coordination point that has the support of and input from any marketing or

communications staff is also likely to appear more professional. This does not

mean that other service areas should not get involved in carrying out surveys, but

that they should be centrally coordinated to ensure residents are not asked to take

part in too many surveys at any one time.

Sometimes surveys are carried out by individual sections or departments with little

or no consistency in approach or coordination in timing. While this might work in

practice for some organisations, there are many advantages to coordinating the

surveys.


6.5 Avoiding survey fatigue

The best way to avoid survey fatigue or overload is simply not to ask the same

residents to participate in too many surveys. Survey fatigue is a potential

problem in the sector and it can lead to dissatisfaction.

Although it is likely that larger organisations will have proportionately just as many

interactions with residents as smaller organisations, they have the advantage that

they do not need to survey as high a percentage of residents to get the same level

of statistical reliability. Residents of small landlords are therefore more likely to

suffer from survey overload.

Example: In order to achieve a sampling margin of error of ±5% at

the 95% confidence level, an organisation with 1,000 properties

would need 278 completed interviews (28% of the population),

compared with 379 completed interviews for a landlord with 30,000

properties (1% of the population).
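The figures in this example follow the standard sample-size formula at the 95 per cent confidence level, assuming worst-case variance (p = 0.5) and applying the finite population correction. A quick sketch in Python (the function name is ours; the arithmetic is the standard formula):

```python
def required_sample(population, margin=0.05, z=1.96, p=0.5):
    """Completed interviews needed for a given margin of error at the
    95% confidence level (z = 1.96), assuming worst-case variance p = 0.5
    and applying the finite population correction."""
    n0 = (z ** 2) * p * (1 - p) / margin ** 2  # infinite-population size (~384)
    n = n0 / (1 + (n0 - 1) / population)       # finite population correction
    return round(n)

print(required_sample(1_000))   # 278 interviews (~28% of residents)
print(required_sample(30_000))  # 379 interviews (~1% of residents)
```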

Varying the survey methods and explaining to residents the reason for and use

made of each survey will help to maintain residents’ interest and commitment.

Some organisations have set up systems so they can monitor how many surveys a

resident is asked to complete in a given period.

The more you communicate with your residents about why they are being

surveyed, how long it will take, the number of questions and what will happen to the

information, the more likely it is that they will give you their time.

Organisations should consider using survey protocols. Once a resident has taken

part in a survey they can be excluded from that survey or any other surveys for a

set period of time. For example, some organisations have an exclusion period in

place so as not to survey a resident who has made a complaint within a certain time

frame. Other organisations feel that it is important to be inclusive and give

everyone a chance to be included in any given sample.

Many organisations hold lists of residents who are excluded from surveys, such as

people on a ‘safe list’ or who have requested not to take part in any future surveys.

Some surveys are, by their very nature, more important than others. Priority should

be given to those considered most valuable to the organisation and where it is

difficult to get a good response. For example, if you are about to carry out a key

research project, you should be careful not to over-survey residents in the period

leading up to that survey, or to prioritise surveying residents about a complaint or

ASB ahead of a responsive repair survey if the latter is your key focus.

Without these systems in place it is possible that residents might be contacted

several times within a short period. The ability to measure survey inclusion and

participation at the individual level is dependent on the capability of internal

systems to capture and monitor who is surveyed.
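The exclusion protocols described above are straightforward to automate once internal systems record when each resident was last surveyed and whether they have opted out. A minimal sketch follows; the field names and the 90-day exclusion period are illustrative assumptions, not a recommendation from this guide:

```python
from datetime import date, timedelta

EXCLUSION_PERIOD = timedelta(days=90)  # assumed policy, set by the organisation

def eligible(resident, today):
    """A resident may be invited if they are not on the opt-out ('safe') list
    and have not been surveyed within the exclusion period."""
    if resident["opted_out"]:
        return False
    last = resident["last_surveyed"]
    return last is None or today - last > EXCLUSION_PERIOD

residents = [
    {"id": 1, "opted_out": False, "last_surveyed": date(2015, 3, 1)},   # too recent
    {"id": 2, "opted_out": True,  "last_surveyed": None},               # opted out
    {"id": 3, "opted_out": False, "last_surveyed": date(2014, 6, 15)},  # eligible
]
sample_frame = [r for r in residents if eligible(r, today=date(2015, 4, 1))]
print([r["id"] for r in sample_frame])  # [3]
```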

Some tips to avoid survey fatigue include:

• Don’t over-survey the same residents

• Keep the surveys short and easy to understand

• Make the process as simple as possible

• Try to get the timing right


• Ask relevant questions

• Compensate respondents for their time

• Follow up on comments requesting action

• Thank people for their time

6.6 Incentives

The majority of respondents in the review felt that the use of incentives did not

make a great difference to response rates and that it is difficult to gauge whether

they work. Some respondents stated that even a large incentive had failed to make

much difference to response rates and many no longer use an incentive. A couple

of respondents mentioned that changing the incentive did have an impact on the

demographics of the respondents. With no concrete evidence of the success of

using an incentive it is up to the individual organisation as to whether to offer one.

Some landlords choose not to do so for transactional surveys but would include

them as part of a larger customer satisfaction survey.

6.7 Branding and design of surveys

Some organisations have produced visually appealing Star surveys, which look

inviting and easy to complete. The clever use of colour, graphics, photos and

design styles has enhanced many surveys by making them appear more

engaging. Changing the appearance of surveys can help them to look fresh and

new. Some surveys are now branded with a unique name, which helps them stand

out from other communications from the organisation and can lead to improved

response rates. Any printed or online survey should look professional, well-

designed and error free, though colour printing and the use of high-quality paper

both add extra cost, so budget constraints must be borne in mind.

Organisations should aim for a consistent visual style to maintain a brand identity,

in line with house style guidelines if appropriate. Try to avoid having a range of

different visual styles for surveys that when placed side by side are hard to

recognise as being from the same organisation.

A number of organisations provide customer information to place the survey in

context. This is used to explain the importance of the customer surveys to the

organisation, how they provide key information for the business, the type of

surveys carried out, the maximum number of surveys a resident might expect in

any one year and when a resident should expect to be surveyed.

Version control

Organisations may find it helpful to use version control, by adopting a numbering

system that will assist in keeping a check on future survey amendments, not least

by keeping a record of which questions are added and dropped in drafting a

survey.

6.8 Budgets and resources

Organisations need to calculate the costs for all of their housing research activities

on an annual basis. The full costs include not just the fieldwork elements of any

survey but also the staff time spent designing, managing, analysing and reporting

the results. Calculating the true cost is an important part of assessing value for


money for each survey to the organisation and helps departments to assess the

value of the information provided. Whether it is perceived to provide value for

money will be linked to the value and insight placed on the information provided

and its usefulness in informing service changes.

Knowing the true costs for surveys also allows for a comparison with an external

market research company, which may be able to provide some or all of the services

at a lower cost. There are advantages and disadvantages to both external and

in-house approaches that should be carefully considered and reviewed periodically.

6.9 Internal or external research

Careful consideration should always be given when deciding whether to carry out

the survey process in-house or externally. The decision often depends on whether

the expertise and resources are available internally, and whether this approach is

cost effective.

It may be that some aspects are easily done in-house and others bought in as and

when necessary. It is possible to train staff in many of the skills needed to run a

successful programme. One important element that should always be addressed is

the need for rigorous checks on the sampling, response, weighting, data cleaning

and analysis to ensure the survey’s integrity. Without this it is impossible to be

confident in the data and it may be advisable to obtain external professional

validation and authentication of your results from time to time.

One of the other considerations is the need sometimes to distance frontline

service staff from the collection and analysis of survey data. It is not unknown for

customer satisfaction surveys to be conducted by the team delivering that service,

which may raise some impartiality issues when measuring satisfaction with those

services and acting on the findings.

A final recommendation is to check and ensure the quality and rigour of survey

techniques used by any contractors carrying out surveys on your behalf by

assessing their approach against guidance such as that presented here.

6.10 Strategic overview

Organisations may find it useful to carry out an audit of current surveys and to

produce a strategic programme for customer satisfaction measurement, not just

for transactional surveys but ideally for all customer satisfaction activities. These

are likely to include transactional surveys, one-off or tracking general customer

satisfaction (such as a Star survey), in-depth research, ad-hoc surveys and any

other activity that involves residents giving their views. In addition to the more

traditional methods of conducting satisfaction research, most organisations at

some point carry out ‘softer’ measurements such as feedback at events, online

website surveys and resident involvement.

A cohesive survey design programme should include factors such as survey

reliability and assess which are the most important surveys to the organisation at

each particular point in time. There should also be a clear remit of the aims for each

survey, and these should fit within an organisation’s vision and corporate plan.

Any review needs to classify surveys into key survey types – perception surveys,

transactional surveys, profiling surveys etc. Any programme should take into


account the need to capture residents’ change of circumstances to keep records

up to date – an essential part of understanding the population that a survey is trying

to represent.

A good starting point in any attempt to coordinate customer satisfaction surveys is

to list all the surveys currently being carried out – those that have been carried out

in the last two to three years and any that are planned in the next year.

Information can then be added to the list to include the total population for the

survey, the intended sample size, survey method, frequency and estimated

response rate. This will also allow some calculation of sampling errors for each

survey. Included in the list and identified for each survey should be a reference as

to who conducts the survey – whether this is an internal survey (and if so who is

responsible), an external market research company or a contractor.

Finally, the total number of surveys sent out each year can be calculated and

considered against the resident population. While not all surveys will be sent to

every resident, it is highly likely that some residents will be asked to take part in

more surveys than others.

A follow-up piece of work might be to calculate the cost of each survey, giving

careful consideration as to which costs are included.

7 Good practice in conducting customer satisfaction surveys

7.1 Question sets

Regardless of whether you are designing a new survey or reviewing an existing

transactional survey, there are a number of key considerations to ensure the survey

is well designed.

Whatever the survey method, you need to build up a set of questions to ask

residents. Advice on how to go about this has already been set out in section 4.1.

However, once the questions are drawn up it is important to consider whether they

are suitable for your intended survey method. A review of the questions may

narrow down the survey methods available to you or necessitate question

changes. For example, you may have too many questions for an SMS/text

survey, while a long, detailed question might work better in a postal survey where

residents can reflect for longer on how they answer the question. Questions about

the staff who delivered the service might best be posed by other staff or an

external company.


In general, if you are setting up a survey of any type without involving a research

agency or consultant, you would be well-advised to read the Market Research

Society’s Top 10 Tips for DIY Surveys (at

https://www.mrs.org.uk/standards/guidelines), which highlights some useful points

to consider.

7.2 Representative sample sizes and statistical reliability

To be able to draw general conclusions about the opinions of residents who use

the service, you will need to ensure that you have enough responses to your

transactional survey to give statistical validity to the results.

To do this, you need to use two particular measures:

• Confidence level: this describes how certain you can be that the results of your

survey reflect the views of the whole of your resident population, within a range of

possible error. For transactional surveys HouseMark asks organisations to work at

the 95 per cent confidence level – that is, there is a 95 per cent probability that your

sample reflects the total population who have used that service.

• Sampling error: this is the estimated variation that may arise because only a

sample of the population is surveyed. Usually referred to as the margin of error, it

is expressed as a plus or minus (+/-) percentage figure.
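Working the other way round, once the responses are in you can estimate the margin of error actually achieved. A sketch under the same assumptions as the confidence level described above (95 per cent confidence, worst-case variance p = 0.5, finite population correction); the function name is ours:

```python
import math

def margin_of_error(population, responses, z=1.96, p=0.5):
    """Achieved margin of error in +/- percentage points at the 95%
    confidence level, assuming worst-case variance p = 0.5."""
    se = math.sqrt(p * (1 - p) / responses)                       # standard error
    fpc = math.sqrt((population - responses) / (population - 1))  # finite population correction
    return round(z * se * fpc * 100, 1)

# 278 completed interviews from a population of 1,000 gives roughly +/-5%.
print(margin_of_error(1_000, 278))
```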

7.3 Survey methods

We set out below the advantages and disadvantages of each survey method –

some are self-completion methods and others involve interviewers. One of the

advantages of transactional surveys is that during the interaction or service use

organisations have the opportunity to flag the follow-up surveys and capture

residents’ survey method preferences.

Postal self-completion questionnaires

Advantages
• Relatively easy to set up
• Can be very cost effective
• Can survey large numbers
• Respondents may be more willing to express views as there is no interviewer
• Convenient for respondents, as they have more time to complete the survey and can choose when to do so

Disadvantages
• Likely to have a survey bias, which is exaggerated if the response is low
• Does not take into account any adult literacy issues
• Can suffer from low return rates
• Slow response - takes more time to get results
• Data has to be input
• Respondents cannot be probed and there is no opportunity to clarify answers
• No control over who fills out the questionnaire
• Respondent can choose not to answer some questions
• May be overlooked if sent out with other information


Telephone interviews

Advantages
• Can provide quick results
• Good for short and very focused interviews
• Greater control of response if using quotas
• Flexible design – ability to alter survey length with probes in key areas
• Ability to explain questions to respondents
• Real-time capture of data, which can be analysed immediately
• Can achieve a 100% response rate to each question

Disadvantages
• Depends on the availability of telephone numbers
• Cannot ask complex, detailed questions
• Interview length is limited by a potentially short attention span as residents lose interest, especially if calling a mobile number
• A cold-call approach may be unwelcome and seen as intrusive
• Residents may be called at an inconvenient time, though call-backs can always be scheduled to suit the respondent

Face-to-face interviews

Advantages
• Good response rates
• Suitable for longer interviews with more complex/probing questions
• Can capture verbal and non-verbal responses, as attitude, emotions and behaviour can be observed (more likely in qualitative in-depth interviews than in transactional surveys)
• Can achieve a 100% response rate to each question
• Sampling can ensure a representative response

Disadvantages
• The most expensive type of survey to undertake
• Time consuming
• Difficult to cover remote and rural locations (cluster sampling can resolve this but introduces sample bias)
• May produce a non-representative sample
• Possible interviewer bias
• A good interviewer requires considerable training
• Data generated can be harder to analyse
• May require manual data entry

Online/Mobile device surveys

Advantages
• Low cost
• Automated process with potential for high reach
• Real-time capture of data, which can be analysed immediately
• Convenience for respondents
• Design flexibility with question routing
• Respondents may be more willing to express views as there is no interviewer
• Can be visually appealing and include videos and graphics (but don’t get carried away!)
• Data captured in electronic form

Disadvantages
• Can suffer from low response rates from those who do not regularly communicate online
• With no interviewer, respondents cannot be probed
• Respondents may not be representative
• Lack of up-to-date and correct email addresses
• Spam filters and firewalls can bounce survey invitations back
• People can be reluctant to click on an email link to a survey form, given security concerns

SMS / Text surveys

Advantages
• Cost effective and quick results
• Good for very short surveys with short questions
• Survey can be sent very soon after the event
• Needs simple rating scales
• Automated
• Data captured in electronic form

Disadvantages
• Not all residents use SMS/text
• Only collects limited information using simple questions
• No interviewer, so respondents cannot be probed
• Low response rate
• Respondents may not be representative
• Responses can be skewed towards the extremes of very good and very poor service

Interactive Voice Response (IVR)

Advantages
• Cost effective, with quick results/immediate feedback
• Good for short surveys
• Voice recording of comments can capture what the customer said
• Automated
• Suitable for high-volume surveys
• Data captured in electronic form

Disadvantages
• Limited number of questions
• High non-response
• Works best with simple responses, but not so good for longer lists or more complex questions
• Responses can be skewed towards the extremes of very good and very poor service

Personal Digital Assistant (PDA)

Advantages
• Survey tools are contained in a small portable device
• Data entered during survey
• Real-time capture of data, which can be analysed immediately

Disadvantages
• High initial investment costs
• Potential security risk of PDAs being stolen
• If contractor-run, some concerns over who completed the survey – the respondent or the contractor?
• Possibility of the contractor’s presence influencing responses

7.4 Conducting the survey

Detailed advice on how to carry out postal self-completion surveys and a brief

summary of how to run telephone surveys or face-to-face surveys can be found in

A guide to running Star (section 5).

7.4.1 Running a postal self-completion survey

For a postal self-completion survey, we would advise you to keep the following key

principles in mind when developing your questionnaire:

• Keep the questionnaire as short as possible – we would recommend that your

postal questionnaire is no longer than two pages of A4

• Lay out the questionnaire so that it looks professional


• Give clear instructions on how to complete each question, with clear routing for

questions that may only apply to particular respondents

• Put the questions in a logical sequence, starting where possible with a question that

is easy to complete, so respondents are not put off at the beginning

• Include the closing date of the survey and how to return the survey

Make sure each questionnaire has a unique identifier so it can be linked to the

respondent’s property details. Issues regarding anonymity and confidentiality

should be addressed in any covering letter and at the start of the questionnaire.

Producing the covering letter or survey instructions

If you are using a covering letter or a set of instructions at the start of the

questionnaire, do make sure that it informs residents about the survey, motivates

them to complete and return the questionnaire, and gives them a contact name and

number to discuss concerns or queries about the survey.

You should consider the following when drafting the covering letter:

• Keep it short

• State the survey objectives

• Emphasise confidentiality – this will help reduce any fears residents may have about

their responses being individually identified.

• Give a deadline for the questionnaire to be returned

• Give details of alternative ways to complete the questionnaire (if available)

• Include an opt-out clause – the Data Protection Registrar has suggested that good

practice includes giving tenants and residents an opportunity not to receive follow-

up reminders if they do not want to take part in the survey or be included in a

booster survey using a different methodology

• Address any accessibility issues – use a translation statement explaining how to

obtain translated copies of the questionnaire, or how to obtain large-print copies.

Monitoring the returns

Completed questionnaires need to be logged as they are returned. A booking-in

system can be used to mail-merge follow-up questionnaires to non-respondents,

excluding those who have already responded, and to determine the need for a

booster survey, perhaps using a different survey method. However, not all

transactional surveys use reminders to help increase the survey response rate and

this decision needs careful consideration.
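A booking-in system of this kind needs little more than a record of which unique identifiers were issued and which have come back. A minimal sketch (the identifier format is an illustrative assumption):

```python
issued = {"Q001", "Q002", "Q003", "Q004", "Q005"}  # unique IDs mailed out
returned = {"Q002", "Q005"}                        # logged as questionnaires come back

reminders = sorted(issued - returned)              # mail-merge list for the follow-up
response_rate = len(returned) / len(issued) * 100  # helps decide on a booster survey

print(reminders)       # ['Q001', 'Q003', 'Q004']
print(response_rate)   # 40.0
```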

If the response to a mail-out of the survey is low, you may need to consider using a

different methodology to boost the response rate or increasing the sample size if

that is an option. Determining whether you have a representative sample is covered

in the next section.

7.4.2 Running telephone interviews

The use of telephone interviews for transactional surveys has always been popular

and more organisations are now conducting their own telephone surveys. Other

organisations outsource the work for various reasons. These include:

• The survey methods can be complex and should be conducted in line with the

Market Research Society guidelines. Among other requirements, these specify that

interviewers should be properly trained and supervised, ideally to Interviewer

Quality Control Scheme (IQCS) standards.

• It may be more cost-effective to use external agencies that regularly conduct such

work rather than in-house resources that may be diverted from other tasks.

• There is a danger of response bias if respondents do not feel confident about

telling their landlord’s staff their views. An external agency should be seen as

offering an independent ear.

• Using paper questionnaires as the basis for asking the questions is acceptable, but

there are many benefits to be derived from using custom-designed survey

software. Such software can also be linked to a database of respondents’ details,

including whether or not each respondent has been called and when, when to call

back, and any remarks on the call outcome.

• Managing the progress of fieldwork against quota targets can be difficult when

managing hundreds of contacts and call outcomes, and again may be better

handled by a company using specialist software for this purpose.

• Interviewers must be prepared to be refused and to be told in no uncertain terms

that they are wasting respondents’ time with their cold call. Experienced telephone

interviewers are used to this and are familiar with how to deal with calls as calmly as

possible, without taking offence when respondents are annoyed at being called.

• Interviewers should be clearly briefed on the survey content and the respondent

group. Questions must be put to respondents in a balanced and non-judgemental

way.

You can apply many of the same key principles to a telephone script as to a self-

completion questionnaire, although there are some particular principles that should

be applied to telephone interviews:

• Draft a simple but clear introductory statement, setting out who is calling and on

whose behalf, and the purpose of the survey. Also, indicate in this statement how

long the interview might take and reassure the respondent that their responses will

be kept confidential.

• Try to keep the responses short and reduce questions that involve reading out a

long list of attributes. It will be hard for the respondent to keep track of all these,

and such questions can sound very repetitive.

• Similarly, don’t expect respondents to be able to listen to a long list of answer

options or statements and then choose one or more as their preferred option, or

their 1st/2nd/3rd preferences. Such questions may work when the respondent can

see the questionnaire, but are not practical over the telephone.

• Be clear on when interviewers should read out the answer options and when they

should not (e.g. when you’re looking for spontaneous rather than prompted

responses).

• Where a respondent is unwilling or unable to answer a question, do not offer them a

‘don’t know’ or similar response, but nevertheless include this default category in

the questionnaire so that the interviewer can enter this null response and move on

to the next question.

• Interviewers must be trained to clarify responses, so that someone who says that

they are ‘satisfied’ is gently prompted to confirm whether they mean ‘very’ or ‘fairly’.

• As telephone surveys are an interruption of someone’s time it is recommended that

they take no longer than 15 minutes. This may mean changing a single open-ended

question that requires a lengthy answer into a few closed questions.

• Do not ask questions where the answer is already known.

• If a respondent is too busy to undertake the survey when you call do not persuade

them to continue by saying ‘It won’t take long.’ Try to make an appointment to call at

a more convenient time – and make sure this appointment is kept.

• Try to vary the timings of calls to include evenings and weekends. To some extent

times of calling are set according to the needs of a particular project. However, calls

should ideally not be made after 8pm Monday to Friday, after 2pm on a Saturday or

at all on a Sunday.

• Recording calls can be useful not only in transcribing interviews and accurately

representing the words spoken, but also as a way of monitoring the process –

providing security if complaints are made. However, respondents must be advised

that this is taking place.

• Levels of literacy, command of English, verbal reasoning and cultural understanding

may vary considerably within your resident population, requiring the drawing up of

appropriate strategies and protocols to deal with such issues.

7.4.3 Running face-to-face interviews

Few transactional surveys are likely to be conducted by face-to-face interviews.

Those that do take place are likely to be part of other processes – such as a six-

week tenancy visit at the start of a letting.

The same key principles apply to face-to-face as to telephone surveys. However,

you should be aware that in order to carry out high quality research, interviewers

should be properly trained. There is also a danger of response bias if respondents

are asked about some of the services delivered by the member of staff carrying out

the interview, or even by another interviewer who is clearly a member of that staff

team.

Using an external agency’s face-to-face interviewers may overcome these

problems. These interviewers, like external telephone interviewers, will be trained in

survey research methods. They will be used to knockbacks and will have

developed their introduction style based on the key elements of the standard

survey introduction to encourage people to take part.

It should be borne in mind that it is much more labour intensive to arrange face-to-

face interview appointments. Interviewers may be faced with cancellations,

postponements, and return visits and unless you have a very clustered set of

respondents, face-to-face fieldwork for a transactional survey would need to be

spread over a wide area, involving a significant amount of travelling between

addresses. It is unusual to complete more than four half-hour interviews in a day.

This is unlikely to make it a cost-efficient option in comparison with postal or

telephone surveys. It should also be remembered that when interview numbers are

small, the problem of representativeness becomes even more acute.

• As with a postal survey, if face-to-face interviewing is done using paper

questionnaires, make sure that each one has a unique identifier so it can be linked

to the respondent’s property details. Issues regarding anonymity and

confidentiality should be addressed in any covering letter sent in advance of

fieldwork or handed to the respondent at the start of the questionnaire.

• Face-to-face interviewers can make good use of show cards setting out the list of

responses to specific questions, which can help respondents to select their

answers as they would in a postal survey.


• Some external agencies will offer fieldwork via PC or tablet-based interviewing

software packages that can provide very rapid updates of fieldwork progress and

real-time reporting of interim results. This is difficult and potentially time-

consuming to set up from scratch if you do not already use some kind of data

capture software on mobile devices.

• Potential interviewees should be encouraged to have a family member, support

worker or carer with them during the interview if they would feel more comfortable.

• It is important to be sensitive to how the interview may be affecting the interviewee.

It may help to re-phrase some questions or change the order. However, it is critical

that the meaning or focus of the questions is not altered.

• The more the interviewer talks (for example, offering too broad a range of

options or ideas), the less the respondent does, and this will affect the quality of the data and possibly bias the

findings. Careful design of questionnaires for face-to-face interviewing should

avoid this problem.

As highlighted in the telephone interview guidelines (section 7.4.2), pre-planning

and briefings need to take into account how to accommodate differing levels of

literacy, common languages and cultural understanding within the sample

population.

7.4.4 Running online surveys

More and more organisations are now using online surveys, most often to

supplement another survey method, such as a postal or telephone survey.

Typically, organisations use one of a range of popular online survey software

providers. The surveys are relatively easy to set up and administer.

The easiest way to invite residents to take part in an online transactional survey is

to send an email, though this will need to be linked to their contact details and any

relevant information fields in a sample database. This allows what is effectively an

email-merge, and most packages that enable this to be done will also track replies

in such a way that you can send reminders only to those who have not taken part.

You will still need to consider whether this can be done in-house by team members

with sufficient ability in handling e-communications and online questionnaires, or

whether it would best be done by an external agency.

Given that you will have had recent contact with the resident, their willingness to

take part in an online survey and their current email address should already have been captured.

Successful online surveys can have response rates in excess of 30%, and they are

a good way to capture the views of younger residents. Online surveys work best

when organisations regularly use electronic communication and have a well-

maintained and accurate email database.

As with any self-completion survey, the fundamentals, such as a well-designed

survey that takes into account best practices, will ensure a better response rate.

• Online surveys can be run by making the script openly available on a website, but

bear in mind that anyone could access and complete this, making your sample of

respondents very hard to verify or manage. It is normally better to link the

questionnaire script directly to a database that allows only those invited to access

and complete a questionnaire. This in turn limits the sample to only those for whom

you have an email address.


• It is possible to host a copy of the questionnaire on your website, but make it

accessible only by using one of a list of secret codes, each code corresponding to

an address in your sample list. These codes can be issued, with the weblink address

of the questionnaire, in a letter or postcard to your sample list, allowing only the

recipient to be able to open and participate in the survey. This overcomes the

problem of not having an email address for a given set of residents.
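One way to generate such one-time access codes is sketched below in Python, using only the standard library; the address IDs and the eight-character code length are illustrative assumptions, not taken from the guide.

```python
# Sketch: one unguessable access code per sampled address, so that only
# invited residents can open the online questionnaire.
# The address IDs and the 8-character length are illustrative assumptions.
import secrets
import string

def generate_codes(address_ids, length=8):
    """Map each address ID to a unique, hard-to-guess code."""
    alphabet = string.ascii_uppercase + string.digits
    codes = {}
    used = set()
    for addr in address_ids:
        while True:
            code = "".join(secrets.choice(alphabet) for _ in range(length))
            if code not in used:  # re-draw in the unlikely event of a clash
                used.add(code)
                codes[addr] = code
                break
    return codes

codes = generate_codes(["A001", "A002", "A003"])
for addr, code in codes.items():
    print(addr, code)
```

Each code would be printed in the letter or postcard alongside the survey weblink, and checked against the sample list when a respondent opens the questionnaire.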

• The email invitation needs to be written in the same way as a postal invitation (see

above) – i.e. to give a clear explanation and to encourage a response – but needs to

be brief, and look authentic. Email inboxes are subject to all manner of junk email

and spam, and the email invitation needs to convey rapidly that the survey is a

genuine exercise meriting the recipients’ attention.

• The script itself can share most of the approaches that would be used in a postal

survey, but has the benefit, like most telephone surveys, that automatic routing can

be used to ensure that respondents do not mistakenly skip between questions.

• An important consideration is that around half of online survey completions are

made using smartphones/tablets, which render the screen differently to a typical

desktop or laptop computer. You will need to check how the software that you are

using handles this, and you may find that some question types (e.g. several

statements requiring a five-point scale answer) need to be split across different

screens, one sub-question per screen, to avoid layout problems.

• Online questionnaires have the benefit of real-time data capture, and very rapid

distribution of reminders, so that you can view the up-to-date results throughout

the course of fieldwork.

7.4.5 Running SMS/text surveys

An SMS/text survey can be considered as a simpler version of an online survey.

Clearly only people for whom you have mobile phone numbers can participate in

the survey.

• SMS surveys are unlikely to generate a particularly high response given the

relatively unengaging nature of a text message, and the frequency of spam

messaging to people’s mobile phones.

• The questionnaire is limited to simple questions requiring simple answers, e.g.

Yes/No, or a score on a five-point scale. You will need to consider which questions

(probably no more than five) are most important and would form the core of the

SMS survey.

• You should also ask a consent question, to confirm whether the respondent is

willing to be identified with their responses.

• Responses may be prone to being mistyped, but most SMS users should be able to

enter their responses easily enough.

• A mobile phone supplier already used by your organisation may well be able to offer

good rates for the volume of outgoing and incoming SMS messages required.

Some may be willing to handle the SMS campaign for a relatively small fee.

Alternatively, some external research or marketing agencies will be well-versed in

running SMS campaigns, and will be able to offer their services.

• Feedback will be simple, but you should be able to monitor responses received in

real time, depending on how the SMS campaign is distributed. Always ensure that

you can tie the responses back to the mobile phone number that gave them.

7.4.6 Running PDA surveys

PDAs were first used in housing for stock condition surveys, followed by their

operational use in repairs and maintenance services, where the repair worker uses

the device to keep track of the progress of jobs throughout the day. The addition of

a customer satisfaction survey on the devices for residents to complete is also


now widely used in the sector by organisations and their contractors. Outside of

responsive repairs any move to using PDA devices is likely to require a full review

and consultation with other parts of the organisation to ensure a fit with current ICT

systems.

Typically, the devices send the data to the central information system and allow

quick and easy analysis of the results. The process works best when staff are in

physical contact with the resident as part of their normal day-to-day role – such as

a responsive repair or a new tenancy visit. With mobile devices and much improved

software there is more choice in the marketplace and some organisations have

purchased PDAs for survey purposes alone.

As these are, in effect, mini face-to-face surveys, many of the same issues outlined

in that section will apply – especially in respect of response bias. In commissioning

this type of research, you should bear in mind that contractors are unlikely to be

members of the Market Research Society or follow MRS guidelines, and so the

survey will not be conducted/mediated by trained interviewers. Consider also that

the respondent can easily be influenced by the contractor’s presence, whether

indirectly or directly.

7.4.7 Running IVR surveys

Interactive Voice Response (IVR) surveys are most commonly used following a

call to a customer contact centre. Some organisations offer residents the

opportunity to opt in to an automated survey at the end of a call. The surveys

monitor performance and give instant feedback. Note that this general type of

immediate post-call feedback can also be conducted as a fully online survey or an

SMS survey depending on how your customer has made their enquiry or accessed

a service.

Careful consideration needs to be given to the overall value of the information to

the organisation as this approach will:

• necessitate the purchase of good quality software that fits the needs of the

organisation and its resident population (age range and differing speech patterns)

• ask your customer questions through recorded rather than live speech, which

some customers may object to

• need to be a very short survey in order not to aggravate the respondent at the end

of the main part of their call.

As there is no human interaction in this part of the call, it may be advisable to add a

recontact / consent question. If the client has a grievance, they can confirm

whether they are willing to be called back as part of your customer service

recovery process and whether they are happy for that request to be linked to the

other responses they have given earlier in the call.

As with PDA devices, introducing this technology needs to be part of a full review

and consultation with other parts of the organisation to ensure a fit with current ICT

and telephone systems.


7.5 Data input and checking

If the response was not captured electronically the data needs to be input. Data

inputting involves converting the written responses on each of the questionnaires

into an electronic format – a spreadsheet or database – so that the information can

be brought together in a coherent manner for checking, analysis and interpretation.

Many survey methods capture the data in an electronic format as the interview is

taking place, so no additional data entry will be required. If a paper questionnaire is

completed by the interviewer the information will still need to be input into an

electronic form.

If you do not have the resources in-house to input and check the data, you can

contract it out, but make sure your agreement sets high standards for data entry,

verification of the accuracy of the data and data cleaning. If you enter the data in-

house you should ensure that you have in place a high-quality process to ensure its

accurate capture.

It is critical to keep the processes you adopt during these stages as simple and

cost-effective as possible. They should not be time-consuming, as with

transactional surveys you will be repeating the exercises regularly.

Coding the data

Responses to each question will typically need to be ‘coded’ in order for them to be

analysed and interpreted by computer packages. A code is simply a number that

corresponds to a particular answer to a question. Whatever computer package you

use, you should set up each question with its own column or field. This column or

field should have textual labels that describe all the possible coded responses to

that particular question. For questions with numerical answers, the code should

represent the number or group of numbers.

Coding frame

If you want to summarise responses to an open-ended question as part of the

overall survey results, you will need to create a ‘coding frame’. This is best done

after all other data is input. Some specialist software packages will identify

commonly occurring words to help with this.

To create your own coding frames:

• Separate out all the questionnaires that have responses to an open-ended question

• Read the first response and summarise its essence on a list

• If it deals with multiple unconnected matters, make a separate list entry for each

item

• Go on to the next response: if it makes a similar or connected comment to a

previous one, record it in a similar fashion or put a tally by the comment

• If it raises a different matter, make a new list entry

• You will end up with a list showing groups of common answers

• Give these answers numerical codes, with the most frequently occurring being

numbered 1, the next 2 etc.

• You may consider grouping the items in the list into common themes

• You will probably find you have several items that have been raised by only one or

two people. Batch these together as an ‘Other’ category.
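Once such a frame exists, the tallying step above can be sketched in Python with a Counter; the comments, theme codes and keyword lists below are invented for illustration, not a real coding frame.

```python
# Sketch: tally open-ended comments against a hand-built coding frame.
# The themes and keywords are illustrative, not a real frame.
from collections import Counter

comments = [
    "The repair was quick and the operative was polite",
    "Had to chase twice before anyone called back",
    "Very quick service",
    "Nobody called me back",
]

# code -> keywords that signal the theme
frame = {
    1: ("quick", "fast"),            # speed of service (most frequent theme)
    2: ("chase", "called", "back"),  # communication problems
}

tallies = Counter()
for comment in comments:
    text = comment.lower()
    for code, keywords in frame.items():
        if any(word in text for word in keywords):
            tallies[code] += 1

print(tallies)
```

In practice the frame comes from reading the responses, as described above; keyword matching only automates the tally once the frame is agreed.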


After you have constructed your coding frame, go back and write in the relevant

code or codes by the open comment question on the questionnaires. Once you

have a coding frame that captures the most popular responses, it is possible to use

this list in future interviews, as the interviewer can select from the pre-coded responses during the survey.

Computer and manual checking and cleaning the data

You will find errors, incomplete answers and inconsistencies in some of your questionnaires. As far as possible, you should correct these before you analyse your data. Types of error include missing values, range errors, consistency errors and mistakes in routing. If you experience a high number of errors you should revisit your questionnaire design.

Inputting the data

Begin data inputting as soon as you have some questionnaires – processing them in batches reduces the size of the task to manageable proportions and the possibility of data inputting delaying the project. Produce a test data set as soon as you can, and check the contents against your source questionnaire to ensure that data is being exported as required.

If you have contracted out data entry, you will want to know what quality control mechanisms are in place to ensure data has been accurately transferred from questionnaire to computer. This should be apparent from the contractor’s tender. One common method is ‘double keying’ – that is, each questionnaire is input twice. If you are data inputting in-house you are unlikely to have the resources to do this. Instead, the project manager should periodically take a random sample of questionnaires and check these against the data input. Once again, requesting a test dataset is a good idea, so that you can test that all data has been recorded as intended at an early stage of fieldwork.

Once a cut-off point for inputting completed questionnaires has been reached, it is time to carry out the computer ‘cleaning’ of the data, to check for inconsistencies, routing errors and other problems. If necessary, look back at individual questionnaires that remain anomalous. After this, you are ready to begin the checking and data analysis.
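Assuming responses sit in a spreadsheet-style table, range and routing checks of this kind can be sketched with pandas; the column names and the 1–5 valid range are illustrative assumptions.

```python
# Sketch: flag range errors and routing errors before analysis.
# Column names and the 1-5 valid code range are assumed for illustration.
import pandas as pd

df = pd.DataFrame({
    "overall_sat": [1, 2, 7, 4, 5],      # valid codes are 1-5, so 7 is an error
    "used_service": ["Y", "N", "Y", "Y", "N"],
    "service_sat": [3, 2, 4, 5, None],   # should be blank when used_service == "N"
})

range_errors = df[~df["overall_sat"].between(1, 5)]
routing_errors = df[(df["used_service"] == "N") & df["service_sat"].notna()]

print(f"{len(range_errors)} range error(s), {len(routing_errors)} routing error(s)")
```

Flagged rows can then be checked against the original questionnaires before analysis begins.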

Checking your results

Data analysis is the process that converts the processed data into information

about your tenants and residents in the form of tables and charts. The first step in

data analysis is to produce a set of basic frequency tables showing how many

respondents gave each answer to each question.

7.6 Survey retention

The Data Protection Act does not set out any specific minimum or maximum

periods for retaining personal data. Instead, it says that: ‘Personal data processed

for any purpose or purposes shall not be kept for longer than is necessary for that

purpose or those purposes.’


This is the fifth data protection principle. In practice, it means that you will need to:

• review the length of time you keep personal data

• consider the purpose or purposes you hold the information for in deciding whether

(and for how long) to retain it

• securely delete information that is no longer needed for this purpose or these

purposes

• update, archive or securely delete information if it goes out of date.

Postal surveys should be securely disposed of and a receipt confirming this

obtained. Three to six months is normally a perfectly adequate length of time to

store questionnaires before shredding. Software programmes that electronically

‘shred’ digital data files are advisable in preference to simply ‘deleting’ such data.


8. Good practice in analysing satisfaction surveys

8.1 Analysis and interpretation

With repetitive transactional surveys it is likely that you will be carrying out the

same data analysis on a regular basis. The more automated you can make this the

better.

Frequency tables

Frequency tables are an excellent method of illustrating the results for each

question in the survey. Because they are easy and quick to produce, and contain

the basic information needed to assess initial results, they are usually produced for

all questions in the survey. They should contain both simple counts and

percentages.
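Such a table takes only a few lines with pandas; the responses below are invented for illustration.

```python
# Sketch: frequency table with both counts and percentages for one question.
# The responses are made-up examples on the standard five-point scale.
import pandas as pd

responses = pd.Series([
    "Very satisfied", "Fairly satisfied", "Fairly satisfied", "Neither",
    "Very dissatisfied", "Very satisfied", "Fairly satisfied",
])

table = pd.DataFrame({
    "Count": responses.value_counts(),
    "%": responses.value_counts(normalize=True).mul(100).round(1),
})
print(table)
```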

Cross-tabulations

Cross-tabulations (or ‘cross-tabs’) are tables in which the responses are analysed

in relation to other information about the respondent or property. For example, you

may wish to analyse satisfaction with the responsive repairs service by job type or

contractor.

Cross-tabs can also be used to illustrate relationships between the answers to

different questions. Some questions can be used with the other resident- and

property-based criteria to set up a standard set of ‘cross-breaks’ against which the

responses to particular questions can be cross-tabulated.

When choosing cross-tabulations you need enough responses to be able to draw

conclusions. The sample should have been designed to provide adequate numbers

of all the main sub-groups of interest. You should be cautious about using multiple

cross-tabs that result in very small numbers being analysed – for example, ‘nested’

tabulations of satisfaction by both ethnic group and age. The Market Research

Society recommends that any cells in a table with fewer than 20 responses should

not be reported in case individuals within that particular cell could be identified.
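A cross-tab of this kind, with small cells suppressed in line with the MRS 20-response recommendation, can be sketched in pandas; the contractor names and response counts are synthetic.

```python
# Sketch: satisfaction by contractor, suppressing cells under 20 responses.
# The contractors and response counts are synthetic examples.
import pandas as pd

df = pd.DataFrame({
    "contractor": ["A"] * 100 + ["B"] * 100,
    "satisfied": ["Yes"] * 85 + ["No"] * 15 + ["Yes"] * 70 + ["No"] * 30,
})

tab = pd.crosstab(df["contractor"], df["satisfied"])
suppressed = tab.astype(object).mask(tab < 20, "<20")  # hide small cells
print(suppressed)
```

The same pattern extends to any cross-break, such as job type or property age band.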

Calculating sampling error

A good sampling procedure should mean that the number of responses is large

enough for statistically reliable results, assuming they are broadly representative of

the service users. However, you should also estimate the degree of accuracy of the

results (the sampling error, expressed as a range with a confidence level – see

section 5). This may be particularly important for sub-groups.
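Assuming the usual formula for the margin of error of a proportion, with a finite population correction, the calculation can be sketched as follows; the 85%, 400-response and 5,000-user figures are illustrative.

```python
# Sketch: sampling error for a satisfaction percentage at 95% confidence,
# with a finite population correction. All figures are illustrative.
import math

def sampling_error(p, n, population, z=1.96):
    """Margin of error, in percentage points, for proportion p from n responses."""
    se = math.sqrt(p * (1 - p) / n)                       # standard error
    fpc = math.sqrt((population - n) / (population - 1))  # finite population correction
    return z * se * fpc * 100

# e.g. 85% satisfied from 400 responses among 5,000 service users
margin = sampling_error(0.85, 400, 5000)
print(f"±{margin:.1f} percentage points")
```

Running the same function per sub-group quickly shows which groups have margins too wide for confident reporting.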

Interpreting the data

Conclusions drawn from the data must be justified and backed up by the survey

results. To keep such interpretations valid, you should:

• Be aware of the dangers of focusing too much on relatively small changes in

percentages, especially those that are within the margins of statistical error (i.e. as

likely to be due to random fluctuations in the figures as to any real differential

effect)

• Be aware of the number of responses to any particular question. Percentages on

their own can be misleading, particularly when looking at subgroups. If you feel you


cannot be confident of an interpretation, but still want to flag it up, give it a ‘health

warning’

• Be aware of the accuracy of the results. When you have calculated the sampling

errors for each population group you may find that the results for some groups

(particularly where there is a small population) have sampling errors of ±7%

or higher, and you will need to ensure that the reader is aware that such results

need to be treated with some caution

• Be aware that differences between sub-groups need to be quite marked if they are

to be statistically significant

• Look for lower levels of satisfaction and higher levels of dissatisfaction – it may

make gloomier reading, but it will help identify and improve problem areas

• Record only the results that the analysis demonstrates. You should avoid being

influenced by your own assumptions about the reasons for responses or personal

opinions

• Consider any major or significant changes in service delivery that may have

affected the responses. However, it is important not to draw conclusions based on

this evidence alone.

8.2 Rolling or moving averages and year-to-date (YTD)

Rolling averages are a simple way to remove the ‘noise’ or randomness from time

series data. Often people find it useful to plot the underlying data series on the

same graph.

They are called ‘moving’ averages because the figure is recalculated as new data

becomes available, usually by dropping the earliest value and adding the most

recent. For example, the moving average of satisfaction with repairs over a six-

month period may be calculated by taking the average of ratings from January to

June, then the average of ratings from February to July, then March to August, and

so on. Rolling averages therefore reduce the effect of temporary variations in data,

show trends more clearly and highlight any values above or below the trend.
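The six-month calculation described above can be sketched with pandas’ rolling mean; the monthly scores below are made up.

```python
# Sketch: six-month rolling average of monthly satisfaction scores.
# The scores are invented; each value averages the six months ending there.
import pandas as pd

monthly = pd.Series(
    [82, 79, 85, 88, 80, 84, 90, 77, 83, 86, 81, 88],
    index=pd.period_range("2017-01", periods=12, freq="M"),
)

rolling = monthly.rolling(window=6).mean()  # Jan-Jun, then Feb-Jul, ...
print(rolling.dropna().round(1))
```

Plotting `monthly` and `rolling` on the same axes shows the underlying series against the smoothed trend.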

There are disadvantages to using rolling averages in that there is a time lag in

reflecting significant changes.

If you are looking at a series of rolling average figures, bear in mind that each

element in the time series shares some responses with its neighbours (i.e. a

respondent from any given month could appear in three adjacent rolling quarterly

groupings). This means that the different quarters’ data are not independent of one

another and so standard statistical comparisons cannot be applied.

8.3 Trend analysis and seasonal trends

It is important to check some transactional surveys for seasonal trends. For

example, you may find a rise in anti-social behaviour cases in the summer or a peak

in repairs following a storm.

The recommended approach to identify such effects is to settle on the time

periods of greatest interest/relevance, whether weeks, months or quarters, and

study the data from each independently rather than using any rolling average. You

should choose time periods for which you have a reasonable number of responses,

at least 100 if possible. You can conduct statistical tests to assess whether the

differences between time periods are significant.
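As a sketch of such a test, the following computes a Pearson chi-square statistic for satisfaction counts in two quarters using only the standard library; the counts are invented, and scipy.stats.chi2_contingency offers the same calculation ready-made.

```python
# Sketch: Pearson chi-square test (1 degree of freedom) comparing the
# satisfied/dissatisfied split in two quarters. Counts are invented.
import math

q1 = [160, 40]  # Q1: satisfied, dissatisfied
q2 = [140, 60]  # Q2: satisfied, dissatisfied

def chi2_2x2(row_a, row_b):
    """Chi-square statistic and p-value for a 2x2 contingency table."""
    table = [row_a, row_b]
    row_totals = [sum(r) for r in table]
    col_totals = [row_a[0] + row_b[0], row_a[1] + row_b[1]]
    total = sum(row_totals)
    stat = 0.0
    for i in range(2):
        for j in range(2):
            expected = row_totals[i] * col_totals[j] / total
            stat += (table[i][j] - expected) ** 2 / expected
    p = math.erfc(math.sqrt(stat / 2))  # chi-square (1 d.f.) survival function
    return stat, p

stat, p = chi2_2x2(q1, q2)
print(f"chi2 = {stat:.2f}, p = {p:.4f}")
```

A p-value below 0.05 would lead you to reject the null hypothesis of no difference between the quarters.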


Your null hypothesis would be that there is no trend or no impact in the wake of an

event such as a storm. Checking the trend across successive waves for any

statistically significant differences will reveal whether any changes are genuine

effects. If these clearly coincide with the event that you are considering, you can

consider the null hypothesis to have been rejected, and the link of results to events

to be real. In practice, you may need to call on independent statistical advice from a

research agency or specialist to verify this.

8.4 Reporting and action on the results

The results from transactional surveys often provide strategic management

information used to inform organisations of business performance. The results

from some surveys may be used at weekly or monthly performance meetings, while

others are reported less frequently, typically quarterly or annually.

The most successful organisations will maximise the use of the data across the

relevant business areas to inform decisions and will have in place the tools and

software to facilitate the smooth flow of information. In an ideal world, the process

will appear seamless; in reality organisations’ capabilities vary widely.

There will be some organisations where staff conduct manual counting and rely on

time-intensive systems that are prone to human error. At the other end of the

spectrum, some organisations have invested in ICT solutions that allow full

integration of survey results with housing management systems to produce results

online, in real time, with built-in functionality to allow mapping and interrogation.

Some organisations are beginning to explore how the information can be used to

predict future scenarios, but it is still early days in this field.


9. HouseMark and seeking help and advice

9.1 Sharing good practice

There is a strong demand within the sector for advice and greater sharing of good

practice in measuring customer satisfaction and understanding customer

experience. HouseMark is keen to facilitate this. This guidance report is just one

example of this commitment, and HouseMark’s new community site in this area will

act as one of the focal points to assist in the sharing of good practice.

9.2 HouseMark validation and statistical help

It is important that sampling and checking for representative responses is carried

out for your transactional surveys, as this will need to be validated when members

submit their core results for benchmarking to HouseMark.

Organisations that carry out the surveys in-house and do not have any internal

statistical expertise may need to invest in some sampling advice from a market

research company or a statistician to ensure their sample is representative. There

are a number of organisations providing this type of service and support to the

sector, including HouseMark.

For a discussion on this with HouseMark, email [email protected] or

telephone 024 7647 2703.


10. Appendix 1 – Transactional surveys in the sector

10.1 Guidance on carrying out responsive repairs surveys

The information below came from organisations that responded to our review.

Knowing how other organisations carry out responsive repairs surveys will help to

guide organisations as to how to conduct their own surveys.

Volume of repairs: Organisations that took part in the review carried out between

600 and over 200,000 responsive repairs each year.

Whether to conduct the survey: The vast majority of organisations carry out responsive repairs surveys (96% in the latest review).

Census or sample: Around three-quarters of organisations carry out a sample survey. Those

that carried out a census tended to be smaller organisations under

10,000 units or used a survey method that facilitated a census, such as a

postal survey or PDA.

Sample size: The most popular sample size was around 5% to 10% of repairs, with

some organisations surveying up to 20%.

Timing and frequency of the survey:

Frequency: Around a third of organisations carry out weekly or monthly

surveys. Only a small number carry out surveys as infrequently as every

three months.

Timing: A fifth of organisations carry out the survey on the same day,

around a third complete them within one week, and a sixth within two weeks.

Just under a fifth of organisations carry out the survey within a

month of the completed repair.

The remaining organisations do not have a specific timing or frequency

for conducting the survey as the process dictated when the survey was

completed – such as sending out a questionnaire with the appointment

letter, leaving a survey with the resident when the repair was completed,

or sending it out when the repairs job is logged as closed.

Survey method: Telephone is the most popular survey method, currently used by over

half of the organisations in the review. A fifth of organisations carry out

postal surveys, with the remaining organisations using text/SMS, online

surveys, PDAs and other methods.

Response rates: Response rates vary depending on the methodology used. Telephone

surveys often have quotas to achieve rather than a target response rate.

Quotas are typically set at around 3% to 10% of repair works, although

higher quotas of 20% to 30% are also used. Postal survey response rates

typically vary between 10% and 30%. The use of text/SMS surveys is

relatively new – one organisation reported a 19% response, while those

using PDAs reported response rates of between 23% and 30%.

Number of questions:

The average number of questions asked by organisations participating in

the review was 13, with two-thirds asking between eight and 15

questions. The shortest survey was just four questions, while 31 were

asked in the longest survey.

Questions being asked:

Just over half of the organisations used some of the Star responsive

repair questions. A quarter of organisations also include other Star

questions, and a third include non-repair questions.

Anonymity: At least three-quarters of the surveys are completed non-anonymously

with residents’ details known to the organisation.


Internal or external surveys:

Over half of the organisations carry out responsive repairs surveys

internally, with over a third carried out by external research agencies and

around one in ten by the contractors themselves. Internal surveys are

most often conducted by the customer services team or other members

of staff, with around one in seven using research staff. Few organisations

use residents to carry out the surveys.

10.2 Guidance on carrying out complaints surveys

Volume of complaints: Organisations that took part in the review received between

50 and over 2,000 complaints a year.

Whether to conduct the survey:

Around three-quarters of organisations carry out complaints surveys.

The reasons given by those who are currently not doing so are linked to

insufficient numbers or poor response rates; others are reviewing the

survey and looking to re-introduce it.

Census or sample: Two-thirds of organisations carry out a census survey, while a third prefer

a sample survey. Organisations that carry out sample surveys tend to

receive a far higher number of complaints than those carrying out a

census.

Timing and frequency of the survey:

Frequency: Monthly surveys are the most popular, with around a quarter of organisations carrying out weekly surveys. A small number carry out quarterly or six-monthly surveys.

Timing: Many organisations carry out a survey when the case is closed rather than specifying a time. Relatively few surveys are carried out within a week of the case being closed, with around a fifth taking place within two weeks and half within a month. One in six surveys are carried out within three months of the case being closed.

Survey method: A quarter of organisations use at least one additional survey method to help increase the response rate, such as online, phone, text and postal surveys.

Response rates: Response rates vary depending on the methodology used. Telephone survey rates varied between 5% and 90%, while response rates of 10% to 20% are typical for postal surveys, although one organisation reported 60%.
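Response rates like these matter when planning how many residents to contact in the first place. As a minimal sketch (a generic planning calculation, not a HouseMark tool), the required size of the contact list can be worked back from a target number of completed surveys and the expected response rate:

```python
import math

def contacts_needed(target_completes: int, response_rate: float) -> int:
    """Number of residents to contact in order to expect
    `target_completes` completed surveys at the given response rate (0-1)."""
    if not 0 < response_rate <= 1:
        raise ValueError("response_rate must be in (0, 1]")
    return math.ceil(target_completes / response_rate)

# At a typical 15% postal response rate, 100 completed surveys
# mean contacting around 667 residents:
print(contacts_needed(100, 0.15))  # -> 667
```

The same calculation shows why low-volume services struggle: at a 15% response rate, an organisation receiving only 50 complaints a year can expect fewer than ten completed surveys even with a census.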

Number of questions: The average number of questions asked by the organisations that participated in the review was nine, with most asking between seven and 11 questions. The shortest survey had just five questions, while 27 were asked in the longest survey.

Questions being asked: Three-quarters of organisations use HouseMark complaints questions from the benchmarking service. The Star complaint questions were less popular, with few organisations using all of them. The majority of surveys also include non-complaint questions.

Anonymity: Just over half of the surveys are completed with residents’ details known.

Internal or external surveys: Just over two-thirds of the organisations carry out surveys internally, with the rest carried out by external research agencies.

10.3 Guidance on carrying out ASB complaints surveys

Volume of ASB complaints: Organisations that took part in the review received between 20 and over 2,000 ASB complaints a year.

Whether to conduct the survey: The vast majority of organisations carry out ASB surveys (86% in the latest review).

Census or sample: Four out of five organisations carry out a census of complaints, with a fifth carrying out a sample survey.

Timing and frequency of the survey:

Frequency: Around a third of organisations carry out monthly surveys, with half that number surveying every week. Only a small number carry out surveys as infrequently as quarterly, with many linked to the closing of the case.

Timing: A fifth of organisations carry out a survey within a week of the case being closed and a further fifth within two weeks. Half of organisations carry out a survey within a month of the case being closed. One in ten surveys are carried out within three months of the case being closed.

Survey method: Telephone interviews are the most popular survey method, currently used by three out of five organisations in the review. Almost all of the remaining organisations carried out postal surveys. A fifth of organisations use an alternative methodology to boost the response rate.

Response rates: Response rates vary depending on the methodology used. Organisations using telephone surveys reported that they interviewed between 25% and 90% of residents. Postal survey response rates typically varied between 10% and 30%.

Number of questions: The average number of questions asked by organisations that participated in the review was 11, with three-quarters of organisations asking between nine and 15 questions. The shortest survey was just five questions, while 21 questions were asked in the longest survey.

Questions being asked: Just under a fifth of organisations use HouseMark’s question set for ASB, while three out of five use their own surveys, which include some of the HouseMark questions. Just over a fifth of organisations use their own surveys that do not include any HouseMark questions. Nine out of ten organisations include other non-ASB questions in the survey.

Anonymity: Roughly half of the surveys are completed anonymously.

Internal or external surveys: Three-quarters of organisations carry out in-house surveys.

10.4 Guidance on carrying out lettings surveys

Volume of lettings: Organisations that took part in the review received between 250 and over 3,500 lettings a year.

Whether to conduct the survey: Five out of six organisations carry out all-lettings surveys. Many of those currently not carrying out a survey have suspended it while a review is carried out.

Census or sample: Two-thirds carry out a census survey of all lettings, while a third carry out a sample. There was no apparent correlation with the size of an organisation.

Sample size: Of the organisations that conduct sample surveys, the sampling frames vary, but many are geared to achieve a specific margin of error, or to survey between 10% and 50% of lettings.
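Gearing a sample to a specific margin of error usually means the standard sample-size formula for estimating a proportion, with a finite population correction for smaller stocks. A sketch, assuming a 95% confidence level (z = 1.96) and the worst-case proportion p = 0.5:

```python
import math

def sample_size(population: int, margin_of_error: float = 0.05,
                z: float = 1.96, p: float = 0.5) -> int:
    """Sample size needed to estimate a proportion to +/- margin_of_error
    at the confidence level implied by z (1.96 ~ 95%), with a finite
    population correction for the given population size."""
    n0 = (z ** 2) * p * (1 - p) / margin_of_error ** 2  # infinite-population size
    n = n0 / (1 + (n0 - 1) / population)                # finite population correction
    return math.ceil(n)

# For 2,000 lettings a year, +/-5% at 95% confidence needs about 323 surveys:
print(sample_size(2000))  # -> 323
```

At these settings the required sample plateaus at a few hundred completed surveys however large the population, which is why fixed-percentage rules such as 10% to 50% of lettings can over-sample large stocks and under-sample small ones.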

Timing and frequency of the survey:

Frequency: Around a fifth of organisations carry out surveys weekly, with half of organisations doing so monthly.

Timing: A small number of organisations carry out the survey on the day of letting or within seven days. A fifth of surveys are carried out within 14 days of the letting, while half are carried out within a month. A further fifth of organisations carry out the survey within six weeks of the letting. Over a quarter of organisations use a different period, which is generally dependent on the lettings process itself – either as part of sign-up, or at a set period after letting (such as at the two-week follow-up visit or within six weeks).

Survey method: Telephone is the most popular survey method, used by half of the organisations, with the others split relatively evenly between postal and face-to-face surveys.

Response rates: Response rates vary depending on the methodology used. Postal surveys achieve response rates of between 28% and 53%. Those carrying out telephone surveys typically achieve interviews with between 25% and 50% of residents. Organisations carrying out face-to-face surveys typically report interviewing between 50% and 100% of residents.

Number of questions: The average number of questions asked by organisations participating in the review is 15, with three-quarters asking between seven and 21 questions. The shortest survey was just five questions, while 42 questions were asked in the longest.

Questions being asked: Over a third of organisations asked Star questions or non-lettings questions as part of the survey.

Anonymity: The majority of surveys are completed with residents’ details known; under a fifth are anonymous.

Internal or external surveys: Around three-quarters of surveys are carried out in-house, with a quarter carried out by external research agencies. Many are carried out by housing management and lettings staff.

4 Riley Court, Millburn Hill Road, University of Warwick Science Park, CV4 7HP.

Company Reg. No 3822761 VAT Reg. No. 744 0353 53

T: 024 7646 0500

E: [email protected]

W: www.housemark.co.uk