
Investigating and promoting UX practice in industry: an experimental study

Carmelo Ardito1, Paolo Buono1,2, Danilo Caivano1,3, Maria Francesca Costabile1, Rosa Lanzilotti1

1 Università degli Studi di Bari “Aldo Moro”

2 LARE s.r.l. Spin Off Company of the University of Bari

3 SER&Practices s.r.l. Spin Off Company of the University of Bari

{name.surname}@uniba.it

ABSTRACT

Efforts to address user experience (UX) in product development keep growing, as demonstrated by the proliferation of workshops and conferences bringing together academics and practitioners who aim to create interactive software that satisfies its users. This special issue focuses on the "Interplay between User Experience Evaluation and Software Development", stating that the gap between human-computer interaction and software engineering with regard to usability has been somewhat narrowed. Unfortunately, our experience shows that software development organizations perform few usability engineering activities, or none at all. Several authors acknowledge that, in order to understand the reasons for the limited impact of usability engineering and UX methods, and to try to change this situation, it is fundamental to thoroughly analyze current software development practices, involving practitioners and possibly working from inside the companies. This article contributes to this research line by reporting an experimental study conducted with software companies. The study confirmed that too many companies still either neglect usability and UX or do not properly consider them. Interesting problems emerged; this article gives suggestions on how they may be properly addressed, since their solution is the starting point for reducing the gap between research and practice of usability and UX. It also provides further evidence of the value of the Cooperative Method Development research method, which is based on the collaboration of researchers and practitioners in carrying out empirical research; it was used in one step of the study and proved instrumental for showing practitioners why and how to improve their development processes.

Keywords: Software life cycle, Human-centered design, Survey, Interview, Focus group, Cooperative Method Development

1. Introduction

The proliferation of many competitive interactive systems has pushed researchers to debate how to design systems that sell better because they are capable of providing a positive UX. Designing for UX requires understanding user requirements from both a pragmatic (system functionalities and interaction) and a hedonic point of view (Väänänen-Vainio-Mattila et al. 2008). Indeed, UX extends the more traditional concept of usability, focused primarily on ease of use, by emphasizing subjective attributes like aesthetics, emotions and social involvement. Human-Centered Design (HCD) is the starting point for taking usability and UX into account (ISO/IEC 9241-210 2010): it prescribes focusing on users, their tasks and the context in which they operate, and carrying out iterative design by creating and evaluating prototypes of increasing complexity, possibly with users.

[Pre-print version: ARDITO, C., BUONO, P., CAIVANO, D., COSTABILE, M. F. and LANZILOTTI, R. 2014. Investigating and promoting UX practice in industry: An experimental study. International Journal of Human-Computer Studies 72, 6 (2014), 542-551. http://dx.doi.org/10.1016/j.ijhcs.2013.10.004]

Unfortunately, HCD and methods addressing usability and UX are constantly mentioned in research papers, but very seldom applied in the current practice of software development. Today most software companies still devote a substantial amount of resources to determining the right functionality of their products. Moreover, daily experience of working with various software artifacts (e.g. text editors, e-commerce websites) shows that, despite the powerful functionality they offer, users complain about the many difficulties in using them. A user interface that is hard to understand and use causes many problems for users. Usability is a measure of the extent to which users are able to perform their activities in their specific context of use (ISO/IEC 9241-11 1998). A low level of usability means that users cannot work out how to use a system, no matter how elaborate its functionality is (Nielsen 1993). Today the trend is towards an increasing emphasis on UX, i.e. devoting greater attention to how to motivate, attract and engage users.

Although documented benefits of usability engineering exist (Bias and Mayhew 2005), the literature provides studies showing that software development organizations have limited or no usability engineering activities (e.g. (Ardito et al. 2011; Bak et al. 2008; Rosenbaum et al. 2000; Vredenburg et al. 2002)). While many authors have discussed the benefits and challenges of usability engineering in general, fewer have studied specific software development practices (e.g. (Dittrich and Lindeberg 2004; Kautz 2011; Larusdottir 2012; Rönkkö et al. 2008)). We agree with those researchers who state that, in order to optimize the impact of usability and UX on software development, it is fundamental to analyse current development practices, involving practitioners and possibly working from inside the companies (Lethbridge et al. 2005; Robinson et al. 2007).

This article provides a contribution to this research line by describing an experimental study that we conducted with companies to investigate the use of HCD in their software development processes, and in particular to analyse how they address the usability and UX of the products they create. Some obstacles that prevent the use of HCD methods in practice were identified, along with possible modifications of the development processes that may lead to the creation of systems capable of providing a better UX. The research work is based on an approach that relies primarily on qualitative methods, triangulating results gathered by: 1) a questionnaire-based survey, 2) interviews, 3) a focus group, and 4) an exploratory study investigating UX methods in industrial settings, inspired by both ethnography-based research and the Cooperative Method Development (CMD) proposed by Dittrich et al. (2008).

While in our research we wanted to understand how usability and UX are addressed in current practices, the results we obtained show that still today too many companies neglect these important quality factors; others do not properly address them. Interesting problems emerged: some were already known, e.g. companies complaining that HCD and UX methods are very resource demanding and that no methods suited to companies' needs exist; other problems were more surprising and challenging, such as the fact that companies are not willing to consider usability and UX requirements because they are not mentioned in the Calls for Tenders published by public organizations (as will be discussed in detail in the article). Addressing such problems is the starting point for improving the current situation. The last step of our study provided further evidence that a method like CMD has great potential to support both researchers and practitioners in identifying and improving critical aspects of software development processes. Specific hints about methods that companies could easily adopt are also given.

The article is organized as follows. Section 2 reports related work. Section 3 discusses the value of qualitative research for analyzing software engineering issues. Section 4 reports the experimental study we conducted, first illustrating the research methodology adopted and then describing the four steps of the study. Section 5 discusses the results of the study and provides some suggestions to reduce the gap between research and practice of UX. Section 6 concludes the article.

2. Related work

Since the 1980s, when Human-Centered Design was proposed, a considerable amount of research has been performed to define methods that support the design and evaluation of usable interactive systems. Some studies have documented the economic benefits of usability evaluation in terms of increased sales, increased user productivity, reduced training costs, and decreased need for user support (Bias and Mayhew 2005). Nevertheless, the literature provides many examples revealing that this research has had little impact on software development practice. This has been shown both in questionnaire- and interview-based studies performed more than a decade ago (e.g. (Rosenbaum et al. 1999; Rosenbaum et al. 2000; Vredenburg et al. 2002)) and in more recent ones (e.g. (Ardito et al. 2011; Bak et al. 2008; Boivie et al. 2003; Cajander et al. 2006; Hudson 2008; Lallemand 2011; Nørgaard and Hornbæk 2006; Seffah et al. 2006; Venturi and Troost 2004)). Specifically, Rosenbaum et al. (2000) found that the major obstacles to the integration of HCD activities in software development processes are related to the lack of knowledge about what usability is, to the resistance of engineers and/or managers to usability since they do not understand its value, and to the feeling that too many resources (e.g. time and costs) are necessary. Other authors remarked that different meanings are given to usability depending on the roles of the employees in the company (e.g. director, manager, head of operation management, usability designer, etc.): usability is often reduced to consistency of font and colour, simply regarded as performance and response times, or used as a synonym of efficiency contributing to customer satisfaction (Cajander et al. 2006). Such different interpretations of usability give further evidence of the gap between academics and practitioners (Nørgaard and Hornbæk 2006). Companies also complain that recruiting trained usability/HCD experts is difficult; indeed, in most cases such professionals are in very short supply among company employees (Venturi and Troost 2004).

In the last decade, researchers have also investigated the combination of HCD with agile design methodologies, like SCRUM, since both are based on similar principles, i.e. iterative design, user involvement, continuous testing and prototyping (Blomkvist 2005). Some studies have revealed that adopting some HCD techniques adds value to the developed product and increases its usability (Hussain et al. 2009a; Hussain et al. 2009b). With regard to which HCD techniques are primarily used, a study conducted in 14 Swedish companies that had adopted the SCRUM design methodology indicated that practitioners mainly used very informal methods, such as blogs from users, lunch meetings, comments on prototypes and meetings with users. Only in a few cases were formal methods, e.g. inspection techniques, also applied (Larusdottir 2012).

However, there are studies indicating that the HCD-Agile combination presents two important problems. The first is related to the communication between developers and designers: a power struggle occurs between developers, who are more oriented to software architecture and functionality, and designers, who advocate end-user needs (Chamberlain et al. 2006). In an attempt to solve this power struggle, the Personas method was applied. Unfortunately, the method proved not very useful, since the precise identification of a user archetype negatively affected external clients and potential future clients (Rönkkö et al. 2004). The other important problem concerns distinguishing the roles of the two different actors, i.e. the customer and the user, participating in an HCD-Agile development process. Agile methodologies require continuous customer participation in the software lifecycle, in a collaborative partnership based on daily interaction with developers (Highsmith 2002). HCD is based on a similar principle that considers user participation throughout the whole software development process. In an agile development process, customer and user play the same role, with no distinction between customers, who requested the system but might not use it, and users, who will actually use the system. Kautz carried out a study to explore fruitful customer and user participation in an agile software development project (Kautz 2011). The customer played an informative, consultative and participative role, while the user was involved in ensuring the right flexibility of the product during the whole development process; both the project and the final product were considered a success by the customer as well as by the developing organization (Kautz 2011).

A few scattered experiences of designing and evaluating UX in practice have been reported in the literature so far. For example, at Nokia, which has a long history of designing for UX, the product development process includes continuous evaluation of usability and UX in different phases of the life cycle. After the release of the product on the market, feedback is gathered from the field through controlled and uncontrolled studies in order to collect information for improving successive releases (Roto et al. 2008). As highlighted in the motivation of this special issue, integrating such activities in software development practices is still very challenging.

3. Qualitative research in software engineering

The overall study reported in this article has been primarily based on a qualitative research approach. Qualitative research has long been performed in fields such as psychology, the social sciences and human-computer interaction, and keeps increasing in software engineering, since it is an effective way to explore the in-situ practice of software engineering (Seaman 1999). As pointed out in (Dittrich et al. 2007), "There is no 'one way' of doing qualitative research. The only common denominator of qualitative research is that it is based on qualitative data". Traditionally, quantitative methods have been claimed to be better than qualitative ones, since they provide objective measurements and enable replication of studies, while qualitative methods build on subjective interpretation. This is no longer true, since it has been shown that careful qualitative researchers can analyse data and present results in ways that ensure the necessary objectivity and soundness. To perform good qualitative research, it is suggested to adopt different methods, to consider different data sources, and to have different researchers cooperating in order to counterbalance possible individual or methodological biases (e.g. (Dittrich et al. 2007; Rogers et al. 2011)). Another acknowledged property of qualitative research is that the flexibility of the research design makes it possible to involve practitioners in the process, enabling researchers to discuss and evaluate with them possible improvements of their practices. Indeed, qualitative research is particularly suited to addressing software engineering as a social activity, thus emphasizing how software practitioners may incorporate new methods in their daily practices. Understanding the social side of software engineering helps to develop, select and use methods and tools that support engineering practices (Dittrich et al. 2007). Practitioners can actively participate in the whole research process, addressing their problems and discussing them with researchers. Thus, qualitative research is very appropriate for cooperation with industry.

In the work reported in this article, along with more traditional methods such as a questionnaire-based survey, interviews and a focus group, we have used the qualitative research approach called 'Cooperative Method Development' (CMD): it "combines qualitative social science fieldwork with problem-oriented method, technique and process improvement" (Dittrich et al. 2008). The CMD research process is an evolutionary cycle, which consists of qualitative empirical research, technical and methodological innovation defined in cooperation with the involved practitioners, and the implementation of these innovations evaluated through accompanying empirical research. CMD can be seen as a domain-specific adaptation of action research (Easterbrook 2007); it consists of three phases that can be repeatedly applied in the same context:

Phase 1 - Understand Practice: the research begins with qualitative empirical investigations into the problem domain, in order to understand existing practices based on their historical and situational context, and to identify aspects that are problematic from the involved practitioners' point of view.

Phase 2 - Deliberate Improvements: the results of the first phase are used as input for the design of possible improvements. This is done in cooperation between researchers and the involved practitioners.

Phase 3 - Implement and Observe Improvements: the improvements are implemented. The researchers follow these method improvements as observers. The results are evaluated together with the involved practitioners.

The results of the third phase summarize concrete outcomes for the involved companies. Moreover, they allow researchers to build the basis on which other scientists can evaluate the proposed improvements.

4. The experimental study

The aim of our work is to investigate whether and how software companies consider HCD in their software development processes, how they address the usability or UX of the products they create, and possibly how they can improve their development process in order to create better products. In fact, there are still very few studies carried out with the involvement of software companies that try to understand the reasons for the large gap between what is proposed in academia - and widely published in the literature - and the actual practices of software development. This section describes the experimental study we conducted. The work has been driven by the research methodology illustrated in the following section. The subsequent sections describe the specific steps of the study.

Figure 1. The research methodology.

4.1. Research methodology

The research work presented in this article was articulated in four steps (see Figure 1). As a first step, we performed a survey based on an online questionnaire, since we wanted to get data from a considerable number of companies. By using a questionnaire-based survey, our research methodology actually considered some quantitative data, while still remaining primarily qualitative. The survey results highlighted issues that we wanted to analyse in more depth. In fact, a questionnaire does not make it possible to discuss the meaning of questions and concepts, or to explore new opinions and views that might emerge in the respondents (Karlsson et al. 2007). Thus, we carried out the three further steps almost in parallel, as described in the following.

The second step consisted of semi-structured interviews that were conducted to achieve a higher degree of discussion between the interviewer and the interviewees, thus getting a better understanding of practitioners' views on the issues and the reasons for their software development practices. Some questionnaire answers indicated that, in some cases, the company management is the more reluctant party when it comes to employing usability engineering methods. Thus, as a third step we organized a focus group involving employees of different levels in a company, in order to compare their views on addressing usability requirements and evaluations.


Our experience confirms a known limitation of interviews and focus groups when trying to get information about a certain practice (Rogers et al. 2011): in some cases, what the interviewees say is not exactly what they do, know or think. This may be due to several factors. One is that "practitioners exhibit a kind of knowing in practice, most of which is tacit" (Schön 1983), i.e. they are not able to express their knowledge in words, but it is tacit knowledge that they routinely use in their activities, even while being unaware of it. Moreover, relevant information is conveyed through documents and artefacts that they use in practice and share with colleagues; it is difficult to get this information from interviews. Finally, questionnaires, interviews and focus groups are affected by a "please the researcher" bias, namely the tendency of respondents to answer in a way that they believe will be viewed favourably by the researcher.

In-situ observations can fruitfully complement such methods. As pointed out in (Lethbridge et al. 2005; Robinson et al. 2007), the practice of software engineering is better understood if software practitioners are analyzed while they are engaged with their software engineering problems in real settings. Various studies performed in recent years show that ethnographically-inspired research provides very good means to get an in-depth understanding of the socio-technological realities surrounding everyday software-development practice (Dittrich et al. 2007; Sharp et al. 2010). The empirical base of these studies is collected through a combination of methods, such as observation of practice and of the artefacts used, and interviews and discussions with practitioners. Actually, in classic ethnography the researchers are immersed in the area under study for a considerable amount of time (several months or even years). This is not usually possible in the case of software companies because of considerations of time or confidentiality. Thus, classical ethnography approaches have been adapted in order to conduct shorter studies (for examples, see (Robinson et al. 2007)).

We then planned, as the fourth step of our research methodology, an exploratory study aimed at investigating HCD and UX methods from inside a company. While it could be useful to understand company practice through the immersion of a person for a couple of months, the important objective of our study was to show company employees and managers that their development practice could be improved by adopting HCD methods in order to create products of better quality. More specifically, in our study we adopted an approach inspired by the Cooperative Method Development proposed by Dittrich et al. (2008). In particular, we felt it useful that the researchers could show practitioners how they would apply some specific methods to collect requirements or to create a system prototype. In some other cases, it could be even better to cooperate with practitioners, so that they could appreciate the value of a certain activity. In the performed study, two collaborators of ours participated in the development practices of a company for three months each.

4.2. Step 1: Questionnaire-based survey

Planning

As the first step of our study, we conducted a survey to investigate the practical impact of usability engineering in software development companies (Ardito et al. 2011). It consisted of a questionnaire-based survey, whose aim was to determine whether software development companies were evaluating the usability of their software, and to identify key obstacles. We wanted to address companies located in southern Italy, replicating a survey conducted in northern Denmark three years earlier (Bak et al. 2008). The questionnaire was the same as that used in the Danish study; being originally written in Danish, it was translated into English by its authors and then into Italian by the Italian researchers; finally, it was checked by a Danish native speaker, who has lived and worked in Italy for 20 years, by comparing the Italian version with the original Danish version. Special care was taken to avoid any influence of the results of the Danish study on the analysis of the collected data and the interpretation of the results.

Data Collection

The survey was conducted in southern Italy in November-December 2010 using an online questionnaire. The questionnaire contained open and closed questions. The initial section aimed at acquiring information about the company (number of employees, product types, platforms, development method) and also presented questions to elicit the respondent's understanding of the term "usability evaluation". The subsequent questions focused on the company's experience of usability evaluation and on its pros and cons.

Thirty-six companies answered the Italian questionnaire; thirty-nine had answered the Danish one.

Analysis & Results

The collected data were analysed by three Italian HCI researchers. In order to avoid influence from the Danish study, they had no knowledge of that study; however, they were asked to adopt the same methodology. For the closed questions, they performed a quantitative analysis. For the open questions, they followed the grounded theory approach: they individually analysed the data from each of the open questions and put codes on sentences. Then, the code for each sentence was discussed among the three of them, and a single code was agreed upon. They individually assigned codes to categories. Again, the individual assignment of codes to categories was agreed upon in a joint session. Reliability was high (agreement over 85%) for the codes reported in this paper. This process resulted in a list of categories and codes, which was used to get a condensed overview of the results from the questionnaire. Finally, the results were compared with those of the Danish study. A detailed discussion of the results obtained is in (Ardito et al. 2011); we report here a summary.
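The percent-agreement measure used in the coding process above can be sketched in a few lines; note that the codes below are hypothetical illustrations, not the study's actual codebook, and that studies seeking a chance-corrected reliability measure would typically use Cohen's kappa instead.

```python
def percent_agreement(codes_a, codes_b):
    """Share of answers to which two coders assigned the same code."""
    assert len(codes_a) == len(codes_b), "coders must rate the same answers"
    matches = sum(a == b for a, b in zip(codes_a, codes_b))
    return matches / len(codes_a)

# Hypothetical codes assigned by two researchers to four open answers.
coder_1 = ["resource demands", "no suitable methods", "user availability", "developer mindset"]
coder_2 = ["resource demands", "no suitable methods", "developer mindset", "developer mindset"]

print(f"Agreement: {percent_agreement(coder_1, coder_2):.0%}")  # 3 of 4 codes match
```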

In order to be consistent with the Danish study, we considered companies with the following characteristics: 1) they develop software with a graphical user interface (e.g. web applications, mobile applications, business management software); 2) they develop for customers or for internal use; 3) they have at least one development unit located in the Apulia region, in southern Italy, while their headquarters could be elsewhere (e.g., the IBM unit located in Bari participated); 4) they employ more than a single person. Specifically, of the 36 respondents, 5 belong to small companies (from 2 to 15 employees in the whole company), 11 to medium companies (from 16 to 150 employees) and 20 to big companies (more than 150 employees).

The first significant result is that several companies still do not conduct any form of usability evaluation. The percentage of companies that perform usability-evaluation activities was almost the same in the two studies. In the Danish study, 39 organizations responded to the questionnaire: 29 (74%) said that they perform usability evaluation, 10 (26%) said that they do not. In the Italian study, 26 (72%) do and 10 (28%) do not. Figure 2 shows the percentage of small, medium and big companies that perform/do not perform usability-evaluation activities.

Figure 2. Percentage of small, medium and big companies that perform (Yes)/do not perform (No) usability-evaluation activities.
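The percentages reported above follow directly from the raw respondent counts; as a quick sanity check:

```python
# Respondents who do / do not perform usability evaluation in each study.
studies = {
    "Danish":  {"yes": 29, "no": 10},   # 39 respondents in total
    "Italian": {"yes": 26, "no": 10},   # 36 respondents in total
}

for name, counts in studies.items():
    total = sum(counts.values())
    shares = {k: round(100 * v / total) for k, v in counts.items()}
    print(name, shares)  # Danish: 74% yes / 26% no; Italian: 72% yes / 28% no
```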

Looking at the first important question in the questionnaire, namely the understanding of the term "usability evaluation", only two Italian respondents said that they do not know its meaning, while one confused accessibility evaluation with usability evaluation. All the other thirty-three respondents either provided answers that refer to methods to evaluate usability or gave a correct definition of usability without mentioning methods to evaluate it (fifteen respondents in the latter case). In answer to the same question in the Danish study, which was carried out in 2007, thirteen respondents actually addressed functionality tests instead of usability evaluation. This indicates that companies are becoming more aware of what usability really is. However, the results we obtained showed that they are still reluctant to evaluate usability in the software life cycle, or do not have a clear understanding of how to evaluate it.

Figure 3. Problems in usability evaluations (from (Ardito et al. 2011)).

The respondents who said that their organization performs usability evaluations have been asked to

report the problems they encountered in introducing and performing usability evaluations in their

software life cycle, as well as the perceived advantages. Problems and advantages were two open

questions. The answers to the problems question were classified in the following categories: No

suitable methods, Resource demands, User availability, Developer mindset, No problems, Do not

know. Some respondents indicated more than one problem. As shown in Errore. L'origine

riferimento non è stata trovata., the mostly reported problems were the lack of suitable methods

for usability evaluation and the fact that they require a lot of resources in terms of cost, time and

involved people. Two answers pointed out that it is not easy to involve users in the evaluation.

Finally, by Developer mindset we refer to problems due to the fact that some professional developers still devote most of their effort to the system back-end, focusing on qualities such as code maintainability and reuse, and do not care much about how the system functionalities are presented to end users through a usable interface. The HCI researchers who analysed the questionnaire answers proposed different names for this category. In the end, we decided to call it Developer mindset by analogy with the Danish study, which found similar categories of problems.

The answers to the advantages question were classified in the following categories: Quality improvement, User satisfaction, Competitiveness, Resource saving, No advantages, Do not know. Some respondents indicated more than one advantage. As shown in Figure 4, most answers pointed out that the advantages of usability evaluation are the improved quality of the developed products and the increased satisfaction of their users. Five answers pointed out that usability evaluation helps increase company competitiveness. Despite the fact that resource demand was considered a problem by some respondents, six answers reported resource saving as an advantage of usability evaluation. Those respondents appeared convinced that performing usability evaluation at the right time in the product life cycle actually reduces overall costs.

Figure 4. Advantages of usability evaluation (from (Ardito et al. 2011)).

In order to get a more in-depth understanding of the topics addressed in the questionnaire, and in particular to better discuss the advantages and problems of usability engineering as they are perceived by the individual companies, the second step in the research methodology consisted of interviews with managers of some companies.

4.3. Step 2: Interviews

Planning

We decided to interview representatives of those companies that, based on their answers in the returned questionnaire, appeared motivated to improve the usability of the products they develop. Four persons, from four different companies, were contacted.


A semi-structured interview was designed in order to understand in depth the socio-technological

realities surrounding everyday software development practices and why their companies are not

pushing for the adoption of usability engineering methods in their development processes.

Data Collection

Two or three researchers were involved in each of the four interviews: one was the interviewer; the

other(s) assisted by taking notes. The interviews were audio-recorded. Each interview was

transcribed before analysis.

Analysis & Results

The grounded theory approach used for analysing the survey open questions was also adopted for analysing the data collected in the interviews and in the focus group (Step 3). Following this approach, two researchers independently examined the interview transcripts. The analysis led to an iterative refinement of the identified categories, and a new category emerged. Reliability was high (agreement over 90%).
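The article reports reliability as raw percent agreement; such figures are often complemented with a chance-corrected statistic such as Cohen's kappa. The following sketch illustrates both computations; the category codings below are hypothetical examples, not the study's actual data:

```python
from collections import Counter

def percent_agreement(coder_a, coder_b):
    """Fraction of items the two coders assigned to the same category."""
    assert len(coder_a) == len(coder_b)
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return matches / len(coder_a)

def cohens_kappa(coder_a, coder_b):
    """Chance-corrected agreement: kappa = (p_o - p_e) / (1 - p_e)."""
    n = len(coder_a)
    p_o = percent_agreement(coder_a, coder_b)
    counts_a, counts_b = Counter(coder_a), Counter(coder_b)
    # Expected chance agreement from each coder's marginal category frequencies.
    p_e = sum((counts_a[c] / n) * (counts_b[c] / n)
              for c in set(coder_a) | set(coder_b))
    return (p_o - p_e) / (1 - p_e)

# Hypothetical codings of ten interview excerpts into the problem categories.
a = ["No suitable methods", "Resource demands", "Resource demands", "CfT problem",
     "Developer mindset", "No suitable methods", "User availability", "CfT problem",
     "Resource demands", "Developer mindset"]
b = ["No suitable methods", "Resource demands", "User availability", "CfT problem",
     "Developer mindset", "No suitable methods", "User availability", "CfT problem",
     "Resource demands", "Developer mindset"]

print(percent_agreement(a, b))  # 0.9
print(cohens_kappa(a, b))       # 0.875
```

Kappa discounts the agreement that two coders would reach by chance given their category frequencies, so it is a more conservative reliability estimate than the raw percentage.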

The interviewees mainly pointed out the lack of suitable usability-evaluation methods with a high benefit-cost ratio. They said that, since time and other resources are limited, during product development companies focus only on the requirements established for that particular project. In general, such requirements do not include usability and UX. Why should companies bother to ensure usability and an engaging UX if this is not explicitly required? This is especially true in the case of public tenders. Indeed, it often happens that companies develop software systems commissioned by public organizations, which specify the system requirements in a Call for Tender (CfT). It is evident that the companies' interest is to satisfy all and only the requirements specified in the CfT. For example, one interviewee clearly said: "I do not burden myself with anything not explicitly required in the Call for Tender". Thus, in addition to the categories of problems identified in the survey, a new category emerged, which we call the "CfT problem".

By contrast, companies' interest in software accessibility was explicitly stated. The interviews confirmed that companies, especially those working for the public administration, have to produce accessible products, since accessibility is one of the requirements in the CfT. In the questionnaire returned by the company in which one of the interviewees works, an answer reported that the company produces usable products because the Web Content Accessibility Guidelines (WCAG) are followed. The interviewee also said: "Accessibility contains usability!... We use automatic tools to validate our products". He was referring to tools for assessing accessibility, such as AChecker (AChecker). During the interview, a short discussion was devoted to clarifying this issue. It is worth mentioning that in Italy companies are often requested to comply with the Stanca law on accessibility, which was approved by the Italian Parliament in 2003. Thanks to this law, which clearly indicates the different accessibility levels, it is easy to specify accessibility requirements in Calls for Tender.

4.4. Step 3: Focus group

Planning

The aim of the focus group was to invite employees of different levels in a company (managers, leaders and developers) to debate issues related to performing HCD activities in their work. Four employees of a company that participated in the survey in Step 1 were selected: one company manager, two project leaders and one User Interface (UI) developer. We expected that conflicts about the advantages and problems of applying HCD methods would arise among employees who perform different activities and have different responsibilities in the software life cycle. The location was a meeting room at the company.

Data Collection

One of the three researchers participating in the focus group started the meeting with a brief introduction of the research goals. Then, he facilitated the discussion by asking questions to clarify the answers that a representative of their company had given in the survey, in particular to obtain more details about the number and competences of people working on product usability. Participants were also asked about the company development process and the formal or informal meetings they have with other employees to discuss their work. The other two researchers participated in the meeting, taking notes and on a few occasions asking for further details. The focus group was audio-recorded and later transcribed.

Analysis & Results

The notes the researchers took during the focus group and the transcription of the audio-recording were analysed using the same methods as for the interview analysis. The gathered data provided information about the company, the type of projects it develops and its development process.

The company has 15 employees, including managers, project leaders and developers. They develop products commissioned by private customers (50% of their work) as well as by public administrations (30%). For the remaining 20%, they work on research projects funded by regional or national government. Two persons take care of the UI aspects of the interactive systems they develop: a graphic designer and a developer, called the UI developer, who has some HCI competences acquired during his recent university studies.

The company adopts an Agile development methodology. Being a small company, the managers are also involved in critical phases of the overall design and development process. Initially, they participate in an informal meeting with the project leaders and a customer representative to identify the high-level needs and expectations of the customer. The collected information becomes a guideline for system requirements definition. At this point, the specific application domain is studied and competitor systems are analysed. Managers and project leaders, whose expertise is primarily in Software Engineering (SE) and very limited in HCI, sketch some paper mock-ups and discuss them among themselves. Once they agree, these mock-ups are delivered to the graphic designer who, based on his experience and creativity, develops prototypes of the UI using drawing tools such as Adobe Photoshop®. Then, the UI developer implements an evolving prototype using tools such as Pencil, Artisteer, etc. At this point, the software developers create a Beta version of the system, which implements the core functionality. This version is shown and discussed in a review meeting, in which customer representatives are also observed while interacting with the system to perform the main tasks. After the meeting, the detected criticalities are resolved by the project team and the updated Beta version is released to the customer to be tested by a restricted number of users, while the company developers go on implementing other functions.

Participants in the focus group were asked why they do not involve end users. They explained that during requirement specification "users are considered a waste of time", because "they are not aware of their needs", or "they ask the impossible", or "they change their minds too many times". Moreover, the company does not perform user tests because "users, aware of being observed, feel awkward and are not able to perform even simple tasks". They said that no useful indications come from end users. The researchers decided to investigate this point further. It emerged that this company usually develops software systems commissioned by small or medium enterprises; these systems are then used by the enterprise employees. When the company interviewed such end users for requirement analysis, they did not get any useful information. Specifically, they often interviewed technology enthusiasts, who emphasized technological appeal over system functionality; convinced that they were technology experts, these users tended to dictate their ideas about the system. In other cases, they interviewed employees who saw their role as an opportunity to show their capabilities to their bosses. As a consequence, these employees tended to ask for many more functions than were really needed.


The researchers explained that interviews are not the only method to involve end users in requirement analysis, precisely because situations like those described by the participants may occur. They mentioned other methods that can be triangulated with interviews (e.g. user observation), and they agreed with the practitioners to meet again to show how these methods could be properly applied.

The focus group also confirmed what had already emerged in the previous interviews, namely that the Call for Tender is one of the main reasons for not addressing usability engineering. All participants, including the UI developer, said that they do not care about designing for usability and UX since the requirements they get from their customers, in the private as well as the public sector, contain no reference to usability; sometimes, it is only very generically mentioned that the system has to be "easy to use". Participants also said that their main objective, besides enabling use of the system without the help of an operating manual, is to guarantee compliance with the Italian law on accessibility, since this is usually explicitly required.

4.5. Step 4: Exploratory study

Planning

The study was performed in a company of medium-to-large size located in southern Italy and was inspired by both ethnography-based research and Cooperative Method Development (CMD) (Dittrich et al. 2008). Referring to the two objectives of the overall study, we aimed to view, capture and understand the company's work practice and to integrate, at key points of the software development life cycle, activities addressing usability and UX, such as interviews with real users and usage scenarios during requirement analysis, and prototype development and evaluation during the early phases of the system life cycle. In accordance with CMD, company employees were involved in order to identify with them the possible improvements to be implemented in the company's software development practice.

The company participating in the study develops products in different domains, primarily public administration and banking. The company is organized in three Business Units (BUs): Public Administration, Finance and Research; the latter is mainly involved in research projects. Each BU can be considered a separate small company, with its own personnel carrying out all the activities in the software life cycle: BU managers, project leaders, analysts, designers, developers, etc. All BUs adopt a traditional waterfall life-cycle model for several reasons, including the managers' background and project constraints, as will be explained later.

A research group of four senior and two junior HCI researchers was formed. It was agreed with the company that the two junior researchers would work in the company, each for sixty working days (twelve weeks). Following the three CMD phases (see Section 3), the two junior researchers investigated the existing company practices in order to identify the aspects that were problematic (Phase 1). Every two weeks short meetings were held, in which the two researchers reported what had happened in the company and evaluated the work performed with the other members of the research group and the company representatives. In these meetings, possible improvements were designed (Phase 2). Then, the two junior researchers implemented them in the company's software practices and collected the results to be evaluated at the next meeting (Phase 3).

Data collection

The study was carried out in the Public Administration and Research BUs. Specifically, Rossana, a

female researcher, was assigned to a project of the Public Administration BU, whose aim was to

create an application for tourists visiting a certain town, running on a mobile device; it was

commissioned by the municipality of that town. Diego, the male researcher working in the Research

BU, was assigned to a research project, whose aim was to develop hardware and software to

provide services to various people, from oceanography researchers to skippers, and others.

Data about current practice were gathered through: observations of practice reported in the researchers' diaries; observation of artifacts of practice; in-situ formal and informal interviews with practitioners, BU managers and project managers; and informal discussions with BU employees. As soon as critical aspects were identified, informal meetings between the researcher and the practitioners were held, in which the researcher illustrated methods that could be introduced to improve these critical aspects of the development process. The methods were proposed by the researchers on the basis of their academic knowledge, but the discussion about which method should be selected, and how it should be applied, strongly involved the practitioners. In some cases, the researchers autonomously applied a method and then showed the results to the practitioners. Further discussions and interviews were conducted in order to evaluate the improvements. The study ended with two interviews, one with each BU manager, performed about a month after the exploratory study to get their opinions about Rossana's and Diego's work.

Analysis & Results

From observation of the design and development processes in the company, Rossana and Diego realized that the users' point of view was largely neglected in requirement analysis. Thus, they worked hard to demonstrate the benefits of including a detailed specification of the user requirements. They also performed some interviews to validate such requirements with other stakeholders. Diego was working on a quite complex system addressing very different types of users; he therefore insisted on involving real users, pointing out how different final users are from other stakeholders in terms of needs and expectations; however, this was not possible.

Both Rossana and Diego made extensive use of paper prototypes, discussing them in participatory meetings with other stakeholders: the other project partners in the case of Diego's research project, while Rossana organized short meetings with the design team. Such participatory meetings were themselves a new activity with respect to the traditional process, and they were very much appreciated by the practitioners, who realized how much useful information about the project was provided by the different stakeholders expressing their own points of view. Because Rossana was designing an application for people visiting a certain town, she involved a few other persons in the company (secretaries and staff members), who acted as tourists interacting with the prototypes. Even if the approach might appear a bit naive, HCI researchers know well how useful such "quick and dirty" methods can be (Rogers et al. 2011). As a further example, to test a running prototype with real users, Diego contacted two friends of his, who are professional skippers, and performed thinking-aloud tests. The tests revealed various problems and, in particular, showed that a certain feature was not as useful as the analysts had assumed.

After an analysis of various tools for rapid prototyping, Diego selected Justinmind Prototyper

(Justinmind Prototyper) and used it for creating several successive prototypes. Rossana and Diego

also performed several heuristic evaluations of the prototypes. Thus, they used methods that are

very cost effective in order to demonstrate that such methods require limited resources and little

training of company employees who could apply them.

The study of the work practice revealed one of the reasons why the company still adopts a waterfall model that neglects formative evaluation during software development: in some cases the company develops systems whose requirement analysis and user-interface specification are performed by other companies. The company therefore only has to make sure that its development complies with the requirements it receives from the other company.

About one month after the end of Rossana's and Diego's work in the company, three researchers interviewed the BU managers in two separate interviews. The managers said that they now understood how fruitful the new activities performed by the two researchers had been and how meeting other stakeholders had helped resolve several concerns. The Research BU manager greatly appreciated the fact that, in the requirement analysis, Diego insisted on including a detailed specification of user requirements. Both managers highlighted that they are now convinced of the value of early prototyping activities and are enthusiastic about the prototyping tools used. In particular, they are planning to acquire the tool used by Diego and to use it in the early design phase of their projects. They are still far from carrying out systematic usability and UX evaluations, but the feedback they got from this experience seems to have been very valuable in making them reconsider their development process.

5. Discussion and suggestions

The results of the questionnaire-based survey conducted as the first step of our research methodology were discussed in detail in (Ardito et al. 2011), where they were also compared with a similar study carried out in a very different geographical region, northern Denmark. Both surveys (in Italy and Denmark) showed that the number of organizations conducting some form of usability activities is rather low. Even if software developers are becoming more aware of usability and of its importance for improving their products, one of the main problems remains what we call the "Developer mindset": many developers have their minds set mainly on programming aspects, technical challenges and product functionality rather than on usability. Still too many of them do not know well what usability is, and they know even less about UX. Another main obstacle they reported is the lack of suitable methods that could be integrated in their work practices without demanding a lot of resources.

The interviews and the focus group were conducted in order to investigate in more depth the issues raised by the survey. In particular, the focus group aimed at understanding whether employees of different levels in a company (e.g. developers, project leaders, managers) had different views about the advantages and disadvantages of performing HCD activities. Both the interviews and the focus group confirmed the main results of the survey; however, they were instrumental in a) better highlighting some problems and b) revealing new ones. Indeed, more problems concerning user involvement emerged. It was evident that most practitioners are reluctant to involve end users in participatory design activities as well as in evaluation. In the focus group, they clearly expressed the view that involving end users is a waste of time, since users are unable to explain their needs and expectations or to provide useful indications during the evaluation. Again, it was remarked that involving users is very demanding in terms of cost and time, which makes user testing unfeasible. The work of the research team in Step 4 had the great value of demonstrating that, in many cases, valid user-based evaluation can be performed using so-called discount methods, like thinking aloud (Nielsen 1993); in order to get useful feedback, company employees or friends may be involved, provided they have characteristics similar to those of the actual end users of the application under development.

A new and interesting finding of the interviews and focus group is that another reason why companies neglect usability and UX is that these are not requirements considered in public tenders. In most cases, companies develop software systems commissioned by public organizations, which specify the system requirements in Calls for Tender. Companies do not address usability and UX because they are not required in these Calls. Thus, another suggestion for changing the current situation is to convince such public organizations of the need to explicitly mention usability and UX requirements in their Calls for Tender. To this end, we are already in touch with people working at the office of the Apulia region (the region where our University is located), which in recent years has published several Calls for Tender for ICT systems, and we are discussing these issues with them. While trying to convince them to address UX, we are facing another critical issue, namely the lack of usability and UX requirements that are objectively verifiable; consequently, it is not easy to specify them in the Calls. HCI researchers are urged to find proper solutions to this problem and to invest research work in defining more significant metrics for usability and UX. It is worth remarking that companies focus a lot on accessibility because in Italy there is a specific law defining different accessibility levels, so Calls for Tender can easily specify the accessibility requirements that software systems have to comply with.
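The article does not name a specific metric, but one example of an objectively verifiable usability measure that a Call for Tender could reference is the System Usability Scale (SUS), a standard ten-item questionnaire whose scoring rule yields a single 0-100 value. A minimal scoring sketch (the example responses are invented):

```python
def sus_score(responses):
    """Compute the System Usability Scale score from ten 1-5 Likert responses.

    Odd-numbered items (1st, 3rd, ...) are positively worded: contribution = r - 1.
    Even-numbered items are negatively worded: contribution = 5 - r.
    The summed contributions (0-40) are scaled to 0-100 by multiplying by 2.5.
    """
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS needs exactly ten responses on a 1-5 scale")
    total = sum((r - 1) if i % 2 == 0 else (5 - r)
                for i, r in enumerate(responses))
    return total * 2.5

# Example: a fairly positive (invented) questionnaire return.
print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))  # 85.0
```

A threshold on such a score (e.g. a minimum mean SUS over a sample of end users) is the kind of verifiable requirement that could be written into a Call for Tender, in contrast to vague wording such as "easy to use".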

The study performed as Step 4 of our research work confirmed the value of ethnographically inspired research for gaining an in-depth understanding of the socio-technological realities surrounding everyday software development practice (Dittrich et al. 2007; Sharp et al. 2010). CMD proved to be a method able to address the following questions (Dittrich et al. 2008): How do software development practitioners carry out their daily work? What methods do they use to address software quality? What is their view of different ways of involving users in the software life cycle? Which methods, if any, could be easily integrated in their software development process to address the problems that practitioners experience? What improvements do we get by integrating a specific method? These questions help researchers and practitioners identify, manage and improve critical aspects of a company's development process.

The study confirmed how important it is to develop paper prototypes and discuss them with other stakeholders, including end users. This is another important suggestion for companies. It might not appear to be a new finding, because it is obvious within the research community; the actual problem, however, is transferring the use of iterative prototyping into company practice. The company also got evidence of the advantages of informal meetings in which several stakeholders, including end users, analyse prototypes, starting from paper ones. This study and other previous experiences of ours on HCD in practice (e.g. (Ardito et al. 2010)), as well as other relevant work in the literature (Wagner and Piccoli 2007), provide another important suggestion: running prototypes have to be evaluated with samples of their end users in a real context of use, since "end users can raise significant issues about system usability only when they get down to using the system, or even a running prototype, in their real activity settings". Only then are they able to provide the right indications about what is working well and what is not. If this is true for usability, it is even more true for UX, both because usability is part of UX and because the subjective aspects that UX impacts can only really be assessed by end users in real contexts of use.

6. Conclusion

This article has presented an experimental study conducted with software companies to investigate and promote usability engineering and UX methods. It followed a research methodology that triangulated different methods. It started with a survey conducted in southern Italy, whose aim was to inspect the usability engineering practices of software development companies; it replicated a similar survey performed three years earlier in northern Denmark. These surveys highlighted some issues that required more in-depth analysis. Specifically, the goal was to understand why developers, who appeared clearly motivated to increase the usability of their products, did not actually adopt usability-engineering methods in their development processes. Thus, three further steps were carried out: semi-structured interviews with project leaders of software companies, a focus group with employees of different levels within a company, and a study conducted to explore UX practices in an industrial context by collaborating with practitioners.

The article has discussed the results of the study and provided suggestions on methods that companies can easily integrate in their software development practices. An interesting issue that emerged in the study, to which both HCI and SE researchers have to devote attention in order to change the current situation, is convincing public organizations to explicitly mention usability and UX requirements in their Calls for Tender for ICT products. This way, companies will be obliged to consider these requirements. To make this possible, objectively verifiable usability and UX requirements have to be defined, so that they can be clearly specified in the Calls. The study has also provided further evidence that usability researchers have to be more careful in transferring academic work into practical value for industry. As we said in (Ardito et al. 2011), we believe "it is responsibility of academics to translate scientific articles, which formally describe evaluation methods, into something that makes sense for companies and it is ready to be applied".

As a further contribution, this article confirms the value of the research method called Cooperative Method Development (Dittrich et al. 2008). This method involves cooperation between researchers and practitioners to carry out qualitative empirical research, identify critical aspects and possible improvements of the development process, and, finally, implement and evaluate such improvements. The active involvement of the practitioners in the overall CMD process, along with the feedback they got from evaluating the obtained improvements, was instrumental in persuading them of the urgency and necessity of addressing usability and UX in their development processes.

7. Acknowledgments

The research reported in this article has been partially supported by: 1) EU through COST Action

IC0904 TWINTIDE; 2) Italian Ministry of University and Research (MIUR) under grant

VINCENTE; 3) Italian Ministry of Economic Development (MISE) under grant LOGIN; 4) Apulia

Region under grant DIPIS. We are grateful to the companies which were involved in our studies,

and to Rossana Pennetta and Diego Paladini who participated in the study within a company. We

also thank very much Yvonne Dittrich of the IT University of Copenhagen for her valuable

comments on Cooperative Method Development.

8. References

AChecker, http://achecker.ca/checker/index.php. Last access: February 22, 2013.

Ardito, C., Buono, P., Caivano, D., Costabile, M.F., Lanzilotti, R., Bruun, A., Stage, J., 2011. Usability evaluation: a

survey of software development organizations, in Proceedings of International Conference on Software Engineering and

Knowledge Engineering, SEKE '11, Miami, FL, US, pp. 282-287. Knowledge Systems Institute, Skokie.

Ardito, C., Buono, P., Costabile, M.F., 2010. Analysis of the UCD process of a web-based system, in Proceedings of DMS 2010, pp. 180-185. Knowledge Systems Institute.

Bak, J.O., Nguyen, K., Risgaard, P., Stage, J., 2008. Obstacles to usability evaluation in practice: a survey of software

development organizations, in Proceedings of 5th Nordic conference on Human-computer interaction, NordiCHI '08,

Lund, Sweden, pp. 23-32. ACM, New York, NY, USA.

Bias, R.G., Mayhew, D.J., 2005. Cost-Justifying Usability. An Update for the Internet Age, Second Edition. Morgan

Kaufmann.

Blomkvist, S., 2005. Towards a Model for Bridging Agile Development and User-Centered Design, in: Seffah, A.,

Gulliksen, J., Desmarais, M. (Eds.), Human-Centered Software Engineering — Integrating Usability in the Software

Development Lifecycle, vol. 8, pp. 219-244. Springer Netherlands.

Boivie, I., Aaborg, C., Persson, J., Lofberg, M., 2003. Why usability gets lost or usability in in-house software

development. Interacting with Computers, 15 (4), 623-639.

Cajander, A., Gulliksen, J., Boivie, I., 2006. Management perspectives on usability in a public authority: a case study, in

Proceedings of 4th Nordic conference on Human-computer interaction, NordiCHI '06, pp. 38-47. ACM, New York,

NY, USA.

Chamberlain, S., Sharp, H., Maiden, N., 2006. Towards a framework for integrating agile development and user-centred

design, in Proceedings of 7th International Conference on Extreme Programming and Agile Processes in Software

Engineering, XP '06, Oulu, Finland, pp. 143-153. Springer-Verlag, Berlin, Heidelberg.

Dittrich, Y., John, M., Singer, J., Tessem, B., 2007. For the Special issue on Qualitative Software Engineering

Research. Information and Software Technology, 49 (6), 531-539.


Dittrich, Y., Lindeberg, O., 2004. How use-oriented development can take place. Information and Software Technology, 46 (8), 603-617.

Dittrich, Y., Rönkkö, K., Eriksson, J., Hansson, C., Lindeberg, O., 2008. Cooperative method development. Empirical Software Engineering, 13 (3), 231-260.

Easterbrook, S., 2007. Empirical research methods for software engineering, in Proceedings of 22nd IEEE/ACM International Conference on Automated Software Engineering, ASE '07, Atlanta, Georgia, USA, p. 574. ACM, New York, NY, USA.

Highsmith, J., 2002. Agile Software Development Ecosystems. Addison-Wesley Longman Publishing Co., Inc., Boston, MA, USA.

Hudson, J., 2008. Beyond Usability to User Experience, in Proc. of UXEM: User Experience Evaluation Methods in Product Development Workshop at CHI '08.

Hussain, Z., Milchrahm, H., Shahzad, S., Slany, W., Tscheligi, M., Wolkerstorfer, P., 2009a. Integration of Extreme Programming and User-Centered Design: Lessons Learned, in: Abrahamsson, P., Marchesi, M., Maurer, F. (Eds.), Agile Processes in Software Engineering and Extreme Programming, vol. 31, pp. 174-179. Springer, Berlin Heidelberg.

Hussain, Z., Slany, W., Holzinger, A., 2009b. Current State of Agile User-Centered Design: A Survey, in: Holzinger, A., Miesenberger, K. (Eds.), HCI and Usability for e-Inclusion, LNCS 5889, pp. 416-427. Springer, Berlin Heidelberg.

ISO 9241-11, 1998. Ergonomic requirements for office work with visual display terminals (VDTs) -- Part 11: Guidance on usability.

ISO 9241-210, 2010. Ergonomics of human-system interaction -- Part 210: Human-centred design for interactive systems.

Justinmind Prototyper. http://www.justinmind.com. Last access: February 22, 2013.

Karlsson, L., Dahlstedt, Å.G., Regnell, B., Natt och Dag, J., Persson, A., 2007. Requirements engineering challenges in market-driven software development – An interview study with practitioners. Information and Software Technology, 49 (6), 588-604.

Kautz, K., 2011. Investigating the design process: participatory design in agile software development. Information Technology & People, 24 (3), 217-235.

Lallemand, C., 2011. Toward a closer integration of usability in software development: a study of usability inputs in a model-driven engineering process, in Proceedings of 3rd ACM SIGCHI Symposium on Engineering Interactive Computing Systems, EICS '11, Pisa, Italy, pp. 299-302. ACM, New York, NY, USA.

Larusdottir, M.K., 2012. User Centred Evaluation in Experimental and Practical Settings. PhD Thesis, KTH Royal Institute of Technology, Stockholm, Sweden.

Lethbridge, T.C., Sim, S., Singer, J., 2005. Studying Software Engineers: Data Collection Techniques for Software Field Studies. Empirical Software Engineering, 10 (3), 311-341.

Nielsen, J., 1993. Usability Engineering. Morgan Kaufmann.

Nørgaard, M., Hornbæk, K., 2006. What do usability evaluators do in practice?: an explorative study of think-aloud testing, in Proceedings of Conference on Designing Interactive Systems, DIS '06, pp. 209-218. ACM, New York, NY, USA.

Robinson, H., Segal, J., Sharp, H., 2007. Ethnographically-informed empirical studies of software practice. Information and Software Technology, 49 (6), 540-551.

Rogers, Y., Sharp, H., Preece, J., 2011. Interaction Design: Beyond Human-Computer Interaction, 3rd ed. Wiley.


Rönkkö, K., Hellman, M., Dittrich, Y., 2008. PD method and socio-political context of the development organization, in Proceedings of 10th Anniversary Conference on Participatory Design, PDC '08, Bloomington, Indiana, pp. 71-80. Indiana University.

Rönkkö, K., Hellman, M., Kilander, B., Dittrich, Y., 2004. Personas is not applicable: local remedies interpreted in a wider context, in Proceedings of 8th Conference on Participatory Design: Artful Integration: Interweaving Media, Materials and Practices, PDC '04, Toronto, Ontario, Canada, pp. 112-120. ACM, New York, NY, USA.

Rosenbaum, S., Bloomer, S., Rinehart, D., Rohn, J., Dye, K., Humburg, J., Nielsen, J., Wixon, D., 1999. What makes strategic usability fail?: lessons learned from the field, in CHI '99 Extended Abstracts on Human Factors in Computing Systems, CHI EA '99, Pittsburgh, Pennsylvania, pp. 93-94. ACM, New York, NY, USA.

Rosenbaum, S., Rohn, J.A., Humburg, J., 2000. A toolkit for strategic usability: results from workshops, panels, and surveys, in Proceedings of SIGCHI Conference on Human Factors in Computing Systems, CHI '00, The Hague, The Netherlands, pp. 337-344. ACM, New York, NY, USA.

Roto, V., Ketola, P., Huotari, S., 2008. User Experience Evaluation in Nokia, in Proc. of UXEM: User Experience Evaluation Methods in Product Development Workshop at CHI '08.

Schön, D.A., 1983. The Reflective Practitioner: How Professionals Think in Action. Basic Books, New York, USA.

Seaman, C.B., 1999. Qualitative methods in empirical studies of software engineering. IEEE Transactions on Software Engineering, 25 (4), 557-572.

Seffah, A., Donyaee, M., Kline, R.B., Padda, H.K., 2006. Usability measurement and metrics: A consolidated model. Software Quality Control, 14 (2), 159-178.

Sharp, H., deSouza, C., Dittrich, Y., 2010. Using ethnographic methods in software engineering research, pp. 491-492. ACM.

Väänänen-Vainio-Mattila, K., Roto, V., Hassenzahl, M., 2008. Now let's do it in practice: user experience evaluation methods in product development, in CHI '08 Extended Abstracts on Human Factors in Computing Systems, CHI EA '08, Florence, Italy, pp. 3961-3964. ACM, New York, NY, USA.

Venturi, G., Troost, J., 2004. Survey on the UCD integration in the industry, in Proceedings of 3rd Nordic Conference on Human-Computer Interaction, NordiCHI '04, Tampere, Finland, pp. 449-452. ACM, New York, NY, USA.

Vredenburg, K., Mao, J.-Y., Smith, P.W., Carey, T., 2002. A survey of user-centered design practice, in Proceedings of SIGCHI Conference on Human Factors in Computing Systems, CHI '02, Minneapolis, Minnesota, USA, pp. 471-478. ACM, New York, NY, USA.

Wagner, E.L., Piccoli, G., 2007. Moving beyond user participation to achieve successful IS design. Communications of the ACM, 50 (12), 51-55.