


Computers in Human Behavior 25 (2009) 1241–1250


Intranet satisfaction questionnaire: Development and validation of a questionnaire to measure user satisfaction with the Intranet

Javier A. Bargas-Avila *, Jonas Lötscher, Sébastien Orsini, Klaus Opwis
University of Basel, Faculty of Psychology, Department of Cognitive Psychology and Methodology, Missionsstrasse 62a, 4055 Basel, Switzerland

Article info

Article history: Available online 8 August 2009

Keywords: Intranet, Enterprise portal, Questionnaire, Survey, Measure, User satisfaction, Usability

0747-5632/$ - see front matter © 2009 Elsevier Ltd. All rights reserved.
doi:10.1016/j.chb.2009.05.014

* Corresponding author. Tel.: +41 61 2673522; fax: +41 61 2670632. E-mail address: [email protected] (J.A. Bargas-Avila).

Abstract

In recent years, Intranets have become increasingly important to their companies. Substantial investments have been made to provide crucial information and workflows to employees. In this context the question of quality assurance arises: how can user satisfaction with the Intranet be measured? This article presents the development of a questionnaire to measure user satisfaction with the Intranet. After a first validation of the instrument (18 items) in an international insurance company (N1 = 881), a final set of 13 items remained. These were tested with the Intranet of a national retail company (N2 = 1350). The final version showed a high internal consistency (Cronbach's α) of .89, good item difficulties (.36–.73) and discriminatory power coefficients (.48–.73), as well as a moderate average homogeneity of .44. An exploratory factor analysis revealed two factors, "Content Quality" and "Intranet Usability", explaining 56.54% of the variance. Meanwhile, the survey was translated into 10 languages: Chinese, English, French, German, Italian, Japanese, Portuguese, Russian, Slovenian, and Spanish.

© 2009 Elsevier Ltd. All rights reserved.

1. Introduction

An increasing number of companies are developing internal information platforms, referred to as Intranets (Hoffmann, 2001). These portals support thousands of employees in their daily work. In this context, it is crucial to implement methods to measure, maintain and optimize the quality of the Intranet from the users' point of view.

A possible method of measuring user satisfaction is via a questionnaire. Older tools such as that developed by Bailey and Pearson (1983) were made in a very different technological situation and are not suited to Intranets. Newer questionnaires such as those devised by Doll and Torkzadeh (1988) or Lewis (1995) proved to be reliable, but were developed as generic tools. Their questions do not cover important aspects of internal information and working platforms (e.g. quality of communication or search features).

The main goal of this work is to develop a questionnaire to measure user satisfaction with the Intranet. For the scale construction an explorative approach was chosen. The item analysis is based on classic test construction theories. This article begins with a short description of the theoretical background (Section 2). Then the first and second validation of the survey are presented (Sections 3 and 4). In the last sections (5 and 6) further aspects and the results of the Intranet Satisfaction Questionnaire (ISQ) validation are discussed.



2. Theoretical background

2.1. Intranet and user satisfaction

An "Intranet" is defined as a network of linked computers to which only a restricted group of an organization's members have access (Hoffmann, 2001). These links are usually based on common Internet technologies such as TCP/IP and HTTP protocols, naming conventions, and mark-up languages. There are three main factors that make these technologies very attractive to companies: (1) worldwide access through the global address system URI; (2) easy integration of different text, graphic, audio and video formats; and (3) it is easy to link these resources to each other with hyperlinks (Kaiser, 2000). These factors open up almost infinite possibilities for companies to provide new communication, information and knowledge pools that are targeted only to their employees. Intranets usually include common features such as news, forms, business processes or employee search, and sometimes also more advanced applications such as transaction processing systems, management information systems, decision support and expert systems.

The construct "user satisfaction" in connection with computers is most often described as affective attitude. Bailey and Pearson (1983, p. 531) define user satisfaction as the "sum of one's positive and negative reactions to a set of factors". Doll and Torkzadeh (1988, p. 261) describe it as "the affective attitude toward a specific computer application by someone who interacts with the application directly". Current authors also describe it as an affective


reaction to a given information system (Fisher & Howell, 2004). For this work, user satisfaction is regarded as a "psychological tendency that is expressed by evaluating a particular entity with some degree of favor or disfavor" (Eagly & Chaiken, 1998, p. 296).

According to Huang, Yang, Jin, and Chiu (2004), user satisfaction is the most often used construct to measure the success of an information system. Nevertheless, there is little research that shows under which conditions user satisfaction arises. There is also little consensus as to how user satisfaction should be operationalized and many studies do not provide a theoretical basis for these operationalizations (Melone, 1990).

The Expectancy Value Theory of Fishbein (1967) explains that an attitude can be understood as the sum of the products of a person's expectations and evaluations (see Fig. 1). Adapted to this work, A0 stands for user satisfaction with the Intranet, bi are the expectations that the Intranet provides certain attributes, and ei are the evaluations of these attributes.

In order to measure user satisfaction, it is therefore theoretically necessary to know all expectations (b's). According to Gizycki (2002), web users have certain expectations regarding efficiency and effectiveness. If these expectations are fulfilled it is to be assumed that users will be satisfied with the system. In this concept, user satisfaction is regarded as the output of a comparison process of expectations and the perceived performance level of the application (Töpfer & Mann, 1999; Schneider, 2000).
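Fishbein's expectancy-value model, which the paper presents only graphically in Fig. 1, is conventionally written as follows (standard notation; the symbols match those in the paragraph above):

\[ A_0 = \sum_{i=1}^{n} b_i \, e_i \]

where A0 is the attitude toward the object (here: user satisfaction with the Intranet), bi is the strength of the expectation that the Intranet provides attribute i, and ei is the evaluation of that attribute.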

It is therefore expected that user satisfaction with the Intranet is reached if the expectations of its users are fulfilled. The items of this questionnaire will focus on capturing the cognitive components of the user's attitude toward the Intranet. Variables that may also influence user satisfaction, such as participation in the development of a system (McKeen, Guimaraes, & Wetherbe, 1994), usage frequency (Al-Gahtani & King, 1999), attitudes towards computers (Shaft, Sharfman, & Wu, 2004), and computer anxiety (Harrison & Rainer, 1996), are not covered.

2.2. User satisfaction questionnaires

In this section a brief overview of existing questionnaires that were developed to measure user satisfaction in different contexts is provided.

Bailey and Pearson (1983) developed a tool with 39 items to measure user satisfaction with computers. The questionnaire dates from over 20 years ago, when computers had very limited possibilities and were mostly used in data processing. Therefore several items deal solely with the satisfaction of the data-processing personnel.

Technological advancements and the development of interactive software created a need for usable interfaces. Doll and Torkzadeh (1988) developed a questionnaire with 12 items designed to measure ease of use with specific applications. They postulated that user satisfaction is composed of five factors (content, accuracy, format, ease of use and timeliness). Harrison and Rainer (1996) confirmed the validity and reliability of this tool and showed that it could be used as a generic measuring tool for computer applications.

Lewis (1995) developed and validated another questionnaire with 19 items, referred to as the "Computer Usability Satisfaction Questionnaires" (CSUQ). He regards usability as the prime factor influencing user satisfaction (hence the name). The analysis of his data revealed three factors influencing user satisfaction: system usefulness, information quality and interface quality.

Fig. 1. Expectancy Value Theory (Fishbein, 1967).

Other authors developed satisfaction scales for specific areas such as online-shopping (McKinney, Yoon, & Zahedi, 2002), company websites (Muylle, Moenaert, & Despontin, 2004), business-to-employee systems (Huang et al., 2004), mobile commerce interfaces (Wang & Liao, 2007), knowledge management systems (Ong & Lai, 2007), and the information systems of small companies (Palvia, 1996). Holzinger, Searle, Kleinberger, Seffah, and Javahery (2008) developed metrics specially targeted at the elderly population.

A brief comparison of the items and scales of the named instruments reveals two main areas for consideration: (1) questions dealing with the information offered by the system and (2) items referring to the design of the human–computer interface. Therefore the development of this instrument will be based on these two topics (see Sections 2.3 and 2.4), referred to as "content quality" (1) and "Intranet usability" (2).

2.3. Quality of information

An approach to determine data quality was made by Wang and Strong (1996). In their work, they found four levels of data quality:

(1) Intrinsic data quality: this covers attributes that are embedded within the data such as accuracy, objectiveness or credibility.

(2) Contextual data quality: this is given when the data are available in such a way that they fit into the workflow. According to Wang and Strong (1996), these are attributes such as relevance, added value, timeliness, and adequateness.

(3) Representational data quality: this describes how easily the data can be understood and interpreted.

(4) Accessibility data quality: this describes how easily access (search possibilities and information channels) to the information is granted.

According to Wang and Strong (1996, p. 22), high data quality can be described as follows: "High-quality data should be intrinsically good, contextually appropriate for the task, clearly represented, and accessible to the data consumer". For the construction of the ISQ, it is assumed that users expect a high quality of the information provided. This factor will subsequently be called "Content Quality".

2.4. Quality of human–computer interface

To conduct successful interactions with the Intranet, users have to form a correct mental representation of the dialogue system (Spinas, 1987). This means that they form a mental map of the content (information and functions) and the organization (structural criteria and access ways). Users should always be able to answer the questions "Where am I? What can I do here? How did I arrive here? Where can I go to? And how can I get there?" (Baitsch, Katz, Spinas, & Ulich, 1989). To ensure this, the dialogue has to be built in accordance with the ways that users process information (Spinas, 1987; Rauterberg, 1994; Herczeg, 1994; Rosson & Carroll, 2002; Holzinger, 2005). This topic is most commonly described as "usability".

There have been numerous attempts to determine generic rules and standards to ensure high usability in computer applications. The International Organization for Standardization (ISO) describes seven principles for designing usable dialogue systems: (1) suitability for the task, (2) self-descriptiveness, (3) controllability, (4) conformity with user expectations, (5) error tolerance, (6) suitability for individualization and (7) suitability for learning (ISO, 1998).

For the construction of the ISQ it is assumed that the employees of a company expect a usable and efficient Intranet that enables


them to fulfill their tasks. If these expectancies are not met, users will often react with negative feelings. There are many authors who have developed items to measure the quality of the human–computer interface (e.g. Lewis, 1995). We will refer to this as "Intranet usability".

2.5. Generic vs. specific items: the need for a new questionnaire

The focus of most satisfaction scales lies in the development of application-independent instruments to enable their use in various contexts. Considering the broad range of tools and applications, this is often a very difficult task. However, it can be assumed that users have context-specific expectancies that arise depending on the information system used (here: the Intranet).

In 2003, the consulting company Stimmt AG used the generic scale Computer System Usability Questionnaire (Lewis, 1995) to measure user satisfaction in 16 Intranets (Oberholzer, Leuthold, Bargas-Avila, Karrer, & Glaeser, 2003). They found that the instrument was only partially suited to use for Intranets. Many employees complained about the superficial questions, and several items were not applicable at all in this context. Intranet managers noted that the surveys left several important questions open (e.g. regarding the quality of the search engine, the navigation or the up-to-dateness of information).

These observations led to the conclusion that there is a clear need for a new survey. The ISQ must include specific questions regarding topics that are relevant in the context of an Intranet.

3. Development and first validation

This section describes how the first version of the ISQ was developed and validated.

3.1. Development of the ISQ

3.1.1. Scale

Likert scales were used for all items. In this well-established method, respondents specify their level of agreement or disagreement with a positive statement. Here, a six-point scale was used with labeled extreme points (1 = I strongly disagree, 6 = I fully agree). A higher number expresses a higher level of agreement, and thus satisfaction. For the rating scale, interval measurement is assumed, allowing the corresponding statistical validations. The assumption of interval measurement for a rating scale without prior empirical validation is a widely used research practice (Bortz, 1999).

To ensure a high reliability and validity, use of between five and seven categories for a Likert scale is recommended (Borg, 2001). With the six-point scale, participants have three options for a positive and three for a negative attitude. Thus the user is "forced" to choose a direction (no neutral point is provided). For this instrument, an even number of options was chosen because research shows that a neutral/middle option has several disadvantages (Rost, 2004). According to Mummendey (2003), participants choose the middle option for multiple reasons: (1) they do indeed have a neutral attitude, (2) they do not know how to answer, (3) they think the question is irrelevant, (4) they refuse to answer, or (5) they want to express their dislike of the question (protest answer). The middle option does not therefore always measure the intended neutral attitude. Additionally, it has been shown that motivated participants often avoid using the neutral category (Rost, 2004).

The questionnaire contains items that need certain usage experience with the Intranet to be answered. It cannot be ruled out, however, that some participants are unable to answer a question

due to lack of knowledge. For such cases, an alternative option was introduced to the instrument ("I can't answer this question"). It can be argued here that this leads to similar problems as when an uneven Likert scale with a neutral option is used. Participants can choose this option to express different phenomena: (1) they do not know how to answer the question or (2) they do indeed have a neutral attitude and therefore are not able to express their opinion. Despite this disadvantage, a six-point Likert scale with an alternative option was chosen, because it seemed to offer the best compromise.
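In data terms, this scale amounts to a numeric code 1–6 plus a missing-value marker. A minimal sketch of such a coding (the string "NA" standing in for the "I can't answer this question" option is our own placeholder, not from the original survey):

```python
import numpy as np

# Hypothetical raw answers on the six-point ISQ scale; "NA" marks the
# "I can't answer this question" option and is coded as missing (NaN).
raw = ["5", "6", "NA", "3", "4"]
coded = np.array([np.nan if r == "NA" else float(r) for r in raw])

# Missing answers are excluded from the mean instead of biasing it.
mean_satisfaction = np.nanmean(coded)  # mean of 5, 6, 3, 4 -> 4.5
```

Coding the alternative option as NaN rather than as a scale value keeps it out of all descriptive statistics, which matches the treatment of missing values described in Section 3.3.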

3.1.2. Length

The ISQ is intended to be used in a business context. Therefore it is crucial that participants do not have to spend more than 10–15 min answering the survey. This is in line with recommendations made by Batinic (1999). After reference to similar instruments (Lewis, 1995; Doll & Torkzadeh, 1988), a maximum of 20 items was chosen for the first version of the ISQ.

3.1.3. Item-generation for the first version

Based on screening of the theoretical approaches and empirical data, a first item pool was generated by the authors and a full-time Intranet manager. These items were screened and unified into a first draft of the ISQ. This draft was revised by three Intranet managers. They were instructed to evaluate whether the questions were suited to measuring the construct "user satisfaction with the Intranet", whether they were easy to understand, and to check whether important aspects had been missed. Based on their feedback, the first version of the ISQ, containing 18 items, was finalized.

The version in Table 1 was used for the first validation of the ISQ.

3.2. Methodology

3.2.1. Experimental procedure

To validate the ISQ, it was implemented as an online survey and tested in cooperation with an international insurance company employing about 6000 people. To retain anonymity, the company will be called "Company A".

All registered data were logged electronically in a database of Company A. The questionnaire was conducted in three languages (German, French, Italian), but due to the small sample size of the French and Italian versions, only the German version will be considered here.

The survey started with a short introductory text that highlighted the importance of employees' feedback, the type of questions, the length of the survey and the anonymity of this enquiry. On the next pages, all 18 questions (see Table 1) were presented one by one. When submitting incomplete questions, users were forced by posterior error messages (Bargas-Avila, Oberholzer, Schmutz, de Vito, & Opwis, 2007) to choose one of the given options. Each item was provided with a free text commentary field. These qualitative data were recorded as an additional information source but could also be left blank. After the ISQ questions had been answered, the survey ended with some demographic items. The demographic part was positioned at the end to avoid the ISQ answers being influenced by concerns that feedback could be backtracked (Rogelberg, 2002).

The survey was online in October 2005 for 18 days. After 14 days, the employees received a brief reminder by e-mail encouraging them to participate during the next 4 days. Such reminders have been shown to improve return rates (Batinic & Moser, 2005).

3.2.2. Sample

In total, 2359 employees (users of the main Intranet) were asked by e-mail to participate in the survey, leading to 881 valid


Table 1. First version of the ISQ (translated by the authors).

No. Item
1.  The Intranet delivers relevant content for my work
2.  The content in the Intranet is up-to-date
3.  The Intranet is arranged in a manner that it is easy to orientate and to find the desired content
4.  The way information is communicated in the Intranet is clear and coherent
5.  I am familiar with the usage of the Intranet so I can apply it optimally for my needs
6.  The Intranet facilitates internal communication (e.g. via an employee-directory, messages from the management or discussion-forums etc.)
7.  When needed I can access the Intranet anytime I want
8.  I am satisfied with the quality of the search engine. It delivers good and useful results and is suited to find specific content or documents
9.  If something in the Intranet is outdated, wrong or incomplete, it is easy to contact the responsible person
10. The Intranet enables me to work more efficiently (e.g. internal workflows, accessing support or information retrieval)
11. If I want to publish a message or a document in the Intranet, I know how to proceed
12. With the Intranet I can work fast (e.g. fast page and document download)
13. The Intranet is easy to use (e.g. personalization, handling the employee-directory)
14. I am satisfied with the help and support I get when I encounter a problem or have a question regarding the Intranet (e.g. online-help or help-desk)
15. The Intranet constantly provides up-to-date company news
16. I encounter the work-relevant information on the Intranet in a format I can easily handle
17. I can rely on the information in the Intranet
18. Overall I am satisfied with the Intranet

Table 3. Statistical parameters of the first validation.

Item   N    M     SD    Mode   S      K      pv
1      881  4.82  1.17  5     -1.14   0.98   0.683
2      881  4.77  0.93  5     -0.84   1.04   0.657
3      881  3.37  1.27  4     -0.07  -0.73   0.36
4      881  4.41  0.99  5     -0.86   0.93   0.567
5      881  4.35  1.11  5     -0.72   0.18   0.558
6      881  4.35  1.16  5     -0.62  -0.08   0.56
7      881  5.15  1.06  6     -1.72   3.41   0.767
8      881  3.3   1.38  4     -0.14  -0.9    0.356
9      881  3.93  1.06  4     -0.41   0.41   0.463
10     881  4.21  1.1   4     -0.54   0.06   0.526
11     881  3.27  1.64  1      0.11  -1.09   0.38
12     881  4.25  1.13  5     -0.8    0.39   0.538
13     881  4.38  1.07  5     -0.81   0.65   0.564
14     881  4.66  0.95  5     -1.01   1.54   0.629
15     881  4.89  0.96  5     -1.11   1.67   0.69
16     881  4.27  1.05  5     -0.67   0.44   0.537
17     881  4.9   1.11  5     -1.21   1.36   0.7
18     881  4.49  1     5     -0.8    0.66   0.586

Missing values = EM; SE(S) = .082; SE(K) = .165.


responses (27 responses had to be excluded for a variety of reasons; see Section 3.3).

Due to the internal policies of Company A, only a reduced set of demographic variables could be requested. Of the participants, 25.1% were aged below 30 years, 31.3% were aged between 31 and 40 years, 27.9% were aged 41–50 years and 15.7% were aged 51 or more years. A total of 26.8% of the participants had worked for Company A for 1–5 years, 23.2% from 6 to 10 years and 41% for more than 10 years. Only 8.3% had been employed for less than 1 year, showing that the majority of participants had been able to use the Intranet for quite a while.

3.3. Results

In total, 908 responses were registered. Four participants were excluded because they had not answered at least 11 items. One response was discarded because the "I can't answer it" option had been chosen in more than half of the items. Four people were excluded because they answered all items exclusively with the best or the worst item score. Finally, 18 participants were excluded because the boxplot showed their answers to be clear statistical outliers. The sample size for the item analysis therefore consists of 881 participants. Table 2 gives an overview of the total missing values for each item after exclusion of the 27 participants; Table 3 shows the statistical parameters of the first validation.
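These exclusion rules can be sketched as a filter over the response matrix. The following is our own illustrative reconstruction, not the authors' code: thresholds follow the text, and the boxplot rule is approximated with the common 1.5 × IQR criterion, which the paper does not specify.

```python
import numpy as np

def exclusion_mask(X, min_answered=11):
    """Return True for respondents to KEEP. X is a respondents x items
    matrix, with NaN coding the 'I can't answer it' option."""
    n_items = X.shape[1]
    answered = (~np.isnan(X)).sum(axis=1)

    too_few = answered < min_answered                   # rule 1: too few answers
    too_many_na = (n_items - answered) > n_items / 2    # rule 2: > half "can't answer"

    # rule 3: exclusively the best (6) or the worst (1) score used
    flat = np.array([np.nanmin(r) == np.nanmax(r) and
                     np.nanmax(r) in (1.0, 6.0) for r in X])

    # rule 4: boxplot-style outliers on the total score (1.5 * IQR fences)
    totals = np.nansum(X, axis=1)
    q1, q3 = np.percentile(totals, [25, 75])
    fence = 1.5 * (q3 - q1)
    outlier = (totals < q1 - fence) | (totals > q3 + fence)

    return ~(too_few | too_many_na | flat | outlier)
```

Applying the mask with `X[exclusion_mask(X)]` yields the cleaned sample used for the item analysis.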

All missing values are due to the "I can't answer it" option. Item 9 with 309 (= 35.1%) and item 14 with 239 (= 27.1%) missing cases differ strongly from the rest. These numbers are not very surprising: both questions concern topics that may not have been needed by many employees ("contact content owner" and "get support").

To avoid sample size reduction with listwise and pairwise deletion, the Expectation-Maximization algorithm (EM) was used to replace the missing values. The replacement of missing values with EM has been proven to be a valid and reliable method and

Table 2. Overview of missing values for each item (first version).

Item           1    2    3    4    5    6    7    8    9
N              878  858  881  877  877  857  880  814  572
Missing count  3    23   0    4    4    24   1    67   309
Missing %      0.3  2.6  0.0  0.5  0.5  2.7  0.1  7.6  35.1

Item           10   11   12   13   14   15   16   17   18
N              851  753  842  845  642  871  839  875  880
Missing count  30   128  39   36   239  10   42   6    1
Missing %      3.4  14.5 4.4  4.1  27.1 1.1  4.8  0.7  0.1

outclasses listwise and pairwise deletion in many aspects (Schafer & Graham, 2002; Allison, 2001). There are virtually no differences between all values and the EM values (see Table 2).
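The EM replacement can be illustrated with a simplified sketch. Note that this is our own minimal reconstruction, not the authors' implementation: it iterates conditional-mean imputation under a multivariate normal model and omits the covariance correction term of the full EM algorithm.

```python
import numpy as np

def em_impute(X, n_iter=50):
    """Iteratively replace NaNs with their conditional expectation under a
    multivariate normal fit. Simplified EM sketch: the covariance correction
    term of the full algorithm is omitted for brevity."""
    X = X.astype(float).copy()
    miss = np.isnan(X)
    # start from column means of the observed values
    col_means = np.nanmean(X, axis=0)
    X[miss] = np.take(col_means, np.where(miss)[1])
    for _ in range(n_iter):
        mu = X.mean(axis=0)
        cov = np.cov(X, rowvar=False) + 1e-6 * np.eye(X.shape[1])
        for i in range(X.shape[0]):
            m = miss[i]
            if not m.any():
                continue
            o = ~m
            # conditional mean of the missing entries given the observed ones
            reg = cov[np.ix_(m, o)] @ np.linalg.inv(cov[np.ix_(o, o)])
            X[i, m] = mu[m] + reg @ (X[i, o] - mu[o])
    return X
```

On strongly correlated items this converges toward the regression-predicted value rather than the plain column mean, which is why EM preserves the item intercorrelations better than listwise or pairwise deletion.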

For interval-scaled item responses, it is advisable to calculate the discriminatory power with a product-moment correlation of the item score with the test score (Fisseni, 1997). Table 4 lists the discriminatory power and Cronbach's α for each item. The discriminatory coefficients range between .30 (item 11) and .70 (item 10) with a mean of .54 (SD = .54). Five items show a coefficient below .50 (items 1, 7, 8, 9 and 11). According to Borg and Groenen (1997), the lowest acceptable discriminatory power is .30. Item 11 falls in this category with a value of .30. The rest of the items are in an acceptable to good range.
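The statistics reported in Table 4 can be reproduced from a response matrix with a few lines of NumPy. A sketch using the standard classical-test-theory formulas (the function and variable names are ours):

```python
import numpy as np

def cronbach_alpha(X):
    """Cronbach's alpha for a respondents x items matrix."""
    k = X.shape[1]
    item_vars = X.var(axis=0, ddof=1)
    total_var = X.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

def corrected_item_total(X):
    """Discriminatory power: correlation of each item with the sum of
    the remaining items (corrected item-total correlation)."""
    total = X.sum(axis=1)
    return np.array([np.corrcoef(X[:, j], total - X[:, j])[0, 1]
                     for j in range(X.shape[1])])
```

The "alpha if item deleted" column of Table 4 then corresponds to `cronbach_alpha(np.delete(X, j, axis=1))` for each item j.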

The homogeneity examines whether all items of the ISQ measure the same construct ("user satisfaction") and whether there are items that overlap (measure similar aspects of the construct). If the items of a test correlate with each other, it can be assumed that they measure similar aspects of the common construct. This topic can be explored in the intercorrelation matrix (see Table 5). It shows significant correlations for all items (p < .01) with no negative correlations.

The correlations with the global item 18 range from .22 to .62, with items 10, 13 and 16 showing the highest (r = .60 or higher). Mid-level correlations are found for items 3, 4, 6, 12, 14 and 15


Table 4. Discriminatory power and Cronbach's α for each item (first version).

Item  Corrected item-total correlation  Alpha if item deleted
1     .4977                             .8818
2     .5651                             .8799
3     .5659                             .8792
4     .6396                             .8772
5     .5399                             .8802
6     .6367                             .8765
7     .4403                             .8837
8     .3683                             .8882
9     .4553                             .8832
10    .6931                             .8747
11    .2994                             .8945
12    .5724                             .8790
13    .6240                             .8774
14    .5960                             .8789
15    .5837                             .8792
16    .6700                             .8758
17    .5082                             .8814

α(items 1–17) = .8869. N = 881; missing values = EM.


(.50 < r < .60), and finally items 11 (r = .22), 7 (r = .37) and 9 (r = .39) show very low correlations with the global item.

The intercorrelations of items 1 to 17 are relatively moderate. The average homogeneity index for the scale is at .35 and the homogeneity indices for each item range from .19 to .49, with the lowest values for items 7, 8, 9 and 11. One explanation for the relatively moderate indices could lie in the complexity of the construct "user satisfaction", a circumstance that requires the items to be heterogeneous to cover the whole spectrum.
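The per-item homogeneity index can be computed as the mean correlation of an item with every other item. A brief sketch under that standard definition (our own helper, not the authors' code):

```python
import numpy as np

def homogeneity_indices(X):
    """Mean inter-item correlation per item, excluding the item's
    self-correlation. X is a respondents x items matrix."""
    R = np.corrcoef(X, rowvar=False)
    k = R.shape[0]
    per_item = (R.sum(axis=1) - 1.0) / (k - 1)   # drop the diagonal 1
    return per_item, per_item.mean()             # item indices, scale average
```

The scale average returned here corresponds to the "average homogeneity index" reported in the text, and the per-item values to the H row of Table 5.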

Cronbach α for the first version of the ISQ is relatively high (α = .8869), indicating good reliability for this instrument. Item 18 represents the overall satisfaction expressed by users and was not taken into account in the calculation to avoid artificial inflation of Cronbach α. Table 4 shows that the internal consistency would be higher if items 11 and 8 were deleted (for item 8 only a small effect is observed).
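The internal consistency reported above follows the standard Cronbach formula, α = k/(k-1) · (1 - Σσ²ᵢ/σ²_total). A minimal sketch with a hypothetical helper name (the paper itself used a statistics package, and excluded the global item before computing α):

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's alpha for a list of item-score columns (one list per item).

    alpha = k / (k - 1) * (1 - sum of item variances / variance of total score)
    As in the paper, the global satisfaction item should be excluded before
    calling this, to avoid artificially inflating alpha.
    """
    k = len(items)
    n = len(items[0])
    totals = [sum(col[i] for col in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(pvariance(c) for c in items) / pvariance(totals))
```

Two perfectly parallel items yield α = 1.0; two uncorrelated items yield α = 0.0, which matches the formula's boundary behavior.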

3.4. Discussion

The validation of the ISQ shows promising results. At the same time, it becomes clear that there are several problematic items that need to be modified or deleted. The discussion of the scale and the

Table 5. Intercorrelation matrix and homogeneity indices for items 1-18 (first version), columns 1-9.

Item   1     2     3     4     5     6     7     8     9
1      1
2    .430    1
3    .241  .293    1
4    .320  .469  .482    1
5    .436  .336  .368  .406    1
6    .425  .372  .427  .456  .434    1
7    .310  .396  .165  .338  .250  .288    1
8    .122  .235  .414  .247  .110  .264  .160    1
9    .125  .302  .319  .332  .195  .288  .207  .307    1
10   .502  .412  .431  .464  .445  .543  .331  .312  .279
11   .103  .085  .251  .159  .309  .243  .056  .127  .253
12   .231  .314  .374  .407  .298  .392  .358  .218  .297
13   .267  .303  .485  .492  .382  .458  .293  .257  .303
14   .339  .389  .317  .411  .310  .386  .340  .232  .369
15   .389  .491  .297  .454  .347  .440  .378  .164  .285
16   .403  .363  .439  .509  .345  .455  .286  .340  .302
17   .378  .461  .186  .353  .308  .334  .338  .161  .308
18   .420  .498  .584  .565  .468  .528  .371  .436  .386
H    .320  .362  .357  .404  .338  .396  .286  .241  .286

All correlations are significant (p < .01). H, homogeneity coefficient. N = 881; Missing values = EM.

items of the ISQ form the basis for the second version of the ISQ (see Section 4).

3.4.1. Scale

There is a clear tendency to use the ISQ scale in the upper part of the six-point scale (see Fig. 2). This finding is not surprising: it can be expected that an international insurance company will probably develop an acceptable Intranet. Additionally, this finding is in line with survey results gathered with the Computer System Usability Questionnaire (Lewis, 1995) by the consulting group Stimmt AG (Oberholzer et al., 2003; Leuthold et al., 2004), where the mean satisfaction level of 16 companies was 4.9 on a 7-point Likert scale. For the ISQ this means that the instrument differentiates well in the upper scale range.

3.4.2. Items

In this section, only the problematic items will be discussed. An item can be regarded as problematic if the statistical parameters show insufficient values (see Section 3.3) and/or if analysis of users' comments (see Section 3.2.1) shows that participants misunderstood the corresponding item.

Item 1. The frequency distribution of item 1 shows a minor ceiling effect. The homogeneity index and the discriminatory power are relatively low. An exclusion might have been considered, but due to the importance of the item it was decided to leave it in the pool for a second testing.

Item 3. Although the statistical parameters for this question are satisfactory, the analysis of registered commentaries showed that many participants made comments about the search engine. Probably the last part of the item (". . . to find the desired content") triggered associations with the search engine in many users. This mix-up was not intended. To eliminate this problem the item was reformulated as the following statement: "The Intranet has a concise layout and a comprehensible structure".

Item 4. Compared to the new formulation of item 3, the statement of item 4 is not sufficiently different. Whereas the former is intended to measure the comprehensibility of the structure, the latter deals with the understandability of the content. Therefore item 4 has been reformulated as "When I read something on the Intranet the content is clear and understandable".

Item 5. The statistical parameters of item 5 are in a mid-level range and provide arguments both for maintaining and for eliminating the question. The item was intended to measure how familiar

Table 5 (continued), columns 10-18.

Item  10    11    12    13    14    15    16    17    18
10     1
11   .236    1
12   .446  .241    1
13   .474  .252  .487    1
14   .425  .203  .440  .456    1
15   .440  .127  .345  .413  .440    1
16   .568  .252  .465  .474  .444  .416    1
17   .413  .085  .317  .285  .382  .410  .433    1
18   .614  .221  .518  .600  .533  .503  .619  .487    1
H    .431  .188  .362  .393  .377  .373  .418  .332  .491


Fig. 2. Frequency distribution of the individual mean scale for items 1-18 (first version). N = 881; Mean = 4.32, SD = .69.


respondents are with using the Intranet. The analysis of the commentaries shows that many users made comments about the navigation and structure of the Intranet. This leads to the assumption that the item has strong overlaps with item 3. Therefore item 5 was discarded.

Item 7. The discriminatory power of item 7 is sufficient, but relatively low. Additionally, it shows a high difficulty index (also reflected in the observed ceiling effect). The question was intended to investigate whether users can access the Intranet whenever needed. It is probable that new server and Internet technologies have now eliminated downtimes, turning the technological uptime of the Intranet into a daily commodity for most employees. Based on this reasoning, it was decided to eliminate item 7 from the ISQ.

Item 8. Item 8 shows a relatively low discriminatory power and homogeneity index. Cronbach α could be slightly augmented by eliminating this question. On the other hand, this item shows a larger variance, leading to a better differentiation between participants (Kranz, 1979). The search engine item also shows lower means, a finding that is in line with previous studies (Oberholzer et al., 2003; Leuthold et al., 2004). Due to the importance of the search engine, the item will remain in the ISQ, but will be shortened. The new formulation is "When I search something with the Intranet search-engine I find the desired information within a reasonable amount of time".

Item 9. Item 9 stands out due to the high number of users who chose the answer "I can't answer this question" (35.1%). Additionally, the discriminatory power and homogeneity index are relatively low. The question was intended to investigate whether missing, outdated or wrong content could easily be reported to the content owner. It seems that many users do not know whether this can be done, probably because they have never done, or needed to do, this. Due to the weak statistical parameters and the high percentage of missing answers this item will be discarded from future versions of the ISQ.

Item 11. The statistical parameters of item 11 speak clearly for its elimination from the ISQ. It shows low homogeneity, discriminatory power and reliability. The question was intended to investigate whether users know how to publish information on the Intranet. This is probably a task that is not applicable to many participants, and so the item was deleted.

Item 17. This item shows similar statistical parameters to item 1. It was intended to investigate whether users think the information in the Intranet is believable. A brief inspection of the comments registered on this topic showed that many participants were irritated by this question and made remarks like "Why? Shouldn't I trust the information in the Intranet?" or "I really hope that if our company invests so much money in the Intranet, the information is at least to be relied on". In consideration of the relatively weak statistical parameters and the irritation caused, this item was discarded.

The validation of the 18 ISQ items led to the deletion of five (5, 7, 9, 11 and 17) and the modification of three (3, 4 and 6) questions. The second version of the scale therefore contains 13 items and was subjected to another validation (see next section).

4. Modification and second validation

This section describes how the second version of the ISQ was developed and validated.

4.1. Modifications for the second version of the ISQ

4.1.1. Scale and length

Considering the positive experiences and results of the first validation, it was decided to leave the scale of the ISQ unchanged from the first version (see Section 3.1.1).

The first validation led to the elimination of five items, resulting in 13 questions for inclusion in the second version of the ISQ. This is in line with the requirement of having an instrument that needs only 10-15 min to be completed.

4.1.2. Items for the second version

Based on the validation of the first version (see Section 3.3), the second version of the ISQ contained 13 items (see Table 6).

4.2. Methodology

4.2.1. Experimental procedure

To validate the ISQ, it was implemented as an online survey and tested in cooperation with a national retail company employing about 45,000 people. To retain its anonymity, the company will be called "Company B".

All registered data were logged electronically in a database of the Department of Psychology. The questionnaire was conducted in three languages (German, French, Italian), but due to the small sample sizes of the French and Italian versions, again only the German version will be considered here.

The survey started with a short introductory text that highlighted the importance of employees' feedback, the type of questions, the length of the survey and the anonymity of the enquiry. On the next pages all 13 questions (see Table 6) were presented one by one. This time users were not forced to choose one of the given options. Each item was again provided with a free-text commentary field. Due to the internal policies of Company B, it was not permissible to pose demographic questions.

The survey was online for 7 days. No reminder was sent because the return rate was very good.

4.2.2. Sample

In total, 7000 employees were asked by news posting to participate in the survey, leading to 1350 valid feedbacks (25 feedbacks had to be excluded for various reasons; see Section 4.3).

4.3. Results

A total of 1375 responses were registered. The missing data consist of unanswered items and "I can't answer it" statements. Twenty


Fig. 3. Frequency distribution of the individual mean scale for items 1-13 (second version). N = 1350; Mean = 4.30, SD = .78.

Table 6. Second version of the ISQ (translated by the authors).

No.  Item
1    The Intranet delivers relevant content for my work
2    The content in the Intranet is up-to-date
3    The Intranet has a concise layout and a comprehensible structure
4    When I read something on the Intranet the content is clear and understandable
5    The Intranet facilitates internal communication (e.g. via an employee-directory, messages from the management or discussion-forums etc.)
6    When I search something with the Intranet search-engine I find the desired information within a reasonable amount of time
7    The Intranet enables me to work more efficiently (e.g. internal workflows, accessing support or information retrieval)
8    With the Intranet I can work fast (e.g. fast page and document download)
9    The Intranet is easy to use (e.g. personalization, handling the employee-directory)
10   I am satisfied with the help and support I get when I encounter a problem or have a question regarding the Intranet (e.g. online-help or help-desk)
11   The Intranet constantly provides up-to-date company news
12   I encounter the work-relevant information on the Intranet in a format I can easily handle
13   Overall I am satisfied with the Intranet


participants were excluded because they had not answered at least seven items. Another response was discarded because the "I can't answer it" option had been marked in more than half of the items. In the case of the last four excluded participants, it was assumed that they did not fill out the questionnaire seriously, because they answered all 13 items exclusively with the best or the worst item score. The sample size for the item analysis therefore consists of 1350 participants. Table 7 gives an overview of the total missing values for each item after the exclusion of the 25 participants.

Most missing values are due to the "I can't answer it" option. Item 10, with 375 missing cases (= 27.8%), differs strongly from the other items, which all have less than 7% missing values. Only eight of these 375 cases are blank; the rest of the participants marked the option "I can't answer it". These figures suggest that most of the participants have so far not required any help concerning the Intranet and therefore chose not to answer.

Again, the Expectation-Maximization (EM) algorithm was used to replace missing values, leading to virtually no differences between the complete-case values and the EM values.
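The paper replaced missing values with the EM routine of a statistics package. As a rough illustration of the idea only (not the exact implementation), the sketch below runs an iterated regression imputation on a single incomplete item: the E-step fills each missing value with its conditional expectation given a predictor item, the M-step re-estimates the regression from the completed data. Function and variable names are hypothetical, and a full EM for multivariate normal data would also track the residual variance.

```python
def impute_em_style(x, y, tol=1e-6, max_iter=200):
    """Fill None entries of y by iterated regression imputation on x.

    E-step: replace each missing y[i] with its conditional expectation
    b0 + b1 * x[i]. M-step: re-estimate b0, b1 from the completed data.
    Iterates until the imputed values stabilize.
    """
    n = len(y)
    observed = [i for i in range(n) if y[i] is not None]
    y_hat = list(y)
    mean_obs = sum(y[i] for i in observed) / len(observed)
    for i in range(n):
        if y_hat[i] is None:          # start from mean imputation
            y_hat[i] = mean_obs
    for _ in range(max_iter):
        mx = sum(x) / n
        my = sum(y_hat) / n
        sxx = sum((xi - mx) ** 2 for xi in x)
        sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y_hat))
        b1 = sxy / sxx
        b0 = my - b1 * mx
        delta = 0.0
        for i in range(n):
            if y[i] is None:          # only missing entries are updated
                new = b0 + b1 * x[i]
                delta = max(delta, abs(new - y_hat[i]))
                y_hat[i] = new
        if delta < tol:
            break
    return y_hat
```

On toy data where the observed values lie exactly on y = 2x, the imputed entries converge to that line, while observed entries are left untouched.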

The average individual mean scale for the second ISQ is 4.30 (SD = .78). The lowest mean is 1.15 and the highest 5.95. Fig. 3 shows the mean scales for items 1-13. The distribution of the mean scale is again skewed to the right (skewness = -.69, SE = .07) and shows a sharp peak (kurtosis = .63, SE = .13). Kolmogorov-Smirnov and Shapiro-Wilk tests confirm that the acquired data are not normally distributed (p < .01).
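The standard errors quoted here (and under Table 8: SE_S = .067, SE_K = .133) are consistent with the common large-sample approximations sqrt(6/N) and sqrt(24/N); the quick check below assumes these were the formulas used (statistics packages sometimes apply small-sample corrections, which for N = 1350 round to the same values).

```python
import math

def skewness_se(n):
    """Large-sample standard error of skewness under normality: sqrt(6 / n)."""
    return math.sqrt(6 / n)

def kurtosis_se(n):
    """Large-sample standard error of excess kurtosis under normality: sqrt(24 / n)."""
    return math.sqrt(24 / n)

# For the second validation sample (N = 1350):
# skewness_se(1350) rounds to 0.067 and kurtosis_se(1350) to 0.133,
# matching the SE_S and SE_K reported with Table 8.
```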

The means for items 1-13 lie within a range of 1.59, given by the lowest score of item 3 (M = 3.46) and the highest score of item 11 (M = 5.05), with an average standard deviation of 1.13. Table 8 gives a brief overview of the most important statistical parameters.

The frequency distributions show for most items a slightly asymmetric distribution to the right. All items show a negative skewness, although some are almost zero. Most items show a sharper peak than a normal distribution, with item 11 (K = 2.67) having the sharpest one. Exceptions are items 3, 6, 8 and 9 (slightly negative kurtosis values).

The item difficulty indices (pv), shown in Table 8, are again measured with consideration of the variances. They are distributed between .36 (item 6) and .73 (item 11). The average index is .56 (SD = .11).

Table 7. Overview of missing values for each item (second version), items 1-6.

Item       1     2     3     4     5     6
N       1333  1330  1338  1348  1265  1306
Missing   17    20    12     2    85    44
%        1.3   1.5   0.9   0.1   6.3   3.3

Table 9 lists the discriminatory power and Cronbach α for each item of the second version. The coefficients range from .48 (item 6) to .73 (item 7) with a mean of .60 (SD = .54). This time, no questions show worryingly low or high values.

Regarding homogeneity, the intercorrelation matrix (see Table 10) shows significant correlations for all items (p < .01) with no negative correlations.

The correlations with the global item 13 range from a moderate to high level between .46 and .69. The average homogeneity index for the second version is now .44, and the homogeneity indices for each item range from .34 to .57. Again, the relatively moderate indices are explained by the complexity of the construct "user satisfaction", which requires the items to be heterogeneous to cover the whole spectrum.

Cronbach α for the second version of the ISQ is again relatively high (α = .89), confirming the good reliability of this survey. Item 13 represents the overall satisfaction expressed by users and was

Table 7 (continued), items 7-13.

Item       7     8     9    10    11    12    13
N       1288  1307  1292   975  1330  1289  1340
Missing   62    43    58   375    20    61    10
%        4.6   3.2   4.3  27.8   1.5   4.5   0.7


Table 8. Statistical parameters of the second validation.

Item   N     M     SD    Mode    S      K     pv
1     1350  4.55  1.13   5     -0.75   0.26  0.61
2     1350  4.85  0.97   5     -1.13   1.76  0.68
3     1350  3.46  1.38   4     -0.06  -0.84  0.39
4     1350  4.74  1.02   5     -1.07   1.44  0.65
5     1350  4.30  1.17   5     -0.65   0.09  0.55
6     1350  3.35  1.37   4     -0.06  -0.86  0.36
7     1350  4.15  1.10   4     -0.54   0.07  0.51
8     1350  4.05  1.22   5     -0.54  -0.28  0.50
9     1350  4.19  1.20   5     -0.56  -0.17  0.53
10    1350  4.50  1.04   5     -0.82   0.94  0.59
11    1350  5.05  0.95   5     -1.39   2.67  0.73
12    1350  4.38  1.09   5     -0.79   0.55  0.56
13    1350  4.34  1.09   5     -0.85   0.52  0.56

Missing values = EM; SE_S = .067; SE_K = .133.

Table 9. Discriminatory power and Cronbach α for each item (second version).

Item   Corrected item-total correlation   Alpha if item deleted
1      .5149                              .8864
2      .5198                              .8860
3      .5821                              .8839
4      .5722                              .8835
5      .6649                              .8783
6      .4848                              .8900
7      .7389                              .8746
8      .5831                              .8830
9      .7058                              .8759
10     .6302                              .8806
11     .5882                              .8830
12     .6640                              .8787

α(items 1-12) = .8908; N = 1350; Missing values = EM.


not taken into account in the calculation to avoid artificial inflation of Cronbach α. Table 9 shows that no item deletion brings a relevant improvement to the internal consistency.

In order to determine which factors were important for overall satisfaction (item 13), a linear regression was conducted, with the global item (13) as the dependent variable and the other 12 items as predictors. The model fit was R² = .741 (adjusted), with F(12, 1337) = 322 and a significance level of p < .005. In the linear regression model, the standardized beta coefficients reveal the importance of each item for the overall satisfaction level (see Table 11). For all items except item 4, the beta coefficients are significant (p < .05). The most important item is the one concerning the comprehensible structure (.301), followed by ease of use (.156)

Table 10. Intercorrelation matrix and homogeneity indices for items 1-13 (second version), columns 1-6.

Item    1      2      3      4      5      6
1       1
2     0.369    1
3     0.234  0.284    1
4     0.300  0.394  0.357    1
5     0.437  0.342  0.479  0.422    1
6     0.203  0.206  0.474  0.233  0.366    1
7     0.484  0.384  0.493  0.388  0.624  0.492
8     0.277  0.322  0.392  0.384  0.419  0.351
9     0.334  0.400  0.594  0.468  0.519  0.413
10    0.465  0.399  0.301  0.470  0.449  0.328
11    0.450  0.486  0.261  0.493  0.421  0.230
12    0.443  0.428  0.433  0.460  0.462  0.323
13    0.457  0.480  0.689  0.486  0.603  0.514
H     0.371  0.375  0.416  0.405  0.462  0.344

All correlations are significant (p < .01). H, homogeneity coefficient. N = 1350; Missing values = EM.

and reusable data format (.153). On the other hand, the item concerning content understandability (.007) is less important for predicting the overall satisfaction value. Altogether the regression analysis shows satisfactory values for the ISQ.
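The regression reported above used all 12 items as predictors; for intuition, the textbook closed form for standardized betas in the two-predictor case, computable directly from correlations such as those in Table 10, is sketched below. The function name is hypothetical and this is not the paper's computation.

```python
def standardized_betas_2(r_y1, r_y2, r_12):
    """Standardized regression weights for two predictors of one criterion.

    Closed form: beta_1 = (r_y1 - r_y2 * r_12) / (1 - r_12 ** 2),
    and symmetrically for beta_2. With 12 predictors the same logic
    requires inverting the full predictor correlation matrix.
    """
    denom = 1 - r_12 ** 2
    return (r_y1 - r_y2 * r_12) / denom, (r_y2 - r_y1 * r_12) / denom
```

For example, predicting the global item from items 3 and 9 alone (r = .689 and .689 with item 13, inter-item r = .594 in Table 10) gives betas of about .43 each; the full 12-predictor model naturally distributes this weight differently.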

4.4. Discussion

The second validation of the ISQ reveals a stable measuring instrument. All statistical parameters (item difficulties, discriminatory power, homogeneity and internal consistency) are within reasonable to good levels. However, some items must be critically discussed here.

Item 10. With 27.8% missing values, this item is clearly the weakest point of the ISQ. Is it adequate to retain an item where more than a fourth of participants chose not to provide an answer? The statement of item 10 refers to the quality of help and support employees receive when they encounter problems with the Intranet ("I am satisfied with the help and support I get when I encounter a problem"). Interactive applications should be designed in such a way that users are helped when they reach a dead end. Linderman and Fried (2004) refer to this topic as "Contingency Design". It seems reasonable to claim that the level of help and support will influence satisfaction with the Intranet. At the same time, there will always be users who never need help and therefore will not be able to answer this question. Given that all other statistical parameters are satisfactory and that the high level of missing values can be explained, it is justifiable to leave this item in the ISQ.

Item 13. The global item was used to measure the internal validity of the construct. It can be argued that after the successful validation of the survey this item could now be discarded. On the other hand, use of the ISQ in different Intranets has shown that it is often useful to have this item to cross-check the overall means of the survey. Therefore it is justifiable to leave this item in the ISQ.

The first validation showed the necessity of modifying some items. These modifications were shown to be adequate:

Item 3. The comments in the first version showed that many participants were referring to the search engine rather than to the information architecture. Modification of the statement almost eliminated this problem. In the second validation, only sporadic comments about the search engine were registered for this item.

Item 4. This question was modified to be clearly distinguishable from the modified item 3. There were no comments that suggested a possible mix-up.

Item 6 (no. 8 in version 1). This item showed relatively low discriminatory power (0.37) and homogeneity (0.24) in the first validation. Therefore it was shortened to facilitate understanding. The second version showed improved values for both discriminatory power (0.48) and homogeneity (0.34). Due to the importance of the search topic, this item should remain in the ISQ, even if these values are only in a moderate range.

Table 10 (continued), columns 7-13.

Item    7      8      9     10     11     12     13
7       1
8     0.515    1
9     0.554  0.519    1
10    0.508  0.410  0.488    1
11    0.431  0.354  0.414  0.476    1
12    0.539  0.432  0.485  0.478  0.504    1
13    0.672  0.533  0.689  0.554  0.494  0.632    1
H     0.507  0.409  0.490  0.444  0.418  0.468  0.567

Table 11. Beta coefficients for items 1-12 (second version).

Item                       1     2     3     4     5     6     7     8     9    10    11    12
Stand. beta coefficient  .053  .084  .301  .007  .065  .087  .118  .051  .156  .094  .045  .153

4.5. Exploratory factor analysis

To analyze whether the items can be grouped into factors, an exploratory factor analysis was conducted on the first 12 items of the ISQ (the global item was again not included). Two factors emerged with eigenvalues greater than 1.00, explaining 56.54% of the total variance. Accordingly, two factors were extracted and rotated using the Varimax with Kaiser normalization method. The factor loadings for the extracted factors are shown in Table 12.
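The paper extracted components with principal component analysis and varimax rotation in a statistics package. The pure-Python sketch below only illustrates the extraction criterion used above (retain components with eigenvalues greater than 1.00, Kaiser's criterion), via power iteration with deflation on a small hypothetical correlation matrix; it is not the paper's analysis.

```python
def leading_eigenvalues(corr, k=2, iters=1000):
    """Top-k eigenvalues of a small symmetric matrix (e.g. a correlation
    matrix) via power iteration with deflation."""
    n = len(corr)
    a = [row[:] for row in corr]          # work on a copy
    vals = []
    for _ in range(k):
        v = [1.0] * n
        for _ in range(iters):            # power iteration
            w = [sum(a[i][j] * v[j] for j in range(n)) for i in range(n)]
            norm = sum(x * x for x in w) ** 0.5
            v = [x / norm for x in w]
        lam = sum(v[i] * sum(a[i][j] * v[j] for j in range(n)) for i in range(n))
        vals.append(lam)
        for i in range(n):                # deflate the found component
            for j in range(n):
                a[i][j] -= lam * v[i] * v[j]
    return vals

# Hypothetical 3-item correlation matrix: items 1 and 2 correlate strongly.
corr = [[1.0, 0.8, 0.1], [0.8, 1.0, 0.1], [0.1, 0.1, 1.0]]
# leading_eigenvalues(corr) is approximately [1.824, 0.976]: only one
# eigenvalue exceeds 1, so Kaiser's criterion retains a single component.
```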

Based on the item loadings and the theoretical background (see Section 2), the characterization of the two factors is not too difficult. The items that load on the first factor represent the quality of the content (e.g. relevance, up-to-dateness, understandability). Hence this factor is named "Content Quality". The items that load on the second factor represent aspects of the Intranet's usability (comprehensible structure, search engine quality, speed of work, etc.). Hence this factor is named "Intranet Usability".

In conclusion, the data show evidence that the ISQ is based on a bi-dimensional structure, comprising factors that emphasize content quality and Intranet usability.

Table 12. Rotated factor loadings of the exploratory factor analysis for the ISQ.

                                            Factor 1           Factor 2
                                            Content Quality    Intranet Usability
Eigenvalues                                 5.593              1.191
Relevant content (item 1)                   .693               .147
Up-to-dateness (item 2)                     .688               .150
Comprehensible structure (item 3)           .141               .801
Content understandable (item 4)             .621               .297
Facilitates internal communication (item 5) .469               .576
Quality of the search engine (item 6)       .044               .766
Enables efficient work (item 7)             .472               .659
Performance/speed (item 8)                  .361               .580
Ease of use (item 9)                        .407               .687
Support quality (item 10)                   .663               .330
Quality of company news (item 11)           .788               .144
Data format is reusable (item 12)           .625               .413

Extraction method: principal component analysis. Rotation method: Varimax with Kaiser normalization.

Table 13. Statistical parameters of ISQ surveys in other companies.

Employees  Sector      N    % mis (min)  % mis (max)  pv (min)
2500       Technology  95   0            31.6         .436
11,000     Government  663  0            38.2         .584
6000       Technology  108  0            15.7         .345
19,000     Chemical    105  0            35.2         .495
8000       Finance     57   0            38.6         .407
2000       Technology  292  0            29.5         .487

mis, missing values; pv, item difficulty; RIT, discriminatory power; H, homogeneity; α, internal consistency.

5. Further aspects of the ISQ

5.1. Generalization

The ISQ was validated with the Intranets of two different service-industry companies. At this point, it could be argued that some numbers might be biased toward certain aspects of the validation objects. To counter these doubts, the ISQ was used in the Intranets of five additional companies, varying in size and sector. Table 13 summarizes the most important statistical parameters.

It can be seen that the statistical parameters of the ISQ, if applied in different Intranets, show similar values. This indicates that it can be used as a generic survey in the Intranets of different companies and sectors. Again, the highest percentage of missing values (approximately 30-40%) was found for item 10 (see Section 4.4) in all companies. In one case, item 10 showed only 15.7% missing values. In this company, the ISQ revealed in general very low satisfaction values. It was reported that there were severe problems with the usability and stability of this portal, which probably led to an increased need for help and support.

5.2. Language effects

All ISQ validations were made with the German version of the survey. The translations of the survey into Chinese, English, French, Italian, Japanese, Portuguese, Russian and Spanish were made by professional language services. The Slovenian version was translated by the language service of a Slovenian company. At the moment, there are insufficient statistical data to make direct comparisons of different languages within the same company. A qualitative analysis of the registered free-text commentary fields showed no feedback that could point to problematic translation effects. In the future it would be important to conduct multilingual surveys within large companies to make direct statistical comparisons and to examine possible language biases.

Table 13 (continued).

Employees  pv (max)  RIT (min)  RIT (max)  H (min)  H (max)  α (items 1-12)
2500       .727      .4675      .7978      .346     .617     .9041
11,000     .804      .5528      .6955      .405     .583     .9014
6000       .637      .4732      .7041      .327     .535     .8873
19,000     .683      .4598      .7968      .320     .579     .8878
8000       .760      .4871      .8073      .370     .596     .9114
2000       .751      .4416      .7172      .321     .583     .8969

6. Conclusions

Both validations of the ISQ show high Cronbach α values, evidence of excellent internal consistency. Furthermore, the second validation indicates that Cronbach α cannot be substantially increased by item exclusion. The homogeneity indices were raised to a moderate level in the second validation. Given that the construct of user satisfaction is complex and implies heterogeneous items, these values can be regarded as satisfactory. Thus, the overall reliability and validity of the ISQ are good. The items were corrected and approved by several Intranet managers, making it very likely that the most important aspects of user satisfaction with the Intranet were considered, which leads to good content validity. The criterion-related validity is measured by the correlations with the global item as an external criterion; this also shows satisfactory results. The objectivity concerning the independence of the experimenters can be disregarded in an online study. The validation of the ISQ seems to be company-independent, because comparison with other companies shows similar results.

In this work, a clear and identifiable need of the business sector was addressed: how can we measure user satisfaction with the Intranet? With an explorative approach, 18 items were generated and tested. The validation showed that they could be reduced to 13 items. The resulting measuring instrument was validated again, turned out to be stable, and can be recommended for use in the Intranet. In November 2006, the ISQ was offered via www.Intranetsatisfaction.com in various languages for free on the Internet. Since then, over 500 companies from around the world have downloaded the tool and dozens have already made use of it. This clearly confirms the need for such an instrument.

An Intranet is not a static construct. It is in constant modification and grows organically. New technologies open more and more possibilities for developing useful services for employees. On the other hand, the market changes and brings new challenges to companies, leading to new requirements for the Intranet. Due to these conditions, we consider that the current version of the ISQ will have a lifespan of 3-5 years. It will be necessary to monitor current and future trends, and at a given time to develop and validate a subsequent version.

References

Al-Gahtani, S. S., & King, M. (1999). Attitudes, satisfaction and usage: Factorscontributing to each in the acceptance of information technology. Behaviour &Information Technology, 18(4), 227–297.

Allison, P. D. (2001). Missing data. Vol. 136 of Sage University Papers: Quantitativeapplications in the social sciences. Sage Publications.

Bailey, J., & Pearson, S. (1983). Development of a tool for measuring and analyzingcomputer user satisfaction. Managment Science, 29(5), 530–545.

Baitsch, C., Katz, C., Spinas, P., & Ulich, E. (1989). Computerunterstützte Büroarbeit.Ein Leitfaden für Organisation und Gestaltung. Zürich: vdf.

Bargas-Avila, J., Oberholzer, G., Schmutz, P., de Vito, M., & Opwis, K. (2007). Usableerror message presentation in the world wide web: Do not show errors rightaway. Interacting with Computers, 19(3), 330–341.

Batinic, B. (1999). Online research: Methoden, Anwendungen und Ergebnisse.Internet und Psychologie. Göttingen: Hogrefe, Verl. für Psychologie.

Batinic, B., & Moser, K. (2005). Determinanten der Rücklaufquote in Online-Panels.Zeitschrift für Medienpsychologie, 17(2), 64–75.

Borg, I. (2001). Mitarbeiterbefragungen. In H. Schuler (Ed.), Lehrbuch derPersonalpsychologie. Goettingen: Hogrefe, Verlag für Psychologie.

Borg, I., & Groenen, P. (1997). Modern multidimensional scaling: Theory andapplications. Springer.

Bortz, J. (1999). Statistik für Sozialwissenschaftler (5th ed.). Berlin: Springer-Lehrbuch, Springer.

Doll, W. J., & Torkzadeh, G. (1988). The measurement of end-user computingsatisfaction. MIS Quaterly, 12(2), 259–274.

Eagly, E. A., & Chaiken, S. (1998). Attitude structure and function. In D. T. Gilbert, S.T. Fiske, & G. Lindzey (Eds.), Handbook of social psychology (4th ed.,pp. 269–322). New York: Mc Graw-Hill.

Fishbein, M. (1967). Readings in attitude theory and measurement. New York: JohnWiley.

Fisher, S. L., & Howell, A. W. (2004). Beyond user acceptance: An examination ofemployee reactions to information technology systems. Human ResourceManagement, 43(2 & 3), 243–258.

Fisseni, H.-J. (1997). Lehrbuch der psychologischen Diagnostik: Mit Hinweisen zurIntervention (2nd ed.). Göttingen: Hogrefe, Verlag für Psychologie.

Gizycki, G. v. (2002). Usability – Nutzerfreundliches Web-Design. In M. Beier & V.Gizycki (Eds.), Usability. Nutzerfreundliches Web-Design (pp. 1–17). Berlin:Springer.

Harrison, A. W., & Rainer, K. R. (1996). A general measure of user computingsatisfaction. Computers in Human Behavior, 12(1), 29–92.

Herczeg, M. (1994). Software-Ergonomie: Grundlagen der Mensch-Computer-Kommunikation. Bonn: Addison-Wesley Publishing.

Hoffmann, C. (2001). Das Intranet: ein Medium der Mitarbeiterkommunikation. Vol.9 of Medien und Märkte. UVK, Konstanz.

Holzinger, A. (2005). Usability engineering methods for software developers.Communications of the ACM, 48(1), 71–74.

Holzinger, A., Searle, G., Kleinberger, T., Seffah, A., & Javahery, H. (2008).Investigating usability metrics for the design and development of applicationsfor the elderly. In Proceedings of the 11th international conference on computershelping people with special Needs (pp. 98–105). Springer.

Huang, J.-H., Yang, C., Jin, B.-H., & Chiu, H. (2004). Measuring satisfaction withbusiness-to-employee systems. Computers in Human Behavior, 20, 17–35.

ISO (1998). Ergonomic requirements for office work with visual display terminals (VDTs). International Organization for Standardization (ISO).

Kaiser, T. M. (2000). Methode zur Konzeption von Intranets. Bamberg: Difo-Druck OHG.

Kranz, H. T. (1979). Einführung in die klassische Testtheorie. Vol. 8 of Methoden in der Psychologie. Frankfurt a. M.: Fachbuchhandlung für Psychologie.

Leuthold, S., Hah, E.-J., Karrer, L., Glaeser, M., Guth, I., Bargas-Avila, J., Oberholzer, G., & Springer, A. (2004). Stimmt Intranet Report 2004. Tech. rep., Stimmt AG.

Lewis, J. R. (1995). IBM computer usability satisfaction questionnaires: Psychometric evaluation and instructions for use. International Journal of Human-Computer Interaction, 7(1), 57–78.

Linderman, M., & Fried, J. (2004). Defensive design for the Web. Indianapolis: New Riders.

McKeen, J. D., Guimaraes, T., & Wetherbe, J. C. (1994). The relation between user participation and user satisfaction: An investigation of four contingency factors. MIS Quarterly, 18(4), 427–451.

McKinney, V., Yoon, K., & Zahedi, F. M. (2002). The measurement of web-customer satisfaction: An expectation and disconfirmation approach. Information Systems Research, 13(3), 296–315.

Melone, N. P. (1990). A theoretical assessment of the user-satisfaction construct in information system research. Management Science, 36(1), 76–91.

Mummendey, H. D. (2003). Die Fragebogen-Methode: Grundlagen und Anwendung in Persönlichkeits-, Einstellungs- und Selbstkonzeptforschung (4th ed.). Göttingen: Hogrefe.

Muylle, S., Moenaert, R., & Despontin, M. (2004). The conceptualization and empirical validation of web site user satisfaction. Information & Management, 41(5), 543–560.

Oberholzer, G., Leuthold, S., Bargas-Avila, J., Karrer, L., & Glaeser, M. (2003). Stimmt Intranet Report 2003. Tech. rep., Stimmt AG.

Ong, C., & Lai, J. (2007). Measuring user satisfaction with knowledge management systems: Scale development, purification, and initial test. Computers in Human Behavior, 23(3), 1329–1346.

Palvia, P. C. (1996). A model and instrument for measuring small business user satisfaction with information technology. Information & Management, 31, 151–163.

Rauterberg, M. (1994). Benutzerorientierte Software-Entwicklung: Konzepte, Methoden und Vorgehen zur Benutzerbeteiligung. Vol. 3 of Mensch Technik Organisation. Zürich: Verlag der Fachvereine.

Rogelberg, S. G. (2002). Handbook of research methods in industrial and organizational psychology. Vol. 1 of Blackwell handbooks of research methods in psychology. Oxford: Blackwell.

Rosson, M. B., & Carroll, J. M. (2002). Usability engineering: Scenario-based development of human–computer interaction. The Morgan Kaufmann series in interactive technologies. San Francisco: Morgan Kaufmann.

Rost, J. (2004). Lehrbuch Testtheorie-Testkonstruktion (2nd ed.). Bern: Hans Huber.

Schafer, J., & Graham, J. (2002). Missing data: Our view of the state of the art. Psychological Methods, 7(2), 147–177.

Schneider, W. (2000). Kundenzufriedenheit: Strategie, Messung, Management. Lerch: Verlag Moderne Industrie.

Shaft, T., Sharfman, M., & Wu, W. (2004). Reliability assessment of the attitude towards computers instrument (ATCI). Computers in Human Behavior, 20(5), 661–689.

Spinas, P. (1987). Arbeitspsychologische Aspekte der Benutzerfreundlichkeit von Bildschirmsystemen. Zürich: Administration & Druck AG.

Töpfer, A., & Mann, A. (1999). Kundenzufriedenheit als Messlatte für den Erfolg. In A. Töpfer & E. Bauer (Eds.), Kundenzufriedenheit messen und steigern (2nd ed., pp. 59–110). Hermann Luchterhand.

Wang, R. Y., & Strong, D. M. (1996). Beyond accuracy: What data quality means to data consumers. Journal of Management Information Systems, 12(4), 5–34.

Wang, Y., & Liao, Y. (2007). The conceptualization and measurement of m-commerce user satisfaction. Computers in Human Behavior, 23(1), 381–398.