
    Developing e-service quality scales: A literature review

    Riadh Ladhari

    Faculty of Business Administration, Laval University, Quebec, Canada

    Article info

    Keywords:

    E-service quality

    Scale development

    Dimensionality

    Psychometric properties

    Abstract

    This study reviews the literature on e-service quality (e-SQ), with an emphasis on the methodological

    issues involved in developing measurement scales and issues related to the dimensionality of the e-SQ

    construct. We selected numerous studies on e-SQ from well-known databases and subjected them to a

    thorough content analysis. The review shows that dimensions of e-service quality tend to be contingent

    on the service industry. Although some dimensions are commonly used in evaluating e-SQ regardless of the type of service on the internet (reliability/fulfilment, responsiveness, web design, ease of use/usability, privacy/security, and information quality/benefit), other dimensions are specific to particular e-service contexts. The study also identifies several conceptual and methodological limitations

    associated with developing e-SQ measurement such as the lack of a rigorous validation process, the

    problematic sample size and composition, the focus on functional aspects, and the use of a data-driven

    approach. This is the first study to undertake an extensive literature review of research on the

    development of e-SQ scales. The findings should be valuable to academics and practitioners alike.

    © 2010 Elsevier Ltd. All rights reserved.

    1. Introduction

    Online service quality has a significant influence on many important aspects of electronic commerce (e-commerce). These

    include consumer trust in an online retailer (Gefen, 2002; Hsu,

    2008; Hwang and Kim, 2007); site equity (Yoo and Donthu, 2001);

    consumer attitudes towards the site (Hausman and Siekpe, 2009;

    Yoo and Donthu, 2001); attitude toward e-shopping (Ha and Stoel,

    2009); perceived value of the products/services (Hsu, 2008);

    willingness to pay more (Fassnacht and Kose, 2007), user online

    satisfaction (Cristobal et al., 2007; Fassnacht and Kose, 2007; Ho

    and Lee, 2007; Lee and Lin, 2005); site loyalty intentions (Ho and

    Lee, 2007; Yoo and Donthu, 2001); site recommendation inten-

    tions (Long and McMellon, 2004); and cross-buying (Fassnacht

    and Kose, 2007). In view of the apparent importance of electronic

    service quality (e-SQ), Hsu (2008) contends that the achievement

    of superior online service quality should be the crucial differ-

    entiating strategy for all e-retailers; indeed, e-SQ has been

    increasingly recognised as the most important determinant of

    long-term performance and success for e-retailers (Fassnacht

    and Koese, 2006; Holloway and Beatty, 2003; Santos, 2003;

    Wolfinbarger and Gilly, 2003; Zeithaml et al., 2000, 2002). An

    understanding of how consumers evaluate e-SQ is thus of the

    utmost importance for scholars and practitioners alike (Fassnacht

    and Koese, 2006; van Riel et al., 2001). However, despite the

    obvious importance of the issue, the conceptualisation and

    measurement of e-SQ are still at an early phase of development

    (Cristobal et al., 2007; Fassnacht and Koese, 2006; Santos, 2003; van Riel et al., 2001) and studies in this field are still somewhat limited

    and disparate (Gounaris and Dimitriadis, 2003; Parasuraman et al.,

    2005). As Zeithaml et al. (2002, p. 371) note: "Rigorous attention to the concept of service quality delivery through Web sites is needed. This would involve a comprehensive examination of the antecedents, composition, and consequences of service quality."

    Against this background, the present study undertakes a

    comprehensive review of the current state of knowledge regard-

    ing e-SQ. In doing so, the study reviews the literature on e-SQ

    measurement models with a view to (i) analysing the key

    methodological issues involved in the development of such scales,

    and (ii) discussing the dimensional structure of the e-SQ

    construct. As a result of these considerations, the paper provides

    valuable insights and implications for the development and

    application of e-SQ scales.

    2. Definition and nature of e-SQ

    Parasuraman et al. (2005, p. 217) define e-SQ as "... the extent to which a web site facilitates efficient and effective shopping, purchasing and delivery." This definition makes it clear that the

    concept of e-SQ extends from the pre-purchase phase (ease of use,

    product information, ordering information, and personal informa-

    tion protection) to the post-purchase phase (delivery, customer

    support, fulfilment, and return policy). The online environment


    differs from the traditional retail context in several ways. These

    can be summarised as follows:

    Convenience and efficiency: Consumers using the online

    environment have the convenience of saving time and effort

    in comparing the prices (and some technical features) of

    products more efficiently (Santos, 2003).

    Safety and confidentiality: Participation in the online environ-

    ment involves users in distinctive issues regarding privacy, safety, and confidentiality.

    Absence of face-to-face contact: Customers in the online

    environment interact with a technical interface (Fassnacht

    and Koese, 2006). The absence of person-to-person interaction

    means that the traditional concepts and ways of measuring

    service quality, which emphasise the personal interaction of

    the conventional service encounter, are inadequate when

    applied to e-SQ (van Riel et al., 2001).

    Co-production of service quality: Customers in the online

    environment play a more prominent role in co-producing the

    delivered service than is the case in the traditional retail

    context (Fassnacht and Koese, 2006).

    3. Literature review

    3.1. Issues of adequacy of dimensions of e-SQ

    Several measures of e-SQ are described by Zeithaml et al.

    (2002) as being ad hoc. These measures, which have attempted

    to assess e-SQ mainly in terms of the design and quality of

    websites, include factors that induce satisfaction with a website

    and/or repeat visits (Alpar, 2001; Muylle et al., 1999; Rice, 1997;

    Szymanski and Hise, 2000). In this regard, Alpar (2001) identifies

    four attributes of satisfaction with a website: (i) ease of use

    (response speed, navigation support, use of new web technolo-

    gies); (ii) information content (quantity, quality, accuracy,

    customised information); (iii) entertainment (amusement, excite-

    ment); and (iv) interactivity (e-mail, live-chats, notice boards).

    Liu and Arnett (2000) suggest that the determinants of website success include the following: (i) information and service quality; (ii) system use; (iii) playfulness; and (iv) system design quality. Szymanski and Hise (2000) report four dominant factors

    in consumer assessments of e-satisfaction: (i) convenience

    (shopping times, ease of browsing); (ii) merchandising (product

    offerings and information available online); (iii) site design

    (uncluttered screens, easy search paths, fast presentations); and

    (iv) financial security.

    Apart from the ad hoc use of website parameters (as described

    above), other authors attempt to develop more direct and

    comprehensive measures of the construct of e-SQ. Some research-

    ers (such as Gefen, 2002) modify or replicate the well-known SERVQUAL scale (Parasuraman et al., 1988, 1991), whereas others

    develop their own scales to measure the construct (e.g., Ho and

    Lee, 2007; Loiacono et al., 2002; Parasuraman et al., 2005).

    According to Parasuraman et al. (1991, p. 445), SERVQUAL is "a generic instrument with good reliability and validity and broad applicability". Parasuraman et al. (1988) find that consumers

    evaluate perceived service quality in terms of five dimensions:

    tangibility (the appearance of physical facilities, equipment, and

    personnel); responsiveness (the willingness to help customers

    and provide prompt service); reliability (the ability to perform the

    promised service accurately and dependably); empathy (the level

    of caring and individualised attention the firm provides to its

    customers); and assurance (the knowledge and courtesy of

    employees and their ability to inspire trust and confidence).

    These dimensions are measured by a total of 22 items, where each

    item is measured according to the performance of the service

    actually provided (P) and the expectations for the service (E). The

    gap score (G) is therefore calculated as the difference between

    performance and expectations (P − E). The greater the gap scores,

    the higher the perceived service quality.
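    For concreteness, the gap-score computation can be sketched as follows; this is a minimal illustration with invented ratings and dimension groupings, not the original 22-item instrument.

```python
import numpy as np

# Invented perception (P) and expectation (E) ratings for one respondent,
# grouped into two illustrative dimensions (not the actual SERVQUAL items).
dimensions = {
    "reliability":    {"P": np.array([6, 5, 6]), "E": np.array([7, 6, 6])},
    "responsiveness": {"P": np.array([5, 6]),    "E": np.array([6, 6])},
}

for name, scores in dimensions.items():
    gap = scores["P"] - scores["E"]          # G = P - E for each item
    # A higher (less negative) mean gap indicates higher perceived quality.
    print(f"{name}: item gaps {gap.tolist()}, mean gap {gap.mean():.2f}")
```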

    It is true that SERVQUAL has been successfully applied in a

    wide variety of traditional service settings, including (among others) insurance services, library services, information systems, healthcare settings, bank services, hotel services, and dental clinic

    services. However, several difficulties exist with regard to the

    conceptualisation and operationalisation of the SERVQUAL scale

    (e.g., Buttle, 1996; Ladhari, 2009). In particular, questions have

    been raised about the applicability of the five generic SERVQUAL

    dimensions in several service industries. As a result, adaptations

    of SERVQUAL have been proposed for various industry-specific

    contexts, and the findings suggest that the attributes of service

    quality are context-bounded (e.g., Cai and Jun, 2003; Ladhari,

    2009). Similar doubts have been raised regarding the applicability

    of the five SERVQUAL dimensions in the e-service context. In this

    regard, Gefen (2002) applies an adapted SERVQUAL instrument to the online service context and reports that the five dimensions

    collapsed into three: (i) tangibles; (ii) a combined dimension of

    responsiveness, reliability, and assurance; and (iii) empathy. The

    tangibles dimension is the most critical for inducing customer

    loyalty whereas the combination dimension (responsiveness,

    reliability, and assurance) is the most important for promoting

    customer trust.

    Parasuraman et al. (2005) acknowledge these difficulties when they suggest that "... studying e-SQ requires scale development that extends beyond merely adapting offline scales." In a similar

    vein, Parasuraman and Grewal (2000, p. 171) state that studies are

    needed on whether the definitions and relative importance of the

    five service quality dimensions change when customers interact

    with technology rather than with service personnel. SERVQUAL

    was developed in the context of services provided through

    personal interaction between customers and service providers;

    as a result, its dimensions might not transpose directly to the e-SQ

    context (Fassnacht and Koese, 2006; Hsu, 2008). Hsu (2008) notes

    that the SERVQUAL model does not consider such dimensions as

    security and ease of navigation, and Gefen (2002) contends that

    the dimension of empathy is less important in the e-SQ context

    because the online environment lacks personal human interac-

    tion. In addition, van Riel et al. (2001) argue that the tangibility

    dimension of SERVQUAL could be replaced by a dimension of web

    design or user interface.

    In view of these difficulties, it is apparent that the traditional

    SERVQUAL model does not constitute a comprehensive instru-

    ment for assessing e-SQ. Several studies attempt to develop

    specific measurement scales for online service quality, but the

    task is neither simple nor straightforward. As Aladwani and Palvia (2002) observe: "Construct measurement ... in the context of web technologies and applications ... is a challenging task."

    Despite the difficulties, several studies endeavour to identify

    and measure the dimensions of the e-SQ construct. These studies

    are the subject of the literature review that forms the substance of

    the present study. The studies for review are summarised in

    Table 1.

    3.2. Methodological issues in developing e-SQ scales

    The studies in Table 1 come from well-known databases such as ScienceDirect, ABI/INFORM, and EBSCOhost. Only studies

    focusing on developing an instrument for measuring e-service are

    included and are subjected to a comprehensive in-depth content


    Table 1 (continued)

    (previous row truncated) ... store for electronic equipment.

    Ibrahim et al. (2006). Construct: e-banking service quality. Sample: 135 UK banking customers. Service setting: e-bank services. Scale: 26 items, 5-point scale, offline administration. Analysis: exploratory factor analysis. Final scale: 25 items.

    Cristobal et al. (2007). Construct: e-service quality. Sample: 461 internet users who visited, bought or used an internet service at least once during the previous three months. Service setting: NA. Scale: 25 items, 7-point scale, offline administration. Analysis: exploratory factor analysis; confirmatory factor analysis. Final scale: 17 items.

    Ho and Lee (2007). Construct: e-travel service quality. Sample: 289 online purchasers for the development stage and 382 online purchasers for the validation stage. Service setting: e-travel services. Scale: 30 items, 7-point scale, online administration. Analysis: exploratory factor analysis; confirmatory factor analysis. Final scale: 18 items.

    Sohn and Tadisina (2008). Construct: e-service quality. Sample: 204 customers experienced with internet-based financial services. Service setting: internet-based financial services. Scale: 30 items, online and offline administration. Analysis: exploratory factor analysis; confirmatory factor analysis. Final scale: 25 items.

    Notes: (a) NA: not addressed/wide variety of site categories. (b) NI: no information about the number of original items. (c) IP: internet purchasers; INP: internet non-purchasers.


    analysis of the key methodological aspects of the development of

    the various e-SQ scales and their proposed dimensions. This

    review of the studies listed in Table 1 reveals: (i) several

    methodological issues related to the development of e-SQ mea-

    surement; and (ii) several pertinent observations regarding the

    dimensionality of the e-SQ construct (including the identification

    of several common dimensions of various e-SQ scales and certain

    limitations associated with the development of e-SQ scales).

    A detailed discussion of these two subjects is presented below. The methodological issues identified in this review can be

    summarised as follows: research methods; sampling methods;

    service industries considered; survey administration; generation

    of items; purification and assessment of items; analysis of

    dimensionality; scale reliability and validity.

    3.2.1. Research methods

    Studies of e-SQ measurement use a variety of methodologies: qualitative (Santos, 2003; Zeithaml et al., 2000), quantitative (Bauer et al., 2006), and mixed (Wolfinbarger and Gilly, 2003; Yang et al., 2004). Using qualitative methods, Zeithaml et al. (2000) identify 11 dimensions of e-SQ: (i) reliability; (ii) access;

    (iii) ease of navigation; (iv) efficiency; (v) responsiveness; (vi)

    flexibility; (vii) price knowledge; (viii) assurance/trust; (ix)

    security/privacy; (x) site aesthetics; and (xi) personalization.

    Santos (2003) also uses qualitative methods in conducting 30

    offline focus groups to investigate e-SQ dimensions. Yang et al.

    (2004) use a mixed approach that combines content analysis of

    critical incidents of online banking services and a web-based

    survey of these services. They analyze two online consumer

    review web sites (Gomez.com and ratingwonders.com) to obtain

    848 consumer anecdotes about their banks. Their analysis of the

    reviews finds 17 dimensions of online service quality that they

    group into three categories: customer service quality, online

    system quality, and product or service variety.

    In developing electronic service quality measurement scales,

    researchers should use qualitative research at the earliest possible stage of their work, using one of several methods. One method

    that researchers seldom use is the critical incident technique

    (CIT), a qualitative interview method to study significant

    processes, incidents and events identified by respondents (Chell,

    1998). The goal is to understand significant incidents from the

    consumer perspective, taking into account behavioural, affective,

    and cognitive aspects (Chell, 1998). The technique allows

    respondents to identify which events are the most important

    to them (Gremler, 2004). In service research, the respondents

    recall specific events they experienced with the service used. They

    are asked to think of a time when they felt very satisfied

    (dissatisfied) with the service received, to describe the service and

    why they felt so happy (unhappy) (Johnston, 1995). The CIT

    technique has several advantages for electronic service quality

    measure development. First, the respondents can use their own language and terms to express their perceptions. Second,

    respondents can classify the critical incidents into satisfactory

    and unsatisfactory occurrences (Gremler, 2004; Johnston, 1995).

    Previous studies report that determinants of satisfactory online

    service quality are not the same as the determinants of

    unsatisfactory online service quality (e.g., Yang and Fang, 2004).

    Third, CIT can serve as an exploratory method to increase

    knowledge about online service quality and identify the relevant

    dimensions in a given online context (e.g., banking, travel agent

    services, education, grocery shopping, bookstores, and libraries).

    Previous studies show that traditional service quality components

    vary depending on the service industry (Ladhari, 2008). In that

    case, a purely quantitative approach can complement the CIT

    technique. Fourth, CIT can be used both qualitatively and

    quantitatively to identify the type and nature of incidents and

    the frequency of occurrence (Gremler, 2004).

    In one of the few studies applying the CIT to the online

    environment, Holloway and Beatty (2003) address service

    failure and service recovery. Using a combination of qualitative

    (30 in-depth interviews) and quantitative methods (a survey

    of 295 online shoppers who had experienced at least one

    service failure within the past 6 months), they report six types

    of service problems encountered by online shoppers: website design, payment, delivery, product quality, and customer

    service. When asked about service recovery, respondents reported

    several reasons for dissatisfaction: lengthy delays, poor commu-

    nication, poor quality customer service support, and generic

    recoveries.

    3.2.2. Sampling methods

    The samples for research into e-SQ are drawn from a variety of

    populations. Several studies use convenience sampling (e.g., Cai

    and Jun, 2003; Lee and Lin, 2005; Long and McMellon, 2004),

    whereas only a few use random sampling (e.g., Fassnacht and

    Koese, 2006; Parasuraman et al., 2005). Several of the studies use

    students for their surveys (e.g., Aladwani and Palvia, 2002; Lee and

    Lin, 2005; Loiacono et al., 2002; Yoo and Donthu, 2001), despite a

    major limitation being that these respondents are not usually

    actual internet purchasers, but students who are merely invited to

    visit websites and rate them. Yoo and Donthu (2001) gather data

    from convenience samples of students who were asked to visit

    and evaluate internet shopping sites over a period of 2 days.

    Loiacono et al. (2002) also use a convenience sample of students to visit and evaluate websites. They told undergraduate students to explore a designated website and asked them to imagine that they were searching for a book. Only a few studies use non-student

    samples. For example, Long and McMellon (2004) use respon-

    dents who were about to purchase items online. These respon-

    dents were asked to complete a questionnaire on expectations

    before going on the internet, followed by a questionnaire on

    perceptions after purchasing a product online. Parasuraman et al. (2005) utilise only respondents who had visited the internet on at

    least 12 occasions and made at least three purchases during the

    preceding 3 months.

    The studies reviewed have some limitations. First, the samples

    used in most of the studies consist of student populations, which

    may limit the generalisability of the scales and reduce their

    applicability to the broader population of online users. Loiacono

    et al. (2002, p. 435) question their own use of student samples:

    "While these subjects are typical of a substantial body of web users, they are not a representative sample of all users." Kim and Stoel (2004, p. 112) criticize the use of student samples: "By having students visit and rate websites with which they were not familiar or interested in, those studies may have suffered limitations in the accuracy of findings with regard to perceptions of actual users." In addition, most of these respondents are not

    regular customers or users of the websites selected (Loiacono

    et al., 2002).

    Second, several studies use mostly US respondents. For

    instance, all the respondents in Cai and Jun's (2003) study are

    from the southwest and the midwest regions in the US.

    Ranganathan and Ganapathy (2002) use a sample of respondents from Illinois. Seventy-four percent of the respondents in the Yang et al. (2004) study are US residents. The reasons for internet use

    and the behavior of these participants may differ from those in

    other countries. Therefore, future studies should use more

    diversified samples. The literature on traditional service quality

    shows that dimensions of service quality differ from one country

    to another (Ladhari, 2008).


    Third, many respondents in these studies use the internet as an

    information source and not for commercial transactions (e.g.,

    Yang et al., 2004). They may have different perceptions of service

    quality dimensions. Respondents who have not engaged in

    commercial transactions on the internet may have greater concerns about security than experienced internet buyers (Yang

    et al., 2004). For instance, Cai and Jun (2003) report differences

    between online buyers and information searchers with respect to

    their e-service quality perceptions. Information searchers rate the four dimensions of e-service quality (trustworthiness, web site

    design/content, communication, and prompt/reliable service)

    lower than online buyers do. In addition, these two groups

    differ on the relative importance of the four e-service quality

    dimensions.

    Finally, several studies used limited sample sizes. For instance,

    Cai and Jun (2003) use a sample of only 171 respondents,

    including 61 online searchers and 110 online buyers. In their

    study on electronic banking service quality, Ibrahim et al. (2006)

    use a sample of 131 customers. In another study, Aladwani and Palvia (2002) use a sample of 101 students in their first study and

    127 students in their second study. These sample sizes are

    relatively small for developing new scales. Future studies should

    use larger and more diversified samples.

    3.2.3. Service industries considered

    Some studies collect data across several industries

    (Ranganathan and Ganapathy, 2002; Yoo and Donthu, 2001),

    whereas other studies focus on particular sectors. Ranganathan

    and Ganapathy (2002) examine the key dimensions of B2C

    websites and retain in their sample individuals who completed

    at least one online purchase in the last 6 months. Yoo and Donthu

    (2001) ask their student respondents to evaluate a broad range of websites, including sites offering books, music and videos,

    department stores, electronics, computers, sports and fitness,

    flowers and gifts, health and beauty, and travel and auto. In

    contrast, other studies focus on such specific sectors as library

    services (O'Neill et al., 2001), books (Barnes and Vidgen, 2002), apparel (Kim and Stoel, 2004), financial services (Sohn and

    Tadisina, 2008), and travel services (Ho and Lee, 2007).

    A few studies focus on support services related to purchasing

    goods on the internet (e.g., Francis and White, 2002; Janda et al.,

    2002; Kim and Stoel, 2004) while other studies focus on pure

    service offers such as Web portal quality (e.g., Gounaris and

    Dimitriadis, 2003; Yang et al., 2005). Other researchers develop

    scales for measuring service quality for support services and pure

    information web sites (e.g., Fassnacht and Koese, 2006; Yoo and

    Donthu, 2001). For instance, Fassnacht and Koese (2006) discuss

    three types of electronic service: online shopping for electronic

    equipment (i.e., support services), the creation and maintenance

    of home pages (i.e., pure service offer), and sports coverage (i.e.,

    content offer).

    3.2.4. Survey administration

    Scholars use both online and offline approaches for collecting

    data in their studies. In qualitative research, researchers use

    online and offline focus group studies (Wolfinbarger and Gilly,

    2003), and offline in-depth interviews (Cristobal et al., 2007). In

    quantitative studies, researchers use mail surveys (Kim and Stoel,

    2004; Sohn and Tadisina, 2008), website surveys (Parasuraman

    et al., 2005; Sohn and Tadisina, 2008), and in-person surveys

    (O'Neill et al., 2001; Cristobal et al., 2007). For example, Kim and Stoel (2004) collect data via a mail survey sent to a mailing list of

    1000 randomly selected female shoppers who had purchased

    apparel online in the preceding 3 years. Parasuraman et al. (2005)

    utilise a research firm to administer an online survey to a random

    sample of internet users. Cristobal et al. (2007) collect data

    through personal interviews among a random sample of 461

    internet users who had used the services provided by online

    shops. Sohn and Tadisina (2008) use a combination of a mail

    survey and a website survey by mailing a hardcopy of a

    questionnaire to 2000 potential respondents and posting the

    same questionnaire on the web to accommodate respondents

    who were more familiar with the online interface.

    Researchers should report more details about the mode of administration of their surveys. They should also describe why

    they choose one mode rather than another. For instance,

    Ranganathan and Ganapathy (2002) give no information on how

    they administer their survey. Considering that the object of the

    research is e-service quality, researchers are expected to use web-

    based or e-mail-based surveys. Using other modes of administra-

    tion may cause a disparity between the target population and the

    framed population. In addition, web surveys have several

    advantages (van Selm and Jankowski, 2006): convenience for

    participants, no interviewer bias, direct data entry to electronic

    files, easier recruitment of respondents and lower cost. Therefore,

    studies using other modes of administration need to justify an

    alternate choice.

    3.2.5. Generation of items

    Because e-SQ is a relatively new concept, scale items are

    generated using both inductive methods (such as literature

    reviews) and deductive methods (such as exploratory research),

    but most come from deductive methods. A review of the

    literature on service quality and electronic commerce by Loiacono et al. (2002) generated 142 items. Cristobal et al. (2007) also use a

    literature review (in combination with in-depth interviews) to

    generate an extensive list of 86 potential items. Exploratory

    studies for the generation of items utilise focus groups (O'Neill

    et al., 2001; Wolfinbarger and Gilly, 2003), in-depth interviews

    (Bauer et al., 2006; Cristobal et al., 2007; Yang and Jun, 2002), and

    content analysis of customer reviews (Yang et al., 2004). For

    example, Yang et al. (2004) access two online review websites to

    collect and analyse 848 customer reviews in generating potential

    attributes of online banking services. Focus groups were con-

    ducted with university students (O'Neill et al., 2001) and with online shoppers (Wolfinbarger and Gilly, 2003). In some studies,

    experts or managers are asked to comment on the wording of

    items and/or to propose items (Gounaris and Dimitriadis, 2003;

    Fassnacht and Koese, 2006; Yang et al., 2005). Other studies do not

    state exactly how they generate their items or the number of items

    initially generated (e.g., Ranganathan and Ganapathy, 2002).

    There is no agreement in the literature about the exact nature

    and definition of e-service quality dimensions. For example, the

    web site design dimension is conceptualized and operationalized

    in different ways. Ranganathan and Ganapathy (2002) include

    delay and ease of navigation and the presence of visual presentation aids in the web design construct. Loiacono et al.

    (2002) report a dimension they call entertainment which

    includes visual appeal (aesthetics of the web site), innovativeness

    (the uniqueness and creativity of site design), and flow (the

    emotional effect of using the web site). Cristobal et al. (2007) state

    that web design consists of user-friendliness, content layout, and

    content updating. Wolfinbarger and Gilly (2003) maintain that

    the construct refers to navigation, order processing, appropriate

    personalization, information search, and product selection.

    Fassnacht and Koese (2006) conclude that dimensions reported

    in one study cannot be compared to those in other studies on

    e-service quality.

    These different constructs highlight the lack of consensus

    about the components of e-service quality. In fact, most of the


    papers analyzed do not include clear definitions of e-service

    dimensions. In addition, several studies adopt data-driven

    approaches to the construct of e-service quality and components.

    The scale items are derived from exploratory factor analysis, and the identification of dimensions is based on which items have the highest loading scores on each factor. The lack of consensus leads to differences among studies on the number

    and nature of the items generated and those finally retained.

    In several studies, items generated are based solely on qualitative research such as in-depth interviews and focus groups.

    Future research should develop a more specific theoretical

    framework that defines the e-service quality construct and

    its dimensions more consistently and identifies pertinent

    scale-items.

    3.2.6. Assessment and purification of items

    In several studies, item-total correlation serves as a criterion

    for initial assessment and purification. Various cut-off points are

    adopted: 0.30 by Cristobal et al. (2007), 0.40 by Loiacono et al.

    (2002), and 0.50 by Francis and White (2002) and Kim and Stoel (2004). Items are rejected by Loiacono et al. (2002) if they possess

    a high correlation with items on other putative constructs (that is,

    discriminant validity problems). Cristobal et al. (2007) use confirmatory factor analysis to eliminate indicators whose standardised coefficients are less than 0.5. Wolfinbarger and Gilly (2003) are rigorous in retaining only items that (i) load at 0.50 or

    more on a factor, (ii) do not load at more than 0.50 on two factors,

    and (iii) have an item to total correlation of more than 0.40.
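    As a sketch of this purification step (the function name, simulated data, and default cut-off are illustrative assumptions, not code from the reviewed studies), corrected item-total correlations can be computed and screened as follows.

```python
import numpy as np

def corrected_item_total(items: np.ndarray, cutoff: float = 0.30):
    """Split items into (keep, drop) lists by corrected item-total correlation.

    `items` is a respondents x items matrix; the 0.30 default mirrors the
    cut-off used by Cristobal et al. (2007), but any threshold can be passed.
    """
    keep, drop = [], []
    for j in range(items.shape[1]):
        rest_total = np.delete(items, j, axis=1).sum(axis=1)   # total score excluding item j
        r = float(np.corrcoef(items[:, j], rest_total)[0, 1])
        (keep if r >= cutoff else drop).append((j, round(r, 3)))
    return keep, drop

# Simulated 7-point ratings: 200 respondents, 6 items (illustrative only).
rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 1))
ratings = np.clip(np.round(4 + latent + rng.normal(scale=1.0, size=(200, 6))), 1, 7)
kept, dropped = corrected_item_total(ratings, cutoff=0.30)
print("kept:", kept, "dropped:", dropped)
```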

    3.2.7. Analysis of dimensionality

    The dimensionality of the scale is assessed using exploratory

    factor analysis (EFA) and/or confirmatory factor analysis (CFA).

    Exploratory analysis is used by several researchers such as Francis

    and White (2002), Loiacono et al. (2002), Ranganathan and

    Ganapathy (2002), Kim and Stoel (2004), Ibrahim et al. (2006),

    Cristobal et al. (2007), Ho and Lee (2007), and Sohn and Tadisina (2008). Confirmatory factor analysis is utilised by Janda et al. (2002), Loiacono et al. (2002), Kim and Stoel (2004), Lee and Lin (2005), Cristobal et al. (2007), Ho and Lee (2007), and Sohn and

    Tadisina (2008).

    As noted, most of the studies reviewed use EFA to reduce the

    number of items in their constructs, but many researchers criticize its use. Kwok and Sharp (1998) describe the use of EFA as a "fishing expedition". In fact, this technique has a number of

    shortcomings. First, the estimates obtained for factor loadings are

    not unique and the factor structure obtained is only one of an

    infinite number of potential solutions (Segars and Grover, 1993).

    Second, CFA provides goodness-of-fit indicators to evaluate whether the factor structure fits the data, which is not the case

    for EFA (Marsh and Hocevar, 1985). Third, when applied to data

    exhibiting correlated factors, common factor analysis with varimax rotation can produce distorted factor loadings and

    incorrect conclusions on the factor solution (Segars and Grover,

    1993). Fourth, it is possible for items to load on more than one

    factor in EFA, which may affect their distinctiveness and the

    interpretation (Sureshchandar et al., 2002). Finally, contrary to

    EFA, CFA allows researchers to compare several model specifica-

    tions and to examine invariance of a specific parameter in the

    factor solution (Marsh and Hocevar, 1985). Given the limitations

    of EFA, researchers should use a combination of EFA and CFA.
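    A minimal sketch of the exploratory step is shown below, assuming simulated data and a recent scikit-learn release; the confirmatory step would then be run in dedicated SEM software (e.g., lavaan or semopy), which is not shown here.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Simulated item responses driven by two latent factors (illustrative data only).
rng = np.random.default_rng(1)
factors = rng.normal(size=(300, 2))
loadings = np.array([[0.8, 0.0], [0.7, 0.1], [0.9, 0.0],
                     [0.1, 0.8], [0.0, 0.7], [0.0, 0.9]])
items = factors @ loadings.T + rng.normal(scale=0.4, size=(300, 6))

# Exploratory step: extract two factors with varimax rotation.
efa = FactorAnalysis(n_components=2, rotation="varimax").fit(items)
print(np.round(efa.components_.T, 2))   # estimated loadings, items x factors
```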

    Other studies use multinational samples of internet users,

    which may also create bias. Previous research reports that

    cultural differences in response styles, such as extreme responses

    and use of mid-points, are sources of bias that can threaten the

    validity of scales (Diamantopoulos et al., 2008). Since the response

    style cannot be completely eliminated through research design, researchers should establish measurement invariance via multi-

    group confirmatory factor analysis (Steenkamp and Baumgartner,

    1998).

    3.2.8. Scale reliability and validity

    The reliability of scales (that is, the internal homogeneity of a set of items) is usually assessed by Cronbach's α coefficient or by Jöreskog's ρ coefficient. Most scales in the present review exhibit good reliability in terms of Cronbach's α coefficient, with values greater than 0.70 (Fornell and Larcker, 1981; Nunnally, 1978). For

    example, Loiacono et al. (2002) report 12 dimensions with Cronbach's α ranging from 0.72 to 0.93; Ranganathan and Ganapathy (2002) find four dimensions with Cronbach's α ranging from 0.87 to 0.89; and Yang et al. (2004) report six dimensions with ρ coefficients ranging from 0.77 to 0.91 (see Table 1).

    However, in a few studies the reliability coefficients were under

    the recommended level (as reported in Table 1). For example,

    Long and McMellon (2004) find that reliability coefficient values

    are only 0.51 and 0.59 for the purchasing process and responsiveness dimensions, respectively. Yang and Jun (2002) find the Cronbach's α value of the credibility dimension to be only 0.59. Ibrahim et al. (2006) report Cronbach's α values of 0.33 and 0.57 for the friendly/responsive customer service and targeted customer service dimensions, respectively. This means that certain

    scales reported in the literature are problematic.
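    For reference, a minimal sketch of the Cronbach's α computation on a respondents-by-items matrix; the simulated data and the 0.70 benchmark check are illustrative.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a respondents x items matrix of ratings."""
    k = items.shape[1]
    sum_item_var = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)        # variance of the total score
    return (k / (k - 1)) * (1 - sum_item_var / total_var)

# Simulated five-item scale; compare the result with the conventional 0.70 level.
rng = np.random.default_rng(2)
latent = rng.normal(size=(150, 1))
scale_items = latent + rng.normal(scale=0.8, size=(150, 5))
alpha = cronbach_alpha(scale_items)
print(f"alpha = {alpha:.2f}, acceptable: {alpha >= 0.70}")
```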

    Convergent validity (that is, the extent to which a set of items

    assumed to represent a construct does in fact converge on the

    same construct) is verified in various ways. Gounaris and

    Dimitriadis (2003) evaluate this by calculating the average

    variance extracted for each factor and confirming convergent

    validity when the shared variance accounts for 0.50 or more of

    the total variance. Other studies assess convergent validity by

    correlating their scales with a measure of overall service quality.

    For example, Loiacono et al. (2002) establish the convergent

    validity of their WebQual scale by correlating the total score of

    36 items with an overall quality single item measure.
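    A minimal sketch of the AVE calculation from standardised loadings follows; the loadings are hypothetical and the helper name is illustrative.

```python
import numpy as np

def average_variance_extracted(std_loadings: np.ndarray) -> float:
    """AVE of one factor from its standardised indicator loadings.

    Convergent validity is commonly inferred when AVE >= 0.50, i.e. the
    factor captures at least half of its indicators' variance.
    """
    explained = (std_loadings ** 2).sum()            # variance explained by the factor
    error = (1 - std_loadings ** 2).sum()            # residual (error) variance
    return explained / (explained + error)

# Hypothetical standardised loadings for one e-SQ dimension.
print(round(average_variance_extracted(np.array([0.82, 0.76, 0.71, 0.68])), 2))
```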

    Discriminant validity (that is, the extent to which measures of

    theoretically unrelated constructs do not correlate with one

    another) is established by Gounaris and Dimitriadis (2003) when

    the average variance extracted for each factor is greater than the

    squared correlation between that construct and other constructs

    in the model. Discriminant validity is also evaluated by comparing

    the fit of two correlated factors with the fit of a single-factor

    model for each pair of dimensions (Kim and Stoel, 2004; Loiacono

    et al., 2002); discriminant validity is established when the fit of

    the two-factor model is better than the fit of the one-factor model

    for each pair of factors. In other studies, discriminant validity is

    examined by constraining the inter-factor correlations between

    pairs of dimensions (one at a time) to unity, and repeating

    confirmatory factor analysis (Parasuraman et al., 2005; Yang et al., 2005); discriminant validity is confirmed when the constrained model produces an increase in the chi-square statistic compared with the unconstrained model.
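    The first of these checks (AVE versus squared inter-construct correlation) can be sketched as follows; the construct names, AVE values, and correlations are hypothetical.

```python
def discriminant_validity(ave: dict, correlations: dict) -> dict:
    """For each construct pair, check that both AVEs exceed the squared
    correlation between the constructs (all inputs are hypothetical)."""
    return {pair: ave[pair[0]] > r ** 2 and ave[pair[1]] > r ** 2
            for pair, r in correlations.items()}

ave = {"reliability": 0.62, "responsiveness": 0.55, "web_design": 0.58}
correlations = {("reliability", "responsiveness"): 0.61,
                ("reliability", "web_design"): 0.48,
                ("responsiveness", "web_design"): 0.78}
print(discriminant_validity(ave, correlations))   # the last pair fails the check
```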

    To assess predictive/nomological validity (that is, the extent to

    which the scores of one construct are empirically related to the

    scores of other conceptually related constructs), authors examine

    the impact of particular e-SQ dimensions on (i) users' overall

    quality rating (Aladwani and Palvia, 2002; Bauer et al., 2006;

    Parasuraman et al., 2005; Yang et al., 2004), (ii) satisfaction (Bauer

    et al., 2006; Fassnacht and Koese, 2006; Kim and Stoel, 2004;

    Yang et al., 2004), (iii) perceived value (Bauer et al., 2006;

    Parasuraman et al., 2005), (iv) relationship duration (Bauer et al.,

    2006), and (v) behavioural intentions (Bauer et al., 2006; Francis

    and White, 2002; Loiacono et al., 2002; Parasuraman et al., 2005).
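    A minimal sketch of such a check follows, regressing a simulated outcome (e.g., an overall quality rating) on simulated dimension scores; the dimension names and coefficients are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(250, 3))                        # e.g. reliability, ease of use, privacy scores
true_beta = np.array([0.5, 0.3, 0.2])
y = X @ true_beta + rng.normal(scale=0.5, size=250)  # simulated overall quality rating

# Ordinary least squares; sensibly signed, sizeable coefficients are read as
# evidence that the dimensions predict the conceptually related outcome.
X1 = np.column_stack([np.ones(len(X)), X])           # add an intercept column
coef, *_ = np.linalg.lstsq(X1, y, rcond=None)
print(np.round(coef, 2))                             # intercept followed by slopes
```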


    The psychometric properties (the three types of validity, i.e., convergent, discriminant, and predictive) of most of the scales in this review are not examined. Long and McMellon (2004) prove only the predictive validity of their newly developed scale. The same observation applies to Yang and Jun's (2002) study.

    Few studies examine and confirm the convergent, discriminant,

    and predictive validity of their newly developed scale (e.g.,

    Aladwani and Palvia, 2002; Cristobal et al., 2007; Kim and Stoel,

    2004; Loiacono et al., 2002; Parasuraman et al., 2005). The research agenda in e-service quality should consider the validation process a

    major issue. Future studies addressing the measurement of e-service quality should rigorously test and report on the

    psychometric properties of their newly developed scales.

    3.3. Dimensionality and structure of the e-SQ construct

    It is apparent from this review that certain general observa-

    tions can be made regarding the dimensionality and structure of

    the e-SQ construct as presented in these studies: (i) there is no

    consensus on the number and the nature of the dimensions in the e-SQ construct, but six dimensions recur most consistently across studies (reliability/fulfilment, responsiveness, ease of use/usability, privacy/security, web design, and information quality/benefit); (ii) some of the e-SQ dimensions in this review

    are identical (or at least similar) to those reported for traditional

    service quality; (iii) the studies reviewed here concentrate on

    functional quality. Only a few studies deal with outcome quality;

    and (iv) despite the general support for a hierarchical multi-

    dimensional model of service quality, little effort is made by the

    authors reviewed here to examine such a structure for e-SQ. These

    observations are discussed in greater detail below.

    3.3.1. Dimensionality of the e-SQ construct

    All of the studies in Table 1 find the construct of e-SQ to be

    multidimensional, with the number of reported dimensions

    ranging from three (Gounaris and Dimitriadis, 2003) to 12

    (Loiacono et al., 2002). It is apparent that there is no consensus

    on the number and the nature of the dimensions of the e-SQ

    construct identified in previous research. It is true that some

    dimensions (such as reliability and ease of use) appear

    consistently in the various models, which indicates that there

    are some common dimensions used by customers in evaluating

    e-SQ regardless of the type of service being delivered on the

    Internet (Fassnacht and Koese, 2006; Zeithaml et al., 2000).

    However, other dimensions mentioned in the various studies

    appear to be specific to particular e-service contexts. These

    observations mirror the debate regarding generic or specific

    measures in assessing traditional/physical service quality (e.g.,

    Karatepe et al., 2005; Ladhari, 2008). E-service quality dimensions

    tend to be contingent on the service industry involved. Even in the

    same industry, these dimensions depend on the type of user service (Barrutia et al., 2009). For instance, informational content

    is essential to web portal and internet banking services and less important for companies such as Amazon.com that produce physical

    products (Barrutia et al., 2009). Kim and Stoel's (2004) study uses the 36-item scale developed by Loiacono et al. (2002) and reports

    a different number of dimensions for the apparel industry. In their

    study, Loiacono et al. (2002) report 12 dimensions.

    The benefit electronic web sites may yield depends on the

    service setting. Each industry deals with different basic and

    supplementary services and user needs. For instance, Fassnacht

    and Koese (2006) distinguish between stand-alone services

    (where the electronic service provided represents the main

    benefit for users) and support services (where the electronic

    service facilitates the purchase of goods or services, such as online

    reservations or online shopping). Stand-alone services are also

    grouped into pure service offerings (e.g., online banking) and

    content offerings (e.g. news and sport coverage).

    Among the various dimensions the literature review cites, six

    appear consistently: (i) reliability/fulfilment; (ii) responsiveness;

    (iii) ease of use/usability; (iv) privacy/security; (v) web design;

    and (vi) information quality/content.

    The first of these, reliability/fulfilment, which is also one of the

    prominent dimensions in the traditional SERVQUAL instrument, refers to the performance of a promised service in an accurate and

    timely manner and to the delivery of intact and correct products

    (or services) at times convenient to customers (Yang and Jun,

    2002). In the studies reviewed here, this dimension is a significant

    determinant of (i) overall service quality (Lee and Lin, 2005;

    Parasuraman et al., 2005; Sohn and Tadisina, 2008; Wolfinbarger

    and Gilly, 2003; Yang and Jun, 2002), (ii) satisfaction (Lee and Lin,

    2005; Wolfinbarger and Gilly, 2003), (iii) perceived value (Bauer

    et al., 2006; Parasuraman et al., 2005), (iv) intention to purchase

    (Lee and Lin, 2005; Wolfinbarger and Gilly, 2003), and (v)

    repurchase intentions (Bauer et al., 2006).

    The second of the dimensions that appears consistently in the

    studies reviewed here is responsiveness, which refers to a

    willingness to help users (Li et al., 2002; O'Neill et al., 2001),

    prompt responses to customers' enquiries and problems (Bauer

    et al., 2006; Yang and Jun, 2002; Yang et al., 2004), and the

    availability of alternative communication channels (Bauer et al., 2006).

    In this regard, Lee and Lin (2005) report that responsiveness

    influences overall service quality and satisfaction.

    Ease of use/usability refers to user friendliness, especially with

    regard to searching for information (Yang et al., 2005; Yoo and

    Donthu, 2001). Ease of access to available information is an

    important reason for consumers choosing to purchase through the

    internet (Cristobal et al., 2007; Wolfinbarger and Gilly, 2003).

    Such usability is an important aspect of e-SQ because the

    e-business environment can be intimidating and complex for

    many customers (Parasuraman et al., 2005).

    The fourth dimension, privacy/security, refers to the protection of

    personal and financial information (Yoo and Donthu, 2001) and the

    degree to which the site is perceived by consumers as being safe from

    intrusion (Parasuraman et al., 2005). This dimension is relevant

    because of the perceived risk of financial loss and fraud in the online

    environment (Parasuraman et al., 2005). Security has been identified

    as the most important factor in determining e-SQ for consumers of

    online banking services (White and Nteli, 2004). Security is the most

    important influence on intentions to revisit a site and make purchases

    (Ranganathan and Ganapathy, 2002; Yoo and Donthu, 2001).

    The fifth common dimension, web design, refers to aesthetic features and content, as well as the structure of online catalogues (Cai and Jun, 2003). According to Sohn and Tadisina (2008), a website

    design similar to a physical store environment influences

    customer perceptions of the online service provider and subse-

    quent behavioural intentions. The design of a web site plays an important role in attracting and retaining visitors and is as

    important as its content (Ranganathan and Ganapathy, 2002).

    The sixth dimension, information quality/benefit, refers to the

    adequacy and accuracy of the information users get when visiting

    a web site (e.g., Collier and Bienstock, 2006; Fassnacht and Koese,

    2006; Ho and Lee, 2007; Yang et al., 2005). This dimension

    becomes important for pure service offerings such as web portal

    services (Gounaris and Dimitriadis, 2003; Yang et al., 2005). Yang

    et al. (2005) find that two of the five dimensions refer to the

    quality of information: its adequacy and usefulness. The ade-

    quacy-of-information dimension was measured by items referring

    to comprehensiveness, content completeness, sufficiency, and

    detailed contact. Usefulness of content refers to relevance,

    uniqueness and whether it is up-to-date, as perceived by the user.


    privacy and security, website design, quality of information,

    and personalization.

    The review shows that several e-SQ dimensions are relevant

    across several industries, whereas others are more or less specific

    to particular online service industries. Any attempt to promote a

    global (or generic) measure of e-SQ could be subject to similar

    criticisms to those directed at generic measurement instruments

    in traditional service quality (such as SERVQUAL). It is apparent

    from this review that the generic e-SQ dimensions identified in this study should be complemented by sector-specific dimensions

    in particular contexts. The development of valid industry-specific

    quality measurement scales would seem to be a fruitful avenue

    for future research (just as it is proving to be in the case of

    traditional service quality).

    Apart from industry-specific scales, future research could also

    seek to develop and compare specific e-SQ measurement scales

    for different functional types of e-business. In undertaking

    this research, the two-dimensional classification scheme of

    internet businesses Francis and White (2004) devise could be

    useful. According to this model, two dimensions can be used to

    classify e-businesses: (i) fulfilment process (which can be

    distinguished into electronic delivery and offline delivery); and

    (ii) product (which can be distinguished into the purchase of

    services and the purchase of goods). By combining these two

    dimensions (and the two sub-divisions of each), four types of

    internet retailing can be identified: (i) offline-goods (that is, offline delivery of purchased goods); (ii) offline-services (offline delivery of purchased services); (iii) electronic-goods (electronic delivery of purchased goods); and (iv) electronic-services

    (electronic delivery of purchased services). It is likely that the

    quality factors considered by internet users will differ across

    these various categories of internet retailing. It could be interest-

    ing for future studies to examine the relative importance

    of service quality dimensions across these four categories of

    e-business.

    The present study also finds that several newly developed

    scales, such as SITEQUAL (Yoo and Donthu, 2001), WEBQUAL

    (Loiacono et al., 2002), E-S-QUAL (Parasuraman et al., 2005),

    eTransQual (Bauer et al., 2006), and PeSQ (Cristobal et al., 2007),

    lack specific application and validation. A possible avenue of

    future research is to replicate these e-SQ scales across different

    contexts with a view to enhancing their external validity. Indeed,

    Table 1 shows that studies are largely confined to business-to-

    consumer relationships. Developing a measurement scale of electronic service quality in the business-to-business sector would be

    valuable. It is clear that the type of user (individual or

    organizational) and the nature of the service setting should

    determine the e-service quality dimensions retained.

    The present study also finds that the dimensionality of the

    e-SQ construct is not stable across studies, which probably

    reflects the diversity of the scope of the studies examined in this

    paper. Some studies examine websites that sell goods or serviceswhereas other studies examine non-selling sites. Moreover,

    some studies develop generic e-SQ scales whereas others

    develop industry-specific scales. In addition, different methodo-

    logical approaches are adopted for the identification of the

    dimensions of e-SQ and/or the generation (and number) of

    potential items within those dimensions. The small samples

    used in several of the studies reported in this review do not

    permit an adequate assessment of the validity of the scales;

    moreover, several studies use convenience samples of students,

    which limits the generalisability of the findings. To ensure

    external validity, it is recommended that future research should

    use random samples of appropriate internet shoppers to identify

    the key dimensions and their relative influence on online

    consumer behaviour.

    Most of the studies in this review focus on functional quality

    (that is, the service-delivery process that takes place on the

    internet) rather than technical quality (the outcome of the service

    process). According to Brady et al. (2002), such a reliance on

    functional quality can constitute a misspecification of service

    quality (at least with regard to traditional service quality). If this

    contention also applies in the online context, it would seem that

    the conceptualisation of e-SQ requires further development with

    a view to paying appropriate attention to technical quality, as well as functional quality.

    Finally, most of the studies reviewed here describe e-SQ in

    terms of reflective attributes rather than formative attributes.

    Jarvis et al. (2003) cite four rules for determining whether a

    construct is reflective or formative: (i) direction of causality from

    construct to measure; (ii) interchangeability of the indicators;

    (iii) co-variation among the indicators; and (iv) nomological net

    among the construct indicators. For instance, in reflective

    measurement models, the direction of causality goes from

    construct to measure, while the opposite is true for formative

    measurement models. With formative measurement models,

    indicators do not necessarily co-vary with each other, while they

    are expected to co-vary with each other in reflective measure-

    ment models. In reflective models, indicators are supposed to

    have the same antecedents and consequences, which is not

    required in formative models (Jarvis et al., 2003). The formative

    model has been proposed as an alternative approach for

    measuring traditional service quality (Rossiter, 2002; Ladhari,

    2009) and electronic service quality (Parasuraman et al., 2005;

    Collier and Bienstock, 2006). Collier and Bienstock (2006), who

    question the use of reflective indicators to conceptualize electro-

    nic service quality, support the use of formative indicators.

    Parasuraman et al. (2005) state that calling scale items formative

    or reflective indicators of latent constructs is a challenging issue.

    Further studies are needed to examine the formative conceptua-

    lization of e-SQ in greater depth.

    4.2. Managerial implications

    In accordance with these findings, we propose several

    recommendations for consideration by e-business managers.

    First, given the fact that responsiveness and reliability are

    identified as key dimensions in e-SQ, online retailers and service

    providers must ensure that they are able to perform the promised

    services accurately and on time; moreover, all information about

    products and services (characteristics, price, warranty, and return

    policy) should be easy to locate and understand. The online

provider should also supply accurate customer account information, such as billing details and account balances. Second, to ensure that the

    key dimension of responsiveness is fulfilled, internet retailers

    should respond promptly to all enquiries from their customers

and ensure that their e-mail systems perform well at all times. Third, given the importance of the dimension of ease of use,

    the structure of the website and any online catalogues should be

    logical and easy to navigate. Fourth, consumers must be made to

    feel secure in providing personal and sensitive information (such

    as credit card details). Managers should provide an explicit and

    reassuring guarantee that their websites respect and protect

    personal information at all times.

    Finally, website managers should recognise that each online

    business is unique and that it is therefore necessary for each

    business to identify how its particular internet users define e-SQ.

    Managers should then design their websites to ensure that they

    deliver e-SQ in a manner that meets the expectations of their

    cohort of customers. These expectations can be identified in depth

    using online focus groups of their customers. Internet service


    providers should also ensure that they continuously track their

customers' perceptions of service quality in terms of appropriate

    dimensions and attributes. These dimensions, which can be

    initially identified in Table 1 of the present study, should be

    complemented by specific dimensions and attributes that are

identified from the managers' own focus groups. Managers should

    always be careful to utilise e-SQ scales that are appropriate to the

    particular context in which they are applied.

    References

    Aladwani, A.M., Palvia, P.C., 2002. Developing and validating an instrument for measuring user-perceived web quality. Information and Management 39 (6), 467–476.
    Alpar, P., 2001. Satisfaction with a web site: its measurement factors and correlates. Working Paper No. 99/01. Philipps-Universität Marburg, Institut für Wirtschaftsinformatik.
    Barnes, S.J., Vidgen, R.T., 2002. An integrative approach to the assessment of E-commerce quality. Journal of Electronic Commerce Research 3 (3), 114–127.
    Barrutia, J.M., Charterina, J., Gilsanz, A., 2009. E-service quality: an internal, multichannel and pure service perspective. Service Industries Journal 29 (12), 1707–1721.
    Bauer, H.H., Falk, T., Hammerschmidt, M., 2006. ETransQual: a transaction process-based approach for capturing service quality in online shopping. Journal of Business Research 59, 866–875.
    Brady, M.K., Cronin, J.J., Brand, R.R., 2002. Performance-only measurement of service quality: a replication and extension. Journal of Business Research 55 (1), 17–31.
    Buttle, F., 1996. SERVQUAL: review, critique, research agenda. European Journal of Marketing 30 (1), 8–32.
    Cai, S., Jun, M., 2003. Internet users' perceptions of online service quality: a comparison of online buyers and information searchers. Managing Service Quality 13 (6), 504–519.
    Chell, E., 1998. Critical incident technique. In: Symon, G., Cassell, C. (Eds.), Qualitative Methods and Analysis in Organizational Research: A Practical Guide. Sage, Thousand Oaks, CA, pp. 51–72.
    Cristobal, E., Flavian, C., Guinaliu, M., 2007. Perceived e-service quality (PeSQ): measurement validation and effects on consumer satisfaction and web site loyalty. Managing Service Quality 17 (3), 317–340.
    Collier, J.E., Bienstock, C.C., 2006. Measuring service quality in e-retailing. Journal of Service Research 8 (3), 260–275.
    Diamantopoulos, A., Riefler, P., Roth, K.P., 2008. Advancing formative measurement models. Journal of Business Research 61, 1203–1218.
    Fassnacht, M., Koese, I., 2006. Quality of electronic services: conceptualizing and testing a hierarchical model. Journal of Service Research 9 (1), 19–37.
    Fassnacht, M., Kose, I., 2007. Consequences of web-based service quality: uncovering a multi-faceted chain of effects. Journal of Interactive Marketing 21 (3), 35–54.
    Fornell, C., Larcker, D., 1981. Evaluating structural equation models with unobservable variables and measurement error. Journal of Marketing Research 18 (1), 39–50.
    Francis, J.E., White, L., 2002. PIRQUAL: a scale for measuring customer expectations and perceptions of quality in Internet retailing. In: Evans, K., Scheer, L. (Eds.), Marketing Educators' Conference: Marketing Theory and Applications, vol. 13. American Marketing Association, Chicago, IL, pp. 263–270.
    Francis, J.E., White, L., 2004. Value across fulfillment-product categories of internet shopping. Managing Service Quality 14 (2/3), 226–234.
    Gefen, D., 2002. Customer loyalty in e-commerce. Journal of the Association for Information Systems 3, 27–51.
    Gounaris, S., Dimitriadis, S., 2003. Assessing service quality on the web: evidence from business-to-consumer portals. Journal of Services Marketing 17 (4/5), 529–548.
    Gremler, D.D., 2004. The critical incident technique in service research. Journal of Service Research 7 (1), 65–89.
    Grönroos, C., 1990. Service Management and Marketing. Lexington Books, Lexington, MA.
    Ha, S., Stoel, L., 2009. Consumer e-shopping acceptance: antecedents in a technology acceptance model. Journal of Business Research 62, 565–571.
    Hausman, A.V., Siekpe, J.S., 2009. The effect of web interface features on consumer online purchase intentions. Journal of Business Research 62, 5–13.
    Ho, C.-I., Lee, Y.-L., 2007. The development of an e-travel service quality scale. Tourism Management 26, 1434–1449.
    Holloway, B.B., Beatty, S.E., 2003. Service failure in online retailing: a recovery opportunity. Journal of Service Research 6 (1), 92–105.
    Hsu, S.-H., 2008. Developing an index for online customer satisfaction: adaptation of American customer satisfaction index. Expert Systems with Applications 34, 3033–3042.
    Hwang, Y., Kim, D.J., 2007. Customer self-service systems: the effects of perceived web quality with service contents on enjoyment, anxiety, and e-trust. Decision Support Systems 43, 746–760.
    Ibrahim, E.E., Joseph, M., Ibeh, K.I.N., 2006. Customers' perception of electronic service delivery in the UK retail banking sector. International Journal of Bank Marketing 24 (7), 475–493.
    Janda, S., Trocchia, P.J., Gwinner, K.P., 2002. Consumer perceptions of Internet retail service quality. International Journal of Service Industry Management 13 (5), 412–431.
    Jarvis, C.B., Mackenzie, S.B., Podsakoff, P.M., 2003. A critical review of construct indicators and measurement model misspecification in marketing and consumer research. Journal of Consumer Research 30, 199–218.
    Johnston, R., 1995. The determinants of service quality: satisfiers and dissatisfiers. International Journal of Service Industry Management 6 (5), 53–71.
    Jun, M., Yang, Z., Kim, D., 2004. Customers' perceptions of online retailing service quality and their satisfaction. International Journal of Quality and Reliability Management 21 (8), 817–840.
    Karatepe, O.M., Yavas, U., Babakus, E., 2005. Measuring service quality of banks: scale development and validation. Journal of Retailing and Consumer Services 12 (5), 373–383.
    Kim, S., Stoel, L., 2004. Apparel retailers: website quality dimensions and satisfaction. Journal of Retailing and Consumer Services 11 (2), 109–117.
    Kwok, W.C.C., Sharp, D.J., 1998. A review of construct measurement issues in behavioral accounting research. Journal of Accounting Literature 17, 137–174.
    Ladhari, R., 2008. Alternative measures of service quality: a review. Managing Service Quality 18 (1), 65–86.
    Ladhari, R., 2009. A review of twenty years of SERVQUAL research. International Journal of Quality and Service Sciences 1 (2), 172–198.
    Lee, G., Lin, H., 2005. Customer perceptions of e-service quality in online shopping. International Journal of Retail and Distribution Management 33 (2), 161–176.
    Li, Y.N., Tan, K.C., Xie, M., 2002. Measuring web-based service quality. Total Quality Management and Business Excellence 13 (5), 685–700.
    Liu, C., Arnett, K.P., 2000. Exploring the factors associated with web site success in the context of electronic commerce. Information and Management 38, 23–33.
    Loiacono, E.T., Watson, R.T., Goodhue, D.L., 2002. WEBQUAL: measure of web site quality. Marketing Educators' Conference: Marketing Theory and Applications 13, 432–437.
    Long, M., McMellon, C., 2004. Exploring the determinants of retail service quality on the internet. Journal of Services Marketing 18 (1), 78–90.
    Marsh, H.W., Hocevar, D., 1985. The application of confirmatory factor analysis to the study of self-concept: first and higher order factor structures and their invariance across groups. Psychological Bulletin 97 (3), 562–582.
    Muylle, S., Moenaert, R., Despontin, M., 1999. Measuring web site success: an introduction to web site user satisfaction. Marketing Theory and Applications 10, 176–177.
    Nunnally, J.C., 1978. Psychometric Theory. McGraw-Hill, New York.
    O'Neill, M., Wright, C., Fitz, F., 2001. Quality evaluation in on-line service environments: an application of the importance–performance measurement technique. Managing Service Quality 11 (6), 402–417.
    Parasuraman, A., Zeithaml, V.A., Berry, L.L., 1988. SERVQUAL: a multiple-item scale for measuring consumer perceptions of service quality. Journal of Retailing 64, 12–40.
    Parasuraman, A., Zeithaml, V.A., Berry, L.L., 1991. Refinement and reassessment of the SERVQUAL scale. Journal of Retailing 67 (4), 420–450.
    Parasuraman, A., Grewal, D., 2000. The impact of technology on the quality-value-loyalty chain: a research agenda. Journal of the Academy of Marketing Science 28 (1), 168–174.
    Parasuraman, A., Zeithaml, V.A., Malhotra, A., 2005. E-S-Qual: a multiple-item scale for assessing electronic service quality. Journal of Service Research 7 (3), 213–233.
    Ranganathan, C., Ganapathy, S., 2002. Key dimensions of business-to-consumer web sites. Information and Management 39, 457–465.
    Rice, M., 1997. What makes users revisit a web site? Marketing News 31 (6), 12–13.
    Rossiter, J.R., 2002. The C-OAR-SE procedure for scale development in marketing. International Journal of Research in Marketing 19, 305–335.
    Santos, J., 2003. E-service quality: a model of virtual service quality dimensions. Managing Service Quality 13 (3), 233–246.
    Segars, A.H., Grover, V., 1993. Re-examining perceived ease of use and usefulness: a confirmatory factor analysis. MIS Quarterly 17 (4), 517–525.
    Sohn, C., Tadisina, S.K., 2008. Development of e-service quality measure for internet-based financial institutions. Total Quality Management and Business Excellence 19 (9), 903–918.
    Steenkamp, J.-B.E.M., Baumgartner, H., 1998. Assessing measurement invariance in cross-national consumer research. Journal of Consumer Research 25 (1), 78–90.
    Sureshchandar, G.S., Rajendran, C., Anantharaman, R.N., 2002. Determinants of customer-perceived service quality: a confirmatory factor analysis approach. Journal of Services Marketing 16 (1), 9–34.
    Szymanski, D.M., Hise, R.T., 2000. E-satisfaction: an initial examination. Journal of Retailing 76 (3), 309–322.
    van Riel, A.C.R., Liljander, V., Jurriens, P., 2001. Exploring consumer evaluations of e-services: a portal site. International Journal of Service Industry Management 12 (4), 359–377.
    van Selm, M., Jankowski, N.W., 2006. Conducting online surveys. Quality & Quantity 40, 435–456.
    White, H., Nteli, F., 2004. Internet banking in the UK: why are there not more customers? Journal of Financial Services Marketing 9 (1), 49–56.


    Wilkins, H., Merrilees, B., Herington, C., 2007. Toward an understanding of total service quality in hotels. International Journal of Hospitality Management 26, 840–853.
    Wolfinbarger, M., Gilly, M.C., 2003. ETailQ: dimensionalizing, measuring and predicting retail quality. Journal of Retailing 79 (3), 183–198.
    Yang, Z., Fang, X., 2004. Online service quality dimensions and their relationships with satisfaction: a content analysis of customer reviews of securities brokerage services. International Journal of Service Industry Management 15 (3), 302–326.
    Yang, Z., Jun, M., 2002. Consumer perception of e-service quality: from purchaser and non-purchaser perspectives. Journal of Business Strategies 19 (1), 19–41.
    Yang, Z., Jun, M., Peterson, R.T., 2004. Measuring customer perceived online service quality: scale development and managerial implications. International Journal of Operations and Production Management 21 (11), 1149–1174.
    Yang, Z., Cai, S., Zhou, Z., Zhou, N., 2005. Development and validation of an instrument to measure user perceived service quality of information presenting Web portals. Information and Management 42, 575–589.
    Yoo, B., Donthu, N., 2001. Developing a scale to measure the perceived quality of Internet shopping sites (SITEQUAL). Quarterly Journal of Electronic Commerce 2 (1), 31–47.
    Zeithaml, V.A., Parasuraman, A., Malhotra, A., 2000. E-service quality: definition, dimensions and conceptual model. Working Paper, Marketing Science Institute, Cambridge, MA.
    Zeithaml, V.A., Parasuraman, A., Malhotra, A., 2002. Service quality delivery through web sites: a critical review of extant knowledge. Journal of the Academy of Marketing Science 30 (4), 362–375.
