
International Journal of Research & Method in Education, Vol. 29, No. 1, April 2006, pp. 71–89

ISSN 1743-727X (print)/ISSN 1743-7288 (online). © 2006 Taylor & Francis. DOI: 10.1080/01406720500537445

Measuring service quality in higher education: three instruments compared

Firdaus Abdullah*

MARA University of Technology, Malaysia

*MARA University of Technology, Sarawak Branch Samarahan Campus, Jalan Meranek, 94300 Kota Samarahan, Sarawak, Malaysia. Email: [email protected]

Measuring the quality of service in higher education is increasingly important, particularly as fees introduce a more consumerist ethic amongst students. This paper aims to test and compare the relative efficacy of three measuring instruments of service quality (namely HEdPERF, SERVPERF and the moderating HEdPERF–SERVPERF scale) within a higher education setting. The objective was to determine which instrument had the superior measuring capability in terms of unidimensionality, reliability, validity and explained variance. Tests were conducted utilizing a sample of higher education students, and the findings indicated that the HEdPERF scale resulted in more reliable estimations, greater criterion and construct validity, greater explained variance, and consequently a better fit than the other two instruments. Accordingly, a modified five-factor structure of HEdPERF is put forward as the superior scale for the higher education sector.

Introduction

Service industries are playing an increasingly important role in the economy of many nations. In today's world of global competition, rendering quality service is a key to success, and many experts concur that the most powerful competitive trend currently shaping marketing and business strategy is service quality. Since the 1980s service quality has been linked with increased profitability, and it is seen as providing an important competitive advantage by generating repeat sales, positive word-of-mouth feedback, customer loyalty and competitive product differentiation. As Zeithaml and Bitner (1996, p. 76) point out, 'the issue of highest priority today involves understanding the impact of service quality on profit and other financial outcomes of the organization'.

Service quality has since emerged as a pervasive strategic force and a key issue on management's agenda. It is no surprise that practitioners and academics alike are keen to measure service quality accurately in order to better understand its essential antecedents and consequences, and ultimately to establish methods for improving quality to achieve competitive advantage and build customer loyalty.



The pressures driving successful organizations toward top-quality services make the measurement of service quality and its subsequent management of utmost importance; interest in measuring service quality is thus understandably high. However, the implementation of such a strategy is complicated by the elusive nature of the service quality construct, which renders it extremely difficult to define and measure. Although researchers have devoted a great deal of attention to service quality, some unresolved issues still need to be addressed, and the most controversial one concerns the measurement instrument.

Attempts to define an evaluation standard independent of any particular service context have stimulated the development of several methodologies. In the last decade, the emergence of diverse measurement instruments such as SERVQUAL (Parasuraman et al., 1988), SERVPERF (Cronin & Taylor, 1992) and EP (Teas, 1993a, b) has contributed enormously to the study of service quality. SERVQUAL operationalizes service quality by comparing perceptions of the service received with expectations, while SERVPERF retains only the perceptions of service quality. The EP scale, on the other hand, measures the gap between perceived performance and the ideal amount of a feature rather than the customer's expectations. Diverse studies using these scales have demonstrated difficulties arising from their conceptual or theoretical component as much as from their empirical component.

Nevertheless, many authors concur that customers' assessments of continuously provided services may depend solely on performance, thereby suggesting that a performance-based measure explains more of the variance in an overall measure of service quality (Oliver, 1989; Bolton & Drew, 1991a, b; Cronin & Taylor, 1992; Boulding et al., 1993; Quester et al., 1995). These findings are consistent with other research comparing these methods across service activities, which confirms that SERVPERF (performance-only) results in more reliable estimations, greater convergent and discriminant validity, greater explained variance, and consequently less bias than the SERVQUAL and EP scales (Cronin & Taylor, 1992; Parasuraman et al., 1994; Quester et al., 1995; Llusar & Zornoza, 2000). Whilst its impact in the service quality domain is undeniable, SERVPERF, being a generic measure of service quality, may not be an entirely adequate instrument for assessing perceived quality in higher education.

Nowadays, higher education is being driven towards commercial competition imposed by economic forces resulting from the development of global education markets and the reduction of government funds, which forces tertiary institutions to seek other financial sources. Tertiary institutions have had to be concerned not only with what society values in the skills and abilities of their graduates (Ginsberg, 1991; Lawson, 1992), but also with how their students feel about their educational experience (Bemowski, 1991). These new perspectives call attention to the management processes within institutions as an alternative to the traditional areas of academic standards, accreditation and performance indicators of teaching and research.


Tertiary educators are being called to account for the quality of education that they provide. While more accountability in tertiary education is probably desirable, the mechanisms for achieving it are being hotly debated. Hattie (1990) and Soutar and McNeil (1996) oppose the current system of centralized control, in which the Government sets up a number of performance indicators that are linked to funding decisions. There are a number of problems in developing performance indicators in tertiary education. One is that performance indicators tend to become measures of activity rather than true measures of the quality of students' educational service (Soutar & McNeil, 1996). These indicators may have something to do with the provision of tertiary education, but they certainly fail to measure the quality of education provided in any comprehensive way.

A survey conducted by Owlia and Aspinwall (1997) examined the views of different professionals and practitioners on quality in higher education and concluded that customer orientation in higher education is a generally accepted principle. They found that, among the different customers of higher education, students were given the highest rank. The student experience in a tertiary institution should therefore be a key issue for performance indicators to address, and it becomes important to identify the determinants or critical factors of service quality from the standpoint of students as the primary customers.

In view of that, Firdaus (2004) proposed HEdPERF (Higher Education PERFormance-only), a new and more comprehensive performance-based measuring scale that attempts to capture the authentic determinants of service quality within the higher education sector. The 41-item instrument has been empirically tested for unidimensionality, reliability and validity using both exploratory and confirmatory factor analysis. The primary question here is therefore directed at the measurement of the service quality construct within a single, empirical study utilizing customers of a single industry, namely higher education. Specifically, the measuring ability of the HEdPERF scale is compared with that of two alternatives, namely the SERVPERF instrument and the merged HEdPERF–SERVPERF moderating scale. The goal is to assess the relative strengths and weaknesses of each instrument in order to determine which had the superior measurement capability in terms of unidimensionality, reliability, validity and explained variance of service quality. The results of this comparative study were eventually used to refine the HEdPERF scale, transforming it into an ideal measuring instrument of service quality for the higher education sector.

Research foundations

Many researchers (Parasuraman et al., 1985; Carman, 1990; Bolton & Drew, 1991a, b) concur that service quality is an elusive concept, and there is considerable debate about how best to conceptualize this phenomenon. Lewis and Booms (1983, p. 100) were perhaps the first to define service quality as a 'measure of how well the service level delivered matches the customer's expectations'.


Thereafter, there seems to be a broad consensus that service quality is an attitude or overall judgement about service superiority, although the exact nature of this attitude is still hazy. Some suggest that it stems from a comparison of performance perceptions with expectations (Parasuraman et al., 1988), while others argue that it is derived from a comparison of performance with ideal standards (Teas, 1993a) or from perceptions of performance alone (Cronin & Taylor, 1992).

In terms of measurement methodologies, a review of the literature provides plenty of service quality evaluation scales. Some stem from conceptual models produced to understand the evaluation process (Parasuraman et al., 1985), and others come from empirical analysis and experimentation in different service sectors (Cronin & Taylor, 1992; Franceschini & Rossetto, 1997; Parasuraman et al., 1988). The most widely used methods for measuring perceived quality can be characterized as primarily quantitative multi-attribute measurements. Within the attribute-based methods a great number of variants exist, and among these the SERVQUAL and SERVPERF instruments have attracted the greatest attention.

Generally, most researchers acknowledge that customers have expectations and that these serve as standards or reference points to evaluate the performance of an organization. However, the unresolved issues of expectations as a determinant of perceived service quality have resulted in two conflicting measurement paradigms: the disconfirmation paradigm (SERVQUAL), which compares the perceptions of the service received with expectations, and the perception paradigm (SERVPERF), which retains only the perceptions of service quality. These instruments share the same concept of perceived quality. The main difference between the scales lies in the formulation adopted for their calculation and, more concretely, in whether expectations are used and what type of expectations should be used.
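To make the computational difference concrete, here is a minimal sketch in Python contrasting the two scoring rules; the item values are invented for illustration and do not come from the study.

```python
import numpy as np

# One respondent's perception (P) and expectation (E) ratings across
# five items, on the 1-7 Likert scale used later in this study.
# These numbers are invented for illustration only.
P = np.array([6, 5, 7, 4, 6], dtype=float)  # perceived performance
E = np.array([7, 6, 6, 6, 7], dtype=float)  # expectations

servqual_score = (P - E).mean()   # disconfirmation paradigm: P minus E
servperf_score = P.mean()         # perception paradigm: performance only

print(f"SERVQUAL gap score: {servqual_score:+.2f}")
print(f"SERVPERF score:     {servperf_score:.2f}")
```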

Most research studies do not support the five-factor structure of SERVQUAL posited by Parasuraman et al. (1988), and administering expectation items is also considered unnecessary (Carman, 1990; Parasuraman et al., 1991a; Babakus & Boller, 1992). Cronin and Taylor (1992) were particularly vociferous in their critique, and developed their own performance-based measure, dubbed SERVPERF. In fact, the SERVPERF scale is the unweighted perceptions component of SERVQUAL, consisting of its 22 perception items and excluding any consideration of expectations. In their empirical work in four industries, Cronin and Taylor (1992) found that the unweighted SERVPERF measure (performance-only) performs better than any other measure of service quality, and that it has greater predictive power (ability to provide an accurate service quality score) than SERVQUAL. They argue that current performance best reflects a customer's perception of service quality, and that expectations are not part of this concept.

Likewise, Boulding et al. (1993) reject the value of an expectations-based SERVQUAL, and concur that service quality is only influenced by perceptions. Quester et al. (1995) performed a similar analysis to Cronin and Taylor's in the Australian advertising industry, and their empirical tests show that SERVPERF performs best while SERVQUAL performs worst, although the differences are small.


Teas (1993a), on the other hand, discusses the conceptual and operational difficulties of using the 'expectations minus performance' approach, with a particular emphasis on expectations. His empirical test subsequently produces two alternative measures of perceived service quality, namely evaluated performance (EP) and normed quality (NQ). He concludes that the EP instrument, which measures the gap between perceived performance and the ideal amount of a feature rather than the customer's expectations, outperforms both SERVQUAL and NQ.

A review of the service quality literature brings forward diverse arguments concerning the advantages and disadvantages of these instruments. In general, the arguments make reference to the characteristics of the scales, notably their reliability and validity. Recently, Llusar and Zornoza (2000) concurred that SERVPERF results in more reliable estimations, greater convergent and discriminant validity, greater explained variance and consequently less bias than the EP scale. These results are consistent with earlier research that had compared these methods in the scope of service activities (Cronin & Taylor, 1992; Parasuraman et al., 1994). In fact, the marketing literature appears to offer considerable support for the superiority of simple performance-based measures of service quality (Mazis et al., 1975; Churchill & Surprenant, 1982; Carman, 1990; Bolton & Drew, 1991a, b; Boulding et al., 1993; Teas, 1993a; Quester et al., 1995).

Research methodology

Research objectives

On the basis of the conceptual and operational concerns associated with the generic measures of service quality, the present research attempts to compare and contrast empirically the HEdPERF scale against two alternatives, namely the SERVPERF and the merged HEdPERF–SERVPERF scales. The primary goal is to assess the relative strengths and weaknesses of each instrument in order to determine which had the superior measurement capability in terms of unidimensionality, reliability, validity and explained variance of service quality. The findings were eventually used to transform HEdPERF into an ideal measuring instrument of service quality for the higher education sector. The various steps involved in this comparative study are shown by means of a flow chart in Figure 1.

Research design

Data were collected by means of a structured questionnaire comprising four sections, namely A, B, C and D. Section A contained nine questions pertaining to the student respondent profile. Sections B and C required respondents to evaluate the service components of their tertiary institution, and only perception data were collected and analysed. Specifically, Section B consisted of 22 perception items extracted from the original SERVPERF scale (Cronin & Taylor, 1992) and modified to fit the higher education context.


Section C, on the other hand, comprised 41 items extracted from the original HEdPERF (Firdaus, 2004), a scale uniquely developed to embrace the different aspects of a tertiary institution's service offering. As the items were generated and validated within the higher education context, no modification was required.

[Figure 1. Comparing HEdPERF, SERVPERF and HEdPERF–SERVPERF]


All the items in Sections B and C were presented as statements on the questionnaire, with the same rating scale used throughout: a seven-point Likert-type scale ranging from 1 = strongly disagree to 7 = strongly agree. In addition to the main scale addressing individual items, respondents were asked in Section D to provide an overall rating of service quality, satisfaction level and future visit intentions. There were also three open-ended questions allowing respondents to give their personal views on how any aspect of the service could be improved.

The draft questionnaire was subjected to pilot testing with a total of 30 students, who were asked to comment on any perceived ambiguities, omissions or errors. The feedback revealed some ambiguities, so minor changes were made accordingly; for instance, technical jargon was rephrased to ensure clarity and simplicity. The revised questionnaire was subsequently submitted to three experts (an academic, a researcher and a practitioner) for feedback before being administered in a full-scale survey. These experts viewed the draft questionnaire as rather lengthy, which coincided with the preliminary feedback from students. Nevertheless, in terms of the number of items, the current study conforms with similar research (Cronin & Taylor, 1992; Teas, 1993a, b; Lassar et al., 2000; Mehta et al., 2000; Robledo, 2001) that attempted to compare various measuring instruments of service quality.

In the subsequent full-scale survey, data were collected from students of six higher learning institutions (two public universities, one private university and three private colleges) in Malaysia between January and March 2004. Data were collected using the 'personal contact' approach suggested by Sureshchandar et al. (2002), whereby 'contact persons' (a registrar or assistant registrar) were approached personally and the survey explained in detail. The final questionnaire, together with a cover letter, was then handed personally or mailed to the contact persons, who in turn distributed it randomly to students within their respective institutions.

A total of 560 questionnaires were distributed to the six tertiary institutions. Of these, 390 were returned and nine were discarded due to incomplete responses, leading to a usable response rate of 68.0%. The usable sample size of 381, for a population of nearly 400,000 students in Malaysian tertiary institutions, was in line with the generalized scientific guideline for sample size decisions proposed by Krejcie and Morgan (1970).
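As a worked check of that guideline, the Krejcie and Morgan (1970) formula can be computed directly; this is a minimal Python sketch using the conventional parameter values (chi-square of 3.841 for one degree of freedom at the 95% confidence level, P = 0.5, d = 0.05):

```python
import math

def krejcie_morgan(N: int, chi2: float = 3.841, P: float = 0.5, d: float = 0.05) -> int:
    """Required sample size s for a population of size N (Krejcie & Morgan, 1970):
    s = chi2 * N * P * (1 - P) / (d**2 * (N - 1) + chi2 * P * (1 - P))
    where chi2 is the chi-square value for 1 df at the desired confidence level,
    P the population proportion (0.5 maximizes the sample size), and d the
    margin of error."""
    s = (chi2 * N * P * (1 - P)) / (d ** 2 * (N - 1) + chi2 * P * (1 - P))
    return math.ceil(s)

# For the ~400,000 students cited in the text the guideline gives 384,
# close to the usable sample of 381.
print(krejcie_morgan(400_000))  # -> 384
```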

In order to determine which instrument had the superior measurement capability, a new scale was first developed by merging the two measuring instruments, HEdPERF and SERVPERF, and the scope of the empirical investigation, notably the methodology for developing the merged scale, was defined. Next, the results obtained from the three instruments were computed and compared on the widely used criteria of unidimensionality, reliability, validity and ability to predict service quality. The results of this comparative study were subsequently used to refine the HEdPERF scale, transforming it into an ideal measuring instrument of service quality for the higher education sector.


Results and discussion

Developing the merged HEdPERF-SERVPERF scale

The literature appears to offer considerable support for the superiority of SERVPERF in comparison with other generic instruments. HEdPERF, on the other hand, has been empirically tested as a more comprehensive, industry-specific scale uniquely designed for higher education. It therefore seemed rational to combine the two finest scales in developing possibly the preeminent one, and subsequently to determine which of the three instruments had the superior measurement capability in terms of unidimensionality, reliability, validity and explained variance of service quality. In developing the new scale, factor analysis was used to determine a new dimensional structure of service quality by merging the HEdPERF and SERVPERF items. Specifically, this technique allowed a large number of overlapping variables to be reduced to a much smaller set of factors.

One critical assumption underlying the appropriateness of factor analysis is the overall significance of the correlation matrix, assessed with the Bartlett test of sphericity, which provides the statistical probability that the correlation matrix has significant correlations among at least some of the variables. The results were significant, χ² (50, N = 381) = 13,073, p = 0.01, a clear indication of suitability for factor analysis. Next, the Kaiser-Meyer-Olkin (KMO) measure of sampling adequacy was computed to quantify the degree of intercorrelation among the variables; the resulting index of 0.89 is a 'meritorious' sign of adequacy for factor analysis (Kaiser, 1970). As for the adequacy of the sample size, there is a seven-to-one ratio of observations to variables in this study, which falls within acceptable limits.
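For readers wishing to reproduce these diagnostics, the sketch below shows how both tests could be run in Python using the third-party factor_analyzer package; the data file and DataFrame are hypothetical stand-ins for the 381 x 50 response matrix, not the study's actual data.

```python
import pandas as pd
from factor_analyzer.factor_analyzer import (
    calculate_bartlett_sphericity,
    calculate_kmo,
)

# Hypothetical 381 x 50 DataFrame of pooled HEdPERF + SERVPERF ratings,
# one column per item.
items = pd.read_csv("merged_scale_responses.csv")

chi2, p_value = calculate_bartlett_sphericity(items)
kmo_per_item, kmo_overall = calculate_kmo(items)

print(f"Bartlett chi-square = {chi2:.1f}, p = {p_value:.4f}")
print(f"KMO overall = {kmo_overall:.2f}")  # >= 0.80 is 'meritorious' (Kaiser, 1970)
```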

HEdPERF's proposed measure of service quality was a 41-item scale, consisting of 13 items adapted from SERVPERF and 28 items generated from the literature review and various qualitative research inputs, namely focus groups, a pilot test and expert validation (see Firdaus, 2004). The 41-item instrument has been empirically tested for unidimensionality, reliability and validity using both exploratory and confirmatory factor analysis. Accordingly, all 50 items (HEdPERF's 28 items and SERVPERF's 22 items) were included in the factor analysis utilizing the maximum likelihood method, followed by a varimax rotation. The decision to include a variable in a factor was based on factor loadings greater than ±0.3 (Hair et al., 1995). This threshold was not based on any mathematical proposition but relates to practical significance: according to Hair et al. (1995, p. 385), factor loadings of 0.3 and above are considered significant at p = 0.05 with a sample size of 350 respondents (N = 381 in this study).
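A maximum likelihood extraction with varimax rotation of this kind could be reproduced as follows; again the factor_analyzer package is assumed, and the file name is a hypothetical placeholder.

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer

# 'items' as above: hypothetical 381 x 50 response matrix.
items = pd.read_csv("merged_scale_responses.csv")

fa = FactorAnalyzer(n_factors=4, method="ml", rotation="varimax")
fa.fit(items)

loadings = pd.DataFrame(
    fa.loadings_,
    index=items.columns,
    columns=[f"Factor {i}" for i in range(1, 5)],
)
# Suppress loadings below the |0.3| practical-significance threshold
# (Hair et al., 1995) to mirror the style of table reported in the paper.
print(loadings.where(loadings.abs() >= 0.3).round(2))
```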

The Scree test was used to identify the optimum number of factors that can be extracted before the amount of unique variance begins to dominate the common variance structure (Cattell, 1966). The scree plot suggested four factors, which were subsequently rotated using a varimax procedure. Each variable's communality, representing the amount of its variance accounted for by the factor solution, was also assessed to ensure acceptable levels of explanation.
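The scree inspection and the communality screen described here might look like this in the same Python setup; the 0.50 cut-off follows Hair et al. (1995), and the input file remains a hypothetical stand-in.

```python
import pandas as pd
import matplotlib.pyplot as plt
from factor_analyzer import FactorAnalyzer

items = pd.read_csv("merged_scale_responses.csv")  # hypothetical file, as above
fa = FactorAnalyzer(n_factors=4, method="ml", rotation="varimax")
fa.fit(items)

# Scree plot: eigenvalues of the correlation matrix (Cattell, 1966).
eigenvalues, _ = fa.get_eigenvalues()
plt.plot(range(1, len(eigenvalues) + 1), eigenvalues, marker="o")
plt.xlabel("Factor number")
plt.ylabel("Eigenvalue")
plt.show()

# Communalities below the 0.50 guideline (Hair et al., 1995) flag items
# whose variance the four-factor solution explains poorly; drop and refit.
communalities = pd.Series(fa.get_communalities(), index=items.columns)
retained = items.loc[:, communalities >= 0.50]
fa.fit(retained)
```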


The results showed that the communalities of thirteen variables were below 0.50, 'too low for having a sufficient explanation' (Hair et al., 1995, p. 387). Consequently, a new factor solution was derived with the non-loading variables eliminated, yielding four factors that accounted for 41.0% of the variation in the data (compared with 37.2% of the variance explained by the first factor solution). Table 1 shows the results of the factor analysis in terms of factor name, the variables loading on each factor and the variance explained by each factor. The four factors identified in Table 1 can be described as follows.

Factor 1: non-academic aspects

This factor contains variables that are essential to enable students to fulfill their study obligations, and it relates to the duties and responsibilities carried out by non-academic staff. In other words, it is concerned with the ability and willingness of administrative or support staff to show respect, provide equal treatment and safeguard confidentiality of information.

Factor 2: academic aspects

This factor represents the responsibilities of academics, and it highlights key attributes such as having a positive attitude, good communication skills, allowing sufficient consultation and being able to provide regular feedback to students.

Factor 3: reliability

This factor consists of items that emphasize the ability to provide the pledged service on time, accurately and dependably. It is also concerned with the ability to fulfill promises and the willingness to solve problems in a sympathetic and reassuring manner.

Factor 4: empathy

This factor relates to the provision of individualized and personalized attention to students, with a clear understanding of their specific and growing needs, while keeping their best interests at heart.

It is important to note that the four factors identified did not conform exactly to either the six-factor structure of HEdPERF or the five-factor structure of SERVPERF. In fact, the new dimensions extracted were the result of an amalgamation of the HEdPERF and SERVPERF scales, in which two factors (non-academic aspects and academic aspects) originated in HEdPERF and the other two (reliability and empathy) in SERVPERF.

Comparative test of unidimensionality

A necessary condition for checking construct validity and reliability is the unidimensionality of the measure, which refers to the existence of a single construct/trait underlying a set of measures (Hattie, 1985; Anderson & Gerbing, 1991).


Table 1. Results of factor analysis (factor loadings)

Factor 1: non-academic aspects; Factor 2: academic aspects; Factor 3: reliability; Factor 4: empathy. Loadings below ±0.30 were suppressed; items loading on more than one factor show all reported loadings.

1. Promises kept: 0.65
2. Sympathetic and reassuring in solving problems: 0.38, 0.57
3. Dependability: 0.52
4. On-time service provision: 0.74
5. Responding to requests promptly: 0.51
6. Trust: 0.44
7. Feeling secure in transactions: 0.47, 0.31
8. Politeness: 0.45, 0.32
9. Individualized attention: 0.68
10. Giving personalized attention: 0.74
11. Knowing student needs: 0.66
12. Keeping student interests at heart: 0.55
13. Knowledge of course content: 0.33, 0.57
14. Showing positive attitude: 0.66
15. Good communication: 0.75
16. Feedback on progress: 0.68
17. Sufficient and convenient consultation time: 0.56, 0.32
18. Excellent quality programmes: 0.62
19. Variety of programmes/specializations: 0.43
20. Flexible syllabus and structure: 0.31, 0.60
21. Reputable academic programmes: 0.31, 0.56
22. Educated and experienced academicians: 0.50
23. Efficient/prompt dealing with complaints: 0.51
24. Good communication: 0.51, 0.37
25. Positive work attitude: 0.53, 0.36
26. Knowledge of systems/procedures: 0.50, 0.36
27. Providing service within reasonable time: 0.51
28. Equal treatment and respect: 0.75
29. Fair amount of freedom: 0.56
30. Confidentiality of information: 0.65
31. Easily contacted by telephone: 0.57
32. Counseling services: 0.53, 0.32
33. Students' union: 0.33
34. Feedback to improve service performance: 0.36, 0.32, 0.33
35. Standardized and simple delivery procedures: 0.42, 0.35

Eigenvalues: 10.29 (Factor 1), 2.79 (Factor 2), 2.68 (Factor 3), 1.72 (Factor 4)
Percentage of variance: 26.2, 5.9, 5.8, 3.0
Cumulative percentage of variance: 26.2, 32.2, 38.0, 41.0


In order to perform a comparative check of unidimensionality, a measurement model was specified for each scale and confirmatory factor analysis was run by means of structural equation modeling within the LISREL framework (Joreskog & Sorbom, 1978). Specifically, LISREL 8.3 (Scientific Software International, Chicago, IL) for Windows was used to compare the three measuring instruments, examining how closely the individual items in each model represent the same concept.
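LISREL is commercial software, so as a rough open-source analogue the sketch below specifies a comparable CFA in Python with the semopy package; the model syntax, item names and data file are all illustrative assumptions rather than the paper's actual specification.

```python
import pandas as pd
import semopy

# Hypothetical CFA specification in semopy's lavaan-like syntax: each latent
# service quality dimension is measured by a handful of its items (names are
# illustrative, not the paper's actual item labels).
MODEL_DESC = """
NonAcademic =~ item23 + item24 + item25 + item26
Academic    =~ item13 + item14 + item15 + item16
Reliability =~ item1 + item3 + item4 + item5
Empathy     =~ item9 + item10 + item11 + item12
"""

data = pd.read_csv("merged_scale_responses.csv")  # hypothetical file
model = semopy.Model(MODEL_DESC)
model.fit(data)

# calc_stats reports chi-square, df, GFI, AGFI, CFI, TLI (NNFI), RMSEA, etc.
print(semopy.calc_stats(model).T)
```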

Table 2 presents the measures of model fit for all three scales. The overall fit of the model to the data was evaluated in several ways. An exact fit of a model is indicated when the p-value for chi-square (χ²) is above a certain value (usually p > 0.05), as well as by other goodness-of-fit measures. Because chi-square is sensitive to sample size and tends to be significant in large samples, the relative likelihood ratio between the chi-square and its degrees of freedom was used; according to Eisen et al. (1999), a ratio of five or less is considered an acceptable fit, a prerequisite attained by all three scales. A number of goodness-of-fit measures have been proposed to eliminate or reduce the dependence on sample size. These indices range between zero and one, with higher values indicating a better fit. Table 2 shows the indices for the three scales; all values are close to one, indicating evidence of unidimensionality for the scales (Byrne, 1994).
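The relative likelihood ratio is simple enough to verify from the figures reported in Table 2; the sketch below also applies the conventional RMSEA formula, which may differ slightly from LISREL's reported values owing to estimator details.

```python
import math

def fit_ratios(chi2: float, df: int, n: int):
    """Relative likelihood ratio chi2/df (<= 5 acceptable; Eisen et al., 1999)
    and the conventional RMSEA formula sqrt(max(chi2 - df, 0) / (df * (n - 1))).
    Small discrepancies from LISREL's output are possible."""
    ratio = chi2 / df
    rmsea = math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))
    return ratio, rmsea

# Chi-square and df values as reported in Table 2, with N = 381.
for name, chi2, df in [("HEdPERF", 2404.57, 726),
                       ("SERVPERF", 613.89, 200),
                       ("HEdPERF-SERVPERF", 1938.79, 540)]:
    ratio, rmsea = fit_ratios(chi2, df, n=381)
    print(f"{name}: chi2/df = {ratio:.2f}, RMSEA = {rmsea:.3f}")
```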

The next measure to consider is the root mean square error of approximation (RMSEA), the measure of discrepancy per degree of freedom, generally regarded as one of the most informative fit indices (Browne & Cudeck, 1993; Diamantopoulos & Siguaw, 2000). As illustrated in Table 2, the RMSEA value for HEdPERF was 0.07 (with the dimension 'Understanding' removed), evidence of a fair fit to the data, while both the SERVPERF and the moderating HEdPERF–SERVPERF scales showed a poorer fit of 0.08.

Table 2. Unidimensionality check

Measures of fit                                      HEdPERF*   SERVPERF   HEdPERF–SERVPERF
Chi-square (χ²) at p = 0.01                          2404.57    613.89     1938.79
Degrees of freedom (df)                              726        200        540
Relative likelihood ratio (χ²/df)                    3.31       3.07       3.59
Goodness-of-fit index (GFI)                          0.75       0.87       0.76
Adjusted goodness-of-fit index (AGFI)                0.71       0.83       0.72
Comparative fit index (CFI)                          0.95       0.91       0.92
Non-normed fit index (NNFI)                          0.95       0.89       0.92
Incremental fit index (IFI)                          0.95       0.91       0.92
Root mean squared error of approximation (RMSEA)     0.07       0.08       0.08

Note. *The overall fit assessment was computed with the dimension 'Understanding' excluded due to its poor fit (RMSEA = 0.08).


It was therefore concluded that the modified HEdPERF model fits fairly well and represents a reasonably close approximation in the population. Nevertheless, Byrne (1998) cautions that:

… fit indices provide no guarantee whatsoever that a model is useful. … Fit indices yield information bearing only on the model's lack of fit … they can in no way reflect the extent to which the model is plausible; this judgement rests squarely on the shoulders of the researcher. (Byrne, 1998, p. 119)

Comparative test of reliability

Unidimensionality alone is not sufficient to ensure the usefulness of a scale; the reliability of the composite score should be assessed once unidimensionality has been acceptably established. In this study, coefficient alpha (Cronbach's alpha) was computed for the service quality dimensions of all three instruments. As a guideline, an alpha value of 0.70 or above is considered the criterion for demonstrating the internal consistency of established scales (Nunnally, 1988). In general, there was relatively good internal consistency in all the dimensions representing the three scales.
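Coefficient alpha is straightforward to compute from a respondents-by-items matrix; a minimal NumPy sketch with invented ratings:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Coefficient alpha for a respondents x items matrix covering one
    dimension: alpha = k/(k-1) * (1 - sum(item variances) / variance(total))."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Toy 5-respondent, 4-item example (invented numbers on the 1-7 Likert scale):
ratings = np.array([[6, 5, 6, 7],
                    [4, 4, 5, 4],
                    [7, 6, 7, 6],
                    [3, 4, 3, 4],
                    [5, 5, 6, 5]], dtype=float)
print(f"alpha = {cronbach_alpha(ratings):.2f}")
```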

The Cronbach's alpha for the HEdPERF dimensions ranged from 0.81 to 0.92, with the exception of the dimension 'Understanding' (α = 0.63). Due to its low reliability score, 'Understanding' was removed as part of the scale modification process. The results, as shown in Table 3, indicated that the reliability scores of the modified HEdPERF were clearly superior to SERVPERF's range of 0.68 to 0.76, and slightly better than HEdPERF–SERVPERF's range of 0.77 to 0.91. In fact, previous replication studies (Carman, 1990; Babakus & Boller, 1992; Finn & Lamb, 1991) that compared various measuring instruments within a single setting were unable to demonstrate such superiority.

Comparative test of validity

While the internal consistency estimates of reliability show higher values for the modified HEdPERF scale, the next comparative test involves assessing the validity of the three instruments. For the purpose of this study, two validity tests were conducted, namely criterion validity and construct validity.

Table 3. Reliability coefficients

HEdPERF dimensions     α       SERVPERF dimensions   α       HEdPERF–SERVPERF dimensions   α
Non-academic aspects   0.92    Responsiveness        0.68    Non-academic aspects          0.91
Academic aspects       0.89    Assurance             0.75    Academic aspects              0.87
Reputation             0.85    Empathy               0.76    Reliability                   0.88
Access                 0.88    Tangible              0.73    Empathy                       0.77
Programme issues       0.81    Reliability           0.74
Understanding*         0.63

Note. *Dimension discarded.


The criterion variable used to compare the three scales was the respondents' global assessment of service quality (Bollen, 1989, p. 186), whereas the variable used in the construct validity tests was global preference, measured as the sum of overall satisfaction and future visit intentions (Bollen, 1989, p. 188).

The degree of criterion and construct validity was subsequently computed using the pairwise correlations between the global assessment of service quality, the global preference, and each of the HEdPERF, SERVPERF and HEdPERF–SERVPERF measures. The validity coefficients for the three scales are all significant at the p = 0.01 level. The criterion and construct validity coefficients were 0.58 and 0.57 respectively for the modified HEdPERF scale, 0.27 and 0.34 respectively for the SERVPERF scale, and 0.53 and 0.57 respectively for the moderating HEdPERF–SERVPERF scale. The validity coefficients for the modified HEdPERF scale were thus markedly greater than SERVPERF's and at least as high as the moderating scale's, demonstrating yet again the superiority of the modified HEdPERF in terms of criterion and construct validity.
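These validity coefficients are plain Pearson correlations between each scale's composite score and the two global criteria; a sketch with randomly generated stand-in data (not the study's) shows the computation:

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)  # stand-in data; the study used survey scores

# Hypothetical per-respondent composite scores (n = 381 in the study).
hedperf_score = rng.normal(5.0, 0.8, 381)
global_quality = hedperf_score * 0.6 + rng.normal(0, 0.8, 381)     # Section D rating
global_preference = hedperf_score * 0.6 + rng.normal(0, 0.8, 381)  # satisfaction + future visits

criterion_validity, p1 = pearsonr(hedperf_score, global_quality)
construct_validity, p2 = pearsonr(hedperf_score, global_preference)
print(f"criterion r = {criterion_validity:.2f} (p = {p1:.3f})")
print(f"construct r = {construct_validity:.2f} (p = {p2:.3f})")
```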

Comparative regression analysis

The effect size

The regression model considered the global assessment of service quality as the dependent variable and the service quality scores for the individual dimensions of HEdPERF, SERVPERF and HEdPERF–SERVPERF as the independent variables. A multiple regression analysis was conducted to evaluate how well these scales predicted service quality level (see Table 4). The linear combination of the modified five-dimension HEdPERF (dimension 'Understanding' dropped) was significantly related to the service quality level, R² = 0.35, adjusted R² = 0.34, F(5, 354) = 38.4, p = 0.01. The sample multiple correlation coefficient was 0.59, indicating that approximately 34.8% of the variance of the service quality level in the sample can be accounted for by the linear combination of the five HEdPERF dimensions.
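A regression of this form could be reproduced with statsmodels; the column names and data file below are illustrative assumptions:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical data frame holding the five modified-HEdPERF dimension scores
# and the global service quality rating (column names are illustrative).
df = pd.read_csv("dimension_scores.csv")
X = sm.add_constant(df[["non_academic", "academic", "reputation",
                        "access", "programme_issues"]])
ols = sm.OLS(df["service_quality"], X).fit()

print(ols.rsquared, ols.rsquared_adj)   # R-squared and adjusted R-squared
print(ols.fvalue, ols.f_pvalue)         # F(5, 354) test of the overall model
print(np.sqrt(ols.rsquared))            # multiple correlation coefficient R
```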

Likewise, the five dimensions of SERVPERF were also significantly related to the service quality level, R² = 0.24, adjusted R² = 0.23, F(5, 354) = 22.1, p = 0.01. However, the sample multiple correlation coefficient of 0.49 was much lower than HEdPERF's, indicating that only 24.0% of the variance of the service quality level can be accounted for by the linear combination of the five SERVPERF dimensions. As for the moderating HEdPERF–SERVPERF scale, the linear combination of the four dimensions was also significantly related to the service quality level, R² = 0.31, adjusted R² = 0.30, F(4, 355) = 28.92, p = 0.01. The sample multiple correlation coefficient was 0.55, slightly lower than HEdPERF's, indicating that approximately 30.5% of the variance of service quality level can be explained by the four dimensions. The regression analysis thus demonstrated once again that the modified HEdPERF scale was better at explaining the variance in service quality level.


The relative influence

Table 4 shows the results for the relative influence of the individual dimensions of the three scales, with the global assessment of service quality level as the dependent variable. The resultant output for HEdPERF had an adjusted R² of 0.34 (p = 0.01) and yielded only one significant dimension, namely 'Access', which concurred with the findings of Firdaus (2004). It alone accounted for 13.7% (0.37² ≈ 0.14) of the variance of service quality level, while the other dimensions contributed an additional 21.1% (34.8% − 13.7%). This implied that the dimensions 'Non-academic aspects', 'Academic aspects', 'Reputation' and 'Programme issues' did not contribute significantly towards explaining the variance in the overall rating.
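The standardized coefficients (β) in Table 4 come from fitting the regression on z-scored variables, which makes the slopes directly comparable across dimensions; a sketch under the same hypothetical column names:

```python
import pandas as pd
import statsmodels.api as sm

# Z-score the dependent variable and every predictor before fitting, so the
# OLS slopes are standardized betas. Column names are illustrative, as above.
df = pd.read_csv("dimension_scores.csv")
z = (df - df.mean()) / df.std(ddof=1)

predictors = ["non_academic", "academic", "reputation", "access", "programme_issues"]
betas = sm.OLS(z["service_quality"], sm.add_constant(z[predictors])).fit()
print(betas.params.round(2))   # e.g. the 'access' beta reported as 0.41
print(betas.pvalues.round(2))
```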

The SERVPERF scale, on the other hand, had an adjusted R² of 0.23 (p = 0.01) and yielded two significant dimensions, namely 'Tangible' and 'Assurance', which explained almost all the variance in the service quality level; the dimensions 'Responsiveness', 'Reliability' and 'Empathy' did not contribute significantly towards explaining the variance in the overall rating. As for the HEdPERF–SERVPERF scale, the adjusted R² was 0.30, yielding two significant dimensions, namely 'Non-academic aspects' and 'Academic aspects'. These two dimensions accounted for 23.0% [(0.26 + 0.22)² ≈ 0.23] of the variance of service quality level, while the other two dimensions contributed an additional 7.5% (30.5% − 23.0%).

Table 4. Effect size and relative importance of the individual dimensions

Measuring scales               Standardized coefficient (β)   Significance (p)
HEdPERF* (R² = 0.34)
  Non-academic aspects         −0.11                          0.39
  Academic aspects             −0.02                          0.85
  Reputation                   0.29                           0.03
  Access                       0.41                           0.01
  Programme issues             0.17                           0.12
SERVPERF (R² = 0.23)
  Responsiveness               −0.06                          0.27
  Assurance                    0.28                           0.01
  Empathy                      −0.10                          0.06
  Tangible                     0.21                           0.01
  Reliability                  0.09                           0.12
HEdPERF–SERVPERF (R² = 0.30)
  Non-academic aspects         0.26                           0.03
  Academic aspects             0.22                           0.01
  Reliability                  0.11                           0.32
  Empathy                      −0.07                          0.10

Note. *The modified version with the dimension 'Understanding' removed.


This implied that the dimensions 'Reliability' and 'Empathy' did not contribute significantly towards explaining the variance in the overall rating.

The modified HEdPERF scale

The empirical analysis indicated that the modified five-factor structure with 38 items resulted in more reliable estimations, greater criterion and construct validity, greater explained variance, and consequently a better fit (see Table 5). Besides the better quantitative results, the modified HEdPERF scale also had the advantage of being more specific in areas that are important in evaluating service quality within the higher education sector. Hence, service quality in higher education can be considered a five-factor structure with conceptually clear and distinct dimensions, namely 'Non-academic aspects', 'Academic aspects', 'Reputation', 'Access' and 'Programme issues'. The current study also showed that SERVPERF performed poorly: although it was developed and subsequently proven as a superior generic scale for measuring service quality across a wide range of service industries, it did not provide a better perspective for the higher education setting.

Conclusions and implications

The objectives of this research were twofold. The primary goal was to test and compare the relative efficacy of the three conceptualizations of service quality in order to determine which instrument had the superior measurement capability in terms of unidimensionality, reliability, validity and explained variance. The other objective was to enhance the HEdPERF scale, transforming it into an ideal measuring instrument of service quality for the higher education sector. The tests were conducted utilizing a student sample from Malaysian tertiary institutions, and the findings indicated that the three measuring scales did not perform equivalently in this particular setting. In fact, the results led us to conclude that measurement of service quality by means of the HEdPERF method resulted in more reliable estimations, greater criterion and construct validity, greater explained variance, and consequently a better fit than the other two instruments, namely SERVPERF and HEdPERF–SERVPERF.

Table 5. Comparison of the three scales

Criteria                                             Modified HEdPERF   SERVPERF    HEdPERF–SERVPERF
Cronbach alpha (α) range                             0.81–0.92          0.68–0.76   0.77–0.91
Criterion validity                                   0.58               0.27        0.53
Construct validity                                   0.57               0.34        0.57
Adjusted R²                                          0.34               0.23        0.30
Root mean squared error of approximation (RMSEA)     0.07               0.08        0.08


In short, the findings demonstrated an apparent superiority of the modified five-factor structure of the HEdPERF scale.

In what may be the first study of its kind within a higher education setting, the regression analyses compared the three scales to determine how well they predicted service quality level. Although the modified five-factor structure of HEdPERF clearly outperformed the SERVPERF and the moderating HEdPERF–SERVPERF dimensions in terms of explaining the variance in service quality level, the implications of these findings are less clear. Since this study only examined the respective utilities of each instrument within a single industry, any suggestion that HEdPERF is generally superior would still be premature. Nonetheless, the current findings do provide some important insights into how these instruments of service quality compare with one another.

The current results also suggest that the dimension 'Access' is the most important determinant of service quality in higher education, reinforcing the recommendation made by Firdaus (2004). In other words, students perceived 'Access' to be more important than other dimensions in determining the quality of service they received. As the only HEdPERF dimension to achieve significance, 'Access' is concerned with such elements as approachability, ease of contact and availability of both academic and non-academic staff (Firdaus, 2004). Tertiary institutions should therefore concentrate their efforts on the dimension perceived to be important, rather than spreading their energies across a number of attributes that they merely assume to be important determinants of service quality. While the idea of providing adequate service on all dimensions may seem attractive to most service marketers and managers, failure to prioritize these attributes may result in an inefficient allocation of resources. In conclusion, the current study provides empirical support for the idea that the modified five-factor structure of HEdPERF with 38 items may be the superior instrument for measuring service quality within a higher education setting.

Limitations and suggestions for future research

The current study allows us to understand how three measuring instruments of service quality compare with one another. To date, these scales had not been compared and contrasted empirically, making this research a unique contribution to the services marketing literature. However, this work may pose more questions than it provides answers. The present findings suggest that the modified HEdPERF scale is better suited to higher education service settings, but caution is necessary in generalizing the findings even though considerable evidence of relative efficacy was found. Given that the current study is limited to one service industry, this assertion needs to be validated by further research. Future studies should apply the measurement instrument in other countries, in other industries, and with different types of tertiary institutions in order to test whether the results are general and consistent across different samples.

Likewise, it may be worthwhile to compare different measuring instruments from a different perspective, that of other customer groups, namely internal customers, employers, Government, parents and the general public.


Although in higher education students must now be considered 'primary customers' (Crawford, 1991), the industry generally has a number of complementary and contradictory customers. This study has concentrated on the student customer only, but it is recognized that education has other customer groups which must be satisfied.

References

Anderson, J. C. & Gerbing, D. W. (1991) Predicting the performance of measures in a confirmatory factor analysis with a pretest assessment of their substantive validities, Journal of Applied Psychology, 76(5), 732–740.

Babakus, E. & Boller, G. W. (1992) An empirical assessment of the SERVQUAL scale, Journal of Business Research, 24(3), 253–268.

Bemowski, K. (1991) Restoring the pillars of higher education, Quality Progress, October, 37–42.

Bollen, K. A. (1989) Structural equations with latent variables (New York, John Wiley & Sons).

Bolton, R. N. & Drew, J. H. (1991a) A longitudinal analysis of the impact of service changes on customer attitudes, Journal of Marketing, 55, 1–9.

Bolton, R. N. & Drew, J. H. (1991b) A multistage model of customers' assessments of service quality and value, Journal of Consumer Research, 17, 375–384.

Boulding, W., Kalra, A., Staelin, R. & Zeithaml, V. A. (1993) A dynamic process model of service quality: from expectations to behavioural intentions, Journal of Marketing Research, 30, 7–27.

Browne, M. W. & Cudeck, R. (1993) Alternative ways of assessing model fit, in: K. A. Bollen & J. S. Long (Eds) Testing structural equation models (Newbury Park, CA, Sage).

Byrne, B. M. (1994) Structural equation modeling with EQS and EQS/Windows: basic concepts, applications and programming (Thousand Oaks, CA, Sage).

Byrne, B. M. (1998) Structural equation modeling with LISREL, PRELIS and SIMPLIS: basic concepts, applications and programming (Mahwah, NJ, Lawrence Erlbaum Associates).

Carman, J. M. (1990) Consumer perceptions of service quality: an assessment of the SERVQUAL dimensions, Journal of Retailing, 66, 33–55.

Cattell, R. B. (1966) The scree test for the number of factors, Multivariate Behavioural Research, 1, 245–276.

Churchill, G. A. & Surprenant, C. (1982) An investigation into the determinants of customer satisfaction, Journal of Marketing Research, 19, 491–504.

Crawford, F. (1991) Total quality management (London, Committee of Vice-Chancellors and Principals Occasional Paper).

Cronin, J. J. & Taylor, S. A. (1992) Measuring service quality: a reexamination and extension, Journal of Marketing, 56, 55–68.

Diamantopoulos, A. & Siguaw, J. A. (2000) Introducing LISREL (London, Sage).

Eisen, S. V., Wilcox, M. & Leff, H. S. (1999) Assessing behavioural health outcomes in outpatient programs: reliability and validity of the BASIS-32, Journal of Behavioural Health Sciences & Research, 26(4), 5–17.

Finn, D. W. & Lamb, C. W. (1991) An evaluation of the SERVQUAL scale in a retailing setting, in: R. Holman & M. R. Solomon (Eds) Advances in consumer research (Provo, UT, Association for Consumer Research), 483–490.

Firdaus, A. (2004) Managing service quality in higher education sector: a new perspective through development of a comprehensive measuring scale, Proceedings of the Global Conference on Excellence in Education and Training: Educational Excellence through Creativity, Innovation & Enterprise, Singapore.

Franceschini, F. & Rossetto, S. (1997) Online service quality control: the 'Qualitometro' method, De Qualitate, 6(1), 43–57.

Ginsberg, M. B. (1991) Understanding educational reforms in global context: economy, ideology and the state (New York, Garland).

Hair, J. F. Jr., Anderson, R. E., Tatham, R. L. & Black, W. C. (1995) Multivariate data analysis with readings (New Jersey, Prentice Hall International Editions).

Hattie, J. (1985) Methodology review: assessing unidimensionality of tests and items, Applied Psychological Measurement, 9, 139–164.

Hattie, J. (1990) Performance indicators in education, Australian Journal of Education, 3, 249–276.

Joreskog, K. G. & Sorbom, D. (1978) Analysis of linear structural relationships by method of maximum likelihood (Chicago, IL, National Educational Resources).

Kaiser, H. F. (1970) A second-generation little jiffy, Psychometrika, 35, 401–415.

Krejcie, R. & Morgan, D. (1970) Determining sample size for research activities, Educational & Psychological Measurement, 30, 607–610.

Lassar, W. M., Manolis, C. & Winsor, R. D. (2000) Service quality perspective and satisfaction in private banking, Journal of Services Marketing, 14(3), 244–271.

Lawson, S. B. (1992) Why restructure? An international survey of the roots of reform, Journal of Education Policy, 7, 139–154.

Lewis, R. C. & Booms, B. H. (1983) The marketing aspects of service quality, in: L. Berry, G. Shostack & G. Upah (Eds) Emerging perspectives on services marketing (Chicago, IL, American Marketing Association), 99–107.

Llusar, J. C. B. & Zornoza, C. C. (2000) Validity and reliability in perceived quality measurement models: an empirical investigation in Spanish ceramic companies, International Journal of Quality & Reliability Management, 17(8), 899–918.

Mazis, M. B., Ahtola, O. T. & Klippel, R. E. (1975) A comparison of four multi-attribute models in the prediction of consumer attitudes, Journal of Consumer Research, 2, 38–52.

Mehta, S. C., Lalwani, A. K. & Han, S. L. (2000) Service quality in retailing: relative efficiency of alternative measurement scales for different product-service environments, International Journal of Retail & Distribution Management, 28(2), 62–72.

Nunnally, J. C. (1988) Psychometric theory (Englewood Cliffs, NJ, McGraw-Hill).

Oliver, R. L. (1989) Processing of the satisfaction response in consumption: a suggested framework and research propositions, Journal of Consumer Satisfaction, Dissatisfaction & Complaining Behaviour, 2, 1–16.

Owlia, M. S. & Aspinwall, E. M. (1997) TQM in higher education – a review, International Journal of Quality & Reliability Management, 14(5), 527–543.

Parasuraman, A., Zeithaml, V. A. & Berry, L. L. (1985) A conceptual model of service quality and its implications for future research, Journal of Marketing, 49, 41–50.

Parasuraman, A., Zeithaml, V. A. & Berry, L. L. (1988) SERVQUAL: a multiple-item scale for measuring consumer perceptions of service quality, Journal of Retailing, 64(1), 12–40.

Parasuraman, A., Berry, L. L. & Zeithaml, V. A. (1991a) Refinement and reassessment of the SERVQUAL scale, Journal of Retailing, 67(4), 420–450.

Parasuraman, A., Berry, L. L. & Zeithaml, V. A. (1991b) More on improving service quality measurement, Journal of Retailing, 69(1), 140–147.

Parasuraman, A., Zeithaml, V. A. & Berry, L. L. (1994) Reassessment of expectations as a comparison standard in measuring service quality: implications for future research, Journal of Marketing, 58, 111–124.

Quester, P., Wilkinson, J. W. & Romaniuk, S. (1995) A test of four service quality measurement scales: the case of the Australian advertising industry, Working Paper 39, Centre de Recherche et d'Etudes Appliquées, Groupe ESC Nantes Atlantique, Graduate School of Management, Nantes, France.

Robledo, M. A. (2001) Measuring and managing service quality: integrating customer expectations, Managing Service Quality, 11(1), 22–31.

Soutar, G. & McNeil, M. (1996) Measuring service quality in a tertiary institution, Journal of Educational Administration, 34(1), 72–82.

Sureshchandar, G. S., Rajendran, C. & Anantharaman, R. N. (2002) Determinants of customer-perceived service quality: a confirmatory factor analysis approach, Journal of Services Marketing, 16(1), 9–34.

Teas, R. K. (1993a) Expectations, performance evaluation, and consumers' perceptions of quality, Journal of Marketing, 57(4), 18–34.

Teas, R. K. (1993b) Consumer expectations and the measurement of perceived service quality, Journal of Professional Services Marketing, 8(2), 33–54.

Zeithaml, V. A. & Bitner, M. J. (1996) Services marketing (Singapore, McGraw-Hill).
