
Journal of Institutional Research

South East Asia

    JIRSEA

    Volume 7 Number 2 2009

    ISSN 1675-6061

    Editor:

Nirwan Idrus PhD (Monash) IQA (London)

    All papers are refereed by two appropriately independent, qualified experts and evaluated according to:

    • Significance in contributing new knowledge

• Appropriateness for the Journal
• Clarity of presentation
• Technical adequacy

    http://www.seaair.info

JIRSEA is indexed with the Directory of Open Access Journals, SCOPUS and EBSCOhost (Education Research Index/Education Research Complete), and is currently applying for indexing with INFORMIT (Australasian Scholarly Research).



    CONTENTS

Page

EDITORIAL BOARD 3

EDITOR'S NOTE 4

ARTICLES

Mohana Nambiar, Leela Koran, Asha Doshi, Kamila Ghazali, Hooi Moon Yee & Kan Ngat Har
Generic Narrative Grade Descriptors for Quality Assurance in Tertiary Institutions: A Conceptualization Process 5

Teay Shawyun
Strategic Balancing of the IQA = EQA Equation 19

Mahsood Shah & Chenicheri Sid Nair
Using Student Voice to Improve Student Satisfaction: Two Australian Universities, the Same Agenda 43

Akram Al Basheer, Samer Khasawneh, Amjad Abu-Loum and Ahmad Qablan
Online Testing: Perceptions of University Students in Jordan 56

Nirwan Idrus
Comments: Strategic Alliances in Higher Education? 71

Emilio S. Capule & Alice T. Valerio
Management Expectations and Students' Perceptions of the Entry-Level Technician Skills: Evidence from the Semiconductor Industry in the Philippines 74


EDITORIAL BOARD

Editor

Professor Dr Nirwan Idrus
Director, Special Projects, President/Vice-Chancellor's Office, Universiti Tun Abdul Razak, Kuala Lumpur, Malaysia. [email protected]

Members

Professor Dr Zoraini Wati Abas
Immediate Past President, SEAAIR; Director, Institute for Quality, Research & Innovation (IQRI), Open University Malaysia, Kuala Lumpur, Malaysia. [email protected]

Dr Raj Sharma
Member of Executive Committee, SEAAIR; Senior Adviser, RMIT University, Melbourne, Australia. [email protected]

Professor George Gordon
Director, Centre for Academic Practice, University of Strathclyde, Scotland

Professor Somwung Pitiyanuwat
Chulalongkorn University, and Director, Office of Education Standard & Evaluation, ONEC, Bangkok, Thailand

Dr John Muffo
Director, Office of Academic Assessment, Virginia Polytechnic Institute and State University, Virginia, USA

Dr Narend Baijnath
Dean, Applied Community Sciences, Technikon Southern Africa, South Africa

Dr Ng Gan Chee
Principal, Australasian Consultants, Melbourne, Australia



EDITOR'S NOTE

Welcome to this second edition of JIRSEA in 2009, and thank you to all contributors. Contributions to JIRSEA are arriving steadily, allowing the editors the pleasant task of bringing you a selection of articles that we consider of high quality, covering a number of aspects of institutional research.

Assuring the quality of higher education and tertiary institutions is the impetus of the paper by Nambiar et al. Many papers have been written about quality in higher education, and more are expected given the continually multiplying aspects of this area. Nambiar et al. specifically cover the matter of Generic Narrative Grade Descriptors, from their conceptualization to implementation.

Teay Shawyun expands the outlook on quality assurance in this edition by seeking a convergence between Internal Quality Assurance (IQA) and External Quality Assurance (EQA). He shares his experiences from two distinctly different countries, Thailand and Saudi Arabia. Specifically, he developed the "What" and "How" framework, which includes a new methodology for institutions to prepare their IQA to approach the EQA.

Shah and Nair address the immediate output of the higher education process by discussing the dynamics of feedback and what should be done, and be seen to be done, with it. They advocate feeding results back to the stakeholders in order to improve quality overall, and they share their experiences at two universities in Australia.

Khasawneh et al. have once again provided insights into quality in higher education in the Kingdom of Jordan. This time their paper is on the online testing of students at Hashemite University, Jordan, where they found overwhelming support for such a method. This is encouraging, given that almost every university that wishes to progress inevitably looks at ways of improving its teaching, learning and assessment methods. Extrapolating this will of course lead to quite a transformation in higher education assessment.

Capule and Valerio conclude this edition by taking us to the reality of mismatches between what employers expect of graduates and graduates' perceptions of what their employers want of them. Such mismatches point to a lack of quality awareness, and perhaps of consideration, in assuring the fitness for purpose of a higher education institution's programs and courses.

Happy reading,

Nirwan Idrus
Editor


    Generic Narrative Grade Descriptors for Quality Assurance in Tertiary Institutions: A Conceptualization Process

Mohana Nambiar, Leela Koran, Asha Doshi, Kamila Ghazali, Hooi Moon Yee & Kan Ngat Har

    University of Malaya, Kuala Lumpur, Malaysia (contact: [email protected])

    Abstract

This paper focuses on the issues involved in the process of conceptualizing generic narrative grade descriptors. Currently, tertiary institutions in Malaysia, like the University of Malaya, use a grading system which describes student performance in terms of marks that are equated to a letter grade carrying a grade point and only a brief descriptor such as excellent or fail. Clearly, this description does not give adequate information about students' abilities to the students themselves or to other stakeholders. The issues in conceptualizing these descriptors involve, firstly, defining what generic narrative grades are; secondly, how to develop them, especially when a 'top-down' approach is not feasible, as imposing a grade system on an already existing program is bound to be disruptive; thirdly, what methodology to adopt in a relatively new research area; and finally, who should make decisions regarding the manner of instituting the descriptors. This paper shows why the different issues involved in the conceptualization of the study had to be addressed carefully. The findings serve as guidelines for developing generic narrative grade descriptors.

    This contribution has been peer-reviewed. In line with the publication practices of open access e-journals, articles are free to use though the Editor requests proper attribution to JIRSEA. © Copyright is retained by authors.



Quality assurance and institutions of higher learning

With the current emphasis on globalization and the internationalization of education, institutions of higher education in developing countries like Malaysia are faced with real challenges to remain internationally competitive and, more importantly, relevant. Quality is not an option but is mandatory for any organization aspiring to be a market leader. Quality assurance and benchmarking are instruments by which the management monitors its performance, effectiveness and efficiency in all aspects of its core activities and meets customers' needs. Simultaneously, it is imperative to formalize the promotion of accountability, transparency and ethical values in the governance of a university.

Quality assurance in Malaysian public universities is not a new phenomenon; tertiary institutions have always practiced various measures, including the use of external examiners as well as national and international peer evaluations of staff, to ensure the quality of their programs. However, the rapid democratization of education within the country in the last two decades and the urgent need to be on par with other institutions of higher learning at the international level have necessitated a more formal and systematic approach towards quality assurance in tertiary institutions. According to the Ministry of Education's Code of Practice for quality assurance in public universities, quality assurance consists of:

"…all those planned and systematic actions (policies, strategies, attitudes, procedures and activities) necessary to provide adequate confidence that quality is being maintained and enhanced and the products and services meet the specified quality standards" (Ministry of Education, 2004:7).

Hence, in the context of higher education, quality assurance is the "totality of systems, resources and information devoted to maintaining and improving the quality and standards of teaching, scholarship and research as well as students' learning experience" (Ministry of Education, 2004:7). A clear indicator of the country's commitment to quality assurance is the setting up in 2001 of the Quality Assurance Division in the Ministry of Education as the national agent responsible for managing and coordinating the quality assurance system for public universities. The mission of this body is to promote confidence amongst the public that "the quality of provision and standards of awards in higher education are being safeguarded and enhanced" (Ministry of Education, 2004:7). Among the measures taken to promote public confidence is the regular conduct of "academic reviews to evaluate the performance of program outcomes, the quality of learning opportunities and the institutional capacity and management of standards and quality" (Ministry of Education, 2004:8).

One area under the focus of academic review is the management of student assessment processes. Testing methods drive student learning, and the results of assessment are used as the basis for conferring degrees and qualifications. Hence, student assessment is a crucial aspect of quality assurance. It is thus imperative that methods of assessment are clear and support the aims of the program. The Ministry requires institutions of higher learning to provide "formal feedback" to students on their performance "during the course in time for remediation if necessary" (Ministry of Education, 2004:50). One method of providing this formal feedback is to document student performance in terms of narrative evaluation.

Thus, in line with the Ministry of Higher Education's efforts to formalize quality assurance measures, the Faculty of Languages and Linguistics, University of Malaya, embarked in 2006 on a research project to investigate the feasibility of employing generic narrative grade descriptors (GNGD) for its undergraduate program. This paper is an outcome of that study. It discusses the issues involved in the process of conceptualizing these descriptors. While highlighting the benefits of employing GNGD to ensure quality in tertiary education, the researchers take cognizance of the notion that GNGD may be only a minuscule mechanism in the larger scenario of quality assurance.

Benefits of generic narrative grade descriptors

Before examining the benefits that can be obtained from the use of generic narrative descriptors for grades, a look at the current system of grading in the local public universities is in order. Presently, the majority of tertiary institutions, including the University of Malaya, use a grading system which describes student performance in terms of marks which are equated to a letter grade. The grade carries a grade point and only a brief one- or two-word descriptor such as excellent, credit, pass, marginal pass or fail. Implicit in the use of these descriptors is the notion that they are sufficient to convey all the qualities that are expected of specific grades. It is assumed that all qualities equated with a letter grade are universally understood without being explicitly stated. Clearly, this description does not give adequate information about the students' performance and abilities to the students themselves or to other stakeholders such as parents, fund providers and employers.

The benefits of employing a generic narrative grade description are manifold. In comparison to marks or single-word descriptors, narrative descriptions provide more meaningful feedback for the stakeholders. Most teachers tend to return tests to students with a letter grade or a number score that merely shows the number of right or wrong responses, giving no information of intrinsic interest to the student (Brown, 1994). Brown states, "Grades and scores reduce a mountain of linguistic and cognitive performance data to an absurd minimum. At best they give a relative indication of a formulaic judgment of performance as compared to others in the class – which fosters competitive, not cooperative learning" (1994:386). In other words, a more detailed evaluation of the student's performance will enhance beneficial washback, that is, the positive effect of testing on learning and teaching (Hughes, 2003).

Defining the numeric value of the grade in terms of knowledge, skills and performance also helps to establish employability. A case in point is a complaint to a local newspaper from a recruiter for a multinational company. He complained that although only those who had scored an "A" in the Sijil Pelajaran Malaysia (Malaysian Certificate of Education) English language examination had been called for interview, a mere 10%-15% were able to communicate fluently in English. The rest were stumbling, and some could not even understand the questions (Rodrigues, 2006:N47). This complaint is a comment not only on the inconsistencies inherent in the "A" grade but also on the assessment processes. If the assessors had been provided with adequate narrative grade descriptors to assess the students, this inconsistency could have been reduced and potential employers would be able to gauge the value of an "A".

    Other than contributing to aspects of feedback, there are other areas that closely tie a generic narrative grade (GNG) description to quality assurance. Narrative descriptions of grades facilitate institutional self-evaluation, fulfilling a condition clearly stated in the Code of Practice set out by the Ministry of Education (2004:5): “internal quality assessment is the responsibility of the university”. Furthermore, transparency of criteria used (a prerequisite for GNGD) would promote public confidence as well as allow accreditation and review of programs by outside bodies. In addition, if a system of GNG description could be employed across the whole nation, it would not only allow for standardization but also facilitate credit transfers across universities.

Conceptualizing GNGD

There is no doubt about the contribution of a GNG description to assuring quality in tertiary institutions; it is its feasibility that needs scrutiny. In our attempt to study whether a generic narrative description could be implemented across the undergraduate program, we had to first deal with a number of concerns. First we had to define GNGD; the next issue was how to develop GNGD. Before we could embark on that, however, we needed to review what other institutions, both local and international, had to offer in terms of GNGD. Having surveyed the literature, we were in a better position to decide on the methodology to be adopted for the research. The final phase of our conceptualization was related to decision making: who should have a say regarding the manner of instituting the GNGD. Each of these concerns is addressed below in greater detail.

Defining GNGD

The first issue that needed to be addressed was the definition of GNGD itself. In conceptualizing generic narrative grade descriptors, two concepts – generic and narrative – had to be defined. While the former is implicit in our current grading system, the latter is not. The Cambridge International Dictionary of English (Proctor, 1995:587) defines "generic" as something that is "shared by, typical of or relating to a whole group of similar things rather than to any particular thing". Hence, a generic grade description would be one which is standard and nonspecific, one that encapsulates the common elements of a grade. The present grading system (e.g. A = 76-100 marks) applies to all courses; therefore, it is understood that an "A" in one course means the same as an "A" in another course. It can then be inferred that the grading system currently in use is a generic one. However, although there is a one- or two-word descriptor, there is no narrative description for each grade.
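As a concrete (and deliberately bare) picture of the system being criticized, the current generic scheme amounts to a simple marks-to-grade lookup, identical for every course. The following is an illustrative sketch only: the paper states only the A band (76-100), so the remaining bands and their one-word labels are invented here.

```python
# Bare sketch of the grading scheme the paper criticizes: marks collapse to a
# letter grade plus a one- or two-word descriptor, identically for every
# course. Only the A band (76-100) is stated in the paper; the other bands
# and labels below are invented for illustration.
GRADE_BANDS = [
    (76, 100, "A", "excellent"),
    (60, 75, "B", "credit"),         # hypothetical band
    (50, 59, "C", "pass"),           # hypothetical band
    (40, 49, "D", "marginal pass"),  # hypothetical band
    (0, 39, "F", "fail"),            # hypothetical band
]

def grade_for(marks):
    """Return (letter, descriptor) for a mark out of 100."""
    for low, high, letter, descriptor in GRADE_BANDS:
        if low <= marks <= high:
            return letter, descriptor
    raise ValueError("marks must be between 0 and 100")

print(grade_for(82))  # ('A', 'excellent') is all the grade currently conveys
```

The one-word descriptor is the only narrative element the scheme carries, which is precisely the information gap the GNGD is meant to fill.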

The concept of narrative is not as easy to define. A variety of terms – narrative evaluations, performance evaluations, rating scales, weighted rubrics, performance bands and grade descriptors – have been used in the literature to describe the underlying construct. Brown (1994) considers narrative evaluations as feedback on test performance that goes beyond a score, a grade or a phrase. He feels the teacher should "respond to as many details throughout the test as time will permit", giving praise and constructive criticism as well as "strategic hints on how a student might improve certain elements of performance" (p.386). A similar idea underlies the term performance evaluation. According to the University of California, Santa Cruz (2006), performance evaluations, especially written ones, tend to "anchor a system that encourages students and instructors to get to know one another, that allows instructors to acknowledge and document the full range of student achievements, and that can provide much more information than do conventional transcripts". From these two descriptions, it can be assumed that they refer to the same concept under slightly different terms. What is more pertinent is that narrative or performance evaluations are meant as individual feedback on teacher-conducted classroom tests and are more suited to small groups. They are more appropriate for formative assessment and not so practical for summative, standardized tests involving large numbers.

Hughes (2003) describes a rating scale in terms of 'criterial levels of performance', that is, "the level(s) of performance for different levels of success" (p.61). In other words, the scale contains a series of ordered categories indicating different levels of ability. A similar concept is conveyed by the term weighted rubric, meaning the breaking down of a language skill like writing or speaking into categories and sub-categories and the assigning of a specific point value to each. When they come in the form of bands, they are known as performance bands. Rating scales and performance bands are often used as scoring guides by assessors, especially in the field of language testing. The term grade descriptor, on the other hand, is very specific, as it is linked to the grade. According to Greatorex (2001:451), grade descriptors "are the characteristics which are found in the performance of candidates at particular grades". A similar definition is available from the University College Dublin (UCD) website, which states that grade descriptors "show how a given level of performance will be reflected in a grade" (UCD Registrar's Office, University College Dublin, 2006).

Studying the definitions of all these terms, a central underlying idea becomes evident. Rating scales or weighted rubrics, used as scoring guides by teachers, can be the basis for giving feedback to students or for informing other stakeholders (for example, employers, parents and fund providers) about students' performance; in this case the rating scales function as performance bands. The rating scale takes on a slightly different role when used for narrative or performance evaluation: it then provides individualized feedback on how a student has performed, and it need not be tied to a letter grade. A grade descriptor, by contrast, describes performance directly related to the grade. Based on this analysis, we felt that our concept of "narrative grade descriptor" would be very similar to that of the grade descriptor. We conceived it as generic in nature, providing a detailed description of the common elements of a grade which holds true for that specific grade across all courses in a particular program (in this case the undergraduate program at the Faculty). The GNGD is all-encompassing in function, in that it serves as the scoring guide for assessors, the provider of detailed feedback for students and the descriptor of standards for the other stakeholders.

A Survey of Narrative Grade Descriptors

In order to further understand the practice of adopting a narrative grade description, a literature search was conducted. Despite the Ministry of Higher Education's stipulation, discussed above, that narrative feedback should be provided to students, an online survey of the websites of the 20 public universities in Malaysia showed that the majority do not provide grading schemes on their websites, and the four that do provide only one- or two-word grade descriptors.

An Internet search of some educational institutions revealed that grade descriptors were used in school systems in the United States of America as a means of giving feedback to students (for instance, those of the Pennsylvania Department of Education, n.d.). As such, they are used with classroom-based formative assessments and target individual students. Reporting systems, ranging from tick boxes and spreadsheets to computerized systems such as Taskstream's (n.d.) Competency Assessment and Reporting Systems, were available for producing profiles based on students' performance in the assessments. However, these were subject-specific.

At institutions of higher learning, grade descriptors have taken on another dimension: they are a means of quality assurance. For instance, University College Dublin has on its website a listing of grade descriptors approved by its Academic Council. Criteria for the six grades (A-G) awarded for the courses in this University are presented. These descriptors seem to cover both cognitive and linguistic skills, at two separate levels for each grade (samples of grades A and B are provided in the Appendix). It is also interesting to note that parallels can be drawn between these descriptors and the descriptors given in the marking schemes of their programs (see http://www.ucd.ie/hispanic/marking_descriptors.htm). General descriptors for courses such as foreign language courses are also available.

The University of Sydney website features the grading system employed in each of its faculties. At the outset, the justification for this system is given so as to make students understand "… the way their work is assessed within the unit of the study and the broader policy framework within which their grades are distributed" (Department of Linguistics, University of Sydney, n.d.). Grade descriptors for the Linguistics Department of this University are presented on its website (http://www.arts.usyd.edu.au/departs/linguistics/undergrad/assessment.shtml).


Another relevant document that enabled us to appreciate the position of narrative grade descriptors, particularly within the framework of quality assurance, was the subject benchmark statement available from the Quality Assurance Agency for Higher Education (2001). It states that "Subject benchmark statements provide a means for the academic community to describe the nature and characteristics of programs in a specific subject. They also represent general expectations about the standards for the award of qualifications at a given level and articulate the attributes and capabilities that those possessing such qualifications should be able to demonstrate" (ibid). It is the second part of this definition that sheds light on the relationship between subject benchmark statements and narrative grade descriptors. Within this framework, the benchmarking of academic standards for linguistics had been undertaken by a group of subject specialists drawn from and acting on behalf of the subject community. The membership listing at the end of that document reveals that they are representatives from well-established universities in the UK. Additionally, associations representing individuals with special interests in this area, i.e. the British Association for Applied Linguistics (BAAL), the Linguistics Association of Great Britain (LAGB) and so on, had also been consulted in drawing up the statement. The final document contains the following sections: defining principles, subject skills and other skills, teaching-learning and assessment. The last section is on standards.

Tan and Prosser (2004), in their phenomenographic study, report on the different ways that academic staff understood and practised grade descriptors as forms of standards-based assessment. Four qualitatively different conceptions of grade descriptors were identified. Firstly, grade descriptors were described as generic descriptors, as they depict achievement levels as descriptions of standards for generic purposes. Secondly, grade descriptors were understood as grade distributors, as they focus on how students' work can be understood in terms of how it is distributed amongst different levels of achievement. Thirdly, grade descriptors were labeled as grade indicators, since they indicate to staff and students what a piece of student's work might mean in terms of specific criteria. Finally, grade descriptors were labeled as grade interpreters, since they are perceived as authentic bodies of intrinsic meaning as to what actual achievement levels are. In their study, Tan and Prosser (2004) seek to provide a basis for identifying and resolving different expectations in understanding and practising grade descriptors, as well as clarifying the place of standards and criteria in assessment. Each of the conceptions is discussed in terms of providing a form of standards-based assessment. Suggestions for enhancing the use of grade descriptors as standards-based assessment are then made.

Developing GNGD

Having obtained a relatively comprehensive understanding of how other tertiary institutions implement GNGD, the next step of our conceptualization process involved the issue of how to develop GNGD applicable to our purpose. Only one study on the process of developing generic descriptors could be located. This was a research report commissioned and funded by the National Asian Languages and Studies in Australian Schools (NALSAS) Taskforce. Bringing together the experience of experts and information from students' test performance, this study proposes a model for developing "exit proficiency descriptors" for the Japanese language program which would be "grounded in actual student performance" (Scarino, Jenkins, Allen & Taguchi, 1997:9). The attempt here is to "embody what the students are able to accomplish and show the degree depicting standards" (ibid). The study highlights two important issues in arriving at the descriptors, namely, the "level at which exit proficiency is pitched" and "the style of the descriptor" (ibid). It suggests that further development is "essential to create the necessary assessment resources, i.e. sample test tasks, test specifications, marking and reporting formats, and moderation procedures, which will be useful accompaniments to the descriptors" (ibid).

It needs to be emphasized that the focus of the NALSAS report was on the development of new courses and course material, which is contrary to the focus of our study, which looks at developing a narrative grade descriptor system that will have to be imposed on the existing structure of courses that have been offered at the Faculty for the last ten years. In other words, the NALSAS study took a "top-down" approach to instituting generic descriptors, as theirs was a program yet to be implemented, but we preferred not to. While appreciating the contribution of the NALSAS model, we realised that we needed to take a different approach to the problem at hand. Imposing GNGD on an already existent program was bound to be disruptive and complicated. Furthermore, since our study was meant to be a feasibility study, we concluded that we should adopt a "bottom-up" approach, that is, examine the existing assessment system to see whether it could support a GNG description. In order to reflect this approach, our first research question was formulated as:

    • What are the elements in the current assessment system at the Faculty of Languages and Linguistics that affect the possibility of deriving a GNG description?

Underlying this question was the belief that if the findings showed there existed enough common elements or patterns in the assessment practices currently employed, these could form the basis for deriving narrative descriptors for the grades. In other words, we wanted to know whether there existed a standardized approach to assessment for all the courses offered in the undergraduate program and whether there were sufficient commonalities in it on which a generic narrative description could be anchored. We were cognizant of the fact that the assessment practices, which include the grading scheme, vetting and moderation procedures, marking formats, etc., could also be affected by the program structure, such as the different categories of courses, course pre-requisites and contact hours, to name a few. Hence all of these would have to be examined for commonalities which could then form the bases for our GNGD. As part of our bottom-up approach, we also recognized that we should tap into the perceptions of the instructors teaching the different courses as to what they believe is implicitly stated when they assign grades to students. Thus the second research question of our study was conceived as:

• How do the perceptions of assessors regarding the meaning of a particular grade (for instance, grade A) affect the possibility of arriving at a generic narrative grade description?

If the majority of the assessors had a common perception of what a particular grade meant or embodied, then it would be possible to use that commonality as a basis for deriving GNGD. On the other hand, if there were variations in responses, what would that entail? This was an issue that needed careful reflection. Initially, in order to elicit assessors' perceptions of what a grade meant to them, the idea was to get the respondents to choose from a list of characteristics or descriptors of the grade prepared by the researchers. Such a list would have made the task of analyzing the responses easier, as the data would have been more quantifiable. More importantly, the list, being a carefully thought-out product, would have incorporated most of the significant characteristics of the grade as perceived by the researchers. However, this idea was rejected in favour of an open-ended questionnaire requiring respondents to state in their own words their perceptions of what the grade embodied. This was done to capture the authentic thoughts of the assessors, which might have been influenced had a list prepared by the researchers been provided. Having made that decision, the researchers had to be prepared for the variability that was bound to occur in the responses due to the apparent 'creativity' of each respondent. In other words, due to the qualitative nature of the data, variations were bound to occur. The challenge for us was to determine whether these variations were only at the surface level, i.e. differences in expression of the same ideas, or whether the differences were truly reflections of variations in assessors' beliefs and practices. If it was the latter, then the notion of a generic descriptor would be hard to arrive at; in other words, the bottom-up approach for deriving a GNG description would not be feasible.

Methodology

In line with our objectives and the particular approach that we had decided upon, our study adopted an "emergent design" (Denscombe, 1998:217), where conceptualization occurs simultaneously with data collection and preliminary analysis. A number of instruments such as questionnaires, interviews, analyses of students' results and inspections of documents were employed to gather data. We started off with an exploratory study of the different courses offered for the undergraduate program via inspection of documents (such as the faculty handbook), which then led to a questionnaire administered to the total population to find out which courses were being offered by which instructor. The information gathered from the faculty documents and the findings from the first questionnaire resulted in the design of the second questionnaire. The purpose of the second questionnaire was to gather information about the courses taught, the evaluation process used by the assessors of those courses, the criteria used by assessors to arrive at grades and assessors' perceptions about narrative grades. Analysis showed that more data was required on assessors' perceptions about narrative grade descriptors, such as what would characterize an 'A' student, resulting in the need for another instrument: the third questionnaire. This questionnaire adopted open-ended items in order to elicit respondents' authentic perceptions of what a grade meant to them. For purposes of triangulation, interviews were conducted with teaching staff regarding assessment procedures.

In the context of this study, the data collected from the first two questionnaires represented factual data about the prevailing situation in the undergraduate program. The data from the third questionnaire comprised assessor perceptions about an 'A' student across all courses in the program. The responses were subjective in nature, and the data needed to be scrutinized carefully to determine the extent of commonality and/or variability in the assessors' perceptions of the 'A' grade. Categorization of the data was done according to the three different course types and the language of response (i.e. English and Malay). Frequency lists were generated using Wordsmith Tools version 3.0 (1999). Concordance patterns were drawn for frequently occurring words which indicated concepts relating to 'A' students. The 10 most frequently occurring words that indicated overlapping concepts were placed in clusters, and word clusters that were conceptually related were placed in concept categories. A total of 12 word clusters were placed in 7 concept categories according to each course type in the program. This approach to data organization enabled the identification of criterial features, in terms of ability, performance and personal attributes, that would be used as input for drawing up a GNGD.
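The authors ran this pipeline in the commercial WordSmith Tools package. As a rough illustration of the same steps in plain Python, here is a minimal sketch; the course types, responses and stopword list are invented, and the final grouping into word clusters and concept categories was a manual, judgment-based step that code can only hand off to.

```python
# Rough illustration only: the authors used WordSmith Tools 3.0; this sketch
# approximates the same steps (frequency lists, then concordance lines for
# frequent words). Course types, responses and stopwords are invented.
from collections import Counter

responses = {  # open-ended descriptions of an 'A' student, by course type
    "language skills": [
        "an A student communicates fluently and applies knowledge creatively",
        "excellent command of the language and able to apply concepts",
    ],
    "linguistics content": [
        "shows thorough knowledge and critical analysis of the theory",
        "able to apply theoretical knowledge with original insight",
    ],
}

STOPWORDS = {"a", "an", "and", "the", "of", "to", "with", "able"}

def frequency_list(texts):
    """Word frequency list over all responses (WordList equivalent)."""
    words = [w for text in texts for w in text.lower().split()
             if w not in STOPWORDS]
    return Counter(words)

def concordance(texts, keyword, span=3):
    """Key-word-in-context lines: `keyword` with `span` words either side."""
    lines = []
    for text in texts:
        tokens = text.lower().split()
        for i, token in enumerate(tokens):
            if token == keyword:
                lines.append(" ".join(tokens[max(0, i - span):i + span + 1]))
    return lines

for course_type, texts in responses.items():
    print(course_type, frequency_list(texts).most_common(10))
    print(concordance(texts, "knowledge"))

# Placing the frequent words into the 12 word clusters and 7 concept
# categories reported in the paper was a manual, judgment-based step.
```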

Decision makers

The final issue that needed to be addressed was who should make the decision regarding how to institute a GNG description. As observed in the case of The Quality Assurance Agency for Higher Education (2001) mentioned earlier, a number of people from different backgrounds had been involved in benchmarking: subject specialists and individuals, as well as representatives from well-established universities in the UK. This composition of people reflects the extent to which benchmarking and quality assurance are given prominence in the UK. In our case, the management had taken the first step by appointing a team of academics to research the feasibility of introducing GNGD for the undergraduate program. Although we realize that ideally all stakeholders, such as students, parents, fund providers and employers, should also be involved in this process, time limitations and other practical constraints did not allow us this privilege. However, we were able to tap into the perceptions of the course assessors (see section 3.3), as we believed they should have a say in how GNGD should be developed, since they are the course designers as well as the instructors.

Summary of Findings

The careful addressing of the different issues involved in the conceptualization of the GNGD helped to ensure that the findings of the study were valid. The findings with reference to the first research question showed commonalities in a number of the current assessment practices, such as the university grading scheme and a common structure of assessment at the faculty level involving uniform apportionment of marks and systematic vetting and moderation procedures, on which the GNGD could be anchored.


However, there were also other factors, like the varied structure of certain courses in the undergraduate program, which necessitate further standardization or restructuring before a generic narrative description can be drawn up and implemented.

Where the second research question was concerned, the findings reveal a recurrence of certain salient characteristics in the assessors' perceptions of what constitutes a grade. It was found that a grade:

• is a measure of students' ability to perform
• is a measure of observable performance
• is an indication of possession of skills and knowledge
• accounts for students' attributes such as intellectual skills, motivation and leadership qualities
• allows for caveats (even 'A' students are given leeway for minor errors)
• must be contextualized (within the context of a particular course or program).

These salient characteristics suggest three basic criterial features that could be used to draw up a GNG description for the undergraduate program. Any GNG description for the Faculty should include in its description a measure of students' abilities to perform, the observable performance of these abilities, and the personal attributes of the student.
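Although the study stops at identifying these criterial features, one way to visualize how they would combine in an eventual GNGD is as a simple per-grade record. This is a hypothetical sketch under that reading; the field names and contents below are invented, not the Faculty's actual descriptors.

```python
# Hypothetical sketch of how the three criterial features identified above
# might combine into a per-grade GNGD record; all field contents are invented
# examples, not the Faculty's actual descriptors.
from dataclasses import dataclass

@dataclass
class GradeDescriptor:
    grade: str        # the letter grade, e.g. "A"
    ability: str      # measure of the student's ability to perform
    performance: str  # observable performance of those abilities
    attributes: str   # personal attributes (motivation, leadership, ...)

    def narrative(self):
        """Assemble the narrative shown to students and other stakeholders."""
        return (f"Grade {self.grade}: {self.ability}; {self.performance}; "
                f"{self.attributes}.")

grade_a = GradeDescriptor(
    grade="A",
    ability="comprehensive command of the course's knowledge and skills",
    performance="consistently excellent work across all assessment tasks",
    attributes="strong motivation and evident leadership qualities",
)
print(grade_a.narrative())
```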

Conclusion

The process of conceptualizing a generic narrative grade description for a tertiary program revealed its multi-faceted nature. Not only are standardized assessment practices vital for the development of a GNGD; teacher perspectives on grades are just as important. The next step would be the realization of such a GNG description based on the three criterial features which emerged from this study. In the course of conceptualizing a GNG description, institutional self-evaluation was facilitated, fulfilling one of the conditions for ensuring internal quality assurance. In addition, awareness was created among academia of the crucial role of a generic narrative grade description in tertiary institutions.

References

Brown, H.D. (1994) Teaching by principles. Englewood Cliffs: Prentice Hall Regents.

Denscombe, M. (1998) The good research guide: For small-scale social research projects. Buckingham: Open University Press.

Department of Linguistics, University of Sydney (n.d.) Grading, marking and assessment. Retrieved 27/7/2007, from http://www.arts.usyd.edu.au/departs/linguistics/undergrad/asessment.shtml

  • Volume 7 Number 2 2009 JIRSEA 16

Greatorex, J. (2001) Making the grade – how question choice and type affect the development of grade descriptors. Educational Studies, 27(4), 451-464.

Hughes, A. (2003) Testing for language teachers. Cambridge: Cambridge University Press.

    Ministry of Education (2004) Quality assurance in public universities of Malaysia: Code of practice. 2nd edition.

Pennsylvania Department of Education (n.d.) Grade 6 Reading Performance Level Descriptors. Retrieved 09/07/07, from http://www.pde.state.pa.us/a_and_t/lib/a_and_t/Grade_6_Reading_PLDs.pdf

Proctor, P. (ed.) (1995) Cambridge International Dictionary of English. Cambridge: Cambridge University Press.

Rodrigues, S. (2006) "A" in exam, "F" in interview. The Star, September 5, N47.

Scarino, A., Jenkins, M., Allen, J. & Taguchi, K. (1997) Development of generic student proficiency outcomes and language specific descriptors for Japanese. Retrieved 20/3/2007, from http://www1.curriculum.edu.au/nalsas/pdf/japanese.pdf

School of Languages, Literatures and Film, University College Dublin (n.d.) Marking scheme – essay work. Retrieved 09/07/07, from http://www.ucd.ie/hispanic/marking_descriptors.htm

Tan, H.K. & Prosser, M. (2004) Qualitatively different ways of differentiating student achievement: A phenomenographic study of academics' conceptions of grade descriptors. Assessment & Evaluation in Higher Education, 29(3), 267-282.

Taskstream (n.d.) Competency Assessment and Reporting. Retrieved 27/7/2007, from http://www.taskstream.com/pub/competency.asp

The Quality Assurance Agency for Higher Education (2001) Subject benchmark statements: Academic standards – Linguistics. Retrieved 24/11/05, from http://www.qaa.ac.uk/academicinfrastructure/benchmark/default.asp

UCD Registrar's Office, University College Dublin (2006) Sample Grade Descriptors. Retrieved 09/07/07, from http://www.ucd.ie/regist/modularisation_and_semesterisation/images/gradedescriptors.pdf



University of California, Santa Cruz (2006) Advisory guidelines on writing undergraduate performance (narrative) evaluations. Retrieved 29/11/2006, from http://evals.ucsc.edu/

    APPENDIX

Sample Grade Descriptors

Grade descriptors allow module coordinators to set out in advance how a given level of performance will be reflected in a grade. They act as guidelines for student and coordinator. Here are some examples:

Grade A (Excellent)

Criteria more relevant to levels 0, 1 and 2 (knowledge, understanding, application): a comprehensive, highly-structured, focused and concise response to the assessment task, consistently demonstrating:
• an extensive and detailed knowledge of the subject matter
• a highly-developed ability to apply this knowledge to the task set
• evidence of extensive background reading
• clear, fluent, stimulating and original expression
• excellent presentation (spelling, grammar, graphical) with minimal or no presentation errors

Additional criteria more relevant to levels 3, 4 and 5 (analysis, synthesis, evaluation): a deep and systematic engagement with the assessment task, with consistently impressive demonstration of a comprehensive mastery of the subject matter, reflecting:
• a deep and broad knowledge and critical insight as well as extensive reading
• a critical and comprehensive appreciation of the relevant literature or theoretical, technical or professional framework
• an exceptional ability to organise, analyse and present arguments fluently and lucidly with a high level of critical analysis, amply supported by evidence, citation or quotation
• a highly-developed capacity for original, creative and logical thinking

Grade B (Very Good)

Criteria more relevant to levels 0, 1 and 2 (knowledge, understanding, application): a thorough and well-organised response to the assessment task, demonstrating:
• a broad knowledge of the subject matter
• considerable strength in applying that knowledge to the task set
• evidence of substantial background reading
• clear and fluent expression
• quality presentation with few presentation errors

Additional criteria more relevant to levels 3, 4 and 5 (analysis, synthesis, evaluation): a substantial engagement with the assessment task, demonstrating:
• a thorough familiarity with the relevant literature or theoretical, technical or professional framework
• a well-developed capacity to analyse issues, organise material, and present arguments clearly and cogently, well supported by evidence, citation or quotation
• some original insights and capacity for creative and logical thinking


From: UCD Registrar's Office, University College Dublin (2006) Sample Grade Descriptors. Retrieved 09/07/07, from http://www.ucd.ie/regist/modularisation_and_semesterisation/images/gradedescriptors.pdf



    Strategic Balancing of the IQA = EQA Equation

Teay Shawyun
Associate Professor in Strategic Management
Consultant to the Quality Deanship, King Saud University, Saudi Arabia

    E-mail: [email protected]

    Abstract

Accreditation has become the buzzword of the 21st century. This has created a dilemma for HEIs (Higher Education Institutions) as to "what" and "how" to respond to the imperatives and implications of accreditation in order to address the requirements of EQA (External Quality Assurance) through IQA (Internal Quality Assurance). To develop the IQA = EQA equation, the researcher proposes a two-tier approach: the "What" and the "How" of the challenge. In this equation and challenge, the HEI represents the IQA, and Accreditation the EQA. The "What" and "How" framework explores the requirements of the "What" aspects of the IQA and EQA equation. The "How" aims at proposing an alternative methodology the HEI can use to develop its IQA to "balance" the EQA. The "What" aspects tackle the Standards, Criteria or KPI (Key Performance Indicators), and the audit and assessment methodology required by the EQA part of the equation. In developing the IQA of an institution, the basic statutory national accreditation standards must be met. This paper traces the development of the quality systems at King Saud University of Saudi Arabia and Assumption University of Thailand to show how they met and exceeded the respective countries' national accreditation standards. Based on the 4 "A"s of quality, the solid foundation that IQA is built upon is that "Audit and Assessment lead to Assurance and later Accreditation".

Key words: IQA, EQA, accreditation standards, criteria and KPI requirements

    This contribution has been peer-reviewed. In line with the publication practices of open access e-journals, articles are free to use though the Editor requests proper attribution to JIRSEA. © Copyright is retained by authors.



Introduction

In the fast-moving, dynamic changes (Brand, 1993, p.7) and pace of development in the education arena, all HEIs (Higher Education Institutions; note that in this paper all academic institutions and universities are classified as HEIs) are trying to outdo each other to get a bigger piece of the local, international and global education market (Currie and Newsom, 1998; Scott, 1998). With many players offering more programs to different target markets based on purchasing power, the education industry has become highly commercialized, affecting the quality of education through declining resources and more competitive market players (Brand, 1993; Zemsky, Massy, & Oedel, 1993) and a vicious cycle of paper chasing.

A key tool in the vast array of armaments used to compete and to be competitive is the quality mechanism that is subsumed to "assure the quality of the educational products and services offerings". This quality assurance also needs certification in the form of accreditation by an external agency, which can affect the institution's target marketing, its meeting of market expectations and its academic ranking. On finding a balance between academic excellence and market expectations, Trout (1997a) said: "In the marketplace, consumerism implies that the desires of the customer reign supreme . . . and that the customer should be easily satisfied. . . . When this . . . model is applied to higher education, however, it not only distorts the teacher/student mentoring relationship but renders meaningless such traditional notions as hard work, responsibility, and standards of excellence". In this dilemma, three of the main challenges identified as basic issues that should be addressed in pursuing excellence in higher education are (Ruben, 2003):

    • Increasing our understanding of the needs of workplaces • Becoming more effective learning organizations • Integrating assessment, planning, and improvement

In pursuing educational excellence, accreditation has become the buzzword of the 21st century. This has brought a big dilemma upon HEIs not only in the Kingdom of Saudi Arabia but also in the other Middle East countries and the developing nations that have just embarked on the quality education path and journey. The main question is to find answers to "what" and "how" to respond to the imperatives and implications of accreditation. Practically every HEI tries to find the panacea, or the holy grail, of quality assurance. It is posited that the search should begin with an understanding of the accreditation aspects that form the EQA (External Quality Assurance) part of the EQA = IQA equation. In this equation and challenge, the HEI represents the IQA (Internal Quality Assurance). The key question is "what" and "how" the IQA should address the requirements of the EQA.

To address the IQA = EQA equation, this paper proposes a two-tier approach: the "What" and the "How" of the challenge. Part 1 of the paper, "Balancing the IQA = EQA Equation: Determining the EQA requirements", deals with the "What" and explores the requirements of the "What" aspects of the IQA and EQA equation. The "How" proposes the methodology the HEI uses to develop its IQA to "balance" the EQA. The "What" aspects tackle the Standards, Criteria or KPI, and the audit and assessment methodology required by the EQA. It is proposed that the "What" of existing accreditation frameworks across different countries and continents does not differ in its fundamentals and principles. On the contrary, there are more similarities in the fundamentals and principles through which the Standards, Criteria or KPI are created, as they are based on the same platform. It is proposed that the generic strands leading to quality education "fit for purpose" revolve around the key areas of teaching, learning and research; a student-centric and learning-outcomes focus; a stakeholder, community and social-service-centric focus; learning facilities and resources support; strategic and tactical mission, goals and objectives centricity; human and organizational resources development; and information and metrics centricity. This represents the EQA "What" part of the IQA = EQA equation. This EQA platform is aimed at certifying the "fitness for purpose" of quality in the IQA based on a set of similar standards, criteria or KPI.

Part 2 of the paper, "Balancing the IQA and EQA Equation: Developing the IQA requirements", addresses the "How" of the IQA of the HEI. In developing the IQA of one's institution, one must meet the basic statutory national accreditation standards. It is proposed that the HEI go beyond the National Accreditation Agency's requirements. Based on the 4 "A"s of quality, the solid foundation IQA is built upon is that "Audit and Assessment lead to Assurance and later Accreditation" (certification of "fitness for purpose"). In a nutshell, the paper's aims are as follows:

• To determine the constituents of the existing accreditation frameworks, collectively called the EQA approaches, standards and criteria. This aims at providing an overall synopsis of basic quality requirements in relation to EQA standards and criteria and their assessment mechanisms, capped with a summary of the basic similarities of the QA principles from diverse perspectives in terms of their principles, standards and criteria.

• To concentrate on a hands-on, pragmatic approach that HEIs can take in developing their own IQA systems, dealing in depth with the details of the IQA frameworks or approaches that the HEI can use in terms of its Standards, Criteria and Items, which are the Process-based Values criteria, and KPI (Key Performance Indicators), which are the Results-based Values criteria. It also discusses the organization of the IQA in the HEI and the development of its self-study and assessment through Scaled Scoring Performance Guidelines for its Process-based Values and Results-based Values performance assessment.

Part 1. Balancing the IQA = EQA Equation: Determining the EQA Requirements

"QUALITY" is an ever-elusive and evolving, omnipotent and ubiquitous business mechanism that has been used and manipulated by organizations to convince consumers that their product and service offerings have achieved a level of acceptance based on certain standards and criteria. Even the education industry has not escaped this quality syndrome, and all HEIs are bent on having their educational products and services achieve a certain level of acceptable standards and criteria, leading finally to their being certified "fit for purpose".

The key question is: "what is quality in education?" Experts and exponents have searched and researched high and low for a definitive definition of what constitutes "quality education". Vroeijenstijn (1991) said "it is a waste of time to define quality", as it is a relative concept; but does this mean that we do not act on quality? Rather than trying to define "quality education", one can start with the HEI's purpose or mission of underpinning national and social development through skilled manpower, via two key activities and the actions on them:

• Producing competent and qualified graduates to meet organizational needs in all sectors
• Pushing forward the frontier of knowledge via research

This would mean that one needs to understand the context of the HEI's mission, which represents its "reason for existence" or very purpose. What the HEI does or sells must be "fit for purpose" (Teay, 2007). This inevitably means that quality in education is implicitly and explicitly about:

• The outputs and outcomes of education being of use and fit for some purpose

• The stakeholders of "the provider" and "the user" of education
• The move towards improvements or innovations in education
• The actions and activities of doing something in education effectively and efficiently.

Since the late 1980s and into the 1990s, key literature on quality in higher education (ENQA – European Association for Quality Assurance in Higher Education, 2005; Greene, 1994; Teay, 2005, 2006 and 2007) has reiterated that quality in higher education had been, is and will always be about, and actioned through:

• The traditional quality definition of benchmarking to the best. The benchmark might not share the same context or content; as such, one should benchmark to the best in an appropriate way, based on the internal and external context.

• Conformance to specifications or standards. This is static in nature, as it assumes the criteria used to set the standard are clear, easily measurable and quantifiable, which is not the case in higher education. Under such circumstances, conformance and compliance to specifications or standards normally rely on proxy measures and assessment methodologies to gauge, qualitatively and quantitatively, the subjective quality of educational performance.


• Fit for purpose – an emphasis on specifications based on the "mission or reason for existence" of the HEI. This is developmental, as it recognizes that the purpose might change over time, thus requiring a re-evaluation of the appropriateness of the specifications.

• Quality as effectiveness in achieving institutional mission and goals.
• Quality as meeting customers' stated or implied needs.

To meet the basic principles of the HEI and its quality requirements as noted above, key education standards and criteria worldwide that carry a valid accreditation process must effectively address the quality of the institution or program in the following areas:

• Success with respect to student achievement in relation to the institution's mission, including, as appropriate, consideration of course completion, state licensing examination, and job placement rates.

• Curricula.
• Faculty.
• Facilities, equipment, and supplies.
• Fiscal and administrative capacity as appropriate to the specified scale of operations.
• Student support services.
• Recruiting and admissions practices, academic calendars, catalogs, publications, grading, and advertising.
• Measures of program length and the objectives of the degrees or credentials offered.
• Record of student complaints received by, or available to, the agency.
• Record of compliance with the institution's program responsibilities, the results of financial or compliance audits, program reviews, and any other information pertaining to quality assurance.

Fundamentally, there are five standards of quality assurance that any education institution must address (Schray, 2006), namely that it:

1. Advances academic quality;
2. Demonstrates accountability;
3. Encourages purposeful change and needed improvement;
4. Employs appropriate and fair procedures in decision-making; and
5. Continually reassesses accreditation practices.


Table 1: The 4 "A"s of Quality and Accreditation

    EQA = IQA

    QUALITY = AUDIT + ASSESSMENT + ASSURANCE

ACCREDITATION = Certification of "Fitness for Purpose"

    AUDIT = Ensuring that the system and documentation are developed and in place and in conformance and compliance with Standards and Criteria

    ASSESSMENT = Ensuring that the system is performing or determining the level of performance based on the Standards and Criteria

    ASSURANCE = Ensuring that performance is developmental bringing about improvements and innovations

ACCREDITATION "twinned concept" QUALITY

In this paper, it is posited that to offer quality education products and services, one must balance the IQA and EQA equation. As noted in Table 1, EQA is the highly touted accreditation, while IQA is a composite of audit, assessment and assurance that leads to accreditation, in what one can term the 4 "A"s of Quality. Quality, as represented by the IQA, and accreditation, as represented by the EQA, are as inseparable as Siamese twins, leading to the "twinned concept" of quality and accreditation. One issue is that many people believe the audit, assessment and assurance of quality in education are subjective (Callan, Doyle, and Finney, 2001). This leads to two questions:

• How can we assess the quality of education offered by a college or university? and

• How can we know reliably whether or when learning is taking place?

Bennet (2001 and 2008) has researched some of the valid assessment mechanisms, which can be summarized as:

1. Value Adding, which calls for determining what has improved in students' capabilities or knowledge as a consequence of their education. Measuring value addition requires assessments of students' development or attainments as they begin college, and assessments of those same students after they have had the full benefit of their education at the college. Basically, value added is the difference between their attainments when they have completed their education and what they had already attained by the time they began: the difference a college makes in their education. The constituents of value lie in the dimensions of value addition, where Customer Value in Education (CV) = {Product Quality (PQ), Service Quality (SQ), Image (I), Relationships (R)} / Cost (C) (Gale, 1994); a minimal formalization is sketched at the end of this Part.

2. Outcomes, which evaluate students at graduation (or shortly after) on the skills and capabilities they have acquired or the recognition they gain in further competition. In addition, evaluation of the Inputs (I), Processes (P) and Outputs (O) leading to the OUTCOMES is imperative.

3. Expert Assessment, which provides the impartial and independent opinions of experts and their views on the performance assessment.

4. Self-Study, which asks stakeholders and is based on the stakeholders' assessment.

5. Ask the students. The intent is to measure whether students are educated through processes that research has shown do in fact add value to students' attainments.

The end sum game of assessment in quality education is performance, performance and performance. Ultimately, in the assessment that affects quality assurance in higher education, there are three major changes in the current environment that have affected the quality drive:

1) Growing demand for increased accountability, as over-commercialization has downplayed the importance of accountability to students, stakeholders and society;

2) Reduced funding, rising costs and pressures to find more cost-effective solutions in every aspect of higher education, which have short-circuited the quality and value of a holistic approach to quality education products and services;

3) The changing structure and delivery of higher education, including new types of educational institutions and the increasing use of distance learning, which allows institutions to operate on a national and global scale. Easier and wider access to education without a strong, quality infrastructure to deliver it normally means the educational product is offered first, with the QA system to assess and assure the presumed quality developed and set up only afterwards: "too little and too late" in most instances.

The above fundamental QA principles, assessment methodologies and major changes identify three major sets of questions and issues that all HEIs should and must address through their IQA. They are:

• Assuring Performance – How can the IQA system be held more accountable for assuring performance, including student-learning outcomes, in the institutions and programs?

• Open Standards and Processes – How can IQA standards and processes be changed to be more open to and supportive of innovation and diversity in higher education?


• Consistency and Transparency – How can IQA standards and processes be made more consistent to support greater transparency and greater opportunities for credit transfer between accredited institutions?
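Before turning to the case studies, the Value Adding mechanism above can be given a minimal formalization; the additive reading of Gale's (1994) bracketed dimensions and the entry/completion notation are illustrative assumptions rather than his prescription:

$$ \text{Value Added} \;=\; A_{\text{completion}} - A_{\text{entry}}, \qquad \mathrm{CV} \;=\; \frac{\mathrm{PQ} + \mathrm{SQ} + \mathrm{I} + \mathrm{R}}{\mathrm{C}} $$

where $A$ denotes a student's assessed attainment at the two points in time, and PQ, SQ, I, R and C are scores for product quality, service quality, image, relationships and cost respectively.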

Part 2 Balancing the IQA and EQA Equation: Developing the IQA requirements

Case Studies of AU and KSU in addressing IQA

Cases from Assumption University (AU) of Thailand and King Saud University (KSU) of the Kingdom of Saudi Arabia (KSA) are used as case studies of the "How to" approach to addressing the IQA part of the IQA = EQA equation. The two cases differ in that Thailand started with the definition of the standards and KPI for IQA first and will move on to accreditation in the year 2012, while KSA started with the EQA accreditation's definition of the standards and criteria. In either case, the HEIs are left with the question of "how to" set up their IQAs to address their own IQA as well as the EQA requirements. The following case studies illustrate the ordeal alluded to above. They represent two extremes that posed challenges to the researcher, who had to develop and set up the QA systems.

Case Study 1: Assumption University (AU)

1.1 The AU and Thailand QA Scenario and Dilemma

The nine years of QA in Thailand between 1999 and 2008 were tumultuous for all HEIs in the country. This is the period when they embarked on the never-ending journey towards the holy grail of "education excellence" as the hallmark and beacon of achieving better quality education for students. The QA movement started with the identification of the CHE's (Commission on Higher Education) initial 9 sets of KPI in 2000 and the introduction of ONESQA's (Office of National Education Standards and Quality Assessment) 7 standards in 2006 (ONEC 1999 and ONESQA 2006). May 2007 was the year in which the CHE developed its 44 sets of sub-KPIs; 2007 was also the year the Baldrige National Quality Program issued its 2007 Education Criteria for Performance Excellence, and the year the CHE produced its 15-Year Plan (2008 – 2022). Regrettably, the plan was not aligned with ONESQA's, which created further strife for the HEIs in setting up their own IQAs. The emphasis was on measurements rather than a holistic approach towards well-planned management of the whole university system. These changes highlighted the discrepancies between planning and assessment based on data and evidence.


As a result, this instigated the evolution of an internal system that advocates the triangulation of planning-information-quality managed holistically rather than independently (Teay, 2008). The emphasis on "management through measurement" highlighted the "chicken-and-egg" issue of what comes first. Sadly, the practice has been measurement first, which created a mismatch between planning and measurements. In reality, management should precede measurement.

1.2 The AU Approach in addressing the IQA part of the EQA requirements

As AU aims for the Thailand Quality Class (TQC) and Thailand Quality Award (TQA) in the coming years, AU's existing QA system, in use for the 5 years between 2003 and 2007, was completely overhauled and revamped. Based initially on an adapted version of the MBNQA 7 Criteria and integrated with the CHE's 9 guiding KPIs, the system is now tuned towards a higher level of challenge and a higher level of performance. As the 3 sets of performance criteria from the 3 different systems (CHE, ONESQA and MBNQA) were all different, it was a Herculean task to integrate them into AU's own unique system without losing the basic criteria, KPI and essence of all 3 systems. In order not to lose that essence, the basic instruments were adapted with minimal changes from all 3 systems to reflect and represent AU's internal requirements. The result, the AUQS 2000 QMIPS (QMS) (Teay, 2007), was launched as the standard bearer and beacon supporting AU performance measurement and management. This QMS retains a non-prescriptive approach to the ultimate definition of the systems and mechanisms: the tools and techniques used for school performance are the sole jurisdiction of the school, with the national and AU QMS frameworks forming the minimum required standard. This QMS became the heart and soul of AU's striving and never-ending journey towards continuous quality improvement. The MBNQA descriptions for each of the standards were used for the Process-based Values criteria (Table 2.1), while the CHE and ONESQA KPIs, plus a new assessment methodology developed by the researcher, were used for the Results-based Values criteria (Table 2.2). This resulted in a larger set of 132 KPI to be assessed. The assessment follows a modified version of the MBNQA ADLI (Approach, Deployment, Learning, Integration) and LeTCI (Level, Trend, Comparison, Integration) guidelines, as discussed in the second case study below. In conclusion, the IQA was derived and developed from the internationally accepted MBNQA model, as only the KPI were identified by the CHE and ONESQA. This approach was nationally recognized with an IQA Award from the Commission on Higher Education in 2009.
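To illustrate the kind of cross-walk such an integration implies, here is a minimal sketch; the class and field names are illustrative assumptions, not the AU instruments, and only the two KPI that appear later in Table 2.2 are registered:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class KPI:
    """One internal KPI cross-referenced to its source frameworks."""
    code: str                          # internal code, e.g. "2.6"
    description: str
    che_ref: Optional[str] = None      # CHE sub-KPI, e.g. "C 2.4"
    onesqa_ref: Optional[str] = None   # ONESQA KPI, e.g. "O 6.2"
    mbnqa_item: Optional[str] = None   # MBNQA criteria item, if mapped

# Two entries grounded in Table 2.2 below; the remainder of the 132 KPI
# would be registered the same way.
registry = [
    KPI("2.6", "FTE students per full-time lecturer",
        che_ref="C 2.4", onesqa_ref="O 6.2"),
    KPI("2.7", "Degree mix of full-time lecturers", che_ref="C 2.5"),
]

# Coverage check: which internal KPI satisfy a given ONESQA requirement?
print([k.code for k in registry if k.onesqa_ref == "O 6.2"])  # ['2.6']
```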


Table 2.1: Standards and Scoring for Process-Based Value Criteria

Columns: KPI / Indicator | Scoring criterion | Point | Score | Percent (0-100%) | Dev | Effective | Score Value

1 Vision, Mission & Strategic Plans | 80.00 | 62.53 | 78.17 | 1 | 1 | 3

1.1 Vision & Mission | 20.00 | 8.93 | 44.67
1.1.1 Faculty vision & mission are in line with AU vision and mission | Faculty vision and mission are used as guidelines for faculty development planning | 6.67 | 1.20 | 30.00 | 0 | 0 | 1
1.1.2 Faculty members participated in confirming the faculty vision and mission | A review committee is set up, with an annual review system in place | 6.67 | 3.87 | 30.00 | 1 | 1 | 1
1.1.3 Faculty members, staff and students are aware of and understand the faculty vision and mission | Training and communication of the vision and mission are carried out every semester to foster awareness | 6.67 | 3.87 | 30.00 | 1 | 1 | 1

1.2 Strategic Plans | 40.00 | 40.00 | 100.00
1.2.1 There is a planning system for one-year and five-year plans | 8.00 | 8.00 | 100.00
– The plans are in line with the faculty's vision and mission, and all plan implementations achieve the set objectives | 4.00 | 4.00 | 100.00 | 1 | 1 | 3
– Strategic objectives are balanced and meet the requirements of learners and of other stakeholders | 4.00 | 4.00 | 100.00 | 1 | 1 | 3
1.2.2 There is a systematic process for strategic plan analysis and evaluation | 8.00 | 8.00 | 100.00
– The strategy development process is clearly defined and documented | 2.67 | 2.67 | 100.00 | 1 | 1 | 3
– The process of analysis and strategy making is reviewed during and after each plan implementation | 2.67 | 2.67 | 100.00 | 1 | 1 | 3
– The analysis can indicate the achievement level of each objective of the implemented plan | 2.67 | 2.67 | 100.00 | 1 | 1 | 3


Table 2.2: Integrated KPI of CHE and ONESQA and performance assessment of the Results-based Value Criterion

Columns: KPI / Indicator | Scoring criterion | Point | Score | Percent (0-100%) | Dev | Effective | Score Value

2.6 Number of full-time equivalent students in proportion to the total number of full-time lecturers (percentage deviation from the standard) (C 2.4 and O 6.2) | 7.50 | 7.50 | 100.00 | 1 | 1 | 3
Scoring bands (deviation from the standard): ≥ +10% or ≤ -10% of the standard scores 0.00; +6% to +9.99% or -6% to -9.99% of the standard scores the intermediate band; -5.99% to +5.99% of the standard scores the highest band.

2.7 The proportion of full-time lecturers holding bachelor, master and doctoral degrees or equivalent to the total number of full-time lecturers (C 2.5) | 7.00 | 7.00 | 100.00 | 1 | 1 | 3
Scoring bands:
1 = doctoral degrees among 1-19%, or doctoral degrees among 20-29% but bachelor degrees more than 5%;
2 = doctoral degrees among 20-29% and bachelor degrees equal to or less than 5%, or doctoral degrees at 30% or more and bachelor degrees more than 5%;
3 = doctoral degrees at 30% or more and bachelor degrees equal to or less than 5%.
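A minimal sketch of how the banded scoring of KPI 2.7 above could be computed; the function and variable names are illustrative assumptions, not part of the AU instruments:

```python
def score_degree_mix(doctoral_pct: float, bachelor_pct: float) -> int:
    """Band a faculty's lecturer degree mix per KPI 2.7 in Table 2.2."""
    if doctoral_pct >= 30 and bachelor_pct <= 5:
        return 3
    if (20 <= doctoral_pct < 30 and bachelor_pct <= 5) or \
       (doctoral_pct >= 30 and bachelor_pct > 5):
        return 2
    # Remaining cases: doctoral 1-19%, or doctoral 20-29% with bachelor > 5%.
    return 1

print(score_degree_mix(32.0, 4.0))   # 3
print(score_degree_mix(25.0, 3.0))   # 2
print(score_degree_mix(15.0, 10.0))  # 1
```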


Case Study 2: King Saud University (KSU)

2.1 The KSU dilemma

The QA dilemma of the Kingdom of Saudi Arabia and King Saud University (KSU) arose when the NCAAA published its 11 accreditation standards, carrying hundreds of criteria, in 2003. The mind-boggling criteria were reaffirmed in 2008, when the NCAAA requested that HEIs go for accreditation. The chicken-and-egg dilemma surfaced when the HEI embarked on the journey towards accreditation without fully realizing that the foundation for successful accreditation is a strong and robust IQA. The situation left the HEI wondering "what and how" to address the EQA requirements.

2.2 The KSU options for an IQA

In addressing this IQA issue, KSU had three choices: 1) full adoption of the EQA requirements without any changes; 2) creation of a completely new IQA system not based on the EQA requirements but able to fulfill them; or 3) adoption of the EQA standards while setting up its own internal approach within the local and international context.

2.3 KSU's choice

Early in January 2009, the KSU QA Committee settled on option 3. The rationale for this was that:

• adopting the NCAAA standards without any changes would minimize the impact on the existing KSU – QMS IQA standards and criteria (Teay, 2009); and

• adopting an internationally accepted organizational performance assessment methodology, the MBNQA (NIST, 2007 and 2009), would allow ADLI to be used for assessing its process-based criteria and LeTCI for its results-based criteria.

To avoid reinventing the wheel, and in conformance and compliance with the EQA side of the equation, KSU applied the national accreditation agency's standards and criteria as the blueprint for its IQA standards and criteria. The rationale was that if a different set of standards and criteria were used for the IQA system, it would complicate and confuse (particularly for new users) and compromise the IQA system. Adhering to a simple yet sophisticated philosophy, KSU maintained the basic standards and criteria by combining the NCAAA institution and program standards and criteria into a generic, simplified and standardized set applicable to the institution, colleges, programs or administrative units, so that the institution's, schools' and programs' standards and criteria are aligned internally and externally (Table 3). This identifies three specific requirement levels: Standard, Criteria and Item.


Table 3: KSU Standard, Criteria and Item requirement

KSU – QMS Standards, Criteria and Items | Explanations
Standard 1: Mission and Objectives | STANDARD requirement
1.1 Appropriateness of the Mission | 1.1 CRITERIA requirement
1.1.1 The mission for the school and program should be consistent with the mission of the institution, and the institution's mission with the establishment charter of the institution. | 1.1.1 ITEM details requirement
1.1.2 The mission should establish directions for the development of the institution, schools or programs that are appropriate for an institution, school or program of its type, and be relevant to and serve the needs of students and communities in Saudi Arabia. | 1.1.2 ITEM details requirement
1.1.3 The mission should be consistent with Islamic beliefs and values and the economic and cultural requirements of the Kingdom of Saudi Arabia. | 1.1.3 ITEM details requirement
1.1.4 The mission should be explained to its stakeholders in ways that demonstrate its appropriateness. | 1.1.4 ITEM details requirement
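Table 3's three requirement levels map naturally onto a nested structure. A minimal sketch follows; the class and field names are illustrative assumptions, not part of the KSU – QMS documentation:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Item:
    code: str          # e.g. "1.1.1"
    requirement: str   # the ITEM-level detail requirement

@dataclass
class Criterion:
    code: str          # e.g. "1.1"
    name: str          # e.g. "Appropriateness of the Mission"
    items: List[Item] = field(default_factory=list)

@dataclass
class Standard:
    number: int        # e.g. 1
    name: str          # e.g. "Mission and Objectives"
    criteria: List[Criterion] = field(default_factory=list)

# Standard 1 from Table 3, abbreviated to two of its four items.
mission = Standard(1, "Mission and Objectives", [
    Criterion("1.1", "Appropriateness of the Mission", [
        Item("1.1.1", "Mission consistent with the institution's mission"),
        Item("1.1.4", "Mission explained to stakeholders"),
    ]),
])
print(len(mission.criteria[0].items))  # 2
```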

The differentiating point begins in its being sophisticated: it looks at the organization from a performance perspective based on its processes and results. Based on the Malcolm Baldrige 2009 Education Criteria (NIST, 2009), it builds a systemic and systematic, innovative yet generic approach to its audit and assessment organization and scoring criteria, determining the performance level using a standardized set of scoring criteria of A (Approach), D (Deployment), L (Learning) and I (Integration), as shown in Table 3.1 and Table 3.2. This is supported by a set of qualitative and quantitative indicators that serve as measures of performance, identifying Le (Level), T (Trend), C (Comparison) and I (Integration), as shown in Table 4.1 and Table 4.2. The rationale for using the ADLI and LeTCI scoring criteria is that a set of process-based criteria leads to an integrative and comprehensive set of outcome results, with the two measured using ADLI and LeTCI respectively. These scoring guidelines are more comprehensive and definitive than most systems, which use a "Yes or No or Relevance" or "star scoring" system.
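A minimal sketch of the percent-to-band mapping implied by the guidelines in Table 3.2 below; the band edges follow the table, while the function name is an illustrative assumption:

```python
def process_band(score_pct: int) -> str:
    """Map an ADLI process score (multiples of 5, 0-100) to its star band,
    following the band edges of Table 3.2."""
    if score_pct <= 5:
        return "No Star"
    if score_pct <= 25:
        return "1 Star"
    if score_pct <= 45:
        return "2 Stars"
    if score_pct <= 65:
        return "3 Stars"
    return "Above 3 Stars"  # higher bands fall outside the excerpt shown

print(process_band(40))  # 2 Stars
```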


Table 3.1: Performance Scoring of Process-based Standards and Criteria

Overall Scaled Performance Scoring of the Process-Based Values Standard
Columns: Institutional, School and Program Context | Weight | Score (%) | Weighted Score | Goals Set | Goals Achieved | Development | Effective | Overall Performance

Standard 1: Mission, Goals and Objectives | 55
1.1 Appropriateness of the Mission | Weight 4 | Weighted Score 2.2 | Goals Set 70% | Goals Achieved 50% | Development 0 | Effective 0 | Overall Performance 1.76
1.1.1 The mission for the school and program should be consistent with the mission of the institution, and the institution's mission with the establishment charter of the institution. | 1 | 50 | 0.5
1.1.2 The mission should establish directions for the development of the institution, schools or programs that are appropriate for an institution, school or program of its type and be relevant to and serve the needs of students and communities in Saudi Arabia. | 1 | 60 | 0.6
1.1.3 The mission should be consistent with Islamic beliefs and values and the economic and cultural requirements of the Kingdom of Saudi Arabia. | 1 | 80 | 0.8
1.1.4 The mission should be explained to its stakeholders in ways that demonstrate its appropriateness. | 1 | 30 | 0.3
Overall Assessment | Weighted Score 2.2 | Overall Performance 1.76

Notes:
1. The weighted score for each item is derived as SCORE * WEIGHT.
2. The overall weighted score (2.2) is the sum of the items' weighted scores and contributes 80% to overall performance.
3. As there is no "Development" and no "Effective" credit, 20% is lost, and the final overall performance is 1.76 (0.8 * 2.2).
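A minimal sketch reproducing the arithmetic of the notes above; the names are illustrative, while the 80% process weighting and the item scores are taken from Table 3.1:

```python
# Item (score %, weight) pairs for Criterion 1.1, taken from Table 3.1.
items = [(50, 1), (60, 1), (80, 1), (30, 1)]

# Note 1: weighted score per item = score * weight (score as a fraction).
weighted = [pct / 100 * w for pct, w in items]

# Note 2: the overall weighted score is the sum of the item weighted scores.
overall_weighted = sum(weighted)          # 0.5 + 0.6 + 0.8 + 0.3 = 2.2

# Note 3: with no "Development" or "Effective" credit, only 80% is retained.
overall_performance = 0.8 * overall_weighted
print(round(overall_weighted, 2), round(overall_performance, 2))  # 2.2 1.76
```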


Table 3.2: Performance Scoring Guidelines (ADLI) of Process-based Standards and Criteria

Process-Based Values Criterion Scoring Guidelines

0% or 5%, or No Star. The practice, though relevant, is not followed at all:
• No SYSTEMATIC APPROACH (methodical, orderly, regular and organized) to the Standards requirements is evident; information lacks specific methods, measures, deployment mechanisms, and evaluation, improvement and learning factors. (A)
• Little or no DEPLOYMENT of any SYSTEMATIC APPROACH is evident. (D)
• An improvement orientation is not evident; improvement is achieved through reacting to problems. (L)
• No organizational ALIGNMENT is evident; individual standards, areas or work units operate independently. (I)

10%, 15%, 20% or 25%, or 1 Star. The practice is followed occasionally but the quality is poor or not evaluated:
• The beginning of a SYSTEMATIC APPROACH to the BASIC REQUIREMENTS of the Standards is evident. (A)
• The APPROACH is in the early stages of DEPLOYMENT in most standards or work units, inhibiting progress in achieving the basic requirements of the Standards. (D)
• Early stages of a transition from reacting to problems to a general improvement orientation are evident. (L)
• The APPROACH is ALIGNED with other standards, areas or work units largely through joint problem solving. (I)

30%, 35%, 40% or 45%, or 2 Stars. The practice is usually followed but the quality is less than satisfactory:
• An EFFECTIVE, SYSTEMATIC APPROACH, responsive to the BASIC REQUIREMENTS of the Standards, is evident. (A)
• The APPROACH is DEPLOYED, although some standards, areas or work units are in the early stages of DEPLOYMENT. (D)
• The beginning of a SYSTEMATIC APPROACH to the evaluation and improvement of KEY PROCESSES is evident. (L)
• The APPROACH is in the early stages of ALIGNMENT with the basic Institution, College, Program or Administrative Unit needs identified in response to the Institution, College, Program or Administrative Unit Profile and other Process Standards. (I)

50%, 55%, 60% or 65%, or 3 Stars. The practice is followed most of the time. Evidence of the effectiveness of the activity is usually obtained and indicates that satisfactory standards of performance are normally achieved, although there is some room for improvement. Plans for improvement in quality are made and progress in implementation is monitored:
• An EFFECTIVE, SYSTEMATIC APPROACH, responsive to the OVERALL REQUIREMENTS of the Standards, Criteria and Items, is evident. (A)
• The APPROACH is well DEPLOYE