
International Journal of Software Engineering & Applications (IJSEA), Vol.5, No.1, January 2014

DOI : 10.5121/ijsea.2014.5105

A COMPARISON BETWEEN EVALUATION OF COMPUTER BASED TESTING AND PAPER BASED TESTING FOR SUBJECTS IN COMPUTER PROGRAMMING

Samantha Mathara Arachchi 1, Kapila Dias 2, R.S. Madanayake 3

1,2,3 Department of Communication and Media Technology, University of Colombo School of Computing, 35, Reid Avenue, Colombo 7, Sri Lanka.

Eddy Siong Choy Chong 4
4 Management and Science University, Malaysia.

Kennedy D. Gunawardana 5
5 Department of Accounting, University of Sri Jayawardhanapura, Sri Lanka

ABSTRACT

Evaluation criteria for different disciplines vary from subject to subject, such as Computer Science, Medicine, Management, Commerce, Art, Humanities and so on. Various evaluation criteria are being used to measure the retention of students' knowledge. Some of them are in-class assignments, take-home assignments, group projects, individual projects and final examinations. When conducting lectures in higher education institutes, there can be more than one method of evaluating students to measure their retention of knowledge. During the short period of the semester system, we should be able to evaluate students in several ways. There are practical issues when applying multiple evaluation criteria: time management, the number of students per subject, the number of lectures delivered per semester and the length of the syllabus are the main issues.

The focus of this paper, however, is on identifying, finding the advantages of, and proposing an evaluation criterion based on computer based testing for programming subjects in higher education institutes in Sri Lanka. In particular, when hand-written answers were evaluated, students were found to have made many programming mistakes and therefore lost considerable marks. Our method increases the efficiency and productivity of the students and also reduces the error rate considerably. It also has many benefits for the lecturer: with a low number of academic staff members, it is hard to spend more time evaluating students closely. A better evaluation criterion is very important for students in this discipline because they should stand on their own feet to survive in the industry.

KEYWORDS

Computer based testing, Paper based testing, Hypotheses testing

1. Introduction

This research was conducted using 2nd year students who follow computer subjects. The main purpose of this research is to enhance students' performance so that they score more marks with fewer programming mistakes, such as syntax and punctuation errors, and also to reduce the staff stress due to paper marking [12] and save time, since there are more than five hundred answer scripts to be marked for some subjects. When marking papers in bulk (more than five hundred), the man-power is not enough to mark all of them within the given period, and deadlines often have to be extended. Therefore marking is a terrible burden [3].

Computer based testing is a very convenient method. In addition, it increases the transparency of marking, and students get the freedom to develop the application using their own concepts and methods, which develops their creativity as well as their thinking power. Another reason is that reading code on paper is very hard, because most students' handwriting is not very neat, and since there is no way to debug their code, it contains many errors and mistakes. This research has identified advantages for the students as well as for the lecturers in conducting computer based assignments in subjects in computer programming [7],[9].

Students have another advantage in using machines, as they do not need to remember unnecessary theories or memorise points [4]. As we know, when we are in front of the computer we automatically remember what to do next, compared with when answering on paper.

Computer training is necessary for software developers as on-the-job training. In-house staff development centres also conduct paper based examinations to measure their performance and their understanding of the latest concepts, technologies or new programming languages [16], even though they are getting on-the-job training and are in touch with programming. According to the survey questionnaire, they also preferred practical based testing to paper based testing [17].

2. Methodology

2.1. Research Methodology

To test the proposed research model, we used a survey method and collected data for analysis. The approaches followed for conducting the research include Chi Square analysis and frequency analysis.

Initially, two groups were formed, group "A" and group "B", with 50 students in each group, and the same computer programs were given to them to develop. Computers were not allocated to group "A", who were asked to write on paper; group "A" is the paper based testing group. Group "B" was provided with computers to develop the same programs within the same time; group "B" is the computer based testing group. Group "A" papers were evaluated by 20 lecturers, and group "B" work was evaluated by the same 20 lecturers, before they filled in the questionnaire that was given to them. The results were evaluated to identify the advantages of the evaluation criteria. The same tasks were given to the students in group "B" to develop; the lecturers checked only the output, but there were students who could not develop the program and get the final output, and only in those cases was the computer used to examine the source code and assign marks [20].

After the test, a questionnaire was given to both groups to identify and measure the advantages of the method, and the results were evaluated to get their feedback. Another questionnaire was given to the lecturers who had marked the paper based testing and the computer based testing, to collect their views and feedback.


A special questionnaire was given to the software developers in the industry whose main job is computer programming, developing software and Enterprise Resource Planning (ERP) applications. The objective of this was to measure the willingness for, and views on, computer based testing of the people who need to follow in-house on-the-job training for knowledge improvement and increments [17].

2.1.1. The Sample

Data was collected from 50 students of the 2nd year who followed computer subjects and 20 lecturers who teach computer subjects at higher education institutes as configuration teams. Twenty-five students were female with an average age of 21 years, and the other twenty-five students were male with an average age of 21 years.

The second sample was based on 50 system developers who are really engaged in writing programs to develop software applications.

2.1.2. Evaluation of students' paper based testing

The paper based testing answer scripts were evaluated according to a properly designed marking scheme; to receive full marks, the answer should work perfectly without any mistakes. A model answer script is shown below:

Figure 1. Model Answer Script 1

The answer scripts of the students who were in group "A" (paper based testing) were evaluated and the following statistics were generated. When evaluating answer scripts, marks were allocated for the key factors of syntax, punctuation, methodology and the final output [20],[22]. The numbers of students who received marks and who did not score full marks in paper based testing are shown in Table 1.


Table 1. Number of students who received and did not score full marks from paper based testing

No | Type of Mistake    | Number of students who received marks | Number of students who did not score full marks
1  | Semicolon          | 18 | 32
2  | Colon              | 10 | 40
3  | Period             | 12 | 38
4  | Single Quotation   | 15 | 35
5  | Double Quotation   | 16 | 34
6  | Comma              | 14 | 36
7  | Simple Brackets    | 22 | 28
8  | Square Brackets    | 17 | 33
9  | Methodology        | 35 | 15
10 | Final output       | 9  | 41

Only nine students did the coding excellently and obtained the final output with full marks. The percentage scoring full marks is 18%, while 82% could not score full marks due to the above reasons. The numbers of students who repeatedly made mistakes in paper based testing are shown in Table 2.

Table 2. Number of students who repeatedly made mistakes in paper based testing

No | Number of Mistakes       | Number of Students
1  | One Mistake              | 8
2  | Two Mistakes             | 12
3  | Three Mistakes           | 7
4  | Four Mistakes            | 9
5  | More than Four Mistakes  | 5

The above table shows that more than 60% of the students made two or more mistakes, which is above the average.

Screenshots of the students' answer sheets are shown below. The mistakes they made were encircled and categorised according to one, two, three, four and more than four mistakes.

Figure 2. Student’s Answer Script with one mistake



The above figure (Figure 2) shows one student's paper based answer script, and it has one mistake: the student has missed a single quotation mark, which the examiner has encircled. When a program is written on a piece of paper, such a slip rarely catches the eye, and there is no way to debug the program. If the same code were typed into the machine, the program would not work even though the only mistake is a missing single quotation, and it would take time to find the error. Through programming experience people can catch this error, but when developing a system these tiny errors turn into big mistakes and take a long time to recover from or solve.
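As an illustration only (the snippet below is a hypothetical Python example, not the student's actual code, which follows a PHP-style syntax in the figures), a single missing quotation mark that is easy to overlook on paper stops a program immediately on a machine:

# Illustrative sketch: the commented-out line is missing its closing quotation mark.
# If it were uncommented, the interpreter would reject the whole file with a
# SyntaxError (unterminated string literal) before running anything at all.
# student_name = 'Samantha        <- missing closing single quote
student_name = 'Samantha'          # corrected line; the program now runs
print(student_name)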

Figure 3. Student’s Answer Script with one mistake

Figure 3 also shows a single mistake. When marking answer scripts it does not catch the eye, as discussed earlier, yet because of that error the program does not work. How, then, can the examiner give full marks to the student, when this piece of code also does not give the expected output?

Figure 4. Student’s Answer Script with three mistakes

The above figure (Figure 4) has three mistakes. An examiner has to read all the lines with their tiny symbols, or follow the program syntax, to check whether the piece of code would work or not. It is a big issue to read a lot of code, and many lines of it, for programming based subjects. All this code needs to be read line by line to catch mistakes, keeping possible error types such as syntax, semantic, logical and reserved keyword errors in the mind of the examiner. Without knowing one hundred percent what the student wanted to write, how can the examiner allocate full marks to the student?

Figure 5. Student’s Answer Script with four mistakes

Figure 6. Student’s Answer Script with more than four mistakes

The above figure (Figure 6) has many mistakes, which the examiner has encircled and underlined. Preparing and allocating marks for this type of paper is hard, because the examiner needs a predefined marking scheme showing how to deduct or allocate marks for the correct and incorrect points, and all the marks should be justifiable.

Figure 7. Student’s Answer Script with more than four mistakes and incomplete lines


Figure 8. Student’s Answer Script with more than four mistakes and incomplete lines

The above figures (Figure 7 and Figure 8) have many mistakes, and the student has also missed some important lines. Therefore the examiner should be able to completely understand and check the entire piece of code. Sometimes we cannot reject a student's answer by saying it is not in the marking scheme, because that answer may also be correct even though the examiner never thought about that point of view for solving the problem.

Figure 9.1. Student’s Answer Script with simple brackets and square brackets mistakes

Figure 9.2. Student’s Answer Script with simple brackets and square brackets mistakes

The students have made mistakes in selecting the correct types of brackets. For instance, some of the students have used simple brackets instead of square brackets. Either it could be carelessness, or they really do not know about the correct type of brackets, as shown in Figures 9.1 and 9.2.


Figure 10. Student’s Answer Script with spelling mistakes

Some of the students used the wrong spelling for reserved keywords or library function names in the particular language, such as "mysql_connect". Here a student has spelled it as "mysql_conect", as shown in Figure 10. This code also does not work correctly to produce the expected output. Most programming tools do not have spelling checkers, so the examiner as well as the programmer needs to be careful about spelling and self-confident, in order to reduce mistakes and save time.
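The same class of mistake can be reproduced in any language. As a purely illustrative Python sketch (the paper's example, mysql_connect, is a PHP function), a misspelled identifier looks plausible on paper but is caught the moment the code is executed:

# Illustrative sketch: "lenght" looks plausible on paper, but calling it raises
# NameError at run time, so the machine catches what the examiner's eye may miss.
def count_characters(text):
    # return lenght(text)   # misspelled built-in: NameError if this line is used
    return len(text)        # correctly spelled built-in
print(count_characters("mysql_connect"))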

Figure 11. Student’s Answer Script with mistakes in the methodology

Some answer scripts were found to use the wrong programming methodology or the wrong structure. Figure 11 shows such a mistake: instead of the "for loop", a while loop has been used.
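Whether such a substitution should cost marks is a judgement call, because the two structures can compute exactly the same result. The short sketch below (illustrative Python, not taken from an answer script) shows a for loop and the while loop a student might write instead; both print the numbers 1 to 5:

# Expected "for loop" version
for i in range(1, 6):
    print(i)

# Equivalent "while loop" version a student might substitute
i = 1
while i <= 5:
    print(i)
    i += 1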

2.1.3. Evaluation of the students’ questionnaire

After doing the above paper based testing, the students who were in group "A" filled in a questionnaire to express their own attitudes towards this type of paper based testing. The statistics are shown in Table 3.


Table 3. Number of students who answered the questionnaire for paper based testing

No | Reason                                              | Yes | No
1  | Need to memorise concepts and unnecessary theory    | 40  | 10
2  | Reduce stress                                        | 15  | 35
3  | More confident about answers                         | 17  | 23
4  | Answer is visible                                    | 0   | 50
5  | Working in front of machine is comfortable           | 50  | 0
6  | Interface provides the extra support                 | 45  | 5
7  | Can debug time to time to make it correct            | 0   | 50
8  | More freedom to come up with alternative methods     | 0   | 50
9  | Easy to score maximum marks                          | 8   | 42

2.1.4. Evaluation of students - computer based testing

Students who were in group "B" were allocated to the computer based testing [5],[23], and the results given in Table 4 were obtained. When evaluating these outputs, the examiner initially checked whether the students had obtained the correct outputs or not, and the code was evaluated only for the students who could not complete it within the given time [24]. The numbers of students who did and did not score full marks in computer based testing are shown in Table 4.

Table 4. Summary of the number of students who received and who did not score full marks from computer based testing

No | Type of Mistake    | Number of students who received marks | Number of students who did not score full marks
1  | Semicolon          | 42 | 8
2  | Colon              | 45 | 5
3  | Period             | 43 | 7
4  | Single Quotation   | 42 | 8
5  | Double Quotation   | 47 | 3
6  | Comma              | 46 | 4
7  | Simple Brackets    | 48 | 2
8  | Square Brackets    | 39 | 11
9  | Methodology        | 41 | 9
10 | Final output       | 38 | 12

According to the above statistics, thirty-eight students did the coding excellently, obtained the final output and scored full marks; the percentage getting full marks is 76%.

The number of students who repeatedly made mistakes was also lower, due to the debugging facilities available while they were developing the programmes; the system itself showed them the syntax errors. There were students who could not complete the programme during the allocated time, but their performance was still better than in paper based testing. The numbers of students who repeatedly made mistakes in computer based testing are shown in Table 5.



Table 5. Number of students who repeatedly made mistakes in computer based testing

No | Number of Mistake Types       | Number of Students
1  | One Mistake Type              | 6
2  | Two Mistake Types             | 3
3  | Three Mistake Types           | 1
4  | Four Mistake Types            | 2
5  | More than Four Mistake Types  | 0

After doing the above computer based testing [6], the students who were in group "B" filled in a questionnaire to express their own attitudes towards this type of computer based testing. The statistics are shown in Table 6.

Table 6. An analysis of feedback given by students who answered the questionnaire for computer based testing

No | Reason                                                  | Yes | No
1  | Need to memorise the concepts and unnecessary theory    | 10  | 40
2  | Reduce stress                                            | 42  | 8
3  | More confident about answers                             | 35  | 15
4  | Answer is visible                                        | 50  | 0
5  | Working in front of machine is comfortable               | 50  | 0
6  | Interface provides the extra support                     | 45  | 0
7  | Can debug time to time to make it correct                | 50  | 0
8  | More freedom to come up with alternative methods         | 50  | 0
9  | Easy to score maximum marks                              | 46  | 4

2.1.5. Feedback from the lecturers who evaluated paper based testing and computer based testing

The twenty lecturers who evaluated the paper based and computer based examinations gave their feedback according to the questionnaire. The statistics are shown in Table 7.



Table 7. An analysis of feedback given by lecturers who answered the questionnaire for computer based testing

No | Reason                                                                              | Yes | No
1  | Reduce the stress, overhead and ambiguity                                           | 20  | 0
2  | Reduce paper marking time                                                           | 15  | 5
3  | Increase the accuracy of marking                                                    | 20  | 0
4  | More effective and efficient                                                        | 20  | 0
5  | No need to read line by line to check syntax for mistakes                          | 20  | 0
6  | Reduce the unpleasant situation of reading untidy handwriting                       | 20  | 0
7  | Increase the transparency of marking                                                | 20  | 0
8  | 100% certainty about the correctness of the coding and it gives working outputs     | 20  | 0
9  | Ability to use different techniques or methods of their own; no need to stick to one method | 20 | 0
10 | Debugging method can be used to evaluate the incomplete programmes                  | 18  | 2

2.1.6. Feedback from Software Developers

The selected fifty software developers went through the training and actually faced the pre-test and post-test as a requirement of on-the-job training, either to move to a new project based on totally different languages or to obtain the next increment level [19],[21]. The statistics are given in Table 8.

Table 8. Feedback from the software developers in the Industry

Type of test                                                                                    | No of Employees | No of Employees (%)
Employees who preferred computer based testing to evaluate themselves after the training session      | 43 | 76
Employees who did not prefer computer based testing to evaluate themselves after the training session | 07 | 14
Employees who did not respond to the questionnaire                                                     | 05 | 10

Software developers and the IT industry use this practical based method especially for junior employees or those undergoing training programmes to enhance their knowledge. With practical knowledge and experience they can face this type of evaluation criterion more confidently [24].



3. Hypotheses testing

3.1. Comparison between Paper based Testing and Computer based Testing (Using a two sample proportion test)

Table 9. Result of the two sample proportion test

Type of test           | Received Marks | Lost Marks | Total
Paper based Testing    | 168            | 332        | 500
Computer based Testing | 431            | 69         | 500
Total                  | 599            | 401        | 1000

Ho: The true proportion of students who received marks from paper based testing and computer based testing is the same.

H1: The true proportion of students who received marks from computer based testing is higher than the true proportion of students who received marks from paper based testing.

Test statistic:

Since the p value ( ) is extremely small, we have enough evidence to reject Ho at any level of significance. Therefore we can conclude that the true proportion of students who received marks from computer based testing is higher than the true proportion of students who received marks from paper based testing.
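The printed test statistic and p value are not legible in this version of the paper. As a sketch of how the comparison can be computed from the counts above, assuming the standard pooled two-sample proportion z-test (the code and function name below are illustrative, not the authors' original script):

from math import sqrt
from statistics import NormalDist

def two_proportion_ztest(x1, n1, x2, n2):
    """One-sided z-test of H1: p2 > p1, using the pooled standard error under Ho."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)                        # common proportion under Ho
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p2 - p1) / se
    return z, 1 - NormalDist().cdf(z)                     # statistic and upper-tail p value

# Section 3.1: marks received per mistake category (Table 9): paper 168/500, computer 431/500
print(two_proportion_ztest(168, 500, 431, 500))
# Section 3.2: questionnaire "Yes" responses (Table 10): paper 175/440, computer 378/445
print(two_proportion_ztest(175, 440, 378, 445))

With these counts both calls return very large z values and p values that are effectively zero, which is consistent with the conclusions stated in Sections 3.1 and 3.2.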

3.2. Comparison between the computer based and paper based questionnaires among students (Using a two sample proportion test)

Table 10. Comparison between the computer based and paper based questionnaires

Type of test           | Yes | No  | Total
Paper based testing    | 175 | 265 | 440
Computer based testing | 378 | 67  | 445


Ho: The true proportion of students who prefer computer based questions is the same as the true proportion of students who prefer paper based questions.

H1: The true proportion of students who prefer computer based questions is higher than the true proportion of students who prefer paper based questions.

Test statistic:

Since the p value ( ) is extremely small, we have enough evidence to reject Ho at any level of significance.

Therefore we can conclude that the true proportion of students who prefer computer based questions is higher than the true proportion of students who prefer paper based questions.

3.3. Evaluation of the questionnaire feedback from lecturers for Computer based Testing

Let p be the true proportion of lecturers who prefer computer based questions.

Test statistic:

Since the p value (0.0000) is extremely small, we have enough evidence to reject Ho at any level of significance. Therefore we can conclude that the true proportion of lecturers who prefer computer based testing is higher than 0.5, and hence most of them prefer computer based testing.
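The printed test statistic is likewise not reproduced here. Assuming a standard one-sample proportion z-test against p0 = 0.5 (a plausible reading of this section, not stated explicitly in the recovered text), the statistic would take the form

z = \frac{\hat{p} - 0.5}{\sqrt{0.5\,(1 - 0.5)/n}}

where \hat{p} is the observed proportion of the n = 20 lecturers who prefer computer based testing, and Ho: p = 0.5 is rejected in favour of H1: p > 0.5 when z is large.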

3.4. Evaluation of the questionnaire feedback from software developers in the industry for computer based testing

Of the selected software developers, 76% preferred computer based testing, 14% did not prefer it and 10% did not respond. Therefore, people such as software developers, whose job really is writing programmes, also preferred computer based testing to a paper based pre- or post-test.


Figure 12. Feedback pie chart from software developers

4. Conclusion and Discussion

This section highlights the key findings and discusses them in relation to the objectives defined at the earlier stages of this research. The main aim of the discussion is to bring to light new findings and value additions in the area of testing for the higher education sector, especially the case for conducting computer based testing rather than paper based testing in computer related subjects, which would be beneficial to both students and lecturers.

According to the students' questionnaire, 85% said that computer based testing is much better, and it proved more successful than paper based testing due to the following advantages:

no need to memorise concepts and unnecessary theory; reduced stress; more confidence about the answers; the answer is visible; working in front of a computer is more comfortable; the interface provides extra support; the ability to debug from time to time to make the code correct; more freedom to come up with alternative methods; and it is easier to score maximum marks.

The data was also analysed according to the given questionnaire to summarise the results from the lecturers' point of view. Ten key factors were identified in favour of conducting computer based testing:

reduced stress, overhead and ambiguity; reduced marking time; increased accuracy of marking; a more efficient and effective method; no need to read line by line to check the syntax for mistakes; less of the unpleasantness of reading untidy handwriting; increased transparency of marking; hundred percent certainty about the correctness of the code and production of a working output; the ability to use different techniques or methods of their own, with no need to stick to one method; and the debugging method can be used to test the logical errors in the programs.

According to the survey conducted, the software developers from the industry also preferred this testing method for their in-house on-the-job training programmes.


REFERENCES

[1] Anderson D., Black D., Brown S., Day A., McDowell S., McDowell L., Peters L. & Smith B., 1999, p. 125.
[2] Computer-based testing: a comparison of computer-based and paper-and-pencil assessment. [Online]. Viewed 12.03.2013.
[3] http://www.thefreelibrary.com/Computer-based+testing%3a+a+comparison+of+computer-based+and...-a0241861838
[4] Anderson D., Black D., Brown S., Day A., McDowell S., McDowell L., Peters L. & Smith B., 1999, p. 170.
[5] Anderson D., Black D., Brown S., Day A., McDowell S., McDowell L., Peters L. & Smith B., 1999, p. 139.
[6] Computer Based Testing. [Online]. Viewed 12.03.2013. http://www.nremt.org/nremt/about/CBT_home.asp
[7] What are important aspects of converting an exam to computer-based testing? [Online]. Viewed 15.03.2013.
[8] http://www.proftesting.com/test_topics/cbt.php
[9] Roy Clariana and Patricia Wallace, "Paper based versus computer based assessment", British Journal of Educational Technology, Vol. 33, No. 5, 2002, pp. 593-602.
[10] Heather F., Steve K. & Stephanie M., 1999, A Handbook for Teaching and Learning in Higher Education, Kogan Page Limited, p. 134.
[11] Anderson D., Black D., Brown S., Day A., McDowell S., McDowell L., Peters L. & Smith B., 1999, p. 113.
[12] Badders, W., 2000, Methods of Assessment, viewed 3 August 2012,
[13] <http://www.eduplace.com/science/profdev/articles/badders.html>.
[14] Habeshaw S., Gibbs G. & Habeshaw T., 1986, 53 Interesting Ways to Assess Your Students, p. 57.
[15] Newble D. & Cannon R., 2000, A Handbook for Teachers in Universities and Colleges, p. 202.
[16] Honey P. & Mumford A., 1992, The Manual of Learning Styles, 3rd edn, Maidenhead: Peter Honey. Viewed 15 May 2012, www.studyskills.soton.ac.uk/studyguides/Learning%20styles.doc
[17] Anderson D., Black D., Brown S., Day A., McDowell S., McDowell L., Peters L. & Smith B., 1999, p. 4.
[18] Newble D. & Cannon R., 2000, A Handbook for Teachers in Universities and Colleges, pp. 173-205.
[19] Littlejohn, A., 1993, 'Teaching school aged learners: a framework for understanding and some indicators for change'.
[20] R.D. Dowsing and S. Long, 1997, "The Do's and Don'ts of Computerising IT Skills Assessment", University of East Anglia, Norwich NR4 7TJ, UK.
[21] Helen Drury, 1997, "Integrating the Teaching of Writing Skills into Subject Area IT Programs", Learning Assistance Centre, The University of Sydney.
[22] Michael J. Hannafin, 1997, Educational Technology Research & Development, 45(3), 101-117, University of Georgia, Athens, GA, USA 30602.
[23] Mary T. Rice, 1997, "Finding out What Works: Learning from Evaluations of Information Technologies in Teaching and Learning", Deakin Centre for Academic Development, Deakin University.
[24] Lynette Schaverien and Mark Cosgrove, 1997, "Computer Based Learning Environments in Teacher Education: Helping Students to Think Accurately, Boldly and Critically".

Authors

Samantha Mathara Arachchi is a Lecturer at the Department of Communication and Media Technology (CMT), University of Colombo School of Computing (UCSC), Sri Lanka, who obtained his Bachelor of Science degree in 1999 from the Open University of Sri Lanka. He holds a Postgraduate Diploma in Computer Technology (2002) from the University of Colombo and a Postgraduate Diploma in Information Management (2007) from the Sri Lanka Institute of Information Technology, and also holds a Masters in Information Management (2008) with a merit pass. He is reading for a PhD in Computer Science in Malaysia.


Dr. Kennedy D. Gunawardana is a Professor in the Department of Accounting at the University of Sri Jayawardenapura. He teaches Cost and Management Accounting, Accounting Information Systems, and Artificial Neural Networks for Accounting to undergraduate and postgraduate students. He has over 50 articles published in refereed conference proceedings, numerous other papers and monographs, 2 chapters in international books, 3 English medium textbooks and 3 Sinhala medium edited books.

R.S. Madanayake is a Lecturer attached to the University of Colombo School of Computing, Sri Lanka. He obtained his B.Sc. from the University of Colombo, Sri Lanka in 2001 and his M.Sc. in IT from the University of Colombo, Sri Lanka in 2010. He is currently reading for his M.Phil. at the University of Colombo, Sri Lanka.

G.K.A. Dias is a Senior Lecturer at the University of Colombo School of Computing and is currently the head of its Department of Communication and Media Technologies. He obtained his B.Sc. from the University of Colombo in 1982, a Postgraduate Diploma in Computer Studies from the University of Essex, U.K. in 1986, and an M.Phil. by research from the University of Wales, U.K. in 1995.