
The learning outcomes of exam questions in the Input/Output topic in Computer Architecture

Edurne Larraza-Mendiluze
Faculty of Informatics

Dept. of Computer Architecture and Technology

University of the Basque Country (UPV/EHU)

Donostia-San Sebastian, Spain 20018

Email: [email protected]

Nestor Garay-Vitoria
Faculty of Informatics

Dept. of Computer Architecture and Technology

University of the Basque Country (UPV/EHU)

Donostia-San Sebastian, Spain 20018

Email: [email protected]

Abstract—The Input/Output (I/O) topic, as a branch of the Computer Architecture field, has been considered a key topic in both the Computer Science and the Computer Engineering curricula. Years of teaching have demonstrated this topic to be difficult for students to understand. Having made some changes to the methodology used to teach the topic and to the infrastructure used for practice, we have analysed the results and concluded that the research should be centred more on lecturers' work than on students' results. As part of this shift, some exam questions have been analysed in order to determine the real learning outcomes achieved.

I. INTRODUCTION

The importance of the Input/Output (I/O) topic is well defined in the Computer Engineering Curriculum [1] and the Computer Science Curriculum [2] of the IEEE-ACM joint task force. The authors' previous work on improving and analysing students' learning in this area is described in [3] and [5].

The aim of the present work is to answer the following question: what are the learning outcomes of the questions we set? This paper will show the preliminary results of the work done to answer it.

In the next section we will review some previous work that served as a guide for the research we report in this paper. Section three will explain the process followed to gather exam questions on the I/O topic from several Spanish universities. Such a database could serve as an inspiration for lecturers who are going to evaluate students' learning outcomes in the I/O topic. However, it would not be of any help without criteria for selecting the most appropriate questions. Section four will describe the process followed to determine a set of criteria that could help in choosing among the different options and in developing new questions.

A synthesis of the exam exercises analysed so far will be presented in section five. Since this is a work in progress, in section six some discussion questions will be posed in order to get the opinions of the conference attendees and see how this research could be improved.

II. RELATED WORK

For this particular topic (I/O), we have not been able to find any literature discussing assessment instruments, evaluation, or classification of exam questions. However, relevant literature in other topic areas confirms the importance of such work. Dean and Rodman [6] stress the importance of designing tests with both algorithmic and conceptual problems in order to be fair to all kinds of students, while Swart [7] remarks that academics must produce effective questions in order to engage students in higher-order cognitive processes. Although the work of Dean and Rodman [6] and Swart [7] was carried out in the field of engineering, the Computing Education Research community has also shown an increasing interest in research centred on what lecturers do, mostly concerning programming courses.

Most computing academics are overly optimistic about the validity of their grading – so optimistic that it doesn't even occur to them to question the validity of their grading. [8]

Petersen et al. [9] found that the majority of the exams reviewed were composed predominantly of high-value, integrative code-writing questions, with a high number of CS1 concepts required to answer them.

Sheard et al. [10] describe the development of a classification scheme that can be used to investigate the characteristics of introductory programming examinations. Their work is part of a larger project whose aims include investigating the pedagogical intentions of the educators who compile these examinations.

Simon et al. [11] explore questions such as "Is there consensus on what students should learn in CS2?", based on an analysis of final exams.

Elliott Tew and Guzdial [12] raise the question of whether the poor results obtained by students in the exams are the product of failures of student comprehension or the academics' inability to accurately measure the students' performance.

Our work is similar to those described above, but in the field of I/O within computer architecture.

III. GATHERING I/O EXAM QUESTIONS

As this project grows, we would expect to analyse exam questions from many universities in a range of countries. However, as we have found no reports of such work on the I/O topic, or more generally in the field of Computer Architecture [13], we decided to start our study locally. We analysed the curricula of several Spanish universities in order to find which courses included the I/O topic, and discovered which lecturer was in charge of each course. We then asked lecturers at 15 different universities to send us their exam questions, and broadcast the same request to the mailing list of the Spanish Society of Computer Science Education, indicating that we wished to analyse the exam questions in order to determine the learning outcomes that they assess.

Responses were received from nine of the 15 targeted universities. Two of those said that they would not send their personal work, and one sent only laboratory exercises, which were not pertinent to our work. Another respondent, who had only just taken over the course and had never prepared exam questions for it, sent us the contact details of other lecturers who had taught the subject in previous years.

In response to the request sent to the mailing list, one person was initially interested in our work, but was unable to follow up by sending us exam questions.

We have therefore collected 152 exam exercises from six different universities, including our own. We agreed that we should begin the work with these questions, in the hope of attracting more interest once we have published our initial findings.

IV. CLASSIFICATION CRITERIA

Petersen et al. [9], Sheard et al. [10], and Simon et al. [11] have classified 'CS1', Programming, and Data Structures examination questions, in a similar way to what we intend to do here with I/O exam questions. The criteria used in those works have been merged and modified as explained in the following subsections.

The process of determining the classification criteria was iterative. Once the two researchers had chosen an initial set of criteria based on the literature, they started classifying, found some problems, discussed them, revised some of the criteria, reclassified the questions, discussed and made some further small modifications, and finally completed the classification. Due to the reduced length of this paper we will describe only the final version, explaining some of the decisions taken.
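Taken together, the criteria described in the following subsections amount to one record per exam exercise. A minimal sketch of such a record, in Python, with class and field names that are purely illustrative (they are not part of the original study or of any shared tool):

from dataclasses import dataclass, field
from typing import List

@dataclass
class ClassifiedExercise:
    """One exam exercise tagged with the criteria of section IV."""
    university: str                                   # anonymised source, e.g. "university #1"
    topics: List[str] = field(default_factory=list)   # subsection IV-B
    skills: List[str] = field(default_factory=list)   # subsection IV-C
    styles: List[str] = field(default_factory=list)   # subsection IV-D
    difficulty: str = "LOq"                           # "LOq" or "HOq", subsection IV-E

# A hypothetical record, for illustration only:
example = ClassifiedExercise(
    university="university #1",
    topics=["CPU", "Peripheral", "Interrupt controller or manager"],
    skills=["Design program", "Analysis of performance"],
    styles=["code", "short answer"],
    difficulty="HOq",
)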

A. Percentage of marks allocated

Sheard et al. [10] used the percentage of marks allocated to weight their other findings. We have chosen not to do this, since it does not affect the learning outcomes, and could vary with the design of a specific exam.

B. Topics covered

Like Petersen et al. [9] and Sheard et al. [10], we consider the topics covered in each question. However, unlike Sheard et al. [10], we do not impose a maximum of three topics assigned to each question. As Petersen et al. [9] say, "The mind is limited in its ability to work with multiple concepts simultaneously". Therefore, it is worth identifying all of the topics that are covered in each question.

For this purpose we use the topics used by Larraza-Mendiluze and Garay-Vitoria [5] to design concept maps of the I/O topic. The steps and types of concepts were not considered. We thus obtained eleven topics: CPU, Memory, Peripheral, I/O registers, I/O controller, Interrupt, Interrupt controller or manager, Poll, DMA, DMA controller, and Others. However, it is evident that when the I/O controller is considered, the I/O registers will be included, just as Interrupt and DMA will be included when the Interrupt controller or manager and the DMA controller, respectively, are considered. We therefore decided to mark only the controllers, and to select I/O registers, Interrupt, and DMA only when they appear in isolation, usually in low-order questions (explained in subsection IV-E).
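As an illustration of this subsumption rule, the following sketch (Python; written purely as an illustration, not a tool used in the study) removes a concept whenever its controller is already tagged:

# Mark only the controllers; keep I/O registers, Interrupt and DMA
# only when they appear without their corresponding controller.
SUBSUMED_BY = {
    "I/O registers": "I/O controller",
    "Interrupt": "Interrupt controller or manager",
    "DMA": "DMA controller",
}

def normalise_topics(topics):
    """Return the topic set with subsumed concepts removed."""
    topics = set(topics)
    return {t for t in topics if SUBSUMED_BY.get(t) not in topics}

# {"CPU", "Interrupt", "Interrupt controller or manager"}
#     -> {"CPU", "Interrupt controller or manager"}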

C. Skills required to answer the question

The criteria for classifying the exam questions according to the skills required to answer them are a mixture of those proposed by Petersen et al. [9], Sheard et al. [10], and Simon et al. [11]: pure knowledge recall, applied knowledge recall, trace or explain code, write or modify code, design program, analysis of performance, and knowledge of specific architecture.

The analysis of performance and the knowledge of a specific architecture are skills specific to this subject, which may not appear when classifying questions in other areas. The 'Knowledge of specific architecture' criterion caused some discussion. Some questions ask the students to code in assembler and, therefore, knowledge of a specific architecture is needed to answer them. However, if no specific assembly language is required, the question could be answered based on knowledge of any architecture, and could thus be used in other contexts. The decision was taken to apply this criterion only in cases where the question is not transferable.

Unlike Sheard et al. [10], we did not restrict the choice to a single skill, since needing more than one skill could make a question more difficult to answer, and it is therefore useful to know how many skills are needed. For example, one question could require the student to depict something using a graphical representation and also to give an explanation through a short answer. However, for questions involving both program design and coding, we agreed to use only the 'Design program' criterion, which subsumes code writing; the style of the question, in that case, would be 'code'.
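The two decisions above, recording every skill a question requires while letting 'Design program' subsume code writing, can be restated as a short sketch (Python, illustrative only):

def normalise_skills(skills):
    """Keep every skill a question requires, except that
    'Design program' subsumes 'Write or modify code'."""
    skills = set(skills)
    if "Design program" in skills:
        skills.discard("Write or modify code")
    return skills

# {"Design program", "Write or modify code", "Knowledge of specific architecture"}
#     -> {"Design program", "Knowledge of specific architecture"}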

D. Style or format of the answers

We will use the following terms to describe how the students have to answer: multiple choice, fill-in-the-blanks, short answer, essay question, code, and graphical representation. Once again, this has been based on the criteria used by Petersen et al. [9], Sheard et al. [10], and Simon et al. [11].

E. Degree of difficulty

The papers we are using as a guide ([9], [10], and [11]) classify the degree of difficulty as easy, medium and hard, arguing that Bloom's [14] and SOLO [15] taxonomies require knowledge of the content covered in class. We have chosen instead to follow Swart [7], and use only low-order questions (LOq) and high-order questions (HOq), where Bloom's [14] Knowledge and Comprehension are labelled as LOq, while Application, Analysis, Synthesis, and Evaluation are labelled as HOq; and SOLO's [15] Unistructural and Multistructural are labelled as LOq, while Relational and Extended abstract are labelled as HOq.
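The mapping from the two taxonomies onto the two-level scheme can be written out explicitly; the following sketch (Python) is simply a restatement of the labelling above:

# Bloom's taxonomy levels mapped to low-order / high-order questions
BLOOM_TO_ORDER = {
    "Knowledge": "LOq",
    "Comprehension": "LOq",
    "Application": "HOq",
    "Analysis": "HOq",
    "Synthesis": "HOq",
    "Evaluation": "HOq",
}

# SOLO taxonomy levels mapped to low-order / high-order questions
SOLO_TO_ORDER = {
    "Unistructural": "LOq",
    "Multistructural": "LOq",
    "Relational": "HOq",
    "Extended abstract": "HOq",
}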

V. ANALYSIS

What follows is a summary of all the exercises gathered, based on the previously stated criteria.

It must be taken into consideration that this topic is not always taught in the same semester of a degree, and this could have some implications. Three of the cases analysed in this paper teach the I/O topic in the second semester, two in the third semester, and one in the fourth semester.
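The percentages reported in the figures that follow are simple frequency counts over the 152 exercises. A minimal sketch of such a tally (Python; the exercises argument is a hypothetical list of records such as the ClassifiedExercise sketch above, not our actual data):

from collections import Counter

def topic_percentages(exercises):
    """Percentage of exercises in which each topic appears,
    counting each topic at most once per exercise."""
    counts = Counter()
    for exercise in exercises:
        counts.update(set(exercise.topics))
    total = len(exercises)
    return {topic: 100.0 * n / total for topic, n in counts.items()}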

A. Topics covered

Fig. 1 shows the percentage of questions in which each topic is covered. CPU, Memory and Peripheral are the basic topics of an I/O system and are therefore covered very frequently. Interrupt together with Interrupt controller or manager, and I/O registers together with I/O controller, are also very frequent topics, but Poll, and DMA together with DMA controller, are not so frequent.

Fig. 1. Topics appearing in questions analysed.

It must also be taken into consideration that other topics, such as cache memory, buses, transfer rate, etc., are also covered in some exercises.

B. Skills required to answer the questions

Fig. 2 shows the skills required to answer the questions. The most frequent skills are 'Analysis of performance' and 'Knowledge of specific architecture'. The former usually requires knowledge of other topics, and the analysis to be performed identifies this type of question as HOq. The latter, 'Knowledge of specific architecture', could be easily extended to refer to the knowledge of any architecture rather than a specific architecture.

Fig. 2. Skills required to answer the questions.

Pure knowledge recall and applied knowledge recall are skills very closely associated with LOqs, which are much less frequent than HOqs; therefore, these skills are not very frequent either. However, the least frequent skill is 'Trace or explain code', which could be used in both LOqs and HOqs. We believe that this skill should be tested more often than it is in the exams we collected.

C. Style or format of the answers

As shown in Fig. 3, the most frequent style of answer required is the short answer, followed by code writing. All the other styles appear in less than 10% of the questions.

Fig. 3. Style or format of the questions.

D. Degree of difficulty

Figs. 4 and 5 show the degree of difficulty of the questions by showing the number of topics covered in high-order and low-order questions, by university. It is clear that university #1 is the one that uses the most HOqs. Moreover, it is also the university that uses the most topics per question. However, this is the university that teaches the I/O topic in the fourth semester, and it would therefore be expected to use questions of a higher degree of difficulty.


Fig. 4. Number of topics covered by HOqs.

Fig. 5. Number of topics covered by LOqs.

As for the rest of the universities, it is clear that LOqs use fewer topics than HOqs. However, the universities teaching the topic in the second semester might be using too many HOqs with too many topics covered.

VI. DISCUSSION

In conclusion, we would say that, after completing the whole process, we fully agree with Petersen et al. [9] that most of the questions are HOqs and that they cover a large number of topics.

We will now raise some issues in order to take this work forward. To begin with, a good database would need a large number of exam questions. What method could be used to collect them?

Secondly, this classification scheme needs to be verified by a group of specialists. It possibly needs more criteria in order to be valid in a wider context. What would be a good method of contacting the right people?

Finally, there are two more questions that concern us. How could we evaluate whether students are learning the I/O topic or whether they are only learning how to answer the questions needed to pass the exams? And how could we evaluate whether lecturers are using appropriate questions to assess students?

ACKNOWLEDGMENT

We thank the lecturers who sent us their exams, since without these data the research would have been impossible. We also thank Mr. Simon, from the University of Newcastle, for his valuable comments that helped us improve the paper. This research work has been supported by the University of the Basque Country UPV/EHU, under grant UFI11/45, and by the Department of Education, Universities and Research of the Basque Government.

REFERENCES

[1] IEEE Computer Society and ACM. 2004. "Computer Engineering 2004: Curriculum guidelines for undergraduate degree programs in computer engineering". http://www.acm.org/education/education/curric_vols/CE-Final-Report.pdf

[2] ACM and IEEE Computer Society. 2008. "Computer Science Curriculum 2008: An interim revision of CS 2001". http://www.acm.org//education/curricula/ComputerScience2008.pdf

[3] Larraza-Mendiluze, E., Garay-Vitoria, N., Martín, J.I., Muguerza, J., Ruiz-Vazquez, T., Soraluze, I., et al. 2012. "Nintendo DS projects to learn computer Input/Output". In Proceedings of the 17th Annual Conference on Innovation and Technology in Computer Science Education, ITiCSE '12, Haifa, Israel, July 3-5, 2012. DOI=10.1145/2325296.2325388

[4] Goldman, K., Gross, P., Heeren, C., Herman, G.L., Kaczmarczyk, L., Loui, M.C., and Zilles, C. 2010. "Setting the scope of concept inventories for introductory computing subjects". ACM Transactions on Computing Education, Vol. 10, Issue 2, Art. 5. DOI=10.1145/1789934.1789935

[5] Larraza-Mendiluze, E., and Garay-Vitoria, N. 2012. "A comparison between lecturers' and students' concept maps related to the Input/Output topic in computer architecture". In Proceedings of the 12th Koli Calling International Conference on Computing Education Research, Tahko, Finland, November 15-18, 2012, in press.

[6] Dean, R., and Rodman, S. 1987. "Testing engineering students: Are we really fair?" IEEE Transactions on Education, Vol. E-30, Issue 2, pp. 65-70. DOI=10.1109/TE.1987.5570523

[7] Swart, A. 2010. "Evaluation of final examination papers in engineering: A case study using Bloom's taxonomy". IEEE Transactions on Education, Vol. 53, Issue 2, pp. 257-264. DOI=10.1109/TE.2009.2014221

[8] Lister, R. 2010. "Geek genes and bimodal grades". ACM Inroads, Vol. 1, Issue 3, pp. 16-17.

[9] Petersen, A., Craig, M., and Zingaro, D. 2011. "Reviewing CS1 exam question content". In Proceedings of the 42nd ACM Technical Symposium on Computer Science Education, SIGCSE '11, pp. 631-636, Dallas, Texas, USA, March 9-12, 2011. DOI=10.1145/1953163.1953340

[10] Sheard, J., Simon, Carbone, A., Chinn, D., Laakso, M., Clear, T., et al. 2011. "Exploring programming assessment instruments: A classification scheme for examination questions". In Proceedings of the 7th International Workshop on Computing Education Research, ICER 2011, pp. 33-38, Providence, Rhode Island, USA, August 8-9, 2011. DOI=10.1145/2016911.2016920

[11] Simon, B., Clancy, M., McCartney, R., Morrison, B., Richards, B., and Sanders, K. 2010. "Making sense of data structures exams". In Proceedings of the 6th International Workshop on Computing Education Research, ICER 2010, pp. 97-105, Aarhus, Denmark, August 8-11, 2010. DOI=10.1145/1839594.1839612

[12] Elliott Tew, A., and Guzdial, M. 2010. "Developing a validated assessment of fundamental CS1 concepts". In Proceedings of the 41st ACM Technical Symposium on Computer Science Education, SIGCSE 2010, pp. 97-105, Milwaukee, Wisconsin, USA, March 10-13, 2010. DOI=10.1145/1734263.1734297

[13] Simon, Carbone, A., De Raadt, M., Lister, R., Hamilton, M., and Sheard, J. 2008. "Classifying computing education papers: Process and results". In Proceedings of the 4th International Workshop on Computing Education Research, ICER 2008, pp. 161-171, Sydney, NSW, Australia, September 6-7, 2008. DOI=10.1145/1404520.1404536

[14] Bloom, B.S. (Ed.), Engelhart, M.D., Furst, E.J., Hill, W.H., and Krathwohl, D.R. 1956. "Taxonomy of educational objectives: The classification of educational goals. Handbook I: The cognitive domain". New York: David McKay.

[15] Biggs, J.B., and Collis, K.F. 1982. "Evaluating the quality of learning: The SOLO taxonomy". New York: Academic Press (1st ed.).
