
Object Interaction Competence Model v. 2.0

Jens Bennedsen
Aarhus University, School of Engineering
Aarhus, Denmark

Email: [email protected]

Carsten Schulte
Freie Universität Berlin

Berlin, Germany

Email: [email protected]

Abstract—Teaching and learning object-oriented programming has to take into account the specific object-oriented characteristics of program execution, namely the interaction of objects during runtime. Prior to the research reported in this article, we developed a competence model for object interaction and a test instrument for that model. This article assesses the problems involved in the first version of the instrument and the associated empirical results. Based on the discussion and analysis of the prior results, we describe the rationale behind refining the hierarchy and present a refined test instrument. We then describe a first evaluative study with third-semester students and present its results as well as a discussion of the validity of the refined instrument. The empirical results show that the hierarchy is taxonomic (i.e., to understand level n one needs to understand all lower levels), but there were problems with the test used.

I. INTRODUCTION

Few computing educators of any experience would argue that students find learning to program easy. It has been debated at many conferences; books have been written about it; and at some of the major computer science education conferences, the majority of papers are about introductory programming courses [1].

The Lister and McCracken working groups [2], [3] have triggered interest in comparing and more precisely measuring learning outcomes and in understanding learning difficulties. In order to measure learning outcomes, it becomes clear that a collection of questions alone is not sufficient; one needs to know that they are valid. Consequently, criteria for questions are needed [4].

As one example, Robins recently presented the learning edge momentum theory [5], offering an alternative explanation of the widely observed bimodal grade distribution. In response, Lister [6] argued that problems with the validity of exam questions might also be affecting the observed results. Petersen et al. [7] analyzed exam questions and indeed found some problematic issues, e.g., that one question usually includes quite a lot of concepts that it is not intended to evaluate. In addition, their chosen group of nine experienced instructors had difficulties in evaluating the questions, another indicator of validity problems. Another approach is to link questions to concepts [8], or similarly to define or empirically analyze learning concepts which should then be included in assessments [9]–[12], and then to try to include in a question only those concepts that are to be evaluated [7]. However, these approaches focus on (and end with) a collection of items or concepts to be asked. And in the case of object-oriented programming there are quite a lot of concepts to be included (see e.g. [9], [10]). In addition, the concepts are interrelated.

Eckerdal et al. [13] chose a different approach. Based on prior work [14], which uncovered critical aspects of students’ perceptions of the concepts object and class, they defined a hierarchy of so-called text and action, and tested empirically whether there is such a hierarchic progression from text to action. As a consequence, their test instrument and test questions were derived from a theoretical model indicating the important aspect each question should focus on. In alignment with the perspective of research in education, we could label Eckerdal et al.’s hierarchy a competence model that was empirically evaluated. This line of research is done quite seldom in research on the teaching and learning of programming.

From a meta-study, Sheard et al. [15, p. 101] conclude that “knowledge of what our programming students can do and what they have difficulty with is important, [but] if we are to deepen our understanding of why this is so then we would argue that we need to explore the process of learning in relation to theories and models of learning. It was a concern that we didn’t find many studies that investigated learning within such a theoretical framework. This perhaps indicates unawareness on behalf of the researchers or inapplicability of the models and theories found. If the latter is the case then we propose that there is a need to adapt models from other disciplines or to specifically develop models for the teaching and learning of programming. These models can be used to gain greater understanding of why programming students think in particular ways and how they learn”.

In this work, we aim to take a step towards a validated test instrument that measures the degree of understanding of a central concept of the object-oriented paradigm. To this end, we develop and refine two related aspects:

1) First, we revise a hierarchy of understanding object interaction. The object interaction hierarchy (OIH) is a model describing an important aspect of the learning outcome of (introductory) object-oriented programming. Such a model can help to find criteria for the validation of test instruments, as suggested by [15]. The OIH is our contribution in terms of a theory or (part of a) competence model of what it means to understand one central aspect of the object-oriented paradigm.



2) The second aspect is the development and refinement of an associated test instrument, named COIN (Comprehension of Object InteractioN).

Prior to this research, we developed a competence model for object interaction and a test instrument for that model. The first version of the OIH was tested empirically and was overall confirmed as being taxonomic [16], where persons understanding level n also understand level n − 1; but the overall hardest question of the test did not fit well into the taxonomy (only questions including polymorphism were assigned to the highest level, and the hardest question (i3) was not; see the discussion in [16, p. 221f.]). Instead, this question relied on reference semantics.

This article discusses the first version of the instrument and the associated empirical results. Based on the discussion and analysis of the prior results, we concluded that some adaptations to the original object interaction hierarchy would be useful in order to make it easier to use, easier to interpret, and easier to apply to different teaching situations. The main issues to change were: removing the strict mapping of polymorphism to one level of the hierarchy; refining the two highest levels by taking into account the problems students have with multiple references to the same object; and overall being more precise in describing the levels.

With regard to the test instrument (for evaluating the students’ object interaction competence), we found that some of the prior test questions are problematic because they are more related to calculations than to object interaction. We present a revised test instrument without these difficulties.

In the following, we first discuss the theoretical aspect, our refined model of object interaction (OIH), and give reasons for the model. Following that, we describe a first evaluative study with third-semester students and present its results as well as a discussion of the validity of the refined instrument (COIN).

II. TEACHING AND LEARNING OBJECT INTERACTION

When learning object-oriented programming, one also has to take into account how the object-oriented program text is understood and processed by the computer – what [13] call the action aspect.

A. Why Interaction?

Understanding the notional machine – or program execution – is important for imperative programming; it is important for understanding object-oriented programs, too [19]–[21].

Ragonis and Ben-Ari [22, p. 229] conducted a study with high school students in the 10th grade, investigating the understanding of static and dynamic aspects of object-oriented programs. They concluded that the difficulties found “indicate that novice students have difficulty in capturing the overall picture of program execution. This is succinctly conveyed by the question that came up again and again: What is happening and when? When students were required to use separate pieces of knowledge, they did very well. […] But still they didn’t have a clear picture about the execution”.

Milne and Rowe [20, p. 55] conclude from the results of a study with first-year university students that most reported learning problems are due to the “inability [of students] to comprehend what is happening to their program in memory, as they are incapable of creating a clear mental model of its execution”. In [13, p. 35f.], action was operationalized as the students’ ability to “identify the state of a variable after a specified sequence of method calls. The method calls involved assignment, addition, subtraction and a single if statement”.

Sorva [23, p. 23] discusses the importance of program dynamics for learning programming in terms of being a threshold concept. In particular, he concludes that the text aspect is tangible but the action aspect much less so, and although students know about stepwise execution, he concludes that “there is considerable evidence of how students fail to understand what happens when a program is run”.

Sorva therefore refers (among others) to [24]. Their study results “indicate that students have a vague understanding of dynamic aspects of OO programs”. The authors also raise the possibility that “students thus develop a wrong mental representation of object references and identification”.

Beck and Cunningham, two of the “grand old men” of object-orientation, used CRC cards for teaching object-orientation [25]. CRC stands for class-responsibility-collaboration – the focus is on what classes are needed and how they interact.

Object interaction is a very important part of knowing object-orientation. This work focuses on describing taxonomic levels of understanding object interaction for novices, especially for use in teaching introductory programming. Based on our results with the prior version of the OIH, understanding references is indeed a most difficult topic (at least in our OIH questions); it is also seen as difficult by educators and was therefore commented on by teachers and educators in their feedback on the prior version of the OIH [16], [26]. We will discuss these results in more detail in the next subsection.

B. The first version of OIH

The OIH was developed as a tool to capture an essential part of learning object-oriented programming, in particular understanding object-oriented interaction. A four-level competence hierarchy of object interaction was derived from the literature and from our experience in teaching object-orientation, and presented to teachers for comments.

The results were ([26, p. 26]) that 80% found the hierarchy to be a taxonomy. At the same time, however, several commented on polymorphism as being included at the wrong level of the hierarchy, or as wrongly included at all, since it can be introduced on several levels. We thus decided to remove the issue of polymorphism from the model in this second version.

The first version of the OIH was also tested empirically with learners and could overall be confirmed as being taxonomic [16]; but the overall hardest question of the test did not fit well into the taxonomy. Only questions including polymorphism were assigned to the highest level, and the hardest question (i3) was not (see the discussion in [16, p. 221f.]). Instead, this question relied on reference semantics, as discussed later. The OIH and a slightly adapted test instrument were later used in another study [27], but without including the problematic highest level of the taxonomy.

1) Analysis of the results of the previous test instrument: For this work we re-analyzed the first version of the test instrument (the COIN questionnaire, Comprehension of Object InteractioN).

The previous test instrument had two aims:

1) to test whether there was a correlation between understanding the OIH and code reading skills (for this we used questions from [2]), and
2) to evaluate the participants’ understanding of object interaction.

In this work we focus on evaluating the participants’ understanding of object orientation. As a consequence, we excluded the questions taken from [2] (i.e., questions c, d, n, and o).

Based on our analysis of the results from the first test instrument, we concluded that some errors are more likely to result from small mathematical mistakes (adding three-digit numbers in the head causes problems for freshmen) than from errors in understanding object interaction. One example of this was question h1, where a couple of objects stored in a container should be summed up by iteration; see listing 1. The correct result was 127. Here some students answered 117, 122, 128, 129, or 137; it is very likely that this is due to mathematical errors. This conclusion could be taken as an argument for using multiple choice questions (MCQs). However, using MCQs introduces other problems: students fit the answers to the question instead of tracing the code; it is difficult to include all answers that students’ misunderstandings of object interaction would produce; and students tend to answer every question, even those they cannot (see [28] for an example of answering techniques justifying these concerns).

List<A> l = new ArrayList<A>();
l.add(new A(76));
l.add(new A(-10));
l.add(new A(6));
l.add(new A(43));
l.add(new A(12));
int sum = 0;
for (A a : l)
    sum += a.getA();

Listing 1. Question h1 in the old questionnaire

As described in [16], the hardest question was i3 (see the excerpt in listing 2). In the loop, a reference to the same object was added five times to the list – not five different objects – and this is what makes the question so difficult. Most students answered that the output stays the same as before (19%) or is decreased by 5 (19%). The correct answer was that the sum is decreased by 25 (answered by 17%), because five entries in the list referred to this one object. We think this indirect, invisible change is indeed hard to detect, since it relies on reference semantics.

// previous: added objects to a list. Overall sum: 79
A a2 = l.get(4);   // retrieves an object from the list l
a2.test(5);        // changes the object's value from 10 to 5
sum = 0;
for (A aIt3 : l) {
    sum += aIt3.getA(); // computes the overall sum of values
}
System.out.println(sum); // what is the output here?

Listing 2. Question i3 in the old questionnaire
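To make the aliasing effect concrete, here is a minimal, self-contained sketch of the i3 situation. This is our reconstruction, not the original instrument code: we assume a class A mirroring the questionnaire’s (a stored int, getA, and a test(int) mutator), one object added five times, plus one filler object chosen so that the initial sum is 79.

import java.util.ArrayList;
import java.util.List;

public class I3Sketch {
    static class A {
        private int a;
        A(int newA) { a = newA; }
        int getA() { return a; }
        void test(int newA) { a = newA; } // mutator, as assumed above
    }

    public static void main(String[] args) {
        List<A> l = new ArrayList<A>();
        A shared = new A(10);
        for (int i = 0; i < 5; i++) {
            l.add(shared);      // five list entries, but only one object
        }
        l.add(new A(29));       // filler so the initial overall sum is 79

        A a2 = l.get(4);        // yet another reference to the shared object
        a2.test(5);             // one call changes what five entries see

        int sum = 0;
        for (A aIt : l) {
            sum += aIt.getA();
        }
        System.out.println(sum); // 54: the sum drops by 25, not by 5
    }
}

Tracing this sketch shows why “decreased by 5” is the natural but wrong answer: the change is made through one reference, but it is observed through five.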

C. Conclusion of the analysis

We concluded that some adaptations to the original OIH would be useful in order to make it easier to use, easier to interpret, and easier to apply to different teaching situations. The main issues to change were: removing the strict mapping of polymorphism to one level of the hierarchy; combining the two highest levels; and being more precise in describing the levels.

In general, we concluded that the degree of visibility of the impact of object interaction on the object structure is the most important factor for the difficulty of understanding object interaction.

III. THE REFINED TEST INSTRUMENT

Based on the new hierarchy, we have also refined the test instrument. This section describes the new test instrument and gives a brief analysis of the results from a first empirical study in which the test instrument was used. The results will be used to check whether the new OIH is a taxonomy and to discuss its validity.

A. Test instrument

As in the first version, the instrument uses items that require the participants to state the outcome of the execution of short programs given on paper. The programs are written in Java. The complete test instrument can be found at http://staff.iha.dk/jbb/COIN_v2.pdf

In designing the items, the aim was to include only necessary aspects. Specifically, we chose to refrain from including domain knowledge and therefore used arbitrary names for classes (like A and B). In order to construct items with more complex object structures we included ArrayLists, as these should be familiar to the students. Consequently, all questions are short code examples with one or two classes and a class containing a main method where the action takes place. In order to save time, the same program contains more than one question.

In total there are five programs with 18 questions overall. The programs are numbered from “a” to “e”; within one program the questions are numbered from 1 to n. The output of program “d” came from a container with two objects (in a loop), so these questions were numbered “d1a” and “d1b”. In the following we give some examples of questions on the three levels:

1) Level one questions – interaction with objects: The student must demonstrate knowledge about simple forms of interaction between objects. The execution must be directly visible from the program text. Listing 3 gives an example of a question (question “b1”).

public class A {
    private int a;
    public A(int newA) {
        a = newA;
    }
    public int getA() {
        return a;
    }
}

import java.util.*;
public class TestClass {
    public static void main() {
        List<A> l = new ArrayList<A>();
        l.add(new A(2));
        l.add(new A(4));
        l.add(new A(3));
        l.add(new A(1));
        l.add(new A(2));
        int sum = 0;
        for (A a : l) {
            sum += a.getA();
        }
        System.out.println(sum);
        // what is the output here?
    }
}

Listing 3. Typical “interaction with objects” question

2) Level two questions – interaction on object structures: The student must demonstrate knowledge about interaction between objects where the state of the objects changes over time (thus taking the previous state of the object into account). The execution must be directly visible from the program text. Listing 4 gives an example of a question (question “b2”). The question uses the same class A as in listing 3.

public class TestClass {
    public static void main() {
        // ... same as listing 3 ...
        l.add(new A(5));
        l.set(0, new A(5));
        sum = 0;
        for (A a : l) {
            sum += a.getA();
        }
        System.out.println(sum);
        // what is the output here?
    }
}

Listing 4. Typical “interaction on object structures” question
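For reference, here is a worked trace of listing 4 (our annotation, not part of the instrument; values only, with the A wrappers elided):

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class B2Trace {
    public static void main(String[] args) {
        // the list from listing 3 holds objects with the values [2, 4, 3, 1, 2]
        List<Integer> l = new ArrayList<Integer>(Arrays.asList(2, 4, 3, 1, 2));
        l.add(5);      // append:  [2, 4, 3, 1, 2, 5]
        l.set(0, 5);   // replace: [5, 4, 3, 1, 2, 5] -- set replaces, not inserts
        int sum = 0;
        for (int v : l) {
            sum += v;
        }
        System.out.println(sum); // 20 -- the expected answer to b2
    }
}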

3) Level three questions – interaction on dynamic object structures: The student must demonstrate knowledge about interaction between objects where the state of the objects changes over time and where indirect changes are included, e.g., through more than one reference to the same object or other cases where reference semantics has to be taken into account. Listing 5 gives an example of a question (question “c2”). The change to a1 is indirect, since a local variable refers to the same object as a1 and this local variable is changed (thus changing a1 as well).

public class A {
    private int a;
    public A(int newA) {
        a = newA;
    }
    public int getA() {
        return a;
    }
    public void change(int newA) {
        a = newA;
    }
}

public class B {
    private int b;
    public B(int newB) {
        b = newB;
    }
    public int getB() {
        return b;
    }
    public void change(A a) {
        A temp = a;
        temp.change(b * 2);
        b = b * 3;
    }
}

public class TestClass {
    public static void main() {
        A a1 = new A(2);
        B b1 = new B(2);
        b1.change(a1);
        System.out.println(a1.getA());
        // what is the output here?
    }
}

Listing 5. Typical “interaction on dynamic object structures” question
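To see why reference semantics drives the answer, here is a step-by-step trace of the main method in listing 5 (our annotation, not part of the instrument):

A a1 = new A(2);               // a1.a == 2
B b1 = new B(2);               // b1.b == 2
b1.change(a1);                 // temp = a1: temp and a1 now name the same object
                               // temp.change(b * 2) sets a1.a to 4
                               // b = b * 3 sets b1.b to 6 (irrelevant for the output)
System.out.println(a1.getA()); // prints 4 -- a1 changed without being mentioned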

B. Participants in the test of the test instrument

The questionnaire was given to 110 students from Freie Universität Berlin during a usual lecture. The students participated in a course called “Algorithm and Programming 3”, their third semester-long course on programming. During the first semester they learned a functional approach, and in the second semester they studied basic object-oriented concepts. The questionnaire was answered by 93 students; of these, 83 answers were included in the analysis. Four answers were excluded due to comments indicating that the students did not take the questionnaire seriously, and an additional six answers were excluded due to a misunderstanding of the questionnaire: a slight syntactical error prompted them to answer “compiler error”, although an announcement was made to ignore the syntax error and answer as if the code were correct.

IV. REFINING THE HIERARCHY

In this section we describe the refined OIH and the refined COIN instrument. Our aim in refining the OIH is to focus it on understanding object interaction conceptually. Polymorphism, which was seen as central in the first version, was therefore excluded, in order to focus more clearly on the basic concepts of the execution of an object-oriented program. In addition, we refined the associated test instrument, the COIN questionnaire (Comprehension of Object InteractioN).

A. The refined hierarchy

Our aim is to describe a taxonomy that reflects the development of expertise. This subsection describes the refined OIH based on the previous analysis of the old hierarchy, on the literature (see [16] for references), and on our expertise in teaching object-orientation. The hierarchy is – like the old one – intended to be a taxonomy. In the following, the word taxonomy includes this hierarchical feature.

1) Interaction with objects: The student can understand simple forms of interaction between (a couple of) objects, such as method calls and the creation of objects (including simple method calls to initialize an object, like adding objects to a container in the beginning). The student is aware that the results of method calls depend on the identity and state of the object(s) involved. The objects do not change state.

2) Interaction on object structures: The student is able to comprehend a sequence of interactions in which the history of changes has to be taken into account (interaction dynamically changes a structure of objects), including iteration through object structures. The structure is created and changed explicitly via creations, additions and deletions.

3) Interaction on dynamic object structures: The student knows the dynamic nature of object structures, understands the overall state of the structure, and is aware of reference semantics. The student takes into account that interaction on the structure or its elements can lead to side effects (e.g., implicit changes in the structure), for example when several elements in a structure refer to the same object, so that a change through one reference has effects on the others.

The test was administered in the middle of the lecture, instead of a usual short break. The students were asked to self-report the time taken. Unfortunately, only a few students did so. Consequently, we cannot use these data to check for a correlation between the time to complete the questionnaire and the result. However, we have done this previously and found no such correlation. Overall, it seems that roughly 10–20 minutes were needed to complete the questionnaire.

V. ANALYSIS

Based on the gathered data, we examine the instrument and check whether the levels are taxonomic. The questionnaire was given on paper, so the students were able to leave questions unanswered. In the following analysis, a missing answer was interpreted as a wrong answer. We ranked the questions by difficulty. Figure 1 shows the result for each of the questions. Of the students, 100% answered the easiest question (c1) correctly, but only 10% correctly answered the hardest (e3).

Fig. 1. Percentage of correct answers for the questions (blue/grey: level 1; light gray with frame: level 2; black: level 3)

Overall, the assumed difficulty of the levels is in agreement with the observed difficulty of the questions. However, there are some observations:

1) For level 1, questions e1 and e2 seem to be more difficult. Or maybe the first four questions used are too easy for this audience; they were almost all answered completely correctly.

2) For level 2, questions c3 and b2 (and probably d1a) seem somewhat too easy compared to the other questions.

3) For level 3, question c2 may be somewhat easier than the other questions.

Overall, we see for every level of the hierarchy some questions that seemingly deviate. This may be due to the problems mentioned in the introduction: these types of questions necessarily involve several concepts and language constructs. In this test, e.g., the use of a container class and iteration were included as additional concepts. (We will discuss this issue in the discussion section.)

While there are some issues with some questions, we nevertheless used all questions, including their theoretically defined categorization into one of three levels, to compute the “OIH level” of each test score. In a first step, for each level, the number of correctly answered questions was summed.

We used two different methods to compute the overall level a test is assigned to. It is the same procedure as defined in [16], where we used a threshold of roughly 70%. The so-called hierarchical level (HierLevel) was computed with individual criteria for each level:

• more than 4 out of 6 level 1 questions correct for level 1 (i.e., 5+ of 6)
• more than 5 out of 8 level 2 questions correct for level 2 (i.e., 6+ of 8)
• more than 2 out of 4 level 3 questions correct for level 3 (i.e., 3+ of 4)

The final level was the maximum level reached. It is therefore possible that a participant with only three correct questions in the whole test has a HierLevel of 3, if these three correct answers were for questions assigned to level 3.

The second method computes taxonomic levels. For a TaxLevel, a participant must fulfill the criteria for the current level plus the criteria for all lower levels. Thus a participant who does not meet the criteria for level 1 cannot reach level 3.
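A minimal sketch of the two scoring rules, assuming per-level counts of correct answers as input (the thresholds are the ones listed above; the class and method names are ours):

public class OihScoring {
    // minimum correct answers per level: 5+ of 6, 6+ of 8, 3+ of 4
    static final int[] MIN_CORRECT = { 5, 6, 3 };

    static boolean meets(int[] correct, int level) {
        return correct[level] >= MIN_CORRECT[level];
    }

    // HierLevel: the highest level whose own criterion is met
    static int hierLevel(int[] correct) {
        int result = 0;
        for (int level = 0; level < 3; level++) {
            if (meets(correct, level)) {
                result = level + 1;
            }
        }
        return result;
    }

    // TaxLevel: the criteria for the level and all lower levels must be met
    static int taxLevel(int[] correct) {
        int result = 0;
        for (int level = 0; level < 3; level++) {
            if (!meets(correct, level)) {
                break;
            }
            result = level + 1;
        }
        return result;
    }

    public static void main(String[] args) {
        int[] counts = { 0, 0, 3 }; // only three correct answers, all on level 3
        System.out.println(hierLevel(counts)); // 3
        System.out.println(taxLevel(counts));  // 0
    }
}

The example input reproduces the discrepancy discussed below: a HierLevel of 3 but a TaxLevel of 0.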

Table I shows how many participants were assigned to which level using either the hierarchical or the taxonomic method; as can be seen in the table, the numbers differ.

TABLE I
COMPARISON OF LEVELS AS HIERARCHICAL OR TAXONOMIC

Level   Taxonomy (no. of ans.)   Hierarchy (no. of ans.)
None    32                       26
1       30                       30
2       11                       15
3       10                       12

Indeed, one person has level 3 using HierLevel but only level 0 using TaxLevel (case no. 36). What does this mean? The idea of the hierarchy as a competence model was that persons on a certain level are indeed able to answer questions on all lower levels correctly (with a high probability). This does seem to be the case for the persons asked. However, it is not an entirely safe assumption: of the 83 participants, two would fall from an assigned level 3 to level 0, and another 4 participants would fall from level 2 to 0. For the other participants, both calculations result in the same level.

To further check the validity of the results, we checked the correlations between HierLevel, TaxLevel and the sum of correct answers, which are all higher than 0.73 (Spearman rank correlation). Overall, we conclude that the proposed hierarchy a) matches well the increasing complexity of questions and b) is a reliable instrument for measuring the degree of understanding of object interaction.

VI. DISCUSSION

In this section, we discuss some of the issues with the OIH and especially with the associated test instrument. Du Boulay [17], [18] describes five overlapping areas that a student must master in order to learn programming. This implies that it is impossible to check just one area without relying on knowledge of the others. In this test, we rely on the fact that a student can understand some programming constructs.

A. The hierarchy (OIH)

As there are always several dimensions, or more specifically, several different constructs involved, it is difficult to describe the hierarchy in a way such that every possible example can easily be interpreted as belonging to a specific level of the OIH. Consider for example a small but complex object-oriented program that features composite classes, objects that are given as method arguments, delegation of work to other objects, and mutual dependencies between objects that call each other’s methods. However, this is a purely functional object-oriented program: objects create and return new objects instead of changing themselves. Surely a program like this would require a very solid understanding of object interaction. But it seems hard to place it anywhere but on level one, since it does not feature state changes. This is why we have included the vague requirement “between (a couple of) objects” for distinguishing level 1 from the more complex levels. It may be possible (as is our current suggestion) to include “no state change” as an additional marker for level 1, or to allow more fine-grained levels, distinguishing 1a and 1b, probably using state changes as the differentiation. In the current version of the test instrument we have included a question (e1) with state change that we still interpreted as a level 1 question.
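As an illustration of this borderline case (our hypothetical example, not an instrument question): the following program delegates work, passes objects as arguments and returns new objects, yet mutates nothing, so the current OIH would place it on level 1.

public final class Vec {
    private final int x, y;

    public Vec(int x, int y) { this.x = x; this.y = y; }

    // each operation returns a fresh Vec; no object ever changes state
    public Vec plus(Vec other) { return new Vec(x + other.x, y + other.y); }
    public Vec scaled(int k)   { return new Vec(k * x, k * y); }

    public String toString()   { return "(" + x + "," + y + ")"; }

    public static void main(String[] args) {
        Vec a = new Vec(1, 2);
        Vec b = new Vec(3, 4);
        System.out.println(a.plus(b).scaled(2)); // (8,12); a and b untouched
    }
}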

One question is whether the OIH is what we have called taxonomic [16], where persons understanding level n also understand level n − 1. We tested this with a threshold of roughly 70%; in this version, at least 5 of 6 level 1 questions had to be answered correctly. If we were to accept 4 correct answers as sufficient, this would change some statistical tests; e.g., the taxonomic and the hierarchical levels would then be completely the same. Given the overall difficulties with the test, one should be cautious; but we think we have some indication that learners indeed progress in their understanding according to the theoretical model of the OIH.

B. The test instrument (COIN)

We aimed at minimizing influences from, e.g., domain knowledge and external concepts; still, it might be that this test, like those analyzed by Petersen et al. [7], includes quite a lot of concepts in one question that are not intended to be evaluated by it. One example is the use of ArrayLists, as they allow more complex object interaction to be described with a few lines of code.

Programs c and d include questions on all three levels; d includes the ArrayList construct, c does not. And indeed, questions c1, c2 and c3 are relatively easy questions for their level, but nevertheless the difficulty increases along with the proposed and theoretically assigned level of the OIH, as can be seen from figure 1. Similarly, the questions for d increase in difficulty according to the proposed level, but are for each level more difficult than the c questions. This might be due to the use of ArrayLists. On the other hand, question b1 (on level 1) also uses an ArrayList and belongs to the four easiest questions of the test. We conclude that for the participants in this test, the ArrayList construct was not an additional problem.

Using built-in classes like ArrayList can cause problems if the test is used with students who are not very familiar with the class. If the students cannot remember the interface of the class (e.g., the semantics of the set method), the answer will be counted as wrong even though the problem lies with aspects other than object interaction. In future versions of the test instrument, we will include a description of the ArrayList interface. One question used the List interface as the formal type of a variable. This was an error; we do not want to include knowledge about interfaces and types in the test.
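For instance, the set semantics that question b2 relies on is standard java.util.List behavior; a short reminder like the following (our sketch) could accompany the test:

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class SetReminder {
    public static void main(String[] args) {
        List<Integer> l = new ArrayList<Integer>(Arrays.asList(2, 4, 3));
        l.set(0, 5);             // replaces the element at index 0
        System.out.println(l);   // [5, 4, 3] -- the size is unchanged
        l.add(7);                // add appends at the end
        System.out.println(l);   // [5, 4, 3, 7]
    }
}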

Object interaction is one of several complexities the students must master. One example of this can be seen in questions “b1” and “e1”. In principle they are the same: an initialization of a list structure. However, question “b1” is very easy (94% answered it correctly), whereas only 61% answered question “e1” correctly (if we exclude the unanswered, it is 95% correct for b1 and 84% for e1). This illustrates that the “functional” complexity must be taken into consideration when analyzing the results of the test. Another problem with question “b1” could be that it is very easy to guess the correct answer just by looking at the text – here are some numbers and something called sum, so it probably sums up the numbers.

In summary, we conclude that the OIH is useful for predicting the “relative” difficulty of questions, but not their “absolute” difficulty.

One example where the concrete program in question did have some influence is questions d1–d4. In these questions, there are two outputs for each answer box. In the prologue to the test, we explicitly stated that the students should write all outputs on a single line divided by commas. However, as can be seen in table II, more students left the second output (named dxb) unanswered than the first output (named dxa).

TABLE II
NUMBER OF “NOT ANSWERED” OUTPUTS IN THE D QUESTIONS

Question   first output   second output
d1         23             31
d2         32             47
d3         40             51
d4         45             59

The test was – as previously described – administered in a lecture. For practical reasons, it was necessary to end the test when we anticipated that the students had finished. It might have been the case that some students did not finish the test within the time frame; e.g., question e1 (a level 1 question) had 22 students not answering it, whereas question b2 (a level 2 question) had only 5 blank answers. By counting “not answered” as wrong, this makes the later questions seem more difficult.

The test was written in Java. Other studies have shown that the layout of a program influences students’ understanding of the program [29], [30]. Java has a common style, and the questions adhere to it. Other programming languages might not have the same common style, and consequently it might be more difficult to compare answers from two or more institutions.

As we discussed in the introduction, there are several problems with the validity of questions for the assessment of programming knowledge. We therefore aimed at designing questions from a theoretical model, the object interaction hierarchy. Based on this model, we designed the questions used in the COIN instrument.

VII. FUTURE WORK

We will elaborate on the test instrument based on the problems discussed above. Secondly, we will evaluate the test instrument in more than one institution, as we did with the previous one, and include different orderings of the questions to exclude that effect. Furthermore, we will translate the test instrument to other programming languages in order to test the reliability of the proposed hierarchy.

VIII. CONCLUSION

We have analyzed the results of a previous hierarchy of object interaction. Based on that analysis, we have developed a new taxonomy for object interaction. We have furthermore developed a test instrument for the hierarchy and made a preliminary validation of the test instrument. The test instrument is valid and verifies that the proposed levels are taxonomic. However, there are still problems with the test instrument.

Many researchers have concluded that one of the most problematic concepts for students to understand is the execution of a program text – the notional machine. Sorva [31] has a good definition of what a notional machine is and is not. Mark Guzdial discusses notional machines and the teaching and learning of computer science on his blog [32]. He concludes: “To understand computing is to have a robust mental model of a notional machine”. The proposed hierarchy is NOT a notional machine, but it can serve as a guide for developing different layers of a notional machine that is useful when structuring the progression in teaching.

Teachers need to focus on the development of the students’ model of the object-oriented part of the notional machine, because an understanding of this notional machine is needed to understand interaction on the higher levels. When the alterations have a direct, visible counterpart in the program text, there are no problems; but when the changes happen indirectly, almost none of the students understand them. The programs used in the test were small (max 50 lines of code); it would be even more difficult in larger programs, where such information is dispersed.

ACKNOWLEDGMENT

We would like to thank the lecturer of the course as well as the students for their participation in this study. We would furthermore like to thank the reviewers for their fruitful and thought-provoking comments.

REFERENCES

[1] Simon, “Koli Calling comes of age: an analysis,” in Seventh Baltic Sea Conference on Computing Education Research (Koli Calling 2007), ser. CRPIT, R. Lister and Simon, Eds., vol. 88. Koli National Park, Finland: ACS, 2007, pp. 119–126.

[2] R. Lister, E. S. Adams, S. Fitzgerald, W. Fone, J. Hamer, M. Lindholm, R. McCartney, J. E. Moström, K. Sanders, O. Seppälä, B. Simon, and L. Thomas, “A multi-national study of reading and tracing skills in novice programmers,” in Working group reports from ITiCSE on Innovation and technology in computer science education, ser. ITiCSE-WGR ’04. New York, NY, USA: ACM, 2004, pp. 119–150. [Online]. Available: http://doi.acm.org/10.1145/1044550.1041673

[3] M. McCracken, V. Almstrum, D. Diaz, M. Guzdial, D. Hagan, Y. B.-D. Kolikant, C. Laxer, L. Thomas, I. Utting, and T. Wilusz, “A multi-national, multi-institutional study of assessment of programming skills of first-year CS students,” SIGCSE Bull., vol. 33, no. 4, pp. 125–180, Dec. 2001. [Online]. Available: http://doi.acm.org/10.1145/572139.572181

[4] J. Sheard, Simon, A. Carbone, D. Chinn, M.-J. Laakso, T. Clear, M. de Raadt, D. D’Souza, J. Harland, R. Lister, A. Philpott, and G. Warburton, “Exploring programming assessment instruments: a classification scheme for examination questions,” in Proceedings of the seventh international workshop on Computing education research, ser. ICER ’11. New York, NY, USA: ACM, 2011, pp. 33–38. [Online]. Available: http://doi.acm.org/10.1145/2016911.2016920

[5] A. Robins, “Learning edge momentum: a new account of outcomes in CS1,” Computer Science Education, vol. 20, no. 1, pp. 37–71, 2010. [Online]. Available: http://www.tandfonline.com/doi/abs/10.1080/08993401003612167

[6] R. Lister, “Computing education research: Geek genes and bimodal grades,” ACM Inroads, vol. 1, no. 3, pp. 16–17, Sep. 2011. [Online]. Available: http://doi.acm.org/10.1145/1835428.1835434

[7] A. Petersen, M. Craig, and D. Zingaro, “Reviewing CS1 exam question content,” in Proceedings of the 42nd ACM technical symposium on Computer science education, ser. SIGCSE ’11. New York, NY, USA: ACM, 2011, pp. 631–636. [Online]. Available: http://doi.acm.org/10.1145/1953163.1953340

[8] D. Zingaro, A. Petersen, and M. Craig, “Stepping up to integrative questions on CS1 exams,” in Proceedings of the 43rd ACM technical symposium on Computer Science Education, ser. SIGCSE ’12. New York, NY, USA: ACM, 2012, pp. 253–258. [Online]. Available: http://doi.acm.org/10.1145/2157136.2157215

[9] P. Hubwieser and A. Mühling, “What students (should) know about object oriented programming,” in Proceedings of the seventh international workshop on Computing education research, ser. ICER ’11. New York, NY, USA: ACM, 2011, pp. 77–84. [Online]. Available: http://doi.acm.org/10.1145/2016911.2016929

[10] M. Pedroni and B. Meyer, “Object-oriented modeling of object-oriented concepts,” in Proceedings of the 4th International Conference on Informatics in Secondary Schools – Evolution and Perspectives: Teaching Fundamentals Concepts of Informatics, ser. ISSEP ’10. Berlin, Heidelberg: Springer-Verlag, 2010, pp. 155–169. [Online]. Available: http://dx.doi.org/10.1007/978-3-642-11376-5_15

[11] K. Sanders, J. Boustedt, A. Eckerdal, R. McCartney, J. E. Moström, L. Thomas, and C. Zander, “Student understanding of object-oriented programming as expressed in concept maps,” in Proceedings of the 39th SIGCSE technical symposium on Computer science education, ser. SIGCSE ’08. New York, NY, USA: ACM, 2008, pp. 332–336. [Online]. Available: http://doi.acm.org/10.1145/1352135.1352251

[12] A. Zendler and C. Spannagel, “Empirical foundation of central concepts for computer science education,” J. Educ. Resour. Comput., vol. 8, no. 2, pp. 6:1–6:15, May 2008. [Online]. Available: http://doi.acm.org/10.1145/1362787.1362790

[13] A. Eckerdal, M.-J. Laakso, M. Lopez, and A. Sarkar, “Relationship between text and action conceptions of programming: a phenomenographic and quantitative perspective,” in Proceedings of the 16th annual joint conference on Innovation and technology in computer science education, ser. ITiCSE ’11. New York, NY, USA: ACM, 2011, pp. 33–37. [Online]. Available: http://doi.acm.org/10.1145/1999747.1999760

[14] A. Eckerdal and M. Thuné, “Novice Java programmers’ conceptions of ‘object’ and ‘class’, and variation theory,” in Proceedings of the 10th annual SIGCSE conference on Innovation and technology in computer science education, ser. ITiCSE ’05. New York, NY, USA: ACM, 2005, pp. 89–93. [Online]. Available: http://doi.acm.org/10.1145/1067445.1067473

[15] J. Sheard, S. Simon, M. Hamilton, and J. Lönnberg, “Analysis of research into the teaching and learning of programming,” in Proceedings of the fifth international workshop on Computing education research workshop, ser. ICER ’09. New York, NY, USA: ACM, 2009, pp. 93–104. [Online]. Available: http://doi.acm.org/10.1145/1584322.1584334

[16] J. Bennedsen and C. Schulte, “A competence model for object-interaction in introductory programming,” in Proceedings of the 18th annual Workshop of the Psychology of Programming Interest Group. Brighton, UK: PPIG, 2006, pp. 215–229.

[17] B. du Boulay, “Some difficulties of learning to program,” Journal of Educational Computing Research, vol. 2, no. 1, pp. 57–73, Jun. 1986.

[18] B. du Boulay, T. O’Shea, and J. Monk, “The black box inside the glass box: Presenting computing concepts to novices,” in Studying the novice programmer, E. Soloway and J. Spohrer, Eds. Lawrence Erlbaum, 1989.

[19] M. Guzdial, “Centralized mindset: A student problem with object-oriented programming,” SIGCSE Bull., vol. 27, no. 1, pp. 182–185, 1995.

[20] I. Milne and G. Rowe, “Difficulties in learning and teaching programming – views of students and tutors,” Education and Information Technologies, vol. 7, no. 1, pp. 55–66, Mar. 2002. [Online]. Available: http://dx.doi.org/10.1023/A:1015362608943

[21] N. Ragonis and M. Ben-Ari, “A long-term investigation of the comprehension of OOP concepts by novices,” Computer Science Education, vol. 15, no. 3, pp. 203–221, 2005.

[22] ——, “On understanding the statics and dynamics of object-oriented programs,” in Proceedings of the 36th SIGCSE technical symposium on Computer science education, ser. SIGCSE ’05. New York, NY, USA: ACM, 2005, pp. 226–230. [Online]. Available: http://doi.acm.org/10.1145/1047344.1047425

[23] J. Sorva, “Reflections on threshold concepts in computer programming and beyond,” in Proceedings of the 10th Koli Calling International Conference on Computing Education Research, ser. Koli Calling ’10. New York, NY, USA: ACM, 2010, pp. 21–30. [Online]. Available: http://doi.acm.org/10.1145/1930464.1930467

[24] J. Sajaniemi, M. Kuittinen, and T. Tikansalo, “A study of the development of students’ visualizations of program state during an elementary object-oriented programming course,” in Proceedings of the third international workshop on Computing education research, ser. ICER ’07. New York, NY, USA: ACM, 2007, pp. 1–16. [Online]. Available: http://doi.acm.org/10.1145/1288580.1288582

[25] K. Beck and W. Cunningham, “A laboratory for teaching object oriented thinking,” SIGPLAN Not., vol. 24, no. 10, pp. 1–6, Sep. 1989. [Online]. Available: http://doi.acm.org/10.1145/74878.74879

[26] C. Schulte and J. Bennedsen, “What do teachers teach in introductory programming?” in Proceedings of the second international workshop on Computing education research, ser. ICER ’06. New York, NY, USA: ACM, 2006, pp. 17–28. [Online]. Available: http://doi.acm.org/10.1145/1151588.1151593

[27] J. Bennedsen and C. Schulte, “BlueJ visual debugger for learning the execution of object-oriented programs?” ACM Transactions on Computing Education, vol. 10, no. 2, pp. 8:1–8:22, Jun. 2010. [Online]. Available: http://doi.acm.org/10.1145/1789934.1789938

[28] “Testing with success series – multiple choice tests,” http://www.studygs.net/tsttak3.htm, accessed: 02/11/2012.

[29] S. Wiedenbeck, “Novice/expert differences in programming skills,” International Journal of Man-Machine Studies, vol. 23, no. 4, pp. 383–390, 1985. [Online]. Available: http://www.sciencedirect.com/science/article/pii/S0020737385800419

[30] R. Green and H. Ledgard, “Coding guidelines: finding the art in the science,” Commun. ACM, vol. 54, no. 12, pp. 57–63, Dec. 2011. [Online]. Available: http://doi.acm.org/10.1145/2043174.2043191

[31] J. Sorva, “Visual Program Simulation in Introductory Programming Education,” Ph.D. dissertation, Aalto University, Finland, May 2012. [Online]. Available: http://lib.tkk.fi/Diss/2012/isbn9789526046266/isbn9789526046266.pdf

[32] M. Guzdial, “Defining: What does it mean to understand computing?” http://computinged.wordpress.com/2012/05/24/defining-what-does-it-mean-to-understand-computing/, accessed: 13/11/2012.