
A comparison of instructor and student verbal feedback in engineering capstone design

review meetings: Differences in topic and function

Abstract

We analyzed verbal feedback provided by the instructor and students in capstone design review meetings and categorized questions and comments by topic (design, project management, communication) and by function (comprehension, evaluation, recommendation). We also used a

question-type typology to further analyze questions posed within the function of comprehension.

Overall, comprehension questions were more likely to be on the topic of design, while evaluations

and recommendations fell under the topic of communication. A comparison of instructor and

student feedback revealed that, on average, the instructor provided not only more feedback than individual students, but also distributed it more evenly across the different topics and functions. The

amount of feedback provided by student teams positively correlated with their own design project

outcomes.

Background

In capstone design courses students typically receive formative and summative feedback in design

review meetings. More recent implementations integrate and encourage student peers to provide

feedback, in addition to the course instructor and industry client. A review of the literature on

feedback uncovers various characterizations of feedback.

Purpose/Hypothesis


The purpose of this investigation was threefold. First, we developed a new categorization of

feedback that characterizes it along both topic and function dimensions. Second, we used the new

framework to compare feedback provided by students and course instructors in design review

meetings. Third, we investigated how the feedback students provide correlates with their own

design outcomes.

Design/Method

We analyzed verbal feedback provided by the instructor and students in twelve design review

meetings of a management engineering capstone design course. Over 500 questions and comments

were categorized by topic (design, project management, communication) and by function (comprehension, evaluation, recommendation).

Results

Comprehension questions were more likely to be on the topic of design, while evaluations and

recommendations fell under the topic of communication. On average, the instructor provided not

only more feedback than individual students, but also distributed it more evenly across the different topics

and functions. The amount of feedback provided by student teams positively correlated with their

own design project outcomes.

Conclusions

The new typology gives insight into what the instructor and peers choose to communicate to

student design teams, as well as how they choose to communicate it. Findings can help instructors

promote better feedback-giving for themselves and students alike.


Introduction

Pedagogical research has long been concerned with how feedback can best promote student

learning. Typically, this is done through formative feedback, defined as “information

communicated to the learner that is intended to modify his or her thinking or behaviour for the

purpose of improving learning” (Shute, 2008, p. 154).

In the context of design education in general and capstone engineering design courses in

particular, formative feedback is regularly provided to students in design review meetings. These

are held at various points in the project progression, often coinciding with the completion of

major design milestones, and are attended by students, the course instructor, the project client,

and other stakeholders. Similarly, in the context of professional engineering practice, design

reviews serve an important function: communicating the state of the design to all project stakeholders, finding errors in the current design proposal, and mitigating design project

risks. A design review is effective if it raises issues and concerns, challenges assumptions, and

results in a number of follow-on items to address (Colwell, 2003).

Traditionally, design reviews in educational settings have been attended only by the students

directly involved in the design. Yet, the broader education literature has long advocated for the

use of peer feedback, which has been shown to improve students’ ability to give and receive

criticism (Sondergaard & Mulder, 2012), as well as increase collaborative learning in the

classroom (Willey & Gardner, 2010). Multiple studies comparing peer and teacher assessment

have unpacked the benefits of peer feedback (Falchikov & Goldfinch, 2000). Adding peer review

to instructor review increases the overall quantity of feedback received by students (Cho &

MacArthur, 2010), with the most benefit being derived when feedback is provided by multiple


peers (Cho & Schunn, 2007). Peer feedback has been found to be beneficial even in cultures that

emphasize the authority of the teacher (Yang, Badger, & Yu, 2006).

Multiple, varied implementations of peer feedback in engineering design courses (at the capstone level, as well as in junior and intermediate years) have recently been reported at various universities. The trend is in part influenced by a successful tradition of the design critique in architecture programs, where student peers, in addition to course instructors and expert professionals, are invited to critique design artifacts (Dinham, 1987; Bannerot & Patton, 2002; Oh, Ishizaki, Gross, & Do, 2013). Accordingly, many of the reported implementations incorporate peer feedback consciously and within a larger context of a studio model explicitly adapted to engineering (Kuhn, 2001; Knight & Horton, 2005). The quantity and type of peer review made possible in engineering design classes vary depending on the

implementation, with reported examples ranging from inter-team assessment of oral

presentations (McKenzie, Trevisan, Davis, & Beyerlein, 2004) to written reviews of design

documents (Nicol, Thomson, & Breslin, 2014; Pimmel, 2001) and artifacts (Garousi, 2010).

Peer review is also a common industry practice, sometimes embedded as a formal standard

operating procedure to satisfy quality certification, or to meet regulatory requirements. The

review and release of data and documentation usually includes a checking function, representing

the most basic and detailed level of peer review. Weekly internal management update meetings

offer an informal opportunity to provide peer review. Finally, design reviews occur with clients

and suppliers where required. These represent more formal and public reviews. In the cases

where a design non-conformance is found during manufacture or testing, the quality

management system will need to address this with a corrective action, and this often requires a

design review.


We have previously reported on an intensive face-to-face implementation of peer feedback in a

management engineering capstone design program [Reference hidden to ensure blind review]. In

biweekly design review meetings students critiqued various aspects of other teams’ design

projects, including oral presentations, documents, and artifacts in an informal and supportive

atmosphere. The format encouraged students to share ideas between groups, helped them

improve how they communicated their problem and solution, and potentially led to improved

design outcomes by providing multiple opportunities for receiving feedback and refining the

design. While students were asked to report on the helpfulness of feedback received from both

the course instructor and student peers, the results were not conclusive [Reference hidden to

ensure blind review].

In this study we (1) analyze actual feedback provided by students and course instructors in

design review meetings, (2) expose and classify characteristics of each along multiple

dimensions, and (3) compare and evaluate their benefits and suitability.

The rest of the paper is structured as follows. In Section 1, we seek to determine useful existing

typologies of feedback, looking both at differences in the focus of feedback given by engineering experts and novices, and at how feedback can be structured more generally. Then, in

Section 2, after introducing the general context of the study from which data is collected, we

describe how the textual data is coded and analyzed. Drawing heavily from prior work, we

describe a typology of feedback that accounts for both its topic and function, with numerous

examples of actual feedback comments provided by students and the instructor in our study. In

Section 3 we present the results of our analysis and compare instructor and student feedback

along multiple dimensions. We conclude this paper with a general discussion of the results,

situating them in the context of existing findings (Section 4).


1. Background

Feedback must be effective at all levels of student learning: cognitive, motivational, and

behavioural. In a review of the literature, Nicol and Macfarlane-Dick (2006) found that good

feedback promotes student reflection and self-learning by helping clarify learning goals and

teacher expectations, facilitating student self-assessment, delivering high-quality information to

students about their progress, encouraging teacher-student dialogue, improving student

motivation and self-esteem, bridging the gap between current and expected performance, and

ultimately improving teaching.

1.1 Typologies of feedback

Some models propose that feedback is composed of two main components: the evaluative (or

verification) part, which assesses the quality of the answer, and the informational (or

elaboration) part, which provides direction for progress (Narciss, 1999; Shute, 2008). The

evaluative component of feedback has sometimes been further broken down as either praise (i.e.,

encouraging comments) or criticism (i.e., pointing out weaknesses without suggesting an

improvement) (Cho, Schunn, & Charney, 2006). In contrast, summary-type feedback comments

(i.e., restating the main points of a portion or the whole work) usually lack an evaluative

component altogether (Cho, Schunn, & Charney, 2006). More informative feedback is found

to be related to better performance, and, depending on the student’s confidence in their own

abilities (or self-efficacy), better motivation (Narciss, 1999; Narciss, 2004).

There also exist other typologies of feedback that categorize feedback along various (sometimes

overlapping) dimensions (Nelson & Schunn, 2009). One of the more common ways to categorize

feedback is according to its specificity, though the classification of feedback along this


dimension is itself far from specific (Shute, 2008). In the domain of learning a second language, for

example, a basic distinction has been made between corrective feedback that is direct (i.e.,

telling students exactly where the problem is and how to fix it) versus indirect (i.e., pointing out that

there is an error without correcting it) (Ellis, 2009). Similarly, in the domain of English writing,

feedback has been classified as directive (i.e., specific suggestions for improvement) and non-

directive (i.e., non-specific suggestions for improvement that could apply to any paper) (Cho,

Schunn, & Charney, 2006). Generally, while direct corrective feedback can be an appealing

alternative when time is limited, indirect feedback can be better ‘customized’ to specific

students’ learning styles and improves student learning by allowing them to self-correct

(Yoshida, 2008). A dimension similar to specificity is the degree of helpfulness of the

feedback; for example, Yu and Wu (2013) described four levels of feedback helpfulness:

general, specific identification of strengths and weaknesses, the latter plus identification of areas

for improvement, and explicit suggestions for how improvements might be achieved.

Another identified dimension of feedback is length/complexity. While longer feedback can be

more informative, more complex feedback can be more difficult for the novice to understand;

however, it is not clear whether the effect of complex feedback is entirely negative (Shute,

2008). Studies have found that the length of feedback and the number of comments are larger for

experts than for student peers, with experts providing more directive and non-directive feedback

than student peers (Cho, Schunn, & Charney, 2006). Student peers provide more praise

comments, while experts provide the fewest summary comments (Cho, Schunn, & Charney,

2006). Non-directive feedback results in more complex repairs to students’ work, whereas

directive feedback results in mostly surface improvements (Cho & MacArthur, 2010).

1.2 Feedback in engineering and architecture


In architecture programs, the activity of critique (through crits, juries, reviews) is central to

design pedagogy (Oh, Ishizaki, Gross, & Do, 2013). Reviews are the primary oral venue in

which students present their work and receive feedback, and are considered important events in

students’ learning. As one moves from the context of providing feedback in the domain of

learning writing and second language skills to the context of the architecture and engineering

design reviews, new feedback typologies begin to emerge.

In the engineering educational domain there is recent interest in understanding and characterizing

feedback provided by instructors, peers, and other reviewers (Yilmaz & Daly, 2016; Krauss & Neeley, 2016; Ferreira, Christiaans, & Almendra, 2016; Tolbert, Buzzanell, Zoltowski,

Cummings, & Cardella, 2016; Groen, McNair, & Paretti, 2016; Cardella, Diefes-Dux, &

Marbouti, 2016). For example, an exploratory study examining instructor feedback across the

disciplines of dance choreography, industrial design and mechanical engineering proposed a

classification of feedback based on whether instructor feedback made students take convergent

or divergent paths in their design processes (Yilmaz & Daly, 2016). Feedback suggesting

convergent pathways was found to be more prominent than feedback suggesting divergent

pathways across all 3 disciplines. Possible explanations include the preference for students to

favor convergent feedback since divergent feedback promotes exploration, which takes time, and

has risks associated with it. The study highlights how feedback can prompt students to think in

these ways and how instructors can be more intentional and aware about the type of feedback

they provide. A similar study found that instructor feedback encouraging students to engage in

convergent thinking took different forms, and emphasized the importance of including both

forms of thinking in course designs (Daly & Yilmaz, 2015).


Another typology of feedback was constructed by Cardella, Diefes-Dux, and Marbouti (2016)

who analyzed written feedback of 19 educators and 120 first-year engineering students on a

common set of student project work. A coding scheme for the feedback was developed with two

domains: focus and substance of feedback. The focus domain included elements such as direct

recommendation, investigation or brainstorming, expression of confusion, provision of detail/example, positive assessment, and negative assessment. The substance of the feedback included elements such as communication, design concepts, and design ideas. The study found that

educators’ and students’ feedback differed in both focus and substance. Educators focused

on investigation/brainstorming comments and asked more thought-provoking questions. In

contrast, students provided direct recommendation comments related to specific instructions on

improving the design project. It is thought that educators try to motivate students to explore

before choosing a solution whereas students, as novice designers, become fixated on a specific

solution and continue to evolve that solution. Students also expressed confusion less often,

possibly because they lack confidence in admitting they do not understand. The authors suggest

that students may need to develop more design thinking expertise in order to demonstrate more

expertise in providing design feedback.

An important question emerges on how the feedback channel (e.g., oral versus written) affects its

content. While examining peer feedback processes for capstone design projects at Harvey Mudd

College, Krauss and Neeley (2016) hypothesized that structural and social barriers complicate and

degrade the feedback process. They found that students in the course may not be open with their

feedback during oral question and answer periods and that written feedback gathers more

information. They recommend that time in class be devoted to discussing feedback rather than

to the presentations themselves. Artifacts themselves affect the feedback provided by reviewers


(Groen, McNair, & Paretti, 2016). Well-developed artifacts elicited constructive feedback from

reviewers, whereas incomplete or inaccurate designs caused students to receive limited dialogue

from reviewers.

1.3 Design reviews and design review conversations

Design reviews are conducted to communicate the status of design evolution to project

stakeholders, and can take several legitimate and meaningful forms in professional engineering

practice. They provide important opportunities to compare the state of the design to the design

requirements, identify errors and non-conformances in the design that may prove to be very

costly if implemented, and to ensure safety and regulatory requirements are addressed adequately

(Liu, 2015).

Depending on the practice context, design reviews may be required as part of a binding contract,

or, as in the case of medical device development, conducted as part of the regulatory regime. A well-executed design can yield multiple benefits, while a poorly executed one can yield multiple liabilities; the design review therefore represents a crucial activity in mitigating this risk. A

recent study in the process industry suggested that about 20% to 50% of the studied incidents had

at least one root cause attributed to design, as opposed to workmanship (Taylor, 2007). The

number of design errors that exist in the design process is much higher, but 80% to 90% of them

are removed during design reviews. Design reviews are a control on the design process to ensure

that the design meets stakeholder (user(s)) needs, intended uses and applications, and specified

requirements (Tartal, 2017). These controls can take the form of soft controls, such as a casual

review with an individual designer/engineer, or hard controls, where all stakeholders are

represented in a formal, documented review. One comprehensive study of design reviews in an

aerospace company revealed the importance of information synchronization in the design review,


and characterized sharing information, evaluating the design, and managing the design as the key functions of the review process (Huet, Culley, McMahon, & Fortin, 2007).

Hallmarks of an effective review are that all potential non-conformances and risks have been

identified and a plan made to address them, and that the review was conducted in a positive and constructive tone (Colwell, 2003).

The primary purpose of design reviews in engineering education is learning. A secondary purpose is to evaluate the state of the design against the specified requirements and to find errors.

This view is not commonly held amongst all faculty, especially for capstone design courses. In

the discipline of design, learning from failure is a deeper learning result than learning from

success; this can be at odds with the traditional assessment systems and promotional

requirements of academic programs in capstone design. Experienced design faculty know that

design reviews in engineering education are opportunities to facilitate learning of design, and

therefore require a different approach in question-asking and dialogue. External examiners and

clients tend to short-circuit student learning during reviews by divulging the answer or approach

to students, rather than leading the students along to self-discovery (Cardoso, Eris, Badke-

Schaub, & Aurisicchio, 2014). It is therefore important to distinguish the primary purposes of

design reviews for professional practice and design reviews for engineering design education.

The desire to understand design, or design thinking, as a cognitive, affective and conative ability

has now become a serious topic of research (Johansson‐Sköldberg, Woodilla, & Çetinkaya, 2013; Jonassen, 2000). It is recognized that all humans have innate creative abilities, but it is the specific activity of design, and the exceptional ability of designers performing complex design tasks, that has attracted great attention (Cross, Christiaans, & Dorst, 1996). A landmark series of

Design Thinking Research Symposia (DTRS), initiated in the mid-1990s, offer a rich corpus of


insight into the features of design activity, and more recently (DTRS10) of design review

activity, the latter also published in a proceedings book (Adams & Siddiqui, 2015) and in special

issues of Design Studies (Adams, Cardella, & Purzer, 2016) and CoDesign (Adams, McMullen,

& Fosmire, 2016).

An important subset of the conversations that occur in design review meetings is questions; as such, the question-asking behaviour of instructors, peer students, and other stakeholders in design reviews has been studied extensively (Cardoso, Eris, Badke-Schaub, & Aurisicchio, 2014; Eris, 2004). According to one particular framework, which builds on significant prior work on question-asking (Graesser, Person, & Huber, 1992; Graesser & Person, 1994; Graesser, Lang, & Horgan, 1988), questions are classified into two categories: low-level and high-level

(Cardoso, Badke-Schaub, & Eris, 2016). Low-level questions are information-seeking in nature –

the aim is to get missing details, to establish a baseline understanding of the design problem

and/or progress. High-level questions, which “relate to higher-levels of reasoning” (Cardoso,

Badke-Schaub, & Eris, 2016, p. 62), are further sub-classified into Deep Reasoning Questions

(DRQ) and Generative Design Questions (GDQ). DRQs suggest convergent thinking, where the

reviewer seeks causal explanation of given facts, while GDQs suggest divergent thinking, where

the reviewer attempts to imagine possibilities. The type and timing of questions in the design

process can have significant effects on both the design process and outcomes. For example, high-

level questions during the idea generation phase can reduce the effect of design fixation

(Cardoso, Badke-Schaub, & Eris, 2016).

1.4 Purpose of study

The reviewed literature presents a wealth of prior work on categorizing and understanding

feedback from instructors, (expert) designers, clients, and student peers in both written and


verbal forms. This study builds on this prior work, using a new two-dimensional and multi-level

typology of feedback to characterize and compare verbal feedback provided to student design

teams in semi-formal design review meetings by their peers and course instructor. We seek to

answer the following research questions:

1. How can the formative assessment and feedback provided by students (novices) and instructors (experts) during design review meetings be characterized?

2. What are the differences between the feedback provided by students (novices) and instructors (experts)?

3. How do formative assessment and feedback correlate with design outcomes?

2. Methodology

2.1 Data collection

The study was conducted in a management engineering capstone design course at a large

engineering school in Canada. This course was the first in a two-course series that comprises the

capstone design project in this program. By the end of the 13-week course, students were

expected to form groups, select a project topic, research and gather information on the design

problem, identify the design requirements and specifications, produce at least three conceptual

designs and finally propose and describe a low-fidelity prototype of a chosen design for

implementation. The class of fifty-five students self-enrolled in fourteen project teams, with all

but one team having four members. Each team selected its project topic from a variety of sources, including industry clients and design problems students identified themselves.

Students attended formal lectures on engineering design in just six of the thirteen weeks in the

course. The rest of class time was structured around bi-weekly design review meetings. Each


team participated in three such meetings, of which the third one was chosen for analysis. They

were formatted as 80-minute meetings attended by two teams and the instructor. Teams took

turns presenting their progress to the audience (for 20 minutes), followed by a discussion period

in which each team was questioned and received feedback from both the instructor and the other

team in attendance (for another 20 minutes).

By the third meeting, students had become familiar with the format and were fairly comfortable

asking questions and providing feedback to their peers. While the reviews of all 14 projects were

intended to be video recorded and analyzed, due to technical issues, 2 of the recordings were not

of sufficiently good quality for transcribing and further analysis. The resulting 8 hours of video

recorded meetings were transcribed using Rev, a professional transcribing service.

2.2 Coding categories

Excerpts of interest were statements (in whatever form) uttered by instructors and peers that

concerned the design project under review. Of those, we excluded any statements not directly

related to the project, including comments related to meeting management and housekeeping

(e.g., “Should we get started?”), agreement (e.g., “Okay, I see”), and directions (e.g., “Can you

go back to that slide?”).

Each statement was assigned a source code – instructor or peer, referring to the origin of the

questions/statements made towards the students whose design project was being reviewed. The

coding was done at the ‘group’ level; in other words, we did not distinguish between the

individual team members conducting another team’s project review.

All feedback statements were categorized along two primary dimensions: topic and function.


We took a grounded theory methodological approach (Corbin & Strauss, 2008) to identify the

various topics of the conversations in the design review meetings. The analysis suggested that all

feedback statements could be categorized as belonging to one or more of the following three

topics:

● Design, including problem identification, problem formulation, concept generation,

preliminary and detailed design, verification and validation, design impact. E.g., “What

are the different [design] concepts that you considered?”

● Project management, including scheduling, deliverables, and stakeholder management.

E.g., “Have you been meeting with your [faculty] advisor?”

● Project communication. E.g., “You have some work to do on your presentation – just for

people to understand your design concept.”

This categorization is similar to the one provided by Cardella, Diefes-Dux, and Marbouti (2016),

whose analogous category (“substance”) comprised four sub-categories: communication, design

concepts, design ideas, and “no code”.

To categorize feedback by its function, we took the following view of design project evolution

during a capstone design course (Figure 1). At each design review meeting, different

stakeholders come with different understandings and knowledge of a particular project’s

progress. The project owners naturally know the actual project progress and current state; the

first half of the design review meeting allows them to present their progress-to-date to all those

present. On the other hand, the instructor and paired team have a perception of the progress a

project should have made, based on their individual and subjective expectations and desires

informed by their experiences with similar projects in similar (or dissimilar) contexts.


Figure 1 An approximate depiction of a typical design project's evolution

The questions and feedback provided by the audience can therefore be categorized as

accomplishing one (or more) of three functions: (1) accurately pinpointing the actual state of the project (labelled comprehension), (2) comparing that actual state to the expected/desired state (labelled evaluation), and (3) providing suggestions for reaching the expected/desired state (labelled recommendation).

In statements that are coded as performing a comprehension function the reviewer seeks to

clarify details and to expand their understanding beyond what is already presented. Within this

category, questions are further categorized according to their type – low-level, deep reasoning,

and generative design questions (and respective sub-categories) - a taxonomy that is based on

prior work on question-asking in design review meetings (Graesser & Person, 1994; Eris, 2004;

Cardoso, Eris, Badke-Schaub, & Aurisicchio, 2014). A list of all question types, illustrated with

examples, is provided in Appendix A.


Statements directed at the group under review that are evaluative in nature generally provide a

judgment about what is presented. Each evaluative statement is also assigned a “value” code,

ranging from 1 to 5: very negative (1), negative (2), neutral (3), positive (4), and very positive

(5). Judgment is closely related to an expected target; in other words, whether the judgment is

very negative (e.g., “As far as the presentation goes, there wasn’t really any progress to be

seen”), neutral (e.g., “That’s a really important aspect of your design”), or very positive (e.g.,

“These [concepts] are very good”), is an assessment of the distance between the current state of

the design (or its communication or management) and the state the reviewer believes it should be in. Note that both states are subject to the reviewer’s perception. First, the perceived

current status of the design is based on the reviewer’s understanding of the presented

information; questions are necessary to ascertain that the current state has been accurately

pinpointed. Second, the expected state of the design is also subject to the reviewer’s perception

of the type and difficulty of the design project, the team’s skill, the elapsed time in the project, as

well as the reviewer’s own design experience. The expressed evaluation can take both explicit

and implicit forms. When implicit, the evaluation can usually be extracted from the content of

the recommendation component of the feedback (see below). In other words, the content of the

recommendation provided also packs an implicit evaluation of what has been presented.

Finally, statements that are recommending in nature provide further elaboration/information

about what the team can do to achieve a desired state. In the case when the evaluation is

negative (i.e., the current state is perceived to be lower than the expected state), the

recommendation will provide steps for achieving an expected target performance (e.g., “Just

[analyze data] from one hospital at this point”). In the case when the evaluation is positive (i.e.,

the current state surpasses the expected state), the recommendation will either ‘raise the bar’ and


set a new performance target (or desired state) for the team (e.g. “I know it’s not in your scope,

but it would be cool to give the client a report that will help them with forecasting”), or simply

give the team an opportunity to re-scope the project so that additional effort (above expectation)

is not needed in future milestones (e.g. “It’s a great project idea, but you have to focus on what

you actually want to do”).

A summary of the proposed typology of all statements directed at a design team during a design

review meeting is shown in Table 1.

Table 1 Proposed typology of feedback statements

Dimension Category Sub-Category

1. Topic 1.1 Design

1.2 Project Management

1.3 Communication

2. Function 2.1 Comprehension 2.1.1 Low-level questions

2.1.2 Deep reasoning questions

2.1.3 Generative design questions

2.2 Evaluation

2.3 Recommendation

Excerpts can be monothetic or polythetic, meaning that they can be assigned to one or more

categories (Graesser, Person, & Huber, 1992). Although we tried – as much as possible – to

divide speech acts to maximize the number of monothetic excerpts, in some excerpts multiple


function and topic types overlapped. For example, the excerpt “That’s really cool, so I would make

it pop [in the presentation]” contains both a (positive) evaluation and a recommendation.

Similarly, the excerpt “You have to show what is interfacing with what, and where it is

happening” touches on both the topics of design and communication. This also explains why the number of feedback statements does not always coincide with the number of codes assigned to those statements, as the reader may observe throughout Section 3.
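
To make the two-dimensional, polythetic coding scheme concrete, the sketch below shows one possible in-memory representation of a coded excerpt. It is an illustrative sketch only, assuming the categories of Table 1 and the 1–5 value scale described above; the class and field names are ours, not the authors' actual coding instrument.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import Optional, Set

class Topic(Enum):
    DESIGN = "design"
    PROJECT_MANAGEMENT = "project management"
    COMMUNICATION = "communication"

class Function(Enum):
    COMPREHENSION = "comprehension"
    EVALUATION = "evaluation"
    RECOMMENDATION = "recommendation"

class QuestionType(Enum):  # applies only to comprehension excerpts
    LOW_LEVEL = "low-level"
    DEEP_REASONING = "deep reasoning"
    GENERATIVE_DESIGN = "generative design"

@dataclass
class CodedExcerpt:
    source: str                                             # "instructor" or "peer"
    text: str
    topics: Set[Topic] = field(default_factory=set)         # polythetic: one or more topics
    functions: Set[Function] = field(default_factory=set)   # polythetic: one or more functions
    question_type: Optional[QuestionType] = None            # set when COMPREHENSION applies
    value: Optional[int] = None                             # 1 (very negative) .. 5 (very positive)

# The example excerpt above, carrying two function codes and one topic code
# (the value of 4 is our illustrative guess, not a code reported in the paper)
excerpt = CodedExcerpt(
    source="peer",
    text="That's really cool, so I would make it pop [in the presentation]",
    topics={Topic.COMMUNICATION},
    functions={Function.EVALUATION, Function.RECOMMENDATION},
    value=4,
)
```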

2.3 Reliability

Once a general coding framework was agreed upon, all transcripts were first coded by the first

author. The second author coded “test” samples of the transcripts (approximately 20% of the

total number of excerpts each time). After each iteration, and based on discussions between the

two authors, the first author re-coded the entire set of transcripts, until inter-rater agreement of

more than 70% was achieved on a new test sample.
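
As a rough illustration of this agreement check, the sketch below computes simple percent agreement between two coders over a test sample. It assumes each excerpt is reduced to a single comparable label; the actual procedure iterated over polythetic codes, so this is a simplification, and the sample labels are hypothetical.

```python
def percent_agreement(codes_a, codes_b):
    """Fraction of excerpts to which both coders assigned the same code."""
    assert len(codes_a) == len(codes_b), "coders must rate the same excerpts"
    matches = sum(a == b for a, b in zip(codes_a, codes_b))
    return matches / len(codes_a)

# Hypothetical function codes assigned by the two authors to a test sample
author_1 = ["comprehension", "evaluation", "recommendation", "evaluation", "comprehension"]
author_2 = ["comprehension", "evaluation", "evaluation",     "evaluation", "comprehension"]

print(f"Inter-rater agreement: {percent_agreement(author_1, author_2):.0%}")
# -> 80%, which would clear the study's 70% threshold
```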

3. Results

A total of 553 ‘codable’ feedback statements were identified. Of those, 298 originated from

students (peers) and 255 from the instructor. As shown in Figure 2, the spread of feedback

directed at each team was not uniform in either total number of statements or the origin of those

statements from the instructor or peers. In particular, while on average each team received 46

questions or comments from peers and the instructor, between teams that number varied from 20

to 69. Further, while on average 53% of statements each team received originated from peers,

that number was as low as 18% for some teams and as high as 74% for others.

Further breaking down the number of comments originating from the instructor and each

individual student, it is found that, on average, in each design review meeting the instructor


provides a significantly larger number of questions/comments compared to the individual student

[M_Instructor = 21.25, SD_Instructor = 7.99; M_Student = 6.26, SD_Student = 2.66; t(11) = 6.39, p < .01].
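
The reported degrees of freedom, t(11), are consistent with a comparison paired across the twelve analysed reviews. A minimal sketch of such a test, assuming per-meeting counts for the instructor and for the average individual student (the numbers below are hypothetical, not the study's data):

```python
import numpy as np
from scipy import stats

# Hypothetical questions/comments per design review meeting (12 meetings)
instructor  = np.array([22, 18, 30, 15, 25, 28, 17, 24, 20, 27, 12, 19])
avg_student = np.array([ 7,  5,  9,  4,  8,  6,  5,  7,  6,  8,  3,  5])

# Paired t-test across meetings gives df = 12 - 1 = 11
t_stat, p_value = stats.ttest_rel(instructor, avg_student)
print(f"t({len(instructor) - 1}) = {t_stat:.2f}, p = {p_value:.4f}")
```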

Figure 2 Feedback statements directed at each project (arbitrarily labelled 1-12) by the instructor

and the paired peer teams.

3.1 Topic of feedback

The results of the analysis on the topic of feedback are shown in Figure 3. For both the

instructor and peers, the majority of comments/questions are on the topic of design. However,

while 89 (or 33%) of the 267 topic codes assigned to the instructor’s feedback are on project

management and communication, feedback from peers is less balanced, with just 21 (or 7%) of

the 300 codes assigned to peers’ feedback falling under those topics.


Figure 3 Distribution of instructor and peer feedback according to topic

The instructor places a greater emphasis on management and communication aspects of the

design project for two main reasons. First, she is mindful of the limited time and resources

available to the teams as well as the number of future critical course milestones that heavily rely

on the team’s ability to communicate their project to a wider audience. Second, project

management and communication are intended learning outcomes of the capstone design course

and thus need to be regularly assessed by the course instructor, both formatively and

summatively. In fact, of the instructor’s 32 feedback statements that were coded as belonging in

the “project management” topic, 13 were coded as performing an evaluative function. In many

cases the instructor expresses concern about the project’s progress or the ability of the team to

deliver according to schedule, as illustrated by the following comment:

“I guess I'm more worried about your schedule in terms of what [] you hope to achieve by the

end of this term”

Similarly, of the instructor’s 57 feedback statements in the topic of communication, 31 were

coded as evaluative. Often, the instructor makes an assessment of the effectiveness of the

presentation, identifying aspects that need improvement; for example:


“With regards to the presentation, your slides are a bit text heavy”

3.2 Function of feedback

The results of the analysis on the function of feedback are shown in Figure 4. While a similar

portion of both the instructor’s and peers’ feedback performs a recommending function, there is

a significant imbalance between the two groups in the amount of both comprehension and

evaluation feedback.

Figure 4: Distribution of instructor and peer feedback according to function

First, compared to the instructor’s, a larger portion of the peers’ feedback falls in the

comprehension category. Second, 33% of the function codes assigned to the instructor’s

feedback are evaluative, proportionally more than twice that of the peers’ feedback. Likely the main reason why students are hesitant to provide evaluative feedback is concern about

inadvertently affecting their peers’ grade in the course, especially since they would be providing

their evaluations in the presence of the instructor. Another possible reason could be their self-

perceived lack of authority in evaluating others’ projects. In fact, the difference between peers

and the instructor is apparent not only in the number of evaluation statements, but also in their

content: peers’ evaluative feedback is more positive than the instructor’s feedback [M_Instructor = 2.50, SD_Instructor = 1.06; M_Peers = 3.08, SD_Peers = 1.19; t(142) = 2.88, p < .01]. The difference in the


number of evaluative feedback statements can also be attributed to the differences in feedback

topic, as noted earlier. The instructor provides more feedback in the topic of communication and

project management – feedback falling under these topics often performs an evaluative function.

Finally, one can speculate that without having read their peers’ prior deliverables and having to

completely rely on the presentation given in the design review meeting, students need to dedicate

a larger portion of their feedback to questioning, leaving little room for evaluative feedback.

3.3 Relationship between feedback topic and function

There are strong correlations between the topics and functions of feedback. Table 2 summarizes

the correlation coefficients and respective significance levels for each combination of topic and function, taking into consideration all feedback. (When taken separately, peer and instructor feedback follow very

similar patterns.) The emerging picture suggests that while questions (i.e., comprehension

feedback) are more likely to be on the topic of design, evaluations and recommendations are

more likely to be on the topic of communication.

Table 2 Pearson's correlation (r) analysis between the function and topic of feedback (df = 552)

Function \ Topic     Design    Communication    Project Management
Comprehension          .26*        -.33*               -.04
Evaluation            -.23*         .32*                .06
Recommendation        -.10*         .15*               -.03

*Statistically significant at .05 level
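
Because each topic and each function is a present/absent code on a statement, the coefficients in Table 2 can be read as associations between binary indicators (Pearson's r on two 0/1 variables is the phi coefficient). A minimal sketch of that computation, assuming one row per coded statement with indicator columns; the toy data below is randomly generated, not the study's coding:

```python
import numpy as np
from scipy import stats

# Toy indicator vectors: one entry per feedback statement,
# 1 = the code applies to that statement, 0 = it does not
rng = np.random.default_rng(seed=0)
is_design        = rng.integers(0, 2, size=553)
is_comprehension = rng.integers(0, 2, size=553)

# Pearson's r between the two binary indicators (equivalent to phi)
r, p = stats.pearsonr(is_design, is_comprehension)
print(f"design x comprehension: r = {r:.2f}, p = {p:.3f}")
```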

No strong correlations emerge between the topic of project management and the various

functions. As previously noted, a large majority of comments in the topic of project

management come from the instructor (32 of 39 total comments). The instructor’s feedback in


this topic spans all functions; for example, she might ask questions about the schedule (e.g., “Is

there any contingency?”), assess the team’s progress (e.g., “…There wasn’t really any progress

to be seen.”), and make recommendations for project scope adjustment (e.g., “I’d much rather

you [reprioritize] your work according to the time left, rather than trying continuing to align

yourselves with the initial needs statement”).

3.4 Type of questions

We further analyzed comprehension feedback and broke down the questions asked by peers and

the instructor by type: low-level, deep reasoning, and generative. The results are summarized in

Figure 5. While both groups dedicate a similar portion to deep reasoning questions, there is a

larger difference between the two in the other categories. In particular, compared to the

instructor, a larger portion of the peers’ questions fall in the generative category, while a larger

portion of the instructor’s questions are in the low-level category.

Figure 5 Distribution of instructor and peer feedback in the comprehension function according to type

Within the low-level questions category, the biggest differences between the instructor and the

students are observed in the portion of questions that are of the concept completion type (e.g.,

“Who is the typical user?”, “What are the inputs?”, “What are the next steps?”). One possible


explanation for why the instructor places a great emphasis on low-level questions in general and

concept completion questions in particular is that she needs to grasp the student team’s design

project progress very well in order to properly assess it. On the other hand, students – who are

not tasked with evaluating their peers – can instead focus more of their questions on future steps

by using GDQs (which made up 23% of all their questions). For students, GDQs also play the

role of recommendations; particular future steps are suggested, but when stated in the form of

GDQs they do not carry the same weight (and attached accountability on the part of the team on

the receiving end) as the instructor’s recommendations. The fact that students ask relatively

fewer low-level questions may also be indicative of their developing skills in giving effective

feedback.

3.5 Feedback and design outcomes

Feedback provided by peers and the instructor – along the topic, function, and question-type dimensions – was

compared to student design outcomes. The measure that was used to approximate design

outcomes was the average grade of all design deliverables throughout the two-course design

project sequence. Although we found a number of notable correlations, due to the small sample

size, few were statistically significant.

One notable and significant relationship was found between students’ design outcomes and the

amount of feedback they directed to their peers: The number of questions and comments that

students directed at a project (i.e., total feedback) was positively correlated with their own design

outcomes, r(10) = 0.52, p < .05. In other words, stronger teams provided more feedback and

weaker teams provided less feedback.

4. Discussion


We have described a new comprehensive taxonomy that builds on existing work (Cardella,

Diefes-Dux, & Marbouti, 2016; Eris, 2004; Graesser & Person, 1994) and characterizes feedback in design review meetings along two dimensions:

topic and function. The analysis of authentic feedback provided to capstone design teams by both

their student peers and the course instructor in design review meetings revealed important

differences between instructor and student peer feedback.

4.1 Major findings

The overall analysis of instructor and peer feedback demonstrated that the instructor provides

more feedback (over three times as many comments/questions) than the average student, a result

that is in line with prior findings. For example, in their analysis of written feedback on the same

sample student work, Cardella, Diefes-Dux and Marbouti (2016) found that the instructors

provided more feedback (both in number and in length) than students. Similarly, in their analysis

of design review meetings, Cardoso et al. (2014) found that instructors and clients asked more

questions (per unit of time) than student peers.

In our study, we also found that the better the design outcomes of a team, the more feedback they

provided to their peers. Taken together, the two findings – that the instructor and stronger teams

provide more feedback – suggest that the ability and confidence to critique a design increase with prior design experience.

An interesting finding that emerged from the correlation analysis of feedback along the topic and

function dimensions is that feedback in the topic of design is significantly more likely to have the

function of comprehension. In other words, questions directed at a student team, from both the

instructor and peers, are more likely to concern (technical) details of the design. In contrast,


evaluative and recommending feedback is more likely to concern the communication (i.e.,

presentation) of the design. No significant relationships were found between feedback in the

topic of project management and the various functions, indicating that comments on that topic

are a balanced mix of questions, assessments, and suggestions.

These correlations are also important in explaining the differences in feedback between

instructors and peers. We found that the instructor provides more evaluative and recommending

feedback than the peers. In addition, as also predicted by prior work (Cardella, Diefes-Dux, &

Marbouti, 2016; Cho, Schunn, & Charney, 2006), the students’ evaluative feedback was more

positive than the instructor’s. Part of the reason for the difference is that, being more aware than

students of the importance of effective communication of the design project and being obliged to

evaluate all components of the design, including communication and project management, the

instructor places greater emphasis on assessing all aspects of the project and providing

suggestions for improvement.

On the other hand, students focus more on asking questions (comprehension feedback) of their

peers; in their case “recommendations” are made in the form of generative design questions. It

appears that proposing changes or new directions for the design project in the form of a question is

easier for students than giving outright recommendations.

Our finding that students ask more questions than the instructor (in proportion to other types of

feedback) may seem to contradict earlier findings by Cardella et al. (2016), who reported that

students expressed confusion less often than instructors, possibly because they lacked confidence

in admitting they did not understand. The inconsistency can be attributed to the significantly

different settings in which the two studies were run. In our case, feedback is provided in an

informal, conversation-based design review, in which students – all in their senior year - know


each other fairly well. Students expect their questions to be answered during the meeting. In

Cardella et al. (2016), feedback is written and the reviewers – first-year students – do not know the

author of the sample work. As such, it makes sense that in that setting, students would provide

fewer questions than instructors in their feedback.

The literature suggests that expert designers spend more time in the information gathering and

problem definition phases compared to novices (Atman, Chimka, Bursic, & Nachtmann, 1999;

Atman, et al., 2007). It would seem that our results here - that a higher portion of the students’

feedback (compared to the instructor) was spent on asking questions (comprehension) –

contradict prior findings. This is not the case, however: in absolute terms, the instructor provided

more feedback (including in the function of comprehension) than the average student. In

addition, the instructor had more opportunities to ask comprehension questions in prior design review meetings, whereas, for students, the design review meeting under study was their first opportunity to learn about and provide feedback on their peers’ projects. Moreover, a good portion of the students’ questions are of the GDQ type; rather than helping the presenting team converge

on important problem details, their aim is to expand the solution space, implicitly serving the

function of (weak) recommendations.

4.2 Limitations of the study and future research

Krauss and Neeley (2016) hypothesized that structural and social barriers complicate and degrade

the feedback process. They found that students in the course may not be open with their feedback

during oral question and answer periods and that written feedback gathers more information. In

the case of the mixed-review format described in this paper, the instructor is not only reacting to

the presented material but also to the feedback provided by the students in the meeting. Thus, it

is plausible that the instructor’s feedback in this context may differ – in quantity, content, and


type – from the feedback they would have provided if the meeting were in the instructor-only

format. The same could be said for the students’ feedback. The feedback provided by the

instructor – and, more importantly, the very presence of the instructor in the design review –

likely affects the feedback that students provide to their peers. A more rigorous comparison of

feedback between instructors and peers would occur in a setting where design reviews are only

attended by either student peer reviewers or the instructor.

Nevertheless, the characterization of student and instructor feedback in design reviews, as

described in this paper, is valuable. In teaching practice, design reviews would rarely occur

without the presence of or other involvement from the instructor; student peers’ feedback is

always, to some extent, conditioned by the physical presence of the instructor and/or their

perceptions of the instructor’s expectations.

Another gap in our current understanding of instructor and peer feedback is its value, both

objectively and subjectively (as perceived by the students). While we found that stronger teams

provided more feedback, we cannot be sure that stronger teams also provide better feedback. In

particular, we do not know if students assign more value to certain feedback topics or functions

over others. Future studies are needed to characterize feedback and describe the sensitivity of

“value” according to a number of factors, including the feedback’s source and timing in the

design process. A better understanding of what makes for good questions/comments in a design

review meeting would help instructors improve the efficiency and effectiveness of these

meetings and better train students in asking good questions and providing good feedback in their

future careers as engineering professionals.

Conclusion


We have described a systematic characterization and comparison of student and instructor verbal

feedback in design review meetings of an engineering capstone design course.

A new two-dimensional typology was developed and utilized that captured both the topic of the

feedback statements (design, design communication, and project management) and the

function being performed by the feedback (to comprehend, evaluate, or recommend). The new

typology gives insight into what the instructor and peers choose to communicate to student

design teams, as well as how they choose to communicate it. This augmented understanding of

how peer and instructor feedback differ can help instructors promote better feedback-giving not

only in students but also in themselves, by encouraging balanced feedback that addresses all

components of the design project and performs a variety of functions.

Funding

This work was supported by an institutional learning innovation grant.

References

Adams, R. S., & Siddiqui, J. A. (Eds.). (2015). Analyzing Design Review Conversations. West

Lafayette, Indiana: Purdue University Press.

Adams, R. S., Cardella, M., & Purzer, Ş. (2016). Analyzing Design Review Conversations:

Connecting Design Knowing, Being and Coaching. Design Studies, 45, Part A, 1-8.

Adams, R. S., McMullen, S., & Fosmire, M. (2016). Co-Designing Review Conversations – Visual

and Material Dimensions, Editorial. CoDesign, 12(1-2), 1-5.

Atman, C. J., Adams, R. S., Cardella, M. E., Turns, J., Mosborg, S., & Saleem, J. (2007).

Engineering design processes: A comparison of students and expert practitioners. Journal

of Engineering Education, 96(4), 359-379.

Atman, C. J., Chimka, J. R., Bursic, K. M., & Nachtmann, H. L. (1999). A comparison of freshman

and senior engineering design processes. Design Studies, 20(2), 131-152.


Bannerot, R., & Patton, A. (2002). Studio Design Experiences. Proceedings of the 2002 ASEE

Gulf-Southwest Annual Conference. The University of Louisiana at Lafayette: American

Society for Engineering Education.

Cardella, M. E., Diefes-Dux, H. A., & Marbouti, F. (2016). Written Feedback on Design: A

Comparison of Students and Educators. International Journal of Engineering Education,

32(3B), 1481-1491.

Cardoso, C., Badke-Schaub, P., & Eris, O. (2016). Inflection Moments in Design Discourse: How

Questions Drive Problem Framing During Idea Generation. Design Studies, 46, 59-78.

Cardoso, C., Eris, O., Badke-Schaub, P., & Aurisicchio, M. (2014). Question Asking in Design

Reviews: How Does Inquiry Facilitate the Learning Interaction. DTRS 10: Design Thinking

Research Symposium. Purdue University.

Cho, K., & MacArthur, C. (2010). Student Revision with Peer and Expert Reviewing. Learning

and Instruction, 20, 328-338.

Cho, K., & Schunn, C. D. (2007). Scaffolded Writing and Rewriting in the Discipline: A Web-

Based Reciprocal Peer Review System. Computers and Education, 48(3), 409-426.

Cho, K., Schunn, C. D., & Charney, D. (2006). Commenting on writing: Typology and perceived

helpfulness of comments from novice peer reviewers and subject matter experts. Written

Communication, 23(3), 260-294.

Colwell, B. (2003). Design Reviews. Computer, 36(10), 8-10.

Corbin, J., & Strauss, A. (2008). Basics of Qualitative Research: Techniques and Procedures for

Developing Grounded Theory (3rd ed.). Sage Publications.

Cross, N., Christiaans, H., & Dorst, K. (Eds.). (1996). Analysing Design Activity. John Wiley and

Sons.

Daly, S. R., & Yilmaz, S. (2015). Directing Convergent and Divergent Activity Through Design

Feedback. In R. S. Adams, & J. A. Siddiqui (Eds.), Analyzing Design Review

Conversations (pp. 413-429). Purdue University Press.

Dinham, S. M. (1987). Research on Instruction in the Architecture Studio: Theoretical

Conceptualizations, Research Problems, and Examples. Presented at the Annual Meeting

of the Mid-America College Art Association.

Ellis, R. (2009). A typology of written corrective feedback. ELT Journal, 63(2), 97-107.

Eris, O. (2004). Effective Inquiry for Innovative Engineering Design. Kluwer Academic

Publishers.


Falchikov, N., & Goldfinch, J. (2000). Student peer assessment in higher education: A meta-

analysis comparing peer and teacher marks. Review of Educational Research, 70(3), 287-

322.

Ferreira, J., Christiaans, H., & Almendra, R. (2016). A Visual Tool for Analysing Teacher and

Student Interactions in a Design Studio Setting. CoDesign, 12(1-2), 112-131.

Garousi, V. (2010). Applying Peer Reviews in Software Engineering Education: An Experiment

and Lessons Learned. IEEE Transactions on Education, 53(2), 182-193.

Graesser, A. C., & Person, N. K. (1994). Question Asking During Tutoring. American Educational

Research Journal, 31(1), 104-137.

Graesser, A. C., Lang, K., & Horgan, D. (1988). A Taxonomy for Question Generation.

Questioning Exchange, 2(1), 3-15.

Graesser, A. C., Person, N., & Huber, J. (1992). Mechanisms that Generate Questions. In T. E.

Lauer, E. Peacock, & A. C. Graesser (Eds.), Questions and Information Systems (pp. 167-

187). Hillsdale, NJ: Erlbaum.

Groen, C., McNair, L. D., & Paretti, M. C. (2016). Prototypes and the Politics of the Artifact:

Visual Explorations of Design Interactions in Teaching Spaces. CoDesign, 12(1-2), 39-54.

Huet, G., Culley, S. J., McMahon, C. A., & Fortin, C. (2007). Making Sense of Engineering Design

Review Activities. Artificial Intelligence for Engineering Design, Analysis and

Manufacturing, 21(3), 243-266.

Johansson‐Sköldberg, U., Woodilla, J., & Çetinkaya, M. (2013). Design Thinking: Past, Present

and Possible Futures. Creativity and Innovation Management, 22(2), 121-146.

Jonassen, D. H. (2000). Toward a design theory of problem solving. Educational Technology

Research and Development, 48(4), 63-85.

Kan, J. W., & Gero, J. S. (2009). Using the FBS Ontology to Capture Semantic Design Information

in Design Protocol Studies. In J. McDonnell, & P. Lloyd (Eds.), About: Designing

Analysing Design Meetings (pp. 213-229). CRC Press.

Knight, J. C., & Horton, T. B. (2005). Evaluating a Software Engineering Project Course Model

Based on Studio Presentations. 35th ASEE/IEEE Frontiers in Education Conference.

Indianapolis, IN.

Krauss, G. G., & Neeley, L. (2016). Peer Review Feedback in an Introductory Design Course:

Increasing Student Comments and Questions through the use of Written Feedback.

International Journal of Engineering Education, 32(3B), 1445-1457.

Kuhn, S. (2001). Learning from the Architecture Studio: Implications for Project-Based

Pedagogy. International Journal of Engineering Education, 17(4 & 5), 349-352.


Liu, S. (2015). Design Controls. FDA Small Business Regulatory Education for Industry (REdI).

Silver Spring, MD.

McDonnell, J., & Lloyd, P. (Eds.). (2009). About: Designing Analysing Design Meetings. CRC

Press.

McKenzie, L. J., Trevisan, M. S., Davis, D. C., & Beyerlein, S. W. (2004). Capstone Design

Courses and Assessment: A National Study. Proceedings of the 2004 American Society of

Engineering Education Annual Conference & Exposition.

Narciss, S. (1999). Motivational effects of the informativeness of feedback. Paper presented at the

Annual Meeting of the American Educational Research Association. Montreal, Canada.

Narciss, S. (2004). The impact of informative tutoring feedback and self-efficacy on motivation

and achievement in concept learning. Experimental Psychology, 51(3), 214-228.

Nelson, M. M., & Schunn, C. D. (2009). The nature of feedback: how different types of peer

feedback affect writing performance. Instructional Science, 37, 375-401.

Nicol, D. J., & Macfarlane-Dick, D. (2006). Formative assessment and self-regulated learning: a

model and seven principles of good feedback practice. Studies in Higher Education, 31(2),

199-218.

Nicol, D., Thomson, A., & Breslin, C. (2014). Rethinking Feedback Practices in Higher Education:

A Peer Review Perspective. Assessment & Evaluation in Higher Education, 39(1), 102-

122.

Oh, Y., Ishizaki, S., Gross, M. D., & Do, E. Y.-L. (2013). A Theoretical Framework of Design

Critiquing in Architecture Studios. Design Studies, 34, 302-325.

Pimmel, R. (2001). Cooperative Learning Instructional Activities in a Capstone Design Course.

Journal of Engineering Education, 90(3), 413-421.

Shute, V. J. (2008). Focus on formative feedback. Review of Educational Research, 78(1), 153-

189.

Sondergaard, H., & Mulder, R. A. (2012). Collaborative Learning through Formative Peer Review:

Pedagogy, Programs, and Potential. Computer Science Education, 22(4), 343-367.

Tartal, J. (2017, December 17). Design Controls. Retrieved from U.S. Food and Drug

Administration: https://www.fda.gov/downloads/Training/CDRHLearn/UCM529614.pdf

Taylor, R. J. (2007). Statistics of Design Error in the Process Industries. Safety Science, 45(1), 61-

73.


Tolbert, D., Buzzanell, P. M., Zoltowski, C. B., Cummings, A., & Cardella, M. E. (2016). Giving

and Responding to Feedback Through Visualisations in Design Critiques. CoDesign, 12(1-

2), 26-38.

Willey, K., & Gardner, A. (2010). Investigating the Capacity of Self and Peer Assessment

Activities to Engage Students and Promote Learning. European Journal of Engineering

Education, 35(4), 429-443.

Yang, M., Badger, R., & Yu, Z. (2006). A comparative study of peer and teacher feedback in a

Chinese EFL writing class. Journal of Second Language Writing, 15(3), 179-200.

Yilmaz, S., & Daly, S. R. (2016). Feedback in Concept Development: Comparing Design

Disciplines. Design Studies, 45, Part A, 137-158.

Yoshida, R. (2008). Teachers' choice and learners' preference of corrective feedback types.

Language Awareness, 17(1), 78-93.

Yu, F.-Y., & Wu, C.-P. (2013). Predictive effects of online peer feedback types on performance

quality. Educational Technology & Society, 16(1), 332-341.

Appendix

Categorization of feedback (i.e., questions) performing the comprehending function, according to

their type.

Type: Description (Example)

Low-level questions
Verification: Is X true? (Do they have TVs in the depots?)
Definition: What does X mean? (What do you mean [by] best performance?)
Example: What is an example of X? (What would be three top use cases you can envision right now?)
Feature specification: What (qualitative) attributes does X have? (What kinds are they?)
Concept completion: Who? What? When? Where? (What is the deliverable for preliminary design? Where is the processing happening?)
Quantification: How much? How many? (How many interact with your web app?)
Disjunctive: Is X or Y the case? (Is it mobile or web from here?)


Comparison: How does X compare to Y? (Tell me the difference between the data that comes from supply chain and the data that comes from sales)
Judgmental: What is your opinion on X? (Do you think [in general] each hospital will use it and will use it for its own data?)

Deep reasoning questions (DRQs)
Interpretation: How is a particular event or pattern of information interpreted or summarized? (What do you think the need is based on where they’re going?)
Goal Orientation: What are the motives behind an agent’s action? (As far as the client, do you know what their main goal is?)
Causal Antecedent: What caused X to occur? (What keeps the costs up?)
Causal Consequent: What were the consequences of X occurring? (N/A)
Expectational: Why is X not true? (Why does it not store historical data?)
Instrumental/Procedural: How does an agent accomplish a goal? (How did you decide that these are the 3 tabs?)
Enablement: What object or resource enables an agent to perform an action? (Are there specific people that transport components? Is there a forklift driver and that's his job?)

Generative design questions (GDQs)
Proposal/Negotiation: Could a new concept be suggested/negotiated? (…Do you think they'll want to see some analytics?)
Scenario Creation: What would happen if X occurred? (How do you think having 10 hospitals would affect the models?)
Ideation: Generation of ideas without a deliberate end goal (How safe am I?)
Method Generation: How could an agent accomplish a goal? (How would you measure that?)
Enablement: What object or resource could enable an agent to perform an action? (…What system are you going to use to create the UI?)
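
For readers who wish to apply this question typology when coding their own transcripts, the categories above can be expressed as a simple lookup table, as in the sketch below. The dictionary and helper function are hypothetical illustrations, not part of the original coding procedure, and they assume that each question has already been hand-labeled with one of the type names listed in this appendix.

# Question-type labels from the appendix, grouped by broad category.
QUESTION_TYPES = {
    "low-level": [
        "verification", "definition", "example", "feature specification",
        "concept completion", "quantification", "disjunctive",
        "comparison", "judgmental",
    ],
    "deep reasoning (DRQ)": [
        "interpretation", "goal orientation", "causal antecedent",
        "causal consequent", "expectational", "instrumental/procedural",
        "enablement",
    ],
    "generative design (GDQ)": [
        "proposal/negotiation", "scenario creation", "ideation",
        "method generation", "enablement",
    ],
}

def question_category(question_type: str) -> str:
    # Return the broad category for a hand-assigned type label.
    # Note: "enablement" appears under both DRQs and GDQs, so this naive
    # lookup returns the DRQ category for that label.
    label = question_type.strip().lower()
    for category, types in QUESTION_TYPES.items():
        if label in types:
            return category
    raise ValueError(f"Unknown question type: {question_type}")

print(question_category("Scenario Creation"))  # -> generative design (GDQ)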