

Action Design Research and Visualization Design

Nina McCurdy

University of Utah

[email protected]

Jason Dykes

City University London

[email protected]

Miriah Meyer

University of Utah

[email protected]

ABSTRACT
In applied visualization research, artifacts are shaped by a series of small design decisions, many of which are evaluated quickly and informally via methods that often go unreported and unverified. Such design decisions are influenced not only by visualization theory, but also by the people and context of the research. While existing applied visualization models support a level of reliability throughout the design process, they fail to explicitly address the influence of the research context in shaping the resulting design artifacts. In this work we look to action design research (ADR) for insight into this gap. In particular, ADR offers a framework along with a set of guiding principles for navigating and capitalizing on the disruptive, subjective, human-centered nature of applied design research, while aiming to ensure reliability of the process and design. We explore the utility of ADR in increasing reliability of applied visualization design research by: describing ADR in the language and constructs developed within the visualization community; comparing ADR to existing visualization methodologies; and analyzing a recent design study retrospectively through the lens of ADR’s framework and principles.

Keywords
Design study; applied visualization design research; action design research; concurrent evaluation

1. INTRODUCTION
Throughout the visualization design process artifacts are shaped by many design decisions, only some of which are formally validated. Many of these decisions are instead evaluated through quick, informal, and light-weight mechanisms, most of which are not reported or verified. Furthermore, in collaborative settings these design decisions are influenced not just by visualization theory and guidelines, but also by the people and context in which artifacts are designed.

Documented adherence to applied visualization process [24, 16] and decision [19, 21] models affords a level of reliability for the resulting artifacts. These models stress reliability through the grounding of design decisions in established visualization principles and through the validation of artifacts within the application domain. These models do not, however, explicitly address the more subjective shaping of artifacts by the people and context involved in the project, a shaping that can at times ignore or even go against established visualization conventions. Nor do they explicate the role that deliberate disruption on the part of the visualization designer plays in the shaping of artifacts. Taken together, these gaps reveal important influencing factors within the visualization design process for which there are not yet established guidelines for ensuring reliability of the resulting artifacts.

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].
© 2016 ACM. ISBN 978-1-4503-2138-9.
DOI: 10.1145/1235

Within information systems research, a recently proposed methodology called action design research (ADR) offers a framework that aims to ensure reliability of designed artifacts through adherence to a set of principles [25]. Like visualization design research, ADR seeks to contribute design knowledge by solving real-world problems, while supporting the messy, iterative, human-centered nature of the design process. ADR, however, explicitly incorporates approaches from social science that acknowledge and facilitate the effects of people and context on the shaping of artifacts, and specifically those that occur when actions taken by the design researcher result in a disruption of the target users’ processes or understanding, and vice versa. This explicit incorporation of established social science methods, namely those from action research [6, 17], provides guidance for reliably designing artifacts in a real-world context precisely where gaps exist in current visualization models.

In this paper we explore the use of ADR for visualization design research. More specifically, we are interested in how ADR can systematically guide disruption within the visualization design process in order to address reliability threats. We do this by: describing ADR in the language and constructs of visualization research; comparing ADR to existing visualization design models; and analyzing a recent design study retrospectively through the lens of ADR’s framework and principles. This exploration has led us to conclude that ADR is helpful in understanding the role of people and context in the shaping of visualization artifacts, and in providing pointers to places in the visualization design process where this shaping can, and should, be captured and reported. Moreover, ADR provides scaffolding for better understanding the relationship of controlled, experimental, in vitro research and design-oriented in vivo research within visualization.


2. ACTION DESIGN RESEARCH
In this section we present the ADR methodology, which consists of a set of guiding principles and a high-level process model. While ADR has been reported as a development methodology in visual analytics systems research [28], it has yet to be explored in light of applied visualization design methods. In the following we discuss the development of the methodology within the information systems community, followed by an explication of each of the principles and process stages. Each of these discussions includes an analysis of how they reflect, apply to, or inform current visualization design research theory and practice.

2.1 Overview
Information systems research is driven by a dual mission: to generate valuable information systems knowledge and to create effective solutions to real-world problems that inform an application’s domain [25]. Early process models to achieve this mission focus on incorporating both experimental and design methods, while emphasizing relevance through a grounding in real-world problems [13, 27]. Sein et al. [25] critique this approach for not recognizing or capturing the influence of people, organizations, and context on shaping technology throughout the design process. They argue further that appropriate forms of evaluation that consider such elements must be interwoven throughout the design process.

The action design research (ADR) methodology [25], shown in Figure 1, is an attempt to address these shortcomings. Specifically, ADR explicates the influence of the research environment as well as the roles and influences of members of the research team in and on the design process and the resulting technology artifact. ADR structures the design process around cycles of building, intervention, and evaluation that mirror the cycles of planned intervention and reflection used in action research. Action research embraces disruption and action on the part of the designer as a means to learn about a problem [6, 17]. As a result, within the ADR methodology the resulting artifact is considered an instantiation of the space, time, community, and process in which it is developed [22, 23], and termed the ensemble artifact to reflect this quality.

Our own experience suggests that, whether consciously or not, many visualization designers learn through actions, using explicit methods like technology probes [7] and data sketches [12], or more implicit approaches such as abstraction and visualization suggestions during interviews. Throughout the design process visualization designers disrupt and influence both the target users and the problem context, while simultaneously being disrupted themselves. We suspect that adherence to the principles and process of ADR could increase the reliability of visualization design research by providing guidance for where and with what goals to apply qualitative methods like grounded evaluation [8], and for capturing important details around insightful disruptions.

2.2 ADR Principles
ADR’s seven guiding principles express the core values of the methodology and serve as a system of reminders to help ensure that research conducted by the ADR team — comprising visualization researchers and domain practitioners — is reliable throughout. Principles associated with early stage research (P1, P2) stress the importance of grounding design in theory and real-world problems, while those associated with design development (P3, P4, P5) focus on the influences of members of the collaboration on each other and on the design, and on the need for continuous evaluation that takes such influences into account. Additional principles (P6, P7) emphasize the importance of acknowledging and responding to such influences throughout the design process, as well as the importance of generating usable design knowledge from specific research outcomes. While some of these principles (P1, P2, P5, P7) reflect those explicitly articulated in the visualization literature [1, 9, 16, 19, 21, 24, 26], several of these principles (P3, P4, P6) provide new and potentially valuable guidance around the validation of visualization design research.

Figure 1: An overview of the ADR methodology indicating the relationship of the stages to each other, as well as the guiding principles [25]. The deliberate omission of an arrow pointing from stage 1 to stage 2 may seem counter-intuitive, but emphasizes the key role of reflection and learning: movement from stage 1 to stage 2 must occur via stage 3. While stages 1, 2 and 3 are iterative and cyclic, stage 4 is isolated and visited only after the preceding stages are completed.

P1: Practice-inspired Research.
The first principle emphasizes that applied research should be motivated and inspired by real-world problems. This notion is analogous to what the visualization community has termed problem-driven research. This approach helps to ensure domain relevance and paves the way for in vivo evaluation — two core tenets of design studies [24].

P2: Theory-ingrained Artifact.
The second principle stresses the importance of design theory and domain theory in informing a design researcher’s understanding of the problem and solution space, and in helping guide the design process. This principle serves the same purpose as the learn stage in the design study methodology [24], which emphasizes that researchers must learn the space of visualization possibilities in order to design effectively. Additionally, it relates to the nested model’s [21] encoding and interaction threats, which stress that theory should inform decisions at all levels of the design, thereby ensuring that resulting artifacts are theory-ingrained. Visualization design studies frequently focus on the application of theory to inform and justify design.

P3: Reciprocal Shaping.
Principle 3 emphasizes the constant shifting and shaping of both the artifact and the design process by the different perspectives within the team. While this element of design research may feel familiar and perhaps obvious to members of the visualization community, few attempts have been made to account for this element in existing visualization methodology [12]. Acknowledging the occurrence of reciprocal shaping can increase reliability by: 1) providing an explicit opportunity to document impactful activities and insights throughout the design process; and 2) revealing opportunities for structured approaches to ensure and support the effects of all people involved on the design process and resulting artifacts. We suspect that reciprocal shaping is more prevalent in applied visualization research than in information systems research due to the deeply collaborative and highly iterative nature of visualization design and the influential nature of data-led discovery. Methods for ensuring and capturing reciprocal shaping within visualization remain underdeveloped, and are potentially of high interest to the BELIV community.

P4: Mutually Influential Roles.
Principle 4 emphasizes the learning and cross-fertilization that occurs among ADR team members. Each member of the team brings a unique suite of knowledge, theory, and expertise. Through close collaboration the team members learn about each other’s expertise, sometimes offering valuable insight toward another member’s primary research domain. These insights can create substantial shifts in how another team member thinks about, or approaches, their research or domain, and within the visualization community this is informally considered a sign of success for design studies. Beyond the citation of publications in an application domain, or anecdotal stories [18, 5, 14], however, methods and mechanisms for reliably assessing and reflecting on mutual influence are underdeveloped.

P5: Authentic and Concurrent Evaluation.
Principle 5 stresses that evaluation should happen throughout the design process, both to influence the process itself and to inform design decisions. Importantly, this principle encourages researchers to prioritize authenticity when evaluating artifacts and their effects through methods that are ecologically valid and conducted in-the-wild; this value is echoed within visualization design methodologies [26, 24, 16]. Authentic and reliable evaluation in the context of P3 and P4 — which value and encourage the influence of all members of the team on the shaping of the artifact and even in developing the insights achieved across domains — is, however, at odds with controlled studies that aim to establish predictive models through in vitro experiments that seek to remove subjectivity. Instead, ADR provides a means of achieving reliability within a subjective context by articulating the role that people and context have on the design process itself. Furthermore, ADR, and visualization design research more generally, provide an environment for: 1) the evaluation of the results of controlled studies; and 2) a real-world scenario that may draw attention to the need for additional visualization research that may itself require controlled studies.

P6: Guided Emergence.
Principle 6 encourages researchers to be aware of, and sensitive to, the reciprocal shaping of theory-ingrained artifacts that happens throughout the design process: to nurture and incorporate the shaping into the design process, and to capture and apply it toward generating new design principles. While design research should be guided in part by theory, researchers should also be open to incorporating insights that emerge from the research context, interactions, and evaluation. This principle can help ensure reliability of the resulting artifact by encouraging the explicit awareness, and documentation, of the emergence itself, such as the evaluation of design decisions and the evidence used to develop them.

P7: Generalized Outcomes.
Principle 7, while recognizing the unique and highly specialized outcomes of a design process, emphasizes the importance of generalizing and abstracting research findings. This principle specifically encourages researchers to generalize the problem and the solution, as well as to derive design principles. The visualization community subscribes to a similar process of generalizing research findings for the purpose of broader application. Abstractions, problem characterizations and guidelines are examples of such generalizations [19, 21]. This principle, however, suggests that other kinds of learning, particularly surrounding the reciprocal shaping and mutually influential nature of the design process, could also be formalized, benefiting the greater research community.

2.3 ADR Stages
ADR is a high-level framework that encompasses many of the details found in existing visualization process models and practices. Research begins with preliminary investigation and articulation of the problem, continues with a period of iterative and cyclic human-centered design and development, and ends with critical reflection and synthesis of research. ADR differs from visualization design methodologies in its focus on intervention as a critical element of the design process, and its objective of learning through design. Equally if not more important, however, is the actionable framework that ADR’s stages provide for adhering to and reflecting on the principles discussed in Section 2.2 to underpin the design process in ways that aim to achieve reliability. Tight cycles of action and evaluation are core to this and, unlike the emphasis in models of visualization design, reflection is required and ongoing.

S1: Problem Formulation.
The ADR process is triggered by a real-world domain problem, whether expressed by domain experts or discovered by design researchers. The problem formulation stage involves the preliminary research and investigation of the problem, including narrowing in on the research opportunity. This stage also involves what ADR terms “casting the problem as an instance of a class of problems” — similar to the initial problem characterization and abstraction of visualization design research. This stage emphasizes the principle of practice-inspired research (P1), stressing the importance of a real-world context for developing appropriate tasks as well as for establishing an ecologically valid context for validation of artifacts. The principle of theory-ingrained artifact (P2) is also stressed in this stage, indicating the importance of a prepared mind for developing effective solutions.

S2: Building, Intervention, Evaluation.
The second stage of ADR is grounded in the core tenet of action research that an effective way to learn about something is to try to change it [6]. In this stage design researchers collaborate closely with domain practitioners, both to continue to develop and refine the problem space, as well as to design, develop, and refine the artifact. This is accomplished via cycles of building, intervention and evaluation (BIE). As these cycles progress, new interventions are designed based on the results from previous cycles, are evaluated in real time, and are used to inform subsequent cycles. Technology probes [7] are a common intervention instrument used within design study research. Our own experiences of conducting visualization design research suggest that BIE cycles occur frequently and at multiple scales, with overarching cycles exploring high-level questions, mid-level cycles exploring core concepts surrounding the data abstraction and design of a visualization artifact, and low-level rapid, iterative feedback and informal evaluation cycles throughout. In Section 4.2, we illustrate the multi-scale nature of BIE cycles and adapt the model proposed in Sein et al. [25].

The principles of reciprocal shaping (P3) and mutually influential roles (P4) emphasize the highly collaborative, messy, human-centered nature of BIE cycles, as well as the shifting nature of the problem being studied. These principles provide structure to incorporate these dynamic and unpredictable elements of applied research into the design process. This stage also emphasizes authentic and concurrent evaluation (P5) as designers probe with technology to find out what works, and what their design ideas reveal. Evaluation needs to be quick, as well as concurrent with the build and intervene activities.

S3: Reflection & Learning.
The reflection and learning stage happens continuously and in parallel with S1 and S2. Within this stage researchers are encouraged to reflect on: ongoing evaluation in order to guide the design process; how well the research process adheres to guiding principles and how to encourage deeper adherence; and potential, broader implications of the research. This stage may occur either momentarily or in longer stretches, and is often triggered by an insight — a revelation, a moment of validation, or a design challenge — developed during S1 or S2. While this stage has similar objectives to the reflect stage in the nine-stage framework for design studies [24], ADR is explicit about the repeated and central role of reflection throughout the design process. Reflection and learning is guided by one principle, guided emergence (P6), encouraging researchers to adhere to P2–P5 throughout the design process and to reflect critically on the impact of such principles on the design and on the greater contribution of their research.

S4: Formalization of Learning.
The final stage of ADR is the formalization of learning. This stage occurs once the BIE cycles are completed and builds on the reflection and learning conducted throughout the design process — casting the insights and artifacts to a broader class of problems and solutions. Stage 4 embraces the generalization of outcomes (P7), pushing visualization researchers to think more broadly about the scope of their contributions to provide guidance around generalizing and abstracting elements of the design process.

3. COMPARISON TO VIS MODELS
As described in Section 2, ADR marks an evolution of thinking within the information systems community about the role of design in research, and specifically about how to make design research reliable and generalizable. The visualization community is engaged in a similar conversation, and has put forth a number of models for structuring the design process [24, 16] and validating design decisions [21, 19, 8]. In this section we briefly discuss these models as they relate to ADR.

The nine-stage framework represents the first formalized process model and methodology for conducting design studies [24]. This framework is broken into three phases: the first describing a set of activities that should occur before triggering a design study project, the second describing the core design activities in the production of visualization artifacts, and the third describing analysis and reflection to move design insights toward generalizable knowledge. Evaluation is stated as a concurrent step across the entire nine-stage framework, but the specific role of evaluation, or guidance on what types of evaluation are appropriate, is not discussed in detail.

The design activity framework (DAF) [16] was a response to some of the shortcomings of the nine-stage framework. Specifically, the DAF emphasizes evaluation as a primary component of each design activity within the design process while also offering guidance for appropriate evaluation methods. The DAF also attempts to give a more flexible structure to the design process by supporting iterative, nested, and parallel design activities. In an effort to boost the actionability of the framework, the DAF bridges between the steps a designer takes and the decisions they make by explicating the levels of the nested model [21, 19] that are considered in each design activity. The nested model describes four levels of decisions — problem characterization, data and task abstraction, visual encodings and interactions, and algorithms — and the interconnected relationship of decisions with respect to validation. The artifacts produced as a result of these decisions are often considered to be the contributions made by a design study [24].

ADR encompasses both the latter two phases of the nine-stage framework as well as the entire DAF by describing the design process from a problem trigger through formalization of the knowledge acquired. Similar to the DAF, ADR has an explicit treatment of evaluation as an essential step that is repeatedly visited throughout all design activities. Unlike the DAF, however, ADR makes reflection a primary activity throughout the design process, extending the role of reflection from that detailed in the nine-stage framework.

The biggest shift that ADR presents over existing visualization models, however, is the inclusion of action research [11, 17], impacting the design process in a number of ways. First is the emphasis on the role of learning through planned actions as a primary driver of the design process. Second is the view that the development of an artifact is both a contributor to and consequence of the research process. Third is the framing of the design process in a manner that achieves reliability by incorporating established values from social science.

4. APPLYING ADR TO POEMAGE
In this section we consider ADR in a visualization context by applying it retrospectively to a recent design study [14]. This design study involved a two-year collaboration between visualization researchers (two of the co-authors of this work) and poetry scholars, resulting in various artifacts, including: new views on visualization design in poetry scholarship, insights into the role of computers in this discipline, a series of guidelines that can be applied to visualization in humanistic domains, and a new visualization tool called Poemage. In what follows, we present and critique the design study by reframing it through the lens of ADR. We argue this reframing sheds new light on contributions of the design study and illustrates the applicability of ADR to visualization design research.

4.1 Problem Formulation

The Poemage design study was triggered by the poetry scholars' interest in exploring the potential role of visualization in poetry scholarship, and in particular in the experience of a close reading of a poem. Close reading involves the in-depth analysis of a poem and all its literary devices, and is central to the poetry scholars' research. During the initial portion of the problem formulation stage the visualization researchers conducted informal and semistructured interviews with the poetry scholars to learn about the close reading process [8]. From these interviews the visualization researchers discovered that influencing close reading could happen in many different ways, and that there was no explicit notion of data in this context.

From there the visualization researchers dug through text analysis literature to determine what types of literary devices — metaphor, imagery, affect, and sound, to name a few — can be extracted from a poem. The goal was to find a device that was both robustly computable and interesting to the poetry scholars. Eventually, the team narrowed in on sonic devices — a class of poetic device that utilizes sound, and the relationships between sounds in words, to affect the interpretation of a poem. Next, the team developed an initial data abstraction: the data under consideration would be sets of sonically similar words within a poem. Additionally, the visualization researchers reviewed literature around text visualization, close reading, and digital humanities.
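As a concrete, if deliberately simplistic, illustration, this data abstraction can be sketched in a few lines of Python. The grouping rule below (a shared two-letter word ending) is a hypothetical stand-in of our own devising for the much richer notions of sonic similarity the team actually pursued:

```python
# Toy sketch of the initial data abstraction: a poem's sonic data modeled
# as named sets of words related by some sonic device. The similarity rule
# (shared final two letters) is hypothetical, not the project's actual one.

def sonic_sets(words):
    """Group words by a crude sonic feature: their final two letters."""
    groups = {}
    for word in words:
        key = word[-2:].lower()
        groups.setdefault(key, set()).add(word)
    # A sonic device involves a relationship between words,
    # so keep only sets containing at least two words.
    return {k: s for k, s in groups.items() if len(s) >= 2}

poem = ["deep", "sleep", "keep", "dark", "lark", "lovely", "promises"]
sets_found = sonic_sets(poem)
print(sets_found)  # two sets: {'deep', 'sleep', 'keep'} and {'dark', 'lark'}
```

Even at this fidelity, the abstraction exposes the questions the design study went on to investigate: which grouping rules matter to scholars, and how the resulting sets interact across the space of the poem.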

Adherence to P1 and P2.

The investigation within this stage was grounded in the poetry scholars' interest in exploring the role of visualization in poetry scholarship, which was continually revisited through the numerous discussions between the team members. The theory acquired within this stage came from various approaches to text analysis and visualization, and the values and methodologies of digital humanities. In addition, learning around approaches to digital humanities scholarship inspired the visualization researchers to pursue a highly experimental and exploratory approach to the design process, which was maintained through the duration of the design study. As the problem formulation stage was revisited later in the design study, the visualization researchers turned once again to visualization theory, digging into specific visualization techniques that best supported the effective encoding of the evolving data abstraction for the tasks at hand.

4.2 Building, Intervention, Evaluation

During the BIE stage the team developed a broad array of technology probes to understand three core questions: What sonic devices are interesting to the poetry scholars? What are the scholars interested in doing with the sonic devices? How can visualization support their exploration of the sonic devices? Overarching these questions was the larger investigation into the role visualization could play in poetry scholarship. Sein et al. [25] describe small numbers of BIE cycles relating to beta and alpha prototypes of software systems in their examples of ADR. On reflection, our design study consisted of a series of BIE cycles that occurred at multiple scales: one high-level, overarching BIE cycle examining the role and impact of technology on poetry scholarship; three mid-level cycles focusing on sound, visualization design, and the development of the Poemage tool; and many rapid, low-level cycles of iteration, expansion, and refinement, each involving a planned and active intervention, evaluation of the effect on the poetry scholars, and subsequent reflection to establish the knowledge gained and to drive design decisions. Each scale warranted different types of evaluation, with the higher-level scales incorporating more formal evaluation, and the lower-level scales using quicker, lighter-weight methods. These low-level BIE cycles are of particular interest for future investigations into reliability via documentation and recording of evaluation efforts. Furthermore, the different scales of BIE cycles may be particularly important in visualization, as it is so often driven by discovery.

Figure 2: Multi-scale BIE cycles: (a) high-level BIE cycle focusing on the role of technology in poetry scholarship, (b) mid-level BIE cycles focusing on sound, vis design, and the development of Poemage, (c) low-level cycles involving fast, informal feedback.

The multi-scale BIE cycles for the Poemage design study are roughly depicted in Figure 2. The three horizontal lines reflect contributions from different members of the Poemage team — the visualization researchers; the poetry scholars, or practitioners in ADR parlance; and end users beyond the team. The top line relates to development on the part of the visualization researchers as they produced functionality for intervening in the practices of the poetry scholars. The middle line indicates an intervention as the developed artifact was deployed to the scholars. We should emphasize that this is the crucible of action — where the ADR team, design, and data interact in an authentic setting, and where a plausible, theory-ingrained artifact is used by a practitioner to establish knowledge in both the application and visualization domains. Further development, reflection, and learning result from evaluation of these planned actions. The bottom line corresponds to the deployment of the Poemage tool to users beyond the team. Lam et al. [9] describe this as "deployment ... in the field," which offers opportunities for summative evaluation, as described in multi-dimensional in-depth long-term case studies (MILCs) [26]. In what follows, we outline the three mid-level BIE cycles that were core to the Poemage design study.

The first mid-level BIE cycle focused on sound and sonic devices. Via an informal survey followed by semistructured interviews, the visualization researchers worked with the poetry scholars to determine which sonic devices would be most interesting to explore in the close reading of a poem. The identified sonic devices were translated into code within an interactive system that extracted sets of words in a poem that were related via the various devices. The visualization researchers used this software as a technology probe to test the selected devices, and to understand how the poetry scholars might explore such devices within a poem. Evaluation of the technology probe ranged from casual feedback to highly structured interviews. Insights from the initial technology probe motivated the visualization experts to develop a language, along with a formalism, for specifying and analyzing a broad range of sonic devices — all of which the poetry scholars blanketed under an extended definition of rhyme — within a poem. This language and formalism were subsequently implemented in a system called RhymeDesign [15]. Evaluation of RhymeDesign was formal, including both case studies and a survey testing the expressivity of the RhymeDesign language against examples of interesting sonic devices collected from an extended network of poetry scholars.

The second mid-level BIE cycle focused on the design of visual representations. This cycle included explorations of different visual representations of the data abstraction as well as experimentation with different visualization and interaction techniques to support the exploratory tasks observed and identified in the previous BIE cycles. As the second BIE cycle progressed through a series of rapid, high-frequency interventions, the poetry scholars' interest evolved from browsing through sets of words detected by the system to instead exploring the interaction between these sets across the space of the poem. This new focus inspired the team to revisit a metaphor relating sound in poetry to flow, developed previously by one of the poetry scholars, which in turn informed the visual notion of sonic topology. This cycle was guided by regular, rapid, and informal feedback from the poetry scholars on ideas and prototypes — sketches, screen captures, live demos, etc. — shared either in person or remotely.

The third and final mid-level BIE cycle focused on the development of the Poemage visualization tool. During this cycle valuable features, interactions, capabilities, and design elements were extracted from previous BIE cycles and compiled into a multi-linked-view system. Following an initial beta-testing deployment period, in which poetry scholars from an extended network were given several weeks to experiment with incorporating Poemage into their practices, the visualization researchers conducted contextual interviews and case studies. In preparation for these focused evaluation sessions the poetry scholars wrote experiential, qualitative narratives about their experiences using Poemage, which they discussed during the interviews. While this preparatory writing was not asked of the poetry scholars, they expressed that it was a natural and productive method of reflection within their field, and an exercise they were inclined to complete regardless. On reflection we note that insights like these could point to new forms of evaluation for others working in the digital humanities.

Adherence to P3, P4, and P5.

In reflecting on the BIE cycles we found that reciprocal shaping (P3) often occurred during close collaboration between the visualization researchers and poetry scholars, that evaluation (P5) occurred rapidly and informally during periods of intervention, and that the cycles supported mutual influence (P4) by creating a gradual decrease in the separation between the knowledge states and the roles of the researchers and scholars.

One specific example of reciprocal shaping occurred around the development of a particular feature in Poemage, which came to be known as the beautiful mess. The beautiful mess, shown in Figure 3, displays all the detected sets for a given poem, resulting in visual clutter and significant overplotting. Although this feature was explicitly requested by the poetry scholars, it was met with a degree of resistance by the visualization researchers, as it contravened visualization conventions that value clarity and readability [3]. The poetry scholars argued, however, that the messiness resonated deeply with them: it captured the energy and excitement they felt during a close reading, and served as a visual representation of the untangling task they confront with a new poem. Ultimately, the inclusion of the beautiful mess not only led to one of the more important insights of the work, but also helped to engage and gain the trust of the poetry scholars.

The reciprocal shaping of the beautiful mess contradicts the visualization theory brought to bear on the design process (P2). But the design worked for the poetry scholars, emphasizing the importance of reciprocal shaping and mutual influence. The beautiful mess was considered to be a strange anomaly of the design study, and precisely what to make of it remained unclear to the visualization researchers. An understanding of the notion of reciprocal shaping, along with structured guidance to embrace and nurture this element of the design process and its contribution to design and knowledge acquisition, such as that provided by ADR, may have resulted in more features like the beautiful mess, and in more directed learning and evaluation around such features.

Mutual influence also played a significant role in this research. Each team member contributed a different level of expertise in her own field, a different level of expertise in the other domain, and a different level of openness to deviating from theory and convention. Throughout the design process, the poetry scholars developed a computational way of thinking about their scholarship, which they discussed and reflected on in multiple articles and talks to the humanities and DH communities [4, 10]. This, along with insights gained throughout the collaboration, led one collaborator to develop new theoretical thinking about the relationship between human and machine in the context of the digital humanities. On the other end, the visualization researchers learned to embrace the poets' broad and imprecise definition of rhyme and developed an openness to deviating from conventional visualization methods and principles. Additionally, the visualization researchers learned to incorporate a more extemporaneous element into their research — one that reflected the nature of their collaborators' poetry scholarship. Furthermore, revisiting a close reading of a particular poem, and the particular analysis that led to a new interpretation or insight, was a regular tactic used by the poetry scholars to illustrate a point. Thus the visualization researchers had to develop enough of an understanding of poetry, poetry analysis, and close reading to interpret the point being made, and to translate it to the space of visualization research.

Figure 3: The beautiful mess feature of Poemage applied to Clark Coolidge's "Machinations Calcite." Development of this feature exhibited elements of reciprocal shaping (P3) and guided emergence (P6).

Lastly, authentic evaluation played an integral role in shaping the research and design process. Fast and informal feedback guided the research team toward pursuing sonic devices and facilitated the design process. At various points throughout the first and second BIE cycles, the visualization researchers sat with the poetry scholars and iteratively tested and evaluated new features, interactions, and visual encodings. In addition, consistent feedback helped the visualization researchers identify and build on elements of the research process that engaged the poetry scholars, increased their trust in the technology, and were disruptive in interesting ways. In retrospect, recording and reporting these kinds of findings in a more structured, and perhaps comprehensive, fashion, as ADR begins to facilitate, would have increased the reliability of the design process.

The evaluation strategies, particularly as they applied to interviewing techniques, evolved and shifted throughout the BIE cycles based on feedback and reflection. For example, while the first round of interviews was highly structured, it became clear that semistructured interviews were much more appropriate, since the poetry scholars needed very little prompting and came to the interviews with valuable insights that would have been hard to elicit via preconceived questions. As another example, elements of poetry scholarship found their way into evaluation tactics. The primary example of this was the experiential, qualitative narratives written by the poetry scholars that were incorporated into the evaluation of Poemage. Thus, mutual influence and reciprocal shaping had an effect on evaluation as well as on design. This is not something that would be welcome in the kinds of isolated, objective evaluation that lab studies permit, but a reflective methodology such as ADR provides a means for such flexibility while providing reassurance regarding reliability in applied work. Had this type of mutual influence and reciprocal shaping occurred earlier in the design study, or had the team been following a methodology that explicitly encouraged this awareness and flexibility, the team may have sought and benefited from more opportunities of this kind.

4.3 Reflection & Learning

Throughout the design process the visualization researchers reflected in order to shift and shape the direction of the project, to operationalize poorly defined tasks, and to extract insights. For example, during the first BIE cycle it became clear that the poetry scholars embraced a broad and imprecise definition of sonic similarity, motivating the visualization researchers to move beyond straightforward rhyme detection. The result was the development of a formalism for describing sonic similarity computationally, and the implementation of the RhymeDesign tool. Another reflective moment occurred when the visualization researchers observed a spectrum of ways in which the poetry scholars were using the Poemage tool, leading to a realization that one role of technology in poetry scholarship is creativity support, as opposed to data analysis.

Adherence to P6.

Moments of guided emergence occurred throughout the design process. An illustration of this is the beautiful mess example described in Section 4.2. The visualization researchers were guided by conventions surrounding clarity and readability, and initially resisted even experimenting with the feature. As the poetry scholars continued to push for the feature, however, one of the visualization researchers became more receptive. As mutual influence was established through validation of the technique, the other visualization researcher was eventually persuaded. In hindsight, this experience taught the visualization researchers to be more open to precisely the notion of guided emergence. At the time, the precise impact and takeaway of this anecdote remained unclear to the visualization researchers; however, the lesson was presented in the visualization publication about the Poemage design study as a kind of guideline encouraging others to adopt the same openness in their research. P6 directly confirms and articulates the importance of such experiences to the design process, and gives weight to any associated lessons and formulated guidelines.

4.4 Formalization of Learning

The majority of the formalization of learning in this project occurred during the writing phase of the research. During this period the visualization researchers looked back through the entire project, gathering and formalizing the elements of the project that had potential for benefiting the visualization community as a whole. Some formalization came out of the problem characterization and data abstraction, as is typical in the reporting of design work in visualization research [20]. Other formalization came out of reflecting on the project as a whole, including insights surrounding creativity support tools and conducting design research in the digital humanities. Additionally, the visualization researchers revisited the most interesting challenges encountered throughout the research — especially those surrounding evaluation and appropriate measures of success — and formalized them into open research questions for future work. While there was a desire to formalize learning surrounding the reciprocal shaping and the disruption that occurred throughout the design process, the lack of guidance and language for doing so left the visualization researchers with little confidence in such an endeavor.

Adherence to P7.

While the results of this research were highly specific and designed to meet the interests and needs of a very small group of poetry scholars, the visualization researchers generalized elements of the process and design to various levels of abstraction. For example, the poetry scholars' interest in exploring the role of technology on their scholarship practices motivated some speculation surrounding possible implications in the arts and in other fields that value novel interpretations and creative thinking. At a much lower level, while Poemage was designed to support a very specific research activity — the close reading of American English free verse poetry — formalizing the data abstraction allowed the visualization researchers to speculate about possible applications to other set visualization problems. In retrospect, we wonder whether taking an ADR approach might have facilitated framing these outcomes more effectively, consistently, and ultimately more reliably.

5. DISCUSSION

In applied visualization research, visualization systems are often shaped by a series of small decisions made by designers and researchers who have invested heavily throughout the design process. These decisions are typically made with established visualization principles in mind, and rely on fast, informal, and lightweight evaluation strategies. How to conduct such evaluation in a manner that is reliable, and that contributes to our knowledge of visualization as a methodology and to the domains in which we work, is an open question within the visualization community.

In this work we look to information systems research for insight around conducting reliable, informed evaluation in settings that are applied and dynamic. In particular we turn to action design research, which, through adherence to a set of guiding principles, offers a framework for reliably structuring and reporting on the design process in ways that can contribute to the acquisition of knowledge. ADR shares many commonalities with existing visualization design methodologies, but deepens the theoretical underpinnings through its use of action research as a basis for design research. This foundation — adapted from an established method of enquiry in social science, in which researchers directly influence the context that they study through planned intervention — affords a new perspective on the forces that shape the nature of visualization design, and on the way we define reliability of research.

We describe the ADR principles and stages using visualization parlance, highlighting the similarities and overlaps with existing visualization theory and practice. But we are also able to point out a number of places where ADR principles and stages are not reflected in visualization models, indicating gaps that present reliability threats for visualization design research. Furthermore, applying ADR retrospectively to the Poemage design study revealed a number of highly significant moments and insights that we struggled to articulate using existing visualization models. ADR provided a structure and organization for analyzing the impact of the human-centered and disruptive elements on the process and on the design, as well as the impact of the collaboration on the learning that occurred in both domains — visualization and poetry. This learning has extended beyond the scope of the project and continues to influence research in both domains. While we are unable to draw conclusions around ADR's utility as a guiding methodology through this retrospective application of the framework, doing so leads us to the hypothesis that incorporating elements of ADR into future design studies will enable better navigation and evaluation of the design process, as well as the facilitation of new kinds of learning. Furthermore, while we acknowledge the plausible risk of confirmation bias associated with a retrospective application of ADR, our collective experience around Poemage and other previous design studies is precisely what motivated us to seek the added guidance offered by ADR.

ADR emphasizes the differences between the methods used to frame controlled in vitro research and those applicable to design-oriented research where reciprocal shaping and mutual influence occur. This contrast, however, also points to the synergy between subjective and objective research endeavors: applied visualization contexts offer an environment to evaluate the results of controlled studies while simultaneously providing inspiration for new research questions that could benefit from empirical experiments. ADR provides a reasoning to articulate these differences for the visualization research community.

6. FUTURE WORK

The emphasis on the use of design as a research tool is largely implicit in visualization research. Furthermore, the use of design as a deliberate means of disruption to observe effect is an exciting approach through which visualization researchers may beneficially engage with a broad range of application areas. In addition to these potential benefits, we see major implications, along with several open questions, surrounding the application of ADR to visualization design research.

First, ADR explicates the roles of reciprocal shaping (P3), mutual influence (P4), and guided emergence (P6), none of which are captured in existing visualization models. These principles stem from the application of action research, which could provide guidance for how to apply them in a structured, reliable way. Furthermore, visualization design research is in need of mechanisms for capturing and reporting on moments guided by these principles, as well as for reflecting on them to produce new visualization knowledge.

And second, the subjective nature of ADR questions the applicability of P7 itself — what does it mean to generalize results in the context of such a subjective, uncontrolled, and specific scenario? Others have suggested that for design studies the goal is "transferability not reproducibility" [24], and we believe that the notion of transferability is also important for ADR. How do we define reliable, transferable results for visualization design research, and what mechanisms can we develop to support transferability? These questions are core components of what we believe to be interesting and exciting future work.


7. REFERENCES

[1] K. Andrews. Evaluation comes in many guises. In AVI Workshop on BEyond time and errors (BELIV) Position Paper, pages 7–8, 2008.

[2] M. Brehmer, S. Ingram, J. Stray, and T. Munzner. Overview: The design, adoption, and analysis of a visual document mining tool for investigative journalists. IEEE Transactions on Visualization and Computer Graphics, 20(12):2271–2280, 2014.

[3] W. S. Cleveland. A model for studying display methods of statistical graphics. Journal of Computational and Graphical Statistics, 2(4):323–343, 1993.

[4] K. Coles. Slippage, spillage, pillage, bliss: Close reading, uncertainty, and machines. Western Humanities Review, pages 39–65, Fall 2014.

[5] S. Goodwin, J. Dykes, S. Jones, I. Dillingham, G. Dove, A. Duffy, A. Kachkaev, A. Slingsby, and J. Wood. Creative user-centered visualization design for energy analysts and modelers. IEEE Transactions on Visualization and Computer Graphics, 19(12):2516–2525, 2013.

[6] G. R. Hayes. The relationship of action research to human-computer interaction. ACM Transactions on Computer-Human Interaction (TOCHI), 18(3):15, 2011.

[7] H. Hutchinson, W. Mackay, B. Westerlund, B. B. Bederson, A. Druin, C. Plaisant, M. Beaudouin-Lafon, S. Conversy, H. Evans, H. Hansen, et al. Technology probes: Inspiring design for and with families. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pages 17–24. ACM, 2003.

[8] P. Isenberg, T. Zuk, C. Collins, and S. Carpendale. Grounded evaluation of information visualizations. In Proceedings of the 2008 Workshop on BEyond time and errors: novel evaLuation methods for Information Visualization, page 6. ACM, 2008.

[9] H. Lam, E. Bertini, P. Isenberg, C. Plaisant, and S. Carpendale. Seven guiding scenarios for information visualization evaluation. 2011.

[10] J. Lein. Sounding the surfaces: Computers, context, and poetic consequence. Western Humanities Review, pages 84–109, Fall 2014.

[11] K. Lewin. Action research and minority problems. Journal of Social Issues, 2(4):34–46, 1946.

[12] D. Lloyd and J. Dykes. Human-centered approaches in geovisualization design: Investigating multiple methods through a long-term case study. IEEE Transactions on Visualization and Computer Graphics, 17(12):2498–2507, 2011.

[13] S. T. March and G. F. Smith. Design and natural science research on information technology. Decision Support Systems, 15(4):251–266, 1995.

[14] N. McCurdy, J. Lein, K. Coles, and M. Meyer. Poemage: Visualizing the sonic topology of a poem. IEEE Transactions on Visualization and Computer Graphics, 22(1):439–448, 2016.

[15] N. McCurdy, V. Srikumar, and M. Meyer. RhymeDesign: A tool for analyzing sonic devices in poetry. Proceedings of Computational Linguistics for Literature, 2015.

[16] S. McKenna, D. C. Mazur, J. Agutter, and M. Meyer. Design activity framework for visualization design. IEEE Transactions on Visualization and Computer Graphics, 20(12):2191–2200, 2014.

[17] J. McNiff. Action research: Principles and practice. Routledge, 2013.

[18] M. Meyer, T. Munzner, and H. Pfister. MizBee: A multiscale synteny browser. IEEE Transactions on Visualization and Computer Graphics, 15(6):897–904, 2009.

[19] M. Meyer, M. Sedlmair, P. S. Quinan, and T. Munzner. The nested blocks and guidelines model. Information Visualization, 2013.

[20] T. Munzner. Process and pitfalls in writing information visualization research papers. In Information Visualization, pages 134–153. Springer, 2008.

[21] T. Munzner. A nested model for visualization design and validation. IEEE Transactions on Visualization and Computer Graphics, 15(6):921–928, 2009.

[22] W. J. Orlikowski and C. S. Iacono. Research commentary: Desperately seeking the "IT" in IT research — a call to theorizing the IT artifact. Information Systems Research, 12(2):121–134, 2001.

[23] S. Purao, O. Henfridsson, M. Rossi, and M. Sein. Ensemble artifacts: From viewing to designing in action design research. Systems, Signs & Actions, 7(1):73–81, 2013.

[24] M. Sedlmair, M. Meyer, and T. Munzner. Design study methodology: Reflections from the trenches and the stacks. IEEE Transactions on Visualization and Computer Graphics, 18(12):2431–2440, 2012.

[25] M. Sein, O. Henfridsson, S. Purao, M. Rossi, and R. Lindgren. Action design research. MIS Quarterly, 35(1):37–56, 2011.

[26] B. Shneiderman and C. Plaisant. Strategies for evaluating information visualization tools: Multi-dimensional in-depth long-term case studies. In Proceedings of the 2006 AVI Workshop on BEyond time and errors: novel evaluation methods for information visualization, pages 1–7. ACM, 2006.

[27] A. R. Hevner, S. T. March, J. Park, and S. Ram. Design science in information systems research. MIS Quarterly, 28(1):75–105, 2004.

[28] C. J. Zimmerman, H. T. Wessels, and R. Vatrapu. Building a social newsroom: Visual analytics for social business intelligence. In Enterprise Distributed Object Computing Workshop (EDOCW), 2015 IEEE 19th International, pages 160–163. IEEE, 2015.