
© The Author 2016. Published by Oxford University Press on behalf of The British Computer Society. All rights reserved. For Permissions, please email: [email protected]

    doi:10.1093/iwc/iww015

Towards a Design Space for Multisensory Data Representation

    TREVOR HOGAN1,* AND EVA HORNECKER2

1 Department of Media Communications, Cork Institute of Technology, Cork, Ireland
2 Department of Fak. Medien, Bauhaus-Universität Weimar, Bauhausstr. 11, Weimar, Germany

    *Corresponding author: [email protected]

Humans have represented data in many forms for thousands of years, yet the main sensory channel we use to perceive these representations today still remains exclusive to vision. Recent developments now offer us opportunities to perceive data through different levels and combinations of sensory modalities. In this article, we survey the state-of-the-art in data representation that requires more than one sensory channel to fully interpret and understand the data. Drawing on techniques and theories adapted from Thematic Analysis and Prototype Theory, we analysed 154 examples of multisensory data representations to establish a design space along three axes: use of modalities, representation intent and human–data relations. We frame the discussion around presenting how a selection of examples, chosen from the collection, fit into the design space. This not only informs our own research but can also draw the attention of the human–computer interaction and Design Research communities to aspects of data representation that have hitherto been either ill-defined or underexplored. We conclude by discussing key research challenges, which emerged from the exploration of the design space, and point out future research topics.

    RESEARCH HIGHLIGHTS

• Defined the term Multisensory Data Representation.
• Surveyed the state-of-the-art in Multisensory Data Representation.
• Established and discussed a design space for Multisensory Data Representation.
• Presented five key recommendations for practical guidance on the study and design of Multisensory Data Representation.
• Proposed a research agenda based on research challenges and questions.

    Keywords: data representation; design space; multisensory; physicalization; sensification; visualization

Editorial Board Member: Prof. Kaisa Väänänen

    Received 7 July 2015; Revised 1 April 2016; Accepted 18 April 2016

    1. INTRODUCTION

Making sense of and communicating information through data representations has been engrained in human behaviour for millennia, from prehistoric cave paintings, through developments in cartography, to more recent developments in scientific and information visualization. While multisensory data representations have been studied for a long time, starting from the seminal article by Miller (1956), the majority of existing data representations rely exclusively on the visual modality. Although there are some rare examples throughout the 20th century that represent data through and beyond the visual modality, since the turn of the century we have seen a marked increase in data representations that combine at least two modalities (Fig. 1). Some examples include: a sculpted piece of wood that represents the world's distribution of wealth (Fig. 2A), a 3D printed cup whose form represents 150 years of Sydney temperature data (Fig. 2B) and the sounds and vibration emitted from a cube to represent the real-time hydrogen levels in deep space (Fig. 2C). These seek to exploit the material, aesthetic and natural qualities and affordances offered by modalities beyond vision to represent data.


The study of data represented beyond the visual modality, in particular the physical representation of data, has recently received increased attention (Alexander et al., 2015; Jansen and Dragicevic, 2013; Stusak et al., 2015). Reviews, such as Pousman et al. (2007), Swaminathan et al. (2014), Vande Moere (2008) and Zhao and Vande Moere (2008), also provide a comprehensive overview of the state-of-the-art in these types of data representations. Notwithstanding these developments, we see limitations in the literature. First, research has tended to focus on a single modality (cf. Pan and Roberts, 2009), and few studies have surveyed research across different disciplines, for example Information Visualization (InfoVis), Human–Computer Interaction (HCI) and the Arts. Also, while research is continuing on multisensory data representation (cf. Bowen Loftin, 2003; Nesbitt, 2001; Nesbitt, 2005; Ogi and Hirose, 1996; Roberts and Walker, 2010; Search, 2015; Tak and Toet, 2013), few efforts have surveyed existing representations to identify key research and design challenges. To our knowledge, this article is the first to draw together data representations from a wide range of disciplines (artistic, design and scientific), which originated over a long time span (1862–2015), and to analyse these along the same criteria.

We believe that it is timely to explore the design space of data representations that encode data in more than one modality. We do this by first introducing the concept of multisensory data representations; we then explore the potential of multisensory data representations and discuss their value, for example how they offer a more holistic sensory experience and allow people to interpret the data in a manner that is personally meaningful to them. This article makes several contributions to the study and design of multisensory data representations. First, we define, for the first time, multisensory data representation. Second, from 154 existing examples, we survey the state-of-the-art in multisensory data representations. Third, we establish and discuss a design space for multisensory data representations and, finally, we propose a research agenda based on challenges and questions that are presently underexplored.

    1.1. Defining multisensory data representation

The aim of this research is not to reclassify current data representations or prescribe a new domain; instead, we are more interested in surveying an array of data representations across many disciplines to encourage a debate about the potential for using more than one representation modality.

Figure 1. Year of publication/creation of the 154 examples from our survey (x-axis: year, y-axis: amount) (see Supplemental Appendix A for full list of examples).

Figure 2. (A) Fundament with permission from Andreas Nicolas Fischer (Fischer, 2007). (B) Measuring cup with permission from Mitchell Whitelaw 2010 (Whitelaw, 2010). (C) H3 with permission from Trevor Hogan and Eva Hornecker (Hogan and Hornecker, 2013a).


Recent literature has labelled data representation that does not rely exclusively on the visual modality as Sonification (Kramer and Walker, 1999), Physicalization (Alexander et al., 2015), Non-visual visualization (Vande Moere, 2008) or Cross/Multimodal displays (Hoggan et al., 2009). However, in the context of this research, these terms pose difficulties. First, the sonification of data typically relies on a single modality (auditory). Also, categorizing visualizations as non-visual is problematic, as the visual modality often remains an integral communicative aspect of the representation, for example when physicalizations can be touched and seen. We therefore prefer the term cross-modal or multimodal display. Nevertheless, these are also problematic terms, as the focus is on the output of the representation and not on the sensory channel used to perceive the data. There are, however, terms that have already been defined that more closely fit the type of data representation we explore here; these are: Sensualization (Ogi and Hirose, 1996), Sensification (Tak and Toet, 2013) or Perceptualization (Card et al., 1999). But as our aim is not to reclassify examples in our collection, we discuss work situated across many domains together under the umbrella term: multisensory data representations. We define multisensory data representation as:

Multisensory data representation is a class of data representation that has a clear intent to reveal insight by encoding data in more than one representational modality and requires at least two sensory channels to fully interpret and understand the underlying data.

    1.2. Related work

The research field that this kind of work is traditionally linked with is InfoVis (Card et al., 1999). More recently, as the medium used to encode data has moved beyond the pixel toward tangible objects, sonifications and other modalities, we have seen increased attention around the field of HCI (cf. Nissen and Bowers, 2015) as well as more artistic endeavours (cf. Rodgers and Bartram, 2011). Today, we continue to see related work in research fields such as Tangible Computing (cf. Pockson, 2014), Data Art (cf. Jarman and Gerhardt, 2014) and Design (cf. Hogan and Hornecker, 2013b).

Some attempts have already been made to address and define data representations that fall outside the typical classification of Information Visualization. Pousman et al.'s (2007) classification of Casual Information Visualizations distinguishes visualizations within this classification as ambient, social and artistic. This definition also addresses the widening usage of data representations, from a task-driven user to a lay audience whose motivation for engaging with visualizations is more personally meaningful rather than aiming at pure analytical discovery. Zhao and Vande Moere (2008) also proposed a domain model to establish the concept of Data Sculpture as a data-based physical artefact possessing both artistic and functional qualities. Vande Moere (2008) extended this line of thinking by introducing five degrees of data physicality, which differ in the level of abstraction of how data are mapped and perceived by human senses. In recognizing the importance of design in the process of data representation, while also supporting and inspiring future designs, Vande Moere and Patel (2010) surveyed a collection of physical data representations created within the context of education and defined a classification based on three categories: symbolic, iconic and indexical. While Vande Moere and Patel focused on the design of data representations, the research by Jansen and Dragicevic (2013) is more people-centred, addressing issues related to interaction. In this work, they establish an interaction model that can be used to help describe, compare and critique non-screen-based data representations.

Beyond work that attempts to categorize different types of data representation, we reviewed theoretical and empirical studies that address issues related to the representation and perception of data through more than one modality. This research is spread over a broad range of research disciplines including: Psychology (Freides, 1974; Heller, 1982; Miller, 1956; Warren and Rossano, 1991; Welch and Warren, 1980), Scientific Visualization (Ogi and Hirose, 1996), Information Visualization (Roberts et al., 2014; Tak and Toet, 2013), Human–Computer Interaction (Bowen Loftin, 2003; Loftin, 2003; Reeves et al., 2004; Roberts, 2004; Search, 2015), Geovisualization (Faeth et al., 2008; Harding et al., 2002), Ergonomics (Sarter, 2006), Design (Nesbitt, 2005), Virtual Reality (Basdogan and Bowen Loftin, 2009) and Engineering (Harding and Souleyrette, 2010). In this literature, we identified four prominent themes related to our research: Human perception, Call for action, Previous surveys and Arts-based research.

1.2.1. Human perception

A number of studies have investigated how combining representational modalities affects the capability of human perception. Warren and Rossano (1991) found that the tactile perception of transformation properties, such as tilt, size and length, is in most cases as accurate as visual perception. Heller (1982) illustrated how the combination of modalities (i.e. touch and visual) can enhance people's ability to process information, especially for perceiving texture. However, evidence also suggests a limit to the amount of information people can process (receive, process, remember), which differs depending on the modality or combination of modalities in use. Miller (1956) refers to this as the span of absolute judgement.

1.2.2. Call for action

A number of researchers have called for further research on multisensory data representations. For example, Roberts and Walker (2010) encouraged future studies to validate whether combining representational modalities will reinforce or contradict data insight revealed through different modalities. They also call for a more unified approach to the theory of visualization, to encompass the perceptual variables of all representational modalities. Tak and Toet (2013) have argued for further systematic empirical research to develop design guidelines for multisensory data representations or, as they call them, Sensifications. They claim that multisensory representations of complex data facilitate a more intuitive and transparent process. In particular, they suggest these help the user to: (i) acquire a wider range of detail (bandwidth expansion), (ii) fill in missing information (data completion) and (iii) gain a more holistic data experience (Gestalt forming). These aspects were also highlighted by Search (2015), who suggests that utilizing more modalities expands the number of data variables that can be represented simultaneously. The proposed benefits of multisensory data representation are similar to those highlighted for multimodal displays, including aspects such as: synergy (i.e. merging different modalities in one event or process), redundancy (i.e. communicating the same information via several modalities) and increased bandwidth of information transfer (Sarter, 2002; Oviatt, 2002).

1.2.3. Previous surveys

Exploring the design space of multisensory data representation has been attempted before; in particular, Card and colleagues (1999) also explored this space and coined the term Perceptualization: a multisensory display of abstract information. This line of enquiry was subsequently extended by Nesbitt (2001). Nesbitt (2005) later categorized multisensory data representation via a multisensory taxonomy based on high-level metaphors, which he classifies as spatial, direct and temporal. The research we present in this article differs from this with regard to approach, scope and contribution. Unlike previous attempts, we establish our design space through a systematic survey of existing examples, collected from a wide range of disciplines, and the dimensions of our design space are closely linked to the properties and qualities of representations, instead of high-level taxonomies.

1.2.4. Arts-based research

Apart from theoretical and empirical studies, other research has taken a more artistic and cultural view of non-typical data representations. Alongside the numerous articles by new media theorist Lev Manovich (cf. 2008), in which he highlights the cultural impact of data representation, Viégas and Wattenberg (2007) have reviewed the field of artistic information visualization to investigate how artists appropriate and repurpose scientific techniques to create data representations. Recently, alongside these attempts to explore the theoretical foundations of artistic visualization, a large body of work, consisting of artistic and often provocative explorations, has received increased attention. An example is the art installation and performance piece All The People Of The World (Stan's Cafe, 2011), which uses single grains of rice, arranged in mountains, to represent the world's population, with one grain for each person. Individual piles of rice represent various data sets, such as the deaths in The Holocaust or the population of the United Kingdom. The creators consider this work of art not to be merely a representation of data but also envisage it as helping people to engage with the critical issues of the day and to participate, understand and feel moved enough to take action.

    2. DESIGN SPACE ANALYSIS

Design space analysis is a valuable approach to represent design rationale. MacLean and his colleagues (1993) identified it as a tool to help designers reason about design, while also helping others to better understand why certain design decisions have been made. With this in mind, our aim was to survey the design space of data representations that seek to communicate across more than one sensory channel. The main goal of exploring this space is to bridge the gap between theoretical concerns and the practicalities of research, art and design, thereby providing an overview of the possibilities, as well as identifying key research challenges and questions when representing data beyond the visual paradigm.

    2.1. Methodology

In order to systematically understand the study and design of multisensory data representations, we surveyed contemporary and historical examples that fit the definition outlined above, from fields such as scientific research, design and art contexts. We identified international conferences and periodicals that publish the most articles related to this topic; these include the ACM Conference on Human Factors in Computing Systems (CHI), the ACM Conference on Tangible, Embedded and Embodied Interaction (TEI), the IEEE Information Visualization Conference (InfoVis), ACM Transactions on Computer-Human Interaction (TOCHI) and IEEE Transactions on Visualization and Computer Graphics (TVCG). As well as journal articles and papers, we examined posters and demos, where applicable. We also conducted a comprehensive online search for examples of work outside of academic publications, in particular for artworks. We surveyed work from online networks, such as Creative Applications,1 research laboratories, such as AVIZ,2 archives of digital art exhibitions, such as Ars Electronica,3 and personal websites of artists, designers and researchers.

1 http://www.ca.com
2 http://www.aviz.fr/wiki/pmwiki.php/Main/HomePage
3 www.aec.at/


When gathering examples for our survey, we only included work that fully met our definition of multisensory data representation. We excluded, for instance, work that relied predominantly on a single human sense to interpret the data or where the data mapping is overly ambiguous, with little or no intention to reveal any data insight. Our collection by now contains 154 entries and was used as the basis to establish the design space. The full list of examples included in the collection is provided as additional material. We acknowledge that there may be some examples missing from our collection, mainly due to a lack of exposure surrounding the piece, that is, no publications or exhibitions. However, we feel that the examples in the collection are a fair representation of the state-of-the-art in multisensory data representation.
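To make these inclusion criteria concrete, the following is a minimal sketch of the kind of filter they imply; the record fields and the helper function are illustrative assumptions, not part of the survey tooling described in this article.

```python
# Illustrative sketch only: field names and qualifies() are assumptions,
# not the authors' actual survey tooling.

def qualifies(entry):
    """An entry counts as a multisensory data representation if it clearly
    intends to reveal data insight and needs at least two sensory channels
    to be fully interpreted."""
    needs_two_senses = len(set(entry["sensory_modalities"])) >= 2
    return needs_two_senses and entry["intends_data_insight"]

candidates = [
    {"title": "Example physicalization", "sensory_modalities": ["sight", "touch"],
     "intends_data_insight": True},
    {"title": "Purely visual chart", "sensory_modalities": ["sight"],
     "intends_data_insight": True},
    {"title": "Ambiguous mapping", "sensory_modalities": ["sight", "touch"],
     "intends_data_insight": False},
]

collection = [c for c in candidates if qualifies(c)]  # keeps only the first entry
```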

    2.2. Analysis

The goal of our research was not to develop a taxonomy of multisensory data representations; instead, we were more interested in analysing a large collection of representations and establishing a vocabulary to describe them. This can then be used to help shed light on the choices that designers face when creating multisensory data representations. Once we felt we had exhausted our search for examples, we commenced the process of analysis. The overall goal of our analysis was to establish dimensions of the design space and categorise each example in the collection against these. The methodology used is based on Thematic Analysis (Boyatzis, 1998). This is a grounded approach, typically used to analyse interview transcriptions to present an accurate portrayal of how people feel, think and behave within a particular context (Guest et al., 2012). It is based on a set of procedures designed to identify, examine and report patterns (themes) that emerge from the data. In the context of our study, we followed five discrete phases: (i) familiarization, (ii) thematic coding, (iii) abstraction, (iv) structuring and (v) categorization (Table 1).

2.2.1. Familiarization

The first phase of analysis involved repeatedly reading articles, interviews and descriptions, and watching demonstration videos related to the examples in the collection. This was done to become more familiar with the data (collection). As with many qualitative approaches, this is an essential phase of the data analysis, as it allows the researcher to gain an overview of the data, which later helps in the identification of thematic codes (Guest et al., 2012).

2.2.2. Thematic coding

During the process of familiarization, we annotated the examples with keywords and phrases that describe the key characteristics of each representation, which included aspects such as functionality, use, context, data source, etc. Based on these annotations, we derived an initial set of thematic codes from the data (Table 2, Column 3). Thematic codes can be defined as parts of the data relevant to the research questions that capture the qualities of what is being investigated (Boyatzis, 1998). The procedure we followed was open coding, where codes were only defined as they emerged from the data. After careful coding of all examples, preliminary codes were assigned and then validated, with minor adjustments being applied before the list of codes was finalized. Only codes that featured more than three times were utilized for further analysis.
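The final step of this phase, retaining only codes that occur more than three times, can be sketched as a simple frequency filter; the annotation structure below is an illustrative assumption, not the analysis script actually used.

```python
from collections import Counter

# Illustrative sketch: tally thematic codes across annotated examples and keep
# those occurring more than three times (assumed data structure, not the
# authors' actual analysis script).
annotations = {
    "Fundament":     ["physical", "wood", "passive", "archived"],
    "Measuring Cup": ["physical", "touch", "archived"],
    "H3":            ["sound", "touch", "live", "dynamic"],
    # ... one entry per example in the 154-item collection
}

code_counts = Counter(code for codes in annotations.values() for code in codes)
retained_codes = {code for code, n in code_counts.items() if n > 3}
```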

2.2.3. Abstraction and structuring

Once we had completed coding the collection, we commenced the process of sorting the codes into themes. Themes are broader than thematic codes in that they capture important details and meaningful patterns within the data (Braun and Clarke, 2006). An initial set of themes was produced by one researcher, which was then reviewed and refined together with the second researcher (Table 2, Column 2). These themes were then used to establish the dimensions of the design space, which are: use of modalities, representation intent and human–data relations (Table 2, Column 1).

2.2.4. Categorization

The final phase of analysis involved categorizing the examples in the collection against the dimensions of the design space. The approach we took for this phase is adopted from Prototype Theory (Rosch, 1973). Prototype theory is a mode of graded categorization typically used in cognitive science, where some members of a category are more central than others. In the context of our research, we applied aspects of this theory when sorting examples from the collection into the three dimensions of the design space.

Table 1. Methodological phases used to analyse the examples in the collection.

Phase 1, Familiarization: in-depth analysis (reading, viewing and/or using) of examples from the collection.
Phase 2, Thematic coding: annotate these examples with keywords; coding of examples; second pass on coding; confirm codes.
Phase 3, Abstraction: extracting themes from the codes; validating themes.
Phase 4, Structuring: establishing the dimensions of the design space; validating dimensions.
Phase 5, Categorization: categorize each example from the survey under the dimensions established; confirm categorizing.


We specifically adhered to the two notions which, over time, have been assimilated into prototype theory. First, Family Resemblance, made popular by Ludwig Wittgenstein, where members of a category (in our case: dimension) may be related to one another without having any properties in common that define the category (Rosch and Mervis, 1975). Second, we subscribe to the notion of Generality, where some members of a category (dimension) may be better examples of that category than others (Lakoff, 1987). To exemplify this process, for the dimension of representation intent, the decision on which end of the dimension each example should be placed was based on the available literature, including testimonies from the creators as well as independent reviews of the work. In some cases, there was very little information available, so we had to make a judgement based on personal experience of the domain. Once this process was completed, a database was produced that lists the title, credit, year, number and type of modality, data type and mode of interaction of each example in the collection. We include this database as part of the additional material.4
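The structure of a record in this database can be sketched as follows; the field names paraphrase the description above and may differ from the column headers used in the published CSV.

```python
import csv
from dataclasses import dataclass
from typing import Tuple

# Sketch of one record in the survey database. Field names paraphrase the
# description in the text; the published CSV may use different column headers.
@dataclass
class Representation:
    title: str
    credit: str
    year: int
    modalities: Tuple[str, ...]   # number and type of modality, e.g. ("sight", "touch")
    data_type: str                # e.g. "static" or "dynamic"
    interaction_mode: str         # e.g. "passive" or "active"

def load_collection(path="database.csv"):
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            yield Representation(
                title=row["title"],
                credit=row["credit"],
                year=int(row["year"]),
                modalities=tuple(m.strip() for m in row["modalities"].split(";")),
                data_type=row["data_type"],
                interaction_mode=row["interaction_mode"],
            )
```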

    3. THE DESIGN SPACE

We establish the design space along three axes of dimensionality: use of modalities, representation intent and human–data relations. The dimensions of the design space can be thought of as design choices or questions that system designers must address when creating multisensory data representations. In the following, we use this space as a basis for discussing important design and research challenges and proposing a set of recommendations and guidelines for the future study and design of multisensory data representations.

    3.1. Use of modalities

The first dimension of the design space is: use of modalities. It is based on the themes: multimodal representation, cross-modal representation, sensory modality, representation modality, different materials in use and data insight through experience (Table 2, Column 2, Row 1). Hoggan and Brewster (2006) have already made a distinction between cross-modal and multimodal representation. In cross-modal interaction, different information is transmitted via different modalities, while multimodal interaction refers to the same information being transmitted by each modality. We apply the same concept to data representation. For instance, Ryoji Ikeda's installation Data.anatomy [civic] (Ikeda, 2012) encodes different data, derived from the latest Honda Civic car, in auditory and visual outputs; thus we consider this to be an example of cross-modal representation. However, Tac-tile (Wall and Brewster, 2006a) encodes the same data in sounds and vibration to allow visually impaired users to browse graphical information; thus we consider this to be a multimodal representation.

In regard to sensory modality and representation modality, we also apply a distinction between these two types of modalities. We use the term representation modality when referring to the representational artefact, as this relates to the representational format, medium or material that encodes the data. However, when referring to the human sensory channel used to perceive and interpret the data, from the perspective of the user/audience, we use the term sensory modality. The rationale for this differentiation arose during the survey, when we found multimodal data representations that may not be considered as multisensory, as they only require one sensory channel to interpret the data. An example of this can be seen in Nathalie Miebach's series of intricate data sculptures: Changing Weather (Fig. 4).
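The cross-modal/multimodal distinction made at the start of this subsection can be expressed as a small check over which data variables each representational modality carries; the sketch below, with simplified mappings for the two examples mentioned, is illustrative only.

```python
# Illustrative sketch of the cross-modal vs. multimodal distinction:
# multimodal = every modality carries the same information,
# cross-modal = different modalities carry different information.

def classify(mapping):
    """mapping: dict of representational modality -> set of data variables."""
    variable_sets = {frozenset(v) for v in mapping.values()}
    return "multimodal" if len(variable_sets) == 1 else "cross-modal"

# Tac-tile: the same chart values are rendered as both vibration and pitch.
print(classify({"vibrotactile": {"chart_values"}, "pitch": {"chart_values"}}))  # multimodal

# Data.anatomy [civic] (simplified): different parts of the car data set drive
# the audio and the visuals.
print(classify({"audio": {"engine_data"}, "visual": {"body_geometry"}}))        # cross-modal
```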

Table 2. Design space analysis. Column 1 shows the dimensions of the design space, Column 2 shows the themes established from the codes and Column 3 shows a selection of prominent thematic codes extracted from the data.

Dimension: Use of Modalities
Themes: Multimodal Representation, Cross-modal Representation, Sensory Modality, Representation Modality, Different Materials in Use, Data Insight through Experience
Thematic codes: multimodal, cross-modal, wood, acrylic, sound, water, visual, physical, digital, analogue, food, light, paper, print, taste, experience, intuition, feel, hear, taste, smell, touch

Dimension: Representation Intent
Themes: Casual Representations, Utilitarian Representations, Work of Art, Reveal Little Data Insight, Reveal Large Data Insight
Thematic codes: artistic, casual, functional, fun, work, formal, serious, public, art gallery, performance, learning, school, awareness, personal, science, environmental, social

Dimension: Human–Data Relations
Themes: Interactive Systems, Non-interactive Systems, Live Data, Archived Data
Thematic codes: active, interactive, engaging, dynamic, passive, non-interactive, database, archived, live, static, responsive

4 We also make an online version of this database available at http://www.tactiledata.net/mdr/database.csv


Although this piece, which is a physical representation of weather and oceanic data, incorporates numerous modalities and materials, the viewers rely mainly on their sense of vision to interpret the representation. We do, however, acknowledge that its 3D form may affect the perception of the representation, as people can imagine what it would be like to touch. Piaget and Inhelder (1956) were the first to explain the notion that spatial concepts, such as form, distance, space, etc., are perceived and understood as internalized actions and not as mental images of external things. Search (2015) also notes that bodily movement is an integral part of the perceptual experience, which helps to interrogate and understand virtual and physical objects in space. Thus, we consider Changing Weather, and others like it, to be multisensory. Although the viewer is not allowed to touch the piece, when they physically navigate around the representation, their perception of the data may be altered when vision interrelates with kinaesthetic experience. This may not be defined as touch in the traditional sense, but kinaesthetic body movement, combined with the variety of materials in use, may result in a wider sensory experience, a phenomenon that does not appear in traditional 2D representations such as bar graphs.

The use of alternative representation modalities and materials can be mapped to a similar shift in approach to computing in general. Over the last two decades, HCI subfields such as Ubiquitous Computing (Weiser et al., 1999), Tangible Interaction and Computational Materiality (Ishii and Ullmer, 1997) have sought to free computing from the traditional computer and display monitor. Data representations have gone through a similar shift, albeit from analogue to digital representations. Starting in the late 20th century, assisted by the proliferation of personal computing and the democratization of data, a systematic research effort was coordinated, under the umbrella of InfoVis, to explore the use of digital pixels as units of data representation.

In the following, we first discuss multisensory data representations' use of sensory modalities; following this, we focus on representational modality through the lens of materiality. We conclude our discussion of this dimension by highlighting a recently emerging trend, which encodes the data not in modalities but through the experience of using data-driven artefacts.

3.1.1. Sensory modalities

We have already made the distinction between sensory and representational modality. We now focus the discussion of our analysis around sensory modalities. We do this as we are more interested in representations that require more than one sense to perceive the data than in representations that merely utilize more than one representational modality. The analysis of the 154 examples shows that the predominant sensory modalities required, as part of a set, to interpret the data are sight (151) and touch (144) (Fig. 3, right). While 22 examples also require the audience to interpret the representation by listening to it, only 5 were found that incorporate other sensory modalities (taste and smell). Regarding the combination of senses to interpret the data, 130 required touch and sight, and 11 required more than two sensory channels (see Fig. 3, left).
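These counts can be derived directly from the survey database; the sketch below shows how, using a handful of illustrative entries rather than the full 154-item collection.

```python
from collections import Counter

# Sketch of deriving the sensory-modality statistics from the collection.
# The three entries are illustrative; the reported figures come from all 154 examples.
examples = [
    {"title": "Fundament",    "senses": {"sight", "touch"}},
    {"title": "H3",           "senses": {"sight", "touch", "hearing"}},
    {"title": "Data Cuisine", "senses": {"sight", "taste", "smell"}},
]

per_sense     = Counter(sense for e in examples for sense in e["senses"])
combinations  = Counter(frozenset(e["senses"]) for e in examples)
more_than_two = sum(1 for e in examples if len(e["senses"]) > 2)

print(per_sense)        # how often each sense is required as part of a set
print(combinations)     # how often each combination (e.g. sight + touch) occurs
print(more_than_two)    # examples needing more than two sensory channels
```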

An example of a representation that requires more than one sensory channel to fully interpret the data is Tac-tile (Wall and Brewster, 2006a). While the user browses pie charts with a stylus pen, the data is transmitted to the user's non-dominant hand through vibrotactile feedback as well as the pitch of a sound emitted through speakers. Using more than one modality to represent data is not only a remit of assistive technology; we also found examples in other domains. Visual artist Ryoji Ikeda's installation Data.anatomy [civic] (Ikeda, 2012) immerses an audience in an audio-visual experience driven by the entire data set of the latest Honda Civic car. This multisensory experience offers the audience a unique insight into the complexity of modern car design and manufacturing by mapping data points to sounds and dynamic graphics. Another example from the arts is Perpetual (Tropical) SUNSHINE (fabric.ch, 2012). This installation emits heat and light through a screen composed of several hundred infrared light bulbs. The data driving this installation is transmitted by weather stations all around the Tropic of Capricorn measuring the real-time intensity of the sun. An example that relies on taste, smell and sight is the recent work of the design researcher Moritz Stefaner (2014). An experimental research project, Data Cuisine explores food as a means of data representation or, as Stefaner refers to it, edible diagrams.

Figure 3. Inner pie chart (right): distribution of sensory modalities used in combination with other modalities. Doughnut (outer ring) (left): combinations of sensory modalities.


3.1.2. Materiality

To fully appreciate the role representational modality plays in multisensory data representation, we must also address materiality. The materials used in the representation of data not only dictate (in most cases) the sensory channel used to perceive the data but may also have a metaphorical role. Gross et al. (2013) recently asserted that the materiality of computation is best observed indirectly through the artefacts that employ it and interactions with those artefacts (p. 639). Vallgårda and Redström (2007) claim that computation needs to be combined with other materials before we can consider it to be a material itself. However, Dourish and Mazmanian (2011) argue that digital information (data) is a material, in that it is only ever encountered in material form, and its properties are revealed in the process of interacting with these forms. In attempting to apply these concepts to the materiality of data representation, it is easy to reconcile this with Gross's assertion that computation (or data) is best viewed through representational artefacts; however, the other concepts of informational materiality seem to conflict, as Dourish and Mazmanian (2011) disagree with Vallgårda and Redström (2007). We lean towards Vallgårda and Redström's view, as data is inherently imperceptible and can only be experienced through the lens, form, sound or taste of what is representing it; for example, data is viewed through the digital or analogue marks of a visualization, while data is experienced by viewing and touching the 3D form of a physicalization.

We here focus on the materiality of the objects and surfaces in which the data has been encoded. Table 3 shows a selection of materials used to represent data in the 154 examples we surveyed, sorted by the sensory channel used for interpreting the representation. However, we should note that some materials may be interpreted using several sensory channels, such as wood (sight and touch). When investigating materiality, we framed our analysis around the following question: Is the motivation for material choice metaphorically linked to the data source? From the examples we surveyed, we only found a few that use materials we consider to be metaphorically linked to the data source. One example is Perpetual (Tropical) SUNSHINE (fabric.ch, 2012). Typically, sun intensity is translated into numerical data and represented in Fahrenheit or Celsius. However, this installation directly translates the heat and light emitted from the sun to the heat and light emitted from infrared bulbs and offers the audience who gather in front of the screen a visceral experience of the data source. Other examples include Paul May's From Over Here (May, 2010), comprising numerous laser-cut paper cards, each representing a month of articles from the New York Times (1992–2010) that relate to Ireland. The length of each card is mapped to the number of articles from that month. To explore this representation, people are encouraged to handle and rearrange the cards. The tactile quality of the material (paper) facilitates this as well as referencing the source of the data. The choice of materials used in Iohanna Nicenboim's (2009) series Form Follows Data is also metaphorically linked to the data source. This incorporates a collection of domestic objects whose traditional form has been altered and controlled by personal data. The inner form of a ceramic coffee cup has been produced to represent the amount of coffee a person consumes every morning over the course of a week. In Hal Watts' piece Can We Keep Up?, the choice of materials is also closely linked to the data source. Constructed out of compressed cellulose sponge, it is laser cut to the shape of a country, where the thickness of the sponge represents domestic water usage data from that country (Watts, 2011).

For the majority of examples surveyed, we found no evidence to show that the material choice has any metaphorical connection to the data source. To exemplify this, we return again to Nathalie Miebach's woven sculpture Changing Weather (Fig. 4).

Table 3. List of materials identified in the 154 examples in the collection, sorted by the sensory modality used to engage with the material.

Visual: cardboard, ink, paper, light, water, rice, wood, badges, metal, wicker, shape memory alloy, plywood, precious metals, vinyl
Haptic: textiles, plastic, wood, ceramic, metal, sand, Lego, string, nails, beads, pins, wire, rubber bands, glass, sponge
Auditory: metronome, MIDI, electronic motors
Gustation: bread, fish, meat, sweets
Thermoception: infrared light

Figure 4. Changing Weather with permission from Nathalie Miebach (Miebach, 2012).


Although the perception of movement in this piece may reflect the sometimes-chaotic nature of the represented weather conditions, the material of wicker, yarn, thread and plastic is not related symbolically or metaphorically to the data source. There may be no direct link between the choice of materials in this piece and the data source; however, the materials and techniques used to assemble these certainly add interest and intrigue to the piece. This strategy is often employed by data artists who seek to offer more than just data insight, by creating aesthetic, engaging and often provocative work that sometimes facilitates the generation of meaning that goes beyond the topic of the data source.

3.1.3. Beyond representation modality

During our survey, we also noted nine recent examples that utilize properties beyond representation modality when encoding the data. Although the data is encoded in the physical or digital properties of the artefact, the generation of data insight is primarily facilitated through the experience of using the data representation. An example of this is Melanie Bossert's The World's Best Spintop (Fig. 5A). This piece consists of a number of 3D printed spintops, where the shape of each spintop is a translation of political, environmental, health, education, economic and quality of life data of a specific country. Once the data for a country is collated, an algorithm generates the shape of a spintop. If a country performs poorly, the generated shape will be irregular and the handle will be small, which results in the spintop being difficult to set and keep in motion. However, if the data indicates that the country has performed well, the shape will be more symmetrical and the handle will be long enough to grasp (making it easy to set in motion). Although the data is encoded in the physical properties of the spintop, much like the other physicalizations mentioned already, the data cannot be fully interpreted until the spintop is set in motion, which means the data is actually encoded in the shape as well as the behaviour of the representation. Another example that encodes data beyond traditional modalities is Life Don't Mean A Thing If It Ain't Got That Swing (Fig. 5B), a usable swing installation that encodes data representing the satisfaction levels of a country's population in its architectural properties, that is, the length of rope, the height of the seat, etc. People interpret the data through the experience and enjoyment of their ride on the swing.

In the collaborative artwork Change Ringing by artist Peter Shenai and composer Laurence Osborn (Fig. 5C), climate data, representing changes in temperature over the last century, is encoded in the shape of six cast bronze bells. Again, the data has been translated into the physical properties of the objects. However, the audience cannot perceive or fully interpret the data until they hear the different sounds that emanate when the bells are rung. These examples illustrate a new departure for data representation, away from modalities such as visual, physical, sound, smell and taste to new ways of encoding data in the performance, behaviour, affordances and resulting experience of the data representation.
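Returning to the spintop example, the kind of data-to-behaviour mapping described above can be sketched schematically; the sketch assumes a single composite country score in [0, 1] and is not Bossert's actual generation algorithm.

```python
import math

# Schematic sketch of a data-to-form mapping of the kind described for
# The World's Best Spintop. NOT Bossert's algorithm: it assumes one composite
# country score in [0, 1]; a low score yields a lumpy outline and a short
# handle, making the top hard to set and keep in motion.

def spintop_profile(score, n_points=180):
    """Return (handle_length_mm, radial outline) for a given country score."""
    handle_length = 10 + 40 * score            # poor score -> short handle
    irregularity = 0.3 * (1 - score)           # poor score -> irregular outline
    outline = [
        30 * (1 + irregularity * math.sin(7 * t) * math.cos(3 * t))
        for t in (2 * math.pi * i / n_points for i in range(n_points))
    ]
    return handle_length, outline

handle, outline = spintop_profile(score=0.25)  # a poorly performing country
```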

3.2. Representational intent

During the analysis of examples in our collection, we elicited five themes related to the system designers' intention when creating multisensory data representations. These are casual representations, utilitarian representations, work of art, reveal little data insight and reveal large data insight (Table 2, Column 2, Row 2). They describe the motivation for creating the representation and the purpose of using it. The intent and motivation of those who create multisensory data representations is no different from that in many other areas of design. Some seek to serve utilitarian needs and convey information in a clear manner, while others take a more open-ended approach, seeking to evoke an emotional response or trigger conversations and debate about a concern related to or inspired by their creation. Traditionally, the primary purpose of data representation has been to provide people with an analytical tool that enhances human cognition about a task, and its primary value has typically been assessed along the lines of the effective and efficient discovery of information (Van Wijk, 2013).

Figure 5. (A) The World's Best Spintop with permission from Melanie Bossert (Bossert, 2012). (B) Life Don't Mean A Thing If It Ain't Got That Swing with permission from James Pockson (Pockson, 2014). (C) Change Ringing with permission from Peter Shenai (Shenai and Osborn, 2014).


More recently, however, some have attempted to broaden the intent of visualizations to include more open-ended insight, such as creating awareness and evaluating the emotional response to visualizations (cf. Vande Moere and Claes, 2015). The establishment of research fields such as Information Aesthetics (Lau and Vande Moere, 2007), Artistic Visualization (Viégas and Wattenberg, 2007) and Data Art (Ramirez Gaviria, 2008) has been the catalyst for extending representational intent by incorporating non-analytical intentions and evoking hedonic responses.

In our exploration of the design space, we categorize representational intent along the lines of Utilitarian and Casual, as we feel these terms capture the essence of all themes that make up this dimension. Representations whose intent we define as Utilitarian target a specific audience to reveal data insight related to an explicit task, such as using coloured building blocks to represent production problems within a large organization (Pastor, 2011), while Casual representations are intended for a much broader audience, and the exploration of data may be more open-ended and not related to a work task, such as a piece of data art located in a gallery space (Fig. 4). In our survey, the majority of examples (115/154) were created for a casual audience. We base this classification (Utilitarian or Casual) on a number of factors, including the description provided by the creators of the representation, the domain the work was created in (i.e. scientific or artistic), as well as our own interpretation/experience.

3.2.1. Utilitarian

Returning again to the work of Wall and Brewster (2006b), Tac-tile provides a good example of a multisensory data representation developed primarily to serve the functional needs of its users. In explaining their motivation for Tac-tile, Wall and Brewster point toward the lack of access to data visualizations for people with visual impairment, which hinders them from engaging in numerate disciplines such as maths, economics or science. In creating Tac-tile, they offer people who have a visual impairment a platform that allows them to explore and interpret data representations using both their auditory and haptic perception. Examples from our survey that serve the utilitarian needs of a broader section of society include Water Usage, a prototype design by Nadeem Haidary (2010). A visible compartment within a faucet acts as a visualization of the amount of water consumed each time the faucet is used. As water flows out, a small portion of the water gets redirected through a valve into the faucet's glass chamber, showing the person how much water they are currently using. Although there is a clear effort by the designer to create a smart, playful and striking design, the primary purpose of this product is to make the user aware of water consumption. Another example that assists in an explicit task is a Rearrangeable 3D Bar Chart, which was created by Jansen and her colleagues to compare the efficiency of physical and virtual data representations (Jansen et al., 2013). This representation can be viewed and held when interpreting the data and is created to replicate all the features of a virtual bar graph.

3.2.2. Casual

At the other end of this dimension are representations that are created for a more casual user/audience and typically represent non-critical data. These types of representations may be seen as a medium for artistic expression that draws attention to an issue or concern, or as an informal information display situated in our work or home life. Over the past decade, an increasing number of artists have used data representation as part of their work to make statements and encourage public debate on various cultural, political or social issues. Viégas and Wattenberg (2007) discuss some of this work in detail.

We believe that expanding the perceptual field of representation beyond the visual modality offers practitioners more scope to create representations that evoke personal meaning in their audience. Although no research has focused on the effect that multisensory representations have on people, Hogan and Hornecker (2012) have determined that the meaning acquired by people when interpreting representations that use non-traditional representational modalities, such as haptic and auditory, is different from that generated through the visual modality. A multisensory data representation that exemplifies this is Pulse. Created by Berlin-based artist Markus Kison (2009), Pulse is a live visualization that uses social data from private weblog communities such as blogger.com. The piece uses concepts from Robert Plutchik's book Psychoevolutionary Theory of Emotion to translate emotion-based tags found in recent blog entries into a series of instructions that manifest in sounds and temporal sculptural forms generated by a shape-shifting object. The artist is intentionally vague about the process of mapping the data, leaving it completely open to the interpretation of the audience. In comparison to this, the mapping used by Abigail Reynolds in Mount Fear is very explicit (Fig. 6A), and although the data source (violent crimes in East London between 2002 and 2003) would normally be associated with more utilitarian data representations (Fig. 6B), we consider this piece (and others like it) to be casual, as it is displayed within an art gallery and its purpose is to draw attention to an issue or concern and to trigger debate.

The type of data insight revealed to the viewer through these examples varies, but it is clear that the data insight revealed is not intended to assist in any work-related task. However, this was never the intention of the artists; it is more about provoking the audience to interpret the work in a way that is personally meaningful to them. A key characteristic of representations that lie at this end of the dimension is the open-ended and personal nature of the data insight revealed. It is rare for the artist/designer to intend the same meaning or insight to emerge for everyone who engages with the piece, while at the opposing (Utilitarian) end of the dimension, the intention is to reveal similar data insight for everyone who engages with the data representation.

3.3. Human–data relations

The final dimension of our design space addresses the interaction between people and multisensory data representations, as well as the nature of the data in use. This dimension was derived from combining the themes: interactive systems, non-interactive systems, using live data and using archived data (Table 2, Column 2, Row 3).

The digital age has allowed us to move beyond the printed page to bits, pixels and atoms, and opened new possibilities for designers to let people engage with data in ways that were not possible before. Digital technology also means that live data can now be represented through various dynamic interfaces. Exploring the phenomenon of interaction is a fundamental aspect of HCI research; however, apart from some notable examples (cf. Brodbeck et al., 2009; Card, 2002), it has received far less attention within the InfoVis community. This may partially be due to the different relationship that users have with digital artefacts in the context of InfoVis as opposed to HCI. This was first illustrated by Ware (2000), who identified the concept of asymmetry in data rates, where more data flow from a visualization to the user than from the user to the system. Thus, the interaction is more about altering or exploring the representation than about data input.

Our design space analysis explores the mode of interaction in terms of active versus passive engagement; these axes represent the themes: interactive systems and non-interactive systems (Table 2, Column 2, Row 3). We follow this by addressing the type and nature of the represented data source (dynamic and static), which corresponds with the themes: using live data and using archived data. We categorise these themes under the same dimension as they often relate. A data representation that requires interaction to fully interpret the data we define as active, whereas we consider representations that do not require or encourage interaction to be passive. While we acknowledge that all data representations (multisensory or not) require some level of engagement to form meaning from the representation, we only consider these to be active if they require people to intentionally manipulate the representation or data. We consider data to be dynamic if it is live or multidimensional, whereas data that is fixed and archived we consider static. When analysing the examples in our database, each could be categorised under one of four possible options: {passive/static}, {passive/dynamic}, {active/static} and {active/dynamic}. We found passive interaction and static data to be the most frequent combination (91), while passive interaction combined with dynamic data is the least common combination (4). The distribution of data types is more even when combined with active interaction: {active/static: 25} and {active/dynamic: 35} (Fig. 7).
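These four categories and their frequencies can be tabulated directly from the interaction mode and data nature recorded for each database entry; the sketch below uses a few illustrative entries in place of the full collection.

```python
from collections import Counter

# Sketch of the {interaction mode / data nature} cross-tabulation reported
# above. The entries are illustrative; the published counts come from all 154.
examples = [
    {"title": "Fundament",    "interaction": "passive", "data": "static"},
    {"title": "Datafountain", "interaction": "passive", "data": "dynamic"},
    {"title": "Wable",        "interaction": "active",  "data": "dynamic"},
]

crosstab = Counter((e["interaction"], e["data"]) for e in examples)
for (interaction, data), count in sorted(crosstab.items()):
    print(f"{{{interaction}/{data}}}: {count}")
```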

3.3.1. Interaction mode

Passive. Our survey shows that the dominant mode of interaction with multisensory data representations is passive (93) (Fig. 8, right). Datafountain exemplifies this, as it does not facilitate or offer any means of interaction beyond looking at and listening to the representational output. This work uses water fountains as an information display and incorporates jets of water whose height is controlled by live data. In one application of this installation, the creators mapped the latest value of three international currencies to corresponding water jets (Fig. 9). The creators consider Datafountain to be an example of calm technology or an ambient information display that remains at the periphery of our attention.

Figure 6. (A) Mount Fear, with permission from Abigail Reynolds (Reynolds, 2002); (B) the distribution of London crime rates (source: http://maps.met.police.uk/).




An example that we also consider to be passive is Fundament by Andreas Nicolas Fischer (Fig. 2A). Although this piece affords and encourages physical contact with the material in which the data is encoded (wood), we cannot manipulate the data source or the representation in any way; the process is purely one of perception and interpretation.

Active. Many examples (61) in our survey facilitate and encourage active engagement between the user and the data representation, and an example that exemplifies this is Wable (Physical Interaction Lab, 2008). Comprising a motorized physical bar chart that represents the online activity of a person logged into the system, each of its bars can be configured to link to online social network accounts, such as, for instance, Twitter or Facebook. The system monitors these accounts, and if activity increases (e.g. retweets or shared posts), the bar rises. A slider button incorporated into the physical interface allows viewing of past activity. Wable thus facilitates active engagement while also using dynamic data. Although we acknowledge that the interaction offered by Wable is quite limited, the system does allow users to choose which data stream is being represented, which fosters a sense of ownership in the user.

An example that facilitates much more interaction between the data and the user is Virtual Gravity (Hilsing, 2010). Using Google Insights as the data source, this system represents the past search queries of the user. Using a tangible on a smart surface, the user grabs two words and places them on a digitally mediated physical weighing scale. The frequency of the searches is represented through weight, which means that the more searches, the heavier the word becomes, thus moving its side of the scale downwards (Fig. 10). The complex interaction of Virtual Gravity facilitates a rich and informative engagement between the user and the data source and allows for in-depth exploration of the data.

3.3.2. Nature of data

When we analysed the thematic codes applied to the examples in our collection, we identified a number that related to the nature of the data; these include: dynamic, database, archived, live and static. These were then used to establish the themes using archived data (static) and using live data (dynamic), which we now use as the basis of our discussion of this aspect of the dimension. We could consider these data properties as polar dimensions (Fig. 8, static, dynamic); however, some data could be considered as possessing characteristics from both poles. For instance, data.anatomy [civic] (Ikeda, 2012) represents a data set that, due to its size and multidimensionality, may be seen by some as dynamic, whereas in fact the data that drives this installation is retrieved from a static, archived data set. An example that uses static data is From Over Here (May, 2010). Although the data represented in this piece is relatively large and spans a long period of time (8 years), it is neither live nor multidimensional. Each datum is a single integer, which, when added, represents the number of New York Times articles that refer to Ireland over a given period of time.

Representing dynamic data, in particular live data, can be challenging, especially from a technical perspective, as the representational artefacts require a constant connection to the data source and continual updating of the representation. One example from our collection that is both live and multidimensional is The Rhythm of the City (Guljajeva, 2011). This art piece translates geotagged content from social media platforms, such as Twitter, Flickr and YouTube, into the rhythm of a physical metronome in real time.

Figure 8. Design space dimension: human–data relations. Left pie chart: nature of data. Right pie chart: interaction mode.

Figure 7. Design space dimension: human–data relations, showing the combination of interaction mode and nature of data: {passive/static}, {passive/dynamic}, {active/static} and {active/dynamic}.



Each of the 10 metronomes in the installation is linked to a different city around the world; the more online conversations, stories and media (relating to the city) that are uploaded to the web, the faster the rhythm of the metronome becomes, thus representing the city's pace of life.

    3.4. Plotting the design space

In the previous sections, we introduced and discussed the design space for multisensory data representation. We now select 18 representations from our collection and trace their position in each dimension to present a snapshot of how the design space is populated. When selecting the 18 examples, we wanted to show a fair representation of the breadth of the design space. For instance, the majority of examples in our collection are situated at the casual end of the dimension Representational Intent; this is also the case for the 18 examples chosen. This was similarly done for the other dimensions. The examples represented in Fig. 11 are: Illuminating Clay (Piper et al., 2002), Tactons (Brewster and Brown, 2004), Mount Fear (Reynolds, 2002), Tides and Poles (Miebach, 2006), Weather Bracelet (Whitelaw, 2009), Paper Note (Nip and Spitz, 2012), Data Cuisine (Stefaner, 2014), From Over Here (May, 2010), Underwater (Bowen, 2012), Google Eye (Boleslavsk, 2012), Smell Maps (McLean, 2012), Touching Air (Posavec, 2015), Sensetable (Patten et al., 2001), Centograph (Tinker, 2009), Pulse (Kison, 2009), Data Fountain (Van Mensvoort, 2012), Season in Review (Teehan and Lax Labs, 2013) and Virtual Gravity (Hilsing, 2010).

Figure 11 shows the three dimensions of the design space, and each of the 18 examples is placed into a group along each. Thin lines trace the trajectory of the representations through each dimension, similar to a parallel coordinates plot. However, unlike a typical parallel coordinates plot, the axis labels differ in each column (dimension). When we inspect the graph, we see two zig-zag patterns (Fig. 11: T1 and T2); these patterns represent a correlation between different dimensions of the design space. First, we can see a collection of representations that use two modalities and are also (with two exceptions) casual in intent. These also utilize static data and are generally passive (Fig. 11: T1), whereas those that use three modalities tend to be utilitarian in intent, use dynamic data and offer user interaction (Fig. 11: T2). These patterns remain evident when we increase the number of examples in the parallel coordinates plot.
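For readers who wish to reproduce this kind of plot, the sketch below shows a minimal way to draw categorical parallel coordinates in Python with matplotlib. The category orderings and the three example codings are hypothetical placeholders, not the classifications behind Fig. 11.

    # Minimal categorical parallel-coordinates sketch (matplotlib), in the
    # spirit of Fig. 11. Category orderings and example codings are
    # hypothetical placeholders, not the classifications used in the article.
    import matplotlib.pyplot as plt

    dimensions = ["use of modalities", "representational intent", "human-data relations"]
    categories = {
        "use of modalities": ["two", "three", "four"],
        "representational intent": ["casual", "utilitarian"],
        "human-data relations": ["passive/static", "passive/dynamic",
                                 "active/static", "active/dynamic"],
    }

    examples = {
        "Example A": ("two", "casual", "passive/static"),
        "Example B": ("three", "utilitarian", "active/dynamic"),
        "Example C": ("two", "casual", "active/static"),
    }

    fig, ax = plt.subplots()
    for name, coding in examples.items():
        # Each categorical value becomes an index on its own axis; unlike a
        # numeric parallel-coordinates plot, the tick labels differ per column.
        ys = [categories[dim].index(value) for dim, value in zip(dimensions, coding)]
        ax.plot(range(len(dimensions)), ys, marker="o", label=name)

    ax.set_xticks(range(len(dimensions)))
    ax.set_xticklabels(dimensions)
    ax.set_yticks(range(4))
    ax.legend()
    plt.show()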

In Section 3.3, we highlighted the propensity of representations that offer interaction to be coupled with dynamic data, whereas passive representations are more likely to use static data. This trend also appears when we populate the full design space. However, Figure 11 shows us further correlations between separate dimensions of the design space. These zig-zag patterns (Fig. 11: T1 and T2) raise a number of fundamental questions, including: why do casual representations, more often than not, utilize only two modalities, while utilitarian representations use more? Although we do not have any clear evidence that explains this trend, we can surmise that increasing the use of modalities provides people with more options to generate insight from the data, while the generation of rich data insight may not be the priority of the creators of casual representations. We can also ask: why do casual representations tend to offer passive engagement with static data, while utilitarian representations tend to be interrogated through active interaction with dynamic data? The reason for this trend may also be related to the type of data insight that is offered by the representations; or perhaps the level of data insight generated is a reflection of the type of data and mode of interaction.

Figure 9. Data Fountain, with permission from Koert Van Mensvoort (Van Mensvoort, 2012).

Figure 10. Virtual Gravity, with permission from Silke Hilsing (Hilsing, 2010).



4. DISCUSSION

Our analysis of the design space has provoked three key research questions and challenges that have yet to be fully addressed. These are: (i) Does adding modalities add value? (ii) How do multisensory data representations affect and change the user experience? and (iii) When is a representation not a representation? In the following, we discuss each of these individually. Derived from this discussion, we address research topics that are presently underexplored and list five key recommendations for addressing these issues.

    4.1. Does adding modalities add value?

Once we had analysed the design space regarding the use of modalities, we questioned: does adding more representational modalities add further value to the representation? It may be argued that additional modalities increase the richness of the representation by stimulating more sensory channels. However, adding modalities may also increase the cognitive demand on people to reveal data insight. Research from cognitive psychology reveals that extraneous cognitive load is caused by unnecessary cognitive processes (Kalyuga, 2011). For example, if a representation uses two or more modalities to represent the same data (cross-modal) without revealing anything new, this not only creates redundancy but may also increase the cognitive load required to interpret the representation (Soon Fook and Aldalalah, 2010). Notwithstanding these concerns, research in instructional design has demonstrated that increasing the range of modalities used in a presentation can increase its effectiveness in terms of enhanced learning with less mental effort (Sweller et al., 1998). Although work has commenced (particularly in the field of visual analytics) to investigate the cognitive and perceptual capabilities needed to explore complex information visualizations (cf. Kalyuga, 2011), this research has tended to focus exclusively on mono-modal visual representations. Thus, more research is required to investigate under what conditions the addition of modalities increases the performance and user experience of data representation. From an artistic perspective, expanding the sensory perception of a data representation, which may result in the possible overloading of audiences' cognitive processes, could be a deliberate strategy by the artist.

When discussing the representational intent dimension, we categorised representations as being either casual or utilitarian.

Figure 11. Parallel coordinates plot of 18 existing multisensory data representations across three design space dimensions. Lines trace each representation's position along the design space and shaded areas illustrate trends.



We believe that multisensory data representations at either end of this dimension benefit from expanding the use of representational modalities. For instance, representations created to assist people with an explicit task, such as Tac-tile (Wall and Brewster, 2006a), offer the option to perceive data using two discrete sensory channels. Not only does Tac-tile demonstrate the functional value of combining modalities, it was also found that it can potentially support collaboration and communication between visually impaired people and sighted colleagues working with a shared representation (Wall and Brewster, 2006a, p. 1132). At the other end of this dimension, where the purpose of a representation is not task related but to reveal or draw attention to an issue or concern, we argue that the generation of data insight is merely a part of the overall experience that the creator is hoping to achieve. In some cases, the generation of insight is not the end of the cognitive process but is meant to act as a trigger for further contemplation about the data source. Whether the intention of the creator is to support a utilitarian task or draw attention to a concern, data representations that stimulate more than one sensory channel, we argue, facilitate a more holistic sensory experience and allow the user/audience to interpret the data in a manner that is personal to them.

4.2. Do multisensory data representations affect and change the user experience?

In our design space analysis, we distinguished representations that allow user engagement from those that do not (active versus passive). We also surveyed representations along the trajectory of representational intent, from utilitarian to casual. This analysis shows that not only do multisensory data representations allow for the full spectrum of human interaction, but also that the type of insight people gather from them varies from analytical discovery, over awareness, to intrigue or curiosity. We believe that the main challenge this holds for the research and design community lies in the approaches we take and the methods we employ when evaluating multisensory data visualizations.

Chen and Czerwinski (2000) have already stressed the need for improved methods in areas such as task analysis, usability evaluation and usage analysis. However, apart from rare examples (cf. Hogan and Hornecker, 2012), the majority of the literature evaluates the usability of data representations based on traditional measures such as efficiency and effectiveness. The InfoVis research community has recently begun to acknowledge the shortfall in approaches and techniques to measure non-analytical aspects of information visualizations. Lam et al. (2011) point out that there is a lack of research studying people's subjective experiences of visualizations, and Vande Moere and Claes (2015) have recently advocated incorporating qualitative and tacit aspects in future research on physicalization. Jackson and colleagues (2012) encourage the use of design processes, such as sketching, ideation and critique, which are widely used in related design fields, as a form of early evaluation, while North (2009) discusses the benefits of complementing low-level studies with insight-based evaluation strategies. Beyond the evaluation of use, there have also been attempts to assess the perceptual qualities of data representations. Most notably, Lang (2008) discussed the role of aesthetics in data representation as being more than just a vehicle to engage the user, but an integral part of the science of representation. Although we recognize that there is an inherent difficulty in measuring non-analytical insight facilitated by data representations, such as awareness, intrigue and curiosity, the field of HCI has recently witnessed a renewed interest in evaluating such properties (cf. Hogan and Hornecker, 2012; Wright and McCarthy, 2008). Employing methods already used in this domain to measure the value of data representations will help researchers to better understand the way people make sense of multisensory data representations. There is some evidence that representational modality does affect the way people experience data; however, to date, there is little research that examines the effect that multisensory data representations have on people's ability to generate insight.

    4.3. When is a representation not a representation?

The final question concerns the purpose of creating multisensory data representations. Although this question could be asked of any data representation (multisensory or not), it was provoked by our exploration of the design space, and in particular the dimension of representational intent. In our analysis of this dimension, we have shown that there is a wide range of scientific, design or artistic intent behind their creation. While the purpose of representations that reveal explicit information is clear, those that facilitate the generation of more open-ended or personally meaningful insight can be equally valuable for the user/audience. This is illustrated by Kosara (2007), who argues that the goal of artistic data representations is unlike that of other forms of representation, in that they are used to communicate a concern rather than facilitate explicit data insight. With this in mind, we believe that designers and researchers need to be mindful when employing novel mapping techniques whose primary aim is to provoke open interpretation and personal reflection in the audience. In this case, the data may become merely a medium and is not integral to the type of information that is being transferred to the audience. We do recognize, however, that the boundary between a data representation and a piece of art is sometimes hazy, and that it is not always the intention of the creator to define on which side of the line the data artefact is positioned. Quite often it is left to the audience to appreciate the work as an artistic creation or as a carrier of information, and the value of the work may lie in blurring or shifting this line.



4.4. Underexplored areas and future challenges

We conclude our discussion by highlighting research challenges that are hitherto underexplored and warrant investigation to help this field of research evolve and expand. Our survey showed that the predominant combination of senses needed for interpreting existing data representations is touch and vision (130 examples). Although data sonification is a vibrant field, our survey shows that combining the perceptual qualities of sound with those of vision or touch is still somewhat underexplored (11 examples). Apart from a few recent examples (Stefaner, 2014), data is very rarely interpreted using the other senses: taste and smell. We also note that using humans' innate ability to detect other stimuli, beyond those governed by the traditional senses, is underexplored. Apart from Perpetual (Tropical) SUNSHINE (fabric.ch, 2012), which relies on our ability to sense temperature, we found no examples that rely on, for example, our sense of balance, pain or our kinaesthetic sense to form meaning from a representation. We also note that the majority of examples in our survey (72%) were aimed at single users. Apart from some notable exceptions (cf. Patten et al., 2001), both traditional and multisensory data representations have typically been created with the purpose of communicating to a single person. Moving representations beyond the single modality may provide an opportunity to expand the size of the audience, while also supporting collaboration. In similar terms, the size and dimensionality of the data typically used in our collection is small. During our review, we found only three examples that represented large data sets. What is commonly known today as Big Data has received minimal attention with regard to extending the modalities used to represent these extremely large data sets.

The final aspect of multisensory data representations that we see as being relatively underexplored is the concept of submodalities, in particular the properties of the different representational modalities that can be exploited to create a successful data representation. The field of cartography has a rich tradition of investigating these properties, which dates back to Jacques Bertin's seminal work on visual variables, published in 1967 (Bertin, 1983). First identified for use in sign-systems, Bertin's seven visual variables (position, size, value, texture, colour, orientation and shape) came with a set of rules for their appropriate use, based on whether the visualized data are nominal, ordinal or quantitative. While research has continued over the years to validate Bertin's visual variables (cf. Cleveland and McGill, 1986), the work of MacEachren (1995) has expanded the variables to account for the use of computer technology. His extension of Bertin's visual variables is made up of: location, size, crispness, resolution, transparency, colour value, colour hue, colour saturation, texture, orientation, arrangement and shape. Much like MacEachren's work, the majority of research to date on visual variables has focused on their refinement to account for new technology or different contexts of use (cf. Carpendale, 2003).

There have also been some attempts to transfer Bertin's concept across to other modalities. Krygier (1994) explored the use of sound as a design variable for data representation. Using Bertin's work as a reference point, he surveyed the use of sound as a representational modality in existing systems to establish the sound variables: location, loudness, pitch, register, timbre, duration, rate of change, order and attack/decay. Vasconcellos (1995) also explored Bertin's graphic semiology, but this time in relation to tactile cartography. While Vasconcellos translated Bertin's principles to a tactual format, she did not identify or validate any variables that are unique to tactile perception, such as, for instance, pressure, vibration or temperature. More recently, in the context of physicalization, Jansen and Hornbæk (2016) have revisited Bertin's variables by investigating how people interpret data encoded in the size of 3D objects. They present empirical evidence that shows physical bars achieve the same levels of perceptual accuracy as 2D bars.

Apart from Krygier's (1994) work on sound variables, there has been no research that has attempted to systematically establish or validate design variables for modalities other than the visual. We have also found no work that seeks to explore the combination of design variables from different modalities. For instance, if a representation encodes data in the visual and physical properties of an artefact, we could leverage the principles developed by Bertin to interrogate and use visual variables; however, what would be the equivalent variables for the haptic modality? We could possibly surmise that these would include pressure, vibration, temperature, texture, weight, shape or orientation; however, a concerted effort is needed to establish, confirm and validate these.
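To illustrate how such a taxonomy might eventually be organised, the sketch below gathers the variables named above into a plain Python structure. The visual and sound sets follow Bertin (1983) and Krygier (1994) as listed in the text; the haptic set is only the surmised, unvalidated list from the preceding paragraph and is included purely for illustration.

    # Sketch of a cross-modal variable taxonomy as a plain data structure.
    # Visual variables follow Bertin (1983); sound variables follow
    # Krygier (1994), as cited above. The haptic list is the surmised,
    # unvalidated set mentioned in the text, included only for illustration.
    DESIGN_VARIABLES = {
        "visual": ["position", "size", "value", "texture",
                   "colour", "orientation", "shape"],
        "sound": ["location", "loudness", "pitch", "register", "timbre",
                  "duration", "rate of change", "order", "attack/decay"],
        "haptic (surmised)": ["pressure", "vibration", "temperature",
                              "texture", "weight", "shape", "orientation"],
    }

    def shared_variables(modality_a, modality_b):
        """Return variables common to two modalities, e.g. candidates for
        redundant (cross-modal) encodings of the same data attribute."""
        return set(DESIGN_VARIABLES[modality_a]) & set(DESIGN_VARIABLES[modality_b])

    print(shared_variables("visual", "haptic (surmised)"))
    # -> {'texture', 'shape', 'orientation'} (set order may vary)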

    4.5. Recommendations

In this section, we outline five key recommendations that are meant to provide practical guidance to other design researchers who wish to study or design multisensory data representations. The first highlights issues from a user-centred perspective, the next three relate to representation and sensory modality, while the final one needs to be addressed at an interdisciplinary level.

4.5.1. Methods and approach to evaluation

The approach to evaluating the success of multisensory data representations needs to reflect the purpose of the representation. Although there is ongoing research within the InfoVis research community to investigate how analytical discovery is made and meaning is formed from visualizations, more attention is needed to evaluate representations whose purpose is more open-ended. This mirrors the challenges being met by third-wave HCI researchers (cf. Bødker, 2006). We believe that methods used in HCI are also applicable to the evaluation of multisensory data representations that have similar intent, such as, for instance, evoking hedonic responses.



Another issue that can be addressed through new approaches to evaluation is the cognitive demand on users when interpreting data through more than one modality. Further fundamental research is needed to better understand how multisensory representations affect people's ability to form meaning from the representation.

4.5.2. Beyond touch, vision and sound

We have shown that the main representational modalities currently used in multisensory data representations are visual, haptic and sound. We believe that there is potential in harnessing the perceptual qualities of other modalities to represent data for all the human senses. Although the use of alternative modalities, such as olfaction or taste, may be problematic for generating accurate data insight, combining these with more familiar ones may yield potential for a truly holistic sensorial experience of data.

4.5.3. Beyond visual variables

It is almost 50 years since Jacques Bertin published his work on visual variables (Bertin, 1983). The principles that he postulated in this book are still closely adhered to today by those who choose to visualize data. In order to effectively harness the attributes (variables) of modalities other than vision, we must first understand how these can be used to communicate information effectively and accurately. Having a complete taxonomy of the variables of all modalities will assist researchers and designers to better understand the role they play in the perception and interpretation of data. Once the design variables for all modalities have been established and empirically validated, the next step would involve exploring the use of design variables from different modalities in the representation of data.

4.5.4. Data insight through experience

During our survey, we highlighted a new phenomenon in data representation, where the data is not encoded in the representational modalities but in the behaviour, affordance and experience of the data representation. We believe that this approach has the potential to be a key milestone in the evolution of data representation, much like how developments in computer technology allowed data representations to be live and dynamic, and developments in off-the-shelf microcontrollers and rapid prototyping technology allowed data representations to be physical. Presently, the examples we found that utilize this approach are exploratory pieces created within an arts context. However, as data representations are now commonly used in casual contexts such as libraries, museums or at home, facilitating data insight through people's innate ability to perceive how something feels (experience) is worthy of further exploration. Peter Shenai, one of the creators of My Life Don't Mean A Thing If It Ain't Got That Swing, talks about this approach to data representation as allowing him to push the interactive medium to its limits, up to and including the point of malfunction, as the malfunction or breakdown of the artefact (in his case a swing) serves as a reminder to the user that the data has been skewed strongly in one direction (Shenai, 4 July 2015, personal communication). This is a departure from how HCI and InfoVis researchers presently think of and address the concept of (mal)functionality. Alongside this, we believe that representing data through experience can further harness people's natural instincts and can offer researchers a platform to generate data insight in a more natural and intuitive way.

4.5.5. Multidisciplinary collaboration

The interpretation of data through multiple channels is not exclusive to any one community. Our survey shows that it is practised by a range of professionals and enthusiasts, including academics, researchers, designers, artists, engineers and even hobbyists. While acknowledging that their intentions may vary greatly, we see great potential and benefit in encouraging and supporting open collaborations between these disciplines and practitioners. By combining the logical and technical skills of information scientists and engineers with the aesthetic and perceptive skills of designers and the sensitivity of artists, we believe that data can be represented in a manner that is artistic, engaging, aesthetic, informative and insightful.

    5. LIMITATIONS

In this article, we present a design space for multisensory data representations based on a survey of contemporary and historical examples we collected over time. However, it has several limitations. First, although we have attempted to gather as many examples as possible, we acknowledge that the list is not exhaustive, as work is continuously being produced, and we may also have missed some work due to its limited exposure. Second, the examples in our collection have been produced in a wide range of different domains, such as HCI, InfoVis, art and design. This has meant that the documentation related to each example can vary dramatically. In research fields such as HCI and InfoVis, the description of and motivation for creating the work is easy to find and typically quite detailed, whereas examples situated in fields such as art and design do not always have easily accessible documentation about the work. In these cases we had to rely on a combination of third-party critiques and our own experience and interpretation to place the work within the dimensions of our design space. Finally, we acknowledge that the reader may object to the dimensions of our design space and/or the terminology used to define and describe them. However, we do not intend this research to be authoritative. We have argued in the introduction that the aim of this research is not to reclassify current data representations or prescribe a new domain. Instead, we hope that this article will promote debate, inform design and encourage future research to move towards a design space for multisensory data representation.




    6. CONCLUSION

In this article we define, for the first time, multisensory data representation. Using this definition, we collected and surveyed 154 state-of-the-art representations and established a design space using methods and techniques adopted from Thematic Analysis and Prototype Theory. The three dimensions of our design space are: use of modalities, representation intent and human–data relations. Derived from our analysis of the design space, we addressed key research issues, including questioning the value added by expanding the sensory channels required to interpret a data representation. We also discussed issues from the user's perspective and explored the boundaries between representation and art. We concluded our discussion by highlighting underexplored areas and future challenges and presented five key recommendations aimed at providing practical guidance to other researchers and practitioners who wish to study and create multisensory data representations.

We conclude our exploration of the design space by addressing a semantic dilemma that may be caused by utilizing more than one representational modality within the same representation. One could ask: can a representation that utilizes modalities alongside and beyond the visual modality still be considered a visualization? New media theorist Lev Manovich (2008) defines information visualization simply as a mapping between discrete data and a visual representation, and Eric Rodenbeck, during his keynote address at the 2008 O'Reilly Emerging Technology Conference, spoke about information visualization as 'becoming more than a set of tools and technologies and techniques to understand large data sets. It is emerging as a medium in its own right, with a wide range of expressive potential' (http://en.oreilly.com/et2008/public/schedule/detail/1588). If we subscribe to the concept of defining information visualization as a medium, can we then disassociate the meaning of the word visualization from the visual modality, and propose that data represented in any modality or combination of modalities may be considered an information visualization, and not data sculpture, data art, sonification, physicalization, sensualization, sensification, perceptualization or even multisensory data representation? We believe that by accepting this concept, while also acknowledging the broad expertise needed to meet the theoretical and practical challenges, the HCI community will play an important role in future developments of Information Visualization.

    SUPPLEMENTARY MATERIAL

    Supplementary material is available at http://iwc.oxfordjournals.org/.

    REFERENCES

Alexander, J. et al. (2015) Exploring the Challenges of Making Data Physical. Workshop at CHI '15.

Basdogan, C. and Bowen Loftin, R. (2009) Multimodal display systems: haptic, olfactory, gustatory, and vestibular. The PSI Handbook of Virtual Environments for Training and Education, Vol. 2: VE Components and Training Technologies, pp. 116–134.

Bertin, J. (1983) Semiology of Graphics: Diagrams, Networks, Maps. University of Wisconsin Press, Madison, Wisconsin. Translated by William Berg from Sémiologie Graphique (1967). Paris: Éditions Gauthier-Villars.

Bødker, S. (2006) When Second Wave HCI meets Third Wave Challenges. Proc. NordiCHI '06, ACM, 14–18.

    Boleslavsk, B. (2012) Google Eye. http://id144.org/projects/google-eye/ (accessed May, 2015).

Bossert, M. (2012) The World's Best Spintop. http://spintop.cc/ (accessed November, 2012).

Bowen, D. (2012) Underwater. http://www.dwbowen.com/underwater (accessed January, 2015).

Bowen Loftin, R. (2003) Multisensory perception: beyond the visual in visualization. Comp. Sci. Eng., 5, 56–58.

Boyatzis, R. (1998) Transforming Qualitative Information: Thematic Analysis and Code Development. Sage Publications, Thousand Oaks, California.

Braun, V. and Clarke, V. (2006) Using thematic analysis in psychology. Qual. Res. Psychol., 3, 77–101.

Brewster, S. and Brown, L.M. (2004) Tactons: structured tactile messages for non-visual information display. In Proc. 5th Conf. Australasian User Interface (AUIC '04), Vol. 28, pp. 15–23. Australian Computer Society, Inc., Darlinghurst, Australia.

Brodbeck, D., Mazza, R. and Lalanne, D. (2009) Interactive visualization – a survey. In Human Machine Interaction, pp. 27–46. Springer, Berlin Heidelberg.

Card, S., Mackinlay, J.D. and Shneiderman, B. (1999) Readings in Information Visualization: Using Vision to Think. Morgan Kaufmann Publishers Inc., San Francisco, CA, USA.

Card, S. (2002) Information visualization. In Jacko, J.A. and Sears, A. (eds.), The Human-Computer Interaction Handbook, pp. 544–582. L. Erlbaum Associates Inc., Hillsdale, NJ, USA.

Carpendale, M.S.T. (2003) Considering Visual Variables as a Basis for Information Visualisation. University of Calgary, Department of Computer Science, 2001-693-16.

Chen, C. and Czerwinski, M.P. (2000) Empirical evaluation of information visualizations: an introduction. Int. J. Hum.-Comput. Stud., 53, 631–635.




Cleveland, W.S. and McGill, R. (1986) An experiment in graphical perception. Int. J. Man Mach. Stud., 25, 491–500.

Dourish, P. and Mazmanian, M. (2011) Media as material: information representation as material foundations for organizational practice. Proc. POS '11, 2011, 1–24.

fabric.ch (2012) Perpetual (Tropical) SUNSHINE, 2006. http://www.fabric.ch/pts/ (accessed June, 2013).

Faeth, A., Oren, M. and Harding, C. (2008) Combining 3-D geovisualization with force feedback driven user interaction. In Proceedings of the 16th ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems (GIS '08). ACM, Article 25, 9 pages.

Fischer, A.N. (2007) Fundament. http://anf.nu/fundament/ (accessed May, 2013).

Freides, D. (1974) Human information processing and sensory modality: crossmodal functions, information complexity, memory and deficit. Psychol. Bull., 81, 284–310.

Gross, S. et al. (2013) Structures, forms, and stuff: the materiality and medium of interaction. Pers. Ubiquit. Comput., 637–649.

Guest, G., MacQueen, K.M. and Namey, E.E. (2012) Introduction to Applied Thematic Analysis. SAGE Publications, Inc., California.

    Guljajeva, V. (2011) The Rhythm of