Diseño y Evaluación de Productos Interactivos: Interaction paradigms and virtual environments. Sergio Sayago. Máster Universitario en Ingeniería Informática, ECTS credits: 6, second term, 2012/2013. Departamento de Informática



Contents
• Interaction paradigms
• Ubiquitous / pervasive computing
  • The seminal idea (and works)
  • Two examples
  • Some issues
• Wearable computing (and virtual environments)
  • The basics
  • Three examples
  • Some issues
• Tactile/Tangible computing
  • The basics
  • From GUI to TUI: two examples
• Discussion

Paradigms  


Interaction paradigms • A particular philosophy or way of thinking about interaction design. It is intended to orient designers to the kinds of questions they need to ask

• We believe that we now design interactive products which are better than those we designed 5, 10, 15… years ago • For many years, the prevailing paradigm was personal computing (desktop interfaces, single users)

• Paradigms for interaction have for the most part been dependent on technological advances and their creative application to enhance interaction

• Interaction paradigms include ubiquitous / pervasive computing, wearable computing and tangible/tactile/haptic computing (others are time sharing, the WWW, CSCW…)


Ubiquitous/pervasive computing The seminal idea (and works) • Mark Weiser, Xerox Palo Alto Research Center, 1991: “the most profound technologies are those that disappear. They weave themselves into the fabric of everyday life until they are indistinguishable from it”

• “More than 50 million personal computers have been sold, and the computer nonetheless remains largely in a world of its own. It is approachable only through complex jargon that has nothing to do with the tasks for which people use computers” • Consider writing a letter with a pen. The pen is “invisible” • Now consider writing a letter with a PC. The PC is far from being “invisible”


M. Weiser, “The Computer for the 21st Century,” Scientific American, pp. 94–104, 1991

•  “We  are  trying  to  conceive  a  new  way  of  thinking  about  computers,  one  that  takes  into  account  the  human  world  and  allows  the  computers  themselves  to  vanish  into  the  background”  

• “Ubiquitous computers will come in different sizes, each suited to a particular task. My colleagues and I have built what we call tabs, pads and boards: inch-scale machines that approximate active post-it notes, foot-scale ones that behave something like a sheet of paper and yard-scale displays that are the equivalent of a blackboard or bulletin board” • Tabs -> (not exactly inch-based, but…) Personal Digital Assistants (PDAs)?

• Pads -> tablet PCs (Apple iPad)? • Boards -> whiteboards?

Ubiquitous/pervasive  computing  The  seminal  idea  (and  works)  


• “The technology required for ubiquitous computing comes in three parts: cheap, low-power computers that include equally convenient displays, software for ubiquitous applications and a network that ties them all together”

• “My colleagues and I believe that what we call ubiquitous computing will gradually emerge as the dominant mode of computer access over the next 20 years” • Do we nowadays have ‘cheap, low-power computers that include equally convenient displays’?

• And ‘software and a network that ties them all together’? • “Specialized elements of hardware and software, connected by wires, radio waves and infrared, will be so ubiquitous that no one will notice their presence”

Ubiquitous/pervasive  computing  The  seminal  idea  (and  works)  


• Ageing population & age-related changes in functional abilities (vision, cognition, mobility, hearing) • Smart homes which • aid older people in conducting activities of daily living • monitor certain aspects of a person’s health, with sensors (and other technologies, such as cameras)

• An example of ubiquitous computing in smart homes for supporting better independent living • http://www.casala.ie/the-great-northern-haven.html
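To make the monitoring idea concrete, here is a minimal sketch, not the actual Great Northern Haven system, of a rule that flags prolonged inactivity from a home motion sensor; the threshold and event format are made-up assumptions:

```python
from datetime import datetime, timedelta

# Illustrative sketch only (not the Great Northern Haven system): flag a
# possible problem when a motion sensor in an older person's home has been
# silent for longer than a threshold.
def check_inactivity(last_motion, now, threshold=timedelta(hours=12)):
    """Return True if no motion has been sensed for longer than `threshold`."""
    return (now - last_motion) > threshold

now = datetime(2013, 1, 15, 20, 0)
alert = check_inactivity(datetime(2013, 1, 15, 6, 0), now)  # 14 hours of silence
```

A real deployment would combine many such rules with sensor fusion; the point here is only that "aiding daily living" reduces to simple, inspectable rules over sensor events.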

Ubiquitous/pervasive  computing  Two  examples:  independent  living  


•  Mobile  technologies  create  a  vast,  geographically  aware  sensor  web  that  accumulates  tracks  to  reveal  both  individual  and  social  behaviors  with  unprecedented  detail  

• This article illustrates the potential of user-generated electronic trails to remotely reveal the presence and movement of a city’s visitors

• Understanding population dynamics by type, neighborhood, or region would enable customized services (and advertising) as well as the accurate timing of urban service provisions, such as scheduling monument opening times based on daily, weekly, or monthly tourist demand

Ubiquitous/pervasive  computing  Two  examples:  digital  footprinting  


F. Girardin, J. Blat, F. Calabrese, F. Dal Fiore, and C. Ratti, “Digital Footprinting: Uncovering Tourists with User-Generated Content,” IEEE Pervasive Computing, pp. 36–43, 2008

• Two types of footprint • Passive tracks: left through interaction with an infrastructure, such as a mobile phone network, that produces entries in locational logs

• Active prints: users expose locational data in photos, messages, and sensor measurements

• Both types of footprint in Rome • Geo-referenced photos made publicly available on the photo-sharing web site Flickr

• Aggregate records of wireless network events generated by mobile phone users making calls and sending text messages on the Telecom Italia Mobile (TIM) system

Ubiquitous/pervasive  computing  Two  examples:  digital  footprinting  


• Analyze three years of data, from November 2004 to November 2007: 144,501 geo-referenced photos

• Over a period of three months, timed to coincide with the Venice Biennale from September to November 2006: phone calls

Ubiquitous/pervasive  computing  Two  examples:  digital  footprinting  



behaviors. We elected to use Google Earth to support visual synthesis and our preliminary investigation of digital traces. Accordingly, we stored data collected by the Lochness platform and the Flickr service on a MySQL server, enabling us to flexibly query and aggregate the data further as required. Using software developed in house, we then exported the aggregate results in a format compatible with Google Earth for interactive visual exploration. Precise digital satellite imagery from Telespazio, which is a company providing satellite services, was added as image overlay. Applying these techniques and tools to process digital footprints lets us uncover the presence of crowds and the patterns of movement over time as well as compare user behaviors to generate new hypotheses.

Analyzing Digital Footprints
We used user-originated digital footprints to uncover some new aspects of the presence and movement of tourists during their visit to Rome. Specifically, we used spatial and temporal presence data to visualize user-generated information.

Spatial Presence
To map users’ spatial distribution, we store data in a matrix covering the entire study area. Each cell in the matrix includes data about the number of photos taken, the number of photographers present, and the number of phone calls made by foreigners over a given period of time. The geovisualization in Figure 1 reveals the main areas of tourist activity in part of central Rome over the three-month period of September to November 2006.
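The cell matrix described above can be sketched as follows; the cell size (in degrees) and the event format are assumptions for illustration, not the authors' implementation:

```python
from collections import defaultdict

# Sketch of the cell matrix: the study area is divided into square cells, and
# each cell accumulates counts per activity type. The cell size (in degrees)
# and the event format are assumptions for illustration.
CELL = 0.01

def cell_of(lat, lon):
    return (int(lat / CELL), int(lon / CELL))

def aggregate(events):
    """events: iterable of (lat, lon, kind) tuples, kind in {'photo', 'call'}."""
    grid = defaultdict(lambda: {"photo": 0, "call": 0})
    for lat, lon, kind in events:
        grid[cell_of(lat, lon)][kind] += 1
    return grid

events = [(41.8902, 12.4922, "photo"),   # around the Coliseum
          (41.8905, 12.4925, "photo"),
          (41.9010, 12.5049, "call")]
grid = aggregate(events)  # the two photos fall in the same cell
```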

Figure 1a shows the presence of photographers, and Figure 1b depicts the areas of heavy mobile phone usage by foreigners. The union between visiting photographers and foreign mobile phone customers quickly uncovers the area’s major visitor attractions such as the Coliseum and the main train station next to Piazza della Repubblica. It appears that the Coliseum attracts sightseeing photographers whereas foreign mobile phone users, typically on the move, tend to be active around the train station.

Temporal Presence
Turning to the temporal patterns obtained from the digital traces, we compared the number of photographers and the volume of phone activity for each day over the three-month study period. Figure 2 shows the difference between the average weekly distribution of phone calls made by visitors and the presence of visiting photographers in the areas around the Coliseum and Piazza della Repubblica. The histograms show the normalized variation between the average number of calls and photographs for each day and the average amount for the whole week.

The resulting temporal signatures for the Coliseum area show related trends for both data sets, with higher activity over the weekend than on weekdays. However, the Piazza della Repubblica area reveals a markedly different pattern: photographers, though fewer in number than at the Coliseum, also tend to be active on the weekend, whereas the foreign mobile phone users are much more active during the weekdays.

These temporal signatures provide further evidence of the different types of presence that occur at the tourist points of interest. We can further hypothesize that the Coliseum attracts sightseeing activities (photographers) over the weekend and the train station neighborhood provides facilities for visitors on the move (such as people on business trips) during the weekdays.
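One plausible reading of the histograms' normalization, per-day deviation from the weekly mean relative to that mean, can be sketched as follows; the paper does not spell out the exact formula, and the daily counts here are hypothetical:

```python
# One plausible reading of the histograms' normalization (the exact formula is
# not given in the paper): each day's mean count, relative to the weekly mean.
def normalized_variation(daily_means):
    """Per-day deviation from the weekly mean, as a fraction of that mean."""
    week_mean = sum(daily_means) / len(daily_means)
    return [(d - week_mean) / week_mean for d in daily_means]

# Hypothetical call counts, Monday through Sunday.
calls = [700, 680, 690, 650, 600, 500, 520]
var = normalized_variation(calls)  # weekdays positive, weekend negative here
```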

Desire Lines from Digital Traces
The study of digital footprints also lets us uncover the digital desire lines, which embody people’s paths through the city. Based on the time stamp and location of photos, our software organizes the images chronologically to reconstruct the photographers’ movement. More precisely, we start by revealing the most active areas obtained by spatial data clustering. Next, we aggregate these individual paths to generate desire lines that capture the sequential preferences of visitors. We check the location of each user activity (photo) to see if it’s contained in a cluster and, in the case of a match, add the point to the trace generated by the


Figure 1. Geovisualizations of the presence of (a) 932 tourist photographers and (b) 520,000 phone calls from foreign mobile phones in the Coliseum and Piazza della Repubblica area from September to November 2006. Both types of data cover the train station area in the proximity of the Piazza della Repubblica. The values in each cell are normalized.

It appears that the Coliseum attracts sightseeing photographers whereas foreign mobile phone users, typically on the move, tend to be active around the train station.

Ubiquitous/pervasive  computing  Two  examples:  digital  footprinting  



photo’s owner. This process produces multiple directed graphs that support better quantitative analysis, which gives us the number of sites visited by season, the most visited and photographed points of interest, and data on where photographers start and end their journeys.

Formatting this data according to the open Keyhole Markup Language standard lets us import it into Google Earth to explore the traveling behaviors of specific types of visitors. The resulting visualization in Figure 3 suggests the main points of interest in the city as a whole. Building asymmetric matrices of the number of photographers who moved from point of interest x to point of interest y reveals the predominant sequence of site visits. We can also base queries on the users’ nationalities, the number of days of activity in the city, the number of photos taken, and areas visited during a trip.
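The desire-line aggregation described above, chronological ordering per photographer, cluster matching, and an asymmetric transition matrix, can be sketched as follows; cluster membership is simplified to a dictionary lookup, and the names and data are illustrative:

```python
from collections import defaultdict

# Sketch of the desire-line aggregation: order each photographer's photos
# chronologically, keep those falling in a known cluster (point of interest),
# and count transitions x -> y in an asymmetric matrix. Cluster membership is
# simplified to a dictionary lookup; the paper derives clusters from the data.
def desire_lines(photos, cluster_of):
    """photos: (user, timestamp, place) tuples; cluster_of: place -> POI or None."""
    by_user = defaultdict(list)
    for user, ts, place in photos:
        poi = cluster_of(place)
        if poi is not None:              # ignore photos outside any cluster
            by_user[user].append((ts, poi))
    matrix = defaultdict(int)            # (from_poi, to_poi) -> transition count
    for seq in by_user.values():
        seq.sort()                       # chronological order per photographer
        for (_, a), (_, b) in zip(seq, seq[1:]):
            if a != b:
                matrix[(a, b)] += 1
    return matrix

pois = {"p1": "Coliseum", "p2": "Forum", "p3": None}  # p3: not in any cluster
m = desire_lines([("u1", 1, "p1"), ("u1", 2, "p3"), ("u1", 3, "p2"),
                  ("u2", 1, "p1"), ("u2", 2, "p2")], pois.get)
# m[("Coliseum", "Forum")] counts moves Coliseum -> Forum; the matrix is
# asymmetric, so the reverse direction is counted separately.
```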

Semantic Description
Previous work has demonstrated that we can use spatially and temporally annotated material available on the Web to extract place- and event-related semantic information [13]. In a similar vein, we analyzed the tags associated with the user-originated photos to reveal clues of people’s perception of their environment and the semantics of their perspective of urban space. For instance, the word “ruins” is one of the most-used tags to describe photos in Rome. Mapping the distribution of this tag for 2,866 photos uncovers the most ancient and “decayed” part of the city: the Coliseum and the Forum (Figure 4).

[Figure 2 charts: normalized daily variation (−1.0 to 1.0) of calls vs. photographers, Monday through Sunday. Coliseum: average photographers per day 331 (standard deviation 49.5); average phone calls per day 620.47 (standard deviation 99.65). Train station: average photographers per day 10.85 (standard deviation 3.43); average phone calls per day 1,165.35 (standard deviation 198.43).]

Figure 2. Comparison of the temporal signature of foreigners’ phone activity and number of tourist photographers. It reveals patterns of below-average activity on weekdays and a rise of presence over the weekend at (a) the Coliseum. In contrast, (b) the train station’s temporal signature shows a higher presence of foreigners calling from their mobile phones during the week, whereas photographers indicate a reverse pattern and increased presence over the weekend.

Ubiquitous/pervasive  computing  Two  examples:  digital  footprinting  



Significance of User-Generated Data
These aggregate spatiotemporal records seem to lead to an improved understanding of different aspects of mobility and travel. Although the results are still fairly coarse, we’ve shown the potential for geographically referenced digital footprints to reveal patterns of mobility and preference among different visitor groups. However, in the context of our study, traditional methods would help us better define the usefulness of pervasive user-generated content. For example, hotel occupancy and museum surveys would let us observe and quantify visitors’ presence and movement. Along this vein, the Rome tourism office supplied us with monthly ticket receipts for the Coliseum in 2006.

Figure 5 compares sales figures with mobile usage and photographic activity. Ticket receipts show that there are slightly more Coliseum visitors in October than September, with a major drop in attendance in November. This pattern matches the activity of foreign-registered mobile phones in the area, but it doesn’t coincide with photographer activity. These discrepancies likely exist because the data sets are capturing the activity of different sets of visitors. For example, correlation with ticket sales from the Coliseum fails to account for the fact that users can easily photograph the arena or make a call from the vicinity of the monument without bothering to pay the entry fee. Due to the large difference in the nature of the activity producing the data, it might be that correlating it with user-generated content doesn’t reinforce existing tourism and travel knowledge, but does reveal new dimensions of behavior.

Challenges of User-Generated Data Sets
Our data-processing techniques have tried to account for the fluctuating


Figure 3. Geovisualization of the main paths taken by photographers between points of interest in Rome. Significantly, (a) the 753 visiting Italian photographers are active across many areas of the city, whereas (b) the 675 American visitors stay on a narrow path between the Vatican, Forum, and Coliseum. (Different scales apply to each geovisualization.)

Figure 4. Geovisualization of the areas defined by the position of the 2,886 photos with the tag “ruins” as uploaded by 260 photographers. It reveals the Coliseum and Forum areas known for their multitude of ancient ruins.

Ubiquitous/pervasive  computing  Two  examples:  digital  footprinting  



• The disappearance of ubiquitous computing • from the first generation of the mainframe to a second generation of the personal computer to the third generation of ubiquitous computing

•  Weiser’s  vision  has  succeeded  in  pervading  the  thoughts  of  a  large  community  of  researchers.  Connected  devices,  at  a  variety  of  sizes  and  with  varying  models  of  ownership,  define  our  world  of  compu5ng  today  

• It is increasingly hard to identify what constitutes ubicomp research today, because it is hard to rule anything out as being unrelated to this current generation of computing

• the only concrete suggestion about where we go next is to disappear into the larger computing research agenda, or into the research literature of other domains, and cease to be a niche topic

Ubiquitous/pervasive  computing  Some  issues  

G. D. Abowd, “What next, Ubicomp? Celebrating an intellectual disappearing act,” in UbiComp 2012, pp. 31–41


• A fourth generation of computing? • First generation: one computer to many people (e.g. the first computers) • Second generation: one computer per individual (personal computing) • Third generation: many computers per individual (ubicomp) • Fourth generation: the human-computer experience will be more conjoined than ever before

Ubiquitous/pervasive  computing  Some  issues  


Wearable computing The basics • Wearable computing is the study or practice of inventing, designing, building and using miniature body-borne computational and sensory devices • Wearable computers may be worn under, over, or in clothing, or may also be themselves clothes • Wearable computers versus portable computers • The goal of wearable computing is to position or contextualize the computer in such a way that the human and computer are inextricably intertwined


Mann, Steve (2013): Wearable Computing. In: Soegaard, Mads and Dam, Rikke Friis (eds.), "The Encyclopedia of Human-Computer Interaction, 2nd Ed.". Aarhus, Denmark: The Interaction Design Foundation. Available online at http://www.interaction-design.org/encyclopedia/wearable_computing.html


Wearable computing Three examples: more reality • Wearable computing and virtual environments • Augmented reality: means to super-impose an extra layer on a real-world environment, thereby augmenting it

• Wikitude application for the iPhone: lets you point your iPhone’s camera at something, which is then “augmented” with information from Wikipedia
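One small ingredient of how such an AR browser can work, an assumption for illustration rather than Wikitude's actual code, is the compass bearing from the user to a point of interest, which the app compares with the phone's heading to decide where on screen to draw the overlay:

```python
import math

# Sketch of one ingredient of an AR browser (an assumption for illustration,
# not Wikitude's actual code): the initial great-circle bearing from the user
# to a point of interest. Comparing it with the phone's compass heading tells
# the app where on screen to draw the overlay.
def bearing_deg(lat1, lon1, lat2, lon2):
    """Bearing from (lat1, lon1) to (lat2, lon2) in degrees, 0 = north."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    x = math.sin(dlon) * math.cos(phi2)
    y = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return math.degrees(math.atan2(x, y)) % 360

b = bearing_deg(41.89, 12.49, 41.89, 12.50)  # a POI due east: close to 90 degrees
```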

 


• Diminished Reality • Sometimes there are situations where it is appropriate to remove or diminish clutter.

• The electric eyeglasses (www.eyetap.org) can assist the visually impaired by simplifying rather than complexifying visual input. To do this, visual reality can be re-drawn as a high-contrast cartoon-like world where lines and edges are made more bold, crisp and clear, thus being visible to a person with limited vision.
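The "bold lines and edges" idea can be illustrated with a simple gradient threshold on a grayscale image; this is a sketch of the principle only, as a real mediated-reality device such as the EyeTap does far more:

```python
# Sketch of the "bold lines and edges" principle: threshold the local intensity
# gradient of a grayscale image so that strong edges become marks in an output
# map. A real mediated-reality device such as the EyeTap does far more.
def edge_map(img, thresh=50):
    """img: 2-D list of 0-255 grayscale values. Returns 1 where edges are strong."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = img[y][x + 1] - img[y][x - 1]   # horizontal gradient
            gy = img[y + 1][x] - img[y - 1][x]   # vertical gradient
            if abs(gx) + abs(gy) > thresh:
                out[y][x] = 1
    return out

# A dark square on a light background: marks appear along its border only.
img = [[200] * 6 for _ in range(6)]
for y in range(2, 4):
    for x in range(2, 4):
        img[y][x] = 20
edges = edge_map(img)
```

Rendering only the marked pixels as bold black lines on white yields the high-contrast, cartoon-like view the text describes.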

Wearable  computing  Three  examples:  less  reality  


• Children-driven play can be regarded as free-play

•  Spontaneous  play,  no  fixed  rules  (they  are  defined  while  playing)  

• Playful objects (bags, toys) and social interaction

•  Develop  social  skills,  learn,…  

Wearable  computing  Three  examples:  wearable  sounds  


A. Rosales, E. Arroyo, J. Blat, 2011. Evocative Experiences in the Design of Objects to Encourage Free-Play for Children. In Proceedings of the International Joint Conference on Ambient Intelligence (AmI’11), Amsterdam, Netherlands. A. Rosales, E. Arroyo, J. Blat, 2011. Playful Accessories: Design Process of Two Objects to Encourage Free-Play. In Proceedings of the 4th World Conference on Design Research (IASDR’11).

• Free-play possibilities of movement-to-sound interaction amongst school children (wearable “device” in blue) • What is the play? • How do they play?
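A movement-to-sound mapping of this kind might look like the following sketch; this is an illustration, not the actual prototype, and the thresholds and pitch range are made up:

```python
import math

# Illustrative mapping only (not the actual prototype; thresholds and the
# pitch range are made up): the magnitude of an accelerometer sample, with
# gravity removed, is mapped to a MIDI-like pitch, so bigger movements
# trigger higher notes and stillness stays silent.
def movement_to_pitch(ax, ay, az, base=60, span=24, max_g=3.0):
    """Map acceleration (in g) to a pitch number, or None when (nearly) still."""
    intensity = abs(math.sqrt(ax * ax + ay * ay + az * az) - 1.0)  # subtract 1 g
    if intensity < 0.1:
        return None          # too little movement: no sound
    intensity = min(intensity, max_g)
    return base + round(span * intensity / max_g)

silent = movement_to_pitch(0.0, 0.0, 1.0)   # device at rest -> None
p = movement_to_pitch(1.5, 0.0, 1.0)        # a vigorous shake -> audible pitch
```

Because there are no fixed rules in free-play, an open mapping like this leaves the "game" to be invented by the children while they move.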

Wearable  computing  Three  examples:  wearable  sounds  


• Movement-to-Sound in augmented dance, theater and games • The focus is on visualization, which would restrict free-play • Wearable interfaces to manipulate music… with teenagers and adults

Wearable  computing  Three  examples:  wearable  sounds  


• Playful accessories • Augment everyday clothes • The most relevant playful object in their free-play is the body (based on observations & conversations in different contexts)

• Anywhere and at any time

Wearable  computing  Three  examples:  wearable  sounds  


• This concept comes from co-design activities with kids • They had to use their imagination (summer school) • We would like to change the sounds…

Wearable  computing  Three  examples:  wearable  sounds  


Wearable  computing  Three  examples:  wearable  sounds  


• https://www.youtube.com/watch?v=LBBe8iDeFrs • Some initial results • Ars Electronica 2012, 274 players (0-7: 25; 8-12: 76; 13-17: 17; 18-30: 68; 31-60: 83; 60+: 5)

• 0-7: the wearable kit is not suitable • Teenagers (13-17): high interest, interacted for a long time • Young adults (18+): tried to understand how it works and took pictures

• Adults (31-60): tried to understand how it works; short creative performances

• Schoolchildren (8-12): creative, diverse uses, personal uses

Wearable  computing  Three  examples:  wearable  sounds  


• Surveillance is an established practice, and while controversial, much of the controversy has been (or is being) worked out • Digital cameras on train coaches -> security (considered more important than privacy)

• Sousveillance (individuals wearing digital cameras in their phones), however, being a newer practice, remains in many ways yet to be worked out • Department store: it is illegal to take photos… but you can scan this QR code with your phone to learn more about our products (?)

Wearable  computing  Some  issues:  surveillance  vs  sousveillance  


Tactile/Tangible computing The basics • Tactile? Tangible? Haptic? But is it not generally the visual channel that features most prominently within any given interface?

• A visually dominant display might sometimes be either impractical or impossible: • Extraordinary people under ordinary situations - non-visual interfaces for people with visual disabilities (e.g. the blind)

• Ordinary people under extraordinary situations - the environment might not offer enough light to easily see what is happening

Challis, Ben (2013): Tactile Computing. In: Soegaard, Mads and Dam, Rikke Friis (eds.), "The Encyclopedia of Human-Computer Interaction, 2nd Ed.". Aarhus, Denmark: The Interaction Design Foundation. Available online at http://www.interaction-design.org/encyclopedia/tactile_computing.html


• Two fundamental and distinct senses that together provide us with a sense of touch: the cutaneous sense and kinesthesis

• The cutaneous sense provides an awareness of the stimulation of the receptors within the skin

• The kinesthetic sense provides an awareness of the relative positioning of the body (head, torso, limbs…)

• Thus, we distinguish between: • Tactile perception: variations in cutaneous stimulation; the individual must be static

• Kinesthetic perception: variations in kinesthetic stimulation • Haptic perception: both tactile and kinesthetic (we explore and understand our surroundings using touch)

Tactile/Tangible  computing  The  basics  

Challis,  Ben.  (2013):  Tac5le    Compu5ng.  In:  Soegaard,  Mads  and  Dam,  Rikke  Friis  (eds.).  "The  Encyclopedia  of  Human-­‐Computer  Interac5on,  2nd  Ed.".  Aarhus,  Denmark:  The  Interac5on  Design  Founda5on.  Available  online  at  hVp://www.interac5on-­‐design.org/encyclopedia/tac5le_compu5ng.html  


Tactile/Tangible computing: From GUI to TUI (Tangible UI)

model, the building surface material is switched from bricks to glass, and a projected reflection of sunlight appears to bounce off the walls of the building. Moving the building allows urban designers to be aware of the relationship between the building reflection and other infrastructure. For example, the reflection off the building at sundown might result in distraction to drivers on a nearby highway. The designer can then experiment with altering the angles of the building to oncoming traffic or moving the building further away from the roadway. Tapping again with the material wand changes the material back to brick, and the sunlight reflection disappears, leaving only the projected shadow.

By placing the “wind tool” on the workbench surface, a wind flow simulation is activated based on a computational fluid dynamics simulation, with field lines graphically flowing around the buildings. Changing the wind tool’s physical orientation correspondingly alters the orientation of the computationally simulated wind. Urban planners can identify any potential wind problems, such as areas of high pressure that may result in hard-to-open doors or unpleasant walking environments. An “anemometer” object allows point monitoring of the wind speed (Photo 24.3). By placing the anemometer onto the workspace, the wind speed of that point is shown. After a few seconds, the point moves along the flow lines, to show the wind speed along that particular flow line. The interaction between the buildings and their environment allows urban planners to visualize and discuss inter-shadowing, wind, and placement problems.

In Urp, physical models of buildings are used as tangible representations of digital models of the buildings. To change the location and orientation of buildings, users simply grab and move the physical model as opposed to pointing and dragging a graphical representation on a screen with a mouse. The physical forms of Urp’s building models, and the information associated with their position and orientation upon the workbench, represent and control the state of the urban simulation.

Although standard interface devices for GUIs, such as keyboards, mice, and screens, are also physical in form, the role of the physical representation in TUI provides an important distinction. The physical embodiment of the buildings to represent the computation involving building dimensions and location allows a tight coupling of control of the object and manipulation of its parameters in the underlying digital simulation.

In Urp, the building models and interactive tools are both physical representations of digital information (shadow dimensions and wind speed) and computational functions (shadow interplay). The physical artifacts also serve as controls of the underlying computational simulation (specifying the locations of objects). The specific physical embodiment allows a dual use in representing the digital model and allowing control of the digital


PHOTO 24.1. Urp and shadow simulation.

PHOTO 24.2. Urp and wind simulation.

PHOTO 24.3. inTouch.


“Urp” (Urban Planning Workbench). Urp uses scaled physical models of architectural buildings to configure and control an underlying urban simulation of shadow, light reflection, wind flow, and so on. Urp also provides a variety of interactive tools for querying and controlling the parameters of the urban simulation. These tools include a clock tool to change the position of the sun, a material wand to change the building surface between bricks and glass (with light reflection), a wind tool to change the wind direction, and an anemometer to measure wind speed.

Cited in: H. Ishii, “Tangible User Interfaces,” in The Handbook of Human-Computer Interaction: Fundamentals, Evolving Technologies and Emerging Applications, A. Sears and J. Jacko, Eds. New York: Lawrence Erlbaum Associates, 2008, pp. 470–495
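The clock tool works because shadow geometry is simple to compute from the sun’s position. A minimal sketch of that geometry (illustrative only, not Urp’s actual code): for a building of height h and sun elevation angle θ, the shadow extends h / tan(θ) along the ground, pointing away from the sun’s azimuth.

```python
import math

def shadow_offset(height, sun_elevation_deg, sun_azimuth_deg):
    """Return the (dx, dy) ground offset of a building's shadow tip.

    height: building height in metres
    sun_elevation_deg: sun angle above the horizon (0 < angle <= 90)
    sun_azimuth_deg: compass direction of the sun (0 = north, 90 = east)
    """
    if not 0 < sun_elevation_deg <= 90:
        raise ValueError("sun must be above the horizon")
    length = height / math.tan(math.radians(sun_elevation_deg))
    # The shadow points directly away from the sun.
    away = math.radians(sun_azimuth_deg + 180)
    return (length * math.sin(away), length * math.cos(away))

# At 45 degrees elevation the shadow is as long as the building is tall:
dx, dy = shadow_offset(30.0, 45.0, 180.0)  # sun due south -> shadow due north
```

Moving the physical model only changes where this offset is anchored, which is why grabbing the building and dragging a graphical one are computationally equivalent but experientially very different.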


• Proliferation of tabletop tangible musical interfaces
• Turn live music performance into a test-bed for advanced HCI
• Combines expression and creativity with entertainment
• Files, folders and hyperlinks might not be needed
• Social experience that integrates collaboration and competition
• Experts, non-experts, children, adults… who plays music?

• Potential of tabletop tangible interfaces as new musical instruments
• The distinction between the controllers and the sound-generating system is closer to non-keyboard traditional musical instruments (the guitar) than most computer systems (mice, sliders…)

• Direct control by the musician – movement, small variations

Tactile/Tangible computing: From GUI to TUI (Tangible UI)


S. Jordà, G. Geiger, M. Alonso, and M. Kaltenbrunner, “The reacTable: Exploring the Synergy between Live Music Performance and Tabletop Tangible Interfaces,” in Proceedings of Tangible and Embedded Interaction (TEI’07), Baton Rouge, 2007, pp. 139–146

Tactile/Tangible computing: From GUI to TUI (Tangible UI)

• The reacTable has been designed for installations and casual users as well as for professionals in concert

• It is based on a round table, with no privileged points-of-view or points-of-control

• Several musicians can share the control of the instrument by caressing, rotating and moving physical artifacts on the luminous surface

• Each reacTable object represents a modular synthesizer component with a dedicated function for the generation, modification or control of sound

including the first author. It was with this know-how and with the idea of surpassing mice limitations that the reacTable project started in 2003.

The reacTable: Conception and Description The first step was to believe that everything is feasible, assuming access to a universal sensor which can provide all the necessary information about the instrument and the player state, and enabling thus the conception and design of an ideal instrument without being constrained by technological issues. Luckily enough, the current implementation almost fully coincides with the original model [19].

Figure 2. Four hands at the reacTable

The reacTable has been designed for installations and casual users as well as for professionals in concert. It seeks to combine immediate and intuitive access in a relaxed and immersive way, with the flexibility and the power of digital sound design algorithms, resulting in endless improvement possibilities and mastership. It is based on a round table, thus a table with no head position or leading voice, and with no privileged points-of-view or points-of-control. Like in other circular tables such as the Personal Digital Historian (PDH) System [32] the reacTable uses a radial coordinate system and a radial symmetry.

In the reacTable several musicians can share the control of the instrument by caressing, rotating and moving physical artifacts on the luminous surface, constructing different audio topologies in a kind of tangible modular synthesizer or graspable flow-controlled programming language. Each reacTable object represents a modular synthesizer component with a dedicated function for the generation, modification or control of sound. A simple set of rules automatically connects and disconnects these objects, according to their type and affinity and proximity with the other neighbors. The resulting sonic topologies are permanently represented on the same table surface by a graphic synthesizer in charge of the visual feedback, as shown in figure 2. Auras around the physical objects bring information about their behavior, their parameters values and configuration states, while the lines that draw the connections between the objects, convey the real waveforms of the sound flow being produced or modified at each node.
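The paragraph above says objects connect and disconnect automatically "according to their type and affinity and proximity". A toy sketch of such a rule, assuming just two object types and a distance threshold (the names and threshold are illustrative, not the reacTable's real logic, which also handles affinity and flow direction):

```python
import math

# Illustrative puck set: generators produce sound, filters modify it.
OBJECTS = [
    {"id": 1, "type": "generator", "pos": (0.0, 0.0)},
    {"id": 2, "type": "filter", "pos": (0.5, 0.0)},
    {"id": 3, "type": "filter", "pos": (5.0, 5.0)},  # too far away to connect
]

CONNECT_RADIUS = 1.0  # hypothetical threshold, in table units

def connections(objects, radius=CONNECT_RADIUS):
    """Connect each generator to every compatible filter within `radius`."""
    links = []
    for a in objects:
        if a["type"] != "generator":
            continue
        for b in objects:
            if b["type"] == "filter":
                if math.dist(a["pos"], b["pos"]) <= radius:
                    links.append((a["id"], b["id"]))
    return links
```

Re-evaluating such a rule on every tracking frame is what lets the player rewire the synthesizer simply by sliding pucks around, with the projected lines and auras showing the resulting topology.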

THE REACTABLE IMPLEMENTATION Computer Vision In the previous years, researchers have often criticized the application of computer vision techniques in tabletop development, pointing out drawbacks such as slowness and high latency, instability, lack of robustness and occlusion problems, while favoring other techniques such as electromagnetic field sensing with the use of RFID tagged objects [25] or acoustic tracking by means of ultrasound [23]. Recent implementations such as the PlayAnywhere [40] or the reacTable itself, clearly demonstrate that these reservations are not applicable anymore. For tracking pucks and fingers, the reacTable uses an IR camera situated beneath the translucent table, avoiding therefore any type of occlusion (see Figure 3).

Figure 3. The reacTable components

Additionally, some of the advantages we have found in the use of computer vision (CV) are:

• CV can be combined with beneath projection, permitting a compact all-in-one system, in which both camera and projector are hidden

• Almost unlimited number of different markers (currently several hundreds)

• Almost unlimited number of simultaneous pucks (only limited by the table surface), and with a processing time independent of this number (>= 60 fps)

• Possibility to use cheap pucks (such as for example, specially printed business cards)

• Detection of puck orientation (pucks are not treated as points)

• Natural integration of pucks and finger detection for additional control

ReacTIVision, the reacTable vision engine, is a high-performance computer vision framework for the fast and robust tracking of fiducial markers in a real-time video stream. Fiducial markers are specially designed graphical symbols, which allow the easy identification and location of
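reacTIVision reports tracked fiducials to client applications over the TUIO protocol. As a sketch, decoding the argument list of a TUIO 1.1 `/tuio/2Dobj` "set" message might look as follows (the OSC transport is omitted; the field order follows the TUIO 1.1 specification, but the class and function names here are illustrative):

```python
from dataclasses import dataclass

@dataclass
class FiducialState:
    session_id: int   # s: temporary ID for this tracking session
    fiducial_id: int  # i: printed marker (class) ID
    x: float          # x, y: normalised position on the table, 0..1
    y: float
    angle: float      # a: rotation angle in radians
    vx: float         # X, Y: movement velocity components
    vy: float
    vr: float         # A: rotation velocity
    accel: float      # m: motion acceleration
    rot_accel: float  # r: rotation acceleration

def parse_2dobj_set(args):
    """Decode the arguments of a /tuio/2Dobj 'set' message into a state record."""
    s, i, x, y, a, X, Y, A, m, r = args
    return FiducialState(int(s), int(i), float(x), float(y), float(a),
                         float(X), float(Y), float(A), float(m), float(r))
```

Note that orientation (`angle`) arrives alongside position, which is what lets the reacTable treat pucks as rotatable controls rather than mere points.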


Contents
• Interaction paradigms
• Ubiquitous / pervasive computing
  • The seminal idea (and works)
  • Two examples
  • Some issues
• Wearable computing (and virtual environments)
  • The basics
  • Three examples
  • Some issues
• Tactile/Tangible computing
  • The basics
  • From GUI to TUI: two examples
• Discussion


Discussion
• At what moment in class did you feel most engaged with what was happening?

•  At  what  moment  in  class  were  you  most  distanced  from  what  was  happening?  

• What action that anyone (teacher or student) took did you find most affirming or helpful?

• What action that anyone took did you find most puzzling or confusing?

• What about the class surprised you the most? (This could be about your own reactions to what went on, something that someone did, or anything else that occurred)


Some readings
• A. Rosales, E. Arroyo, J. Blat, 2011. Evocative Experiences in the Design of Objects to Encourage Free-Play for Children. In Proceedings of the International Joint Conference on Ambient Intelligence (AmI’11). Amsterdam, Netherlands
• A. Rosales, E. Arroyo, J. Blat, 2011. Playful Accessories. Design Process of Two Objects to Encourage Free-Play. In Proceedings of the 4th World Conference on Design Research, IASDR’11
• A. Schmidt, P. Bastian, F. Alt, and G. Fitzpatrick, “Interacting with 21st-Century Computers,” IEEE Pervasive Computing, January–March 2012, pp. 22–30
• F. Girardin, J. Blat, F. Calabrese, F. Dal Fiore, and C. Ratti, “Digital Footprinting: Uncovering Tourists with User-Generated Content,” IEEE Pervasive Computing, pp. 36–43, 2008
• G. D. Abowd, “What next, Ubicomp? Celebrating an intellectual disappearing act,” in UbiComp, 2012, pp. 31–41
• H. Ishii, “Tangible User Interfaces,” in The Handbook of Human-Computer Interaction: Fundamentals, Evolving Technologies and Emerging Applications, A. Sears and J. Jacko, Eds. New York: Lawrence Erlbaum Associates, 2008, pp. 470–495
• Mann, Steve (2013): Wearable Computing. In: Soegaard, Mads and Dam, Rikke Friis (eds.), “The Encyclopedia of Human-Computer Interaction, 2nd Ed.”. Aarhus, Denmark: The Interaction Design Foundation. Available online at http://www.interaction-design.org/encyclopedia/wearable_computing.html
• M. Weiser, “The Computer for the 21st Century,” Scientific American, pp. 94–104, 1991
• S. Jordà, G. Geiger, M. Alonso, and M. Kaltenbrunner, “The reacTable: Exploring the Synergy between Live Music Performance and Tabletop Tangible Interfaces,” in Proceedings of Tangible and Embedded Interaction (TEI’07), Baton Rouge, 2007, pp. 139–146
