    FUNDAMENTALS OF
    Wearable Computers and Augmented Reality
    SECOND EDITION

    Edited by
    Woodrow Barfield

    Boca Raton   London   New York

    CRC Press is an imprint of the Taylor & Francis Group, an informa business

    MATLAB is a trademark of The MathWorks, Inc. and is used with permission. The MathWorks does not warrant the accuracy of the text or exercises in this book. This book's use or discussion of MATLAB software or related products does not constitute endorsement or sponsorship by The MathWorks of a particular pedagogical approach or particular use of the MATLAB software.

    CRC Press, Taylor & Francis Group, 6000 Broken Sound Parkway NW, Suite 300, Boca Raton, FL 33487-2742

    © 2016 by Taylor & Francis Group, LLC. CRC Press is an imprint of Taylor & Francis Group, an Informa business

    No claim to original U.S. Government works. Version Date: 20150616

    International Standard Book Number-13: 978-1-4822-4351-2 (eBook - PDF)

    This book contains information obtained from authentic and highly regarded sources. Reasonable efforts have been made to publish reliable data and information, but the author and publisher cannot assume responsibility for the validity of all materials or the consequences of their use. The authors and publishers have attempted to trace the copyright holders of all material reproduced in this publication and apologize to copyright holders if permission to publish in this form has not been obtained. If any copyright material has not been acknowledged please write and let us know so we may rectify in any future reprint.

    Except as permitted under U.S. Copyright Law, no part of this book may be reprinted, reproduced, transmitted, or utilized in any form by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying, microfilming, and recording, or in any information storage or retrieval system, without written permission from the publishers.

    For permission to photocopy or use material electronically from this work, please access www.copyright.com (http://www.copyright.com/) or contact the Copyright Clearance Center, Inc. (CCC), 222 Rosewood Drive, Danvers, MA 01923, 978-750-8400. CCC is a not-for-profit organization that provides licenses and registration for a variety of users. For organizations that have been granted a photocopy license by the CCC, a separate system of payment has been arranged.

    Trademark Notice: Product or corporate names may be trademarks or registered trademarks, and are used only for identification and explanation without intent to infringe.

    Visit the Taylor & Francis Web site at http://www.taylorandfrancis.com and the CRC Press Web site at http://www.crcpress.com

    Contents

    Preface ................................................................... ix
    Acknowledgments ........................................................... xi
    Editor .................................................................... xiii
    Contributors .............................................................. xv

    Section I  Introduction

    Chapter 1 Wearable Computers and Augmented Reality: Musings and Future Directions ...........................................................3

    Woodrow Barfield

    Chapter 2 Wearable Computing: Meeting the Challenge ................................... 13

    Thad Starner

    Chapter 3 Intimacy and Extimacy: Ethics, Power, and Potential of Wearable Technologies .................................................. 31

    Patricia Flanagan, Despina Papadopoulos, and Georgina Voss

    Section II  The Technology

    Chapter 4 Head-Mounted Display Technologies for Augmented Reality .......... 59

    Kiyoshi Kiyokawa

    Chapter 5 Optics for Smart Glasses, Smart Eyewear, Augmented Reality, and Virtual Reality Headsets .............................................................85

    Bernard Kress

    Chapter 6 Image-Based Geometric Registration for Zoomable Cameras Using Precalibrated Information ...................................................... 125

    Takafumi Taketomi


    Chapter 7 Visual Tracking for Augmented Reality in Natural Environments .....151

    Suya You and Ulrich Neumann

    Chapter 8 Urban Visual Modeling and Tracking .............................................. 173

    Jonathan Ventura and Tobias Höllerer

    Chapter 9 Scalable Augmented Reality on Mobile Devices: Applications, Challenges, Methods, and Software ................................................. 195

    Xin Yang and K.T. Tim Cheng

    Chapter 10 Haptic Augmented Reality: Taxonomy, Research Status, and Challenges ................................................................. 227

    Seokhee Jeon, Seungmoon Choi, and Matthias Harders

    Section III  Augmented Reality

    Chapter 11 Location-Based Mixed and Augmented Reality Storytelling .......... 259

    Ronald Azuma

    Chapter 12 Dimensions of Spatial Sound and Interface Styles of Audio Augmented Reality: Whereware, Wearware, and Everyware ......... 277

    Michael Cohen

    Chapter 13 Applications of Audio Augmented Reality: Wearware, Everyware, Anyware, and Awareware .............................................309

    Michael Cohen and Julián Villegas

    Chapter 14 Recent Advances in Augmented Reality for Architecture, Engineering, and Construction Applications ................................... 331

    Amir H. Behzadan, Suyang Dong, and Vineet R. Kamat

    Chapter 15 Augmented Reality Human–Robot Interfaces toward Augmented Robotics ........................................................................ 399

    Maki Sugimoto

    Chapter 16 Use of Mobile Augmented Reality for Cultural Heritage ................ 411

    John Krogstie and Anne-Cecilie Haugstvedt


    Chapter 17 Applications of Augmented Reality for the Automotive Industry .....433

    Vincent Gay-Bellile, Steve Bourgeois, Dorra Larnaout, and Mohamed Tamaazousti

    Chapter 18 Visual Consistency in Augmented Reality Compositing ................. 457

    Jan Fischer

    Chapter 19 Applications of Augmented Reality in the Operating Room ...........485

    Ziv Yaniv and Cristian A. Linte

    Chapter 20 Augmented Reality for Image-Guided Surgery ............................... 519

    Marta Kersten-Oertel, Pierre Jannin, and D. Louis Collins

    Section IV  Wearable Computers and Wearable Technology

    Chapter 21 Soft Skin Simulation for Wearable Haptic Rendering ................................ 551

    Gabriel Cirio, Alvaro G. Perez, and Miguel A. Otaduy

    Chapter 22 Design Challenges of Real Wearable Computers ............................ 583

    Attila Reiss and Oliver Amft

    Chapter 23 E-Textiles in the Apparel Factory: Leveraging Cut-and-Sew Technology toward the Next Generation of Smart Garments .......... 619

    Lucy E. Dunne, Cory Simon, and Guido Gioberto

    Chapter 24 Garment Devices: Integrating Energy Storage into Textiles ............ 639

    Kristy Jost, Genevieve Dion, and Yury Gogotsi

    Chapter 25 Collaboration with Wearable Computers ......................................... 661

    Mark Billinghurst, Carolin Reichherzer, and Allaeddin Nassani

    Author Index......................................................................................................... 681

    Subject Index ........................................................................................................ 707


    Preface

    In the early 1990s, I was a member of the coordinating committee that put together the first conference on wearable computers, which, interestingly, was followed by a highly publicized wearable computer fashion show. Speaking at the conference, I recall making the following comment about wearable computers: "Are we wearing them, or are they wearing us?" At the time, I was thinking that eventually advances in prosthetics, sensors, and artificial intelligence would result in computational tools that would have amazing consequences for humanity. Developments since then have proven that vision correct. The first edition of Fundamentals of Wearable Computers and Augmented Reality, published in 2001, helped set the stage for the coming decade, in which an explosion in research and applications for wearable computers and augmented reality occurred.

    When the first edition was published, much of the research in augmented reality and wearable computers consisted primarily of proof-of-concept projects; there were few, if any, commercial products on the market. There was no Google Glass or handheld smartphones equipped with sensors and the computing power of a mid-1980s supercomputer. And the apps for handheld smartphones that exist now were nonexistent then. Fast forward to today: the commercial market for wearable computers and augmented reality is in the millions of dollars and heading toward the billions. From a technology perspective, much of what is happening now with wearables and augmented reality would not have been possible even five years ago. So, as an observation, Ray Kurzweil's law of accelerating returns seems to be alive and well with wearable computer and augmented reality technology, because 14 years after the first edition of this book, the capabilities and applications of both technologies are orders of magnitude faster, smaller, and cheaper.

    As another observation, the research and development of wearable computers and augmented reality technology that was once dominated by U.S. universities and research laboratories is truly international in scope today. In fact, the second edition of Fundamentals of Wearable Computers and Augmented Reality contains contributions from researchers in the United States, Asia, and Europe. And conferences in this field are as likely to be held these days in Europe or Asia as in the United States. These are very positive developments and will lead to even more amazing applications involving the use of wearable computers and augmented reality technology in the future.

    Just as the first edition of this book provided comprehensive coverage of the field, the second edition attempts to do the same, specifically by including chapters on a broad range of topics written by outstanding researchers and teachers within the field. All of the chapters are new, with an effort to again provide fundamental knowledge on each topic so that a valuable technical resource is provided to the community. Specifically, the second edition contains chapters on haptics, visual displays, the use of augmented reality for surgery and manufacturing, technical issues of image registration and tracking, and augmenting the environment with wearable


    audio interfaces. The second edition also contains chapters on the use of augmented reality in preserving our cultural heritage, on human–computer interaction and augmented reality technology, on augmented reality and robotics, and on what we termed in the first edition "computational clothing." Still, even with this wide range of applications, the main goal of the second edition is to provide the community with fundamental information and basic knowledge about the design and use of wearable computers and augmented reality to enhance people's lives. I believe the chapter authors accomplished that goal, showing great expertise and breadth of knowledge. My hope is that this second edition can also serve as a stimulus for developments in these amazing technologies in the coming decade.

    Woodrow Barfield, PhD, JD, LLM
    Chapel Hill, North Carolina

    The images for augmented reality and wearable computers are essential for the understanding of the material in this comprehensive text; therefore, all color images submitted by the chapter authors are available at http://www.crcpress.com/product/isbn/9781482243505.

    MATLAB is a registered trademark of The MathWorks, Inc. For product information, please contact:

    The MathWorks, Inc.
    3 Apple Hill Drive
    Natick, MA 01760-2098 USA
    Tel: 508-647-7000
    Fax: 508-647-7001
    E-mail: [email protected]
    Web: www.mathworks.com


    Acknowledgments

    I offer special thanks to the following chapter authors for providing images that appear on the cover of the book: Kiyoshi Kiyokawa, an occlusion-capable optical see-through head-mounted display; Miguel A. Otaduy, Gabriel Cirio, and Alvaro G. Perez, simulation of a deformable hand with nonlinear skin mechanics; Vineet R. Kamat, Amir H. Behzadan, and Suyang Dong, augmented reality visualization of buried utilities during excavation; Marta Kersten-Oertel, virtual vessels of an arteriovenous malformation (AVM) (with color-coded vessels [blue for veins, red for arteries, and purple for the AVM nidus]) overlaid on a live image of a 3D-printed nylon anthropomorphic head phantom; Seokhee Jeon, Seungmoon Choi, and Matthias Harders, an example of a visuo-haptic augmented reality system modulating the stiffness of a real soft object; and Kristy Jost, Genevieve Dion, and Yury Gogotsi, 3D simulations of knitted smart textiles (rendered on the Shima Seiki Apex 3 Design Software).

    Several members of CRC Press contributed in important ways to this book's publication and deserve recognition. First, I thank Jessica Vakili, senior project coordinator, for answering numerous questions about the process of editing the book, and those of the chapter authors, in a timely, patient, and always efficient manner. I also thank and acknowledge Cindy Renee Carelli, senior acquisition editor, for contacting me about editing a second edition, championing the proposal through the publisher's review process, and her timely reminders to meet the deadline. The project editor, Todd Perry, is thanked for the important task of overseeing the coordination, copyediting, and typesetting of the chapters. Gowthaman Sadhanandham is also thanked for his work in production and the assistance provided to authors.

    Most importantly, in my role as editor for the second edition, I acknowledge and thank the authors for their hard work and creative effort to produce outstanding chapters. To the extent this book provides the community with a valuable resource and stimulates further developments in the field, each chapter author deserves much thanks and credit. In many ways, this book began 14 years ago, when the first edition was published. To receive contributions from some of the original authors, to see how their careers developed over the years, and to see the contributions they made to the field was a truly satisfying experience for me. It was a great honor that such a distinguished group again agreed to join the project.

    Finally, in memoriam, I thank my parents for the freedom they gave me to follow my interests and for the Erlenmeyer, distilling, and volumetric flasks when I was a budding teenage scientist. Further, my niece, Melissa, is an inspiration and serves as the gold standard in the family. Last but not least, I acknowledge my daughter, Jessica, student and college athlete, for keeping me young and busy. I look forward to all she will achieve.


    Editor

    Woodrow Barfield, PhD, JD, LLM, has served as professor of engineering at the University of Washington, Seattle, Washington, where he received the National Science Foundation Presidential Young Investigator Award. Professor Barfield directed the Sensory Engineering Laboratory, where he was involved in research on sensors and augmented and virtual reality displays. He has served as a senior editor for Presence: Teleoperators and Virtual Environments and is an associate editor for Virtual Reality. He has more than 350 publications and presentations, including invited lectures and keynote talks, and holds two degrees in law.

    Contributors

    Oliver Amft
    ACTLab Research Group
    University of Passau
    Passau, Germany

    Ronald Azuma
    Intel Labs
    Santa Clara, California

    Woodrow Barfield
    Chapel Hill, North Carolina

    Amir H. Behzadan
    Department of Civil, Environmental, and Construction Engineering
    University of Central Florida
    Orlando, Florida

    Mark Billinghurst
    Human Interface Technology Laboratory New Zealand
    University of Canterbury
    Christchurch, New Zealand

    Steve Bourgeois
    Vision and Content Engineering Laboratory
    CEA LIST
    Gif-sur-Yvette, France

    K.T. Tim Cheng
    Department of Electrical and Computer Engineering
    University of California, Santa Barbara
    Santa Barbara, California

    Seungmoon Choi
    Pohang University of Science and Technology
    Pohang, South Korea

    Gabriel Cirio
    Department of Computer Science
    Universidad Rey Juan Carlos
    Madrid, Spain

    D. Louis Collins
    Department of Biomedical Engineering
    Department of Neurology & Neurosurgery
    Montreal Neurological Institute
    McGill University
    Montreal, Canada

    Michael Cohen
    Computer Arts Laboratory
    University of Aizu
    Aizu-Wakamatsu, Japan

    Genevieve Dion
    Shima Seiki Haute Technology Laboratory
    ExCITe Center
    Antoinette Westphal College of Media Arts and Design
    Drexel University
    Philadelphia, Pennsylvania

    Suyang Dong
    Department of Civil and Environmental Engineering
    University of Michigan
    Ann Arbor, Michigan

    Lucy E. Dunne
    Department of Design, Housing, and Apparel
    University of Minnesota
    St. Paul, Minnesota

    Jan Fischer
    European Patent Office
    Munich, Germany

    Patricia Flanagan
    Wearables Lab
    Academy of Visual Arts
    Hong Kong Baptist University
    Kowloon Tong, Hong Kong

    Vincent Gay-Bellile
    Vision and Content Engineering Laboratory
    CEA LIST
    Gif-sur-Yvette, France

    Guido Gioberto
    Department of Computer Science and Engineering
    University of Minnesota
    Minneapolis, Minnesota

    Yury Gogotsi
    Department of Materials Science and Engineering
    College of Engineering
    A.J. Drexel Nanomaterials Institute
    Drexel University
    Philadelphia, Pennsylvania

    Matthias Harders
    University of Innsbruck
    Innsbruck, Austria

    Anne-Cecilie Haugstvedt
    Computas A/S
    Lysaker, Norway

    Tobias Höllerer
    University of California
    Santa Barbara, California

    Pierre Jannin
    INSERM Research Director
    LTSI, Inserm UMR 1099
    University of Rennes
    Rennes, France

    Kristy Jost
    Department of Materials Science and Engineering
    College of Engineering
    A.J. Drexel Nanomaterials Institute
    and
    Shima Seiki Haute Technology Laboratory
    ExCITe Center
    Antoinette Westphal College of Media Arts and Design
    Drexel University
    Philadelphia, Pennsylvania

    Seokhee Jeon
    Kyung Hee University
    Seoul, South Korea

    Vineet R. Kamat
    Department of Civil and Environmental Engineering
    University of Michigan
    Ann Arbor, Michigan

    Marta Kersten-Oertel
    Department of Biomedical Engineering
    Montreal Neurological Institute
    McGill University
    Montreal, Quebec, Canada

    Kiyoshi Kiyokawa
    Cybermedia Center
    Osaka University
    Osaka, Japan

    Bernard Kress
    Google [X] Labs
    Mountain View, California

    John Krogstie
    Department of Computer and Information Science
    Norwegian University of Science and Technology
    Trondheim, Norway

    Dorra Larnaout
    Vision and Content Engineering Laboratory
    CEA LIST
    Gif-sur-Yvette, France

    Cristian A. Linte
    Department of Biomedical Engineering
    Rochester Institute of Technology
    Rochester, New York

    Allaeddin Nassani
    Human Interface Technology Laboratory New Zealand
    University of Canterbury
    Christchurch, New Zealand

    Ulrich Neumann
    Department of Computer Science
    University of Southern California
    Los Angeles, California

    Miguel A. Otaduy
    Department of Computer Science
    Universidad Rey Juan Carlos
    Madrid, Spain

    Despina Papadopoulos
    Interactive Telecommunications Program
    New York University
    New York, New York

    Alvaro G. Perez
    Department of Computer Science
    Universidad Rey Juan Carlos
    Madrid, Spain

    Carolin Reichherzer
    Human Interface Technology Laboratory New Zealand
    University of Canterbury
    Christchurch, New Zealand

    Attila Reiss
    Chair of Sensor Technology
    University of Passau
    Passau, Germany

    Cory Simon
    Johnson Space Center
    National Aeronautics and Space Administration
    Houston, Texas

    Thad Starner
    School of Interactive Computing
    Georgia Institute of Technology
    Atlanta, Georgia

    Maki Sugimoto
    Faculty of Science and Technology
    Department of Information and Computer Science
    Keio University
    Tokyo, Japan

    Takafumi Taketomi
    Nara Institute of Science and Technology
    Nara, Japan

    Mohamed Tamaazousti
    Vision and Content Engineering Laboratory
    CEA LIST
    Gif-sur-Yvette, France

    Jonathan Ventura
    University of Colorado
    Colorado Springs, Colorado

    Julián Villegas
    Computer Arts Laboratory
    University of Aizu
    Aizu-Wakamatsu, Japan

    Georgina Voss
    Science and Technology Policy Research
    University of Sussex
    Sussex, United Kingdom

    Xin Yang
    Department of Electrical and Computer Engineering
    University of California, Santa Barbara
    Santa Barbara, California

    Ziv Yaniv
    TAJ Technologies, Inc.
    Mendota Heights, Minnesota
    and
    Office of High Performance Computing and Communications
    National Library of Medicine
    National Institutes of Health
    Bethesda, Maryland

    Suya You
    Department of Computer Science
    University of Southern California
    Los Angeles, California

    Section I

    Introduction

    Chapter 1
    Wearable Computers and Augmented Reality: Musings and Future Directions

    Woodrow Barfield

    In this chapter, I briefly introduce the topic of wearable computers and augmented reality, with the goal of providing the reader a roadmap to the book, a brief historical perspective, and a glimpse into the future of a sensor-filled, wearable computer and augmented reality (AR) world. While each technology alone (AR and wearables) provides people with amazing applications and technologies to assist them in their daily life, the combination of the technologies is often additive and, in some cases, multiplicative, as, for example, when virtual images, spatialized sound, and haptic feedback are combined with wearable computers to augment the world with information whenever or wherever it is needed.

    Let me begin to set the stage by offering a few definitions. Azuma (1997) defined an augmented reality application as one that combines the real world with the virtual world, is interactive and in real time, and is registered in three dimensions. Often, the platform to deliver augmented reality is a wearable device or, in the case of a smartphone, a handheld computer. Additionally, most people think of a wearable computer as a computing device that is small and light enough to be worn on one's body without causing discomfort. And unlike a laptop or a palmtop, a wearable computer is constantly turned on and is often used to interact with the real world through sensors that are becoming more ubiquitous each day. Furthermore, information provided by a wearable computer can be very context and location sensitive, especially when combined with GPS. In this regard, the computational model of wearable computers differs from that of laptop computers and personal digital assistants.
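The "registered in three dimensions" requirement in Azuma's definition can be made concrete with a small sketch (my own illustration, not from the book): a virtual annotation is anchored at a fixed world coordinate, and each frame it is re-projected into the camera image using the currently tracked camera pose, so it stays locked to the same real-world spot. The intrinsic values and poses below are illustrative assumptions.

```python
import numpy as np

def project_point(p_world, R, t, K):
    """Project a 3-D world point into pixel coordinates.

    R (3x3) and t (3,) map world coordinates into the camera frame;
    K is the 3x3 camera intrinsic matrix.
    """
    p_cam = R @ p_world + t          # world frame -> camera frame
    u, v, w = K @ p_cam              # perspective projection
    return np.array([u / w, v / w])  # homogeneous -> pixel coordinates

# Illustrative intrinsics: 800-pixel focal length, 640x480 image centre.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

anchor = np.array([0.0, 0.0, 2.0])   # virtual label 2 m in front of the origin

# Camera at the world origin, looking down +Z.
R, t = np.eye(3), np.zeros(3)
print(project_point(anchor, R, t, K))   # label sits at the image centre, [320. 240.]

# When the head tracker reports that the camera has moved 10 cm to the
# right, the world->camera offset becomes [-0.1, 0, 0]; re-projecting
# shifts the label left in the image, keeping it on the same real spot.
t_moved = np.array([-0.1, 0.0, 0.0])
print(project_point(anchor, R, t_moved, K))  # [280. 240.]
```

In a real AR system, R and t would come from the visual or inertial tracking discussed in Chapters 6 through 8, and the projection would run once per displayed frame.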

    In the early days of research in developing augmented reality, many of the same researchers were also involved in creating immersive virtual environments. We began to discuss different degrees of reality and virtuality. Early on, Paul Milgram from the University of Toronto codified the thinking by proposing a virtuality continuum,

    CONTENTS

    1.1 Public Policy .........................................................7
    1.2 Toward a Theory of Augmented Reality ..................................9
    1.3 Challenges and the Future Ahead ...................................... 10
    References ................................................................ 11


    which represents a continuous scale ranging between the completely virtual, a virtuality, and the completely real, reality (Milgram et al., 1994). The reality–virtuality continuum therefore encompasses all possible variations and compositions of real and virtual objects. The area between the two extremes, where both the real and the virtual are mixed, is the so-called mixed reality, which Paul indicated consists of both augmented reality, where the virtual augments the real, and augmented virtuality, where the real augments the virtual. Another prominent early researcher in wearables, and a proponent of the idea of mediating reality, was Steve Mann (2001, 2002). Steve, now at the University of Toronto, describes wearable computing as miniature body-borne computational and sensory devices; he expanded the discussion of wearable computing to include the more expansive term "bearable computing," by which he meant wearable computing technology that is on or in the body, and with numerous examples, Steve showed how wearable computing could be used to augment, mediate, or diminish reality (Mann, 2002).
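Milgram's taxonomy can be caricatured in a few lines of code. This is a toy sketch of my own, not Milgram's formalism: it places an application on the continuum by the fraction of its scene content that is virtual, and uses a simple 0.5 threshold as a stand-in for "which side dominates", which is an assumption made only for illustration.

```python
def classify(virtual_fraction):
    """Place an application on a toy reality-virtuality continuum.

    virtual_fraction: 0.0 = a purely real scene, 1.0 = a purely virtual
    one; everything strictly in between counts as mixed reality.
    """
    if virtual_fraction == 0.0:
        return "reality"
    if virtual_fraction == 1.0:
        return "virtuality"
    # Within mixed reality, distinguish which side dominates: mostly-real
    # scenes with virtual overlays are augmented reality; mostly-virtual
    # scenes with real elements mixed in are augmented virtuality.
    return ("augmented reality" if virtual_fraction < 0.5
            else "augmented virtuality")

print(classify(0.1))   # a heads-up navigation overlay -> augmented reality
print(classify(0.9))   # a virtual world with a live video inset -> augmented virtuality
```

The point of the sketch is only that AR and augmented virtuality are regions of one continuum rather than separate categories; any real system would be placed by judgment, not by a numeric threshold.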

    When I think of the different types of computing technology that may be worn on or in the body, I envision a continuum that starts with the most basic of wearable computing technology and ends with wearable computing that is actually connected to a person's central nervous system, that is, their brain (Figure 1.1). In fact, as humans become more and more equipped with wearable computing technology, the distinction as to what is thought of as a prosthesis is becoming blurred as we integrate more wearable computing devices into human anatomy and physiology. The extension of computing integrated into a person's brain could radically enhance human sensory and cognitive abilities; in fact, in my view, we are just now at the cusp of wearable computing and sensor technology breaking the skin barrier and moving

    FIGURE 1.1 A microchip is used to process brain waves that are used to control a cursor on a computer screen. (Image courtesy of Wikimedia Commons.)


    into the human body, and eventually into the brain. Already, there are experimental systems (computing technology integrated into a person's brain) in the field now that are helping those with severe physical disabilities. For example, consider people with debilitating diseases such that they are essentially "locked in" their own body. With the appropriate wearable computing technology, consisting of a microchip implanted onto the surface of the brain (where it monitors electronic thought pulses), such people may use a computer by thought alone, allowing them to communicate with their family, caregivers, and, through the internet, the world at large. Sadly, in the United States alone, about 5,000 people yearly are diagnosed with just such a disease that ultimately shuts down the motor control capabilities of the body: amyotrophic lateral sclerosis, sometimes called Lou Gehrig's disease. This disease is a rapidly progressive, invariably fatal neurological disease that attacks the nerve cells responsible for controlling voluntary muscles. I highlight this example to show that, while many uses of AR/wearables will be for gaming, navigation, shopping, and so on, there are very transformative uses of wearable computing technology, either being developed now or soon to be developed, that will benefit humanity in ways we are just now beginning to realize.

    One of the early adopters of wearable computing technology, especially with regard to implantable sensors within the body, was Professor Kevin Warwick, who in 1998 at the University of Reading was one of the first people to "hack" his body when he participated in a series of proof-of-concept studies involving a sensor implanted into the median nerves of his left arm, a procedure which allowed him to link his nervous system directly to a computer. Most notably, Professor Warwick was able to control an electric wheelchair and an artificial hand using the neural interface. In addition to being able to measure the signals transmitted along the nerve fibers in Professor Warwick's left arm, the implant was also able to create artificial sensation by stimulating the nerves in his arm using individual electrodes. This bidirectional functionality was demonstrated with the aid of Kevin's wife and a second, less complex implant which connected to her nervous system. According to Kevin, this was the first solely electronic communication between the nervous systems of two humans; since then, many have extended Kevin's seminal work in wearable computers using RFID chips and other implantable sensors (and there is even an anti-chipping statute enacted in California and other states).

    Other types of innovative and interesting wearable devices are being developed at a rapid pace. For example, researchers at Brown University and Cyberkinetics in Massachusetts are devising a microchip, implanted in the motor cortex just beneath a person's skull, that will be able to intercept nerve signals and reroute them to a computer, which will then wirelessly send a command to any of various electronic devices, including computers, stereos, and electric wheelchairs. And neuroscientists and robotics engineers have recently demonstrated the viability of direct brain-to-brain communication in humans using electroencephalogram (EEG) and image-guided transcranial magnetic stimulation (TMS) technologies. Further, consider a German team that has designed a microvibration device and a wireless low-frequency receiver that can be implanted in a person's tooth. The vibrator acts as microphone and speaker, sending sound waves along the jawbone to a person's eardrum. And in another example of a wearable implantable device, the company


    Setpoint is developing computing therapies to reduce systemic inflammation by stimulating the vagus nerve using an implantable pulse generator. This device works by activating the body's natural inflammatory reflex to dampen inflammation and improve clinical signs and symptoms.

    Medical necessity, for example, to manage a debilitating disease such as diabetes, is a main reason why people will become equipped with wearable computing technology and sensors that monitor their body's health. In fact, millions of people worldwide with diabetes could benefit from implantable sensors and wearable computers designed to monitor their blood-sugar level, because if the disease is not controlled, such people are at risk for dangerous complications, including damage to the eyes, kidneys, and heart. To help people monitor their blood-sugar level, Smart Holograms (a spinoff company of Cambridge University), Google, and others are developing eye-worn sensors to assist those with the disease. Google's technology consists of a contact lens built with special sensors that measure sugar levels in tears using a tiny wireless chip and a miniature sensor embedded between two layers of soft contact lens material. As interesting and innovative as this solution to monitoring diabetes is, it isn't the only example of eye-oriented wearable technology that will be developed. In the future, we may see people equipped with contact lenses or retinal prostheses that monitor their health, detect energy in the x-ray or infrared range, and have telephoto capabilities.

As for developing a telephoto lens, an implantable telescope could offer hope for the approximately 20–25 million people worldwide who have the advanced form of age-related macular degeneration (AMD), a disease that affects the region of the retina responsible for central, detailed vision and is the leading cause of irreversible vision loss and legal blindness in people over the age of 65. In fact, in 2010, the U.S. FDA approved an implantable miniature telescope (IMT), which works like the telephoto lens of a camera (Figure 1.2).

    The IMT technology reduces the impact of the central vision blind spot due to end-stage AMD and projects the objects the patient is looking at onto the healthy area of the light-sensing retina not degenerated by the disease.

    FIGURE 1.2 The implantable miniature telescope (IMT) is designed to improve vision for those experiencing age-related macular degeneration. (Images provided courtesy of VisionCare Ophthalmic Technologies, Saratoga, CA.)

  • 7Wearable Computers and Augmented Reality

The surgical procedure involves removing the eye's natural lens, as with cataract surgery, and replacing it with the IMT. While telephoto eyes are not coming soon to an ophthalmologist's office, this is an intriguing step in that direction and a look into the future of wearable computers. I should point out that in the United States any device containing a contact lens or other eye-wearable technology is regulated by the Food and Drug Administration as a medical device; the point being that much of wearable computing technology comes under government regulation.

    1.1 PUBLIC POLICY

Although not the focus of this book, an important topic for discussion is the use of augmented reality and wearable computers in the context of public policy, especially in regard to privacy. For example, Steve Mann presents the idea that wearable computers can be used to film newsworthy events as they happen or people of authority as they perform their duties. This example brings up the issue of privacy and whether a person has a legal right to film other people in public. Consider the following case decided by the U.S. First Circuit Court of Appeals; note that it is not the only legal dispute involving sensors and wearable computers. In the case, Simon Glik was arrested for using his cell phone's digital video camera (a wearable computer) to film several police officers arresting a young man on the Boston Common (Glik v. Cunniffe, 2011). The charges against Glik, which included violation of Massachusetts's wiretap statute and two other state-law offenses, were subsequently judged baseless and were dismissed. Glik then brought suit under a U.S. federal statute (42 U.S.C. § 1983), claiming that his very arrest for filming the officers constituted a violation of his rights under the First (free speech) and Fourth (unlawful arrest) Amendments to the U.S. Constitution. The court held that, based on the facts alleged, Glik was exercising clearly established First Amendment rights in filming the officers in a public space, and that his clearly established Fourth Amendment rights were violated by his arrest without probable cause.

While the vast amount of information captured by all the wearable digital devices is valuable on its own, sensor data derived from wearable computers will be even more powerful when linked to the physical world. On this point, knowing where a photo was taken, or when a car passed by an automated sensor, will add rich metadata that can be employed in countless ways. In effect, location information will link the physical world to the virtual meta-world of sensor data. With sensor technology, everything from the clothing we wear to the roads we drive on will be embedded with sensors that collect information on our every move, including our goals and our desires. Just consider one of the most common technologies equipped with sensors: a cell phone. It can contain an accelerometer to measure changes in velocity, a gyroscope to measure orientation, and a camera to record the visual scene. With these sensors, the cell phone can be used to track a person's location and integrate that information with comprehensive satellite, aerial, and ground maps to generate multilayered real-time location-based databases. In addition, body-worn sensors are being used to monitor blood pressure, heart rate, weight, and blood glucose, and can link wirelessly to a smartphone. Also, given the ability of hackers to access networks and wireless body-worn devices, the cybersecurity of wearable devices is becoming a major concern. Another point to make is that sensors on the outside of the body are rapidly moving under the skin as they begin to connect the functions of our body to the sensors external to it (Holland et al., 2001).

Furthermore, what about privacy issues and the use of wearable computers to film people against their will? Consider an extreme case, video voyeurism, which is the act of filming or disseminating images of a person's private areas under circumstances in which the person had a reasonable expectation of privacy, regardless of whether the person is in a private or public location. Video voyeurism is not only possible but is being done using wearable computers (mostly handheld cameras). In the United States, such conduct is prohibited under state and federal law (see, e.g., Video Voyeurism Prevention Act of 2004, 18 U.S.C.A. § 1801). Furthermore, what about the privacy issues associated with other wearable computing technology, such as the ability to recognize a person's face, then search the Internet for personal information about the individual (e.g., police record or credit report), and tag that information on the person as they move throughout the environment?

As many of the chapters in this book show, wearable computers combined with augmented reality capabilities can be used to alter or diminish reality, in which case a wearable computer can replace or remove clutter, say, for example, an unwanted advertisement on the side of a building. On this topic, I published an article, Commercial Speech, Intellectual Property Rights, and Advertising Using Virtual Images Inserted in TV, Film, and the Real World, in the UCLA Entertainment Law Review. In the article, I discussed the legal and policy ramifications of placing ads consisting of virtual images projected in the real world. We can think of virtual advertising as a form of digital technology that allows advertisers to insert computer-generated brand names, logos, or animated images into television programs or movies; or, with Steve's wearable computer technology and other displays, into the real world. In the case of TV, a reported benefit of virtual advertising is that it allows the action on the screen to continue while displaying an ad viewable only by the home audience.

What may be worrisome about the use of virtual images to replace portions of the real world is that corporations and government officials may be able to alter what people see based on political or economic considerations; an altered reality may then become the accepted norm, the consequences of which bring to mind the dystopian society described in Huxley's Brave New World. Changing directions, another policy issue to consider for people equipped with networked devices is what liabilities, if any, would be incurred by those who disrupt the functioning of their computing prosthesis. For example, would an individual be liable if they interfered with a signal sent to another individual's wearable computer, if that signal was used to assist the individual in seeing and perceiving the world? On just this point, former U.S. Vice President Dick Cheney, equipped with a pacemaker, had its wireless feature disabled in 2007.

Restaurants have also entered into the debate about the direction of our wearable computer future. Taking a stance against Google Glass, a Seattle-based restaurant, Lost Lake Cafe, actually kicked out a patron for wearing Glass. The restaurant is standing by its no-Glass policy, despite mixed responses from the local community. In another incident, a theater owner in Columbus, Ohio, saw enough of a threat from Google Glass to call the Department of Homeland Security. The Homeland Security agents removed the programmer who was wearing Google Glass connected to his prescription lenses. Further, a San Francisco bar frequented by a high-tech crowd has banned patrons from wearing Google Glass while inside the establishment. In fact, San Francisco seems to be ground zero for cyborg disputes: a social media consultant who wore Glass inside a San Francisco bar claimed that she was attacked by patrons objecting to her wearing the device inside the bar. In addition, a reporter for Business Insider said he had his Google Glass snatched off his face and smashed to the ground in San Francisco's Mission District.

Continuing the theme of how wearable computers and augmented reality technology impact law and policy, in addition to FDA regulations, some jurisdictions are just beginning to regulate wearable computing technology when its use poses a danger to the population. For example, sparsely populated Wyoming is among a small number of U.S. states eyeing a ban on the use of wearable computers while driving, over concerns that drivers wearing Google Glass may pay more attention to their email or other online content than to the road. And in a high-profile California case that raised new questions about distracted driving, a driver wearing Google Glass was ticketed for wearing the display while driving after being stopped for speeding. The ticket was for violating a California statute that prohibits a visual monitor in the car while driving. Later, the ticket was dismissed for lack of proof that the device was actually operating while she was driving. To show the power and influence of corporations in the debate about our wearable computer/AR future, Google is lobbying officials in at least three U.S. states to stop proposed restrictions on driving with headsets such as Google Glass, marking some of the first clashes over the nascent wearable technology.

By presenting the material in the earlier sections, my goal was to inform the readers of this book that while the technology presented in the subsequent chapters is fascinating and even inspiring, there are still policy and legal issues that will have to be discussed as wearable computer and augmented reality technologies improve and enter more into the mainstream of society. Thus, I can conclude that while technology may push society forward, there is a feedback loop: technology is also influenced by society, including its laws and regulations.

    1.2 TOWARD A THEORY OF AUGMENTED REALITY

As a final comment, one often hears people discuss the need for theory to provide an intellectual framework for the work done in augmented reality. When I was on the faculty at the University of Washington, my students and I built a head-tracked augmented reality system in which, as one looked around the space of the laboratory, one saw a corresponding computer-generated image rendered such that it occluded real objects in that space. We noticed that some attributes of the virtual images allowed the person to more easily view the virtual object and the real world in a seamless manner. Later, I became interested in the topic of how people perform cognitive operations on computer-generated images. With Jim Foley, now at Georgia Tech, I performed experiments to determine how people mentally rotated images rendered with different lighting models. This led to thinking about how virtual images could be seamlessly integrated into the real world. I asked whether there was any theory to explain how different characteristics of virtual images combined to form a seamless whole with the environment they were projected into, or whether virtual images projected in the real world appeared separate from the surrounding space (floating and disembodied from the real-world scene). I recalled a paper I had read while in college by Garner and Felfoldy (1970) on the integrality of stimulus dimensions in various types of information processing. The authors of the paper noted that separable dimensions remain psychologically distinct when in combination, an example being forms varying in shape and color. We say that two dimensions (features) are integral when they are perceived holistically, that is, it is hard to visually decode the value of one independently of the other. A vast amount of converging evidence suggests that people are highly efficient at selectively attending to separable dimensions. By contrast, integral dimensions combine into relatively unanalyzable, unitary wholes, an example being colors varying in hue, brightness, and saturation. Although people can selectively attend to integral dimensions to some degree, the process is far less efficient than for separable-dimension stimuli (Shepard, 1964). I think that much can be done to develop a theory of augmented, mediated, or diminished reality using the approach discussed by Garner and Felfoldy, and Shepard, and I encourage readers of this book to do so. Such research would have to expand the past work, which was done on single images, to virtual images projected into the real world.

    1.3 CHALLENGES AND THE FUTURE AHEAD

While the chapters in this book discuss innovative applications using wearable computer technology and augmented reality, the chapters also focus on providing solutions to some of the difficult design problems in both of these fields. Clearly, there are still many design challenges to overcome and many amazing applications yet to develop; such goals are what designing the future is about. For example, consider a technical problem, image registration: GPS lacks accuracy, but I expect vast improvements in image registration as the world is filled with more sensors. I also expect that wearable computing technology will become more and more integrated with the human body, especially for reasons of medical necessity. And with continuing advances in miniaturization and nanotechnology, head-worn displays will be replaced with smart contact lenses, and further into the future, bionic eyes that record everything a person sees, along with the capability to overlay the world with graphics (essentially information). Such technology will provide people augmented reality capabilities that would have been considered the subject of science fiction just a few years ago.

    While this chapter focused more on a policy discussion and futuristic view of wearable computers and augmented reality, the remaining chapters focus far more on technical and design issues associated with the two technologies. The reader should keep in mind that the authors of the chapters which follow are inventing the future, but we should all be involved in determining where technology leads us and what that future looks like.


    REFERENCES

Azuma, R. T., 1997, A survey of augmented reality, Presence: Teleoperators and Virtual Environments, 6(4), 355–385.

Garner, W. R. and Felfoldy, G. L., 1970, Integrality of stimulus dimensions in various types of information processing, Cognitive Psychology, 1, 225–241.

Glik v. Cunniffe, 655 F.3d 78 (1st Cir. 2011) (case at the United States Court of Appeals for the First Circuit that held that a private citizen has the right to record video and audio of public officials in a public place, and that the arrest of the citizen for a wiretapping violation violated the citizen's First and Fourth Amendment rights).

Holland, D., Roberson, D. J., and Barfield, W., 2001, Computing under the skin, in Barfield, W. and Caudell, T. (eds.), Fundamentals of Wearable Computing and Augmented Reality, pp. 747–792, Lawrence Erlbaum Associates, Inc., Mahwah, NJ.

Mann, S., August 6, 2002, Mediated reality with implementations for everyday life, Presence Connect, the online companion to the MIT Press journal PRESENCE: Teleoperators and Virtual Environments, 11(2), 158–175, MIT Press.

Mann, S. and Niedzviecki, H., 2001, Cyborg: Digital Destiny and Human Possibility in the Age of the Wearable Computer, Doubleday Canada, Toronto, Canada.

Milgram, P., Takemura, H., Utsumi, A., and Kishino, F., 1994, Augmented reality: A class of displays on the reality–virtuality continuum, in Proceedings of the SPIE Conference on Telemanipulator and Telepresence Technologies, vol. 2351, pp. 282–292, Boston, MA.

Shepard, R. N., 1964, Attention and the metric structure of the stimulus space, Journal of Mathematical Psychology, 1, 54–87.


2 Wearable Computing: Meeting the Challenge

    Thad Starner

Wearable computers and head-mounted displays (HMDs) are in the press daily. Why have they captured our imaginations now, when the technology has been available for decades? While Fitbit's fitness tracking devices are selling in the millions in 2014, what prevented FitSense (see Figure 2.5) from having similar success with such devices in 2000? Since 1993 I have been wearing a computer with an HMD as part of my daily life, and Reddy Information Systems had a commercial wearable with Reflection Technology's Private Eye HMD in 1991 (Eliason 1992). Yet over 20 years later, Google Glass is generating more excitement than any of those early devices.

Many new classes of devices have followed a similar arc of adoption. The fax machine was invented in 1846 but became popular over 130 years later. In 1994, the IBM Simon touchscreen smartphone had many features familiar in today's phones, but it was the Apple iPhone in 2007 that seized the public's imagination (Sager 2012). Often, the perceived need for a technology lags behind innovation, and sometimes developers can be surprised by the ways in which users run with a technology. When the cellular phone was introduced in the early 1980s, who would have guessed that increasingly we would use it more for texting than talking?

CONTENTS

2.1 Networking 14
2.2 Power and Heat 15
2.3 Mobile Input 17
2.4 Display 18
2.5 Virtual Reality 19
2.6 Portable Video Viewers 20
2.7 Industrial Wearable Systems 22
2.8 Academic/Maker Systems for Everyday Use 24
2.9 Consumer Devices 26
2.10 Meeting the Challenge 28
References 28

Some pundits look for a killer app to drive the adoption of a new class of device. Yet that can be misleading. As of mid-2014, tablets are outselling laptops in Europe, yet there is no single killer app that drives adoption. Instead, the tablet offers a different set of affordances (Gibson 1977) than the smartphone or the laptop, making it more desirable in certain situations. For example, for reading in bed the tablet is lighter than a laptop and provides an easier-to-read screen than a smartphone. The tablet is controlled by finger taps and swipes that require less hardware and dexterity than trying to control a mouse and keyboard on a laptop, which also makes it convenient for use when the user is in positions other than upright at a desk.

    Wearable computers have yet a different set of affordances than laptops, tablets, and smartphones. I often lie on a couch in my office, put the focus of my HMD at the same depth as the ceiling, and work on large documents while typing using a one-handed keyboard called a Twiddler. This position is very comfortable, much more so than any other interface I have tried, but students often think that they are waking me when they walk into my office. In addition, I often use my wearable computer while walking. I find it helps me think to be moving when I am composing, and no other device enables such on-the-go use.

On-the-go use is one aspect of wearable computers that makes them distinct from other devices. In fact, my personal definition of a wearable computer is any body-worn computer that is designed to provide useful services while the user is performing other tasks. Often the wearable's interface is secondary to a user's other tasks and should require a minimum of user attention. Take, for example, a digital music player. It is often used while a user is exercising, studying, or commuting, and the interface is used in short bursts and then ignored.

Such a secondary interface in support of a primary task is characteristic of a wearable computer and can be seen in smartwatches, some spacesuits, fitness monitors, and even smartphones for some applications. Some of these devices are already commonplace. However, here I will focus on wearable computers that include an HMD, as these devices are at the threshold of becoming popular and are perhaps the most versatile and general-purpose class of wearable computers. Like all wearable computers, those based on HMDs have to address fundamental challenges in networking (both on and off the body), power and heat, and mobile input. First I will describe these challenges and show how, until recently, they severely limited what types of devices could be manufactured. Then I will present five phases of HMD development that illustrate how improvements in technology allowed progressively more useful and usable devices.

    2.1 NETWORKING

Turn-by-turn navigation, voice-based web search, and cloud-based office tools are now commonplace on smartphones, but only in the past few years has the latency of cellular networks been reduced to the point that computing in the cloud is effective. A decade ago, the throughput of a cellular network in cities like Atlanta could be impressive, yet the latency would severely limit the usability of a user interface depending on it. Today when sending a message, a Google Glass user might say, "OK Glass, send a message to Thad Starner. Remember to pick up the instruction manual," and the experience can be a seamless interplay of local and cloud-based processing. The three commands "OK Glass," "send a message to," and "Thad Starner" are processed locally because the speech recognizer simply needs to distinguish between one of several prompts, but the message content "Remember to pick up the instruction manual" requires the increased processing power of the cloud to be recognized accurately. With an LTE cellular connection, the content is processed quickly, and the user may barely notice a difference in performance between local and remote services. However, with a GPRS, EDGE, or sometimes even an HSDPA connection, the wait for processing in the cloud can be intolerable.
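The local/cloud split described above can be sketched as a simple dispatcher. This is an illustrative toy, not Glass's actual implementation; the grammar, function names, and the cloud stub are all hypothetical. Fixed prompts are matched against a small on-device grammar, and only free-form content would be shipped to a large-vocabulary recognizer over the network.

```python
# Toy sketch of hybrid local/cloud speech handling: fixed prompts are
# matched on-device; anything else is deferred to a cloud recognizer
# (stubbed out here, since the real one would send audio over LTE).

LOCAL_GRAMMAR = ("ok glass", "send a message to", "thad starner")

def route(utterance: str) -> str:
    """Return 'local' for fixed command prompts, 'cloud' for free-form content."""
    return "local" if utterance.strip().lower() in LOCAL_GRAMMAR else "cloud"

def recognize(utterance: str) -> str:
    if route(utterance) == "local":
        # A small grammar-based recognizer runs on the device itself.
        return f"[on-device] {utterance}"
    # Free-form dictation needs the cloud; latency now depends on the network.
    return f"[cloud] {utterance}"

print(recognize("OK Glass"))
print(recognize("Remember to pick up the instruction manual"))
```

On a high-latency link (GPRS or EDGE), every `cloud` branch stalls the interface, which is why keeping the command prompts on the local branch matters.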

WiFi (IEEE 802.11) might seem a viable alternative to commercial cellular networks, but until 2000 open hotspots were rare. Wearable computers in the late 1990s often used WiFi, but they required adapters that were the size of a small mobile phone and drew significant power. Today, part of a single chip can provide this service.

On-body networking has also been a challenge. Bluetooth (IEEE 802.15) was originally intended as a replacement for RS232 connections on desktop PCs, not as a body network. The standard was not designed with power as a foremost concern, and even basic implementations were unstable until 2001. Only recently, with the widespread adoption of Bluetooth Low Energy by the major mobile phone manufacturers, have wearable devices really had an appropriate body-centered wireless network. Fundamental issues still remain. Both WiFi and Bluetooth use 2.4 GHz radio, which is blocked by water and the human body. Thus, a sensor mounted in a shoe to monitor footfalls might have difficulty maintaining a connection to an earbud that provides information on a runner's performance.

    Most positioning systems also involve networks. For example, the location-aware Active Badge system made by Olivetti Research Laboratory in 1992 used a network of infrared receivers to detect transmissions from a badge to locate a wearer and to unlock doors as the user approached them. When the user was walking through the lab, the system could also re-route phone calls to the nearest phone (Want 2010). Similarly, the Global Positioning System uses a network of satellites to provide precisely synchronized radio transmissions that a body-worn receiver can use to determine its position on the surface of the planet. Today, GPS is probably one of the most commonly used technologies for on-body devices. It is hard to imagine life without it, but before 2000, GPS was accurate to within 100 m due to the U.S. military intentionally degrading the signal with Selective Availability. Turn-by-turn directions were impossible. Today, civilian accuracy has a median open, outdoor accuracy of 10 m (Varshavsky and Patel 2010). Modern GPS units can even maintain connection and tracking through wooden roofs.

    2.2 POWER AND HEAT

In 1993, my first HMD-based wearable computer was powered by a lead-acid gel cell battery that massed 1.3 kg. Today, a lithium-ion camcorder battery stores the same amount of energy but weighs a quarter as much. While that seems like an impressive improvement, battery life will continue to be a major obstacle to wearable technology, since improvements in battery technology have been modest compared to other computing trends. For example, while disk storage density increased by a factor of 1200 during the 1990s, battery energy density only increased by a factor of three (Starner 2003). In a mobile device, the battery will often be one of the biggest and most expensive components. Since battery technology is unlikely to change during a normal 18-month consumer product development cycle, the battery should be specified first, as it will often be the most constraining factor on the product's industrial design and will drive the selection of other components.

One of those components is the DC–DC power converter. A typical converter might accept between 3.4 and 4.2 V from a nominal 3.6 V lithium battery and produce several constant voltages for various components. One improvement in mobile consumer electronics that often goes underappreciated is the efficiency of DC–DC power converters. Before 2000, just the DC–DC converter for Google Glass could mass 30 g (Glass itself is 45 g), and the device might lose 30% of its power as heat. Today, switching DC–DC converters are often more than 95% efficient and weigh just a few grams.
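To put those efficiency figures in perspective, here is a small worked example (the 1 W load is an assumed, illustrative figure, not from the text): a converter must draw input power equal to the load divided by its efficiency, and everything not delivered to the load is dissipated as heat.

```python
# Waste heat of a DC-DC converter: input power = load / efficiency,
# so heat = load / efficiency - load.

def converter_heat_w(load_w: float, efficiency: float) -> float:
    """Heat (in watts) dissipated while delivering load_w to the load."""
    input_w = load_w / efficiency
    return input_w - load_w

# Delivering 1 W of useful power:
print(f"70% efficient: {converter_heat_w(1.0, 0.70):.2f} W of heat")  # ~0.43 W
print(f"95% efficient: {converter_heat_w(1.0, 0.95):.2f} W of heat")  # ~0.05 W
```

Roughly an eightfold reduction in waste heat for the same delivered power, which matters a great deal when the converter sits inside a 45 g device worn against the skin.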

Due to this efficiency improvement, there is a corresponding reduction in heat production. Heat often limits how small a mobile device can be. A wearable device is often in contact with a user's skin, and it must have enough surface area and ventilation to cool, or it will have to throttle its performance considerably to stay at a comfortable temperature for the user (Starner and Maguire 1999). This tension between performance and physical size can be quite frustrating to designers of wearable devices. Users often desire small jewelry-like devices to wear but are also attracted to power-hungry services like creating augmented reality overlays with registered graphics or transmitting video remotely. Yet in consumer products, fashion is the key. Unless the consumer is willing to put on the device, it does not matter what benefits it offers, and physical size and form are major components of the desirability of a device.

In practice, the design of a wearable device is often iterative. Given a battery size, an industrial designer creates a fashionable package. That package should be optimized in part for thermal dissipation given its expected use. Will the device have the ability to perform the expected services and not become uncomfortable to wear? If not, can the package be made larger to spread the heat, lowering the temperature at the surface? Or can lower-heat alternatives be found for the electronics? Unfortunately, many industrial design tools do not model heat, which tends to require highly specialized software. Thus, the iteration cycle between fashion and mechanical engineering constraints can be slow.

One bright spot in designing wearable computers is the considerable effort that has been invested in smartphone CPUs and the concomitant power benefits. Modern embedded processors with dynamic voltage scaling can produce levels of computing power equivalent to a late-1980s supercomputer in one instant and then, in the next moment, can switch to a maintenance mode which draws milliwatts of power while waiting for user input. Designing system and user software carefully for these CPUs can have significant benefits. Slower computation over a longer period can use significantly less power than finishing the same task at a higher speed and then resting. This slow-and-steady technique has cascading benefits: power converters are generally more efficient at lower currents, and lithium-ion batteries last longer with a steady discharge than with bursty uses of power.
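A rough sketch of why slow and steady can win on energy (simplified physics with hypothetical constants; it ignores static leakage current, which in real systems can shift the balance back toward racing to idle): dynamic CPU power scales roughly with V²f, and since supply voltage V must rise roughly with frequency f, power grows about as f³ while work completed grows only as f.

```python
# Energy to complete a fixed workload at a fixed clock frequency,
# assuming dynamic power ~ k * f**3 (V scales with f, power ~ V^2 * f).
# The constant k and the workload are arbitrary illustrative values.

def energy_for_work(cycles: float, freq_hz: float, k: float = 1e-28) -> float:
    power_w = k * freq_hz ** 3   # dynamic power at this frequency
    time_s = cycles / freq_hz    # time to finish the workload
    return power_w * time_s      # net effect: energy ~ k * cycles * f**2

WORK = 1e9  # a billion cycles of work
race = energy_for_work(WORK, 2.0e9)    # "race to idle" at 2 GHz
steady = energy_for_work(WORK, 1.0e9)  # "slow and steady" at 1 GHz
print(f"racing uses {race / steady:.0f}x the energy")  # prints "racing uses 4x the energy"
```

Under this model, halving the clock quarters the energy for the same work, before even counting the converter and battery-chemistry benefits mentioned above.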

    Similarly, system software can exploit knowledge about its networking to help flatten the battery load.

Wireless networking requires significant power when the signal is weak. For noncrucial tasks, waiting for a better signal can save power and heat. Designing maintenance and background tasks (e.g., caching email and social networking feeds) to be thermally aware allows more headroom for on-demand interactive tasks. If the wearable is thought of as a leaky cup, and heat as water filling it, then one goal is to keep the cup as empty as possible at any given time so that when a power-hungry task is required, we have as much space as possible to buffer the heat produced and not overflow the cup.
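The leaky-cup idea can be made concrete with a toy model (all class names, constants, and numbers here are hypothetical, not from a real scheduler): heat accumulates as tasks run, leaks away at a fixed rate, and a deferrable background task runs only when it fits in the remaining headroom.

```python
# Toy "leaky cup" thermal budget: heat fills the cup as tasks run and
# leaks out over time; a task is admitted only if its heat fits in the
# headroom that remains.

class ThermalBudget:
    def __init__(self, capacity_j: float, leak_w: float):
        self.capacity = capacity_j  # how much heat the "cup" can buffer
        self.leak = leak_w          # passive dissipation per second
        self.level = 0.0            # heat currently buffered

    def tick(self, seconds: float) -> None:
        """Heat leaks out of the cup while the device idles."""
        self.level = max(0.0, self.level - self.leak * seconds)

    def try_run(self, heat_j: float) -> bool:
        """Run a task only if its heat fits in the remaining headroom."""
        if self.level + heat_j <= self.capacity:
            self.level += heat_j
            return True
        return False

budget = ThermalBudget(capacity_j=10.0, leak_w=1.0)
assert budget.try_run(8.0)      # on-demand interactive task fits
assert not budget.try_run(5.0)  # background sync deferred: cup too full
budget.tick(5.0)                # five seconds of idle leakage
assert budget.try_run(5.0)      # now there is headroom again
```

Keeping background work out of a nearly full cup is exactly the thermally aware scheduling described above: the emptier the cup, the more heat a sudden power-hungry task can produce without overflowing it.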

    2.3 MOBILE INPUT

Wearable computing interfaces often aspire to be hands-free. This term is a bit of a misnomer. What the user really wants is an interface that is unencumbering. A wristwatch that senses a wearer's gesture to decline a phone call or to change the track on a digital music player is certainly not hands-free, but it is clearly better for use while jogging than stopping and manipulating a touchscreen. Unfortunately, an on-the-go wearable user has reduced dexterity, eyesight, hearing, attention, and sense of touch compared to when stationary, which makes an unencumbering interface design particularly challenging.

Speech interfaces seem like an obvious alternative, and with low-latency cellular networks and processing in the cloud, speech recognition on Android and iOS phones has become ubiquitous. Modern, big data machine learning techniques are enabling ever-better speech recognition. As enough examples of speech are captured on mobile devices with a large variety of accents and background noises, recognition rates are improving. However, dictating personal notes during a business conversation or a university class is not socially appropriate. In fact, there are many situations in which a user might feel uncomfortable interacting with a device via speech. Thus, mobile keyboards will continue to be a necessary part of mobile interfaces.

    Unfortunately, today's mini-QWERTY and virtual keyboards require a lot of visual attention when mobile. A method of mobile touch typing is needed. To my knowledge, the Twiddler keyboard, first brought on the market in 1992, is still the fastest touch typing mobile device. Learning the Twiddler requires half the learning time (25 h for 47 wpm on average) of the desktop QWERTY keyboard to achieve the greater-than-40 wpm required for high school typing classes (Lyons 2006). Yet the device remains a niche market item for dedicated users. Perhaps as more users type while on-the-go and after the wireless Twiddler 3 is introduced, more people will learn it. Such silent, eyes-free mobile text entry still remains an opportunity for innovation, especially for any technology that can accelerate the learning curve.
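For readers unfamiliar with chording keyboards like the Twiddler, the sketch below shows the basic idea: each character is produced by a combination of buttons pressed together, so a handful of keys can cover a full alphabet without visual attention. The chord assignments here are invented for the example; they are not the actual Twiddler layout.

```python
# Illustrative chord-to-character lookup for a chording keyboard.
# A chord is the set of buttons held down simultaneously, encoded
# as a frozenset so it can serve as a dictionary key.
CHORDS = {
    frozenset({"index"}): "e",
    frozenset({"middle"}): "t",
    frozenset({"index", "middle"}): "a",
    frozenset({"index", "ring"}): "o",
    frozenset({"thumb", "index"}): " ",
}

def decode(pressed):
    """Return the character for a chord, or None if unassigned."""
    return CHORDS.get(frozenset(pressed))

text = "".join(decode(c) or "?" for c in [
    {"middle"}, {"index"}, {"thumb", "index"}, {"index", "middle"}
])
assert text == "te a"
```

Because the mapping is memorized in the fingers rather than read off the keys, typing can be done eyes-free once the chords are learned, which is what makes the learning-curve investment worthwhile.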

    Navigating interfaces while on-the-go also remains a challenge. Some self-contained headsets use trackpads or simple d-pad interactions, but some users would like a more subtle method of interaction. One option is to mount a remote controller elsewhere on the body and use Bluetooth Human Interface Device profiles for connection. In a series of studies, Bruce Thomas's group at the University of South Australia explored both what types of pointing devices are most effective while on-the-go and where they should be mounted (Thomas et al. 2002, Zucco et al. 2009). His results suggest that mini-trackpads and mini-trackballs can be highly effective, even while moving. Launched in 2002, Xybernaut's POMA wearable computer suggested another interesting variant on this theme. A user could run his finger over a wired, upside-down optical mouse sensor to control a cursor. Perhaps with today's smaller and lower-power components, a wireless version could be made. More recently, Zeagler and Starner explored textile interfaces for mobile input (Komar et al. 2009, Profita et al. 2013), and a plethora of community-funded Bluetooth Human Interface Devices are being developed, often focusing on rings and bracelets. One device will not satisfy all needs, and there will be an exciting market for third-party interfaces for consumer wearable computers.

    Traditional windows, icon, menu, pointer (WIMP) interfaces are difficult to use while on-the-go as they require too much visual and manual attention. Fortunately, however, smartphones have broken the former monopoly on graphical user interfaces. Swipes, taps, and gestures on phone and tablet touchscreens can be made without much precision, and many of the features of Android and iOS can be accessed through these cruder gestures. Yet these devices still require a flat piece of glass, which can be awkward to manipulate while doing other tasks. Instead, researchers and startups are spending considerable energy creating gestural interfaces using motion sensors. Besides pointing, these interfaces associate gestures with particular commands such as silencing a phone or waking up an interface. False triggering, however, is a challenge in the mobile environment; an interface that keeps triggering incorrectly throughout the user's workday is annoying at best.
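One common defense against the false triggering just described is to require a gesture to persist for several consecutive sensor frames before it fires, then suppress further triggers for a short refractory period. A minimal sketch, with illustrative thresholds:

```python
# Toy gesture debouncer: confirm a detection over N consecutive frames,
# then enforce a cooldown so one motion cannot fire twice.
# Frame counts here are invented for the example.

class GestureDebouncer:
    def __init__(self, confirm_frames=3, refractory_frames=10):
        self.confirm = confirm_frames
        self.refractory = refractory_frames
        self.streak = 0      # consecutive frames with a raw detection
        self.cooldown = 0    # frames remaining before we may fire again

    def update(self, detected):
        """Feed one frame of raw detector output; return True when confirmed."""
        if self.cooldown > 0:
            self.cooldown -= 1
            return False
        self.streak = self.streak + 1 if detected else 0
        if self.streak >= self.confirm:
            self.streak = 0
            self.cooldown = self.refractory
            return True
        return False

d = GestureDebouncer()
frames = [True, False, True, True, True, True]
fired = [d.update(f) for f in frames]
# An isolated detection never fires; only the sustained run does.
assert fired == [False, False, False, False, True, False]
```

The trade-off is latency: each confirmation frame added reduces spurious triggers but delays the response to a genuine gesture, so the thresholds must be tuned per gesture and per sensor rate.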

    2.4 DISPLAY

    While visual displays often get the most attention, auditory and tactile displays are excellent choices for on-the-go users. Almost all mobile phones have a simple vibration motor to alert the user to an incoming call. Unfortunately, a phone vibrating in a pants pocket or purse can be hard to perceive while walking. In the future, I expect the closer contact with the skin made available by smartwatches to enable more reliable and expressive tactile interfaces than a simple on/off vibration motor.

    Audio displays are another good choice for on-the-go interaction. Smartphones and mobile music players are almost always shipped with earbuds included, but there is much room for innovation. Bone conduction, such as is used with Google Glass and by the military and professional scuba divers, allows the wearer to hear notifications from the computer without blocking the ear canals. Ambient audio interfaces (Sawhney and Schmandt 2000) allow the wearer to monitor information sources, like the volume of stock market trading, for sudden changes without devoting much attention to the process. Rendering audio in 3D can help the user monitor several ambient information sources at once or can improve the sense of participant presence during conference calls.

    HMDs can range from devices meant to immerse the user in a synthetic reality to a device with a few lights that provide feedback on the wearer's performance while biking. HMDs can be created using lasers, scanning mirrors, holographic optics, LCDs, CRTs, and many other types of technologies. For any given HMD, design trade-offs are made among size, weight, power, brightness and contrast, transparency, resolution, color, eyebox (the 3D region in which the eye can be placed and still see the entire display in focus), focus, and many other factors. The intended purpose of the HMD often forces very different form factors and interactions.


    For the purpose of discussion, I've clustered these into five categories: virtual reality, portable video viewers, industrial wearable systems, academic/maker wearables for everyday use, and consumer devices. See Kress et al. (2014) for a more technical discussion of the typical optics of these types of displays.

    2.5 VIRTUAL REALITY

    In the late 1980s and early 1990s, LCD and CRT displays were large, heavy, power hungry, and required significant support electronics. However, it was during this time that virtual reality was popularized, and by the mid-1990s, HMDs began to be affordable. VPL Research, Virtual Research, Virtual I/O, Nintendo, and many others generated a lot of excitement with virtual reality headsets for professionals and gamers (Figure 2.1). An example of an early professional system was the 1991 Flight Helmet by Virtual Research. It had a 100-degree diagonal field of view and 240 × 120 pixel resolution. It weighed 1.67 kg and used 6.9 cm LCD screens with LEEP Systems' wide-angle optics to provide an immersive stereoscopic experience. For its era, the Flight Helmet was competitively priced at $6000. Subsequent Virtual Research devices employed smaller lenses and a reduced field of view to save weight and cost. By 1994, the LCDs in the company's VR4 had twice the resolution at half


    FIGURE 2.1 Virtual reality HMDs. (a) Virtual Research's Flight Helmet (1991, $6000). (b) Nintendo Virtual Boy video game console (1995, $180). (c) Virtual i-O i-glasses! Personal 3D viewer head-mounted display (1995, $395). (d) Oculus Rift DK1 (2013, $300). (Images courtesy of Tavenner Hall.)


    the size. However, with today's lighter-weight panels and electronics, the Oculus Rift Developer Kit 1 slightly surpasses the original Flight Helmet's field of view and has 640 × 480 pixel resolution per eye while weighing 379 g. The biggest difference between 1991 and today, though, is the price: the Rift DK1 is only $300, whereas the Flight Helmet, adjusted for inflation, would be the equivalent of over $10,000 today.
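The inflation comparison can be checked with a quick back-of-the-envelope calculation, assuming an average US inflation rate of roughly 2.6% per year between 1991 and 2014 (an approximation for illustration, not official CPI data):

```python
# Rough sanity check of the inflation-adjusted price comparison above.
# The 2.6% average annual rate is an assumed approximation.
price_1991 = 6000
years = 2014 - 1991
adjusted = price_1991 * 1.026 ** years
print(round(adjusted))  # comes out over $10,000, matching the claim in the text
```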

    The 1995 Nintendo Virtual Boy game console is an interesting contrast to the Flight Helmet. It costs $180, and with over a million devices sold, it ranks among the largest-selling HMDs. The Virtual Boy introduced many consumers to immersive gameplay. It is portable and includes the full computing system in the headset (the wired controller includes the battery pack for the device). As a table-top head display, the Virtual Boy avoids the problem of too much weight on the head, but it has no possibility of head tracking or the freedom of motion available with most VR headsets. It uses Reflection Technology's scanning, mirror-style, monochromatic display in which a column of 224 LEDs is scanned across the eye with an oscillating mirror as the LEDs flash on and off, creating an apparent 384 × 224 pixel resolution display with persistence of vision. Unlike many consumer VR devices, the Virtual Boy provides adjustments for focus and inter-eye distance. Still, some users quickly complain of simulation sickness issues.

    Minimizing head weight and simulation sickness continues to be a major concern with modern VR HMDs. However, power and network are rarely a concern with these devices, since they are mostly for stationary use and attach to desktop or gaming systems. The user controls the experience through head tracking and instrumented gloves as well as standard desktop interfaces such as keyboards and joysticks. While these VR HMDs are not wearables by my definition, they are examples of early major efforts in industrial and consumer devices and share many features with the next class of device, mobile video viewers.

    2.6 PORTABLE VIDEO VIEWERS

    By the late 1990s, camcorder viewfinders were a major market for small LCD panels. Lightweight and inexpensive LCD-based mobile HMDs were now possible (Figure 2.2). Unfortunately, there were no popular mobile computing devices that could output images or videos. Smartphones would only become prevalent after 2007, and most did not have the capability to control an external screen. The video iPod could play video, but it would not be released until 2005. Instead, HMD manufacturers started focusing on portable DVD players for entertaining the traveler. In-seat entertainment systems were rare, so manufacturers envisioned a small HMD connected to a portable DVD player, which allowed the wearer to watch a movie during a flight or car ride. Networking was not required, and user input consisted of a few button presses. Battery life needed to be at least 2 h, and some devices, like the Eyetop (Figure 2.2), offered packages with an external battery powering both the display and the DVD player. With the Glasstron HMD (and the current HMZ line), Sony was more agnostic about whether the device should be used while mobile or at home. One concept was that the headsets could be used in place of a large-screen television in apartments with little space. However, the Glasstron line did include a place to mount Sony's rechargeable camcorder batteries for mobile usage.


    As small, flash memory-based mobile video players became common, portable video viewers became much more convenient. Companies such as MyVu and Vuzix sold several models and hundreds of thousands of devices (Figure 2.2), with the units even making appearances in vending machines at airports. Modern video viewers, like the Epson Moverio, can be wireless, having an internal battery and using a micro-SD reader or internal memory for loading the desired movie directly to the headset.


    FIGURE 2.2 Portable video viewers first concentrated on interfacing with portable DVD players, then flash-based media players like the video iPod, and most recently started integrating enough internal memory to store movies directly. (a) Sony Glasstron PLM-A35 (2000, $499). (b) Eyetop Centra DVD bundle (2004, $599). (c) MyVu Personal Viewer (2006, $270). (d) Vuzix iWear (2008, $250). (e) Vuzix Wrap 230 (2010, $170). (f) Epson Moverio BT-100 (2012, $700). (Images courtesy of Tavenner Hall.)


    The Moverio BT-100 (Figure 2.2) is especially interesting as it sits astride three different classes of device: portable video viewer, industrial wearable, and consumer wearable. It is self-contained, two-eyed, 2D or 3D, and see-through, and can run standard Android applications. It has WiFi and a removable micro-SDHC card for loading movies and other content. Its battery and trackpad controller are housed in a wired pendant, giving it ease of control and good battery life. Unfortunately, the HMD itself is a bit bulky and the nose weight is too high, both problems the company is trying to address with the new BT-200 model.

    Unlike the modern Moverio, many older devices did not attempt 3D viewing, as simulator sickness was a potential issue for some users and 3D movies were uncommon until the late 2000s. Instead, these displays play the same image to both eyes, which can still provide a high-quality experience. Unfortunately, video viewers suffer from a certain apathy among consumers. Carrying the headset in addition to a smartphone or digital video player is a burden, and most consumers prefer watching movies on their pocket media players and mobile phones instead of carrying the extra bulk of a video viewer. An argument could be made that a more immersive system, like an Oculus Rift, would provide a higher-quality experience that consumers would prefer, but such a wide field-of-view system is even more awkward to transport. Studies on mobile video viewing show diminishing returns in perceived quality above 320 × 240 pixel resolution (Weaver et al. 2010), which suggests that once video quality is good enough, the perceived value of the video system will be determined more by other factors such as convenience, ease of use, and price.

    2.7 INDUSTRIAL WEARABLE SYSTEMS

    Historically, industrial HMD-based wearable computers have been one-eyed, with an HMD connected to a computer module and battery mounted on the waist (Figure 2.3). Instead of removing the user from reality, these systems are intended to provide computer support while the wearer is focused on a task in the physical world such as inspection, maintenance, repair, and order picking. For example, when repairing a car, the HMD might show each step in a set of installation instructions.

    Improvements in performance for industrial tasks can be dramatic. A study performed at Carnegie Mellon University showed that during Army tank inspections, an interactive checklist on a one-eyed HMD can cut the required personnel in half and reduce the time for completing the task by 70% (Siewiorek et al. 2008). For order picking, a process during which a worker selects parts from inventory to deliver to an assembly line or for an outgoing package to a customer, a graphical guide on an HMD can reduce pick errors by 80% and completion time by 38% over the current practice of using paper-based parts lists (Guo et al. 2014).

    Some HMD uses provide capabilities that are obviously better than current practice. For instance, when testing an electrical circuit, technicians must often hold two electrical probes and a test meter. Repairing telephone lines adds the extra complication of clinging to a telephone pole at the same time. The Triplett VisualEYEzer 3250 multimeter (Figure 2.3) provides a head-up view of the meter's display, allowing the user to hold a probe in each hand. The result is that the technician can test circuits more quickly and is better able to handle precarious situations. In the operating room, anesthesiologists use HMDs in a similar way. The HMD overlays vital statistics on the doctor's visual field while monitoring the patient (Liu et al. 2009). Current practice often requires anesthesiologists to divert their gaze to a monitor elsewhere in the room, which reduces the speed at which dangerous situations are detected and corrected.

    With more case studies showing the advantages of HMDs in the workplace, industry has shown a steady interest in the technology. From the mid-1990s to after 2000, companies such as FlexiPC and Xybernaut provided a general-purpose line


    FIGURE 2.3 Wearable systems designed for industrial, medical, and military applications. (a) Xybernaut MA-IV computer (1999, $7500). (b) Triplett VisualEYEzer 3250 multimeter (2000, $500). (c) Xybernaut MA-V computer (2001, $5000). (d) Xybernaut/Hitachi VII/POMA/WIA computer (2002, $1500). (e) MicroOptical SV-6 display (2003, $1995). (f) Vuzix Tac-Eye LT head-up display (2010, $3000). (Images courtesy of Tavenner Hall.)


    of systems for sale. See Figure 2.3 for the evolution of Xybernaut's line. Meanwhile, specialty display companies like MicroOptical and Vuzix (Figure 2.3) made displays designed for industrial purposes but encouraged others to integrate them into systems for industry. User input to a general-purpose industrial system might be in the form of small-vocabulary, isolated-word speech recognition; a portable trackball; a dial; or a trackpad mounted on the side of the main computer. Wireless networking was often by 802.11 PCMCIA cards. CDPD, a digital standard implemented on top of analog AMPS cellular service, was used when the wearer needed to work outside of the corporate environment. Most on-body components were connected via wires, as wireless Bluetooth implementations were often unstable or nonexistent. Industrial customers often insisted on Microsoft Windows for compatibility with their other systems, which dictated many difficult design choices. Windows was not optimized for mobile use, and x86 processors were particularly bad at power efficiency. Thus, wearables had to be large to have enough battery life and to dissipate enough heat during use. The default Windows WIMP user interface required significant hand-eye coordination to use, which caused wearers to stop what they were doing and focus on the virtual interface before continuing their task in the physical world. After smartphones and tablets introduced popular, lighter-weight operating systems and user interfaces designed for grosser gesture-based interactions, many corporate customers began to consider operating systems other than Windows. The popularization of cloud computing also helped break the Windows monopoly, as corporate customers considered wearables as thin-client interfaces to data stored in the wireless network.

    Today, lightweight, self-contained Android-based HMDs like Google Glass, Vuzix M100, and Optinvent ORA are ideal for manufacturing tasks such as order picking and quality control, and companies like APX-Labs are adapting these devices to the traditional wearable industrial tasks of repair, inspection, and maintenance. Yet many opportunities still exist for improvement; interfaces are evolving quickly, but mobile input is still a fundamental challenge. Switching to a real-time operating system could improve battery life, user experience, weight, cost, system complexity, and the number of parts required to make a full machine. One device is not suitable for all tasks, and I foresee an array of specialized devices in the future.

    2.8 ACADEMIC/MAKER SYSTEMS FOR EVERYDAY USE

    Industrial systems focused on devices donned like uniforms to perform a specific task, but in the early 1990s some academics and makers started creating their own systems intended for everyday private use. These devices were worn more like eyeglasses or clothing. Applications included listening to music, texting, navigation, and scheduling, apps that became mostly the domain of smartphones 15 years later. However, taking notes during classes, meetings, and face-to-face conversations was a common additional use of these devices beyond what is seen on smartphones today. Users often explained that having the devices was like having an extra brain to keep track of detailed information.

    Audio and visual displays were often optimized for text, and chording keyboards such as a Twiddler (shown in Figure 2.4b) or any of the 7- or 8-button chorders (shown in Figure 2.4a) enabled desktop-level touch typing speeds. Due to the use of lighter-weight interfaces and operating systems, battery life tended to be better than that of the industrial counterparts. Networks included analog dial-up over cellular, amateur radio, CDPD, and WiFi as they became available. The CharmIT, Lizzy, and Herbert 1 concentrated the electronics into a centralized package, but the MIThril and Herbert 3 (not shown) distributed the electronics in a vest to create a more balanced package for wearing.

    Displays were mostly one-eyed and opaque, depending on an illusion in the human visual system by which vision is shared between the two eyes. These displays appear see-through to the user because the image from the occluded eye and the image of the physical world from the non-occluded eye are merged to create a perception of both. In general, opaque displays provide better contrast and brightness than transparent displays in daylight environments. The opaque displays might be mounted up and away from the main line of sight or mounted directly in front of the eye. Reflection Technology's Private Eye (Figure 2.4b) and MicroOptical's displays (Figure 2.4d) were popular choices due to their relatively low power and good sharpness for reading text. Several of the everyday users of these homebrew machines from the 1990s would later join the Google Glass team and help inform the development of that project.


    FIGURE 2.4 Some wearable computers designed by academics and makers focused on creating interfaces that could be used as part of daily life. (a) Herbert 1, designed by Greg Priest-Dorman in 1994. (b) Lizzy wearable computer, designed by Thad Starner in 1995 (original design 1993). (c) MIThril, designed by Rich DeVaul in 2000. (d) CharmIT, designed as a commercial, open-hardware wearable computing kit for the community by Charmed, Inc. in 2000. (Images courtesy of Tavenner Hall.)


    2.9 CONSUMER DEVICES

    Consumer wearable computers are fashion and, above all, must be designed as such. Unless a user is willing to put on the device, it does not matter what functionality it promises. Making a device that is both desirable and fashionable places constraints on the whole system: the size of the battery, heat dissipation, networking, input, and the design of the HMD itself.

    Consumer wearable computers often strive to be aware of the user's context, which requires leveraging low-power modes on CPUs, flash memory, and sensors to monitor the user throughout the day. As opposed to explicit input, these devices may sense the wearer's movement, location, and environmental information in the background. For example, the Fitbit One (Figure 2.5), clipped on to clothing or stored in a pocket, monitors steps taken, elevation climbed, and calories burned during the day. This information is often uploaded to the cloud for later analysis through a paired laptop or phone using the One's Bluetooth LE radio. The Fitsense FS-1 from 2000 had a similar focus but also included a wristwatch so that the user could refer to his statistics quickly while on-the-go. Since Bluetooth LE did not yet exist when the FS-1 was created, it used a proprietary, low-power, on-body network to communicate between


    FIGURE 2.5 As technology improves, consumer wearable devices continue to gain acceptance. (a) Fitsense heart band, shoe sensor, and wristwatch display (2000, $200). (b) Fitbit One (2012, $100). (c) Recon MOD Live HMD and watch band controller for skiing (2011, $400). (d) 2012 Ibex Google Glass prototype. Released Glass Explorer edition (2014, $1500). (Images courtesy of Tavenner Hall.)


    its different components as well as with a desktop or laptop. This choice was necessary because of battery life and the instability of wireless standards-based interfaces at the time, and it meant that mobile phones could not interface with the device. Now that Bluetooth LE is becoming common, an increasing number of devices, including the Recon MOD Live and Google Glass (Figure 2.5), leverage off-body digital networks by piggybacking on the connection provided by a smartphone.
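The background step counting performed by trackers like the Fitbit One can be illustrated with a toy detector that counts upward threshold crossings in accelerometer magnitude. The threshold and sample data below are invented for the example; production pedometers use more robust filtering and peak detection.

```python
# Toy step detector in the spirit of the activity trackers described above:
# count peaks in accelerometer magnitude that cross a fixed threshold.
import math

def count_steps(samples, threshold=11.0):
    """samples: list of (ax, ay, az) in m/s^2; count upward threshold crossings."""
    steps = 0
    above = False
    for ax, ay, az in samples:
        mag = math.sqrt(ax * ax + ay * ay + az * az)
        if mag > threshold and not above:
            steps += 1          # rising edge: a new peak begins
        above = mag > threshold
    return steps

# Two simulated impact peaks above the resting ~9.8 m/s^2 of gravity.
walk = [(0, 0, 9.8), (0, 0, 12.0), (0, 0, 9.5), (0, 0, 12.5), (0, 0, 9.6)]
assert count_steps(walk) == 2
```

Tracking the rising edge rather than every above-threshold sample is what keeps one sustained peak from being counted as several steps, the same false-trigger concern that applies to gesture input.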

    Both consumer wristwatches, such as the FS-1, and HMDs, such as the Recon MOD and Google Glass, can provide information to the wearer while on the go. Because these displays are fast to access, they reduce the time between when the user first has the intention to check some information and the action of doing so. Whereas mobile phones might take 23 s to access (to physically retrieve, unlock, and navigate to the appropriate application), wristwatches and HMDs can shorten that delay to only a couple of seconds (Ashbrook et al. 2008). This reduction in time from intention to action allows the user to glance at the display, much like the speedometer in a car's dashboard, and get useful information while performing other tasks.

    An HMD has several advantages over a wristwatch, one of which is that it can actually be hands-free. By definition, a wristwatch requires at least one arm to check the display and often another hand to manipulate the interface. Howeve