“Designing Assistive Technology: A Personal Experience of Trial and Error”

Posted on 24-Jan-2018 · Category: Technology

TRANSCRIPT

  • Computer Engineering

    Designing Assistive Technology

    Roberto Manduchi

    A personal experience of trial and error

  • Research at JPL (circa 2000)

    Sensors

  • Can we go from here

    to here?

  • Assistive Technology:

    Services that provide equipment or systems, standardized or individualized, whose aim is to improve or maintain the functional capabilities of individuals with disabilities.

    M.J. Fuhrer, NIH

  • Significant vision loss (25 million)

    Normal vision

    Legally blind (1.3 million)

    Blind or light perception (0.25 million)

    [Charts: prevalence and age distribution, by age group 18-44, 45-64, 65-74, 75+]

  • Educational attainment:

                         < High school   High school   Some college   Bachelor's
    With vision loss         23%             31%           28%           18%
    U.S. average             13%                                         27%

    Employment rate: 19% of legally blind Americans 18 and older were employed in 1994-95 (NHIS, 1994-95)

    [Chart of reading media: pre-readers, non-readers, auditory readers, visual readers, braille readers]

  • If you cannot see well...

    1. You cannot drive your car
    2. You cannot read the paper
    3. You may trip over an obstacle
    4. You may miss a sign far away
    5. You may not be able to cross a street safely
    6. You may not find what you are looking for at the supermarket
    7. You may get lost in a new place
    8. You may not receive a proper education
    9. You may have problems finding a job
    10. You may not recognize friends from a distance
    11. You may lose objects in your home
    12. You may have problems surfing the Web
    13. You may not know who is in the room
    14. You may not be able to read this line

  • Success stories

    Screen readers

    Screen magnifiers

    Braille interfaces

    Enlargers/telescopes

  • Success stories

    Accessible GPS

    Money reader

    Object recognition/crowdsourcing

    OCR

  • Mobility:

    Moving safely, gracefully, and comfortably through the environment. Mobility depends in large part on perceiving the properties of the immediate surroundings.

    R.G. Long, E.W. Hill

  • Mobility aids

    110,000 users 7,000 users

    very few users

  • A laser-based virtual cane

    Active triangulation for range estimation

    Camera, laser pointer

    Range tracking for environmental feature detection

    D. Yuan, R. Manduchi (2005). Proc. CVPR.
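The active-triangulation geometry above can be sketched in a few lines. Assuming, for illustration, a laser beam mounted parallel to the optical axis at baseline b from the camera: a spot at range Z images at pixel offset x = f·b/Z from the principal point, so Z = f·b/x. The focal length and baseline values below are made up:

```python
def range_from_spot(pixel_offset, focal_px, baseline_m):
    """Active triangulation: recover range from the laser spot's image offset.

    pixel_offset: horizontal offset (pixels) of the detected laser spot
                  from the principal point
    focal_px:     camera focal length, in pixels
    baseline_m:   camera-to-laser baseline, in meters
    """
    if pixel_offset <= 0:
        raise ValueError("spot at or beyond infinity; no range estimate")
    return focal_px * baseline_m / pixel_offset

# f = 500 px, baseline = 10 cm: a spot 10 px off-center is 5 m away.
z = range_from_spot(10, 500, 0.1)  # 5.0 m
```

Nearer surfaces push the spot farther off-center, so range resolution degrades quadratically with distance (dZ/dx = -Z²/(f·b)); tracking range changes over time, as in the slide, is more robust than relying on a single absolute reading.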


  • The many things a white cane can do

    Bumper: contacts things that are in the direct path of travel
    Probe: an extension of the sense of touch; finds, verifies, and discriminates landmarks
    Helps establish the line of direction of travel, e.g. by trailing a straight edge
    Detects drop-offs
    Measurement tool

    ...and:

    Is cheap
    Never runs out of power
    Is light, durable, and foldable
    Always works
    Identifies the blind traveler
    Puts drivers on alert when it moves into a street before the blind traveler

  • Wayfinding:

    The process of navigating through an environment and traveling to places by relatively direct paths.

    R.G. Long, E.W. Hill

    Finding the way is not a gift or an innate ability; it is a precondition for life itself. Knowing where I am, my location, is the precondition for knowing where I have to go, wherever it may be.

    Otl Aicher

  • Wayfinding without sight

    The Love Stone at Kiyomizu Temple, Kyoto

  • Wayfinding for sighted people

    Prior information: maps, verbal directions

    Path integration: continuous update of the egocentric coordinates of the starting location; remembering the path traversed, turns, etc.

    Piloting: sensing one's positional information to determine one's location; reading signs, noticing landmarks (acoustic, tactile, smells, heat)
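The path-integration item above amounts to dead reckoning: starting from the origin, accumulate each step's displacement, which also yields the egocentric vector pointing back to the starting location. A minimal sketch (the step headings and lengths are hypothetical):

```python
import math

def path_integrate(steps):
    """Dead reckoning over a sequence of (heading_deg, distance) steps.

    Returns the current position and the egocentric "home vector"
    pointing back to the starting location.
    """
    x = y = 0.0
    for heading_deg, dist in steps:
        theta = math.radians(heading_deg)
        x += dist * math.cos(theta)
        y += dist * math.sin(theta)
    home_vector = (-x, -y)  # what you would follow to return to start
    return (x, y), home_vector

# Walk three 10 m legs, turning 90 degrees twice:
pos, home = path_integrate([(0, 10), (90, 10), (180, 10)])
# pos is approximately (0, 10); home is approximately (0, -10)
```

Counting steps and turns, as blind travelers do, is exactly this computation done mentally; errors in each step accumulate, which is why path integration is usually combined with piloting off landmarks.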

  • Wayfinding for blind people

    Prior information: maps (tactile), verbal directions

    Path integration: continuous update of the egocentric coordinates of the starting location; counting steps, turns, etc.

    Piloting: sensing one's positional information to determine one's location; reading signs (Braille), noticing landmarks (acoustic, tactile, smells, heat)

  • GPS is only a partial solution

    Works only outdoors; ~10 m resolution

    It will take you to locations and won't get you lost, but...

    where is the door?

  • Supporting Infrastructure

    Tactile paving

    Accessible pedestrian signals

    Light beaconing (Talking Signs)

    RFID, iBeacons

  • The problem with indoor navigation systems

    Turn-by-turn guidance is only useful if you can visually access the scene context

    Blind travelers need some form of scene knowledge: a representation of objects, spaces, and surfaces at a scale normally not represented in maps

  • The promise of cameras

    12 Chelhwon Kim, Roberto Manduchi

    Fig. 5. Top row: coplanar line sets produced by our algorithm for the image set considered in the evaluation. Only one image of each pair is shown. Different line sets are shown in different colors. Note that some lines (especially those at a planar junction) may belong to more than one cluster (although they are displayed using only one color). All lines that have been matched (possibly incorrectly) across images are shown (by thick segments) and used for coplanarity estimation. The quadrilaterals Q shown by dotted lines represent potential planar patches. They contain all coplanar lines in a cluster, and are computed as described in Sec. 4. Bottom row: 3-D reconstruction of the visible line segments and camera center positions (c1, c2). Line segments are colored according to their orientation in space. The colored rectangles are the reconstructed planar patches corresponding to the quadrilaterals Q shown with the same color as in the top row.

    can test for their coplanarity using Plücker matrices [30]. More precisely, lines (L1, L2, L3) are coplanar if L1 L̃2 L3 = 0, where L1, L3 are the Plücker L-matrices associated with L1, L3 and L̃2 is the dual Plücker L̃-matrix associated with L2 [30]. The ability of an algorithm to determine line coplanarity is critical for precise reconstruction of Manhattan environments; in addition, this criterion gives us an indirect assessment of the quality of pose estimation (as we expect that good pose estimation should result in good 3-D reconstruction and thus correct coplanarity assessment).

    We compared our algorithm against two other techniques. The first is traditional structure from motion from point features (SFM-P). We used the popular VisualSFM application [31], created and made freely available by Changchang Wu. The second technique is Elqursh and Elgammal's algorithm [7], which uses lines (rather than point features) in a pair of images to estimate the relative camera pose (SFM-L). Once the motion parameters (R, t) are obtained with either algorithm, 3-D lines are reconstructed from matched image line pairs. To check for coplanarity of a triplet of lines (at least two of which are parallel), we compute the associated Plücker matrices, each normalized to unit norm (largest singular value), and threshold the norm of L1 L̃2 L3. By varying this threshold, we obtain a precision/recall curve. This evaluation was conducted with and without the corrective pre-processing step, discussed in Sec. 4, that rotates each line segment to align it with the associated vanishing point.
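The coplanarity test can be illustrated with Plücker line coordinates. This is a sketch using the pairwise reciprocal product rather than the paper's 4×4 L-matrix triple product: two lines with directions d1, d2 and moments m1, m2 are coplanar iff d1·m2 + d2·m1 = 0 (for a triplet of lines not all through one point, checking all pairs is equivalent). The example lines are made up:

```python
import numpy as np

def plucker(a, b):
    """Plücker coordinates (direction, moment) of the line through points a, b."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return b - a, np.cross(a, b)

def coplanar(line1, line2, tol=1e-9):
    """Two 3-D lines are coplanar iff their reciprocal product vanishes."""
    d1, m1 = line1
    d2, m2 = line2
    return abs(np.dot(d1, m2) + np.dot(d2, m1)) < tol

# Two lines lying in the z = 0 plane: coplanar.
l1 = plucker((0, 0, 0), (1, 0, 0))
l2 = plucker((0, 1, 0), (1, 2, 0))
# A line at height z = 1, skew to l1.
l3 = plucker((0, 0, 1), (0, 1, 1))

print(coplanar(l1, l2))  # True
print(coplanar(l1, l3))  # False
```

With noisy reconstructed lines the product is never exactly zero, which is why the evaluation above thresholds a norm and sweeps the threshold into a precision/recall curve.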

