
  • 8/4/2019 Human Motion Tracking

    1/18

    Review

Human motion tracking for rehabilitation - A survey

Huiyu Zhou a, Huosheng Hu b,*

a Brunel University, Uxbridge UB8 3PH, United Kingdom

    bUniversity of Essex, Colchester CO4 3SQ, United Kingdom

    Received 21 May 2007; received in revised form 21 August 2007; accepted 19 September 2007

    Available online 31 October 2007

    Abstract

    Human motion tracking for rehabilitation has been an active research topic since the 1980s. It has been motivated by the increased number of

patients who have suffered a stroke, or some other motor function disability. Rehabilitation is a dynamic process which allows patients to restore their functional capability to normal. To reach this target, a patient's activities need to be continuously monitored, and subsequently corrected. This

paper reviews recent progress in human movement detection/tracking systems in general, and existing or potential applications for stroke

    rehabilitation in particular. Major achievements in these systems are summarised, and their merits and limitations individually presented. In

    addition, bottleneck problems in these tracking systems that remain open are highlighted, along with possible solutions.

© 2007 Elsevier Ltd. All rights reserved.

    Keywords: Stroke rehabilitation; Sensor technology; Motion tracking; Biomedical signal processing; Control

    Contents

1. Introduction
2. Generic sensor technologies
   2.1. Non-visual tracking systems
   2.2. Visual based tracking systems
        2.2.1. Visual marker based tracking systems
        2.2.2. Marker-free visual based tracking systems
   2.3. Combination tracking systems
3. Non-visual tracking systems
   3.1. Inertial sensor based systems
   3.2. Magnetic sensor based systems
   3.3. Other sensors
   3.4. Intersense
   3.5. Glove-based analysis
4. Visual marker based tracking systems
   4.1. Passive
   4.2. Active
   4.3. Non-commercialized systems
5. Marker-free visual tracking systems
   5.1. 2-D approaches
        5.1.1. 2-D approaches with explicit shape models
        5.1.2. 2-D approaches without explicit shape models
   5.2. 3-D approaches
        5.2.1. Model-based tracking
Available online at www.sciencedirect.com

Biomedical Signal Processing and Control 3 (2008) 1-18

www.elsevier.com/locate/bspc

* Corresponding author. Tel.: +44 20 872297; fax: +44 20 872788.

E-mail address: [email protected] (H. Hu).

1746-8094/$ - see front matter © 2007 Elsevier Ltd. All rights reserved.

doi:10.1016/j.bspc.2007.09.001

        5.2.2. Feature-based tracking
        5.2.3. Camera configuration
   5.3. Animation of human motion
6. Robot-aided tracking systems
   6.1. Typical working systems
        6.1.1. Cozens
        6.1.2. MIT-MANUS
        6.1.3. Taylor and improved systems
        6.1.4. MIME
        6.1.5. ARM Guide
        6.1.6. Others
   6.2. Haptic interface techniques
   6.3. Other techniques
        6.3.1. Gait rehabilitation
7. Discussion
8. Conclusions
Acknowledgement
References

    1. Introduction

Evidence shows that, during 2001-2002, 130,000 people in

    the UK experienced a stroke [72] and required admission to

    hospital. More than 75 % of these people were elderly; and

    anticipated locally based multi-disciplinary assessments and

    appropriate rehabilitative treatments after they were dismissed

    from hospital [29,54]. This resulted in increased demand on

    healthcare services and expense in the national health service.

    Reducing the need for face-to-face therapy might lead to an

    optimal solution for therapy efficiency and expense issues.

    Therefore, more and more interest has been drawn toward the

    development of home based rehabilitation schemes [4,61].

The goal of rehabilitation is to enable a person who has experienced a stroke to regain the highest possible level of

    independence so that they can be as productive as possible

    [182,183]. In fact, rehabilitation is a dynamic process which

    uses available facilities to correct any undesired motion

    behaviour in order to reach an expectation (e.g. ideal position)

    [150]. Therefore, in a rehabilitation course the movement of

    stroke patients needs to be continuously monitored and rectified

so as to maintain a correct motion pattern. Consequently, detecting/

    tracking human movement becomes vital and necessary in a

    home based rehabilitation scheme [179].

    This paper provides a survey of technologies embedded

within human movement tracking systems, which consistently update spatiotemporal information with regard to human

    movement. Existing systems have demonstrated that, to some

    extent, proper tracking designs help accelerate recovery in

    human movement. Unfortunately, many challenges still remain

    open, due to the complexity of human motion, and the existence

    of error or noise in measurement.

    2. Generic sensor technologies

    Human movement tracking systems are expected to generate

    real-time data that dynamically represents the pose changes of a

    human body (or a part of it), based on well developed motion-

sensor technologies [9]. Fig. 1 illustrates a proposed motion tracking system, where human movements can be detected

    using available visual and on-body sensors. Motion sensor

    technology in a home based rehabilitation environment,

    involves accurate identification, tracking, and post-processing

    of movement. Currently, intensive research interests address the

    application of position sensors, such as goniometry, pressure

    sensors and switches, magnetometers, and inertial sensors (e.g.

    accelerometers and gyroscopes).

    Data acquisition is usually bound to noise or error. It is

    essential to study the structure and characteristics of individual

    sensors so that we can identify noise or error sources. To

    proceed with a relevant analysis, we first summarise overall

sensory technologies, followed by a detailed description. In general, a tracking system can be non-visual, visual based (e.g.

    marker and markerless based) or a combination of both. Fig. 2

    illustrates a classification of available sensor techniques that

    will be introduced later in this paper. Performance of the

    systems based on these techniques is outlined in Table 1.

    2.1. Non-visual tracking systems

    Sensors employed within these systems adhere to the human

    body in order to collect movement information. These sensors

    are commonly categorised as mechanical, inertial, acoustic,

    Fig. 1. An illustration of a proposed human movement tracking system

    (courtesy of Zhang et al. [175]).

H. Zhou, H. Hu / Biomedical Signal Processing and Control 3 (2008) 1-18


    radio, or microwave and magnetic based. Some of them have

    such small footprints that they can detect small amplitudes, such

    as finger or toe movements. Generally speaking, each kind of

sensor has its own advantages and limitations. Modality-specific, measurement-specific, and circumstance-specific limitations

    accordingly affect the use of particular sensors in different

    environments [162]. One example is an inertial accelerometer

    (piezoelectric [136], piezoresistive [97] or variable capacitive

    [167]), which normally converts linear or angular acceleration

    (or a combination of both) into an output signal [21]. An

accelerometer is illustrated in Fig. 3. Accelerometers are physically compact and lightweight, and have therefore been

    frequently accommodated in portable devices (e.g. head-

    mounted devices). Furthermore, the outcomes of accelerometers

    are immediately available without complicated computation.

This feature is especially useful when only basic acceleration information is needed from the accelerometers.

Unfortunately, accelerometers suffer from the drift problem if they are used to estimate velocity or orientation. This is

    due to sensor noise or offsets. Therefore, external correction is

    demanded throughout the tracking stage [17]. Even though each

    sensor has its own drawbacks, other available sensors may be

    used as a complement. For example, to improve the accuracy of

    location computation people have exploited odometers, instead

    of accelerometers, in the design of mobile robots. Recently,

    voluntary repetitive exercises administered with the mechanical

assistance of robotic rehabilitators have proven effective in

    improving arm movement ability in post-stroke populations.

    Through these robot-aided tracking systems, human movements

    can be measured using electromechanical or electromagnetic

sensors that are integrated in the structures. Electromechanical sensor based systems prohibit free human movement, but the

    electromagnetic approach permits motion freedom. It has been

shown that robot-aided tracking systems provide a stable and consistent relationship, over a limited period, between system

    outputs and real measurements. An introduction to such robot-

    aided tracking systems will be provided in a later section.

    2.2. Visual based tracking systems

    Optical sensors (e.g. cameras) are normally applied to

    improve accuracy in position estimation. Tracking systems can

be classified as either visual marker or marker-free, depending on

    Fig. 2. Classification of human motion tracking using sensor technologies.

Table 1

Performance comparison of different motion tracking systems according to Fig. 2

Systems        Accuracy  Compactness  Computation  Cost    Drawbacks
Inertial       High      High         Efficient    Low     Drifts
Magnetic       Medium    High         Efficient    Low     Ferromagnetic materials
Ultrasound     Medium    Low          Efficient    Low     Occlusion
Glove          High      High         Efficient    Medium  Partial posture
Marker         High      Low          Inefficient  Medium  Occlusion
Marker-free    High      High         Inefficient  Low     Occlusion
Combinatorial  High      Low          Inefficient  High    Multidisciplinary
Robot          High      Low          Inefficient  High    Limited motion

    Fig. 3. Entrans family of miniature accelerometers [77].



    whether or not indicators need to be attached to body parts. First,

    we provide a brief description of visual marker based systems.

    2.2.1. Visual marker based tracking systems

    Visual marker based tracking is a technique where cameras

    are applied to track human movements, with identifiers placed

    upon the human body. As the human skeleton is a highly

    articulated structure, twists and rotations generate movement at

    high degrees-of-freedom. As a consequence, each body part

    conducts an unpredictable and complicated motion trajectory,

    which may lead to inconsistent and unreliable motion

    estimation. In addition, cluttered scenes, or varied lighting,

    most likely distract visual attention from the real position of a

    marker. As a solution to these problems, visual marker based

    tracking is preferable in these circumstances.

Visual marker based tracking systems, e.g. VICON or Optotrak, are quite often used as a gold standard in human motion analysis due to their accurate position information (errors are around 1 mm). This accuracy feature motivates popular applications of visual marker based tracking systems in medicine. For example, a

    MacReflex Motion Capture System was used in a study to

    evaluate the relationship between the body-balancing move-

    ments and anthropometric characteristics of subjects while

they stood on two legs with eyes open and closed [95].

Another application example is a study that induced slips in healthy young subjects to determine whether subjects who recovered after the slip could be discriminated, using selected lower extremity kinematics, from those who fell after the slip [19].

One major drawback of using optical sensors with markers is that rotated joints or overlapped body parts cannot be detected, and hence 3-D rendering is not available [146]. This

    situation could possibly happen in a home environment, where

    a patient lives in a cluttered background.

    2.2.2. Marker-free visual based tracking systems

    Marker-free visual based tracking systems only exploit

    optical sensors to measure movements of the human body. This

    application is motivated by the flaws of using visual marker

    based systems [81]: (1) identification of standard bony

    landmarks can be unreliable; (2) the soft tissue overlying bony

    landmarks can move, giving rise to noisy data; (3) the marker

    itself can wobble due to its own inertia; (4) markers can even

come adrift completely.

A camera can have a resolution of a million pixels, enabling highly accurate detection of object movements. In

    addition, cameras nowadays can be easily obtained with a low

    cost, while the camera parameters can be flexibly configured by

    the user. These merits encourage cameras to be popularly used

    in surveillance applications. A little bit disappointment is that

    this technique requires intensive computation to conduct 3-D

localisation and error reduction, in addition to the minimisation

    of the latency of data [22]. Furthermore, high speed cameras are

    required, as conventional cameras (with a sampling rate of less

    than sixty frames a second) provide insufficient bandwidth for

    accurate data representation [12].

    2.3. Combination tracking systems

    These systems take advantage of marker based and marker-

    free based technologies. This combination strategy helps

    reduce errors arising from using individual platforms. For

    example, the boundaries or silhouettes of human body parts can

    be captured in a motion trajectory if markers mounted on these

    parts are not in the field of view of cameras. This strategy

    requires intensive calibration and computation, and hence will

    not be discussed further in this paper. For the purposes of

    research interest, a reader can refer to literature such as ref.

    [152].

    3. Non-visual tracking systems

    Tracking human actions is an effective method, which

    consistently and reliably represents motion dynamics over time

[177]. In a rehabilitative course, the limbs of a patient must be

    localised so that undesirable patterns can be corrected. For this

purpose, it is possible to make use of non-visual sensors, e.g. electromechanical or electromagnetic sensors. In fact, non-

    vision based tracking systems have been commonly used, as

    they do not suffer from the line-of-sight problem which

    cannot be effectively dealt with in a home based environment.

    In this paper, we will focus on systems with inertial, magnetic,

    ultrasonic, and other similar sensing techniques. Additionally,

    glove based techniques are included (due to their employment

    of modern sensing techniques).

    3.1. Inertial sensor based systems

Inertial sensors like accelerometers and gyroscopes have been frequently used in navigation and augmented reality modeling [157,176,115,172,15]. They offer an easy-to-use and cost-efficient way to detect full-body human motion. The

    motion data of the inertial sensors can be transmitted wirelessly

to a work base for further processing or visualisation. Inertial

    sensors can be of high sensitivity and large capture areas.

    However, the position and angle of an inertial sensor cannot be

    correctly determined, due to the fluctuation of offsets, and

    measurement noise, leading to integration drift. Therefore,

    designing drift-free inertial systems is the main target of the

    current research.
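The integration-drift problem can be made concrete with a short numerical sketch: a small constant offset in the measured acceleration, integrated once for velocity and twice for position, produces errors that grow linearly and quadratically in time. The bias value and sampling rate below are illustrative assumptions, not figures from any cited system:

```python
# Illustrative sketch of integration drift in an inertial sensor.
# A constant bias in the acceleration reading grows linearly in the
# integrated velocity and quadratically in the integrated position.

dt = 0.01          # 100 Hz sampling interval (assumed)
bias = 0.05        # constant accelerometer offset in m/s^2 (assumed)
true_accel = 0.0   # the tracked body is actually at rest

velocity = 0.0
position = 0.0
for _ in range(1000):                  # 10 s of samples
    measured = true_accel + bias       # every sample carries the offset
    velocity += measured * dt          # velocity error ~ bias * t
    position += velocity * dt          # position error ~ bias * t^2 / 2

print(f"velocity error after 10 s: {velocity:.2f} m/s")   # ~0.5 m/s
print(f"position error after 10 s: {position:.2f} m")     # ~2.5 m
```

Even this tiny offset accumulates to metres of position error within seconds, which is why external correction from a complementary sensor is required throughout the tracking stage.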

MT9 (now MTx) is a digital measurement unit that measures 3-D rate-of-turn, acceleration, and earth-magnetic field [85] (Fig. 4). In a homogeneous earth-magnetic field, the MT9 system has 0.05° root-mean-square (RMS) angular resolution; 1.0° static accuracy; and 3° RMS dynamic

    accuracy. Using such a commercially available inertial sensor,

Zhou and Hu developed a novel tracking strategy for human

    upper limb motion [180,178]. Human upper limb motion was

    represented by a kinematic chain, in which there were six joint

    variables to be considered. A simulated annealing based

    optimization method was adopted to reduce measurement error

[180]. To effectively suppress measurement noise, Zhou and

    Hu [181] exploited an extended Kalman filter that fused the

    data from the on-board accelerometers and gyroscopes.
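The fusion idea can be sketched for a single joint angle: integrating the gyroscope rate predicts the angle but drifts, while the accelerometer-derived inclination gives a noisy but drift-free measurement that corrects the prediction. The one-dimensional linear Kalman filter below is only an illustrative simplification of the extended Kalman filter in [181]; the noise variances and bias are assumed values:

```python
import random

def kalman_fuse(gyro_rates, accel_angles, dt=0.01, q=0.001, r=0.05):
    """1-D Kalman filter: the gyro rate drives the prediction and the
    accelerometer inclination corrects it. q and r are assumed
    process/measurement noise variances."""
    angle, p = 0.0, 1.0
    estimates = []
    for rate, meas in zip(gyro_rates, accel_angles):
        angle += rate * dt             # predict with the (biased) gyro
        p += q
        k = p / (p + r)                # Kalman gain
        angle += k * (meas - angle)    # correct with the accelerometer
        p *= 1.0 - k
        estimates.append(angle)
    return estimates

# Simulate a stationary limb: the true angle is 0, the gyro carries a
# 0.2 rad/s bias, and the accelerometer reads the angle plus noise.
random.seed(0)
n = 2000
gyro = [0.2] * n                                     # pure bias
accel = [random.gauss(0.0, 0.2) for _ in range(n)]   # noisy, unbiased
est = kalman_fuse(gyro, accel)

naive_drift = 0.2 * n * 0.01   # integrating the gyro alone drifts to 4 rad
print(f"naive gyro integration: {naive_drift:.1f} rad")
print(f"fused estimate:         {est[-1]:.3f} rad")   # stays near zero
```

The correction step continually pulls the drifting gyro integral back toward the drift-free accelerometer reading, which is the essence of the reported reduction of drift and noise.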



    Experimental results demonstrated a reduction of drift and

    noise. G-Link is an inertial sensor similar to MT9 (Fig. 5).

G-Link has two acceleration ranges, ±2 g and ±10 g, while its battery lifespan can be 273 h. Furthermore, this product has a small transceiver size: 25 × 25 × 5 mm. Many G-Links can be

    linked together to form a wireless sensor network [78].

Literature about the use of G-Link can be found in refs. [101,5].

Luinge [104] introduced the design and performance of a

    Kalman filter to estimate inclination from the signals of a

triaxial accelerometer. Empirical evidence shows that inclination errors are less than 2°. Unfortunately, the problem of

    integration drift around the global vertical direction still

    appears. Foxlin et al. [55] revealed the first prototype of the

FlightTracker, which was designed to overcome the shortcomings addressed above using a hybrid tracking platform that fuses

ultrasonic range measurements with inertial tracking. Experimental results show that drift was slower than 1 mm/s or 1°/min. Lobo and Dias [103] presented a framework for using

inertial sensor data in vision systems. Using the vertical reference provided by the inertial sensors, the image horizon

    line could be determined. The main weakness of this method

    was that vertical world features were not available in some

    circumstances, e.g. flat surfaces, and cluttered scenes, etc.

    Similar work also has been described in refs. [113,117].

Applications of inertial sensors in medicine have been widely reported to date. Steele et al. [144] provided an

    overview of the potential applications of motion sensors to

    detect physical activity in persons with chronic pulmonary

    disease in the setting of pulmonary rehabilitation. They used

    StayHealthy RT3 triaxial accelerometers to measure activity

over 1 min epochs for collecting bouts of activity over 21 days.

The study showed that, in general, the sensors' outcomes

    corresponded to the real activity. Similarly, an accelerometer

    based wireless body area networks system was proposed by

    Jovanov et al., which presented ambulatory health monitoring

    using two perpendicular dual axis accelerometers for extended

periods of time and near real-time updates of patients' medical

    records through the Internet [92].

    Najafi et al. proposed to use a miniature gyroscope to

    conduct a study on the falls of the elderly [114]. The

    experimental results showed that the sensor measurement

    enabled the falls to be predicted according to the previous

    history of the elderly subjects with high and low fall-risk.

    Patrick et al. reported that parkinsonian rigidity could be

    assessed by monitoring force and angular displacements

    imposed by the clinician onto the limb segment distal to the

    joint being evaluated [118].

    3.2. Magnetic sensor based systems

    Magnetic motion tracking systems have been widely used for

tracking user movements in virtual reality, due to their small size, high

    sampling rate, lack of occlusion, etc. Despite great successes,

    magnetic trackers have inherent weaknesses, e.g. latency and

    jitter [100]. Latency arises due to the asynchronous nature by

    which sensor measurements are conducted. Jitter appears in the

presence of ferrous or electronic devices in the surroundings, and

    noise in the measurements. A number of research projects have

been launched to tackle these problems, using Kalman filtering or other predictive filtering methods [170,107,173].

    MotionStar is a magnetic motion capture system produced by

    the Ascension Technology Corporation in the United States [73]

(Fig. 6). It holds good performance: (1) translation range: ±3.05 m; (2) angular range: all attitude, ±180° for azimuth and roll, ±90° for elevation; (3) static resolution (position): 0.08 cm at 1.52 m range; (4) static resolution (orientation): 0.1° RMS at 1.52 m range.

Fig. 4. An MT9 sensor [85].

Fig. 5. A G-Link unit [78].

Fig. 6. A MotionStar Wireless 2 system [73].

This system applies direct current (dc) magnetic tracking technologies, which are significantly less susceptible to

    metallic distortion than alternating current (ac) electromagnetic

    tracking technologies. Another example is LIBERTY from

Polhemus [80] (Fig. 7). LIBERTY computed at an extraordinary rate of 240 updates per second per sensor, with the ability to be upgraded from four sensor channels to eight via the addition of a single circuit board. Also, it had a latency of 3.5 ms, a resolution of 0.038 mm at a 30 cm range, and a 0.0012° orientation resolution. Molet

    et al. [111,112] presented a real-time conversion of magnetic

    sensor measurements into human anatomical rotations. Using

solid-state magnetic sensors and a tilt sensor, Caruso [27,28]

    developed a new compass that could determine an accurate

    heading.

    Suess et al. presented a frameless system for intraoperative

    image guidance [147]. This system generated and detected a dc

    pulsed magnetic field for computing the displacements and

orientation of a localizing sensor. The entire tracking system consisted of an electromagnetic transmitting unit, a sensor, and a

    digitizer that controlled the transmitter and received the data

    from the localizing sensor. Experiments revealed that the mean

    localisation errors are less than 2 mm. An image guided

    intervention system was proposed by Wood et al. [164]. A

    tetrahedral-shaped weak electromagnetic field generator was

    designed in combination with open-source software compo-

    nents. The minimal registration error and tracking error are less

    than 5 mm.

    3.3. Other sensors

Acoustic systems collect signals by transmitting and sensing sound waves, where the flight duration of a brief ultrasonic

    pulse is timed and calculated. These systems are used in

    medical applications [48,120,133], but have not been used in

    motion tracking. This is due to the following drawbacks: (1) the

    efficiency of an acoustic transducer is proportional to the active

    surface area, so large devices are desirable; (2) to improve the

    detected range, the frequency of ultrasonic waves must be low

    (e.g. 10 Hz), but this affects system latency in continuous

    measurement; (3) acoustic systems require a line of sight

    between emitters and receivers.
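The underlying time-of-flight computation is straightforward: the pulse's travel time, multiplied by the speed of sound, gives the path length. A minimal sketch assuming a round-trip (echo) measurement in room-temperature air:

```python
SPEED_OF_SOUND = 343.0  # m/s in air at about 20 °C (assumed)

def echo_distance(round_trip_s):
    """Distance to a reflector from the round-trip time of an
    ultrasonic pulse: the pulse travels out and back, so halve it."""
    return SPEED_OF_SOUND * round_trip_s / 2.0

# A round trip of about 5.83 ms corresponds to roughly 1 m of range.
print(f"{echo_distance(0.00583):.3f} m")
```

One-way systems with a synchronised emitter and receiver omit the halving; either way, a clear line of sight between emitter and receiver is still required.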

    Ultrasonic systems can be combined with other techniques

    so as to solve these existing problems. InterSense produced the

    IS-600 Motion Tracker [75] (Fig. 8), which actually eliminated

    jitter. It is a hybrid acousto-inertial system, where orientation

    and position are generated by integrating the outputs of its gyros

    and accelerometers, and drift can be corrected using an

    ultrasonic time-of-flight range system.

    3.4. Intersense

    Radio and microwaves are normally used in navigation

    systems and airport landing aids [162]. They have very low

    resolutions, therefore they cannot be applied in human motion

    tracking. Electromagnetic wave-based tracking approaches can

provide range information, by exploiting the fact that radiated energy dissipates with radius r as 1/r². For example, using a

delay-locked loop (DLL), a Global Positioning System (GPS)

    can achieve a resolution of 1 m. Obviously, this is not enough to

discriminate human movements of 0-50 cm displacements per

    trial. A radio frequency-based precision motion tracker can be

used to detect motion over a few millimeters. Unfortunately, it uses large racks of microwave equipment which are accommodated in a large room.
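Under the free-space inverse-square assumption above, a range estimate can in principle be recovered from the ratio of transmitted to received power. The sketch below is only an illustration; the calibration constant and power values are assumed, not taken from any cited system:

```python
import math

def range_from_power(p_rx, p_tx, k=1.0):
    """Invert the inverse-square law p_rx = k * p_tx / r**2 to
    recover the range r. k lumps antenna gains and geometry and is
    assumed known from calibration."""
    return math.sqrt(k * p_tx / p_rx)

# If the received power is 1/100 of the transmitted power (k = 1),
# the implied range is 10 distance units.
print(range_from_power(p_rx=0.01, p_tx=1.0))   # 10.0
```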

    The electromyogram (EMG) is an analysis of the electrical

    activity of the contracting muscles. It is often used to detect the

muscles that are working or not working, and in what sequence they are working to respond to the demands of the movements. EMG

can provide a measure of the intensity of muscle activity. This

    technique has commonly been used in rehabilitation exercises.
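The intensity measure mentioned above is commonly obtained by rectifying the raw EMG trace and smoothing it into an envelope. The moving-RMS sketch below is a generic illustration; the window length and signal values are assumed:

```python
import math

def emg_envelope(samples, window=100):
    """Moving-RMS envelope of a raw EMG trace: square each sample,
    average over a sliding window, then take the square root."""
    envelope = []
    for i in range(len(samples)):
        chunk = samples[max(0, i - window + 1): i + 1]
        envelope.append(math.sqrt(sum(x * x for x in chunk) / len(chunk)))
    return envelope

# A burst of muscle activity stands out clearly in the envelope.
quiet = [0.01, -0.02, 0.015] * 100       # baseline noise (assumed)
burst = [0.8, -0.9, 0.85, -0.75] * 50    # contraction (assumed)
env = emg_envelope(quiet + burst)
print(f"resting RMS: {env[250]:.3f}, active RMS: {env[-1]:.3f}")
```

A therapist-facing system would compare such envelope levels against a per-patient baseline to judge whether the targeted muscle is firing, and in what sequence.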

    Wang et al. designed a wearable training unit, which collected

    signals such as heart rate and EMG. By inspecting these

    biosignals, one can select optimal control signals correspond-

    ing to a proper workload for the device [160]. Mavroidis et al.

    [109] introduced several smart rehabilitation devices developedby Northeasten University Robotics and Mechatronics Labora-

    tory. Among these devices, Biofeedback is a device that uses

    EMG to monitor muscle activity after knee surgery, and

    provides quantitive information on how a patient responds to a

    delivered stimulation. Patten et al. applied EMG to explore the

    biomechanical variations of locomotor activities when patients

    received vesticular rehabilitation [119].

    3.5. Glove-based analysis

    Since the late 1970s, people have studied glove-based

    devices for the analysis of hand gestures. Glove-based devices

    Fig. 7. Illustration of LIBERTY by Polhemus [80].

    Fig. 8. Illustration of InterSense IS-300 Pro [75].

    H. Zhou, H. Hu / Biomedical Signal Processing and Control 3 (2008) 1–18


    adopt sensors attached to a glove (Fig. 9) that transduce finger
    flexion and abduction into electrical signals, in order to determine hand
    pose. These devices may be used to reconstruct motor function
    in the case of hand impairment. Glove-based devices are
    well suited to hand therapy due to their flexibility,
    easy donning and removal, light weight, and accuracy.

    The Dataglove (originally developed by VPL Research) was a neoprene fabric glove with two fiber optic loops on each

    finger. At one end of each loop is an LED, and at the other end a

    photosensor. The fiber optic cable has small cuts along its

    length. When the user bends a finger, light escapes from the

    fiber optic cable through these cuts. The amount of light

    reaching the photosensor is measured and converted into a

    measure of how much the finger is bent. The Dataglove requires

    recalibration for each user [184]. The CyberGlove system

    included one CyberGlove [84], an instrumentation unit, a serial

    cable to connect to a host computer, and an executable version

    of the VirtualHand graphic hand model display and calibration

    software. Based on the design of the DataGlove, the PowerGlove was developed by Abrams-Gentile Entertainment. The

    PowerGlove consists of a sturdy Lycra glove with flat plastic

    strain gauge fibers, coated with conductive ink running up each

    finger; the change in resistance during bending indicates the
    degree of flex of the finger as a whole. It employs

    an ultrasonic system to track the roll of the hand, where

    ultrasonic transmitters must be oriented toward the microphones
    in order to obtain an accurate reading. Drawbacks

    appear when a pitching or yawing hand changes the orientation

    of transmitters, and the signal is lost by the microphones.

    Simone and Kamper [141] reported a wearable monitor to
    measure finger posture, using a data glove made of a Lycra
    and Nylon blend that contains five bend sensors. The
    repeatability test showed an average variability of 2.96% in
    the gripped hand position. A force feedback glove called the
    Rutgers Master was integrated into an orthopedic telerehabilitation
    system by Burdea et al. [23].
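The per-user recalibration that gloves such as the Dataglove require typically amounts to mapping raw sensor readings onto joint angles. A minimal two-point linear calibration sketch (the ADC counts and the 0–90° range below are hypothetical):

```python
def make_bend_calibration(raw_flat, raw_bent, angle_bent=90.0):
    """Two-point per-user calibration: return a function that maps a raw
    bend-sensor reading to an angle, given readings taken with the finger
    flat (0 degrees) and fully bent (angle_bent degrees)."""
    scale = angle_bent / (raw_bent - raw_flat)
    return lambda raw: (raw - raw_flat) * scale

# Hypothetical ADC counts recorded during one user's calibration.
to_angle = make_bend_calibration(raw_flat=120, raw_bent=840)
print(to_angle(480))  # mid-range reading -> 45.0 degrees
```

Real gloves often need per-joint and sometimes nonlinear calibration, but the per-user principle is the same.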

    4. Visual marker based tracking systems

    In 1973, Johansson conducted his famous Moving Light
    Display (MLD) psychological experiment on the perception of
    biological motion [91]. He attached small reflective markers to the
    joints of human subjects, which allowed the marker trajectories
    to be monitored. This experiment became a
    milestone in human movement tracking. Marker based tracking
    systems are capable of minimising the uncertainty of a subject's
    movements, due to the unique appearance of markers. This
    basic theory is still embedded in current state-of-the-art motion
    trackers. These tracking systems can be passive, active or
    hybrid in style: a passive system uses a number of markers that
    do not generate any light, but only reflect incoming light. In
    contrast, markers in an active system can produce light, e.g.
    infrared, which is then collected by a camera system.

    4.1. Passive

    Qualisys is a motion capture system consisting of 1–16
    cameras, each emitting a beam of infrared light [82] (Fig. 10).
    Small reflective markers are placed on an object to be tracked.
    Infrared light is flashed from close to the cameras, reflected by
    the markers, and picked up by the cameras. The system then
    computes the 3-D position of each reflective target by combining
    2-D data from several cameras. A similar system, VICON, was
    specifically designed for use in virtual and immersive
    environments [83] (Fig. 11). Applications of these passive
    optical systems can often be found in medical science. For
    example, Davis et al. reported a study using a VICON system
    for gait analysis [42]. A VICON system was also used to
    calculate joint centers and segment orientations by optimizing
    skeletal parameters from the trials [31].
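The step of combining 2-D data from several cameras is, at its core, triangulation. The following is a generic linear (DLT-style) sketch with two toy cameras, not the proprietary algorithm of Qualisys or VICON; the camera geometry is an assumption chosen for illustration.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear triangulation of one 3-D point from two views.
    P1, P2: 3x4 projection matrices; x1, x2: 2-D image points."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)       # null vector of A solves A X = 0
    X = vt[-1]
    return X[:3] / X[3]               # de-homogenise

# Two toy cameras one unit apart along x, identity intrinsics (assumed setup).
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.5, 0.2, 4.0])
x1 = X_true[:2] / X_true[2]                   # marker seen by camera 1
X_cam2 = X_true + np.array([-1.0, 0.0, 0.0])
x2 = X_cam2[:2] / X_cam2[2]                   # marker seen by camera 2
X_est = triangulate(P1, P2, x1, x2)
print(X_est)  # -> approximately [0.5, 0.2, 4.0]
```

Commercial systems add careful calibration, marker identification and outlier rejection on top of this basic geometry.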

    Fig. 10. An operating Qualisys system [82].

    Fig. 11. Reflective markers used in a real-time VICON system [83].

    Fig. 9. Illustration of a glove-based prototype (image courtesy of KITTY

    TECH [76]).


    4.2. Active

    One of the active visual tracking systems is CODA (Fig. 12).
    CODA was pre-calibrated for 3-D measurement, without the
    need to recalibrate using a space-frame [74]. Up to six sensor
    units can be used together, which enables the system to track
    360° movement. Active markers can be identified by virtue of
    their positions during a time multiplexed sequence. At a 3 m
    distance, this system achieves peak-to-peak deviations from the
    actual position of ±1.5 mm in the X and Z axes and ±2.5 mm in
    the Y axis. CODA's measurements
    have been commonly used as ground truth to evaluate other
    motion measurements [177,182]. In addition, this system was
    employed in an instrumented assessment of muscle overactivity
    and spasticity, with dynamic polyelectromyography and
    motion analysis for treatment planning [49]. It was used to
    measure 3-D lower limb kinematics, kinetics and surface
    electromyography (EMG) of the rectus femoris, tibialis
    anterior, peroneus longus and soleus muscles in all subjects
    during a lateral hop task, for the period 200 ms pre- and post-
    initial contact (IC) [43].

    Another example is Polaris (Fig. 13). The Polaris system

    (Northern Digital Inc.) [79] optimally combines simultaneous

    tracking in both wired and wireless states. The whole system can
    be divided into two parts: position sensors, and passive or active
    markers. The former consist of a pair of cameras that are only
    sensitive to infrared light. This design is particularly useful when

    background lighting varies and is unpredictable. Passive markers

    are covered by reflective materials, which are triggered by arrays

    of infrared light-emitting diodes surrounding the position sensor

    lenses. With proper calibration, this system may achieve

    0.35 mm RMS accuracy in position measures.

    4.3. Non-commercialized systems

    Using established techniques, people have developed hybrid

    strategies to perform human motion tracking. Such systems,

    although still at an experimental stage, have already demonstrated promising performance.

    Lu and Ferrier [106] presented a digital-video based system

    for measuring the human motion of repetitive workplace tasks. A

    single camera was exploited to track colored markers placed on

    upper limbs. From the marker locations, one could recover a

    skeleton model of the investigated arm. However, this system

    was not able to separate lateral movements of the arm. Mihailidis

    et al. [110] designed a vision based agent for an intelligent

    environment that assists older adults with dementia during daily

    living activity. A color-based motion tracking strategy was used

    to estimate upper limb motion. The weakness of this agent was

    the lack of a three-dimensional representation of real movements.
    Tao et al. [152,151] proposed a visual tracking system, which
    exploited both marker-based and marker-free tracking
    methods (Fig. 14). Unfortunately, like other marker based motion

    Fig. 13. A Polaris system [79].

    Fig. 12. A CODA system [74].

    Fig. 14. Demonstration of Tao and Hu's approach: (a) markers attached to the joints; (b–d) marker positions captured by three cameras [152].


    trackers, this system required calibration and professional

    intervention.

    5. Marker-free visual tracking systems

    In the previous section, we described the features of marker-
    based tracking systems, which are restricted to limited degrees
    of freedom due to the mounted markers. As a less restrictive motion
    capture technique, markerless systems are capable of
    overcoming the mutual occlusion problem, as they are mainly
    concerned with the boundaries or features of human bodies.

    This has been an active and challenging research area for the

    past decade. The research addressed in this area is still ongoing,

    because of unsolved technical problems. Applications of these

    marker-free visual tracking systems have demonstrated

    promising performance. For example, a Camera Mouse

    system was developed to provide computer access for disabled

    people [10]. This system could track the user's movements with
    a video camera and translate them into movements of the
    mouse pointer on the screen. Twelve people with severe
    cerebral palsy or traumatic brain injury used this system,
    and nine of them achieved success.

    Human motion analysis can be divided into three groups [1]:

    body structure analysis (model and non-model based), camera

    configuration (single and multiple), and correlation platform

    (state-space and template matching). We provide a brief

    description as follows.

    5.1. 2-D approaches

    As a commonly used framework, 2-D motion tracking is

    only concerned with human movement in the image plane,
    where the tracking system may adapt flexibly and respond
    rapidly due to the reduced spatial dimensions. This approach can be

    employed with and without explicit shape models. Model-based tracking involves matching generated object models with

    acquired image data.

    5.1.1. 2-D approaches with explicit shape models

    In the presence of arbitrary human movements, self-

    occlusion commonly appears in rehabilitation environments.

    To solve this problem, one normally uses a priori knowledge of

    human movement in 2-D, by segmenting the human body. For

    example, Wren et al. [165] presented a region-based approach,

    where they regarded the human body as a set of blobs which

    could be described using a spatial and color Gaussian

    distribution (see Fig. 15).

    Ju et al. [93] proposed a cardboard human body model using a
    set of jointed planar ribbons. Niyogi and Adelson [116] examined
    the braided pattern yielded by the lower limbs of a pedestrian,
    whose head movements were projected in a spatio-temporal
    domain, followed by identification of the joint trajectories.

    5.1.2. 2-D approaches without explicit shape models

    Since human movements are non-rigid and arbitrary, the
    boundaries or silhouettes of a human body are variable and
    deformable, making them difficult to describe. Tracking parts of
    the human body, e.g. the hands, is normally achieved by means of
    background subtraction or color detection. Furthermore, since
    models are unavailable, one has to utilize low level image
    processing (such as feature extraction).
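A minimal sketch of the background subtraction step (the frame values and threshold below are illustrative, not from any cited system):

```python
import numpy as np

def foreground_mask(frame, background, threshold=25):
    """Mark pixels whose absolute difference from a reference background
    frame exceeds a threshold as foreground."""
    diff = np.abs(frame.astype(np.int32) - background.astype(np.int32))
    return diff > threshold

background = np.zeros((4, 4), dtype=np.uint8)   # toy static scene
frame = background.copy()
frame[1:3, 1:3] = 200                           # a bright moving "hand"
mask = foreground_mask(frame, background)
print(int(mask.sum()))  # -> 4 foreground pixels
```

Practical trackers replace the fixed background with an adaptive model to cope with lighting changes.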

    Baumberg and Hogg [7] considered using an Active Shape

    Model (ASM) to track pedestrians (Fig. 16). A Kalman filter

    was then applied to accomplish the spatio-temporal operation,

    which was similar to the work of Blake et al. [14]. Their work

    was then extended by generating a physical model, using a

    training set of examples for object deformation, and by tuning

    the elastic properties of the object to reflect how the object

    actually deformed [8]. Freeman et al. [56] developed a special

    detector for computer games on-chip, which was used to infer

    useful information about the position, size, orientation, and

    configuration of human body parts (Fig. 17). Cordea et al. [37]

    discussed a 2.5-dimensional tracking method, allowing the real-time recovery of the 3-D position and orientation of a head

    moving in its image plane. Fablet and Black [50] proposed a

    Fig. 15. Demonstration of Pfinder by Wren et al. [165].

    Fig. 16. Parts of human tracking results using Baumberg and Hogg's approach [7].


    solution for the automatic detection and tracking of human

    motion, using 2-D optical flow information. A particle filter was

    used to represent and predict non-Gaussian posterior
    distributions over time.
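The spatio-temporal filtering used by several of these trackers can be illustrated with a constant-velocity Kalman filter over noisy 1-D feature positions. This is a generic textbook sketch, not the exact filter of any cited system; the noise parameters are illustrative.

```python
import numpy as np

def kalman_track(measurements, dt=1.0, q=1e-3, r=0.25):
    """Constant-velocity Kalman filter over noisy 1-D positions.
    q: process noise scale, r: measurement noise variance."""
    F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition (pos, vel)
    H = np.array([[1.0, 0.0]])              # we observe position only
    Q = q * np.eye(2)
    x = np.array([[measurements[0]], [0.0]])
    P = np.eye(2)
    filtered = []
    for z in measurements:
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # update with the new measurement
        S = H @ P @ H.T + r
        K = P @ H.T / S
        x = x + K * (z - (H @ x)[0, 0])
        P = (np.eye(2) - K @ H) @ P
        filtered.append(x[0, 0])
    return filtered

track = kalman_track([0.0, 1.1, 1.9, 3.2, 4.0, 5.1])
```

The same predict/update structure underlies the particle filter mentioned above, with the Gaussian posterior replaced by a weighted sample set.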

    Chang et al. [30] considered tracking cyclic human motion by decomposing complex cyclic motions into components, and

    maintaining coupling between components. Wong and Wong

    [163] proposed a wavelet based tracking system, where the

    human body is located within a small search window, using

    color and motion as heuristics. The window's location and size

    were estimated using the proposed wavelet estimation.

    5.2. 3-D approaches

    2-D frameworks have natural restrictions, due to their viewing
    angle. To improve a tracker in an unpredictable environment, 3-D
    modelling techniques have been promoted as an alternative. In
    fact, these approaches attempt to recover 3-D articulated poses
    over time [60]. In some circumstances, people frequently project
    a 3-D model onto a 2-D image for later processing.

    5.2.1. Model-based tracking

    Modelling human movements allows the tracking problem

    to be minimised: the future movements of a human body can be

    predicted regardless of self-occlusion or self-collision. Model-
    based approaches employ stick figures, volumetric models, or a
    mixture of the two.

    5.2.1.1. Stick figure. A stick figure is a representation of a

    skeletal structure, which is normally regarded as a collection of segments and joint angles (refer to Fig. 18). Bharatkumar et al.

    [11] used stick figures to model lower limbs, e.g. hip, knees, and

    ankles. They applied a medial-axis transformation to extract
    2-D stick figures of the lower limbs.

    Huber's human model [86] was a refined version of the stick

    figure representation. Joints were connected by line segments,

    with a certain degree of constraint that could be relaxed using

    virtual springs. By modelling a human body with 14 joints

    and 15 body parts, Ronfard et al. [135] attempted to find people

    in static video frames, using learned models of both the

    appearance of body parts (head, limbs, hands), and of the

    geometry of their assemblies. They built on Forsyth and Fleck's
    general body plan methodology, and Felzenszwalb and
    Huttenlocher's dynamic programming approach, to efficiently assemble candidate parts into pictorial structures.

    Karaulova et al. [94] built a hierarchical model of human

    dynamics, encoded using hidden Markov models (HMMs).

    This approach allows view-independent tracking of a human

    body in monocular image clips. Sullivan et al. [149] combined

    automatic tracking of rotational body joints with well defined

    geometric constraints associated with a skeletal articulated

    structure. This work was based on heuristically tracked
    points [102], with the tracking corrected using the method in ref.
    [148]. Further similar work has been reported in refs.
    [116,58,70,88,124].
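The stick-figure representation can be made concrete with a small forward-kinematics sketch: joint positions of a planar chain (e.g. shoulder–elbow–wrist) follow from segment lengths and joint angles. The lengths and angles below are illustrative values, not taken from any cited model.

```python
import math

def limb_joint_positions(base, lengths, angles):
    """2-D forward kinematics for a jointed chain: each segment's absolute
    orientation is the running sum of the joint angles before it."""
    x, y = base
    theta = 0.0
    points = [(x, y)]
    for length, angle in zip(lengths, angles):
        theta += angle
        x += length * math.cos(theta)
        y += length * math.sin(theta)
        points.append((x, y))
    return points

# Upper arm 0.3 m raised to 90 deg at the shoulder, elbow bent back 90 deg.
shoulder, elbow, wrist = limb_joint_positions(
    (0.0, 0.0), [0.3, 0.25], [math.pi / 2, -math.pi / 2])
print(elbow, wrist)  # -> roughly (0.0, 0.3) and (0.25, 0.3)
```

Tracking with a stick figure amounts to estimating these joint angles over time so that the projected skeleton matches the image data.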

    5.2.1.2. Volumetric modelling. Elliptical cylinders are one kind of
    volumetric model used to represent the human body. Rohr [134]

    extended the work of Marr and Nishihara [108], which used

    elliptical cylinders to represent the human body. Rehg and

    Kanade [126] represented two occluded fingers using several

    cylinders; the center axes of cylinders were projected into the

    center line segments of 2-D finger images. Goncalves et al. [64]

    modelled both the upper and lower arm as truncated circular

    cones; shoulder and elbow joints were presumed as spherical

    joints. Chung and Ohnishi [34] proposed a 3-D model-based

    motion analysis, which used cue circles (CC) and cue sphere

    (CS). Theobalt et al. [154] suggested combining efficient real-

    time optical feature tracking with the reconstruction of the volume of a moving subject, in order to fit a sophisticated

    humanoid skeleton to video footage. A scene was observed with

    four video cameras, two of which were connected to a PC. In

    addition, a voxel-based approximation to the visual hull was

    computed for each time step. Fig. 19 illustrates the final

    outcome.

    Other research projects have been carried out using 3-D

    volumetric models, e.g. cones [45,44], super-quadrics [142],

    and cylinders, etc. Volumetric modelling requires more

    parameters to build up an entire model, resulting in intensive

    computation throughout registration. Similar results can be

    found in refs. [134,59,159,132].

    Fig. 17. Computer game on-chip by Freeman et al. [56].

    Fig. 18. Stick figure of human body [71].


    In contrast, hierarchical modelling techniques are believed
    to remedy the deficiencies highlighted in the systems described

    above. For example, Plankers et al. [121] revealed a

    hierarchical human model for achieving more accurate tracking

    results, where four layers were engaged: a skeleton, ellipsoidal
    metaballs for tissues and fat, a polygonal surface for the skin, and
    shaded rendering.

    5.2.2. Feature-based tracking

    This approach starts by extracting significant characteristics,

    and then matches them across images. In this context, 2-D and

    3-D features are adopted. Hu et al. [87] advocated that feature-

    based tracking algorithms consist of three groups, based on the

    nature of selected features: global feature-based

    [41,122,137,156], local feature-based [35,128], and
    dependence-graph-based algorithms [51,62,63,57].

    5.2.3. Camera configuration

    The line of sight problem can be partially tackled using a

    proper camera setup, including a single camera [6,18,46,123,

    142,13,139,140,168,169] (see Fig. 20) or a distributed-camera
    configuration [16,26,33,90,131]. Using multiple cameras does
    require a common spatial reference to be employed, whereas a single
    camera does not. However, a single camera readily suffers
    occlusion from the human body, due to its

    fixed viewing angle. Thus, a distributed-camera strategy is a

    better option for minimising such risk. One example of using two

    cameras is illustrated in Fig. 21.

    5.3. Animation of human motion

    Video capture virtual reality (VR) uses a video camera and

    software to track human movements, without the need to place

    markers at specific body locations. The user's image is

    generated within a simulated environment, such that it is

    possible to interact with animated graphics in a completely natural manner. This technology first became available 25 years

    ago, but it was not applied to rehabilitation practice until five

    years ago [161]. Recently, VR has been commonly used in

    stroke rehabilitation, e.g. refs. [89,174,171], etc.

    Holden and Dyar [69] pre-recorded the movements of a

    virtual teacher, and then asked patients to imitate movement

    templates in order to conduct upper limb repetitive training

    using a VR system. Evidence shows that the Vivid GX video

    capture technology employed can be used for improvements in

    upper extremity function [96]. Rand et al. [125] designed a

    Virtual Mall (VMall), using the available GX platform, where

    stroke patients could carry out daily activities such as shopping

    in a supermarket. A comprehensive survey on this topic is available in ref. [150].

    Fig. 19. Volumetric modelling by Theobalt [154].

    Fig. 20. Human motion tracking by Sidenbladh et al. [139].

    Fig. 21. Applications of multiple cameras in human motion tracking by Ringer

    and Lasenby [131].


    It is necessary to bear in mind that the marker-free visual
    tracking techniques described above have been only partially
    successful in real situations. The main problem is that the
    proposed algorithms/systems still need to be improved to
    balance robustness against efficiency. This bottleneck
    problem inevitably affects the further development of
    home-based motion detection systems.

    6. Robot-aided tracking systems

    Robot-aided tracking systems, a subset of therapeutic robots,

    are valuable platforms for delivering neuro-rehabilitation for

    human limbs following stroke [68,143]. Estimating the position/
    orientation of the limbs is necessarily required in
    order to guide limb motion. In this section, one can find a rich

    variety of rehabilitation systems that are driven by electro-

    mechanical or electromagnetic tracking strategies. These

    systems incorporate individual sensor technologies to conduct

    sense-measure-feedback strategies.

    6.1. Typical working systems

    6.1.1. Cozens

    To determine whether or not motion tracking techniques can
    assist simple active upper limb exercises for patients recovering
    from neurological diseases (e.g. stroke), Cozens [38] reported a
    pilot study, using torque applied to an individual joint,
    combined with EMG measurement that indicated the pattern of
    arm movements during exercise. Evidence highlighted that
    greater assistance was given to patients with more limited
    exercise capacity. This work was only able to demonstrate the
    principle of assisting single limb exercises using a 2-D based
    technique.

    6.1.2. MIT-MANUS

    To find out whether exercise therapy influences plasticity
    and recovery of the brain following a stroke, a tool is needed
    to control the amount of therapy delivered to a patient, while,
    where appropriate, objectively measuring the patient's performance.

    To address these problems, a novel automatic system named

    MIT-MANUS (Fig. 22) was designed to move, guide, or

    perturb the movement of a patient's upper limb, while recording

    motion-related quantities, e.g. position, velocity, or forces

    applied [99]. When comparing robotic assisted treatment with

    standard sensorimotor treatment, Fasoli et al. found a

    significant reduction in motor impairment in the robotic

    assisted group [52]. Ferraro et al. also reported similar

    improvements after a 3 month trial [53]. However, it was also

    stated that the biological basis of recovery, and individual

    patients' needs, should be further studied in order to improve

    the performance of the system under different circumstances.

    These findings were also supported in ref. [98].

    6.1.3. Taylor and improved systems

    Taylor [153] described an initial investigation, where a

    simple two DOF arm support was built to allow movements of a

    shoulder and elbow in a horizontal plane. Based on this simple

    device, he then suggested a five DOF exoskeletal system to allow the

    activities of daily living (ADL) to be performed in a natural

    way. The design was validated by tests which showed that the

    configuration interfaces properly with the human arm,

    resulting in a trivial addition of goniometric measurement sensors for the identification of arm position and pose.

    Another good example was provided in ref. [130], where a

    device was designed to assist elbow movement. This elbow

    exerciser was strapped to a lever, which rotated in a
    horizontal plane. A servomotor driven through a current

    amplifier was applied to drive the lever; a potentiometer

    indicated the position of the motor. Obtaining the position of

    the lever was achieved by using a semi-circular array of light

    emitting diodes (LEDs) around the lever. However, this system

    required a physiotherapist to activate the arm movement, and

    use a force handle to measure forces applied.

    To effectively deal with the problems faced by individuals with
    spinal cord injuries, Harwin and Rahman [65] explored the
    design of head controlled force-reflecting master–slave tele-
    manipulators for rehabilitation applications. A test-bed power
    assisted orthosis consisted of a six DOF master, with its end

    effector replaced by a six axis force/torque sensor. A splint

    assembly was mounted on the force torque sensor and

    supported a person's arm [145]. Similar to this technique,

    Chen et al. [32] provided a comprehensive justification for their

    proposal and testing protocols.

    6.1.4. MIME

    Burgar et al. [24,105] summarised systems for post-stroke

    therapy conducted at the Department of Veterans Affairs PaloAlto, in collaboration with Stanford University. The original

    principle had been established with two or three DOF elbow/

    forearm manipulators. Amongst these systems, MIME was

    more attractive, due to its ability to fully support a limb during

    3-D movement, and self-guided modes of therapy (see

    Fig. 23). Subjects were seated in a wheelchair close to an

    adjustable height table. A PUMA-560 robot was

    mounted beside the table, and was attached via a wrist-

    forearm orthosis (splint) and a six-axis force transducer. Also,

    Shor et al. [138] investigated the effects of MIME on pain and

    passive ranges of movement, finding no negative impact of

    MIME on a joint's passive range of movement, or pain in the
    Fig. 22. The MANUS system at MIT [99].


    paretic upper limb. The disadvantage of this system is that it
    does not allow a subject to move his/her body freely.

    6.1.5. ARM Guide

    A rehabilitator, namely the ARM Guide [127], was presented

    to diagnose and treat arm movement impairment following stroke and other brain injuries. Some vital motor impairments,

    such as abnormal tone, lack of coordination, and weakness,

    were evaluated. Pre-clinical results showed that this therapy

    produced quantifiable benefits in a chronic hemiparetic arm. In

    the design, the subject's forearm was strapped to a specially

    designed splint, which slides along the linear constraint. A

    motor drove a chain drive attached to the splint. An optical

    encoder mounted on the motor indicated the arm position. The

    forces produced by the arm were measured by a six-axis load

    cell located between the splint and the linear constraint. Although
    the system achieved great success, it requires further development
    for efficacy and practicality.

    6.1.6. Others

    Engelberger introduced rehabilitation applications for the

    HelpMate robot by Pyxis Co., San Diego, US [47]. The Handy 1

    robot was first invented in 1987 as a research project at Keele

    University, and is now able to animate make-up, shaving, and

    painting operations [155]. OxIM (Oxford, UK) developed the

    RT-series robots for rehabilitation applications [25].

    6.2. Haptic interface techniques

    Haptic interfaces are a type of robot designed to interact with

    a human being via touch. This interaction is normally undertaken via kinaesthetic and cutaneous channels. Haptic

    interface techniques are becoming an important area for

    assistive technologies; for example, they provide a natural
    interface for people with visual impairment, or a means to aid

    target reaching for post-stroke patients. This technique is

    potentially useful in home based environments, due to its

    reliable performance; see, e.g. refs. [20,150].

    Amirabdollahian et al. [3] proposed the use of a haptic

    device (Haptic Master by Fokker Control Systems), for

    errorless learning techniques and intensive rehabilitation

    treatment in post-stroke patients. This device can teach correct

    movement patterns, as well as correcting and aiding in

    achieving point-to-point measurements using virtual, augmen-

    ted, and real environments. The significant contribution of this

    proposal was the implementation of a model that minimised

    jerk parameters during movement.

    Allin et al. [2] described their preliminary work in the use of

    a virtual environment to derive just noticeable differences

    (JNDs) for force. A JND is a measure of the minimum

    difference between two stimuli that is necessary in order for the

    difference to be reliably perceived. Stroke patients normally
    exhibit significantly increased JNDs. Their experimental

    results indicated that visual feedback distortions in a virtual

    environment, can be created to encourage increased force

    productions by up to 10%. This threshold can help discriminate

    stroke patients from healthy groups, and predict the
    consequences of rehabilitation.
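As a toy illustration of the JND concept (all numbers hypothetical): a JND is often normalised by its reference stimulus to give a Weber fraction, which can then be compared between patient and control groups.

```python
def weber_fraction(reference, jnd):
    """Express a just noticeable difference relative to its reference stimulus."""
    return jnd / reference

# A 0.5 N difference just perceived against a 5 N reference force.
print(weber_fraction(5.0, 0.5))  # -> 0.1
```

A stroke patient with an elevated JND would show a correspondingly larger Weber fraction than a healthy control at the same reference force.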

    To improve the performance of haptic interfaces, e.g.

    stability and flexibility, researchers have developed successful

    prototype systems, e.g. refs. [3,66]. Hawkins et al. [66] set up

    experimental apparatus consisting of a frame with one chair, a

    wrist connection mechanism, two embedded computers, a large computer screen, an exercise table, a keypad, and a 3 DOF

    haptic interface arm. A user was seated on the chair, with their

    wrist connected to the haptic interface via the wrist connection

    mechanism. The device's end-effector consisted of a gimbal

    which provided an extra three DOF to facilitate wrist

    movement.

    6.3. Other techniques

    6.3.1. Gait rehabilitation

    Gait rehabilitation for post-stroke patients challenges
    researchers, because trunk balance and proper force distribution
    must be maintained. Training and the functional recovery of the
    lower limbs are attracting more and more interest.

    The Jet Propulsion Laboratory of NASA and UCLA have

    designed a robotic stepper that uses a pair of robotic arms

    resembling knee braces to guide a patient's legs. Attached
    sensors can measure a patient's force, speed, acceleration and

    resistance [166]. A virtual reality (VR) walking simulator was

    developed to allow post-stroke individuals to practise

    ambulation in a variety of virtual environments. This system,

    including Stewart-platforms, was based on the original design

    of the Rutgers Ankle 6-DOF pneumatic robot, where a user, strapped

    into a weightless frame, stood on two such devices placed side-

    by-side [129]. Colombo et al. [36] built a robotic orthosis to move the legs of spinal cord injury patients during rehabilita-

    tion training on a treadmill. Van der Loos et al. [158] used a

    servomotor-controlled bicycle to study lower limb biomecha-

    nics in terms of resistance.

Fig. 23. The MIME system [24].

H. Zhou, H. Hu / Biomedical Signal Processing and Control 3 (2008) 1–18

Hesse and Uhlenbrock [67] introduced a newly developed
gait trainer, allowing wheelchair-bound subjects to perform
repetitive practice of gait-like movements without overstressing
therapists. It consisted of two footplates positioned on two
bars, two rockers, and two cranks that provided propulsion.
The system generated a different movement at the tip and rear
of the footplate during swing, while the crank propulsion was
controlled via a planetary gear system that provided a ratio of
60/40% between the stance and swing phases. Two cases of
non-ambulatory patients who regained their walking ability
after 4 weeks of daily training on the gait trainer were
positively reported. Reviews of robot-guided rehabilitation
systems have been given in [39,40].
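The fixed stance/swing split enforced by such a gait trainer can be expressed as a small helper (a hypothetical sketch; the function name and the 1.2 s cycle time are illustrative, only the 60/40% ratio comes from the description above):

```python
def phase_durations(cycle_time_s, stance_fraction=0.6):
    """Split one gait cycle into stance and swing durations.

    A trainer enforcing a 60/40% stance/swing ratio, as described above,
    simply applies that fixed fraction to whatever cycle time the crank
    speed produces.
    """
    stance_s = stance_fraction * cycle_time_s
    swing_s = cycle_time_s - stance_s
    return stance_s, swing_s

stance_s, swing_s = phase_durations(1.2)  # 1.2 s cycle -> 0.72 s stance, 0.48 s swing
```

Because the ratio is mechanically fixed, slowing the crank lengthens both phases proportionally, which is what allows severely affected patients to train at very low cadence.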

    7. Discussion

Existing rehabilitation and motion tracking systems have
been comprehensively summarised in this paper, and the
advantages and weaknesses of these systems have also been
presented. All of these rehabilitation or tracking systems
require professionals to perform calibration and sampling;
without such help, none of them would work properly. These
systems do not provide patient-oriented therapy and hence
cannot yet be used directly in home-based environments.

The second challenge is cost. Designers have tended to build
complicated tracking systems in order to satisfy multiple
purposes, which imposes expensive components on the
resulting systems. Some of these systems also rely on
specifically designed sensors, which limits their further
development and broad application.

The application or use of a device is also very important.
Most people who have suffered a stroke have a significant loss
of function in the affected limbs, and therefore sensor systems
need careful consideration. It has been suggested that devices
should be as easy as possible to apply and handle.

Existing rehabilitation systems occupy large spaces, which
prevents people with limited accommodation space from using
them to regain their mobility. A telemetric and compact system
that overcomes this space problem should instead be proposed.

Poor performance in human–computer interface (HCI)
design in both rehabilitation and motion tracking systems has
been recognised, yet this issue is rarely discussed in the
literature. From a practical point of view, an attractive
interface may encourage participants to keep manipulating
the device.

Real-time feedback has not yet been achieved. For example,
some patients with a visual impairment may require an
auditory signal, while others with hearing problems would
need visual feedback. The underlying concept is that a simple
system is required to indicate correct or incorrect movements,
allowing a patient to adjust his/her movements immediately.

In summary, when one considers a recovery system, six
issues need to be taken into account: cost, size, weight,
function, operation, and automation.
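The simple correct/incorrect indicator discussed above, with feedback modality chosen to suit the patient, can be sketched as follows (a hypothetical illustration: the function name, the 10-degree tolerance, and the cue strings are all assumptions, not from any cited system):

```python
def movement_cue(error_deg, tolerance_deg=10.0, impairment="none"):
    """Return a correct/incorrect cue in a modality suited to the patient.

    Sketch of the idea above: auditory cues for visually impaired users,
    visual cues otherwise; the angular tolerance is illustrative.
    """
    correct = abs(error_deg) <= tolerance_deg
    if impairment == "visual":
        return "ok-tone" if correct else "error-tone"    # auditory channel
    return "green-light" if correct else "red-light"     # visual channel
```

For example, a 5-degree deviation from the target trajectory would yield a "correct" cue, while a 20-degree deviation would trigger the error cue in the appropriate modality, prompting immediate adjustment.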

    8. Conclusions

    This paper reviews the development of human motion

    tracking systems and their application in stroke rehabilitation.

State-of-the-art tracking techniques have been classified as
non-visual, visual marker-based, markerless visual, and
robot-aided systems, according to sensor location. In each
subgroup, we have described commercialised and non-
commercialised platforms by taking into account technical
feasibility, workload, size, and cost. In particular, we have focused on a

    description of markerless visual systems, as they offer positive

    features such as reduced restriction, robust performance, and

    low cost.

    Evidence shows that existing motion tracking systems, to

    some extent, are able to support various rehabilitation settings

    and training delivery. Therefore, these systems could possibly

be used to replace face-to-face therapy on-site. Unfortunately,
evidence also reveals that human motion is of a complicated
physiological nature, leading to unsolved problems beyond
previous tracking systems' functional capability, e.g. occlusion

    and drift. There is therefore a need to develop insight into the

    characteristics of human movement.

Finally, it was highlighted that a successful design has to
address all of the following factors: real-time operation,
wireless properties, easy manipulation, correctness of data, a
friendly graphical interface, and portability.

    Acknowledgments

This work was in part supported by the UK EPSRC under

    Grant GR/S29089/01. We are grateful for the provision of

    partial literature sources from Miss Nargis Islam at the

    University of Bath, and Dr. Huiru Zheng at the University of

    Ulster. The authors also acknowledge Dr. Liam Cragg and Ms.

    Sharon Cording for proofreading this manuscript.

    References

    [1] J. Aggarwal, Q. Cai, Human motion analysis: a review, Comput. Vis.

Image Understand.: CVIU 73 (3) (1999) 428–440.
[2] S. Allin, Y. Matsuoka, R. Klatzky, Measuring just noticeable difference

    for haptic force feedback: a tool for rehabilitation, in: Proceedings of

    IEEE Symposium on Haptic Interfaces for Virtual and Teleoperator

    Systems, March 2002.

[3] F. Amirabdollahian, R. Loureiro, W. Harwin, A case study on the

    effects of a haptic interface on human arm movements with implica-

    tions for rehabilitation robotics, in: Proceedings of the First Cam-

    bridge Workshop on Universal Access and Assistive Technology

    (CWUAAT), 2002.

    [4] C. Anderson, C. Mhurchu, S. Robenach, M. Clark, C. Spencer, A.

Winsor, Home or hospital for stroke rehabilitation? Results of a
randomized controlled trial. II: cost minimization analysis at 6 months,
Stroke 31 (2000) 1032–1037.

    [5] R. Bajcsy, J. Smith, Exploratory research in telerobotic control using

ATM networks, Tech. Rep., Computer and Information Science, University of Pennsylvania, Philadelphia, 2002.

    [6] C. Barron, I. Kakadiaris, A convex penalty method for optical human

    motion tracking, in: First ACM SIGMM International Workshop on

    Video Surveillance, 2003.

    [7] A. Baumberg, D. Hogg, An efficient method for contour tracking using

    active shape models, in: Proceedings of IEEE Workshop on Motion of

    Non-Rigid and Articulated Objects, 1994.

[8] A. Baumberg, D. Hogg, Generating spatiotemporal models from
examples, Image Vis. Comput. 14 (1996) 525–532.

    [9] T. Beth, I. Boesnach, M. Haimeri, J. Moldenhauer, K. Bos, V. Wank,

Characteristics in Human Motion – From Acquisition to Analysis,

    Humanoids, Germany, 2003.

[10] M. Betke, J. Gips, P. Fleming, The camera mouse: visual tracking of body

    features to provide computer access for people with severe disabilities,

IEEE Trans. Neural Syst. Rehab. Eng. 10 (2002) 1–10.


    [11] A. Bharatkumar, K. Daigle, M. Pandy, Q. Cai, J. Aggarwal, Lower limb

    kinematics of human walking with the medial axis transformation, in:

    Proceedings of IEEE Workshop on Non-Rigid Motion, 1994.

    [12] D. Bhatnagar, Position trackers for head mounted display systems, Tech.

    rep., TR93010, Department of Computer Sciences, University of North

    Carolina, 1993.

    [13] M. Black, Y. Yaccob, A. Jepson, D. Fleet, Learning parameterized

    models of image motion, in: Proceedings of IEEE Conference on

Computer Vision and Pattern Recognition, 1997.
[14] A. Blake, R. Curwen, A. Zisserman, A framework for spatio-temporal
control in the tracking of visual contour, Int. J. Comput. Vis. (1993) 127–145.

    [15] M. Boonstra, R. van der Slikke, N. Keijsers, R. van Lummel, M. de Waal

    Malefijt, N. Verdonschot, The accuracy of measuring the kinematics of

    rising from a chair with accelerometers and gyroscopes, J. Biomech. 39

(2006) 354–358.

    [16] E. Borovikov, L. Davis, A distributed system for real-time volume

    reconstruction, in: Proceedings of International Workshop on Computer

    Architecture for Machine Perception, September 2000.

    [17] C. Bouten, K. Koekkoek, M. Verduim, R. Kodde, J. Janssen, A triaxial

accelerometer and portable processing unit for the assessment of
daily physical activity, IEEE Trans. Biomed. Eng. 44 (3) (1997)
136–147.

[18] R. Bowden, T. Mitchell, M. Sarhadi, Reconstructing 3D pose and motion from a single camera view, in: British Machine Vision Conference, 1998.

    [19] R. Brady, M. Pavol, T. Owing, M. Grabiner, Foot displacement but not

    velocity predicts the outcome of a slip induced in young subjects while

walking, J. Biomech. 33 (2000) 803–808.

    [20] J. Broeren, K. Sunnerhagen, M. Rydmark, A kinematic analysis of haptic

handheld stylus in a virtual environment: a study in healthy subjects, J.

    NeuroEng. Rehab. 4 (2007) 13.

    [21] T. Brosnihan, A. Pisano, R. Howe, Surface micromachined angular

    accelerometer with force feedback, in: Digest ASME International

    Conference and Expo, 1995.

[22] S. Bryson, Virtual reality hardware, in: Implementing Virtual Reality,

    ACM SIGGRAPH 93, 1993.

    [23] G. Burdea, V. Popescu, V. Hentz, K. Colbert, Virtual reality-based

orthopedic telerehabilitation, IEEE Trans. Rehab. Eng. 8 (2000) 430–432.
[24] C. Burgar, P. Lum, P. Shor, H. Machiel Van der Loos, Development of
robots for rehabilitation therapy: the Palo Alto VA/Stanford experience,
J. Rehab. Res. Dev. 37 (6) (2000) 663–673.

    [25] M. Busnel, R. Cammoun, F. Coulon-Lauture, J.-M. Detriche, G. Claire,

B. Lesigne, The robotized workstation MASTER for users with
tetraplegia: description and evaluation, J. Rehab. Res. Dev. 36 (3) (1999)
217–230.

    [26] Q. Cai, J.K. Aggarwal, Tracking human motion using multiple cameras,

    in: International Conference on Pattern Recognition, 1996.

    [27] M. Caruso, L. Withanawasam, Vehicle detection and compass applica-

    tions using AMR magnetic sensors, in: Sensor Expo Proceedings, May

    1999.

    [28] M. Caruso, Applications of magnetic sensors for low cost compass

    systems, in: Proceedings of IEEE on Position Location and Navigation

Symposium, San Diego, 2000.
[29] J. Cauraugh, S. Kim, Two coupled motor recovery protocols are better
than one: electromyogram-triggered neuromuscular stimulation and
bilateral movements, Stroke 33 (2002) 1589–1594.

    [30] C. Chang, R. Ansari, A. Khokhar, Cyclic articulated human motion

    tracking by sequential ancestral simulation, in: Proceedings of IEEE

    Conference on Computer Vision and Pattern Recognition, 2003.

    [31] I. Charlton, P. Tate, P. Smyth, L. Roren, Repeatability of an optimised

lower body model, Gait Post. 20 (2004) 213–221.

    [32] S. Chen, T. Rahman, W. Harwin, Performance statistics of a head-

operated force-reflecting rehabilitation robot system, IEEE Trans. Rehab.
Eng. 6 (1998) 406–414.

    [33] K. Cheung, T. Kanade, J. Bouguet, M. Holler, A real time system for

    robust 3D voxel reconstruction of human motions, in: Proceedings of

    IEEE Conference on Computer Vision and Pattern Recognition, 2000.

    [34] J. Chung, N. Ohnishi, Cue circles: Image feature for measuring 3-D

motion of articulated objects using sequential image pair, in: Proceedings

    of the Third International Conference on Face & Gesture Recognition,

    1998.

    [35] B. Coifman, D. Beymer, P. McLauchlan, J. Malik, A real-time computer

vision system for vehicle tracking and traffic surveillance, Transport. Res.
C 6 (1998) 271–288.

    [36] G. Colombo, M. Joerg, R. Schreier, V. Dietz, Treadmill training of

paraplegic patients using a robotic orthosis, J. Rehab. Res. Dev. 37 (6) (2000) 693–700.

[37] M. Cordea, E. Petriu, N. Georganas, D. Petriu, T. Whalen, Real-time
2.5D head pose recovery for model-based video coding, in: IEEE
Instrumentation and Measurement Technology Conference, 2000.

    [38] J. Cozens, Robotic assistance of an active upper limb exercise in

    neurologically impaired patients, IEEE Trans. Rehab. Eng. 7 (2)

(1999) 254–256.

    [39] J. Dallaway, R. Jackson, P. Timmers, Rehabilitation robotics in Europe,

IEEE Trans. Rehab. Eng. 3 (1995) 35–45.

    [40] K. Dautenhahn, I. Werry, Issues of robot-human interaction dynamics in

    the rehabilitation of children with autism, in: Proceedings of FROM

    ANIMALS TO ANIMATS, The Sixth International Conference on the

    Simulation of Adaptive Behavior (SAB2000), 2000.

    [41] J. Davis, Hierarchical motion history images for recognizing human

motion, in: IEEE Workshop on Detection and Recognition of Events in Video, 2001.

    [42] R.I. Davis, S. Ounpuu, D. Tyburski, J. Gage, A gait data collection and

reduction technique, Hum. Mov. Sci. 10 (1991) 575–587.

    [43] E. Delahunt, K. Monaghan, B. Caulfield, Ankle function during hopping

    in subjects with functional instability of the ankle joint, Scand. J. Med.

    Sci. Sports (2007).

    [44] Q. Delamarre, O. Faugeras, 3d articulated models and multi-view

    tracking with physical forces, Comput. Vis. Image Understand. 81

(2001) 328–357.

[45] Q. Delamarre, O. Faugeras, 3d articulated models and multi-view tracking with

    silhouettes, in: Proceedings of International Conference on Computer

    Vision, 1999.

    [46] S. Dockstader, M. Berg, A. Tekalp, Stochastic kinematic modeling and

    feature extraction for gait analysis, IEEE Trans. Image Process. 12 (8)

(2003) 962–976.
[47] G. Engelberger, HelpMate, a service robot with experience, Ind. Robot.
Int. J. 25 (2) (1998) 101–104.

    [48] F. Escolano, M. Cazorla, D. Gallardo, R. Rizo, Deformable templates for

    tracking and analysis of intravascular ultrasound sequences, in: Proceed-

    ings of First International Workshop of Energy Minimization Methods in

IEEE Conference on Computer Vision and Pattern Recognition, Venice,
May 1997.

    [49] A. Esquenazi, N. Mayer, Instrumented assessment of muscle overactivity

    and spasticity with dynamic polyelectromyographic and motion analysis

for treatment planning, Am. J. Phys. Med. Rehab. 83 (Suppl.) (2004) S19–S29.

    [50] R. Fablet, M.J. Black, Automatic detection and tracking of human

    motion with a view-based representation, in: European Conference on

    Computer Vision, 2002.

[51] T. Fan, G. Medioni, R. Nevatia, Recognizing 3-d objects using surface
descriptions, IEEE Trans. Pattern Anal. Mach. Intell. 11 (1989) 1140–1157.

    [52] S. Fasoli, H. Krebs, J. Stein, W. Frontera, N. Hogan, Effects of robotic

    therapy on motor impairment and recovery in chronic stroke, Arch. Phys.

Med. Rehab. 84 (2003) 477–482.

    [53] M. Ferraro, J. Demaio, J. Krol, C. Trudell, L. Edelstein, P. Christos, J.

England, S. Fasoli, M. Aisen, H. Krebs, N. Hogan, B. Volpe, Assessing

    the motor status score: a scale for the evaluation of upper limb motor

    outcomes in patients after stroke, Neurorehab. Neural Rep. 16 (2002)

301–307.

    [54] H. Feys, W. De Weerdt, B. Selz, C. Steck, R. Spichiger, L. Vereeck, K.

    Putman, G. Van Hoydonck, Effect of a therapeutic intervention for the

    hemiplegic upper limb in the acute phase after stroke: a single-blind,

randomized, controlled multicenter trial, Stroke 29 (1998) 785–792.


    [55] E. Foxlin, Y. Altshuler, L. Naimark, M. Harrington, Flighttracker: a novel

    optical/inertial tracker for cockpit enhanced vision, in: Proceedings of

International Symposium on Augmented Reality, November 2–5, 2004.

    [56] W. Freeman, K. Tanaka, J. Ohta, K. Kyuma, Computer vision for

    computer games, in: Proceedings of IEEE International Conference

    on Automatic Face and Gesture Recognition, 1996.

    [57] R. Frezza, A. Chiuso, Learning and exploiting invariants for multi-target

    tracking and data association, in: Submission for 44th IEEE Conference

on Decision and Control and European Control Conference, 2005.
[58] H. Fujiyoshi, A. Lipton, Real-time human motion analysis by image

    skeletonisation, in: Proceedings of the Workshop on Application of

    Computer Vision, 1998.

    [59] A. Galata, N. Johnson, D. Hogg, Learning behaviour models of human

    activities, in: British Machine Vision Conference, 1999.

    [60] D. Gavrila, The visual analysis of human movement: a survey, Comput.

Vis. Image Understand.: CVIU 73 (1) (1999) 82–98.

    [61] J. Geddes, M. Chamberlain, Home-based rehabilitation for people with

stroke: a comparative study of six community services providing
co-ordinated, multidisciplinary treatment, Clin. Rehab. 15 (2001) 589–599.

    [62] G. Gennari, A. Chiuso, F. Cuzzolin, R. Frezza, Integrating shape and

    dynamic probabilistic models for data association and tracking, in: IEEE

    Conference on Decision and Control, 2002.

[63] G. Gennari, A. Chiuso, F. Cuzzolin, R. Frezza, Integrating shape
constraint in data association filter, in: IEEE Conference on Decision and
Control, 2004.

    [64] L. Goncalves, E. Bernardo, E. Ursella, P. Perona, Monocular tracking of

    the human arm in 3d, in: International Conference on Computer Vision,

    1995.

    [65] W. Harwin, T. Rahman, Analysis of force-reflecting telerobotics systems

for rehabilitation applications, in: Proceedings of the First European

    Conference on Disability, Virtual Reality and Associated Technologies,

    1996.

    [66] P. Hawkins, J. Smith, S. Alcock, M. Topping, W. Harwin, R. Loureiro, F.

    Amirabdollahian, J. Brooker, S. Coote, E. Stokes, G. Johnson, P. Mark,

    C. Collin, B. Driessen, Gentle/s project: design and ergonomics of a

    stroke rehabilitation system, in: Proceedings of the First Cambridge

    Workshop on Universal Access and Assistive Technology (CWUAAT),

    2002.

[67] S. Hesse, D. Uhlenbrock, A mechanized gait trainer for restoration of gait, J. Rehab. Res. Dev. 37 (6) (2000) 701–708.

[68] M. Hillman, Rehabilitation robotics from past to present – a historical

    perspective, in: Proceedings of the International Conference on Reha-

    bilitation Robotics, April 2003.

    [69] M. Holden, T. Dyar, Virtual environment training: a new tool for

rehabilitation, Neurol. Rep. 26 (2002) 62–71.

[70] T. Horprasert, I. Haritaoglu, D. Harwood, L. Davis, C. Wren, A.

    Pentland, Real-time 3d motion capture, in: PUI Workshop, 1998.

    [71] N. Howe, M. Leventon, W. Freeman, Bayesian reconstruction of 3d

    human motion from single-camera video, in: NIPS, 1999.

    [72] http://bmj.bmjjournals.com/cgi/reprint/325/.

    [73] http://www.ascensiontech.com/products/motionstar.pdf

    [74] http://www.charndyn.com/

    [75] http://www.isense.com/products/prec/is600/

[76] http://www.kittytech.com/about/kitty.html
[77] http://www.korins.com/m/ent/atoc.htm

    [78] http://www.microstrain.com/

    [79] http://www.ndigital.com/polaris.php

    [80] http://www.polhemus.com/

    [81] http://www.polyu.edu.hk/cga/faq/

    [82] http://www.qualisys.se/

    [83] http://www.vicon.com/

    [84] http://www.vrealities.com/cyber.html

    [85] http://www.xsens.com/

    [86] E. Huber, 3d real-time gesture recognition using proximity space, in:

Proceedings of International Conference on Pattern Recognition, 1996.

[87] W. Hu, T. Tan, L. Wang, S. Maybank, A survey on visual surveillance of
object motion and behaviors, IEEE Trans. Syst. Man and Cyber. C: Appl.
Rev. 34 (2004) 334–352.

[88] I. Mikic, M. Trivedi, E. Hunter, P. Cosman, Human body model
acquisition and tracking using voxel data, Int. J. Comp. Vis. 53 (3)
(2003) 199–223.

    [89] D. Jack, R. Boian, A. Merians, G. Tremaine, M. Burdea, S. Adamovich,

    M. Recce, H. Poizner, Virtual reality-enhanced stroke rehabilitation,

IEEE Trans. Neural Syst. Rehab. Eng. 9 (2001) 308–318.

    [90] O. Javed, S. Khan, Z. Rasheed, M. Shah, Camera handoff: tracking in

    multiple uncalibrated stationary cameras, in: IEEE Workshop on Human

Motion, HUMO-2000, 2000.
[91] G. Johansson, Visual motion perception, Sci. Am. 232 (1975) 76–88.

    [92] E. Jovanov, A. Milenkovic, C. Otto, P. de Groen, B. Johnson, S. Warren,

G. Taibi, A WBAN system for ambulatory monitoring of physical activity
and health status: applications and challenges, in: Proceedings of the 27th

    Annual International Conference of the IEEE Engineering in Medicine

    and Biology Society, 2005.

    [93] S. Ju, M. Black, Y. Yaccob, Cardboard people: a parameterised model of

articulated image motion, in: Proceedings of IEEE International
Conference on Automatic Face and Gesture Recognition, 1996.

    [94] I. Karaulova, P. Hall, A. Marshall, A hierarchical model of dynamics for

    tracking people with a single video camera, in: British Machine Vision

    Conference, 2000.

    [95] P. Kejonen, K. Kauanen, H. Vanharanta, The relationship between

    anthropometric factors and body-balancing movements in postural bal-

ance, Arch. Phys. Med. Rehab. 84 (2003) 17–22.
[96] R. Kizony, N. Katz, P. Weiss, Adapting an immersive virtual reality for

rehabilitation, J. Vis. Comput. Anim. 14 (2003) 261–268.

[97] A. Kourepenis, A. Petrovich, M. Meinberg, Development of a monolithic
quartz resonator accelerometer, in: Proceedings of 14th Biennial
Guidance Test Symposium, Holloman AFB, NM, 1989.

[98] H. Krebs, N. Hogan, M. Aisen, B. Volpe, Robot-aided
neurorehabilitation, IEEE Trans. Rehab. Eng. 6 (1) (1998) 75–87.

    [99] H. Krebs, B. Volpe, M. Aisen, N. Hogan, Increasing productivity and

quality of care: robot-aided neuro-rehabilitation, J. Rehab. Res. Dev. 37
(6) (2000) 639–652.

[100] J. Lenz, A review of magnetic sensors, Proc. IEEE 78 (1990) 973–989.

[101] F. Lewis, Wireless sensor networks, in: D.J. Cook, S.K. Das (Eds.), Smart

    Environments: Technologies, Protocols, and Applications, John Wiley,

    New York, 2004.

[102] D. Liebowitz, S. Carlsson, Uncalibrated motion capture exploiting
articulated structure constraints, in: International Conference on
Computer Vision, 2001.

    [103] J. Lobo, J. Dias, Vision and inertial sensor cooperation using gravity as a

    vertical reference, IEEE Trans. Pattern Anal. Mach. Intell. 25 (12) (2003)

1597–1608.

    [104] H. Luinge, Inertial sensing of human movement, Ph.D. Thesis, Twente

    University Press, Netherlands, 2002.

    [105] P. Lum, D. Reinkensmeyer, R. Mahoney, W. Rym