
IEEE JOURNAL OF BIOMEDICAL AND HEALTH INFORMATICS, VOL. XX, NO. XX, MONTH 201X

Detecting Elementary Arm Movements by Tracking Upper Limb Joint Angles with MARG Sensors

Evangelos B. Mazomenos, Member, IEEE, Dwaipayan Biswas, Andy Cranny, Amal Rajan, Koushik Maharatna, Member, IEEE, Josy Achner, Jasmin Klemke, Michael Jöbges, Steffen Ortmann and Peter Langendörfer

Abstract—This paper reports an algorithm for the detection of three elementary upper limb movements, i.e. reach and retrieve, bend the arm at the elbow and rotate the arm about the long axis. We employ two MARG sensors, attached at the elbow and wrist, from which the kinematic properties (joint angles, position) of the upper arm and forearm are calculated through data fusion using a quaternion-based gradient-descent method and a 2-link model of the upper limb. By studying the kinematic patterns of the three movements on a small dataset, we derive discriminative features that are indicative of each movement; these are then used to formulate the proposed detection algorithm. Our novel approach of employing the joint angles and position to discriminate the three fundamental movements was evaluated in a series of experiments with 22 volunteers who participated in the study: 18 healthy subjects and 4 stroke survivors. In a controlled experiment, each volunteer was instructed to perform each movement a number of times. This was complemented by a semi-naturalistic experiment where the volunteers performed the same movements as subtasks of an activity that emulated the preparation of a cup of tea. In the stroke survivors group, the overall detection accuracy for all three movements was 93.75% and 83.00% for the controlled and semi-naturalistic experiment respectively. The performance was higher in the healthy group, where 96.85% of the tasks in the controlled experiment and 89.69% in the semi-naturalistic one were detected correctly. Finally, the detection ratio remains within ±6% of the average value for different task durations, further attesting to the algorithm's robustness.

Index Terms—MARG sensors, orientation estimation, upper limb movement, body-area networks, gradient-descent, quaternion

I. INTRODUCTION

The prevalence of cerebrovascular diseases (e.g. stroke) as a leading cause of death in recent years is discussed in a number of published reports [1], [2]. For those individuals who survive a stroke episode, a long period of neurorehabilitation is required in order to regain or recover some of the lost motor functions that typically accompany a stroke.

This work was supported by the European Union under the Seventh Framework Programme (EU-FP7), grant agreement #288692, under the project name “StrokeBack: Telemedicine System Empowering Stroke Patients to Fight Back”.

E.B. Mazomenos, D. Biswas, A. Cranny, A. Rajan and K. Maharatna are with the School of Electronics and Computer Science, University of Southampton, Southampton, SO17 1BJ, U.K. (e-mail: {ebm, db9g10, awc, ar9g12, km3}@ecs.soton.ac.uk). E.B. Mazomenos is also with the Centre for Medical Image Computing, UCL, London, WC1E 6BT, U.K. (email: [email protected])

J. Achner, J. Klemke and M. Jöbges are with Brandenburg Klinik, Berlin-Brandenburg, 16321 Bernau-Waldsiedlung, Germany (e-mail: {josy.achner, ergo n1, joebges}@brandenburgklinik.de)

S. Ortmann and P. Langendörfer are with IHP, Leibniz-Institute for Innovative Microelectronics, Im Technologiepark 25, 15236 Frankfurt (Oder), Germany (e-mail: {ortmann, langendoerfer}@ihp-microelectronics.com)

In general, the first stage of rehabilitation takes place under expert supervision at a dedicated rehabilitation center, which provides everyday care, and the patient's condition is constantly evaluated by medical professionals. After their discharge, patients are prescribed a customized rehabilitation program and are advised to continue their rehabilitation at home. However, home rehabilitation suffers from the fact that it is difficult for therapists to monitor the progress of the patient remotely [3], [4]. This is a key point, since a major requirement for medical experts is the ability to know if the patients are capable of performing specific movements as part of their rehabilitation exercises during their everyday natural activities.

The EU-funded StrokeBack project, of which this work is a part, proposes to develop a body-worn sensor system that can precisely detect and recognize specific movements of the stroke-impaired arm that are of interest to the therapists in a home-rehabilitation environment, and inform therapists of the number of occurrences and the associated quality of these movements [5]. This information allows therapists to remotely evaluate the patient's rehabilitation progress in their natural environment. In this context, this work presents the design and evaluation of a detection algorithm for three fundamental movements of the upper limb, a part of the body that most often has its motor function impaired after a stroke episode. The three movements we target to detect and recognize are: reach and retrieve an object, lift an object to mouth (e.g. drink or eat) and rotation of the forearm (e.g. pouring action or turning a key). These three types of movements are present in unison or in combination in the majority of everyday tasks and were chosen under guidance from physiotherapists participating in the StrokeBack project. Their suitability as indicators of rehabilitation progress is further demonstrated by the fact that these activities are performed in the Wolf Motor Function test, an established procedure used by therapists to evaluate the level of motor function impairment in stroke survivors [6], [7].

From a kinematic perspective, the shoulder and elbow joints of the upper limb cooperate in order for the three movements to be executed. Hence, the approach we took in the formulation of our detection algorithm was to initially calculate the kinematics of the upper limb in terms of the joint angles and position of the upper arm and forearm. Accordingly, we studied the kinematic patterns of the three movements in order to derive discriminative features that allow us to effectively detect them during typical everyday activities. We chose to employ Magnetic, Angular Rate (gyroscope) and Gravity (accelerometer) (MARG) sensor modules, positioned proximal to the elbow and the wrist, to derive and track the kinematic properties (angles, position) of the upper limb. Due to their small size, MARG sensors can be attached to the upper limb without hindering its movement and, after appropriate data fusion and processing, can provide a good estimation of the upper limb kinematics without the need for a clear line of sight; this is not the case with the more accurate, though considerably more expensive, marker-based optical or camera sensor systems. Our analysis revealed that discrimination among the three movements is possible by investigating the pattern of the shoulder and elbow flexion/extension angles and the value of the vertical coordinate (z-coordinate) of the wrist position.

The use of MARG or inertial sensors for calculating the orientation and position of the upper limb is well researched, with a plethora of techniques reported in the literature. However, the work presented here extends previous work, since it is among the first to further utilize the kinematic information for a specific purpose; in our case, to detect three particular, though fundamental, upper limb movements, known to be used as a measure for evaluating upper extremity motor ability. Specifically, we employ MARG sensors to track the orientation and position of the arm segments and use this information to correctly detect the three arm movements. In addition, our work differs from traditional human activity recognition approaches in that we focus on detecting the occurrence of specific tasks during natural everyday activities, instead of attempting to classify gross human activities (e.g. walking, sitting, standing) [8]–[11]. Finally, the proposed detection method, based on identifying characteristic kinematic properties of the three movements, distinguishes itself from the vast majority of activity recognition works that employ complex machine learning techniques to achieve classification. Since this work is intended to be integrated in a home-based monitoring system, where body-worn battery-powered sensors are used, conventional approaches for detection and recognition based on machine learning and pattern recognition methods may not be suitable, due to their significant computational complexity, which renders them inappropriate for resource-constrained body-worn devices. By comparison, this work employs a computationally efficient orientation algorithm, which requires 248 scalar arithmetic operations per update for the gradient-descent optimization [12], and a basic 2-link model to obtain the upper limb kinematics, complemented with a simple set of rules derived from the kinematic analysis, to detect and recognize the three movements of interest.

Although computationally inexpensive pattern recognition methods are available (e.g. Linear Discriminant Analysis (LDA)), our preliminary investigation demonstrated that a high volume of training data is required to capture the variability in the movement of different individuals in order to effectively train such classifiers. Particularly for LDA, our investigation with various features from accelerometer and gyroscope sensor data showed that consistent results for all three movements and various individuals, with impaired and unimpaired motor abilities, are difficult to achieve [13]. Contrary to classical machine learning approaches, in this work a rule-based detection algorithm was designed to discriminate the three movements, by analyzing the kinematic patterns of the three tasks on a dataset (analysis dataset) collected from a small number of participants with unimpaired motor function. Considering that the derived set of rules is based on functional kinematic characteristics (patterns of angles, position of joints) of the upper limb during the execution of these particular tasks, we hypothesized that similar discriminating patterns, albeit with some variation, will appear when these tasks are executed by any individual, even if their motor function is impaired by a medical condition. Therefore, the kinematic analysis and the formulation of the discriminating rules were performed on data obtained exclusively from unimpaired participants. The key novelty of the work presented here is the use of characteristic kinematic patterns, which are consistent and can be effectively applied for the detection of the three movements.

To validate our hypothesis we applied the detection algorithm on a different dataset (evaluation dataset) comprising both healthy volunteers and stroke survivors performing the three tasks in two distinct types of experiments (controlled and semi-naturalistic). Our experimental investigation aims at establishing the robustness of the derived set of rules against variations due to inter-person variability and motor function impairment (from the controlled experiment), and at revealing the extent to which the proposed algorithm can be applied for the detection of the three movements when performed as subtasks of a typical activity (from the semi-naturalistic experiment).

The remainder of this paper is structured as follows. In Section II we briefly summarize the relevant literature, while Section III details the derivation of the upper limb kinematics. The formulation of the proposed recognition algorithm is discussed in Section IV, with the experimental evaluation, performance assessment and discussion following in Section V. Conclusions are drawn in Section VI.

II. BACKGROUND

Estimation of the upper limb orientation and position is achieved through fusion and processing of heterogeneous sensor data obtained from MARG or inertial sensor modules properly attached to the upper limb. With the aid of a kinematic model, the position of the individual body segments (upper arm, forearm) can be determined in 3D space. A theoretical study on the required number of modules that need to be attached for a full orientation analysis of the upper limb is provided in [14]. The majority of the proposed solutions in the literature are based on the established Kalman Filter and its derivatives as the sensor fusion algorithm for estimating orientation [15]–[19], though more computationally efficient alternatives based on complementary filters and gradient-descent methods have also been reported [12], [20].

In recent years, the recognition and classification of basic body postures and daily activities using data from wearable MARG or inertial sensors has been an active research topic. Some of the application scenarios considered are those of rehabilitation, chronic care management and elderly population monitoring [8]–[10], [21]–[24]. Typically, the data obtained are used to derive a set of features which are then used as inputs to various machine learning and/or pattern recognition systems for classification. For example, decision trees and neural networks were employed in [25], [26], support vector machines (SVM) in [27], [28], hidden Markov models (HMM) in [29] and template matching in [11]. Although these approaches demonstrated very good performance, their objective was to classify gross human activities (e.g. lying, sitting, walking, running, climbing stairs etc.). By comparison, the work reported here has a significantly different objective: to detect and classify specific arm movements, of which the longitudinal variation in their number of occurrences could potentially serve as an indicator of rehabilitation progress.

III. DERIVING THE UPPER LIMB KINEMATICS

Movements of the human upper limb occur in one of the three cardinal planes of the body (sagittal, frontal and transverse) and around three corresponding axes (mediolateral, anteroposterior and longitudinal). The three planes are mutually orthogonal and the three axes are orthogonal both to each other and to their corresponding planes. To represent the upper limb we employ a 2-link limb model, depicted in Fig. 1. The upper arm and forearm are modeled by links 1-2 and 2-3 respectively. The shoulder joint is represented by joint 1, which is fixed at the origin of the global coordinate frame, while the elbow is modeled by joint 2, which connects the two links. To track the upper limb during movement, two MARG sensors are placed proximal to the wrist and elbow, near points 2 and 3, as shown in Fig. 2. By continuously calculating the orientation of the MARG sensors, we can determine the orientation and position of the upper limb joints in space during dynamic movements, with respect to the 2-link model. This allows us to estimate the 5 degrees of freedom (DoF) of the upper limb, 3 DoF at the shoulder and 2 DoF at the elbow, which correspond to the angles of shoulder flexion/extension, shoulder adduction/abduction, upper arm medial/lateral rotation, elbow flexion/extension and forearm pronation/supination. This information is then fed into a detection algorithm to discriminate between the three arm movements. Based on this model, the orientation and position of the two limb segments can be defined against a static global coordinate frame, with its origin placed at the shoulder joint. Additionally, a local coordinate frame (body coordinate frame), which rotates dynamically following the rotations of the link, is applied to each link of our model, with its origin at the shoulder and the elbow joints for the upper arm and forearm respectively. These body coordinate frames are used to obtain the initial orientation of each limb segment, using the MARG sensor data, which are provided in terms of their respective coordinate frames, and then mapped to the global coordinate frame to enable the calculation of the joint angles and the links' position. For simplicity we chose to consider the local coordinate frame (body frame) of the upper arm and forearm to be aligned with the coordinate frame of the sensor. Therefore, the orientation output represents not only the orientation of the MARG sensor but also the orientation of the body segment upon which the sensor platform is attached.

Fig. 1. The 2-link limb model we utilize to represent the upper limb and the definition of the global coordinate system.

Among the different mathematical representations of the 3-D orientation of a rigid body, we elect to use quaternions in our work to express the orientations of the upper limb body segments. Quaternions demonstrate significant advantages over both Euler angles and rotation matrices in representing the orientation of a rigid body. The singularities issue (which affects Euler angles and rotation matrices) is not present in the quaternion representation, which is also known to provide more robust results during orientation calculations.

The initial orientation (expressed in the body coordinate frame) of the upper arm and forearm is obtained by estimating the 3-D orientation of the MARG sensor attached to it. This is achieved by fusing the data from the accelerometer, gyroscope and magnetic field sensors and then employing a quaternion gradient-descent orientation algorithm, originally reported in [12], to calculate the MARG sensor 3-D orientation during the movements of the arm. In this algorithm, the gyroscope output ($\omega$) can be used to derive the rate of change ($\dot{q}_1$) of the orientation of the static reference frame (global) against a dynamic one (sensor). This is expressed in quaternion representation as:

$$\omega_q = [\,0 \;\; \omega_x \;\; \omega_y \;\; \omega_z\,] \tag{1}$$

$$\dot{q}_1 = \tfrac{1}{2}\, q_1 \otimes \omega_q \tag{2}$$

Here, the operator $\otimes$ denotes quaternion multiplication. An estimate of the orientation ($q_1$) of the global frame relative to the sensor frame can then be calculated by integrating the quaternion derivative $\dot{q}_1$ over time, given an initial condition and a known sampling frequency.
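As an illustration of Eqs. (1)-(2), the following minimal Python/NumPy sketch (our own transcription, not the authors' code) performs one gyroscope integration step; quat_mult implements the quaternion product $\otimes$ and the 50 Hz rate matches the sampling set-up described in Section IV.

```python
import numpy as np

def quat_mult(a, b):
    """Hamilton product a ⊗ b of two quaternions in [w, x, y, z] order."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def integrate_gyro(q1, omega, dt=1.0/50):
    """One integration step of Eqs. (1)-(2) for gyroscope reading omega (rad/s)."""
    omega_q = np.array([0.0, *omega])        # Eq. (1)
    q1_dot = 0.5 * quat_mult(q1, omega_q)    # Eq. (2)
    q1 = q1 + q1_dot * dt                    # numerical integration over one sample
    return q1 / np.linalg.norm(q1)           # re-normalize to a unit quaternion
```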

In addition, by assuming that the other two types of sensor (the accelerometer and magnetometer) are continuously subjected to the constant fields of gravity ($g$) and magnetic north ($m_n$) with respect to the sensor coordinate frame, and given that the orientation of these two fields in the global coordinate frame is known and constant, the measurement of these fields in the sensor frame enables the orientation of that frame against the global frame to be estimated. This is formulated as an optimization problem that attempts to find a quaternion solution ($q_2$) corresponding to an orientation that aligns the constant field of the global frame ($f_g$) with the measured one ($f_s$) [12].

$$\min_{q_2} \; f(q_2, f_g, f_s) \tag{3}$$

$$f(q_2, f_g, f_s) = q_2^{*} \otimes f_g \otimes q_2 - f_s \tag{4}$$

Here, $q_2^{*}$ denotes the conjugate quaternion. The gradient-descent optimization algorithm is employed to produce a quaternion solution, again based on an initial condition and, of course, a step size. Each field individually cannot provide a unique quaternion solution, only a range of orientation solutions. The two fields are thus combined in order to produce a single quaternion solution $q_2$ that describes the sensor orientation against the global reference frame.

The two independent approximations of the orientation, $q_1$ and $q_2$, suffer from intrinsic limitations related to the sensor systems. For $q_1$, the accumulation of gyroscope errors will result in a distorted estimation, while $q_2$ will suffer from the addition of linear accelerations and magnetic interference. This necessitates the fusion of the two estimates in a weighted manner, so that each one mitigates the limitations of the other. Ultimately, following a number of simplifications related to the convergence step of the gradient-descent method, the final orientation estimate is obtained by integrating the rate of change of orientation (measured by the gyroscopes) after the magnitude of the gyroscope error, denoted as $\beta$, is subtracted along a direction specified by the accelerometer and magnetometer measurements (see Eq. 6). Furthermore, a mechanism for compensating magnetic distortions (soft iron errors) is in place, which limits the errors they cause to only affect the angle of rotation around the global z-axis (yaw).

$$q_t = q_{t-1} + \dot{q} \cdot \Delta t \tag{5}$$

$$\dot{q} = \dot{q}_1 - \beta \, \frac{\nabla f}{\|\nabla f\|} \tag{6}$$

The derived quaternion orientation for each MARG sensor is expressed with respect to the sensor's coordinate frame. The orientation obtained can, of course, also be expressed with Euler angles or rotation matrices using the appropriate transformations.
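A minimal sketch of the fused update of Eqs. (5)-(6) is given below, reusing quat_mult from the previous sketch. It assumes the gradient $\nabla f$ of the objective function in Eq. (4) has already been computed from the accelerometer and magnetometer readings (the details are in [12]); the value of $\beta$ here is purely illustrative and would be tuned per sensor.

```python
import numpy as np

BETA = 0.1  # illustrative gyroscope error magnitude (see Eq. (6))

def fused_update(q_prev, omega, grad_f, dt=1.0/50):
    """One step of Eqs. (5)-(6): gyroscope integration corrected along the
    normalized gradient of the accelerometer/magnetometer objective f."""
    omega_q = np.array([0.0, *omega])
    q1_dot = 0.5 * quat_mult(q_prev, omega_q)                 # Eq. (2)
    q_dot = q1_dot - BETA * grad_f / np.linalg.norm(grad_f)   # Eq. (6)
    q_t = q_prev + q_dot * dt                                 # Eq. (5)
    return q_t / np.linalg.norm(q_t)
```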

To locate the position of the upper arm and forearm during movements, we define two position vectors ($v_u$, $v_f$) in our model structure, as illustrated in Fig. 2, which also depicts the placement of the sensors. The two position vectors are defined with respect to the body frames of the upper arm and forearm respectively. The x-axis of these frames is aligned with the direction of the upper arm and forearm when the arm is lying prone against the side of the body. Thus, the position vector of the forearm is ${}^{b}v_f = [-l_f \;\; 0 \;\; 0]$ while that of the upper arm is ${}^{b}v_u = [-l_u \;\; 0 \;\; 0]$, where $l_f$ and $l_u$ are the lengths of the forearm and upper arm respectively. The location of the upper arm and forearm in the global coordinate frame is determined by transforming the position vectors from the body coordinate frame to the global one, using the orientation quaternions obtained from each sensor ($q_w$, $q_e$). This is achieved with the following set of equations, where the superscripts $g$ and $b$ denote the global and body reference frames respectively:

$${}^{g}v_u = q_e \otimes {}^{b}v_u \otimes q_e^{*} \tag{7}$$

$${}^{g}v_f = q_w \otimes {}^{b}v_f \otimes q_w^{*} + {}^{g}v_u \tag{8}$$

Fig. 2. The employed set-up for the kinematic analysis adapted to the two-link upper limb model of Fig. 1. The two MARG sensors (Shimmer 2r) are placed near the elbow (point 2) and wrist (point 3), with their corresponding sensor frame (upper right corner) and the global coordinate system (lower left corner) shown. The two position vectors $v_u$, $v_f$ are also indicated.

The 5 joint angles are calculated using the two position vectors (${}^{g}v_u$, ${}^{g}v_f$) as described in the following sections.
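The transformations of Eqs. (7)-(8) reduce to two quaternion rotations, as in the sketch below (again reusing quat_mult from the earlier sketch; the segment lengths lu and lf are illustrative placeholders, not values from the paper):

```python
import numpy as np

def rotate(q, v):
    """Rotate 3-vector v by unit quaternion q: q ⊗ [0, v] ⊗ q*."""
    qv = np.array([0.0, *v])
    q_conj = q * np.array([1.0, -1.0, -1.0, -1.0])
    return quat_mult(quat_mult(q, qv), q_conj)[1:]

def limb_positions(q_e, q_w, lu=0.30, lf=0.25):
    """Eqs. (7)-(8): upper arm and forearm endpoints in the global frame.
    q_e, q_w: elbow- and wrist-sensor quaternions; lu, lf in metres."""
    b_vu = np.array([-lu, 0.0, 0.0])   # upper arm vector in its body frame
    b_vf = np.array([-lf, 0.0, 0.0])   # forearm vector in its body frame
    g_vu = rotate(q_e, b_vu)           # Eq. (7)
    g_vf = rotate(q_w, b_vf) + g_vu    # Eq. (8)
    return g_vu, g_vf
```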

A. Shoulder angles

The shoulder flexion/extension (s_fe) and abduction/adduction (s_aa) angles are calculated from the upper arm position vector, while the shoulder medial/lateral rotation (s_ml) angle is calculated from the forearm position vector, as follows.

$$s\_fe = 90^{\circ} + \mathrm{atan2}({}^{g}v_u(z),\, {}^{g}v_u(x)) \tag{9}$$

$$s\_aa = 90^{\circ} + \mathrm{atan2}({}^{g}v_u(z),\, {}^{g}v_u(y)) \tag{10}$$

$$s\_ml = \begin{cases} \mathrm{atan2}({}^{g}v_f(y),\, {}^{g}v_f(x)), & s\_fe < 90^{\circ} \;\&\; s\_aa < 90^{\circ} \\[2pt] \mathrm{atan2}({}^{g}v_f(z),\, {}^{g}v_f(y)), & s\_fe > 90^{\circ} \;\|\; s\_aa > 90^{\circ} \end{cases} \tag{11}$$

where atan2 is the four-quadrant inverse tangent function. The value of $90^{\circ}$ is added to bring the s_fe and s_aa angles into the standard range of $[-90^{\circ}, +180^{\circ}]$. The range of s_ml is $[-90^{\circ}, +90^{\circ}]$.
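A direct transcription of Eqs. (9)-(11) follows (a sketch; angles are returned in degrees and the position vectors are those produced by limb_positions above):

```python
import numpy as np

def shoulder_angles(g_vu, g_vf):
    """Shoulder joint angles of Eqs. (9)-(11), in degrees."""
    s_fe = 90.0 + np.degrees(np.arctan2(g_vu[2], g_vu[0]))   # Eq. (9)
    s_aa = 90.0 + np.degrees(np.arctan2(g_vu[2], g_vu[1]))   # Eq. (10)
    if s_fe < 90.0 and s_aa < 90.0:                           # Eq. (11), first case
        s_ml = np.degrees(np.arctan2(g_vf[1], g_vf[0]))
    else:                                                     # Eq. (11), second case
        s_ml = np.degrees(np.arctan2(g_vf[2], g_vf[1]))
    return s_fe, s_aa, s_ml
```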

MAZOMENOS et al.: DETECTING ELEMENTARY ARM MOVEMENTS BY TRACKING UPPER LIMB JOINT ANGLES WITH MARG SENSORS 5

B. Elbow angles

The elbow flexion/extension (e_fe) angle is calculated as the angle between the two position vectors, while the elbow pronation/supination (e_ps) angle is calculated as the roll angle ($\phi_w$), the angle of rotation around the global x-axis, of the MARG sensor located at the wrist. This is calculated from $q_w = [q_{w1} \;\; q_{w2} \;\; q_{w3} \;\; q_{w4}]$. Thus,

$$e\_fe = \mathrm{atan2}(\|{}^{g}v_f \times {}^{g}v_u\|,\; {}^{g}v_f \cdot {}^{g}v_u) \tag{12}$$

$$e\_ps = \phi_w = \mathrm{atan2}(2 q_{w3} q_{w4} + 2 q_{w1} q_{w2},\; 2 q_{w1}^{2} + 2 q_{w4}^{2} - 1) \tag{13}$$

The range of e_fe is $[0^{\circ}, +180^{\circ}]$ and that of e_ps is $[-90^{\circ}, +90^{\circ}]$. The calculations listed in Eq. 10-Eq. 13 refer to the joint angles of the right upper limb. The calculation of s_aa, s_ml and e_ps requires minor adjustments, such as a change of sign for some of the position vector coordinates, when the left upper limb is considered.
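Similarly for Eqs. (12)-(13), assuming the wrist quaternion is ordered $[q_{w1}, q_{w2}, q_{w3}, q_{w4}] = [w, x, y, z]$ (a sketch, angles in degrees):

```python
import numpy as np

def elbow_angles(g_vu, g_vf, q_w):
    """Elbow joint angles of Eqs. (12)-(13), in degrees."""
    cross = np.linalg.norm(np.cross(g_vf, g_vu))
    dot = np.dot(g_vf, g_vu)
    e_fe = np.degrees(np.arctan2(cross, dot))          # Eq. (12)
    w, x, y, z = q_w
    e_ps = np.degrees(np.arctan2(2*y*z + 2*w*x,        # Eq. (13): roll of the
                                 2*w**2 + 2*z**2 - 1))  # wrist sensor quaternion
    return e_fe, e_ps
```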

IV. DATA COLLECTION AND KINEMATIC PATTERN ANALYSIS

The kinematic information derived from the previous analysis was utilized to identify characteristic patterns and values that can be used to distinguish among the three movements. This investigation was performed on a set of 28 repetitions of each of the three tasks, performed by two healthy volunteers in a controlled environment. The dataset obtained (analysis dataset) was used exclusively for the extraction of the discriminating features and for the formulation of the identification algorithm, and was not included in the performance evaluation experiments (evaluation dataset), which took place with both healthy individuals (18) and stroke survivors (4), as discussed in Section V.

In the analysis dataset, the three tasks were executed sequentially in each repetition, with the subject performing them whilst sitting comfortably on a chair at a table. Initially the volunteer reached and retrieved a glass of water positioned in front of them. After the glass was retrieved, the subject performed the task of arm rotation and poured the water into another, initially empty, glass. The final task was lifting and drinking the water from the second glass before returning it to the table. The tasks were deliberately executed at a relatively slow pace in order to clearly capture their kinematic patterns. This facilitated comparison and the extraction of the discriminating features.

In our experiments we employed the Shimmer 2r 9DoF MARG sensor platform, comprising a mutually orthogonal 3-axis accelerometer, gyroscope and magnetometer. The data streams from the three sensors were used as inputs for deriving the kinematic information. The Shimmer module is based on an MSP430 microcontroller operating at 8 MHz and has an integrated RN-42 class-2 Bluetooth transceiver enabling wireless communication [30]. The operational range was set at ±1.5 g for the accelerometer and at ±500°/s for the gyroscopes. The MARG module was programmed to sample at 50 Hz, which was deemed sufficiently fast for sampling elementary arm movements. The accumulated MARG data were initially filtered to remove noise, using zero-phase digital FIR filters. Accelerometer and magnetometer data were filtered with low-pass filters with cut-off frequencies of 12 Hz and 10 Hz respectively. Gyroscope data were filtered with a band-pass filter in the range 0.5 Hz to 25 Hz. The sensors were calibrated before the beginning of the experiments, using Shimmer's proprietary software. Other Shimmer software, which permitted multiple wireless Bluetooth streams to transmit data concurrently, was used for data acquisition. During the experiments, the operator of the acquisition software manually annotated the start (on) and end (off) times of each task by adding a marker signal to the data, based on a predefined resting position, effectively segmenting each task.
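This pre-filtering stage can be reproduced along the following lines (a sketch using SciPy; only the cut-off frequencies and the zero-phase FIR requirement come from the text, while the filter order is our own illustrative choice, and, since the 25 Hz upper edge of the gyroscope band-pass coincides with the Nyquist frequency at 50 Hz sampling, an equivalent 0.5 Hz high-pass is used here):

```python
import numpy as np
from scipy.signal import firwin, filtfilt

FS = 50.0  # sampling rate, Hz

# FIR taps; filtfilt applies them forward and backward, giving zero phase.
b_acc = firwin(31, 12.0, fs=FS)                    # accelerometer low-pass, 12 Hz
b_mag = firwin(31, 10.0, fs=FS)                    # magnetometer low-pass, 10 Hz
b_gyr = firwin(31, 0.5, pass_zero=False, fs=FS)    # gyroscope high-pass, 0.5 Hz
                                                   # (25 Hz edge = Nyquist at 50 Hz)

def prefilter(acc, gyr, mag):
    """Zero-phase filtering of raw MARG streams, applied per axis (axis 0 = time)."""
    acc_f = filtfilt(b_acc, [1.0], acc, axis=0)
    mag_f = filtfilt(b_mag, [1.0], mag, axis=0)
    gyr_f = filtfilt(b_gyr, [1.0], gyr, axis=0)
    return acc_f, gyr_f, mag_f
```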

The two sensors were attached to the upper arm, proximal to the elbow, and to the forearm, proximal to the wrist, using bespoke holders with elastic straps, and orientated such that their coordinate frames were closely aligned with the local coordinate frames of the upper arm and forearm. The alignment was visually inspected by instructing the subject to raise their upper arm to approximately 90° (shoulder height) and fully extend their elbow (palm facing downward). The conclusions we draw from the kinematic pattern analysis of each task, and the description of the characteristic features upon which we base our identification algorithm, are discussed in the following paragraphs.

A. Reach and retrieve - Task A

The reach and retrieve task relates to the act of reaching in order to grasp an object and the subsequent retrieval of the object. During the reaching act, the shoulder joint is flexing while the elbow is extending concurrently. Throughout this work and in our experiments, a reach and retrieve task is considered one which requires the elbow to be almost fully extended in order to reach the object. Therefore, since flexion of the shoulder results in the s_fe angle increasing while the extension of the elbow translates to the e_fe angle decreasing at the same time, one would expect that at the point when the object is reached, the s_fe value will be at a local maximum while e_fe will demonstrate a local minimum in the same temporal frame. This kinematic pattern is illustrated in Fig. 3, produced from a representative execution of Task A.

B. Lift object to mouth - Task B

The second task we investigate relates to the act of bending the arm at the elbow. This was realized as lifting an object (e.g. glass or cup) to the mouth and drinking from it. Our observations reveal that the value of e_fe remains at an almost constant maximum level, with minuscule variations, during the act of drinking, which takes place near the midpoint of the task. Additionally, our investigation further revealed that the value of ${}^{g}v_f$ on the z-axis ($v_f^z$), the vertical coordinate, becomes higher than 0 m in the midpoint area of the task. This is based on the fact that the act of lifting and drinking requires the end of the forearm (wrist) to reach the height of the mouth, thus having its vertical coordinate higher (> 0 m) than the origin located at the shoulder. Fig. 4, taken from an execution of Task B, illustrates these two characteristic features of this task.


Fig. 3. Reach and retrieve s_fe and e_fe angles demonstrating the temporal proximity of the two extrema points.

Fig. 4. The e_fe angle and $v_f^z$ coordinate during the act of lifting an object and drinking. The area of constant e_fe is indicated by a circle and the surpassing of the 0 m threshold, indicated by a solid line, is clearly visible.

C. Rotate an object - Task C

The final movement we consider is the act of rotating the arm. In our experiments, this task is realized by rotating a glass and pouring its contents into another glass. The same behavior of e_fe discussed for Task B was also observed in Task C, where e_fe has an almost constant value with very small perturbations. Finally, by monitoring the value of ${}^{g}v_f$ during the executions of Task C, we observe that $v_f^z$ is always smaller than 0 m. These two observations are shown in Fig. 5, which depicts a representative execution of Task C.

Fig. 5. The e_fe angle and $v_f^z$ coordinate during the act of rotating the arm. The area where e_fe remains constant is indicated by a circle and the 0 m threshold is never surpassed.

D. Detection and discrimination method

The aforementioned set of observations was deduced exclusively from the analysis dataset, and even though only two subjects participated in these experiments, we hypothesized that the kinematic patterns we observed will also be present during the execution of these tasks by any individual, although a certain amount of inter-person variability is expected. This stems from the fact that these kinematic patterns, and the subsequent set of detection rules that we formulated, are based on the motor function of the upper limb, which is expected to have broadly similar characteristics even in situations where it is impaired by a neuro-degenerative pathology. For example, the pattern associated with the act of reaching, in which the s_fe angle increases while almost simultaneously the e_fe angle decreases, with the extrema points appearing at full extension, is due to the inherent way this task is performed. Subsequently, our strategy was to employ the analysis dataset for deriving the set of rules for discriminating the three tasks and then evaluate its robustness through experimentation, as presented in Section V, with a diverse population which included both a healthy group and stroke survivors (evaluation dataset). To summarize our findings, the kinematic analysis of the three movements has shown that discriminating among the three tasks is possible by investigating the value and pattern of three kinematic features, namely the s_fe and e_fe angles and the value of $v_f^z$. The specific characteristics of the kinematic features that we use to discriminate the three movements are summarized as:

• Task A: The e_fe pattern displays a steep slope and significant variations at the midpoint. In addition, the e_fe and s_fe angles will have extrema points (min for e_fe, max for s_fe) that are nearly coincident in time.


• Task B: The e_fe pattern remains almost constant at the midpoint. The $v_f^z$ value will be > 0 m at the midpoint.

• Task C: The e_fe pattern remains almost constant at the midpoint. The $v_f^z$ value will be < 0 m at the midpoint.

Hence, Task A can be distinguished from the other two tasks by examining the pattern of e_fe. Secondly, after eliminating Task A, a distinction between Task B and Task C is possible by comparing the value of $v_f^z$ at the midpoint against a threshold set at 0 m. Based on this set of rules, a two-level detection and discrimination algorithm is proposed as follows.

We consider the three kinematic properties (e_fe, s_fe and $v_f^z$) and the on/off times of each task, provided by the manual annotation, as the inputs to our algorithm. Initially, the minimum value (e_min) of e_fe and its temporal position (e_p), and the maximum value (s_max) of s_fe and its temporal location (s_p), are extracted. The window in which we search for these values is set as [on+1s, off-1s]. We confine our search for the two extrema points to this window to exclude kinematic data collected at the very beginning and end of the task, which we have observed can sometimes lead to erroneous detections of the extrema points. This is due to involuntary sudden movements of the arm occurring near the beginning and end times of the task. We then extract the temporal location (m_p) and value (m_v) of the midpoint of e_fe, and count the total number of times n that the angle falls below a pre-determined fraction (α · m_v) of the midpoint value, derived experimentally, within the temporal window [m_p-0.7s, m_p+0.7s]. Parameter n allows us to determine whether or not the e_fe values around m_p are fairly constant. This is typical of the plateau-like responses exhibited in the case of Task B and Task C, and translates to n being smaller than an experimentally derived threshold (n < 5). By comparison, the value of n is expected to be rather high (n > 5) for Task A. Hence, the value of n acts as the discriminating factor between Task A and Tasks B and C. In the case where n > 5, the two extrema angles (e_min, s_max) and their temporal proximity (abs(e_p - s_p)) are checked against pre-defined thresholds, and if found to satisfy e_min < 40°, s_max > 50° and abs(e_p - s_p) < 0.7s, the task is labelled as Task A. If n < 5, the algorithm proceeds to the second level to distinguish the task as either Task B or Task C. For this, the maximum value (m_z) of $v_f^z$ in the [m_p-1s, m_p+1s] window is extracted. When compared to the 0 m threshold, this parameter allows us to distinguish Task B from Task C. The proposed detection and discrimination algorithm is provided as pseudocode in Fig. 6.

V. EXPERIMENTAL EVALUATION AND DISCUSSION

A. Experimental Evaluation

In order to evaluate the performance of the proposed algorithm, we conducted a series of experiments with 18 healthy volunteers at the University of Southampton and with 4 stroke survivors at Brandenburg Klinik in Bernau, Germany; the latter under the supervision of clinical staff. The healthy cohort comprised staff and students from the university, age range 25-50, with representatives from both male and female populations and examples of both left and right arm dominance. The 4 stroke survivors were men and women, age range 45-73, at different stages of their post-stroke rehabilitation.

Initialise
  Consider the on, off points and the time series of s_fe, e_fe, v_f^z
Calculate the parameters
  - Find e_min = min(e_fe) at time e_p and s_max = max(s_fe) at time s_p in [on+1s, off-1s]
  - Find m_p = on + ((off - on)/2) and the corresponding value m_v of e_fe
  - Count n as the number of e_fe values in [m_p-0.7s, m_p+0.7s] that are < α·m_v, α = 0.88
  - Find m_z as the max(v_f^z) value in [m_p-1s, m_p+1s]
Task discrimination
if abs(e_p - s_p) < 0.7s & e_min < 40° & s_max > 50° & n > 5 then
  task = A
else if n < 5 then
  if m_z > 0 m then
    task = B
  else if m_z < 0 m then
    task = C
  end if
end if

Fig. 6. The pseudocode of the proposed algorithm.
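For concreteness, the rule set of Fig. 6 maps onto the following Python sketch (our own transcription; it assumes the joint-angle time series are sampled at 50 Hz and that the on/off annotations have been converted to sample indices):

```python
import numpy as np

FS = 50        # samples per second
ALPHA = 0.88   # fraction of the midpoint e_fe value (Fig. 6)

def classify_task(s_fe, e_fe, vfz, on, off):
    """Two-level rule-based discrimination of Tasks A/B/C (Fig. 6).
    s_fe, e_fe (deg) and vfz (m) are 1-D arrays; on, off are sample indices."""
    lo, hi = on + FS, off - FS                 # search window [on+1s, off-1s]
    e_p = lo + np.argmin(e_fe[lo:hi])          # elbow minimum and its location
    s_p = lo + np.argmax(s_fe[lo:hi])          # shoulder maximum and its location
    e_min, s_max = e_fe[e_p], s_fe[s_p]

    m_p = on + (off - on) // 2                 # task midpoint
    m_v = e_fe[m_p]
    w = int(0.7 * FS)
    n = np.sum(e_fe[m_p - w:m_p + w] < ALPHA * m_v)

    if abs(e_p - s_p) < 0.7 * FS and e_min < 40 and s_max > 50 and n > 5:
        return 'A'
    if n < 5:
        m_z = np.max(vfz[m_p - FS:m_p + FS])   # max of v_f^z in [m_p-1s, m_p+1s]
        return 'B' if m_z > 0 else 'C'
    return None                                # no rule fired
```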

The selection of the two groups reflects our approach of evaluating the detection algorithm against two populations with noticeably different qualitative characteristics with regard to their motor function abilities. Our intention was not to perform a clinical study and test a medical hypothesis; the selection criterion for the participants was that of motor function impairment. The results obtained from the healthy volunteers provide a baseline of the algorithm's performance for normative, unimpaired motion. The experiments involving stroke survivors are used to reveal the extent to which the proposed algorithm can be applied for the discrimination of the three movements in individuals with non-canonical motor function; in other words, how the detection performance is affected by physical disability.

In these experiments, the healthy subjects used their dominant arm while the stroke survivors used their stroke-affected arm. Shimmer 2r MARG sensors were attached to the forearm (proximal to the wrist) and upper arm (proximal to the elbow) of the subjects whilst they performed a number of arm movement exercises. The correct placement of the sensors was visually verified as described in Section IV. Physiotherapists assisted the stroke survivors in placing their arm in the desired position for placement evaluation. Whenever necessary, the sensor placement was corrected.

Our study comprised two distinct types of experiment. In the first type, referred to as “controlled”, each subject performed the three tasks whilst seated at a table, in a similar way as in the analysis dataset, with the subject having to reach, grasp and retrieve a glass of water for Task A, pour the water into another glass for Task C, and drink the water and return the glass to the table for Task B. However, in the controlled experiments the tasks were not executed sequentially. Instead, each task was repeated individually 5 times in a single execution run (i.e. 5 repetitions of Task A followed by 5 of Task B and then 5 of Task C). Between task repetitions the arm was briefly returned to a resting position. This was simply a predetermined position in which the participant paused briefly, which enabled the operator of the data collection software to manually annotate the start and end time points of each task execution. The second type of experiment, referred to as “semi-naturalistic”, involved the execution of 20 tasks in a sequence that emulated the everyday activity of “preparing a cup of tea”. Every one of the 20 tasks belongs to one of the three classes of interest: out of the 20 tasks, 10 were Task A, 5 Task B and 5 Task C. Our intention in the semi-naturalistic experiment was to evaluate the performance of the proposed method on a series of tasks that, when grouped together, constitute a standard everyday activity. Again, the subjects briefly positioned their arm in the resting position between actions. Table I lists the sequence of the 20 tasks performed during the semi-naturalistic experiment and their respective class assignments.

TABLE I
THE SEQUENCE OF 20 TASKS IN THE “PREPARING A CUP OF TEA” ACTIVITY

No  Action                                                    Class
1   Fetch cup from desk                                       A
2   Place cup on kitchen surface                              A
3   Fetch kettle                                              A
4   Pour out extra water from kettle                          C
5   Put kettle onto charging point                            A
6   Reach out for the power switch on the wall                A
7   Drink a glass of water while waiting for kettle to boil   B
8   Reach out to switch off the kettle                        A
9   Pour hot water from the kettle into cup                   C
10  Fetch milk from the shelf                                 A
11  Pour milk into cup                                        C
12  Put the bottle of milk back on shelf                      A
13  Fetch cup from kitchen surface                            A
14  Have a sip and taste the drink                            B
15  Have another sip                                          B
16  Unlock drawer                                             C
17  Retrieve biscuits from drawer                             A
18  Eat a biscuit                                             B
19  Lock drawer                                               C
20  Have a drink                                              B

For the stroke survivors, the study spanned 3 weeks in order to minimize interference with their regular rehabilitation program. Each week, 4 execution runs of the controlled experiment, totaling 20 repetitions per person for each task, were performed. Over the entire 3-week study, each stroke survivor completed 60 executions of each task. The semi-naturalistic experiment was executed twice in the first week and 4 times in each of weeks two and three. This resulted in a total of 10 executions of the experiment for each stroke survivor, involving 100 instances of Task A and 50 each of Task B and Task C. In the healthy group, each subject performed 4 runs of the controlled experiment, thus 20 repetitions of each task, and 4 of the 18 volunteers performed 4 repetitions of the semi-naturalistic experiment (40 Tasks A, 20 Tasks B and 20 Tasks C). The data gathered from these experiments constitute the evaluation database, which was used for evaluating the performance and robustness of the proposed detection and discrimination algorithm. It should be noted that none of the data from these experiments was used to modify the existing algorithm, which was based exclusively on the analysis dataset as described in Section IV. The evaluation database was used explicitly for performance assessment.

Table II lists the achieved performance in the controlled experiment for the stroke survivors group. High detection performance (> 95%) was observed for each individual task, the only exception being a lower value of approximately 88% for Task A in the combined results. A similar level of performance (> 95%) was attained for all subjects over the three tasks, apart from Subject 3, where the overall detection accuracy was 80%. These two lower scores (< 90%) are both attributed to Task A being detected with a lower degree of accuracy in the second week of experiments with Subject 3. This is illustrated in Fig. 7, which depicts the week-by-week performance achieved in the controlled experiment by the stroke survivors group. From Fig. 7(a) we observe that Task A was detected with 40% accuracy (8/20 successful detections) in week 2 for Subject 3. We also notice that Task A was detected at higher levels during the other weeks, 60% (12/20 correct detections) in week 1 and 80% (16/20 correct detections) in week 3, and that the detection of Task B for Subject 3 was also at its lowest level in week 2. These two observations prompt us to conclude that some erroneous sensor placement was the reason for this performance during the second week. Furthermore, the motor functions of Subject 3 were affected by their stroke episode more than those of the rest of the group.

TABLE II
DETECTION PERFORMANCE FOR THE STROKE SURVIVORS GROUP IN THE CONTROLLED EXPERIMENT

Subject    Task A (#/60)  Task B (#/60)  Task C (#/60)  Overall (#/180)
Subject 1  60 (100%)      59 (98%)       60 (100%)      179 (99.4%)
Subject 2  57 (95%)       60 (100%)      57 (95%)       174 (96.67%)
Subject 3  36 (60%)       50 (83.3%)     58 (96.67%)    144 (80%)
Subject 4  58 (97%)       60 (100%)      60 (100%)      178 (98.9%)
Totals     211/240        229/240        235/240        675/720
           (87.92%)       (95.4%)        (97.92%)       (93.75%)

TABLE III
DETECTION PERFORMANCE FOR THE HEALTHY GROUP IN THE CONTROLLED EXPERIMENT

Subject     Task A (#/20)  Task B (#/20)  Task C (#/20)  Overall (#/60)
Subject 1   19 (95%)       20 (100%)      20 (100%)      59 (98.3%)
Subject 2   20 (100%)      20 (100%)      18 (90%)       58 (96.67%)
Subject 3   17 (85%)       20 (100%)      20 (100%)      57 (95%)
Subject 4   19 (95%)       19 (95%)       17 (85%)       55 (91.67%)
Subject 5   20 (100%)      19 (95%)       17 (85%)       56 (93.33%)
Subject 6   20 (100%)      20 (100%)      20 (100%)      60 (100%)
Subject 7   20 (100%)      20 (100%)      20 (100%)      60 (100%)
Subject 8   20 (100%)      20 (100%)      20 (100%)      60 (100%)
Subject 9   20 (100%)      19 (95%)       17 (85%)       56 (93.33%)
Subject 10  20 (100%)      20 (100%)      20 (100%)      60 (100%)
Subject 11  20 (100%)      19 (95%)       18 (90%)       57 (95%)
Subject 12  20 (100%)      17 (85%)       20 (100%)      57 (95%)
Subject 13  18 (90%)       19 (95%)       20 (100%)      57 (95%)
Subject 14  18 (90%)       20 (100%)      20 (100%)      58 (96.67%)
Subject 15  20 (100%)      20 (100%)      20 (100%)      60 (100%)
Subject 16  20 (100%)      20 (100%)      20 (100%)      60 (100%)
Subject 17  20 (100%)      20 (100%)      16 (80%)       56 (93.33%)
Subject 18  20 (100%)      20 (100%)      20 (100%)      60 (100%)
Totals      351/360        352/360        343/360        1046/1080
            (97.5%)        (97.79%)       (95.28%)       (96.85%)

Table III illustrates the detection performance of the healthy population in the controlled experiment. As anticipated, the level of performance in the healthy group is higher than that of the stroke survivors group. This is attributed to the impaired motor function capabilities of the stroke survivors.


Fig. 7. The per-week performance in the controlled experiments for (a) Task A, (b) Task B and (c) Task C from the stroke survivors group.

Fig. 8. Percentage of correct detections against task duration in the controlled experiment for the healthy group and stroke survivors: (a) Task A, (b) Task B, (c) Task C. For all tasks the detection ratio remains within ±6% of the average value in both groups (shown in the legend).

The combined detection accuracy was higher than 95% for each separate task when considering all 18 participants. Likewise, an accuracy higher than 91% was obtained for each volunteer over all tasks. Table IV lists the average durations, and their respective variances, of the tasks performed in the controlled experiment for both groups. To demonstrate the robustness of the detection algorithm against the time a task took to complete, the percentage of correct task detections in the controlled experiment is illustrated in Fig. 8 as a function of task duration, for both the stroke survivors and healthy groups. We observe that in both groups and for every task duration, the detection ratio lies within ±6% of the total average detection ratio for that task in the respective group. From this we conclude that the developed algorithm achieves a similar level of performance irrespective of the time required for a task to be completed.

TABLE IV
AVERAGE TIME DURATIONS OF TASKS IN THE CONTROLLED EXPERIMENT

Task   Duration (µ ± σ², sec)
       Stroke Survivors   Healthy Group
A      4.1 ± 1            5.4 ± 1.3
B      5.9 ± 1.6          6.7 ± 1.7
C      6 ± 2              7.2 ± 1.8

Table V and Table VI list the performance results from the semi-naturalistic experiments. The overall detection accuracy remains at high levels (> 80%), although lower than the corresponding figures for the controlled experiment in both groups, as would be expected. Also as expected, the accuracy was higher (both overall and for each task) in the healthy group (89%) than in the stroke survivors group (83%). Subject 3 from the stroke survivors group demonstrates the lowest detection accuracies, which we again attribute to their greater level of impairment. Finally, the reason Task A returns a lower level of accuracy, with a correct detection ratio close to 80%, is that a number of the pre-determined Task A actions in the “preparing a cup of tea” sequence, such as actions 2, 5, 12 and 17, are not strictly reach and retrieve actions; these were the actions that were misdetected most of the time. During the design of this experiment, however, the expert physiotherapists participating in this study considered these activities representative of a reach and retrieve action and therefore worthy of inclusion in the experiment.

TABLE V
DETECTION PERFORMANCE FOR THE STROKE SURVIVORS GROUP IN THE SEMI-NATURALISTIC EXPERIMENT

Subject    Task A (#/100)  Task B (#/50)  Task C (#/50)  Overall (#/200)
Subject 1  86 (86%)        46 (92%)       44 (88%)       176 (88%)
Subject 2  83 (83%)        45 (90%)       43 (86%)       172 (86%)
Subject 3  61 (61%)        44 (88%)       40 (80%)       145 (72.5%)
Subject 4  88 (88%)        46 (92%)       38 (76%)       172 (86%)
Totals     318/400         181/200        165/200        664/800
           (79.5%)         (90.5%)        (82.5%)        (83.00%)

B. Discussion

The experimental investigation achieves two things. Firstly, the high performance in the controlled experiment (93.75% in stroke survivors and 96.85% in the healthy group) among a diverse population allows us to conclude that the developed algorithm is robust against the variability in the kinematic patterns of different individuals.


TABLE VI
DETECTION PERFORMANCE FOR THE HEALTHY GROUP IN THE SEMI-NATURALISTIC EXPERIMENT

Subject    Task A (#/40)  Task B (#/20)  Task C (#/20)  Overall (#/80)
Subject 1  33 (82.5%)     19 (95%)       20 (100%)      72 (90%)
Subject 2  39 (97.5%)     20 (100%)      20 (100%)      79 (98.75%)
Subject 3  30 (75%)       18 (90%)       20 (100%)      68 (85%)
Subject 4  31 (77.5%)     19 (95%)       18 (90%)       68 (85%)
Totals     133/160        76/80          78/80          287/320
           (83.13%)       (95%)          (97.5%)        (89.69%)

Additional investigations showed that tasks of different duration are detected with similarly high levels of performance in both groups. These observations validate our discrimination strategy and verify that the derived set of rules is capable of discriminating the three movements even in situations where motor function is impaired. Secondly, with the semi-naturalistic experiment we evaluate the extent to which the proposed algorithm can detect the three movements when these take place as subtasks of a typical activity (i.e. “preparing a cup of tea”). The results obtained in this experiment (83% in stroke survivors and 89.69% in the healthy group) are quite promising and indicative of our method's ability to effectively distinguish between the three movements of interest when these take place sequentially. Although the 20 tasks in our experiment were predefined and their sequence was predetermined and not spontaneous, we believe that the semi-naturalistic experiment provides an adequate proof-of-concept. It is common for everyday activities to combine some elements of the three movements. For example, the act of drinking a glass of water may involve both a “reach and retrieve” and a “lift object to mouth” action. In such cases, we expect the algorithm to identify the overall activity based on the movement features that characterize it the most.

Analysis of the data shows that incorrect sensor placement can potentially affect the performance of the algorithm. This is because the body coordinate frames of the upper arm and forearm are not perfectly aligned with the two sensors' coordinate frames. When attaching the sensors to the individual's arm we aimed to ensure, by visual inspection, that the frames were closely aligned. More explicit methods for aligning the sensor frame to the body frame, such as those in [31] and [18], were deemed too burdensome and time consuming to apply to the stroke survivors. In real-world use, perfect alignment between the two frames is very challenging, since body segments are not rigid elements. In addition, in a long-term deployment scenario (like the one we consider), it is not uncommon for attached sensors to move slightly from their original position, so some degree of misalignment is typically expected. The results show that good performance can be achieved even without perfect alignment, which further validates the robustness of the proposed algorithm. In the application scenario we consider, the MARG sensors are expected to be properly placed and aligned to the respective body segment at the beginning of the monitoring session and checked at regular intervals, either by the patients themselves or, if that is not possible, by a caregiver.
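For illustration only, the following numpy sketch shows one way a fixed sensor-to-segment misalignment could, in principle, be estimated from a single static pose and removed from subsequent orientation estimates. This calibration step is not part of the proposed method (we rely on careful placement instead), and all function names and the quaternion convention are assumptions of the sketch.

```python
import numpy as np

def quat_mult(p, q):
    """Hamilton product of two unit quaternions [w, x, y, z]."""
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return np.array([
        pw*qw - px*qx - py*qy - pz*qz,
        pw*qx + px*qw + py*qz - pz*qy,
        pw*qy - px*qz + py*qw + pz*qx,
        pw*qz + px*qy - py*qx + pz*qw,
    ])

def quat_conj(q):
    """Conjugate (= inverse for unit quaternions)."""
    return np.array([q[0], -q[1], -q[2], -q[3]])

def misalignment(q_sensor_static, q_segment_ref):
    """With the segment held in a known reference pose (q_segment_ref),
    attribute the residual rotation reported by the sensor to a fixed
    sensor-to-segment misalignment: q_mis = q_segment_ref^-1 * q_sensor."""
    return quat_mult(quat_conj(q_segment_ref), q_sensor_static)

def segment_orientation(q_sensor, q_mis):
    """During tracking, remove the stored misalignment from every
    sensor estimate: q_segment = q_sensor * q_mis^-1."""
    return quat_mult(q_sensor, quat_conj(q_mis))
```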

Since the focus of this work was to evaluate the ability of the proposed algorithm to discriminate between the three elementary movements, we decided to annotate the obtained datasets manually, using a marker signal during data acquisition, thereby avoiding any ambiguity in the on/off time instances of the tasks. This annotation strategy, applied in both the controlled and semi-naturalistic experiments, enabled us to isolate the performed tasks in the dataset and to exclude from our analysis any other movements performed by the volunteers during the experiments.

Naturally, during everyday activities the arm moves around freely and it is not trivial to determine definitively when a particular movement or event starts or ends. This is less of a problem when the sensors are used during prescribed rehabilitation exercises, since these tend to be directed under supervision. Nevertheless, an automatic event detection system capable of segmenting real-time data into periods of activity and inactivity would be necessary for a truly autonomous system (e.g. rehabilitation in the home environment), and this is an area of research that we are currently pursuing. For example, when the arm is not in motion, the modulus of the combined tri-axial accelerometer signal equates to the value of gravitational acceleration, the modulus of the combined tri-axial gyroscope signal equates to zero and, similarly, the modulus of the combined tri-axial magnetometer signal equates to the local value of magnetic field strength, irrespective of sensor orientation. Although the last of these is also dependent on geographical location, and magnetometers can be influenced by the proximity of ferrous materials [32] likely to be present in the home environment, a simple thresholding technique applied to these signal moduli can distinguish periods of activity from inactivity; a crude form of event detection.
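A minimal sketch of such a thresholding scheme is given below (numpy only; the gravity constant, the tolerances and the 80% stillness criterion are illustrative assumptions, not values derived from this study):

```python
import numpy as np

G = 9.81  # nominal gravitational acceleration in m/s^2 (assumed constant)

def window_is_active(acc, gyr, acc_tol=0.5, gyr_tol=0.2, still_frac=0.8):
    """Crude activity detector for one window of tri-axial samples.

    acc, gyr: (N, 3) arrays of accelerometer (m/s^2) and gyroscope
    (rad/s) samples. At rest the accelerometer modulus stays near g
    and the gyroscope modulus near zero, whatever the orientation.
    """
    acc_mod = np.linalg.norm(acc, axis=1)   # |a| for each sample
    gyr_mod = np.linalg.norm(gyr, axis=1)   # |omega| for each sample
    still = (np.abs(acc_mod - G) < acc_tol) & (gyr_mod < gyr_tol)
    # Declare the window active when too few samples look stationary.
    return still.mean() < still_frac
```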

It is established that density estimation algorithms (e.g. Kalman filters, particle filters), coupled with more complex modeling of the arm based on physical geometrical constraints, can provide a more accurate estimation of the upper limb orientation, minimizing the effect of gyroscope drift [18], [31]. Typically, the natural restriction of the human elbow in performing abduction/adduction is used to correct an initial orientation estimate in such a way that the elbow abduction/adduction angle is minimized. However, the computational load of such approaches is considerably higher, so we employed the more efficient quaternion gradient descent method, in which gyroscope drift is compensated through the parameter β, representing the gyroscope measurement error as the magnitude of a quaternion derivative [12]. To demonstrate the ability of the orientation algorithm to mitigate the gyroscope error, we calculate the elbow abduction/adduction angle, defined as the angle between the Z-axis of the forearm and the Y-axis of the upper arm minus 90°, for the three movements executed by the healthy volunteers in the controlled experiment.

[Fig. 9 comprises three panels, each plotting the elbow abduction/adduction angle (deg, range −30 to 30) against time (0 to 45 s).]

Fig. 9. The elbow abduction/adduction angle calculated in the three tasks: (a) Task A, (b) Task B and (c) Task C.

From Fig. 9 we observe that the angle remains small (almost always < 10°) in all three movements. From this we conclude that, although less accurate than the particle filter approach in [31], the quaternion gradient descent method provides a reasonably accurate estimation of the orientation of the upper limb.
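For reference, the sketch below shows how this angle can be computed (numpy only; helper names are illustrative, and each segment's orientation is assumed to be available as a unit quaternion expressed in a common global frame):

```python
import numpy as np

def quat_to_rot(q):
    """Rotation matrix of a unit quaternion [w, x, y, z] mapping
    body-frame vectors into the global frame."""
    w, x, y, z = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

def elbow_abd_add_deg(q_forearm, q_upperarm):
    """Elbow abduction/adduction angle: angle between the forearm
    Z-axis and the upper-arm Y-axis, minus 90 degrees."""
    z_f = quat_to_rot(q_forearm)[:, 2]   # forearm Z-axis, global frame
    y_u = quat_to_rot(q_upperarm)[:, 1]  # upper-arm Y-axis, global frame
    cos_a = np.clip(np.dot(z_f, y_u), -1.0, 1.0)
    return np.degrees(np.arccos(cos_a)) - 90.0
```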

VI. CONCLUSIONS

This paper describes the development of an algorithmic solution for efficiently detecting and discriminating three elementary arm movements. These movements are fundamental in natural activities, and the ability to detect them is of great importance in evaluating the rehabilitation progress of stroke survivors or patients suffering from other motor neuron diseases. In our work, we employ a pair of MARG sensors attached to the wrist and elbow, from which the orientations of the arm segments are deduced using a gradient-descent quaternion-based method. With the aid of a 2-link limb model and position vectors, 3-D tracking of the upper arm and forearm position is achieved. From the kinematic analysis, a set of rules involving three kinematic parameters (e_fe, s_fe and v_zf) was derived and used to formulate the detection and discrimination algorithm. The proposed solution was then evaluated in a series of experiments with two groups: healthy individuals (18 subjects) and stroke survivors (4 subjects). In the controlled experiments the proposed algorithm achieved >88% performance for each task individually and >93% overall across both groups. This validates the basic rules of our detection algorithm and establishes its robustness, which is further solidified by the ability to identify tasks of different duration with similar accuracy (within ±6% of the average value) in both groups. Similar levels of performance, >80% for each separate task and >83% overall, were also obtained in the semi-naturalistic experiments which, although predefined, comprise a sequence of the three tasks representing a typical everyday activity: "preparing a cup of tea". This level of accuracy demonstrates the potential of the proposed method for identifying the three elementary upper limb movements during natural activities. Combined with the computationally inexpensive orientation algorithm, the work discussed in this paper has clear potential to be integrated into a body-area network of MARG sensors, as a component of a fully automated task detection and discrimination system for home-based rehabilitation applications.

REFERENCES

[1] A. S. Go, D. Mozaffarian, V. L. Roger, E. J. Benjamin, J. D. Berry, M. J. Blaha, S. Dai, E. S. Ford, C. S. Fox, S. Franco, H. J. Fullerton, C. Gillespie, S. M. Hailpern, J. A. Heit, V. J. Howard, M. D. Huffman, S. E. Judd, B. M. Kissela, S. J. Kittner, D. T. Lackland, J. H. Lichtman, L. D. Lisabeth, R. H. Mackey, D. J. Magid, G. M. Marcus, A. Marelli, D. B. Matchar, D. K. McGuire, E. R. Mohler, C. S. Moy, M. E. Mussolino, R. W. Neumar, G. Nichol, D. K. Pandey, N. P. Paynter, M. J. Reeves, P. D. Sorlie, J. Stein, A. Towfighi, T. N. Turan, S. S. Virani, N. D. Wong, D. Woo, and M. B. Turner, "Heart disease and stroke statistics–2014 update: A report from the American Heart Association," Circulation, vol. 129, no. 3, pp. e28–e292, Jan 2014.

[2] V. L. Feigin, M. H. Forouzanfar, R. Krishnamurthi, G. A. Mensah, M. Connor, D. A. Bennett, A. E. Moran, R. L. Sacco, L. Anderson, T. Truelsen, M. O'Donnell, N. Venketasubramanian, S. Barker-Collo, C. M. M. Lawes, W. Wang, Y. Shinohara, E. Witt, M. Ezzati, M. Naghavi, and C. Murray, "Global and regional burden of stroke during 1990–2010: findings from the Global Burden of Disease Study 2010," Lancet, vol. 383, no. 9913, pp. 245–255, Jan 2014.

[3] I. Korhonen, J. Parkka, and M. Van Gils, "Health monitoring in the home of the future," IEEE Eng. Med. Biol., vol. 22, no. 3, pp. 66–73, May 2003.

[4] P. Bonato, “Wearable sensors and systems,” IEEE Eng. Med. Biol.,vol. 29, no. 3, pp. 25–36, May 2010.

[5] StrokeBack Project, Sep 2014. [Online]. Available: www.strokeback.eu

[6] S. L. Wolf, P. A. Catlin, M. Ellis, A. L. Archer, B. Morgan, and A. Piacentino, "Assessing Wolf Motor Function Test as outcome measure for research in patients after stroke," Stroke, vol. 32, no. 7, pp. 1635–1644, Jul 2001.

[7] S. L. Fritz, S. Blanton, G. Uswatte, E. Taub, and S. L. Wolf, "Minimal detectable change scores for the Wolf Motor Function Test," Neurorehabil. Neural Repair, vol. 23, no. 7, pp. 662–669, Sep 2009.

[8] B. Najafi, K. Aminian, A. Paraschiv-Ionescu, F. Loew, C. Bula, and P. Robert, "Ambulatory system for human motion analysis using a kinematic sensor: monitoring of daily physical activity in the elderly," IEEE Trans. Biomed. Eng., vol. 50, no. 6, pp. 711–723, Jun 2003.

[9] A. Salarian, H. Russmann, F. Vingerhoets, P. Burkhard, and K. Aminian, "Ambulatory monitoring of physical activities in patients with Parkinson's disease," IEEE Trans. Biomed. Eng., vol. 54, no. 12, pp. 2296–2299, Dec 2007.

[10] P. Veltink, H. Bussmann, W. de Vries, W. L. Martens, and R. van Lummel, "Detection of static and dynamic activities using uniaxial accelerometers," IEEE Trans. Rehabil. Eng., vol. 4, no. 4, pp. 375–385, Dec 1996.

[11] L. Wang, T. Gu, X. Tao, and J. Lu, "A hierarchical approach to real-time activity recognition in body sensor networks," Pervasive and Mobile Computing, vol. 8, no. 1, pp. 115–130, Feb 2012.

[12] S. Madgwick, A. J. L. Harrison, and R. Vaidyanathan, "Estimation of IMU and MARG orientation using a gradient descent algorithm," in 2011 IEEE Int. Conf. on Rehabilitation Robotics, Jun 2011, pp. 1–7.

[13] D. Biswas, A. Cranny, N. Gupta, K. Maharatna, J. Achner, J. Klemke, M. Jöbges, and S. Ortmann, "Recognizing upper limb movements with wrist worn inertial sensors using k-means clustering classification," Hum. Mov. Sci., vol. 40, pp. 59–76, Apr 2015.

[14] R. Hyde, L. Ketteringham, S. Neild, and R. Jones, "Estimation of upper-limb orientation based on accelerometer and gyroscope measurements," IEEE Trans. Biomed. Eng., vol. 55, no. 2, pp. 746–754, Feb 2008.

[15] M. El-Gohary and J. McNames, "Shoulder and elbow joint angle tracking with inertial sensors," IEEE Trans. Biomed. Eng., vol. 59, no. 9, pp. 2635–2641, Sep 2012.

[16] D. Roetenberg, P. Slycke, and P. Veltink, "Ambulatory position and orientation tracking fusing magnetic and inertial sensing," IEEE Trans. Biomed. Eng., vol. 54, no. 5, pp. 883–890, May 2007.

[17] H. Zhou, H. Hu, N. D. Harris, and J. Hammerton, "Applications of wearable inertial sensors in estimation of upper limb movements," Biomed. Signal Process. Control, vol. 1, no. 1, pp. 22–32, Jan 2006.

[18] H. Luinge, P. Veltink, and C. Baten, "Ambulatory measurement of arm orientation," J. Biomech., vol. 40, no. 1, pp. 78–85, 2007.

[19] X. Yun and E. Bachmann, "Design, implementation, and experimental results of a quaternion-based Kalman filter for human body motion tracking," IEEE Trans. Robot., vol. 22, no. 6, pp. 1216–1227, Dec 2006.

[20] R. Mahony, T. Hamel, and J.-M. Pflimlin, "Nonlinear complementary filters on the special orthogonal group," IEEE Trans. Autom. Control, vol. 53, no. 5, pp. 1203–1218, Jun 2008.

[21] N. Keijsers, M. Horstink, and S. Gielen, "Online monitoring of dyskinesia in patients with Parkinson's disease," IEEE Eng. Med. Biol., vol. 22, no. 3, pp. 96–103, May 2003.

[22] L. Atallah, B. Lo, R. King, and G.-Z. Yang, "Sensor positioning for activity recognition using wearable accelerometers," IEEE Trans. Biomed. Circuits Syst., vol. 5, no. 4, pp. 320–329, Aug 2011.

[23] O. Banos, M. Damas, H. Pomares, A. Prieto, and I. Rojas, "Daily living activity recognition based on statistical feature quality group selection," Expert Syst. Appl., vol. 39, no. 9, pp. 8013–8021, Jul 2012.

[24] D. Rodríguez-Martín, C. Pérez-López, A. Samà, J. Cabestany, and A. Català, "A wearable inertial measurement unit for long-term monitoring in the dependency care area," Sensors, vol. 13, no. 10, pp. 14079–14104, Oct 2013.

[25] M. Ermes, J. Parkka, J. Mantyjarvi, and I. Korhonen, "Detection of daily activities and sports with wearable sensors in controlled and uncontrolled conditions," IEEE Trans. Inf. Technol. Biomed., vol. 12, no. 1, pp. 20–26, Jan 2008.

[26] J. Parkka, M. Ermes, P. Korpipaa, J. Mantyjarvi, J. Peltola, and I. Korhonen, "Activity classification using realistic data from wearable sensors," IEEE Trans. Inf. Technol. Biomed., vol. 10, no. 1, pp. 119–128, Jan 2006.

[27] A. Fleury, M. Vacher, and N. Noury, "SVM-based multimodal classification of activities of daily living in health smart homes: Sensors, algorithms, and first experimental results," IEEE Trans. Inf. Technol. Biomed., vol. 14, no. 2, pp. 274–283, Mar 2010.

[28] S. Patel, K. Lorincz, R. Hughes, N. Huggins, J. Growdon, D. Standaert, M. Akay, J. Dy, M. Welsh, and P. Bonato, "Monitoring motor fluctuations in patients with Parkinson's disease using wearable sensors," IEEE Trans. Inf. Technol. Biomed., vol. 13, no. 6, pp. 864–873, Nov 2009.

[29] H. Junker, O. Amft, P. Lukowicz, and G. Tröster, "Gesture spotting with body-worn inertial sensors to detect user activities," Pattern Recogn., vol. 41, no. 6, pp. 2010–2024, Jun 2008.

[30] A. Burns, B. Greene, M. McGrath, T. O'Shea, B. Kuris, S. Ayer, F. Stroiescu, and V. Cionca, "Shimmer™ – a wireless sensor platform for noninvasive biomedical research," IEEE Sensors J., vol. 10, no. 9, pp. 1527–1534, Sep 2010.

[31] Z.-Q. Zhang and J.-K. Wu, "A novel hierarchical information fusion method for three-dimensional upper limb motion estimation," IEEE Trans. Instrum. Meas., vol. 60, no. 11, pp. 3709–3719, Nov 2011.

[32] C. Kendell and E. D. Lemaire, "Effect of mobility devices on orientation sensors that contain magnetometers," J. Rehabil. Res. Dev., vol. 46, no. 7, pp. 957–962, 2009.