
COMPARISON OF HUMAN MACHINE INTERFACES TO CONTROL A ROBOTIZED WHEELCHAIR

Guilherme M. Pereira∗, Suzana V. Mota∗, Dandara T. G. de Andrade∗, Eric Rohmer∗

∗Av. Albert Einstein, 400, Cidade Universitária Zeferino Vaz, Distrito Barão Geraldo
School of Electrical and Computer Engineering, FEEC, UNICAMP

Campinas, São Paulo, Brazil

Emails: [email protected], [email protected],

[email protected], [email protected]

Abstract— Assistive robotics solutions help people to recover their lost mobility and autonomy in daily life. This work presents a comparison between two Human Machine Interfaces (HMIs), based on head postures and facial expressions, to control a robotized wheelchair. Comparing both strategies, JoyFace proved to be the safest and easiest to use; RealSense, on the other hand, demanded less physical effort and may be the appropriate solution for people who have suffered severe trauma, as many of them cannot even move their heads. Although both HMIs need improvements, these strategies are promising technologies for allowing people paralyzed from the neck down to control a robotized wheelchair.

Keywords— Computer Vision, Robotized Wheelchair, Assistive Technology, Human Machine Interface

Resumo— People with severe disabilities need alternative ways to control a wheelchair. This work presents a comparison of two systems based on facial expressions and movements: JoyFace and RealSense. JoyFace was evaluated as safe and easy to use, but it demands high physical effort and requires a feedback screen showing the user's face. In contrast, RealSense required less physical effort and can be used by people with severe movement limitations. Although the results indicate that both interfaces need improvements, both proved promising for controlling a robotized wheelchair and may offer autonomy and safety to the operator.

Palavras-chave— Computer Vision, Robotized Wheelchair, Assistive Technology, Human Computer Interface

1 Introduction

According to (World Health Organization, 2011), more than one billion people in the world have some form of disability, of whom about 200 million experience considerable functional difficulties. In Brazil, more than 45 million people have some disability, among whom 13 million suffer from severe motor disabilities (IBGE, 2010).

Many assistive robotics researchers are seeking solutions for these people to recover their lost mobility and autonomy in daily life. They mainly investigate the robotization of wheelchairs (Simpson, 2005; Cowan et al., 2012) and interaction with robots through various hands-free devices adapted to the disability, using eye/face tracking, voice/puff/sip activation and, especially, Brain-Computer Interfaces (BCI) (Escolano et al., 2012; Iturrate et al., 2009), or a combination of those.

Recent works propose different solutions for alternative wheelchair controls. In (Chauhan et al., 2016) the authors propose a new model of voice-controlled wheelchair; however, this kind of interface is subject to interference from environmental sounds. In (Huo and Ghovanloo, 2010) and (Kim et al., 2013) the authors implement and investigate the efficacy of a Tongue Drive System (TDS) for controlling a power wheelchair; the interfaces were tested with groups of volunteers with spinal cord injury (SCI).

In (Rohmer, Pinheiro, Raizer, Olivi and Cardozo, 2015) the authors proposed a control scheme for assistive robotic vehicles using small movements of the face or limbs, captured through electromyography (EMG), and signals generated by brain activity, captured through electroencephalography (EEG). Also, in (Rohmer, Pinheiro, Cardozo, Bellone and Reina, 2015) they present a navigation strategy in which an Inertial Measurement Unit (IMU) tracks the user's head posture to project a colored spot on the ground ahead with a pan-tilt mounted laser. The wheelchair is equipped with a low-cost depth camera that builds a traversability map to define whether the desired destination is reachable by the chair. The operator can validate the target via an EMG device attached to the face, and the system calculates the path to the pointed coordinate based on the traversability map.

In (Gautam et al., 2014) the authors developed an optical eye-tracking system to control a power wheelchair and perform manual navigation based on the user's gaze direction. In (Escobedo et al., 2013) the authors describe a method to perform semi-autonomous navigation of a wheelchair by estimating the user's intention with a face-pose recognition system (a Kinect camera interacting with the user).

Hence, this article presents a comparison between two Human Machine Interfaces (HMIs) to control a robotized wheelchair using head displacement and facial expressions. The early results led us to conclude that the system needs several modifications to be considered a safe and reliable solution for people paralyzed from the neck down.

2 System Overview

The system consists of a robotized wheelchair and a Human Machine Interface (HMI). Figure 1 illustrates the system overview and the communication protocols used to integrate the subsystems.

The volunteers were asked to test one HMI at a time, each capturing either head movements (JoyFace) or facial expressions (RealSense), with four possible commands to actuate the wheelchair. The interface application sends the captured commands via a UDP socket to a Matlab application, which filters the received messages and actuates the wheelchair using a Representational State Transfer (RESTful) architecture (Souza et al., 2013).

The Matlab application implements a fast prototype of the high-level navigation algorithm, which runs two threads: one listens to the messages received on the UDP socket, and the other filters the valid commands and actuates the chair accordingly. Table 1 shows how the four commands of JoyFace and RealSense are associated with the high-level commands of the wheelchair; a minimal sketch of this command loop is given after Table 1. Go front applies a fixed linear velocity of 100 mm/s; Turn left increments the rotational velocity (counterclockwise is positive) and Turn right decrements it, with its absolute value limited to 10 deg/s. Although the operator can use the head down and smile commands to stop the wheelchair, for safety the system is also equipped with emergency stop buttons that can be pressed by the operator, as well as front and back limit switches fixed on the chair (activated on contact) to avoid collisions. The next subsections describe the robotized wheelchair and the JoyFace and RealSense HMIs in more detail.

Table 1: JoyFace and RealSense commands associated with high-level commands to control the wheelchair.

Wheelchair movement   JoyFace      RealSense
Go front              head up      kiss
Turn right            head right   eyebrows up
Turn left             head left    mouth open
Stop                  head down    smile
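The command loop described above can be summarized in a short sketch. The paper's prototype was written in Matlab; the following Python code is only an illustrative re-implementation under stated assumptions: the UDP port, the plain-text message format, the 1 deg/s rotational increment, and the send_velocity() helper are all hypothetical.

```python
# Minimal sketch of the high-level command loop (illustrative, not the
# authors' Matlab code). UDP port, message format and send_velocity()
# are assumptions.
import socket
import threading
import queue

COMMANDS = {"head up": "go_front", "kiss": "go_front",
            "head right": "turn_right", "eyebrows up": "turn_right",
            "head left": "turn_left", "mouth open": "turn_left",
            "head down": "stop", "smile": "stop"}

msg_queue = queue.Queue()

def udp_listener(port=5005):
    """Thread 1: listen for HMI messages on a UDP socket."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", port))
    while True:
        data, _ = sock.recvfrom(1024)
        msg_queue.put(data.decode().strip().lower())

def control_loop(send_velocity):
    """Thread 2: filter valid commands and actuate the chair."""
    linear, angular = 0.0, 0.0                   # mm/s, deg/s
    while True:
        cmd = COMMANDS.get(msg_queue.get())
        if cmd == "go_front":
            linear, angular = 100.0, 0.0         # fixed 100 mm/s forward
        elif cmd == "turn_left":
            angular = min(angular + 1.0, 10.0)   # CCW positive, |w| <= 10 deg/s
        elif cmd == "turn_right":
            angular = max(angular - 1.0, -10.0)
        elif cmd == "stop":
            linear, angular = 0.0, 0.0
        else:
            continue                             # ignore invalid messages
        send_velocity(linear, angular)           # e.g. a REST call (Section 2.1)

if __name__ == "__main__":
    threading.Thread(target=udp_listener, daemon=True).start()
    control_loop(lambda v, w: print(f"v={v} mm/s, w={w} deg/s"))
```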

2.1 Robotized Wheelchair

The robotized wheelchair (Figure 1) used to compare the JoyFace and RealSense HMIs was born from the transformation of a conventional powered wheelchair, the Freedom SX.

Figure 1: System overview. The operator controls the wheelchair using one of the HMIs, JoyFace or RealSense. The commands are sent via UDP to a Matlab application, which filters the messages and actuates the wheelchair by sending HTTP requests following the REST paradigm. The diagram shows the computer (HMIs and Matlab application), the UDP and HTTP links, and the robotized wheelchair with its emergency push button, laser rangefinder, access point and embedded control.

Its architecture, models, control, and applications are documented in a Master's thesis (Junior, 2016). The author analyzed many commercial and academically developed wheelchairs and, based on that survey, proposed a robotic wheelchair architecture that can be controlled by a wide range of assistive interfaces.

An Arduino Mega 2560 connects the sensors and provides the embedded control to actuate the independent modules of the rear wheels, while the front caster wheels roll freely. This mobile robot has two emergency stop buttons (one close to each arm support), one encoder in each motor to measure wheel displacement, a laser range finder (LRF) to measure distances to obstacles (obstacle detection, mapping), limit switches to stop the wheelchair for safety in case of contact, infrared sensors pointed at the ground to detect abrupt irregularities, an Inertial Measurement Unit (IMU) to detect and correct motion, and other components.

A Raspberry Pi Model B+ implements the communication between high-level applications (such as the Matlab application in Figure 1) and the low-level layer responsible for control and sensing. The software embedded in this intermediate layer is a RESTful application that uses the HTTP protocol. In this way, any programming language that can handle HTTP requests can be used to communicate with the robotized wheelchair.
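Because the intermediate layer is RESTful over HTTP, a high-level client can be written in any language with an HTTP library. The sketch below shows how such a client might look in Python; the wheelchair's address, the /velocity and /stop resource paths, and the JSON payload fields are hypothetical, since the paper does not document the actual REST API.

```python
# Hypothetical REST client for the wheelchair's embedded RESTful layer.
# Endpoint paths and payload fields are assumptions for illustration only.
import requests

WHEELCHAIR_URL = "http://192.168.0.10:8080"   # hypothetical Raspberry Pi address

def send_velocity(linear_mm_s, angular_deg_s):
    """Send a velocity set-point to the chair via an HTTP request."""
    resp = requests.post(f"{WHEELCHAIR_URL}/velocity",
                         json={"linear": linear_mm_s, "angular": angular_deg_s},
                         timeout=1.0)
    resp.raise_for_status()

def emergency_stop():
    """Ask the low-level controller to stop immediately."""
    requests.post(f"{WHEELCHAIR_URL}/stop", timeout=1.0).raise_for_status()
```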

2.2 JoyFace

The JoyFace HMI considers the displacement of the user's face relative to a reference region. The face is detected with a regular webcam and its position is tracked; each position is associated with a movement command for the wheelchair.

JoyFace was implemented in Python and uses face detection based on the Viola-Jones classifiers incorporated into the OpenCV library (Viola and Jones, 2001). These classifiers use Haar cascade features applied to images in real time (Papageorgiou et al., 1998).

After the user's face is detected, the last 40 frames are observed, the average face position is computed, and a reference region is demarcated. This reference region remains static while JoyFace is in use and is displayed as a white rectangle.

The centroid of the face-detection square is computed in real time and highlighted with a green circle on the displayed image. In this way, the user can send commands by displacing the nose, which coincides with the computed centroid.

Figure 2 shows how the JoyFace HMI works. If the user positions the nose above the reference region, the wheelchair moves forward; if the nose moves to the right or to the left, the wheelchair turns to the corresponding side; and if the nose is placed below the reference region, the wheelchair stops.
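A minimal sketch of this behavior with OpenCV is shown below. It follows the description above (Viola-Jones Haar cascade, reference region averaged over 40 frames, nose/centroid displacement mapped to commands); the margins, camera index, and the send_command() helper are assumptions, not the authors' implementation.

```python
# Illustrative JoyFace-style loop: Haar-cascade face detection, a reference
# region built from 40 detections, and centroid displacement mapped to
# wheelchair commands. Margins and helpers are assumptions.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def classify(cx, cy, ref, margin=20):
    """Compare the face centroid with the reference region center (rx, ry)."""
    rx, ry = ref
    if cy < ry - margin:
        return "head up"       # forward
    if cy > ry + margin:
        return "head down"     # stop
    if cx > rx + margin:
        return "head right"    # turn right (assuming a mirrored image)
    if cx < rx - margin:
        return "head left"     # turn left
    return None                # inside the reference region: no command

def joyface_loop(send_command):
    cap = cv2.VideoCapture(0)
    history, reference = [], None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = face_cascade.detectMultiScale(gray, 1.3, 5)
        if len(faces) == 0:
            continue
        x, y, w, h = faces[0]
        cx, cy = x + w // 2, y + h // 2            # centroid of the detection square
        if reference is None:
            history.append((cx, cy))
            if len(history) == 40:                 # average over 40 frames
                reference = (sum(p[0] for p in history) // 40,
                             sum(p[1] for p in history) // 40)
            continue
        cmd = classify(cx, cy, reference)
        if cmd:
            send_command(cmd)                      # e.g. forward it over UDP
        cv2.circle(frame, (cx, cy), 5, (0, 255, 0), -1)   # green centroid feedback
        cv2.imshow("JoyFace", frame)
        if cv2.waitKey(1) & 0xFF == 27:            # Esc quits
            break
    cap.release()
    cv2.destroyAllWindows()
```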

Figure 2: JoyFace head-posture commands (head up, head right, head left, head down) used to control the robotized wheelchair.

2.3 RealSense

The RealSense HMI consists of an Intel RealSense F200 camera (Figure 3) and the face tracking application obtained from its Software Development Kit (SDK), available at (Intel RealSense SDK, n.d.). This Intel technology provides several algorithms for hand/finger tracking, facial analysis, speech recognition, augmented reality, background segmentation and others, which can be integrated into applications. Although the SDK was developed in C++, a wrapper layer exposes its interface in a variety of programming languages, e.g., C#, Processing, Java and Unity.

The face tracking algorithm used in our experiment was based on one of the C# samples of the SDK, modified to send the detected facial expression via a UDP socket. The application locates the face within a rectangle and then identifies the feature points (eyes, mouth, etc.), from which it computes scores for a few supported facial expressions, such as eye closed and eyebrow raised. Figure 4 shows screenshots of the application detecting the four facial expressions chosen to control the wheelchair: kiss, eyebrows up, mouth open, and smile.
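The decision logic built on top of these scores can be summarized by the sketch below. It does not use the Intel RealSense SDK itself (the authors' application is a modified C# SDK sample); it only illustrates, in Python, the general pattern of picking the highest-scoring supported expression above a fixed threshold and forwarding it over UDP. The score scale, threshold, port, and address are assumptions.

```python
# Illustrative expression-to-command sender (not Intel RealSense SDK code).
# Assumes some face-tracking front end provides expression scores in [0, 100].
import socket

UDP_ADDR = ("127.0.0.1", 5005)        # hypothetical address of the Matlab application
THRESHOLD = 60                        # hypothetical fixed score threshold
EXPRESSIONS = ("kiss", "eyebrows up", "mouth open", "smile")

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def send_expression(scores):
    """Pick the dominant supported expression and forward it via UDP."""
    name, value = max(((e, scores.get(e, 0)) for e in EXPRESSIONS),
                      key=lambda item: item[1])
    if value >= THRESHOLD:
        sock.sendto(name.encode(), UDP_ADDR)

# Example: a frame where "kiss" clearly dominates.
send_expression({"kiss": 85, "smile": 20, "mouth open": 10, "eyebrows up": 5})
```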

Figure 3: Intel RealSense F200 camera used to capture the facial expressions and control the wheelchair. The device comprises a color sensor with full 1080p RGB resolution, an IR sensor, and an IR laser projector; an Intel-developed depth camera captures the rays reflected from the IR laser projector (adapted from (Intel RealSense SDK, n.d.)).

Figure 4: Intel RealSense SDK sample used to capture the facial expressions (kiss, eyebrows up, mouth open, smile) and control the wheelchair.

3 Methods

In the laboratory, we tested the JoyFace and RealSense HMIs with ten healthy volunteers (nine men and one woman), who tried both interfaces to control the robotized wheelchair on the same day and navigated a predefined route with obstacles. The tests were conducted under ethics committee approval, process number 58592916.9.1001.5404, granted on September 9, 2016.


The volunteers were introduced to each approach, received usage instructions, and performed the navigation with one HMI at a time. The tests were performed with the robotized wheelchair (Freedom SX) presented in Subsection 2.1.

The subjects were asked to use the JoyFace and RealSense HMIs to control the wheelchair and navigate an indoor corridor with obstacles, shown in Figure 5. The scene was approximately 8.10 x 2.10 meters (length x width), with four wooden barriers positioned 1.80 meters apart. The experiments with each subject were divided into five steps:

• Training navigation with JoyFace/RealSense;

• Navigation to collect data;

• Training navigation with JoyFace/RealSense;

• Navigation to collect data;

• Answer questionnaire.

Figure 5: Corridor with obstacles (start point and end point marked).

The whole experiment took 30-40 minutes per volunteer, including the questionnaire, which comprised the following questions about their experience with JoyFace and RealSense:

• Which HMI demanded more mental effort?

• Which HMI demanded more physical effort?

• Which HMI offered more security?

• Which HMI was easier to use?

The questionnaire was inspired by the NASA Task Load Index (NASA-TLX) methodology (NASA-TLX, 2011) for evaluating the effectiveness and performance of the HMIs.

4 Results

Table 2 shows the subjects' impressions when comparing JoyFace and RealSense with respect to the four aspects of the questionnaire presented in Section 3. Most of them indicated JoyFace as the safest, the easiest to use, and the one requiring less mental demand. In contrast, the majority of the subjects answered that RealSense requires less physical demand.

Table 2: Comparison between JoyFace and RealSense HMIs (percentage of subjects choosing each HMI for each aspect).

                    JoyFace   RealSense
Mental demand         20%        80%
Physical demand       60%        40%
Security              60%        40%
Ease of use           70%        30%

The results in Table 3 compare JoyFace and RealSense, considering the lap time of each subject along the trajectory and how many times the wheelchair stopped during the navigation. Although JoyFace produced the best lap time for 6 of the 10 subjects, we cannot conclude that it is indeed the fastest HMI for wheelchair navigation. The time averages of both are quite similar, but the higher number of stops with RealSense indicates that the operators had problems during the tests. The wheelchair stopped mostly because RealSense failed several times to recognize facial expressions, so the subjects had to push the emergency button to avoid collisions. Other causes of the stops were panic or simply lack of training to control the chair. The subjects complained that RealSense confused the mouth open and smile commands. Furthermore, the eyebrows up expression failed more often for wearers of glasses, which is another weakness of the algorithm.

Table 3: Lap times (in minutes) and number of emergency stops the subjects had to make due to collisions, imminent collisions, or even panic during the navigation with the JoyFace and RealSense HMIs.

Subject   JoyFace   No. stops   RealSense   No. stops
1         01:37     0           03:25       3
2         03:17     1           03:06       0
3         01:44     0           06:07       2
4         04:17     4           03:58       4
5         01:34     0           02:35       1
6         02:07     0           05:10       0
7         04:56     0           02:50       0
8         02:03     0           04:21       2
9         03:42     0           02:47       0
10        01:59     0           03:30       3
Avg.      02:43     0.5         03:46       1.5

Although some of the volunteers took a long time to complete the trajectory and stopped several times, Figures 6 and 7 show the experiment with a well-trained operator who achieved good performance with both HMIs. The first figure illustrates the trajectory performed in 1 minute and 22 seconds using JoyFace, while the second corresponds to the same route completed in 1 minute and 36 seconds with RealSense. Despite all the problems and the improvements required for both navigation strategies (discussed in the next section), these two figures indicate that, in the future, these HMIs may become comfortable and safe solutions allowing people with severe disabilities to control a robotized wheelchair.

Figure 6: Trajectory performed by a trained subject using JoyFace (start point and trajectory plotted; X and Y axes in meters).

Figure 7: Trajectory performed by a trained subject using RealSense (start point and trajectory plotted; X and Y axes in meters).

5 Discussion and Conclusions

This work compares two possible solutions for people paralyzed from the neck down to control a robotized wheelchair. The first, called JoyFace, is an HMI developed by us that uses a regular notebook webcam and image processing to capture head movements and actuate the chair. The second HMI consists of an Intel RealSense camera and a face tracking algorithm that captures facial expressions to control the wheelchair.

Although most of the subjects reported that JoyFace demands more physical effort than RealSense, the first strategy proved to be safer, easier to use, and less mentally demanding, as its commands are more intuitive than the facial expressions. However, these advantages of JoyFace rely on the feedback screen showing the user's face as captured by the notebook webcam. During navigation, the operator tends to watch the screen to check whether the face landmark (centroid) is outside the reference square and therefore actuating the wheelchair. The user feels forced to look at the screen for most of the navigation, which is a limitation, as he or she cannot move the head freely without sending unwanted commands to the chair.

RealSense does not rely on a feedback screen, so the user can move the head freely without actuating the wheelchair. Although RealSense demands more mental effort, as the facial expressions require more training and memorization, this HMI may be more appropriate for people who have suffered severe trauma and have limitations on moving the head or even on making some facial expressions.

On the other hand, the RealSense SDK face tracking algorithm must be improved, as it does not recognize facial expressions correctly across different people. The classification rule that decides, for example, whether the operator's mouth is open or the operator is smiling may differ considerably between subjects, and relying on fixed thresholds to determine the facial expressions is not a good strategy.

Thus, as future work, RealSense must become adaptive to reduce failures in detecting and classifying facial expressions, providing a more reliable, comfortable, and safer HMI for the operator. Furthermore, both HMIs suffer from the so-called "Midas Touch" problem, in which non-intentional head movements or facial expressions result in unwanted commands to the wheelchair. This problem is common in the development of eye-gaze or facial-expression interfaces (Jacob, 1995) and can be overcome with an additional command to lock/unlock the actuation on the wheelchair. For RealSense we could use a long smile, and for JoyFace we could implement smile detection using the OpenCV library and apply the same strategy to enable/disable the system.
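As an illustration of the proposed lock/unlock mechanism for JoyFace, the sketch below toggles an "enabled" flag when a smile is held for a number of consecutive frames, using OpenCV's bundled smile cascade. The hold duration and detection parameters are assumptions, not values from the paper.

```python
# Illustrative lock/unlock toggle for JoyFace using OpenCV smile detection.
# A smile held for ~1 s (30 consecutive frames) toggles command actuation.
# Hold length and cascade parameters are assumptions.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
smile_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_smile.xml")

class SmileToggle:
    def __init__(self, hold_frames=30):
        self.hold_frames = hold_frames
        self.count = 0
        self.enabled = False          # start with actuation locked

    def update(self, gray_frame):
        """Return True while actuation is enabled; toggle on a long smile."""
        faces = face_cascade.detectMultiScale(gray_frame, 1.3, 5)
        smiling = False
        for (x, y, w, h) in faces[:1]:
            roi = gray_frame[y + h // 2:y + h, x:x + w]   # lower half of the face
            smiling = len(smile_cascade.detectMultiScale(roi, 1.7, 20)) > 0
        self.count = self.count + 1 if smiling else 0
        if self.count == self.hold_frames:                # long smile detected
            self.enabled = not self.enabled               # lock/unlock actuation
            self.count = 0
        return self.enabled
```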

Finally, after the mentioned modifications, we should repeat the tests and apply the NASA-TLX methodology (NASA-TLX, 2011) to evaluate the effectiveness and performance of the HMIs. Although NASA-TLX inspired our assessment of the interfaces with different subjects, we did not apply all of its steps, for simplicity; its full application may lead to more reliable results.


6 Acknowledgements

We thank the Brazilian agencies CAPES and FAPESP, in addition to the Brazilian Institute of Neuroscience and Neurotechnology (BRAINN) CEPID-FAPESP, for financial support.

References

Chauhan, R., Jain, Y., Agarwal, H. and Patil, A. (2016). Study of implementation of voice controlled wheelchair, Advanced Computing and Communication Systems (ICACCS), 2016 3rd International Conference on, Vol. 1, IEEE, pp. 1–4.

Cowan, R. E., Fregly, B. J., Boninger, M. L., Chan, L., Rodgers, M. M. and Reinkensmeyer, D. J. (2012). Recent trends in assistive technology for mobility, Journal of NeuroEngineering and Rehabilitation 9(1): 20.

Escobedo, A., Spalanzani, A. and Laugier, C. (2013). Multimodal control of a robotic wheelchair: Using contextual information for usability improvement, Intelligent Robots and Systems (IROS), 2013 IEEE/RSJ International Conference on, IEEE, pp. 4262–4267.

Escolano, C., Antelis, J. M. and Minguez, J. (2012). A telepresence mobile robot controlled with a noninvasive brain–computer interface, IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics) 42(3): 793–804.

Gautam, G., Sumanth, G., Karthikeyan, K., Sundar, S. and Venkataraman, D. (2014). Eye movement based electronic wheel chair for physically challenged persons, Int. J. Sci. Technol. Res. 3(2).

Huo, X. and Ghovanloo, M. (2010). Evaluation of a wireless wearable tongue–computer interface by individuals with high-level spinal cord injuries, Journal of Neural Engineering 7(2): 026008.

IBGE (2010). Cartilha do censo 2010: Pessoas com deficiência, Brasília: Secretaria de Direitos Humanos da Presidência da República (SDH)/Secretaria Nacional de Promoção dos Direitos da Pessoa com Deficiência (SNPD).

Intel RealSense SDK (n.d.). https://software.intel.com/en-us/intel-realsense-sdk. Accessed: 2017-04-18.

Iturrate, I., Antelis, J. M., Kübler, A. and Minguez, J. (2009). A noninvasive brain-actuated wheelchair based on a P300 neurophysiological protocol and automated navigation, IEEE Transactions on Robotics 25(3): 614–627.

Jacob, R. J. (1995). Eye tracking in advanced interface design, Virtual Environments and Advanced Interface Design, pp. 258–288.

Junior, A. (2016). Robotização de uma cadeira de rodas motorizada: arquitetura, modelos, controle e aplicações, Master's thesis, School of Electrical and Computer Engineering, FEEC, UNICAMP.

Kim, J., Park, H., Bruce, J., Sutton, E., Rowles, D., Pucci, D., Holbrook, J., Minocha, J., Nardone, B., West, D. et al. (2013). The tongue enables computer and wheelchair control for people with spinal cord injury, Science Translational Medicine 5(213): 213ra166.

NASA-TLX (2011). http://www.nasatlx.com/. Accessed: 2017-03-20.

Papageorgiou, C. P., Oren, M. and Poggio, T. (1998). A general framework for object detection, Computer Vision, 1998. Sixth International Conference on, IEEE, pp. 555–562.

Rohmer, E., Pinheiro, P., Cardozo, E., Bellone, M. and Reina, G. (2015). Laser based driving assistance for smart robotic wheelchairs, Emerging Technologies & Factory Automation (ETFA), 2015 IEEE 20th Conference on, IEEE, pp. 1–4.

Rohmer, E., Pinheiro, P., Raizer, K., Olivi, L. and Cardozo, E. (2015). A novel platform supporting multiple control strategies for assistive robots, Robot and Human Interactive Communication (RO-MAN), 2015 24th IEEE International Symposium on, IEEE, pp. 763–769.

Simpson, R. C. (2005). Smart wheelchairs: A literature review, Journal of Rehabilitation Research and Development 42(4): 423.

Souza, R., Pinho, F., Olivi, L. and Cardozo, E. (2013). A RESTful platform for networked robotics, Ubiquitous Robots and Ambient Intelligence (URAI), 2013 10th International Conference on, IEEE, pp. 423–428.

Viola, P. and Jones, M. (2001). Robust real-time object detection, International Journal of Computer Vision 4(34–47).

World Health Organization (2011). World report on disability, World Health Organization.
