
International Journal of Advanced Robotic Systems

Markerless Kinect-Based Hand Tracking for Robot Teleoperation

    Regular Paper

Guanglong Du, Ping Zhang*, Jianhua Mai and Zeling Li
Department of Computer Science, South China University of Technology, P.R. China
* Corresponding author E-mail: [email protected]

Received 9 Apr 2012; Accepted 23 May 2012
DOI: 10.5772/50093
© 2012 Du et al.; licensee InTech. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract This paper presents a real-time remote robot teleoperation method using markerless Kinect-based hand tracking. Using this tracking algorithm, the 3D positions of the index finger and thumb can be estimated by processing depth images from the Kinect. The hand pose is used as a model to specify the pose of a real-time remote robot's end-effector. This method provides a way to send a whole task to a remote robot, instead of sending the limited motion commands of gesture-based approaches, and it has been tested in pick-and-place tasks.

Keywords robot manipulator, markerless, Kinect

1. Introduction

If a task is too complex for an autonomous robot to complete, then human intelligence is required to make decisions and control the robot, especially in unstructured dynamic environments. Furthermore, when the robot is in a dangerous environment, robot teleoperation may be necessary. Some human-robot interfaces (Yussof et al. [1]; Mitsantisuk et al. [2]), such as joysticks, dials and robot replicas, have been commonly used, but these contacting mechanical devices require unnatural hand and arm motions to complete a teleoperation task.

Another, more natural way to communicate complex motions to a remote robot is to track the operator's hand-arm motion used to complete the required task, using contacting electromagnetic tracking sensors, inertial sensors and gloves instrumented with angle sensors (Hirche et al. [3]; Villaverde et al. [4]; Wang et al. [5]). However, these contacting devices may hinder natural human-limb motions.

Because vision-based techniques are non-contact and hinder hand-arm motion less, they have also been used. Vision-based methods usually use physical markers placed on the anatomical body part (Kofman et al. [6]; Lathuilière and Hervé [7]; Guanglong Du et al. [8]). There are many applications (Peer et al. [9]; Borghese and Rigiroli [10]; Kofman et al. [6]) using marker-based human motion tracking; however, because body markers may hinder the motion in highly dexterous tasks and may become occluded, marker-based tracking is not always practical. Thus, a markerless approach seems better for many applications.


Compared to image-based tracking which uses markers, markerless tracking is not only less invasive, but also eliminates the problems of marker occlusion and identification (Verma [11]). Thus, markerless tracking may be a better approach for remote robot teleoperation. However, existing markerless human-limb tracking techniques have so many limitations that they may be difficult to use in robot teleoperation applications. Many existing markerless tracking techniques capture images and compute the motion later as a post-process (Goncalves et al. [12]; Kakadiaris et al. [13]; Ueda et al. [14]; Rosales and Sclaroff [15]). For remote robot teleoperation, markerless tracking has to run simultaneously in real-time when controlling continuous robot motion. To allow the human operator to perform hand-arm motions for a task in a natural way without interruption, the position and orientation of the hand and arm should be provided immediately. Many techniques can only provide 2D image information of the human motion (Koara et al. [16]; MacCormick and Isard [17]), and these tracking methods cannot be extended to provide accurate 3D joint-position data. An end-effector of a remote robot would require the 3D position and orientation information of the operator's limb-joint centres with respect to a fixed reference system, and identifying human body parts in different orientations has always been a significant challenge (Kakadiaris et al. [13]; Goncalves et al. [12]; Triesch and Malsburg [18]).

For robot teleoperation, there is limited research on markerless human tracking. Most techniques have tried to use a human-robot interface based on hand-gesture recognition to control robot motion (Fong et al. [19]; Hu et al. [20]; Moy [21]). Coquin et al. and Ionescu et al. [22] developed markerless hand-gesture recognition methods which can be used for mobile robot control, where only a few different commands, such as go, stop, left and right, are enough. However, for object manipulation in 3D space, it is not possible to achieve natural control and flexible robot motion using gestures only. If a human operator wants to use gestures, he/she needs to think of those limited separate commands that the human-robot interface can understand, like move up, down, forward and so on. A better way of human-robot interaction would be to permit the operator to focus on the complex global task, as a human naturally does when grasping and manipulating objects in 3D space, instead of thinking about what type of hand motions are required. To achieve this goal, a method is needed that allows the operator to complete the task using natural hand-arm motions, providing the robot with information about the hand-arm motion in real-time, such as the hand and arm anatomical position and orientation (Kofman et al. [23]). However, to achieve initialization, the human operator must assume a simple posture with an unclothed arm in front of a dark background, with the hand placed higher than the shoulder. It is not possible to get a precise result with a complex background. In addition, the human operator would find it hard to work in cold weather as the arm is unclothed. It is also limited by lighting effects, i.e., it is difficult to use when it is too bright or too dark.

This paper presents a method of remote robot teleoperation using markerless Kinect-based 3D hand tracking of the human operator (Figure 1). Markerless Kinect-based hand tracking is used to acquire 3D anatomical position and orientation, and the data are then sent to the robot manipulator through a human-robot interface to enable the robot end-effector to copy the operator's hand motion in real-time. This natural way to communicate with the robot allows the operator to focus on the task, instead of thinking in terms of the limited separate commands that the human-robot interface can understand, as in gesture-based approaches. Using the non-invasive Kinect-based tracking avoids the problem that physical sensors, cables and other contacting interfaces may hinder natural motions, as well as the marker occlusion and identification problems of marker-based approaches.

Figure 1. Non-invasive robot teleoperation system based on the Kinect


2. Human hand tracking and positioning system

Human hand tracking and positioning is carried out by continuously processing RGB images and depth images of an operator who is performing the hand motion to complete a robot manipulation task. The RGB images and depth images are captured by the Kinect, which is fixed in front of the operator.

The Kinect has three autofocus cameras: two infrared cameras optimized for depth detection and one standard visual-spectrum camera used for visual recognition.

2.1 Kinect coordinate system

In Figure 2, an operator stands in front of the Kinect and controls a robot. We can define the Kinect coordinate system as shown in Figure 2: axis X is upturned, axis Y is rightward and axis Z is vertical. The Kinect can capture the depth of any object in its workspace. In Figure 2 we can see the index-finger tip (I), the thumb tip (T) and the part of the hand between the thumb and the index finger (B). The distances between the Kinect and I, B, T and U are all different: I and T are closest to the Kinect, and the upper arm U is furthest. The 3D position of B is used to control the position of the robot end-effector. The I, T and B of the operator are used to control the orientation of the robot end-effector.

Figure 2. Depth of objects. K: Kinect; I: index-finger tip; T: thumb tip; B: part of the hand between the thumb and the index finger; U: upper arm.

2.2 Image capture and segmentation of hand

In order to capture the hand motion used for controlling the robot manipulator, we need to separate the hand from the depth image. The arm is segmented from the body by thresholding the raw depth image.

Figure 3. Segmentation of hand and determination of thumb and index-finger tip positions (panels a-f)


A depth image D(i,j), shown in Figure 3a, records the depth of all the pixels of the RGB image shown in Figure 3b. Assume that the distance between the human operator and the Kinect is not more than T (m) and that there is no other object between the human operator and the Kinect. For all i and j in the depth image D(i,j), the body image Cb(i,j) is then extracted as:

$$C_b(i,j) = \{\, d(i,j) \mid d(i,j) \le T;\ d(i,j) \in D;\ i = 1,2,\dots,n;\ j = 1,2,\dots,m \,\} \tag{1}$$

where d(i,j) is the pixel of depth image D, n is the width of D and m is the height of D. When the human operator holds out the hand to control the robot manipulator, the arm is closer than the body, so we first compute the mean value M of the whole body, including the arm:

$$M = \frac{\sum_{i=1}^{n}\sum_{j=1}^{m} C_b(i,j)}{mn} \tag{2}$$

Then we can extract the arm region A(i,j) as follows:

$$A = \{\, d(i,j) \mid d(i,j) \in C_b \;\&\; d(i,j) < M \,\} \tag{3}$$

The arm region A is shown in Figure 3c.
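To make this concrete, here is a minimal C++ sketch of equations (1)-(3); the DepthFrame container, the metre-valued depths and the segmentArm name are our own illustrative assumptions, not part of the original system:

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Illustrative container for one Kinect depth frame (values in metres,
// 0 marking invalid pixels); the real sensor delivers raw disparity values.
struct DepthFrame {
    int width = 0, height = 0;
    std::vector<float> d;                       // row-major, d[j*width + i]
    float at(int i, int j) const { return d[j * width + i]; }
};

// Equations (1)-(3): threshold the body at T metres, compute the mean
// depth M of the body pixels, and keep the pixels closer than M as the
// arm region A(i,j); the returned mask holds 1 = arm, 0 = background.
std::vector<uint8_t> segmentArm(const DepthFrame& f, float T) {
    std::vector<uint8_t> body(f.d.size(), 0), arm(f.d.size(), 0);
    double sum = 0.0;
    std::size_t count = 0;

    // Equation (1): the body image C_b keeps pixels with 0 < d(i,j) <= T.
    for (std::size_t k = 0; k < f.d.size(); ++k)
        if (f.d[k] > 0.0f && f.d[k] <= T) {
            body[k] = 1;
            sum += f.d[k];
            ++count;
        }
    if (count == 0) return arm;                 // no operator in range

    // Equation (2): mean depth M over the whole body, arm included.
    const float M = static_cast<float>(sum / count);

    // Equation (3): the arm is the part of the body closer than M.
    for (std::size_t k = 0; k < f.d.size(); ++k)
        if (body[k] && f.d[k] < M) arm[k] = 1;
    return arm;
}
```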

2.3 Determination of thumb and index-finger tip positions

The positions of the thumb tip and the index-finger tip are determined from an image that contains the arm. The arm region A3d(x,y,z) can be reconstructed from A(i,j), as shown in Figure 3d. For all 2D points (i,j) in A(i,j), the 3D points can be calculated by:

$$A_{3d}(x,y,z) = [\,i,\ j,\ d(i,j)\,] \tag{4}$$

Then the 3D points A3d(x,y,z) are projected onto the YOZ plane, as shown in Figure 3e:

$$A_{YOZ}(y,z) = A_{3d}(0,y,z) = [\,j,\ d(i,j)\,] \tag{5}$$

Define the minimum-projection function f:

$$f(y) = \min_{z = 1,2,\dots,m} A_{YOZ}(y,z) \tag{6}$$

Determine the one maximum (at y = y1) and the two minima (at y = y2 and y = y3) of the minimum-projection function f. Then the 3D points of I, T and B can be reconstructed by:

$$I(x,y,z):\quad x = \frac{1}{n'}\sum_{x'=1}^{n'} A_{3d}\big(x',\,y_2,\,f(y_2)\big),\quad y = y_2,\quad z = f(y_2) \tag{7}$$

$$T(x,y,z):\quad x = \frac{1}{n'}\sum_{x'=1}^{n'} A_{3d}\big(x',\,y_3,\,f(y_3)\big),\quad y = y_3,\quad z = f(y_3) \tag{8}$$

$$B(x,y,z):\quad x = \frac{1}{n'}\sum_{x'=1}^{n'} A_{3d}\big(x',\,y_1,\,f(y_1)\big),\quad y = y_1,\quad z = f(y_1) \tag{9}$$
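As a rough illustration of equations (6)-(9), the sketch below computes the projection profile f and recovers one 3D landmark per extremum row. It reuses the DepthFrame type from the segmentation sketch above; locating the extremum rows y1, y2 and y3 is left to the caller, since the paper does not describe that search, and the Point3, minProjection and landmarkAtRow names are our own:

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <limits>
#include <vector>

// 3D landmark produced by equations (7)-(9).
struct Point3 { float x, y, z; };

// Equation (6): for each image row y, f(y) is the smallest depth of any
// arm pixel in that row, i.e., the profile of the arm seen from the side.
std::vector<float> minProjection(const DepthFrame& f,
                                 const std::vector<uint8_t>& arm) {
    std::vector<float> fy(f.height, std::numeric_limits<float>::infinity());
    for (int j = 0; j < f.height; ++j)
        for (int i = 0; i < f.width; ++i)
            if (arm[j * f.width + i])
                fy[j] = std::min(fy[j], f.at(i, j));
    return fy;
}

// Equations (7)-(9): on the extremum row y, average the x-coordinates of
// the arm pixels whose depth matches f(y), giving the landmark's centre.
Point3 landmarkAtRow(const DepthFrame& f, const std::vector<uint8_t>& arm,
                     const std::vector<float>& fy, int y) {
    double sumX = 0.0;
    int n = 0;
    for (int i = 0; i < f.width; ++i)
        if (arm[y * f.width + i] && std::fabs(f.at(i, y) - fy[y]) < 1e-3f) {
            sumX += i;
            ++n;
        }
    return { n > 0 ? static_cast<float>(sumX / n) : 0.0f,
             static_cast<float>(y), fy[y] };
}
```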

3. Position model

To avoid large-scale motion when the operator performs manipulation, we need to confine the working space of the operator to a relatively small space. However, the working space of the remote robot should not be limited. This means that a mapping from a relatively small space to an unconfined large space is necessary. With a direct mapping from a small space to a larger space, the mapping will lose some precision. To avoid this problem, we adopt a differential positioning method.

Similar to a mouse and keyboard, the position of the hand can be calculated by the incremental method. From Section 2, the 3D positions of B, T and I are calculated in the world coordinate system, as shown in Figure 3. The initial position and orientation of the robot end-effector at the starting point are also stored as the robot reference position and orientation, respectively. The position of the robot tool-control point on the end-effector is controlled by the position B of the human operator.

Define the 3D positions of I, T and B in the current frame as I(x,y,z), T(x,y,z) and B(x,y,z), respectively. Define the length of the line segment joining the index-finger tip (I) and the thumb tip (T) of the operator's hand as L (shown in Figure 5):

$$L = \lVert T(x,y,z) - I(x,y,z) \rVert \tag{10}$$


The 3D position of B in the last frame is B'(x,y,z). The end-effector reference position in the last frame is P'(x,y,z), and the new end-effector reference position P'' is updated as:

$$P'' = \begin{cases} P' + \lambda\,\big(B(x,y,z) - B'(x,y,z)\big), & L > u \\ P', & L \le u \end{cases} \tag{11}$$

where u is a threshold that determines whether the robot keeps moving or pauses, and λ is an adjustable scaling factor. When L = 0, the operator has stopped controlling the robot, as shown in Figure 4.

Figure 4. Hand pose

Figure 5. Positioning model

Because λ is an adjustable parameter, theoretically the space manipulated by the operator is an infinite space, and we can obtain coarse control and fine control by adjusting the value of λ.
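A minimal sketch of this incremental update follows, with the pinch threshold u and the gain λ of equation (11) as plain parameters; the PoseController wrapper and its interface are illustrative assumptions, not part of the paper:

```cpp
#include <array>

using Vec3 = std::array<float, 3>;

// Sketch of equation (11): the end-effector reference position is advanced
// by the scaled hand displacement only while the pinch length L exceeds the
// threshold u; a closed pinch (L <= u) pauses the robot.
class PoseController {
public:
    // lambda > 1 gives coarse control; lambda < 1 gives fine control.
    PoseController(const Vec3& p0, float lambda, float u)
        : P_(p0), lambda_(lambda), u_(u) {}

    Vec3 update(const Vec3& B, const Vec3& Bprev, float L) {
        if (L > u_)                              // operator is commanding motion
            for (int k = 0; k < 3; ++k)
                P_[k] += lambda_ * (B[k] - Bprev[k]);
        return P_;                               // otherwise hold the reference
    }

    void setGain(float lambda) { lambda_ = lambda; }  // switch coarse <-> fine

private:
    Vec3 P_;        // end-effector reference position P''
    float lambda_;  // adjustable scaling parameter of Section 3
    float u_;       // pause threshold u of equation (11)
};
```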

4. Orientation model

As described in Figure 6, the orientation of the end-effector is in accordance with the orientation formed by the thumb tip, the index-finger tip and the part of the hand between the thumb and the index finger.

Figure 6. Orientation model

The orientation of the end-effector is calculated using the 3D positions of I, T and B. In the mapping of the operator's hand to the robot tool coordinate system, the line from B to the midpoint M of the line segment joining the index-finger tip (I) and the thumb tip (T) of the operator's hand is mapped to the robot tool axis X (Figure 4), and the XY plane is defined by B, I and T.

This means that if we obtain the transformation matrix from the coordinate system of the console to the coordinate system of the operator's hand, we can obtain the transformation matrix from the base coordinate system to the end-effector. The details of the derivation of the orientation matrix are given below.

Assume that the origin of the operator's hand coordinate system is identical to that of the console coordinate system and that the transformation matrix is a 3×3 matrix M. Letting point A in the operator's hand coordinate system transfer to point A' in the console coordinate system, we have:

$$A' = MA \tag{12}$$

In hand tracking and positioning, the unit vectors [x1, x2, x3], [y1, y2, y3] and [z1, z2, z3] in directions X, Y and Z can be measured by the Kinect, yielding:

$$\begin{bmatrix} m_{11} & m_{12} & m_{13} \\ m_{21} & m_{22} & m_{23} \\ m_{31} & m_{32} & m_{33} \end{bmatrix} \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix} = \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} \tag{13}$$

$$\begin{bmatrix} m_{11} & m_{12} & m_{13} \\ m_{21} & m_{22} & m_{23} \\ m_{31} & m_{32} & m_{33} \end{bmatrix} \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix} = \begin{bmatrix} y_1 \\ y_2 \\ y_3 \end{bmatrix} \tag{14}$$

$$\begin{bmatrix} m_{11} & m_{12} & m_{13} \\ m_{21} & m_{22} & m_{23} \\ m_{31} & m_{32} & m_{33} \end{bmatrix} \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix} = \begin{bmatrix} z_1 \\ z_2 \\ z_3 \end{bmatrix} \tag{15}$$

From (13), (14) and (15) we get:

$$\begin{bmatrix} m_{11} & m_{12} & m_{13} \\ m_{21} & m_{22} & m_{23} \\ m_{31} & m_{32} & m_{33} \end{bmatrix} = \begin{bmatrix} x_1 & y_1 & z_1 \\ x_2 & y_2 & z_2 \\ x_3 & y_3 & z_3 \end{bmatrix} \tag{16}$$

As stated before, the transformation matrix from the console coordinate system to the operator's hand coordinate system is identical to the one from the base coordinate system to the end-effector coordinate system, and the translation relationship between the end-effector and the base coordinate system has already been obtained in the positioning model, so the transformation matrix of the orientation is:

$$M = \begin{bmatrix} x_1 & y_1 & z_1 & p_1 \\ x_2 & y_2 & z_2 & p_2 \\ x_3 & y_3 & z_3 & p_3 \\ 0 & 0 & 0 & 1 \end{bmatrix} \tag{17}$$

Notice that [p1, p2, p3] is the translation vector from the base coordinate system to the end-effector.
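As a concrete illustration of equations (16)-(17), the sketch below assembles the hand-frame axes from the tracked points I, T and B (axis X from B to the midpoint of I and T, XY plane through B, I and T, as specified above) together with the translation from the positioning model; the helper names (handFrame, sub, cross, normalize) are our own:

```cpp
#include <array>
#include <cmath>

using Vec3 = std::array<float, 3>;
using Mat4 = std::array<std::array<float, 4>, 4>;

static Vec3 sub(const Vec3& a, const Vec3& b) {
    return { a[0] - b[0], a[1] - b[1], a[2] - b[2] };
}
static Vec3 cross(const Vec3& a, const Vec3& b) {
    return { a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0] };
}
static Vec3 normalize(const Vec3& v) {
    const float n = std::sqrt(v[0]*v[0] + v[1]*v[1] + v[2]*v[2]);
    return { v[0] / n, v[1] / n, v[2] / n };
}

// Equations (16)-(17): the rotation columns are the hand-frame unit vectors
// and the fourth column is the translation [p1, p2, p3] from Section 3.
Mat4 handFrame(const Vec3& I, const Vec3& T, const Vec3& B, const Vec3& p) {
    const Vec3 mid = { (I[0] + T[0]) / 2, (I[1] + T[1]) / 2, (I[2] + T[2]) / 2 };
    const Vec3 ex = normalize(sub(mid, B));           // tool axis X: B -> midpoint of I,T
    const Vec3 ez = normalize(cross(ex, sub(T, I)));  // normal of the B,I,T plane
    const Vec3 ey = cross(ez, ex);                    // completes a right-handed frame
    Mat4 M{};
    for (int r = 0; r < 3; ++r) {
        M[r][0] = ex[r]; M[r][1] = ey[r]; M[r][2] = ez[r]; M[r][3] = p[r];
    }
    M[3] = { 0.f, 0.f, 0.f, 1.f };
    return M;
}
```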

5. Virtual robot manipulation system

We use a six degree-of-freedom industrial robot to perform this experiment, as shown in Figure 7. The task is to grab the target object, which is in the robot's working space, and then place the object at the destination.

There are two working modes for the robot. The first is to calculate the angle of every joint by inverse kinematics according to the position of the end-effector. After the joints execute all the requested angles, the end-effector of the virtual robot reaches the destination. This mode is suitable for situations where no obstacle occurs in the workspace of the virtual robot. The second mode is suitable for situations where an obstacle shows up in the virtual robot's working space. In this mode, the virtual robot has to move along a safe path, which ensures that the virtual robot will not collide with the obstacle.

In the D-H representation, Ai denotes the homogeneous coordinate transformation matrix from coordinate frame i−1 to frame i:

$$A_i = \begin{bmatrix} \cos\theta_i & -\sin\theta_i\cos\alpha_i & \sin\theta_i\sin\alpha_i & l_i\cos\theta_i \\ \sin\theta_i & \cos\theta_i\cos\alpha_i & -\cos\theta_i\sin\alpha_i & l_i\sin\theta_i \\ 0 & \sin\alpha_i & \cos\alpha_i & r_i \\ 0 & 0 & 0 & 1 \end{bmatrix} \tag{18}$$

For a robot with six joints, the homogeneous coordinate transformation matrix from the base coordinate system to the end-effector's coordinate system is defined as:

$$T_6 = A_1 A_2 \cdots A_6 = \begin{bmatrix} n_6^0 & s_6^0 & a_6^0 & p_6^0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \tag{19}$$

where n_6^0 is the normal vector of the end-effector, s_6^0 is the sliding vector, a_6^0 is the approach vector and p_6^0 is the position vector. Using (17) and (19), we have:

$$T_6 = M \tag{20}$$

From (20) we can obtain the angles of the six joints: θ = (θ1, θ2, ..., θ6).
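The forward-kinematics side of equations (18)-(19) can be sketched as follows; the DH struct and the chaining helpers are illustrative assumptions (the paper does not give the robot's D-H table), and the inverse solution of equation (20) is robot-specific and omitted:

```cpp
#include <array>
#include <cmath>

using Mat4 = std::array<std::array<float, 4>, 4>;

// One D-H row: joint angle theta, link twist alpha, link length l and
// link offset r, matching the symbols of equation (18).
struct DH { float theta, alpha, l, r; };

// Equation (18): homogeneous transform A_i from frame i-1 to frame i.
Mat4 dhTransform(const DH& p) {
    const float ct = std::cos(p.theta), st = std::sin(p.theta);
    const float ca = std::cos(p.alpha), sa = std::sin(p.alpha);
    Mat4 A{};
    A[0] = { ct, -st * ca,  st * sa, p.l * ct };
    A[1] = { st,  ct * ca, -ct * sa, p.l * st };
    A[2] = { 0.f,      sa,       ca,     p.r };
    A[3] = { 0.f,     0.f,      0.f,     1.f };
    return A;
}

Mat4 mul(const Mat4& a, const Mat4& b) {
    Mat4 c{};
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            for (int k = 0; k < 4; ++k)
                c[i][j] += a[i][k] * b[k][j];
    return c;
}

// Equation (19): chain the six joint transforms into T6 = A1 A2 ... A6.
// Matching T6 against the hand transform M of equation (20) is the
// robot-specific inverse-kinematics step and is not sketched here.
Mat4 forwardKinematics(const std::array<DH, 6>& joints) {
    Mat4 T = dhTransform(joints[0]);
    for (int i = 1; i < 6; ++i)
        T = mul(T, dhTransform(joints[i]));
    return T;
}
```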

Figure 7. Six-axis robot manipulator used at the remote robot site (panels a and b)

6. Experiments

We evaluated the algorithm on our robot platform. For the tests, we built an experimental teleoperation environment: a set of emulation environments for the technical robot and a set of virtual-reality systems based on video at the local site. The remote site is the real robot in the working environment. In this experiment, considering the real environment of teleoperation, we limited the bandwidth to 30 kB/s, and the delay time is approximately 3 seconds.

To evaluate the Kinect-based teleoperation algorithm described in this paper, we used C++ to develop a Kinect-based human-robot interface system (Figure 8), and this system is used for the teleoperation of a six-axis technical robot. The experimental system includes three modules (a sketch of the resulting control loop follows this list):

1) The human hand tracking and positioning system gets the hand images and then calculates the 3D positions of T (the thumb tip), I (the index-finger tip) and B (the part of the hand between the thumb and the index finger).

2) The virtual robot manipulation system drives the virtual robot based on the joint angles, which are calculated through inverse kinematics. If the commands are safe, they are transmitted to the remote site to control the real robot.

3) The remote site transmits the video to the local site, and the video fusion system displays the virtual environment and the real environment. The edges of the virtual robot then cover the video frame transmitted from the remote site.
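Assuming the earlier sketches supply the tracking, positioning and orientation pieces, the per-frame glue of these three modules might be organized as below; grabDepthFrame, extractLandmarks, solveIK, isSafe and sendToRemote are assumed stub names for the capture, kinematics and network components that the paper does not detail:

```cpp
#include <array>
#include <cmath>

// Hand points of Section 2: index-finger tip I, thumb tip T and hand part B.
struct Landmarks { Vec3 I, T, B; };

DepthFrame grabDepthFrame();                        // module 1: Kinect capture (stub)
Landmarks extractLandmarks(const DepthFrame& f);    // Sections 2.2-2.3 (stub)
std::array<float, 6> solveIK(const Mat4& goal);     // module 2: inverse kinematics (stub)
bool isSafe(const std::array<float, 6>& joints);    // module 2: obstacle check (stub)
void sendToRemote(const std::array<float, 6>& js);  // module 3: 30 kB/s link (stub)

float pinch(const Vec3& a, const Vec3& b) {         // pinch length L, equation (10)
    const float dx = a[0] - b[0], dy = a[1] - b[1], dz = a[2] - b[2];
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}

void teleoperationLoop(PoseController& ctrl) {
    Vec3 Bprev{};                                   // hand position B' of the last frame
    for (;;) {
        const DepthFrame frame = grabDepthFrame();  // module 1: capture and track
        const Landmarks h = extractLandmarks(frame);
        const float L = pinch(h.T, h.I);            // equation (10)
        const Vec3 P = ctrl.update(h.B, Bprev, L);  // position model, equation (11)
        const Mat4 M = handFrame(h.I, h.T, h.B, P); // orientation model, equation (17)
        const std::array<float, 6> joints = solveIK(M);
        if (isSafe(joints))                         // module 2: check on the virtual robot
            sendToRemote(joints);                   // module 3: then drive the real robot
        Bprev = h.B;
    }
}
```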

In the experiment, the operator placed his hand in the workspace to control the virtual robot. The orientation of the virtual robot's end-effector coincided with that of the human hand. The position of the virtual robot's end-effector was adjusted by moving the human hand through different faces of the direction space, as shown in Figure 3.

As shown in Figure 3, the way the operator controls the robot is natural and intuitive. Because an incremental method similar to keyboard control is used, the operator is not required to make large-scale movements to control the robot.

7. Result

After reconstructing and controlling the robot by inverse kinematics, the precision of manipulation decreases because of the transformation of the coordinate system and the solving of the equation set.

Figure 9 shows the position and orientation of the robot's end-effector and the operator's hand during the teleoperation experiments. The dashed line represents the end-effector's path. The solid line with green squares represents the path of the operator's hand. The virtual robot was manipulated to grab the ball placed on a square. The data generated by this experiment show that the position errors ranged from −13 to +13 mm and the orientation errors ranged from −2 to +2 degrees. Figure 9(c, d, e) shows the X, Y, Z displacements of the end-effector and hand, while their rotations are shown in Figure 9(f, g, h).

Figure 8. Non-invasive vision-based teleoperation system

8. Discussion

In the remote unstructured environment of robot teleoperation, we assume that all the remote robot-site components, including the robotic arm, the robot controller, cameras on end-effectors and some other cameras, can be installed on a mobile platform and enter those unstructured environments. The method shown here is proven on grabbing objects, picking up objects and positioning accurately during grabbing in the fine-adjustment control mode. One advantage of this system is that it includes the operator in the decision-control loop. It allows a robot to grab, move and place an object without any prior knowledge, such as the starting location and even the destination location. There are some similar tasks which require decision making when picking up objects and targets from multiple objects, like packing and cleaning objects which may contain dangerous items. It is expected that this system can be used to achieve more complex poses when the joints of the robot are limited. The hole task shows how to determine the position of an extruded body and a randomly placed target hole. Assembly and disassembly may include more demanding hole tasks. We may need an appropriate grab hook, a bigger hole and a groove unless this system includes force feedback.


Figure 9. Analysis of the experiment (panels a-h)

Compared with the automatic capture (Kofman et al. [6]), this algorithm uses manual positioning. Considering hand tremor, this algorithm includes coarse-adjustment and fine-adjustment functions. When guiding the robot, we can use the coarse adjustment to move the robot close to the target quickly. When grabbing the target, we can use the fine adjustment to position the robot accurately. That ensures the safety and efficiency of the teleoperation and solves the problem of inaccuracy caused by manual operation.

This paper contributes a guiding teleoperation system based on non-contact measurement. By using Kinect-based tracking, robot teleoperation allows the operator to control the robot in a more natural way.


Generally speaking, the operation task can be accomplished using the same hand motion that would naturally be used in the task and, what is more, this Kinect-based tracking is non-contact. Thus, compared with the commonly used contacting electromagnetic devices, sensor-based devices and data gloves, non-contact devices may cause less hindrance to natural human-limb motion. The method proposed here allows the operator to focus on the task instead of thinking about how to decompose the task into the simple commands that a voice-recognition teleoperation system can understand. This method is more natural and intuitive than the operation in Kofman et al. [23]. The system can be used immediately without any initialization, and this non-contacting control system can be used outdoors. Because this algorithm uses infrared distance measurement to get the arm information, it can ignore lighting effects and does not need to extract the 3D coordinates by accurate image processing. That allows the system to be used in more severe environments, for example when it is too bright or too dark. In addition, the algorithm of [23] needs a bare hand to recognize the colour of the skin; otherwise, it cannot extract the hand data. Compared with that algorithm, this algorithm does not require a bare hand, and the operator can wear gloves when using the system in a cold outdoor working environment. That enlarges the field of application of the system.

9. Conclusion

A method of human-robot interaction using markerless Kinect-based tracking of the human hand for robot-manipulator teleoperation has been presented. Via real-time tracking of the thumb tip, the index-finger tip and the part of the hand between the thumb and the index finger, the 3D position and orientation of the hand are computed accurately, and the robot manipulator can be controlled by hand to perform pick-and-place tasks. To complete more complex tasks, multiple Kinects will be used together in future work.

10. References

[1] Yussof H, Capi G, Nasu Y, Yamano M, Ohka M. A CORBA-Based Control Architecture for Real-Time Teleoperation Tasks in a Developmental Humanoid Robot. International Journal of Advanced Robotic Systems, 8(2):29-48, 2011.

[2] Mitsantisuk C, Katsura S, Ohishi K. Force Control of Human-Robot Interaction Using Twin Direct-Drive Motor System Based on Modal Space Design. IEEE Transactions on Industrial Electronics, 57(4):1338-1392, 2010.

[3] Hirche S, Buss M. Human-Oriented Control for Haptic Teleoperation. Proceedings of the IEEE, 100(3):623-647, 2012.

[4] Villaverde AF, Raimundez C, Barreiro A. Passive Internet-based Crane Teleoperation with Haptic Aids. International Journal of Control, Automation and Systems, 10(1):78-87, 2012.

[5] Wang Z, Giannopoulos E, Slater M, Peer A, Buss M. Handshake: Realistic Human-Robot Interaction in Haptic Enhanced Virtual Reality. Presence: Teleoperators and Virtual Environments, 20(4):371-392, 2011.

[6] Kofman J, Wu X, Luu T, Verma S. Teleoperation of a Robot Manipulator Using a Vision-Based Human-Robot Interface. IEEE Transactions on Industrial Electronics, 52(5):1206-1219, 2005.

[7] Lathuilière F, Hervé JY. Visual Hand Posture Tracking in a Gripper Guiding Application. Proceedings of the International Conference on Robotics and Automation (ICRA), 1688-1694, 2000.

[8] Du G, Zhang P, Yang L, Su Y. Robot Teleoperation Using a Vision-Based Manipulation Method. 2010 International Conference on Audio, Language and Image Processing (ICALIP), 945-949, 2010.

[9] Peer A, Pongrac H, Buss M. Influence of Varied Human Movement Control on Task Performance and Feeling of Telepresence. Presence: Teleoperators and Virtual Environments, 19(5):463-481, 2010.

[10] Borghese NA, Rigiroli P. Tracking Densely Moving Markers. IEEE First International Symposium on 3D Data Processing and Transmission, Padova, 682-685, 2002.

[11] Verma S. Vision-Based Markerless 3D Human-Arm Tracking. M.A.Sc. Thesis, Department of Mechanical Engineering, University of Ottawa, Ottawa, Canada, 2004.

[12] Goncalves L, DiBernardo E, Ursella E, Perona P. Monocular Tracking of the Human Arm in 3D. Proceedings of the IEEE International Conference on Computer Vision (ICCV '95), 764-770, 1995.

[13] Kakadiaris IA, Metaxas D, Bajcsy R. Active Part-Decomposition, Shape and Motion Estimation of Articulated Objects: A Physics-Based Approach. Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 980-984, 1994.

[14] Ueda E, Matsumoto Y, Imai M, Ogasawara T. Hand Pose Estimation for Vision-Based Human Interface. 10th IEEE International Workshop on Robot and Human Communication (ROMAN 2001), 473-478, 2001.

[15] Rosales R, Sclaroff S. Inferring Body Pose Without Tracking Body Parts. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2:721-727, 2000.

[16] Koara K, Nishikawa A, Miyazaki F. Contour-Based Hierarchical Part Decomposition Method for Human Body Motion Analysis from Video Sequence. In: Human Friendly Mechatronics, edited by E. Arai, T. Arai and M. Takano, Elsevier Science, 2001.

[17] MacCormick J, Isard M. Partitioned Sampling, Articulated Objects, and Interface-Quality Hand Tracking. Proceedings of the European Conference on Computer Vision, 2:3-19, 2000.

[18] Triesch J, von der Malsburg C. A System for Person-Independent Hand Posture Recognition Against Complex Backgrounds. IEEE Transactions on Pattern Analysis and Machine Intelligence, 23(12):1449-1453, 2001.

[19] Fong T, Conti F, Grange S, Baur C. Novel Interfaces for Remote Driving: Gesture, Haptic and PDA. SPIE Telemanipulator and Telepresence Technologies VII, 4195:300-311, 2000.

[20] Hu C, Meng MQH, Liu PX, Wang X. Visual Gesture Recognition for Human-Machine Interface of Robot Teleoperation. IEEE/RSJ International Conference on Intelligent Robots and Systems, USA, 1560-1565, 2003.

[21] Moy MC. Gesture-Based Interaction with a Pet Robot. Proceedings of the 6th National Conference on Artificial Intelligence and 11th Conference on Innovative Applications of Artificial Intelligence, USA, 628-633, 1999.

[22] Ionescu B, Coquin D, Lambert P, Buzuloiu V. Dynamic Hand Gesture Recognition Using the Skeleton of the Hand. EURASIP Journal on Applied Signal Processing, 13:2101-2109, 2005.

[23] Kofman J, Verma S, Wu X. Robot Manipulator Teleoperation by Markerless Vision-Based Hand-Arm Tracking. International Journal of Optomechatronics, 1:331-357, 2007.
