
    634 IEEE TRANSACTIONS ON ROBOTICS, VOL. 28, NO. 3, JUNE 2012

A Prediction and Motion-Planning Scheme for Visually Guided Robotic Capturing of Free-Floating Tumbling Objects With Uncertain Dynamics

Farhad Aghili, Senior Member, IEEE

Abstract—Visually guided robotic capturing of a moving object often requires long-term prediction of the object motion, not only for a smooth capture but also because visual feedback may not be continually available, e.g., due to vision obstruction by the robotic arm. This paper presents a combined prediction and motion-planning scheme for robotic capturing of a drifting and tumbling object with unknown dynamics using visual feedback. A Kalman filter estimates the states and a set of dynamics parameters of the object needed for long-term prediction of the motion from noisy measurements of a vision system. Subsequently, the estimated states, parameters, and predicted motion trajectories are used to plan the trajectory of the robot's end-effector to intercept a grapple fixture on the object with zero relative velocity (to avoid impact) in an optimal way. The optimal trajectory minimizes a cost function, which is a weighted linear sum of travel time, distance, cosine of a line-of-sight angle (object alignment for robotic grasping), and a penalty function acting as a constraint on acceleration magnitude. Experiments are presented to demonstrate the robot-motion planning scheme for autonomous grasping of a tumbling satellite. Two robotic manipulators are employed: one arm drifts and tumbles the mockup of a satellite, and the other arm, which is equipped with a robotic hand, tries to capture a grapple fixture on the satellite using the visual guidance system.

Index Terms—Active orbital debris removal, capture of a tumbling satellite, capturing free-floating objects, de-orbiting space debris, on-orbit servicing, optimal guidance and control, optimal robot trajectory, robot-motion planning, robot vision, space robotics.

NOMENCLATURE

A            Rotation matrix.
a            Maximum acceleration.
e            Feedback error.
H            Observation sensitivity matrix.
h            Observation vector.
I_c          Inertia tensor of the target.
I_xx, ..., I_zz  Principal inertia components of the target.
K            Kalman filter gain matrix.
k, k̄         Unit vectors expressed in {B} and {A}.

Manuscript received February 15, 2011; revised June 6, 2011 and September 2, 2011; accepted December 1, 2011. Date of publication January 5, 2012; date of current version June 1, 2012. This paper was recommended for publication by Associate Editor T. Murphey and Editor G. Oriolo upon evaluation of the reviewers' comments.

The author is with the Space Exploration of the Canadian Space Agency, Saint-Hubert, QC J3Y 8Y9, Canada (e-mail: [email protected]).

Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org.

Digital Object Identifier 10.1109/TRO.2011.2179581

P            Kalman filter covariance matrix.
p            Vector containing the inertia ratios.
Q            Covariance matrix of process noise.
q            Target quaternion: rotation of {B} w.r.t. {A}.
q_r          End-effector quaternion.
R            Covariance matrix of measurement noise.
r            Position of the target center of mass, expressed in {A}.
r_s          Grapple-fixture position expressed in {A}.
r_r          End-effector position expressed in {A}.
t_f          Terminal time.
t_o          Initial time.
Δt           Sampling time.
x_s          State vector of the target.
x            State vector.
u            Control input.
v            Pose-measurement noise.
z            Measured pose.
H            Hamiltonian.
λ            Costate.
η            Quaternion: rotation of {C} w.r.t. {B}.
ν            Linear velocity of the target center of mass.
μ            Quaternion: rotation of {C} w.r.t. {A}.
ω            Angular velocity of the target satellite.
ρ            Location of the grapple fixture relative to the target center of mass, expressed in {B}.
Φ            State transition matrix.
θ            Line-of-sight angle.
θ_th         Threshold.
ε, ε_f       Process noise.
1_n          n × n identity matrix.
[a×]         Matrix form of the cross-product.
⊗            Quaternion product.

    I. INTRODUCTION

Since the mid-1990s, the paradigm of on-orbit servicing using a space manipulator has attracted many researchers [1]-[14]. These works were motivated by several national and international missions not only for repairing, rescuing, and refueling failed satellites [15], [16] but also for the removal of defunct satellites or space debris [17], [18]. Orbital debris removal using a space manipulator is becoming of particular interest as space debris is on the rise and the population growth has reached an unstable point in some congested orbits due to the risk of collision between large debris objects [19].

1552-3098/$31.00 © 2012 IEEE

Several studies that were


conducted by national space agencies concluded that the space environment can be preserved only through continual removal of large orbital debris objects, such as dead satellites [20], [21]. All these robotic servicing missions require that a robotic arm capture the client satellite in a safe and secure manner given several operational and environmental constraints. Since defunct satellites usually do not have a functional attitude control system (ACS), it is common for them to tumble, because the angular momentum stored in the gyros and wheels of the satellite starts migrating to its body as soon as a failure occurs. Ground observations [17] also confirm that many space debris objects indeed tumble, which makes robotic capture a very challenging task. In fact, although several missions for free-flyer capture using a robotic arm on-orbit have been demonstrated so far [9], [15], [22]-[24], robotic capturing of a tumbling satellite has yet to be attempted. Robotic manipulators used in future on-orbit servicing missions will be operated from the ground or autonomously, depending on mission constraints, requirements, and the level of technology readiness. However, in the presence of command latency, even a trained operator cannot successfully capture a free-flyer unless the angular rate is extremely low. Therefore, increased autonomy of robotic systems for on-orbit servicing missions is one of the key technologies identified by many space agencies [25].

Fig. 1 illustrates a typical on-orbit satellite servicing operation where the servicer satellite carries a robotic arm (with a grappling device on it), which is driven by a vision system so as to capture a client satellite (target). The servicer satellite is under its own attitude controller, whereas the target is assumed to be a noncooperative satellite with uncertain dynamics. Clearly, only after the robot successfully captures and stabilizes the tumbling satellite can a repairing, rescuing, or de-orbiting operation be started. Therefore, a common task for such on-orbit servicing consists of two primitive robot operations: 1) a pregrasping phase¹ and 2) a postgrasping phase. In the pregrasping phase, the manipulator arm moves from its home position to intercept a grapple fixture on the target at a rendezvous point with zero relative velocity. In the postgrasping phase, the space manipulator has to bring the target satellite to rest in such a way that the interaction torque/force remains below a safe value [26]. This paper focuses on a robot guidance scheme for the pregrasping phase.

The capture will be without impact if the manipulator approaches the target in such a manner that, at the time of capture, the relative velocity between the end-effector and the target's grapple fixture is zero [27]. Otherwise, the detrimental effect of the impact on the free-floating space robot has to be taken into account [28]. In addition, the target orientation at the interception time has to be adequate so that the fixture becomes accessible for robotic grasping. Satisfying these grasping requirements is not possible without prediction of the target motion, so that the motion of the end-effector can be carefully planned and executed. Since the external force/moment acting upon a free-floating object in space is small, the object motion can be predicted provided that the relevant dynamics parameters and the initial states are accurately known. However, inertia

¹Also known as the approach phase.

    Fig. 1. Servicer satellite trying to capture a client satellite.

parameters of a spacecraft are usually uncertain, because there is no practical way to measure the remaining propellant fuel in zero gravity. Consequently, when the dynamics of the target spacecraft is not known beforehand, not only the linear and angular velocities of the target but also its inertia parameters must be accurately estimated in real time so that the servicer satellite can reliably predict the states of the target in the future.

Controlling a free-floating space robot such that the reaction of the manipulator motion on the manipulator base is nullified has been established in the literature [29]-[32]. Although zero-reaction maneuvers have proven particularly useful [32], reaction wheels are also used to stabilize the base attitude [33]. In fact, it was shown that nearly any control scheme that can be used for fixed-base manipulators can also be implemented on a free-floating space robot, provided that the platform attitude and position are estimated and then taken into account [27], [29], [31], [34]. Despite a variety of control strategies for retrieval and capture of a space object by a free-flying space manipulator [1], [2], [4], guidance of a robotic manipulator to rendezvous with and capture a tumbling free-floating object [7] in a safe and secure manner still remains a challenging task. The majority of the techniques developed for real-time motion planning and robotic interception of a moving target assume a point-mass target [12], [35]-[42]. Interception of a spinning or tumbling satellite is a more difficult task, however, due to the complex dynamics of rotating rigid bodies. For the rendezvous with a tumbling satellite in 2-D, a navigation technique was developed in [40]. Ma et al. [10] formulated and analyzed the rendezvous control problem of a servicing spacecraft for optimal approach to and alignment with a rotating rigid body with known dynamics. An iterative algorithm for a planar case was proposed to solve the optimal trajectory for minimum time/fuel consumption. Trajectory planning for robotic capturing of a tumbling satellite was presented in our earlier work [43], [44], without taking the requirements at the time of grasping into account. A similar motion-


planning scheme with the inclusion of joint and velocity limits was proposed in [45].

Vision-guided approaches for automated rendezvous and docking of nonspinning spacecraft have been proposed for various missions [46]-[50]. Wertz and Bell [46] give an overview of the hardware and software technologies (sensors and actuators) required for autonomous rendezvous and docking of two spacecraft starting at a remote distance. The terminal phase of the Demonstration of Autonomous Rendezvous Technology mission, which includes proximity maneuvers for rendezvous with a cooperative spacecraft under an advanced video guidance sensor, is described in [47]. The application of rendezvous and docking to space exploration missions, regardless of whether the rendezvous is fully automated or has a human in the loop, is discussed in [48]. The design and application of the rendezvous guidance and navigation flight software for the Orbital Express flight demonstration are examined in [49]. An adaptive control law for spacecraft rendezvous and docking under measurement uncertainty, such as aggregation of sensor calibration parameters, systematic bias, or stochastic disturbances, is proposed in [50]. Rouleau et al. [51] present a framework for robotic capture and servicing of satellites considering the shortcomings of the communication links, such as bandwidth limitations and communication dropouts. There are different vision systems that are capable of estimating the pose of moving objects. Among them, an active vision system, such as the laser camera system (LCS), is preferable because of its robustness in the face of the harsh lighting conditions of space [52]. As successfully verified during the STS-105 space mission, the 3-D imaging technology that is used in the LCS can indeed operate in the space environment. The use of laser range data has also been proposed for the motion estimation of free-floating space objects [53]-[55]. Lichter and Dubowsky [53] employed two separate Kalman filters (KFs) for the rotational and translational dynamics of a free-floating space object in order to reduce the noise of a range sensor. Since the principal inertias of the target are directly included in the state vector to be estimated by the KF, a normalization and reparameterization of the estimated inertia values have to be performed at every step of the KF cycle. Hillenbrand and Lampariello [54] developed a least-squares estimation method for motion estimation and model identification of a free-floating space object that relies on range-data measurements, which consist of displacements and approximations of the velocity and acceleration obtained from differentiation of the visual data. The unknowns in this identification procedure comprise the six parameters of the target inertia tensor.

In this paper, we employ optimal control and estimation theory [56] for planning the motion of a robotic manipulator to intercept a noncooperative target satellite with unknown dynamics. A schematic diagram of the guidance and control system, as considered herein, is shown in Fig. 2. In this prediction-planning-control approach, first, an extended Kalman filter (EKF) gives reliable estimates of the motion states and the dynamics parameters of the free-floating target from noisy pose measurements obtained from an LCS. This allows the motion of the target to be reliably predicted as soon as the filter converges. The convergence can be detected by monitoring

Fig. 2. Prediction-planning-control approach for robotic capturing of a free-floating object.

the filter covariance matrix. Next, the robot motion to intercept the object at a rendezvous point is optimally planned and executed. The robot motion is planned to minimize a cost function subject to the constraint that the robot's end-effector and the target's grapple fixture arrive at their rendezvous point simultaneously at the same velocity. A line-of-sight angle is defined in such a way that zero angle corresponds to the best possible target alignment for robotic grasping of the fixture. The cost function is a weighted linear sum of travel time, distance, cosine of the line-of-sight angle, and a penalty function acting as a constraint on the acceleration magnitude during the capturing maneuvers. The weights in the cost function can be specified for optimization of each objective. For example, a time-optimal trajectory, a distance-optimal trajectory, or a combination can be achieved by proper selection of the weights. The problem of finding the optimal rendezvous point is formulated as solving the system Hamiltonian function for the terminal time, which can be solved by using a numerical technique such as the Newton-Raphson (NR) method. Subsequently, the position and velocity of the target at the time of rendezvous can be computed from the dynamics model of the target object. Finally, the optimal trajectory generator is presented in a simple feedback form, where the acceleration at any given time is obtained by time-varying gains acting on the differences between the position/velocity of the end-effector at the current time and those at the time of rendezvous. All matrices that are required to design the EKF are derived in closed form, rendering the prediction and motion-planning scheme suitable for real-time implementation. Moreover, to get around the quandary of nonidentifiability of the components of the inertia tensor, new parameters pertaining to the inertia tensor are used in the state vector to be estimated by the EKF. Experiments are conducted to demonstrate the application of the proposed visual guidance method for robotic capturing of a free-floating object. A controlled robotic manipulator moves the mockup of a satellite as if it drifts and tumbles in zero gravity. Another robotic manipulator is used to intercept a grapple fixture installed on the mockup using the proposed visual guidance scheme.

    II. DYNAMICS MODEL

Fig. 3 illustrates a robot trying to capture a drifting and tumbling object. {A} is assumed to be the inertial frame, while coordinate


Fig. 3. Robotic capturing of a free-floating tumbling object.

frame {B} is attached to the body of the target. The origin of {B} is located at the target center of mass (CM), and the axes of {B} are oriented so as to be parallel to the principal axes of the target. Coordinate frame {C} is fixed to the target's grapple fixture, which is located at ρ from the origin of {B}; it is the pose of {C} that is measured by the vision system. We further assume that the target tumbles with angular velocity ω. Herein, both ρ and ω are expressed in the target coordinate frame {B}.

Let quaternion q represent the rotation of the target, i.e., the orientation of {B} with respect to {A}. Then, the relation between the time derivative of the quaternion and the angular velocity can be readily expressed by

q̇ = (1/2) Ω(ω) q,  where  Ω(ω) = [ −[ω×]  ω ;  −ωᵀ  0 ].    (1)

Coincidentally, the matrix function Ω(·) also appears in the quaternion product. Let us define the operator ⊗ acting on quaternion q as

q⊗ = q_o 1_4 + Ω(q_v)    (2)

where q_v is the vector part of the quaternion and q_o is the scalar part, such that q = [q_vᵀ q_o]ᵀ. Then, the product of two quaternions q_1 ⊗ q_2 corresponds to the rotation matrix A(q_2)A(q_1), where the rotation matrix A(q) is related to the corresponding quaternion by

A(q) = (2 q_o² − 1) 1_3 + 2 q_o [q_v×] + 2 q_v q_vᵀ.    (3)
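As a concrete check of the conventions in (1)-(3), the following sketch (NumPy, with hypothetical helper names) implements the Ω matrix, the ⊗ product, and A(q); the composition rule stated after (2), A(q_1 ⊗ q_2) = A(q_2)A(q_1), can then be verified numerically.

```python
import numpy as np

def skew(v):
    """[v x]: matrix form of the cross-product, so skew(v) @ x == np.cross(v, x)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def Omega(w):
    """4x4 matrix of (1), scalar-last quaternion convention q = [q_v, q_o]."""
    O = np.zeros((4, 4))
    O[:3, :3] = -skew(w)
    O[:3, 3] = w
    O[3, :3] = -w
    return O

def quat_mult(q1, q2):
    """Quaternion product q1 ⊗ q2 via the operator of (2)."""
    return (q1[3] * np.eye(4) + Omega(q1[:3])) @ q2

def rot(q):
    """Rotation matrix A(q) of (3)."""
    qv, qo = q[:3], q[3]
    return (2*qo**2 - 1)*np.eye(3) + 2*qo*skew(qv) + 2*np.outer(qv, qv)
```

With these definitions, rot(quat_mult(q1, q2)) agrees with rot(q2) @ rot(q1) for unit quaternions, which is the property the motion model relies on.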

Dynamics of the rotational motion of the object can be described by Euler's equations written in terms of the moments of inertia. We will need these equations not only for trajectory-planning purposes but also for estimation of the target states and parameters. However, it is important to note that it is not possible to identify the absolute values of the inertia parameters from observations of the torque-free tumbling motion of an object. For instance, one can observe that the motion trajectories of a rigid body with zero input torque remain unchanged when all inertia variables are scaled up or down by the same factor. To circumvent this nonidentifiability problem, we use the following nondimensional inertia parameters:

p_x = (I_yy − I_zz) / I_xx
p_y = (I_zz − I_xx) / I_yy
p_z = (I_xx − I_yy) / I_zz    (4)

where I_xx, I_yy, and I_zz denote the principal moments of inertia of the target, and I_c = diag(I_xx, I_yy, I_zz) is the corresponding inertia tensor. Notice that since the absolute inertia parameters satisfy the triangle inequalities

I_xx + I_yy > I_zz
I_yy + I_zz > I_xx
I_zz + I_xx > I_yy

the nondimensional inertia parameters can only take values greater than −1, i.e.,

p_x > −1,  p_y > −1,  p_z > −1.
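Definition (4) is easy to exercise numerically; the sketch below (NumPy, hypothetical function name) also illustrates the scale invariance that causes the nonidentifiability noted above.

```python
import numpy as np

def inertia_ratios(Ixx, Iyy, Izz):
    """Nondimensional inertia parameters p of (4)."""
    p = np.array([(Iyy - Izz) / Ixx,
                  (Izz - Ixx) / Iyy,
                  (Ixx - Iyy) / Izz])
    # The triangle inequalities on the principal moments imply p_i > -1.
    assert np.all(p > -1.0)
    return p
```

Scaling all three moments by a common factor leaves p unchanged, which is precisely why only the ratios, not the absolute inertias, are observable from torque-free motion.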

Euler's equations can be expressed in terms of the new parameters as

ω̇ = d(ω, p) + B(p) ε    (5)

where

d(ω, p) = [ p_x ω_y ω_z ;  p_y ω_x ω_z ;  p_z ω_x ω_y ],
B(p) = diag( b_x(p), b_y(p), b_z(p) )

and the entries of the aforementioned diagonal matrix can be computed from the inertia-ratio parameters p = [p_x p_y p_z]ᵀ as

b_x(p) = 1 + (1 + p_x)/(1 − p_y) + (1 − p_x)/(1 + p_z)
b_y(p) = 1 + (1 + p_y)/(1 − p_z) + (1 − p_y)/(1 + p_x)
b_z(p) = 1 + (1 + p_z)/(1 − p_x) + (1 − p_z)/(1 + p_y)    (6)

and

ε = τ_d / tr(I_c)    (7)

is defined as the acceleration disturbance, in which τ_d is the disturbance torque and tr(·) denotes the trace of a matrix (see the Appendix for detailed derivations). Note that τ_d represents the lumped effects of flexible appendages like solar panels, fuel sloshing, and the gravity gradient, which were not considered in our model. Nevertheless, in practice, it is usually sufficient to model them as process noise in the state-estimation process [57], [58]. It is worth noting that ε is independent of the choice of local frame because tr(I_c) is an invariant quantity. Therefore, the covariance of ε is equal to that of the torque disturbance scaled by the constant scalar [tr(I_c)]⁻¹. Here, the process noise ε is assumed to be white with covariance E[ε εᵀ] = σ_ε² 1_3.
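A quick numerical check of (5)-(7): for a disturbance torque τ_d, the angular acceleration produced by the ratio parameterization should agree with the standard Euler equations ω̇ = I_c⁻¹(τ_d − ω × I_c ω). The sketch below (NumPy, hypothetical names) performs that comparison.

```python
import numpy as np

def d_vec(w, p):
    """Gyroscopic term d(omega, p) of (5)."""
    return np.array([p[0]*w[1]*w[2], p[1]*w[0]*w[2], p[2]*w[0]*w[1]])

def B_mat(p):
    """Diagonal input matrix B(p) with entries (6)."""
    bx = 1 + (1 + p[0])/(1 - p[1]) + (1 - p[0])/(1 + p[2])
    by = 1 + (1 + p[1])/(1 - p[2]) + (1 - p[1])/(1 + p[0])
    bz = 1 + (1 + p[2])/(1 - p[0]) + (1 - p[2])/(1 + p[1])
    return np.diag([bx, by, bz])

def omega_dot(w, p, eps):
    """omega_dot = d(omega, p) + B(p) eps, with eps = tau_d / tr(I_c) from (7)."""
    return d_vec(w, p) + B_mat(p) @ eps
```

For any principal inertia tensor consistent with p, the two formulations coincide, since b_x = tr(I_c)/I_xx and likewise for the other axes.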


Denoting r as the location of the target CM expressed in the inertial frame {A} and ν = ṙ as the drift velocity, the translational motion of the target can be simply expressed by

ν̇ = ε_f    (8)

where ε_f is the disturbance force per unit mass, which is assumed to be white noise with covariance E[ε_f ε_fᵀ] = σ_f² 1_3. In addition, the values of the position and velocity of the grapple fixture can be obtained from

r_s = r + A(q) ρ    (9a)
ṙ_s = ν + A(q)(ω × ρ).    (9b)

In the rest of this paper, the stochastic model of the target embodied in (1), (5), and (8) will be used for optimal estimation of the motion states and parameters, whereas the deterministic part of the model, i.e., when ε = ε_f = 0, will be employed for optimal motion planning of the robotic manipulator.
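The fixture kinematics (9) can be sanity-checked against a finite-difference propagation of the quaternion kinematics (1); the sketch below (NumPy, hypothetical names) does so under the convention that A(q) maps body-frame {B} vectors into the inertial frame {A}.

```python
import numpy as np

def skew(v):
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def rot(q):
    """A(q) of (3), mapping {B} vectors into {A}; q is scalar-last."""
    qv, qo = q[:3], q[3]
    return (2*qo**2 - 1)*np.eye(3) + 2*qo*skew(qv) + 2*np.outer(qv, qv)

def Omega(w):
    O = np.zeros((4, 4))
    O[:3, :3] = -skew(w); O[:3, 3] = w; O[3, :3] = -w
    return O

def fixture_state(r, nu, q, w, rho):
    """Grapple-fixture position and velocity, (9a)-(9b); w and rho in {B}."""
    rs = r + rot(q) @ rho
    rs_dot = nu + rot(q) @ np.cross(w, rho)
    return rs, rs_dot
```

Propagating q by q̇ = (1/2) Ω(ω) q over a tiny step and differencing r_s numerically reproduces ṙ_s of (9b), confirming the sign and frame conventions.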

    III. OPTIMAL ROBOT GUIDANCE

    A. Optimal Rendezvous Point and Path Planning

This section presents an optimal approach to robot-motion planning for smooth grasping of a free-floating object. The robot-motion planner must determine when, where, and how to grasp the moving grapple fixture to satisfy some objectives. We assume that the dynamics parameters {p, ρ} of the target are known. This assumption will be relaxed in the next section. The motion planning of the robot is based on prediction of the motion of the target computed from the values of the states (and parameters) of the target estimated at the initial time of the capturing interval. This predicting-planning scheme relieves the need for continuous visual-feedback information during the capturing operation. This is important for successful robotic grasping because the vision system will most likely be obstructed by the robotic arm at some point during its approach to the target. We also assume that the chaser satellite is placed in a suitable position relative to the client satellite so that robotic capturing within the workspace envelope becomes feasible. Moreover, although the solution of the optimal path is given in the task space, one may use a singularity-robust scheme for the inverse kinematic solution of nonredundant manipulators [59], [60], or a singularity-management and self-collision avoidance scheme for kinematic control of redundant manipulators [61]-[65]. Although space manipulators mounted on a free-floating base provide further redundancy, they also introduce dynamic singularities [66]. Several algorithms have been proposed to solve the inverse kinematics of redundant free-floating robotic systems [67]-[69] by means of the so-called generalized Jacobian. Notice that the generalized Jacobian, owing to its dynamic nature, captures dynamic singularities.

As schematically illustrated in Fig. 3, the end-effector position is represented by r_r. The end-effector and the grasping point are expected to arrive at a rendezvous point simultaneously with the same velocity in order to avoid impact at the time of grasping. Therefore, the main objective is to bring the end-effector from its initial position r_r(t_o) at initial time t = t_o to a grasping location along the trajectory of the grapple fixture on the target satellite, while satisfying the acceleration-magnitude constraint ‖r̈_r(t)‖ ≤ a for t ∈ [t_o, t_f] and the terminal conditions

r_r(t_f) = r_s(t_f)  and  ṙ_r(t_f) = ṙ_s(t_f).    (10)

Note that the initial time t_o is assumed to be fixed and t_f variable. The aforementioned terminal constraints for a successful grasping are necessary but not sufficient, because the grapple fixture should also be accessible for the robotic interception. In order to gauge the accessibility of the grapple fixture by the robot, let us define the line-of-sight angle θ that is made by the opposite direction of vector r_r (at the time of interception) and the unit vector k attached to the grapple fixture (see Fig. 3). It is assumed that vector k is expressed in the target frame {C} in such a way that θ = 0 corresponds to the most suitable object alignment for robotic grasping, and conversely, the grasping is not possible when θ = π. Therefore, it makes sense to maximize the cosine of the line-of-sight angle at the time of interception.

Now, assume that the optimal trajectory is generated by the following second-order system:

r̈_r = u    (11)

which can be thought of as the linearized dynamics of a rigid manipulator. Now, denoting the state vector of the entire system as

x = [qᵀ ωᵀ rᵀ r_rᵀ ṙ_rᵀ]ᵀ

we get the following autonomous (time-invariant) system:

ẋ = f(x, u) = [ (1/2) Ω(ω) q ;  d(ω) ;  ν ;  ṙ_r ;  u ]    (12)

because the drift velocity ν is constant.

In the following analysis, an optimal solution for the input u of system (12) is sought to minimize the following cost function:

J = φ[x(t_f)] + ∫ from t_o to t_f of (1 + c(u)) dτ    (13)

in which

c(u) = σ(‖u‖² − a²)  if ‖u‖ ≥ a,  and  c(u) = 0  otherwise    (14)
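A minimal sketch of the penalty term (14) (hypothetical names; sigma and a are the penalty weight and the acceleration bound):

```python
import numpy as np

def penalty(u, a, sigma):
    """Soft acceleration constraint c(u) of (14)."""
    n2 = float(np.dot(u, u))
    return sigma * (n2 - a**2) if n2 >= a**2 else 0.0
```

c(u) is zero inside the admissible ball ‖u‖ ≤ a and grows quadratically outside it, so a large sigma approximates a hard constraint on the acceleration magnitude.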

is a penalty function added to the cost function so that it acts as a soft constraint [56] on the acceleration magnitude, subject to the following terminal state constraint:

ψ[x(t_f)] = 0    (15)

where

ψ(x) = [ r_s(x_s) − r_r ;  ṙ_s(x_s) − ṙ_r ]    (16)

and vector x_s = [qᵀ ωᵀ rᵀ]ᵀ contains the time-varying states of the target. A soft constraint with a very large penalty becomes equivalent to a hard constraint [56]. This means that the scalar σ > 0 in penalty function (14) should be selected to be


sufficiently large. If the nonintegral part of the cost function (13) is set to zero, then the time of travel t_f − t_o is minimized, i.e., a time-optimal trajectory results. However, the integral cost function is augmented by the following term:

φ(x) = w_1 ‖r_r‖ − w_2 cos θ    (17)

where w_1 > 0 and w_2 > 0 are scalar weights. Clearly, the incorporation of φ in the cost function tends to minimize the distance ‖r_r(t_f)‖ and maximize the value of the cosine of the line-of-sight angle θ(t_f). Denoting k̄(q) = A(q)k as the unit vector expressed in the inertial frame, one can readily express the cosine as a function of the states, i.e.,

cos θ = − (r_rᵀ k̄(q)) / ‖r_r‖.
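Under the reading above, with θ measured between −r_r and the inertial-frame fixture vector k̄(q) = A(q)k, a sketch (NumPy, hypothetical names):

```python
import numpy as np

def skew(v):
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def rot(q):
    qv, qo = q[:3], q[3]
    return (2*qo**2 - 1)*np.eye(3) + 2*qo*skew(qv) + 2*np.outer(qv, qv)

def cos_los(r_r, q, k_body):
    """cos(theta): cosine of the angle between -r_r and k_bar(q) = A(q) k."""
    k_bar = rot(q) @ k_body
    return -float(np.dot(r_r, k_bar)) / np.linalg.norm(r_r)
```

The value is +1 when the fixture axis points straight back at the end-effector (best alignment) and −1 when it points away (grasping impossible), matching the θ = 0 and θ = π cases in the text.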

In summary, cost function (13) is a weighted linear sum of the various objectives. Therefore, one can specify the scalar weights for each objective to be optimized. For instance, w_1 ≫ w_2, 1 corresponds to a minimum-distance path, w_1, w_2 ≪ 1 corresponds to a time-optimal path, and w_2 ≫ w_1, 1 corresponds to a minimum line-of-sight angle at the time of rendezvous.

The Hamiltonian for the system (12) and (13) can be written as

H = 1 + c(u) + λᵀ f(x, u).    (18)

The Hessian of the Hamiltonian

∂²H/∂u² = 2σ 1_3  if ‖u‖ ≥ a,  and  0  otherwise    (19)

is positive definite only if the input satisfies ‖u‖ ≥ a, thereby satisfying the sufficient condition for local minimality. According to optimal control theory [70], the costate λ and the optimal input u* must satisfy the following partial derivatives:

λ̇ = −∂H/∂x,   ∂H/∂u = 0.    (20)

Splitting the costate vector as λ = [λ_sᵀ λ_rᵀ λ_uᵀ]ᵀ, where λ_s ∈ R⁹ and λ_r, λ_u ∈ R³, and applying (20) to our Hamiltonian (18) yields the following identities:

λ̇_r = 0    (21a)
λ̇_u = −λ_r    (21b)
λ_u = −2σu.    (21c)

One can readily infer from (21a)-(21c) that the expression of the acceleration trajectory is

u = α(t − t_o) + β    (22)

where the constant vectors α and β are defined such that

λ_r = 2σα
λ_u = −2σα(t − t_o) − 2σβ.    (23)

Subsequently, one can obtain the position and velocity trajectories via integration, i.e.,

ṙ_r(t) − ṙ_r(t_o) = (1/2) α (t − t_o)² + β (t − t_o)

r_r(t) − r_r(t_o) = (1/6) α (t − t_o)³ + (1/2) β (t − t_o)² + ṙ_r(t_o)(t − t_o).    (24)

The constant polynomial coefficients α and β are solved by imposing the initial and terminal conditions (16):

α = 6/(t_f − t_o)² [ṙ_s(t_f) + ṙ_r(t_o)] − 12/(t_f − t_o)³ [r_s(t_f) − r_r(t_o)]

β = −2/(t_f − t_o) [ṙ_s(t_f) + 2 ṙ_r(t_o)] + 6/(t_f − t_o)² [r_s(t_f) − r_r(t_o)].    (25)
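The coefficients (25) can be verified by substituting them back into (24); below is a sketch (NumPy, hypothetical names, with t_o = 0 and travel time T = t_f − t_o).

```python
import numpy as np

def traj_coeffs(r0, v0, rf, vf, T):
    """alpha, beta of (25); (r0, v0) at t_o, (rf, vf) the fixture state at t_f."""
    alpha = 6.0/T**2 * (vf + v0) - 12.0/T**3 * (rf - r0)
    beta = -2.0/T * (vf + 2.0*v0) + 6.0/T**2 * (rf - r0)
    return alpha, beta

def traj(t, r0, v0, alpha, beta):
    """Position and velocity along the optimal path, from (24) with t_o = 0."""
    v = v0 + 0.5*alpha*t**2 + beta*t
    r = r0 + alpha*t**3/6.0 + 0.5*beta*t**2 + v0*t
    return r, v
```

At t = T the returned position and velocity match the predicted fixture state, i.e., the terminal constraint (10) is met exactly, and the commanded acceleration is u(t) = alpha*t + beta as in (22).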

If the terminal time t_f is given, then r_s(t_f) and ṙ_s(t_f) can be computed from (9). However, the optimal terminal time t_f remains to be found. The optimal Hamiltonian H* = H(x*, u*, λ*) that is calculated at the optimal point u* and x* must satisfy H* = 0, i.e.,

H(t_f) = 1 − a²σ − σ‖u(t_f)‖² + 2σαᵀ ṙ_s(t_f) + λ_sᵀ ẋ_s = 0.    (26)

This gives the extra equation required to determine the optimal terminal time. We will show in the following analysis that λ_sᵀ ẋ_s can be expressed as a function of t_f. The final values of the costates can be obtained from the end-point constraint equation referred to as the transversality condition:

λ(t_f) = (∂ψ/∂x)ᵀ ξ + ∂φ/∂x, evaluated at x(t_f)    (27)

where vector ξ = [ξ_1ᵀ ξ_2ᵀ]ᵀ, with ξ_1, ξ_2 ∈ R³, is the Lagrange multiplier owing to the constraint (16). Using (16) in (27) yields

λ(t_f) = [ (∂r_s/∂x_s)ᵀ ξ_1 + (∂ṙ_s/∂x_s)ᵀ ξ_2 + ∂φ/∂x_s ;  −ξ_1 + ∂φ/∂r_r ;  −ξ_2 ], evaluated at x(t_f).    (28)

Now, equating the right-hand side (RHS) of (23) and (28) results in a set of three equations with three unknowns ξ_1, ξ_2, and λ_sᵀ ẋ_s. Solving this set of equations gives the following useful identity:

λ_sᵀ ẋ_s = ξ_1ᵀ ṙ_s + ξ_2ᵀ r̈_s + (∂φ/∂x_s)ᵀ ẋ_s
        = (∂φ/∂r_r − 2σα)ᵀ ṙ_s + 2σ[α(t_f − t_o) + β]ᵀ r̈_s + (∂φ/∂q)ᵀ q̇    (29)

in which we use the identity (∂φ/∂x_s)ᵀ ẋ_s = (∂φ/∂q)ᵀ q̇ (note that function φ does not depend on ω and r).

Finally, substituting (29) into (26) and using (24), we arrive at

H(t_f) = 1 − a²σ − σ‖β‖² + 2σβᵀ r̈_s + (1/2)(∂φ/∂q)ᵀ Ω(ω) q + (∂φ/∂r_r)ᵀ ṙ_r(t_o)
+ [ (∂φ/∂r_r)ᵀ β − 2σαᵀβ + 2σαᵀ r̈_s ] (t_f − t_o)
+ (1/2) [ (∂φ/∂r_r)ᵀ α − 2σ‖α‖² ] (t_f − t_o)² = 0    (30)
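The root t_f of (30) can be located with a standard NR iteration, as mentioned in the Introduction. The sketch below (hypothetical names) uses a numerically approximated derivative and is demonstrated on a stand-in scalar function rather than the full expression of (30).

```python
def newton_raphson(f, t0, tol=1e-12, max_iter=100):
    """Find t with f(t) = 0, e.g., the optimal terminal time from H(t_f) = 0."""
    t = t0
    for _ in range(max_iter):
        h = 1e-6 * max(1.0, abs(t))
        df = (f(t + h) - f(t - h)) / (2.0 * h)   # central-difference derivative
        step = f(t) / df
        t = t - step
        if abs(step) < tol:
            break
    return t
```

In practice, the initial guess t0 would come from a coarse scan of H(t_f) over the prediction horizon to ensure the iteration converges to the intended root.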


is a nonlinear function of quaternions q and η. To linearize the observation vector, one needs to derive the sensitivity of the nonlinear observation vector with respect to small variations δq_v and δη_v, where ‖δq_v‖ ≪ 1 and ‖δη_v‖ ≪ 1. To this end, consider the quaternion variations

δq = q ⊗ q̄⁻¹    (41a)
δη = η ⊗ η̄⁻¹    (41b)

as the deviations of the corresponding quaternions from their nominal values q̄ and η̄. Here, the inverse of a quaternion is defined such that q ⊗ q⁻¹ = [0 0 0 1]ᵀ, where [0 0 0 1]ᵀ is the identity quaternion. It can be readily seen from (3) that, for a small rotation, where ‖δq_v‖ ≪ 1 and δq_o ≈ 1, the first-order approximation of the rotation matrix can be written as

A(δq) = 1_3 + 2[δq_v×] + H.O.T.

where H.O.T. stands for second- and higher-order terms. Hence, the first row of the RHS of (40) can be approximated to first order as

r_s = r + A(q̄)(1_3 + 2[δq_v×]) ρ + H.O.T.    (42)

Now, we need to linearize the orientation component of the observation vector. Substituting q and η from (41) into (38) yields

μ = δη ⊗ η̄ ⊗ δq ⊗ q̄    (43)

where

μ̄ = η̄ ⊗ q̄.    (44)

Using the definition of the quaternion product (2) and neglecting the second- and higher-order terms, one can rewrite (43) in the following form:

μ ≈ μ̄ + Λ(q̄, η̄) δv

where μ̄ = η̄ ⊗ q̄, and

Λ = [ q̄_o η̄_o 1_3 − q̄_o [η̄_v×] + η̄_o [q̄_v×] − [q̄_v×][η̄_v×] − q̄_v η̄_vᵀ
      −(q̄_v × η̄_v)ᵀ − η̄_o q̄_vᵀ − q̄_o η̄_vᵀ ].

    Moreover, from (44), we get

    v = q v + v v q v

    q v + v (45)and therefore

    = + (q , )[ q v + v ] + H.O.T. (46)

Finally, from the first-order approximations of the observation vector in (42) and (46), one can infer the observation sensitivity matrix as

    H = ∂h(x)/∂x |_{x = x̄}
      = [ −2A(q̄)[ρ̄ ×]   0₃     1₃     0₃ₓ₆   A(q̄)   0₃       ]
        [ Ψ(q̄, μ̄)       0₄ₓ₃   0₄ₓ₃   0₄ₓ₆   0₄ₓ₃   Ψ(q̄, μ̄) ].    (47)
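The position block of (47) can be sanity-checked with a finite-difference Jacobian. The sketch below is an assumption-laden illustration: it uses a scalar-last Hamilton product, applies the error quaternion on the right of q̄, and borrows the experiment's fixture offset ρ = [−0.15, 0, 0] m from Section V; the sign of the block depends on these conventions:

```python
import numpy as np

def skew(v):
    """Cross-product matrix [v x]."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def rot(q):
    """Rotation matrix of a scalar-last unit quaternion q = [qv, qo]."""
    qv, qo = q[:3], q[3]
    return (qo**2 - qv @ qv) * np.eye(3) + 2.0 * np.outer(qv, qv) + 2.0 * qo * skew(qv)

def qmul(a, b):
    """Hamilton product a (x) b, scalar-last convention."""
    av, ao = a[:3], a[3]
    bv, bo = b[:3], b[3]
    return np.append(ao * bv + bo * av + np.cross(av, bv), ao * bo - av @ bv)

rng = np.random.default_rng(0)
qbar = rng.normal(size=4)
qbar /= np.linalg.norm(qbar)
rho = np.array([-0.15, 0.0, 0.0])   # fixture offset (experiment value)
eps = 1e-6

# finite-difference Jacobian of r_s = r + A(q)rho w.r.t. the error vector dq_v,
# with the perturbation applied as q = qbar (x) dq
J = np.zeros((3, 3))
for i in range(3):
    dq = np.zeros(4)
    dq[i], dq[3] = eps, np.sqrt(1.0 - eps**2)
    J[:, i] = (rot(qmul(qbar, dq)) - rot(qbar)) @ rho / eps

J_analytic = -2.0 * rot(qbar) @ skew(rho)   # candidate position block of H
print(np.max(np.abs(J - J_analytic)))
```

Under these conventions the two Jacobians agree to first order in the perturbation size.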

    B. Extended Kalman Filter Design

Assume that the dynamics parameters, i.e., θ = [p^T ρ^T μ_v^T]^T, remain constant, and therefore

    θ̇ = 0₉ₓ₁.    (48)

Now, in view of (1), (5), and (8) together with (48), the process dynamics can be described in the compact form

    ẋ = f_s(x) + G(x)ε    (49)

where

    G(x) = [ 0₃     0₃   ]
           [ B(p)   0₃   ]
           [ 0₃     0₃   ]
           [ 0₃     1₃   ]
           [ 0₉ₓ₃   0₉ₓ₃ ]

and ε = [ε_τ^T  ε_f^T]^T contains the entire process noise. Equation (49) can be used for state propagation. However, the state transition matrix of the linearized system will also be needed for the covariance propagation of the EKF. Adopting a linearization technique similar to that in [57] and [58], one can linearize the equation of the quaternion (1) as

    (d/dt) δq_v = −[ω̄ ×] δq_v + ½ δω    (50a)
    (d/dt) δq_o = 0.    (50b)

Note that, since δq_o is not an independent variable and it has variations of only the second order, its time derivative can be ignored, as suggested in [57]. Moreover, linearization of the function d(ω, p) in the Euler equation (5) yields

    M(ω̄, p̄) = ∂d/∂ω |_{x = x̄} = [ 0         p_x ω_z   p_x ω_y ]
                                  [ p_y ω_z   0         p_y ω_x ]
                                  [ p_z ω_y   p_z ω_x   0       ]

    N(ω̄) = ∂d/∂p |_{x = x̄} = [ ω_y ω_z   0         0       ]
                               [ 0         ω_x ω_z   0       ]
                               [ 0         0         ω_x ω_y ].

Thus

    F = ∂f_s(x)/∂x |_{x = x̄} = [ −[ω̄ ×]   ½ 1₃       0₃      0₃      0₃      0₃ₓ₆  ]
                                [ 0₃       M(ω̄, p̄)   0₃      0₃      N(ω̄)   0₃ₓ₆  ]
                                [ 0₃       0₃         0₃      1₃      0₃      0₃ₓ₆  ]
                                [ 0₁₂ₓ₃    0₁₂ₓ₃      0₁₂ₓ₃   0₁₂ₓ₃   0₁₂ₓ₃   0₁₂ₓ₆ ].

The equivalent discrete-time model of the linearized system can be written as

    x_{k+1} = Φ_k x_k + w_k    (51)

where w_k is the discrete-time process noise, and the state transition matrix Φ_k = Φ(t_k, Δt) at time t_k over the time interval Δt = t_{k+1} − t_k is given by

    Φ(t_k, Δt) = e^{F(t_k) Δt} ≈ 1₂₁ + Δt F(t_k).    (52)


The covariance of the discrete-time process noise is related to the continuous process-noise covariance by

    Q_k = E[w_k w_k^T] = ∫_{t_k}^{t_k+Δt} Φ(t_k, τ) G Σ G^T Φ^T(t_k, τ) dτ

where Σ = E[ε ε^T] = diag(σ_τ² 1₃, σ_f² 1₃). After integration, the covariance matrix takes the form

    Q_k = diag(σ_τ² Λ₁(Δt), σ_f² Λ₂(Δt), 0₉)    (53)

where, with B = B(p̄) and M = M(ω̄, p̄),

    Λ₁ = [ (Δt³/12) B²                   (Δt³/6) B² M^T + (Δt²/4) B²                            ]
         [ (Δt³/6) M B² + (Δt²/4) B²     (Δt³/3) M B² M^T + (Δt²/2)(B² M^T + M B²) + Δt B²      ]

    Λ₂ = [ (Δt³/3) 1₃    (Δt²/2) 1₃ ]
         [ (Δt²/2) 1₃    Δt 1₃      ].
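The Λ₂ block can be verified by numerically evaluating Q = ∫ e^{Fs} G Σ G^T e^{F^T s} ds for a single position-velocity axis (a double integrator driven by white acceleration noise, with unit noise covariance). This checks only the closed form above; it is not the paper's implementation:

```python
import numpy as np

dt = 0.5                                  # propagation interval (2-Hz vision updates)
F = np.array([[0.0, 1.0], [0.0, 0.0]])    # one axis of the (r, v) pair: r' = v
G = np.array([[0.0], [1.0]])              # white acceleration noise drives v

# numerically integrate Q = ∫ e^{Fs} G G' e^{F's} ds over [0, dt]
s = np.linspace(0.0, dt, 2001)
Q_num = np.zeros((2, 2))
for si in s:
    Phi = np.eye(2) + F * si              # exact here, since F is nilpotent
    g = Phi @ G
    Q_num += g @ g.T
Q_num *= s[1] - s[0]

Q_closed = np.array([[dt**3 / 3.0, dt**2 / 2.0],
                     [dt**2 / 2.0, dt]])  # one axis of the Λ2 block in (53)
print(np.max(np.abs(Q_num - Q_closed)))
```

The numerical integral reproduces the familiar [Δt³/3, Δt²/2; Δt²/2, Δt] structure to within the quadrature error.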

Now, one can design an EKF based on the derived models (47), (52), and (53). The EKF-based observer for the associated noisy discrete system (51) is given in two steps.

1) Estimate update:

    K_k = P⁻_k H^T_k (H_k P⁻_k H^T_k + R_k)⁻¹    (54a)
    δx_k = K_k (z_k − h(x̂⁻_k))    (54b)
    P⁺_k = (1₂₁ − K_k H_k) P⁻_k.    (54c)

2) Estimate propagation:

    x̂⁻_{k+1} = x̂⁺_k + ∫_{t_k}^{t_k+Δt} f_s(x̂) dt    (55a)
    P⁻_{k+1} = Φ_k P⁺_k Φ^T_k + Q_k    (55b)

where x̂⁻_k and x̂⁺_k are the a priori and a posteriori estimates of the state vector, and H_k = H(x̂⁻_k). The state update follows the error-state update, i.e., δx = [δq_v^T  δy^T  δμ_v^T]^T, in the innovation step of the KF (54b). The update of the nonquaternion part of the state vector, i.e., δy, can be easily obtained by adding the corresponding error to the a priori estimate. However, the quaternion updates are more complicated because only the vector parts of the quaternion errors, δq_vk and δμ_vk, are given at every time step. Therefore, first, the scalar parts of the quaternion variations should be computed from the corresponding vector parts. Second, the quaternion updates are computed using the quaternion multiplication rule. Consequently, the state update may proceed right after (54b) as

    state update:

    ŷ⁺_k = ŷ⁻_k + δy_k
    q̂⁺_k = [ δq_vk^T   √(1 − ‖δq_vk‖²) ]^T ⊗ q̂⁻_k
    μ̂⁺_k = [ δμ_vk^T   √(1 − ‖δμ_vk‖²) ]^T ⊗ μ̂⁻_k.    (56)
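A minimal sketch of the multiplicative quaternion update in (56) follows; the composition order of the error quaternion and the a priori quaternion is an assumption, since the excerpt does not pin down the product convention:

```python
import numpy as np

def qmul(a, b):
    """Hamilton product a (x) b, scalar-last convention."""
    av, ao = a[:3], a[3]
    bv, bo = b[:3], b[3]
    return np.append(ao * bv + bo * av + np.cross(av, bv), ao * bo - av @ bv)

def quat_update(q_prior, dqv):
    """Multiplicative update as in (56): rebuild the scalar part from the
    estimated vector part, then compose with the a priori quaternion."""
    dq = np.append(dqv, np.sqrt(max(0.0, 1.0 - dqv @ dqv)))
    q = qmul(dq, q_prior)                 # composition order is an assumption
    return q / np.linalg.norm(q)          # guard against numerical drift

q_prior = np.array([0.0, 0.0, 0.0, 1.0])  # identity attitude
q_post = quat_update(q_prior, np.array([0.01, -0.02, 0.005]))
print(q_post)                              # unit-norm updated quaternion
```

Rebuilding the scalar part from the vector part keeps the updated quaternion on the unit sphere, which is the point of the multiplicative (rather than additive) update.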

    V. EXPERIMENT

This section reports experimental results showing the performance of the vision-guided robotic system for autonomous interception of a tumbling satellite. Fig. 4 illustrates the experimental setup with two manipulator arms. One arm is employed

    Fig. 4. Experimental setup.

Fig. 5. ICP algorithm matches points from the CAD model and the scanned data to estimate the object pose. (a) Satellite CAD model. (b) Point-cloud data.

to drift and tumble the mockup of a target satellite² according to its dynamic motion in zero gravity [75]. The other arm, which is equipped with a robotic hand known as SARAH [76], is used to autonomously approach the mockup and capture its grapple fixture. In this experiment, the servicer satellite to which the space manipulator is attached is assumed to have an ACS that perfectly cancels the dynamic effect of the manipulator motion on the attitude of the servicer base. Therefore, only the vision-guidance and robot-planning method can be verified, without taking imperfect attitude control or space-manipulator control into account. Each robotic arm has seven joints and, hence, is redundant. The redundancy-resolution algorithm proposed by Nelson and Khosla [64] is used to control the self-motion of the manipulators in such a way that joint limits and singularities are avoided as much as possible. Neptec's LCS [52] is used to obtain the range data at a rate of 2 Hz. The target pose is estimated by comparing the range data obtained from the LCS against the CAD model of the mockup using the iterative closest point (ICP) algorithm [55], [77]. The ICP algorithm is an iterative procedure which minimizes the distance between the point cloud in one dataset and the closest points in the other [78], [79]. One dataset is a set of 3-D points acquired by scanning the target, while the other is a CAD surface model, as illustrated in Fig. 5.

The entire control system has been developed using Simulink. MATLAB's Real-Time Workshop [80] generates portable C

²A two-third-scaled model of a Canadian micro-satellite called Quicksat.


    Fig. 6. Last steps of robotic grasping of the satellite mockup.

code from the Simulink model. Matrix manipulations are carried out by using the DSP blockset of MATLAB/Simulink [81]. The code, upon compilation, is executed on the QNX real-time operating system. Four 2.5-GHz P4 computers are dedicated to the processes associated with the ICP, the EKF estimator, the robot path planner and control, and the satellite simulator.

The robotic arm to which the satellite mockup is attached is controlled to precisely mimic the dynamic motion of a target satellite. In other words, the satellite mockup is moved as if it were drifting and tumbling in zero gravity. The moment-of-inertia values of the target satellite are selected as I_xx = 4 kg·m², I_yy = 8 kg·m², I_zz = 5 kg·m², and the location of the CM is ρ = [−0.15 0 0]^T m. Therefore, tr(I_c) = 17 kg·m², and the inertia-ratio parameters are

    p = [0.75  0.125  −0.8]^T.

These values are used to determine the manipulator-driven motion of the target as a free-floating object with initial drift velocity ṙ(0) = [3 2 1.5]^T mm/s and initial angular velocity ω(0) = [0.09 0.04 0.03]^T rad/s. Random force and torque perturbation signals with covariances

    σ_f = 1 × 10⁻³ m/s²  and  σ_τ = 2 × 10⁻³ rad/s²

are fed to the simulator so that the produced motion is not completely deterministic. The motion trajectories of the target satellite are generated off-line using MATLAB/Simulink.

The LCS provides pose measurements of the object at 2 Hz. The measurement-noise matrix, i.e., R, for the vision system is

    R = diag((0.045)² 1₃ m², (0.06)² 1₄).

For the first iteration of the EKF, an adequate guess of the initial states is required. The initial linear and angular velocities of the object at t = 0 s are selected to be zero. In addition, coordinate frame {C} is assumed to be initially aligned with the object's principal axes, i.e., q̂(0) is set to the measured η(0) and μ̂(0) is the identity quaternion. Given initial values of the parameters p̂(0) and ρ̂(0), the initial value of the CM position can be computed from the reciprocal of the observation equation, and thus, the state vector is initialized as

    x̂(0) = [ η(0)^T   0^T   (r_s(0) − A(η(0)) ρ̂(0))^T   0^T   p̂(0)^T   ρ̂(0)^T   0^T ]^T.

The objective is to capture the grapple fixture of the tumbling satellite mockup smoothly using the robotic arm despite not knowing the target dynamics parameters. The parameters of the optimal motion planner are set to

    w₁ = 2,  w₂ = 50,  a penalty-function weight of 1000,  and  a = 0.01.    (57)

The time history of the position and orientation of the mockup satellite obtained from the ICP algorithm is shown in Fig. 7. Apparently, the vision system is obstructed by the robotic arm during its final approach at time t = 112 s, and thereafter, the visual pose measurements are incorrect. During the learning phase, the EKF estimator receives these noisy measurements and, subsequently, estimates the sets of inertial parameters plus the linear and angular velocities of the tumbling satellite. Figs. 8 and 9 show the trajectories of the estimated velocities and the inertial parameters, respectively. The motion planner automatically generates the robot trajectory once the EKF estimator converges. To this end, convergence is declared when the Euclidean norm of the state covariance matrix drops below a threshold ρ_th, i.e.,

    ‖P_k‖ ≤ ρ_th.

It should be pointed out that the threshold must be selected carefully: A high value can lead to inaccurate estimation results, while a very low value can lead to an unnecessarily long convergence time. The time history of the norm of the covariance matrix is plotted in Fig. 10. In this experiment, the filter covariance matrix is initialized as

    P(t_o) = 1₂₁

whereas the threshold value for convergence detection is set to 0.1. Apparently, at time t = 85 s, the trajectory reaches the prescribed threshold ρ_th = 0.1, indicating filter convergence. The initial time, however, is set to t_o = 90 s, leaving a margin of 5 s to accommodate the computation time needed for path planning. Trajectories of the predicted position, velocity, and acceleration of the grapple fixture calculated from the latest


    Fig. 7. Target pose measurements obtained from LCS.

estimation of the motion states and parameters of the target are shown in Fig. 11. A comparison of the trajectories of the visual measurements and those of the predicted position in Fig. 11(a) reveals that the visual measurements are not reliable at the final stages of the robot approach, after time t = 112 s. The optimal rendezvous time t_f is obtained from solving the Hamiltonian as a function of t_f in (30) using the predicted motion trajectories. The zero crossing of the Hamiltonian is found numerically using the NR method, as illustrated in Fig. 12. It is clear from the figure that the NR method quickly converges to the optimal solution t_f − t_o = 36.5 s, i.e., within about eight iterations, in spite of an initial guess 10 s off from the actual value. Thus

    t_o = 90 s  and  t_f = 126.5 s

is the rendezvous time minimizing the weighted linear sum of the objectives corresponding to the specified weights (57). For the sake of comparing the optimization results with respect to different objectives, the optimal rendezvous times for the minimum-time path, the minimum-distance path, and the minimum line-of-sight angle are also calculated to be t_f = 105 s, t_f = 120 s, and t_f = 132 s, respectively, for this particular example.

    Fig. 8. Estimated drift velocity and angular velocity of the target.

The last steps of robotic capturing of the satellite's grapple fixture are illustrated in Fig. 6. Trajectories of the robot's end-effector and the satellite's grapple fixture are shown in Fig. 13. It is evident that the robot intercepts the handle at the designated final time. Fig. 14 shows a phase plane containing trajectories of the distance between the end-effector and the grapple fixture, ‖r_s − r‖, versus the relative velocity, ‖ṙ_s − ṙ‖. It is clear from the plot that the end-effector reaches the grapple fixture with almost zero relative velocity. The optimal rendezvous point and path are planned based on the prediction of the target's motion computed from the estimated parameters and states at the initial time t_o = 90 s. In order to assess the accuracy of the motion prediction, the prediction error for the grasping interval [t_o, t_f] is estimated from the actual target position, i.e., r_s_act, computed from the kinematics model of the simulator manipulator, and the results are plotted in Fig. 15. It is evident from the graph that the positioning error at the time of grasping is less than 1.5 cm, which is well within the grasping envelope of our robotic hand. In addition, the predicted trajectory of the line-of-sight angle in Fig. 16 shows that the robot reaches the target at angle θ = 21°, which is a small enough angle for reliable capture.


Fig. 9. Convergence of the estimated dynamics parameters of the target to their true values (depicted by dotted lines).

Fig. 10. Detecting the estimator convergence from the norm of the state covariance matrix.

Fig. 11. (a)-(c) Estimations and predictions of the grapple-fixture position, velocity, and acceleration.

    Robotic capturing of the target satellite also requires that therobot end-effector is properly aligned with the target at the time

    of rendezvous. In this experiment, the robot attempts to track


Fig. 12. NR iterations to find the zero crossing of the Hamiltonian function.

    Fig. 13. Position trajectories of the target satellite and the robot hand.

    Fig. 14. Trajectories of the relative position versus relative velocity.

the target's orientation trajectories starting at the initial time t = t_o. By virtue of (5), one can estimate the target's angular acceleration from the estimated angular velocity and parameters according to dω̂/dt = d(ω̂, p̂). Fig. 17 illustrates trajectories of the estimated

Fig. 15. Prediction error of the grapple-fixture position.

    Fig. 16. Line-of-sight angle.

Fig. 17. Estimation of the target's angular acceleration.

angular acceleration. Consequently, q̂, ω̂, and dω̂/dt constitute the desired orientation trajectories of the robot. Fig. 18 demonstrates that the robot end-effector is able to track the target attitude a few seconds after the initial time t_o.


Fig. 18. Orientation trajectories of the target satellite and the robot's hand.

    VI. CONCLUSION

We presented an optimal estimation and guidance scheme for smooth robotic capturing of a drifting and tumbling target with uncertain dynamics. First, a KF was used to estimate the motion states and a set of dynamics parameters of the target based on noisy pose measurements of an LCS. The dynamics equations were represented in terms of inertia ratios, and the trace of the inertia matrix (an invariant quantity) was factored out of the equations so that the set of remaining dynamics parameters became identifiable. Second, the estimates of the states and parameters were used by a guidance scheme to optimally plan the trajectory of the robot's end-effector to intercept a grapple fixture on the object with zero relative velocity at an optimal rendezvous point. In order to satisfy the capture requirements, a cost function was defined as a weighted linear sum of travel time, distance, and the cosine of a line-of-sight angle (indicating the accessibility of the grapple fixture for robotic capturing) plus a penalty function acting as a soft constraint on the acceleration magnitude. Optimal control theory was used to find the trajectory and rendezvous point minimizing the cost function subject to the zero-relative-velocity requirement at the time of grasping. Finally, experimental results illustrating autonomous guidance of a robotic manipulator for the capture of a drifting and tumbling mockup satellite based on noisy pose measurements obtained from a laser vision system were reported. The results demonstrated that the robot's hand smoothly intercepts the grapple fixture of the tumbling satellite, i.e., with almost zero relative velocity, even though the vision sensor was obstructed by the robot arm and the target dynamics were not known a priori to the guidance system.

    APPENDIX AEULER EQUATIONS

The Euler equations written in the principal axes are given by

    I_xx ω̇_x = (I_yy − I_zz) ω_y ω_z + τ_x
    I_yy ω̇_y = (I_zz − I_xx) ω_z ω_x + τ_y
    I_zz ω̇_z = (I_xx − I_yy) ω_x ω_y + τ_z    (58)

which are equivalent to (5) using definition (7), and

    b_x = tr(I_c)/I_xx,   b_y = tr(I_c)/I_yy,   b_z = tr(I_c)/I_zz.    (59)

The following identities can be directly derived from (4):

    I_yy/I_xx = (1 + p_x)/(1 − p_y),   I_zz/I_xx = (1 − p_x)/(1 + p_z),   I_yy/I_zz = (1 − p_z)/(1 + p_y).    (60)

Finally, using (60) in (59) gives

    b_x(p) = 1 + I_yy/I_xx + I_zz/I_xx = 1 + (1 + p_x)/(1 − p_y) + (1 − p_x)/(1 + p_z).

The expressions of b_y and b_z as functions of p can be derived in the same fashion.
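These identities can be checked against the inertia values used in Section V (I_xx = 4, I_yy = 8, I_zz = 5 kg·m², so tr(I_c) = 17 kg·m²). The explicit definition of p used below is an assumption chosen to be consistent with (60):

```python
# Principal inertias used in the experiment of Section V
Ixx, Iyy, Izz = 4.0, 8.0, 5.0

# inertia ratios (definition assumed consistent with (60))
px = (Iyy - Izz) / Ixx
py = (Izz - Ixx) / Iyy
pz = (Ixx - Iyy) / Izz
print(px, py, pz)                         # 0.75 0.125 -0.8

# the three ratio identities (60)
print(Iyy / Ixx, (1 + px) / (1 - py))
print(Izz / Ixx, (1 - px) / (1 + pz))
print(Iyy / Izz, (1 - pz) / (1 + py))

# b_x from (59) versus the p-only expression derived above
tr_Ic = Ixx + Iyy + Izz
print(tr_Ic / Ixx, 1 + (1 + px) / (1 - py) + (1 - px) / (1 + pz))
```

Each pair of printed values coincides, and the resulting p matches the inertia-ratio vector quoted in the experiment section.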

    APPENDIX BNEWTON RAPHSON

According to the NR method, the following iteration

    t_{n+1} = t_n − H(t_n)/Ḣ(t_n)    (61)

may be worked out until the error in the equation H(t) = 0 falls within an acceptable tolerance. The time derivative of the Hamiltonian in (61) can be approximated by

    Ḣ(t_n) ≈ (H(t_n) − H(t_{n−1}))/(t_n − t_{n−1}).    (62)

Therefore, upon substitution of (62) into (61), we get

    t_{n+1} = (t_n H(t_{n−1}) − t_{n−1} H(t_n))/(H(t_{n−1}) − H(t_n)).    (63)
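Equation (63) is the classical secant iteration. A minimal sketch, with a simple stand-in function replacing the Hamiltonian H(t), is:

```python
def secant_root(f, t0, t1, tol=1e-10, max_iter=50):
    """Derivative-free Newton-Raphson (secant) iteration, as in (63)."""
    f0, f1 = f(t0), f(t1)
    for _ in range(max_iter):
        t2 = (t0 * f1 - t1 * f0) / (f1 - f0)
        if abs(t2 - t1) < tol:
            return t2
        t0, f0 = t1, f1
        t1, f1 = t2, f(t2)
    return t1

# stand-in for the Hamiltonian: a smooth function with a zero at t = 36.5
root = secant_root(lambda t: t**2 - 36.5**2, 10.0, 11.0)
print(root)
```

As with the experiment in Section V, convergence is fast even from an initial guess far from the actual zero crossing, since no derivative of H is required.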

    ACKNOWLEDGMENT

The author would like to thank Dr. L. Hartman from the Canadian Space Agency for his helpful comments on this paper.

    REFERENCES

[1] Z. Luo and Y. Sakawa, Control of a space manipulator for capturing a tumbling object, in Proc. 29th IEEE Conf. Decis. Control, Dec. 1990, vol. 1, pp. 103–108.

[2] E. Papadopoulos and S. Moosavian, Dynamics and control of multi-arm space robots during chase and capture operations, in Proc. IEEE/RSJ/GI Int. Conf. Intell. Robots Syst. Adv. Robot. Syst. Real World, Sep. 1994, vol. 3, pp. 1554–1561.

[3] D. Zimpfer and P. Spehar, STS-71 Shuttle/MIR GNC mission overview, in Proc. Adv. Astron. Sci., Amer. Astron. Soc., San Diego, CA, 1996, pp. 441–460.

[4] H. Nagamatsu, T. Kubota, and I. Nakatani, Capture strategy for retrieval of a tumbling satellite by a space robotic manipulator, in Proc. IEEE Int. Conf. Robot. Autom., Apr. 1996, vol. 1, pp. 70–75.

[5] G. Visentin and D. L. Brown, Robotics for geostationary satellite service, in Proc. Robot. Auton. Syst., 1998, vol. 23, pp. 45–51.

[6] S. Nakasuka and T. Fujiwara, New method of capturing tumbling object in space and its control aspects, in Proc. IEEE Int. Conf. Robot. Autom., Kohala Coast, HI, Aug. 1999, pp. 973–978.

[7] S. Matsumoto, Y. Ohkami, Y. Wakabayashi, M. Oda, and H. Uemo, Satellite capturing strategy using agile orbital servicing vehicle, hyper OSV, in Proc. IEEE Int. Conf. Robot. Autom., Washington, DC, May 2002, pp. 2309–2314.


[8] E. Bornschlegl, G. Hirzinger, M. Maurette, R. Mugunolo, and G. Visentin, Space robotics in Europe, a compendium, presented at the 7th Int. Symp. Artif. Intell., Robot., Autom. Space, Nara, Japan, May 2003.

[9] G. Hirzinger, K. Landzettel, B. Brunner, M. Fisher, C. Preusche, D. Reintsema, A. Albu-Schaffer, G. Schreiber, and B.-M. Steinmetz, DLR's robotics technologies for on-orbit servicing, Adv. Robot., vol. 18, no. 2, pp. 139–174, 2004.

[10] Z. Ma, O. Ma, and B. Shashikanth, Optimal approach to and alignment with a rotating rigid body for capture, J. Astron. Sci., vol. 55, no. 4, pp. 407–419, 2007.

[11] K. Yoshida, D. Dimitrov, and H. Nakanishi, On the capture of tumbling satellite by a space robot, presented at the IEEE/RSJ Int. Conf. Intell. Robots Syst., Beijing, China, Oct. 2006.

[12] C. Li, B. Liang, and W. Xu, Autonomous trajectory planning of free-floating robot for capturing space target, in Proc. IEEE/RSJ Int. Conf. Intell. Robots Syst., 2006, pp. 1008–1013.

[13] I. Rekleitis, E. Martin, G. Rouleau, R. L'Archevêque, K. Parsa, and E. Dupuis, Autonomous capture of a tumbling satellite, J. Field Robot., Spec. Issue Space Robot., vol. 24, no. 4, pp. 275–296, 2007.

[14] F. Aghili and K. Parsa, Adaptive motion estimation of a tumbling satellite using laser-vision data with unknown noise characteristics, in Proc. IEEE/RSJ Int. Conf. Intell. Robots Syst., San Diego, CA, Oct. 29–Nov. 2, 2007, pp. 839–846.

[15] D. Whelan, E. Adler, S. Wilson, and G. Roesler, DARPA Orbital Express program: Effecting a revolution in space-based systems, in Proc. Small Payloads Space, Nov. 2000, vol. 136, pp. 48–56.

[16] J. K. Thienel and R. M. Sanner, Hubble Space Telescope angular velocity estimation during the robotic servicing mission, AIAA J. Guid., Control, Dyn., vol. 30, no. 1, pp. 29–34, Feb. 2007.

[17] S. Kawamoto, S. Nishida, and S. Kibe, Research on a space debris removal system, NAL Res. Prog. (Nat. Aerosp. Lab. Jpn.), vol. 2002/2003, pp. 84–87, 2003.

[18] D. Reintsema, J. Thaeter, A. Rathke, W. Naumann, P. Rank, and J. Sommer, DEOS: The German robotics approach to secure and de-orbit malfunctioned satellites from low Earth orbit, in Proc. Int. Symp. Artif. Intell., Robot. Autom. Space, Sapporo, Japan, Aug. 29–Sep. 1, 2010, pp. 244–251.

[19] G. Brachet, Long-term sustainability of outer space activities, Committee on the Peaceful Uses of Outer Space (UNCOPUOS), Vienna, Austria, Tech. Rep., Feb. 8–19, 2010.

[20] J.-C. Liou, N. Johnson, and N. Hill, Controlling the growth of future LEO debris populations with active debris removal, Acta Astronautica, vol. 66, no. 5–6, pp. 648–653, Mar./Apr. 2009.

[21] B. Barbee, S. Alfano, E. Pinon, K. Gold, and D. Gaylor, Design of spacecraft missions to remove multiple orbital debris objects, in Proc. IEEE Aerosp. Conf., Mar. 2011, pp. 1–14.

[22] M. Oda, Space robot experiments on NASDA's ETS-VII satellite: Preliminary overview of the experiment results, in Proc. IEEE Int. Conf. Robot. Autom., Detroit, MI, 1999, pp. 1390–1395.

[23] N. Inaba and M. Oda, Autonomous satellite capture by a space robot: World first on-orbit experiment on a Japanese robot satellite ETS-VII, in Proc. IEEE Int. Conf. Robot. Autom., 2000, pp. 1169–1174.

[24] K. Yoshida, Engineering Test Satellite VII flight experiment for space robot dynamics and control: Theories on laboratory test beds ten years ago, now in orbit, Int. J. Robot. Res., vol. 22, no. 5, pp. 321–335, 2003.

[25] On-Orbit Satellite Servicing Study, Project Report, Nat. Aeronaut. Space Admin., Goddard Space Flight Center, Greenbelt, MD, 2010. Available: http://servicingstudy.gsfc.nasa.gov/

[26] F. Aghili, Optimal control of a space manipulator for detumbling of a target satellite, in Proc. IEEE Int. Conf. Robot. Autom., Kobe, Japan, May 2009, pp. 3019–3024.

[27] X. Cyril, A. K. Misra, M. Ingham, and G. Jaar, Postcapture dynamics of a spacecraft-manipulator-payload system, AIAA J. Guid., Control, Dyn., vol. 23, no. 1, pp. 95–100, Jan./Feb. 2000.

[28] D. N. Nenchev and K. Yoshida, Impact analysis and post-impact motion control issues of a free-floating space robot subject to a force impulse, IEEE Trans. Robot. Autom., vol. 15, no. 3, pp. 548–557, Jun. 1999.

[29] Z. Vafa and S. Dubowsky, On the dynamics of manipulators in space using the virtual manipulator approach, in Proc. IEEE Int. Conf. Robot. Autom., Raleigh, NC, pp. 579–585.

[30] Y. Umetani and K. Yoshida, Resolved motion rate control of space manipulators with generalized Jacobian matrix, IEEE Trans. Robot. Autom., vol. 5, no. 3, pp. 303–314, Jun. 1989.

[31] E. Papadopoulos and S. Dubowsky, On the nature of control algorithms for space manipulators, in Proc. IEEE Int. Conf. Robot. Autom., 1990, pp. 1102–1108.

[32] K. Yoshida and K. Hashizume, Zero reaction maneuver: Flight validation with ETS-VII space robot and extension to kinematically redundant arm, in Proc. IEEE Int. Conf. Robot. Autom., Seoul, Korea, May 2001, pp. 441–446.

[33] T. Oki, H. Nakanishi, and K. Yoshida, Time-optimal manipulator control of a free-floating space robot with constraint on reaction torque, in Proc. IEEE/RSJ Int. Conf. Intell. Robots Syst., Nice, France, Sep. 2008, pp. 2828–2833.

[34] F. Aghili, Cartesian control of space manipulators for on-orbit servicing, presented at the AIAA Guid., Navigat. Control Conf., Toronto, ON, Canada, Aug. 2010.

[35] Z. Lin, V. Zeman, and R. V. Patel, On-line robot trajectory planning for catching a moving object, in Proc. IEEE Int. Conf. Robot. Autom., May 1989, vol. 3, pp. 1726–1731.

[36] R. Sharma, J. Y. Herve, and P. Cuka, Dynamic robot manipulation using visual tracking, in Proc. IEEE Int. Conf. Robot. Autom., Nice, France, May 1992, pp. 1844–1849.

[37] M. Lei and B. Ghosh, Visually guided robotic tracking and grasping of a moving object, in Proc. 32nd IEEE Conf. Decis. Control, Dec. 1993, vol. 2, pp. 1604–1609.

[38] Y. Chen and L. T. Watson, Optimal trajectory planning for a space robot docking with a moving target via homotopy algorithms, J. Robot. Syst., vol. 12, no. 8, pp. 531–540, 1995.

[39] M. Zhang and M. Buehler, Sensor-based online trajectory generation for smoothly grasping moving objects, in Proc. IEEE Int. Symp. Intell. Control, Aug. 1994, pp. 141–146.

[40] N. Fitz-Coy and M. C. Liu, New modified proportional navigation scheme for rendezvous and docking with tumbling targets: The planar case, in Proc. Symp. Flight Mech./Estimat. Theory, May 1995, pp. 243–252.

[41] E. A. Croft, R. G. Fenton, and B. Benhabib, Optimal rendezvous-point selection for robotic interception of moving objects, IEEE Trans. Syst., Man, Cybern., vol. 28, no. 2, pp. 192–204, Apr. 1998.

[42] M. Mehrandezh, N. M. Sela, R. G. Fenton, and B. Benhabib, Robotic interception of moving objects using an augmented ideal proportional navigation guidance technique, IEEE Trans. Syst., Man, Cybern., vol. 30, no. 3, pp. 238–250, May 2000.

[43] F. Aghili and K. Parsa, An adaptive vision system for guidance of a robotic manipulator to capture a tumbling satellite with unknown dynamics, in Proc. IEEE/RSJ Int. Conf. Intell. Robots Syst., Nice, France, Sep. 2008, pp. 3064–3071.

[44] F. Aghili, Optimal control for robotic capturing and passivation of a tumbling satellite, presented at the AIAA Guid., Navigat. Control Conf., Honolulu, HI, Aug. 2008.

[45] R. Lampariello, Motion planning for the on-orbit grasping of a noncooperative target satellite with collision avoidance, in Proc. Int. Symp. Artif. Intell., Robot. Autom. Space, Sapporo, Japan, Aug. 29–Sep. 1, 2010, pp. 636–643.

[46] J. R. Wertz and R. Bell, Autonomous rendezvous and docking technologies: Status and prospects, in Proc. Int. Soc. Opt. Eng., Apr. 2003, vol. 5088, pp. 20–30.

[47] M. Ruth and C. Tracy, Video-guidance for DART rendezvous mission, in Proc. Int. Soc. Opt. Eng., Aug. 2004, vol. 5419, pp. 92–106.

[48] M. F. Machula and G. S. Sandhoo, Rendezvous and docking for space exploration, presented at the 1st Space Explorat. Conf.: Contin. Voyage Discov., Orlando, FL, Jan. 2005.

[49] J. W. Evans, E. Pinon III, and T. A. Mulder, Autonomous rendezvous guidance and navigation for Orbital Express and beyond, in Proc. 16th AAS/AIAA Space Flight Mech. Conf., Jan. 2006, pp. 2–7.

[50] P. Singla, K. Subbarao, and J. L. Junkins, Adaptive output feedback control for spacecraft rendezvous and docking under measurement uncertainty, AIAA J. Guid., Control, Dyn., vol. 29, no. 4, pp. 892–902, 2006.

[51] G. Rouleau, I. Rekleitis, R. L'Archevêque, E. Martin, K. Parsa, and E. Dupuis, Autonomous capture of a tumbling satellite, in Proc. IEEE Int. Conf. Robot. Autom., Orlando, FL, May 15–19, 2006, pp. 3855–3860.

[52] C. Samson, C. English, A. Deslauriers, I. Christie, F. Blais, and F. Ferrie, Neptec 3D laser camera system: From space mission STS-105 to terrestrial applications, Can. Aeronaut. Space J., vol. 50, no. 2, pp. 115–123, 2004.

[53] M. D. Lichter and S. Dubowsky, State, shape, and parameter estimation of space objects from range images, in Proc. IEEE Int. Conf. Robot. Autom., Apr. 2004, pp. 2974–2979.

[54] U. Hillenbrand and R. Lampariello, Motion and parameter estimation of a free-floating space object from range data for motion prediction, in
