
A CMAC neural network approach to redundant manipulator kinematics control

YANGMIN LI¹*, SIO HONG LEONG¹

This paper studies the inverse kinematics issue of redundant manipulators using the neural network method. The conventional way of solving this problem analytically is to apply the Jacobian Pseudoinverse Algorithm. It is effective and able to resolve the redundancy for additional constraints; however, its computational demands make it unsuitable for real-time control. Recently, neural networks have been widely used in robotic control because they are fast, fault-tolerant and able to learn. In this paper, the CMAC (Cerebellar Model Articulation Controller) neural network for solving the inverse kinematics problem in real time is presented. Simulations have shown that the CMAC neural network performs well in the inverse kinematics control of redundant manipulators.

Key words: redundant manipulator, CMAC neural network, kinematics

    1. Introduction

Robotic manipulators have been widely used in different industries to perform many kinds of work such as welding, painting, assembling, nuclear material handling, space and sea exploration, etc. The control of a manipulator involves the pre-study of trajectory planning, inverse kinematics and inverse dynamics; among these, the inverse kinematics of a redundant manipulator is of interest in this paper. Kinematically redundant manipulators arouse research interest because their redundancy can be utilized not only to avoid singularities and obstacles, but also to enhance the working ability in a limited workspace and the capability of fault tolerance. The inverse kinematics problem is to determine the joint variables corresponding to a given end-effector position and orientation. Given a desired workspace trajectory, finding the corresponding joint space trajectory is a complex problem, since redundant manipulators have more than the necessary degrees of freedom (DOF) and admit multiple or infinitely many solutions. Moreover, the inverse kinematics equations are usually nonlinear, and closed-form solutions are difficult to find.

1 Department of Electromechanical Engineering, Faculty of Science and Technology, University of Macau, Av. Padre Tomás Pereira S. J., Taipa, Macao S.A.R., P. R. China

    * corresponding author, e-mail: [email protected]


Conventionally, the Jacobian Pseudoinverse Algorithm [1] has been applied to solve inverse kinematics problems due to its ability to satisfy additional constraints by mapping the velocities corresponding to those constraints into the null space of the Jacobian J while tracking the desired workspace trajectory. However, the Jacobian Pseudoinverse Algorithm involves the inversion of the J matrix, which can present a serious inconvenience not only at a singularity but also in its neighborhood, even though the damped least-squares (DLS) inverse can make the inversion better conditioned from a numerical viewpoint.

In recent years, neural networks (NN) [2] have been successfully applied in robotic control; their generalization capacity and structure make them robust and fault-tolerant, and able to solve problems that were difficult to handle before. Because neural networks are composed of many neurons, even if some of them are damaged the output of the network will not vary too much. Another advantage of NN is their ability to solve highly nonlinear problems. These properties make NN very promising for robotic control problems.

Although there are many kinds of NN [3], the CMAC (Cerebellar Model Articulation Controller) neural networks [4, 5] have a special advantage in speed of convergence and a simple learning method, which makes them especially suitable both for real-time control of robots and for nonlinear function approximation in any control system.

The CMAC neural network was proposed in order to simulate the function of the cerebellum. It is an associative neural network in which the inputs determine a small subset of the network, and that subset determines the outputs corresponding to the inputs. The associative mapping property of CMAC assures local generalization, that is, similar inputs produce similar outputs while distant inputs produce nearly independent outputs. The CMAC is similar to the perceptron: although the relationship is linear at the scope of a single neuron, the overall input-output relationship is nonlinear.

Concerning research on CMAC neural networks and their applications, Miller III et al. [6] developed a robot tracking system consisting of a manipulator with an attached video camera for tracking an object on a conveyor and keeping the image of the object at a specified position and orientation on the screen, without being given any robot kinematics information, height measurement or camera-screen calibration; the CMAC was used to learn all these parameters. On the other hand, CMAC was modified by Macnab et al. [7] to utilize radial basis functions for the precision control of flexible-joint robots in order to deal with the elasticity. For solving inverse kinematics problems, Oyama et al. [8] proposed a Modular Neural Net System for learning inverse kinematics in order to deal with the multi-valued and discontinuous nature of the inverse kinematics function. Gardner et al. [9] investigated the application of a neural network to two-link manipulator trajectory tracking, in which the NN was used to compute the components of the inverse of the Jacobian matrix.


Ferreira et al. [10] applied an Attentional Mode Neural Network (AMNN) to lead a robot arm in 3-D space to a goal point in real time. The PSOM network presented by Walter [11] is able to learn the inverse kinematics based on only 27 data points.

This paper is organized as follows. In Section 2, the conventional inverse kinematics of manipulators is reviewed. Section 3 presents the CMAC neural network method for solving the complex inverse kinematics problem. In Section 4, a simulation is performed with a 5-link planar manipulator tracking a circular trajectory. Finally, some conclusions are presented in Section 5.

    2. Conventional manipulator inverse kinematics

For a redundant manipulator, the end-effector position is a function of the joint variables, which can be expressed by the following kinematic equation:

$x = f(q)$, (1)

where $x$ is the $m \times 1$ end-effector position vector and $q$ is the $n \times 1$ joint angle vector, with $n > m$ in the case of a redundant manipulator. Given the desired trajectory $x(t)$ in the workspace, we need to find the joint angle trajectory $q(t)$ corresponding to $x(t)$.

The differential kinematic equation can be derived as

$\dot{x} = J(q)\,\dot{q}$, (2)

where $\dot{x}$ is the $m \times 1$ end-effector velocity vector, $\dot{q}$ is the $n \times 1$ joint velocity vector, and $J(q)$ is the $m \times n$ Jacobian matrix.

Thus, our main concern is to solve the following inverse kinematics equation:

$\dot{q} = J^{-1}(q)\,\dot{x}$. (3)

Since the manipulator is redundant ($n > m$), the Jacobian matrix is not square. Usually Eq. (3) is solved by using the pseudoinverse of the Jacobian, which locally minimizes the norm of the joint velocities [1]. Equation (3) becomes

$\dot{q} = J^{+}(q)\,\dot{x}$. (4)

Here the matrix $J^{+}$ is defined as

$J^{+} = J^{\mathrm{T}}\left(JJ^{\mathrm{T}}\right)^{-1}$. (5)


Furthermore, Eq. (4) can be written as

$\dot{q} = J^{+}(q)\,\dot{x} + \left(I - J^{+}J\right)\dot{q}_{a}$, (6)

where $\dot{q}_{a}$ is a vector of arbitrary joint velocities projected onto the null space of $J$. The redundancy can be resolved by specifying $\dot{q}_{a}$ so as to satisfy an additional constraint. Obviously, the above pseudoinverse formulation involves the inversion of $J$, and a large amount of calculation is needed to solve the equation; thus, it is not suitable for real-time control. The CMAC neural network is proposed to approach this issue since it has a fast learning function, rapid algorithm computation, a simple structure, and fault-tolerant properties.
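For concreteness, the following stand-alone C++ fragment (a minimal sketch written for this text, not the authors' simulator) evaluates Eqs. (4)-(6) for the planar case $m = 2$, $n = 5$: it forms $J^{+} = J^{\mathrm{T}}(JJ^{\mathrm{T}})^{-1}$ with a hand-rolled 2 × 2 inversion and adds the null-space term $(I - J^{+}J)\dot{q}_{a}$. The Jacobian entries and velocities in main are illustrative values only.

```cpp
#include <array>
#include <cstdio>

// Minimal sketch of Eqs. (4)-(6) for an m = 2, n = 5 planar case.
// J+ = J^T (J J^T)^-1 ;  qdot = J+ xdot + (I - J+ J) qa
constexpr int M = 2, N = 5;
using Vec2 = std::array<double, M>;
using Vec5 = std::array<double, N>;
using Jac  = std::array<std::array<double, N>, M>;   // 2 x 5 Jacobian

Vec5 pseudoinverseSolution(const Jac& J, const Vec2& xdot, const Vec5& qa)
{
    // A = J J^T (2 x 2), then invert it analytically.
    double a = 0, b = 0, c = 0, d = 0;
    for (int k = 0; k < N; ++k) {
        a += J[0][k] * J[0][k];
        b += J[0][k] * J[1][k];
        c += J[1][k] * J[0][k];
        d += J[1][k] * J[1][k];
    }
    const double det = a * d - b * c;              // assumed away from a singularity
    const double Ainv[2][2] = { {  d / det, -b / det },
                                { -c / det,  a / det } };

    // J+ = J^T A^-1  (5 x 2)
    double Jp[N][M];
    for (int i = 0; i < N; ++i)
        for (int j = 0; j < M; ++j)
            Jp[i][j] = J[0][i] * Ainv[0][j] + J[1][i] * Ainv[1][j];

    // qdot = J+ xdot + (I - J+ J) qa
    Vec5 qdot{};
    for (int i = 0; i < N; ++i) {
        // particular solution J+ xdot
        qdot[i] = Jp[i][0] * xdot[0] + Jp[i][1] * xdot[1];
        // null-space term (I - J+ J) qa
        for (int k = 0; k < N; ++k) {
            double JpJ = Jp[i][0] * J[0][k] + Jp[i][1] * J[1][k];
            qdot[i] += ((i == k ? 1.0 : 0.0) - JpJ) * qa[k];
        }
    }
    return qdot;
}

int main()
{
    Jac  J    = {{ {1.0, 0.8, 0.6, 0.4, 0.2},
                   {0.2, 0.4, 0.6, 0.8, 1.0} }};      // illustrative values only
    Vec2 xdot = { 0.1, -0.05 };
    Vec5 qa   = { 0.0, 0.0, 0.0, 0.0, 0.0 };          // no additional constraint
    Vec5 qdot = pseudoinverseSolution(J, xdot, qa);
    for (double v : qdot) std::printf("%f\n", v);
}
```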

    3. A CMAC neural network

3.1 Principles

The structure of the CMAC neural network is shown in Fig. 1. The input vectors in the input space S come from a number of sensors in the real world. The input space consists of all possible input vectors, and CMAC maps each input vector into C points in the conceptual memory A. As shown in Fig. 1, two close inputs have overlapping points in A; the closer the inputs, the larger the overlap in memory A, while two distant inputs have no overlap.

Since the practical input space is extremely large, in order to reduce the memory requirement A is mapped onto a much smaller physical memory A′ through hash coding. Hence, any input presented to CMAC generates C physical memory locations, and the output Y is the summation of the contents of these C locations in A′.

    Fig. 1. Block diagram of CMAC.


The associative mapping within the CMAC network thus assures that nearby points in the input space generalize while distant points do not. Moreover, since the mapping from A′ to Y is linear but the mapping from S to A′ is nonlinear, the CMAC network as a whole performs a fixed nonlinear mapping from the input vector to a many-dimensional output vector.
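To make the conceptual-to-physical mapping concrete, the following fragment (a minimal sketch, not taken from the paper; the quantization scheme and FNV-1a hash are illustrative assumptions) selects C overlapping receptive-field layers for a quantized input, hashes each layer to a physical address, and counts how many addresses two inputs share. Nearby inputs share most of their C cells, distant inputs essentially none, which is exactly the local generalization described above.

```cpp
#include <cstdint>
#include <cstdio>
#include <vector>

// Hash one receptive-field layer j of a quantized input s to a physical address.
// The layer offset j makes the C tilings overlap, which produces the
// local generalization described in the text.  (Illustrative FNV-1a hash.)
std::size_t cell(const std::vector<int>& s, int j, int C, std::size_t memSize) {
    std::uint64_t h = 1469598103934665603ull;
    for (int v : s) {
        h ^= static_cast<std::uint64_t>((v + j) / C);   // coarse quantization per layer
        h *= 1099511628211ull;
    }
    return h % memSize;
}

// Number of physical cells shared by two inputs (out of C active cells each).
int sharedCells(const std::vector<int>& a, const std::vector<int>& b,
                int C, std::size_t memSize) {
    int shared = 0;
    for (int j = 0; j < C; ++j)
        if (cell(a, j, C, memSize) == cell(b, j, C, memSize)) ++shared;
    return shared;
}

int main() {
    const int C = 64;                       // generalization parameter
    const std::size_t memSize = 10000;      // physical memory size
    std::vector<int> s  = {120, 45, 300};
    std::vector<int> s1 = {121, 46, 301};   // nearby input
    std::vector<int> s2 = {400, 900, 10};   // distant input
    std::printf("cells shared with nearby input : %d of %d\n",
                sharedCells(s, s1, C, memSize), C);
    std::printf("cells shared with distant input: %d of %d\n",
                sharedCells(s, s2, C, memSize), C);
}
```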

3.2 Learning method

Network training is typically based on observed training data pairs $Y(s)$ and $Y_{d}$, where $Y_{d}$ is the desired network output corresponding to the input $s$. Using the least mean square (LMS) training rule, the weights are updated by

$w(t+1) = w(t) + \beta\,\dfrac{Y_{d} - Y(s)}{C}$, (7)

where $\beta$ is the learning step length.

Therefore, if we define an acceptable error $\varepsilon$, no changes are made to the weights when $\left|Y_{d} - Y(s)\right| \le \varepsilon$. The training can be done after a whole set of training samples has been tested or after each training sample has been tested.
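The update of Eq. (7) touches only the C active cells. A compact single-output CMAC sketch is given below; the class name, hash function and scalar output are illustrative choices and not the paper's implementation (which was written in Borland C++ Builder and is not reproduced here). The memory size 10000 and generalization parameter 64 follow Section 4.2.

```cpp
#include <cstdint>
#include <cstdio>
#include <vector>

// Minimal single-output CMAC sketch: C active cells selected by hash coding,
// output = sum of the active weights, LMS update as in Eq. (7).
class Cmac {
public:
    Cmac(std::size_t memorySize, int generalization, double beta)
        : w_(memorySize, 0.0), C_(generalization), beta_(beta) {}

    // Map a quantized input vector to C physical memory addresses.
    std::vector<std::size_t> activeCells(const std::vector<int>& s) const {
        std::vector<std::size_t> idx(C_);
        for (int j = 0; j < C_; ++j) {
            // Each of the C overlapping receptive-field layers is offset by j,
            // so nearby inputs share most of their cells (local generalization).
            std::uint64_t h = 1469598103934665603ull;       // FNV-1a hash, illustrative
            for (int v : s) {
                h ^= static_cast<std::uint64_t>((v + j) / C_);
                h *= 1099511628211ull;
            }
            idx[j] = h % w_.size();
        }
        return idx;
    }

    double output(const std::vector<int>& s) const {
        double y = 0.0;
        for (std::size_t a : activeCells(s)) y += w_[a];
        return y;
    }

    // Eq. (7): w <- w + beta * (Yd - Y(s)) / C on the active cells only.
    void train(const std::vector<int>& s, double yd) {
        double err = yd - output(s);
        for (std::size_t a : activeCells(s)) w_[a] += beta_ * err / C_;
    }

private:
    std::vector<double> w_;
    int C_;
    double beta_;
};

int main() {
    Cmac net(10000, 64, 1.0);
    std::vector<int> s = {120, 45, 300};   // quantized input (illustrative)
    for (int i = 0; i < 20; ++i) net.train(s, 0.5);
    std::printf("output after training: %f\n", net.output(s));
}
```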

3.3 Solving inverse kinematics problems of kinematically redundant manipulators

The control block diagram of the CMAC neural network for the inverse kinematics control of the manipulator is shown in Fig. 2.

From Fig. 2, the following equation can be derived:

$\Delta X_{t+1} = \Delta X_{t+1}^{d} + Ke$, (8)

which is equivalent to

$\dot{e} + Ke = 0$. (9)

    Fig. 2. Block diagram of CMAC NN for solving inverse kinematics problems.


If K is a positive definite matrix, the system is asymptotically stable [1].

The shadowed box is our main concern since it solves the inverse kinematics problem. The inputs are the current manipulator configuration $q_{t}$ at time $t$ and the desired end-effector position increments $\Delta X_{t+1}^{d}$ at time $t+1$. The outputs are the corresponding joint angle increments, which are fed together with the current joint angles to the robot manipulator in order to move it to the desired configuration. Here, NN denotes the CMAC neural network and Opt denotes the optimization process, which is used to find the desired joint angle increments corresponding to the desired end-effector position increments while also satisfying specified additional constraints.

In fact, the inverse kinematics problem could be solved by an optimization method alone. However, an optimization process usually takes quite a long time and thus is not suitable for real-time control. Here we use the CMAC neural network to generate an initial solution $q_{t+1}$ for the optimization process. The difference between the initial solution $q_{t+1}$ and the output of the optimization process $q_{t+1}^{d}$ is fed back to correct the weights of the CMAC neural network; that is, the CMAC neural network learns to produce an output as close as possible to the optimization output. Thus the time spent in the optimization process can be reduced as the CMAC neural network learns more and more, and eventually the output of the CMAC neural network can replace that of the optimization process.

For the optimization process, the gradient method is adopted to minimize the following objective function:

$U = \dfrac{\alpha}{2}\,e^{\mathrm{T}}e + \dfrac{\beta}{2}\,\Delta q^{\mathrm{T}}\Delta q$, (10)

where $e$ is the position error vector, $\Delta q$ is the joint angle difference vector, and $\alpha$ and $\beta$ are the weight constants.

The second term on the right-hand side of Eq. (10) serves to minimize the joint angle difference, smoothing the motion and minimizing the energy consumption. We have

$\dfrac{\partial U}{\partial q} = -\alpha\,e^{\mathrm{T}}J + \beta\,\Delta q^{\mathrm{T}}$, (11)

where $J$ is the Jacobian matrix, and thus

$q_{k+1} = q_{k} - \eta\left(\dfrac{\partial U}{\partial q}\right)^{\mathrm{T}}$, (12)

where $k$ is the number of iterations and $\eta$ is the length of the learning step.
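The following fragment shows one pass of Eqs. (11)-(12) in isolation; it is a sketch under the symbol reconstruction used above (α, β, η as named here), and in the full control loop e and J would be re-evaluated from the forward kinematics after every step. The numbers in main are illustrative, not the paper's manipulator.

```cpp
#include <cstdio>
#include <vector>

// One gradient-descent pass of Eqs. (10)-(12): e is the Cartesian position
// error, dq the joint increment from the current configuration, J the m x n
// Jacobian at the current estimate.
void gradientStep(const std::vector<double>& e,              // m
                  const std::vector<std::vector<double>>& J, // m x n
                  std::vector<double>& dq,                   // n, updated in place
                  double alpha, double beta, double eta)
{
    const std::size_t m = e.size(), n = dq.size();
    for (std::size_t j = 0; j < n; ++j) {
        // dU/d(dq_j) = -alpha * sum_i e_i J_ij + beta * dq_j   (Eq. 11)
        double grad = beta * dq[j];
        for (std::size_t i = 0; i < m; ++i) grad -= alpha * e[i] * J[i][j];
        dq[j] -= eta * grad;                                   // Eq. (12)
    }
}

int main() {
    std::vector<double> e  = { 0.01, -0.02 };                  // illustrative error
    std::vector<std::vector<double>> J = { { 1.0, 0.8, 0.6, 0.4, 0.2 },
                                           { 0.2, 0.4, 0.6, 0.8, 1.0 } };
    std::vector<double> dq(5, 0.0);        // e.g. the CMAC output as initial guess
    // e and J are held fixed here for brevity; the real loop recomputes them.
    for (int k = 0; k < 100; ++k) gradientStep(e, J, dq, 300.0, 1.0, 0.003);
    for (double v : dq) std::printf("%f ", v);
    std::printf("\n");
}
```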


    4. Examples

In order to evaluate the performance of CMAC, we developed simulation software under the Microsoft Windows 98 system using Borland C++ Builder. The CMAC code includes multiple designs for the receptive field lattice and the receptive field sensitivity functions. In this simulation the sampling time is $\Delta t = 0.02$ s, the constant $K = 1/\Delta t = 50$, and the parameters of the optimization are $\alpha = 300.0$, $\beta = 1.0$ and $\eta = 0.003$.

4.1 Manipulator kinematics

A 5-link planar manipulator is treated, in which the links are connected by revolute joints (5 DOF, $m = 2$, $n = 5$, redundancy $= n - m = 3$), as shown in Fig. 3. The total length of the manipulator is 1.5 m, with the first link of 0.5 m, the second and third links of 0.25 m, and the fourth and fifth links of 0.275 m. The height of the base support is 0.35 m. Following Fig. 4, the Denavit-Hartenberg notation of the manipulator is given in Table 1. No joint angle limits are imposed during kinematics control.

Fig. 3. Initial configuration of the manipulator.

Fig. 4. 5-DOF planar manipulator diagram.

The kinematic equations of the 5-DOF planar manipulator can be obtained as

$\begin{bmatrix} X \\ Z \end{bmatrix} = \begin{bmatrix} l_{1}S_{1} + l_{2}S_{12} + l_{3}S_{123} + l_{4}S_{1234} + l_{5}S_{12345} \\ l_{1}C_{1} + l_{2}C_{12} + l_{3}C_{123} + l_{4}C_{1234} + l_{5}C_{12345} \end{bmatrix}$, (13)

where $S_{1\cdots i} = \sin(q_{1} + \cdots + q_{i})$ and $C_{1\cdots i} = \cos(q_{1} + \cdots + q_{i})$.
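Eq. (13) is straightforward to evaluate by accumulating the joint angles. A short sketch (not the authors' code) with the link lengths of Section 4.1 is given below; the joint angles in main are the initial values listed later in the text.

```cpp
#include <cmath>
#include <cstdio>

// Forward kinematics of the 5-link planar arm, Eq. (13):
//   X = sum_i l_i * sin(q_1 + ... + q_i)
//   Z = sum_i l_i * cos(q_1 + ... + q_i)
// Link lengths follow Section 4.1; the 0.35 m base height is not added here.
void forwardKinematics(const double q[5], double& X, double& Z)
{
    static const double l[5] = { 0.5, 0.25, 0.25, 0.275, 0.275 };
    double sum = 0.0;
    X = Z = 0.0;
    for (int i = 0; i < 5; ++i) {
        sum += q[i];                 // cumulative angle q_1 + ... + q_i
        X += l[i] * std::sin(sum);
        Z += l[i] * std::cos(sum);
    }
}

int main()
{
    // Initial joint angles as listed in Section 4.1 (values as given in the text).
    double q0[5] = { 1.730385, 1.219399, 0.0, 2.29649, 0.0 };
    double X, Z;
    forwardKinematics(q0, X, Z);
    std::printf("X = %f m, Z = %f m\n", X, Z);
}
```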


The desired end-effector trajectory in our simulation is a circle with a radius of 0.25 m, centered at (0.5, 0.6) m. The desired angular velocity is $\omega = 0.5$ rad/s. The trajectory equations are as follows:

$\begin{bmatrix} X_{d}(t) \\ Z_{d}(t) \end{bmatrix} = \begin{bmatrix} 0.5 - 0.25\cos(\omega t) \\ 0.25 + 0.25\sin(\omega t) \end{bmatrix}$. (15)

The cycle time is 4 s and the initial configuration of the manipulator is shown in Fig. 3, with the values $q_{1}(0) = 1.730385$, $q_{2}(0) = 1.219399$, $q_{3}(0) = 0.0$, $q_{4}(0) = 2.29649$ and $q_{5}(0) = 0.0$.

In the following simulations, the norm error is defined as

$E_{\mathrm{norm}}(t) = \sqrt{\left[X_{d}(t) - X(t)\right]^{2} + \left[Z_{d}(t) - Z(t)\right]^{2}}$ (16)

and the average error is defined as

$E_{\mathrm{av}} = \dfrac{1}{N}\sum_{n=1}^{N} E_{\mathrm{norm}}(n\Delta t), \qquad N = \dfrac{2\pi}{\omega\,\Delta t}$. (17)
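A minimal sketch of the bookkeeping in Eqs. (15)-(17) follows, sampling one full circle of duration 2π/ω as in Eq. (17). Since the controller itself is not reproduced, the simulated end-effector position is replaced by a perfect tracker, so the printed average error is trivially zero; only the sampling and error accumulation are illustrated.

```cpp
#include <cmath>
#include <cstdio>

int main()
{
    const double PI    = 3.14159265358979323846;
    const double omega = 0.5;        // desired angular velocity [rad/s], per Section 4.1
    const double dt    = 0.02;       // sampling time [s]
    const int    steps = static_cast<int>(2.0 * PI / (omega * dt));   // N in Eq. (17)

    double sumNorm = 0.0;
    for (int n = 1; n <= steps; ++n) {
        double t  = n * dt;
        double Xd = 0.5  - 0.25 * std::cos(omega * t);   // Eq. (15)
        double Zd = 0.25 + 0.25 * std::sin(omega * t);

        double X = Xd, Z = Zd;       // placeholder for the simulated arm output

        double Enorm = std::sqrt((Xd - X) * (Xd - X)
                               + (Zd - Z) * (Zd - Z));   // Eq. (16)
        sumNorm += Enorm;
    }
    double Eav = sumNorm / steps;    // Eq. (17)
    std::printf("average error over one cycle: %f m\n", Eav);
}
```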

4.2 CMAC neural network structure

The structure of the CMAC neural network is shown in Fig. 5, where the input vector dimension is taken as 7 and the output vector dimension is taken as 5. During the simulation the number of vectors in the CMAC memory is 10000 and the generalization parameter is 64.

    Fig. 5. The input and output of the CMAC neural network.


4.3 Simulation results

First of all, we would like to observe the performance of the optimization process without the CMAC neural network. The results for optimization iteration numbers of 1, 50 and 100 are shown in Fig. 6, Fig. 7 and Fig. 8, respectively. The norm error becomes stable after 50 iterations, with a maximum norm error of about 0.18 mm, as shown in Fig. 7. At the beginning of the simulation the average error is very large, with a maximum value of approximately 12.5 mm; after more than 20 iterations it decreases to a stable value of approximately 0.3 mm, as shown in Fig. 8. The program can output the optimal results for any number of iterations; because of page limitations, only part of the results is demonstrated here to point out the shortcomings of the algorithm without the CMAC neural network. Within the following figures, $\theta_{i}$ denotes joint $i$ ($i = 1, 2, \ldots, 5$).

Now the CMAC neural network is employed and trained step by step with decreasing optimization iterations and learning rates, as shown in Figs. 9-13. It has been found that after 1300 trainings the maximum norm error is about 0.3 mm (0.02 % of the workspace dimension). The result is comparable with that of applying the optimization process alone. Notice that only one optimization iteration is used, so the CMAC neural network really can reduce the optimization time.

    Fig. 6. Performance of optimization process without CMAC: iteration numbers = 1.


    Fig. 7. Performance of optimization process without CMAC: iteration numbers = 50.

    Fig. 8. Performance of optimization process without CMAC: iteration numbers = 100.


Fig. 9. CMAC performance during training with iteration numbers = 12, $\beta_{\mathrm{CMAC}} = 1$, and cycles = 50.

Fig. 10. CMAC performance during training with iteration numbers = 6, $\beta_{\mathrm{CMAC}} = 1$, and cycles = 50.


Fig. 11. CMAC performance during training with iteration numbers = 3, $\beta_{\mathrm{CMAC}} = 1$, and cycles = 50.

Fig. 12. CMAC performance during training with iteration numbers = 2, $\beta_{\mathrm{CMAC}} = 1$, and cycles = 150.


Fig. 13. CMAC performance during training with iteration numbers = 1, $\beta_{\mathrm{CMAC}} = 0.25$, and cycles = 5450.

Fig. 14. Robust performance after the link lengths were changed, with iteration numbers = 1.


Fig. 15. Robust performance after the link lengths were changed, with iteration numbers = 700.

In order to check whether the system is stable or not, we tested 5000 more trainings. From Fig. 13 it is seen that after 6000 trainings the norm error is similar to the result after 1000 trainings, and the average error is mostly below 0.1 mm. This implies that the system is stable enough.

Furthermore, the robustness of the system is evaluated by changing the link-length parameters of the manipulator and observing the CMAC neural network performance. We use the result after 1000 trainings mentioned above to test the system. The link lengths of the manipulator are changed to a new set of values: $l_{1} = 0.5$ m, $l_{2} = 0.1$ m, $l_{3} = 0.4$ m, $l_{4} = 0.35$ m, $l_{5} = 0.15$ m. The results are shown in Figs. 14 and 15, from which we can see that the average error caused by the variation of the link dimension parameters decreases to a stable value after about 100 trainings. Also, the maximum norm error is about 0.25 mm after 700 trainings, which is close to the result of 0.3 mm above. Therefore the performance of the CMAC neural network in solving the inverse kinematics control problem under link-length variation is quite robust.

    5. Conclusions

In this paper the CMAC neural network has been applied to solving the inverse kinematics problem of redundant manipulators. An inverse kinematics control algorithm which involves a CMAC neural network has been proposed.


Simulations for a five-link manipulator have been performed to evaluate the CMAC performance as well as its robustness. Through training by the CMAC neural network, the five-link manipulator can track a circle within the error tolerance. The research results show that the CMAC neural network is very well suited to real-time control applications due to its fast learning and simple computation. It should be pointed out that this work is restricted to planar manipulators with an open-chain structure of rigid bodies connected by revolute or prismatic joints. Further work will extend it to spatial closed-chain or mixed-chain manipulator structures.

    Acknowledgements

This work was supported in part by grants RG008/00-01w/LYM/FST and RG025/00-01S/LYM/FST from the Research Committee of the University of Macau. The authors would like to thank the reviewers and the executive editor for their comments and suggestions.

    REFERENCES

[1] SCIAVICCO, L.–SICILIANO, B.: Modeling and Control of Robot Manipulators. New York, McGraw-Hill Book Company 1996.

[2] TSOUKALAS, L. H.–UHRIG, R. E.: Fuzzy and Neural Approaches in Engineering. New York, John Wiley & Sons Inc. 1997.

[3] FREEMAN, J. A.–SKAPURA, D. M.: Neural Networks: Algorithms, Applications, and Programming Techniques. Boston, Addison Wesley 1991.

[4] MILLER, W. T.–GLANZ, F. H.: Implementation of the Cerebellar Model Arithmetic Computer CMAC. UNH CMAC Version 2.1. The University of New Hampshire 1996.

[5] LI, Y.–LEONG, S. H.: In: Proceedings of the 5th World Multiconference on Systemics, Cybernetics and Informatics. Eds.: Callaos, N. et al. Vol. 18. Orlando, International Institute of Informatics and Systemics Press 2001, p. 274.

[6] MILLER III, W. T.–GLANZ, F. H.–KRAFT III, L. G.: IEEE Special Issue on Neural Networks, 78, 1990, p. 1561.

[7] MACNAB, C. J. B.–D'ELEUTERIO, G. M. T.: In: Proceedings of the IEEE International Conference on Robotics & Automation. Vol. 1. Piscataway, IEEE Press 1998, p. 511.

[8] OYAMA, E.–TACHI, S.: In: Proceedings of the IEEE International Conference on Robotics & Automation. Vol. 4. Piscataway, IEEE Press 2000, p. 3239.

[9] GARDNER, J. F.–BRANDT, A.–LUECKE, G.: In: Proceedings of the Fifth International Conference on Advanced Robotics. Vol. 1. Piscataway, IEEE Press 1991, p. 487.

[10] FERREIRA, A. P. L.–ENGEL, P. M.: In: Proceedings of the International Workshop on Neural Networks for Identification, Control, Robotics, and Signal/Image Processing. Piscataway, IEEE Press 1996, p. 440.

[11] WALTER, J. A.: In: Proceedings of the IEEE International Conference on Robotics & Automation. Vol. 2. Piscataway, IEEE Press 1998, p. 2054.


[12] WOO, M.–NEIDER, J.–DAVIS, T.: OpenGL Programming Guide. 2nd Edition. Boston, Addison Wesley Developers Press 1997.

[13] KEMPF, R.–FRAZIER, C.: OpenGL Reference Manual: The Official Reference Document to OpenGL, Version 1.1. Boston, Addison Wesley Developers Press 1997.

Received: 9. 8. 2002

Revised: 3. 1. 2003