
Vision Path Following with a Stabilized Quadrotor

Miguel José Jorge Rabaça ∗

∗ Department of Mechanical Engineering - IDMEC, Instituto Superior Técnico, Technical University of Lisbon (TULisbon), Av. Rovisco Pais,

1049-001 Lisboa, Portugal; e-mail: [email protected]

Abstract: The goals of the present work were: to propose a control strategy for a quadrotor, with the purpose of following a ground track using airborne vision feedback; to prepare the approach to be chosen, develop the tools, and test them with a wheeled mobile robot; and to evaluate the approach for the quadrotor in simulation before it is applied to the experimental platform. Two methods to estimate the tracking errors were developed and evaluated: a 'Trigonometric' approach based on the geometric relations between the image and the ground planes, and a 'Camera matrix' approach based on the projective model of the camera. An image treatment was developed to clean the noise from the images captured by the camera. The estimation and image treatment were tested on the real track of the laboratory using the wheeled mobile robot. A virtual reality environment was created to test the control strategies of the quadrotor and to predict the behavior of the experimental platform. The virtual reality simulation results were compared with the experimental results obtained with the wheeled mobile robot. It was verified that the virtual reality track of the laboratory is well dimensioned and the images obtained during track following are similar; the virtual reality is able to predict the experimental wheeled mobile robot behavior. We may conclude that the estimation methods, the image treatment and the control strategies perform the desired task within the system limitations and allow the experiment to proceed with the quadrotor.

Keywords: Quadrotor, Path following, Real time image processing, Visual Servoing

1. INTRODUCTION

Unmanned Aerial Vehicles (UAVs) have attracted growing interest in the robotics community over time. In this work, remote navigation through images for a quadrotor is researched.

In the area of autonomous flight, [Rondon et al., 2010] showed it is possible to navigate with a quadrotor using information provided by a downward-oriented camera and without converting the information from inside the image. A work on path following is presented in [Bourquardez et al., 2009], where the objective is to track geometries and maintain the quadrotor in a fixed position with the help of a fixed downward camera. In [Cabecinhas et al., 2010], control is performed with a camera equipped with pan and tilt control in order to keep the target in sight. A quadrotor tracking a trajectory is presented in [Zhou et al., 2010], where a numerical simulation is made with a pre-planned path while using a PID controller. A quadrotor with control based on Backstepping and Frenet-Serret theory is used in simulation for tracking a target while using a camera pointed downward in [Barrientos et al., 2010]. Related works have been done using other kinds of robots, such as [Ismail et al., 2009], where a ground mobile robot follows a line while moving forward; in this case image pre-processing is done in order to clean the noise present, with the use of a fixed threshold. It was verified that image enhancement and segmentation are areas in computer vision with a lot of recent development. For example, methods using the histogram as a basis for image treatment are seen in works such as [Wang et al., 2007], [He et al., 2010] and [Blas et al., 2008], where the first two are methods made to improve image quality in gray scale and the last one performs segmentation of areas in a colored image.

In section 2 an explanation of the steps performed during this work is provided. Afterwards, the robot models used are described: a mobile ground robot and a quadrotor. In section 3 the image treatment used to clean the image and the methods to estimate the tracking errors are presented. Section 4 presents the results and analyses obtained with the control of the ground mobile robot. In section 5 the results and analyses of the control applied to the quadrotor are presented, and in section 6 the conclusions are provided.

2. PROBLEM STATEMENT

The objective is for a quadrotor to follow a ground track in the laboratory (figure 1) with the aid of a camera while moving forward. Five different frames are used: (1) F0 is the global fixed frame, with the z axis pointing down and both x and y positioned horizontally on the ground of the laboratory. Here the track path is defined, and the variables [x, y, z] are the position of the robot. (2) Fr is the robot frame; the location of its origin is presented for each robot. The location of the camera is defined in this frame. (3) Fc is the camera frame, where the image is defined. Its origin and orientation are defined by the camera conditions. (4) Fe is the tracking error frame, presented in figure 2. The xe axis is defined by the tangent of the track and ye is defined in the horizontal plane, with its direction to the right. The tracking errors are defined in this frame. (5) Fmg is the mobile ground frame. This frame is used to obtain the tracking errors. The variables [φ, θ, ψ] are the Euler angles needed to rotate the frame F0 to the frame Fr. These are the attitude angles of the robot: the roll, pitch and yaw angles.

Fig. 1. Track of the laboratory and three frames, F0, Fr and Fc

In order to follow the line two variables are necessary; these are the tracking errors. One of these errors describes the distance between the robot and the line; it is defined as the cross-tracking error and denoted by the variable ye. The other error provides the orientation error of the system relative to the line; it is defined as the heading error and denoted by the variable ψe. These variables are presented in figure 2 and their signs are defined relative to the robot frame.

Fig. 2. Tracking errors and tracking error frame

In order to obtain the tracking errors from vision, it was decided to divide the image into two parts, the top and the bottom. Then the mass centers of the track in the top and bottom images are computed, [xct, yct] and [xcb, ycb], respectively the top and bottom centers. In figure 3 the variables are presented with a red mark defining the centers, and the image partition is shown with a blue line; the separation line used was 200. The camera resolution used was [240, 320].

Fig. 3. Image partition and top and bottom centers

The system is a stabilized quadrotor that receives references for both attitude angles and height, [φr, θr, ψr, zr]. The attitude references are provided by the controller, although θr is controlled externally with the reference of the velocity V0. This is necessary in order to keep the system moving forward. A vision based simulation is carried out for the systems (figure 4) in order to observe the behavior of the quadrotor. The tracking variables [ye, ψe] are estimated with the methods. In figure 4: (1) C, E and S are the controller, error estimator and the system; (2) C and S are the same as before; (3) V is the virtualization or the real image capture, where the variables of Cam used are the position and orientation of the camera. This block provides an image that will be named I; (4) A1 represents the image enhancement performed over the raw image I, and the resulting image is named Ifinal (A1 is only used for the experiments, due to the problems related to the real camera); (5) A2 is the estimation of the [xct, yct, xcb, ycb] variables, the mass centers of each image part; (6) B is the estimation of the cross tracking and heading errors. The control performed with vision for the quadrotor is presented in section 5.

Fig. 4. Control block diagram with vision

The methods developed for the image were tested with the wheeled mobile robot using a control similar to the one presented in figure 4. In this case the input variables are the left and right wheel angular velocities, [Ωl, Ωr], and the output variables are the position and orientation of the system, [x, y, ψ]. The references are the [yr, ψr, V0] variables.

2.1 Models

In this section a brief explanation of the dynamic models of the mobile ground robot and of the quadrotor, used for the simulations, is provided.

2.1.1 Wheeled mobile robot

Based on the thesis of Edwin Carvalho [Carvalho, 2008], the robot is able to move in the horizontal plane with the use of two motors connected to the wheels, right and left separately. The geometric parameters of this robot are presented in figure 5: (1) s represents the distance along the x axis between the robot's mass center and the motor axis. (2) C is the center of the motor axis. (3) b is the distance from each motor to C. (4) r is the wheel radius. The inputs of this system are the angular velocities of the wheels, Ωl and Ωr, which can be directly converted into linear velocities using equations 1. The robot linear longitudinal velocity vm and angular velocity θ̇m are also determined using equations 1. The origin of the robot frame in rasteirinho is located at C.


Fig. 5. Geometry of the mobile ground robot and variables, from [Carvalho, 2008]

vl = r·Ωl,    vr = r·Ωr
vm = (vl + vr)/2,    θ̇m = (vl − vr)/(2·b)    (1)
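As an illustration of equations 1, the kinematics can be written as a short function. This is a minimal sketch; the wheel radius r and the distance b are placeholder values, not the actual rasteirinho parameters.

```python
def wheel_to_body_velocities(omega_l, omega_r, r=0.03, b=0.15):
    """Differential-drive kinematics of equations 1.
    r (wheel radius) and b (motor-to-C distance) are placeholder
    values; the real rasteirinho parameters are in [Carvalho, 2008]."""
    v_l = r * omega_l                      # left wheel linear velocity
    v_r = r * omega_r                      # right wheel linear velocity
    v_m = (v_l + v_r) / 2.0                # longitudinal velocity of C
    theta_dot_m = (v_l - v_r) / (2.0 * b)  # yaw rate, sign per the paper's convention
    return v_m, theta_dot_m
```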

2.1.2 Quadrotor

The explanation will focus on the principal physical components of the quadrotor, how the motion is performed and the states used for the inner loop control.

Fig. 6. Quadrotor with body reference frame, from [Henriques, 2011]

In the quadrotor, the origin of the robot frame Fr is situated at the mass center. The axes are defined as seen in figure 6. The quadrotor is equipped with four motors with propellers attached, paired two by two with rotations in two different directions. These four propellers yield the necessary lift force to oppose gravity. With separate changes in the propeller angular speeds it is possible to achieve control over the quadrotor in 6 DOF. The attitude and height are controlled directly, whereas the horizontal translation is underactuated and controlled by changing the roll and pitch angles, φ or θ. The quadrotor is equipped with sensors used to estimate the height, attitude, accelerations and angular velocity in the robot frame. The height sensor range is [10 cm, 80 cm]. In [Henriques, 2011] it was recommended that the quadrotor should fly up to 50 cm to avoid surpassing the height of 80 cm, due to the height sensor dynamics. The system is controlled at a frequency of 50 Hz. An Extended Kalman Filter (EKF) was developed in [Henriques, 2011] to estimate the attitude angles; a low level Linear-Quadratic Regulator (LQR) was then implemented to stabilize the system's attitude and height. These were used for the inner loop control of the system during the simulations in section 5. The system control variables are [φr, θr, ψr, zr] and the outputs of the system are [φ, θ, ψ, x, y, h, Vx]. The variables [x, y, Vx] are not measured in the real system and are used for the purpose of simulation. In order to simulate the real system behavior, the position is necessary along with the attitude. The variable Vx is the linear longitudinal velocity of the robot and is used to keep the system moving forward, as will be seen in section 5.

3. IMAGE PROCESSING AND TRACKING ERRORS ESTIMATION

3.1 Image enhancement

In this subsection the image enhancement, described by block A1 in the block diagram of figure 4, is presented. An ideal image is not possible due to the inconstant conditions present in the laboratory. In each position different light intensities and light reflections are present, with the addition of noise corrupting the signal due to the wireless transmission. For a simpler explanation the image enhancement section is divided into three parts. In Pre-enhancement the functions used and the respective options are explained. In Post-enhancement a modified method based on the histogram equalization technique is presented, and finally in Final treatment the final step of the image treatment is presented.

3.1.1 Pre-enhancement

The first step of image enhancement is the removal of some random noise present in the image; this noise is similar to salt-and-pepper noise. To remove it, a median filter is used. The next step is contrast adjustment, which adjusts the contrast of an image by 'stretching' the range of pixel values present in the image to a new range.
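A minimal sketch of this pre-enhancement step, assuming a 3x3 median window and a stretch to the full [0, 255] range (the section does not state the exact window size or target range):

```python
import numpy as np
from scipy.ndimage import median_filter

def pre_enhance(img):
    """Median filtering followed by contrast stretching.
    img: grayscale uint8 image. The window size and target range
    are assumptions, not the values used in the paper."""
    filtered = median_filter(img, size=3)  # remove salt-and-pepper-like noise
    lo, hi = int(filtered.min()), int(filtered.max())
    # 'Stretch' the occupied pixel range to the full [0, 255] range.
    stretched = (filtered.astype(np.float64) - lo) / max(hi - lo, 1) * 255.0
    return stretched.astype(np.uint8)
```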

3.1.2 Post-enhancement

For a reasonable level of brightness and removal of unwanted visual information, a method was developed. This method is based on histogram equalization and the idea originated in [Wang et al., 2007]. First, the function Cpe, named CDF post-enhancement, is applied.

Cpe(k) = 1,    if C(k) > Cu
Cpe(k) = ((C(k) − Cl)/(Cu − Cl))^r,    if Cl < C(k) < Cu
Cpe(k) = 0,    if C(k) < Cl    (2)

By applying equation 2 to the CDF it was possible to remove most of the undesired visual information present in the image. The function shortens the variation of the CDF values. After obtaining the new CDF distribution, a normalization is done by restricting the distribution to the range [0, 1]. The variable Cl protects and limits the number of pixels with lower level values. The variable Cu pulls most of the CDF values to the top of the range. The r value defines how much the CDF values converge to the limits. The new gray levels are defined by the CDF distribution: gray levels where the CDF has value 0 are mapped to pixel value 0, and gray levels where the CDF has value 1 are mapped to pixel value 255. Even though the conditions change between positions, the method is able to adapt and clean most of the unwanted information.

3.1.3 Final treatment

The method presents good results, but when the image has a low concentration of line pixels it creates random artifacts in the image borders. To mitigate the effect of the artifacts over the estimations, cross information between the pre-enhanced image and the post-enhanced image is computed, described by equation 3. The result of this product is a final image with weighted information.

Ifinal = ([1] − Ipre-enhanced) · Ipost-enhanced    (3)
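The following sketch implements equations 2 and 3 under stated assumptions: the CDF C(k) is computed from a 256-bin histogram, and the limits Cl, Cu and the exponent r are illustrative values, not the tuned ones.

```python
import numpy as np

def post_enhance(img, c_l=0.05, c_u=0.95, r=2.0):
    """CDF post-enhancement of equation 2. c_l, c_u and r are
    illustrative; the paper tunes them to the laboratory conditions."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    cdf = np.cumsum(hist) / img.size                  # C(k)
    # Equation 2: clip below c_l to 0, above c_u to 1, power law in between.
    cpe = np.clip((cdf - c_l) / (c_u - c_l), 0.0, 1.0) ** r
    lut = (cpe * 255.0).astype(np.uint8)              # CDF value 0 -> gray 0, 1 -> 255
    return lut[img]                                   # remap every pixel

def final_image(pre, post):
    """Cross information of equation 3, with both images scaled to [0, 1]."""
    return (1.0 - pre / 255.0) * (post / 255.0)
```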

In order to analyze the treatment, a comparison with simple thresholds is made in figure 7. The raw image is presented first, then the result of simple thresholds equal to 0.1 and 0.2, then the result of the post-enhancement, and finally the final image.

Fig. 7. Comparison of images obtained during track following with the wheeled mobile robot at one location: (a) raw image; (b) threshold equal to 0.1; (c) threshold equal to 0.2; (d) image with post-enhancement; (e) final image

By comparing the images, it can be seen that the simple thresholds under-detect the line; the under-detection is visible in 7b and 7c.

3.2 Mass centers estimation

In this subsection the method described by block A2 in the block diagram of figure 4 is presented. The first step consists in dividing the image into two parts, top and bottom, and obtaining the coordinates of each blob center by computing the mass center of each image part. The essential parameters are:

• b is the number of lines to be included in the top part of the image; the value 200 was used.
• Ifinal(i, j) is the final image obtained from the image enhancement. Since the image is not binary, the estimation is not a geometric center but a mass center.

This image partitioning gives different weights to the information provided by the top and bottom parts according to the b value. If a partition is small, it will be more sensitive to changes in its interior. Due to this, b defines which part is more important.
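A sketch of block A2, assuming image rows are indexed by i (the x image coordinate) and columns by j, with b = 200 as in the paper:

```python
import numpy as np

def blob_centers(img_final, b=200):
    """Mass centers of the track in the top and bottom image parts.
    img_final is the (non-binary) final image, so the result is a
    mass center weighted by pixel intensity, not a geometric center."""
    def mass_center(part, row_offset=0):
        total = part.sum()
        if total == 0:                      # no track pixels in this part
            return None
        i, j = np.indices(part.shape)
        return ((i * part).sum() / total + row_offset,
                (j * part).sum() / total)
    top = mass_center(img_final[:b])                   # (x_ct, y_ct)
    bottom = mass_center(img_final[b:], row_offset=b)  # (x_cb, y_cb)
    return top, bottom
```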

3.3 Tracking error visual estimation

In [Rondon et al., 2010] an Image Based Visual Servoing (IBVS) approach was used where the camera is positioned downward; this setup gives the possibility of obtaining the variable values without the distortions introduced by the position and orientation of the camera. In this subsection the methods that describe block B in the block diagram of figure 4 are presented. Due to the camera conditions, it was chosen to use a Position Based Visual Servoing (PBVS) approach. Here the values of interest are obtained according to the camera location in the mobile ground frame.

To estimate the tracking errors ye and ψe, two different methods were considered: (1) the Trigonometric method; (2) the Camera matrix method.

For these methods the aid of the mobile ground frame was necessary. The origin of this frame is located at the projection of the camera on the ground. The axes xmg, ymg and zmg are defined by the system of equations 4; the frame is similar to the global frame with the difference of being aligned with the camera. Equations 4 represent the cross products needed to obtain the mobile ground frame.

xmg = zmg × xc
ymg = zmg × xmg
zmg = z0    (4)

3.3.1 Trigonometric Method

This method is based on direct trigonometric relations between the image coordinates and the mobile ground frame coordinates. If the horizontal and vertical view angles are known, then it is possible to deduce the conversion from pixel coordinates to angles θx and θy, along the x and y directions. The variables θY,open angle and θX,open angle are the maximum angles along the respective axes. For a point with image coordinates (x, y), the equivalent angles are given by equations 5 and 6.

θy = (θY,open angle / m) · y    (5)
θx = (θX,open angle / n) · x    (6)

Since the camera height h and tilt angle θcam are known, the coordinates (px, py) of a point in the mobile ground frame may be deduced from the angles θx and θy of its projection in the image.

For the longitudinal coordinate px, the relation may be computed from trigonometric relations, leading to equation 7.

px = h / tan(θcam − θy)    (7)

For the lateral coordinate py, distances must be evaluated in the plane defined by the angles θcam and θy, obtaining equation 8.

py = h · tan(θx) / sin(θcam − θy)    (8)

In equation 8 the camera roll angle is neglected. In order to mitigate or anticipate its influence, a first order approximation may be considered, with equation 9.

py = h · tan(θx) / sin(θcam − θy) + h · tan(φ)    (9)

3.3.2 From mobile ground coordinates to tracking errors

To determine the heading error, equation 10 was deduced by using the point values for both the top and bottom parts of the image: the (px1, py1) values for the bottom part and the (px2, py2) values for the top part, restricting the problem to the plane z = 0 and considering the point p0 as the origin of the mobile ground frame.

ψe = atan((py1 − py2)/(px1 − px2))    (10)

The cross-tracking error is computed as the distance from a point to a line, leading to equation 11, where the point p0 is the origin of the mobile ground frame, and p1 and p2 are the bottom and top image centers in mobile ground coordinates.

ye = ((p0 − p1) × (p0 − p2)) / |p2 − p1|    (11)
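Once the top and bottom centers are expressed in mobile ground coordinates, equations 10 and 11 reduce to a few lines. A sketch, with p0 = (0, 0) as the frame origin:

```python
import numpy as np

def tracking_errors(p1, p2):
    """Equations 10 and 11. p1, p2: bottom and top centers (px, py)
    in mobile ground coordinates; p0 is the origin."""
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    psi_e = np.arctan((p1[1] - p2[1]) / (p1[0] - p2[0]))  # equation 10
    p0 = np.zeros(2)
    a, c = p0 - p1, p0 - p2
    # 2D cross product over the base length: signed distance from
    # p0 to the line through p1 and p2 (equation 11).
    y_e = (a[0] * c[1] - a[1] * c[0]) / np.linalg.norm(p2 - p1)
    return y_e, psi_e
```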

3.3.3 Camera matrix method

This method is based on knowing a parameter that contains various information about the image acquisition device, known as the camera matrix. This matrix has some of the camera's internal parameters embedded in it, equation 12. The internal parameters are: (1) fc, the focal length; (2) cc, the principal point; (3) αc, the skew coefficient; (4) kc, the distortion coefficients. The other parameters are related to the type of deformations applied by the camera to objects present in a plane that can be considered the camera view plane.

K = [ fc(1)  αc·fc(1)  cc(1)
      0      fc(2)     cc(2)
      0      0         1     ]    (12)

For the conversion from the image plane to the camera plane, equation 13 was used.

p = KX (13)

Where: (1) p is the homogeneous coordinate point in the image plane; (2) K is the camera matrix; (3) X is the homogeneous coordinate point in the camera view plane. With the knowledge of the points in the camera view plane, it is now possible to start the conversion from the camera frame to the ground frame. By knowing the type of projection, a formula can easily be obtained to describe the coordinate transformation. This equation, 14, considers the position and orientation of the camera.

X = R(q − t) (14)

Where: (1) X is the point in the camera frame; (2) R is the rotation matrix from the ground frame to the camera frame; (3) q is the point in the ground frame; (4) t is the camera position in the ground frame. By solving the system of equations 13 and 14 for the variable q, knowing that qz is zero, we can obtain the coordinates qx and qy and with them obtain the line points in the ground frame for both the top and bottom parts. With both coordinates obtained in the ground frame, it is possible to estimate the values of the cross-tracking error and heading error using equations 10 and 11.
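A sketch of the camera matrix method: invert equation 13 to get a ray in the camera frame, invert equation 14 to express it in the ground frame, and choose the scale that places the point on the plane qz = 0. K, R and t are assumed given by calibration:

```python
import numpy as np

def image_to_ground(u, v, K, R, t):
    """Back-project pixel (u, v) onto the ground plane q_z = 0.
    K: camera matrix (equation 12); R rotates ground coordinates
    into the camera frame and t is the camera position in the
    ground frame, as in X = R (q - t) (equation 14)."""
    p = np.array([u, v, 1.0])    # homogeneous image point
    d = np.linalg.solve(K, p)    # ray direction, camera frame (equation 13)
    d_g = R.T @ d                # same ray expressed in the ground frame
    lam = -t[2] / d_g[2]         # scale so that q_z = 0
    q = t + lam * d_g
    return q[0], q[1]            # (q_x, q_y) on the ground
```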

3.3.4 Evaluation

An evaluation was made of the tracking error estimators and it was concluded that: (1) with the attitude not being fed back, the mean between the trigonometric and camera matrix methods gave the best results; (2) with attitude feedback the camera matrix method gave better results; (3) the feedback of φ and θ is the most important. Depending on the conditions, not knowing θ may lead to instability, while knowing φ may provide a smoother control.

4. WHEELED MOBILE ROBOT CONTROL

To control the robot, the lqrd command from MATLAB was used. The tuning of the weighting matrices was done by looking at the closed-loop behavior, taking into account the oscillations and the speed of error removal. The figures presented correspond to the values obtained during a complete lap along the track.

4.1 Rasteirinho model

The control strategy for this model consists in providing a constant longitudinal velocity (V0), since there is no possible reference from the image to control this variable. If this were possible, the velocity would be regulated according to need. To control the model, the cross tracking and heading errors, ye and ψe, are used as states and the yaw rate r is used as the control variable. The model is given by the system of equations 15; this model is used in the simulations as the non-linear system to control. The system of equations presented was derived from the equations presented in the Models section.

ẋ = V0 cos(ψe)
ẏe = V0 sin(ψe)
ψ̇e = r    (15)

After selecting the states to control, ye and ψe, the model was linearized around ψe = 0. The model is then given by equation 16. This equation is used to design the controller, where U = [r] and the state is X = [ye, ψe].

Ẋ = [ 0  V0
      0  0  ] X + [ 0
                    1 ] U    (16)

In the following sections the constant velocity used was V0 = 0.4 m/s. The camera was positioned at xr = 0.22 m and zr = −0.13 m from the center of the robot frame, with a tilt angle of −45°, for both the real and the virtual camera. The control of the model is done at a 10 Hz sampling rate. The model controlled is the non-linear one presented by the system of equations 15.
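The paper designs the gain with MATLAB's lqrd (continuous-time weights, discrete gain at the sampling rate). An approximate equivalent in Python, under the assumption that a zero-order-hold discretization followed by a discrete Riccati solve is close enough (lqrd also discretizes the cost, so small numerical differences are expected):

```python
import numpy as np
from scipy.signal import cont2discrete
from scipy.linalg import solve_discrete_are

V0, Ts = 0.4, 0.1                      # forward speed [m/s], 10 Hz sampling
A = np.array([[0.0, V0], [0.0, 0.0]])  # linearized model of equation 16
B = np.array([[0.0], [1.0]])
Q = np.diag([2500.0, 156.25])          # weights of equation 17
R = np.array([[15.0]])

Ad, Bd, *_ = cont2discrete((A, B, np.eye(2), np.zeros((2, 1))), Ts)
P = solve_discrete_are(Ad, Bd, Q, R)
K = np.linalg.solve(R + Bd.T @ P @ Bd, Bd.T @ P @ Ad)
print(K)  # expected to land near the paper's K = [10.36, 3.87]
```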

4.2 Path following in simulation

In this subsection the model rasteirinho is controlled through vision in a virtual reality environment, where the tracking errors used to control the model are estimated using the methods presented in section 3.3. In simulation, the noise in the image acquisition is not considered: the loss of information, loss of frames, momentary image distortion and the noise induced in the wireless communications by various sources, such as vibrations, are not included. The virtual track was modeled with values taken from the straight lines and arcs of the track in the laboratory. The VR track is presented in figure 8.

Fig. 8. Images of the virtual track with and without extra elements


4.2.1 Results from simulation

The values used for the weighting matrices and the control gain matrix are presented in equations 17.

Q = [ 2500  0
      0     156.25 ]    R = 15
K = [ 10.36  3.87 ]    (17)

Fig. 9. Cross tracking error ye (in blue) and heading error ψe (in red) during the simulations

Fig. 10. Control action r during the simulation of the virtual rasteirinho

It is important to know that the virtual reality represents circles as sequences of straight segments. This effect over the estimation can be seen in figure 9 in both tracking errors, as systematic 'waves' along the arcs. An initial error is included in order to evaluate the system stability and speed of response. Through figure 9 it can be seen that the control was successful in regulating the error along the straight segments, keeping the error below 7 cm along the circles, except at their beginning, where the error goes up to 20 cm due to the sudden transition. The control strategy was able to effectively keep the virtual rasteirinho from getting out of the track while maintaining satisfactory system stability. Although the errors may appear to be large, it is important to note that the camera is in a high position, from where the track line is frequently in the middle of the image frame and never at risk of being lost from sight. The control actions have high values due to the need of keeping the line in sight: if the control is not fast and strong enough, the line will get out of sight and the rasteirinho will get lost, since the linear velocity has a high value.

4.3 Path following with experimental platform

For the experiment, the image enhancement and the estimation are performed using the real camera. Due to the differences between the ideal and real conditions, the previous gains led the robot to instability, hence the need to retune the control parameters. The gains were increased to have a higher derivative action and higher accuracy; the weighting matrices are presented in equations 18.

Q = [ 10000  0
      0      400 ]    R = 20
K = [ 16.69  4.95 ]    (18)

Fig. 11. Cross tracking error ye (in blue) and heading error ψe (in red) during the experiment

Fig. 12. Control action r during the experiment

It can be observed that the control of the real rasteirinho is successful despite the considerable oscillations visible in both estimations and in the control action. Through the analysis of the heading error, spikes can be observed along the track. These are caused by bumps on the floor and problems in the wireless communications of the camera. The last part of the estimations of the tracking errors was expected to be closer to zero, like the ones presented in the middle of the experiment. It was observed that reflections corrupted the estimation, but the control is robust enough to deal with it.

4.4 Comparison between results

It can be observed that both in simulation and in reality the system was successfully controlled with the use of the control strategy described.

The behaviors obtained in the simulation and during the experiment are very similar. This indicates that the VR simulation is well modeled, although there are visible differences. These may come from various sources, such as mis-dimensioning and the velocity of the experimental platform not being exactly the same.

From the differences between the simulation and experimental control gains, it can be concluded that the experimental platform rasteirinho is slower in response when compared to the virtual model. This was to be expected, since reality has problems that the ideal case, the theoretical model, does not possess.

Comparing figures 9 and 11, it can be observed that in the first curve of the track the simulation performs a cornering with a higher turn than the real one, with values of −0.72 radians (−41.3°) and −0.56 radians (−32.1°) respectively. This also originates a higher cross tracking error, as can be seen.

With the presented results, it can be concluded that the VR simulation proved to be a good tool for vision based simulations. The values obtained through the virtual and the real cameras turn out to be different, but the differences are justifiable through the analyses done.


5. QUADROTOR HOLONOMIC MODELS

In this section the models used to synthesize the control for the quadrotor are described. Afterwards a comparison between the models is made, in order to determine their performance. In order to test these control strategies and compare them on equal grounds, it was assumed that the longitudinal velocity of the system was known, despite being an unknown state. The variable V0 is considered to be the constant longitudinal velocity at which the quadrotor travels around the track. The velocity defined is 0.4 m/s. The control of the model is done at a 10 Hz sampling rate and the VR simulation time is 95 s. The h value, the quadrotor height, is set to 0.5 m. In order to prevent lateral drifts from destabilizing the system and losing control, it was chosen to add the state Vy, the lateral velocity relative to the path line, equation 19 (this side slip error is not controlled but is indirectly observed in the image). This state will prevent abrupt drifts while the position state ye reduces the position error. The variables to control, X, are then given by X = [Vx, Vy, ye, ψe]. Equations 20 are obtained from [Henriques, 2011], where the accelerations are directly related to the attitude angles.

Vy = ẏe    (19)

ẍ = −g·θr
ÿ = g·φr    (20)

In Holonomic model 1 it is considered that the longitudinal velocity is known and used for control. In Holonomic model 2 it is considered that the velocity is unknown and the only way to remove the cross tracking error is with the use of the φr control variable, although the velocity is still used to keep the quadrotor moving forward, for comparison purposes. In both of the following models the control variables are U = [φr, θr, r]. These two control strategies are compared through the results and the differences are analyzed in a later section.

5.1 Model 1

For this model it was considered that the longitudinal velocity is known and used by the controller to remove the cross tracking error. The state X is given by X = [Vx, Vy, ye, ψe] and the model is obtained using the system of equations 20 and equation 21. This last equation results from combining equations 20, the state ψe of equation 16 and the cross tracking error velocity; this velocity is controlled through the control action of the variable φr. The state Vx is controlled by θr and ψe is controlled by r.

ẏe = Vy + V0·ψe    (21)

Holonomic model 1, used to design the controller, is then given by the state-space equations presented in 22, where U = [φr, θr, r].

Ẋ = [ 0  0  0  0
      0  0  0  0
      0  1  0  V0
      0  0  0  0 ] X + [ 0   −g  0
                         g    0  0
                         0    0  0
                         0    0  1 ] U    (22)

5.2 Model 2

In this model the longitudinal velocity is considered to be unknown (except that the robot is moving forward). This way, a control that could be used in the real quadrotor, with an external longitudinal velocity control, is tested beforehand and compared with Holonomic model 1. The state X is given by X = [Vx, Vy, ye, ψe] and the model is obtained using the same equations presented in the previous model, except for ẏe. For this state, equation 23 is used.

ẏe = Vy    (23)

Holonomic model 2, used to design the controller, is then given by the state-space equations presented in 24.

Ẋ = [ 0  0  0  0
      0  0  0  0
      0  1  0  0
      0  0  0  0 ] X + [ 0   −g  0
                         g    0  0
                         0    0  0
                         0    0  1 ] U    (24)
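Both holonomic models share the same structure and differ only in the ẏe row. A sketch assembling the A and B matrices of equations 22 and 24, with state X = [Vx, Vy, ye, ψe] and input U = [φr, θr, r]; the resulting pair can be fed to the same discretization and LQR routine sketched in section 4:

```python
import numpy as np

g, V0 = 9.81, 0.4   # gravity [m/s^2] and forward speed [m/s]

def holonomic_model(known_velocity):
    """A, B of equation 22 (known_velocity=True, model 1) or
    equation 24 (known_velocity=False, model 2)."""
    A = np.zeros((4, 4))
    A[2, 1] = 1.0          # ye_dot depends on Vy
    if known_velocity:
        A[2, 3] = V0       # model 1: ye_dot = Vy + V0 * psi_e (eq. 21)
    B = np.zeros((4, 3))
    B[0, 1] = -g           # Vx_dot = -g * theta_r (eq. 20)
    B[1, 0] = g            # Vy_dot =  g * phi_r  (eq. 20)
    B[3, 2] = 1.0          # psi_e_dot = r
    return A, B
```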

5.3 Control in VR with Holonomic model 1

To design the controller the MATLAB command lqrd was used. The weighting matrices are tuned with the purpose of minimizing both the cross tracking and heading errors while maintaining a smooth control action. If the control action is too aggressive, the estimation through the camera will be corrupted due to the attitude angles. In order to evaluate the control performance, statistical analyses were made using the RMSE, Mean, Total and Maximum values. The Total value is given by equation 25, where N is the simulation sample size.

Total = Σ_{k=1}^{N} |y(k)|    (25)
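For reference, the four statistics used in the tables below can be computed as follows; whether the Mean is taken over signed or absolute values is not stated in the paper, so the plain average is assumed here:

```python
import numpy as np

def performance_stats(y):
    """RMSE, Mean, Total (equation 25) and Maximum of a signal y(k)."""
    y = np.asarray(y, dtype=float)
    return {"RMSE": float(np.sqrt(np.mean(y ** 2))),
            "Mean": float(np.mean(y)),      # assumption: signed average
            "Total": float(np.sum(np.abs(y))),
            "Maximum": float(np.max(np.abs(y)))}
```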

In this subsection, the figures and results obtained while using the control designed with Holonomic model 1 are shown, for the cases with and without attitude feedback. Afterwards, the control performed with and without the use of attitude feedback in the estimator is compared, in order to ascertain the advantage of knowing the quadrotor's attitude through communications for the control synthesized with Holonomic model 1. The control block diagram used in the VR simulation is presented in figure 13.

Fig. 13. Control block diagram of the quadrotor with vision

C, S and V are the controller, system and virtualization. A2 performs the estimation of the blob centers. B represents the estimation methods for the tracking errors. Cam holds the camera parameters: the orientation, position and intrinsic parameters.


5.3.1 Control without attitude feedback

The values used for the weighting matrices and the control gain matrix are presented in equations 26. The figures presented correspond to the values obtained during a lap along the track.

Q = [ 4000  0        0     0
      0     1000000  0     0
      0     0        4444  0
      0     0        0     2500 ]

R = [ 10  0   0
      0   15  0
      0   0   15 ]

K = [ 0        1.2701  0.1826  0.0328
      −0.7134  0       0       0
      0        0.0473  1.5446  1.9749 ]    (26)

Fig. 14. Values obtained with control in simulation: (a) cross tracking error ye (in blue) and heading error ψe (in red); (b) quadrotor attitude angle φ (in red) and the control variable φr (in blue); (c) the control angle rate r (in blue) and the heading error ψe (in red)

Since the robot rasteirinho presented waves during the simulation, a similar behavior was expected from the quadrotor (figure 14a). Knowing the origin of the 'waves', it can be concluded that the control was successful in regulating the error along the straight segments, keeping the error below 5.6 cm along the circular arcs, except at the beginning of the curve, where the error goes up to 17 cm due to the hard turn.

The control variable φr presented a smooth control with relatively small control actions considering the errors (figure 14b).

This model uses the ψ angle to remove both tracking errors and, due to this, the heading error ψe presents a high value during the cornering curves. These values are obtained due to the controller trying to compensate both errors. The control strategy was able to keep the quadrotor over the path while maintaining good stability.

It is important to note that the quadrotor has little inertia in the y axis and has no sensor for the lateral velocity of the robot; these factors make it hard to control, especially during the cornering curves.

Table 1: Statistics obtained during control without feedback

           Estimated tracking errors    Control
           ye [m]    ψe [°]             φr [°]   r [°/s]
RMSE       0.0470    5.7                1.0      12.3
Mean       0.0082    1.6                0.1      3.8
Total      28.4      3454.7             619.9    8160.7
Maximum    0.1992    24.8               5.0      48.5

It is important to note that the maximum error ye occurs during the first instants of the simulation.

5.3.2 Control with attitude feedback

In this case the weighting and gain matrices were identical to the ones used before, equation 26. The figures obtained in this case are also very similar to the ones obtained before.

Table 2: Statistics obtained during control with feedback

           Estimated tracking errors    Control
           ye [m]    ψe [°]             φr [°]   r [°/s]
RMSE       0.0501    6.0                0.9      12.3
Mean       0.0089    1.5                0.1      3.8
Total      29.2      3554.5             521.6    8174.8
Maximum    0.2335    26.8               4.5      46.5

5.3.3 Comparison

In order to compare the extra control or error obtained during the simulations, a percentage ratio was defined, equation 27.

Extra% = (Total(φ_no feedback) − Total(φ_with feedback)) / Total(φ_with feedback)    (27)

Through the comparison of the tables, it can be seen that ye has higher values when the attitude is known. This shows that the estimations had considerable errors due to the conjunction of situations where the attitude angles, mostly φ and θ, had values different from zero at the same time. The same occurs for ψe. It can also be observed that, by knowing the attitude, the control action was smoother and less demanding on the control variable φr, as can be seen by comparing the φr Total and Maximum values. This can also be concluded from the percentage difference of the Total φr control action made during flight, 18.8 %. This value is significant since it was a flight of 95 s, the time corresponding to a whole lap around the track. If big tracks or long flight courses were to be performed, this would be an important aspect to consider, since it would reduce the cost on the batteries while at the same time giving a smoother flight. In the case of the control variable r, the changes were insignificant, although the maximum value of the control action was reduced with the attitude feedback.

5.4 Control in VR with Holonomic model 2

In this subsection the figures and results obtained while using the control designed with Holonomic model 2 are shown. The statistical analyses are made, and the comparison of the control with and without attitude feedback in the estimation is made at the end of the subsection. The control block diagram used in the VR simulation is presented in figure 13.

5.4.1 Control without attitude feedback

The values used for the weighting matrices and the control gain matrix are presented in equations 28. The figures presented correspond to the values obtained during a lap along the track.

Q = [ 1  0      0    0
      0  40000  0    0
      0  0      400  0
      0  0      0    11.1111 ]

R = [ 44.4444  0        0
      0        51.0204  0
      0        0        0.5917 ]

K = [ 0        1.293  0.1283  0
      −0.1309  0      0       0
      0        0      0       3.5390 ]    (28)

Fig. 15. Values obtained with control in simulation: (a) cross tracking error ye (in blue) and heading error ψe (in red); (b) quadrotor attitude angle φ (in red) and the control variable φr (in blue); (c) the control angle rate r (in blue) and the heading error ψe (in red)

During the simulation the control was successful in regulating the error along the straight segments, keeping the error below 8.8 cm along the circular arcs, except at the beginning of the curve, where the error goes up to 26.2 cm due to the hard turn. Since this model only uses the φr angle to reduce the cross tracking error, and a smooth controlled behavior was imposed on the controller, the error has a high value. The heading error ψe presented an oscillatory behavior due to the cornering curve (figure 15b). This curve has a sharp turn to the left and then a slight curve to the right, and since the control variable r is only used to regulate the ψe error, it can be concluded that this model is fast in regulating the heading error.

Table 3: Statistics obtained during control without feedback

           Estimated tracking errors    Control
           ye [m]    ψe [°]             φr [°]   r [°/s]
RMSE       0.0673    4.2                1.0      15.0
Mean       0.0205    1.1                0.1      3.9
Total      46.4      2506.9             596.9    8871.7
Maximum    0.2624    17.4               6.2      61.6

The maximum value in this case occurs during the cornering curves, with the value 26.2 cm.

5.4.2 Control with attitude feedback

In this case the weighting and gain matrices were identical to the ones used before, equation 28. The figures obtained are also similar to the ones presented before.

Table 4: Statistics obtained during control with feedback

           Estimated tracking errors    Control
           ye [m]    ψe [°]             φr [°]   r [°/s]
RMSE       0.0673    4.3                0.9      15.3
Mean       0.0210    1.1                0.1      3.8
Total      48.3      2478.1             504.5    8769.9
Maximum    0.2236    20.5               6.8      72.5

5.4.3 Comparison

Similarly to Holonomic model 1, the estimations during the simulations without feedback had lower values; this indicates that the estimated values were lower than their actual values. This can be observed in the Mean and Total ye values. In this case, however, the Maximum ye was lower, which indicates that the control was able to achieve better performance. The ψe also suffered from underestimation but, opposite to Holonomic model 1, the Total amount of ψe was reduced. This indicates that the control performed better than before in removing the angle errors, at a reduced cost of 1.2 % in the ψ angle. The φ control action during a lap around the track was reduced by 18.1 % with the knowledge of the attitude, a similar amount to Holonomic model 1. In this case, however, the Maximum control action was higher.

5.5 Comparison between models

Comparing the control models in VR, it can be observed that Holonomic model 2 has more difficulty with fine control. This is shown by the comparison of the Total ye values, which in Holonomic models 1 and 2 were 28.4 and 46.4, with approximately equal control action values. On the other hand, the Total ψe error in Holonomic models 1 and 2 were 3455 and 2507, with similar control action values, although the control action provided by r is higher in Holonomic model 2, as visible in the Total and RMSE values. The Maximum heading error is higher in Holonomic model 1, indicating that Holonomic model 2 is more stable in terms of angle following. In the comparison of the estimations obtained during a track, figures 14a and 15a, it was verified that Holonomic model 2 presents a peculiar behavior during the cornering curve. By only using the φr angle to remove the cross tracking error, during the curve the control action made the quadrotor recover the lost position more quickly than Holonomic model 1. Regarding the tuning of the control parameters, Holonomic model 2 is faster to adjust when compared to model 1. The control of Holonomic model 2 may be applied to other velocities equal to or below 0.5 m/s. The same happens with Holonomic model 1, but this model almost loses sight of the track at 0.5 m/s and presents a highly oscillatory movement at low velocities, such as 0.2 m/s, for the same control parameters. The attitude feedback for the estimation methods is not necessary, since the control is robust enough to deal with the lack of this knowledge. Through the comparison of figures 9, 15a and 14a, it can be seen that the quadrotor has a behavior similar to the one presented by rasteirinho, although during the cornering curves the robots present considerable differences; this occurs due to the drifts of the quadrotor. Holonomic model 2 should be the one used on the experimental platform. This model is easier to calibrate and does not need the longitudinal velocity during the control, which is ideal for an external control of the longitudinal velocity.

6. CONCLUSIONS

With the completion of this work, it was concluded that:

• An image enhancement was developed and tested in the real conditions provided by the laboratory. Within the conditions of the applications, it proved to be a tool capable of cleaning the noise.
• Methods to estimate the cross tracking and heading errors were applied, with the possibility of feeding back the camera conditions to perform a better estimation.
• The effects of the quadrotor's attitude over the estimation of the tracking errors were studied, along with the need of feeding back the attitude to the error estimators during flight. It was verified that, in order to perform the control, the feedback of the attitude to the estimators is not needed; the controller is robust enough to deal with the lack of this knowledge.
• The image enhancement and the methods used to estimate the tracking variables were tested and validated on the track of the laboratory with the use of the robot rasteirinho.
• A virtual reality simulator was successfully built, used and compared to reality, with values obtained while using rasteirinho. It was observed to be a good tool to prepare the vision based experiments.
• With the validation of the virtual reality simulator, it may be assumed that the developed control for the quadrotor is well designed.
• The aim of creating a model to design the controller for the quadrotor, with the purpose of following the track of the laboratory while using airborne vision feedback, was achieved. This model was compared to another model and both the advantages and downsides were verified. It was concluded that the resulting controller is robust, safe and applicable to the real system.
• The implementation of the control approach for the real quadrotor system was prepared and is ready to test.

REFERENCES

A. Barrientos, J. Colorado, A. Martinez, and J. Valente. Rotary-wing MAV modeling and control for indoor scenarios. In Industrial Technology (ICIT), 2010 IEEE International Conference on, pages 1475–1480, March 2010. doi: 10.1109/ICIT.2010.5472486.

M.R. Blas, M. Agrawal, A. Sundaresan, and K. Konolige. Fast color/texture segmentation for outdoor robots. In Intelligent Robots and Systems, 2008. IROS 2008. IEEE/RSJ International Conference on, pages 4078–4085, September 2008. doi: 10.1109/IROS.2008.4651086.

Odile Bourquardez, Robert Mahony, Nicolas Guenard, François Chaumette, Tarek Hamel, and Laurent Eck. Image-based visual servo control of the translation kinematics of a quadrotor aerial vehicle. IEEE Transactions on Robotics, 25:743–749, 2009.

D. Cabecinhas, C. Silvestre, and R. Cunha. Vision-based quadrotor stabilization using a pan and tilt camera. 49th IEEE Conference on Decision and Control, pages 1644–1649, 2010.

Edwin John Oliveira Carvalho. Localization and cooperation of mobile robots applied to formation control. Master's thesis, Instituto Superior Técnico, 2008.

Kuen-Jan He, Chien-Chih Chen, Ching-Hsi Lu, and Lei Wang. Implementation of a new contrast enhancement method for video images. In Industrial Electronics and Applications (ICIEA), 2010 the 5th IEEE Conference on, pages 1982–1987, June 2010. doi: 10.1109/ICIEA.2010.5515573.

Bernardo Sousa Machado Henriques. Estimation and control of a quadrotor attitude. Master's thesis, Instituto Superior Técnico, 2011.

A. H. Ismail, H. R. Ramli, M. H. Ahmad, and M. H. Marhaban. Vision-based system for line following mobile robot. IEEE Symposium on Industrial Electronics and Applications, pages 642–645, 2009.

Eduardo Rondon, Luis-Rodolfo Garcia-Carrillo, and Isabelle Fantoni. Vision-based altitude, position and speed regulation of a quadrotor rotorcraft. IEEE International Conference, pages 628–633, 2010.

Qing Wang and Rabab K. Ward. Fast image/video contrast enhancement based on weighted thresholded histogram equalization. IEEE Transactions on Consumer Electronics, 53:757–764, 2007.

Qing-Li Zhou, Youmin Zhang, Yao-Hong Qu, and C.-A. Rabbath. Dead reckoning and Kalman filter design for trajectory tracking of a quadrotor UAV. In Mechatronics and Embedded Systems and Applications (MESA), 2010 IEEE/ASME International Conference on, pages 119–124, July 2010. doi: 10.1109/MESA.2010.5552088.