

Homography-Based Visual Measurement of Wheel Sinkage for a Mobile Robot

Liang Wang, Xianbiao Dai, Hehua Ju

College of Electronic Information and Control Engineering, Beijing University of Technology, Beijing, 100124, China.

Abstract: Wheel sinkage greatly affects the mobility and localization accuracy of a mobile robot on outdoor deformable terrain. A vision-based method to measure the sinkage is presented. It first computes the homography between the wheel plane in 3D space and the image plane of a mounted camera, then identifies the wheel rim in the image using the estimated homography. The difference in image intensity is used to detect wheel-terrain contact points. Because the imaging procedure is a projective transformation under which some geometric properties are not preserved, the detected contact points are transformed into the wheel plane with the estimated homography, and the wheel sinkage is finally computed in the wheel plane. System errors introduced by existing methods are thereby eliminated. Additionally, the proposed method does not need costly and complex sensors to measure the relative position between the camera and the wheel. Experimental results show its validity and feasibility.

Index Terms: Visual measurement, homography, wheel sinkage.

I. INTRODUCTION

Mobile robots are increasingly used in natural outdoor terrain, for applications such as desert traversal, mining, and planetary exploration [1], [2], [3], [4], [5], [6]. In these applications, wheel sinkage occurring at the wheel-terrain interface greatly affects the mobility of a mobile robot. On highly deformable terrain, a wheel may sink so deep into soft soil that the robot becomes immobile. It is therefore desirable to measure the wheel sinkage so that a mobile robot can adjust wheel torque to improve traction, or revise its motion plan to avoid potentially hazardous highly deformable terrain [7], [8]. Wheel sinkage measurement can also play an important role in localization: it improves the performance of dead-reckoning by giving the effective rolling radius. Additionally, wheel sinkage is an important input to terrain identification and classification algorithms [7], [9].

Earlier research mainly focused on mobile robot applications in structured or indoor environments rather than outdoor terrain, so work on measuring or estimating wheel sinkage in outdoor deformable terrain is relatively scarce. Wilcox [10] determined the sinkage of one wheel relative to another by measuring the change in an articulated suspension's configuration to improve odometry. However, the absolute sinkage is necessary for mobility analysis and terrain identification. Bauer et al. [11] presented an experimental analysis of the wheel-soil interaction, where a deformable soil model was used to represent the wheel-terrain interaction and the pressure-sinkage characteristics were

978-1-4244-6588-0/10/$25.00 ©2010 IEEE

analyzed. However, some costly and complex sensors are needed to obtain the parameters of the wheel-terrain interaction.

Reina et al. [12] proposed a method to measure the absolute wheel sinkage with a camera. A special pattern attached to the wheel was used to identify the wheel rim and the wheel plane. Contact points between the wheel and the terrain were detected by computing the intensity difference along each radial line on the wheel plane, with an angular resolution of 1° in the captured image. The depth of the wheel sinkage was computed directly from the image coordinates of these contact points. Brooks et al. [13] also proposed a vision-based method to estimate the absolute wheel sinkage, using robust image processing techniques to improve the robustness of contact point detection. They also computed the wheel sinkage in the image plane. However, the imaging procedure of a camera is a projective transformation, under which some properties of geometric entities change dramatically: a circle is transformed to an ellipse instead of a circle [14], and the values of angles and the ratios of lengths are not preserved [15]. The aforementioned vision-based methods do not take this into account and compute the wheel sinkage directly in the image plane, which introduces system errors.

A novel homography-based method for visual measurement of wheel sinkage is presented. From an image of a wheel with some special marks attached, the homography between the wheel plane in 3D space and the image plane of the mounted camera can be computed. The ellipse corresponding to the wheel rim can then be identified in the image with the estimated homography. A one-dimensional (1D) spatial filter is slid along the identified wheel rim to detect the wheel-terrain contact points from the image intensity difference. Finally, the coordinates of these contact points are transformed from the image plane of the camera to the wheel plane in 3D space with the estimated homography, and the wheel sinkage is computed in 3D space. In comparison with existing vision-based methods [12], [13], some system errors are removed. Additionally, the homography, determined by the relative position and orientation between the camera and the wheel, can be measured by the camera itself instead of by costly and complex sensors.

The rest of this paper is organized as follows. Section 2 gives some preliminaries and analyzes the system error of existing methods. The proposed method for wheel sinkage measurement is elaborated in Section 3. Experiments are reported in Section 4, and Section 5 gives some concluding remarks.


Fig. 1. Wheel sinkage described by (a) the depth Z and (b) the left (θl) and right (θr) angles. (Here R is the radius of the wheel, Pl and Pr are the left and right contact points, and V_down is the direction of gravity.)

II. PRELIMINARIES

A. Wheel Sinkage Model

The wheel sinkage of a mobile robot can be described by the depth Z to which the wheel sinks into a deformable terrain [12] (see Fig. 1(a)). It can also be evaluated by a pair of angles from the vector V_down, termed the left (θl) and right (θr) terrain interface angles as shown in Fig. 1(b), where V_down is the direction vector of gravity [13]. The former representation is straightforward, but the latter is more general and can be used on uneven terrain. The latter can also be used at the border between two types of terrain, where the left half of the wheel may still be in one type of terrain while the right half enters another. Here we use the latter representation to evaluate the wheel sinkage.
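On flat terrain the two descriptions are related by simple wheel geometry: a contact point at angle θ from V_down lies at height R(1 − cos θ) above the lowest rim point, which equals the sinkage depth Z. A minimal sketch of this relation (the helper name is ours, not from the paper):

```python
import math

def sinkage_depth_from_angle(R, theta):
    """Equivalent sinkage depth on flat terrain for a contact point at
    angle theta (radians) from V_down: z = R * (1 - cos(theta))."""
    return R * (1.0 - math.cos(theta))

# Example: a wheel of radius 0.1 m with a 30-degree interface angle
# has sunk about 13.4 mm.
z = sinkage_depth_from_angle(0.1, math.radians(30.0))
```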

B. Camera Model

In this paper, the standard pinhole camera model is used to describe the imaging geometry of a camera. The homogeneous coordinate of a three-dimensional (3D) point M = [x y z]^T is denoted as M~ = [x y z 1]^T. A 3D point is projected to a two-dimensional (2D) image point m = [u v 1]^T through a pinhole camera by

λm = K[R t]M~,    (1)

where λ is an arbitrary non-zero scale factor,

K = [ fu  γ   u0 ]
    [ 0   fv  v0 ]
    [ 0   0   1  ]

is the intrinsic matrix of the camera (fu and fv are the focal lengths in pixel units, γ the skew, and [u0 v0]^T the principal point), and R and t are respectively the 3 × 3 rotation matrix

Fig. 2. The wheel and camera coordinate systems. (Tiny circles on the wheel plane are marks for determining the homography between the wheel plane and the image plane.)

and the 3 × 1 translation vector, which relate the camera to the world coordinate system.

It is assumed that a calibrated camera (K is known) is mounted on the robot such that its field of view contains the wheel. Different from the existing methods [12], [13], there is no need to know the position t and orientation R of the camera in advance; they can be measured with the camera itself instead of with costly and complex sensors. To this end, some marks, which have a special color or shape and can be easily detected in the image, are adhered to the wheel plane (see the tiny circles on the wheel plane in Fig. 2).

The wheel of a mobile robot should be visually distinguishable from the surrounding terrain. This can be achieved by coloring the wheel plane a non-soil-like color. The wheel and its rim can then be easily identified in the image with simple intensity analysis.

C. The Coordinate Systems

In order to illustrate the wheel sinkage measurement algorithm, the following two coordinate systems are defined (see Fig. 2):

- Wheel coordinate system, XwYwZw: a rotating frame fixed on the wheel plane, with the XY-plane coinciding with the wheel plane and the Z-axis coinciding with the wheel axle. A special mark is chosen as an indicator of the X-axis. The frame rotates synchronously with the wheel, so the 3D coordinates of the marks (tiny circles shown in Fig. 2) are constant in the wheel coordinate system. We can easily identify this frame in the image by recognizing the special mark indicating the X-axis.

- Camera coordinate system, XcYcZc: a frame fixed at the center of the camera, with its X and Y axes aligned with the image axes.

D. Homography

A homography is an invertible transformation from one plane to another in projective space that maps straight lines to straight lines [15]. It can be described by a 3 × 3 non-singular matrix.


In Fig. 2, there is a homography between the image plane of the camera and the wheel plane. Since the wheel plane coincides with the XY-plane of the wheel coordinate frame, a 3D point on the wheel plane can be expressed as M~ = [x y 0 1]^T. Then with equation (1), the corresponding 2D image point of the 3D point satisfies:

λm = K[r1 r2 r3 t][x y 0 1]^T = K[r1 r2 t][x y 1]^T,    (2)

where ri is the ith column of rotation matrix R. The matrix

H = K[r1 r2 t]    (3)

is the homography from the wheel plane in 3D space to the image plane of the camera. Then geometric entities in one plane can be transformed to the other with the homography matrix H.

For a space point [x y 0 1]^T in the wheel plane, its corresponding 2D image point m in the image plane satisfies

λm = H[x y 1]^T.    (4)

The image conic C' in the image plane corresponding to a space conic C in the wheel plane can be expressed as [15]:

C' = H^{-T} C H^{-1}.    (5)

E. System errors of existing methods

For a vision-based method of wheel sinkage measurement, the imaging procedure can also be described by a homography from the wheel plane to the image plane. However, this transformation only preserves collinearity and the cross-ratio of collinear points; other geometric properties change greatly.

The value of an angle is not preserved. Let the homography be H = [h1 h2 h3]^T, where hi is the ith row of the homography matrix. The value of an angle formed by three points [xi yi 0]^T can be computed from the dot product of the two lines determined by the three points. The point corresponding to [xi yi 0]^T is

[ h1^T[xi yi 1]^T / h3^T[xi yi 1]^T,  h2^T[xi yi 1]^T / h3^T[xi yi 1]^T ]^T

in the image plane. Generally, the value of an angle formed by three points in the image plane is not equal to that of the corresponding angle in the wheel plane. For example, in the measurement system shown in Fig. 2, let fu = fv = 1000, u0 = v0 = 500, t = [0 −100 −1000]^T, and the rotation be [5π/6 0 0]^T (in Rodrigues form). The homography H can be computed by equation (3). The angle formed by the three points [0 0 0]^T, [0 −100 0]^T and [−50 −50√3 0]^T is 30° in the wheel plane, whereas the corresponding angle is 32.2223° in the image plane.

Similarly, it can be shown that the ratio of lengths is not preserved, and with equation (5) it can be proved that a circle is transformed into an ellipse. So existing vision-based methods for wheel sinkage measurement [12], [13], which directly take angles and ratios of lengths in the image plane as the wheel sinkage, will introduce system errors.

III. METHOD DESCRIPTION

Different from existing methods, the proposed method computes the wheel sinkage on the wheel plane in 3D space via the homography between the wheel plane and the image plane. The wheel sinkage is described by the angles of the contact points as shown in Fig. 1(b). First, the homography between the wheel plane and the image plane is estimated from n (n ≥ 4) marks' 2D coordinates in the image plane and 3D coordinates on the wheel plane. Second, the wheel rim is determined with the estimated homography. Then the contact points between the wheel and the terrain are identified on the wheel rim in the image plane. Finally, the identified contact points are transformed from the image plane into the wheel plane with the estimated homography, and the wheel sinkage is computed on the wheel plane. For simplicity, the proposed HomoGraphy-Based method is denoted HGB in the following, and the method of Directly Computing in the Image plane, as Reina et al. [12] and Brooks et al. [13] do, is denoted DCI.

A. Homography Estimation

The first step of the proposed method is to determine the homography between the wheel plane and the image plane of the mounted camera. As shown in Fig. 2, some special patterns (tiny circles) are marked on the wheel plane. We can identify them by detecting the intensity difference or their special shape in the image. Then, from the 2D image coordinates and 3D wheel-plane coordinates of n (n ≥ 4) correspondences, the homography H can be computed up to a non-zero scale factor by

[ Mi^T   0^T    −ui Mi^T ]
[ 0^T    Mi^T   −vi Mi^T ] [h1^T h2^T h3^T]^T = 0,    (6)

where Mi = [xi yi 1]^T are the ith point's homogeneous coordinates on the wheel plane, [ui vi]^T are the corresponding 2D image coordinates, and hj is the jth row of the matrix H.

Generally, the intrinsic parameters K of the mounted camera are fixed and calibrated off-line in advance. Substituting K into equation (3), we have [r1 r2 t] = K^{-1}H up to scale, and r3 = r1 × r2. This means that the rotation R = [r1 r2 r3] and the translation t can be determined from only one image; no other sensors are needed to measure the rotation and translation between the wheel and the mounted camera. Since the marks used to compute the homography (Fig. 2) are distributed symmetrically on the wheel plane and the X-axis of the wheel coordinate system is chosen to pass through a certain mark, all marks' 3D coordinates can be assigned counterclockwise in turn. To compute the homography H, we only need to determine the marks' 2D image coordinates by detecting and identifying them in the image.
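Equation (6) is the standard direct linear transform (DLT). A sketch of the estimation and the pose recovery just described, assuming numpy and with function names of our choosing:

```python
import numpy as np

def estimate_homography(wheel_pts, image_pts):
    """DLT estimate of H (up to scale) from n >= 4 correspondences
    between wheel-plane points (x, y) and image points (u, v)."""
    A = []
    for (x, y), (u, v) in zip(wheel_pts, image_pts):
        M = [x, y, 1.0]
        A.append(M + [0.0, 0.0, 0.0] + [-u * m for m in M])
        A.append([0.0, 0.0, 0.0] + M + [-v * m for m in M])
    # The solution is the right singular vector of A with the smallest
    # singular value; its 9 entries are the rows of H.
    _, _, Vt = np.linalg.svd(np.asarray(A))
    return Vt[-1].reshape(3, 3)

def pose_from_homography(K, H):
    """Recover R and t from K^-1 H = [r1 r2 t] (up to scale and sign)."""
    B = np.linalg.inv(K) @ H
    B = B / np.linalg.norm(B[:, 0])  # r1 is a unit-norm rotation column
    r1, r2, t = B[:, 0], B[:, 1], B[:, 2]
    r3 = np.cross(r1, r2)
    return np.column_stack([r1, r2, r3]), t
```

In practice the recovered r1, r2 are only approximately orthonormal under noise, so a final orthogonalization (e.g. via SVD of the rotation estimate) is common.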

B. Wheel Rim Identification

To identify the wheel-terrain contact points, the region of interest should first be determined. Here the region of interest is the rim of the wheel, which is a circle C in the wheel frame. With the estimated homography


Fig. 3. The 1D filter slides along the left lower quadrant of the wheel rim, with image coordinate v varying in descending order, to detect the left contact point in a captured image.

Fig. 4. Mask of the one-dimensional filter: [−1 −1 −1 −1 1 1 1 1].

H, the rim of the wheel can be projected to an ellipse C' in the image by equation (5). Then rim points in the image plane can be determined by C'.
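As a concrete sketch of this projection: the rim, a circle of radius R centered at the wheel-frame origin, is the conic C = diag(1, 1, −R²), and equation (5) maps it to the image ellipse (assuming numpy; the function name is ours):

```python
import numpy as np

def rim_image_conic(H, R):
    """Image of the wheel rim under the homography H (equation (5)).
    The rim circle x^2 + y^2 = R^2 is the conic C = diag(1, 1, -R^2);
    its image is C' = H^-T C H^-1, an ellipse in the image plane."""
    C = np.diag([1.0, 1.0, -R * R])
    H_inv = np.linalg.inv(H)
    return H_inv.T @ C @ H_inv

# Any rim point mapped by H lies on C': if m ~ H [R cos(phi), R sin(phi), 1]^T
# then m^T C' m = 0.
```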

To speed up the identification of the contact points between the wheel and the terrain, we can further narrow the region of interest. Generally the depth of the wheel sinkage is not larger than the radius of the wheel, so only the lower half of the wheel needs to be processed. The v coordinate of the image of the wheel center is used to separate the wheel rim into two sections, corresponding to the upper and lower halves of the wheel rim; only the lower half is processed (see Fig. 3). Since terrain entry generally occurs in one half of the wheel and terrain exit in the other, there are left and right contact points. To accelerate the identification, we use the u coordinate of the image of the wheel center to separate the lower part of the wheel rim into left and right sides. The region of interest is thus finally divided into two sections: the left lower and the right lower quadrants of the wheel rim.

C. Contact Points Detection

The 1D differential edge detector with central difference, Lx = [−1/2 0 1/2] * L (L denoting the image), is widely used in image processing. Here a 1D spatial filter, which is very similar to the differential edge detector, is used to detect contact points between the wheel and the terrain. Fig. 4 shows the mask of the 1D spatial filter. The captured image is transformed into a binary image by taking the intensity of the wheel center's image point as the threshold: the intensity of points corresponding to the wheel plane is denoted as 1, and the others as 0. This filter is slid separately along the left lower quadrant and the right lower quadrant of the detected wheel rim, with image coordinate v varying in descending order, to detect the two contact points pl and pr (Fig. 3 shows how the 1D filter slides along the left lower quadrant of the wheel rim).

Fig. 5. Configuration of the simulation.

The contact point in the image plane is obtained where the output of the filter first reaches its minimum. Since the 1D spatial filter is only applied along the left lower and right lower quadrants of the wheel rim, the computational cost is dramatically reduced.
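The search can be sketched as follows, under the assumption (ours) that the rim samples are ordered so the filter runs from the exposed side (wheel plane, intensity 1) toward the buried side (terrain, intensity 0), with the mask as reconstructed from Fig. 4:

```python
# Assumed 1D mask from Fig. 4: four -1s followed by four 1s.
MASK = [-1, -1, -1, -1, 1, 1, 1, 1]

def find_contact(rim_values):
    """rim_values: binary intensities (1 = wheel plane, 0 = terrain)
    sampled along one lower quadrant of the rim. Returns the sample
    index where the filter response first reaches its minimum, i.e.
    the wheel/terrain transition, or None if the response never goes
    negative (no transition found)."""
    best_i, best_r = None, 0
    half = len(MASK) // 2
    for i in range(len(rim_values) - len(MASK) + 1):
        r = sum(m * v for m, v in zip(MASK, rim_values[i:i + len(MASK)]))
        if r < best_r:
            best_i, best_r = i + half, r
    return best_i
```

On an ideal binary profile the response is −4 exactly at the window straddling the transition and larger elsewhere.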

D. Wheel Sinkage Computation

Contact points Pl and Pr on the wheel plane can be computed from their image points pl and pr using (4). Then the vector V_down is determined to compute the sinkage angles. V_down is a unit direction vector of gravity, which can be provided by sensors mounted on the robot. The wheel sinkage angles are then

θl = cos^{-1}(V_down · Pl / ||Pl||),
θr = cos^{-1}(V_down · Pr / ||Pr||),    (7)

where II P II denotes the norm of the vector P.
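Equation (7) reduces to a dot product per contact point. A 2D sketch with the wheel-plane origin at the wheel center (function name and default V_down are ours):

```python
import math

def sinkage_angles(P_l, P_r, v_down=(0.0, -1.0)):
    """Equation (7) in 2D wheel-plane coordinates (origin at the wheel
    center): angle of each contact point from the gravity direction.
    v_down must be a unit vector; angles are returned in degrees."""
    def angle_to(P):
        c = (v_down[0] * P[0] + v_down[1] * P[1]) / math.hypot(*P)
        return math.degrees(math.acos(c))
    return angle_to(P_l), angle_to(P_r)
```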

IV. EXPERIMENTS

A. Synthetic Data Experiment


The proposed method has been tested on extensive synthetic data; the result of one synthetic data experiment is reported here. The configuration of the experiment is shown in Fig. 5, with coordinate systems coinciding with those shown in Fig. 2. The rotation and translation between the camera coordinate system XcYcZc and the wheel coordinate system XwYwZw are [−2.9446 −0.2744 0.4373]^T (in Rodrigues form) and [227.6704 91.6794 785.7964]^T respectively. We take the inverse direction of Yw as the direction V_down. The ground truth (denoted G.T.) of the left and right sinkage angles is 25° and 35°. The marks used to compute the homography are shown as crosses in Fig. 5. We compute contact points in the image captured by the camera with the proposed 1D filter. Gaussian noise with zero mean and standard deviation σ is added to the marked points and contact points, with the noise level σ varied from 0 to 1.5 pixels in steps of 0.3 pixels. To make comparisons,


TABLE I
RESULTS OF SIMULATION (MEAN ± STANDARD DEVIATION, IN DEGREES)

G.T.      Method   Noise 0         0.3             0.6
θl = 25   HGB      25.000±0.000    25.002±0.158    24.999±0.313
θl = 25   DCI      30.012±0.000    30.013±0.201    30.018±0.405
θr = 35   HGB      35.000±0.000    34.999±0.157    35.003±0.308
θr = 35   DCI      27.978±0.000    27.978±0.190    27.971±0.374

G.T.      Method   Noise 0.9       1.2             1.5
θl = 25   HGB      25.013±0.467    25.001±0.629    24.998±0.774
θl = 25   DCI      30.011±0.604    30.024±0.802    30.015±1.026
θr = 35   HGB      34.999±0.461    35.005±0.638    35.015±0.772
θr = 35   DCI      27.983±0.570    27.989±0.770    27.980±0.944

Fig. 6. Configuration of real data experiment.

both the proposed HGB method and the DCI method are used to estimate the sinkage angles. At each noise level, 5000 independent trials are performed, and the mean and standard deviation of the estimated sinkage angles are computed. Results are shown in Table I. According to the law of large numbers, the means shown in Table I should be approximately equal to the values estimated without noise. Comparing with the ground truth, there is approximately no deviation for the HGB method, while the DCI method shows deviations of about 5.012° for the left sinkage angle and −7.022° for the right sinkage angle. This means that the proposed HGB method eliminates the system errors introduced by the DCI method.

Many simulations in different configurations have been performed. They all show deviations between the ground truth and the values estimated with the DCI method, even without noise, i.e. the DCI method suffers from system errors. The deviations caused by these system errors may be positive or negative, and their values vary with the relative position between the wheel and the camera. The proposed HGB method eliminates the system errors introduced by the DCI method. The synthetic data experiments thus show the validity of the proposed HGB method and the deficiency of the DCI method.

B. Real Image Data Experiment

The proposed method has also been tested on our lunar rover prototype in simulated outdoor terrain. To make a comparison, both the HGB and the DCI methods are performed. As the synthetic data experiments show, the deviations caused by system

Fig. 7. The captured image used for measurement.

errors of the DCI method vary with the relative rotation and position between the wheel and the camera, so it is not meaningful to give statistics over a number of instances; real image data experiments are instead reported individually. One experiment is described here. Fig. 6 shows the configuration of the experiment. The mobile robot stands on planar ground, and we pile sand around the wheel to simulate sinkage in soft terrain. In this configuration, the direction vector V_down is vertical. The camera is mounted on a tripod instead of on the robot. A 2D pattern adhered to the wheel plane is used to calibrate the camera [16], [17]. Once the camera is calibrated, the 2D pattern is no longer needed, and only one image is required to perform wheel sinkage measurement. The homography between the wheel plane and the image plane is computed from the 2D image coordinates and 3D space coordinates of eight screws on the wheel plane. The captured image is shown in Fig. 7, where the screws are marked with crosses. With the estimated homography and the radius of the wheel rim, we can determine the ellipse corresponding to the wheel rim in the image plane using (5).

Contact points are detected along the identified wheel rim with the proposed 1D filter. The detected contact points' coordinates are [223.2638 916.4360]^T and [452.6694 927.1210]^T respectively, as shown in Fig. 8. The contact points are transformed into the wheel plane with the estimated homography, and the angles corresponding to the wheel sinkage are obtained from the contact points and the vertical vector V_down by the simple trigonometric computation of (7). To verify the proposed HGB method, we also use a vernier caliper to measure the depth of wheel sinkage and compute the corresponding sinkage angles, which are taken as ground truth. We also estimate the wheel sinkage with the DCI method for comparison. Results are shown in Table II. The difference between the values estimated by the proposed HGB method and the ground truth is smaller than that of the DCI method, which shows the validity and feasibility of the proposed method.


Fig. 8. Contact points detected with the proposed method.

TABLE II
RESULTS OF THE REAL DATA EXPERIMENT (IN DEGREES)

Method   Left contact angle   Right contact angle
G.T.     22.6384              39.6974
DCI      23.5883              36.1568
HGB      22.0543              41.1345

V. CONCLUSIONS

A novel vision-based method for robot wheel sinkage measurement is presented. It differs from the existing DCI methods in that the sinkage computation is performed on the 3D wheel plane. The proposed method thus accounts for the fact that the imaging process of a vision system is a projective transformation, under which some geometric properties change dramatically: a circle is transformed into an ellipse, and the values of angles and the ratios of lengths are not preserved. The system errors introduced by DCI methods are thereby eliminated. Additionally, no costly and complex sensor is needed to determine the relative position between the wheel and the camera. The results of synthetic and real image data experiments show the validity and feasibility of the proposed method, which can be used to gain information about the wheel sinkage to improve mobility and localization accuracy. We will further improve the robustness and efficiency of the proposed method and implement it on our lunar rover prototype for real-time use.

ACKNOWLEDGMENT

This work was supported by the Open Project Program of the National Laboratory of Pattern Recognition (No. 09-3-2), the Beijing Municipal Natural Science Foundation (No. 4082004) and the Research Fund of Beijing University of Technology (No. 00200054K4005, 00200054RD001).

REFERENCES

[1] D. Bapna, E. Rollins, J. Murphy, M. Maimone, W. L. Whittaker, and D. Wettergreen, "The Atacama Desert Trek: outcomes," in Proc. of IEEE International Conf. on Robotics and Automation, vol. 1, 1998, pp. 597-604.

[2] A. Le, D. Rye, and H. Durrant-Whyte, "Estimation of track-soil interactions for autonomous tracked vehicles," in Proc. of IEEE International Conf. on Robotics and Automation, vol. 2, 1997, pp. 1388-1393.

[3] J. Cunningham, P. Corke, H. Durrant-Whyte, and M. Dalziel, "Automated LHDs and underground haulage trucks," Australian Journal of Mining, 1999, pp. 51-53.

[4] R. Volpe, "Rover functional autonomy development for the Mars mobile science laboratory," in Proc. of the IEEE Aerospace Conference, vol. 2, 2003, pp. 643-652.

[5] Y. K. Tiwari, K. P. Pandey, and P. K. Pranav, "A review on traction prediction equations," Journal of Terramechanics, 2009, doi:10.1016/j.jterra.2009.10.002.

[6] J. J. Biesiadecki, C. Leger, and M. W. Maimone, "Tradeoffs between directed and autonomous driving on the Mars Exploration Rovers," The International Journal of Robotics Research, 2007, 26(1):91-104.

[7] K. Iagnemma and S. Dubowsky, Mobile Robots in Rough Terrain: Estimation, Motion Planning, and Control with Application to Planetary Rovers, Springer Tracts in Advanced Robotics Series, vol. 24, 2004.

[8] G. Ishigami, G. Kewlani, and K. Iagnemma, "Statistical mobility prediction for planetary surface exploration rovers in uncertain terrain," in Proc. of the 2010 IEEE International Conference on Robotics and Automation, 2010.

[9] M. Happold and M. Ollis, "Autonomous learning of terrain classification within imagery for robot navigation," in Proc. of IEEE International Conference on Systems, Man and Cybernetics, 2006, pp. 260-266.

[10] B. Wilcox, "Non-geometric hazard detection for a Mars microrover," in Proc. of AIAA Conf. on Intelligent Robotics in Field, Factory, Service, and Space, vol. 2, 1994, pp. 675-684.

[11] R. Bauer, W. Leung, and T. Barfoot, "Experimental and simulation results of wheel-soil interaction for planetary rovers," in Proc. of IEEE/RSJ International Conference on Intelligent Robots and Systems, vol. 1, 2005, pp. 586-591.

[12] G. Reina, L. Ojeda, A. Milella, and J. Borenstein, "Wheel slippage and sinkage detection for planetary rovers," IEEE/ASME Trans. on Mechatronics, 2006, 11(2):185-195.

[13] C. Brooks, K. Iagnemma, and S. Dubowsky, "Visual wheel sinkage measurement for planetary rover mobility characterization," Autonomous Robots, 2006, 21(1):55-64.

[14] J. Heikkilä, "Geometric camera calibration using circular control points," IEEE Trans. on Pattern Analysis and Machine Intelligence, 2000, 22(10):1066-1077.

[15] R. Hartley and A. Zisserman, Multiple View Geometry in Computer Vision, Cambridge University Press, Cambridge, UK, 2004.

[16] Z. Zhang, "A flexible new technique for camera calibration," IEEE Trans. on Pattern Analysis and Machine Intelligence, 2000, 22(11):1330-1334.

[17] X. Q. Meng and Z. Y. Hu, "A new easy camera calibration technique based on circular points," Pattern Recognition, 2002, 36(5):1155-1164.
