
International Journal of Automotive Technology, Vol. 10, No. 2, pp. 219−228 (2009)

DOI 10.1007/s12239−009−0026−0

Copyright © 2009 KSAE


SENSOR FUSION-BASED LANE DETECTION FOR LKS+ACC SYSTEM

H. G. JUNG1,2)*, Y. H. LEE1), H. J. KANG1) and J. KIM2)

1)MANDO Global R&D H.Q., 413-5 Gomae-dong, Giheung-gu, Yongin-si, Gyeonggi 446-901, Korea
2)School of Electrical and Electronic Engineering, Yonsei University, Seoul 120-749, Korea
*Corresponding author. e-mail: [email protected]

(Received 20 May 2008; Revised 23 September 2008)

ABSTRACT−This paper discusses the market trends and advantages of a safety system integrating LKS (Lane Keeping System) and ACC (Adaptive Cruise Control), referred to as the LKS+ACC system, and proposes a method that utilizes the range data from ACC for lane detection. The overall structure of lane detection is the same as the conventional method using monocular vision: EDF (Edge Distribution Function)-based initialization, sub-ROIs (Regions Of Interest) for left/right and distance-based layers, steerable filter-based feature extraction, and model fitting in each sub-ROI. The proposed method adds only a step that confines the lane detection ROI to the free space established by range data. Experimental results indicate that such a simple adaptive ROI can overcome occlusion of lane markings and disturbance from neighboring vehicles.

KEY WORDS: Lane detection, Sensor fusion, Lane keeping system, Adaptive cruise control

1. INTRODUCTION

1.1. Background: Popularization of the LKS+ACC System

Adaptive cruise control (ACC) is a driver convenience system that adds headway time control, which maintains the distance to the preceding vehicle within a preset headway time, to conventional cruise control, which maintains a preset speed if there is no preceding vehicle. The lane keeping system (LKS) is a driver convenience system which assists a vehicle in maintaining its driving lane. These two systems have been developed as separate systems (Bishop, 2005). However, as the adoption rate of ACC rises and various marketable embedded vision systems emerge, the LKS+ACC system integrating both functions has attracted more interest.

Major Japanese automakers have already produced LKS+ACC systems. The LKS of Toyota (or Lexus) maintains its driving lane only if ACC is operating; if ACC is not operating, it warns the driver of lane departure by a torque pulse (Toyota, 2004; The Tundra Solutions, 2006). Application vehicles include the Lexus LS460 (Lexus, 2008) and the Crown Majesta (The Auto Channel, 2004). Nissan has also developed a system integrating LKS and ACC (University of Twente, 2003), which it has applied to the Cima (Nissan, 2001). Honda developed the Honda Intelligent Driver Support System (HiDS), integrating Intelligent Highway Cruise Control (IHCC), corresponding to ACC, and the Lane Keeping Assist System (LKAS), corresponding to LKS (Honda, 2006a). Application vehicles are the Accord (PistonHeads, 2006), the Legend (Honda, 2006b), and the Inspire (Honda, 2006c).

CHAUFFEUR II is a European project, completed in 2003, that aimed at developing truck platooning and the integration of LKS and ACC. In particular, the project proposed a system integrating LKS and Smart Distance Keeping (SDK), corresponding to ACC, and named it CHAUFFEUR Assistance (Fritz et al., 2004).

1.2. Advantages of the LKS+ACC System

1.2.1. Reduction of driver’s workload

A considerable portion of traffic accidents is caused by driver carelessness and improper driving maneuvers. In particular, the burden of long hours of driving fatigues drivers, resulting in traffic accidents. Although conventional ACC and LKS can relieve the driver's workload, the LKS+ACC system is expected to provide greater workload relief. An analysis of the effect of CHAUFFEUR Assistance on the driver using a driving simulator confirmed that driving stability was enhanced and the driver's weariness was reduced compared with the separate systems (Hogema, 2003). Vehicle testing of the Honda HiDS showed that 88% of test subjects felt their workload was reduced, and eye gaze pattern analysis indicated that drivers with HiDS observed a wider field of view (FOV) (Bishop, 2008).

1.2.2. Increase of traffic system capacity

The analysis of CHAUFFEUR Assistance also showed that drivers tended to maintain a smaller headway time and change lanes less (Hogema, 2003). It was found that the LKS+ACC system gave a greater increase in traffic capacity than either LKS or ACC alone. Experts have predicted that the LKS+ACC system would provide a remarkable increase in traffic capacity when the lane width is narrow (Arem and Schermers, 2003).

1.2.3. Enhancement of control performance

With regard to lane keeping control, if ACC is not operating, it is hard to predict the time to cross (TTC). Conversely, if ACC controls the vehicle speed, LKS can easily design and follow the driving trajectory; as a result, the control performance is enhanced (Cho et al., 2006).

With regard to ACC, if preceding roadway information acquired by LKS is provided, ACC can implement proper speed control that takes the shape of the road into consideration. For example, speed control on curves realizes cruise control that suits the driver's preference by controlling the speed according to the road's curvature. Speed control at exits reduces the driver's operating load by controlling deceleration when the car enters an exit lane (Denso, 2004a).

1.2.4. Enhancement of recognition performance

Using lane information acquired by LKS, ACC can recognize the preceding vehicle on a curved road. Preceding vehicle detection using only long range radar (LRR) is complicated by the need to eliminate noise caused by vehicle movement and vibration; radar-based obstacle recognition can be enhanced by using the image portion corresponding to the obstacle's position.

One of the major disturbances of lane detection is occlusion by the preceding vehicle. The position information of the preceding vehicle therefore makes the lane detection algorithm simpler and more robust. Otherwise, the lane detection algorithm is complicated by the need to handle various cases, including those in which the preceding vehicle occludes lane markings.

1.2.5. Benefits of ECU integration

In order to enhance the recognition performance of LKS and ACC, low-level fusion between image information and range information is essential, but low-level fusion between separate LKS and ACC units imposes an excessive traffic load on the communication channel. In order to enhance control performance, an extended vehicle model incorporating lateral and longitudinal motion is needed, and the vehicle trajectory should be designed comprehensively. Therefore, a high-performance integrated electronic control unit (ECU) is expected to implement one vehicle model. Denso supplied the LKS+ACC ECU to Toyota, which in turn developed the fusion ECU, which processes all sensor information, including the vision sensor, the radar sensor, and the lidar sensor, and sends control commands to the active steering system and the active braking system (Denso, 2004a, 2004b).

1.3. Adaptive ROI-Based Lane Detection

The lane detection system proposed in this paper is fundamentally based on the monocular vision-based lane detection systems published by McCall and Trivedi (2006) and Guo et al. (2006). The forward scene is divided into three layers according to distance and again divided into the left hand side (LHS) and the right hand side (RHS). In these six regions, lane features are searched for locally. Lane feature pixels are detected by a steerable filter and are approximated by a line or a parabola. The orientation of the steerable filter is initialized by peak detection of the edge distribution function (EDF) and then set according to the lane feature state predicted by temporal tracking. The regions of the lowest layer are fixed, but the regions of the second and third layers are set dynamically.

The conventional lane detection system works well when there is no obstacle in the vicinity. With the recent adoption of the high dynamic range CMOS (HDRC) camera, traditional problems such as driving against the sun and through tunnels have been overcome (Hoefflinger, 2007). However, if the preceding vehicle occludes lane markings or a vehicle in the adjacent lane approaches, lane features become lost or too small, and the edges of the obstacle start to disturb lane detection. To overcome such problems, ROI establishment based on precise trajectory prediction using vehicle motion sensors and lane feature verification-based outlier rejection have been incorporated (McCall and Trivedi, 2006).

Assuming that the disturbance of neighboring vehicles occurs because the system has no knowledge of free space, our previous paper proposed that simply confining the ROI to free space can efficiently prevent the disturbance of neighboring vehicles (Jung et al., 2008). In this paper, this idea is referred to as adaptive ROI-based lane detection. Furthermore, in the case of the LKS+ACC system, because a range sensor is already installed for the ACC function, lane detection performance can be improved without adding a sensor. Experimental results confirm that the proposed method can detect lanes successfully, even in cases where conventional methods fail because of neighboring vehicles. Compared with our previous paper, this paper adds an algorithm which can account for traffic signs on the ground, as well as a quantitative evaluation. In particular, our experimental results show not only daytime performance but also nighttime performance.

Figure 1. Lane shape depends on road shape.

2. CONVENTIONAL SYSTEM: MONOCULAR VISION-BASED LANE DETECTION

The basic lane detection algorithm is implemented based on state-of-the-art methods (McCall and Trivedi, 2006; Guo et al., 2006).

2.1. Three Layered ROI Structure

Lane markings have different shapes according to the shape of the road, as shown in Figure 1. If the road is straight, as in Figure 1(a), all lane markings, both near and far, can be approximated as a straight line. If the road is curved, as in Figure 1(b), lane markings at near and far distances should be approximated as a straight line and a curve, respectively.

The ROI should be established such that the searching area is minimized but still contains the lane features; a desirable ROI includes the lane features and excludes image portions belonging to other objects. Considering the fact that the lane becomes smaller as distance increases, the searching area is divided into three layers whose sizes decrease gradually, and each layer is then divided into a LHS and a RHS. Consequently, six sub-ROIs are established, as shown in Figure 2. The height of the available searching area changes according to the camera configuration, and the height of each layer is defined as a ratio to the height of the available searching area. Sub-ROIs I and IV, nearest to the subject vehicle, are established and fixed, and the sub-ROIs of the second and third layers are established using the lane detection results of the layer below them (Guo et al., 2006). In other words, the detected lane of sub-ROI I determines the location of sub-ROI II, and the detected lane of sub-ROI II determines the location of sub-ROI III.

Figure 2. Three layered ROI structure.
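To make this bookkeeping concrete, the following Python sketch shows one way the six sub-ROIs could be organized; the layer height ratios, the half-width, and all function names are illustrative assumptions rather than values from the paper.

from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class SubROI:
    x0: int  # left
    y0: int  # top
    x1: int  # right
    y1: int  # bottom

def build_sub_rois(img_w: int, img_h: int, horizon_y: int,
                   layer_centers: List[Tuple[int, int]],
                   layer_ratios=(0.5, 0.3, 0.2),
                   half_width=80) -> List[SubROI]:
    """Divide the searching area below the horizon into three distance
    layers (near to far) and split each layer into LHS/RHS rectangles.
    layer_centers holds one (left_cx, right_cx) pair per layer: the first
    pair is fixed, while the pairs for layers 2 and 3 come from the lane
    model fitted in the layer below (see Section 2.4)."""
    search_h = img_h - horizon_y
    rois, y_bottom = [], img_h
    for ratio, (left_cx, right_cx) in zip(layer_ratios, layer_centers):
        y_top = int(y_bottom - ratio * search_h)
        for cx in (left_cx, right_cx):
            rois.append(SubROI(max(0, cx - half_width), y_top,
                               min(img_w, cx + half_width), y_bottom))
        y_bottom = y_top
    return rois  # order: [I, IV, II, V, III, VI]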

2.2. Steerable Filtering

Lanes appear as slanted edge lines in the lane searching region. If the slope of the lane feature is known a priori, the steerable filter can detect lane features more efficiently than general edge detection methods (McCall and Trivedi, 2006; Guo et al., 2006).

The steerable filter is defined using the two-dimensional (2D) Gaussian function of equation (1). If the lane marking is regarded as a line having width, the second derivative is used (McCall and Trivedi, 2006); if the inner edge of the lane marking is regarded as the lane feature, the first derivative is used (Guo et al., 2006; Mineta, 2003). In our research, the first derivatives defined in equations (2) and (3) are used. Equation (2) is the derivative of (1) in the x-axis direction (θ=0°) and (3) is the derivative of (1) in the y-axis direction (θ=90°). It is noteworthy that equations (1) to (3) define 2D masks.

G(x, y) = e^{-(x^2 + y^2)}    (1)

G_1^{0°} = ∂/∂x e^{-(x^2 + y^2)} = -2x·e^{-(x^2 + y^2)}    (2)

G_1^{90°} = ∂/∂y e^{-(x^2 + y^2)} = -2y·e^{-(x^2 + y^2)}    (3)

The first derivative of equation (1) in a specific direction θ is defined using equations (2) and (3), as in equation (4) (Guo et al., 2006; Mineta, 2003). The filter defined in equation (4) outputs a strong response to edges perpendicular to the specific direction θ and a weaker response as the angular difference increases. Because the possibility that edges of shadows and stains have the same orientation as a lane feature is very low, a steerable filter tuned to the a priori known lane feature direction can selectively detect lane features, i.e. lane feature pixels. Figure 3(a) is an input image, and (b) and (c) are the outputs of the steerable filter tuned to −45° and 45°, respectively.

G_1^θ = cos(θ)·G_1^{0°} + sin(θ)·G_1^{90°}    (4)

Figure 3. Lane feature pixels detected by tuned steerable filter.
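A minimal Python sketch of equations (1) to (4) follows; the kernel radius and the percentile used for binarization are assumptions, as the paper does not specify its implementation at this level.

import numpy as np
from scipy.ndimage import convolve

def steerable_kernels(radius=3):
    """Sample G1_0 and G1_90 (equations (2) and (3)) on a 2D grid."""
    ax = np.arange(-radius, radius + 1, dtype=float)
    x, y = np.meshgrid(ax, ax)
    g = np.exp(-(x**2 + y**2))          # equation (1)
    return -2.0 * x * g, -2.0 * y * g   # equations (2) and (3)

def steer_response(image, theta_deg):
    """Equation (4): response of the filter steered to direction theta."""
    g1_0, g1_90 = steerable_kernels()
    theta = np.deg2rad(theta_deg)
    kernel = np.cos(theta) * g1_0 + np.sin(theta) * g1_90
    return convolve(image.astype(float), kernel)

# Usage: lane feature pixels are the strong responses, cf. Figure 3;
# the 98th-percentile threshold is an illustrative choice.
# response = steer_response(gray, -45.0)
# feature_pixels = np.abs(response) > np.percentile(np.abs(response), 98)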

2.3. Edge Distribution Function

The EDF is used to initialize the orientation parameter of the steerable filter. The EDF is the histogram of edge pixel directions with respect to angle (Guo et al., 2006; Nishida et al., 2005). Equation (5) defines the gradient of pixel (x, y): Dx denotes the intensity variation with respect to the x-axis and Dy denotes the intensity variation with respect to the y-axis. The gradient is approximated by the Sobel operator. With Dx and Dy, the edge direction at pixel (x, y) is defined as in equation (6). After the edge direction of all pixels in the ROI is calculated using equation (6), the EDF is constructed by accumulating pixel occurrences with respect to edge direction. Figure 4(a) shows the Sobel operator result for Figure 3(a) and Figure 4(b) is the constructed EDF.

∇I(x, y) = (∂I/∂x, ∂I/∂y)^T ≈ (D_x, D_y)^T    (5)

θ(x, y) = tan^{-1}(D_y / D_x)    (6)

After dividing the EDF into two regions with respect to 90°, the maximum peak of each region is detected, as shown in Figure 4(b). The left portion corresponds to sub-ROI I of Figure 2 and the right portion corresponds to sub-ROI IV. As mentioned above, lane features in the lowest layer can be approximated by a line, and the angle of the detected peak represents the direction of the lane feature in each sub-ROI. Therefore, the angle corresponding to the detected peak is used to initialize the orientation parameter of the steerable filter.

Figure 4. EDF construction and peak detection.
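The EDF construction and the two-peak initialization can be sketched as follows; the one-degree bins follow the histogram description above, while the gradient-magnitude gate is an assumption added to suppress near-flat pixels.

import numpy as np
from scipy.ndimage import sobel

def edf(gray, mag_thresh=30.0):
    """Build the EDF: a histogram of edge directions (equation (6))
    over pixels whose Sobel gradient magnitude is significant."""
    dx = sobel(gray.astype(float), axis=1)   # Dx: variation along x
    dy = sobel(gray.astype(float), axis=0)   # Dy: variation along y
    mag = np.hypot(dx, dy)
    theta = np.rad2deg(np.arctan2(dy, dx)) % 180.0   # direction, 0..180 deg
    hist, _ = np.histogram(theta[mag > mag_thresh], bins=180, range=(0, 180))
    return hist

def initial_orientations(hist):
    """Split the EDF at 90 degrees and take the maximum peak of each half:
    the left half initializes sub-ROI I, the right half sub-ROI IV."""
    left_peak = int(np.argmax(hist[:90]))
    right_peak = 90 + int(np.argmax(hist[90:]))
    return left_peak, right_peak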

2.4. Lane Feature Detection

Lane feature detection consists of steerable filtering, Hough transformation, inner edge point detection, and model fitting. Figure 5 presents the procedure of initial lane feature detection. A steerable filter tuned to the a priori known lane direction, followed by binarization, detects lane feature pixels. Using the lane feature pixels, a Hough transform finds the lane feature. The second column of Figure 5 shows the Hough transform results; the horizontal and vertical axes represent the parameters of the linear lane model. In the case of the lowest layer, the orientation of the steerable filter is set using the EDF; in the case of the other layers, it is set by lane feature state tracking, which is explained below.

The initial lane feature found using the Hough transform is a linear approximation of the pixels showing a strong response to the steerable filter tuned to a specific direction, obtained by the voting method. By searching for the edge point from the lane feature toward the image center, the inner edge points are detected, as shown in Figure 6 (McCall and Trivedi, 2006).

The inner edge points detected in the first layer are fitted to a line using least squares (LS) linear regression. The line is represented by two parameters, as shown in equation (7), where the horizontal image direction is the x-axis and the vertical image direction is the y-axis. The cross-point of the line and the border between the first and second layers is used as the center x coordinate of the second layer sub-ROI (McCall and Trivedi, 2006).

y = a·x + b    (7)

In the second layer sub-ROI, lane feature pixels are detected by the steerable filter and then the inner edge points are detected. The detected inner edge points are fitted to a curve using LS quadratic regression. The curve is represented by the three parameters of a parabola, as given by equation (8). The intercept of the curve defined by the quadratic fit and the border between the second and third layers is used as the center x coordinate of the third layer sub-ROI (McCall and Trivedi, 2006).

y = a·x^2 + b·x + c    (8)
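As a sketch, the per-layer least squares fits of equations (7) and (8) map directly onto numpy.polyfit; the helper names and the point format are our assumptions.

import numpy as np

def fit_layer1(inner_edge_points):
    """Fit y = a*x + b (equation (7)) to the inner edge points of the
    lowest layer; points are (x, y) image coordinates."""
    x, y = np.asarray(inner_edge_points, dtype=float).T
    a, b = np.polyfit(x, y, deg=1)
    return a, b

def fit_layer23(inner_edge_points):
    """Fit y = a*x^2 + b*x + c (equation (8)) for the upper layers."""
    x, y = np.asarray(inner_edge_points, dtype=float).T
    a, b, c = np.polyfit(x, y, deg=2)
    return a, b, c

def next_layer_center_x(a, b, border_y):
    """Cross-point of the layer-1 line with the horizontal border between
    layers, used as the x-center of the next layer's sub-ROI; assumes a
    non-horizontal line (a != 0)."""
    return (border_y - b) / a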

2.5. Lane Feature Tracking

The left and right lines are determined by fitting the inner edge points detected in the three layers for the LHS and RHS of the image, respectively. The orientation and offset of the left lane and the orientation and offset of the right lane are used as the lane feature state. The lane feature state is tracked by a Kalman filter so that it is robust to external disturbance (McCall and Trivedi, 2006). The confidence level of the detected lane feature is measured, and if it drops below a pre-defined threshold, the EDF-based initialization procedure is called. Therefore, when the subject vehicle changes lanes, the lane features can be detected in spite of the abrupt state-variable change.

Figure 5. Initial lane feature detection.

Figure 6. Detected inner edge points.

Figure 7. Dynamically established second and third sub-ROIs.

Figure 8. The tracked lane feature state is the output of lane detection.

The lane feature state is tracked so that it can be used to set the direction parameter of the steerable filter in the next frame. Furthermore, it is used as the lane information for lane keeping control and preceding vehicle recognition. Using the lane feature state instead of the instantaneous lane feature detected in each frame prevents performance degradation when the lane marking is disconnected or occluded by neighboring vehicles. Figure 8 presents an example of a tracked lane feature state.
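A minimal sketch of such a tracker follows, assuming a four-dimensional state (left/right orientation and offset), a random-walk transition, and illustrative noise covariances; the paper does not give its filter design at this level of detail.

import numpy as np

class LaneStateKF:
    """Kalman tracker for the lane feature state
    [theta_left, offset_left, theta_right, offset_right]."""
    def __init__(self, q=1e-3, r=1e-1):
        self.x = np.zeros(4)          # tracked lane feature state
        self.P = np.eye(4)            # state covariance
        self.Q = q * np.eye(4)        # process noise (random-walk model)
        self.R = r * np.eye(4)        # measurement noise

    def predict(self):
        # Identity transition: the state is assumed to change slowly
        # between frames; the prediction also steers the filter in the
        # next frame (Section 2.5).
        self.P = self.P + self.Q
        return self.x

    def update(self, z, confidence, conf_thresh=0.5):
        # The paper re-initializes via the EDF when confidence drops
        # below a threshold; here we simply coast on the prediction.
        if confidence < conf_thresh:
            return self.x
        K = self.P @ np.linalg.inv(self.P + self.R)
        self.x = self.x + K @ (np.asarray(z, dtype=float) - self.x)
        self.P = (np.eye(4) - K) @ self.P
        return self.x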

In our previous paper (Jung et al., 2008), one of the open problems was traffic signs on the ground, which are painted on the road and cannot be distinguished using range data. A horizontal edge count between the LHS and RHS lane features enables the system to recognize the existence of such a traffic sign. As the time needed to pass the traffic sign is short, the system can eliminate its effects by using the area around the previously recognized lane as the new ROI. Figure 9(a) shows a lane feature correctly recognized using the proposed algorithm and Figure 9(b) shows the result without it.

Figure 9. Narrowly established ROI to cope with traffic signs on the ground.
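One plausible reading of this heuristic in code, with the edge threshold, count threshold, and band width all assumptions:

import numpy as np
from scipy.ndimage import sobel

def ground_sign_present(gray, left_x, right_x, y0, y1,
                        edge_thresh=60.0, count_thresh=500):
    """Count horizontal edges (strong vertical gradient) in the strip
    between the tracked LHS and RHS lane features."""
    strip = np.asarray(gray, dtype=float)[y0:y1, left_x:right_x]
    horizontal_edges = np.abs(sobel(strip, axis=0)) > edge_thresh
    return int(horizontal_edges.sum()) > count_thresh

def narrowed_roi(prev_lane_x, y0, y1, band=20):
    """While the sign passes underneath, search only a narrow band
    around the previously recognized lane position."""
    return (prev_lane_x - band, y0, prev_lane_x + band, y1)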

3. RANGE DATA-BASED ROI ESTABLISHMENT

According to a recently published survey of vision-based lane detection, vision-based lane detection generally consists of five components: road marking extraction, post-processing, road modeling, vehicle modeling, and position tracking (McCall and Trivedi, 2006).

Reviewing the development direction of each component, one common objective can be identified. The main challenge of road marking extraction is overcoming external disturbances, such as shadows and stains, and focusing only on the lane feature; the steerable filter used in this paper is adopted to improve lane detection performance by focusing on edges having the expected orientation. Post-processing is aimed at eliminating falsely detected lane features caused by external disturbances, using a priori knowledge of the road and lane. Road modeling, vehicle modeling, and position tracking are aimed at efficiently narrowing the searching area by formularizing the lane marking shape, the vehicle motion, and the lane marking motion. In other words, they establish the ROI only in a region where the lane feature is expected to appear in the next frame, considering the current position of the lane marking, the vehicle motion, and the lane marking structure. Consequently, external disturbances can be ignored and lane detection performance can be improved. The common objective of these developments is thus minimizing the effect of external disturbances. We pay attention to the fact that external disturbances are inevitable because they are caused by the dimension reduction from the 3D world to a 2D image. This means that once an external disturbance can be identified in advance, complicated post-processing and modeling can be simplified.

Assuming that the most important external disturbances in lane detection are neighboring objects, including the preceding vehicle, adjacent vehicles, and the guide rail, it can be expected that simply confining the lane feature searching area to the free space ensured by range data will improve lane detection performance. When the preceding vehicle approaches near the subject vehicle, it occludes lane markings, and the edges of its appearance can be falsely detected as a lane feature. Because the side surface edges of an adjacent vehicle are almost parallel to lane markings, they can be falsely recognized as lane features when the adjacent vehicle approaches near the subject vehicle or an incorrectly established ROI is used. The shadow of an adjacent vehicle causes many problems, even when the adjacent vehicle does not come near the subject vehicle. In particular, a cutting-in vehicle is an external disturbance which is difficult to identify, as it is related to the update speed of lane feature tracking (i.e. the response time). However, it has been found that once the road surface covered by vehicles is rejected using range data, lane detection can simply ignore all edges generated by the appearance of neighboring objects. Furthermore, it is noteworthy that such a procedure can be implemented by a simple operation: finding the image positions corresponding to the range data and masking the area off from the ROI. Denoting the image pixel coordinates by (xi, yi) and the world coordinates of the range sensor by (Xw, Yw, Zw), these two coordinate systems are related by the homography H as follows:

[X_b, Y_b, Z_b]^T = H·[x_i, y_i, 1]^T    (9)

(X_w, Z_w) = (X_b/Z_b, Y_b/Z_b)    (10)

In order to acquire coordinates on the road surface, Yw is set to 0. The homography H of equation (9) is defined as in equation (11), where hc denotes the camera height, f the focal length, and θ and ϕ denote the yaw angle and tilt angle of the camera, respectively (Jung et al., 2004).

    ⎡ -h_c·cosθ    -h_c·sinθ·sinϕ    f·h_c·sinθ·cosϕ ⎤
H = ⎢  h_c·sinθ    -h_c·cosθ·sinϕ    f·h_c·cosθ·cosϕ ⎥    (11)
    ⎣  0            cosϕ              f·sinϕ          ⎦
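Equations (9) to (11) can be sketched in Python as follows; f is assumed to be the focal length in pixels and the pixel coordinates are assumed to be relative to the principal point.

import numpy as np

def ground_homography(hc, theta, phi, f):
    """H of equation (11): hc camera height, theta yaw, phi tilt,
    f focal length in pixels."""
    st, ct = np.sin(theta), np.cos(theta)
    sp, cp = np.sin(phi), np.cos(phi)
    return np.array([
        [-hc * ct, -hc * st * sp, f * hc * st * cp],
        [ hc * st, -hc * ct * sp, f * hc * ct * cp],
        [ 0.0,      cp,           f * sp          ],
    ])

def image_to_ground(H, xi, yi):
    """Equations (9) and (10): homogeneous mapping of an image pixel
    on the road plane (Yw = 0), then perspective normalization."""
    Xb, Yb, Zb = H @ np.array([xi, yi, 1.0])
    return Xb / Zb, Yb / Zb   # (Xw, Zw)

# Projecting a range data point (Xw, Zw) onto the image, as in
# Figure 10(b), uses the inverse mapping:
#   xb, yb, zb = np.linalg.inv(H) @ np.array([Xw, Zw, 1.0])
#   xi, yi = xb / zb, yb / zb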

Figure 10(a) shows range data acquired by a scanning laser radar. The range data are acquired in the polar coordinate system and then transformed into the Cartesian coordinate system. Figure 10(b) shows the range data projected onto the input image. It can be seen that the positions where the vehicles and the guide rail meet the road surface are successfully detected. However, the range data are disconnected in several places and contain noise.

Figure 10. Acquired range data in the world coordinate system and the image coordinate system.

Clustering the range data eliminates disconnections and added noise. Consecutive range data points are scanned, and if two points are more than a threshold apart, e.g. 50 cm, they are recognized as a border between two range data clusters. Among the recognized clusters, clusters with too few points or too short a length are eliminated, and the deleted region is interpolated from adjacent clusters. The area below the border line, consisting of the recognized range data clusters and the sky line, is recognized as free space, to which the lane feature searching region is confined. Figure 11 provides an example of recognized free space. Figure 11(a) shows horizontal line-based clusters and range data-based clusters. By combining these two kinds of clusters, the free space is constructed, as shown in Figure 11(b). Each of the six sub-ROIs is defined by a rectangle whose four corners are established so as to be located in the free space.

Figure 11. Recognized free space.
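A sketch of the clustering and masking follows, where the 50 cm gap comes from the text, while the minimum cluster size, minimum length, and the border_rows representation are assumptions.

import numpy as np

def cluster_ranges(points, gap=0.5, min_points=3, min_length=0.3):
    """points: (N, 2) Cartesian range data ordered by scan angle.
    Split where consecutive points are more than `gap` meters apart,
    then drop clusters with too few points or too short a length."""
    points = np.asarray(points, dtype=float)
    d = np.linalg.norm(np.diff(points, axis=0), axis=1)
    splits = np.where(d > gap)[0] + 1
    clusters = np.split(points, splits)
    return [c for c in clusters
            if len(c) >= min_points
            and np.linalg.norm(c[-1] - c[0]) >= min_length]

def free_space_mask(border_rows, img_w, img_h):
    """border_rows[x] is the image row of the obstacle border at column x
    (the projected clusters); free space is everything below the border."""
    mask = np.zeros((img_h, img_w), dtype=bool)
    for x in range(img_w):
        mask[int(border_rows[x]):, x] = True
    return mask   # intersect with each sub-ROI before the feature search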

4. EXPERIMENTAL RESULTS

In order to verify the feasibility of the proposed range data-based adaptive ROI establishment, we installed a scanning laser radar and a camera on the test vehicle and compared the lane detection performance of the proposed method with the conventional method. A brief summary of the specifications of the scanning laser radar (SICK LD-OEM) is as follows: the FOV is 360°, the angular resolution is 0.125°, the range resolution is 3.9 mm, the maximum range is 250 m, the data interface is a controller area network (CAN), and the laser class is 1 (eye-safe). The resolution of the image is 640×480. Each image and range data set was recorded at a speed of 10 frames per second. In total, 2,491 data images were recorded for daytime and 5,100 data images for nighttime. Table 1 compares the detection performance; the proposed method shows better detection performance than the conventional method. Although the difference in detection performance is small, it is significant because the situations responsible for the difference are potentially dangerous, e.g. when there are closing vehicles or when a vehicle cuts in suddenly.

Table 1. Detection performance of the proposed method and the conventional method.

                           Proposed method    Conventional method
Daytime (2,491 frames)     93.17%             92.13%
Nighttime (5,100 frames)   99.04%             98.82%

Figure 12 shows that the proposed adaptive ROI can overcome a disturbance from an adjacent vehicle. Figure 12(a) displays the input image, and (b) and (d) show the lane feature pixels detected on the LHS of the input image by the conventional method and the proposed method, respectively. It can be seen that the bottom edge of the adjacent vehicle looks similar to the lane feature. Figures 12(c) and (e) show the lane feature state detected by the conventional method and the proposed method, respectively. The left lane feature incorrectly detected by the conventional method in Figure 12(c) is correctly detected by the proposed method, as shown in Figure 12(e). It is noteworthy that neighboring vehicles are excluded from the free space depicted in Figure 12(e).

Figure 12. Comparison when adjacent vehicle approaches.

Figure 13 and Figure 14 show examples in which the proposed method overcomes problems caused by the preceding vehicle. Figure 13 shows a situation where the lane markings are disconnected at the current location and the preceding vehicle occludes the remaining lane markings, so that no useful information on the left lane feature is available. The proposed method realizes that there is no useful information and maintains the tracked lane feature state to output proper lane information. Figure 14 shows a situation where few lane markings are observable. As the proposed method eliminates the image portion occupied by the preceding vehicle, it can focus on the observable lane markings. In contrast, the conventional method fails because of the vehicle's edges.

Figure 13. Comparison when the preceding vehicle occludes the left lane markings wholly.

Figure 14. Comparison when few lane markings are observable.

Figure 15 demonstrates that the proposed method can successfully detect lanes in various situations. Figure 15(a) shows a situation where there is wide free space in front of the subject vehicle. Figures 15(b) and (c) show situations where there are many shadows on the road surface. Figure 15(d) shows a situation where a cutting-in vehicle occludes the right lane markings. In this case, although the lane markings in the near area are occluded, the tracked lane feature state helps in finding the lane markings in the far area.

Figure 15. Successful cases.

Figure 16 shows results during nighttime; Figures 16(a) and (c) show the results of the conventional method and Figures 16(b) and (d) show the results of the proposed method. As shown in Table 1, the detection performance during nighttime is better than during daytime. This is because lane markings become distinctive under the narrow headlamp light-beams, while other objects, which may disturb the system during the daytime, cannot be observed in the dark. In other words, as the headlamps establish an effective ROI for lane detection, the disturbance caused by environmental objects is reduced without any additional operations.

Figure 16. The recognition results during nighttime.

5. CONCLUSION

This paper proposes a method which prevents the external disturbance caused by neighboring vehicles by confining the lane detection ROI to free space confirmed by range data. Experimental results show that the detection performance of the proposed method is better than that of the conventional method. Although the difference between the detection performances is small, the proposed method is expected to significantly enhance the safety of the system, as it correctly recognizes lane features even with an adjacent vehicle or a cutting-in vehicle. The main contribution of this paper is showing that a range sensor can enhance lane detection performance and simplify the lane detection algorithm. In particular, the proposed approach of confining the ROI based on range data can be implemented over CAN communication even if ACC and LKS are implemented in separate ECUs, as in conventional implementations. Therefore, this approach requires only minimal change and may be easily adopted. Future studies will focus on 1) real-time implementation of the proposed method on an embedded platform and 2) replacement of the high angular resolution scanning laser radar used in this paper with a lidar or radar with low angular resolution.

REFERENCES

Arem, B. and Schermers, G. (2003). Exploration of the traffic flow impacts of combined lateral and longitudinal support. Available at http://www.aida.utwente.nl/en/research/Seminar/slides_GS.ppt. Accessed on 17 May 2008.

Bishop, R. (2005). Intelligent Vehicle Technology and Trends. Artech House Inc. Norwood. MA.

Bishop, R. (2008). Societal benefits of in-car technology. Available at http://www.ivsource.net/public/ppt/BishopDutchRttF.ppt. Accessed on 17 May 2008.

Cho, J. H., Nam, H. K. and Lee, W. S. (2006). Driver behavior with adaptive cruise control. Int. J. Automotive Technology 7, 5, 603−608.

Denso (2004a). Sensing system. Available at http://www.denso.co.jp/en/its/2004/products/pdf/sensingsystem_e.pdf. Accessed on 17 May 2008.

Denso (2004b). 11th ITS world cong. – Exhibited product lineup. Available at http://www.denso.co.jp/en/its/2004/products/safety2.html. Accessed on 17 May 2008.

Fritz, H., Gern, A., Schiemenz, H. and Bonnet, C. (2004). CHAUFFEUR assistant: a driver assistance system for commercial vehicles based on fusion of advanced ACC and lane keeping. 2004 IEEE Intelligent Vehicle Symp., 495−500.

Guo, L., Li, K., Wang, J. and Lian, X. (2006). A robust lane detection method using steerable filters. The 8th Int. Symp. Advanced Vehicle Control.

Hoefflinger, B. (2007). High-Dynamic-Range (HDR) Vision. Springer Berlin Heidelberg.

Hogema, J. (2003). Driving behavior effects of the chauffeur assistant. Available at http://www.aida.utwente.nl/en/research/Seminar/slides_JH.ppt. Accessed on 17 May 2008.

Honda (2006a). Safety for everyone in our mobile society. Available at http://world.honda.com/CSR/pdf/CSR-06-7-Safety-Initiatives.pdf. Accessed on 17 May 2008.

Honda (2006b). Honda introduces the all-new legend. Available at http://world.honda.com/news/2004/4041007.html. Accessed on 17 May 2008.

Honda (2006c). Honda announces a full model change for the inspire. Available at http://world.honda.com/news/2003/4030618_2.html. Accessed on 17 May 2008.

Jung, C. R. and Kelber, C. R. (2004). A robust linear-parabolic model for lane following. Proc. XVII Brazilian Symp. Computer Graphics and Image Processing, 72−79.

Jung, H. G., Lee, Y. H., Yoon, P. J. and Kim, J. H. (2008). Forward sensing system for LKS+ACC. SAE Paper No. 2008-01-0205.

Lexus (2008). The new LEXUS LS460. Available at http://www.garagecordi.com/ls460.pdf. Accessed on 17 May 2008.

McCall, J. C. and Trivedi, M. M. (2006). Video-based lane estimation and tracking for driver assistance: survey, system, and evaluation. IEEE Trans. Intelligent Transportation Systems 7, 1, 20−37.

Mineta, K. (2003). Development of a lane mark recognition system for a lane keeping assist system. SAE Paper No. 2003-01-0281.

Nishida, M., Kawakami, S. and Watanabe, A. (2005). Development of lane recognition algorithm for steering assistance system. SAE Paper No. 2005-01-1606.

Nissan (2001). Nissan releases all-new cima. Available at http://www.nissan-global.com/EN/NEWS/2001/_STORY/010112-01.html. Accessed on 17 May 2008.

PistonHeads (2006). HONDA ADAS. Available at http://www.pistonheads.com/doc.asp?c=52&i=15032. Accessed on 17 May 2008.

The Auto Channel (2004). Toyota crown majesta undergoes complete redesign. Available at http://www.theautochannel.com/news/2004/07/07/202727.html. Accessed on 17 May 2008.

The Tundra Solutions (2006). 4th-generation lexus flagship luxury sedan features …. Available at http://www.tundrasolutions.com/forums/toyota-scion-and-lexus-news/58916-fourth-generation-lexus-flagship-luxury-sedan/. Accessed on 17 May 2008.

Toyota (2004). Environmental & social report 2004. Available at http://www.toyota.co.jp/en/environmental_rep/04/download/pdf/e_s_report_2004.pdf. Accessed on 17 May 2008.

University of Twente (2003). Summary_seminar: combination and/or integration of longitudinal and lateral support. Available at http://www.aida.utwente.nl/en/research/Summary_seminar/. Accessed on 17 May 2008.