ISPRS Journal of Photogrammetry & Remote Sensing 61 (2006) 170–186
www.elsevier.com/locate/isprsjprs
doi:10.1016/j.isprsjprs.2006.08.004
A semi-automatic framework for highway extraction and vehicle detection based on a geometric deformable model

Xutong Niu ⁎

Mapping and GIS Laboratory, Department of Civil and Environmental Engineering and Geodetic Science, The Ohio State University, 470 Hitchcock Hall, 2070 Neil Avenue, Columbus, OH 43210, USA

Received 13 March 2006; received in revised form 27 July 2006; accepted 21 August 2006
Available online 24 October 2006

⁎ Tel.: +1 614 292 4303; fax: +1 614 292 2957. E-mail address: [email protected].

Abstract

Road extraction and vehicle detection are two of the most important steps of traffic flow analysis from multi-frame aerial photographs. The traditional way of deriving traffic flow trajectories relies on manual vehicle counting from a sequence of aerial photographs, which is tedious and time-consuming work. To improve this process, this research presents a new semi-automatic framework for highway extraction and vehicle detection from aerial photographs. The basis of the new framework is a geometric deformable model, which connects the minimization of an objective function with the propagation of regular curves. Using an implicit representation of two-dimensional curves, the implementation of this model can handle topological changes during the curve deformation process, and the output is independent of the position of the initial curves. A seed-point propagation framework is designed and implemented that incorporates highway extraction, tracking, and linking into one procedure. Manually selected seed points are automatically propagated throughout a whole highway network. During the process, road center points are also extracted, which provides a search direction for solving possible blocking problems. The new framework has been successfully applied to highway network extraction and vehicle detection from a large orthophoto mosaic. In this research, vehicles on the extracted highway network were detected with an 83% success rate.
© 2006 International Society for Photogrammetry and Remote Sensing, Inc. (ISPRS). Published by Elsevier B.V. All rights reserved.

Keywords: Semi-automation; Road extraction; Vehicle detection; Aerial photography

1. Introduction

This research originates from the estimation of traffic flow parameters from aerial photographs and concentrates on highway boundary extraction and vehicle detection. It is well known that traffic flow parameters such as traffic density and velocity can be extracted from a sequence of aerial photographs based on traffic trajectory models (Agahi et al., 1976; Treiterer, 1975; Mintzer, 1983; Daganzo, 1997; O'Kelly et al., 2005). These trajectory models require accurate mapping products, including road boundaries and vehicle counts. Although such traffic trajectories have proved to be useful, they are still obtained by manually counting and matching vehicles across a sequence of aerial photographs. This is inefficient when a large number of photographs needs to be processed. The cost, accuracy, and time required for generating such trajectories have been the greatest limit on the full development of this technology. Therefore, the process of trajectory derivation must be automated with the help of current computer technologies.

The ideal automation process of trajectory derivation can be divided into three steps:

1) Georeferencing aerial photographs. In this step, the traditional photogrammetric triangulation process is applied to the sequence of aerial photographs to generate orthophotos under a predefined spatial coordinate system. After this step, the distances between matched vehicles from two adjacent orthophotos can be accurately measured. This step can be completed using standard photogrammetric software.

2) Road extraction and vehicle detection. This step involves automatic road extraction from orthophotos using digital photogrammetric methods. Vehicles within the extracted road boundaries then need to be extracted. This paper concentrates on this second step.

3) Vehicle matching. Matching methods can be applied to the extracted vehicles in order to generate trajectories. For example, O'Kelly et al. (2005) applied a linear programming method to solve a truck-matching problem based on manually extracted trucks from a sequence of aerial photographs and commercial road centerlines.

Highly accurate road boundaries and centerlines are often needed for vehicle detection and traffic analysis. Sometimes, this kind of road information is not available. Thus, road extraction becomes inevitable for the purpose of vehicle detection.

In this paper, a review of road extraction and vehicle detection methods is presented first. Based on a predefined highway model, a geometric active contour model is introduced into a new framework for highway extraction and vehicle detection from georeferenced aerial photographs. Several practical and implementation issues of this new method are also discussed, followed by the result evaluation and discussion.

2. Review of road extraction and vehicle detection

Automatic road extraction from digital imagery has been a major research direction in the photogrammetry and computer vision fields for more than two decades. Because input images vary in type and resolution, road networks do not conform to a specific global shape. Conventionally, road extraction requires three steps (Zlotnick and Carnine, 1993; Trinder and Wang, 1998): road finding, road tracking, and road linking. Different combinations of these three steps constitute the various algorithms for road extraction. Normally, when human interaction is involved in the first step (road finding), the algorithm is referred to as semi-automatic. Without human interaction, the algorithm is considered an automatic approach.

In semi-automatic approaches, the operator provides information such as starting points and directions. Starting points are used as seed points and starting directions assist road tracking (Vosselman and de Knecht, 1995). Algorithms then predict the trajectory of the road in incremental steps until reaching a stopping criterion. For example, McKeown and Denlinger (1988) proposed a method fitting a parabola to the most recently identified path points, and Gruen and Li (1997) applied a snake model in a least-squares framework and extracted three-dimensional road features from stereo aerial photos. A similar approach based on the “Ziplock” snake was presented by Neuenschwander et al. (1997).

With the automatic detection of road seed points, semi-automatic approaches may be upgraded to automatic ones. Barzohar and Cooper (1996) presented an automatic approach for the selection of starting points based on a gray value histogram. In Baumgartner et al. (1999), roads are modeled as a network of intersections and links between these intersections, and are found by grouping processes. Amini et al. (2002) used an object-based approach for automatic extraction of major roads. This approach consists of two parallel stages. In the first stage, an image containing the road is segmented and straight line segments are extracted. In the second stage, the resolution of the image is reduced and the image is converted to a binary image; the road skeleton in the binary image is then extracted. By combining the results from these two stages, the road sides are extracted.

In recent years, deformable models have emerged as a powerful tool for semi-automated object and surface modeling as well as two-dimensional and three-dimensional image segmentation in applications as diverse as medical imaging, graphics, robotics, and terrain modeling. In the research on road extraction, Neuenschwander et al. (1997) presented a new approach for segmentation of two-dimensional and three-dimensional shapes that initializes and then optimizes a deformable model given only the data and a very small number of two-dimensional and three-dimensional seed points, respectively. Trinder and Li (1995) described a semi-automatic method of feature extraction based on the active contour or “snake” model. Gruen and Li (1997) formulated this method in the least-squares context and extended it to the integration of multiple images for linear feature extraction in a fully three-dimensional mode. This novel LSB-Snake (Least Squares B-spline snake) model considerably improves the performance of active contour models and consistently controls blunders such as occlusions. Tao et al. (1998) extracted the road centerline from mobile mapping data using the B-spline snake method and obtained a satisfactory result. Agouris et al. (2001) extended the snake models to change detection of road segments.

However, methods based on the traditional snake model cannot avoid the following drawbacks:

1) Initialization. The initial curve must be placed close to the object boundary, which is tedious to draw. When the initial curve and the desired object boundary differ greatly in size and shape, the model must be re-parameterized dynamically to recover the object boundary, which requires additional computation.

2) Minimization. A local minimum of energy, such as spurious edges caused by noise, may stop the evolution of the snake unexpectedly.

3) Topology. The method has difficulty dealing with topological changes. If multiple objects appear in the image and a single initial curve surrounds them, the individual objects cannot all be detected. Additional splitting and merging approaches are needed to solve this problem, which increase the complexity of the snake implementation significantly.

In order to simplify the initialization, Neuenschwander et al. (1997) presented a modified snake-based approach. Using the “Ziplock” snake, the initialization of road extraction is reduced to the specification of the two endpoints of a road. However, this method has difficulty with blockage problems caused by tree shadows, vehicle crowds, and overpasses. Therefore, a new method is needed to improve these snake-based road-extraction methods.

Vehicle detection from an aerial image is constrained by viewpoint and resolution. In the work of Burlina et al. (1997) and Moon et al. (2002), a vehicle is modeled as a rectangle of a range of sizes. The Canny edge detector is applied and a general Hough transform or a convolution with edge masks is used to extract the four sides of the rectangular boundary. Niu et al. (2002) applied mean-shift segmentation and extracted regions whose sizes were close to the desired vehicle. The extracted regions were input into a Hopfield neural network to differentiate vehicles and non-vehicles based on certain criteria. All these methods treat vehicles as two-dimensional objects and their primary evidence is the boundary of the car. These methods may have a problem when applied to urban scenes where the cars are of greater variety. Zhao and Nevatia (2003) formulated car detection as a three-dimensional object recognition problem. Car intensities and shadows are used as parameters in a Bayesian learning process, which gives promising results on tested aerial images.

Traditionally, road boundary extraction and vehicle detection from aerial photographs are two separate processes. Each process requires extensive image processing, object extraction, modeling, and recognition. The duplicated and complicated computation of both processes makes the combined step of road extraction and vehicle detection inefficient. Furthermore, each process has its own drawbacks. In road boundary extraction from high-resolution images, multi-resolution methods are often used to detect road seeds. This process depends on edge extraction, which introduces edges from other features such as large warehouses and wide rivers. In the end, an extra process is needed to remove these noisy edges, which also create trouble for the road tracking and linking steps. Snake-based models use initial curves to avoid these noisy edges, but the drawbacks of the snake model limit their wide use in road extraction.

3. Framework for highway extraction

In this paper, a new framework for semi-automatic road extraction and vehicle detection is presented and applied to a mosaic of high-resolution aerial orthophotographs. A geometric active contour model provides the basis for this new extraction framework and integrates boundary- and region-based information into the extraction procedure.

In the United States, rural and suburban highways generally have four or six lanes in both directions, which are also called multilane highways. There are medians separating lanes in different directions. In TRB (2000), geometric base conditions of multilane highways are defined as follows:

• Twelve-foot minimum lane widths,
• Twelve-foot minimum total lateral clearance in the direction of travel,
• No direct access points along the roadway, and
• A divided highway.

All these conditions can be identified from high-resolution aerial photographs. Geometrically, highways in high-resolution aerial photographs appear to be narrow belt-like areas rather than thin lines. Based on the above highway geometric base conditions, it is assumed that highways in high-resolution photographs are continuous, belt-like areas that have good contrast with well-defined, connected gradients along the road boundaries and that have a reasonable level of homogeneity and smooth texture on the road surfaces.

These assumptions can be satisfied in most rural and suburban highway images. Firstly, in rural and suburban areas, vegetation such as grasslands and bushes usually grows along both sides of highways. Highway pavement materials include concrete and asphalt; both materials show different characteristics from those of vegetation in the visible spectrum. The spectral difference between pavement and off-road vegetation provides a good contrast across the road boundary in the imagery. Secondly, within a certain length of a highway segment, the pavement material usually does not change frequently. This gives homogeneity and a smooth texture along the road surface except where there are vehicles on the road and/or shadows of trees on the road side. Finally, in imagery an asphalt road surface looks darker than concrete. If both kinds of pavement are used within a section of a highway segment, an apparent discontinuity will be seen within the road surface. During the road-tracking process, this discontinuity can be identified under the assumption that the highway is a continuous feature with no sudden end in the image. This assumption can also be used to track highway segments separated by overpasses, vehicle congestion, and shadows from trees and traffic signs.

If we consider a highway as a continuous river of a certain width, vehicles on the highway look like small islands within the water. The texture of this “water” is usually uniform because of the pavement material. In a geometric active contour (GAC) model, an initial curve can be used to simulate the “water flow” and propagate within the highway boundaries. At the end of the propagation, highway and vehicle boundaries are extracted simultaneously.

3.1. Geometric deformable model

The classical energy-based snake model, also called the active contour model, was initially proposed in (Kass et al., 1988) and has been successfully applied to a wide variety of computer-vision applications. The evolving curve in active contour models is represented explicitly by a parameterized polynomial or spline. Therefore, active contour models are also called parametric deformable models.

Geometric deformable models, also called geometric active contour (GAC) models, were proposed independently by Caselles et al. (1997) and Malladi et al. (1995). They were introduced as geometric alternatives to snake/parametric deformable models and provide a way to overcome the limitations of parametric deformable models mentioned in Section 2.

Let C(p) = [x(p), y(p)], p ∈ [0,1], be a parameterized closed planar curve and I(x,y) be a given gray-level image in which we would like to detect the object boundaries. Then the curve evolution equation of this model is (Caselles et al., 1997):

$$\frac{\partial C}{\partial t} = g(I)\,\kappa\,\vec{N} - \left(\nabla g(I)\cdot\vec{N}\right)\vec{N} \tag{1}$$

where κ represents the curvature at each point along the curve C, $\vec{N}$ the normal vector at that point, and g a decreasing function, which can be of the form

$$g(I) = \frac{1}{1 + \left|\nabla\left[G_{\sigma} * I(x,y)\right]\right|^{p}}, \qquad p = 1, 2, 3, \ldots \tag{2}$$

The parametric curve C can be represented implicitly with a level-set function φ, which can be a signed distance function. For each pixel in the image, there is a value corresponding to the distance from this pixel to the closest point on the curve C. For pixels inside the curve C, the distances are negative; for pixels outside the curve C, they are positive; and for points on the curve, they are equal to zero. By changing the representation of the curve C to a level-set function φ, the following equation is obtained (Kimmel, 2003):

$$\phi_t = g(I)\,\kappa\,\lvert\nabla\phi\rvert + \nabla g(I)\cdot\nabla\phi \tag{3}$$

In many circumstances, it is advantageous to initialize the curve inside the object to be segmented. The numerical solution of this equation can be obtained using a narrow-band approach, initially proposed by Chop (1993). It is based on the fact that pixels far away from the current curve position do not affect the curve-evolution process; thus, only the pixels around the latest curve position need to be considered at the current step. Accordingly, a set of narrow-band pixels is defined around the latest front position and the level-set function is updated, using the Euler–Lagrange equation (Strang, 1986), only at these pixels. Unfortunately, the curve position changes from iteration to iteration, so the narrow band also has to be updated from iteration to iteration, which increases the computational cost. Thus the contour position and the set of narrow-band pixels are updated only when the contour comes close to the borders of the current band.
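To make the level-set formulation concrete, the following sketch (illustrative code, not taken from the paper; it assumes NumPy/SciPy and, for brevity, updates the full image domain instead of a narrow band) implements the edge-stopping function of Eq. (2) and one explicit Euler step of Eq. (3):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def edge_stopping(image, sigma=2.0, p=1):
    """Edge-stopping function g(I) = 1 / (1 + |grad(G_sigma * I)|^p), Eq. (2)."""
    smoothed = gaussian_filter(image.astype(float), sigma)
    gy, gx = np.gradient(smoothed)
    return 1.0 / (1.0 + np.hypot(gx, gy) ** p)

def gac_step(phi, g, dt=0.1, eps=1e-8):
    """One explicit step of Eq. (3): phi_t = g*kappa*|grad phi| + grad g . grad phi."""
    gy_phi, gx_phi = np.gradient(phi)
    grad_mag = np.sqrt(gx_phi ** 2 + gy_phi ** 2) + eps
    # Curvature kappa = div(grad phi / |grad phi|)
    nx, ny = gx_phi / grad_mag, gy_phi / grad_mag
    kappa = np.gradient(nx, axis=1) + np.gradient(ny, axis=0)
    gy_g, gx_g = np.gradient(g)
    return phi + dt * (g * kappa * grad_mag + gx_g * gx_phi + gy_g * gy_phi)

def evolve(image, phi0, iterations=1000):
    """Evolve a signed-distance initialization phi0 (negative inside the seed region)."""
    g = edge_stopping(image)
    phi = phi0.copy()
    for _ in range(iterations):
        phi = gac_step(phi, g)
    return phi < 0  # binary mask: pixels inside the final curve
```

In practice the narrow-band bookkeeping described above and a periodic re-initialization of φ as a signed distance function are needed for speed and numerical stability; they are omitted here to keep the sketch short.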

A denoising process should be applied to remove noise from the image before any further processing. General linear low-pass filters, such as the Gaussian filter, smooth both noise and high-frequency information from road edges. Therefore, more advanced methods are needed to preserve and enhance edge information during the denoising process. Tomasi and Manduchi (1998) proposed a non-iterative bilateral filter that combines both domain and range filtering. Given an input image I(x, y), the output image Î(x, y) is obtained by:

$$\hat{I}(x,y) = \frac{\sum_{i=-s}^{s}\sum_{j=-s}^{s} I(x+i,\,y+j)\,w(i,j,x,y)}{\sum_{i=-s}^{s}\sum_{j=-s}^{s} w(i,j,x,y)} \tag{4}$$

with the weights given by

$$w(i,j,x,y) = \exp\!\left(-\frac{i^{2}+j^{2}}{2\sigma_{D}^{2}}\right)\exp\!\left(-\frac{\left(I(x+i,\,y+j)-I(x,y)\right)^{2}}{2\sigma_{R}^{2}}\right) \tag{5}$$

where s is the window size of the filter. There are two terms in the weight equation. The first term serves as a domain (spatial, in the case of images) weight, which measures the geometric distance between the central pixel (x, y) and its neighboring pixel (x+i, y+j). The second term is used as a similarity (range) function, measuring the radiometric (pixel value) difference between the central pixel (x, y) and its neighboring pixel (x+i, y+j). Thus, neighboring pixels with smaller differences in pixel value influence the result more than those with larger differences. In Eq. (5), the Gaussian kernel function is used for both components. Although the bilateral filter was originally designed as an intuitive tool, it has been proved that there are relationships between the bilateral filter and other nonlinear filters such as anisotropic diffusion, adaptive smoothing, mean shift, weighted least squares, and robust estimation (Barash, 2002; Elad, 2002).
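A direct, unoptimized implementation of Eqs. (4) and (5) can be sketched as follows (illustrative code; the parameter values are assumptions, and in practice an optimized routine such as OpenCV's cv2.bilateralFilter would be used instead):

```python
import numpy as np

def bilateral_filter(image, s=3, sigma_d=3.0, sigma_r=25.0):
    """Brute-force bilateral filter following Eqs. (4)-(5).

    s: half window size; sigma_d: domain (spatial) sigma; sigma_r: range sigma.
    """
    image = image.astype(float)
    padded = np.pad(image, s, mode="reflect")
    out = np.zeros_like(image)
    # Precompute the spatial (domain) weights, first factor of Eq. (5)
    ii, jj = np.mgrid[-s:s + 1, -s:s + 1]
    domain_w = np.exp(-(ii ** 2 + jj ** 2) / (2.0 * sigma_d ** 2))
    rows, cols = image.shape
    for y in range(rows):
        for x in range(cols):
            window = padded[y:y + 2 * s + 1, x:x + 2 * s + 1]
            # Range weight: similarity to the central pixel, second factor of Eq. (5)
            range_w = np.exp(-((window - image[y, x]) ** 2) / (2.0 * sigma_r ** 2))
            w = domain_w * range_w
            out[y, x] = np.sum(window * w) / np.sum(w)  # Eq. (4)
    return out
```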

Fig. 1. Processing result of the GAC model. (Circles indicate two trucks; the square indicates a car).

Fig. 1 shows the results of the GAC model. In Fig. 1, (a) is a 256×256 test image extracted from an aerial photograph; (b) is the smoothing result of the bilateral filter; (c) is the image of the function g(I) after linearly stretching its values to the interval [0, 255]; and (d) is the segmentation result of the GAC model. Fig. 1(c) is essentially an edge map; the thick lines in the image indicate a strong degree of contrast in the original image. Fig. 1(d) is a binary image; the black region contains the pixels that belong to the road surface, and the boundary of this black region is where the evolving curve stops.

Three vehicles, two trucks and one car, are in the original image. In Fig. 1(b), the two trucks (highlighted by two circles) correspond to the two concave shapes (enclosed by two circles) along the boundary of the black region in Fig. 1(d). The car (labeled by a square) is shown as a small white spot within the black region in Fig. 1(d). If the two concave shapes can be extracted from the segmentation result, the two trucks will be extracted.

Fig. 2 shows the curve propagation procedure. Only one seed point, represented by the dark cross, was used. The white lines indicate the positions of the zero level set, i.e., the evolving curve, at each iteration. As early as iteration 1300, the curve converged to the road boundary.

3.2. Implementation issues

In this section, several issues related to the implementation of the GAC model are examined, beginning with the position and number of seed points used in the model.

3.2.1. Position and number of seed points

According to level set theory, the position and number of seed points do not influence the curve evolution results.


Fig. 2. Procedure of evolving curve propagation. (Dark cross indicates the position of the seed point).

Fig. 3. Curve propagation with seed point at different initial positions. (Dark crosses indicate the positions of the seed points).


To check this property, the following experiment was performed:

• A single seed point at different positions on the road surface,
• Three seed points evenly distributed across the road surface, and
• Six seed points evenly distributed across the road surface.

In Fig. 3, (a) shows the experimental result using a single seed point; (b) the result using three seed points; and (c) the result using six seed points. The iteration numbers indicate the processing speeds of the results. It can be seen that the results of the GAC model are independent of the position and number of seed points used, as long as the seed points are located within the desired region. The speed of the GAC model, however, is influenced by the position of the seed points. For example, the processing speed in Fig. 3(a), where the seed point is located in the center of the image, is faster than that in Fig. 2, where the seed point is close to the image boundary, because there are two moving fronts in the former case. The speed of the GAC model is also influenced by the number of seed points: the more seed points are used, the faster the curve propagates.

Fig. 4. Leakage problem for an image with weak edge information.

3.2.2. Leakage problem

Fig. 4 shows a leakage example. In this figure, (a) is the original image, (b) is the smoothing result, (c) is the edge map g(I), (d) is the segmentation result, and (e) shows the curve propagation procedure. The dark cross in (a) and (e) represents the seed point. In Fig. 4(a), the contrast between the highway region and its right verge is not as strong as the contrast to its left verge. Thus, in Fig. 4(c) the right edge of the highway is thinner than the left one and there are also some breaks along the right edge. This weak edge information causes the leakage shown in Fig. 4(e), which happens at iteration 220; the curve finally stops at iteration 1020.

In a highway image, the road surface can be considered as an active region and the rest of the image as background. As the initial curve grows, one can monitor the change of the variance inside the active region during the curve-evolution process. When a point is to be added to the region, if the change of the variance is too large, that point needs to be excluded from the region; otherwise, the point is kept inside the region. To reduce the effect of noise, the mean and variance of a local window are computed around the point under consideration and compared with the mean and variance inside the active region. If the absolute difference between the mean of the local window and that of the active region is less than the variance of the active region, the latest point is added to the region and the mean and variance of the region are updated. Otherwise, the point is excluded from the region.
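The acceptance test described above can be sketched as follows (illustrative code with assumed class and parameter names; the paper does not give implementation details). Each candidate pixel is admitted only if the mean of a small window around it differs from the running region mean by less than the region variance:

```python
import numpy as np

class ActiveRegion:
    """Incrementally tracked mean/variance of the pixels accepted into the region."""

    def __init__(self):
        self.n = 0
        self.sum = 0.0
        self.sum_sq = 0.0

    @property
    def mean(self):
        return self.sum / self.n

    @property
    def variance(self):
        return self.sum_sq / self.n - self.mean ** 2

    def accept(self, image, y, x, half_win=2):
        """Regional constraint: admit pixel (y, x) only if the mean of a local window
        around it stays within the region variance of the running region mean."""
        win = image[max(y - half_win, 0):y + half_win + 1,
                    max(x - half_win, 0):x + half_win + 1].astype(float)
        if self.n > 0:
            # Small floor on the threshold is a practical safeguard (an assumption,
            # not specified in the paper) so the very first points are not over-rejected.
            threshold = max(self.variance, 1.0)
            if abs(win.mean() - self.mean) >= threshold:
                return False  # reject: the point is excluded from the region
        value = float(image[y, x])
        self.n += 1
        self.sum += value
        self.sum_sq += value ** 2
        return True
```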

This comparison serves as a regional constraint and can easily be implemented in the GAC model: whenever a point is about to be added to the zero level set (the evolving curve), the comparison is executed. Fig. 5 shows the procedure of the curve evolution with the comparison involved. The leakage problem shown in Fig. 4 was solved using the proposed method. The total number of propagation iterations was 780 when the road surface was successfully separated from the background. However, a zigzag edge is observed in the segmentation result, which is due to the regional constraint. These zigzag edges can easily be smoothed using a curve-smoothing process, as shown later in this section.

Fig. 5. Curve propagation procedure without leakage problem.

This regional constraint is similar to the criteria used in region growing (Zucker, 1976; Haralick and Sapiro, 1985). As a well-known image segmentation method, region growing starts with individual seed points and, based on certain given criteria, merges them with neighboring pixels to produce regions of homogeneous brightness. The criteria are usually measures of local homogeneity such as the standard deviation of the region around the pixel. The selection of seed points, the merging criteria, and the order in which the neighboring pixels are visited determine the quality and speed of region growing. It is difficult to select criteria that meet a strict definition of region uniformity because in reality the brightness may vary linearly within a region. Therefore, the production of false boundaries and noise is a major disadvantage of region growing (Pavlidis and Liow, 1990). In comparison, the proposed GAC model combines the edge information from the original GAC model and regional information similar to that used in region growing. Such a combination can yield better results than either method alone.

3.2.3. Processing of large images

The dimension of the test image in Fig. 1(a) is 256×256 pixels; it occupies only 64 KB in memory. However, a 9-inch full-frame aerial photograph (as shown in Fig. 10), when scanned at a resolution of 1200 dpi, has an approximate dimension of 11,000×11,000 pixels, and storing this large number of pixels would occupy about 115 MB. It is not practical to input such a large image directly into the GAC model. To optimize the GAC-based process of highway extraction, a seed-point propagation scheme has been designed and implemented (a sketch of the loop follows the list). The steps in this procedure are:

1) Manually input seed points into a pre-defined seed stack.

2) Create an image (the result image) with the same size as the aerial photograph. Assign all pixels of the result image the value 255.

3) If the seed stack is not empty, pop a seed point. Otherwise, go to step 10.

4) Extract a sub-image from the aerial photograph containing the seed point. The size of the sub-image is defined as 256×256 in the experiment.

5) Record the relative position of the sub-image to the aerial photograph so that the corresponding part of the result image can be updated with the extracted highway region.

6) Process the sub-image using the improved GAC model.

7) From the extracted highway region, compute the road centerline.

8) Find the end points of the centerline, and add them to the seed stack.

9) Update the result image with the extracted highway region in the sub-image, and go to step 3.

10) Save the result image and end the process.
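The propagation scheme can be summarized as a stack-driven loop; the sketch below is illustrative (the callables extract_subimage, improved_gac, and compute_centerline are placeholders for the components described in this paper, and the visited set is an added safeguard against revisiting the same seed, not part of the published steps):

```python
import numpy as np

SUB_SIZE = 256  # size of each sub-image, as used in the experiment

def propagate_seeds(photo, initial_seeds, extract_subimage, improved_gac,
                    compute_centerline):
    """Seed-point propagation over a large aerial photograph (steps 1-10 above)."""
    result = np.full(photo.shape, 255, dtype=np.uint8)        # step 2
    stack = list(initial_seeds)                                # step 1
    visited = set()
    while stack:                                               # step 3
        seed = stack.pop()
        if seed in visited:
            continue
        visited.add(seed)
        sub, offset = extract_subimage(photo, seed, SUB_SIZE)  # steps 4-5
        region = improved_gac(sub, seed)                       # step 6 (binary road mask)
        centerline = compute_centerline(region)                # step 7, list of (row, col)
        for end_point in (centerline[0], centerline[-1]):      # step 8
            stack.append((end_point[0] + offset[0], end_point[1] + offset[1]))
        oy, ox = offset                                        # step 9
        result[oy:oy + region.shape[0], ox:ox + region.shape[1]][region > 0] = 0
    return result                                              # step 10
```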

Fig. 6 illustrates a possible process of seed-point propagation. Dot-dashed squares are possible locations of sub-images. Each sub-image can be extracted based on the direction of the road segment in the image. The dot within square “(a)” represents an initial seed point, which is input manually; all other dots are seed points automatically generated during the propagation procedure. The two solid curves represent the highway boundaries and the dashed line is the highway centerline. The propagation procedure starts at the lower-left corner of the image, i.e. sub-image “(a)”, and ends at the upper-right corner of the image, i.e. sub-image “(g)”.

Fig. 6. Illustration of possible positions of seed points in new sub-images.

Fig. 7. Experimental result of the seed-point propagation procedure.

Fig. 7 shows an experimental result of this seed-point propagation. In the lower-left corner of the figure, a dot “A” marks the position of the initial seed point. Five sub-images, represented by square shadows in the figure, were generated during the propagation. The extracted highway region is highlighted by a white strip, and the extracted vehicles are small black dots within this strip.

The correctness of the extracted road centerline must be ensured in this seed-point propagation scheme. Possible leakage problems caused by the standard GAC model (as shown in Fig. 4) would deviate the “road centerline” off the road surface, and wrong seed points would then be generated during the propagation process. Therefore, the standard GAC model is not applicable to the seed-point propagation scheme. By solving the leakage problem, the proposed GAC model provides correct road centerlines and ensures normal progress of seed-point propagation and highway extraction.

3.2.4. Blockage problem

Essentially, the seed-point propagation process can be considered a road tracking process. One of the major issues with road tracking is the blockage problem caused by shadows, vehicle congestion, pavement changes, overpasses, etc. This issue also exists in the seed-propagation process. To solve the blockage problem, highway continuity must be assumed: the highway is considered a continuous ribbon with no sudden ends or sharp turns. A smooth curve can then be used to simulate the centerline of the highway section contained in any sub-image. When blockage happens, the extracted center points can be used to predict the center points of the forthcoming highway section. In practice, a second-order parabolic curve is sufficient to simulate the road centerline based on the extracted center points. Fig. 8(a) shows an image containing blockage caused by an overpass. A seed point (the dot in the upper-right corner) was placed within the upper-right corner of the image. To keep propagating the seed points, a parabolic curve is fitted to the extracted centerline points as shown in Fig. 8(b); the dashed line represents the position of the fitted centerline. The lower end point of this fitted centerline is used as the next seed point. Fig. 8(c) shows the next sub-image, extracted based on the predicted seed point, and Fig. 8(d) is the result image based on the propagation of seed points. By using this curve-fitting method, the blockage problem was solved.

Fig. 8. Example of the blockage problem and its solution.

When a blocking object is large enough to cover most of the area of a sub-image, such as a large vehicle congestion, or when there is a sharp turn in the highway right after the blocking object, the predicted seed point may lie on the blocking object or outside the road surface. Such a wrong seed point would cause problems for the seed-point propagation process and the result of highway extraction. To identify such situations, the blocking objects are assumed to have different textures or brightness from those of the road surface, and the expanding direction of the blocking objects is assumed to be different from that of the previous road centerline. After a predicted seed point is applied to extract a road region, the direction of the extracted centerline, the texture of the extracted region, and the width of the extracted region are compared to those of the region extracted before the blockage occurred. If these values are close, the newly extracted region is considered a section of road region and new seed points are added to the seed-point stack. Otherwise, a new seed point is popped from the seed-point stack to start a new propagation process.
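The centerline prediction used to bridge a blockage can be sketched with a simple second-order polynomial fit (illustrative code; the extrapolation distance and the axis-selection logic are assumptions not specified in the paper):

```python
import numpy as np

def predict_next_seed(center_points, step=128):
    """Fit a second-order parabola to extracted road-center points (row, col) and
    extrapolate one point beyond the last point, to be used as the next seed."""
    pts = np.asarray(center_points, dtype=float)
    rows, cols = pts[:, 0], pts[:, 1]
    # Parameterize by whichever coordinate varies more, so near-vertical roads work too
    if np.ptp(rows) >= np.ptp(cols):
        coeffs = np.polyfit(rows, cols, deg=2)       # col = f(row)
        next_row = rows[-1] + np.sign(rows[-1] - rows[0]) * step
        return (next_row, np.polyval(coeffs, next_row))
    coeffs = np.polyfit(cols, rows, deg=2)           # row = f(col)
    next_col = cols[-1] + np.sign(cols[-1] - cols[0]) * step
    return (np.polyval(coeffs, next_col), next_col)
```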

3.2.5. Highway boundary refinement and vehicle detection

As seen from Fig. 8(d), the extracted boundaries contain information about both highways and vehicles, and there are also zigzag shapes on the highway boundaries, which do not satisfy mapping purposes. In order to improve the extracted highway boundaries, it is necessary to remove the small vehicle polygons and smooth out the concave shapes caused by vehicles driven close to the highway boundary.

It is easy to remove these polygons by using an area threshold (here, the threshold value is set to 300). The next step is to smooth the highway boundaries in order to remove extraneous bends and small intrusions and extrusions from a line or polygon boundary without destroying its essential shape. To this end, a standard GIS procedure was used (ArcInfo command GENERALIZE with option BENDSIMPLIFY).

Then, the vehicles within the smoothed highway boundaries can be detected. There are two types of detected vehicles: vehicles within the highway boundaries (such as the one inside the white square in Fig. 1(d)) and vehicles on the boundaries of the highway (concave shapes along the highway boundaries as in Fig. 1(d)). Both types of vehicles can be extracted with the following steps (a sketch follows the list):

1) Smooth the remaining highway boundaries after removing the small vehicle polygons inside the highway region.

2) Convert the highway boundaries before and after smoothing into polygons.

3) Erase the polygons before smoothing from the polygons after smoothing.

4) Save the remaining polygons as the extracted vehicles.
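One possible reading of these steps in terms of standard polygon operations is sketched below with the shapely library (an illustrative stand-in: the paper used ArcInfo's GENERALIZE with BENDSIMPLIFY, for which shapely's Douglas–Peucker simplify is only a rough substitute; the area threshold of 300 follows the text, and treating interior holes of the road polygon as on-road vehicles is my interpretation of Fig. 1(d)):

```python
from shapely.geometry import Polygon
from shapely.ops import unary_union

def detect_vehicles(road_polygons, vehicle_area_threshold=300.0, tolerance=2.0):
    """Extract vehicle polygons from the raw GAC segmentation (steps 1-4 above).

    road_polygons: shapely Polygons traced from the raw segmentation; interior
    holes of these polygons correspond to vehicles fully inside the road."""
    vehicles = []
    smoothed_parts = []
    for poly in road_polygons:
        # Vehicles fully inside the road appear as small holes in the road polygon
        for ring in poly.interiors:
            hole = Polygon(ring)
            if hole.area < vehicle_area_threshold:
                vehicles.append(hole)
        # Steps 1-2: fill the small holes, then smooth the outer boundary
        filled = Polygon(poly.exterior)
        smoothed_parts.append(filled.simplify(tolerance))
    # Step 3: erase the original (unsmoothed, hole-filled) shapes from the smoothed
    # ones; what remains are the concave bites left by boundary vehicles (step 4)
    smoothed = unary_union(smoothed_parts)
    original = unary_union([Polygon(p.exterior) for p in road_polygons])
    boundary_vehicles = smoothed.difference(original)
    for geom in getattr(boundary_vehicles, "geoms", [boundary_vehicles]):
        if geom.area >= 1.0:  # ignore slivers created by the smoothing itself
            vehicles.append(geom)
    return vehicles
```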

Fig. 9. Example of highway boundary refinement and vehicle detection.

Fig. 9 shows the results of the highway boundary extraction and vehicle detection. In this figure, (a) shows the result obtained directly after applying the geometric deformable model to the image; the smoothing result of (a) is shown in (b); and the small polygons in (c) are the vehicles detected from the image.

Fig. 10. Full frame of the aerial photograph with locations of seed points.

Fig. 11. Highway boundaries extracted from the aerial photograph in Fig. 10 using the GAC model.

Fig. 12. Extracted highway boundaries before and after refinement.

4. Results evaluation and discussions

The new framework was tested on a georeferenced aerial photograph shown in Fig. 10. The area in this photograph is located on the southern shore of Tampa Bay, Florida. Two interstate highways appear in the area: north–south I-75 and east–west I-275. Both highways consist of two separate multilane roadways, one for traffic flow in each direction.

The first step was to manually select seed points. In this experiment, 12 seed points were input into the GAC model. The rectangles in Fig. 10 show the places where seed points were selected. Rectangles “A”, “B”, and “C” are at the highway ends in the image; these highway ends are good places to restrict the initial curves to evolve in only one direction. Rectangles “D”, “E”, and “F” contain bridges, which could block the propagation process. After the seed points were selected, processing with the proposed GAC model began. During the seed-point propagation, 588 sub-images were created. Fig. 11 displays the segmentation result of the GAC model; the white ribbon regions are the extracted highways.

To show the details of the result, four highlighted regions, “A”, “E”, “F”, and “G” in Fig. 10, were selected and are shown in Fig. 12(a).

Table 1
Evaluation of different road extraction results

Method                                        Correctness   Completeness   Quality
Proposed method                               94.97%        94.92%         90.37%
BG (results from Agouris et al., 2004)        91.20%        83.20%         N/A
AG (results from Agouris et al., 2004)        94.00%        91.90%         86.80%
HV (results from Agouris et al., 2004)        27.50%        56.30%         25.00%
Gerke_W (results from Mayer et al., 2006)     81%           63%            N/A
Gerke_WB (results from Mayer et al., 2006)    72%           77%            N/A
Zhang (results from Mayer et al., 2006)       72%           63%            N/A


White lines represent the extracted boundaries. Region “A” contains two straight highway sections. There is a ramp, I-75 North to I-275 West, in region “E”. A bridge blocks the continuation of the highway in region “F”. Region “G” shows two ramps: I-75 South to I-275 West and I-275 East to I-75 North. The vehicle boundaries on the highways are extracted either as small separate polygons or as concave arcs along the highway boundaries. In region “E”, part of the northern boundary of the ramp I-75 North to I-275 West disappeared because the image contrast between that part of the ramp and the road surface of I-75 North was too weak to create strong edge information. This kind of error can be removed manually during post-processing by adding a line segment to divide these two regions.

From an algorithmic point of view, the extraction accuracy typically defines the success of a feature extraction method (Agouris et al., 2004). Accuracy is commonly measured by comparing the algorithm output against a manually derived ground truth. According to Wiedemann et al. (1998), three categories of extraction results can be defined by comparing algorithm-extracted results against the ground truth (Fig. 13):

• True Positives (TP): total length of correctly extracted line segments.
• False Positives (FP): total length of incorrectly extracted line segments.
• False Negatives (FN): total length of missing line segments.

Fig. 13. Definition of True Positive, False Positive, and False Negative.

Based on these three definitions, three measures are used in this study (Wiedemann et al., 1998; Agouris et al., 2004):

$$\text{correctness} = \frac{TP}{TP + FP} \tag{6}$$

$$\text{completeness} = \frac{TP}{TP + FN} \tag{7}$$

$$\text{quality} = \frac{TP}{TP + FP + FN} \tag{8}$$

Correctness is a measure ranging between 0 and 1 that indicates the detection accuracy relative to the ground truth. Completeness, also ranging between 0 and 1, can be interpreted as the converse of the omission error. Completeness and correctness are complementary metrics and need to be interpreted simultaneously. Quality is a normalized measure that combines completeness and correctness; the quality value can never be higher than either the completeness or the correctness measure.
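Given the matched total lengths TP, FP, and FN, Eqs. (6)–(8) reduce to a few lines (illustrative helper; the numbers in the comment are synthetic, chosen only to reproduce percentages of the same order as Table 1, not the paper's measured lengths):

```python
def evaluate_extraction(tp, fp, fn):
    """Correctness, completeness, and quality from matched line lengths, Eqs. (6)-(8)."""
    correctness = tp / (tp + fp)
    completeness = tp / (tp + fn)
    quality = tp / (tp + fp + fn)
    return correctness, completeness, quality

# Synthetic illustration:
# evaluate_extraction(tp=949.7, fp=50.3, fn=50.8)
# -> (0.9497, 0.9492..., 0.9037...)
```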


Fig. 15. Result image extracted from the orthophoto mosaic.

Table 2
Result of vehicle detection from the orthophoto mosaic

Route section             Detected vehicles   Wrong vehicles   Missed vehicles   Manual counts
I-275 West                34                  1                8                 41
I-275 East                47                  4                15                58
I-75 North                98                  8                6                 96
I-75 South                132                 14               18                136
I-275 East–I-75 South     8                   0                3                 11
I-275 East–I-75 North     5                   3                0                 2
I-75 South–I-275 West     4                   2                3                 5
I-75 North–I-275 West     11                  4                9                 16
Sum                       339                 36               62                365
Extraction rate: (Detected vehicles − Wrong vehicles) / Manual counts = 0.8301


Highway boundaries were digitized from the test photograph and used as ground truth. Table 1 shows the three measures for the extraction result derived from the proposed GAC model. For comparison, the values of these measures for six fully automatic road extraction methods are also included in Table 1 (Agouris et al., 2004; Mayer et al., 2006). The results of the first three methods (BG: Baumgartner et al., 1999; AG: Agouris et al., 2002; HV: Harvey, 1999) were obtained from three different test images of different complexity (Agouris et al., 2004). The results of the last three (Gerke_W, Gerke_WB, and Zhang) were derived from the same aerial image of a hilly rural scene with quite low complexity, and only values for correctness and completeness were provided in (Mayer et al., 2006), where more comparison results on road extraction from aerial images and IKONOS satellite images can be found. From the evaluation measures of the proposed GAC model, it can be seen that 94.92% of the highway boundaries were extracted and 94.97% of the extracted highway boundaries were correct. This result is comparable to the values derived from the method AG by Agouris et al. (2002), which is 94.0% in correctness, 91.9% in completeness, and 86.8% in quality.

The proposed extraction method can also be used for large-area mapping. Fig. 14 shows an orthophoto mosaic that extends 6.4 km north–south and 11.2 km east–west and has a size of 22,266×12,614 pixels, so it was necessary to process a mosaic of partly overlapping sub-images. Fig. 15 shows the mosaic of the extraction results. During the extraction process, 16 seed points were used and the extracted highway boundaries were smoothed with the above-mentioned simplification operation. The vehicles on the extracted highways were also extracted.

Fig. 14. Orthophoto mosaic used for the experiment on an extended area.

Table 2 lists the numbers of detected vehicles in each section of the extracted highways, compared against manual vehicle counts. In total, 339 vehicles were detected from the mosaic, among which 36 are wrong detections. Most of the wrong vehicles were shadows of trees, light poles, and traffic signs. The discrepancy between the smoothed boundaries and the originally extracted boundaries also leads to some errors.

303 out of 365 vehicles were detected, i.e., the detection rate reaches 83.01%. In the process of deciding which polygons within the extracted highway boundaries are vehicles, only the perimeter and area of each polygon were used as judging criteria. If an advanced pattern-recognition method such as a neural network were applied in the decision process, a better detection rate could be expected.

5. Conclusions

In this paper, a semi-automatic framework for highway extraction and vehicle detection has been presented. This framework not only solves problems inherent in the traditional parametric deformable models, but also incorporates the shape information of a highway segment into the seed-point propagation scheme. Seed points were placed at the ends of highway segments close to the boundary of the image or at positions close to overpasses and vehicle congestion. Road centerlines were also extracted to introduce a search direction and to solve possible blockage problems. The new framework was also applied to highway network extraction from a large mosaic of ortho-images, where 83% of the vehicles on the highway network were successfully extracted.

In summary, this research makes the following contributions:

1) Highway boundaries and vehicles can be simultaneously extracted from aerial photographs. This will significantly facilitate traffic flow analysis from sequences of aerial photographs.

2) Some drawbacks of the traditional snake-based road extraction models are resolved by the proposed extraction framework, since region-based information is added into the geometric deformable model, which successfully prevents the leakage problem.

3) Only a small number of seed points is required to extract a complex highway network from a mosaic covering a large area.

4) Possible blockage can be handled by considering shape information of highway segments, such as centerline points and highway directions, in the seed-point propagation scheme.

In future research, the proposed framework will be applied to high-resolution satellite imagery, such as QuickBird and IKONOS, and to multi-spectral imagery.

Acknowledgments

The author gratefully thanks Prof. Rongxing (Ron) Li of the Mapping and GIS Laboratory at The Ohio State University for his great support and valuable guidance in this research. Special thanks are given to the reviewers for their extensive comments, which greatly contributed to improving the paper. This research was supported by the U.S. National Science Foundation Digital Government Program under grants CNS-0091494 and IIS-0446592, and by the U.S. National Geospatial-Intelligence Agency.

References

Agahi, R., Gafarian, A.V., Jagger, P., Nguyen, L.T., Pahl, J., 1976. Characteristics of Multilane Traffic Flow from Aerial Data. U.S. Department of Transportation Report No. DOT-TST-76T-2.

Agouris, P., Stefanidis, A., Gyftakis, S., 2001. Differential snakes for change detection in road segments. Photogrammetric Engineering and Remote Sensing 67 (12), 1391–1399.

Agouris, P., Doucette, P., Stefanidis, A., 2002. Automatic road extraction from high-resolution multispectral imagery. Technical report, Digital Image Processing and Analysis Laboratory, Department of Spatial Information Science and Engineering, University of Maine, Orono, Maine.

Agouris, P., Doucette, P., Stefanidis, A., 2004. Automation and digital photogrammetric workstations. In: McGlone, J.C., Mikhail, E.M., Bethel, J., Mullen, R. (Eds.), Manual of Photogrammetry, fifth ed. American Society of Photogrammetry and Remote Sensing, Bethesda, MA, pp. 949–981.

Amini, J., Lucas, C., Saradjian, M.R., Azizi, A., Sadeghian, S., 2002. Fuzzy logic system for road identification using IKONOS images. Photogrammetric Record 17 (99), 493–503.

Barash, D., 2002. A fundamental relationship between bilateral filtering, adaptive smoothing, and the nonlinear diffusion equation. IEEE Transactions on Pattern Analysis and Machine Intelligence 24 (6), 844–847.

Barzohar, M., Cooper, D.B., 1996. Automatic finding of main roads in aerial images by using geometric stochastic models and estimation. IEEE Transactions on Pattern Analysis and Machine Intelligence 18 (7), 707–721.

Baumgartner, A., Steger, C., Mayer, H., Echstein, W., Ebner, H., 1999. Automatic road extraction based on multi-scale, grouping, and context. Photogrammetric Engineering and Remote Sensing 65 (7), 777–785.

Burlina, P., Parameswaran, V., Chellappa, R., 1997. Sensitivity analyses and learning strategies for context-based vehicle detection algorithms. Proc. DARPA Image Understanding Workshop, pp. 577–584.

Caselles, V., Kimmel, R., Sappiro, G., 1997. Geodesic active contours. International Journal of Computer Vision 22 (1), 61–79.

Chop, D., 1993. Computing minimal surfaces via level set curvature flow. Journal of Computational Physics 106 (1), 77–91.

Daganzo, C., 1997. Fundamentals of Transportation and Traffic Operations. Elsevier Science, New York.

Elad, M., 2002. On the origin of the bilateral filter and ways to improve it. IEEE Transactions on Image Processing 11 (10), 1141–1151.

Gruen, A., Li, H., 1997. Semi-automatic linear feature extraction by dynamic programming and LSB-snakes. Photogrammetric Engineering and Remote Sensing 63 (8), 985–995.

Harvey, W., 1999. Performance evaluation for road extraction. Bulletin de la Société Française de Photogrammétrie et Télédétection 153 (1999-1), 79–87.

Haralick, R.M., Sapiro, L.G., 1985. Image segmentation techniques. Computer Vision, Graphics, and Image Processing 29 (1), 100–132.

Kass, M., Witkin, A., Terzopoulos, D., 1988. Snakes: active contour models. International Journal of Computer Vision 1 (4), 321–331.

Kimmel, R., 2003. Numerical Geometry of Images. Springer-Verlag Press.

Malladi, R., Sethian, J.A., Vemuri, B.C., 1995. Shape modeling with front propagation: a level set approach. IEEE Transactions on Pattern Analysis and Machine Intelligence 17 (2), 158–175.

Mayer, H., Hinz, S., Bacher, U., Baltsavias, E., 2006. A test of automatic road extraction approaches. International Archives of Photogrammetry, Remote Sensing, and Spatial Information Sciences 36 (Part 3), 209–214.

McKeown, D., Denlinger, J., 1988. Cooperative methods for road tracking in aerial imagery. IEEE Proceedings of Computer Vision and Pattern Recognition, Ann Arbor, MI, pp. 662–672.

Mintzer, O.W., 1983. Manual of Remote Sensing, second ed. Interpretations and Applications, vol. 2. American Society of Photogrammetry, Falls Church, VA, pp. 1955–2109. Chapter 32.

Moon, H., Chellapa, R., Rosenfeld, A., 2002. Performance analysis of a simple vehicle detection algorithm. Image and Vision Computing 20 (1), 1–3.

Neuenschwander, W., Fua, P., Iverson, L., Szekely, G., Kubler, O., 1997. Ziplock snakes. International Journal of Computer Vision 25 (3), 191–201.

Niu, X., Li, R., O'Kelly, M., 2002. Truck detection from aerial photographs. International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences 34 (Part 2), 351–356.

O'Kelly, M.E., Matisziw, T., Li, R., Merry, C., Niu, X., 2005. Identifying truck correspondence in multi-frame imagery. Transportation Research Part C: Emerging Technologies 13 (1), 1–17.

Pavlidis, T., Liow, L., 1990. Integrating region growing and edge detection. IEEE Transactions on Pattern Analysis and Machine Intelligence 12 (3), 225–233.

Strang, G., 1986. Introduction to Applied Mathematics. Cambridge Press, Wellesley.

Tao, C., Li, R., Chapman, M.A., 1998. Automated reconstruction of road centrelines from mobile mapping image sequences. Photogrammetric Engineering and Remote Sensing 64 (7), 709–716.

Tomasi, C., Manduchi, R., 1998. Bilateral filtering for gray and color images. Proceedings of the Sixth IEEE International Conference on Computer Vision, January 4–7, pp. 839–846.

TRB (Transportation Research Board of the National Academies), 2000. Highway Capacity Manual. Transportation Research Board (on CD-ROM).

Treiterer, J., 1975. Investigation of Traffic Dynamics by Aerial Photogrammetry Techniques. Ohio State University, Engineering Experiment Station.

Trinder, J.C., Li, H., 1995. Semi-automatic feature extraction by snakes. In: Gruen, A., Keubler, O., Agouris, P. (Eds.), Automatic Extraction of Man-Made Objects from Aerial and Space Images. Birkhäuser Verlag, Basel, Switzerland, pp. 95–102.

Trinder, J.C., Wang, Y., 1998. Automatic road extraction from aerial images. Digital Signal Processing 8 (4), 215–224.

Vosselman, G., de Knecht, J., 1995. Road tracing by profile matching and Kalman filtering. Automatic Extraction of Man-Made Objects from Aerial and Space Images (I), Ascona, Switzerland, April 24–28. Birkhäuser Verlag, Basel, pp. 265–275.

Wiedemann, C., Heipke, C., Mayer, H., Jamet, O., 1998. Empirical evaluation of automatically extracted road axes. In: Bowyer, K., Phillips, P. (Eds.), Empirical Evaluation Methods in Computer Vision. IEEE Computer Society Press, pp. 172–187.

Zhao, T., Nevatia, R., 2003. Car detection in low resolution aerial images. Image and Vision Computing 21 (8), 693–703.

Zlotnick, A., Carnine, P., 1993. Finding road seeds in aerial images. Computer Vision, Graphics, and Image Processing 57 (2), 243–260.

Zucker, S.W., 1976. Region growing: childhood and adolescence. Computer Graphics and Image Processing 5 (3), 382–399.
