
Enhanced Clustering Method using 3D Laser Range Data for an Autonomous Vehicle

KUK CHO, SEUNGHO BAEG, and SANGDEOK PARK
Korea Institute of Industrial Technology
Department of Robot Research & Business Development
1271-18, Sa-1dong, Sangruk-gu, Ansan
Republic of Korea
googi33, shbaeg, [email protected]

Abstract: This paper describes an enhanced clustering method for 3D point cloud data acquired from a laser range scanning system. The proposed method overcomes the intrinsic problems of vertical or horizontal scanning systems. The 3D laser range data acquired for an autonomous vehicle have the disadvantage that the point cloud data from the system are not equally distributed with distance. Moreover, the data describe only the object's visible surface. The proposed method essentially uses a radially bounded nearest neighbor approach and provides a definition of nearest-neighbor points for the point cloud data from the scanning system. The method uses a Gaussian-based approach to find nearest-neighbor points during the clustering process. Finally, experimental results for the clustering and segmentation are shown with real data.

Key–Words: 3D point cloud data, clustering, segmentation, autonomous vehicle, object recognition.

1 Introduction

The current foremost research objective of autonomous vehicle technology is to support and assist human driving. This technology should become more reliable and accurate to achieve this goal. One important issue is the creation of a sophisticated perception algorithm which undertakes object recognition and environmental understanding in complex environments [1, 2, 3]. There have been many studies in which various sensors were assessed in terms of their environment recognition capabilities. Similarly, there are a number of devices that recognize objects as a means of approximating human perception, including image sensors, ultrasonic sensors, RADAR sensors, 2D LRF sensors, and 3D LIDAR sensors. The laser sensors known as LRF (Laser Range Finder) and LADAR (Laser Detection and Ranging) are among the primary sensors used for object recognition in various situations. Laser sensor-based approaches include feature-based methods and grid-based methods. The feature-based method uses geometrical features such as lines, boxes, and circles [4, 5]. The grid-based method uses occupancy grid maps [6]. Traditional classification techniques need a considerable amount of information for success with two sequential processes: feature extraction and classification. These methods work using the multi-layer perceptron (MLP), the support vector machine (SVM), and adaptive boosting (AdaBoost) [7, 8, 9, 10]. One approach uses an LRF sensor to process one scan layer for object detection. Point cloud data from a 3D scanning sensor provide reliable object information with multiple scan layers. Over the last decade, laser range finders have led indoor and outdoor mobile robotics to a new level of object detection and environment understanding. Localization, map building, obstacle avoidance, obstacle detection, classification, and tracking are nowadays almost exclusively performed based on 2D and 3D range data. The amount of data obtained from either a 3D sensor or a combination of a 2D LRF and a scanning mechanism is generally several orders of magnitude larger than that obtainable from a simple 2D scan.

In this paper, an enhanced clustering method for 3D laser range data is proposed. We use an agglomerative nearest-neighbor clustering algorithm to segment the acquired point cloud data into basic units for further processing. The obtained clusters and distinctive segments may then be processed by additional classification algorithms. The structure of this paper is as follows: In Section 2, the motivation of this work is briefly explained. Section 3 briefly touches on previous methods and explains the proposed method. Section 4 describes the experimental results using the proposed method in detail. Finally, Section 5 concludes the paper and gives an outlook on future work.

Recent Advances in Electrical Engineering

ISBN: 978-1-61804-260-6 82


2 Motivation

The objective of this study is to develop an enhanced object recognition system using point cloud data from either a 3D laser range sensor or a 2D laser range finder with a scanning mechanism. Before high-level processing steps such as classification and tracking, the data should be divided into basic units to be processed. This involves a preprocessing step which includes clustering and segmentation, though the clustering and segmentation steps require improvement. In the proposed system, 3D laser range data are acquired either from a dedicated 3D scanning sensor or from a combined sensor consisting of a 2D LRF with a scanning mechanism. The 3D laser range sensor provides 3D point cloud data which describe the width, height, and depth characteristics in the 3D coordinates x, y, and z. The 3D laser range sensor extracts a more suitable ROI compared to the results of the LRF approach. Nevertheless, 3D laser range sensor data are associated with an irregular point distribution due to intrinsic problems with these types of sensors, which do not account for distance and shape. Even before the challenge of classifying the information from the laser range sensor, we are faced with the more fundamental problem of 3D segmentation. Thus, in this paper, the focus is on a method which segments 3D point cloud data robustly.

For an outdoor autonomous vehicle, 3D laser range data are acquired by a laser range finder with a scanning mechanism or by a 3D scanning sensor such as the Velodyne sensors (HDL-64E and HDL-32E). For further high-level processing, the 3D point cloud data should be divided into units of processing called segments through a clustering process. There are several weaknesses. First, there are intrinsic problems related to the scanning points based on the acquisition mechanism. For example, for a composition with an LRF sensor and a scanning mechanism, the scanning frequency determines the vertical resolution: the vertical data will be sparse with a slow scanning frequency and dense with a rapid scanning frequency. Well-known scanning sensors by Velodyne also have similar problems stemming from their laser component assembly. Second, the vertical and horizontal resolutions are different. With the combination of an LRF and a scanning mechanism, the mechanism is scanned vertically or horizontally depending on the application [11, 12, 13]. The scanning points of the horizontal layer of the LRF have a dense distribution; i.e., the horizontal points are close together but the vertical points are far apart. Additionally, we assume that the scanning points are mainly unrelated to the shape of the object. Therefore, the condition with


Figure 1: An example of a 3D laser range point illustration of a car; (a) the measurement points, in which the yellow points depict the measurement points from the scanning sensor, and (b) an illustration of the measurement points of the object.

Figure 2: An example illustration of a scanned human body which consists of nine layers. The point cloud data show that the horizontal resolution is dense but the vertical resolution is sparse owing to the aforementioned intrinsic problems.

the same threshold for all three coordinates is not feasible.

Fig. 1 shows an example of a scanning scene: a car is scanned using either a 3D sensor or a 2D sensor with a tilting mechanism. The yellow points indicate the reflections of the laser points, i.e., the laser targets. Fig. 1(a) depicts the scanning points of the front of a car from a scanning sensor installed on the vehicle, and Fig. 1(b) illustrates only the object's scanning points. As shown in the figures, the horizontal distribution of the acquired points is dense but the vertical distribution is sparse. Fig. 2 depicts an illustration of a human using point cloud data. As shown in the figure, the horizontal resolution is dense but the vertical resolution is sparse; the measurement points of the nine layers describe the human. Here, the difference between the vertical and horizontal resolution depends on the distance to the object. However, the traditional Euclidean approach is not feasible for distinguishing nearest-neighbor points. This paper proposes a method to overcome this condition, which is described in the following section.

3 The Proposed Method

This section briefly reviews previous clustering methods for point cloud data and then introduces the proposed approach. Klasing et al. use a radially bounded nearest neighbor (RBNN) approach [14], while Cho et al. apply an efficient segmentation method (RBNN-2) to detect objects [15]. The former is a point-based description and the latter is a cluster-based description. The proposed algorithm is based on these two previous methods. It suggests a definition of nearest-neighbor points for laser range data from a 3D scanning system. The acquired point cloud data P consist of the ground points Pg and the object points Po. The object points must be partitioned into units, such as segments, for further processing. The clustering step works with only the object points Po to detect objects. Previously, the four parameters needed for the clustering and segmentation processes were as follows: the radius range of neighbor points (rn), the minimum number of neighboring points within the radius (∇^n_r), the minimum number of points in one cluster (∇^p_c), and the minimum number of layers in one cluster (∇^l_c). Two parameters, i.e., the radius range of the neighbor points (rn) and the minimum number of neighboring points within the radius (∇^n_r), are effective when used to connect each point and to remove noisy data. In addition, the RBNN-2 algorithm has two additional criteria for object segmentation: the minimum number of points (∇^p_c) and the minimum number of layers in one cluster (∇^l_c). Given N, the total number of points in one frame scene, let the i-th point of the point cloud data be Pi, i = 1, ..., N. The measurement points of one frame consist of ground points and object points, P = Pg ∪ Po.
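As a toy illustration of the partition P = Pg ∪ Po, a naive height-threshold split can be sketched as follows (the threshold z_ground and the function name are assumptions for illustration only; the paper's actual ground removal is a separate discrimination stage):

```python
def split_ground(points, z_ground=0.2):
    """Naive illustrative split of a point cloud into ground points Pg and
    object points Po using a fixed height threshold (an assumption; the
    paper's ground-removal step is a dedicated discrimination stage).
    Each point is an (x, y, z) tuple."""
    pg = [p for p in points if p[2] <= z_ground]   # ground points Pg
    po = [p for p in points if p[2] > z_ground]    # object points Po
    return pg, po
```

Only the object points Po are then passed on to the clustering step.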

In this algorithm, Υ^0_i is the set of searched non-clustered neighbor points within the Pi-centered radius. Υ^-_i is the set of searched clustered neighbor points, connected to the nearest neighbor points of the Pi-centered radius. Υ^+_j is a newly established cluster point set including the point Pi. Pi denotes the center point of the neighbor range, and Ψ denotes the neighboring points within the Pi-centered radius rn. Algorithm 1 shows the pseudo-code with the four parameters. The clusters do not necessarily comprise complete objects; each cluster can be a partial object, an instance of mis-clustering, or an instance of mis-discrimination from a previous step such as ground removal. These are common situations in data processing. Additional criteria are needed for accurate

Data: object points Po, parameters rn, ∇^n_r, ∇^p_c, ∇^l_c
Result: clustered objects Υ^+
initialize cluster index j;
for each Pi ∈ Po do
    if ¬(Pi ∈ Υ) then
        (Υ^0_i, Υ^-_i, Ψ) ← GetNeighbor(Po, Pi, rn);
        if Υ^-_i = ∅ then
            Υ^+_j ← CreateNewCluster(Υ^0_i);
            j ← j + 1;                          /* case 1 */
        else if Υ^-_i ≠ ∅ then
            if NumberOf(Υ^0_i) > ∇^n_r then
                Υ^+_j ← Υ^+_j ∪ Υ^0_i;          /* case 2 */
                forall the Υ^-_i do
                    Υ^+_j ← Υ^+_j ∪ Υ^-_i;      /* case 3 */
                end
            end
        end
    end
end
forall the Υ^+_j ∈ Clusters do
    if NumberOf(Υ^+_j) < ∇^p_c or LayersOf(Υ^+_j) < ∇^l_c then
        delete Υ^+_j;                           /* case 4 */
    end
end

Algorithm 1: The proposed object clustering and segmentation procedure
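As a rough sketch only, not the authors' implementation, the control flow of Algorithm 1 might look as follows in Python; here GetNeighbor is reduced to a plain Euclidean radius search and the per-cluster layer criterion ∇^l_c is omitted for brevity:

```python
import math
from collections import Counter

def neighbors(points, i, r_n):
    """Indices of all points within Euclidean radius r_n of points[i].
    (The paper replaces this spherical test with the Gaussian criterion
    of Eq. (2); plain Euclidean distance is used here for brevity.)"""
    return [j for j, q in enumerate(points)
            if j != i and math.dist(points[i], q) <= r_n]

def rbnn_cluster(points, r_n, min_points):
    """Illustrative sketch of the RBNN-style loop of Algorithm 1.
    Returns labels[i] = cluster id of point i, or None for removed noise."""
    labels = [None] * len(points)
    next_id = 0
    for i in range(len(points)):
        if labels[i] is not None:           # already clustered, skip
            continue
        nbrs = neighbors(points, i, r_n)
        seen = {labels[j] for j in nbrs if labels[j] is not None}
        if not seen:                        # case 1: seed a new cluster
            labels[i] = next_id
            for j in nbrs:
                labels[j] = next_id
            next_id += 1
        else:                               # cases 2-3: join and merge clusters
            target = min(seen)
            labels[i] = target
            for j in nbrs:
                labels[j] = target
            for k, lab in enumerate(labels):
                if lab in seen:
                    labels[k] = target
    sizes = Counter(lab for lab in labels if lab is not None)
    # case 4: delete clusters smaller than the minimum number of points
    return [lab if lab is not None and sizes[lab] >= min_points else None
            for lab in labels]
```

For instance, two well-separated groups of points receive distinct labels, while an isolated point falls below the minimum cluster size and is discarded as noise (case 4).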

segmentation to an object for further high-level processing, such as case 4 in Algorithm 1. In addition, the use of the number of neighboring points has the effect of removing noisy data via ∇^n_r. The definition of the nearest neighboring points can be determined for each point. The Euclidean distance is the approach normally used to estimate the distances between points. However, the laser range points from the scanning system are not uniformly distributed. The 3D Gaussian distribution value therefore serves as the index of the nearest neighboring points.

        | σx  0   0  |
R =     | 0   σy  0  |        (1)
        | 0   0   σz |

Here, R denotes the parameters of the three-dimensional Gaussian distribution; each value describes the spread along the corresponding coordinate.


χ = sqrt( (p − m)^T R^(−1) (p − m) ) < γ        (2)

In this equation, m is the center point, which is not yet included in a cluster, as in case 1 of Algorithm 1, and p denotes a candidate neighboring point. χ is the index which determines whether or not the point is a neighboring point: for a given point, if χ is less than γ, the point is taken to be among its nearest points.

A different effect arises compared with the Euclidean distance. As a 2D example, consider two points P1(2,5) and P2(5,2) around the center point (5,5) with a 2D Gaussian distribution R = [1 0; 0 5]. The Euclidean distance is identical in the two cases, but the g-RBNN distance gives different results: 3 and 1.3416, respectively.
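Because R is diagonal, the test of Eq. (2) reduces to a per-axis scaled distance. The numeric example above can be reproduced with a minimal sketch (the function names are illustrative, not from the paper):

```python
import math

def g_rbnn_distance(p, m, sigma):
    """chi of Eq. (2) for a diagonal R = diag(sigma):
    chi = sqrt((p - m)^T R^-1 (p - m))."""
    return math.sqrt(sum((pi - mi) ** 2 / s
                         for pi, mi, s in zip(p, m, sigma)))

def is_neighbor(p, m, sigma, gamma):
    """p is accepted as a nearest neighbor of the center m iff chi < gamma."""
    return g_rbnn_distance(p, m, sigma) < gamma

# The 2D example from the text: center (5,5), R = [1 0; 0 5].
print(g_rbnn_distance((2.0, 5.0), (5.0, 5.0), (1.0, 5.0)))  # 3.0
print(g_rbnn_distance((5.0, 2.0), (5.0, 5.0), (1.0, 5.0)))  # ~1.3416
```

With, e.g., γ = 2, P2 would be accepted as a neighbor while P1 would not, even though both points lie at the same Euclidean distance from the center.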

Fig. 3 depicts the first and second steps of the two clustering methods and shows the final clustering results given the same data. Figs. 3(a)-3(d) and Figs. 3(e)-3(h) show corresponding processing illustrations of each step. The gray circles depict the boundary range of the nearest neighbors. The red circles denote the center point which creates a cluster with the nearest-neighbor points within a certain range. The black circles show the same cluster. Figs. 3(a), 3(b), and 3(c) depict the previous algorithm. Figs. 3(e), 3(f), and 3(g) show illustrations using the proposed nearest-neighbor point definition. Figs. 3(d) and 3(h) depict the clustering results, and Fig. 3(h) shows improved performance compared to that in Fig. 3(d).

4 Experimental Results

The proposed method was evaluated on a dataset collected with a 3D LIDAR sensor (Velodyne HDL-32E) mounted on an experimental all-terrain vehicle in a forest environment. The sensor has 32 vertical scanning lines and provides 3D range data over a 360-degree horizontal scan. This section presents the results of the ground-object discrimination and feature extraction that were performed. Furthermore, a classifier was implemented based on the acquired 3D data using a previously introduced algorithm. The test was based on the following scenario: in a forested environment, an unmanned ground vehicle (UGV) with an installed 3D LADAR system moves and distinguishes objects as humans, trees, or other objects. The test uses a system with a 3D LADAR sensor, GPS, INS, and an odometry encoder installed in the UGV. The test environment was based on an Intel Core i7 x980 CPU at 3.3 GHz with 24 GB RAM. The overall system algorithm was optimized using IPP. The 3D LADAR sensor was an HDL-32E (Velodyne, Inc.) and the data update rate was 100 ms. The first stage of ground and object discrimination required 40-50 ms. The second stage of object segmentation and feature extraction required 15-20 ms. The overall system managed real-time data processing within 60-70 ms.

The proposed method was also evaluated with the KITTI dataset [3]. The KITTI dataset was obtained from real-world traffic situations by a mobile vehicle with a variety of sensors, including a high-resolution stereo camera and a 3D laser scanner. The camera is calibrated with respect to the laser scanner, and the images from the camera are synchronized to the 3D point cloud data from the laser scanner. Each object segment is initially obtained from the point cloud data by the proposed g-RBNN clustering and segmentation method, after which its 3D bounding box is projected onto the 2D image plane to generate the ROI. Fig. 4(c) shows the object segments in the point cloud data and the projected ROIs in the 2D image. Fig. 4(h) shows the test samples obtained from the ROIs in the 2D images of the KITTI dataset. Assuming that we have a descriptor which describes the local shape of an object, the position of the block for the descriptor can vary by one or two pixels depending on the segmentation result. These outcomes indicate that the proposed method provides a more stable descriptor to represent the object shape.
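The box-to-ROI step described above can be sketched as follows (a hypothetical illustration, not the authors' code; P stands for a 3x4 camera projection matrix of the kind provided by the KITTI calibration files):

```python
def project_box_to_roi(corners, P):
    """Hypothetical sketch: project the corners of a 3D bounding box into
    the image plane with a 3x4 camera projection matrix P, then take the
    axis-aligned 2D bounding box of the projections as the ROI."""
    us, vs = [], []
    for X, Y, Z in corners:
        # homogeneous projection: [u*w, v*w, w]^T = P @ [X, Y, Z, 1]^T
        u = P[0][0] * X + P[0][1] * Y + P[0][2] * Z + P[0][3]
        v = P[1][0] * X + P[1][1] * Y + P[1][2] * Z + P[1][3]
        w = P[2][0] * X + P[2][1] * Y + P[2][2] * Z + P[2][3]
        us.append(u / w)
        vs.append(v / w)
    return (min(us), min(vs), max(us), max(vs))
```

The returned (u_min, v_min, u_max, v_max) rectangle is the image ROI for the segment.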

5 Conclusion

This study demonstrates an experiment in which clustering and segmentation are used as preprocessing techniques. In a forested environment, a 3D LADAR system installed on a UGV performed the clustering and segmentation while mobile. In the object segmentation step, both the Cartesian coordinate information and the polar coordinate information were utilized. The performance in such settings may depend on the 3D LADAR specifications. The results indicate that the proposed method is effective and efficient for real-time processing in autonomous mobile vehicles. The proposed algorithm works with only four parameters, which depend on the laser scanner and not on a specific data set. Furthermore, the algorithm is deterministic, which means that running times and segmentation results are repeatable and do not require initialization. We have analyzed the factors influencing the runtime complexity of the g-RBNN method and have verified the analysis by a series of benchmark evaluations with synthetic data. Finally, the efficient performance of the algorithm on a real 3D laser point cloud acquired by a mobile robot has been demonstrated. In future research, the currently analyzed laser range data may be used to estimate the movements of the UGV, in order to recognize the location of the UGV and to establish a leader-following system.

References:

[1] L. Spinello, M. Luber, and K.O. Arras, Tracking people in 3D using a bottom-up top-down detector, In Proc. of IEEE Int. Conf. on Robotics and Automation (ICRA), Shanghai, China, May 2011, pp. 1304-1310.

[2] K. Kidono, T. Naito, and J. Miura, Reliable pedestrian recognition combining high-definition LIDAR and vision data, In Proc. of IEEE Int. Conf. on Intelligent Transportation Systems (ITSC), 2012, pp. 1783-1788.

[3] A. Geiger, P. Lenz, C. Stiller, and R. Urtasun, Vision meets robotics: the KITTI dataset, International Journal of Robotics Research, 32(11), 2013, pp. 1231-1237.

[4] A. Mendes, L.C. Bento, and U. Nunes, Multi-target detection and tracking with a laser scanner, In Proc. of IEEE Intelligent Vehicles Symposium (IV), Rome, Italy, 14-17 June 2004, pp. 796-801.

[5] G.A. Borges and M.-J. Aldon, Line Extraction in 2D Range Images for Mobile Robotics, Journal of Intelligent and Robotic Systems, 40, 2004, pp. 267-297.

[6] T. Weiss, B. Schiele, and K. Dietmayer, Robust driving path detection in urban and highway scenarios using a laser scanner and online occupancy grids, In Proc. of IEEE Intelligent Vehicles Symposium (IV), Istanbul, Turkey, 13-15 June 2007, pp. 184-189.

[7] M.A. Sotelo, J. Nuevo, D. Fern, I. Parra, L.M. Bergasa, M. Ocana, and R. Flores, SVM-based Obstacles Recognition for Road Vehicle Applications, In Proc. of the 19th Int. Joint Conf. on Artificial Intelligence, Edinburgh, Scotland, July 30 - August 5, 2005, pp. 1740-1741.

[8] G.D.C. Cavalcanti, J.P. Magalhaes, R.M. Barreto, and T.I. Ren, MLPBoost: A combined AdaBoost/multi-layer perceptron network approach for face detection, In Proc. of IEEE Int. Conf. on Systems, Man, and Cybernetics, Seoul, Korea, 14-17 October 2012.

[9] L. Chen, Q. Li, M. Li, L. Zhang, and Q. Mao, Design of a Multi-Sensor Cooperation Travel Environment Perception System for Autonomous Vehicle, Sensors, 2012, 11, pp. 12386-12404.

[10] K. Cho, C. Kim, S.-H. Baeg, and S. Park, Leaned geometric features of 3D range data for human and tree recognition, Electronics Letters, 50(3), 2014, pp. 173-175.

[11] L.E. Navarro-Serment, C. Mertz, and M. Hebert, Pedestrian detection and tracking using three-dimensional LADAR data, International Journal of Robotics Research, 29(12), 2011, pp. 1516-1528.

[12] K. Kidono, T. Miyasaka, A. Watanabe, T. Naito, and J. Miura, Pedestrian recognition using high-definition LIDAR, In Proc. of IEEE Intelligent Vehicles Symposium (IV), Baden-Baden, Germany, June 2011, pp. 405-410.

[13] K. Cho, S.-H. Baeg, and S. Park, Object tracking with enhanced data association using a 3D range sensor for an unmanned ground vehicle, Journal of Mechanical Science and Technology, 28(11), 2014, pp. 4381-4388.

[14] K. Klasing, D. Wollherr, and M. Buss, A clustering method for efficient segmentation of 3D laser data, In Proc. of IEEE Int. Conf. on Robotics and Automation (ICRA), Pasadena, CA, USA, May 2008, pp. 4043-4048.

[15] K. Cho, S.-H. Baeg, and S. Park, Human and tree classification based on a model using 3D LADAR in a GPS-denied environment, In Proc. of SPIE 8741, Unmanned Systems Technology XV, 2013.



Figure 3: Schematic diagram of the clustering method: (a-c) an illustration of the previous clustering approach at the first and second iterations, (d) the result with the previous clustering method, (e-g) an illustration of the proposed clustering approach at the first and second iterations, and (h) the result with the proposed clustering method.


Figure 4: The experimental results of the proposed clustering and segmentation methods: (a-b) the images, (c-d) measurement points acquired from the scanning system, (e-f) clustering and segmentation, and (g-h) verification using a camera and a comparison with the LIDAR data.
