Damage Detection and Location Using Time of Flight Depth Cameras

James J. Slade Honors Thesis

Declan O’Brien
Department of Mechanical and Aerospace Engineering

Rutgers University, New Brunswick

Under the direction of Aaron Mazzeo

Abstract—A method for detecting and locating damage using a depth camera and polynomial surface fitting is outlined. Several depth cameras’ abilities to resolve damage are compared, and the Azure Kinect is suggested as the most effective device. The depth camera is used to produce a 3D point cloud of the surface. A method of combining point clouds taken from multiple perspectives using colored fiducial marks is proposed and tested, and the results are compared to point clouds generated from a single perspective. Generated point clouds are fit to a surface using robust polynomial surface fitting, and damage is detected by finding outliers from this surface. This method, using the Azure Kinect, is shown to be able to detect damage as small as 1/4 in. The error in bulk damage size detection is analyzed along with the error in localization of damage and compared with manual measurements of the same values. It is concluded that this method would be capable of damage detection and localization without prior knowledge of surface geometry for damage larger than 1/4 in.

Index Terms—3D measurement, 3D scan registration, damage detection

I. INTRODUCTION

Analyzing the extent and location of damage can be very useful in determining when and how parts need to be repaired or replaced. Historically, this process has been done by humans, requiring significant training and time. It would thus be desirable to develop a method of damage detection and localization using computer vision so that this task can be performed autonomously. This would also allow for detection of damage in situations that prevent easy access by humans, such as structures that are under water or in space. Several methods of computer-based damage detection are already in use, with 3D vibrometry specifically seeing significant developments, such as in [1] and [2]. The goal of this research is to attempt to produce similar success in detecting damage using 3D-scanning time-of-flight depth cameras, without prior knowledge of the undamaged geometry of the surface.

The method in this paper is based on using a time of flight (ToF) depth camera to produce a 3D scan of a target surface. ToF cameras work by projecting a grid of light and then measuring the time it takes for that light to leave the sensor, reflect off of the target, and return to the sensor [3]. Because the direction and speed of the light are known, its travel time allows the camera to compute a point in 3D space for each projected light point. The camera used in this research is the Azure Kinect DK, which works by projecting and sensing near infrared (NIR) light. This camera is capable of producing sub-centimeter accuracy in depth and high spatial resolution, so it has potential to be used to digitally recreate damage. It was selected because of its relatively high resolution compared to similar available cameras [4]–[6].
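
As a purely illustrative sketch of the geometry described above (the round-trip time and variable names below are assumed example values, not taken from the Azure Kinect SDK), the range to a point follows from half the round-trip travel time multiplied by the speed of light:

% Illustrative only: convert a measured round-trip time to a range estimate.
c = 299792458;             % speed of light (m/s)
t_roundtrip = 6.67e-9;     % assumed example round-trip time (~6.67 ns)
range = c*t_roundtrip/2;   % one-way distance to the target, roughly 1 m
% Repeating this for every projected light point, with each point's known
% ray direction, yields a 3D point for every pixel of the depth image.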

To recreate a whole damaged object, it is also necessary to use mapping techniques to combine point clouds generated from the depth camera. One method for achieving this is simultaneous localization and mapping (SLAM). SLAM works by using 3D data to both build a map and place the camera in that map [7]. This allows for both localization, without the use of GPS or other similar systems, and a full 3D scan of an area. The implementation used in this study is Real-Time Appearance-Based Mapping (RTAB-Map), selected for its ability to synthesize both depth and RGB data in creating a map [10]. Because individual damaged areas are generally quite small, another approach will also be investigated. Point set registration is a method of aligning two point sets. Many different algorithms have been developed to accomplish this, but the MATLAB implementations of the coherent point drift (CPD) algorithm and the iterative closest point (ICP) algorithm are tested in this case. The CPD algorithm treats the alignment as a probability density estimation problem; by shifting the centroids of the point set, the algorithm maximizes the probability in order to align the two sets [8]. The ICP algorithm minimizes the mean-square distance between the point clouds by transforming one point cloud in each degree of freedom [9].
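
Both registration algorithms are available in MATLAB's Computer Vision Toolbox; the following minimal sketch (with placeholder file names) shows how either one produces a rigid transform that can then be applied to the moving point cloud:

% Minimal sketch: rigidly align a moving point cloud to a fixed one using the
% ICP and CPD implementations in the Computer Vision Toolbox.
ptCloudFixed = pcread('view1.ply');       % placeholder file names
ptCloudMoving = pcread('view2.ply');
tformICP = pcregistericp(ptCloudMoving, ptCloudFixed);                       % iterative closest point
tformCPD = pcregistercpd(ptCloudMoving, ptCloudFixed, 'Transform', 'Rigid'); % coherent point drift
aligned = pctransform(ptCloudMoving, tformCPD);   % apply the recovered transform
pcshowpair(aligned, ptCloudFixed);                % visually check the alignment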

Fig. 1: Azure Kinect Developer Kit

II. EQUIPMENT AND METHODOLOGY

A. Hardware Selection

The camera selected for this study was the Azure Kinect Developer Kit, shown in figure 1. This camera can capture depth images at a resolution of 1 MP as well as 4K color images at a frequency of 30 Hz. Additionally, this camera has two modes, one optimized for a narrow field of view (NFOV) and another optimized for a wide field of view (WFOV) [3]. The NFOV mode was used to allow for increased spatial resolution. Two other cameras, the Microsoft Kinect 2 and the Intel RealSense L515, were considered, but the Azure Kinect was selected for its superior resolution for both depth and color images.

To allow for analysis of the error of the damage detection, an example of a damaged target surface was used. This target, shown in figure 2, consists of a half-inch-thick sheet of plywood with 11 holes drilled into it. The holes were measured with calipers to have the diameters shown in Table I. Also attached to the surface is colored tape acting as fiducial marks, used in the point cloud registration step.

Fig. 2: Damage target

B. Software Methods

Images were collected using the Robot Operating System (ROS) with the Azure Kinect SDK and the Azure Kinect ROS driver. This method was determined to allow for the fastest image collection and allowed for easy integration with the MATLAB scripts. MATLAB was used for the image processing, damage detection, and error analysis. MATLAB was connected to ROS using the ROS Toolbox, which allowed MATLAB to set up a ROS subscriber and gather images from ROS at the rate that they were generated. Surface fitting was performed using the Curve Fitting Toolbox.
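
An abbreviated sketch of this collection pipeline is shown below; the full script appears in the Appendix, and the topic name 'points2' comes from that script.

% Abbreviated from the Appendix: subscribe to the Azure Kinect point cloud
% topic and convert one message to a MATLAB pointCloud object.
rosinit                              % connect to the ROS network
sub = rossubscriber('points2');      % point cloud topic from the Azure Kinect ROS driver
msg = receive(sub, 10);              % wait up to 10 s for one message
pc = pointCloud(readXYZ(msg), 'Color', uint8(255*readRGB(msg)));
rosshutdown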

Once the point clouds are collected, several steps are taken to process them; then a surface fit is performed and damage is detected. First, 50 point clouds are collected and averaged to get a single point cloud of the surface with reduced noise. Because the depth image is organized such that each point corresponds consistently to the same sensor pixel, each point in the averaged cloud is effectively the average of 50 measurements from the same pixel. The resulting point cloud is then cropped to leave only the point cloud of the damage target. For trials testing point cloud registration, two perspectives are used to obtain two point clouds of the damage target from different angles. Next, when applicable, the two perspectives are merged by isolating the fiducial marks in each point cloud by color, registering them using either the ICP or CPD algorithm implemented in the MATLAB Computer Vision Toolbox, and then applying the transform used to register the fiducial marks to the entire point cloud, leaving a single, merged point cloud.
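
Because the depth image keeps its organized structure, the averaging step reduces to a mean over a stack of XYZ arrays; a condensed sketch, following the collection script in the Appendix (sub is the subscriber created there):

% Condensed from the Appendix: average n_pc point clouds so that each output
% point is the mean of 50 readings from the same sensor pixel.
n_pc = 50;
rawPoints = zeros(n_pc, 368640, 3);        % 368640 points per depth frame
for i = 1:n_pc
    rawPoints(i,:,:) = readXYZ(receive(sub, 10));
end
averagePC = squeeze(mean(rawPoints, 1));   % 368640-by-3 averaged point cloud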

After being collected and, if necessary, merged, the point cloud is separated into surface and damage. A third-degree (3x3) polynomial surface fit is used to model the surface, and outliers from this surface are detected as damage. The exact threshold, in standard deviations, at which a point is considered an outlier changes based on the method used, as some data sets have more noise than others. The point cloud of damage is then isolated and segmented so each individual area of damage can be measured.
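
The core of this step, condensed from the fitting script in the Appendix, is a poly33 fit followed by a residual threshold (the 1.2-standard-deviation multiplier shown is one of the values used and was tuned per data set; the robust 'LAR' option appears in the helper function in the Appendix):

% Condensed from the Appendix: fit a poly33 surface to the cropped cloud and
% flag large residuals as damage. x_values, y_values, z_values are column
% vectors of the point coordinates.
ft = fittype('poly33');
opts = fitoptions('Method', 'LinearLeastSquares', 'Robust', 'LAR');
[surfFit, gof] = fit([x_values, y_values], z_values, ft, opts);
residuals = abs(feval(surfFit, x_values, y_values) - z_values);
isDamage = residuals > 1.2*std(z_values);      % outlier threshold
damage_pc = pointCloud([x_values(isDamage), y_values(isDamage), z_values(isDamage)]);
surface_pc = pointCloud([x_values(~isDamage), y_values(~isDamage), z_values(~isDamage)]);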

Fig. 3: Point cloud of damaged surface with detected damage highlighted in magenta

TABLE I: Damage target hole diameters

Hole Number:    1     2    3     4    5     6    7     8    9    10   11
Diameter (in):  1/16  1/8  3/16  1/4  5/16  3/8  7/16  1/2  5/8  3/4  1

Fig. 4: Point cloud of isolated damage

(a) Unaveraged point cloud

(b) Averaged point cloud

Fig. 5: Effect of averaging on point cloud noise

III. RESULTS & DISCUSSION

The first test performed with this method was to determine whether it was capable of detecting damage at all. The result of damage detection on this point cloud, shown in figure 3 with damage highlighted in magenta, demonstrates that this method is capable of detecting damage as small as 6.35 mm (1/4 in) under the testing conditions. It also demonstrates that the damage location and shape are, visually at least, quite accurate. One weakness of the method appeared in this test as well. Because depth cameras rely on optical measurements, any portion of the surface or damage not in the FOV of the camera cannot be captured. Figure 4 shows just the detected damage isolated from the rest of the surface, and the cylindrical damage instead appears nearly hemispherical. This is caused in part by some portions of the damage being optically shadowed. The algorithm used to calculate the depth, which is proprietary and therefore could not be changed, also appeared to smooth hard corners, which affected the result as well.

In order to test the effectiveness of averaging multiple point clouds, an averaged point cloud and a single unaveraged point cloud were created from the same location. Each of these was run through the damage detection process to determine which could detect smaller areas of damage. The averaging method allowed holes 4-11 to be detected, while the unaveraged point cloud only detected holes 5-11 due to an increased threshold for outliers, so the averaging method was used for all subsequent testing. Additionally, comparing the point clouds visually reveals a significantly noisier surface for unaveraged point clouds than for averaged ones, as seen in figure 5. This suggests that the predicted noise reduction effect of this method occurred. The benefit of using this method to denoise the point cloud over something like the MATLAB denoise function is that no detail is lost. It does, however, appear to make no significant change in the systematic error of the camera.

The two point cloud registration algorithms were tested next. Figure 6 shows the alignment process. From this it was determined that both methods worked relatively well and were able to align the fiducial mark point clouds on the same plane. The CPD algorithm, however, was better able to align the point clouds within the plane, so it was selected for further testing. Also visible in the result of the merging process is the difference that perspective makes in the point cloud: because different portions of the damage are shadowed, the shape of the detected damage changes.

(a) Isolated fiducial mark point clouds

(b) Transformed fiducial mark point clouds

(c) Merged final point cloud

Fig. 6: Point cloud merging process

Error analysis was then performed on the damage detection results to determine the accuracy. 50 averaged point clouds were obtained, and the average and standard deviation of the size and position of the damage were determined.

Fig. 7: Position accuracy of single-perspective point cloud

Fig. 8: Size accuracy of single-perspective point cloud

Size was determined by averaging the diameter of the damage in the x and y axes, and position was determined as the distance of each centroid from the centroid of the largest area of damage. This analysis was performed for both the unmerged single-perspective point clouds (figures 7 and 8) and the merged two-perspective point clouds (figures 9 and 10).
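
A simplified sketch of these two metrics follows (the full error-analysis script in the Appendix works from the extreme points of each segmented cluster; here cluster is an N-by-3 array of points from one damage region and refCentroid is the centroid of the largest region, both placeholder names):

% Simplified size and position metrics for one segmented damage cluster.
m_to_in = 39.37007874;                                 % meters to inches
x_extent = max(cluster(:,1)) - min(cluster(:,1));
y_extent = max(cluster(:,2)) - min(cluster(:,2));
diameter_in = mean([x_extent, y_extent])*m_to_in;      % bulk damage size
centroid = mean(cluster, 1);
position_in = norm(centroid - refCentroid)*m_to_in;    % distance from largest hole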

The results for the unmerged perspective reveal that both the damage size and position are measured accurately. For each hole, the mean measured size is less than one standard deviation from the caliper-measured size, suggesting strong agreement with the physical damage. The error does increase as the size of the hole decreases. This may be because less of the damage is flagged as an outlier, since the smoothing previously mentioned reduces the magnitude of the damage's impact on the depth data. Position error also agrees well, with average values near the measured value, though the standard deviation is significantly higher for these measurements, potentially due to compounding error caused by the metric used being a relative position rather than an absolute one.

Fig. 9: Position accuracy of two-perspective point cloud

Fig. 10: Size accuracy of two-perspective point cloud


The first mapping test performed used RTAB-Map. The Azure Kinect DK was able to work with the mapping algorithm to generate maps of the room containing the damage target, including the target itself. While these maps might be sufficient for robotic navigation, they were significantly noisier and less precise than either the single unmerged point cloud or the merged two-point-cloud results, so no further testing was pursued.

The results for the merged perspective reveal significantly higher accuracy for holes 3/8 in and larger and a significant decrease in accuracy below that. The increased accuracy on the larger holes is likely the result of having two perspectives on the damage: less of the damage is shadowed and, therefore, its extent is measured more accurately. The merging process, however, also introduces extra noise from the second perspective. This additional noise increases the threshold necessary to exclude all non-damage outliers, causing parts of the smallest holes to no longer be detected. This caused the decreased accuracy on the smallest holes, without any significant effect on the larger holes. This suggests that a mapping method that incorporates noise reduction would likely produce the best results and should be explored in further testing.

IV. CONCLUSION

The evidence gathered suggests that an autonomous damage detection system based on a 3D scanning platform is feasible. Though significant error was observed with the camera used, this camera is significantly less capable at scanning than others designed for industrial use, suggesting that the accuracy ceiling of this method is well above what was demonstrated here. Even with the limited fidelity of the camera, damage as small as 1/4 in was successfully detected, located, and measured. While the exact geometry of the damage was not replicated, this could likely be improved with more successful mapping methods and higher resolution cameras. Further study should use sensors with higher fidelity that are better able to mitigate the noise introduced by mapping the surface, to allow for the creation of volumetrically accurate damage scans. Despite the issues found, this work suggests that a depth camera can be used for autonomous damage location and detection with low error in bulk damage size and location measurement, and that mapping, with proper noise reduction, is likely more effective than single point cloud scans for this purpose.

ACKNOWLEDGMENT

I would like to thank Professor Aaron Mazzeo for his work advising and teaching me, and Noah Harmatz and Calvin Dobrin for their guidance and advisement with the depth camera and damage detection scripts. Additionally, I would like to thank my parents for their patience in allowing me to cordon off testing areas in their home.

REFERENCES

[1] Ł. Pieczonka, Ł. Ambrozinski, W. J. Staszewski, D. Barnoncel, and P. Peres, “Damage detection in composite panels based on mode-converted Lamb waves sensed using 3D laser scanning vibrometer,” Optics and Lasers in Engineering, vol. 99, pp. 80–87, Dec. 2017, doi: 10.1016/j.optlaseng.2016.12.017.

[2] R. Marks, C. Gillam, A. Clarke, J. Armstrong, and R. Pullin, “Damage detection in a composite wind turbine blade using 3D scanning laser vibrometry,” Proceedings of the Institution of Mechanical Engineers, Part C: Journal of Mechanical Engineering Science, vol. 231, no. 16, pp. 3024–3041, Dec. 2016, doi: 10.1177/0954406216679612.

[3] “Azure Kinect DK depth camera,” Azure Kinect DK documentation. [Online]. Available: https://docs.microsoft.com/en-us/azure/kinect-dk/depth-camera. [Accessed: 01-Apr-2021].

[4] “Azure Kinect DK hardware specifications,” Azure Kinect DK documentation. [Online]. Available: https://docs.microsoft.com/en-us/azure/kinect-dk/hardware-specification. [Accessed: 01-Apr-2021].

[5] “Intel® RealSense™ LiDAR Camera L515,” Intel® RealSense™ Depth and Tracking Cameras, 12-Mar-2021. [Online]. Available: https://www.intelrealsense.com/lidar-camera-l515/. [Accessed: 01-Apr-2021].

[6] “Kinect for Windows v2,” Depthkit Documentation. [Online]. Available: https://docs.depthkit.tv/docs/kinect-for-windows-v2. [Accessed: 01-Apr-2021].

[7] J. J. Leonard and H. F. Durrant-Whyte, “Simultaneous map building and localization for an autonomous mobile robot,” Proceedings IROS ’91: IEEE/RSJ International Workshop on Intelligent Robots and Systems ’91, Osaka, Japan, 1991, pp. 1442–1447 vol. 3, doi: 10.1109/IROS.1991.174711.

[8] A. Myronenko and X. Song, “Point Set Registration: Coherent Point Drift,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 12, pp. 2262–2275, Dec. 2010, doi: 10.1109/TPAMI.2010.46.

[9] P. J. Besl and N. D. McKay, “A method for registration of 3-D shapes,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 14, no. 2, pp. 239–256, Feb. 1992, doi: 10.1109/34.121791.

[10] M. Labbe and F. Michaud, “RTAB-Map as an open-source lidar and visual simultaneous localization and mapping library for large-scale and long-term online operation,” Journal of Field Robotics, vol. 36, no. 2, pp. 416–446, 2018.

V. APPENDIX: MATLAB SCRIPTS

Script used to collect depth images

%%% Creates a ROS subscriber to collect point clouds, averages n_pc point
%%% clouds and saves the resulting point cloud

% for k = 1:50
%% Initiate ROS subscriber and other variables
rosinit
exampleHelperROSCreateSampleNetwork
sub = rossubscriber('points2');
pause(1);
n_pc = 50;
pc_list = rosmessage('sensor_msgs/PointCloud2');
pc_list.PreserveStructureOnRead = 1;
rawPoints = zeros(n_pc, 368640, 3);
rawColor = zeros(n_pc, 368640, 3);
tic

%% Collect point clouds and convert to MATLAB pointCloud format
for i = 1:n_pc
    pc_list(i) = receive(sub, 10);
end
toc
tic
for i = 1:n_pc
    pointCloudList(i) = pointCloud(readXYZ(pc_list(i)), 'Color', uint8(255*readRGB(pc_list(i))));
    rawPoints(i,:,:) = pointCloudList(i).Location;
    rawColor(i,:,:) = pointCloudList(i).Color;
end
toc

%% Shutdown ROS subscriber
exampleHelperROSShutDownSampleNetwork
rosshutdown

%% Trim and average point clouds
averagePC = mean(rawPoints);
trimIndices = find(averagePC(1,:,3) > 0 & averagePC(1,:,3) < 1.1 & ...
    averagePC(1,:,1) > -0.5 & averagePC(1,:,1) < 0.4 & ...
    sqrt(averagePC(1,:,1).^2 + averagePC(1,:,3).^2) < 1 & ...
    averagePC(1,:,2) < 0.19 & averagePC(1,:,2) > -0.3);
averageColor = mean(rawColor);
averagePC = averagePC(1, trimIndices, :);
firstPC = rawPoints(1, trimIndices, :);
rawColor = rawColor(1, trimIndices, :);
averageColor = averageColor(1, trimIndices, :);
finalPC = pointCloud(averagePC(1,:,:), 'Color', uint8(averageColor(1,:,:)));

%% Plot Results
figure
scatter3(pc_list(1))
figure
pcshow(pointCloud(averagePC))
figure
pcshow(pointCloud(firstPC));

% filename = ['perspective2', num2str(k)];
% save(['savedPCs/Perspective2/', filename], 'finalPC');
% end

Script used to fit the damaged surface

%%% Performs a surface fit on a point cloud and outputs a point cloud of
%%% outliers
n_pc = 50;
outlier_pcs = cell(1, n_pc);
surface_pcs = cell(1, n_pc);
fits = cell(1, n_pc);
gofs = cell(1, n_pc);

for k = 1:n_pc
    % pc_in = load(['savedPCs/merged/mergedPC', num2str(k), '.mat']);
    pc_in = pointCloudList(k);
    % pc_in = squeeze(pc_in.Location(1,:,:));
    pc_in = pc_in.Location;
    trim = 1;

    %% Trim Point cloud (deactivate if point cloud pre-trimmed)
    if (trim)
        trimmed_pc = [[], [], []];
        % figure
        % plot3(pc_in(:,1), pc_in(:,2), pc_in(:,3), '.', 'MarkerSize', 1);
        % title('Damage scan')
        trimIndices = find(pc_in(:,3) > 0 & pc_in(:,3) < 1.1 & ...
            pc_in(:,1) > -0.5 & pc_in(:,1) < 0.4 & ...
            sqrt(pc_in(:,1).^2 + pc_in(:,3).^2) < 1 & ...
            pc_in(:,2) < 0.19 & pc_in(:,2) > -0.3);
        pc_in = pc_in(trimIndices, :);
        maxX = max(pc_in(:,1));
        minX = min(pc_in(:,1));
        maxY = max(pc_in(:,2));
        minY = min(pc_in(:,2));
        frac = 0.75;
        trimmedIndices = find(pc_in(:,1) < maxX*frac & pc_in(:,1) > minX*frac & ...
            pc_in(:,2) < maxY*frac & pc_in(:,2) > minY*frac);
        pc_in = pc_in(trimmedIndices, :);
    end

    %% Detect surface shape
    % This step attempts to fit a 2D polynomial to the trimmed point
    % cloud. A 3x3 degree polynomial fit is used. The point cloud is then
    % separated into 2 point clouds, a point cloud of the surface, and a
    % point cloud of the outliers, and the fitting process is run again for
    % the surface.
    x_values = pc_in(:,1);
    y_values = pc_in(:,2);
    z_values = pc_in(:,3);
    ft = fittype(['poly33']);
    opts = fitoptions('Method', 'LinearLeastSquares');
    opts.Lower = [-Inf -Inf -Inf -Inf 0 -Inf -Inf 0 0 -Inf];
    opts.Upper = [Inf Inf Inf Inf 0 Inf Inf 0 0 Inf];
    [fit2, gof2] = fit([x_values, y_values], z_values, ft, opts);
    f_data = feval(fit2, x_values, y_values);
    I = abs(f_data - z_values) > 1.2*std(z_values);
    outliers = excludedata(x_values, y_values, 'indices', I);
    outliers_indices = find(outliers);

    outlier_pc = [pc_in(outliers_indices, 1), pc_in(outliers_indices, 2), ...
        pc_in(outliers_indices, 3)];

    inliers = ~outliers;
    inliers_indices = find(inliers);
    surface_pc = [pc_in(inliers_indices, 1), pc_in(inliers_indices, 2), ...
        pc_in(inliers_indices, 3)];

    %% Plot Results
    % figure
    % pcshowpair(pointCloud(outlier_pc), pointCloud(surface_pc))
    % figure
    % plot3(outlier_pc(:,1), outlier_pc(:,2), outlier_pc(:,3), '.', 'Color', 'r', ...
    %     'MarkerSize', 2);
    % hold on
    % plot3(surface_pc(:,1), surface_pc(:,2), surface_pc(:,3), '.', 'Color', 'b', ...
    %     'MarkerSize', 2);
    % legend('Damage', 'Surface');
    % hold off
    % figure
    % plot(fit2, [x_values, y_values], z_values)
    % outlier_pc = pointCloud(outlier_pc);
    % surface_pc = pointCloud(surface_pc);
    % figure
    % [labels, numClusters] = pcsegdist(outlier_pc, 0.01);
    % pcshow(outlier_pc.Location, labels)
    % colormap(hsv(numClusters))
    % title('Point Cloud Clusters')
    %% Save output
    outlier_pcs(k) = {outlier_pc};
    surface_pcs(k) = {surface_pc};
    fits(k) = {fit2};
    gofs(k) = {gof2};
    % hold off
end

%% Default fit function
function [fitresult, gof] = createFit(x_values, y_values, z_values, n1, n2)
%CREATEFIT(XVALUES, YVALUES, ZVALUES)
%  Create a fit.
%
%  Data for 'untitled fit 1' fit:
%      X Input: x_values
%      Y Input: y_values
%      Z Output: z_values
%  Output:
%      fitresult: a fit object representing the fit.
%      gof: structure with goodness-of-fit info.
%
%  See also FIT, CFIT, SFIT.

%  Auto-generated by MATLAB on 02-Nov-2020 12:28:09

%% Fit: 'untitled fit 1'.
[xData, yData, zData] = prepareSurfaceData(x_values, y_values, z_values);

% Set up fittype and options.
ft = fittype(['poly', num2str(n1), num2str(n2)]);
opts = fitoptions('Method', 'LinearLeastSquares', 'Robust', 'LAR');
if (n1 == 1 && n2 == 2)
    opts.Lower = [-Inf -Inf -Inf 0 -Inf];
    opts.Upper = [Inf Inf Inf 0 Inf];
elseif (n1 == 1 && n2 == 3)
    opts.Lower = [-Inf -Inf -Inf 0 -Inf 0 -Inf];
    opts.Upper = [Inf Inf Inf 0 Inf 0 Inf];
elseif (n1 == 2 && n2 == 3)
    opts.Lower = [-Inf -Inf -Inf -Inf 0 -Inf 0 0 -Inf];
    opts.Upper = [Inf Inf Inf Inf 0 Inf 0 0 Inf];
elseif (n1 == 3 && n2 == 1)
    opts.Lower = [-Inf -Inf -Inf -Inf 0 -Inf 0];
    opts.Upper = [Inf Inf Inf Inf 0 Inf 0];
else
    opts.Lower = [-Inf -Inf -Inf -Inf 0 -Inf -Inf 0 0 -Inf];
    opts.Upper = [Inf Inf Inf Inf 0 Inf Inf 0 0 Inf];
end
% Fit model to data.
[fitresult, gof] = fit([xData, yData], zData, ft, opts);

% % Plot fit with data.
% figure('Name', 'untitled fit 1');
% h = plot(fitresult, [xData, yData], zData);
% legend(h, 'untitled fit 1', 'z_values vs. x_values, y_values', 'Location', 'NorthEast', 'Interpreter', 'none');
% % Label axes
% xlabel('x_values', 'Interpreter', 'none');
% ylabel('y_values', 'Interpreter', 'none');
% zlabel('z_values', 'Interpreter', 'none');
% grid on
% view(-60.2, 90.0);
end

Script used to merge point clouds

%%% Takes two point clouds from different perspectives, isolates the
%%% fiducial marks by color, and aligns the two point clouds using point
%%% cloud registration

% for i = 1:50
%% Load point clouds from file
pc1 = load(['savedPCs/Perspective1/perspective1', num2str(i), '.mat'], 'finalPC');
pc2 = load(['savedPCs/Perspective2/perspective2', num2str(i), '.mat'], 'finalPC');
pc1 = pc1.finalPC;
pc2 = pc2.finalPC;

%% Register point clouds and save merged point cloud
color1 = double(pc1.Color);
color2 = double(pc2.Color);
meanRGB1 = mean(color1);
meanRGB2 = mean(color2);
stdRGB1 = std(color1);
stdRGB2 = std(color2);
colorIndices1 = find((color1(1,:,1)) > (meanRGB1(1,:,1) + 2.6*stdRGB1(1,:,1)));
colorIndices2 = find((color2(1,:,1)) > (meanRGB2(1,:,1) + 2.6*stdRGB2(1,:,1)));
pc1Loc = pc1.Location;
pc2Loc = pc2.Location;

pc1Loc = pc1Loc(1, colorIndices1, :);
pc2Loc = pc2Loc(1, colorIndices2, :);
pc1RGBOutliers = pointCloud(pc1Loc);
pc2RGBOutliers = pointCloud(pc2Loc);
pc_tform = pcregistercpd(pc1RGBOutliers, pc2RGBOutliers, 'Transform', 'Rigid');
pc1RGBTransform = pctransform(pc1RGBOutliers, pc_tform);
figure
pcshowpair(pc1RGBOutliers, pc2RGBOutliers);
figure
pcshowpair(pc1RGBTransform, pc2RGBOutliers);
tformPC1 = pctransform(pc1, pc_tform);
figure
pcshowpair(tformPC1, pc2);
pcOut = pcmerge(tformPC1, pc2, 0.0001);

% save(['savedPCs/merged/mergedPC', num2str(i), '.mat'], 'pcOut');
% end

Script used to calculate error

%%% Calculates the mean and standard deviation of the size and position of
%%% the error

%% Initialize
n_pc = 50;
cluster_tolerance = 0.01;
errors = zeros(n_pc, 8);
pos_list = zeros(n_pc, 8);
diams = zeros(n_pc, 8);
hole_diams = [1/4, 5/16, 3/8, 7/16, 1/2, 5/8, 3/4, 1];
rel_pos_list = [(mean([14.375, 14.635]) - 0.5), (mean([12.375, 12.625]) - 0.5), ...
    (mean([10.3125, 10.6875]) - 0.5), (mean([8.3125, 8.75]) - .05), ...
    (mean([6.3125, 6.8125]) - .05), (mean([4.1875, 4.8125]) - .05), ...
    (mean([2.125, 2.875]) - .05), 0];

%% Calculate mean and standard deviation
for i = 1:n_pc
    pc = pointCloud(outlier_pcs{i});
    [pc_list, num_clusters] = seperate_clusters(pc, cluster_tolerance);
    error = zeros(1, 8);
    pos = zeros(8, 3);
    pos_n = zeros(1, 8);
    x_pos = zeros(1, 8);
    for j = 1:num_clusters
        pc_cur = pc_list{2, j};
        max_x_i = find(pc_cur(:,1) == max(pc_cur(:,1)));
        min_x_i = find(pc_cur(:,1) == min(pc_cur(:,1)));
        max_y_i = find(pc_cur(:,2) == max(pc_cur(:,2)));
        min_y_i = find(pc_cur(:,2) == min(pc_cur(:,2)));
        x_r = norm(pc_cur(max_x_i(1),:) - pc_cur(min_x_i(1),:));
        y_r = norm(pc_cur(max_y_i(1),:) - pc_cur(min_y_i(1),:));
        r = mean([x_r, y_r]);
        r = sort(r);
        r = r.*0.03937007874*1000;   % convert meters to inches
        error(j) = r;
        pos(j,:) = mean([pc_cur(max_x_i(1),:); pc_cur(min_x_i(1),:); pc_cur(max_y_i(1),:); pc_cur(min_y_i(1),:)], 1);
        x_pos(j) = mean([pc_cur(max_x_i(1),1); pc_cur(min_x_i(1),1)], 1);
    end

    [x_pos, i_sort] = sort(x_pos, 'ascend');
    error = error(i_sort);
    pos = pos(i_sort,:).*0.03937007874*1000;   % convert meters to inches
    for j = 1:num_clusters - 1
        pos_n(j) = norm(pos(j,:) - pos(num_clusters,:));
    end
    diams(i,:) = error;
    errors(i,:) = error - hole_diams;
    pos_list(i,:) = pos_n;
end

avg_error = mean(errors, 1);
avg_diams = mean(diams, 1);
avg_pos = mean(pos_list, 1);
std_error = std(errors, 1);
std_diams = std(diams, 1);
std_pos = std(pos_list, 1);

%% Plot
figure
bar(1:8, avg_diams)
hold on
er = errorbar(1:8, avg_diams, std_diams);
er.Color = [0 0 0];
er.LineStyle = 'none';
xticks(1:8);
xticklabels({'1/4', '5/16', '3/8', '7/16', '1/2', '5/8', '3/4', '1'});
xlabel('Hole diameter (in)');
ylabel('Measured diameter (in)');
% ylim([-0.05 0.25]);
grid on
hold off

figure
scatter(rel_pos_list, avg_pos);
hold on
plot(0:15, 0:15);
hold on
er2 = errorbar(rel_pos_list, avg_pos, std_pos);
er2.Color = [0 0 0];
er2.LineStyle = 'none';
xlabel('Ruler measured relative hole position (in)');
ylabel('Kinect measured relative hole position (in)');
legend('Position Measurements', 'Line of Equality');
grid on
axis equal
xlim([0, 18]);
ylim([0, 18]);
hold off

%% Functions
function [pc_list, numClusters] = seperate_clusters(pc_in, minDistance)
    [labels, numClusters] = pcsegdist(pc_in, minDistance);
    pcshow(pc_in.Location, labels);
    colormap(hsv(numClusters));
    title('Point Cloud Clusters');
    pc_list = cell(2, numClusters);
    for i = 1:numClusters
        clustered_point_indices = find(labels == i);
        n_points = size(clustered_point_indices, 1);
        clustered_points = zeros(n_points, 3);
        for j = 1:n_points
            clustered_points(j,:) = pc_in.Location(clustered_point_indices(j), :);
        end
        % clustered_point_cloud = pointCloud(clustered_points);
        pc_list(:, i) = {i, clustered_points};
        clearvars clustered_points;
    end
end