Surface normals and principal component analysis (PCA)
3DM slides by Marc van Kreveld
Normal of a surface
• Defined at points on the surface: normal of the tangent plane to the surface at that point
• Well-defined and unique inside the facets of any polyhedron
• At edges and vertices, the tangent plane is not unique or not defined (convex/reflex edge), so the normal is undefined there
Normal of a surface
• On a smooth surface without a boundary, the normal is unique and well-defined everywhere (smooth simply means that the derivatives of the surface exist everywhere)
• On a smooth surface (manifold) with boundary, the normal is not defined on the boundary
Normal of a surface
Normal of a surface
• The normal at edges or vertices is often defined in some convenient way: some average of the normals of incident triangles
Normal of a surface
• No matter what choice we make at a vertex, a piecewise linear surface will not have a continuously changing normal; this becomes visible after computing illumination
(figure annotation: not normals! they would be parallel)
Curvature
• The rate of change of the normal is the curvature
(figure: curves with higher, lower, infinite, and zero curvature)
Curvature
• A circle is a shape that has constant curvature everywhere
• The same is true for a line, whose curvature is zero everywhere
Curvature
• Curvature can be positive or negative
• Intuitively, the magnitude of the curvature is the curvature of the circle that looks most like the curve, close to the point of interest
(figure: curves with negative and positive curvature)
Curvature
• The curvature at any point on a circle is the inverse of its radius
• The (absolute) curvature at any point on a curve is the curvature of the circle through that point that has the same first and second derivative there (so it is defined only at points where the curve is C²)
(figure: circle of radius r; curvature = 1/r)
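The osculating-circle definition can be checked numerically: fit a circle through three nearby points of a curve and take 1/radius. A minimal Python sketch, using the parabola y = x² as a made-up example (its curvature at the origin is exactly 2):

```python
import math

def circumradius(p, q, r):
    # Radius of the circle through three points in 2D: R = abc / (4 * area).
    a = math.dist(q, r)
    b = math.dist(p, r)
    c = math.dist(p, q)
    # Triangle area via the cross product of two edge vectors.
    area = abs((q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])) / 2
    return a * b * c / (4 * area)

# Three points on y = x^2 close to the origin; the circle through them
# approximates the osculating circle there.
h = 0.01
pts = [(-h, h * h), (0.0, 0.0), (h, h * h)]
kappa = 1 / circumradius(*pts)
print(kappa)  # close to 2, the true curvature of y = x^2 at the origin
```

Shrinking h makes the circle through the three samples converge to the osculating circle, and 1/R to the true curvature.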
Curvature
• For a 3D surface, there are curvatures in all directions in the tangent plane
Curvature
(figure: surface with negative and positive curvature; the sign depends on which side is the inside)
Properties at a point
• A point on a smooth surface has various properties:
– location (“zero-th” derivative)
– normal (first derivative) / tangent plane
– two/many curvatures (second derivative)
Normal of a point in a point set?
• Can we estimate the normal for each point in a scanned point cloud? This would help reconstruction (e.g. for RANSAC)
Normal of a point in a point set
• Main idea of various different methods to estimate the normal of a point q in a point cloud:
– collect some nearest neighbors of q, for instance 12
– fit a plane to q and its 12 neighbors
– use the normal of this plane as the estimated normal for q
Normal estimation at a point
• Risk: if the 12 nearest neighbors of q are not nicely spread in all directions on the plane, the computed normal could even be perpendicular to the real normal!
Normal estimation at a point
• Also: the quality of normals of points near edges of the scanned shape is often not so good
• We want a way of knowing how good the estimated normal seems to be
Principal component analysis
• General technique for data analysis
• Uses the statistical concept of correlation
• Uses the linear algebra concept of eigenvectors
• Can be used for normal estimation and tells something about the quality (clearness, obviousness) of the normal
Correlation
• Degree of correspondence / changing together of two variables measured from objects
– in a population of people, length and weight are correlated
– in decathlon, performance on 100 meters and long jump are correlated (so are shot put and discus throw)
Pearson’s correlation coefficient
Covariance, correlation
• For two variables x and y, their covariance is defined as

Cov(x, y) = E[ (x − E[x]) (y − E[y]) ] = E[xy] − E[x] E[y]

• E[x] is the expected value of x, equal to the mean x̄
• Note that the variance σ²(x) = Cov(x, x), the covariance of x with itself, where σ(x) is the standard deviation
• Correlation ρ(x, y) = Cov(x, y) / ( σ(x) σ(y) )
Covariance
• For a data set of pairs (x₁, y₁), (x₂, y₂), …, (xₙ, yₙ), the covariance can be computed as

Cov(x, y) = (1/n) Σᵢ₌₁ⁿ (xᵢ − x̄)(yᵢ − ȳ)

where x̄ and ȳ are the mean values of the xᵢ and yᵢ
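The sample formula translates directly into code. A minimal sketch; the data values below are made up for illustration:

```python
def covariance(xs, ys):
    # Sample covariance: (1/n) * sum of (x_i - mean_x) * (y_i - mean_y)
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n

def correlation(xs, ys):
    # Pearson correlation: Cov(x, y) / (sigma(x) * sigma(y)),
    # using that sigma(x)^2 = Cov(x, x).
    sx = covariance(xs, xs) ** 0.5
    sy = covariance(ys, ys) ** 0.5
    return covariance(xs, ys) / (sx * sy)

xs = [1, 2, 3, 4, 5]
ys = [2, 4, 5, 4, 5]
print(covariance(xs, ys))   # 1.2
print(correlation(xs, ys))  # ~0.7746
```

Note that correlation is dimensionless and always lies in [−1, 1], while covariance carries the units of both variables.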
Data matrix
• Suppose we have weight w, length l, and blood pressure b of seven people
• Let the means of w, l, and b be w̄, l̄, and b̄
• Assume the measurements have been adjusted by subtracting the appropriate mean
• Then the data matrix is X, a 3 × 7 matrix with one row per measurement type (the values are shown on the slide)
• Note: each row has zero mean; the data is mean-centered
Covariance matrix
• The covariance matrix is XXᵀ
• In the example this is a 3 × 3 matrix (the values are shown on the slide)
• The covariance matrix is square and symmetric
• The main diagonal contains the variances
• Off-diagonal are the covariance values
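Mean-centering and forming XXᵀ can be sketched with NumPy. The seven measurement triples below are made up for illustration, since the slide's table is not reproduced here:

```python
import numpy as np

# Hypothetical measurements for 7 people; rows are the measurement types
# (weight w in kg, length l in m, blood pressure b in mmHg).
data = np.array([
    [70.0, 82.0, 65.0, 90.0, 74.0, 68.0, 77.0],
    [1.75, 1.90, 1.62, 1.85, 1.78, 1.70, 1.80],
    [120.0, 135.0, 118.0, 140.0, 125.0, 119.0, 130.0],
])

# Subtract each row's mean: every row of X then has zero mean.
X = data - data.mean(axis=1, keepdims=True)

# The covariance matrix (up to a 1/n factor): square, symmetric,
# with the variances on the main diagonal.
C = X @ X.T
print(C.shape)  # (3, 3)
```

The diagonal entries of C are sums of squares, so they are always non-negative, matching the fact that variances cannot be negative.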
Principal component analysis
• PCA is a linear transformation (3 × 3 in our example) that makes new base vectors such that
– the first base vector has a direction that realizes the largest possible variance (when the data is projected onto a line)
– the second base vector is orthogonal to the first and realizes the largest possible variance among those vectors
– the third base vector is orthogonal to the first and second base vector and …
– … and so on …
• Hence, PCA is an orthogonal linear transformation
Principal component analysis
• In 2D, after finding the first base vector, the second one is immediately determined because of the requirement of orthogonality
Principal component analysis
• In 3D, after the first base vector is found, the data is projected onto a plane with this base vector as its normal, and we find the second base vector in this plane as the direction with largest variance in that plane
(this “removes” the variance captured by the first base vector)
Principal component analysis
• After the first two base vectors are found, the data is projected onto a line orthogonal to the first two base vectors, and the third base vector is found on this line (trivial)
it is simply given by the cross product of the first two base vectors
Principal component analysis
• The subsequent variances we find are decreasing in value and give an “importance” to the base vectors
• This thought process explains why principal component analysis can be used for dimension reduction: maybe nearly all the variance in, say, 10 measurement types can be explained using 4 or 3 (new) dimensions
Principal component analysis
• In actual computation, all base vectors are found at once using linear algebra techniques
Eigenvectors of a matrix
• A non-zero vector v is an eigenvector of a matrix X if X v = λv for some scalar λ, and λ is called an eigenvalue corresponding to v
• Example 1: (1, 1) is an eigenvector of the matrix (2 1; 1 2) because (2 1; 1 2) (1, 1)ᵀ = (3, 3)ᵀ = 3 · (1, 1)ᵀ
In words: the matrix keeps the direction of this eigenvector the same, but its length is scaled by the eigenvalue 3
Eigenvectors of a matrix
• A non-zero vector v is an eigenvector of a matrix X if X v = λv for some scalar λ, and λ is called an eigenvalue corresponding to v
• Example 1 (continued): (1, –1) is also an eigenvector of (2 1; 1 2) because (2 1; 1 2) (1, –1)ᵀ = (1, –1)ᵀ = 1 · (1, –1)ᵀ
In words: the matrix keeps the direction and length of (1, –1) the same because its eigenvalue is 1
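The example matrix is an image in the extracted slides; the matrix consistent with both stated eigenpairs ((1, 1) with eigenvalue 3, and (1, −1) with eigenvalue 1) is the classic (2 1; 1 2), assumed below. NumPy confirms both eigenpairs:

```python
import numpy as np

# Reconstructed example matrix: the unique 2x2 matrix with eigenvector (1, 1)
# for eigenvalue 3 and eigenvector (1, -1) for eigenvalue 1.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

print(A @ [1, 1])    # [3. 3.], i.e. 3 * (1, 1): direction kept, length tripled
print(A @ [1, -1])   # [1. -1.], i.e. 1 * (1, -1): direction and length kept

vals = np.linalg.eigvals(A)
print(np.sort(vals))  # [1. 3.]
```

Any two eigenpairs with independent eigenvectors determine a 2 × 2 matrix uniquely (A = PDP⁻¹), which is how the matrix can be recovered from the slide text alone.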
Eigenvectors of a matrix
• Consider this transformation, shown animated on the slide
Blue vectors: (1,1)
Pink vectors: (1, –1) and (–1, 1)
Red vectors are not eigenvectors (they change direction)
Eigenvectors of a matrix
• If v is an eigenvector, then any vector parallel to v is also an eigenvector (with the same eigenvalue!)
• If the eigenvalue is –1 (negative in general), then the eigenvector will be reversed in direction by the matrix
• Only square matrices have eigenvectors and eigenvalues
Eigenvectors, a 2D example
• Find the eigenvectors and eigenvalues of A = (–2 –3; 1 2)
• We need: Av = λv by definition, or (A – λI) v = 0
(in words: our matrix minus λ times the identity matrix, applied to v, is the zero vector)
• This has a non-zero solution v exactly when det(A – λI) = 0
• det(A – λI) = det (–2–λ –3; 1 2–λ) = (–2 – λ)(2 – λ) – (–3)·1 = λ² – 1 = 0
Eigenvectors, a 2D example
• λ² – 1 = 0 gives λ = 1 or λ = –1
• The corresponding eigenvectors can be obtained by filling in each λ and solving a set of equations
• The polynomial in λ given by det(A – λI) is called the characteristic polynomial
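The example matrix is an image in the extracted slides, but it can be read off the worked determinant (entries −2, −3, 1, 2). Assuming that reconstruction, NumPy reproduces the eigenvalues ±1:

```python
import numpy as np

# Reconstructed from the slide's determinant:
# det(A - lambda*I) = (-2 - l)(2 - l) - (-3)*1 = l^2 - 1
A = np.array([[-2.0, -3.0],
              [ 1.0,  2.0]])

vals, vecs = np.linalg.eig(A)
print(np.sort(vals.real))  # [-1.  1.], the roots of the characteristic polynomial

# Each computed eigenpair satisfies the defining equation A v = lambda v.
for lam, v in zip(vals, vecs.T):
    assert np.allclose(A @ v, lam * v)
```

In practice one rarely expands the characteristic polynomial by hand; library routines like `numpy.linalg.eig` compute all eigenpairs at once.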
Questions
1. Determine the eigenvectors and eigenvalues of the first matrix shown on the slide. What does the matrix do? Does that explain the eigenvectors and values?
2. Determine the eigenvectors and eigenvalues of the second matrix shown on the slide. What does the matrix do? Does that explain the eigenvectors and values?
3. Determine the eigenvectors and eigenvalues of the third matrix shown on the slide.
Principal component analysis
• Recall: PCA is an orthogonal linear transformation
• The new base vectors are the eigenvectors of the covariance matrix!
• The eigenvalues are the variances of the data points when projected onto a line with the direction of the eigenvector
• Geometrically, PCA is a rotation around the multi-dimensional mean (point) so that the base vectors align with the principal components (which is why the data matrix must be mean-centered)
PCA example
• Assume the data pairs (1,1), (1,2), (3,2), (4,2), and (6,3)
• x̄ = 15/5 = 3 and ȳ = 10/5 = 2
• The mean-centered data becomes (–2,–1), (–2,0), (0,0), (1,0), and (3,1)
• The data matrix X = (–2 –2 0 1 3; –1 0 0 0 1)
• The covariance matrix XXᵀ = (18 5; 5 2)
• The characteristic polynomial is det(XXᵀ – λI)
PCA example
• The characteristic polynomial is det (18–λ 5; 5 2–λ) = (18 – λ)(2 – λ) – 25 = λ² – 20λ + 11
• When setting it to zero, any constant factor can be omitted
• We get λ = 10 ± √89, so λ₁ ≈ 19.43 and λ₂ ≈ 0.57 as the eigenvalues of the covariance matrix

• We always choose the eigenvalues to be in decreasing order: λ₁ ≥ λ₂ ≥ …
PCA example
• The first eigenvalue λ₁ ≈ 19.43 corresponds to an eigenvector (1, 0.29), or anything parallel to it
• The second eigenvalue λ₂ ≈ 0.57 corresponds to an eigenvector (–0.29, 1), or anything parallel to it
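The whole worked example can be verified in a few lines of NumPy (`eigh` returns the eigenvalues of a symmetric matrix in ascending order):

```python
import numpy as np

pts = np.array([(1, 1), (1, 2), (3, 2), (4, 2), (6, 3)], dtype=float)
X = (pts - pts.mean(axis=0)).T     # 2 x 5 mean-centered data matrix
C = X @ X.T                        # the covariance matrix (18 5; 5 2)

vals, vecs = np.linalg.eigh(C)     # ascending eigenvalues for symmetric C
l2, l1 = vals
print(round(l1, 2), round(l2, 2))  # 19.43 0.57

# Scale the first eigenvector so its first component is 1, as on the slide.
v1 = vecs[:, 1] / vecs[:, 1][0]
print(np.round(v1, 2))             # approximately (1, 0.29)
```

Eigenvectors are only defined up to scale (and sign), which is why the rescaling step is needed to match the slide's (1, 0.29).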
PCA example
• The data points and the mean-centered data points
PCA example
• The first principal component (purple): (1, 0.29)
• Orthogonal projection onto the orange line (direction of the first eigenvector) yields the largest possible variance
• The first eigenvalue λ₁ ≈ 19.43 is the sum of the squared distances to the mean (variance times 5) for this projection
PCA example
• Enlarged, and the non-squared distances shown
PCA example
• The second principal component (green): (–0.29, 1)
• Orthogonal projection onto the dark blue line (direction of the second eigenvector) yields the remaining variance
• The second eigenvalue λ₂ ≈ 0.57 is the sum of the squared distances to the mean (variance times 5) for this projection
PCA example
• The fact that the first eigenvalue is much larger than the second means that there is a direction that captures most of the variance of the data: a line exists that fits the data well
• When both eigenvalues are equally large, the data is spread equally in all directions
PCA, eigenvectors and eigenvalues
• In the pictures, identify the eigenvectors and state how different the eigenvalues appear to be
PCA observations in 3D
• If the first eigenvalue is large and the other two are small, then the data points lie approximately on a line
– through the 3D mean
– with orientation parallel to the first eigenvector
• If the first two eigenvalues are large and the third eigenvalue is small, then the points lie approximately on a plane
– through the 3D mean
– with orientation spanned by the first two eigenvectors / with normal parallel to the third eigenvector
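Both situations are easy to reproduce with synthetic data; the line direction, noise level, and point count below are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def eigenvalues(points):
    # Eigenvalues of the covariance matrix of a 3D point set, descending.
    X = (points - points.mean(axis=0)).T   # 3 x n mean-centered data matrix
    return np.sort(np.linalg.eigvalsh(X @ X.T))[::-1]

t = rng.uniform(-1, 1, 100)
noise = rng.normal(0, 0.01, (100, 3))

# Points near a line with direction (1, 2, 2): one large eigenvalue.
line_pts = np.outer(t, [1, 2, 2]) + noise
print(eigenvalues(line_pts))    # one large value, two tiny ones

# Points near the plane z = 0: two large eigenvalues, one tiny.
plane_pts = np.column_stack([rng.uniform(-1, 1, (100, 2)),
                             rng.normal(0, 0.01, 100)])
print(eigenvalues(plane_pts))   # two large values, one tiny one
```

The eigenvalue pattern (one, two, or three large values) is exactly what the slide uses to distinguish line-like, plane-like, and volume-like neighborhoods.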
PCA and local normal estimation
• Recall that we wanted to estimate the normal at every point in a point cloud
• Recall that we decided to use the 12 nearest neighbors for any point q, and find a fitting plane for q and its 12 nearest neighbors
(figure: point q among its nearest neighbors) Assume we have the 3D coordinates of these points, measured in meters
PCA and local normal estimation
• Treat the 13 points and their three coordinates as data with three measurements, x, y, and z: we have a 3 × 13 data matrix
• Apply PCA to get three eigenvalues λ₁, λ₂, λ₃ (in decreasing order) and eigenvectors v₁, v₂, and v₃
• If the 13 points lie roughly in a plane, then λ₃ is small and the plane contains directions parallel to v₁, v₂
• The estimated normal is perpendicular to v₁, v₂, so it is the third eigenvector v₃
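Putting it together, normal estimation for one neighborhood is only a few lines. The 13 sample points below are synthetic (a horizontal plane plus a little noise, my own test setup):

```python
import numpy as np

def estimate_normal(neighborhood):
    # PCA normal estimation for a roughly planar 3D point set:
    # the unit eigenvector for the SMALLEST eigenvalue of the covariance
    # matrix, i.e. the direction of least variance.
    P = np.asarray(neighborhood, dtype=float)
    X = (P - P.mean(axis=0)).T            # 3 x n mean-centered data matrix
    vals, vecs = np.linalg.eigh(X @ X.T)  # ascending eigenvalues
    return vecs[:, 0]                     # v3, the estimated normal

# 13 synthetic points roughly on the plane z = 1.
rng = np.random.default_rng(1)
pts = np.column_stack([rng.uniform(0, 1, (13, 2)),
                       1 + rng.normal(0, 0.01, 13)])
n = estimate_normal(pts)
print(np.round(np.abs(n), 2))  # close to (0, 0, 1)
```

Note the sign ambiguity: `eigh` may return v₃ or −v₃, so a consistent inside/outside orientation still has to be resolved separately.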
PCA and local normal estimation
• How large should the eigenvalues λ₁, λ₂, λ₃ be, to get a reliable normal?
• This depends on
– scanning density
– point distribution
– scanning accuracy
– curvature of the surface
PCA and local normal estimation
• Example 1: Assume
– the surface is a perfect plane (no curvature)
– scanning yields a uniform distribution
– density is 100 pt/m²
– accuracy is 0.03 m

• Then λ₃ will be less than 13 × 0.03² = 0.0117; we expect the 13 points to roughly lie in a cylinder of radius 0.203 m and thickness 0.06 m, and λ₁ and λ₂ should each be about 0.1 (10× as large as λ₃, but this is just a rough guess)
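Example 1 can be simulated to sanity-check these magnitudes. The sampling scheme is my own assumption: 13 points uniform in a disk of radius 0.203 m (from 100 pt/m²), with the z-error uniform in ±0.03 m:

```python
import numpy as np

rng = np.random.default_rng(2)
n, r, acc = 13, 0.203, 0.03   # neighbors, disk radius, scanning accuracy (m)

# Sample 13 points uniformly in a disk of radius r; the sqrt on the radius
# gives a uniform density over the disk's area.
theta = rng.uniform(0, 2 * np.pi, n)
rad = r * np.sqrt(rng.uniform(0, 1, n))
pts = np.column_stack([rad * np.cos(theta), rad * np.sin(theta),
                       rng.uniform(-acc, acc, n)])  # vertical scan error

X = (pts - pts.mean(axis=0)).T
l3, l2, l1 = np.linalg.eigvalsh(X @ X.T)  # ascending order
print(l1, l2, l3)  # l3 stays below 13 * 0.03^2 = 0.0117; l1, l2 roughly 0.1
```

The bound on λ₃ is guaranteed: the variance in the least-variance direction can never exceed the variance along the z-axis, which is at most the sum of the squared errors.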
PCA and local normal estimation
(figure: the 13 points lie in a cylinder of radius 0.203 m and thickness 0.06 m; the best-fit plane is half-way)
PCA and local normal estimation
• Example 2: Assume
– the surface is a perfect plane (no curvature)
– scanning yields a uniform distribution
– density is 500 pt/m²
– accuracy is 0.03 m

• Then λ₃ will be less than 13 × 0.03² = 0.0117; we expect the 13 points to roughly lie in a circle of radius 0.09 m, and λ₁ and λ₂ should each be about 0.026 (maybe 3 times as large as λ₃)
PCA and local normal estimation
PCA and local normal estimation
• Example 3: Assume
– the surface is a perfect plane (no curvature)
– scanning yields a uniform distribution
– density is 500 pt/m²
– accuracy is 0.07 m

• Then λ₃ will be close to 0.02 (≈ 13 × 0.04²); we expect the 13 points to roughly lie in a circle of radius 0.09 m, and λ₁ and λ₂ should each be about 0.026 (maybe hardly larger than λ₃)
PCA and local normal estimation
PCA and local normal estimation
• When the density goes up and/or the accuracy goes down, we may need to use more than 12 nearest neighbors to observe a considerable difference in eigenvalues for points on a plane, and estimate the normal correctly
PCA and local normal estimation
• More neighbors means more reliable covariance estimations, so a better normal, for flat surfaces
• … but for surfaces with considerable curvature, or close to edges of a flat surface, more neighbors means that the quality of normals goes down(because more often, points across the edges will be included in the nearest neighbors)
PCA and local normal estimation
• Example 4: Assume
– the surface is a perfect plane (no curvature)
– scanning is by LiDAR, with dense lines
– density is 100 pt/m²
– accuracy is 0.03 m

• We will find one large eigenvalue and two small ones

(figure: the nearest neighbors of q all lie along a single scan line)
PCA and local normal estimation
• Example 5: Assume
– point q lies in a tree (leaf or thin branch)
– distribution is uniform
– density is 100 pt/m²
– accuracy is 0.03 m

• We will find three eigenvalues that don’t differ too much (but it is rather unpredictable)
Another way for normal estimation
• Compute the Voronoi diagram of the points and choose the direction from each point to the furthest Voronoi vertex bounding its cell
• For unbounded cells, take the middle direction of the unbounded rays
• Need to resolve inside/outside
• Works in any dimension
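The Voronoi idea can be sketched in 2D with SciPy. The test setup below is my own: points on a slightly perturbed circle (the perturbation avoids the degenerate cocircular case), so the direction to the furthest Voronoi vertex of a cell should point roughly toward the circle's center, i.e. along the inward normal:

```python
import numpy as np
from scipy.spatial import Voronoi

rng = np.random.default_rng(3)
angles = np.linspace(0, 2 * np.pi, 20, endpoint=False)
radii = 1 + rng.normal(0, 0.005, 20)   # tiny radial perturbation
pts = np.column_stack([radii * np.cos(angles), radii * np.sin(angles)])

vor = Voronoi(pts)
i = 0                                   # estimate a normal direction at point 0
region = vor.regions[vor.point_region[i]]
# The cell is unbounded (index -1 marks the vertex at infinity); keep the
# finite vertices and take the furthest one, as on the slide.
finite = vor.vertices[[v for v in region if v != -1]]
far = finite[np.argmax(np.linalg.norm(finite - pts[i], axis=1))]
direction = (far - pts[i]) / np.linalg.norm(far - pts[i])
print(np.round(direction, 2))  # roughly (-1, 0): the inward normal at point 0
```

For unbounded cells a full implementation would also handle the middle-direction rule for the infinite rays and flip signs to a consistent inside/outside orientation; this sketch only covers the furthest-finite-vertex step.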
Curvature estimation
• Recall: in 3D a curvature at point p exists for any direction in the tangent plane of p
• “The” curvature at p is the maximum occurring curvature
• Basic approach: fit a suitable ball or quadratic surface with p on its boundary that locally is a good fit for the point set. Then compute the curvature of the ball or surface at p
Curvature estimation
• Or: choose many directions in the tangent plane and fit a circle tangent to p in the plane normal to the tangent plane and a chosen direction
• Choose the circle with smallest radius over all directions
Summary
• Eigenvectors and eigenvalues are a central concept in linear algebra and they are generally useful
• Local normal estimation can be done by principal component analysis on the 12 nearest neighbors, which essentially comes down to eigenvector and eigenvalue computation of 3x3 matrices
• There are other methods for local normal estimation, but these are generally less reliable and may not indicate how “good” the normal is
Questions
1. Perform principal component analysis on the following four data points: (1, 3), (2, 2), (4, 2), (5, 5) [mean-center first!]
2. Estimate the normal from 5 points in 3D, namely: (0,1,1), (4,2,0), (8,5,1), (–6,–2,1), (–6,–6,–3). How clear do you think the estimated normal is?
3. Can the estimated normal always be obtained as the third eigenvector? If not, what can you do?
4. Suppose the density and accuracy suggest that you need the 100 nearest neighbors to get a good normal estimate, but you don’t want to manipulate large matrices (3 × 100) for every point. What can you do?