
Generating 3D Views of an Object from Images

Vimala K.S, Avinash N, Murali S

P.E.T. Research Centre, P.E.S.C.E., Mandya, Karnataka, India.

[email protected], wittybot@gmail.com, [email protected]

Abstract

Computer vision is the process of obtaining information about a scene by processing images of the scene. Generating a 3D model of an object is one of the major goals in computer vision. 3D reconstruction of objects and environments is useful in itself, since the resultant 3D models have wide application in virtual reality, computer-aided design and engineering. 3D reconstruction is usually done in a piece-wise fashion by identifying various parts of the scene and from those parts constructing a representation of the whole. Generation of a model according to its true dimensions is an important property, especially for building models. In a picture taken by a camera, the depth information about the scene is lost: by observing an image a human being can feel the shape of an object, but not metrics like its height or width. In this paper a new technique is used to generate 3D views by constructing orthographic views of objects from the perspectively distorted image, so as to get the real dimensions of the object from a given photograph. Image-based modeling and rendering is used to perceive the 3D views in real time. A photorealistic effect is achieved by eliminating the perspective distortion of the image using plane homography.

Keywords: computer vision, view metrology, homography.

1. Introduction

The human vision system is capable of perceiving 3D information through stereovision. Computer vision is a wide area inspired by this natural system. A central problem [1] in computer vision is to obtain 3D geometric shape information about objects from planar images. This process is traditionally termed 3D reconstruction. It is usually done in a piece-wise fashion by identifying various parts of the scene and from those parts constructing a representation of the whole.

In recent years, generating 3D building models has attracted many researchers, as it has a wide range of applications: in civil engineering, where engineers need 3D views to convey their product to customers; in games and entertainment, to create virtual reality; and for robot navigation inside buildings. Urbanists and architects could all be interested in obtaining models of buildings from a few images. In some cases, buildings that have disappeared can be modeled from as little as a single image, for example an old photograph or a painting.

One solution for generating 3D views is to use specialized devices for acquiring 3D information about an object. Since these devices are expensive, they cannot always be used. The other solution is manual generation, but the user requires prior knowledge about the object as well as engineering skills, and generating thousands of models this way is time consuming. Hence, we require a way of generating 3D views with less user interaction.

To reconstruct any building from images, dimensions are essential: the height and length of walls, the width of the floor, etc. Changing a dimension affects the appearance of the resultant 3D model. We use the concept of one-point perspective [2] to get real dimensions. This is done by constructing orthographic views of objects from the perspectively distorted image. With these dimensions we construct a model in VRML. We use image-based modeling and rendering to generate a photorealistic model. Since we use the perspectively distorted image to measure dimensions, we need to eliminate the perspective distortion for rendering; this is done by plane homography. The perspectively corrected images are then mapped to the corresponding planes in the VRML model. VRML supports walkthrough simulation in a 3D space, so that we can perceive the 3D model in near real time.

2. Related Work

Most of the methods in the literature focus on generating 3D views from stereo images, from sequences of monocular images, or from a combination of both. The approaches differ in cost, amount of time, and user interaction required for generating 3D models.

R. Koch et al. [3] presented 3D surface reconstruction from a sequence of images. They use a structure-from-motion (SFM) approach to perform automatic calibration, and a depth map is obtained by applying multi-view stereoscopic depth estimation to each calibrated image. For rendering, two different texture-based rendering techniques are used: view-dependent geometry and texture (VDGT) and multiple local methods (MLM).

Chiu et al. [4] used the Potemkin model to support reconstruction of the 3D shapes of object instances. They stored different 3D oriented shape primitives at fixed 3D positions in a class model and labeled each image of a class for learning. A 2D view-specific recognition

system returns the bounding box for the detected object in an image. A model-based segmentation method is then used to obtain the object contour; using that object outline, the individual parts in the model are obtained. A shape-context algorithm matches and deforms the boundaries of the stored part-labeled image to the detected instance, thus generating a 3D model of the class from the detected object.

Saxena et al. [5] presented an algorithm that uses a Markov random field (MRF). They showed how monocular image cues can be combined with triangulation over a set of images to obtain 3D models of large novel environments. They over-segment the given image to obtain patches; using the MRF, the relationships between the various image patches, and the relations between the image features and the 3D location/orientation of the planes, are predicted. The relations are obtained for multiple patches in the same image or across images. MAP inference is performed with a probabilistic model and a series of linear programs.

Ellen et al. [6] presented an automatic building model reconstruction procedure. They derive the building orientation from an analysis of height histogram bins. Using the orientation, orthogonal 2D projections of the point clouds are generated, in which roof planes occur as lines of points. These lines representing planes are extracted by a line-tracking algorithm and extended to planes, and the planes are analyzed for deviations from rectangular shape. Two or more neighboring planes are grouped to generate 3D building models.

In summary, most methods in the literature generate 3D views from stereo images, sequences of monocular images, or both. In general, building a stereo system is costly, and in monocular image sequences finding correspondences between images is a challenging task. There is relatively little work, and many inconsistencies, in the reconstruction of 3D objects with true dimensions. Hence we propose a method which overcomes these shortcomings by constructing orthographic views of objects from the perspectively distorted image.

3. Methodology

The outline of the overall process of 3D view generation is shown in figure 1.

Figure 1: Outline of the 3D view generation process.

The image, captured with a calibrated camera, is used as input to the system. In view metrology the different dimensions of the building are determined using one-point perspective; this is done by constructing orthographic views of objects from the perspectively distorted image. The perspective distortion contained in the image is rectified using plane homography in the perspective elimination process. Then a 3D geometric model is constructed in VRML according to the true dimensions from metrology. In rendering, a texture map is applied to each corresponding polygon surface of the VRML model. VRML supports walkthrough of the rendered model, through which the different views are generated.

3.1 View Metrology

Measurement of object dimensions from images acquired by an imaging device is called view metrology. In a picture taken by a camera, the depth information about the scene is lost: by observing an image a human being can feel the shape of an object, but not metrics like its height or width.

We use view metrology to find the distance factors of the real world, or of any object, from a single image. The method uses perspective-geometry construction from one-point perspective; the input is an image having one vanishing point. We use the concept of projections in which the perspective views in the image are orthographically projected. By converting to the orthographic projection space, the objects can be measured in true dimensions. We apply the principles of construction of one-point perspective from engineering graphics, performing the procedure in reverse. The perspective image is represented in Euclidean space, the orthographic projections derived from it are represented in the same space, and the calculations are performed using analytical geometry.

For the measurement of objects present in the scene using view metrology, the following steps are required:

1. Vanishing point determination
2. Construction of the ground line
3. Construction of the picture plane line
4. Dimension finding

With a pinhole camera, a set of parallel lines in a scene is projected onto a set of lines in the image that meet in a common point. This point of intersection is called the vanishing point, and it is determined by a known method. The horizon line is a line passing through the vanishing point; in the case of one-point perspective, it passes through the vanishing point and determines the camera roll. The ground plane and the picture plane are always parallel to the horizon line.
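The paper treats vanishing point detection as a known method. For illustration, one common construction for a one-point-perspective image is to intersect the projections of two scene-parallel edges marked by the user; the following NumPy sketch (the helper names and sample segments are illustrative assumptions, not from the paper) does exactly that using homogeneous coordinates:

```python
import numpy as np

def to_line(p, q):
    """Homogeneous line through two image points (x, y)."""
    return np.cross([p[0], p[1], 1.0], [q[0], q[1], 1.0])

def vanishing_point(seg1, seg2):
    """Intersect two image lines, each given as a pair of points.

    For a one-point-perspective image, seg1 and seg2 are the images
    of two scene lines that are parallel in 3D (e.g. corridor edges).
    """
    vp = np.cross(to_line(*seg1), to_line(*seg2))
    if abs(vp[2]) < 1e-12:
        raise ValueError("lines are parallel in the image; no finite VP")
    return vp[:2] / vp[2]    # back from homogeneous to (x, y)

# illustrative segments along the left and right corridor edges
print(vanishing_point(((10, 400), (300, 255)), ((630, 410), (350, 255))))
```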

The ground line (GL) is a line parallel to the horizon line. A reference parallel edge on the ground in the image, which remains parallel even after perspective distortion, is used as the reference parallel to the ground line. Parallel to this reference line, the ground line is constructed below the vanishing point at a distance equal to the height of the camera mounting.

The picture plane line (PP) is a line parallel to the ground line (and the horizon line), separated from it by any convenient distance. The station point ST lies on the perpendicular from the vanishing point to the picture plane, at a distance equal to the focal length from the picture plane line. For convenience we situate the station point on the vanishing point and construct the picture plane line parallel to the ground line, above the vanishing point at a distance equal to the focal length of the camera. Finally, we obtain the dimensions of the objects in the scene. For any object to be measured, we use view metrology from one-point perspective, which can measure the width, height, and depth of the object. This applies to cases of known camera parameters. The measurements are found as follows.

3.1.1 Finding the width of an object

The schematic representation of the construction for the width measurement is shown in figure 3(b).

- P1 and P2 are the two points in the image 3(a) of an object whose width has to be measured. The selected points of the object should make contact with the ground as it appears in the image; alternatively, they may be taken from points which project vertically onto the ground in the image.
- Extend the line formed by joining VP and P1 so that it intersects the ground line at Pg1(xpg1, ypg1). Similarly obtain Pg2(xpg2, ypg2) from the line formed by joining VP and P2.
- Measure the Euclidean distance from Pg1 to Pg2, i.e. D(Pg1, Pg2), to get the actual width of the object.
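In code, the width construction reduces to two line intersections and one distance. A minimal NumPy sketch (the helper names are assumptions; ground_line is the homogeneous line from the GL construction above):

```python
import numpy as np

def to_line(p, q):
    """Homogeneous line through two image points."""
    return np.cross([p[0], p[1], 1.0], [q[0], q[1], 1.0])

def intersect(l1, l2):
    """Intersection point of two homogeneous lines."""
    p = np.cross(l1, l2)
    return p[:2] / p[2]

def object_width(vp, ground_line, p1, p2):
    """Width construction of 3.1.1: cast P1 and P2 through the
    vanishing point onto the ground line and measure Pg1-Pg2."""
    pg1 = intersect(to_line(vp, p1), ground_line)   # Pg1(xpg1, ypg1)
    pg2 = intersect(to_line(vp, p2), ground_line)   # Pg2(xpg2, ypg2)
    return np.linalg.norm(pg1 - pg2)                # D(Pg1, Pg2)
```

For a level horizon the ground line is simply the horizontal line a distance h below the vanishing point, e.g. ground_line = to_line((0, vp[1] + h), (100, vp[1] + h)); image y grows downward, hence the plus sign.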

3.1.2 Finding the height of an object

The schematic representation of the construction for the height measurement is shown in figure 4(b).

Figure 2: Overall construction of view metrology.
Figure 3(a): Two points of which the width has to be measured.
Figure 3(b): Two points of which, in an image, the width is projected onto an orthographic scale.
Figure 4(a): Two points of which the height has to be measured.
Figure 4(b): Two points of which, in an image, the height is projected onto an orthographic scale.

- P1 and P2 are the two points in the image 4(a) of an object whose height has to be measured. The selected points of the object should make contact with the ground as it appears in the image; alternatively, they may be taken from points which project vertically onto the ground in the image.
- Extend the line formed by joining VP and P1 so that it intersects the ground line at Pg1(xpg1, ypg1).
- Project Pg1 onto the picture plane line such that its foot of perpendicular is at Pp1(xpp1, ypp1) on the picture plane.
- Extend the line joining VP and P2 so that it intersects the line joining Pg1 and Pp1 at Pgp2(xpgp2, ypgp2).
- Measure the Euclidean distance from Pg1 to Pgp2, i.e. D(Pg1, Pgp2), to get the actual height of the object.
- If the object is not touching the ground, it can be projected onto the ground first; the relative heights above the ground are then measured for the two points P1 and P2 individually, and their difference gives the height of the object.
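The height construction uses the same primitives plus one perpendicular projection. A sketch under the same assumptions (to_line and intersect as in the width sketch; a level horizon, so the picture plane line is horizontal):

```python
import numpy as np

def to_line(p, q):
    return np.cross([p[0], p[1], 1.0], [q[0], q[1], 1.0])

def intersect(l1, l2):
    p = np.cross(l1, l2)
    return p[:2] / p[2]

def object_height(vp, ground_line, picture_plane_line, p1, p2):
    """Height construction of 3.1.2."""
    pg1 = intersect(to_line(vp, p1), ground_line)
    # foot of perpendicular of Pg1 on the horizontal picture plane line
    # (a line (a, b, c) with a = 0 is the horizontal line y = -c/b)
    pp1 = (pg1[0], -picture_plane_line[2] / picture_plane_line[1])
    # VP-P2 meets the vertical line Pg1-Pp1 at Pgp2
    pgp2 = intersect(to_line(vp, p2), to_line(pg1, pp1))
    return np.linalg.norm(pg1 - pgp2)               # D(Pg1, Pgp2)
```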

3.1.3 Finding the length and depth of an object

The schematic representation of the construction for the depth measurement is shown in figure 5(b).

- P1 is the point in the image 5(a) of an object whose depth from the picture plane has to be measured. The selected points of the object should make contact with the ground as it appears in the image; alternatively, they may be taken from points which project vertically onto the ground in the image.
- Extend the line formed by joining VP and P1 so that it intersects the ground line at Pg1(xpg1, ypg1).
- Project Pg1 onto the picture plane line such that its foot of perpendicular is at Pp1(xpp1, ypp1) on the picture plane.
- Similarly, project P1 onto the picture plane line such that its foot of perpendicular is at P1p1(xp1p1, yp1p1) on the picture plane.
- Since, from the construction, ST coincides with VP, find the intersection point of the two lines formed from these projected points to get Pa1(xpa1, ypa1).
- Measure the Euclidean distance from Pa1 to Pp1, i.e. D(Pa1, Pp1), to get the actual depth of the object from the picture plane. The sum of this distance and the focal length is the depth of the object from the camera.
- Similarly, find the depth of P2(xp2, yp2). The difference between the depths of P1 and P2 gives the actual length.

In the building image of figure 6(a), the user marks the corner points interactively. These corner points separate the given corridor into different planes, such as floor, ceiling, and walls. The corner points which partition the building into separate planes are listed in Table 1.

Figure 5(a): Various points of which the depth has to be measured from the image.
Figure 5(b): Corresponding projection of the selected points, with the depth projected onto an orthographic scale.
Figure 6(a): Corridor image with user-marked corner points.
Figure 6(b)

Table 1: Corner points which partition the building into separate planes.

Corner Points     Plane
P1, P2, P3, P4    Front wall
P7, P1, P3, P5    Side wall
P8, P2, P4, P6    Side wall
P5, P3, P4, P6    Floor
P7, P1, P2, P8    Ceiling

The dimensions are then measured using the corner points as reference:

D(P3, P4) = width of corridor
D(P3, P5) or D(P4, P6) = length of corridor
D(P1, P3) or D(P2, P4) = height of corridor

where D denotes the Euclidean distance.

3.2 Perspective Elimination Process

The mapping between 2D planes, i.e. the 2D projective mapping, is known as a homography. In perspective imaging the shape is distorted because parallel lines in the scene tend to converge to a finite point in the image.

For applications like modeling and rendering, perspective distortion is a major obstacle. Here we remove the perspective distortion by means of plane homography. The rectangular plane ABCD, acquired using a pinhole camera, is seen perspectively distorted in figure 7: the camera that acquires the image is situated at the station point, and the plane of the rectangle makes an angle with the picture plane. The perspective transformation turns the perspective projection into a parallel projection (orthographic view), so that the view volume becomes a rectangle.

    Figure 7: Perspective projection depicting the line ofheights.

To build an orthographic view of the image from the perspectively distorted image, the plane homography [7] is used. The homography can be computed simply by knowing the relative positions of four points on the scene plane and their corresponding positions in the new image, which is yet to be constructed. In our case, the coordinates of the four end points of the rectangle in the image are known, and the dimensional ratio in the orthographic view is estimated. Fixing one point as the origin and reconstructing the other three points, we get the required four points in the new image. For example, the coordinates A(x1, y1), B(x2, y2), C(x3, y3), and D(x4, y4), which constitute a rectangle, are known in the given image, and this image is subjected to the perspective transformation. From view metrology we get the length and breadth (l, b). Thus the corners of the new image are A(P1, Q1), B(P2, Q2), C(P3, Q3), and D(P4, Q4), where (P1, Q1) = (0, 0), (P2, Q2) = (l, 0), (P3, Q3) = (l, b), and (P4, Q4) = (0, b). This transformation is given by the solution of the following equation.

    A*T=B (1)

where

A = \begin{bmatrix}
x_1 & y_1 & 1 & 0 & 0 & 0 & -P_1 x_1 & -P_1 y_1 \\
0 & 0 & 0 & x_1 & y_1 & 1 & -Q_1 x_1 & -Q_1 y_1 \\
x_2 & y_2 & 1 & 0 & 0 & 0 & -P_2 x_2 & -P_2 y_2 \\
0 & 0 & 0 & x_2 & y_2 & 1 & -Q_2 x_2 & -Q_2 y_2 \\
x_3 & y_3 & 1 & 0 & 0 & 0 & -P_3 x_3 & -P_3 y_3 \\
0 & 0 & 0 & x_3 & y_3 & 1 & -Q_3 x_3 & -Q_3 y_3 \\
x_4 & y_4 & 1 & 0 & 0 & 0 & -P_4 x_4 & -P_4 y_4 \\
0 & 0 & 0 & x_4 & y_4 & 1 & -Q_4 x_4 & -Q_4 y_4
\end{bmatrix}   (2)

T = (a11, a12, a13, a21, a22, a23, a31, a32)^T (3)

B = (P1, Q1, P2, Q2, P3, Q3, P4, Q4)^T (4)

The vector T holds the eight unknowns of the 3×3 transformation matrix H (with h33 = 1), which in general incorporates rotation, translation, scaling, skewing, and stretching as well as perspective distortion; here we only consider perspective distortion.
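Equation (1) is the standard linear system for a plane homography with h33 fixed to 1, so it can be solved directly. A NumPy sketch of the rectification mapping described above (the function names and corner values are illustrative):

```python
import numpy as np

def homography_from_4pts(src, dst):
    """Solve A*T = B (eq. 1) for the eight unknowns of H.

    src: four (x, y) rectangle corners in the distorted image;
    dst: their (P, Q) positions in the rectified image."""
    A, B = [], []
    for (x, y), (P, Q) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -P * x, -P * y])
        A.append([0, 0, 0, x, y, 1, -Q * x, -Q * y])
        B.extend([P, Q])
    T = np.linalg.solve(np.array(A, float), np.array(B, float))
    return np.append(T, 1.0).reshape(3, 3)          # H with h33 = 1

def warp_point(H, x, y):
    p = H @ np.array([x, y, 1.0])
    return p[:2] / p[2]

# a rectangle seen distorted, mapped to an l-by-b orthographic view
l, b = 400.0, 250.0
src = [(120, 80), (520, 60), (560, 300), (100, 330)]
dst = [(0, 0), (l, 0), (l, b), (0, b)]    # A, B, C, D as in the text
H = homography_from_4pts(src, dst)
print(warp_point(H, *src[2]))             # ~ (400, 250)
```

Warping every pixel of the plane with H (or with its inverse, for backward mapping) yields the perspectively corrected texture used in rendering.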


Figure 8: Four corner points are projected onto a rectangular view volume (the left image shows the perspective image and the right image shows the parallel image after transformation).

After this transformation, a length stretching factor must be applied to give a realistic effect. Thus the number of pixels along the boundary to which the image is to be rectified should be calculated; this is done with the help of view metrology, by determining the true length factor.

3.3 Modeling

The Virtual Reality Modeling Language (VRML) is used for the visual representation of the model. VRML defines a file format that integrates 3D graphics and multimedia. Conceptually, each VRML file is a 3D time-based space containing graphic objects that can be dynamically modified through a variety of mechanisms. The coordinate axes of a VRML universe follow the traditional right-hand rule in the X, Y, and Z dimensions, and coordinate units are defined as meters. After obtaining the dimensions we dynamically create the VRML file; a sketch of such a generator is given after figure 9. The model of a corridor built using these dimensions is shown in figure 9(a), and its wireframe model in figure 9(b). We assume the following for a corridor:

- the height of the wall is symmetric
- the width of the corridor is symmetric
- the length of the corridor is symmetric
- the length of the ceiling is the same as the length of the floor
- the angle between any two planes is 90°

Figure 9(a): Corridor model built from the measured dimensions. Figure 9(b): Its wireframe model.
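The paper creates the VRML file dynamically from the measured dimensions but does not list its generator; the following is a minimal sketch of such a generator (the node layout, texture files, and function name are assumptions), writing one textured rectangle per plane of Table 1:

```python
def corridor_wrl(width, height, length, textures, path="corridor.wrl"):
    """Write a minimal VRML97 corridor: floor, ceiling, two side walls
    and a front wall, each a textured rectangle. Units are meters."""
    w, h, l = width, height, length
    planes = {
        "floor":   [(0, 0, 0), (w, 0, 0), (w, 0, l), (0, 0, l)],
        "ceiling": [(0, h, 0), (w, h, 0), (w, h, l), (0, h, l)],
        "left":    [(0, 0, 0), (0, h, 0), (0, h, l), (0, 0, l)],
        "right":   [(w, 0, 0), (w, h, 0), (w, h, l), (w, 0, l)],
        "front":   [(0, 0, l), (w, 0, l), (w, h, l), (0, h, l)],
    }
    with open(path, "w") as f:
        f.write("#VRML V2.0 utf8\n")
        for name, corners in planes.items():
            coords = ", ".join("%g %g %g" % c for c in corners)
            f.write(
                "Shape {\n"
                "  appearance Appearance {\n"
                '    texture ImageTexture { url "%s" }\n'
                "  }\n"
                "  geometry IndexedFaceSet {\n"
                "    coord Coordinate { point [ %s ] }\n"
                "    coordIndex [ 0, 1, 2, 3, -1 ]\n"
                "    solid FALSE\n"
                "  }\n"
                "}\n" % (textures[name], coords)
            )

corridor_wrl(2.5, 3.0, 12.0,
             {k: k + ".png" for k in ("floor", "ceiling", "left", "right", "front")})
```

Any VRML97 browser can then open corridor.wrl and walk through the model.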

3.4 Rendering

In rendering, the perspectively corrected textures are used for mapping; each texture is mapped to its corresponding polygon surface, as shown in figure 10. Realism is achieved by setting the surface intensity of objects according to the lighting conditions and surface characteristics. The lighting specification includes the intensity and position of the light sources and the general background illumination required for the scene. Surface properties of objects include the degree of transparency and how rough or smooth the surfaces are.

Figure 10: Model with texture.

4. Algorithm

The algorithm for 3D view generation is given below.

Input: a single image; f, the focal length of the camera; h, the height of the camera while capturing the image.

    Output: 3D views generated.

Method

Step 1: Find the vanishing point VP and the horizon line (HL) in the image.
Step 2: Construct the ground line (GL) at a distance h parallel to the horizon line.
Step 3: Construct the picture plane line (PP) at a distance f parallel to the horizon line.


Step 4: Identify the different planes and make the necessary measurements as described in section 3.1: width (3.1.1), height (3.1.2), and length (3.1.3).
Step 5: Remove the perspective distortion of the image using plane homography.
Step 6: Place the planes according to the dimensions by dynamically creating the VRML file.
Step 7: Map the textures to the dimensionally constructed planes.
Step 8: Render the model using lighting and surface texturing.
Step 9: Output the different 3D views by walkthrough.

Algorithm ends.
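Read as code, steps 1-9 amount to a short driver. A glue sketch wiring together the helper sketches from the earlier sections (all function names are this article's assumptions, not the authors' code):

```python
def generate_3d_views(f, h, seg1, seg2, corners, textures):
    """Driver for the algorithm of section 4 (assumed wiring)."""
    # Step 1: vanishing point from two user-marked parallel scene edges
    vp = vanishing_point(seg1, seg2)
    # Steps 2-3: ground line h below and picture plane line f above the
    # vanishing point (image y grows downward; level horizon assumed)
    ground = to_line((0.0, vp[1] + h), (100.0, vp[1] + h))
    pplane = to_line((0.0, vp[1] - f), (100.0, vp[1] - f))
    # Step 4: measurements using the corner labels of Table 1
    w = object_width(vp, ground, corners["P3"], corners["P4"])
    ht = object_height(vp, ground, pplane, corners["P1"], corners["P3"])
    ln = 12.0  # length via the depth construction of 3.1.3, omitted here
    # Steps 5-7: rectify each plane with homography_from_4pts and use
    # the corrected images as the textures of the VRML planes
    corridor_wrl(w, ht, ln, textures)
    # Steps 8-9: open corridor.wrl in a VRML viewer and walk through
```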

5. Results and Analysis

The proposed methodology generates different views; it was tested on different corridors. An image was taken of each corridor, and view metrology was applied to obtain its true dimensions. Figure 11 shows the different corridor images (a)-(d) and the metrology obtained for the corresponding corridors in figure 11 (f)-(i). Error in the reconstruction process is due to error in measuring the dimensions, which in turn is introduced by camera parameters such as the focal length. It can also be seen that as we measure dimensions towards the vanishing point, the accuracy of the measuring system decreases. To perform error analysis we measured different dimensions of different objects in the corridor, whose reliable and obtained measurements are tabulated. The results for the scene in figure 12 are computed for various images taken at different camera heights. The percentage experimental discrepancy is calculated by the equation given below.

% discrepancy = (|MR − MO| / MR) × 100

where MR is the reliable measure and MO is the experimental (obtained) measure.
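As code, the discrepancy measure is a one-liner (the values below are illustrative, not from the paper's tables):

```python
def pct_discrepancy(mr, mo):
    """Percentage experimental discrepancy: |MR - MO| / MR * 100."""
    return abs(mr - mo) / mr * 100.0

print(pct_discrepancy(2.50, 2.41))   # ~3.6 for a 2.5 m reliable width
```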

Figure 12: Input image and the calculated dimensions.

The error in dimension measurement results in a gap between the planes. In figure 13, a gap (marked with an ellipse) between the planes p1 and p2 is generated because of error in the height (H) measurement.

The wireframe model of the complete building, constructed from the 3D models of the corridors, is shown in figure 14. It is built by combining the 3D corridor models with user interaction; each corridor model is generated from a single image. The corresponding 3D views generated while walking through the building are shown in figure 15 in the appendix.

Figure 14: Wireframe model of the complete building.

6. Conclusion

In this paper a new method is presented for generating 3D views using one-point perspective. The generated 3D model preserves the actual dimensions of the object. View metrology constructs orthographic views from the perspectively distorted image; it is simple to construct and gives the true dimensions with minimal error. We correct the perspective distortion by plane homography and stretch the image according to its true dimensions obtained from view metrology. This is an easy way to generate a model with little user interaction. We also performed error analysis for view metrology to reduce the error in the reconstructed model.

Figure 13: Gap between planes p1 and p2 caused by error in the height measurement.

Future work will consider automatic partitioning of the building image into floor, ceiling, walls, etc.

References

[1] 3D Scene Reconstruction and Object Recognition: http://www.cs.columbia.edu/robotics/people/m-reed.html
[2] S. Murali and N. Avinash, "Estimation of Depth Information from a Single View in an Image," ICVGIP 2004, pp. 202-209.
[3] R. Koch, J.-F. Evers-Senne, J.-M. Frahm, and K. Koeser, "3D Reconstruction and Rendering from Image Sequences."
[4] Han-Pang Chiu, Leslie Pack Kaelbling, and Tomas Lozano-Perez, "Automatic Class-Specific 3D Reconstruction from a Single Image."
[5] Ashutosh Saxena, Min Sun, and Andrew Y. Ng, "3-D Reconstruction from Sparse Views using Monocular Vision."
[6] Ellen Schwalbe, Hans-Gerd Maas, and Frank Seidel, "3D Building Model Generation from Airborne Laser Scanner Data using 2D GIS Data and Orthogonal Point Cloud Projections."
[7] R. Hartley and A. Zisserman, Multiple View Geometry, Cambridge University Press, 2003.

Figure 15: 3D views generated during the walkthrough inside the building.