Three-dimensional vision based on a combination of gray-code and phase-shift light projection: analysis and compensation of the systematic errors

    Giovanna Sansoni, Matteo Carocci, and Roberto Rodella

A combination of phase shift with gray-code light projection into a three-dimensional vision system based on the projection of structured light is presented. The gray-code method is exploited to detect without ambiguity even marked surface discontinuities, whereas the phase-shift technique allows the measurement of fine surface details. The system shows excellent linearity. An overall mean value of the measurement error equal to 40 μm, with a variability of approximately ±35 μm, corresponding to 0.06% of full scale, has been estimated. The implementation of the technique is discussed, the analysis of the systematic errors is presented in detail, and the calibration procedure designed to determine the optimal setting of the measurement parameters is illustrated. © 1999 Optical Society of America

    OCIS codes: 120.0120, 120.2830, 120.3930, 150.0150, 150.6910.

    1. Introduction

The evaluation of three-dimensional (3D) shapes by means of optical sensors has an increasing importance in a number of applications because of the intrinsic noncontact nature of the measurement and the possibility of reducing the measurement time with respect to contact probes. Typical applications in the industrial field are production quality control, both in the microrange and in the macrorange [1], the digitization of free-shape surfaces in the reverse-engineering process [2], and a number of 3D computer vision problems [3]. More recently, optical sensors have been successfully used in other fields, such as archeology, for measuring and preserving cultural heritage, and in entertainment and 3D virtual-reality frameworks [4].

A number of publications now exist on the optical techniques developed for 3D measurement, both of the passive and the active nature. In passive methods, no controlled source of light is necessary: surface reflectance, stereo disparity, and camera motion are examples of techniques based on this passive approach. The main drawback is represented by the high computational effort needed to get depth information [5]. In active methods, use of a pattern of radiation simplifies the problem of depth measurement. Interferometric and moiré techniques achieve very accurate measurements over small depth ranges [6], time-of-flight methods are suitable for medium and long distances [7], and triangulation-based methods match the short-distance interval: within this frame, the systems based on the scanning of coherent light are widely used [8], as well as whole-field profilometers, based on the projection of structured light. A number of pattern projection schemes belong to this last category and differ from each other in the coding used to express the light directions [9-13].

The research activity reported in this paper deals with the implementation of a technique that combines two methods for the projection and the demodulation of bidimensional patterns of light, known as the gray-code and the phase-shift methods. The resulting technique, hereafter called GCPS, has been integrated into a prototype for 3D vision developed at our laboratory to achieve a system that performs at optimal accuracy and speed over a wide typology of objects.

The gray-code technique allows the unique description of 2^n different directions of projection by means of the well-known one-distance gray code. The number of the directions of projection that can be unequivocally defined equals the number of the code words: thus, the larger this number, the wider the nonambiguity height range.

The authors are with the Istituto Nazionale di Fisica della Materia and the Dipartimento di Elettronica per l'Automazione, Università degli Studi di Brescia, Via Branze 38, I-25123 Brescia, Italy. G. Sansoni's e-mail address is [email protected].

Received 3 May 1999; revised manuscript received 10 August 1999.

0003-6935/99/316565-09$15.00/0 © 1999 Optical Society of America


On the other hand, as each projection direction is associated with a code word but not further decomposed, the measurement resolution is rather low [11].

With the phase-shift approach, the directions of projection are coded by phase values: because the phase is continuously distributed within its range of nonambiguity, a theoretically infinite height resolution can be obtained, actually limited only by the errors that are due to gray-level quantization and noise. On the other hand, the range of nonambiguity is limited to the interval (0, 2π), and this fact strongly reduces the height range [12].

The combination of gray code and phase shift in GCPS has been proposed by several authors to exploit the positive features of each method and compensate for their drawbacks [13]. In principle, the latter is used to increase the information given by the former by adding a fractional contribution to the code words that identify each direction of projection. Consequently, the measurement performance is strongly improved as to the resolution and the range of the measurement.

As part of the work performed, we revisited the theory of GCPS to compensate for the influence of the crossed-optical-axes geometry of the system, which determines a spatial modulation of the fringes even in the absence of the target [14]. The resulting triangulation procedure is given in detail in this paper.

Moreover, we present the analysis of the systematic measuring errors and the calibration procedure designed to determine the estimates of the measurement parameters at optimal accuracy and speed. The set of experiments carried out to evaluate the system performance is also reported. In this context, we thought it important to derive the input-output characteristic curves of the system that correspond to a considerable number of measuring points in the work volume and to characterize them in terms of linearity.

Section 2 describes the projection technique and the triangulation procedure, Section 3 reports the analysis of the systematic errors introduced by an inaccuracy in the determination of the parameters involved in the measurement, Section 4 is devoted to the description of the calibration procedure, and Section 5 shows the experimental results.

    2. Description of the Technique

The basic outline of the technique is summarized here with the aid of Fig. 1. Points P and C represent the exit and the entrance pupils of the projection and the imaging optics, respectively, d is the distance between them, and L is the distance from plane R. The coordinate system at the projection sensor plane (PSP) is (X_p, Y_p, Z_p), and that of the image sensor plane (ISP) is (X_c, Y_c, Z_c): the origins are at points P and C, respectively. Axes Z_p and Z_c coincide with the optical axes of the projection and the imaging devices: Z_c is perpendicular to reference R, and Z_p is at an angle α with respect to Z_c. The depth information is defined along the Z coordinate in the reference system (X, Y, Z). The discrete coordinates at the ISP are j and k, corresponding respectively to X_c and Y_c. FW is the width of the field of view along the X coordinate.

Because the fringes formed at the PSP are parallel to Y_p, when the X_p coordinate varies, different light planes can be identified: as an example, Fig. 1 shows two light planes, denoted by LP1 and LP2, defined in correspondence to two different values of coordinate X_p. In the figure, light ray PA on LP1 is also depicted: it intersects plane R at point A and is imaged by pixel Px(j, k) at the ISP. Axes Y_p, Y_c, and Y are all parallel to each other: thus the measuring procedure does not involve them, and in the following they are not considered.

GCPS is based on two different steps. The first one is aimed at coding the light planes from the projector; the second step exploits the light codings to evaluate the height.

A. Light Coding Based on the Combination of Gray-Code and Phase-Shift Projection

The coding procedure that combines the gray-code method with the phase-shift method is based on the projection, at n = 11 subsequent instants, of n black-and-white fringe patterns: the first seven patterns of the sequence, denoted by GC_0, GC_1, . . . , GC_6, are formed in such a way that their projection corresponds to the formation of a gray code of 7 bits. As an example, the projection of patterns GC_0, GC_1, GC_2, and GC_3 on plane R corresponds to the formation of the gray code shown in Table 1. Rows r0, r1, r2, and r3 of the table codify the word bits from the most-significant to the least-significant one and can be interpreted as the binary representation of patterns GC_0, GC_1, GC_2, and GC_3 along the X direction, provided that black fringes are assigned the logic value 0 and white fringes are assigned the logic value 1. Columns c0, c1, . . . , c15 are the code words.

Each pattern is recorded in sequence by the video camera, and a suitable thresholding algorithm associates either the logic value 0 or 1 with the gray level of each CCD pixel. Figure 2 gives a pictorial representation of how the gray-code word defined by the bit sequence 0111 is determined in correspondence to the element of position (j, k) on the ISP (n = 4 is still considered).

    Fig. 1. Optical geometry of the system.


This word codes the light ray seen by pixel Px(j, k) in Fig. 1, as well as all the rays on light plane LP1. The number of light planes unequivocally defined by this method equals the number of the code words: thus, the larger this number, the wider the nonambiguity range of the measurement. In our system, 2^7 words, 7 bits deep, can be defined; in the following they are referred to as l. The gray-code word describing the light ray seen by the video camera at Px(j, k) on the ISP is denoted as l(j, k).
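For illustration, the bit sequence recorded at a pixel can be decoded into the integer index l(j, k) with the standard Gray-to-binary conversion; this sketch is our addition, with helper names of our choosing:

```python
# Sketch: decode the thresholded bit sequence recorded at one pixel
# into the integer light-plane index l(j, k); bits are MSB first, as
# produced by patterns GC_0, GC_1, ...

def gray_to_index(bits):
    """Convert a binary-reflected Gray-code word (MSB first) to an int."""
    value, acc = 0, 0
    for bit in bits:
        acc ^= bit                  # running XOR undoes the reflection
        value = (value << 1) | acc
    return value

# The word 0111 of Fig. 2 corresponds to code word c5 of Table 1:
assert gray_to_index([0, 1, 1, 1]) == 5
```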

The last four patterns are obtained as follows: the pattern of the gray-code sequence corresponding to the eighth bit is generated and then spatially shifted by a fraction p/4 of its spatial period p. The resulting patterns, called PS_0, PS_1, PS_2, and PS_3, are acquired and digitally smoothed in order to obtain fringes of sinusoidal profile [15]. The intensity fields I_i(j, k) acquired in correspondence to each shift (i = 0, 1, 2, 3) can be expressed as follows:

I_i(j, k) = A(j, k) + [B(j, k)/2]·cos[φ(j, k) + iπ/2].   (1)

In Eq. (1), A(j, k) and B(j, k) account for the average brightness and the fringe contrast, respectively, and φ(j, k) represents the phase term. It is evaluated by means of the following well-known relationship [14]:

φ(j, k) = tan⁻¹{[I₁(j, k) − I₃(j, k)] / [I₀(j, k) − I₂(j, k)]}.   (2)

In Eq. (2), φ(j, k) is the phase coding describing the light ray seen by the video camera at Px(j, k) on the ISP. This description is not unequivocal, as the range of nonambiguity is limited to the interval (0, 2π); however, it yields high resolution, as the phase is continuously distributed within its range of nonambiguity. The integer coding obtained from the gray-code sequence and the phase coding from the phase-shift approach are then combined as follows:

l̃(j, k) = l(j, k) + [φ(j, k) + δ(j, k)]/2π.   (3)

In Eq. (3), l̃(j, k) is a real number and denotes the complete coding, and δ(j, k) represents a phase-correction term, experimentally evaluated, as shown in Table 2. The expression of the light directions by means of real numbers l̃(j, k) in Eq. (3) intrinsically yields the extended measurement ranges typical of the gray-code method, as well as high resolution, because of the fractional part obtained from the phase shift.

    B. Depth Evaluation by Means of Triangulation

Figure 1 shows the triangulation principle: for simplicity, it is applied to the measurement of height z_H = HH′, even though it holds for each point in the coded area on reference R. Two different light rays are viewed along the same direction of acquisition (AC in the figure): these are ray PA and ray PB. They lie on light planes LP1 and LP2, respectively, and are imaged at pixel Px(j, k) on the ISP, the former in the absence of the object, the latter when the object is placed on reference R. With the codings of light planes LP1 and LP2 denoted by l_PA and l_PB, it will be l̃(j, k) = l_PA in the absence of the object and l̃(j, k) = l_PB in the presence of the object. Height z_H is evaluated by considering that triangles AH′B and PH′C are similar and that the following equation holds:

z_H = L·AB / (d + AB).   (4)

In Eq. (4), L and d are defined by the geometry of the system, and AB is the base of triangle AH′B, representing the distance between rays PA and PB. Thus a preliminary step to the use of Eq. (4) is the evaluation of AB. To this aim, we developed a specific procedure that transforms each light coding into the corresponding abscissa along the X coordinate directly at reference R.

Table 1. Gray-Code Sequence for n = 4

Word Bits  c0 c1 c2 c3 c4 c5 c6 c7 c8 c9 c10 c11 c12 c13 c14 c15
r0          0  0  0  0  0  0  0  0  1  1  1   1   1   1   1   1
r1          0  0  0  0  1  1  1  1  1  1  1   1   0   0   0   0
r2          0  0  1  1  1  1  0  0  0  0  1   1   1   1   0   0
r3          0  1  1  0  0  1  1  0  0  1  1   0   0   1   1   0

Fig. 2. Example of the pattern sequence that combines gray-code and phase-shift projection.

Table 2. Evaluation of the Correction Term δ(j, k)

                      φ(j, k) ∈ (−π, −π/2)   φ(j, k) ∈ (−π/2, 0)   φ(j, k) ∈ (0, π/2)   φ(j, k) ∈ (π/2, π)
l(j, k) mod 4 = 0          3π/2                   π/2                  π/2                  π/2
l(j, k) mod 4 = 3


This means, for the example in Fig. 1, that light codings l_PA and l_PB are mapped into the values of the X coordinate of points A and B (i.e., x_A and x_B), and shift AB is computed as

AB = x_B − x_A.   (5)

The mapping equation, shown here for the evaluation of x_B, has the form

x_B = PQ·tan(θ + θ_x) = L·tan(θ + θ_x).   (6)

Equation (6) is derived from the geometric model shown in Fig. 3. In this figure, segments PS and PT represent the first and the last directions of projection seen by the video camera, respectively, and define the length of the field of view FW; segment ST represents a plane parallel to the PSP, at angle α with respect to plane R and intersecting it at point S. Angle θ is the angle between ray PS and line PQ, perpendicular to reference R. It is expressed by

θ = tan⁻¹[(d − FW/2)/L].   (7)

θ_x represents the angle between ray PS and ray PB. It is evaluated by application of the law of sines to triangle SB′P:

SB′/sin θ_x = SP/sin ψ_x.   (8)

In Eq. (8), ψ_x is the angle between ST and PB, and SB′ represents the distance from point S of the intersection point B′ between ST and ray PB. Considering that ψ_x = π/2 + α − θ − θ_x and solving Eq. (8) with respect to θ_x, we obtain

θ_x = tan⁻¹[SB′·cos(θ − α) / (SP + SB′·sin(θ − α))].   (9)

Because segment ST is parallel to the PSP, distance SB′ can be expressed as a linear function of l_PB:

SB′ = (l_PB/nlp)·ST,   (10)

where the parameter nlp represents the total number of light codings that fall within the field of view at plane R. The length of segment ST in Eq. (10) is evaluated with Eq. (8) for the case in which B′ is coincident with T:

ST = (sin θ_tot / sin ψ_tot)·SP = (sin θ_tot / sin ψ_tot)·(L/cos θ).   (11)

Angles θ_tot and ψ_tot can be respectively expressed as

θ_tot = tan⁻¹[(L·tan θ + FW)/L] − θ,   (12)

ψ_tot = π/2 + α − tan⁻¹[(L·tan θ + FW)/L].   (13)

The combination of Eqs. (8)-(13) into Eq. (6) results in the following expression for x_B:

x_B = L·tan{θ + tan⁻¹[C₁·l_PB·cos(θ − α) / (1 + C₁·l_PB·sin(θ − α))]}.   (14)

In Eq. (14), C₁ is a constant, defined as

C₁ = sin θ_tot / (nlp·sin ψ_tot).   (15)

Equation (14) expresses the abscissa x_B as a nonlinear function of l_PB, L, d, FW, and α. This is a direct consequence of the crossed-optical-axes geometry of the system: in fact, points P and C are at finite distance L, and the fringes projected on reference R show an increasing period p_R along axis X. A linear mapping, such as that presented in Ref. 13, can be successfully applied only if p_R can be considered constant on reference R: in fact, in this case, AB linearly depends on the difference l_PB − l_PA, on FW, and on period p_R. However, this hypothesis holds only as far as distance L is sufficiently large and angle α is sufficiently small to limit the dependence of p_R on X. In contrast, for the geometric setups characterized by reduced values of L and increased values of α, any attempt at evaluating AB by use of a constant p_R inherently yields an error in the estimation of the height. The use of Eq. (14) overcomes this problem because it precisely models the relation between the position along X of the points where the projected light rays impinge on reference R and the corresponding light codings. This derivation is performed punctually and is intrinsically insensitive to any variation of p_R.
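The mapping can be sketched as follows (our illustrative code, based on the reconstruction of Eqs. (7), (14), and (15) given above; the function names are ours):

```python
import numpy as np

# Sketch of the mapping of Eq. (14): from a light coding l to the
# abscissa x on reference R. Parameter names follow the text.

def theta_angle(L, d, FW):
    """Eq. (7): angle theta between ray PS and the perpendicular PQ."""
    return np.arctan((d - FW / 2.0) / L)

def abscissa(l, L, C1, theta, alpha):
    """Eq. (14): x = L * tan(theta + theta_x(l))."""
    beta = theta - alpha
    theta_x = np.arctan(C1 * l * np.cos(beta) / (1.0 + C1 * l * np.sin(beta)))
    return L * np.tan(theta + theta_x)
```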

The extension of the presented measuring principle to any point over the illuminated area is carried out as follows: (1) the complete sequence of patterns from GC_0 to PS_3 is projected and elaborated in the two situations of absence and of presence of the object, yielding two matrices, respectively called REF for the reference and OBJ for the object. A one-to-one correspondence is established between the pixels of the CCD and the elements of each matrix: for the example in Fig. 1, it results in REF(j, k) = l_PA and OBJ(j, k) = l_PB. These matrices are then

Fig. 3. Model of the system geometry used to derive the mapping equation.


transformed, by means of Eq. (14), into matrices x_REF and x_OBJ, which store the corresponding abscissa values. Finally, the difference between them is calculated when Eq. (5) is applied to each element of x_REF and x_OBJ:

D_R(j, k) = x_OBJ(j, k) − x_REF(j, k)
          = L(tan{θ + tan⁻¹[C₁·OBJ(j, k)·cos(θ − α) / (1 + C₁·OBJ(j, k)·sin(θ − α))]}
             − tan{θ + tan⁻¹[C₁·REF(j, k)·cos(θ − α) / (1 + C₁·REF(j, k)·sin(θ − α))]}),   (16)

where D_R is the difference matrix. Equation (4) can be rewritten as

z(j, k) = L·D_R(j, k) / [d + D_R(j, k)],   (17)

where z(j, k) is the element of matrix z, storing the measured height in millimeters.
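Given the two coding matrices, the height evaluation of Eqs. (16) and (17) reduces to a few array operations; this sketch (ours) reuses the abscissa() helper from the previous listing:

```python
import numpy as np

# Sketch of the height evaluation of Eqs. (16) and (17): REF and OBJ
# are the matrices of light codings acquired without and with the
# object on reference R.

def height_map(REF, OBJ, L, d, C1, theta, alpha):
    x_ref = abscissa(REF, L, C1, theta, alpha)   # matrix x_REF
    x_obj = abscissa(OBJ, L, C1, theta, alpha)   # matrix x_OBJ
    D_R = x_obj - x_ref                          # Eq. (16)
    return L * D_R / (d + D_R)                   # Eq. (17): heights in mm
```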

    3. Analysis of the Systematic Errors

An analysis of the systematic errors introduced into the measured height by inaccuracies in the determination of the model parameters has been performed to evaluate how critical these are for the accuracy of the measurement. This analysis has been limited to parameters L, d, and α, as FW can be easily evaluated with small uncertainty during the setup of the measurement by the exploitation of suitable two-dimensional processing for the determination of the scale factor from pixels to millimeters at the reference. In contrast, the accurate determination of L, d, and α is not possible, because of the difficulty of precisely measuring the position of the pupils of the projector and of the video camera and because the orientation of the LCD within the projector is unknown.

In the following, L_e, d_e, and α_e represent the estimates of L, d, and α, and ΔL/L = (L_e − L)/L, Δd/d = (d_e − d)/d, and Δα = α_e − α express the inaccuracies of each parameter. The error Δz(j, k)/z(j, k) is expressed as the sum of three terms:

Δz(j, k)/z(j, k) = [Δz_L(j, k) + Δz_d(j, k) + Δz_α(j, k)] / z(j, k),   (18)

where Δz_L(j, k), Δz_d(j, k), and Δz_α(j, k) account for the influence of the inaccuracies ΔL/L, Δd/d, and Δα, respectively.

A. Influence of Parameter L

Term Δz_L(j, k) is derived as the difference between the heights calculated by Eq. (17) in correspondence to values L_e and L:

Δz_L(j, k) = L_e·D_R(j, k)/[d + D_R(j, k)] − L·D_R(j, k)/[d + D_R(j, k)].   (19)

From Eq. (16), D_R(j, k) can be expressed as

D_R(j, k) = L·C₂,   (20)

where C₂ is

C₂ = tan{θ + tan⁻¹[C₁·OBJ(j, k)·cos(θ − α) / (1 + C₁·OBJ(j, k)·sin(θ − α))]}
   − tan{θ + tan⁻¹[C₁·REF(j, k)·cos(θ − α) / (1 + C₁·REF(j, k)·sin(θ − α))]}.   (21)

It has been experimentally proven that C₂ has a negligible dependence on L and, for practical purposes, can be considered constant. When Eqs. (19) and (20) are combined, the error Δz_L(j, k)/z(j, k) results:

Δz_L(j, k)/z(j, k) = [C₂L_e²/(d + C₂L_e)] / [C₂L²/(d + C₂L)] − 1.   (22)

Considering that parameter d is usually much larger than the products C₂L_e and C₂L, Eq. (22) becomes

Δz_L(j, k)/z(j, k) ≅ L_e²/L² − 1 = [(L + ΔL)² − L²]/L² = 2(ΔL/L) + (ΔL/L)².   (23)

Equation (23) shows that the error Δz_L(j, k)/z(j, k) is an almost linear function of ΔL/L. This dependence is shown in Fig. 4, where ΔL/L is varied within the interval (−10%, +10%): the quadratic term is negligible, and the sensitivity coefficient is equal to 2.
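A quick numerical check of this behavior (our addition; the values of C₂, L, and d are arbitrary illustrative numbers, not the paper's calibration results):

```python
# Check of Eqs. (22) and (23): for small relative errors on L the
# relative height error is about 2 * dL/L.

C2, L, d = 0.05, 1000.0, 600.0
for rel in (-0.10, -0.01, 0.01, 0.10):
    Le = L * (1.0 + rel)
    exact = (C2 * Le**2 / (d + C2 * Le)) / (C2 * L**2 / (d + C2 * L)) - 1.0
    print(f"dL/L = {rel:+.2f} -> dz/z = {exact:+.4f} (2*dL/L = {2*rel:+.2f})")
```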

    B. Influence of Parameter d

The term Δz_d(j, k) is derived as the difference between the heights calculated by Eq. (17) in correspondence to values d_e and d:

Δz_d(j, k) = L·D_R(j, k)/[d_e + D_R(j, k)] − L·D_R(j, k)/[d + D_R(j, k)].   (24)

Fig. 4. Plot of the systematic error Δz_L/z as a function of ΔL/L.


Thus Δz_d(j, k)/z(j, k) results:

Δz_d(j, k)/z(j, k) = (d − d_e)/[d_e + D_R(j, k)] = −(Δd/d) / [1 + Δd/d + D_R(j, k)/d].   (25)

Equation (25) expresses Δz_d(j, k)/z(j, k) as a hyperbolic function of Δd/d; however, for small values of Δd/d this function is almost linear, as shown in Fig. 5. Here the sensitivity coefficient 1/[1 + D_R(j, k)/d] is very close to 1.

C. Influence of Parameter α

The analytical derivation of the dependence of Δz_α(j, k) on Δα is very difficult to achieve with the above method, mainly because, in Eq. (17), D_R(j, k) is a nonlinear function of α. Thus this dependence is studied with a simulation-based approach, consisting of the following steps: (1) matrix D_R is evaluated by means of Eq. (17) for known values of L and d and for constant height; (2) supposing that x_REF is constant, matrix x_OBJ is derived with Eq. (16), and, from Eq. (14), the matrices of light codings REF and OBJ can be determined; (3) for known values of C₁, θ, REF, and OBJ, the estimate α_e of α is varied in Eq. (16), and the resulting matrix D_R is used as input in Eq. (17) to obtain height z_α. The height error Δz_α(j, k)/z(j, k) is then computed for each element of z_α.

The dependence of Δz_α(j, k)/z(j, k) on Δα is plotted in Fig. 6, with index j varied and index k kept constant; a variation of Δα within the interval (−40%, +40%) is supposed. It is evident that the error Δz_α(j, k)/z(j, k) increases with Δα, and, the higher j, the higher the influence of Δα on Δz_α(j, k)/z(j, k). In the experimental practice, this kind of error determines that, even for flat objects, the height profile measured in correspondence to points parallel to X shows a slope with X. However, this effect can be canceled if Δα equals zero.
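Our reading of this three-step simulation can be sketched as follows; it reuses theta_angle(), abscissa(), and height_map() from the previous listings, and all numerical values are illustrative placeholders, not the paper's:

```python
import numpy as np

def coding_from_abscissa(x, L, C1, theta, alpha):
    """Numerical inverse of Eq. (14): light coding l from abscissa x."""
    beta = theta - alpha
    t = np.tan(np.arctan(x / L) - theta)
    return t / (C1 * (np.cos(beta) - t * np.sin(beta)))

L, d, FW, alpha = 1000.0, 600.0, 500.0, 0.33
theta = theta_angle(L, d, FW)
C1 = 1.0e-4                                   # plays the role of Eq. (15)

# Step 1: a flat object of constant height gives a constant D_R
# (obtained by inverting Eq. (17)).
z_true = 50.0
D_R = z_true * d / (L - z_true)

# Step 2: codings REF on the reference and OBJ at the shifted abscissas.
x_ref = np.linspace(360.0, 800.0, 256)
REF = coding_from_abscissa(x_ref, L, C1, theta, alpha)
OBJ = coding_from_abscissa(x_ref + D_R, L, C1, theta, alpha)

# Step 3: vary the estimate alpha_e and recompute the height profile;
# for alpha_e = alpha the profile is flat at z_true, otherwise it
# acquires the slope along X described in the text.
for rel in (-0.4, 0.0, 0.4):
    alpha_e = alpha * (1.0 + rel)
    z = height_map(REF, OBJ, L, d, C1, theta, alpha_e)
    print(f"dalpha/alpha = {rel:+.1f}: z in [{z.min():.3f}, {z.max():.3f}] mm")
```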

    4. Procedure for the Calibration of the System

The analysis performed so far allows us to develop a procedure to calibrate the system, aimed at finely adjusting the estimates of the system parameters until the required accuracy of the measurement is achieved. The control parameters are L and α. Parameter d is evaluated to the required accuracy during the setup phase and is not involved in the calibration. This choice is motivated by the fact that the sensitivity of the curve in Fig. 4 is twice as high as that of the curve in Fig. 5 and presents the opposite sign: thus, by finely varying the estimate of parameter L, we can also compensate for any influence of errors due to the estimate of parameter d.

Based on these observations, the calibration procedure estimates for angle α the value α_c that minimizes Δz_α and estimates for L the value L_c that minimizes the residual height error. The calibration master is a block of parallelepiped shape. The algorithm starts from the initial, rough estimates of L and α, denoted by L₀ and α₀, and is composed of two nested loops: the outer one evaluates L_c, and the inner one determines α_c.

At each iteration of the outer loop, the procedure calculates the profile of the master measured in correspondence to a section parallel to X. The exit condition is that the measured mean profile is within the required accuracy Δz. If this condition is not matched, the regression line is evaluated over the profile: if its angular coefficient is within a preset slope value, the current value α_e of angle α does not need any variation, and the procedure calculates a finer estimate L_e of L. We accomplish this by taking advantage of the linear dependence of the height on L in Eq. (17). Value L_e is then used to determine the profile of the master at the next iteration. If the exit condition is verified, the current values L_e and α_e are assigned to the searched estimates L_c and α_c and the calibration stops; otherwise it enters the next iteration.

Fig. 5. Plot of the systematic error Δz_d/z as a function of Δd/d.

Fig. 6. Plot of the systematic error Δz_α/z as a function of Δα.


The procedure enters the inner loop only if the angular coefficient of the regression line evaluated over the master profile is greater than a preset value m. At each iteration, a new value for α_e is evaluated as

α_e′ = (1 − w·m)·α_e,   (26)

where α_e′ and α_e are the new and the current values of the α estimate, respectively. Parameter w is a weight factor, experimentally determined. The resulting value of α_e is then used to evaluate a new profile for the master, and the loop cycles until the exit condition is matched.

The algorithm converges very quickly: typical values for the number of iterations are in the range 15-30.
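A compressed paraphrase of the procedure as we read it (our sketch, not the authors' code), with a hypothetical measure_profile() callback standing in for a full GCPS measurement of the master:

```python
import numpy as np

# Sketch of the calibration loop of Section 4. measure_profile(L_e,
# alpha_e) is a hypothetical callback returning (x, profile): the
# master profile measured along a section parallel to X with the
# current parameter estimates.

def calibrate(L0, alpha0, measure_profile, z_nominal,
              dz=0.025, m=1e-4, w=5.0, max_iter=100):
    L_e, alpha_e = L0, alpha0
    for _ in range(max_iter):
        x, profile = measure_profile(L_e, alpha_e)
        # Outer-loop exit condition: mean profile within accuracy dz.
        if abs(profile.mean() - z_nominal) < dz:
            break
        slope = np.polyfit(x, profile, 1)[0]   # regression-line slope
        if abs(slope) > m:
            # Inner loop, in the spirit of Eq. (26); here the measured
            # slope takes the place of m so that the update direction
            # follows the sign of the residual tilt (our assumption).
            alpha_e = (1.0 - w * slope) * alpha_e
        else:
            # Refine L_e, exploiting the nearly linear dependence of
            # the height on L in Eq. (17).
            L_e *= z_nominal / profile.mean()
    return L_e, alpha_e
```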

    5. Experimental Results

Figure 7 shows a photograph of the prototype used for the measurements. The projector is based on a liquid-crystal panel (Model ABW LCD 320): it projects fringes of rectangular profile, variable in contrast and frequency, and gray-code sequences; the video camera is a Sony XC-77CE, equipped with a CCTV Manual Iris 12.5-75-mm zoom lens. The acquisition and the elaboration of the images are performed by the PC Image (MATRIX VISION) frame grabber, with resolution N = 800 and M = 262.

An extensive set of measurements has been performed to evaluate the overall performance of the system and to test the effectiveness of the calibration procedure. In these experiments, the system has been calibrated by setting as input to the calibration algorithm a value of height accuracy Δz = 0.025 mm; the value of parameter m used to determine α_c is m = 0.0001. The initial rough values for distance L and for angle α are L₀ = 1080 mm and α₀ = 0.4871 rad. The values determined by the calibration procedure after 16 iterations are L_c = 1022 mm and α_c = 0.332 rad.

The first test refers to the face model shown in Fig. 7. In this experiment, the field of view is 500 mm × 500 mm and the height range is 150 mm. Figure 8 presents the deformation induced by the object shape on patterns GC_0, GC_3, GC_7, and PS_0 of the GCPS projection sequence; the 3D shape obtained by elaborating the complete pattern sequence is shown in Fig. 9. It is quite evident that the system is able to detect both the height-slope changes of the nose and the local deformation, in the forehead region, that is due to the material used to build the model (polystyrene).

The second example refers to the flowerpot in Fig. 10: the presence of decorations on its surface makes this object particularly interesting for evaluating the resolution improvement achieved with the GCPS approach with respect to the gray code. In fact, the ability of GCPS to detect the fine details on the surface can be clearly appreciated from Fig. 11(a): in this case a resolution of 70 μm has been estimated. The same surface has been recovered by means of the gray-code technique: Fig. 11(b) shows a strong degradation of the measurement resolution, which has been calculated to be equal to 900 μm.

    Fig. 7. Photograph of the prototype used for the measurements.

Fig. 8. Example of the GCPS sequence projected on the face model: shown are patterns (a) GC_0, (b) GC_3, (c) GC_7, (d) PS_1.

    Fig. 9. 3D shape of the face model.



The input-output characteristic curves of the system in correspondence to a number of test points on reference R have also been evaluated. To this aim, a plane has been used as a target object. It has been oriented parallel to reference R and moved along Z by means of a translation stage. A range of 80 mm, at steps of 4 mm each, has been considered. At each input position Z_in, the height Z_out of the target with respect to the reference has been measured as the mean value over a set of 30 measurements. Figure 12 plots the input-output characteristic curves of the system derived in correspondence to a number of test points on plane R chosen at fixed Y, approximately at half of the illuminated area. All these curves are well approximated by lines: the angular coefficients evaluated over them are equal to 1 ± 0.0001 and show no dependence on the X coordinate. This result highlights the good performance of the calibration, as the height error that is due to the inaccuracy in α is compensated for.

The dependence of the measurement error Δz on Z_in has also been considered. To this aim, we evaluated the mean value and the standard deviation σ_z of the distribution of the error Δz = z − Z_in, where z is the height measured in correspondence to the same test points as before. Figure 13 plots the result: both the mean value and the standard deviation σ_z are shown as functions of Z_in. This figure well demonstrates the ability of the calibration to restrict the measurement error within a range of 12-33 μm, with a variability of ±37 μm. The system characterization presented so far has been extended to a number of different sets of test points, taken in correspondence to sections parallel to X, at fixed values of Y. The analysis performed demonstrates that all the parameters presented so far, with the exception of the measurement dispersion, do not appreciably change with either X or Y. However, the mean square error σ_z increases by 70% at the boundaries of the illuminated area: this is by no means surprising, considering that in these regions the quality of the projected fringes decreases, in both focus and contrast, and the influence of lens distortion increases.

    Fig. 10. Image of a flowerpot.

Fig. 11. 3D shape of the flowerpot recovered with (a) the gray-code technique, (b) the combined GCPS technique.

Fig. 12. Input-output characteristic curves of the system.

Fig. 13. Evaluation of the accuracy of the system. The mean value and the standard deviation σ_z of the set of values of the error Δz along a section parallel to X are shown as functions of Z_in.


    6. Conclusions

In this paper, a technique that combines the gray-code and the phase-shift methods into a whole-field profilometer has been presented. The resulting method, called GCPS, strongly improves the measurement performance in relation to the resolution and the range of the measurement. It has been revised to compensate for the spatial modulation that the fringes present at the reference surface because of the crossed-optical-axes geometry of the system.

The analysis of the systematic errors and the procedure to calibrate the measurement profilometer based on the projection of structured light have also been presented. The calibration allows the evaluation of fine estimates of the geometric parameters of the system following a simple and time-effective approach. An extensive set of measurements has been carried out to evaluate the measurement performance. The system shows high linearity and good performance as far as both accuracy and precision are concerned: the measurement error shows an overall mean value equal to 40 μm, with a variability of ±35 μm. The mean error can be further reduced by considering the lens-distortion effects: a new model for both the projector and the video camera is under refinement and will be accounted for in future work.

    References

1. F. Docchio, M. Bonardi, S. Lazzari, R. Rodella, and E. Zorzella, "Electro-optical sensors for mechanical applications," in Optical Sensors and Microsystems, A. N. Chester, S. Martellucci, and A. G. Mignani, eds. (Plenum, New York, 1999), Chap. 1.

2. S. F. El-Hakim and N. Pizzi, "Multicamera vision-based approach to flexible feature measurement for inspection and reverse engineering," Opt. Eng. 32, 2201–2215 (1993).

3. D. Poussart and D. Laurendeau, "3-D sensing for industrial computer vision," in Advances in Machine Vision, J. L. C. Sanz, ed. (Springer-Verlag, New York, 1989), Chap. 3.

4. M. Rioux, G. Godin, F. Blais, and R. Baribeau, "High resolution digital 3D imaging of large structures," in Three-Dimensional Image Capture, R. N. Ellson and J. H. Nurre, eds., Proc. SPIE 3023, 109–118 (1997).

5. R. A. Jarvis, "A perspective on range finding techniques for computer vision," IEEE Trans. Pattern Anal. Mach. Intell. PAMI-5, 122–139 (1983).

6. S. Kuwamura and I. Yamaguchi, "Wavelength scanning profilometry for real-time surface shape measurement," Appl. Opt. 36, 4473–4482 (1997).

7. T. Nielsen, F. Bormann, S. Wolbeck, H. Spiecker, M. D. Burrows, and P. Andersen, "Time-of-flight analysis of light pulses with a temporal resolution of 100 ps," Rev. Sci. Instrum. 67, 1721–1724 (1996).

8. M. Rioux, "Laser range finder based on synchronized scanners," Appl. Opt. 23, 3837–3843 (1984).

9. Q. Fang and S. Zheng, "Linearly coded profilometry," Appl. Opt. 36, 2401–2407 (1997).

10. T. G. Stahs and F. M. Wahl, "Fast and robust range data acquisition in a low-cost environment," in Close-Range Photogrammetry Meets Machine Vision, E. P. Baltsavias and A. Gruen, eds., Proc. SPIE 1395, 496–503 (1990).

11. G. Sansoni, S. Corini, S. Lazzari, R. Rodella, and F. Docchio, "Three-dimensional imaging based on Gray-code light projection: characterization of the measuring algorithm and development of a measuring system for industrial applications," Appl. Opt. 36, 4463–4472 (1997).

12. G. Sansoni, S. Lazzari, S. Peli, and F. Docchio, "3D imager for dimensional gauging of industrial workpieces: state of the art of the development of a robust and versatile system," in Proceedings of the International Conference on Recent Advances in 3-D Digital Imaging and Modeling, G. Roth and M. Rioux, eds. (IEEE Computer Society, Los Alamitos, Calif., 1997), pp. 19–26.

13. W. Krattenthaler, K. J. Mayer, and H. P. Duwe, "3D-surface measurement with coded light approach," in Fourth International Workshop for Digital Image Processing and Computer Graphics, Proceedings of the Österreichische Arbeitsgemeinschaft für Mustererkennung (OCG Schriftenreihe, Oldenburg, Germany, 1993), Vol. 12, pp. 103–114.

14. G. Sansoni, M. Carocci, S. Lazzari, and R. Rodella, "A 3D imaging system for industrial applications with improved flexibility and robustness," J. Opt. A 1, 83–93 (1999).

15. M. Carocci, S. Lazzari, R. Rodella, and G. Sansoni, "3D range optical sensor: analysis of the measurement errors and development of procedures for their compensation," in Three-Dimensional Image Capture and Applications, R. N. Ellson and J. H. Nurre, eds., Proc. SPIE 3313, 178–188 (1998).
