Contents

From the Series Editor ... xix
Preface ... xxi
Preface to First Edition ... xxiii
Author ... xxv

Chapter 1  Waves and Beams ... 1
    1.1   The Wave Equation ... 1
    1.2   Plane Waves ... 2
    1.3   Spherical Waves ... 2
    1.4   Cylindrical Waves ... 2
    1.5   Waves as Information Carriers ... 3
          1.5.1   Amplitude/Intensity-Based Sensors ... 4
          1.5.2   Sensors Based on Phase Measurement ... 4
          1.5.3   Sensors Based on Polarization ... 5
          1.5.4   Sensors Based on Frequency Measurement ... 5
          1.5.5   Sensors Based on Change of Direction ... 5
    1.6   The Laser Beam ... 5
    1.7   The Gaussian Beam ... 6
    1.8   ABCD Matrix Applied to Gaussian Beams ... 8
          1.8.1   Propagation in Free Space ... 9
          1.8.2   Propagation through a Thin Lens ... 10
          1.8.3   Mode Matching ... 12
    1.9   Nondiffracting Beams – Bessel Beams ... 12
    1.10  Singular Beams ... 13
    Bibliography ... 17
    Additional Reading ... 17

Chapter 2  Optical Interference ... 19
    2.1   Introduction ... 19
    2.2   Generation of Coherent Waves ... 20
          2.2.1   Interference by Division of Wavefront ... 20
          2.2.2   Interference by Division of Amplitude ... 20
    2.3   Interference between Two Plane Monochromatic Waves ... 21
          2.3.1   Young's Double-Slit Experiment ... 22
          2.3.2   Michelson Interferometer ... 24
    2.4   Multiple-Beam Interference ... 25
          2.4.1   Multiple-Beam Interference: Division of Wavefront ... 25
          2.4.2   Multiple-Beam Interference: Division of Amplitude ... 27
                  2.4.2.1   Interference Pattern in Transmission ... 27
                  2.4.2.2   Interference Pattern in Reflection ... 29
    2.5   Interferometry ... 29
          2.5.1   Dual-Wavelength Interferometry ... 30
          2.5.2   White Light Interferometry ... 31
          2.5.3   Heterodyne Interferometry ... 31
          2.5.4   Shear Interferometry ... 32
          2.5.5   Polarization Interferometers ... 33
          2.5.6   Interference Microscopy ... 34
          2.5.7   Doppler Interferometry ... 36
          2.5.8   Fiber-Optic Interferometers ... 37
          2.5.9   Phase-Conjugation Interferometers ... 38
    Bibliography ... 39
    Additional Reading ... 40

Chapter 3  Diffraction ... 43
    3.1   Fresnel Diffraction ... 43
    3.2   Fraunhofer Diffraction ... 44
    3.3   Action of a Lens ... 45
    3.4   Image Formation and Fourier Transformation by a Lens ... 45
          3.4.1   Image Formation ... 47
          3.4.2   Fourier Transformation ... 47
    3.5   Optical Filtering ... 49
    3.6   Optical Components in Optical Metrology ... 50
          3.6.1   Reflective Optical Components ... 50
          3.6.2   Refractive Optical Components ... 50
          3.6.3   Diffractive Optical Components ... 52
                  3.6.3.1   Sinusoidal Grating ... 55
          3.6.4   Phase Grating ... 56
          3.6.5   Diffraction Efficiency ... 57
    3.7   Resolving Power of Optical Systems ... 57
    Bibliography ... 58

Chapter 4  Phase-Evaluation Methods ... 59
    4.1   Interference Equation ... 59
    4.2   Fringe Skeletonization ... 60
    4.3   Temporal Heterodyning ... 61
    4.4   Phase-Sampling Evaluation: Quasi-Heterodyning ... 62
    4.5   Phase-Shifting Method ... 63
    4.6   Phase-Shifting with Unknown but Constant Phase-Step ... 63
    4.7   Spatial Phase-Shifting ... 66
    4.8   Methods of Phase-Shifting ... 68
          4.8.1   PZT-Mounted Mirror ... 68
          4.8.2   Tilt of Glass Plate ... 69
          4.8.3   Rotation of Polarization Component ... 70
          4.8.4   Motion of a Diffraction Grating ... 72
          4.8.5   Use of a CGH Written on a Spatial Light Modulator ... 72
          4.8.6   Special Methods ... 72
    4.9   Fourier Transform Method ... 72
    4.10  Spatial Heterodyning ... 73
    Bibliography ... 75
    Additional Reading ... 75

Chapter 5  Detectors and Recording Materials ... 79
    5.1   Detector Characteristics ... 79
    5.2   Detectors ... 80
          5.2.1   Photoconductors ... 80
          5.2.2   Photodiodes ... 81
          5.2.3   Photomultiplier Tube ... 84
    5.3   Image Detectors ... 84
          5.3.1   Time-Delay and Integration Mode of Operation ... 89
    5.4   Recording Materials ... 89
          5.4.1   Photographic Films and Plates ... 90
          5.4.2   Dichromated Gelatin ... 94
          5.4.3   Photoresists ... 95
          5.4.4   Photopolymers ... 95
          5.4.5   Thermoplastics ... 96
          5.4.6   Photochromics ... 96
          5.4.7   Ferroelectric Crystals ... 97
    Bibliography ... 98
    Additional Reading ... 98

Chapter 6  Holographic Interferometry ... 101
    6.1   Introduction ... 101
    6.2   Hologram Recording ... 102
    6.3   Reconstruction ... 103
    6.4   Choice of Angle of Reference Wave ... 104
    6.5   Choice of Reference Wave Intensity ... 105
    6.6   Types of Holograms ... 105
    6.7   Diffraction Efficiency ... 105
    6.8   Experimental Arrangement ... 105
          6.8.1   Lasers ... 106
          6.8.2   Beam-Splitters ... 107
          6.8.3   Beam-Expanders ... 107
          6.8.4   Object-Illumination Beam ... 107
          6.8.5   Reference Beam ... 107
          6.8.6   Angle between Object and Reference Beams ... 108
    6.9   Holographic Recording Materials ... 108
    6.10  Holographic Interferometry ... 108
          6.10.1  Real-Time HI ... 108
          6.10.2  Double-Exposure HI ... 109
          6.10.3  Time-Average HI ... 110
          6.10.4  Real-Time, Time-Average HI ... 115
    6.11  Fringe Formation and Measurement of Displacement Vector ... 115
    6.12  Loading of the Object ... 116
    6.13  Measurement of Very Small Vibration Amplitudes ... 117
    6.14  Measurement of Large Vibration Amplitudes ... 117
          6.14.1  Frequency Modulation of Reference Wave ... 117
          6.14.2  Phase Modulation of Reference Beam ... 119
    6.15  Stroboscopic Illumination/Stroboscopic HI ... 120
    6.16  Special Techniques in Holographic Interferometry ... 121
          6.16.1  Two-Reference-Beam HI ... 121
          6.16.2  Sandwich HI ... 123
          6.16.3  Reflection HI ... 125
    6.17  Extending the Sensitivity of HI ... 127
          6.17.1  Heterodyne HI ... 127
    6.18  Holographic Contouring/Shape Measurement ... 129
          6.18.1  Dual-Wavelength Method ... 129
          6.18.2  Dual-Refractive Index Method ... 131
          6.18.3  Dual-Illumination Method ... 132
    6.19  Holographic Photoelasticity ... 132
    6.20  Digital Holography ... 132
          6.20.1  Recording ... 132
          6.20.2  Reconstruction ... 133
    6.21  Digital Holographic Interferometry ... 135
    Bibliography ... 136
    Additional Reading ... 137

Chapter 7  Speckle Metrology ... 149
    7.1   The Speckle Phenomenon ... 149
    7.2   Average Speckle Size ... 149
          7.2.1   Objective Speckle Pattern ... 150
          7.2.2   Subjective Speckle Pattern ... 150
    7.3   Superposition of Speckle Patterns ... 151
    7.4   Speckle Pattern and Object Surface Characteristics ... 152
    7.5   Speckle Pattern and Surface Motion ... 152
          7.5.1   Linear Motion in the Plane of the Surface ... 152
          7.5.2   Out-of-Plane Displacement ... 152
          7.5.3   Tilt of the Object ... 153
    7.6   Speckle Photography ... 155
    7.7   Methods of Evaluation ... 158
          7.7.1   Pointwise Filtering ... 158
          7.7.2   Wholefield Filtering ... 160
          7.7.3   Fourier Filtering: Measurement of Out-of-Plane Displacement ... 161
    7.8   Speckle Photography with Vibrating Objects: In-Plane Vibration ... 161
    7.9   Sensitivity of Speckle Photography ... 162
    7.10  Particle Image Velocimetry ... 162
    7.11  White-Light Speckle Photography ... 162
    7.12  Shear Speckle Photography ... 163
    7.13  Speckle Interferometry ... 164
    7.14  Correlation Coefficient in Speckle Interferometry ... 167
    7.15  Out-of-Plane Speckle Interferometer ... 168
    7.16  In-Plane Measurement: Duffy's Method ... 169
    7.17  Filtering ... 171
          7.17.1  Fringe Formation ... 171
    7.18  Out-of-Plane Displacement Measurement ... 173
    7.19  Simultaneous Measurement of Out-of-Plane and In-Plane Displacement Components ... 174
    7.20  Other Possibilities for Aperturing the Lens ... 175
    7.21  Duffy's Arrangement: Enhanced Sensitivity ... 176
    7.22  Speckle Interferometry – Shape Measurement/Contouring ... 177
    7.23  Speckle Shear Interferometry ... 177
          7.23.1  The Meaning of Shear ... 177
    7.24  Methods of Shearing ... 178
    7.25  Theory of Speckle Shear Interferometry ... 180
    7.26  Fringe Formation ... 181
          7.26.1  The Michelson Interferometer ... 181
          7.26.2  The Apertured Lens Arrangement ... 182
    7.27  Shear Interferometry without Influence of the In-Plane Component ... 183
    7.28  Electronic Speckle Pattern Interferometry ... 183
          7.28.1  Out-of-Plane Displacement Measurement ... 184
          7.28.2  In-Plane Displacement Measurement ... 185
          7.28.3  Vibration Analysis ... 185
          7.28.4  Measurement on Small Objects ... 186
          7.28.5  Shear ESPI Measurement ... 187
    7.29  Contouring in ESPI ... 187
          7.29.1  Change of Direction of Illumination ... 188
          7.29.2  Change of Wavelength ... 189
          7.29.3  Change of Medium Surrounding the Object ... 189
          7.29.4  Tilt of the Object ... 189
    7.30  Special Techniques ... 189
          7.30.1  Use of Retro-Reflective Paint ... 189
    7.31  Spatial Phase-Shifting ... 190
    Bibliography ... 191
    Additional Reading ... 191

Chapter 8  Photoelasticity ... 201
    8.1   Superposition of Two Plane-Polarized Waves ... 201
    8.2   Linear Polarization ... 202
    8.3   Circular Polarization ... 203
    8.4   Production of Polarized Light ... 203
          8.4.1   Reflection ... 204
          8.4.2   Refraction ... 204
          8.4.3   Double Refraction ... 204
                  8.4.3.1   Phase Plates ... 205
                  8.4.3.2   Quarter-Wave Plate ... 206
                  8.4.3.3   Half-Wave Plate ... 206
                  8.4.3.4   Compensators ... 206
          8.4.4   Dichroism ... 207
          8.4.5   Scattering ... 207
    8.5   Malus's Law ... 207
    8.6   The Stress-Optic Law ... 207
    8.7   The Strain-Optic Law ... 209
    8.8   Methods of Analysis ... 210
          8.8.1   Plane Polariscope ... 210
          8.8.2   Circular Polariscope ... 212
    8.9   Evaluation Procedure ... 216
    8.10  Measurement of Fractional Fringe Order ... 217
          8.10.1  Tardy's Method ... 217
    8.11  Phase-Shifting ... 220
          8.11.1  Isoclinics Computation ... 220
          8.11.2  Computation of Isochromatics ... 221
    8.12  Birefringent Coating Method: Reflection Polariscope ... 222
    8.13  Holophotoelasticity ... 223
          8.13.1  Single-Exposure Holophotoelasticity ... 224
          8.13.2  Double-Exposure Holophotoelasticity ... 225
    8.14  Three-Dimensional Photoelasticity ... 229
          8.14.1  The Frozen-Stress Method ... 229
          8.14.2  Scattered-Light Photoelasticity ... 230
    8.15  Examination of the Stressed Model in Scattered Light ... 230
          8.15.1  Unpolarized Incident Light ... 230
          8.15.2  Linearly Polarized Incident Beam ... 232
    Bibliography ... 233
    Additional Reading ... 234

Chapter 9  The Moiré Phenomenon ... 239
    9.1   Introduction ... 239
    9.2   The Moiré Fringe Pattern between Two Linear Gratings ... 239
          9.2.1   a ≠ b but θ = 0 ... 241
          9.2.2   a = b but θ ≠ 0 ... 241
    9.3   The Moiré Fringe Pattern between a Linear Grating and a Circular Grating ... 242
    9.4   Moiré between Sinusoidal Gratings ... 243
    9.5   Moiré between Reference and Deformed Gratings ... 245
    9.6   Moiré Pattern with Deformed Sinusoidal Grating ... 246
          9.6.1   Multiplicative Moiré Pattern ... 247
          9.6.2   Additive Moiré Pattern ... 247
    9.7   Contrast Improvement of the Additive Moiré Pattern ... 248
    9.8   Moiré Phenomenon for Measurement ... 248
    9.9   Measurement of In-Plane Displacement ... 248
          9.9.1   Reference and Measurement Gratings of Equal Pitch and Aligned Parallel to Each Other ... 248
          9.9.2   Two-Dimensional In-Plane Displacement Measurement ... 249
          9.9.3   High-Sensitivity In-Plane Displacement Measurement ... 251
    9.10  Measurement of Out-of-Plane Component and Contouring ... 253
          9.10.1  The Shadow Moiré Method ... 254
                  9.10.1.1  Parallel Illumination and Parallel Observation ... 254
                  9.10.1.2  Spherical-Wave Illumination and Camera at Finite Distance ... 255
          9.10.2  Automatic Shape Determination ... 257
          9.10.3  Projection Moiré ... 257
          9.10.4  Light-Line Projection with TDI Mode of Operation of CCD Camera ... 260
          9.10.5  Coherent Projection Method ... 262
                  9.10.5.1  Interference between Two Collimated Beams ... 262
                  9.10.5.2  Interference between Two Spherical Waves ... 263
          9.10.6  Measurement of Vibration Amplitudes ... 264
          9.10.7  Reflection Moiré Method ... 265
    9.11  Slope Determination for Dynamic Events ... 267
    9.12  Curvature Determination for Dynamic Events ... 268
    9.13  Surface Topography with Reflection Moiré Method ... 269
    9.14  Talbot Phenomenon ... 271
          9.14.1  Talbot Effect in Collimated Illumination ... 272
          9.14.2  Cut-Off Distance ... 273
          9.14.3  Talbot Effect in Noncollimated Illumination ... 273
          9.14.4  Talbot Effect for Measurement ... 274
                  9.14.4.1  Temperature Measurement ... 274
                  9.14.4.2  Measurement of the Focal Length of a Lens and the Long Radius of Curvature of a Surface ... 275
    Bibliography ... 275
    Additional Reading ... 275

Index ... 283


From the Series Editor

Over the last 20 years, optical science and engineering has emerged as a discipline in its own right. This has occurred with the realization that we are dealing with an integrated body of knowledge that has resulted in optical science, engineering, and technology becoming an enabling discipline that can be applied to a variety of scientific, industrial, commercial, and military problems to produce operating devices and hardware systems. This book series is a testament to the truth of the preceding statement.

Quality control and testing have become essential tools in modern industry and modern laboratory processes. Optical methods of measurement have provided many of the essential techniques of current practice. This current volume on Optical Methods of Measurement emphasizes wholefield measurement methods as opposed to point measurement, that is, sensing a field all at once and then mapping that field for the parameter or parameters under consideration, as contrasted to building up that field information by a time series of point-by-point determinations. The initial output of these wholefield systems of measurement is often a fringe pattern that is then processed. The required fringes are formed by direct interferometry, holographic interferometry, phase-shifting methods, heterodyning, speckle pattern interferometry, and moiré techniques.

The methods described here are applicable to many measurement scenarios, although the examples focus on modern experimental mechanics. Since this volume covers the variety of techniques available and their range of applicability, together with their sensitivity and accuracy as determined by the underlying principle, the reader will find these pages an excellent practical guide to wholefield measurement as well as a desk reference volume.

Brian J. Thompson


Preface

Many good ideas that have considerably improved the book originated from several of my professional colleagues. I would like to thank Prof. Osten from Stuttgart University and Professors Hinsch and Helmers from the University of Oldenburg for providing numerous contributions to the contents of the book. I owe a great deal to my colleagues Prof. M. P. Kothiyal, Prof. Chandra Shakher, and Dr. N. Krishna Mohan for their help and support at various stages.

Since the phenomenon of interference is central to many techniques of measurement, I have introduced a chapter on "Optical Interference." There have been several additions, such as the nondiffracting beam and the singular beam with their metrological applications. The bibliography and additional reading lists have been expanded. The book contains references to 103 books and 827 journal papers, and includes 130 figures.

Revision of this book was originally slated for late 2005; however, administrative responsibilities prevented this. It is only through the persistent efforts by the staff of Taylor & Francis that the revision finally materialized. I would therefore like to express my sincere thanks to Jessica Vakili and Catherine Giacari for keeping my interest in the book alive.

Rajpal S. Sirohi


Preface to First Edition

Optical techniques of measurement are among the most sensitive known today. In addition, they are noncontact, noninvasive, and fast. In recent years, the use of optical techniques for measurement has dramatically increased, and applications range from determining the topography of landscapes to checking the roughness of polished surfaces.

Any of the characteristics of a light wave (amplitude, phase, length, frequency, polarization, and direction of propagation) can be modulated by the measurand. On demodulation, the value of the measurand at a spatial point and at a particular time instant can be obtained. Optical methods can effect measurement at discrete points or over the wholefield with extremely fine spatial resolution. For many applications, wholefield measurements are preferred.

This book contains a wealth of information on wholefield measurement methods, particularly those employed frequently in modern experimental mechanics, since the variable that is normally monitored is displacement. Thus, the methods can be used to determine surface deformation, strains, and stresses. By extension, they can be applied in the nondestructive evaluation of components. There is no doubt that the methods described are applicable to other fields as well, such as the determination of surface contours and surface roughness. With the appropriate setup, these wholefield optical methods can be used to obtain material properties and temperature and pressure gradients. Throughout the book, emphasis is placed on the physics of the techniques, with demonstrative examples of how they can be applied to tackle particular measurement problems.

Any optical technique has to involve a light source, beam-handling optics, a detector, and a data-handling system. In many of the wholefield measurement techniques, a laser is the source of light. Since much information on lasers is available, this aspect is not discussed in the book. Instead, we have included an introductory chapter on the concept of waves and associated phenomena. The propagation of waves is discussed in Chapter 2.

Chapter 3 deals with current phase evaluation techniques, since almost all the techniques described here display the information in the form of a fringe pattern. This fringe pattern is evaluated to a high degree of accuracy using currently available methods to extract the information of interest. The various detectors available for recording wholefield information are described in Chapter 4. Thus the first four chapters provide the background for the rest of the book.

Chapter 5 is on holographic interferometry. A number of techniques have been included to give readers a good idea of how various applications may be handled.


The speckle phenomenon, although a bane of holographers, has emerged as a very good measurement technique. Several techniques are now available to measure the in-plane and out-of-plane displacement components, tilts, or slopes. In Chapter 6, these techniques are contrasted with those based on holographic interferometry, with discussion on their relative advantages and areas of applicability. In recent times, electronic detection in conjunction with phase evaluation methods has been increasingly used. This makes all these speckle techniques almost real-time approaches, and they have consequently attracted more attention from industry. Chapter 6 includes descriptions of various measurement techniques in speckle metrology.

Photoelasticity is another wholefield measurement technique that has existed for some time. Experiments are conducted on transparent models that become birefringent when subjected to load. Unlike holographic interferometry and speckle-based methods, which measure displacement or strain, photoelasticity gives the stress distribution directly. Photoelasticity, including the technique of holophotoelasticity, is covered in Chapter 7.

The moiré technique, along with the Talbot phenomenon, is a powerful measurement technique and covers a very wide range of sensitivity. Chapter 8 presents various techniques, spanning the measurement range from geometrical moiré with low-frequency gratings to moiré with high-frequency diffraction gratings.

The book has evolved as a result of the authors' involvement with research and teaching of these techniques for more than two decades. It puts together the major wholefield methods of measurement in one text, and therefore should be useful to students taking a graduate course in optical experimental mechanics and to experimentalists who are interested in investigating the techniques available and the physics behind them. It will also serve as a useful reference for researchers in the field.

We acknowledge the strong support given by Professor Brian J. Thompson and the excellent work of the staff of Marcel Dekker, Inc. This book would not have been possible without the constant support and encouragement of our wives, Vijayalaxmi Sirohi and Chau Swee Har, and our families, to whom we express our deepest thanks.

Rajpal S. Sirohi
Fook Siong Chau


Author

R. S. Sirohi, Ph.D. is presently the vice chancellor of Amity University, Rajasthan, India. Prior to this, he was the vice chancellor of Barkatullah University, Bhopal, and director, Indian Institute of Technology Delhi, Delhi. He has also served at the Indian Institute of Science, Bangalore, and in various capacities at the Indian Institute of Technology Madras, Chennai.

He holds a postgraduate degree in applied optics and a Ph.D. in physics from IIT Delhi, India.

Prof. Sirohi worked in Germany as a Humboldt Fellow and Awardee. He was senior research associate at Case Western Reserve University, Cleveland, Ohio, and associate professor at the Rose-Hulman Institute of Technology, Terre Haute, Indiana. He was an ICTP (International Center for Theoretical Physics, Trieste, Italy) visiting scholar to the Institute for Advanced Studies, University of Malaya, Malaysia, and visiting professor at the National University of Singapore. Currently he is an ICTP visiting scientist to the University of Namibia.

Dr. Sirohi is a Fellow of several important academies in India and abroad, including the Indian National Academy of Engineering, the National Academy of Sciences, the Optical Society of America, the Optical Society of India, and SPIE (The International Society for Optical Engineering), and an honorary fellow of ISTE (Indian Society for Technical Education) and the Metrology Society of India. He is a member of several other scientific societies, and a founding member of the India Laser Association. He was also the chair of the SPIE-INDIA Chapter, which he established with cooperation from SPIE in 1995 at IIT Madras. He was invited as a JSPS (Japan Society for the Promotion of Science) Fellow to Japan. He was a member of the Education Committee of SPIE.

Dr. Sirohi has received the following awards from various organizations: Humboldt Research Award (1995) by the Alexander von Humboldt Foundation, Germany; Galileo Galilei Award of the International Commission for Optics (1995); Amita De Memorial Award of the Optical Society of India (1998); 13th Khwarizmi International Award, IROST (Iranian Research Organisation for Science and Technology) (2000); Albert Einstein Silver Medal, UNESCO (2000); Dr. Y. T. Thathachari Prestigious Award for Science by the Thathachari Foundation, Mysore (2001); Pt. Jawaharlal Nehru Award in Engineering & Technology (2000) by the MP Council of Science and Technology; Giant's Gaurav Samman (2001); NAFEN's Annual Corporate Excellence Award (2001); NRDC Technology Invention Award (2003); Sir C. V. Raman Award: Physical Sciences (2002) by UGC; Padma Shri, a National Civilian Award (2004); Sir C. V. Raman Birth Centenary Award (2005) by the Indian Science Congress Association, Kolkata; induction into the Order of Holo-Knights during the International Conference Fringe 05 held at Stuttgart, Germany, 2005; Centenarian Seva Ratna Award (2004) by The Centenarian Trust, Chennai; Instrument Society of India Award (2007); and the Lifetime Achievement Award (2007) by the Optical Society of India. SPIE (The International Society for Optical Engineering) has bestowed on him the SPIE Gabor Award 2009 for his work on holography, speckle metrology, interferometry, and confocal microscopy.

Dr. Sirohi was the president of the Optical Society of India during 1994–1996. He was also president of the Instrument Society of India during 2003–2006 and was reelected for another term beginning 2008. He is on the international advisory board of the Journal of Modern Optics, UK, and on the editorial boards of the Journal of Optics (India) and Optik (Germany). Currently, he is associate editor of the international journal Optical Engineering and regional editor of the Journal of Holography and Speckle. He was also guest editor for the journals Optics and Lasers in Engineering and Optical Engineering.

Dr. Sirohi has 438 papers to his credit, with 240 published in national and international journals, 60 in conference proceedings, and 140 presented at conferences. He has authored, co-authored, or edited thirteen books, including five Milestones for SPIE; was principal coordinator for 26 projects sponsored by government funding agencies and industries; and has supervised 24 Ph.D., 7 M.S., and numerous B.Tech., M.Sc., and M.Tech. theses.

Dr. Sirohi's research areas are optical metrology, optical instrumentation, holography, and the speckle phenomenon.


1 Waves and Beams

Optics pertains to the generation, amplification, propagation, detection, modification, and modulation of light. Light is the very small portion of the electromagnetic spectrum that is responsible for vision. In short, optics is science, technology, and engineering with light. Measurement techniques based on light fall within the subject of optical metrology. Thus, optical metrology comprises a variety of measurement techniques, including dimensional measurements, measurement of process variables, and measurement of electrical quantities such as currents and voltages.

Optical methods are noncontact and noninvasive. The light does interact with the measurand, but in the majority of cases it does not influence the measurand's value, and hence such measurements are termed noncontact. Further, information about the whole object can be obtained simultaneously, giving optical methods the capability of wholefield measurement. The measurements are not influenced by electromagnetic fields. In addition, the wavelength of light serves as the measuring stick in length metrology, and hence high accuracy is attained in the measurement of length and displacement. These attributes make optical methods indispensable for several different kinds of measurements.

1.1 THE WAVE EQUATION

Light is an electromagnetic wave, and therefore satisfies the wave equation

∇²E(r, t) − (1/c²) ∂²E(r, t)/∂t² = 0    (1.1)

where E(r, t) is the instantaneous electric field of the light wave and c is the velocity of light. [The instantaneous magnetic field of the wave, B(r, t), satisfies a similar equation.] Equation 1.1 is a vector wave equation. We are interested in those of its solutions that represent monochromatic waves, that is,

E(r, t) = E(r) exp(−iωt) (1.2)

where E(r) is the amplitude and ω is the circular frequency of the wave. Substituting Equation 1.2 into Equation 1.1 leads to

∇²E(r) + k²E(r) = 0    (1.3)


where k² = ω²/c². This is the well-known Helmholtz equation. A solution of the Helmholtz equation for E(r) will provide a monochromatic solution of the wave equation.
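
A simple numerical check of this relation is possible: for a one-dimensional field E(z) = exp(ikz), the second derivative computed by finite differences should equal −k²E to within discretization error. The Python sketch below is an illustrative addition; the wavelength and sampling grid are arbitrary choices.

```python
import numpy as np

# Illustrative check (not from the text): a 1D field E(z) = exp(ikz) should satisfy
# d^2E/dz^2 + k^2 E = 0. The wavelength and grid below are arbitrary assumptions.
wavelength = 633e-9                  # metres
k = 2 * np.pi / wavelength           # wavenumber
z = np.linspace(0.0, 10e-6, 20001)   # 10 micrometres, finely sampled
dz = z[1] - z[0]
E = np.exp(1j * k * z)

# Second derivative on interior points by central differences.
d2E = (E[2:] - 2 * E[1:-1] + E[:-2]) / dz**2
residual = d2E + k**2 * E[1:-1]

# The relative residual should be small (it scales as (k*dz)^2 / 12).
print("max |residual| / k^2 =", np.max(np.abs(residual)) / k**2)
```

The printed relative residual is limited only by the finite-difference discretization, confirming that exp(ikz) satisfies Equation 1.3 in one dimension.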

1.2 PLANE WAVES

There are several solutions of the wave equation; one of these is of the form

E(r) = E0 exp(ik · r) (1.4)

This represents a plane wave, which has constant amplitude E0 and is of infinite cross-section. A plane monochromatic wave spans the whole of space for all times, and is a mathematical idealization. A real plane wave, called a collimated wave, is limited in its transverse dimensions. The limitation may be imposed by optical elements/systems or by physical stops. This restriction on the transverse dimensions of the wave leads to the phenomenon of diffraction, which is discussed in more detail in Chapter 3. A plane wave is shown schematically in Figure 1.1a.

1.3 SPHERICAL WAVES

Another solution of Equation 1.3 is

E(r) = (A/r) exp(ikr)    (1.5)

where A is a constant representing the amplitude of the wave at unit distance. This solution represents a spherical wave. A plane wave propagates in a particular direction, whereas a spherical wave is not unidirectional. A quadratic approximation to the spherical wave, in rectangular Cartesian coordinates, is

E(x, y, z) = (A/z) exp(ikz) exp[ik(x² + y²)/2z]    (1.6)

This represents the amplitude at any point (x, y, z) on a plane at a distance z from a point source. This expression is very often used when discussing propagation through optical systems. A collimated wave is obtained by placing a point source (source size diffraction-limited) at the focal point of a lens. This is, in fact, a truncated plane wave. A spherical wave emanates from a point source, or it may converge to a point. Diverging and converging spherical waves are shown schematically in Figure 1.1b.
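
The accuracy of the quadratic approximation in Equation 1.6 can be checked numerically. The Python sketch below compares the exact spherical-wave phase with the quadratic form over a small transverse window; the wavelength, distance, and window size are assumed values chosen only for illustration.

```python
import numpy as np

# Illustrative comparison (not from the text) of the exact spherical-wave phase
# k*sqrt(z^2 + x^2 + y^2) with the quadratic form k*z + k*(x^2 + y^2)/(2*z) of
# Equation 1.6. Wavelength, distance, and window size are assumed values.
wavelength = 633e-9
k = 2 * np.pi / wavelength
z = 0.5                                  # observation plane 0.5 m from the source
x = np.linspace(-1e-3, 1e-3, 201)        # +/- 1 mm transverse window
X, Y = np.meshgrid(x, x)

exact_phase = k * np.sqrt(z**2 + X**2 + Y**2)
paraxial_phase = k * z + k * (X**2 + Y**2) / (2 * z)

err = np.max(np.abs(exact_phase - paraxial_phase))
print(f"max phase error: {err:.3e} rad ({err / (2 * np.pi):.3e} waves)")
```

For this window the error is only a tiny fraction of a wave, which is why the quadratic form is adequate when discussing paraxial propagation through optical systems.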

1.4 CYLINDRICAL WAVES

Often, a fine slit is illuminated by a broad source. The waves emanating from such line sources are called cylindrical waves. The surfaces of constant phase, far away from the source, are cylindrical. A cylindrical lens placed in a collimated beam will generate a cylindrical wave that would converge to a line focus. The amplitude of a cylindrical wave, far from the narrow slit, can be written as

E(r) = (A/√r) exp(ikr)    (1.7)

A cylindrical wave is shown schematically in Figure 1.1c.

FIGURE 1.1 (a) Plane wave. (b) Diverging and converging spherical waves. (c) Cylindrical wave. (Figure 1.1c from E. Hecht and A. Zajac, Optics, Addison-Wesley, Reading, MA, 1974. With permission.)

1.5 WAVES AS INFORMATION CARRIERS

Light is used for sensing a variety of parameters, and its domain of applications is so vast that it pervades all branches of science, engineering, and technology, as well as biomedicine, agriculture, and more. Devices that use light for sensing, measurement, and control are termed optical sensors. Optical sensing is generally noncontact and noninvasive, and provides very accurate measurements. In many cases, accuracy can be varied over a wide range. In these sensors, an optical wave is an information sensor and carrier of information.

Any one of the following characteristics of a wave can be modulated by the measured property (the measurand):

• Amplitude or intensity
• Phase
• Polarization
• Frequency
• Direction of propagation

However, the detected quantity is always intensity, as the detectors cannot follow the optical frequency. The measured property modifies the characteristics of the wave in such a way that, on demodulation, a change in intensity results. This change in intensity is related to the measured property. In some measurements, the wave intensity is modulated directly, and hence no demodulation is used before detection.

1.5.1 AMPLITUDE/INTENSITY-BASED SENSORS

There are many sensors that measure the change in intensity immediately after the wave propagates through the region of interest. The measurand changes the intensity or attenuates the wave; the simplest case is the measurement of opacity or density. In another application, rotation of the plane of polarization of linearly polarized light is measured by invoking Malus's law. The refractive index of a dielectric medium is obtained by monitoring the reflected light when a p-polarized wave is incident on the dielectric–air interface. Measurement of absorption coefficients using Beer's law and measurement of very high reflectivities of surfaces are other examples. Fiber-optic sensors based on attenuation are used for force/pressure measurement, displacement measurement, etc. The measurement process may lead to absolute measurements or relative measurements as the need arises.
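
As a minimal sketch of such an intensity-based measurement, the snippet below recovers an absorption coefficient from a transmitted-to-incident intensity ratio using Beer's law, I = I0 exp(−αL). The numerical values are assumptions for illustration only.

```python
import numpy as np

# Minimal sketch of an intensity-based measurement using Beer's law,
# I = I0 * exp(-alpha * L). All numerical values are assumptions for illustration.
I0 = 1.00      # intensity entering the sample (arbitrary units)
I = 0.72       # intensity measured after the sample
L = 0.02       # path length through the sample, metres

alpha = -np.log(I / I0) / L        # absorption coefficient recovered from the ratio
print(f"absorption coefficient: {alpha:.2f} 1/m")
```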

1.5.2 SENSORS BASED ON PHASE MEASUREMENT

A variety of measurements are performed using phase measurement over a range of accuracies. A light wave can be modulated, and the phase of the modulated wave with respect to a certain reference can be measured to extract information about the measurand. Alternatively, the phase of the light wave itself can be measured. Phase can be influenced by distance, refractive index, and the wavelength of the source. A variety of phase-measuring instruments, generally known as interferometers, are available that have accuracies from nanometers to millimeters. Interferometers can also measure derivatives of displacement or, in general, of a measurand. Heterodyne interferometry is used for measuring distance, absolute distance, and very high velocities, among other things. Interferometry can also be performed on real objects without surface treatment, and their response to external perturbations can be monitored.
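
A simple worked example shows how a measured phase maps to displacement. Assuming a double-pass geometry such as a Michelson interferometer, a mirror displacement d changes the phase by Δϕ = 4πd/λ; the sketch below inverts this relation for assumed values of wavelength and phase change.

```python
import numpy as np

# Hedged example: in a double-pass interferometer (e.g., a Michelson), a mirror
# displacement d changes the optical phase by delta_phi = 4*pi*d/lambda. The
# wavelength and measured phase below are assumed values for illustration.
wavelength = 632.8e-9                # HeNe laser wavelength, metres
delta_phi = np.deg2rad(45.0)         # assumed measured phase change

d = delta_phi * wavelength / (4 * np.pi)
print(f"displacement: {d * 1e9:.1f} nm")   # 45 degrees of phase is about lambda/16
```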


1.5.3 SENSORS BASED ON POLARIZATION

Malus's law, optical activity, the stress-optic law, Faraday rotation, and so on, all based on changes of polarization by the measurand, have been exploited for the measurement of a number of quantities. Current flowing in a wire is measured using Faraday rotation, and electrically induced birefringence is used for measuring voltage; thereby, measurement of power is accomplished with ease. Force is measured using the stress-optic law. Measurement of the angle of rotation of the plane of polarization can be done by application of Malus's law.

1.5.4 SENSORS BASED ON FREQUENCY MEASUREMENT

Reflection of light from a moving object results in a shift in the frequency of the reflected light. This frequency shift, known as the Doppler shift, is directly related to the velocity of the object. It is measured by heterodyning the received signal with the unshifted light signal. Heterodyne interferometry is used to measure displacement. The Hewlett-Packard interferometer, a commercial instrument, is based on this principle. Flow measurement also utilizes measurement of the Doppler shift. Laser Doppler anemometers/velocimeters (LDA/LDV) are common instruments used to measure all three components of the velocity vector simultaneously. Turbulence can also be monitored with an LDA. Measurement of velocities has also been performed routinely by measuring the Doppler shift. However, for very high velocities, the Doppler shift becomes very large, and hence unmanageable by electronics. Using Doppler interferometry, in which Doppler-shifted light is fed into an interferometer, a low-frequency signal is obtained, and this is then related to the velocity.
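
A worked example gives a feel for the magnitude involved: for light reflected from an object moving along the line of sight, the Doppler shift is approximately Δf = 2v/λ, which is the beat frequency obtained on heterodyning with the unshifted beam. The values below are assumed for illustration.

```python
# Hedged worked example: light reflected from an object moving along the line of
# sight is Doppler-shifted by approximately delta_f = 2*v/lambda; this is the beat
# frequency obtained on heterodyning with the unshifted beam. Values are assumed.
wavelength = 632.8e-9   # metres
velocity = 0.01         # 10 mm/s toward the source

delta_f = 2 * velocity / wavelength
print(f"Doppler shift: {delta_f / 1e3:.1f} kHz")   # about 31.6 kHz for these values
```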

1.5.5 SENSORS BASED ON CHANGE OF DIRECTION

Optical pointers are devices based on change of direction, and can be used to monitor a number of variables, such as displacement, pressure, and temperature. Wholefield optical techniques utilizing this effect include shadowgraphy, schlieren photography, and speckle photography.

Sensing can be performed at a point or over the wholefield. Interferometry, which converts phase variations into intensity variations, is in general a wholefield technique. Interference phenomena are discussed in detail in Chapter 2.

1.6 THE LASER BEAM

A laser beam propagates as a nearly unidirectional wave with little divergence and with finite cross-section. Let us now seek a solution of the Helmholtz equation (Equation 1.3) that represents such a beam. We therefore write a solution in the form

E(r) = E0(r) exp(ikz) (1.8)

This solution differs from a plane wave propagating along the z-direction in thatits amplitude E0(r) is not constant. Further, the solution should represent a beam,


that is, it should be unidirectional and of finite cross-section. The variations of E0(r) and ∂E0(r)/∂z over a distance of the order of the wavelength along the z-direction are assumed to be negligible. This implies that the field varies approximately as exp(ikz) over a distance of a few wavelengths. When the field E0(r) satisfies these conditions, the Helmholtz equation takes the form

∇T²E0 + 2ik ∂E0/∂z = 0    (1.9)

where ∇T² is the transverse Laplacian. Equation 1.9 is known as the paraxial wave equation, and is a consequence of the weak factorization represented by Equation 1.8 and the other assumptions mentioned earlier.

1.7 THE GAUSSIAN BEAM

The intensity distribution in a beam from a laser oscillating in the TEM00 mode is given by

I(r) = I0 exp(−2r²/w²)    (1.10)

where r = (x² + y²)^(1/2). The intensity I(r) drops to 1/e² of the peak intensity I0 at a distance r = w. The parameter w is called the spot size of the beam; w depends on the z-coordinate. The intensity profile is Gaussian; hence the beam is called a Gaussian beam. Such a beam maintains its profile as it propagates. The Gaussian beam has a very special place in optics, and is often used for measurement.

The solution of Equation 1.9 that represents the Gaussian beam can be written as

E(r, z) = A exp[−r²/w²(z)] / [1 + (λz/πw0²)²]^(1/2) · exp[ikr²/2R(z)] · exp[−i tan⁻¹(λz/πw0²)] · exp(ikz)

        = A [w0/w(z)] exp[−r²/w²(z)] exp[ikr²/2R(z)] exp[−i tan⁻¹(λz/πw0²)] exp(ikz)

        = A [w0/w(z)] exp{−r²[1/w²(z) − ik/2R(z)]} exp[i(kz − ϕ)]    (1.11)

where A and w0 are constants and ϕ = tan⁻¹(λz/πw0²). The constant w0 is called the beam waist size and w(z) is called the spot size. We introduce another constant, the Rayleigh range z0, which is related to the beam waist size by z0 = πw0²/λ. The radius of curvature R(z) and the spot size w(z) of the beam at an arbitrary z-plane are given by

R(z) = z[1 + (πw0²/λz)²] = z[1 + (z0/z)²]    (1.12a)


w(z) = w0[1 + (λz/πw0²)²]^(1/2) = w0[1 + (z/z0)²]^(1/2)    (1.12b)

The amplitude of the wave is expressed as

A [w0/w(z)] exp[−r²/w²(z)]    (1.13)

The amplitude decreases as exp[−r²/w²(z)] (Gaussian beam). The spot size w(z) increases as the beam propagates, and consequently the amplitude on axis decreases. In other words, the Gaussian distribution flattens. The spot size w(z) at very large distances is

w(z)|z→∞ = λz/πw0    (1.14)

The half divergence angle θ is obtained as

θ = dw/dz = λ/πw0    (1.15)

The exponential factor exp[ikr²/2R(z)] in Equation 1.11 is the radial phase factor, with R(z) the radius of curvature of the beam. At large distances, that is, z ≫ z0, R(z) → z, and the beam appears to originate at z = 0. However, the wavefront is plane at the z = 0 plane. In general, a z = constant plane is not an equiphase plane. The second exponential factor in Equation 1.11 represents the dependence on the propagation phase, which comprises the phase of a plane wave, kz, and the Gouy phase ϕ. These can be summarized for near-field and far-field situations as follows. For the near field:

• Phase fronts are nearly planar (collimated beam).
• The beam radius is nearly constant.
• The wave is a plane wave with a Gaussian envelope.
• For z ≪ z0, w(z) → w0, R(z) → ∞, and tan⁻¹(z/z0) ≈ 0.
• There is no radial phase variation, since R → ∞.

For the far field

• The radius of curvature increases linearly with z.
• The beam radius increases linearly with z.
• For z ≫ z0, w(z) ≈ λz/πw0, R(z) ≈ z, and tan⁻¹(z/z0) ≈ π/2.

Figure 1.2 shows the variation of the amplitude as the beam propagates. The intensity distribution in a Gaussian beam is

I(r, z) = |A|² [w0²/w²(z)] exp[−2r²/w²(z)]    (1.16)


FIGURE 1.2 A Gaussian beam, showing the amplitude of the field and the e⁻¹ amplitude surface.

On the beam axis (r = 0), the intensity varies with distance z as follows:

I(0, z) = I0 w0²/w²(z) = I0/[1 + (z/z0)²]    (1.17)

It has its peak value I0 at z = 0, and keeps decreasing with increasing z. The value drops to 50% of the peak value at z = ±z0.
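As a quick numerical check of Equations 1.12 through 1.17, the minimal Python sketch below evaluates the spot size, radius of curvature, on-axis intensity, and far-field half divergence of a Gaussian beam; the wavelength and waist size are arbitrary illustrative values, not taken from the text.

import numpy as np

wavelength = 632.8e-9                 # illustrative He-Ne wavelength, m
w0 = 0.5e-3                           # illustrative beam waist size, m
z0 = np.pi * w0**2 / wavelength       # Rayleigh range

def spot_size(z):
    # Equation 1.12b
    return w0 * np.sqrt(1.0 + (z / z0)**2)

def radius_of_curvature(z):
    # Equation 1.12a (R -> infinity at the waist, z = 0)
    return z * (1.0 + (z0 / z)**2)

def on_axis_intensity(z, I0=1.0):
    # Equation 1.17
    return I0 / (1.0 + (z / z0)**2)

theta = wavelength / (np.pi * w0)     # half divergence angle, Equation 1.15

for z in (0.5 * z0, z0, 10 * z0):
    print(f"z = {z:.3f} m: w = {spot_size(z)*1e3:.3f} mm, "
          f"R = {radius_of_curvature(z):.3f} m, "
          f"I(0,z)/I0 = {on_axis_intensity(z):.3f}")
print(f"z0 = {z0:.3f} m, half divergence = {theta*1e3:.3f} mrad")

At z = z0 the printed on-axis intensity is 0.5, in agreement with the statement above.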

1.8 ABCD MATRIX APPLIED TO GAUSSIAN BEAMS

The propagation of a ray through optical elements under the paraxial approximation (linear optics) can be described by 2 × 2 matrices. The height of the output ray and its slope are related to the height and slope of the input ray through a 2 × 2 matrix characteristic of the optical element. This is described by

[ r2  ]   [ A  B ] [ r1  ]
[ r′2 ] = [ C  D ] [ r′1 ]    (1.18)

or

r2 = Ar1 + Br′1
r′2 = Cr1 + Dr′1

where r2 and r′2 are the height and slope of the output ray. Consider a ray emanating from a point source, the wavefront being spherical. Its radius of curvature R2 (= r2/r′2) at the output side is related to the radius of curvature at the input side as follows:

R2 = r2/r′2 = (Ar1 + Br′1)/(Cr1 + Dr′1) = (AR1 + B)/(CR1 + D)    (1.19)

The propagation of a spherical wave can easily be handled by this approach. It is also known that the propagation of Gaussian beams can be described by these matrices. Since, in any measurement process, a lens is an important component of the


experimental setup, we shall discuss how the propagation of a Gaussian beam through a lens can be handled by a 2 × 2 matrix.

ABCD matrices for free space and many optical elements are given in standard textbooks. For a thin lens of focal length f, the matrix is

[  1    0 ]
[ −1/f  1 ]

The amplitude of a Gaussian beam as described by Equation 1.11 can be rewritten as

E(r, z) = A(z) exp[i(kz − ϕ)] exp{(ik/2)(x² + y²)[1/R(z) − iλ/πw²(z)]}    (1.20)

This expression is similar to the paraxial approximation to the spherical wave, with the radius of curvature R replaced by a complex Gaussian wave parameter q, given by

1/q = 1/R − iλ/πw²    (1.21)

A Gaussian beam is described completely by w, R, and λ. These are contained in the complex parameter q, which transforms as follows:

q2 = (Aq1 + B)/(Cq1 + D)    (1.22)

The proof of this equality is rather tedious, but, in analogy with the case of a spherical wave, we may accept that it is correct. We shall now study the propagation of a Gaussian beam through free space and then through a lens.

1.8.1 PROPAGATION IN FREE SPACE

The ABCD matrix for free space is

[ 1  z ]
[ 0  1 ]

where z is the propagation distance. Therefore, q2 = q1 + z. In order to study the propagation of a Gaussian beam in free space, we will assume that the beam waist lies at z = 0, where the spot size and radius of curvature are w0 and R0 = ∞, respectively. At this plane, q0 = iπw0²/λ; the complex parameter at the waist is purely imaginary. Therefore, the complex parameter at any plane a distance z from the beam waist is q(z) = q0 + z. Thus,

1/q(z) = 1/R(z) − iλ/πw²(z) = 1/(q0 + z) = 1/(z + iπw0²/λ)

       = z/[z² + (πw0²/λ)²] − i(πw0²/λ)/[z² + (πw0²/λ)²]


It can be seen that although the Gaussian parameter q was purely imaginary at the beam waist, it has now become complex at any plane z and has both real and imaginary components. On equating real and imaginary parts, we obtain, after some simplification,

R(z) = z[1 + (πw0²/λz)²]    (1.23a)

w(z) = w0[1 + (λz/πw0²)²]^(1/2)    (1.23b)

These are the same expressions for the radius of curvature and spot size as were obtained earlier.
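A compact way to carry out this bookkeeping numerically is to store q as a complex number and apply Equation 1.22 for each ABCD matrix. The sketch below, with illustrative values of wavelength and waist (assumptions, not from the text), propagates q through free space and recovers R(z) and w(z) of Equations 1.23a and 1.23b.

import numpy as np

wavelength = 632.8e-9                      # illustrative wavelength, m
w0 = 1.0e-3                                # illustrative waist, m
q0 = 1j * np.pi * w0**2 / wavelength       # q at the waist (purely imaginary)

def abcd_transform(q, M):
    # Equation 1.22: q2 = (A q1 + B) / (C q1 + D)
    (A, B), (C, D) = M
    return (A * q + B) / (C * q + D)

def free_space(z):
    # ABCD matrix of a free-space path of length z
    return ((1.0, z), (0.0, 1.0))

z = 2.0                                    # propagation distance, m
q = abcd_transform(q0, free_space(z))
R = 1.0 / np.real(1.0 / q)                 # Equation 1.21: real part of 1/q gives 1/R
w = np.sqrt(-wavelength / (np.pi * np.imag(1.0 / q)))   # imaginary part gives the spot size

print(f"R(z) = {R:.3f} m, w(z) = {w*1e3:.3f} mm")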

1.8.2 PROPAGATION THROUGH A THIN LENS

Consider a Gaussian beam incident on a thin lens of focal length f. We will assume that the beam waists w1 and w2 of the incident and transmitted beams lie at distances d1 and d2 from the lens, respectively, as shown in Figure 1.3. The ABCD matrix for the light to propagate from the plane at d1 to the plane at d2 is given by

[ A  B ]   [ 1  d2 ] [  1    0 ] [ 1  d1 ]   [ 1 − d2/f    d1 + d2 − d1d2/f ]
[ C  D ] = [ 0   1 ] [ −1/f  1 ] [ 0   1 ] = [  −1/f       1 − d1/f         ]    (1.24)

Using the relation given in Equation 1.22, we obtain

q2 = (Aq1 + B)/(Cq1 + D) = [(1 − d2/f)q1 + (d1 + d2 − d1d2/f)] / [(−1/f)q1 + (1 − d1/f)]    (1.25)

FIGURE 1.3 Propagation of a Gaussian beam through a lens.


The complex parameter q1 is purely imaginary at the beam waist, and hence q1 = iπw1²/λ. Therefore, q2 can be expressed as follows:

q2 = [(1 − d2/f)(iπw1²/λ) + (d1 + d2 − d1d2/f)] / [(−1/f)(iπw1²/λ) + (1 − d1/f)]

Alternatively,

1/q2 = [(−1/f)(iπw1²/λ) + (1 − d1/f)] / [(1 − d2/f)(iπw1²/λ) + (d1 + d2 − d1d2/f)] = 1/R2 − iλ/πw2²    (1.26)

Equating real and imaginary parts, we obtain

1/R2 = {(f − d1)[(d1 + d2)f − d1d2] − (f − d2)(πw1²/λ)²} / {[(d1 + d2)f − d1d2]² + (f − d2)²(πw1²/λ)²}    (1.27a)

λ/πw2² = (πw1²/λ) f² / {[(d1 + d2)f − d1d2]² + (f − d2)²(πw1²/λ)²}    (1.27b)

We now consider that the beam waist lies at the distance d2, implying that q2 is purely imaginary, and hence R2 = ∞. This occurs when

(f − d1)[(d1 + d2)f − d1d2] − (f − d2)(πw1²/λ)² = 0

In order to determine the location of the beam waist, we solve this equation for d2 as follows:

d2 = {f + [d1 f(d1 − f)/z1²]} / {1 + [(d1 − f)/z1]²}    (1.28a)

where z1 = πw1²/λ.

From Equation 1.27b, we obtain

w2²/w1² = (f/z1)² / {1 + [(d1 − f)/z1]²} = (d2 − f)/(d1 − f)    (1.28b)

As a special case, when the beam waist w1 lies at the lens plane, that is, d1 = 0, Equations 1.28a and 1.28b can be expressed as

d2 = f/[1 + (f/z1)²]    (1.29a)

w2/w1 = (f/z1)/[1 + (f/z1)²]^(1/2)    (1.29b)
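The following sketch evaluates Equations 1.28a and 1.28b for an illustrative input beam and lens, and checks the special case of Equations 1.29a and 1.29b; all numerical values are assumptions chosen only for the example.

import numpy as np

wavelength = 1.064e-6                  # illustrative wavelength, m
w1 = 1.0e-3                            # incident beam waist, m
f = 0.1                                # lens focal length, m
z1 = np.pi * w1**2 / wavelength        # Rayleigh range of the incident beam

def waist_after_lens(d1):
    # Equations 1.28a and 1.28b: location and size of the transmitted waist
    d2 = (f + d1 * f * (d1 - f) / z1**2) / (1.0 + ((d1 - f) / z1)**2)
    w2 = w1 * (f / z1) / np.sqrt(1.0 + ((d1 - f) / z1)**2)
    return d2, w2

# General case: incident waist 0.5 m in front of the lens
d2, w2 = waist_after_lens(0.5)
print(f"d1 = 0.5 m: waist at d2 = {d2*1e3:.2f} mm, w2 = {w2*1e6:.2f} um")

# Special case d1 = 0 (waist at the lens), Equations 1.29a and 1.29b
d2, w2 = waist_after_lens(0.0)
print(f"d1 = 0: d2 = {d2*1e3:.2f} mm, w2/w1 = {w2/w1:.4f}")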


1.8.3 MODE MATCHING

We assume that the beam waists lie at distances d1 and d2 from the lens. At the beam waists, the complex parameters are purely imaginary, and hence q1 = iπw1²/λ and q2 = iπw2²/λ. Then Equation 1.26 can be rewritten as follows:

iπw2²/λ = [(1 − d2/f)(iπw1²/λ) + (d1 + d2 − d1d2/f)] / [(−1/f)(iπw1²/λ) + (1 − d1/f)]    (1.26a)

On equating real and imaginary parts, we obtain

(d1 − f)(d2 − f) = f² − fg²    (1.30a)

(d2 − f)/(d1 − f) = w2²/w1²    (1.30b)

where fg = πw1w2/λ is a constant defined in terms of the product of the waist sizes. The right-hand side of Equation 1.30b is always positive, and hence d1 − f and d2 − f must be either both positive or both negative. Hence, (d1 − f)(d2 − f) is always positive, and so f > fg. From Equations 1.30a and 1.30b, we can obtain the following two relations:

d1 = f ± (w1/w2)(f² − fg²)^(1/2)    (1.31a)

d2 = f ± (w2/w1)(f² − fg²)^(1/2)    (1.31b)

Propagation of a Gaussian beam in the presence of other optical elements can be studied in a similar way.
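A small sketch of Equations 1.31a and 1.31b, of the kind used when a given input waist w1 has to be matched to a required waist w2 (for example, the mode of a resonator); the waist sizes, wavelength, and focal length below are illustrative assumptions.

import numpy as np

wavelength = 632.8e-9     # illustrative wavelength, m
w1, w2 = 0.3e-3, 0.1e-3   # given input waist and required output waist, m
f = 0.25                  # trial lens focal length, m

fg = np.pi * w1 * w2 / wavelength      # the quantity fg defined in Section 1.8.3
if f <= fg:
    raise ValueError("Mode matching needs f > fg; choose a longer focal length.")

root = np.sqrt(f**2 - fg**2)
# Equations 1.31a and 1.31b; the +/- signs give two possible geometries (here the + sign)
d1 = f + (w1 / w2) * root
d2 = f + (w2 / w1) * root
print(f"fg = {fg*1e3:.1f} mm, d1 = {d1*1e3:.1f} mm, d2 = {d2*1e3:.1f} mm")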

1.9 NONDIFFRACTING BEAMS—BESSEL BEAMS

There is a class of beams that propagate over a certain distance without diffraction. These beams can be generated using either axicons or holograms.

Diffractionless solutions of the Helmholtz wave equation in the form of Bessel functions constitute Bessel beams. A Bessel beam propagates over a considerable distance without diffraction. It also has a "healing" property, in that it exhibits self-reconstruction after encountering an obstacle. A true Bessel beam, being unbounded, cannot be created. However, there are several practical ways to create a beam that is a close approximation to a true Bessel beam, including diffraction at an annular aperture, focusing by an axicon, and the use of diffractive elements (holograms). Figure 1.4 shows a schematic for producing a Bessel beam using an axicon. A Gaussian beam is incident on the axicon, which produces two plane waves in conical geometry. These plane waves interfere to compensate for the spread of the beam. A Bessel beam exists in the region of interference.


FIGURE 1.4 Generation of a Bessel beam with an axicon.

On the other hand, a hologram with transmittance function t(ρ, φ) can generate Bessel beams. The transmittance function is given by

t(ρ, φ) = A(φ) exp(−2πiρ/ρ0)   for ρ ≤ D
t(ρ, φ) = 0                    for ρ > D    (1.32)

where D is the size of the hologram. The diffraction pattern of the Bessel beam shows a very strong central spot surrounded by a large number of rings. The rings can be suppressed. However, the intensity in the central spot is weaker than that of a Gaussian beam of comparable size.

Only zero-order Bessel beams can be generated with an axicon, while a hologram can generate beams of several orders.

Because of their several interesting properties, Bessel beams find applications in several areas of optical research, such as optical trapping and micromanipulation.
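As an illustration of why the central spot of a Bessel beam is narrow yet carries comparatively little of the total power, the sketch below builds the transverse profile J0²(k_r r) by averaging plane waves whose transverse wavevectors lie on a cone (the axicon picture of Figure 1.4); the cone half-angle and wavelength are illustrative assumptions.

import numpy as np

wavelength = 632.8e-9           # illustrative wavelength, m
k = 2 * np.pi / wavelength
gamma = np.deg2rad(0.5)         # illustrative cone half-angle behind the axicon
kr = k * np.sin(gamma)          # transverse wavenumber of the conical wave

r = np.linspace(0.0, 200e-6, 2000)
phi = np.linspace(0.0, 2 * np.pi, 360, endpoint=False)

# Averaging plane waves over the cone of directions reproduces J0(kr * r)
field = np.mean(np.exp(1j * kr * np.outer(r, np.cos(phi))), axis=1)
intensity = np.abs(field)**2

r_spot = 2.405 / kr             # first zero of J0 marks the central-spot radius
weight = intensity * r          # radial weighting for power sums
frac = weight[r < r_spot].sum() / weight.sum()
print(f"central spot radius ~ {r_spot*1e6:.1f} um; "
      f"fraction of power (within 200 um) in the central lobe ~ {frac:.2f}")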

1.10 SINGULAR BEAMS

An optical beam possessing isolated zero-amplitude points with indeterminate phase and a helical phase structure is called a singular beam; such points are called singular points. Both the real and imaginary parts of the complex amplitude are simultaneously zero at a singular point. A helical phase front of the singular beam is described by the wave field exp(ilϕ), where l is the topological charge. The value of the topological charge of the vortex field determines the total phase that the wavefront accumulates in one complete rotation around the singular point. The sign of l determines the sense of helicity of the singular beam. The phase gradient of a wave field with phase singularity is nonconservative in nature. The line integral of the phase gradient over any closed path surrounding the point of singularity is nonzero, that is, ∮∇φ · dl = 2mπ. At an isolated point where a phase singularity occurs, the phase is uncertain and the amplitude is zero, thereby creating an intensity null at the point of singularity. The intensity profile of a phase-singular beam with unit topological charge (l = 1) is shown in Figure 1.5. It is now well established that such beams possess an orbital angular momentum equal to lℏ per photon, where ℏ is Planck's constant divided by 2π.


FIGURE 1.5 Intensity profile of a phase-singular beam with topological charge l = 1.(Courtesy of D. P. Ghai.)

According to terminology introduced by Nye and Berry, wavefront dislocation in a monochromatic wave can be divided into three main categories: (i) screw dislocation; (ii) edge dislocation; and (iii) mixed screw–edge dislocation. Screw dislocation, also called an optical vortex, is the most common type of phase defect. The equiphase surfaces of an optical field exhibiting screw dislocation are helicoids, or groups of helicoids, nested around the dislocation axis. One round trip on the continuous surface around the dislocation axis leads to the next coil with a pitch lλ, where l is the topological charge and λ is the wavelength of operation. Today, "screw-type wavefront dislocation" has become synonymous with "phase singularity." The laser TEM01* mode, also called the doughnut mode, is the most common example of a screw dislocation of topological charge ±1.

Properties of vortices have been studied in both linear and nonlinear regimes. It hasbeen shown that a unit-charge optical vortex is structurally stable. On the other hand,multiple-charge optical vortices are structurally unstable. Optical vortices with thesame topological charge rotate about each other on propagation, with the rotation ratedepending on the inverse square of the separation between the two vortices. In contrast,vortices with equal magnitude but opposite polarity of their topological charges driftaway from each other. Vortices with small cores, called vortex filaments, exhibit fluid-like rotation similar to vortices in liquids. An optical vortex propagating in a nonlinearmedium generates a soliton. The optical vortex soliton is a three-dimensional, robustspatial structure that propagates without changing its size. Optical vortex solitons areformed in a self-defocusing medium when the effects of diffraction are offset by therefractive index variations in the nonlinear medium.

One of the possible solutions of the wave equation, as suggested by Nye and Berry, is

E(r, φ, z, t) ∝ r^|l| exp(ilφ + ikz − iωt)   or   r^−|l| exp(ilφ + ikz − iωt)    (1.33)

where E(r, φ, z, t) is the complex amplitude of a monochromatic light wave with frequency ω and wavelength λ propagating along the z axis, and k = 2π/λ is the


propagation vector. Both solutions have azimuthal phase dependence. As the wave propagates, the equiphase surfaces (wavefront) trace a helicoid given by

lφ + kz = constant (1.34)

After one round trip of the wavefront around the z axis, there is a continuous transitioninto the next wavefront sheet, separated by lλ, which results in a continuous helicoidalwave surface. The sign of the topological charge is positive or negative, dependingupon whether the wave surface has right- or left-handed helicity.

The solutions given by Equation 1.33 do not describe a real wave, because ofthe radial dependence of the wave amplitude. Therefore, it is necessary to take anappropriate wave as the host beam. We may consider, for example, a phase-singularbeam embedded in a Gaussian beam. Using the paraxial approximation to the scalarwave equation,

(1/r) ∂/∂r (r ∂E/∂r) + (1/r²) ∂²E/∂φ² + 2ik ∂E/∂z = 0    (1.35)

we obtain the solution for a vortex embedded in a Gaussian envelope as

E(r, φ, z) = E0 (r/w0)^|l| {(kw0²/2) / [z² + (kw0²/2)²]^(1/2)}^(|l|+1)
             × exp{−r² (k²w0²/4) / [z² + (kw0²/2)²]} exp[iΦ(r, φ, z)]    (1.36)

where E0 is the real amplitude and w0 is the beam waist. The phase Φ is given by

Φ(r, φ, z) = (|l| + 1) tan⁻¹[z/(kw0²/2)] − kr²/(2z + k²w0⁴/2z) − lφ − kz    (1.37)

Optical vortices find many applications in interferometry, such as in the studyof fractally rough surfaces, beam collimation testing, optical vortex metrology, andmicroscopy. An optical vortex interferometer employing three-wave interferencecan be used for tilt and displacement measurement, wavefront reconstruction, 3Dscanning, and super-resolution microscopy. Spiral interferometry, where a spiralphase element is used as a spatial filter, removes the ambiguity between elevationand depression in the surface height of a test object. Lateral shear interferome-ters have been used to study gradients of phase-singular beams. It has been shown


that shearograms do not represent true phase gradients when vortices are present in the optical fields. However, lateral shear interferometry is a potential technique for the detection of both an isolated vortex and randomly distributed vortices in a speckle field. It is a simple, robust, and self-referencing technique, which is insensitive to vibrations. Unlike the existing interferometric techniques of vortex detection, it does not require any high-quality plane or spherical reference wavefront to form an interference pattern.

Vortex generation using interferometric methods is based on the superposition oflinearly varying phase distributions in the plane of observation that arise from interfer-ence of three, four, or more plane or spherical waves. Interferometric techniques havebeen used universally to study the vortex structure of singular beams in both linear andnonlinear media. Depending on whether a plane or a spherical wave interferes withthe singular beam, fork-type (bifurcation of fringes at the vortex points) or spiral-typefringes are observed in the interference pattern, as shown in Figure 1.6. Fork fringes,which are a signature of optical vortices, give information about both the magnitudeand the sign of the topological charge.

Important methods of vortex generation make use of wedge plates, spatial lightmodulators, laser-etched mirrors, and adaptive mirrors. Most common techniques useinterference of three or four plane waves for generating vortex arrays. Computer-generated holograms (CGHs) are now widely used for the generation of phase-singular beams. Spiral phase plates (SPPs) have been used for the generation ofphase singularities for millimeter waves. A random distribution of optical vortices isalso observed in the speckle field resulting from laser light scattered from a diffusesurface or transmitted through a multimode fiber. There is a finite possibility of thepresence of an optical vortex in any given patch of a laser speckle field. On average,there is one vortex per speckle. There is little probability of the presence of opticalvortices of topological charge greater than one in a speckle field.

The magnitude and sign of the topological charge of the vortex beam can bedetermined from the interference pattern of the beam with a plane or a sphericalreference wave. The interference of a vortex beam of finite curvature with a sphericalreference wave gives spiral fringes. The number of fringes originating from the center

FIGURE 1.6 Simulated interferograms between (a) a tilted plane wave and a singular beam; (b) a spherical wave and a singular beam of topological charge l = 1. (Courtesy of D. P. Ghai.)
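A fork-fringe interferogram of the kind shown in Figure 1.6a can be simulated in a few lines: the Python sketch below interferes a unit-charge vortex phase exp(ilφ) with a tilted plane wave; the grid size and tilt (10 fringes across the field) are arbitrary choices for the illustration.

import numpy as np

N = 512
x = np.linspace(-1.0, 1.0, N)
X, Y = np.meshgrid(x, x)

l = 1                                    # topological charge of the vortex
phi = np.arctan2(Y, X)                   # azimuthal angle about the singular point
vortex = np.exp(1j * l * phi)            # helical phase front exp(il*phi)

tilt = np.exp(1j * 2 * np.pi * 10 * X)   # tilted plane reference wave

interferogram = np.abs(vortex + tilt)**2
# 'interferogram' shows straight fringes that bifurcate (fork) at the centre;
# the number of extra fringes emerging at the fork equals |l|, and the fork
# opens the other way when the sign of l is reversed.
print(interferogram.shape, interferogram.max())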


equals the magnitude of the topological charge of the vortex beam. The circulation ofthe fringes depends upon the sign of the topological charge and the relation betweenthe radius of curvature of the vortex and the reference wave.

Referring to Equation 1.37, the transverse phase dependence of the vortex beam is given by

Φ(r, φ) = −kr²/2R(z) − lφ    (1.38)

where R(z) is the radius of curvature of the phase-singular beam. The condition for fringe maxima in the interference pattern is

cos[−kr²/2R(z) + kr²/2R(0) − lφ] = 1    (1.39)

that is,

kr²[R(z) − R(0)]/[2R(z)R(0)] − lφ = 2mπ    (1.40)

When the radius of curvature of the reference beam is larger than that of the vortexbeam, that is, R0 > R(z), clockwise rotation of fringes corresponds to a positivevalue of the topological charge and anticlockwise rotation to a negative value. ForR0 < R(z), clockwise rotation corresponds to a negative value of the topologicalcharge and anticlockwise rotation to a positive value. When R0 = R(z), instead ofspiral fringes, cross-type fringes are observed.

BIBLIOGRAPHY

1. W. T. Welford, Geometrical Optics, North-Holland, Amsterdam, 1962.
2. W. J. Smith, Modern Optical Engineering, McGraw-Hill, New York, 1966.
3. R. S. Longhurst, Geometrical and Physical Optics, Longmans, London, 1967.
4. F. A. Jenkins and H. E. White, Fundamentals of Optics, McGraw-Hill, New York, 1976.
5. A. E. Siegman, Lasers, University Science Books, California, 1986.
6. M. Born and E. Wolf, Principles of Optics, 7th edn, Cambridge University Press, Cambridge, 1999.
7. O. Svelto, Principles of Lasers, Plenum Press, New York, 1989.
8. R. S. Sirohi, Wave Optics and Applications, Orient Longmans, Hyderabad, 1993.
9. E. Hecht and A. R. Ganesan, Optics, 4th edn, Dorling Kindersley, Delhi, 2008.

ADDITIONAL READING

1. H. Kogelnik and T. Li, Laser beams and resonators, Proc. IEEE, 54, 1312–1392, 1966.
2. J. F. Nye and M. V. Berry, Dislocations in wave trains, Proc. R. Soc. London Ser. A, 336, 165–190, 1974.
3. J. M. Vaughan and D. V. Willetts, Interference properties of a light beam having a helical wave surface, Opt. Commun., 30, 263–267, 1979.
4. J. Durnin, Exact solutions for nondiffracting beams. I. The scalar theory, J. Opt. Soc. Am. A, 4, 651–654, 1987.
5. P. Coullet, L. Gil, and F. Rocca, Optical vortices, Opt. Commun., 73, 403–408, 1989.
6. V. Yu. Bazhenov, M. S. Soskin, and M. V. Vasnetsov, Screw dislocations in light wavefronts, J. Mod. Opt., 39, 985–990, 1992.
7. M. Harris, C. A. Hill, and J. M. Vaughan, Optical helices and spiral interference fringes, Opt. Commun., 106, 161–166, 1994.
8. M. Padgett and L. Allen, Light with a twist in its tail, Contemp. Phys., 41, 275–285, 2000.
9. P. Muys and E. Vandamme, Direct generation of Bessel beams, Appl. Opt., 41, 6375–6379, 2002.
10. V. Garcés-Chávez, D. McGloin, H. Melville, W. Sibbett, and K. Dholakia, Simultaneous micromanipulation in multiple planes using a self-reconstructing light beam, Nature, 419, 145–147, 2002.
11. M. A. Bandres, J. C. Gutiérrez-Vega, and S. Chávez-Cerda, Parabolic nondiffracting optical wave fields, Opt. Lett., 29, 44–46, 2004.
12. C. López-Mariscal, J. C. Gutiérrez-Vega, and S. Chávez-Cerda, Production of high-order Bessel beams with a Mach–Zehnder interferometer, Appl. Opt., 43, 5060–5063, 2004.
13. D. McGloin and K. Dholakia, Bessel beams: Diffraction in a new light, Contemp. Phys., 46, 15–28, 2005.
14. S. Fürhapter, A. Jesacher, S. Bernet, and M. Ritsch-Marte, Spiral interferometry, Opt. Lett., 30, 1953–1955, 2005.
15. A. Jesacher, S. Fürhapter, S. Bernet, and M. Ritsch-Marte, Spiral interferogram analysis, J. Opt. Soc. Am. A, 23, 1400–1408, 2006.
16. C. López-Mariscal and J. C. Gutiérrez-Vega, The generation of nondiffracting beams using inexpensive computer-generated holograms, Am. J. Phys., 75, 36–42, 2007.
17. D. P. Ghai, S. Vyas, P. Senthilkumaran, and R. S. Sirohi, Detection of phase singularity using a lateral shear interferometer, Opt. Lasers Eng., 46, 419–423, 2008.


2 Optical Interference

2.1 INTRODUCTION

Light waves are electromagnetic waves. They can be spherical, cylindrical, or plane, as has been explained in Chapter 1. By modulating wave properties such as amplitude, phase, and polarization, it is possible for light waves to carry information. In a homogeneous, isotropic medium, there is usually no interaction between the wave and the medium. However, at very high intensities, some nonlinear effects begin to come into play. Our interest for now is to study the superposition of two or more waves. If the waves are traveling in the same direction, they will have a very large region of superposition. In contrast, if the angle between their directions of propagation is large, the region of superposition will be small. As the angle between the two waves increases, the region of superposition continues to decrease, and reaches a minimum when they propagate at right angles to each other. As the angle is further increased, the region of superposition again increases, reaching a maximum when they are antipropagating.

Let us consider the superposition of two coherent monochromatic waves. The amplitude distribution of these waves can be written as follows:

E1(x, y, z; t) = E01(x, y, z) exp[i(ωt + k1 · r + δ1)] (2.1)

E2(x, y, z; t) = E02(x, y, z) exp[i(ωt + k2 · r + δ2)] (2.2)

The total amplitude is the sum of the amplitudes of the individual waves. This can be expressed as follows:

E(x, y, z; t) = E1(x, y, z; t) + E2(x, y, z; t) (2.3)

The intensity distribution at a point (x, y, z) can be obtained as follows:

I(x, y, z) = 〈E(x, y, z; t) · E(x, y, z; t)〉 = E01² + E02² + 2E01 · E02 cos[(k2 − k1) · r + φ]    (2.4)

Here (k2 − k1) · r + φ is the phase difference between the waves and φ = δ2 − δ1 isthe initial phase difference, which can be set equal to zero. It is seen that the intensityat a point depends on the phase difference between the waves. There does exist avarying intensity distribution in space, which is a consequence of superposition and


hence interference of waves. When taken on a plane, this intensity pattern is known as an interference pattern. In standard books on optics, the conditions for obtaining a stationary interference pattern are as follows:

1. The waves must be monochromatic and of the same frequency.
2. They must be coherent.
3. They should enclose a small angle between them.
4. They must have the same state of polarization.

These conditions were stated at a time when fast detectors were not available and realizing a monochromatic wave was extremely difficult. It is therefore worth commenting that interference does take place when two monochromatic waves of slightly different frequencies are superposed. However, this interference pattern is not stationary but moving: heterodyne interferometry is an example. In order to observe interference, the waves should have time-independent phases. In the light regime, therefore, these waves are derived from the same source, either by amplitude division or by wavefront division. This is an important condition for observing a stable interference pattern. It is not necessary that the waves enclose a small angle between them. Previously, when observations were made visually with the naked eye or under low magnification, the fringe width had to be large to be resolved. However, with the availability of high-resolution recording media, an interference pattern can be recorded even between oppositely traveling waves (with the enclosed angle approaching 180°). Further, interference between two linearly polarized waves is observed, albeit with reduced contrast, except when they are orthogonally polarized.

2.2 GENERATION OF COHERENT WAVES

There are only two methods that are currently used for obtaining two or more coherent beams from the parent beam: (1) division of wavefront and (2) division of amplitude.

2.2.1 INTERFERENCE BY DIVISION OF WAVEFRONT

Two or more portions of the parent wavefront are created and then superposed toobtain an interference pattern. Simple devices such as the Fresnel biprism, the Fres-nel bi-mirror, and the split lens sample the incident wavefront and superpose thesampled portions in space, where the interference pattern is observed. This is thewell-known two-beam interference. A simple method that has recently been usedemploys an aperture plate, and samples two or more portions of the wavefront. If thesampling aperture is very fine, diffraction will result in the superposition of waves atsome distance from the aperture plane. The Smartt interferometer and the Rayleighinterferometer are examples of interference by wavefront division. The use of mul-tiple apertures such as a grating gives rise to multiple-beam interference based onwavefront division.

2.2.2 INTERFERENCE BY DIVISION OF AMPLITUDE

A wave incident on a plane parallel or wedge plate is split into a transmitted and a reflected wave, which can be superposed by judicious application of mirrors, resulting


in various kinds of interferometers. Michelson and Mach–Zehnder interferometers are two well-known examples. All polarization-based interferometers use division of amplitude. The Fabry–Perot interferometer, the Fabry–Perot etalon, and the Lummer–Gehrcke plate are examples of multiple-beam interference by division of amplitude.

2.3 INTERFERENCE BETWEEN TWO PLANE MONOCHROMATIC WAVES

As seen from Equation 2.4, the intensity distribution in the interference pattern when two monochromatic plane waves are propagating in the directions of propagation vectors k1 and k2, respectively, is given by

I(x, y, z) = E01² + E02² + 2E01 · E02 cos[(k2 − k1) · r + φ]    (2.5)

This equation can also be written as

I(x, y, z) = I1 + I2 + 2E01 · E02 cos[(k2 − k1) · r + φ] (2.6)

where I1 and I2 are the intensities of the individual beams. It is thus seen that the resultant intensity varies in space as the phase changes. This variation in intensity is a consequence of the interference between the two beams. The intensity will be maximum wherever the argument of the cosine function takes values equal to 2mπ (m = 0, 1, 2, . . .) and will be minimum wherever the phase difference is (2m + 1)π. The loci of constant phase 2mπ or (2m + 1)π are bright or dark fringes, respectively. The phase difference between two consecutive bright fringes or dark fringes is always 2π. Following Michelson, we define the contrast of the fringes as follows:

C = (Imax − Imin)/(Imax + Imin) = 2E01 · E02/(I1 + I2)    (2.7)

It can be seen that there are no fringes when the contrast is zero. This can occur when the two light beams are orthogonally polarized. The fringes are still formed, albeit with low contrast, if the two beams are not in the same state of polarization. Assuming the beams to be in the same state of polarization, the contrast is given by

C = 2√(I1I2)/(I1 + I2) = 2√(I1/I2)/(1 + I1/I2)    (2.8)

Fringes of appreciable contrast are formed even if the beams differ considerably in intensity.

Assuming the plane waves to be confined to the (x, z) plane as shown in Figure 2.1, the phase difference is given by

δ = (k2 − k1) · r + φ = (2π/λ) 2x sin θ


FIGURE 2.1 Interference pattern between two plane waves.

where we have taken φ = 0. Here the beams are symmetric with respect to the z axis, each making an angle θ with the axis. It is thus seen that straight-line fringes are formed that run parallel to the z axis, with spacing x = λ/(2 sin θ). The fringes have a larger width for smaller enclosed angles.
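A direct numerical rendering of this fringe pattern (Equation 2.6 with equal intensities) is sketched below; the wavelength and half-angle are illustrative assumptions.

import numpy as np

wavelength = 632.8e-9                # illustrative wavelength, m
theta = np.deg2rad(0.5)              # half-angle between each beam and the z axis
x = np.linspace(-200e-6, 200e-6, 2000)

# Two unit-amplitude plane waves tilted by +/- theta about the z axis
delta = (2 * np.pi / wavelength) * 2 * x * np.sin(theta)   # phase difference
intensity = 2 * (1 + np.cos(delta))                        # Equation 2.6 with I1 = I2 = 1

spacing = wavelength / (2 * np.sin(theta))
print(f"fringe spacing = {spacing*1e6:.2f} um")            # ~36 um for these values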

2.3.1 YOUNG’S DOUBLE-SLIT EXPERIMENT

This is a basic experiment to explain the phenomenon of interference. Radiation from an extremely narrow slit illuminates a pair of slits situated on a plane a distance p from the plane of the first slit. The slits are parallel to each other. A cylindrical wave from the first slit is incident on the pair of slits, which thus sample the wave at two places. The incident wave is diffracted by each slit of the pair, and the interference pattern between the diffracted waves is observed at a plane a distance z from the plane of the pair of slits, as shown in Figure 2.2. The intensity distribution at an observation point P on the screen is given by

I(x, z) = 2I0s sinc²(kb sin θ/2)[1 + cos(ka sin θ)]    (2.9)

FIGURE 2.2 Young's double-slit experiment.


FIGURE 2.3 (a) Diffraction pattern of a single slit. (b) Interference pattern due to a double slit. (c) Interference pattern as observed in Young's double-slit experiment (with diffraction envelope shown).

where I0s sinc2(kb sin θ/2) is the intensity distribution on the observation plane dueto a single slit and ka sin θ is the phase difference between two beams diffracted bythe pair of slits. It can be seen that the intensity distribution in the fringe pattern isgoverned by the diffraction pattern of a single slit, and the fringe spacing is governedby the slit separation. This is shown in Figure 2.3.
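The envelope-times-fringes structure of Equation 2.9 is easy to reproduce numerically; in the sketch below, the slit width b, slit separation a, and wavelength are illustrative assumptions.

import numpy as np

wavelength = 550e-9          # illustrative wavelength, m
b = 50e-6                    # slit width, m
a = 250e-6                   # slit separation, m
k = 2 * np.pi / wavelength

theta = np.linspace(-0.02, 0.02, 4001)       # observation angle, rad

# np.sinc(x) = sin(pi x)/(pi x), so divide the argument by pi to get sin(u)/u
envelope = np.sinc(0.5 * k * b * np.sin(theta) / np.pi)**2   # single-slit diffraction
fringes = 1 + np.cos(k * a * np.sin(theta))                  # double-slit interference
intensity = 2 * envelope * fringes                           # Equation 2.9 with I0s = 1

z = 1.0                                                      # screen distance, m
print(f"fringe spacing on a screen 1 m away ~ {wavelength*z/a*1e3:.2f} mm")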

Let us now broaden the illuminating slit, thereby decreasing the region of coher-ence. The width of the region of coherence can be calculated by invoking the vanCittert–Zernike theorem.

Another interesting interferometer based on the division of wavefronts is theMichelson stellar interferometer, which was devised to measure the angular sizesof distant stars. A schematic of the Michelson stellar interferometer is shown inFigure 2.4.

The combination of mirrors M3 and M1 samples a portion of the wavefront froma distant source and directs it to one of the apertures, while the other mirror com-bination M4 and M2 samples another portion of the same wavefront and directs it

P

z

z

M3

M1

M2

M4

r2

r1

FIGURE 2.4 Michelson stellar interferometer for angular size measurement.


to the other aperture. Diffraction takes place at these apertures, and an interferencepattern is observed at the observation plane. In the Michelson stellar interferometer,this arrangement is placed in front of the lens of a telescope, and the interferencepattern is observed at the focal plane.

In this arrangement, the width of the interference fringes is governed by the sep-aration between the pair of apertures, and the intensity distribution is governed bydiffraction at the individual apertures. For a broad source, the visibility of the fringesdepends on the separation between the mirrors M3 and M4. This is also the case whena binary object is considered.

2.3.2 MICHELSON INTERFEROMETER

The Michelson interferometer consists of a pair of mirrors M1 and M2, a beam-splitter BS, and a compensating plate CP identical to the BS, as shown in Figure 2.5. Light from a source S is divided into two parts by amplitude division at the beam-splitter. The beams usually travel in orthogonal directions and are reflected back to the beam-splitter, where they recombine to form an interference pattern. Reflection at BS produces a virtual image M′2 of the mirror M2. The interference pattern observed is therefore the same as would be observed in an air layer bounded by M1 and M′2.

With a monochromatic source, there is no need to introduce the compensating plate CP. It is required, however, when a quasi-monochromatic source is used, to compensate for dispersion in glass. It is also interesting to note that nonlocalized

FIGURE 2.5 Michelson interferometer.


fringes are formed when a monochromatic point source is used. These fringes can be circular or straight-line, depending on the alignment.

With an extended monochromatic source, circular fringes localized at infinity are formed when M1 and M′2 are parallel. If M1 and M′2 enclose a small angle and are very close to each other (a thin air wedge), then fringes of equal thickness are formed that run parallel to the apex of the wedge and are localized in the wedge itself.

2.4 MULTIPLE-BEAM INTERFERENCE

Multiple-beam interference can be observed either with division of wavefront or with division of amplitude. There are, however, two important differences:

1. Beams of equal amplitudes participate in multiple-beam interference by division of wavefront, while the amplitude of successive beams continues to decrease when division of amplitude is used.

2. The number of beams participating in interference is finite owing to the finite size of the elements generating multiple beams by division of wavefront, and hence coherence effects due to the finite size of the source may also come into play. The number of beams participating in interference can be infinite when division of amplitude is employed.

Except for these differences, the theory of fringe formation is the same.

2.4.1 MULTIPLE-BEAM INTERFERENCE: DIVISION OF WAVEFRONT

An obvious example of the generation of multiple beams is a grating. Let us consider a grating that has N equispaced slits. Let the slit width be b and let the period of the slits be p. An incident beam is diffracted by each slit. We consider beams propagating in the direction θ, as shown in Figure 2.6.

The total amplitude at any point on the observation screen in the direction θ is obtained by the summation of the amplitudes from each slit along with the appropriate phase differences:

A = a + a e^(iδ) + a e^(2iδ) + · · · + a e^(i(N−1)δ)    (2.10)

FIGURE 2.6 Diffraction at a grating: multiple-beam interference by division of wavefront.


where δ = kp sin θ is the phase difference between two waves diffracted in the direction θ by two consecutive slits and a is the amplitude of the wave diffracted from any slit. The intensity distribution is obtained by summing this series of N terms as follows:

I = AA* = a² sin²(Nδ/2)/sin²(δ/2)    (2.11)

Substituting a² = I0s sinc²(kb sin θ/2), the intensity distribution due to a single slit, the intensity distribution in the interference pattern due to the grating is

I = I0s sinc²(kb sin θ/2) · sin²(Nδ/2)/sin²(δ/2)    (2.12)

The first term is due to diffraction at the single slit and the second is due to interference. For a Ronchi grating, 2b = p (the transparent and opaque parts are equal), and the intensity distribution becomes

I = (I0s/4) sin²(Nδ/2)/cos²(δ/2)

The principal maxima are formed wherever δ = 2mπ. Figure 2.7 shows the intensity distribution due to N = 10 and N = 20 slits. It can be seen that the positions of the principal maxima remain the same as would be obtained with two-beam interference, but their width narrows considerably. In addition to the principal maxima, there are secondary maxima in between two consecutive principal maxima. With an increasing number of slits, the intensity of the secondary maxima falls off.

FIGURE 2.7 Intensity distribution in the diffraction pattern of a grating with N = 10 and 20.
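The narrowing of the principal maxima with increasing N, as in Figure 2.7, can be reproduced directly from the interference factor of Equations 2.11 and 2.12; the short sketch below evaluates that factor, normalized to its peak value N².

import numpy as np

def interference_factor(delta, N):
    # sin^2(N*delta/2) / sin^2(delta/2); the small constant avoids division by zero
    return np.sin(0.5 * N * delta)**2 / (np.sin(0.5 * delta)**2 + 1e-15)

delta = np.linspace(0.01, 4 * np.pi - 0.01, 5000)
for N in (10, 20):
    curve = interference_factor(delta, N) / N**2    # normalize the principal maxima to ~1
    print(f"N = {N}: normalized peak = {curve.max():.2f}, "
          f"mean level between maxima = {curve.mean():.3f}")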


2.4.2 MULTIPLE-BEAM INTERFERENCE: DIVISION OF AMPLITUDE

Consider a plane parallel plate of a dielectric material of refractive index μ and thickness d in air, as shown in Figure 2.8. A plane monochromatic wave of amplitude a is incident at an angle θ1. A ray belonging to this wave, as shown in Figure 2.8, gives rise to a series of reflected and transmitted rays owing to multiple reflections. Let the complex reflection and transmission coefficients for the rays incident from the side of the surrounding medium on the two surfaces of the plate be r1, t1 and r2, t2, respectively, and let those for the rays incident from the plate side be r′1, t′1 and r′2, t′2, respectively.

The amplitudes of the waves reflected back into the first medium and transmitted are given in Figure 2.8. The phase difference between any two consecutive rays is given by

δ = (4π/λ) μd cos θ2    (2.13)

It is obvious that we can observe an interference pattern in both reflection and transmission. If the angle of incidence is very small and the plate is fairly large, an infinite number of beams take part in the interference.

2.4.2.1 Interference Pattern in Transmission

The resultant amplitude when all the beams in transmission are superposed is given by

At(δ) = a t1t′2 e^(−iδ/2) [1 + r′1r′2 e^(−iδ) + r′1²r′2² e^(−i2δ) + r′1³r′2³ e^(−i3δ) + · · ·]    (2.14)

The infinite series can be summed to yield

At(δ) = a t1t′2 e^(−iδ/2) / [1 − r′1r′2 e^(−iδ)]    (2.15)

FIGURE 2.8 Multiple-beam formation in a plane parallel plate.


The intensity distribution in transmission is then given by

I(δ) = Iin T1T2 / [1 + R1R2 − 2√(R1R2) cos(δ + ψ1 + ψ2)]    (2.16)

where T1 = |t1|², T2 = |t′2|², and R1 = |r′1|², R2 = |r′2|² are the transmittances and reflectances of the two surfaces of the plate; ψ1 and ψ2 are the phases acquired by the wave on reflection; and Iin = |a|² is the intensity of the incident wave. In the special case when the surrounding medium is the same on both sides of the plate (as here, where it is assumed that the plate is in air), R1 = R2 = R, T1 = T2 = T, and ψ1 + ψ2 = 0 or 2π. Therefore, the intensity distribution in the interference pattern in transmission takes the form

It(δ) = Iin T² / (1 + R² − 2R cos δ) = Iin T² / [(1 − R)² + 4R sin²(δ/2)]    (2.17)

This is the well-known Airy formula. The maximum intensity in the interference pattern occurs when sin(δ/2) = 0, and is given by Imax = Iin T²/(1 − R)². Similarly, the minimum intensity occurs when sin(δ/2) = 1, and is given by Imin = Iin T²/(1 + R)². The intensity distribution can now be expressed in terms of Imax as follows:

It(δ) = Imax / {1 + [4R/(1 − R)²] sin²(δ/2)}    (2.18)

Figure 2.9 shows a plot of the intensity distribution for two values of R. This distribution does not display any secondary maxima, in complete contrast with that obtained in multiple-beam interference by division of wavefront.

FIGURE 2.9 Intensity distribution in transmission due to multiple-beam interference, for R = 0.5 and R = 0.95.
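A sketch of the Airy distribution of Equation 2.18 for the two reflectances used in Figure 2.9 is given below, together with the finesse; the finesse formula F = π√R/(1 − R) is a standard result quoted here for orientation rather than derived in the text.

import numpy as np

delta = np.linspace(-np.pi, 7 * np.pi, 4000)

def airy_transmission(delta, R):
    # Equation 2.18, normalized to Imax = 1
    return 1.0 / (1.0 + (4.0 * R / (1.0 - R)**2) * np.sin(0.5 * delta)**2)

for R in (0.5, 0.95):
    I = airy_transmission(delta, R)
    finesse = np.pi * np.sqrt(R) / (1.0 - R)   # ratio of peak spacing to peak width
    print(f"R = {R}: minimum transmission = {I.min():.4f}, finesse ~ {finesse:.1f}")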


2.4.2.2 Interference Pattern in Reflection

The resultant amplitude in reflection is obtained by the summation of the amplitudes of all the reflected waves as follows:

Ar(δ) = a r1 + a t1t′1r′2 e^(−iδ) [1 + r′1r′2 e^(−iδ) + r′1²r′2² e^(−i2δ) + · · ·]

      = a r1 + a t1t′1r′2 e^(−iδ) / [1 − r′1r′2 e^(−iδ)]    (2.19)

Using the Stokes relations r1 = −r′1 and r1² + t1t′1 = 1, the above relation can be expressed as follows:

Ar(δ) = a [−r′1 + r′2 e^(−iδ)] / [1 − r′1r′2 e^(−iδ)]    (2.20)

The intensity distribution in the interference pattern in reflection, when the plate is in air, is thus given by

Ir(δ) = Iin 2R(1 − cos δ)/(1 + R² − 2R cos δ)

      = Iin [4R/(1 − R)²] sin²(δ/2) / {1 + [4R/(1 − R)²] sin²(δ/2)}    (2.21)

The intensity maxima occur when sin(δ/2) = 1. This distribution is complementary to the intensity distribution in transmission.

2.5 INTERFEROMETRY

Interferometry is a technique of measurement that employs the interference of lightwaves, and the devices using this technique are known as interferometers. These usean arrangement to generate two beams, one of which acts as a reference and the otheris a test beam. The test beam gathers information about the process to be measuredor monitored. These two beams are later combined to produce an interference patternthat arises from the acquired phase difference. Interferometers based on divisionof amplitude use beam-splitters for splitting the beam into two and later combiningthem for interference. Ingenuity lies in designing beam-splitters and beam-combiners.The interference pattern is either recorded on a photographic plate or sensed bya photodetector or array-detector device. Currently charge-coupled device (CCD)arrays are used along with phase-shifting to display the desired information such assurface profile, height variation, and refractive index variation. Observation of thefringe pattern is completely hidden in the process.

Two-beam interferometers using division of wavefront have been used for thedetermination of the refractive index (Rayleigh interferometer), determination of theangular size of distant objects (Michelson stellar interferometer), deformation studies


(speckle interferometers), and so on. Interferometers based on division of ampli-tude are used for the determination of wavelength (Michelson interferometer), thetesting of optical components (Twyman–Green interferometer, Zygo interferometer,Wyco interferometer, and shear interferometers), the study of microscopic objects(Mirau interferometer), the study of birefringent objects (polarization interferome-ters), distance measurement (modified Michelson interferometer), and so on. TheMichelson interferometer is also used in spectroscopy, particularly in the infraredregion, and offers Fellgett’s advantage. The Michelson–Morley experiment, whichused a Michelson interferometer to show the absence of the ether, played a great partin the advancement of science. Gratings are also used as beam-splitters and beam-combiners. Because of their dispersive nature, they are used in the construction ofachromatic interferometers.

Multiple-beam interferometers are essentially dispersive, and find applications inspectroscopy. They offer high sensitivity.

2.5.1 DUAL-WAVELENGTH INTERFEROMETRY

The interferometer is illuminated simultaneously with two monochromatic waves ofwavelengths λ1 and λ2. Each wave produces its own interference pattern. The twointerference patterns produce a moiré, which is characterized by a synthetic wave-length λs. If the path difference variation is large, thereby producing an interferencepattern with narrow fringes, then only the pattern due to the synthetic wavelength willbe visible. Under the assumption that the two wavelengths are very close to each otherand the waves are of equal amplitude, the intensity distribution can be expressed asfollows:

Itw = I0(a + b cos δ1 + b cos δ2)

    = I0{a + 2b cos[(δ1 + δ2)/2] cos[(δ1 − δ2)/2]}    (2.22)

where a and b are constants and the phase differences δ1 and δ2 are given by

δ1 = (2π/λ1)Δ,    δ2 = (2π/λ2)Δ

Δ is the optical path difference and is assumed to be constant for the two wavelengthsunder consideration. The second term in the intensity distribution has a modulationterm; the argument of the cosine function can be expressed as

(δ1 − δ2)/2 = π(1/λ1 − 1/λ2)Δ = (π/λs)Δ    (2.23)

The synthetic wavelength λs is thus given by

λs = λ1λ2/|λ1 − λ2|    (2.24)


Dual-wavelength interferometry has been used to measure large distances with high accuracy. Phase-shifting can be incorporated in dual-wavelength interferometry. There are several other techniques, such as dual-wavelength sinusoidal phase-modulating interferometry, dual-wavelength phase-locked interferometry, and dual-wavelength heterodyne interferometry.
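A one-line consequence of Equation 2.24 is how rapidly the synthetic wavelength grows as the two wavelengths approach each other; the two wavelengths below are illustrative assumptions, not values from the text.

# Synthetic wavelength of the moire between the two single-wavelength patterns (Equation 2.24)
wavelength_1 = 632.8e-9    # illustrative first wavelength, m
wavelength_2 = 640.0e-9    # illustrative second wavelength, m

synthetic = wavelength_1 * wavelength_2 / abs(wavelength_1 - wavelength_2)
print(f"synthetic wavelength = {synthetic*1e6:.1f} um")   # ~56 um for these values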

2.5.2 WHITE LIGHT INTERFEROMETRY

Each wavelength in white light produces an interference pattern, which adds up onan intensity basis at any point on the observation plane. The fringe width of thesepatterns is different. However, at a point on the observation plane where the twointerfering beams corresponding to each wavelength have zero phase difference, theinterference results in a bright fringe. At this point, all wavelengths in white lightwill interfere constructively, thereby producing a white fringe. As one moves awayfrom this point, thereby increasing the path difference, an interference color fringecorresponding to the shortest wavelength is seen first, followed by color fringes ofincreasing wavelengths. With increasing path difference, these colors become lessand less saturated, and finally the interference pattern is lost. Figure 2.10 shows theintensity distribution in an interference pattern of white light.

White light interferometry has been used to measure thicknesses and air-gapsbetween two dielectric interfaces. The problem of 2π phase ambiguity can also beovercome by using white light interferometry and scanning the object in depth whenthe object profile is to be obtained. Instead of scanning the object, spectrally resolvedinterferometry can be used for profiling.

2.5.3 HETERODYNE INTERFEROMETRY

Let us consider the interference between two waves of slightly different frequen-cies traveling along the same direction. (In dual-wavelength interferometry, the twowavelengths are such that their frequency difference is in the terahertz region, whichthe optical detector cannot follow, while in heterodyne interferometry, the frequency

FIGURE 2.10 White light interferogram.


difference is in the megahertz range, which the detector can follow.) The two real waves can be described by

E1(t) = a1 cos(2πν1t + δ1)

E2(t) = a2 cos(2πν2t + δ2)

where ν1, ν2 are the frequencies and δ1, δ2 are the phases of the two waves. These two waves are superposed on a square-law detector. The resultant amplitude on the detector is thus the sum of the two; that is,

E(t) = E1(t) + E2(t)

The output current i(t) of the detector is proportional to |E(t)|2. Hence,

i(t) ∝ a1² cos²(2πν1t + δ1) + a2² cos²(2πν2t + δ2) + 2a1a2 cos(2πν1t + δ1) cos(2πν2t + δ2)

     = (1/2)(a1² + a2²) + (1/2){a1² cos[2(2πν1t + δ1)] + a2² cos[2(2πν2t + δ2)]}
       + a1a2 cos[2π(ν1 + ν2)t + δ1 + δ2] + a1a2 cos[2π(ν1 − ν2)t + δ1 − δ2]    (2.25)

The expression for the output current contains oscillatory components at frequencies2ν1, 2ν2 , and ν1 + ν2 that are too high for the detector to follow, and thus the termscontaining these frequencies are averaged to zero. The output current is then given by

i(t) = (1/2)(a1² + a2²) + a1a2 cos[2π(ν1 − ν2)t + δ1 − δ2]    (2.26)

The output current thus consists of a DC component and an oscillating component at the beat frequency ν1 − ν2. The beat frequency usually lies in the radiofrequency range, and hence can be observed. Thus, the superposition of two waves of slightly different frequencies on a square-law detector results in a moving interference pattern; the number of fringes passing any point on the detector in unit time is equal to the beat frequency. Obviously, the size of the detector must be much smaller than the fringe width for the beat frequency to be observed. The phase of the moving interference pattern is determined by the phase difference between the two waves. The phase can be measured electronically with respect to a reference signal derived either from a second detector or from the source driving the modulator. Heterodyne interferometry is usually performed with a frequency-stabilized single-mode laser and offers extremely high sensitivity at the cost of system complexity.
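Equation 2.26 can be checked numerically by building the two waves, squaring their sum (the square-law detector), and low-pass filtering; the frequencies below are scaled down from optical values to keep the example fast and are purely illustrative.

import numpy as np

f1, f2 = 1.00e6, 1.05e6            # illustrative stand-in frequencies, Hz
a1, a2 = 1.0, 0.7
fs = 2.0e7                         # sampling rate, Hz
t = np.arange(0, 2e-4, 1.0 / fs)   # 200 us record

E = a1 * np.cos(2 * np.pi * f1 * t) + a2 * np.cos(2 * np.pi * f2 * t)
current = E**2                     # square-law detection

# Crude low-pass filter (moving average over ~2 periods of f1) removes the 2*f and f1+f2 terms
window = int(fs / f1) * 2
smooth = np.convolve(current, np.ones(window) / window, mode="same")

# What survives is the DC term plus the beat at f1 - f2 = 50 kHz (Equation 2.26)
dc = 0.5 * (a1**2 + a2**2)
print(f"expected DC level = {dc:.3f}, simulated mean = {smooth.mean():.3f}")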

2.5.4 SHEAR INTERFEROMETRY

In shear interferometry, interference between a wave and its sheared version is observed. A sheared wave can be obtained by lateral shear, rotational shear, radial


(a) (b)

FIGURE 2.11 Multiple-beam shear interferograms: (a) at collimation; (b) off collimation.

shear, folding shear, or inversion shear. Shear interferometers have certain advantages.They do not require a reference wave for comparison. Both waves travel along thesame path, and thus the interferometer is less susceptible to vibrations and thermalfluctuations. However, the interpretation of shear fringes is more tedious than that offringes obtained with a Michelson or Twyman–Green interferometer. Fringes repre-sent the gradient of path difference in the direction of shear. Fringe interferometry hasfound applications where one is interested in slope and curvature measurements ratherthan in displacement alone. Multiple-beam shear interferometry using a coated shearplate has been used for collimation testing. It can be shown that strong satellite fringesare seen in transmission when the beam is not collimated, as shown in Figure 2.11.

2.5.5 POLARIZATION INTERFEROMETERS

These are basically used to study birefringent specimens. A polarizer is used to polarize the incident collimated beam, which passes through a birefringent material. The polarized beam then splits into ordinary and extraordinary beams, which travel through the same object but cannot interfere, because they are in orthogonal polarization states. These beams are brought to interfere by another polarizer. Some of these interferometers are described in Chapter 8.

There is another class of polarization interferometers, which use beam-splitters and beam-combiners such as a Savart plate or a Wollaston prism made of birefringent material. Since the two beams generated in the beam-splitter have a very small separation, these constitute shear interferometers. Figure 2.12 shows a polarization shear interferometer using Savart plates as a beam-splitter (S1) and a beam-combiner (S2). These are two identical plates of birefringent material cut from a uniaxial crystal, either calcite or quartz, with the optic axis at approximately 45° to the entrance and exit faces, and put together with their principal sections crossed. A polarizer P1 is placed before the Savart plate and oriented so as to produce a ray linearly polarized at 45°. A ray from the polarizer P1 is split into an e-ray and an o-ray in the first plate.



FIGURE 2.12 Polarization interferometer using Savart plates.

Since the principal sections of the two plates are orthogonal, the e-ray in the first plate becomes an o-ray in the second plate, while the o-ray in the first plate becomes an e-ray in the second plate. These rays emerge parallel to each other but have a linear shear (displacement) given by

Δs = √2 d |ne² − no²| / (ne² + no²)     (2.27)

where d is the thickness of the plate and ne and no are the extraordinary and ordinary refractive indices of the uniaxial crystal. These two rays are combined using another Savart plate. Since the two rays are orthogonally polarized, another polarizer P2 is used to make them interfere. The interference pattern consists of equally spaced straight fringes localized at infinity. The object is placed in the beam, and its refractive index and thickness variations introduce phase variations, which distort the fringes.
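A quick numerical estimate of the shear given by Equation 2.27 can be made for a calcite Savart plate; the index values and plate thickness used below are only representative.

```python
import numpy as np

def savart_shear(d, n_o, n_e):
    """Lateral shear of a Savart plate of plate thickness d (Equation 2.27)."""
    return np.sqrt(2.0) * d * abs(n_e**2 - n_o**2) / (n_e**2 + n_o**2)

# Representative values for calcite near 589 nm (approximate)
n_o, n_e = 1.658, 1.486
d = 5.0e-3                       # plate thickness [m] (assumed)
print(f"shear = {savart_shear(d, n_o, n_e) * 1e3:.2f} mm")   # about 0.77 mm
```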

A Savart plate introduces linear shear in a collimated beam. Usually, the incident beam has a small divergence. Savart plates can be modified to accept wider fields. However, with a spherical beam, linear shear can be introduced by a Wollaston prism.

2.5.6 INTERFERENCE MICROSCOPY

In order to study small objects, it is necessary to magnify them. The response of micromechanical components to an external agency can be studied under magnification. Therefore, microinterferometers, particularly Michelson interferometers, have been built that can be placed in front of a microscope objective. Obviously, one requires microscope objectives with long working distances. The Mirau interferometer employs a very compact optical system that can be incorporated in a microscope objective. It is more compact than the micro-Michelson interferometer and fully compensated. Figure 2.13 shows a schematic view of the Mirau interferometer. Light from the illuminator through the microscope objective illuminates the object. The beam from the illuminator is split into two parts by the beam-splitter. One part illuminates the object, while the other is directed toward a reference surface. On reflection from the reference surface as well as from the object, the beams combine at the beam-splitter, which directs these beams to the microscope objective. An interference pattern is thus observed between the reference beam and the object beam. The pattern represents the path variations on the surface of the object. White light can be used with a Mirau interferometer. Phase-shifting is introduced by mounting the beam-splitter on PZT (lead zirconate titanate).



FIGURE 2.13 The Mirau interferometer.

Shearing interferometry can also be carried out under high magnification. The Nomarski interferometer is commonly used within a microscope, as shown in Figure 2.14. It uses two modified Wollaston prisms as a beam-splitter and a beam-combiner. The modified Wollaston prism offers the wider field of view that is required in microscopy. The angular shear γ between the two rays exiting from the Wollaston prism is given by

γ = 2 |(ne − no)| sin θ (2.28)

where θ is the angle of either wedge forming the Wollaston prism. The Nomarski interferometer can be employed in two distinctly different modes. With an isolated microscopic object, it is convenient to use a lateral shear that is larger than the dimensions of the object. Two images of the object are then seen, covered with fringes that contour the phase changes due to the object. Often, the Nomarski interferometer is used with a shear smaller than the dimensions of a microscopic object. The interference pattern then shows the phase gradients. This mode of operation is known as differential interference contrast (DIC) microscopy.


FIGURE 2.14 The Nomarski interferometer.


The Nomarski interferometer can also be used with white light; the phase changes are then decoded as color changes. Biological objects, usually phase objects, can also be studied using such microscopes.

2.5.7 DOPPLER INTERFEROMETRY

An intense beam from a laser is expanded by a beam-expander and is directed onto a moving object. The light reflected/scattered from the moving object is redirected and then collimated before being fed into a Michelson interferometer, as shown in Figure 2.15. The arms of the interferometer contain two 4f arrangements, but the focal lengths of the lenses in the two arms are different. This introduces a net path difference.

The light from a moving object is Doppler-shifted. The Doppler-shifted light illuminates the interferometer. For a retro-reflected beam, the Doppler shift Δλ is given by

Δλ = (2v/c)λ

where v is the velocity of the object and c is the velocity of light. For objects moving with a speed of about 1 km/s, the Doppler shift can be in tens of gigahertz, which is difficult to follow; to overcome this problem and reveal the time history of an object's motion, Doppler interferometers have been developed.

If the object is moving with constant velocity, the wavelength of the incident radiation remains constant, and hence the path difference between the two beams in the interferometer is constant with time; the interference pattern remains stationary. However, if the object is accelerating, there is a change in phase difference, which is given by

dδ = −(2π/λ²) Δ dλ = −δ(2v/c)     (2.29)


FIGURE 2.15 Michelson interferometer for observing the velocity history of a projectile.



FIGURE 2.16 Variation of intensity with time in a Doppler interferometer.

The change in phase is directly proportional to the path difference Δ, which depends on the difference in the arm lengths of the interferometer. Further, the phase change is directly proportional to the speed of the object. If the interferometer is aligned for a single broad fringe in the field, the change of phase will result in a variation of intensity. Therefore, a detector will show a variation of intensity with time. The interference is governed by the equation dδ = 2mπ, where m is the order of the fringe. This gives m = (2v/λc)Δ. A full cycle change (m = 1) occurs when the velocity changes by λc/2Δ. A larger path difference between the two arms thus gives higher sensitivity. Figure 2.16 shows an example of the variation of intensity with time. Usually, an event may occur over only a fraction of a second. Therefore, a Doppler shift of tens of gigahertz is converted into tens of hertz.
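The velocity-per-fringe relation is easily evaluated numerically. In the sketch below, the wavelength and the path difference between the arms are assumed values chosen only for illustration.

```python
c = 3.0e8           # speed of light [m/s]
lam = 532e-9        # laser wavelength [m] (assumed)
delta = 2.0         # path difference between the two arms [m] (assumed)

# One full fringe (m = 1) corresponds to a velocity change of lam*c/(2*delta)
v_per_fringe = lam * c / (2.0 * delta)
print(f"velocity change per fringe: {v_per_fringe:.1f} m/s")    # ~39.9 m/s

# Fringe count m = (2v/lam*c)*delta, i.e. m = 2*v*delta/(lam*c), for v = 1 km/s
v = 1000.0
m = 2.0 * v * delta / (lam * c)
print(f"fringe count at v = {v:.0f} m/s: {m:.1f}")              # ~25.1
```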

2.5.8 FIBER-OPTIC INTERFEROMETERS

Two-beam interferometers can be easily constructed using optical fibers. Michelson and Fizeau interferometers have been realized. Electronic speckle pattern interferometry and its shear variant have been performed using optical fibers. A laser Doppler velocimetry setup can also be easily assembled.

The building blocks of fiber-optic interferometers are single-mode fibers, birefringent fibers, fiber couplers, fiber polarization elements, and micro-optics. Single-mode fibers allow only a single mode to propagate, and therefore a smooth wavefront is obtained at the output end. Very long path differences are achieved with fiber interferometers, and therefore they are inherently suited for certain measurements requiring high sensitivity. The fibers are compatible with optoelectronic devices such as semiconductor lasers and detectors. These interferometers have found applications as sensors with distributed sensing elements.

Figure 2.17 shows a fiber-optic Doppler velocimeter arrangement for measuring fluid velocities. Radiation from a semiconductor laser is coupled to a single-mode fiber. Part of the radiation is extracted into a second fiber by a directional coupler, thereby creating two beams. Graded-index rod lenses are attached to the fiber ends to obtain collimated beams. The fiber ends can be separated by a desired amount, and a large lens focuses the beams in the sample volume. An interference pattern is thus generated there. The beams can be aligned such that the interference fringes in the sample volume run perpendicular to the direction of flow.



FIGURE 2.17 Fiber-optic Doppler velocimetry.

Alternatively, these beams can be separated by any amount in space by simply displacing the fiber ends. The fiber ends can be imaged in the sample volume by individual lenses, and can be superposed by manipulating them. An interference pattern can thus be created in a small volume, with the fringes running normal to the direction of flow. The scattered light from the interference volume is collected and sent to a photodiode through a multimode fiber. The Doppler signal from the photodiode is analyzed by an RF spectrum analyzer. A frequency offset for direction determination can be introduced between the beams by using a piezoelectric phase-shifter, driven by a sawtooth waveform, in one arm.

2.5.9 PHASE-CONJUGATION INTERFEROMETERS

Interference between a signal wave or a test wave and its phase-conjugated version is observed in phase-conjugation interferometers. The phase-conjugated wave is realized by four-wave mixing in a nonlinear crystal such as BaTiO3. The phase-conjugated wave carries phase variations of opposite signs and travels in the opposite direction to the signal or test wave. For example, a diverging wave emanating from a point source, when phase-conjugated, will become a converging wave and will converge to the same point source.



FIGURE 2.18 Phase-conjugated Michelson interferometer for collimation testing.



FIGURE 2.19 Interferograms (a) inside, (b) at, and (c) outside the collimation position.

Since interference between the test wave and the phase-conjugated wave is observed in phase-conjugation interferometers, there is no need to have a reference wave, and the interference pattern exhibits twice the sensitivity. Phase-conjugate interferometers are robust and are less influenced by thermal and vibration effects than other interferometers.

Michelson, Twyman–Green, and Mach–Zehnder interferometers have been realized using phase-conjugate mirrors. We describe here an interferometer that is used for testing collimation. Figure 2.18 shows a schematic view of the interferometer. M1 is a double mirror enclosing an angle close to 180°. A BaTiO3 crystal is used for generating a phase-conjugated beam by four-wave mixing. At collimation, straight-line fringes are observed in both fields, while with a converging or diverging wave from the collimator, curved fringes with curvatures reversed in each field are observed. These interferograms are shown in Figure 2.19. The technique is self-referencing and allows sensitivity to be increased fourfold.


3 Diffraction

In an isotropic medium, a monochromatic wave propagates with its characteristic speed. For plane and spherical waves, if the phase front is known at time t = t1, its location at a later time is obtained easily by multiplying the elapsed time by the velocity in the medium. The wave remains either plane or spherical. Mathematically, we can find the amplitude at any point by solving the Helmholtz equation (Equation 1.3). However, when the wavefront is restricted in its lateral dimension, diffraction of light takes place. Diffraction problems are rather complex in nature, and analytical solutions exist for only a few of them. The Kirchhoff theory of diffraction, although afflicted with inconsistency in the boundary conditions, yields predictions that are in close agreement with experiments.

The term "diffraction" has been conveniently described by Sommerfeld as "any deviation of light rays from rectilinear paths that cannot be interpreted as reflection or refraction." It is often stated in a different form as "bending of rays near the corners." Grimaldi first observed the presence of bands in the geometrical shadow region of an object. A satisfactory explanation of this observation was given by Huygens by introducing the concept of secondary sources on the wavefront at any instant and obtaining the subsequent wavefront as an envelope of the secondary waves. Fresnel improved upon the ideas of Huygens, and the theory is known as the Huygens–Fresnel theory of diffraction.

This theory was placed on firmer mathematical ground by Kirchhoff, who developed his mathematical theory using two assumptions about the boundary values of the light incident on the surface of an obstacle placed in its path of propagation. This theory is known as Fresnel–Kirchhoff diffraction theory. It was found that the two assumptions of Fresnel–Kirchhoff theory are mutually inconsistent. The theory, however, yields results that are in surprisingly good agreement with experiments in most situations. Sommerfeld modified Kirchhoff's theory by eliminating one of the assumptions, and the resulting theory is known as Rayleigh–Sommerfeld diffraction theory. Needless to say, there have been subsequent workers who have introduced several refinements to the diffraction theories.

3.1 FRESNEL DIFFRACTION

Let u(xo, yo) be the field distribution at an aperture lying on the (xo, yo) plane located at z = 0.




FIGURE 3.1 Diffraction at an aperture.

Under small-angle diffraction, the field at any point P(x, y) (assumed to be near the optical axis) on a plane a distance z from the obstacle or aperture plane can be expressed using Fresnel–Kirchhoff diffraction theory (Figure 3.1) as follows:

u(x, y) = (1/iλz) ∫∫ u(xo, yo) exp(ikr) dxo dyo     (3.1)

where the integral is over the area S of the aperture, and k = 2π/λ is the propagation vector of the light. We can expand r in the phase term in a Taylor series as follows:

r = z + [(x − xo)² + (y − yo)²]/2z − [(x − xo)² + (y − yo)²]²/8z³ + · · ·     (3.2)

If we impose the condition that

[(x − xo)² + (y − yo)²]²/8z³ ≪ λ     (3.3)

then we are in the Fresnel diffraction region. The Fresnel region may extend from very near the aperture to infinity. Condition 3.3 implies that the path variation at the point P(x, y), as the point S(xo, yo) spans the whole area S, is much less than one wavelength. The amplitude u(x, y) in the Fresnel region is given by

u(x, y) = [exp(ikz)/iλz] ∫∫ u(xo, yo) exp{(ik/2z)[(x − xo)² + (y − yo)²]} dxo dyo     (3.4)

Since we are invariably in the Fresnel region in most situations in optics, Equation 3.4 will be used frequently.
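Equation 3.4 is a convolution of the aperture field with a quadratic-phase kernel and can therefore be evaluated with fast Fourier transforms. The following sketch propagates a one-dimensional slit aperture using the transfer-function form of the Fresnel integral; the slit width, wavelength, and distance are arbitrary choices.

```python
import numpy as np

lam, z = 633e-9, 0.5            # wavelength and propagation distance [m] (assumed)
N, L = 1024, 20e-3              # number of samples and window width [m]
dx = L / N

x = (np.arange(N) - N // 2) * dx
u0 = (np.abs(x) <= 0.5e-3).astype(complex)      # 1-mm-wide slit aperture

# Transfer-function (convolution) form of the Fresnel integral, Equation 3.4
fx = np.fft.fftfreq(N, d=dx)
H = np.exp(1j * 2 * np.pi * z / lam) * np.exp(-1j * np.pi * lam * z * fx**2)
uz = np.fft.ifft(np.fft.fft(np.fft.ifftshift(u0)) * H)
Iz = np.abs(np.fft.fftshift(uz))**2             # Fresnel pattern of the slit at distance z

print("maximum and on-axis intensity:", Iz.max().round(3), Iz[N // 2].round(3))
```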

3.2 FRAUNHOFER DIFFRACTION

A more stringent constraint may be placed on the distance z and/or on the size of the obstacle or aperture by setting

(xo² + yo²)/2z ≪ λ     (3.5)


This is the far-field condition, and when this condition is met, we are in the Fraunhofer region. The amplitude u(x, y) is then given by

u(x, y) = [exp(ikz)/iλz] exp[(ik/2z)(x² + y²)] ∫∫ u(xo, yo) exp[−(ik/z)(xxo + yyo)] dxo dyo     (3.6)

Let us consider the illumination of the aperture by a plane wave of amplitude A. The transmittance function t(xo, yo) of the aperture may be defined as

t(xo, yo) = u(xo, yo)/A

Therefore, the field u(x, y) may be expressed as

u(x, y) = [A exp(ikz)/iλz] exp[(ik/2z)(x² + y²)] ∫∫ t(xo, yo) exp[−(ik/z)(xxo + yyo)] dxo dyo

        = [A exp(ikz)/iλz] exp[(ik/2z)(x² + y²)] ∫∫ t(xo, yo) exp[−2πi(μxo + νyo)] dxo dyo     (3.7)

The limits of integration have been changed to ±∞, since t(xo, yo) is nonzero within the opening and zero outside. It can be seen that u(x, y) can be expressed as the Fourier transform of the transmittance function. The Fourier transform is evaluated at the spatial frequencies μ = x/λz and ν = y/λz.
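Since Equation 3.7 identifies the far-field amplitude with a Fourier transform evaluated at μ = x/λz and ν = y/λz, a Fraunhofer pattern can be computed with a single FFT. The sketch below does this for a square opening; the aperture size and observation distance are assumed values.

```python
import numpy as np

lam, z = 633e-9, 1.0                 # wavelength and observation distance [m] (assumed)
N, L = 512, 4e-3                     # samples and aperture-plane window width [m]
dx = L / N

xo = (np.arange(N) - N // 2) * dx
Xo, Yo = np.meshgrid(xo, xo)
a = 0.5e-3                           # side of a square opening [m] (assumed)
t = ((np.abs(Xo) <= a / 2) & (np.abs(Yo) <= a / 2)).astype(float)

# Far-field amplitude = Fourier transform of t(xo, yo), Equation 3.7;
# the spatial frequency mu maps to the observation plane through x = lam*z*mu.
U = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(t)))
mu = np.fft.fftshift(np.fft.fftfreq(N, d=dx))
x_obs = lam * z * mu
I = np.abs(U)**2 / np.abs(U).max()**2

# The first zero of the central section should fall near x = lam*z/a
row = I[N // 2]
izero = N // 2 + 1 + int(np.argmin(row[N // 2 + 1: N // 2 + 13]))
print(f"first zero: {x_obs[izero] * 1e3:.2f} mm (expected about {lam * z / a * 1e3:.2f} mm)")
```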

3.3 ACTION OF A LENS

A plane wave incident on a lens is transformed to a converging/diverging spherical wave; a converging spherical wave will come to a focus. Therefore, a lens may be described by a transmittance function τ(xL, yL), which is represented as

τ(xL, yL) = exp(iφ0) exp[±(ik/2f)(xL² + yL²)]   for xL² + yL² ≤ R²     (3.8)

where φ0 is a constant phase, (k/2f)(xL² + yL²) is a quadratic approximation to the phase of a spherical wave, and 2R is the diameter of the lens aperture. The minus sign corresponds to a positive lens. The lens is assumed to be perfectly transparent; that is, it does not introduce any attenuation. If a mask is placed in front of the lens, its transmittance function is obtained by multiplying the transmittance of the mask by that of the lens.

3.4 IMAGE FORMATION AND FOURIER TRANSFORMATION BY A LENS

We consider the geometry shown in Figure 3.2. A transparency of amplitude transmittance t(xo, yo) is illuminated by a diverging spherical wave emanating from a point source So.



FIGURE 3.2 Imaging and Fourier transformation by a lens.

We wish to obtain the amplitude distribution at a plane a distance d2 from the lens. This is done by invoking Fresnel–Kirchhoff diffraction theory twice in succession: first at the transparency plane and then at the lens plane. The amplitude at any point P(x, y) on the observation plane is expressed as

u(x, y) = −[A exp(ik(do + d1 + d2))/λ²dod1d2] exp[ik(x² + y²)/2d2]
          × ∫∫∫∫ p(xL, yL) exp[(ik(xL² + yL²)/2)(1/d1 + 1/d2 − 1/f)]
          × t(xo, yo) exp[(ik(xo² + yo²)/2)(1/do + 1/d1)]
          × exp{−ik[xL(xo/d1 + x/d2) + yL(yo/d1 + y/d2)]} dxo dyo dxL dyL     (3.9)

where we have introduced the lens pupil function

p(xL, yL) = 1 for (xL² + yL²)^1/2 ≤ R, and 0 otherwise

The limits of integration on the lens plane are therefore taken as ±∞. Writing 1/ε = 1/d1 + 1/d2 − 1/f, we evaluate the integral over the lens plane in Equation 3.9 assuming p(xL, yL) = 1 over the infinite plane. This holds valid provided that the entire diffracted field from the transparency has been accepted by the lens. In other words, there is no diffraction at the lens aperture. On substitution of the result thus obtained, we can obtain the amplitude u(x, y) as

u(x, y) = −[A exp(ik(do + d1 + d2)) ε(1 + i)²/λdod1d2] exp[(ik(x² + y²)/2d2)(1 − ε/d2)]
          × ∫∫ t(xo, yo) exp[(ik(xo² + yo²)/2)(1/do + 1/d1 − ε/d1²)]
          × exp[−ikε(xxo + yyo)/d1d2] dxo dyo     (3.10)

We now examine both Equation 3.9 and Equation 3.10 to see under what conditions the (x, y) plane is an image plane or a Fourier transform plane of the (xo, yo) plane.

3.4.1 IMAGE FORMATION

It is known in geometrical optics that an image is formed when the imaging condition 1/d1 + 1/d2 − 1/f = 0 is satisfied. We invoke this condition and see whether the amplitude distribution at the (x, y) plane is functionally similar to that existing at the (xo, yo) plane. We begin with Equation 3.9, and invoke the imaging condition to give

u(x, y) = −[A exp(ik(do + d1 + d2))/λ²dod1d2] exp[ik(x² + y²)/2d2]
          × ∫∫∫∫ p(xL, yL) t(xo, yo) exp[(ik(xo² + yo²)/2)(1/do + 1/d1)]
          × exp{−ik[xL(xo/d1 + x/d2) + yL(yo/d1 + y/d2)]} dxo dyo dxL dyL     (3.11)

We evaluate the integral over the lens plane assuming no diffraction. We thus obtain

u(x, y) = −[A exp(ik(do + d1 + d2))/doM] exp[ik(x² + y²)/2d2] exp[(ik(x² + y²)/2M²)(1/do + 1/d1)] t(−x/M, −y/M)

        = (C/M) t(−x/M, −y/M)     (3.12)

where M (= d2/d1) is the magnification of the system and C is a complex constant. The amplitude u(x, y) at the observation plane is identical to that of the input transparency, except for the magnification. Hence, an image of the transparency is formed on the plane that satisfies the condition 1/d1 + 1/d2 = 1/f.

3.4.2 FOURIER TRANSFORMATION

If the quadratic term within the integral in Equation 3.10 is zero, then the amplitude u(x, y) is a two-dimensional Fourier transform of the function t(xo, yo) multiplied by a quadratic phase factor, provided that the limits of integration extend from −∞ to ∞.


For the quadratic term to vanish, the following condition should be satisfied:

1/do + 1/d1 − ε/d1² = 0     (3.13)

This condition is satisfied when do = ∞ and d2 = f, because then ε = d1. Thus, the Fourier transform of the transparency t(xo, yo) illuminated by a collimated beam is observed at the back focal plane of the lens. It should be noted that the lens does not take the Fourier transform. The Fourier transform is extracted by propagation, and the lens merely brings it from the far field to the back focal plane.

The amplitude u(x, y) when the transparency is illuminated by a collimated beam of amplitude Ap is given by

u(x, y) = −[Ap exp(ik(d1 + f))(1 + i)²/λf] exp[(ik/2f)(x² + y²)(1 − d1/f)]
          × ∫∫ t(xo, yo) exp[−(ik/f)(xxo + yyo)] dxo dyo     (3.14)

The position-dependent quadratic phase term in front of the integral sign will also vanish if the transparency is placed at the front focal plane of the lens. The amplitude is then given by

u(x, y) = uo ∫∫ t(xo, yo) exp[−(ik/f)(xxo + yyo)] dxo dyo     (3.15)

where uo is a complex constant. Here u(x, y) represents the pure Fourier transform of the function t(xo, yo).

We thus arrive at the following conclusions:

1. The Fourier transform of a transparency illuminated by a collimated beam is obtained at the back focal plane of a lens. However, it includes a variable quadratic phase factor.

2. The pure Fourier transform (without the quadratic phase factor) is obtained when the transparency is placed at the front focal plane of the lens.

We have considered the very specific case of plane-wave illumination of the transparency. It can be shown that when the transparency is illuminated by a spherical wave, the Fourier transform is still obtained, but at a plane other than the focal plane. As mentioned earlier, the Fourier transform is obtained when Equation 3.13 is satisfied. This equation can also be written as follows:

1/do + 1/d1 − ε/d1² = [d2f + f(d1 + do) − d2(d1 + do)] / [do(d2f + d1f − d1d2)] = 0     (3.16)


Equation 3.16 is satisfied when

1/(do + d1) + 1/d2 − 1/f = 0     (3.17)

This equation implies that the Fourier transform of an input is obtained at a plane that is conjugate to the point source plane. Indeed, the pure Fourier transform with a non-unit scale factor is obtained when a transparency placed at the front focal plane is illuminated by a diverging spherical wave. It is also possible to obtain the Fourier transform with many other configurations.

3.5 OPTICAL FILTERING

The availability of the Fourier transform at a physical plane allows us to modify it using a transparency or a mask called a filter. In general, the Fourier transform of an object is complex; that is, it has both amplitude and phase. A filter should therefore act upon both the amplitude and phase of the transform; such a filter has complex transmittance. In some cases, certain frequencies from the spectrum are filtered out, which is achieved by a blocking filter. The experiments of Abbé, and also of Porter for the verification of Abbé's theory of microscope imaging, are beautiful examples of optical filtering. Figure 3.3 shows a schematic view of an optical filtering set-up. This is the well-known 4f processor. The filter, either as a mask or a transparency, is placed at the filter plane, where the Fourier transform of the input is displayed. A further Fourier transformation yields a filtered image. The technique is used to process images blurred by motion or corrupted by aberrations of the imaging system. One of the most common applications of optical filtering is to clean a laser beam using a tiny pinhole at the focus of a microscope objective. Such an arrangement of a microscope objective with a tiny pinhole at its focal plane is known as a spatial filter.

Phase-contrast microscopy exploits optical filtering in which the zeroth order is phase-advanced or phase-retarded by π/2, thereby converting phase variations in a phase object into intensity variations. The schlieren technique used in aerodynamics and combustion science to visualize a refractive index gradient is another example of optical filtering. The Foucault knife-edge test for examining the figure of a mirror is yet another example of Fourier filtering.


FIGURE 3.3 4f arrangement for optical filtering.


One of the most commonly used techniques to clean a laser beam is spatial filtering, which filters out all Fourier components that make the beam dirty. A special mask can be used at the aperture stop in an imaging lens to alter its transfer function, and hence selectively filter the band of frequencies for image formation. Optical filtering is used in data processing, image processing, and pattern recognition. Theta modulation and frequency modulation are used to encode an object so that certain features can be extracted by optical filtering.
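The 4f processor of Figure 3.3 is easily mimicked numerically: transform the input, apply a mask in the Fourier plane, and transform again. The sketch below implements low-pass filtering with a small circular opening on the axis, which is essentially what a pinhole spatial filter does; the test object and the filter radius are arbitrary.

```python
import numpy as np

N = 256
y, x = np.mgrid[-N // 2:N // 2, -N // 2:N // 2]

# Test object: a clean disc plus high-frequency "dirt"
rng = np.random.default_rng(0)
clean = (x**2 + y**2 < 40**2).astype(float)
obj = clean + 0.3 * rng.standard_normal((N, N))

# First lens: the Fourier transform of the input is displayed at the filter plane
spectrum = np.fft.fftshift(np.fft.fft2(obj))

# Filter: a small circular opening on the optical axis (low-pass / pinhole filter)
mask = (x**2 + y**2 <= 12**2)

# Second transform gives the filtered image (the physical 4f image is inverted)
filtered = np.abs(np.fft.ifft2(np.fft.ifftshift(spectrum * mask)))

rms = lambda a: np.sqrt((a**2).mean())
print(f"rms deviation from the clean disc  before: {rms(obj - clean):.3f}  after: {rms(filtered - clean):.3f}")
```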

3.6 OPTICAL COMPONENTS IN OPTICAL METROLOGY

Optical components can be classified as reflective, refractive, or diffractive.

3.6.1 REFLECTIVE OPTICAL COMPONENTS

Plane mirrors, spherical mirrors, parabolic mirrors, and ellipsoidal mirrors all fall into the reflective category. Mirrors are used for imaging. A plane mirror, although often used simply for changing the direction of a light beam, can also be considered an imaging element that always forms a virtual image free from aberrations. On the other hand, a spherical mirror forms both real and virtual images: a convex mirror always forms a reduced virtual image, while a concave mirror can form both real and virtual images of an object, depending on the object's position and the focal length of the mirror. Mirror imaging is free from chromatic aberrations, but suffers from monochromatic aberrations. A parabolic mirror is usually used as the primary in a telescope, while ellipsoidal mirrors are used in laser cavities.

Mirrors are usually metallic-coated. Silver, gold, chromium, and aluminum are among the metals used as reflective coatings. The reflectivity can be altered by depositing a dielectric layer on a metallic coating; an example is an enhanced aluminum coating. Alternatively, a mirror can be coated with multiple layers of dielectric materials to provide the desired reflectivity.

3.6.2 REFRACTIVE OPTICAL COMPONENTS

Refractive optics includes both imaging and nonimaging optics. Lenses are used for imaging: a concave lens always forms a virtual image, while a convex lens can form both real and virtual images, depending on the object's position and the focal length of the lens. Lens imaging suffers from both monochromatic and chromatic aberrations. A shape factor is used to reduce the monochromatic aberrations of a single lens. A lens designer performs the task of designing a lens with optimized aberrations, bearing in mind both cost and ease of production. These lenses have several elements, including diffractive elements in special lenses.

In theory, we assume that a lens is free from aberrations and of infinite size, so that diffraction effects can also be neglected. In practice, even assuming a lens to be well designed and theoretically free from aberrations, it still suffers from diffraction because of its finite size. A point object at infinity is not imaged as a point even by an aberration-free lens, but rather as a distribution known as the Airy distribution.

The intensity distribution in the image of a point source at infinity is given by [2J1(x)/x]², where the argument x of the Bessel function J1(x) depends on the f-number of the lens, the magnification, and the wavelength of light.



FIGURE 3.4 Diffraction at a lens of circular aperture. (a) Photograph of the intensity distribution at the focus when a plane wave is incident on it. (b) Intensity distribution.

This distribution is plotted in Figure 3.4b. Figure 3.4a shows a photograph of an Airy pattern. It can be seen that the image of a point source is not a point but a distribution, which consists of a central disk, called the Airy disk, within which most of the intensity is contained, surrounded by circular rings of decreasing intensity. The radius of the Airy disk is 1.22λf/2a, where 2a is the lens aperture and f is the focal length of the lens. The Airy disk defines the size of the image of a point source. It should be kept in mind that if the beam incident on the lens has a diameter smaller than the lens aperture, then 2a in the expression for the disk radius must be replaced by the beam diameter. The intensity distribution in the diffraction pattern outside the Airy disk is governed by the shape and size of the aperture.
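A small numerical check of the Airy-disk radius 1.22λf/2a, with assumed values for the wavelength and the lens parameters:

```python
lam = 550e-9     # wavelength [m] (assumed)
f = 100e-3       # focal length of the lens [m] (assumed)
D = 25e-3        # lens aperture diameter 2a [m] (assumed)

r_airy = 1.22 * lam * f / D          # radius of the Airy disk, 1.22*lam*f/(2a)
print(f"Airy disk radius: {r_airy * 1e6:.2f} micrometres")    # ~2.68 micrometres
```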

If we carry out an analysis in the spatial frequency domain, the lens is considered to be a low-pass filter. The frequency response can be modified by apodization: the use of an appropriate aperture mask on the lens alters its frequency response dramatically. As an obvious example, the use of an annular aperture in a telescope enhances its angular resolution. Similar arguments apply when imaging by mirrors is considered.

Prisms function as both reflective and refractive components. When used as dispersive elements in spectrographs and monochromators, prisms function purely as refractive components. However, when used for bending and splitting of beams, they use both refraction and reflection. As an example, we consider a right-angle prism, which can be used to bend rays by 90° or by 180°, as shown in Figure 3.5.

When employed as in Figure 3.5b as a reflector, a prism can be used to introduce lateral shear. It may be observed that the beam bending is by total internal reflection. For deviating beams at other angles, special prisms are designed in which there can be metallic reflection. A Dove prism is used for image rotation.



FIGURE 3.5 Right-angle prisms used to bend a beam by (a) 90◦ and (b) 180◦.

In interferometry, a corner cube is used as a reflector in place of a mirror owing to its insensitivity to angular misalignment. Two cemented right-angle prisms with dielectric or metallic coatings on the hypotenuse faces form a beam-splitter. The entrance and exit faces have antireflective coatings. A dielectric coating on the diagonal surface can be so designed that a cube beam-splitter also functions as a polarization beam-splitter. In many interferometers, plane-parallel plates are used both as beam-splitters and as beam-combiners. One of the surfaces has an antireflective coating. Plane-parallel plates with uncoated surfaces have been used as shear elements in shear interferometry. Sometimes, wedge plates are used as beam-splitters and beam-combiners, and also as shear elements.

Some optical elements are constructed from anisotropic materials such as calcite and quartz. Savart and half-shade plates are examples of such elements, as are Nicol, Wollaston, Rochon, and Senarmont prisms. These elements are used in polarization interferometers, polarimeters, and ellipsometers.

3.6.3 DIFFRACTIVE OPTICAL COMPONENTS

A grating is an example of a diffractive optical element. In general, optical elements that utilize diffraction for their functions are classed as diffractive optical elements (DOEs). Computer-generated holograms (CGHs) and holographic optical elements (HOEs) also fall into this category. Their main advantages are flexibility in size, shape, layout, and choice of materials. They can carry out multiple functions simultaneously, for example bending and focusing of a beam. They are easy to integrate with the rest of the system. DOEs offer the advantages of reduction in system size, weight, and cost. Because of their many advantages, considerable effort has gone into designing and fabricating efficient DOEs. DOEs can be produced using a conventional holographic method. However, most such components are formed by either a laser or an electron-beam writing system on an appropriate substrate. Mass production is by hot embossing or ultraviolet embossing, or by injection molding. These elements can also be fabricated using diamond turning.

DOEs can be used as lenses, beam-splitters, polarization elements, or phase-shifters in optical systems. A zone plate is an excellent example of a diffractive lens that produces multiple foci. However, using multiple phase steps or a continuous phase profile, diffractive lenses with efficiency approaching 100% can be realized.


Gratings can be designed that split the incident beam into two beams of equal intensity, functioning as a conventional beam-splitter. Alternatively, such a grating can yield two beams of equal intensity in reflection, as shown in Figure 3.6. These beams are then combined by another DOE, thereby making a very compact Mach–Zehnder interferometer.

A polarizer has been realized in a photoresist on a glass plate in which a deep binary grating is written. In fact, diffractive elements with submicrometer pitch exhibit strong polarization dependence. An interesting application of DOEs is in phase-shifting interferometry. For studying transient events, spatial phase-shifting is used such that three or four phase-shifted data are available from a single interferogram. This may be done by using multiple detectors or a spatial carrier with a single detector. A DOE has also been produced in such a way that, when placed in a reference arm, it provides four 90° phase-shifted reference wavefronts. Thereby, four phase-shifted interferograms are obtained simultaneously, from which the phase at each pixel is obtained using a four-step algorithm.

Gratings (linear, circular, spiral, or concave) are dispersive elements, and hence are used mainly in spectrometers, spectrophotometers, and similar instruments. When used with monochromatic radiation, gratings are useful elements in metrology. A low-frequency grating is used in moiré metrology. High-accuracy measurements are performed with gratings of fine pitch. A grating is usually defined as a periodic arrangement of slits (opaque and transparent regions). A collimated beam incident on a grating is diffracted into a number of orders. When a grating of pitch d is illuminated normally by a monochromatic beam of wavelength λ, the mth diffraction order is in the direction θm such that

d sin θm = mλ for m = 0, ±1, ±2, ±3, . . . (3.18)


FIGURE 3.6 Mach–Zehnder interferometer using diffractive optical elements.


Such a grating will generate a large number of orders (m ≤ d/λ, since θm ≤ π/2). Assuming small-angle diffraction (i.e., diffraction from a coarse grating) and diffraction in the first order, we obtain

θ = λ/d   or   1/d = θ/λ     (3.19)

If a lens of focal length f is placed behind the grating, then this order will be focused to a spot such that x = λf/d, or 1/d = x/λf, where x is the distance of the spot from the optical axis. The grating is assumed to be oriented such that its grating elements are perpendicular to the x axis. We define the spatial frequency μ of the grating as the inverse of the period d:

μ = 1/d = x/λf     (3.20)

It is expressed in lines per millimeter (lines/mm). Thus, the spatial frequency μ and the off-axis distance x are linearly related for a coarse grating. In fact, the spatial frequency is defined as μ = (sin θ)/λ. It is thus obvious that a sinusoidal grating of frequency μ will generate only three orders: m = 0, 1, −1.

A real grating can be represented as a summation of sinusoidal gratings whose frequencies are integral multiples of the fundamental. When such a grating is illuminated normally by a collimated beam propagating in the z direction, a large number of spots are formed on either side of the z axis; symmetric pairs correspond to a particular frequency in the grating.
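Equation 3.18 can be exercised numerically to list the propagating orders of a grating and their directions; the pitch and wavelength below are assumed values.

```python
import numpy as np

lam = 632.8e-9      # wavelength [m] (assumed)
d = 2.0e-6          # grating pitch [m], i.e. 500 lines/mm (assumed)

m_max = int(d / lam)                 # orders propagate only while |sin(theta_m)| <= 1
for m in range(-m_max, m_max + 1):
    theta = np.degrees(np.arcsin(m * lam / d))   # grating equation: d*sin(theta_m) = m*lam
    print(f"order {m:+d}: {theta:+7.2f} deg")
```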

If a grating of pitch p is inclined with respect to the x axis as shown in Figure 3.7, then its spatial frequencies μ and ν along the x and y directions are μ = 1/dx and ν = 1/dy, such that


FIGURE 3.7 An inclined linear grating.



ρ² = 1/p² = 1/dx² + 1/dy²     (3.21)

where ρ is the spatial frequency of the grating; that is, ρ² = μ² + ν². Diffraction at the inclined sinusoidal grating again results in the formation of three diffraction orders, which will lie on neither the x nor the y axis. It can be shown that the diffraction spots always lie on a line parallel to the grating vector.

3.6.3.1 Sinusoidal Grating

Interference between two inclined plane waves of equal amplitude generates an interference pattern in which the intensity distribution is of the form

I(x) = 2I0[1 + cos(2πx/dx)] (3.22)

where dx is the grating pitch and I0 is the intensity of each beam. We associate with it a spatial frequency μ, which is the reciprocal of dx; that is, μ = 1/dx. Thus, we can express the intensity distribution as

I(x) = 2I0[1 + cos(2πμx)] (3.23)

Assuming an ideal recording material that maps the incident intensity distribution before exposure to the amplitude transmittance after exposure, the amplitude transmittance of the grating record is given by

t(x) = 0.5[1 + cos(2πμx)] (3.24)

The factor 0.5 appears owing to the fact that the transmittance varies between 0 and 1. Let a unit-amplitude plane wave be incident normally on the grating. The amplitude just behind the grating is

a(x) = 0.5(1 + cos 2πμx) = 0.5[1 + 0.5 exp(2πiμx) + 0.5 exp(−2πiμx)]     (3.25)

This represents a combination of three plane waves: one wave propagating along the axis and the other two propagating inclined to the axis. A lens placed behind the grating will focus these plane waves to three diffraction spots, each spot corresponding to a plane wave. These spots lie on the x axis in this particular case. If the transmittance function of the grating is other than sinusoidal, a large number of diffraction spots are formed.
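The three-wave decomposition of Equation 3.25 can be verified by taking the Fourier transform of the transmittance numerically: only the zero frequency and ±μ survive. A short sketch, with an arbitrary grating frequency:

```python
import numpy as np

N, L = 2048, 10e-3                    # samples and window width [m]
dx = L / N
x = np.arange(N) * dx
mu = 2000.0                           # grating frequency [cycles/m], i.e. 2 lines/mm (assumed)

t = 0.5 * (1 + np.cos(2 * np.pi * mu * x))       # transmittance of Equation 3.24
T = np.abs(np.fft.rfft(t)) / N                   # single-sided amplitude spectrum
f = np.fft.rfftfreq(N, d=dx)

# Only the components at 0 and +/- mu are present (the rfft shows 0 and +mu)
print("spectral components [cycles/m]:", f[T > 1e-3])     # [0., 2000.]
```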

Let us now recall the grating equation when a plane wave is incident normally; that is,

dx sin θm = mλ

For the first order, m = 1, and assuming small-angle diffraction, so that sin θ ≈ θ, we obtain

1/dx = μ = θ/λ



FIGURE 3.8 Diffraction at a sinusoidal grating.

From Figure 3.8, θ = xf/f, and hence μ = xf/λf. Thus, the position of the spot is directly proportional to the frequency of the grating. This implies that diffraction at a sinusoidal grating generates three diffraction orders: the zeroth order lies on the optical axis, and the first orders lie off-axis, with their displacements being proportional to the spatial frequency of the grating. These orders lie on a line parallel to the grating vector (2π/dx); the grating vector is oriented perpendicular to the grating elements. If the grating is rotated in its plane, these orders also rotate by the same angle.

A real grating generates many spots. These can easily be observed by shining an unexpanded laser beam onto the grating and observing the diffracted orders on a screen placed some distance away. This is due to the fact that the grating transmittance differs significantly from a sinusoidal profile. Since a grating is defined as a periodic structure of transparent and opaque parts, it gives rise to an infinite number of orders, subject to the condition that the diffraction angle θ ≤ π/2. It should be remembered that the linear relationship between spatial frequency and distance is valid only for low-angle diffraction or for coarse gratings.

3.6.4 PHASE GRATING

A phase grating has a periodic phase variation; it does not attenuate the beam. It can be realized either by thickness changes, by refractive index changes, or by a combination of the two. One of the simplest methods is to record an interference pattern on a photographic emulsion and then bleach it. Assuming that the phase variation is linearly related to the intensity distribution, the transmittance of a sinusoidal phase grating can be expressed as

t(x) = exp(iφ0) exp[iφm cos(2πμ0x)] (3.26)

where φ0 and φm are the constant phase and the phase modulation, and μ0 is the spatial frequency of the grating. This transmittance can be expressed as a Fourier–Bessel series. This implies that when such a grating is illuminated by a collimated beam, an infinite number of orders are produced, unlike the three orders from a sinusoidal amplitude grating. The phase grating, therefore, is intrinsically nonlinear.


The intensity distribution in the various orders can be controlled by the groove shape and modulation, and also by the frequency.
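Expanding the transmittance of Equation 3.26 in a Fourier–Bessel (Jacobi–Anger) series gives the amplitude of the mth order as Jm(φm), so the relative intensity in that order is Jm(φm)². The sketch below tabulates the first few orders for an assumed modulation depth.

```python
import numpy as np
from scipy.special import jv

phi_m = 1.0      # phase-modulation depth [rad] (assumed)

# Jacobi-Anger expansion: exp(i*phi_m*cos(theta)) = sum_m (i**m) * J_m(phi_m) * exp(i*m*theta),
# so the relative intensity diffracted into order m is J_m(phi_m)**2
for m in range(5):
    print(f"order +/-{m}: {jv(m, phi_m)**2:.4f}")

# Power check: J_0^2 + 2*sum_{m>=1} J_m^2 = 1 (the grating does not absorb light)
total = jv(0, phi_m)**2 + 2 * sum(jv(m, phi_m)**2 for m in range(1, 40))
print("sum over all orders:", round(total, 6))
```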

3.6.5 DIFFRACTION EFFICIENCY

Gratings can be either amplitude or phase gratings. The diffraction efficiency of an amplitude grating having a sinusoidal profile of the form t(x) = 0.5 + 0.5 sin(2πμx) has a maximum value of 6.25%. However, if the profile is binary, the diffraction efficiency in the first order is 10.2%. It will also produce a large number of orders. If the widths of the opaque and transparent parts are equal, then the even orders are missing. For a sinusoidal phase grating, the maximum diffraction efficiency is 33.9%; for a binary phase grating, it is 40.6%. If the grating is blazed, it may have 100% diffraction efficiency in the desired order. Thick phase gratings have high diffraction efficiencies, approaching 100%.
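These peak efficiencies follow from elementary Fourier analysis of the corresponding transmittance profiles and can be checked numerically, as in the sketch below; small rounding differences from the quoted figures are to be expected.

```python
import numpy as np
from scipy.special import jv
from scipy.optimize import minimize_scalar

# Sinusoidal amplitude grating t = 0.5 + 0.5*sin(2*pi*mu*x): first-order amplitude 0.25
print(f"sinusoidal amplitude grating: {0.25**2:.4f}")            # 0.0625 -> 6.25%

# Binary amplitude grating with equal opaque/transparent widths: first-order amplitude 1/pi
print(f"binary amplitude grating    : {(1 / np.pi)**2:.4f}")     # ~0.101

# Sinusoidal phase grating: eta_1 = J1(phi_m)^2, maximised over the modulation depth phi_m
res = minimize_scalar(lambda p: -jv(1, p)**2, bounds=(0.0, 4.0), method="bounded")
print(f"sinusoidal phase grating    : {-res.fun:.4f}")           # ~0.339

# Binary 0/pi phase grating: first-order amplitude 2/pi
print(f"binary phase grating        : {(2 / np.pi)**2:.4f}")     # ~0.405
```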

3.7 RESOLVING POWER OF OPTICAL SYSTEMS

Assume a point source emitting monochromatic spherical waves. An image of the point source can be obtained by following the procedure outlined in Section 3.4. The intensity distribution in the image of a point source is proportional to the square of the Fourier transform of the aperture function of an aberration-free imaging lens. As already mentioned, for a lens with a circular aperture, the intensity distribution of the image of a point source is an Airy distribution, proportional to [2J1(x)/x]². The image of a point source is not a point but a distribution, and hence two closely separated point sources can be seen distinctly in the image if they are adequately separated. The ability to see them distinctly depends on the resolving power of the imaging lens.

Resolving power is important for both imaging and dispersive instruments. In one case, one wants to know how close two objects can be and still be seen as distinct; in the other, one wants to know how close two spectral lines can be and still be seen as distinct. In both cases, the ability to see them distinctly is governed by diffraction. In the case of imaging, an imaging system intercepts only a part of the spherical waves emanating from a point source, thereby losing a certain amount of information. This results in an Airy distribution, with a central maximum surrounded by circular rings of decreasing intensity. For objects of the same intensity, it is easier to find a criterion of resolution. According to Rayleigh, two objects are just resolved when the intensity maximum of one coincides with the first intensity minimum of the other. The resulting intensity distribution shows two humps with a central minimum, as shown in Figure 3.9. The intensity of the central minimum in the case of a square aperture is 81% of the maximum. In practice, several resolution criteria have been proposed, but the most commonly used criterion is that due to Rayleigh.

However, in the case of two objects with one very much brighter than the other, it becomes extremely difficult to apply the Rayleigh criterion. Further, there are situations in which the intensity distribution does not exhibit any secondary maxima. In such cases, the Rayleigh criterion states that two objects are just resolved when the intensity of the central minimum is 81% of the maximum.
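For a slit (square) aperture the point image is a sinc² distribution, and two such images placed at the Rayleigh separation give the 81% central dip quoted above, as the following one-dimensional sketch confirms.

```python
import numpy as np

# One-dimensional slit images: sinc^2 with first zero at u = 1 (numpy sinc = sin(pi*u)/(pi*u))
u = np.linspace(-3.0, 4.0, 20001)
I = np.sinc(u)**2 + np.sinc(u - 1.0)**2       # two equal points at the Rayleigh separation

dip = I[np.argmin(np.abs(u - 0.5))]           # intensity midway between the two maxima
print(f"central dip = {dip / I.max():.3f} of the maximum")   # ~0.81
```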



FIGURE 3.9 Rayleigh resolution criterion: maximum of the first point image falls over the minimum of the second point image.

The Airy intensity distribution may be called an "instrumental function" of the imaging device. Similarly, in the case of a grating used for spectral analysis, the intensity distribution is governed by sin²(Nδ/2)/sin²(δ/2), where δ is the phase difference between two consecutive rays. This is the instrumental function, which governs the resolution. An interferometer may therefore be described in terms of an instrumental function; a common situation where this arises is in Fourier-transform spectroscopy.



4 Phase-Evaluation Methods

Phase distribution is encoded in an intensity distribution as a result of interference phenomena, and is displayed in the form of an interference pattern. The phase distribution should therefore be retrievable from the interference pattern. The phase difference is related to parameters of interest such as the figure of a surface, height variations, and refractive index variations, which can be extracted from measurements of phase difference.

We employ interference phenomena in classical interferometry, in holographic interferometry, and in speckle interferometry to convert the phase of a wave of interest into an intensity distribution. Even in moiré methods, the desired information can be cast into the form of an intensity distribution. The fringes in an interference pattern are loci of constant phase difference. Earlier methods of extracting phase information, and consequently the related parameters of interest, were very laborious and time-consuming and suffered from the inherent inaccuracies of the evaluation procedures. With the availability of desktop computers with enormous computational and processing power and of CCD array detectors, many automatic fringe-evaluation procedures have been developed.

There are many methods of phase evaluation. A comparison of these methods with respect to several parameters is presented in Table 4.1.

It should be noted that complexity and cost increase with increasing resolution. We will now discuss these methods in detail. In the course of this discussion, the significance of some of the parameters and characteristics of the various methods will become obvious.

4.1 INTERFERENCE EQUATION

Interference between two waves of the same frequency results in an intensity distribution of the form

I(x, y) = I0(x, y)[1 + V(x, y) cos δ(x, y)] (4.1)

where I0(x, y) is the low-frequency background, often called the DC background, V(x, y) is the fringe visibility (or modulation), and δ(x, y) is the phase difference between the two waves, which is to be determined. Here (x, y) are the coordinates of a point, or of a pixel, on the observation plane. The intensity distribution may be degraded


TABLE 4.1
Comparison of Some Important Methods of Phase Evaluation in Terms of Selected Parameters

Parameter                              Fringe            Phase-Stepping and   Fourier            Temporal
                                       Skeletonization   Phase-Shifting       Transform          Heterodyning
Number of interferograms
  to be reconstructed                  1                 Minimum 3            1 (2)              1 per detection point
Resolution (in units of λ)             1–1/10            1/10–1/100           1/10–1/30          1/100–1/1000
Evaluation between intensity extrema   No                Yes                  Yes                Yes
Inherent noise suppression             Partially         Yes                  No (Yes)           Partially
Automatic sign detection               No                Yes                  No (Yes)           Yes
Necessary experimental manipulation    No                Phase-shift          No (Phase-shift)   Frequency
Experimental effort                    Low               High                 Low                Extremely high
Sensitivity to external influences     Low               Moderate             Low                Extremely high
Interaction by the operator            Possible          Not possible         Possible           Not possible
Speed of evaluation                    Low               High                 Low                Extremely low
Cost                                   Moderate          High                 Moderate           Very high

by several factors, such as speckle noise, spurious fringe noise, and electronic noise. However, when all of these factors are included, the intensity distribution may still be expressed in the form

I(x, y) = a(x, y) + b(x, y) cos δ(x, y) (4.2)

This equation has three unknowns: a(x, y), b(x, y), and δ(x, y). We are mostly interested in obtaining δ(x, y), and hence will discuss only the methods of phase evaluation.

4.2 FRINGE SKELETONIZATION

The fringe skeletonization methods are computerized forms of the earlier manual fringe-counting methods. The fringe pattern is received on a CCD detector. It is assumed that a local extremum of the intensity distribution corresponds to a maximum or minimum of the cosine function. (However, this requirement is not met by fringe patterns obtained in speckle photography.) The interference phase at pixels where an intensity maximum or minimum is detected is an even or odd integer multiple of π. The fringe skeletonization method seeks maxima or minima of the intensity distribution. The positions of maxima or minima in the fringe pattern are determined to within several pixels. The method is thus similar to the manual method of locating the


fringe maxima or minima, but is performed much more rapidly and with relatively high accuracy. The sign ambiguity is still not resolved.

Methods of fringe skeletonization can be divided into those based on fringe tracking and those based on segmentation. Both techniques require careful preprocessing to minimize speckle noise through low-pass filtering and to correct for uneven brightness distribution in the background illumination.

4.3 TEMPORAL HETERODYNING

In temporal heterodyning, a small frequency difference Δω/2π = (ω1 − ω2)/2π, which is less than 100 kHz, is introduced between the two interfering waves. The local intensity of the interference pattern then varies sinusoidally at the beat frequency Δω/2π. The intensity distribution can be expressed as

I(x, y; t) = a(x, y) + b(x, y) cos[Δωt + δ(x, y)] (4.3)

There is no stationary, stable interference pattern of the type familiar in interferometry; instead, the intensity at each point (x, y) varies sinusoidally at the beat frequency, and the interference phase at that point is transformed into the phase of the beat-frequency signal. The absolute phase of this beat signal cannot be measured; however, the intensity at all points varies at the same beat frequency with different phases, equal to the interference phases at those points. As the beat frequency is sufficiently low, the phase difference between two points can be measured with very high accuracy, independently of a(x, y) and b(x, y), by using two photodetectors and an electronic phase-meter. In this way, both the interpolation problem and the sign ambiguity of classical interferometry are solved. The method requires special equipment such as acousto-optic modulators for the frequency shift and a phase-meter. The image (interference pattern) is scanned mechanically by photodetectors to measure the phase difference: one detector is fixed and the other is scanned, thereby measuring the interference phase with respect to the fixed photodetector. The measurement speed is low (typically 1 s per point), but the accuracy (typically λ/1000) and spatial resolution (>10⁶ resolvable points) are extremely high.
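The essence of the heterodyne evaluation, namely extracting the phase of a low-frequency beat signal at each detection point and forming differences between points, can be illustrated with a short simulation. The beat frequency, sampling rate, and quadrature demodulation used below are assumptions made only for illustration; an actual instrument uses an electronic phase-meter:

import numpy as np

f_beat = 100e3                         # beat frequency (Hz)
fs = 10e6                              # sampling rate (Hz)
t = np.arange(0, 2e-3, 1 / fs)         # 2 ms record (an integer number of beat periods)

def beat_signal(phase, a=2.0, b=1.0):
    # Intensity at one detection point, Equation 4.3.
    return a + b * np.cos(2 * np.pi * f_beat * t + phase)

def measured_phase(signal):
    # Quadrature demodulation at the known beat frequency.
    return np.angle(np.sum(signal * np.exp(-2j * np.pi * f_beat * t)))

phi_fixed, phi_scanned = 0.3, 1.7      # interference phases at the two detector positions
dphi = measured_phase(beat_signal(phi_scanned)) - measured_phase(beat_signal(phi_fixed))
print("recovered phase difference:", dphi)   # ~1.4 rad, independent of a and b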

In double-exposure holographic interferometry, the two interfering waves are released simultaneously from the hologram, and hence temporal heterodyning cannot be applied for phase evaluation. On the other hand, if a two-reference-wave set-up is used, such that each state of the object is recorded with one of the waves as a reference wave, this technique can be implemented (see Chapter 6). In practice, two acousto-optic modulators in cascade are placed in the path of one of the reference waves. These modulators introduce frequency shifts of opposite signs. During recording, both modulators are driven at the same frequency, say at 40 MHz, and hence the net frequency shift is zero. The initial and final states of the object are recorded sequentially, with one reference wave at a time. The reconstruction, however, is performed with both reference waves simultaneously, with one modulator driven at 40 MHz and the other


at 40.1 MHz. The reference waves are therefore of different frequencies, with a frequency difference of 100 kHz, resulting in an interference pattern that oscillates at that frequency.

One way to evaluate the interference pattern is to keep one detector fixed at the reference point while the other is scanned over the image. In this way, the phase difference is measured with respect to the phase at the reference point. The phase differences are stored, arranged, and displayed. Instead of two detectors, one could use an array of three, four, or five detectors to scan the whole image. In this way, the phase differences δδx and δδy between adjacent points along the x and y directions, with known separations, are measured. Numerical integration of the recorded and stored phase differences gives the interference phase distribution.

4.4 PHASE-SAMPLING EVALUATION: QUASI-HETERODYNING

According to Equation 4.2, the intensity distribution in an interference pattern has three unknowns: a(x, y), b(x, y), and δ(x, y). Therefore, if three intensity values at each point are available, we can set up three equations and solve them for the unknowns. These three intensity values are obtained by changing the phase of the reference wave. For the sake of generality, we set up n equations.

Two different approaches are available for quasi-heterodyne phase measurement: the phase-step method and the phase-shift method. In the phase-step method, the local intensity In(x, y) in the interference pattern is sampled at fixed phases αn of the reference wave; that is,

In(x, y) = a(x, y) + b(x, y) cos[δ(x, y) + αn]   for n = 1, 2, 3, . . . , N (N ≥ 3)   (4.4)

As mentioned earlier, at least three intensity measurements, I1, I2, I3, must be carried out to determine all three unknowns.

In the phase-shifting or integrating-bucket method, which is intended primarily for use with CCD detectors, on which the optical power is integrated by the detector, the phase of the reference wave is varied linearly and the sampled intensity is integrated over the phase interval Δα from αn − Δα/2 to αn + Δα/2. The intensity In(x, y) sampled at a pixel at (x, y) is given by

In(x, y) = (1/Δα) ∫ from αn−Δα/2 to αn+Δα/2 of {a(x, y) + b(x, y) cos[δ(x, y) + α(t)]} dα(t)
         = a(x, y) + sinc(Δα/2) b(x, y) cos[δ(x, y) + αn]   (4.5)

This expression is equivalent to that of the phase-stepping method, the only difference being that the modulation b(x, y) is multiplied by sinc(Δα/2); that is, there is a reduction in the contrast of the fringes by sinc(Δα/2). In this sense, the phase-shifting method is equivalent to the phase-stepping method, and the names are used synonymously. Furthermore, for data processing, both methods are handled in an identical manner.


4.5 PHASE-SHIFTING METHOD

For the solution of this nonlinear system of equations, we use the Gauss least-squares approach and rewrite the intensity distribution as

In(x, y) = a(x, y) + b(x, y) cos[δ(x, y) + αn] = a + u cos αn + v sin αn (4.6)

where u(x, y) = b(x, y) cos δ(x, y) and v(x, y) = −b(x, y) sin δ(x, y). We obtain a, u, and v by minimizing the error; that is, the sum of the quadratic errors

Σ (n = 1 to N) [In(x, y) − (a + u cos αn + v sin αn)]²

is minimized. Taking partial derivatives of this function with respect to a, u, and v and then equating these derivatives to zero gives a linear system of three equations (all sums over n = 1 to N):

[ N          Σ cos αn          Σ sin αn        ] [ a ]   [ Σ In        ]
[ Σ cos αn   Σ cos² αn         Σ sin αn cos αn ] [ u ] = [ Σ In cos αn ]   (4.7)
[ Σ sin αn   Σ cos αn sin αn   Σ sin² αn       ] [ v ]   [ Σ In sin αn ]

This system is to be solved pointwise. For equally spaced phase-steps over a full period, we thus obtain

a(x, y) = (1/N) Σ (n = 1 to N) In   (4.8a)

b(x, y) = (2/N) [(Σ In cos αn)² + (Σ In sin αn)²]^(1/2)   (4.8b)

tan δ(x, y) = −v/u = −(Σ In sin αn)/(Σ In cos αn)   (4.8c)

The interference phase is computed modulo 2π. Many algorithms have been described in the literature. Some of these algorithms, derived from the interference equation and using from three up to eight steps, are presented in Table 4.2.
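The pointwise prescription of Equations 4.8a through 4.8c maps directly onto array operations. The sketch below (NumPy; the array names and the synthetic test pattern are assumptions made only to keep the example self-contained) recovers δ(x, y) modulo 2π from N equally spaced phase-steps:

import numpy as np

def phase_from_steps(frames, alphas):
    # Least-squares phase recovery, Equations 4.8a-4.8c, for equally spaced steps.
    I = np.asarray(frames, dtype=float)            # shape (N, rows, cols)
    a = np.asarray(alphas, dtype=float)[:, None, None]
    s = np.sum(I * np.sin(a), axis=0)              # proportional to -b sin(delta)
    c = np.sum(I * np.cos(a), axis=0)              # proportional to  b cos(delta)
    return np.arctan2(-s, c)                       # Equation 4.8c, wrapped to (-pi, pi]

# Self-test with a synthetic fringe pattern (Equation 4.4)
x, y = np.meshgrid(np.linspace(0, 1, 64), np.linspace(0, 1, 64))
delta_true = 2 * np.pi * (x**2 + 0.5 * y)
steps = np.array([0, np.pi / 2, np.pi, 3 * np.pi / 2])
frames = [5.0 + 3.0 * np.cos(delta_true + step) for step in steps]

delta_rec = phase_from_steps(frames, steps)
err = np.angle(np.exp(1j * (delta_rec - delta_true)))   # wrapped residual
print("max wrapped error:", np.abs(err).max())          # essentially zero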

4.6 PHASE-SHIFTING WITH UNKNOWN BUT CONSTANT PHASE-STEP

There is a method, known as the Carré method, which utilizes four phase-shifted intensity distributions; the phase-step need not be known, but must remain constant during phase-shifting. The four intensity distributions are expressed as

I1(x, y) = a(x, y) + b(x, y) cos[δ(x, y) − 3α]   (4.9a)

I2(x, y) = a(x, y) + b(x, y) cos[δ(x, y) − α]   (4.9b)

I3(x, y) = a(x, y) + b(x, y) cos[δ(x, y) + α]   (4.9c)

I4(x, y) = a(x, y) + b(x, y) cos[δ(x, y) + 3α]   (4.9d)


TABLE 4.2
Some Algorithms Derived from the Interference Equation

N   Phase-Step αn                                        Expression for tan δ(x, y)

3   60° (0°, 60°, 120°)                                  (2I1 − 3I2 + I3) / [√3 (I2 − I3)]
3   90° (0°, 90°, 180°)                                  (I1 − 2I2 + I3) / (I1 − I3)
3   120° (0°, 120°, 240°)                                √3 (I3 − I2) / (2I1 − I2 − I3)
4   90° (0°, 90°, 180°, 270°)                            (I4 − I2) / (I1 − I3)
4   60° (0°, 60°, 120°, 180°)                            5 (I1 − I2 − I3 + I4) / [√3 (2I1 + I2 − I3 − 2I4)]
5   90° (0°, 90°, 180°, 270°, 360°)                      7 (I4 − I2) / (4I1 − I2 − 6I3 − I4 + 4I5)
6   90° (0°, 90°, 180°, 270°, 360°, 450°)                (I1 − I2 − 6I3 + 6I4 + I5 − I6) / [4 (I2 − I3 − I4 + I5)]
7   90° (0°, 90°, 180°, 270°, 360°, 450°, 540°)          (−I1 + 7I3 − 7I5 + I7) / (−4I2 + 8I4 − 4I6)
8   90° (0°, 90°, 180°, 270°, 360°, 450°, 540°, 630°)    (−I1 − 5I2 + 11I3 + 15I4 − 15I5 − 11I6 + 5I7 + I8) /
                                                         (I1 − 5I2 − 11I3 + 15I4 + 15I5 − 11I6 − 5I7 + I8)

For the sake of simplicity, the phase-step is taken to be 2α. There are four unknowns, namely a(x, y), b(x, y), δ(x, y), and 2α, and hence it should be possible to obtain the values of all four from the four equations. However, we present the expressions only for the phase-step and the interference phase:

tan² α = [3(I2 − I3) − (I1 − I4)] / [(I2 − I3) + (I1 − I4)]   (4.10a)

tan δ(x, y) = {[(I2 − I3) + (I1 − I4)] / [(I2 + I3) − (I1 + I4)]} tan α   (4.10b)

These two equations are combined to yield the interference phase from the measured intensity values as follows:

tan δ(x, y) = √{[3(I2 − I3) − (I1 − I4)] [(I2 − I3) + (I1 − I4)]} / [(I2 + I3) − (I1 + I4)]   (4.11)
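A compact illustration of how Equations 4.10a and 4.11 are used in practice is given below (NumPy; the phase value and the phase-step of 2α = 100° are arbitrary illustrative assumptions, and the usual quadrant bookkeeping is omitted for brevity):

import numpy as np

def carre_step(I1, I2, I3, I4):
    # Phase-step 2*alpha from Equation 4.10a.
    ratio = (3.0 * (I2 - I3) - (I1 - I4)) / ((I2 - I3) + (I1 - I4))
    return 2.0 * np.arctan(np.sqrt(np.abs(ratio)))

def carre_phase(I1, I2, I3, I4):
    # Interference phase from Equation 4.11 (quadrant/sign logic omitted here).
    num = (3.0 * (I2 - I3) - (I1 - I4)) * ((I2 - I3) + (I1 - I4))
    den = (I2 + I3) - (I1 + I4)
    return np.arctan(np.sqrt(np.clip(num, 0.0, None)) / den)

delta = 0.7                              # true interference phase (rad)
alpha = np.deg2rad(50.0)                 # half the unknown phase-step, i.e. 2*alpha = 100 deg
I1, I2, I3, I4 = (4.0 + 2.0 * np.cos(delta + k * alpha) for k in (-3, -1, 1, 3))

print("recovered step (deg):", np.degrees(carre_step(I1, I2, I3, I4)))   # ~100
print("recovered phase (rad):", carre_phase(I1, I2, I3, I4))             # ~0.7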

An improvement over the Carré method is realized by calculating the value of the phase-step over the whole pixel format, and then taking the average value for the calculation. The constancy of the phase-step is also easily checked. The phase-step 2α is


obtained from the expression

cos 2α = [(I1 − I2) + (I3 − I4)] / [2(I2 − I3)]   (4.12)

By assumption, the phase-step 2α must be constant over all the pixels. However, we take an average value, which smooths out any fluctuations. The interference phase is then obtained as

tan δ(x, y) = [(I3 − I2) + (I1 − I3) cos α1 + (I2 − I1) cos 2α1] / [(I1 − I3) sin α1 + (I2 − I1) sin 2α1]   (4.13a)

tan δ(x, y) = [(I4 − I3) + (I2 − I4) cos α1 + (I3 − I2) cos 2α1] / [(I2 − I4) sin α1 + (I3 − I2) sin 2α1]   (4.13b)

where 2α1 is the average value of the phase-step 2α, averaged over all the pixels. The 2π steps of the interference phase distribution modulo 2π occur at different points in the image. This information can be used in the subsequent demodulation by considering the continuous phase variation of the two terms at each pixel.

There is another interesting algorithm, which also utilizes four intensity distributions but can be used for the evaluation of a very noisy interferogram. We consider the intensity distribution in the image of an object being studied using speckle techniques. The intensity distribution is expressed as

I(x, y) = a(x, y) + b(x, y) cos(φo − φR), (4.14)

where φo and φR are the phases of the speckled object wave and the reference wave, respectively; the phase difference φ = φo − φR is random. We capture four frames, with the intensity distributions expressed as

I1(x, y) = a(x, y) + b(x, y) cos(φ − π/2)   (4.15a)

I2(x, y) = a(x, y) + b(x, y) cos φ   (4.15b)

I3(x, y) = a(x, y) + b(x, y) cos[φ + δ(x, y)]   (4.15c)

I4(x, y) = a(x, y) + b(x, y) cos[φ + δ(x, y) + π/2]   (4.15d)

Here δ(x, y) is the phase introduced by the deformation and is to be determined. From these equations, we obtain

I3 − I2 = −2b sin(φ + δ/2) sin(δ/2)   (4.16a)

I4 − I1 = −2b sin(φ + δ/2) cos(δ/2)   (4.16b)

From Equations 4.16a and 4.16b, the deformation phase is obtained easily as

tan[δ(x, y)/2] = (I3 − I2) / (I4 − I1)   (4.17)
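A minimal sketch of how Equation 4.17 is applied pixel-wise to speckle data follows (NumPy; the synthetic speckle phase and deformation map are assumptions used only for the self-test). Using arctan2 on the two differences returns δ/2 over a full 2π range, which is then doubled, so the recovered phase is modulo 2π:

import numpy as np

rng = np.random.default_rng(0)
shape = (128, 128)
yy, xx = np.indices(shape)

phi = rng.uniform(0.0, 2 * np.pi, shape)                            # random speckle phase
delta = 1.2 * np.exp(-((yy - 64) ** 2 + (xx - 64) ** 2) / 800.0)    # deformation phase to recover

a, b = 5.0, 3.0
I1 = a + b * np.cos(phi - np.pi / 2)                                # Equations 4.15a-4.15d
I2 = a + b * np.cos(phi)
I3 = a + b * np.cos(phi + delta)
I4 = a + b * np.cos(phi + delta + np.pi / 2)

delta_rec = 2.0 * np.arctan2(I3 - I2, I4 - I1)                      # Equation 4.17
err = np.angle(np.exp(1j * (delta_rec - delta)))                    # compare modulo 2*pi
print("max wrapped error:", np.abs(err).max())                      # limited only by round-off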


The phase is obtained modulo 2π, as in other algorithms. However, the phase shift has to be exactly π/2.

All of the algorithms listed in Table 4.2 that utilize a 90° phase shift require the phase-step to be exactly 90°; otherwise, errors are introduced. Simulation studies have indicated that the magnitude of the errors decreases with increasing number of phase-steps. An eight-step phase-shift algorithm is practically immune to phase-shift errors.

Phase-shifting methods are commonly used in all kinds of interferometers. The phase shift is usually introduced by calibrated PZT shifters attached to a mirror. This mechanical means of shifting the phase is quite common and convenient to use. There are various other methods, which rely on polarization, on refractive index changes of the medium induced by an electric field, and so on.

Advantages of the phase-shifting technique are (i) almost real-time processing of interferograms, (ii) removal of the sign ambiguity, (iii) insensitivity to source intensity fluctuations, (iv) good accuracy even in areas of very low fringe visibility, (v) high measurement accuracy and repeatability, and (vi) applicability to holographic interferometry, speckle interferometry, moiré methods, and so on. Further, we can calculate the visibility and the background intensity from the measured intensity values.

4.7 SPATIAL PHASE-SHIFTING

Temporal phase-shifting as described above requires stability of the interference pattern over the period of phase-shifting. However, in situations where a transient event is being studied or where the environment is highly turbulent, the results from temporal phase-shifting could be erroneous. To overcome this problem, the phase-shifted interferograms can be captured simultaneously using multiple CCD array detectors and appropriate splitting schemes. Recently, a pixelated-array interferometer has also been developed that captures four phase-shifted interferograms simultaneously. All of these schemes, however, add to the cost and complexity of the system. Another solution is to use spatial phase-shifting, which requires an appropriate carrier. The intensity distribution in the interferogram can be expressed as

I(x, y) = a(x, y) + b(x, y) cos[2πf0x + φ(x, y)]   (4.18)

where f0 is the spatial carrier along the x direction. The spatial carrier is chosen such that there are either three or four pixels across adjacent fringes, so that three-step or four-step algorithms can be applied. There are several ways to generate the carrier frequency. The simplest is to tilt the reference beam appropriately. Polarization coding and gratings are also frequently used. We describe a procedure that is used in speckle metrology (Chapter 7).

We consider the experimental set-up shown in Figure 4.1. This arrangement is employed for the measurement of the in-plane displacement component.

The aperture separation is chosen such that the fringe width is equal to the width of three pixels of the CCD camera for the 120° phase-shift algorithm, and to the width of four pixels for the 90° phase-shift algorithm, as shown in Figure 4.2. The average speckle size is nearly equal to the fringe width. The intensity distribution I(xn, y) at



FIGURE 4.1 Measurement of in-plane component using spatial phase-shifting.

the nth pixel along the x direction is given by

I(xn, y) = a(xn, y) + b(xn, y) sinc(Φ/2) cos[φ(xn, y) + nβ + C],   n = 1, 2, 3, . . . , N   (4.19)

where φ(xn, y) is the phase to be measured,

β = (2π/λ)(d/v) dp

is the phase change from one pixel to the next (adjacent pixels being separated by dp along the x direction), and

Φ = (2π/λ)(d/v) dt


FIGURE 4.2 (a) 120◦ spatial phase-shifting. (b) 90◦ spatial phase-shifting.


is the phase-shift angle over which a pixel of width dt integrates the intensity, C is a constant phase offset, and N is the number of pixels in a row. Owing to the object deformation, the initial phase φ1(xn, y) changes to φ2(xn, y) = φ1(xn, y) + δ(xn, y). If φ1(xn, y) and φ2(xn, y) are measured, the phase difference δ(xn, y) can be obtained. In order to calculate φn = φ(xn, y), we need at least three adjacent values of I(xn, y). If these values are picked up at 120° phase intervals, then the phase φn is given by

φn = tan⁻¹[√3 (In−1 − In+1) / (2In − In−1 − In+1)]   mod π   (4.20)

In+m = a + b sinc(Φ/2) cos[φn + (2π/λ)(d/v) dp (n + m)],   n = 2, 3, . . . , N − 1;   m = −1, 0, +1   (4.21)

The constant C has been dropped from Equation 4.21. It may be seen that in spatial phase-shifting it is the modified phase

ψn = [φn + (2π/λ)(d/v) dp n]   mod π

rather than the speckle phase φn mod π, that is reconstructed. Therefore, the phase offset

(2π/λ)(d/v) dp n

must be subtracted in a later step.
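The three-pixel evaluation of Equation 4.20, followed by subtraction of the carrier offset, can be sketched as follows (NumPy; the carrier of exactly three pixels per fringe and the smooth test phase are illustrative assumptions):

import numpy as np

N = 300                                   # pixels in a row
n = np.arange(N)
beta = 2 * np.pi / 3                      # carrier: 120 degrees per pixel, i.e. three pixels per fringe
phi = 1.5 * np.sin(2 * np.pi * n / N)     # slowly varying phase to be measured

I = 4.0 + 2.0 * np.cos(phi + n * beta)    # Equation 4.19, with the sinc factor absorbed into b

Im1, I0, Ip1 = I[:-2], I[1:-1], I[2:]                               # I_(n-1), I_n, I_(n+1)
psi = np.arctan2(np.sqrt(3.0) * (Im1 - Ip1), 2.0 * I0 - Im1 - Ip1)  # Equation 4.20 (full 2*pi range)

phi_rec = np.angle(np.exp(1j * (psi - n[1:-1] * beta)))             # subtract the carrier offset
err = np.angle(np.exp(1j * (phi_rec - phi[1:-1])))
print("max error (rad):", np.abs(err).max())   # small; the evaluation assumes phi is constant over three pixels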

4.8 METHODS OF PHASE-SHIFTING

A number of methods for phase-shifting have been proposed in the literature. They usually introduce phase shifts sequentially. Some methods have also been adapted for simultaneous phase shifts, and are thus useful for studying dynamic events. These methods include the following:

• PZT-mounted mirror
• Tilt of a glass plate between exposures
• Rotation of the phase-plate in a polarization phase-shifter
• Motion of a diffraction grating
• Use of a computer-generated hologram (CGH) written on a spatial light modulator
• Special methods

4.8.1 PZT-MOUNTED MIRROR

In an interferometric configuration, one of the mirrors is mounted on a lead zirconate titanate (PZT) actuator, which can be driven by applying a voltage. In a Michelson interferometer, a shift of the mirror by λ/8 will introduce a phase shift of π/2 (λ/4 in path



FIGURE 4.3 Phase-shifting by a PZT-driven mirror: (a) Michelson interferometer configuration; (b) Mach–Zehnder interferometer configuration.

change). Of course, any amount of phase shift can be introduced by a PZT-mounted mirror. In a Mach–Zehnder interferometer, the mirrors are inclined at 45°, and hence the mirror has to be shifted by √2λ/16 to introduce a phase shift of π/2. Figure 4.3 shows schematic representations of phase-shifting in Michelson and Mach–Zehnder interferometers. PZT-mounted mirrors can be used in any interferometric set-up, but the magnitude of the mirror shift has to be calculated for each configuration. Care should be taken to avoid overshoot and hysteresis.

4.8.2 TILT OF GLASS PLATE

A glass plate of constant thickness and refractive index is introduced into the reference arm of the interferometer. The plate is tilted by an appropriate angle to introduce the required phase shift, as shown in Figure 4.4. It may be noted that the tilt also results in a lateral shift of the beam, although this is negligibly small. When a plate of refractive index μ and uniform thickness t is placed normally to the beam, it introduces a path change (μ − 1)t. When it is inclined such that the incident beam strikes it at an angle i, the plate introduces a path difference μt cos r − t cos i, where r is the angle of refraction in the plate. Tilting the plate by an angle i thus introduces a net path difference μt(1 − cos r) − t(1 − cos i). The plate is therefore tilted by an


FIGURE 4.4 Phase-shifting by tilt of a plane parallel plate.

Page 97: A numerical simulation model to facilitate the understanding of the properties of a diffraction grating

“DK4217_C004.tex” — page 70[#12] 14/5/2009 20:30

70 Optical Methods of Measurement

appropriate angle between exposures to introduce the desired phase-step. This method is applicable only with collimated illumination.

4.8.3 ROTATION OF POLARIZATION COMPONENT

Polarization components such as half-wave plates (HWPs), quarter-wave plates (QWPs), and polarizers have been used as phase-shifters in phase-shifting interferometry by rotating them in the path of the interfering beams. Different configurations of polarization-component phase-shifters have been used, depending on their location in the interferometer. Possible locations are the input end, the output end, or one of the arms of the interferometer. Figure 4.5 shows configurations of phase-shifters at different locations in an interferometer.

Figure 4.5a shows a phase-shifter for the input end. It consists of a rotating half-wave plate followed by a quarter-wave plate fixed at an angle of 45°. The way in which polarization phase-shifters work can be understood with the help of the Jones calculus. The input to the interferometer is linearly polarized at 0°. The polarized light after the quarter-wave plate (QWP) Q is split by a polarization beam-splitter (PBS) into two orthogonal linearly polarized beams. The QWPs Q1 and Q2 serve to rotate the plane of polarization of the incoming beam by 90°, so that both beams proceed towards the polarizer P oriented at 45°, which takes a component from each beam to produce an interference pattern. A rotation of the half-wave plate (HWP) H shifts the phase of the interference pattern. The phase shift is four times the angle of rotation of the HWP.

A phase-shifter for the output end is shown in Figure 4.5b. It consists of a quarter-wave plate at 45° followed by a rotating polarizer. Figure 4.5c shows a phase-shifter for use in one of the arms of an interferometer. This phase-shifter consists of a fixed quarter-wave plate and a rotating quarter-wave plate. A rotating polarizer P (Figure 4.5b) and a rotating QWP Q3 (Figure 4.5c) will produce phase modulation at twice the rotation angle.

The intensity distribution is measured at different orientations of the rotating polarization component. A high degree of accuracy can be achieved by mounting the rotating component on a precision divided circle with incremental or coded position information suitable for electronic processing.

Polarization interferometers for phase-shifting as discussed above can also be used with liquid crystals, which can provide variable retardation. Likewise, an electro-optic effect can also be used to produce a variable phase shift in a polarization interferometer. Polarization components have also been employed successfully in white-light phase-shifting interferometry.

In a variant of the method in which the polarizer is rotated, with the right circularly polarized and left circularly polarized beams passing through a polarizer inclined at an angle α, a phase change of 2α is introduced. Therefore, using a CGH together with polarizers oriented at 0°, 45°, 90°, and 135°, phase shifts of π/2, π, 3π/2, and 2π are introduced simultaneously. Using micro-optics that function as polarizers, WYCO has introduced a system known as a pixelated phase-shifter. This introduces π/2, π, 3π/2, and 2π phase shifts in the four quadrants, thereby achieving simultaneous phase-shifting.



FIGURE 4.5 Schematic diagrams of polarization phase-shifters for use (a) at the input, (b) at the output, and (c) in the reference arm of an interferometer. PBS, polarizing beam-splitter; M1, M2, mirrors; P, fixed polarizer; Q, Q1, Q2, fixed quarter-wave plates; H, rotating half-wave plate; Q3, rotating quarter-wave plate; Prot, rotating polarizer.


FIGURE 4.6 Phase-shifting by translation of a grating.

4.8.4 MOTION OF A DIFFRACTION GRATING

A diffraction grating produces several orders; the first-order beam is used for phase-shifting. Translation of the grating in its plane by its pitch p introduces a phase change of 2π in the diffracted first-order beam. Therefore, the grating is translated by p/4 and a frame is captured (Figure 4.6). Successive frames are captured by translating the grating in steps of p/4. Alternatively, the grating is moved at a constant velocity v, and frames are captured at instants Np/4v, where N = 1, 2, 3, and 4 for a four-step algorithm with a π/2 step.

4.8.5 USE OF A CGH WRITTEN ON A SPATIAL LIGHT MODULATOR

A phase shift of π/2 can be introduced using a CGH written on a spatial light modulator. A CGH of a known wavefront, usually a plane wavefront, is written using a phase-detour method. If each cell of the CGH is divided into four elements, and these elements are filled in according to the phase at that cell, then a shift of the filled elements by one element will introduce a global phase shift of π/2. Thus, the filled elements over the whole CGH are shifted by one element after each successive frame, to introduce sequential phase shifts of 0, π/2, π, and 3π/2.

4.8.6 SPECIAL METHODS

Common-path interferometers are used for optical testing because of their insensitivity to vibrations and to refractive index changes over the optical path. Since both test and reference beams traverse the same path, it is difficult to separate the reference and test beams for phase-shifting. A clever arrangement has been suggested using a birefringent scatter-plate. There have also been some modifications of the Smartt point-diffraction interferometer.

4.9 FOURIER TRANSFORM METHOD

The Fourier transform method is used in two ways: (i) without a spatial frequency carrier and (ii) with a spatial frequency carrier added (spatial heterodyning). We will first describe the Fourier transform method without a spatial carrier. We again


express the intensity distribution in the interference pattern as

I(x, y) = a(x, y) + b(x, y) cos δ(x, y)

With the cosine function written in terms of complex exponentials, that is,

2 cos δ(x, y) = exp[iδ(x, y)] + exp[−iδ(x, y)]

the intensity distribution can be expressed as

I(x, y) = a(x, y) + ½b(x, y) exp[iδ(x, y)] + ½b(x, y) exp[−iδ(x, y)]
        = a(x, y) + c(x, y) + c*(x, y)   (4.22)

where c(x, y) and c*(x, y) are now complex functions. We take the Fourier transform of this distribution, yielding

I(μ, ν) = A(μ, ν) + C(μ, ν) + C∗(−μ, −ν) (4.23)

with μ and ν being the spatial frequencies. Since the intensity distribution I(x, y) is a real-valued function in the spatial domain, its Fourier transform is Hermitian in the frequency domain; that is,

I(μ, ν) = I∗(−μ, −ν) (4.24)

The real part of I(μ, ν) is even and the imaginary part is odd. The amplitude distribution |I(μ, ν)| is point-symmetric with respect to the DC term I(0, 0). It can now be seen that A(μ, ν) contains the zero-frequency peak I(0, 0) and the low-frequency content due to the slow variation of the background. C(μ, ν) and C*(−μ, −ν) carry the same information, but with a sign ambiguity. By bandpass filtering in the spatial frequency domain, A(μ, ν) and one of the terms C(μ, ν) or C*(−μ, −ν) are filtered out. The remaining spectrum, C*(−μ, −ν) or C(μ, ν), is not Hermitian, so the inverse Fourier transform gives a complex c(x, y) with nonvanishing real and imaginary parts. The interference phase δ(x, y) is obtained as follows:

δ(x, y) = tan⁻¹ [Im{c(x, y)} / Re{c(x, y)}]   (4.25)

where Re{ } and Im{ } represent the real and imaginary parts, respectively.

4.10 SPATIAL HETERODYNING

In spatial heterodyning, a carrier frequency is added to the interference pattern. The spatial carrier frequency is chosen to be higher than the maximum frequency content of a(x, y), b(x, y), and δ(x, y). The intensity distribution in the interference pattern can then be expressed as

I(x, y) = a(x, y) + b(x, y) cos[δ(x, y) + 2πf0x]


where f0 is the spatial carrier frequency along the x direction. In interferometry, the carrier is usually added by tilting the reference mirror. There are other ways of introducing the carrier in holographic interferometry, speckle interferometry, and electronic speckle pattern interferometry. The Fourier transform of this expression can be expressed as

I(μ, ν) = A(μ, ν) + C(μ − f0, ν) + C∗(−μ − f0, −ν) (4.26)

Since the spatial carrier frequency is chosen to be higher than the maximum frequency content of a(x, y), b(x, y), and δ(x, y), the spectra A(μ, ν), C(μ − f0, ν), and C*(−μ − f0, −ν) are separated. The spectrum A(μ, ν) is centered on μ = 0, ν = 0 and carries the information about the background. The spectra C(μ − f0, ν) and C*(−μ − f0, −ν) are placed at μ = f0, ν = 0 and μ = −f0, ν = 0, that is, symmetrically about the DC term. If, by means of an appropriate bandpass filter, A(μ, ν) and C*(−μ − f0, −ν) are eliminated and C(μ − f0, ν) is subsequently shifted by f0 towards the origin, thereby removing the carrier, we can obtain c(x, y) by inverse Fourier transformation. The interference phase is then obtained from the real and imaginary parts of c(x, y).
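The complete chain of this section (forward transform, band-pass selection of one sideband, shift to the origin, inverse transform, arctangent) can be sketched with NumPy as follows. The carrier frequency, filter width, and test phase below are assumptions chosen only to make the example self-contained:

import numpy as np

# Synthetic carrier-fringe pattern (Equation 4.18)
nx = ny = 256
X, Y = np.meshgrid(np.arange(nx) / nx, np.arange(ny) / ny)
f0 = 32                                                   # carrier: 32 fringes across the field
phi = 3.0 * np.exp(-((X - 0.5) ** 2 + (Y - 0.5) ** 2) / 0.05)
I = 10.0 + 4.0 * np.cos(2 * np.pi * f0 * X + phi)

# Forward transform and band-pass filtering around +f0
S = np.fft.fftshift(np.fft.fft2(I))
mu = np.fft.fftshift(np.fft.fftfreq(nx, d=1.0 / nx))      # frequency axes in fringes per field
nu = np.fft.fftshift(np.fft.fftfreq(ny, d=1.0 / ny))
MU, NU = np.meshgrid(mu, nu)
C = S * ((np.abs(MU - f0) < f0 / 2) & (np.abs(NU) < f0 / 2))   # keep only C(mu - f0, nu)

# Shift the sideband to the origin (remove the carrier), then invert
C = np.roll(C, -f0, axis=1)
c = np.fft.ifft2(np.fft.ifftshift(C))
phi_rec = np.angle(c)                                     # wrapped interference phase, Equation 4.25

err = np.angle(np.exp(1j * (phi_rec - phi)))
print("rms wrapped error (rad):", np.sqrt(np.mean(err ** 2)))   # small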

If, instead of C(μ, ν), the inverse transform of C*(−μ, −ν) is taken, the result is −δ(x, y). The sign ambiguity is always present when a single interferogram is evaluated. Information about the sign of the phase is obtained when an additional phase-shifted interferogram is available. Let us now write down the expressions for the intensity distributions in the two interferograms, with one of them phase-stepped by α:

I1(x, y) = a(x, y) + b(x, y) cos δ(x, y)

I2(x, y) = a(x, y) + b(x, y) cos[δ(x, y) + α]

Theoretically, the value of α must lie in the range 0 < α < π; in practice, a value in the range π/3 < α < 2π/3 is recommended. If this condition is fulfilled, the exact value of α need not be known. Again, we can express these intensity distributions in terms of complex exponentials and then take the Fourier transforms as described earlier. After bandpass filtering and taking the inverse Fourier transforms of the spectra belonging to each intensity distribution, we obtain

c1(x, y) = ½b(x, y) exp[iδ(x, y)]

c2(x, y) = ½b(x, y) exp{i[δ(x, y) + α]}

The phase step α is calculated pointwise from the expression

α(x, y) = tan⁻¹ {[Re{c1(x, y)} Im{c2(x, y)} − Im{c1(x, y)} Re{c2(x, y)}] / [Re{c1(x, y)} Re{c2(x, y)} + Im{c1(x, y)} Im{c2(x, y)}]}   (4.27)

The knowledge of α(x, y) is used for determination of the sign-corrected interference phase distribution δ(x, y) from the expression

δ(x, y) = sign{α(x, y)} tan⁻¹ [Im{c1(x, y)} / Re{c1(x, y)}]   (4.28)

The phase is unwrapped as usual.




5 Detectors and Recording Materials

Detection in optical measurement is a very important step in the measurement process. A detector converts incident optical energy into electrical energy, which is then measured. Many physical effects have been used to detect optical radiation, for example, photo-excitation and photo-emission, the photoresistive effect, and the photothermal effect. Detectors based on the photo-excitation or photo-emission of electrons are by far the most sensitive and provide the highest performance. These detectors are, however, small-area detectors, measuring or sampling the optical energy over a very small area. In the techniques and methods described in this book, large-area or image detectors are employed, except in heterodyne holographic interferometry. These include photographic plates, photochromics, thermoplastics, photorefractive crystals, and charge-coupled device (CCD) or complementary metal-oxide-semiconductor (CMOS) arrays, among others. CCD arrays, which are now used frequently, consist of an array of small-area (pixel) detectors based on photo-excitation. We therefore first discuss the use of semiconductor devices as detectors, then photomultiplier tubes (PMTs), and finally image detectors.

A variety of detectors are used to detect or measure optical signals. Photodetectors fall into two general categories: thermal detectors and quantum detectors. In thermal detectors, the incident radiant energy is converted to heat, which in turn may cause a rise in temperature and a corresponding measurable change in resistance, electromotive force (emf), and so on. These detectors are not used in the techniques described in this book, and hence will not be discussed here. Examples of quantum detectors are PMTs, semiconductor photodiodes, photoconductors, and phototransistors. In such detectors, the incident optical energy, that is, the photons, causes the emission of electrons in PMTs or the generation of electron-hole pairs in diodes. In photoconductors, a resistance change is produced directly by the absorption of photons.

5.1 DETECTOR CHARACTERISTICS

Detectors are compared by the use of several defined parameters. These are (i) responsivity R, (ii) detectivity D or D*, (iii) noise equivalent power (NEP), (iv) noise, (v) spectral response, and (vi) frequency response. The responsivity is the ratio of the output signal (in amperes or volts) to the incident intensity, and is


measured in μA/(mW/cm²). For photodiodes, the quantum efficiency, which is the number of electron-hole pairs generated per incident photon, is sometimes given. The responsivity is proportional to the quantum efficiency. The detectivity is the ratio of the responsivity to the noise current (or voltage) produced by the detector; this, therefore, is the signal-to-noise ratio (SNR) divided by the intensity. A parameter that is often used is D*, which includes the dependence on the noise frequency bandwidth Δf and the detector area A, and is connected to the detectivity through the relation

D∗ = D(AΔf )1/2 (5.1)

NEP is the reciprocal of D; it is thus the incident optical power required to produce an SNR of 1. There are several sources of noise, including Johnson (thermal) noise, shot noise, and generation–recombination noise. In all of these cases, the noise current is proportional to (Δf)^(1/2). Spectral response refers to the variation of the responsivity as a function of wavelength. Finally, the frequency response of a detector refers to its ability to respond to a chopped or modulated beam. Most solid-state detectors behave like low-pass filters, with the responsivity being given by

R = R0 / (1 + ω²τ²)^(1/2)   (5.2)

where R0 is the responsivity at zero frequency, f = ω/2π is the frequency of modulation, and τ is the time constant. The cut-off frequency is fc = (2πτ)⁻¹, at which R = R0/√2.
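As a small worked example of Equation 5.2 (the 1 μs time constant is an arbitrary illustrative value):

import numpy as np

def responsivity(f, R0, tau):
    # Frequency-dependent responsivity, Equation 5.2.
    omega = 2 * np.pi * f
    return R0 / np.sqrt(1 + (omega * tau) ** 2)

tau = 1e-6                              # assumed time constant: 1 microsecond
fc = 1 / (2 * np.pi * tau)              # cut-off frequency, about 159 kHz
print(f"f_c = {fc / 1e3:.1f} kHz, R(f_c)/R0 = {responsivity(fc, 1.0, tau):.3f}")   # 0.707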

5.2 DETECTORS

The incident photon creates an electron-hole pair in semiconductor devices and causes the release of an electron in a PMT. There is thus a frequency below which the detector does not respond to optical energy. The frequency ν of the radiation must be greater than the threshold frequency νth, where hνth = Eg, with Eg being the band gap in a semiconductor or the work function of the cathode material in a PMT, and h being Planck's constant.

5.2.1 PHOTOCONDUCTORS

Photoconductor detectors are of two types: intrinsic and extrinsic. In an intrinsic semiconductor, an incident photon, on absorption, excites an electron from the valence band to the conduction band. This requires that the photon energy hν ≥ Eg, where Eg is the band gap. The cut-off wavelength λc is given by

λc(μm) = 1.24/Eg(eV) (5.3)

By choosing an appropriate Eg, the spectral response of the detector can be tailored. In fact, absorption of photons results in the generation of both excess electrons and holes, which are free to conduct under an applied electric field. The excess conductivity is called the intrinsic photoconductivity.
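Equation 5.3 gives an immediate feel for the usable spectral range of a few common detector materials. In the minimal sketch below, the band-gap values are approximate room-temperature figures quoted for illustration only:

def cutoff_wavelength_um(band_gap_eV):
    # Cut-off wavelength in micrometres, Equation 5.3.
    return 1.24 / band_gap_eV

materials = {"Si": 1.12, "Ge": 0.66, "CdS": 2.42, "InSb": 0.17}   # approximate band gaps (eV)
for name, eg in materials.items():
    print(f"{name:5s} Eg = {eg:4.2f} eV -> lambda_c = {cutoff_wavelength_um(eg):5.2f} um")
# Si responds out to about 1.1 um, Ge to about 1.9 um, InSb well into the infrared.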


When the incident photons excite electrons from donor levels to the conduction band, or holes from acceptor levels to the valence band, the excess conductivity is called the extrinsic photoconductivity. In this case, excitation creates only one type of excess carrier (electrons or holes) by ionizing a donor or an acceptor, respectively. The spectral response is determined by the donor or acceptor energies. For donor excitation, the cut-off wavelength is given by

λc = 1.24/(Ec − Ed)   (5.4)

where Ec and Ed are the energies of the conduction band and the donor level, respectively. Since Ec − Ed, or Ea − Ev (where Ea and Ev are the energies of the acceptor level and the valence band, respectively), is much smaller than Eg, these detectors are sensitive in the long-wavelength infrared region.

When a beam of light of intensity I and frequency ν is incident on a photoconductor detector, the rate of generation of carriers can be expressed as follows:

G = ηI/hν   (5.5)

where η is the quantum efficiency. The photocurrent i generated in the external circuit is

i = ηe(τ0/τd)(I/hν)   (5.6)

where τ0 is the lifetime of the carriers, τd is the transit time (the time required by the carriers to traverse the device), and the factor τ0/τd is interpreted as the photoconductive gain. It can be seen that the photocurrent is proportional to the light intensity.

Photoconductors exhibit a memory effect; that is, at a given illumination, the photoconductor may have several resistance values, depending on its previous history of illumination. These devices are not stable, and hence are not suited for precise and accurate measurements. They are, however, inexpensive, simple, bulky, and durable devices.

5.2.2 PHOTODIODES

The commonly known photodiodes include the simple p–n junction diode, the p–i–n diode, and the avalanche diode. The fabrication and operation of photodiodes are based on p–n technology. A p–n junction is formed by bringing p-type and n-type materials together. The dominant charge carriers in n-type and p-type semiconductors are electrons and holes, respectively. At the instant when the materials are joined together, there is an almost infinite concentration gradient across the junction for both the electrons and the holes (Figure 5.1a). Consequently, the electrons and holes diffuse in opposite directions. The diffusion process, however, does not continue indefinitely, since a potential barrier develops that opposes the diffusion. For every free electron leaving the n-type region, an immobile positive charge is left behind. The amount of positive charge increases as the number of departing electrons increases. Similarly,



FIGURE 5.1 p–n junctions: (a) just formed; (b) at equilibrium.

as the holes depart from the p-type region, immobile negative charges build up in the p-type region. The mobile electrons and holes combine and neutralize each other, forming a region on both sides of the junction where there are no free charge carriers. This region is called the depletion region, and it acts as an insulator owing to the absence of free carriers. The immobile charges of opposite polarity generate a barrier voltage, which opposes further diffusion (Figure 5.1b). The barrier voltage depends on the semiconductor material; its value for silicon is 0.7 V. The junction appears like a capacitor, that is, with a nonconductive material separating the two conductors.

When a forward bias is applied to the junction, it opposes the barrier voltage, thereby reducing the thickness of the depletion region and increasing the junction capacitance. When the bias reaches the barrier voltage, the depletion region is eliminated and the junction becomes conductive. When a reverse bias is applied, the depletion region is widened, the junction capacitance is reduced, and the junction stays nonconductive. The reverse-biased junction can, however, conduct current when free carriers are introduced into it. These free carriers can be introduced by incident radiation. When the radiation falls in the depletion region, or within a diffusion length of it, electron-hole pairs are created. The electrons (as minority carriers in the p-type region) drift toward the depletion region, cross it, and hence contribute to the external current. Similarly, holes created by photo-absorption in the n-type region drift in the opposite direction and also contribute to the current. Within the depletion region itself, the electron-hole pairs are separated, and the electrons and holes drift away from each other in opposite directions; thus, both contribute to the current flow.

If we consider only the photons absorbed within a diffusion length of the depletion region and neglect the recombination of the generated electron-hole pairs, the photocurrent i is given by

i = eηI/hν   (5.7)

In a p–n junction diode, most of the applied bias voltage appears across the depletion region. Thus, only those pairs formed within this region, or capable of diffusing into it, can be influenced by the externally applied field and contribute to the external current.

Figure 5.2 shows a schematic of a simple p–n junction diode. A heavily doped p-type material and a lightly doped n-type material form the p–n junction. Because of the different doping levels, the depletion region extends deeper into the n-type material.


FIGURE 5.2 Schematic of a p–n junction diode.

At the bottom of the diode there is usually a region of n+-type material, which is the substrate. The substrate terminates at the bottom electrical contact. The top electrical contact is fused to the p-type semiconductor. An insulating layer of silicon dioxide is provided, as shown in Figure 5.2.

The energy gap of most semiconductors is of the order of 1 eV, which corresponds to an optical wavelength of about 1 μm. Thus, photodiodes can respond to light in the spectral range from ultraviolet to near-infrared. However, the penetration of the photons through the various regions depends on the frequency of the incident radiation. Ultraviolet radiation is strongly absorbed at the surface, while infrared radiation can penetrate deep into the structure. In a Schottky photodiode, the p–n junction is replaced by a metal–semiconductor junction. A thin layer (<10 nm) of gold is deposited on the n-type semiconductor. This layer is almost transparent and forms the metal–semiconductor junction. A schematic of a Schottky diode is shown in Figure 5.3. Since the depletion region commences right at the semiconductor surface, Schottky photodiodes have superior ultraviolet response.
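The correspondence between a band gap of about 1 eV and a cutoff wavelength of about 1 μm follows from λc = hc/Eg; a minimal sketch, using the typical silicon band gap of 1.12 eV as an assumed value, is given below.

# Long-wavelength cutoff of a photodiode: lambda_c = h*c/E_g
h = 6.626e-34    # Planck constant (J s)
c = 3.0e8        # speed of light (m/s)
eV = 1.602e-19   # one electronvolt in joules

E_g = 1.12 * eV              # band gap of silicon (assumed typical value)
lambda_c = h * c / E_g       # cutoff wavelength (m)
print(f"Cutoff wavelength: {lambda_c * 1e9:.0f} nm")  # about 1100 nm, in the near-infrared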

FIGURE 5.3 Schematic of a Schottky diode.


Since the minority carriers may have to travel some distance, up to the diffusion length, before being transported across the p–n junction, the response of a p–n junction diode is relatively slow. This is a particularly serious drawback in some semiconducting materials such as silicon, in which the depletion region is small compared to the diffusion length. This drawback is overcome by utilizing a p–i–n structure. The thickness of the depletion region can be controlled by doping. In a p–i–n diode, an intrinsic region, which has a very high resistivity, is sandwiched between the p- and n-regions. Figure 5.4a shows the microstructure of a p–i–n diode. The diode is usually operated with a reverse-bias voltage, which expands the depletion region. Furthermore, the voltage drop occurs mostly across the intrinsic layer (Figure 5.4b). When a photon whose energy exceeds the threshold value enters the intrinsic region, electron-hole pairs are created. Under the action of the applied electric field, the photogenerated electrons and holes are swept swiftly toward the n- and p-type regions, respectively, and create a signal current. Figure 5.4c shows a cross-section of a typical p–i–n diode. The avalanche photodiode is a junction diode with an internal gain mechanism.

5.2.3 PHOTOMULTIPLIER TUBE

The photomultiplier tube (PMT) is one of the most sensitive optical detectors: a photon flux as low as one photon per second can be detected. Many PMT designs exist. Figure 5.5 shows a cross-section of a commonly used PMT. It consists of a cathode, a series of dynodes, and an anode in a squirrel cage. A photon of frequency ν incident on the cathode ejects an electron, provided that hν > φ, where φ is the work function of the cathode material. A variety of photocathodes cover the spectral range from 0.1 to 11 μm. Figure 5.6 shows the responsivities and quantum efficiencies of some photocathode materials.

The series of dynodes constitutes a low-noise amplifier. These dynodes are at progressively higher positive potential. A photoelectron emitted from the cathode is accelerated toward the first dynode. Secondary electrons are ejected as a result of the collision of the photoelectron with the dynode surface. This is the first stage of amplification. These secondary electrons are accelerated toward the more positive dynode, and the process is repeated. The amplified electron beam is collected by the anode. The multiplication of electrons at each dynode, or the electron gain A, depends on the dynode material and the potential difference between the dynodes. The total gain G of the PMT is given by

G = A^n (5.8)

where n is the number of dynodes. The typical gain is around 10^6. PMTs are extremely fast and sensitive, but are expensive and require sophisticated associated electronics. These tubes are influenced by stray magnetic fields. However, there are PMT designs that are not affected by magnetic fields.
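Equation 5.8 can be checked with representative numbers; the per-dynode gain of 4 and the count of 10 dynodes below are assumed values, not the parameters of any particular tube.

# Total PMT gain from Equation 5.8: G = A**n
A = 4    # secondary-emission gain per dynode (assumed)
n = 10   # number of dynodes (assumed)
G = A ** n
print(f"Total gain G = {G:.2e}")  # about 1e6, consistent with the typical gain quoted above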

5.3 IMAGE DETECTORS

The detectors discussed so far are single-element detectors. In many applications, such as electronic speckle pattern interferometry (ESPI) and robotic vision, spatially varying information, say an image, is to be recorded. This is achieved by area


FIGURE 5.4 (a) Microstructure of a p–i–n diode. (b) Variation of electric field with distance. (c) Schematic of a p–i–n photodiode.


FIGURE 5.5 Cross-section of a squirrel-cage PMT.

array detectors. It is also possible to record an image with a linear array detector by scanning. There are three mechanisms at play in imaging with detector arrays. First, the image is intercepted by the elements of the array and converted to an electrical charge distribution, which is proportional to the intensity in the image. Second, the charges are read out as elements of an image while retaining correlation with the position on the array where each charge was generated. Finally, the image information is displayed or stored.

Area or focal plane arrays could be based on a single detector, but this would require many wires and processing electronics. The concept of a CCD makes the retrieval of detector signals easy and eliminates the need for a maze of wires. A CCD in its simplest form is a closely spaced array of metal–insulator–semiconductor (MIS) capacitors. The most important is the metal–oxide–semiconductor (MOS) capacitor, made from silicon with silicon dioxide as the insulator. This can be made monolithic. The basic structure of a CCD is a shift register formed by an array of closely spaced potential-well capacitors. A potential-well capacitor is shown schematically in Figure 5.7a. A thin layer of silicon dioxide is grown on a silicon substrate. A transparent electrode is then deposited over the silicon dioxide as a gate, to form a tiny capacitor. When a positive potential is applied to the electrode, a depletion region or potential well is created in the silicon substrate directly beneath the gate. Electron-hole pairs are generated on absorption of incident light. The free electrons generated in the vicinity of the capacitor are stored and integrated in the potential well. The number of electrons (charge) in the well is a measure of the incident light intensity. In a CCD, the charge is generated by the incident photons, passed between spatial locations, and detected at the edge of the CCD. The charge position in the MOS array of capacitors is controlled electrostatically by voltage levels. With appropriate application of these voltage levels and their relative phases, the capacitors can be used to store and transfer the charge packets across the semiconductor substrate in a controlled manner.


FIGURE 5.6 Responsivity and quantum efficiency of some common cathode materials. (From Uiga, E., Optoelectronics, Prentice Hall, Englewood Cliffs, NJ, 1995. With permission.)

Figure 5.7b illustrates the principle of charge transfer through the propagation of potential wells in a three-phase clocking layout. In phase φ1, gates G1 and G4 are turned on, while all other gates are turned off. Hence, electrons are collected in wells W1 and W4. In phase φ2, gates G1, G2, G4, and G5 are turned on. Therefore, wells W1 and W2, and W4 and W5, merge into wider wells. In phase φ3, gates G1 and G4 are turned off, while G2 and G5 are left on. The electrons stored earlier in W1 are now shifted to W2. Similarly, the electrons stored in W4 are now shifted to W5. By repeating this process, all charge packets will be transferred to the edge of the CCD, where they are read by external electronics. Area detector arrays with pixels in numbers of 256 × 256 to 4096 × 4096 are available. The center-to-center distances of the pixels range from 10 to 40 μm, but most commonly used devices have a pixel separation of 12.7 μm. A dynamic range of 1000 to 1 is quite common. The sensitivity of video-rate CCD cameras is of the order of 10^−8 W/cm^2, which


FIGURE 5.7 (a) Structure of a CCD pixel. (b) Mechanism of charge transfer in a three-phase clocking layout.

corresponds to about 0.05 lux. CCD cameras are available with exposure times as small as 1/10,000 second, achieved by electronic shuttering, and an SNR of 50 dB.
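The shift-register readout described above can be mimicked with a toy model in which every charge packet moves one well closer to the sense node on each clock cycle; the Python sketch below is purely illustrative and ignores transfer inefficiency, dark current, and noise.

# Toy model of CCD readout: charge packets are shifted one well per clock cycle
# toward the sense node at the edge of the register, where they are read out.
def read_out_register(wells):
    """Shift all charge packets to the register edge and return them in readout order."""
    wells = list(wells)
    readout = []
    for _ in range(len(wells)):
        readout.append(wells[-1])      # the sense node reads the last well
        wells = [0] + wells[:-1]       # every remaining packet moves one well toward the edge
    return readout

# Charge packets (arbitrary units) generated by an illumination pattern on six pixels
pixels = [10, 50, 200, 120, 30, 5]
print(read_out_register(pixels))       # [5, 30, 120, 200, 50, 10]: the last pixel is read first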

CMOS sensors are interesting alternatives to CCD sensors in optical metrology. CMOS sensors are relatively cheap and have lower power consumption. Other physical characteristics include random access, which allows fast readout of a small area of interest, and physical layout, which enables active electronic components to be located on each pixel and prevents blooming. CMOS sensors also have disadvantages, including lower sensitivity due to a smaller fill factor, higher temporal noise, higher pattern noise, higher dark current, and a nonlinear characteristic curve.

As a rule, the pixels in CCD sensors are built from MOS capacitors in which the electrons generated by photon absorption during the exposure are stored. The maximum number of electrons that can be stored in a pixel is the full-well capacity. In interline transfer (IT) and frame transfer (FT) sensors, the electrons are shifted into separate storage cells at the end of each exposure time. After this shifting, the next image can be exposed. During this exposure, the charge in the storage cells is shifted pixel by pixel into the sense node (readout node), where it is converted into the output voltage.

In CMOS sensors, the single pixels are built from photodiodes. Electronic components such as storage cells, transistors for addressing, and amplifiers can be assigned


to every pixel. This is why such sensors are called active pixel sensors (APS). There are two types: integrating and nonintegrating sensors. Nonintegrating sensors provide a pixel signal that depends on the instantaneous current in the photodiode (direct readout sensor). Owing to their nonlinear current-to-voltage conversion, these sensors usually have a logarithmic characteristic curve. In integrating sensors, the depletion-layer capacitance of the photodiode is usually used for charge storage. The characteristic curve of such sensors is slightly nonlinear.

For integrating sensors with a rolling shutter, the exposure and the readout of the single lines occur sequentially. In integrating sensors with a global shutter, each pixel has its own storage cell. All pixels are exposed at the same time, and at the end of the exposure time the charges in all pixels are shifted simultaneously into storage cells. Afterwards, the storage cells are read out sequentially. Owing to the random access to individual pixels, it is possible to read out a region of interest (ROI) of the whole sensor. High frame rates can thereby be achieved for small ROIs.

The recording of a dynamic process, such as the measurement of time-dependent deformations with ESPI, requires a camera with a suitable frame rate and the possibility of simultaneous exposure of all pixels. Therefore, only IT and FT sensors are suitable for CCD cameras, and only integrating sensors with a global shutter are suitable in the case of CMOS cameras.

5.3.1 TIME-DELAY AND INTEGRATION MODE OF OPERATION

Conventional CCD cameras are restricted to working with stationary objects. Object motion during exposure blurs the image. Time-delay and integration (TDI) is a special mode of CCD cameras, which provides a solution to the blurring problem. In the TDI mode, the charges collected from each row of detectors are shifted to the neighboring sites at a fixed time interval. As an example, consider four detectors operating in TDI mode as shown in Figure 5.8. The object is in motion, and at time t1, its image is formed on the first detector, which creates a charge packet. At time t2, the image moves to the second detector. Simultaneously, the pixel clock moves the charge packet to the well under the second detector. Here, the image creates an additional charge, which is added to the charge shifted from the first detector. Similarly, at time t3, the image moves to the third detector and creates a charge. Simultaneously, the charge from the second detector is moved to the well of the third detector. The charge thus increases linearly with the number N of detectors in TDI. Therefore, the signal increases with N. The noise also increases, but as √N, and hence the SNR increases as √N. The well capacity limits the maximum number of TDI elements that can be used. As is obvious, the charge packet must always be in synchronism with the image for the camera to work in TDI mode. The mismatch between the image scan rate and clock rate can adversely smear the output.
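The √N behaviour can be illustrated with a short Monte Carlo sketch in which every TDI stage adds the same mean signal together with independent noise; the signal and noise levels are arbitrary illustrative numbers.

# TDI illustration: the signal grows as N, the noise as sqrt(N), so the SNR grows as sqrt(N).
import random

def tdi_snr(n_stages, signal=100.0, noise_rms=10.0, trials=2000):
    """Estimate the SNR after summing n_stages noisy exposures of the same image point."""
    totals = []
    for _ in range(trials):
        totals.append(sum(signal + random.gauss(0.0, noise_rms) for _ in range(n_stages)))
    mean = sum(totals) / trials
    var = sum((t - mean) ** 2 for t in totals) / trials
    return mean / var ** 0.5

for N in (1, 4, 16):
    print(N, round(tdi_snr(N), 1))  # roughly 10, 20, 40: the SNR doubles for every fourfold increase in N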

5.4 RECORDING MATERIALS

We will now discuss the materials that record the image, that is, the intensity distribution. The recording is required either to keep a permanent record or to provide an input for measurement and processing. Photographic emulsions are by far the most


FIGURE 5.8 TDI mode of operation.

sensitive and widely used recording medium, both for photography and for holography. Several other recording media have been developed; these are listed in Table 5.1. We will discuss some characteristics of these media.

5.4.1 PHOTOGRAPHIC FILMS AND PLATES

A photographic film/plate consists of silver halide grains distributed uniformly in a gelatin matrix deposited in a thin layer on a transparent substrate: either a glass plate or an acetate film. When the photographic emulsion is exposed to light, the silver halide grains absorb optical energy and undergo a complex physical change; that is, a latent image is formed. The exposed film is then developed. The development converts the halide grains that have absorbed sufficient optical energy into metallic silver. The film is then fixed, which removes the unexposed silver halide grains while leaving the metallic silver. The silver grains are largely opaque at the optical frequency. Therefore, the processed film will exhibit spatial variation of opacity depending on the density of the silver grains in each region of the transparency.


TABLE 5.1
Recording Media for Photography and Holography

S. No. | Material | Spectral range (nm) | Recording process | Maximum spatial frequency (lines/mm) | Type of grating | Processing | Readout process | Diffraction efficiency (%)
1 | Photographic materials | 400–700 (<1300) | Reduction to Ag metal grains | >3000 | Plane/volume amplitude | Wet chemical | Density change | 5
  |  |  | Bleached to silver salts |  | Plane/volume phase | Wet chemical | Refractive index change | 20–50
2 | Dichromated gelatin | 250–520 and 633 | Photo-crosslinking | >3000 | Plane phase, volume phase | Wet chemical followed by heat | Refractive index change | 30, >90
3 | Photoresists | UV–500 | Photo-crosslinking or photopolymerization | <3000 | Surface relief/phase, blazed reflection | Wet chemical | Surface relief | 30–90
4 | Photopolymers | UV–500 | Photopolymerization | ~200–1500 bandpass | Volume phase | None, or post-exposure and post-heating | Refractive index change or surface relief | 10–85
5 | Photoplastics/photoconductor thermoplastics | Nearly panchromatic for PVK:TNF photoconductor | Formation of an electrostatic latent image, with electric-field-produced deformation of the heated plastic | 400–1000 bandpass | Plane phase | Corona charge and heat | Surface relief | 6–15
6 | Photochromics | 300–450 | Generally photo-induced new absorption bands | >3000 | Volume absorption | None | Density change | 1–2
7 | Ferroelectric crystals | 400–650 | Electro-optic/photorefractive effect | >3000 | Volume phase | None | Refractive index change | 90


Photographic emulsions (silver halides) are widely used for photography (incoherent recording) and holography (coherent recording). They are sensitive over a wide spectral range and also offer a very wide resolution range and good dynamic response. An apparent drawback of photographic materials is that they require wet development and fixing processing, followed by drying. However, the development process provides a gain of the order of a million, which amplifies the latent image formed during the exposure.

It was mentioned earlier that the spatial variation of light intensity incident on a photo-emulsion is converted into the variation of density of metallic silver grains; consequently, its transmission becomes spatially variable. The transmission function τ is related to the photographic density D, the density of the metallic silver grains per unit area, by

D = − log τ (5.9)

The transmission function τ is defined by

τ(x, y) = It(x, y)/Ii(x, y) (5.10)

where It(x, y) and Ii(x, y) are the transmitted and incident intensities, respectively. The reflection losses at the interfaces are ignored. The transmission is averaged over a very tiny area around the point (x, y).
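A short numerical example of Equations 5.9 and 5.10 is given below; the intensities are arbitrary illustrative values.

# Photographic density from the transmission function: tau = It/Ii, D = -log10(tau)
import math

I_incident = 1.0      # incident intensity (arbitrary units, assumed)
I_transmitted = 0.01  # transmitted intensity (arbitrary units, assumed)

tau = I_transmitted / I_incident
D = -math.log10(tau)
print(f"tau = {tau:.3f}, density D = {D:.1f}")  # tau = 0.010, D = 2.0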

One of the most commonly used descriptions of the photosensitivity of a photographic film is the Hurter–Driffield (H&D) curve. This is a plot of the density D versus the logarithm of the exposure E. The exposure E is defined as the energy per unit area incident on the film and is given by E = IT, where I is the incident intensity and T is the exposure time. Figure 5.9 illustrates a typical H&D curve for a photographic negative. If the exposure is below a certain level, the density is independent of exposure. This minimum density is usually referred to as gross fog. In the "toe" of the curve, the density begins to increase with log E. There follows a region where the density increases linearly with the logarithm of the exposure; this is the linear region of the curve, and the slope of this linear region is referred to as the film gamma γ. Finally, the curve saturates in a region called the "shoulder." There is no change in density with the logarithm of the exposure after the "shoulder." However, with very large exposures, solarization takes place.

The linear region of the H&D curve is generally used in conventional photography. A film with a large value of γ is called a high-contrast film, while a film with low γ is a low-contrast film. The γ of the film also depends on the development time and the developer. It is thus possible to obtain a prescribed value of γ by a judicious choice of film, developer, and development time.

Since the emulsion is usually used in the linear portion of the H&D curve, the density D when the film is given an exposure E can be expressed as

D = γn log E − D0 = γn log(IT) − D0 (5.11)

where the subscript n means that a negative film is being used and D0 is the intercept. Equation 5.11 can be written in terms of the transmission function τn of the negative


FIGURE 5.9 The Hurter–Driffield curve.

film as

τn = Kn I^(−γn) (5.12)

where Kn = 10^(D0) T^(−γn) is a positive constant. Equation 5.12 relates the incident intensity to the transmission function of the film after development. It can be seen that the transmission is a highly nonlinear function of the incident intensity for any positive value of γn. In many applications, it is required to have either a linear or a power-law relationship. This, however, requires a two-step process. In the first step, a negative transparency is made in the usual fashion, which will have a gamma γn1. The transmission of this negative transparency is given by

τn1 = Kn1 I^(−γn1) (5.13)

In the second step, the negative transparency is illuminated by a uniform intensity I0 and the light transmitted is used to expose a second film, which will have a gamma γn2. This results in a positive transparency with transmission τp, given by

τp = Kn2 (I0 τn1)^(−γn2) = Kn2 I0^(−γn2) Kn1^(−γn2) I^(γn1 γn2) = Kp I^(γp) (5.14)

where Kp is another positive constant and γp = γn1 γn2 is the overall gamma of the two-step process. Evidently, a positive transparency does provide a linear mapping of intensity when the overall gamma is unity.
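The behaviour of the two-step process can be verified numerically. The sketch below uses arbitrary gammas and constants, and simply checks that the positive transparency maps intensity as I^(γn1 γn2) (Equation 5.14), so that the mapping is linear when γn1 γn2 = 1.

# Two-step (negative, then positive) photographic process, Equations 5.12-5.14
import numpy as np

gamma_n1, gamma_n2 = 0.5, 2.0    # assumed gammas of the two films
K_n1, K_n2, I0 = 1.0, 1.0, 1.0   # assumed constants and uniform printing intensity

I = np.linspace(0.1, 1.0, 5)                    # original intensity distribution
tau_n1 = K_n1 * I ** (-gamma_n1)                # negative transparency, Equation 5.13
tau_p = K_n2 * (I0 * tau_n1) ** (-gamma_n2)     # positive transparency, Equation 5.14

print(np.allclose(tau_p, K_n2 * (I0 * K_n1) ** (-gamma_n2) * I ** (gamma_n1 * gamma_n2)))  # True
print(np.allclose(tau_p, I))  # True here: gamma_n1*gamma_n2 = 1 and the constants are unity, so the mapping is linear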

When photographic emulsions are used for holography or, in general, for coherent optical systems, the H&D curve is never used. Instead, the plot of amplitude transmittance t(x, y) versus exposure E is used. The amplitude transmittance is usually complex, since film introduces both amplitude and phase variations to the incident


FIGURE 5.10 Plots of amplitude transmittance |t| versus exposure E for several holographic emulsions.

plane wave. However, if the film is used in a liquid gate, the phase variations can be eliminated. The amplitude transmittance then is given by the square root of the transmission function; that is, |t(x, y)| = +√τ(x, y). Typical plots of |t(x, y)| versus E for several holographic emulsions are shown in Figure 5.10. Holographic recording is generally carried out in the linear region of the |t(x, y)| versus E curve.

Since formation of the latent image does not cause any changes in the optical properties during exposure, it is possible to record several holograms in the same photographic emulsion without any interaction between them. The information can be recorded in the form of either transmittance variations or phase variations. For recording in the form of phase variations, the amplitude holograms are bleached. Bleaching converts the metallic silver grains back to transparent silver halide crystals. Also, holographic information can be recorded in the volume, provided that the emulsion is sufficiently thick and has enough resolution.

5.4.2 DICHROMATED GELATIN

Dichromated gelatin is, in some respects, an ideal recording material for volume phase holograms, since it has large refractive index modulation capability, high resolution, and low absorption and scattering. The gelatin layer can be deposited on a glass plate and sensitized. Alternatively, the photographic plates can be fixed, rinsed, and sensitized. Ammonium, sodium, and potassium dichromates have been used as sensitizers. Most often, ammonium dichromate is used for sensitization. Sensitized gelatin thus obtained is called dichromated gelatin. Dichromated gelatin exhibits sensitivity in the wavelength range 250–520 nm. The sensitivity at 514 nm is about a fifth of that at 448 nm. Gelatin can also be sensitized at the red wavelength of a He–Ne laser by the addition of methylene blue dye.

The exposure causes crosslinking between gelatin chains and alters the swelling properties and solubility. Treatment with warm water dissolves the unexposed gelatin and thereby forms a surface relief pattern. However, much better holograms can be obtained if the gelatin film is processed to obtain a modulation of the refractive index. During processing, rapid dehydration of the gelatin film is carried out in an isopropanol


bath at an elevated temperature. This creates a very large number of small vacuoles in the gelatin layer, and hence modulates the refractive index. It has also been suggested that the formation of complexes of a chromium(III) compound, gelatin, and isopropanol in the hardened areas is partly responsible for the refractive index modulation. Phase holograms in gelatin are very efficient, directing more than 90% of the incident light into the useful image.

5.4.3 PHOTORESISTS

Photoresists are organic materials that are sensitive in the ultraviolet and blue regions. They are all relatively slow, and thus require very long exposure. Usually, a thin layer (about 1 μm) is obtained either by spin coating or spray coating on the substrate. This layer is then baked at around 75°C. On exposure, one of three processes takes place: formation of an organic acid, photo-crosslinking, or photopolymerization.

There are two types of photoresists: negative and positive. In negative photoresists, the unexposed regions are removed during development, while in positive photoresists, the exposed regions are removed during development. A surface relief recording is thus obtained. A grating recorded on a photoresist can be blazed by ion bombardment. Blazed gratings can also be recorded by optical means. Relief recording offers the advantage of replication using thermoplastics.

When exposure is made in a negative photoresist from the air–film side, the layer close to the substrate is the last to photolyze. This layer will simply be dissolved away during development, since it has not photolyzed fully. This nonadhesion of negative photoresists in holographic recording is a serious problem. To overcome this problem, the photoresist is exposed from the substrate–film side so as to photolyze the resist better at the substrate–film interface. On the other hand, positive photoresists do not have this problem, and hence are preferred. One of the most widely used positive photoresists is the Shipley AZ-1350. The recording can be done at the 458 nm wavelength of an Ar+ laser or at the 442 nm wavelength of a He–Cd laser.

5.4.4 PHOTOPOLYMERS

Photopolymers are also organic materials. Photopolymers for use in holography can be in the form of either a liquid layer enclosed between glass plates or a dry layer. Exposure causes photopolymerization or crosslinking of the monomer, resulting in refractive index modulation, which may or may not be accompanied by surface relief. Photopolymers are more sensitive than photoresists, and hence require moderate exposure. They also possess the advantage of dry and rapid processing. Thick polymer materials such as poly-methyl methacrylate (PMMA) and cellulose acetate butyrate (CAB) are excellent candidates for volume holography, since the refractive index change on exposure can be as large as 10^−3. Two photopolymers are commercially available: the Polaroid DMP 128 and the Du Pont OmniDex. The Polaroid DMP 128 uses dye-sensitized photopolymerization of a vinyl monomer incorporated in a polymer matrix, which is coated on a glass or plastic substrate. Coated plates or films can be exposed with blue, green, and red light. Du Pont OmniDex film consists of a


polyester base coated with a photopolymer, and is used for contact copying of master holograms with UV radiation.

The response of photopolymers is band-limited because of the limitation imposed by the diffusion length of the monomer at the lower end of the response curve and the length of the polymer at the higher end.

5.4.5 THERMOPLASTICS

A thermoplastic is a multilayer structure having a substrate coated with a conducting layer, a photoconductor layer, and a thermoplastic layer. A photoconductor that works well is the polymer poly-N-vinylcarbazole (PVK), to which is added a small amount of the electron donor 2,4,7-trinitro-9-fluorenone (TNF). The thermoplastic is a natural tree resin, Staybelite. The thermoplastics for holographic recording combine the advantages of high sensitivity and resolution, dry and nearly instantaneous in situ development, erasability, and high readout efficiency.

The recording process involves a number of steps. First, a uniform electrostatic charge is established on the surface of the thermoplastic in the dark by means of a corona discharge assembly. The charge is capacitively divided between the photoconductor and the thermoplastic layers. In the second step, the thermoplastic is exposed; the exposure causes the photoconductor to discharge its voltage at the illuminated regions. This does not cause any variation in the charge distribution on the thermoplastic layer; the electric field in the thermoplastic layer remains unchanged. In the third step, the surface is charged again by the corona discharge assembly. In this process, the charge is added at the exposed regions. Therefore, an electric field distribution, which forms a latent image, is now established. In the fourth step, the thermoplastic is heated to its softening point, thereby developing the latent image. The thermoplastic layer undergoes local deformation as a result of the varying electric field across it, becoming thinner wherever the field is higher (the illuminated regions) and thicker in the unexposed areas. Rapid cooling to room temperature freezes the deformation; the recording is now in the form of a surface relief. The recording is stable at room temperature, but can be erased by heating the thermoplastic to a temperature higher than that used for development. At the elevated temperature, the surface tension evens out the thickness variations and hence erases the recording. This is the fifth step, the erasure step. Figure 5.11 shows the whole recording process. The thermoplastic can be reused several hundred times. The response of these devices is band-limited, depending on the thickness of the thermoplastic and other factors.

5.4.6 PHOTOCHROMICS

Materials that undergo a reversible color change on exposure are called photochromic materials. Photochromism occurs in a variety of materials, both organic and inorganic. Organic photochromics have a limited life and are prone to fatigue. However, organic films of spiropyran derivatives have been used for hologram recording in darkening mode at 633 nm. Inorganic photochromics are either crystals or glasses doped with selected impurities: photochromism is due to a reversible charge transfer between two species of electron traps. Recording in silver halide photochromic glasses has been


FIGURE 5.11 The record–erase cycle for a thermoplastic material.

done in darkening mode at 488 nm and in bleaching mode at 633 nm. Doped crystals of CaF2 and SrO2 have been used in bleaching mode at 633 nm. The sensitivity of photochromics is very low, because the reaction occurs at a molecular level. For the same reason, they are essentially grain-free and have resolution in excess of 3000 lines/mm. Inorganic photochromics have large thicknesses, and hence a number of holograms can be recorded in them. They do not require any processing, and can be reused almost indefinitely. In spite of all these advantages, these materials have limited applications, owing to their low diffraction efficiency (<0.02) and low sensitivity.

5.4.7 FERROELECTRIC CRYSTALS

Certain ferroelectric crystals, such as lithium niobate (LiNbO3), lithium tantalate (LiTaO3), barium titanate (BaTiO3), and strontium barium niobate (SBN), exhibit small changes in refractive index when exposed to intense light. The photo-induced refractive index change can be reversed by an application of heat and light. The mechanism of recording in these crystals is as follows: exposure to light frees trapped electrons, which then migrate through the crystal lattice and are again trapped in adjacent unexposed or low-intensity regions. The migration usually occurs through diffusion or an internal photovoltaic effect. This produces a spatially varying net space-charge distribution and a corresponding electric field distribution. The electric field modulates the refractive index through the electro-optic effect and creates a volume phase grating. These are real-time recording materials; the records are stable, since the charges are bound to the localized traps. The recording can, however, be erased by illuminating it with a light beam of wavelength that can release the trapped electrons.


Lithium niobate crystals, particularly Fe-doped, have been used for holographic interferometry and data storage. The recording is fixed by temperature. The disadvantage of lithium niobate is that it is rather slow. Higher sensitivity is obtained with photoconductive electro-optic crystals such as bismuth silicon oxide (BSO) and bismuth germanium oxide (BGO) by the application of an external electric field. These crystals are available in the form of thin slices several centimeters in diameter. Barium titanate crystal is used in holographic interferometry and speckle photography because of its very slow response. Recordings can be made over a very wide spectral range.

BIBLIOGRAPHY

1. J. W. Goodman, Introduction to Fourier Optics, McGraw-Hill, New York, 1968, 1996.
2. H. M. Smith, Holographic Recording Materials, Springer-Verlag, Berlin, 1977.
3. E. L. Dereniak and D. G. Crowe, Optical Radiation Detectors, Wiley, New York, 1984.
4. R. S. Sirohi and M. P. Kothiyal, Optical Components, Systems, and Measurement Techniques, Marcel Dekker, New York, 1991.
5. M. P. Petrov, S. I. Stepanov, and A. V. Khomenko, Photorefractive Crystals in Coherent Optical Systems, Springer-Verlag, Berlin, 1991.
6. J. T. Luxon and D. E. Parker, Industrial Lasers and Their Applications, Prentice Hall, Englewood Cliffs, NJ, 1992.
7. H. I. Bjelkhagen, Silver-Halide Recording Materials: For Holography and their Processing, Springer-Verlag, Berlin, 1993.
8. E. Uiga, Optoelectronics, Prentice Hall, Englewood Cliffs, NJ, 1995.
9. S. S. Jha (Ed.), Perspectives in Optoelectronics, World Scientific, Singapore, 1995.
10. J. E. Stewart, Optical Principles and Technology for Engineers, Marcel Dekker, New York, 1996.
11. P. Hariharan, Optical Holography, Cambridge University Press, Cambridge, 1996.
12. H. I. Bjelkhagen (Ed.), Holographic Recording Materials, SPIE Milestone MS 130, SPIE Optical Engineering Press, Bellingham, WA, 1996.
13. G. C. Holst, CCD Arrays, Cameras, and Displays, JCD Publishing/SPIE Optical Engineering Press, Bellingham, WA, 1996.
14. F. T. S. Yu and X. Yang, Introduction to Optical Engineering, Cambridge University Press, Cambridge, 1997.
15. G. C. Holst and T. S. Lomheim, CMOS/CCD Sensors and Camera Systems, JCD Publishing/SPIE Optical Engineering Press, Bellingham, WA, 2007.



6 Holographic Interferometry

6.1 INTRODUCTION

Holography was born out of the very challenging technological problem of improving the resolution of the electron microscope, which was limited by the spherical aberration of the electron lenses. Gabor therefore invented a two-step process, the first step involving recording without the lenses and the second step being reconstruction. The technique was demonstrated with microscopic objects and with spatially and temporally filtered radiation from a mercury lamp. This was necessary, since high-resolution recording materials and coherent sources were not available at that time.

After its invention in 1948, holography remained practically dormant until the arrival of the laser, since a long-coherence-length source was needed to record a hologram of an object. Earlier recordings of three-dimensional (3D) objects were made on Kodak 649F plates, with very impressive results. Holography, therefore, came to be known as 3D photography. It, however, is more than ordinary 3D photography, since it provides 3D views with changing perspectives.

Holography records the complex amplitude of a wave coming from an object (the object wave), rather than the intensity distribution in the image, as is the case in photography. Holography literally means "total recording," that is, recording of both the phase and the amplitude of a wave. The detectors in the optical regime respond to the intensity (energy) of the wave, and hence phase information is to be converted into intensity variations. This is accomplished by interferometry. A reference wave is added to the object wave at the recording plane. The recording is done on a variety of media, including photographic emulsions, photopolymers, and thermoplastics. The record is called a hologram. The hologram is like a window with a memory. Different perspectives of the scene are seen through different portions of the hologram. In addition to recording holograms of 3D objects, several new applications of holography have emerged, holographic interferometry being one of these.

Holographic interferometry (HI) has emerged as a technique of unparalleled applications, since it provides interferometric comparison of real objects or events separated in time and space. Various kinds of HI have been developed: real-time, double-exposure, time-average, etc. Further, it can be performed with one reference wave, two reference waves, and so on. These reference waves can be of the same


or different wavelengths. The reference wave can come from the same side of the hologram as the object wave or from the other side. HI can be performed with a continuous-wave laser or a pulsed laser. The record can be made on a photographic emulsion, a thermoplastic, a photopolymer, a charge-coupled device (CCD), or other media. Digital holography provides for comparison of objects situated at different locations. Small objects can be studied for their responses to external agency. The possibilities are endless, and so are the applications. This chapter presents some of these applications, along with the relevant theoretical background.

6.2 HOLOGRAM RECORDING

An object is illuminated by a wave from a laser, and the diffracted field is received on a recording plate lying in the (x, y) plane. A reference wave is added to this field at the recording plane, as shown in Figure 6.1a. The diffracted field from the object constitutes the object wave, which is represented by O(x, y) = O0(x, y) exp[iφo(x, y)], where O0(x, y) is the amplitude of the object wave and φo(x, y) is its phase. The complex amplitude R(x, y) of the reference wave at the recording plane is expressed

FIGURE 6.1 (a) Recording of a hologram. (b) Its reconstruction.


as R(x, y) = R exp(2πiνRy), where νR is the spatial frequency of the wave. We have taken a plane wave incident at an angle θ with the z axis as a reference wave; hence, νR = (sin θ)/λ. These waves are derived from the same wave (source), and hence are coherent with each other. The total amplitude at the recording plane is

A(x, y) = O(x, y) + R(x, y) (6.1)

Therefore, the intensity distribution I(x, y) on the recording plane is given by

I(x, y) = O0^2(x, y) + R^2 + 2O0(x, y)R cos[2πνRy − φo(x, y)] (6.2)

It can thus be seen that both the amplitude O0(x, y) variations and the phase φo(x, y) variations have been converted into intensity variations, to which the recording material responds. We assume that the recording material is a photographic emulsion, say, a holographic plate or a film. The intensity is recorded over an appropriate time period T, resulting in an exposure variation E(x, y) = I(x, y)T. After development, the plate/film is called a hologram. On the hologram, the exposure variation is converted into density variation or amplitude transmittance variation. The amplitude transmittance is complex, since the hologram introduces both amplitude and phase variations, the latter as a result of thickness variations. The phase variations can be eliminated if the hologram is placed in a liquid gate. Further, if the phase variations are shared between two wavefronts, they are not seen when these wavefronts interfere. We assume here, however, that the amplitude transmittance of the hologram is proportional to the exposure incident during recording. Under this assumption, the amplitude transmittance t(x, y) of the hologram is expressed as

t(x, y) = t0 − βE(x, y) (6.3)

where β is a constant dependent on processing parameters, exposure, and so on.
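A one-dimensional numerical sketch of Equations 6.1 and 6.2 is given below; the carrier frequency, beam amplitudes, and object phase are arbitrary illustrative choices. It simply evaluates the recorded intensity and its fringe visibility.

# Intensity on the hologram plane, Equation 6.2:
# I = O0**2 + R**2 + 2*O0*R*cos(2*pi*nu_R*y - phi_o)
import numpy as np

y = np.linspace(0.0, 20e-6, 2000)       # a 20 um strip of the recording plane (m)
nu_R = 1.0e6                            # reference-wave spatial frequency, 1000 lines/mm (assumed)
O0, R = 1.0, 3.0                        # object and reference amplitudes (assumed, reference stronger)
phi_o = 2 * np.pi * np.sin(2 * np.pi * y / 20e-6)   # a smooth, made-up object phase

I = O0**2 + R**2 + 2 * O0 * R * np.cos(2 * np.pi * nu_R * y - phi_o)
visibility = (I.max() - I.min()) / (I.max() + I.min())
print(f"Fringe visibility = {visibility:.2f}")  # 2*O0*R/(O0**2 + R**2) = 0.60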

6.3 RECONSTRUCTION

This hologram is placed back in the same position held during recording, and is illuminated by the reference wave. The field just behind the hologram is given by

t(x, y)R(x, y) = t′0R(x, y) − βTR(x, y){O0^2(x, y) + 2O0(x, y)R cos[2πνRy − φo(x, y)]} (6.4)

where t′0 = t0 − βTR^2 is the modified DC transmittance. Equation 6.4 can also be written as

t(x, y)R(x, y) = t′0R(x, y) − βTR(x, y)[O0^2(x, y) + O(x, y)R*(x, y) + O*(x, y)R(x, y)] (6.5)

where * signifies the complex conjugate. It can be seen that there are four waves just behind the hologram, of which the wave t′0R(x, y) is the uniformly attenuated


reference wave. The second wave, −βTR(x, y)O0^2(x, y), also propagates in the direction of the reference wave. Owing to the slow spatial variation of O0^2(x, y), there is a diffracted field around the direction of the reference wave. The third wave, −βTR^2O(x, y), is the original object wave multiplied by a constant, −βTR^2. This wave propagates in the direction of the object wave and has all the attributes of the object wave, except that its spatial dimensions are restricted by the size of the hologram. The negative sign signifies that a phase change of π has taken place owing to the wet development process. The fourth wave, −βTR^2(x, y)O*(x, y), represents a conjugate wave, which propagates in a different direction. This wave can also be written as −βTR^2O0(x, y) exp{i[2π(2νR)y − φo(x, y)]}; the phase of the reference wave is modulated by that of the object wave, but it essentially travels in the direction φ, where sin φ = 2 sin θ. Figure 6.1b shows that several waves are generated during the reconstruction step.

The recording and reconstruction geometries just described are the off-axis holography geometries due to Leith and Upatnieks. However, the reference beam can be added axially to the object beam. This is in-line holography or Gabor holography. In-line holography has applications in particle size measurements, etc.

6.4 CHOICE OF ANGLE OF REFERENCE WAVE

One of the shortcomings of Gabor holography is that all the diffracted waves propagate in the same direction. These waves are angularly separated if the reference wave is added at an angle during the recording of the hologram. The question is, how large should the angle be? In fact, the off-axis reference wave acts as a carrier of object information. It also results in very fine interference fringes, into which object information is coded. The fringe frequency is given by (sin θ)/λ, where θ is the mean angle between the reference and object waves. The recording medium should be capable of resolving this fringe frequency. Further, as has been described earlier, the various waves are angularly separated by nearly θ for small angles. However, a restriction is placed on θ by the simple condition that the spectra of the various waves should not overlap. If the bandwidth of the object wave is νo, then it is easily seen from Figure 6.2 that the spectra will be just separated when 3νo = νR. This immediately gives the smallest angle θmin as θmin = sin^−1(3λνo). For angles smaller than θmin, the various diffracted beams will overlap.
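A quick calculation of the minimum reference angle is given below; the object bandwidth of 100 lines/mm and the He–Ne wavelength are assumed values chosen only for illustration.

# Minimum off-axis reference angle: theta_min = arcsin(3 * lambda * nu_o)
import math

wavelength = 633e-9   # He-Ne wavelength (m)
nu_o = 100e3          # object-wave bandwidth, 100 lines/mm (assumed)

theta_min = math.asin(3 * wavelength * nu_o)
nu_carrier = math.sin(theta_min) / wavelength
print(f"theta_min = {math.degrees(theta_min):.1f} deg")               # about 11 deg
print(f"carrier fringe frequency = {nu_carrier / 1e3:.0f} lines/mm")  # 300 lines/mm; the recording medium must resolve this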

FIGURE 6.2 Fourier transform of the transmittance of a hologram.


FIGURE 6.3 Amplitude transmittance |t| versus exposure E curve for a typical photographic emulsion.

6.5 CHOICE OF REFERENCE WAVE INTENSITY

It has been tacitly assumed above that the amplitude transmittance of the hologram is proportional to the exposure. This is valid only when one operates over a very small portion of the t–E curve, which is shown in Figure 6.3. In order to meet this condition, the reference wave should be 3–10 times stronger than the object wave. If this condition is not met, then nonlinearities will begin to play a role, and higher-order images will be produced.

6.6 TYPES OF HOLOGRAMS

Holograms are classified in a variety of ways. Table 6.1 summarizes these.

6.7 DIFFRACTION EFFICIENCY

Diffraction efficiency is a measure of the amount of light that goes into forming the useful image when the hologram is illuminated by the reference wave. It is calculated for the ideal situation of interference between plane waves. For a thin amplitude transmission hologram, the maximum diffraction efficiency is 6.25%. This value can be improved by bleaching; that is, the amplitude hologram is converted into a phase hologram. It can then reach up to 33.6% for a thin phase transmission hologram. The diffraction efficiency for a thick phase hologram can approach 100%: essentially, then, the entire incident light is utilized for image formation and there is only one diffraction order.
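The 6.25% figure for a thin amplitude hologram can be recovered from a simple argument, sketched here for the idealized case of a purely sinusoidal grating. If the amplitude transmittance is t(x) = t0 + t1 cos(2πνRx), a unit-amplitude plane wave illuminating the hologram produces two first diffraction orders, each of amplitude t1/2. Since t must lie between 0 and 1, the largest possible modulation is t0 = t1 = 1/2, so the maximum first-order efficiency is (t1/2)^2 = (1/4)^2 = 1/16 = 6.25%.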

6.8 EXPERIMENTAL ARRANGEMENT

The experimental arrangement (Figure 6.1a) consists of a laser, a beam-splitter, beam-expanding optics, and the recording medium. We describe these in detail.


TABLE 6.1 Classification of Hologram Types

Properties | Hologram Types
Transmission function | Amplitude; Phase
Region of diffraction | Fresnel; Fraunhofer; Fourier; Image plane
Recording geometry | On-axis/Gabor/in-line; Off-axis/Leith–Upatnieks/carrier-frequency; Reflection/Denisyuk
Exposure | Single-exposure/real-time/live-fringe; Double-exposure/lapsed-time/frozen-fringe; Multiple-exposure; Time-averaged
Emulsion thickness/Q-parameter | Thin; Thick
Reconstruction | Monochromatic-light; White-light
Recording configuration | Transmission; Reflection; Rainbow; Digital
Object type | Transmission-object/phase-object; Transmission-objects—diffuse illumination; Opaque-object

6.8.1 LASERS

There are a large number of lasers from which to choose. Some of these are described below:

• Ruby laser. This is a pulsed laser with a pulse width of the order of 10 ns. Its coherence length can be greater than 2 m when it is used with an etalon, and it can deliver more than 1 J in a single pulse. Lasers for HI have dual-pulse facilities, with variable pulse separation for dynamical studies.

• Argon–ion laser. This is a continuous-wave laser with output exceeding 5 W. It gives a multiline output, and hence a dispersion prism is mounted to select a line. Further, an intracavity etalon increases the coherence length to a usable length for studying moderate-sized objects. It can be used with photoresists and thermoplastics as recording media.

• He–Ne laser. This is the most commonly used laser for holography and HI, with a power output in the range 25–35 mW. The coherence length is greater than 20 cm.


• Nd:YAG, semiconductor-pumped, second-harmonic green. This laser is finding acceptance for holography. It offers a very long coherence length, with continuous-wave operation. The output ranges from 40 to 150 mW.

• He–Cd laser. This is a continuous-wave laser with output in the range 40–150 mW. It is suitable for recording on photoresists. Both blue and ultraviolet lines are usable for holography.

• Semiconductor or diode lasers (LDs). These can be run in both continuous-wave and pulsed modes. Continuous-wave lasers are preferred for holography, but are not very common. They require temperature and current stability. They can output up to 500 mW in continuous-wave mode.

6.8.2 BEAM-SPLITTERS

Usually, an unexpanded laser beam is split into two or more beams. For this, a glass plate (preferably a wedge plate) serves as a good beam-splitter. However, when an expanded collimated beam is to be split, a cube beam-splitter or a plane parallel plate splitter with one side antireflection-coated is recommended. Pellicle beam-splitters are also used. In some cases, polarization optics (Wollaston prisms) are employed as beam-splitters.

6.8.3 BEAM-EXPANDERS

With high-power lasers, a suitable concave or negative lens is used for beam expansion; otherwise, a microscope objective 5×, 10×, 20×, 40× (45×), or 60× (63×) can be used. As a result of dust particles on the optical surfaces, the beam is usually dirty; that is, it has circular rings, specks, etc. It can be cleaned by placing a pinhole at the focus of the microscope objective. The arrangement is known as spatial filtering. The pinhole size must be matched with the microscope objective. An achromatic lens is placed in the beam such that the pinhole is at its focus. This results in an expanded collimated beam—hence the name “beam-expander.”

6.8.4 OBJECT-ILLUMINATION BEAM

For small objects, collimated-beam illumination is preferred, as the propagation vector of the illumination beam is the same over the whole surface. For large objects, a spherical diverging wave is used. If the object is cylindrical in shape, a cylindrical diverging wave is preferred. Objects must be located in the coherence volume. Special recording geometries that relax the coherence requirement may be used.

6.8.5 REFERENCE BEAM

Either a diverging spherical wave or a plane wave can be used as a reference wave. However, if the hologram is to be reconstructed at a later time, or at a different location, the use of a plane wave is recommended. The beam is overexpanded to make it uniform over the hologram plane. The reference-beam intensity must be 3–10 times stronger than the object-beam intensity at the hologram plane for linear recording.


6.8.6 ANGLE BETWEEN OBJECT AND REFERENCE BEAMS

The object wave is a diffuse wave. Therefore, the angle between the object wave and the reference wave (even when the latter is a plane wave) will vary over the recording plane. However, we take the mean angle. The fringe frequency on the recording plane will be approximately (sin θ)/λ, where θ is the mean angle between the reference wave and the object wave incident nearly normally on the recording plane. The recording medium should be able to resolve this fringe structure. Too large an angle puts a higher demand on the resolution. A small angle, on reconstruction, may not result in separation of the beams. Therefore, the mean angle between the object wave and the reference wave has to be chosen judiciously.

6.9 HOLOGRAPHIC RECORDING MATERIALS

A variety of recording materials are available. They have been described in Chapter 5. The choice is made based on wavelength sensitivity, resolution, sensitivity, and other properties.

6.10 HOLOGRAPHIC INTERFEROMETRY

HI is used to compare two waves from real objects by the process of interference. These two waves are usually from an initial unstressed state and a final stressed state of an object. It is, however, assumed that the microstructure of the surface does not change as a result of loading: the two comparison waves differ owing to path-difference changes rather than microstructural changes. HI can be performed in a variety of ways. The most common methods are real-time, double-exposure, and time-averaged. Several novel configurations have been developed, depending on the application and also to exploit the strengths of HI. They are discussed at appropriate places below.

6.10.1 REAL-TIME HI

A hologram of the object is recorded, processed, and placed back in the experimental set-up at exactly the same location that it occupied during recording. Reconstruction of the hologram by the reference wave generates a replica of the original object wave, which propagates in the direction of the original wave. This wave is, however, phase-shifted by π owing to wet photographic development. Since the object wave is also present, it will undergo diffraction on passage through the hologram: a wave transmitted by the DC transmittance of the hologram will propagate in the original direction. Therefore, there are two waves: one released from the hologram by interaction of the reference wave and the other transmitted by the DC transmittance of the hologram. These waves are identical in all respects except for a phase change of π, and hence produce a dark field on interference. If the object is now loaded, the object wave carries the deformation phase, and hence an interference pattern is observed. This pattern changes in real time with the change in load. One can therefore monitor the response of the object to an external loading agency continuously until the fringes


become too fine to be resolved. The technique is also known as single-exposure or live-fringe HI.

Mathematically, we can explain the procedure by writing the transmittance of the single-exposure hologram as

t(x, y) = t′0 − βT{O0²(x, y) + 2O0(x, y)R cos[2πνRy − φo(x, y)]}   (6.6)

where t′0 = t0 − βTR², with t′0 being a constant. Interrogation by the reference wave R(x, y) releases the object wave, which is now given by −βTR²O(x, y), while the directly transmitted object wave is written as [t′0 − βTO0²(x, y)]O(x, y) exp(iδ), where δ is the phase change introduced by the deformation. Interference between these two waves results in an intensity distribution that can be expressed as

I = (βTR²)²O0² [1 + (t0 − βTR² − βTO0²)²/(βTR²)² − 2(t0 − βTR² − βTO0²)/(βTR²) cos δ]   (6.7)

Assuming βTR² ≫ βTO0², which is usually true, Equation 6.7 can be rewritten as

I = I0 [1 + (t0 − βTR²)²/(βTR²)² − 2(t0 − βTR²)/(βTR²) cos δ]   (6.8)

Dark fringes are formed wherever δ = 2mπ, m being an integer. The contrast η of the fringes is given by

η = 2βTR²(t0 − βTR²) / [(βTR²)² + (t0 − βTR²)²]

It can be seen that the contrast of the fringes can be controlled by the reference-wave intensity during reconstruction. In general, the contrast is less than unity, but, by appropriate increase of the reference-wave intensity, it is possible to achieve unit-contrast fringes. For example, if the bias transmittance t0 is equal to 2βTR², then a unit-contrast fringe pattern results.
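A minimal numerical sketch of this contrast behaviour follows; the bias-transmittance values chosen are illustrative assumptions, not values from the text.

def fringe_contrast(t0_over_bTR2: float) -> float:
    """Real-time HI fringe contrast with t0 expressed in units of beta*T*R^2."""
    a = 1.0                      # beta*T*R^2 (taken as the unit)
    b = t0_over_bTR2 - 1.0       # t0 - beta*T*R^2 in the same units
    return 2 * a * b / (a**2 + b**2)

# Illustrative bias-transmittance values (assumed): contrast peaks at t0 = 2*beta*T*R^2
for ratio in (1.2, 1.5, 2.0, 3.0, 5.0):
    print(f"t0 = {ratio:.1f} betaTR^2  ->  contrast = {fringe_contrast(ratio):.2f}")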

6.10.2 DOUBLE-EXPOSURE HI

Here, both exposures, belonging to the initial and final states of the object, are recorded sequentially on the same photographic plate. The total exposure recorded can be expressed as

βT [I1(x, y) + I2(x, y)] (6.9)

The transmittance of the hologram is given by

t(x, y) = t″0 − βT{2O0²(x, y) + [O(x, y) + O(x, y) exp(iδ)]R*(x, y) + [O*(x, y) + O*(x, y) exp(−iδ)]R(x, y)}   (6.10)


FIGURE 6.4 Double-exposure interferogram of a pipe with a defect.

where t″0 = t0 − 2βTR². When the double-exposure hologram is reconstructed, both object waves are released simultaneously, and interfere to produce a unit-contrast fringe pattern characteristic of the deformation. The intensity distribution in the fringe pattern is

I = 2(βTR²)²O0²(1 + cos δ) = I0(1 + cos δ)   (6.11)

Bright fringes are formed wherever δ = 2mπ, m being an integer. The technique is also known as frozen-fringe or lapsed-time HI. It may be noted that only two states of the object are being compared by this technique. Figure 6.4 shows a double-exposure interferogram of a pipe with a defect. The pipe was loaded by an application of hydraulic pressure between exposures. It can be seen that the defect is a region of thinner wall thickness. Table 6.2 gives a comparison between real-time and double-exposure HI.

6.10.3 TIME-AVERAGE HI

Time-average HI is utilized for the study of vibrating bodies such as musical instruments. As the name suggests, the recording is carried out over a time period that is several times the period of vibration. Since an object vibrating sinusoidally spends most of the time at the locations of maximum displacement, a recording of such a vibrating object is equivalent to a double-exposure record. However, the intensity distribution in the reconstruction is modified considerably owing to the time of excursion between these extreme positions. To study this phenomenon, we can write for the instantaneous intensity on the recording plane,

I(x, y; t) = O0²(x, y) + R² + O*(x, y; t)R(x, y) + O(x, y; t)R*(x, y)   (6.12)

where the object wave O(x, y; t) is expressed as

O(x, y; t) = O0 exp[iφo(x, y)] exp[iδ(x, y; t)]   (6.13)

The phase δ(x, y; t) represents the phase change introduced by the vibration. If the body is vibrating with amplitude A(x, y) at a frequency ω and is illuminated and


TABLE 6.2 Comparison of Single- and Double-Exposure HI

Single-Exposure HI | Double-Exposure HI
Monitors/compares different states of the object (due to loading) with the initial state over a range determined by usable fringe density and speckle noise. | Compares only the final state with the initial state.
In situ processing is recommended; otherwise the hologram has to be repositioned very accurately. Once the experimental set-up is disturbed, the hologram cannot be used for comparison. | Hologram can be reconstructed at leisure, even at a different location.
Fringe contrast is poor, but can be improved. | Unit-contrast fringe pattern is obtained, since the diffraction efficiency is equally shared by the two beams.
Bright fringes are formed when δ = (2m + 1)π; a phase change of π takes place owing to wet photographic development. | Bright fringes are formed when δ = 2mπ.
Usually performed to obtain the correct parameters for double-exposure HI. | Usually used for quantitative evaluation.

observed along directions making angles θ1 and θ2 with the local normal, then the phase difference δ can be written as

δ(x, y; t) = (2π/λ)A(x, y)(cos θ1 + cos θ2) sin ωt   (6.14)

The intensity distribution given by Equation 6.12 is recorded over a period T much longer than the period of vibration; that is, it is recorded over a large number of vibration cycles. The average intensity recorded over the period T is

I(x, y) = (1/T) ∫₀ᵀ I(x, y; t) dt   (6.15)

This record, on development, is called a time-average hologram, and the procedure is known as time-average HI. The hologram is reconstructed with the reference wave. Since, on reconstruction, the various waves generated are separable, we consider only the amplitude of the desired wave, that is,

a(x, y) = −βTR² (1/T) ∫₀ᵀ O(x, y; t) dt = −βTR²O(x, y) (1/T) ∫₀ᵀ exp[(2πi/λ)A(x, y)(cos θ1 + cos θ2) sin ωt] dt   (6.16)


The time integral T⁻¹ ∫₀ᵀ exp[(2πi/λ)A(x, y)(cos θ1 + cos θ2) sin ωt] dt is called the characteristic function of the sinusoidal vibration, and is denoted by MT. Thus, the intensity distribution in the reconstructed image is given by

Irec(x, y) = a(x, y)a*(x, y) = β²T²R⁴O0²|MT|²   (6.17)

The characteristic function has been generalized for various types of motion (Table 6.3). The characteristic function for sinusoidal motion with phase difference δ(x, y; t) as described by Equation 6.14 can be obtained analytically. It is given by

MT = (1/T) ∫₀ᵀ exp[(2πi/λ)A(x, y)(cos θ1 + cos θ2) sin ωt] dt = J0[(2π/λ)A(x, y)(cos θ1 + cos θ2)]   (6.18)

where J0(x) is the Bessel function of zeroth order and first kind. The intensity distribution in the reconstructed image will be

Irec(x, y) = I0(x, y){J0[(2π/λ)A(x, y)(cos θ1 + cos θ2)]}²   (6.19)

The variation of [J0(x)]² with the argument x is shown in Figure 6.5. It can be seen that the reconstructed image is modulated by the function [J0(x)]². Thus, the image is

TABLE 6.3 Characteristic Function |M(t)|²

HI Type | Displacement | |M(t)|²
Real-time | Static (L) | 1 + c² − 2c cos(k · L)
Real-time | Harmonic, of amplitude A(x, y) | 1 + c² − 2c J0(k · A)
Real-time with reference fringes | Harmonic | 1 + c² − 2c cos(k · L) J0(k · A)
Real-time stroboscopic | Harmonic: pulses at ωt = π/2 and 3π/2 | 1 + c² − 2c cos(2k · A)
Double-exposure | Static (L) | cos²(k · L/2)
Double-exposure stroboscopic | Harmonic: pulses at ωt = π/2 and 3π/2 | cos²(k · A)
Time-average | Harmonic, of amplitude A(x, y) | [J0(k · A)]²
Time-average | Constant velocity, Lr = vT | sinc²(k · Lr/2)
Time-average | Constant acceleration from rest | [C²(√(k · A)) + S²(√(k · A))] / [(2/π)(k · A)]
Time-average | Irrationally related modes | J0(k · A1) J0(k · A2)
Temporally frequency-translated | Harmonic motion | [Jm(k · A)]²
Amplitude-modulated reference wave | fr(t) = exp[i(ωt − Δ)] | [Jm(k · A)]² cos² Δ
Phase-modulated reference wave | fr(t) = exp(iMR sin ωt) | [Jm(k · A − MR)]²

Note: c is the contrast, and C and S are Fresnel cosine and sine integrals.


FIGURE 6.5 [J0(x)]² distribution, which defines the intensity distribution in time-averaged HI fringes for a sinusoidally vibrating object.

covered by the fringes. Figure 6.6 shows the time-average interferogram of an edge-clamped diaphragm vibrating in the second-harmonic mode. The zero intensity in the image occurs at the zeros of the function [J0(x)]². It can also be seen that the intensity is maximum at the stationary region (A(x, y) = 0), and decreases to zero when

(2π/λ)A(x, y)(cos θ1 + cos θ2) = 2.4048   (6.20)

This gives the amplitude of vibration at the first zero of the Bessel function. Assuming illumination and observation directions along the normal to the surface of the object,

FIGURE 6.6 Time-averaged interferogram of an edge-clamped diaphragm vibrating in second-harmonic mode. (From J. W. Goodman, Introduction to Fourier Optics, McGraw-Hill, New York, 1968. With permission.)


the amplitudes at the first and successive zeros of the Bessel function are given by

A(x, y) = (λ/4π) × 2.4048,   (6.21a)
         = (λ/4π) × 5.5200,   (6.21b)
         = (λ/4π) × 8.6537,   (6.21c)
         = (λ/4π) × 11.7915   (6.21d)

Since the intensity falls off rapidly, high amplitudes of vibration are difficult to monitor by this technique. In brief, time-average HI gives the vibration map (the phase of the vibration is lost) over a range of frequencies; that is, the amplitude of the vibration can be measured.

Let us consider a simple example of a cantilever as shown in Figure 6.7. It is illuminated normally and observed in the same direction. We assume illumination by the red radiation of a He–Ne laser. The amplitude of vibration can be expressed as

z(x, t) = z(x) sin ωt (6.22)

The intensity distribution in the image is given by

Irec(x, y) = I0(x, y){J0[(4π/λ)A(x, y)]}²   (6.23)

The fringes run parallel to each other and, for higher amplitudes, are almost equidistant. Dark fringes are located where the amplitudes of vibration are 0.12, 0.28, 0.44, 0.59 μm, . . . .
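These dark-fringe amplitudes can be verified numerically from the zeros of J0. A minimal sketch, assuming scipy is available (the package is not mentioned in the text), is given below.

import numpy as np
from scipy.special import jn_zeros

wavelength = 632.8e-9                  # He-Ne wavelength (m)
zeros_of_J0 = jn_zeros(0, 4)           # 2.4048, 5.5201, 8.6537, 11.7915

# Normal illumination and observation: dark fringes where (4*pi/lambda)*A equals a zero of J0
amplitudes = wavelength * zeros_of_J0 / (4 * np.pi)
print(np.round(amplitudes * 1e6, 2))   # -> [0.12 0.28 0.44 0.59] micrometres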

FIGURE 6.7 Vibrating cantilever and simulated fringe pattern as obtained by time-averaged HI.


6.10.4 REAL-TIME, TIME-AVERAGE HI

In this technique, a single-exposure record of a stationary object is first made. After development, the hologram is precisely relocated. The object is now set in vibration and viewed through the hologram. Interference between the wave reconstructed from the hologram and the directly transmitted wave from the vibrating object will yield an intensity distribution that is proportional to [1 − J0(x)]², where x = (4π/λ)A(x, y). Thus, the nodal points on the object appear dark. The contrast of this fringe pattern is very low. While observing the real-time pattern, the laser beam can be chopped at the frequency of vibration. The method is then equivalent to real-time HI for static objects. The fringes correspond to the displacement between the initial position and the position when the laser beam illuminates the object. By varying the time at which the light pulse illuminates the object during its vibration cycle, the displacement at different phases of vibration can be mapped.

6.11 FRINGE FORMATION AND MEASUREMENT OF DISPLACEMENT VECTOR

We have mentioned that object deformation causes a phase change between the two waves, at least one of which is derived from the hologram. This phase change is responsible for the formation of the fringe pattern. In this section, we address two issues:

• How is the phase change related to the deformation vector?
• Where are the fringes really localized?

Let us consider a point P on the object surface. Owing to loading, this point moves to a different location P′. The vector distance PP′ is the deformation vector d. We consider the geometry shown in Figure 6.8 to calculate the phase difference introduced by the deformation. The phase difference δ at the observation point O is given by

δ = k1 · r1 + k2 · r2 − (k1 + Δk1) · r′1 − (k2 + Δk2) · r′2   (6.24)

We can express r′1 and r′2 as

r1 + d = r′1   (6.25a)
r′2 + d = r2   (6.25b)

Substituting for r′1 and r′2, we obtain

δ = k1 · r1 + k2 · r2 − (k1 + Δk1) · (r1 + d) − (k2 + Δk2) · (r2 − d)
  = k1 · r1 + k2 · r2 − k1 · r1 − Δk1 · r1 − k1 · d − Δk1 · d − k2 · r2 + k2 · d + Δk2 · d − Δk2 · r2
  = (k2 − k1) · d   (6.26)


FIGURE 6.8 Calculation of the phase difference.

This expression has been obtained under the experimental conditions where the displacement vector is of the order of a few micrometers and the distances r1 and r2 involved could be from a few tens of centimetres to metres. When this is taken into account, the vectors r1 and r2 are perpendicular to Δk1 and Δk2, respectively, and hence their product terms are automatically zero, while the remaining two terms, Δk1 · d and Δk2 · d, which represent second-order contributions, are also neglected.

This expression tells us when and how the fringe pattern is formed; it does not tell us where it is actually formed. The situation is quite different from that encountered in classical interferometry. In HI, the fringes are normally localized not on the object surface but on a surface in space. This surface is known as the surface of localization. Knowledge of the location of the surface of localization helps in determining the deformation vector. In fact, the surface of localization depends on several factors, including the deformation vector. Several theories have been presented in the literature. Needless to say, it is necessary to know about fringe localization in detail before any quantitative evaluation is carried out.

6.12 LOADING OF THE OBJECT

Since HI sees only the change in the state of the object, the object must be subjected to an external agency to change its state. This can be performed by any one of the following methods:

• Mechanical loading
• Thermal loading
• Pressure/vacuum loading


• Vibratory or acoustic loading
• Impact loading

The most commonly used method is mechanical loading, in which the object is subjected to either compressive or tensile forces. If the dynamic response needs to be studied, it is subjected to either vibratory or impact loading. In holographic nondestructive testing (HNDT), it is essential to use the appropriate kind of loading. For example, debonds are easily seen when pressure or vacuum stressing is applied. Table 6.4 gives some areas of application of the five types of loading.

6.13 MEASUREMENT OF VERY SMALL VIBRATION AMPLITUDES

It has been shown that the first minimum of intensity in time-average HI occurs when A(x, y) = (λ/4π) × 2.4048. This is valid only when the directions of illumination and observation are along the surface normal. For illumination from a He–Ne laser (λ = 632.8 nm), the amplitude of vibration A(x, y) corresponds to 0.12 μm. Even smaller amplitudes of vibration can be monitored by observing the variation of intensity between the maximum and the first zero of the Bessel function.

6.14 MEASUREMENT OF LARGE VIBRATION AMPLITUDES

6.14.1 FREQUENCY MODULATION OF REFERENCE WAVE

The frequency of the reference wave is shifted by nω, where n is an integer and ω is the frequency of vibration, and a time-average hologram is recorded in the normal way. The recorded intensity distribution is given by

I(x, y) = (1/T) ∫₀ᵀ [O0² + R0² + OR* + O*R] dt   (6.27)

where the object wave and the reference wave are written explicitly as

O(x, y; t) = O0 exp{i[Ωt + φo(x, y) + δ(x, y; t)]}   (6.28a)
R(x, y; t) = R0 exp{i[(Ω − nω)t + φR]}   (6.28b)

Further, assuming that the object is vibrating sinusoidally with an amplitude A(x, y), we can write for the phase difference δ(x, y; t)

δ(x, y; t) = (2π/λ)(cos θ1 + cos θ2)A(x, y) sin ωt = (2π/λ)2A(x, y) sin ωt

when the directions of illumination and observation are along the surface normal.


TABLE 6.4 Some Applications of Holographic Testing

Loading/Problem Detected | Items Tested

Mechanical Loading
Cracks | Bolt holes in steel channels, concrete, rocket nozzle liners, welded metal plates, turbine blades, glass
Debonds | Honeycomb panels, propellant liners
Fatigue damage | Composite materials
Deformation | Radial deformation in carbon cylinders and tubes

Pneumatic Loading
Cracks | Welded joints, aluminum shells, bonded cylinders, ceramic heat exchanger tubes
Debonds/delaminations | Composite panels and tubes, honeycomb structures, bonded cylinders, tires, rocket launch tubes, rocket nozzle liners
Weld defects | Welded plastic pipes, honeycomb panels
Weakness (thinness) | Aluminum cylinders, pressure vessels, tubes, composite tubes
Structural flaws | Composite domes
Soldered joints | Medical implant devices
Bad brazes | Silicon pressure sensors

Thermal Loading
Cracks | Glass/ceramic tubes, train wheels, turbofan blades
Debonds | Aircraft wing assemblies, honeycomb structures, laminate structures, rocket motor casings, rocket nozzle liners, rocket propellant liners, tires, bonded structures
Delaminations | Antique paintings
Various defects | Circuit boards and electronic modules

Vibrational Loading
Cracks | Train wheels, ceramic bars
Debonds | Turbine blades, rocket propellant liners, sheet metal sandwich structures, laminate structures, honeycomb structures, fiberglass-reinforced plastics, helicopter rotors, ceramics bonded to composite plates, brake disks, adhesive joints, metallic interfaces, elastomer
Strength | CRTs

Impulse Loading
Cracks | Steel plates, wing plank splices, turbine blades
Debonds | Foam insulation
Voids and thin areas | Aluminum plates

Since the various terms in the expression for the recorded intensity are separable, we pick up the term of interest as

(1/T) ∫₀ᵀ OR* dt = O0 exp(iφo) R0 exp(−iφR) (1/T) ∫₀ᵀ exp[(4πi/λ)A sin ωt] exp(−inωt) dt
                 = O0R0 exp[i(φo − φR)] (1/T) Σ (n′ = −∞ to ∞) Jn′(4πA/λ) ∫₀ᵀ exp(in′ωt) exp(−inωt) dt   (6.29)


The integral will vanish for all values of n′ except n′ = n. Thus, the amplitude distribution, on reconstruction, becomes proportional to Jn(4πA/λ). The intensity distribution is then given by

I(x, y) = I0 Jn²[(4π/λ)A]   (6.30)

Unlike the case of time-average HI, fringes of nearly equal intensity are formed, and hence large vibration amplitudes can be measured.
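The sketch below evaluates the fringe function of Equation 6.30 for an assumed shift order n and wavelength (both illustrative values, not from the text); the brightest fringe then appears at a large, nonzero vibration amplitude, which is what makes the method suitable for large amplitudes.

import numpy as np
from scipy.special import jv

wavelength = 632.8e-9        # assumed He-Ne wavelength (m)
n = 10                       # assumed reference-wave frequency shift of n*omega

# Fringe function I/I0 = Jn^2(4*pi*A/lambda) as a function of vibration amplitude A
A = np.linspace(0, 2e-6, 2001)
fringe = jv(n, 4 * np.pi * A / wavelength) ** 2

print(f"Brightest fringe near A = {A[np.argmax(fringe)]*1e6:.2f} um for n = {n}")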

6.14.2 PHASE MODULATION OF REFERENCE BEAM

The mirror in the reference arm is mounted on a PZT, which is excited by a voltage signal at the frequency of vibration. The amplitude of the reference wave at the recording plane is

R(x, y) = R0 exp(iφR) exp(iδR)   (6.31)

where

δR = (2π/λ)2AR sin(ωt − Δ)

with AR and Δ representing the amplitude and phase of the reference mirror vibration. The object wave is expressed as

O(x, y) = O0 exp[iφo(x, y)] exp(iδ)   (6.32)

where

δ = (2π/λ)2A(x, y) sin ωt

is the phase difference introduced by object vibration. It should be noted that the modulation in the reference wave is phase-shifted by Δ, which may be varied experimentally. A time-average hologram is now recorded. Again, we consider only the term of interest in the expression for the amplitude transmittance, and express the amplitude of the reconstructed wave as proportional to

(1/T) ∫₀ᵀ OR* dt = O0R0 exp[i(φo − φR)] (1/T) ∫₀ᵀ exp{(4πi/λ)[A sin ωt − AR sin(ωt − Δ)]} dt
                 = O0R0 exp[i(φo − φR)] J0[(4π/λ)√(A² + AR² − 2AAR cos Δ)]   (6.33)

The intensity distribution in the reconstructed image is given by

I(x, y) = I0 {J0[(4π/λ)√(A² + AR² − 2AAR cos Δ)]}²   (6.34)


As a special case, when the phase of modulation Δ of the reference wave is zero, the intensity distribution is given by

I(x, y) = I0 {J0[(4π/λ)(A − AR)]}²   (6.35)

Thus, control of the amplitude of the reference mirror vibration can be used to measure large vibration amplitudes. The zero-order fringe (the brightest maximum) will be formed at those regions where the vibration amplitude of the reference mirror matches that of the object. In this way, it is possible to extend the measurable range considerably, in practice up to about 10 μm. By varying the phase of the reference mirror, it is also possible to trace out areas of the object vibrating in the same phase as that of the mirror, thereby mapping the phase distribution of the object vibration.
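A short numerical sketch of Equation 6.34 illustrates this behaviour: the brightest (zero-order) fringe tracks the regions where the object amplitude equals the reference-mirror amplitude. The wavelength, mirror amplitude, and modulation phase below are assumed for illustration only.

import numpy as np
from scipy.special import j0

wavelength = 632.8e-9     # assumed wavelength (m)
A_R = 1.0e-6              # assumed reference-mirror vibration amplitude (m)
Delta = 0.0               # in-phase modulation, i.e. the limit of Equation 6.35

A = np.linspace(0, 3e-6, 3001)   # object vibration amplitudes
arg = (4 * np.pi / wavelength) * np.sqrt(A**2 + A_R**2 - 2 * A * A_R * np.cos(Delta))
intensity = j0(arg) ** 2

print(f"Brightest fringe at A = {A[np.argmax(intensity)]*1e6:.2f} um (equal to A_R = {A_R*1e6:.2f} um)")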

6.15 STROBOSCOPIC ILLUMINATION/STROBOSCOPIC HI

The object is illuminated by laser pulses whose duration is much shorter than the period of vibration. The recorded hologram is thus like a double-exposure hologram. Let the first record be made when the object is in a state of vibration 1 and the second when the object is in state 2, as shown in Figure 6.9. The double-exposure hologram will display fringes corresponding to the displacement A between these two vibration states. By varying the pulse separation, holograms for different displacements can be recorded.

Alternatively, the pulse separation may be adjusted to correspond to one-quarter of the time period, and double-exposure holograms may be recorded at various phases of the vibration cycle. The advantage of stroboscopic HI is that fringes of unit contrast, similar to those in double-exposure HI, are formed. In addition, the use of pulse illumination does not require vibration isolation of the holographic set-up.

FIGURE 6.9 Stroboscopic HI of a vibrating object: (a) displacement of sinusoidally vibrating object; (b) pulse illumination.


6.16 SPECIAL TECHNIQUES IN HOLOGRAPHIC INTERFEROMETRY

6.16.1 TWO-REFERENCE-BEAM HI

In the discussion of the phase-shift method of evaluation of the phase in an interferogram, it was mentioned that an additional phase is introduced that is independent of the phase to be measured. In real-time HI, this is provided by shifting the phase of the reference wave during reconstruction. However, it is not possible to achieve this in double-exposure HI, since the reconstruction of both the recorded waves is done with a single reference beam.

In order to have independent access to the two reconstructed waves, and to introduce the desired phase difference between them, a holographic set-up with two reference waves is required (Figure 6.10a). The first exposure of a double-exposure hologram is performed with the object in its initial state and with the reference wave R1 (the reference wave R2 is blocked). The object is loaded, and the second exposure is made with the reference wave R2 (the reference wave R1 is blocked). The reconstruction of

FIGURE 6.10 (a) Schematic of two-reference-wave HI. (b) Two-reference-wave HI with close waves.


the hologram is accomplished with both reference waves R1 and R2 simultaneously to generate the interference pattern between the initial and final states of the object: the interference pattern may be varied by changing one of the reference waves while leaving the other unaffected. Mathematically, the procedure is described as follows.

In the first exposure, an intensity distribution I1(x, y) is recorded, where

I1(x, y) = O0² + R10² + OR1* + O*R1   (6.36)

The intensity distribution recorded in the second exposure is given by

I2(x, y) = O0² + R20² + O′R2* + O′*R2   (6.37)

where O′(x, y) = O(x, y) exp[iδ(x, y)] is the object wave from the deformed state of the object. The total exposure is T(I1 + I2), and the amplitude transmittance t(x, y) of the doubly-exposed hologram is

t(x, y) = t0 − βT(I1 + I2) (6.38)

This hologram is illuminated by both reference waves. Therefore, the amplitude of the waves just behind the hologram is (R1 + R2)t(x, y). This hologram generates a multiplicity of images, which may or may not overlap. However, R1R1*O and R2R2*O′ overlap completely and give rise to interference fringes. Other undesired images can be separated from this interference pattern if the angle between the two reference waves is made very large. Generally, the two reference waves are taken on the same side of the object normal and have a mutual angular separation much larger than the angular size of the object.

Although the desired images are separated by the large angular separation between the reference waves, the arrangement is highly sensitive to misalignment of the hologram during repositioning and also to any wavelength change between hologram recording and reconstruction. Assuming that the reference waves are plane waves propagating in directions given by propagation vectors k and k′, the additional phase difference Δφ at any point rH on the hologram due to misalignment and wavelength change is

Δφ(rH) = [(k − k′) × ω] · rH + (Δλ/λ)(k − k′) · rH   (6.39)

where ω = (Δξ, Δη, Δχ) is the rotation vector for small rotations Δξ, Δη, Δχ of the hologram about the x, y, and z axes, and Δλ is the wavelength change between recording and reconstruction. It can be seen that the additional phase change Δφ will be small if k − k′ is small, that is, if the two reference waves propagate in nearly the same direction. The price to be paid for this is a reduction in fringe contrast, since all of the reconstructed waves will now overlap. A schematic of an arrangement to produce two close reference waves is shown in Figure 6.10b. This set-up reduces the influence of misalignment errors and wavelength-change error. In fact, if the recording and reconstruction are done at the same wavelength, there is no error due to wavelength change. However, if the recording is done with a pulsed laser and the reconstruction with a continuous-wave laser, the error due to wavelength change cannot be removed, but it can be reduced using the above experimental arrangement.


6.16.2 SANDWICH HI

HI records a phase difference no matter what the source of this difference is. However, when using HI for studying strains in an object, the main interest is in measuring local deformations on the object surface. Depending on the loading conditions, the object very often undergoes rigid-body movements (translations and tilts), and these are commonly much greater in magnitude than the local deformations. These rigid-body movements may be too large to be measured by HI, or they may completely mask the local deformations of interest. Therefore, there is considerable interest in exploring techniques that can compensate for the influence of rigid-body movements. Fringe-control techniques have been developed, but they are applicable only with real-time HI. Sandwich HI offers an attractive alternative for compensating for rigid-body movements.

As the name suggests, the recording is done on two holographic plates H1 and H2, which form a sandwich as shown in Figure 6.11a. We assume a plane reference wave. The initial state of the object is recorded on H1 (H2 is replaced by a dummy) and the final state on H2. A ray from an object point P hits plate H1 at B and plate H2 at A. Because of this geometry, an additional path difference AB − BD is recorded. From Figure 6.11a, AB = s/cos α and BD = AB cos(α + β), where s is the separation between the plates of the sandwich and α and β are the angles shown in Figure 6.11a. Therefore, the path difference AB − BD is equal to

(s/cos α)[1 − cos(α + β)]

In other words, if only the initial state of the object is recorded on both plates, the reconstruction will give a fringe-free image of the object. However, the path difference AB − BD can be varied by tilting the sandwich, resulting in the appearance of a fringe pattern.

We will now see how much change is introduced in the path difference by the tilt of the sandwich. Let us assume that the sandwich is tilted by a small angle φ so that the reference beam now makes an angle of β′ with the normal to the sandwich, as shown in Figure 6.11b. The corresponding points on the tilted sandwich are denoted by A1 and B1. The reconstructed rays from A1 and B1 are shown by the dotted lines. The path difference between these rays is r cos φ − r cos(α + β′), with β′ = β − φ and r = s/cos α. Therefore, the net path difference is

r[cos φ − cos(α + β′) − 1 + cos(α + β)] ≈ rφ sin(α + β) = sφ tan(α + β)   (6.40)

It can be seen that the path difference change due to the tilt of the sandwich is proportional to φ.

Let us now assume that the object is tilted by a small angle Δξ, as shown in Figure 6.11c, as a result of loading. This introduces an additional path equal to 2xΔξ. For this tilt to be compensated, we must have

2xΔξ = sφ tan(α + β) (6.41)


FIGURE 6.11 Sandwich HI. (a) Recording of a sandwich hologram. (b) Influence of the tilt of the sandwich. (c) Sandwich HI with an object tilted between exposures.


When the angles α and β are small and are also taken equal to each other (i.e., α = β), Equation 6.41 becomes

2xΔξ = 2αsφ (6.42)

But the angle α can be expressed as α = x/z, where z is the distance between the sandwich hologram and the object plane. Therefore,

φ = (z/s)Δξ   (6.43)

A tilt of the sandwich hologram of magnitude φ will fully compensate for a tilt of the object of magnitude Δξ.
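A minimal numerical illustration of Equation 6.43 follows; the object distance, plate separation, and object tilt are assumed values, not taken from the text.

# Sandwich-HI tilt compensation, phi = (z/s) * delta_xi   (Equation 6.43)
z = 1.0           # object-to-sandwich distance in metres (assumed)
s = 1.5e-3        # separation of the two plates in metres (assumed)
delta_xi = 20e-6  # object tilt in radians (assumed)

phi = (z / s) * delta_xi
print(f"Required sandwich tilt: {phi*1e3:.1f} mrad")   # the object tilt is magnified by z/s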

In order to fully utilize the strength of sandwich HI, a reference surface is placed by the side of the actual object—the reference surface should be free from fringes on reconstruction. However, when the fringe pattern on the object is compensated, a linear fringe pattern appears on the reference surface, which is used to determine the tilt given to the sandwich hologram. Alternatively, the reference surface is so placed that it is unaffected by loading but is influenced by bodily tilts and translations. Then, on reconstruction, the reference surface will also carry a fringe pattern, which should be fully compensated in order to obtain the correct deformation map of the object.

6.16.3 REFLECTION HI

Another method to eliminate the contributions of rigid-body movements and tilts to the measured phase difference is to clamp the holographic plate to the object under study, as shown in Figure 6.12a. The photographic plate, with its emulsion side facing the

FIGURE 6.12 (a) Recording of a hologram in reflection HI. (b) Its reconstruction.


object, is illuminated by a plane wave at normal incidence. In the first pass through the plate, this wave acts as a reference wave, and the light scattered from the object forms the object wave. Therefore, a reflection hologram is recorded within the emulsion as interference planes between the incident wave and the one scattered by the object. The recording is made on a thick emulsion, say of about 10 μm thickness: Agfa-Gevaert 8E75 plates are suitable for recording with He–Ne lasers.

For interferometric comparison, two exposures with the object loaded in between are made. The hologram is reconstructed with the laser light as shown in Figure 6.12b. As a result of emulsion shrinkage, reconstruction is possible only at a suitable shorter wavelength. Otherwise, the emulsion is swelled to its original thickness by soaking it in an aqueous solution of triethanolamine, (CH2OHCH2)3N, followed by slow and careful drying in air to observe the fringe pattern at the recording wavelength.

The resulting hologram is insensitive to rigid-body translations parallel and perpendicular to the direction of illumination if plane-wave illumination is used during recording. Also, in-plane rotation has no influence in this case. This is because either no path difference or a constant path difference is introduced. However, additional fringes may be introduced if the rigid-body rotation is around an axis perpendicular to the direction of illumination and if the object and the holographic plate are separated, as shown in Figure 6.13. We take a point P on the object, and consider recording, say, at point H of the hologram. Owing to the rotation, the hologram–object combination takes a new position; that is, P moves to P′ as the object rotates, and H moves to H′ as the plate rotates along with the object. In the first exposure, the optical path recorded

FIGURE 6.13 Calculation of path change when the hologram–object combination is rotated.


is IP + PH, and in the second recording, the path is MP′ + P′H′. The optical path difference Δ is IP + PH − MP′ − P′H′. But PH = P′H′. Therefore, the path difference Δ is IP − MP′ = PH cos β − P′H′ cos(β + φ), where β is the direction of observation and φ is the angle of rotation. This may be rewritten as

Δ = PH[cos β − cos(β + φ)] = PH cos β [1 − cos(β + φ)/cos β]
  = zp[1 − (cos β − φ sin β)/cos β], since φ ≪ 1
  = zpφ tan β   (6.44)

where zp is the distance between the object and the holographic plate. It can be seen that this path difference yields additional fringes. However, no additional fringes are introduced (i.e., Δ = 0 for φ ≠ 0) when (i) zp = 0 and/or (ii) β = 0. Thus, placing the holographic plate in contact with the object eliminates the influence of rotation about an axis perpendicular to the illumination beam.
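Equation 6.44 is easy to evaluate numerically. The sketch below (rotation angle, observation angle, and plate separations are assumed illustration values) shows how the spurious fringes vanish as the plate is brought into contact with the object.

import math

# Spurious path difference Delta = z_p * phi * tan(beta)   (Equation 6.44)
wavelength = 632.8e-9     # assumed wavelength (m)
phi = 1e-4                # assumed rigid-body rotation (rad)
beta = math.radians(30)   # assumed observation angle

for z_p in (10e-3, 1e-3, 0.0):        # assumed plate-to-object separations (m)
    delta = z_p * phi * math.tan(beta)
    print(f"z_p = {z_p*1e3:4.1f} mm -> {delta/wavelength:.2f} spurious fringes")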

6.17 EXTENDING THE SENSITIVITY OF HI

6.17.1 HETERODYNE HI

It has been shown that the fringe separation usually corresponds to λ/2 in path difference. With phase-shifting techniques, deformations as small as λ/30 can be measured. If even smaller deformations are to be monitored, other means must be explored. Heterodyne HI is one such technique, being capable of giving a resolution of λ/1000. This is, however, achieved at the expense of a more complicated experimental set-up and slower data acquisition.

Figure 6.14 shows an experimental arrangement: it is a two-reference-wave geometry. One of the reference waves passes through two acousto-optical (AO) modulators. The first exposure is made with the reference beam R1 (R2 being blocked), and the second exposure, after the object has been loaded, is made with the reference wave R2 (R1 being blocked). Both exposures are made at the same wavelength, with the AO modulators turned off. The reconstruction is done with both reference waves; the AO modulators are turned on so that the wavelengths of the two reference waves are different. As an example, one AO modulator modulates the beam at 40 MHz; the frequency of the light wave diffracted in the first order is thus shifted by 40 MHz. This then passes through the second AO modulator, which modulates it at 40.08 MHz. We now consider diffraction in the −1 diffraction order, to cancel the deviation of the beam and obtain a light wave that is frequency-shifted by 80 kHz (= 40.08 − 40.0 MHz). Thus, the two reference waves now differ in frequency by 80 kHz.

As has been pointed out earlier, a multiplicity of images are produced in two-reference-wave HI. However, we consider only the terms of interest in the amplitude transmittance of the hologram, that is, OR1* + O′R2*, where the reference waves R1 and


FIGURE 6.14 Schematic of heterodyne HI.

R2 have the same frequency. We write the reference and object waves explicitly as

R1 = R01 exp[i(ω1t + φR1)]   (6.45a)
R2 = R02 exp[i(ω1t + φR2)]   (6.45b)
O = O0(x, y) exp[i(ω1t + φo)]   (6.45c)
O′ = O exp[iδ(x, y, z)]   (6.45d)

During reconstruction, R1 = R01 exp[i(ω1t + φR1)] and R2 = R′2, where R′2 = R02 exp[i(ω2t + φR2)], with ω2 = ω1 ± Δω, Δω being the frequency shift introduced by the AO modulators. Again considering just the terms of interest, the amplitude in the reconstructed image is proportional to

(R1 + R′2)(OR1* + O′R2*)

and we obtain the intensity distribution in the image as

|OR1R1* + O′R′2R2*|² = a(x, y) + b(x, y) cos[δ(x, y) + Δωt]   (6.46)

where the various appropriate terms have been absorbed into a(x, y) and b(x, y). If the intensity at a point (x, y) is observed, then it is found to vary sinusoidally with time. The interference phase that is to be measured introduces a phase shift in this signal. Unfortunately, the phase δ(x, y) + Δωt carries no information. However, from the phase difference of the oscillating intensities at two different points, one obtains

Δδ(x1, y1; x2, y2) = [δ(x1, y1) + Δωt] − [δ(x2, y2) + Δωt] = δ(x1, y1) − δ(x2, y2)

The phase difference Δδ, which is the difference between the interference phases at two points, can be measured with a very high degree of accuracy using an electronic phasemeter.
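A software analogue of such a phasemeter can be sketched by demodulating the two detector signals at the beat frequency; the 80 kHz beat matches the example above, while the sampling rate, record length, and the two phase values are assumptions made purely for illustration.

import numpy as np

f_beat = 80e3                        # beat frequency from the AO modulators (Hz)
fs = 2e6                             # assumed sampling rate (Hz)
t = np.arange(4000) / fs             # an integer number of beat cycles (assumed record length)

# Simulated detector signals at two image points (phases assumed for illustration)
delta1, delta2 = 0.7, 2.1
s1 = 1.0 + 0.8 * np.cos(2 * np.pi * f_beat * t + delta1)
s2 = 1.0 + 0.6 * np.cos(2 * np.pi * f_beat * t + delta2)

# Complex demodulation at the beat frequency: the common term (delta_omega * t) drops
# out of the phase difference, leaving delta(x1, y1) - delta(x2, y2)
ref = np.exp(-2j * np.pi * f_beat * t)
phase1 = np.angle(np.sum(s1 * ref))
phase2 = np.angle(np.sum(s2 * ref))
print(f"Recovered phase difference: {phase1 - phase2:.3f} rad (true value {delta1 - delta2:.3f} rad)")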


There are two ways to perform temporal heterodyne evaluation. In one approach, the phase difference is measured by keeping one detector fixed to a reference point and scanning the image with another detector. In this way, the phase difference modulo 2π with respect to the phase at the reference point is measured. The interference phase difference can be summed from point to point to yield the phase distribution along a line or a plane. In the second method, the real image is scanned using a pair, triplet, quadruplet, or quintuplet of photodetectors with known separation, and the phase differences are then measured. In practice, the detector consists of a photodiode to which a fiber is pigtailed. The other end of the fiber scans the real image.

Heterodyne HI cannot be implemented as real-time HI, because of the extremely high stability requirement of the experimental set-up and also of the surrounding environment.

6.18 HOLOGRAPHIC CONTOURING/SHAPE MEASUREMENT

For complete stress analysis, it is necessary to know the shape of the object under study. Three techniques that can be used to obtain the shape of an object are as follows:

• Change of wavelength between exposures—the dual-wavelength method
• Change of refractive index of the medium surrounding the object between exposures—the dual-refractive-index method
• Change of direction of the illumination beam between exposures—the dual-illumination method

6.18.1 DUAL-WAVELENGTH METHOD

We will describe the dual-wavelength method in detail. The technique requires that a holographic recording of the object be made on the same plate with two slightly different wavelengths λ1 and λ2. The processed plate, the hologram, is then illuminated with the reference wave at wavelength λ1, so that the original object wave is reconstructed at λ1 together with the distorted reconstruction of the object recorded at wavelength λ2.

The waves from the original reconstruction and the distorted reconstruction interfere to yield a fringe pattern that is related to the shape of the object. In order to understand how this method works, we again go through the mathematics of recording and reconstruction. In the first exposure, we record an intensity distribution proportional to

O0² + R01² + O1R1* + O1*R1

The intensity recorded in the second exposure is proportional to

O0² + R02² + O2R2* + O2*R2


where the object and reference waves are defined as

O1 = O0 exp(ik1 · r) = O0 exp[(2πi/λ1)zo] exp[(2πi/(2λ1zo))(x − xo)²]   (6.47a)

O2 = O0 exp(ik2 · r) = O0 exp(ik2zo) exp[(ik2/2zo)(x − xo)²]   (6.47b)

R1 = R0 exp(ik1x sin θ)   (6.47c)

R2 = R0 exp(ik2x sin θ)   (6.47d)

For reconstruction, the hologram is illuminated with a reference wave at wavelength λ1. Therefore, considering the terms of interest, the amplitude distribution in the virtual image is proportional to

R1(O1R1* + O2R2*) = O1R0² + O2R0² exp[ix sin θ(k1 − k2)]   (6.48)

The term exp[ix sin θ(k1 − k2)] represents a different direction of propagation of the object wave O2. This phase term will be zero provided that the angle of the reference beam R2, before the second exposure, is adjusted such that k1 sin θ = k2 sin φ, where φ is the angle that the reference beam R2 makes with the optical axis. Then, the amplitude distribution can be rewritten as proportional to

R0²O0{1 + exp[i(k2 − k1)zo] exp[(i(k2 − k1)/(2zo))(x − xo)²]}

The phase term [(k2 − k1)/(2zo)](x − xo)² is very small, since x ≪ zo and xo ≪ zo, and hence the amplitude distribution in the reconstructed virtual image is

R0²O0{1 + exp[i(k2 − k1)zo]}

The intensity distribution in the image can therefore be written as

I = I0{1 + cos[(k2 − k1)zo]} = I0{1 + cos[(2π|λ1 − λ2|/λ1λ2)zo]}   (6.49)

Bright fringes are formed wherever

(2π|λ1 − λ2|/λ1λ2) zo(x, y) = 2mπ

or

zo(x, y) = mλ1λ2/|λ1 − λ2|   (6.50)


FIGURE 6.15 Fringes over a spherical surface obtained with dual-wavelength HI.

The contour interval is Δzo = λ1λ2/|λ1 − λ2|. Essentially, the method produces interference planes perpendicular to the z axis with a separation of λ1λ2/|λ1 − λ2|. The sensitivity of the contour fringes can be varied from 1 μm to several millimeters using two suitable wavelengths. Figure 6.15 is an interferogram of an object taken with two wavelengths. The object is a spherical surface, and hence the intersection by parallel planes yields circular contour lines.
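The contour interval is a one-line calculation; the wavelength pair used below (an argon-ion 514.5 nm/488.0 nm combination) is an assumed example, not one quoted in the text.

def contour_interval(lambda1: float, lambda2: float) -> float:
    """Dual-wavelength contour interval lambda1*lambda2/|lambda1 - lambda2|."""
    return lambda1 * lambda2 / abs(lambda1 - lambda2)

# Assumed wavelength pair for illustration (argon-ion lines)
dz = contour_interval(514.5e-9, 488.0e-9)
print(f"Contour interval = {dz*1e6:.1f} um")   # roughly 9.5 um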

6.18.2 DUAL-REFRACTIVE-INDEX METHOD

Similar results are obtained when double-exposure HI is performed with a change in the refractive index of the medium surrounding the object. The object is placed in an enclosure with a transparent window, and the first exposure is made. The refractive index of the medium surrounding the object is changed, and then the second exposure is made on the same plate. The refractive index can be easily changed when, say, the water medium around the object is replaced by a mixture of water and alcohol. The hologram, when viewed, displays the object covered with interference planes.

If the vacuum wavelength of the recording light is λ0, then the wavelength in a medium of refractive index n1 is λ1 = λ0/n1, and in the other medium of refractive index n2 it is λ2 = λ0/n2. Essentially, this is two-wavelength HI, and the contour interval is given by

Δzo(x, y) = λ1λ2/|λ1 − λ2| = (λ0²/n1n2)/|λ0/n1 − λ0/n2| = λ0/|n2 − n1|   (6.51)


There is no need to correct for the angle of the reference wave prior to the second exposure, as was done in two-wavelength HI. The contour interval can be varied by a suitable choice of the media surrounding the object.

6.18.3 DUAL-ILLUMINATION METHOD

The dual-illumination method is frequently used for contouring because of its simplicity. A collimated beam illuminates the object at an angle θ with the optical axis, and a real-time hologram is recorded in the usual way. On reconstruction, a dark field is observed. Now, if the illumination beam is tilted by a small angle Δθ, a system of equispaced interference planes intersects the object. These planes run parallel to the bisector of the angle Δθ. The contour interval Δzo(x, y) is given by

Δzo(x, y) = λ/(sin θ sin Δθ) ≈ (λ/sin θ)(1/Δθ)   (6.52)

Usually, the interference planes do not intersect the object perpendicular to the line of sight. Large objects are illuminated by a spherical wave from a point source that is translated laterally to produce the interference surfaces. Double-exposure HI can also be performed for contouring by shifting the point source between exposures.

6.19 HOLOGRAPHIC PHOTOELASTICITY

This section deals with the application of holographic interferometry to the study of photoelastic models. The technique is described in detail in Chapter 8.

6.20 DIGITAL HOLOGRAPHY

6.20.1 RECORDING

Instead of using photographic emulsions or other recording media, a CCD is used as the recording medium, and the information is stored electronically. The reconstruction is performed numerically on the electronically stored data. The advantages of recording on a CCD are that the hologram is recorded at video frequency and no chemical or physical development process is necessary. CCD cameras, however, have resolutions of the order of 100 lines/mm, which is at least one order of magnitude smaller than that of the photographic emulsions commonly used in holography. For this reason, the maximum admissible angle between the object and the reference waves is about 1°.

Let us assume that the CCD contains N × N pixels of sizes Δx and Δy along the x and y directions. Assuming that adjacent pixels have no space between them and no overlap, Δx and Δy are also the pixel center distances. In digital holography, a Fresnel hologram is generated on the CCD target by the superposition of an object wave and a reference wave. The hologram is digitized, quantized, and stored in the memory of the image-processing system. Figure 6.16 shows schematics of the arrangements for recording a digital hologram. A plane reference wave is assumed for simplicity. The reconstruction is performed numerically. However, according to the sampling theorem, only spatial frequencies less than μmax along the x direction and νmax along


FIGURE 6.16 Schematic of arrangements for recording a digital hologram of (a) a small object and (b) a large object.

the y direction can be reliably reconstructed, where μmax = (2Δx)⁻¹ and νmax = (2Δy)⁻¹. This sets the maximum allowable angle between the object wave and the reference wave. The angle θ in the (x, z) plane can be written as θ = sin⁻¹(λμmax). As a numerical example, consider a CCD of 1024 × 1024 pixels, each 6.8 μm × 6.8 μm in size, and illumination of the object by a He–Ne laser beam at 0.633 μm. The maximum allowable angle is 2.67°. Therefore, an object placed at a distance of 1 m from the CCD must be smaller than 4.3 cm along the x direction. Larger objects either have to be placed further away from the CCD plane or have their apparent size reduced by a lens.
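The angular part of this numerical example can be reproduced with a few lines of code; the object-size estimate printed at the end is a rough bound only, and is slightly larger than the 4.3 cm quoted above, presumably because the finite extent of the CCD itself is neglected here.

import math

# Values from the numerical example in the text
pixel_pitch = 6.8e-6      # CCD pixel size (m)
wavelength = 0.633e-6     # He-Ne wavelength (m)
z = 1.0                   # object-to-CCD distance (m)

mu_max = 1 / (2 * pixel_pitch)                 # maximum resolvable spatial frequency (per metre)
theta_max = math.asin(wavelength * mu_max)     # maximum object-reference angle
print(f"theta_max = {math.degrees(theta_max):.2f} deg")        # about 2.67 deg

# Rough transverse extent subtended by theta_max at distance z
print(f"z * sin(theta_max) = {z * math.sin(theta_max) * 100:.1f} cm")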

6.20.2 RECONSTRUCTION

The reconstruction of the hologram is done by illuminating it with a plane reference wave. The amplitude in the real image can be expressed as

A(\xi, \eta) = \frac{iR}{\lambda z} \exp\!\left[-\frac{i\pi}{\lambda z}\left(\xi^2 + \eta^2\right)\right] \int_{-\infty}^{\infty}\!\int_{-\infty}^{\infty} t(x, y) \exp\!\left[-\frac{i\pi}{\lambda z}\left(x^2 + y^2\right)\right] \exp\!\left[\frac{2\pi i}{\lambda z}\left(x\xi + y\eta\right)\right] \mathrm{d}x\,\mathrm{d}y \qquad (6.53)

where R is the amplitude of the reference wave. This equation is valid under the Fresnel approximation, namely,

z^3 \gg \frac{\pi}{4\lambda}\left[(x - \xi)^2 + (y - \eta)^2\right]^2_{\max}

where z is the distance between the object and the hologram, and hence also between the hologram and the real image.


Since the data is available in digital form, Equation 6.53 can be written as

A(m, n) = \frac{iR}{\lambda z} \exp\!\left[-\frac{i\pi}{\lambda z}\left(m^2\Delta\xi^2 + n^2\Delta\eta^2\right)\right] \sum_{k=0}^{N-1}\sum_{l=0}^{N-1} t(k, l) \exp\!\left[-\frac{i\pi}{\lambda z}\left(k^2\Delta x^2 + l^2\Delta y^2\right)\right] \exp\!\left[2\pi i\left(\frac{km}{N} + \frac{ln}{N}\right)\right] \qquad (6.54)

with m = 0, 1, 2, . . . , N − 1 and n = 0, 1, 2, . . . , N − 1. Here t(k, l) is an N × N matrix (the number of pixels in the CCD) of data that describes the digitally sampled amplitude transmittance of the hologram; Δx, Δy and Δξ, Δη are the pixel sizes in the hologram plane and in the plane of the real image, respectively. The amplitude distribution in the real image is obtained as the inverse Fourier transform of the product of the hologram transmittance t(k, l) and an exponential factor that contains a quadratic phase term. The Fourier transform is performed by a fast Fourier transform (FFT) algorithm. The amplitude A(ξ, η) in the reconstructed image is a complex function, and hence both the intensity and the phase can be computed at each pixel.

FIGURE 6.17 Procedures for obtaining digital holographic interferograms.


The intensity I(m, n) and the phase φ(m, n) are determined as follows:

I(m, n) = |A(m, n)|^2 = \left[\mathrm{Re}\{A(m, n)\}\right]^2 + \left[\mathrm{Im}\{A(m, n)\}\right]^2 \qquad (6.55a)

\phi(m, n) = \tan^{-1}\frac{\mathrm{Im}\{A(m, n)\}}{\mathrm{Re}\{A(m, n)\}}, \quad \text{with values between } -\pi \text{ and } \pi \qquad (6.55b)

As a consequence of surface roughness, the phase φ(m, n) varies randomly. In digital holography, only the intensity variation on the object is of interest, and hence several calculations, such as that of the factor iR/λz, of the phase φ(m, n), and of the exponential term exp[−(iπ/λz)(m²Δξ² + n²Δη²)], are not carried out.
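Apart from the phase factors, Equation 6.54 is an inverse discrete Fourier transform and is therefore conveniently evaluated with an FFT. The sketch below is a minimal illustration, assuming a stored N × N hologram array t and unit reference amplitude R; the function name and parameters are hypothetical, not part of the original text:

```python
import numpy as np

def fresnel_reconstruct(t, wavelength, z, dx, dy):
    """Discrete Fresnel transform of a recorded hologram t(k, l), after Equation 6.54."""
    N = t.shape[0]
    k = np.arange(N)
    # Quadratic phase factor applied to the hologram (hologram-plane chirp)
    chirp = np.exp(-1j * np.pi / (wavelength * z)
                   * ((k[:, None] * dx) ** 2 + (k[None, :] * dy) ** 2))
    # The double sum with exp[2*pi*i(km/N + ln/N)] is an inverse DFT, evaluated by FFT
    A = np.fft.ifft2(t * chirp) * N * N
    # Pixel sizes in the image plane follow from the FFT sampling relation
    dxi, deta = wavelength * z / (N * dx), wavelength * z / (N * dy)
    m = np.arange(N)
    prefactor = (1j / (wavelength * z)) * np.exp(
        -1j * np.pi / (wavelength * z)
        * ((m[:, None] * dxi) ** 2 + (m[None, :] * deta) ** 2))
    return prefactor * A   # complex amplitude A(m, n), with R taken as unity

# Intensity and phase at each pixel (Equations 6.55a and 6.55b):
# I = np.abs(A)**2; phi = np.arctan2(A.imag, A.real)
```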

6.21 DIGITAL HOLOGRAPHIC INTERFEROMETRY

Comparison of the states of the object due to loading can be performed by digital holographic interferometry in a manner similar to that of double-exposure HI. Two holograms with amplitude transmittances t1(x, y) and t2(x, y), belonging to the two states of the object, are digitally recorded and stored. During reconstruction, two procedures can be used to obtain the interferogram representing the change in the state of the object. In the first procedure, the transmittances of both holograms are added pixel by pixel and the Fresnel transform of the sum is taken.

FIGURE 6.18 Numerically reconstructed phase of (a) an undeformed object and (b) a deformed object, and (c) phase difference between (a) and (b), producing an interferogram. (From U. Schnars, T. M. Kreis, and W. P. O. Jüptner, Opt. Eng., 35, 977–982, 1996. With permission.)


This reconstructs the sum of the two amplitudes. When the intensity distribution is calculated, it exhibits the cosine variation characteristic of the interference between the two waves. This interference pattern can be evaluated by any of the well-known procedures. In the second method, the holograms are reconstructed individually, and their phases φ1(m, n) and φ2(m, n) are calculated pixel-wise. Although φ1(m, n) and φ2(m, n) are random functions, their difference δ(m, n) = φ2(m, n) − φ1(m, n) is deterministic and gives the phase change due to loading. The phase change δ(m, n) is calculated as

\delta(m, n) = \begin{cases} \phi_2(m, n) - \phi_1(m, n) & \text{for } \phi_2(m, n) \ge \phi_1(m, n) \\ \phi_2(m, n) - \phi_1(m, n) + 2\pi & \text{for } \phi_2(m, n) < \phi_1(m, n) \end{cases} \qquad (6.56)
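A minimal sketch of the second procedure, assuming two reconstructed complex-amplitude arrays (obtained, for example, with the hypothetical fresnel_reconstruct helper sketched earlier):

```python
import numpy as np

def interference_phase(A1, A2):
    """Interference phase from two reconstructed complex amplitudes, Equation 6.56."""
    phi1 = np.arctan2(A1.imag, A1.real)   # phase of the initial state, in (-pi, pi]
    phi2 = np.arctan2(A2.imag, A2.real)   # phase of the deformed state
    delta = phi2 - phi1
    delta[delta < 0.0] += 2.0 * np.pi     # add 2*pi wherever phi2 < phi1
    return delta                          # wrapped interference phase in [0, 2*pi)

# First procedure (double exposure): reconstruct the pixel-wise sum instead and take
# the intensity, e.g. I = np.abs(fresnel_reconstruct(t1 + t2, wavelength, z, dx, dy))**2
```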

These two approaches are summarized in Figure 6.17. Figures 6.18a and 6.18b show the numerically reconstructed phase of an undeformed and a deformed object, respectively, and Figure 6.18c shows the difference between these two as an interferogram.

Since digital HI can handle only small objects, it has potential applications in the testing and characterization of microsystems. Comparison of remote objects, and also of their responses to external agencies, can be done conveniently with digital holography.


7 Speckle Metrology

7.1 THE SPECKLE PHENOMENON

Illumination of a diffuse object by coherent light produces a grainy structure in space. This grainy light distribution is known as a speckle pattern. It arises as a result of self-interference of numerous waves from scattering centers on the surface of the diffuse object, as shown in Figure 7.1: the amplitudes and phases of these scattered waves are random variables. We assume that (i) the amplitude and phase of each scattered wave are statistically independent variables, and are also independent of the amplitudes and phases of all other waves, and (ii) the phases of these waves are uniformly distributed between −π and π. Such a speckle pattern is said to be fully developed. The resultant complex amplitude u(x, y) = |u(x, y)|e^{iφ} is given by

u(x, y) = |u(x, y)|\,e^{i\phi} = \frac{1}{\sqrt{N}}\sum_{k=1}^{N} u_k = \frac{1}{\sqrt{N}}\sum_{k=1}^{N} a_k e^{i\phi_k} \qquad (7.1)

where a_k and φ_k are the amplitude and phase of the wave from the kth scatterer. For such a speckle pattern, the resultant complex amplitude u(x, y) obeys Gaussian statistics.

The probability density function p(I) of speckle intensity is given by

p(I) = \frac{1}{\langle I \rangle} \exp\!\left(-\frac{I}{\langle I \rangle}\right) \qquad (7.2)

where ⟨I⟩ is the average intensity. This gives the probability that the intensity at a point in the speckle pattern has the value I. The probability density function of the speckle intensity thus follows a negative exponential law, and the most probable intensity value is zero. A measure of the contrast in the speckle pattern is the ratio c = σ/⟨I⟩, where σ is the standard deviation of the speckle intensity. The contrast of a fully developed, linearly polarized speckle pattern is unity.
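These statistics are straightforward to verify numerically. The sketch below (with assumed, purely illustrative parameters) builds a random phasor sum as in Equation 7.1 and checks that the contrast of the resulting intensity is close to unity:

```python
import numpy as np

rng = np.random.default_rng(0)
n_points = 20_000      # number of observation points
n_scatterers = 100     # number of waves summed at each point

# Unit-amplitude scatterers with phases uniformly distributed between -pi and pi
phases = rng.uniform(-np.pi, np.pi, size=(n_points, n_scatterers))
u = np.exp(1j * phases).sum(axis=1) / np.sqrt(n_scatterers)   # Equation 7.1

I = np.abs(u) ** 2
contrast = I.std() / I.mean()
print(f"mean intensity = {I.mean():.3f}, contrast = {contrast:.3f}")   # contrast close to 1

# A histogram of I follows the negative exponential law of Equation 7.2
```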

7.2 AVERAGE SPECKLE SIZE

The grains or speckles in the pattern are not well defined but have a structure. However, we can associate an average size with the speckle. We consider two cases.



FIGURE 7.1 Formation of a speckle pattern.

7.2.1 OBJECTIVE SPECKLE PATTERN

It was pointed out earlier that a speckle pattern is formed in space when a diffuse object is illuminated by a coherent wave. This pattern, resulting from free-space propagation, is termed an objective speckle pattern. The speckle size in an objective speckle pattern is given by

\sigma_{\mathrm{ob}} = \frac{\lambda z}{D} \qquad (7.3)

where D is the size of the illuminated area of the object and z is the distance between the object and the observation plane (Figure 7.2a). The size is governed by the interference between the waves from the extreme scattering points on the object. The relationship is the same as that expected from Young's double-slit experiment, the slits being at the extreme positions in the illuminated area. The speckle size increases linearly with the separation between the object and the observation plane.

7.2.2 SUBJECTIVE SPECKLE PATTERN

A speckle pattern formed at the image plane of a lens is called a subjective speckle pattern. This arises owing to interference of waves from several scattering centers within the resolution element (area) of the lens: in the image of this resolution area, the randomly dephased impulse response functions are added, resulting in a speckle. Therefore, the speckle size is governed by the well-known Airy formula. The diameter of the Airy disk is given approximately by

\sigma_s \approx \frac{\lambda b}{D'} \qquad (7.4)

where D′ is the diameter of the lens and b is the image distance (Figure 7.2b). Here again, the speckle size is determined by the maximum aperture of the lens. The objective speckle pattern immediately in front of the lens is transmitted through it and appears on the other side of the lens. The speckles at the periphery of this pattern determine the speckle size at the image plane.



FIGURE 7.2 (a) Objective speckle pattern. (b) Subjective speckle pattern.

By introducing the f-number of the lens, F_# = f/D′, where f is the focal length of the lens, the average speckle size can be expressed as

\sigma_s = (1 + m)\,\lambda F_\# \qquad (7.5)

Here we have introduced the magnification of the lens, m = b/a = (b − f)/f. It can thus be seen that the speckle size can be controlled by (i) the magnification m, and (ii) the F_# of the lens. Control of the speckle size by the F_# is often used in speckle metrology to match the speckle size with the pixel size of charge-coupled device (CCD) array detectors.
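The speckle-size relations are easily evaluated; the sketch below (with assumed, illustrative values) compares the objective and subjective speckle sizes for a typical laboratory geometry:

```python
import numpy as np

wavelength = 0.633e-6   # He-Ne laser, m

def objective_speckle(z, D):
    """Average objective speckle size, Equation 7.3 (free-space propagation)."""
    return wavelength * z / D

def subjective_speckle(m, f_number):
    """Average subjective speckle size at the image plane, Equation 7.5."""
    return (1.0 + m) * wavelength * f_number

# Object with a 20 mm illuminated diameter observed 0.5 m away
print(f"objective speckle:  {objective_speckle(0.5, 20e-3) * 1e6:.1f} um")

# Imaging at unit magnification with an f/8 lens
print(f"subjective speckle: {subjective_speckle(1.0, 8.0) * 1e6:.1f} um")
```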

7.3 SUPERPOSITION OF SPECKLE PATTERNS

Speckle patterns can be added on either an amplitude basis or an intensity basis. An example of the addition of speckle patterns on an amplitude basis arises in shear speckle interferometry, where the two speckle patterns are shifted and then superposed. In such a case, the statistics of the resultant speckle pattern remain unchanged. However, when the speckle patterns are added on an intensity basis, for example when two speckle records are made on the same plate, the speckle statistics are completely modified and are governed by the correlation coefficient.


7.4 SPECKLE PATTERN AND OBJECT SURFACE CHARACTERISTICS

In the explanation of a fully developed, linearly polarized speckle pattern, it must be stressed that the phases are uniformly distributed between −π and π and that a large number of scatterers participate in speckle formation. This results in a unit-contrast speckle pattern. However, if the surface is smoother than is necessary to satisfy this condition, the speckle contrast decreases. The speckle contrast thus depends on the surface roughness and on the coherence of the light. In fact, speckle contrast has been employed to measure surface roughness over a large range using both monochromatic and polychromatic illumination.

7.5 SPECKLE PATTERN AND SURFACE MOTION

7.5.1 LINEAR MOTION IN THE PLANE OF THE SURFACE

Let us assume that the object is translated by a distance d in its own plane: the objective speckle pattern also translates by the same magnitude in the same direction (Figure 7.3a). However, the structure of the speckle pattern begins to change (i.e., decorrelation sets in) when some of the scattering centers move out of the illumination beam. For the subjective speckle pattern, the speckle motion is in the direction opposite to that of the object, and its magnitude is md, where m is the magnification (Figure 7.3b).

7.5.2 OUT-OF-PLANE DISPLACEMENT

Let us consider that a speckle is formed at a position P(r, 0), as shown in Figure 7.4a. This speckle is due to the superposition of all the waves from the object. When the object translates axially by a small distance ε, all of these waves are phase-shifted by nearly the same amount. If this phase shift is a multiple of 2π, then a similar state of speckle will exist at the point P′(r − Δr, 0); that is, the speckle will have shifted radially by Δr.

FIGURE 7.3 In-plane displacement: (a) objective speckle pattern; (b) subjective speckle pattern.


z

A

(a)

(b)

A'

P´(r – Dr, 0)P(r, 0)

a be

e

FIGURE 7.4 Out-of-plane displacement: (a) objective speckle pattern; (b) subjective speckle pattern.

radially by Δr. We therefore have

r/z = (r − Δr)/(z − ε)

or

|Δr| = εr/z (7.6)

A similar situation holds for a subjective speckle pattern, as shown in Figure 7.4b. We therefore obtain

a/(a − ε) = (r + Δr)/r

or

|Δr| = εr/a (7.7)

where a is the object distance. Thus, for axial translation of the object, the speckle pattern shifts radially—it either expands or contracts, depending on the direction of the shift of the object. It may be noticed that the radial movement of the speckle pattern requires a rather large out-of-plane displacement of the object owing to the presence of the r/z or r/a factor, which is very small.
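A short numerical illustration of Equation 7.6 (all values assumed for the example) makes this insensitivity concrete:

```python
# Radial speckle shift produced by an axial object translation, Eq. 7.6:
# |delta_r| = epsilon * r / z.  The values below are assumed for illustration.

epsilon = 10e-6   # axial (out-of-plane) displacement of the object (m)
z = 1.0           # object-to-recording-plane distance (m)

for r in (0.005, 0.02, 0.05):          # radial position in the recording plane (m)
    delta_r = epsilon * r / z
    print(f"r = {r*1e3:4.1f} mm  ->  speckle shift = {delta_r*1e9:6.1f} nm")
```

Even a 10 μm axial displacement moves a speckle 20 mm off-axis by only about 0.2 μm, much less than a typical speckle size, which is why speckle photography is rarely used for out-of-plane measurement.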

7.5.3 TILT OF THE OBJECT

Let the object be illuminated in a direction that makes an angle α with the z axis (taken in the direction of the surface local normal) and let the speckle pattern be observed at a point P in the direction making an angle β, as shown in Figure 7.5a. When the object is tilted by a small angle Δφ, the speckle pattern shifts to a new position, which is shifted angularly by Δψ, where Δψ = (1 + cos α/cos β)Δφ. Only for the


FIGURE 7.5 Motion of the speckle due to the tilt of the object: (a) at a far plane (objective speckle pattern); (b) at the FT plane.

case of normal illumination and observation directions (very small angles α and β)

is the angular shift of the speckle pattern twice the tilt of the object. The situation is quite different when we consider the subjective speckle pattern. In fact, there is no shift at the image plane due to the tilt of the object. Between the image plane and the focal plane [or Fourier-transform (FT) plane], speckle shift is due to both in-plane and tilt contributions. At the FT plane, the speckle pattern does not shift owing to in-plane translation, but shifts by Δxf = f Δφ when the object is tilted by Δφ (Figure 7.5b). When the object is illuminated by a divergent wave, one can find a plane that is sensitive only to the in-plane motion and another plane sensitive to tilt alone.

It can thus be seen that a speckle pattern undergoes changes when the object is either translated or tilted. However, for deformation measurement, we are interested in measuring changes/shifts at various points of an object, and hence we employ only the subjective speckle pattern. In this way, a correspondence between the object and the image points is maintained. In other words, the local changes in the object cause changes locally in the speckle pattern rather than over the whole speckle pattern plane. The changes in a speckle pattern due to object deformation are (i) positional changes accompanied by irradiance changes and decorrelation and (ii) phase changes, which are made visible by adding a specular reference wave or speckled reference wave at


the image plane. Often, both changes occur simultaneously, but one may dominate over the other.

The speckle methods used for deformation measurement and vibration analysis can be placed in the following four categories:

• Speckle photography
• Speckle interferometry
• Speckle shear interferometry
• Electronic speckle pattern interferometry (ESPI) and shear ESPI.

The first three of these methods employ a photographic medium, photorefractive crystals, and so on for recording; the ESPI methods employ electronic detection. Correlation, as in speckle photography, can also be accomplished digitally.

7.6 SPECKLE PHOTOGRAPHY

The object may be illuminated obliquely or normally. We make an image of the object, as shown in Figure 7.6a, on a photographic plate capable of resolving the speckle pattern, and the first exposure is recorded. The object is then loaded, and another record of the displaced speckle pattern is made on the same plate. In this way, we have recorded two speckle patterns, one of them translated locally by d. We need to find out d at various locations on the plate, and then generate the deformation map. It was pointed out earlier that the speckle displacement has poor sensitivity for axial (out-of-plane) displacements. Hence, speckle photography is used mostly to measure in-plane displacements and in-plane vibration amplitudes.

Let us first examine the specklegram (negative film or plate) realized by making a single exposure. The intensity recorded is given by I(x, y) = |u(x, y)|². The amplitude transmittance of this negative (specklegram) is expressed as

t(x, y) = t0 − βTI(x, y) (7.8)

where t0 is the bias transmittance, β is a constant, and T is the exposure time. As the speckle pattern consists of a grainy structure, each grain being identified by a δ-function, the intensity I(x, y) can also be expressed as

I(x, y) = ∫∫ I(x′, y′) δ(x − x′, y − y′) dx′ dy′ (7.9)

When this specklegram is placed in a set-up as shown in Figure 7.6b, and illuminated by a parallel beam of light, the amplitude transmitted is given by u0(x, y)t(x, y), where u0(x, y) is the amplitude of the illuminating plane wave. The specklegram will diffract the light over a reasonably large cone, depending on the speckle size.

We collect this diffracted light with a lens and make an observation at the back focal plane of the lens. Essentially, we are taking the Fourier transform of the amplitude transmittance t(x, y) of the speckle record. The amplitude at the FT plane, assuming


FIGURE 7.6 Speckle photography: (a) bodily in-plane displacement; (b) set-up to observe the diffracted field from the specklegram at the focal plane of a lens; (c) halo from a single-exposure specklegram; (d) fringes in the halo from a double-exposure specklegram.

a unit-amplitude illumination wave, is given by

U(μ, ν) = ∫∫ t(x, y) e^(−2πi(μx+νy)) dx dy
        = ∫∫ t0 e^(−2πi(μx+νy)) dx dy − βT ∫∫ I(x, y) e^(−2πi(μx+νy)) dx dy
        = t0 δ(μ, ν) − βT ℱ[I(x, y)] (7.10)

where ℱ[ ] signifies the Fourier transform. The intensity distribution at the FT plane, |U(μ, ν)|², consists of a strong central peak and a light distribution around it, called the halo. This is shown in Figure 7.6c; the central portion is blocked while recording the halo. The halo contains a range of spatial frequencies between 0 and 1/σs, where σs is the average speckle size in the subjective speckle pattern. The diameter of the halo is 2fλ/σs. The halo distribution is given by the autocorrelation of the aperture function of the imaging lens. A physical insight into the formation of the halo can be obtained if we


consider a specklegram as having a large number of sinusoidal gratings of continuously varying pitch and random orientations. When the specklegram is illuminated, these gratings diffract the beam in various directions, forming the halo at the back focal plane of the lens.

Let us now consider a double-exposure specklegram. In the first exposure, an intensity distribution I1(x, y) is recorded. The object is then deformed, and the second exposure I2(x, y) is recorded on the same plate. Owing to the deformation, the speckle pattern shifts locally. Therefore, the intensity distribution I2(x, y) can be expressed as

I2(x, y) = ∫∫ I(x′, y′) δ(x + dx − x′, y + dy − y′) dx′ dy′ (7.11)

where dx and dy are the components of d along the x and y directions, respectively. The total intensity recorded is

It(x, y) = I1(x, y) + I2(x, y)
         = ∫∫ I(x′, y′)[δ(x − x′, y − y′) + δ(x + dx − x′, y + dy − y′)] dx′ dy′ (7.12)

Again, if this double-exposure specklegram is illuminated by a collimated beam, then one obtains, at the focal plane of the lens, a central order and the superposition of halos belonging to the initial and the final states of the object. The amplitude transmittance of the double-exposure specklegram is given by

t(x, y) = t0 − βT [I1(x, y) + I2(x, y)] (7.13)

The amplitude at the FT plane is given by

ℱ[t(x, y)] = t0 δ(μ, ν) − βT ℱ[I1(x, y) + I2(x, y)] (7.14)

For simplicity, we now confine ourselves to one dimension, and hence write the total intensity as

It(x) = ∫∫ I(x′)[δ(x − x′) + δ(x + dx − x′)] dx′ dy′ (7.15)

which is the convolution of I(x) with δ(x) + δ(x + dx). Therefore,

ℱ[It(x)] = ℱ[I1(x, y) + I2(x, y)] = ℱ[I(x)] ℱ[δ(x) + δ(x + dx)] (7.16)

The Fourier transform of δ(x) is a plane wave propagating on-axis, and that of δ(x + dx) is also a plane wave, propagating inclined to the axis; that is,

ℱ[δ(x) + δ(x + dx)] = ∫ δ(x) e^(−2πiμx) dx + ∫ δ(x + dx) e^(−2πiμx) dx
                    = c + c e^(2πiμdx) (7.17)


where c is a constant, being the Fourier transform of a δ-function. The spatial frequency μ is defined as μ = xf/(fλ), with xf being the x coordinate on the FT plane and f the focal length of the lens. Suppressing the central order, we can write the intensity distribution in the FT plane (in the halo) as

I(μ) ∝ 2β²T²c²(1 + cos 2πμdx) = I0(μ) cos²(πμdx) (7.18)

It is therefore seen that the halo, given by I0(μ), is modulated by the cos²(πμdx) term. In other words, a fringe pattern appears in the halo. Figure 7.6d shows the fringe pattern in the halo. This fringe pattern is similar to the Young's double-aperture fringe pattern. Therefore the fringes are termed Young's fringes. In the cos²(πμdx) term, there are two variables, namely μ (the coordinate along the x axis) and dx, which is a function of local coordinates if the deformation is not constant over the whole specklegram. For constant dx, the interpretation is simple and straightforward: the halo distribution is modulated by the cosinusoidal fringes, whose spacing is Δxf = fλ/dx. Therefore, the magnitude of dx may be obtained from the fringe width measurement. The direction of the displacement dx is always along the normal to the fringes. However, if dx is not constant, then each value of dx will produce its own fringe pattern, which on superposition may completely wash out this intensity variation. In such a situation, the specklegram is interrogated with a narrow or unexpanded laser beam to extract displacement information from each region of interrogation. The displacement in the region of interrogation should be constant.
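A minimal numerical sketch of this relation (assumed filtering geometry and an assumed measured fringe spacing) shows how dx is recovered from the Young's fringes:

```python
# Young's fringes in the halo of a double-exposure specklegram:
# fringe spacing Delta_xf = f * lambda / d_x, hence d_x = f * lambda / Delta_xf.
# All numerical values are assumed for illustration.

wavelength = 632.8e-9     # m (assumed He-Ne illumination of the specklegram)
f = 0.5                   # focal length of the transform lens (m, assumed)
fringe_spacing = 3.2e-3   # measured fringe spacing in the halo (m, assumed)

d_x = f * wavelength / fringe_spacing
print(f"local speckle displacement d_x = {d_x * 1e6:.1f} um")
# ~99 um on the specklegram; dividing by the imaging magnification gives the
# corresponding in-plane displacement on the object surface.
```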

7.7 METHODS OF EVALUATION

A double-exposure specklegram contains two shifted speckle patterns, and this shift is to be determined at a number of locations on the specklegram in order to generate the deformation map. It was remarked earlier that Young's-type fringes are formed when the specklegram is illuminated by a narrow beam. These fringes give the direction and magnitude of the displacement at a point. The process of extracting the information from a specklegram is called filtering. The filtering is done both at the plane of the specklegram and at its FT plane. These methods are called pointwise filtering and wholefield filtering.

Another method of filtering, usually applicable to out-of-plane displacement measurement, is known as Fourier filtering. It can also be used in other cases; the fringes are generally localized on the specklegram.

7.7.1 POINTWISE FILTERING

It was mentioned earlier that if the displacement of the speckles is nonuniform, no fringe pattern may be formed at the FT plane when the whole specklegram is illuminated. However, it is safe to assume that the speckle movement over a very small region of the image (specklegram) is uniform. Therefore, if the double-exposure specklegram is illuminated by a narrow (unexpanded) beam and the observation is made at a plane sufficiently far away (far field), as shown in Figure 7.7a, then a system of Young's fringes will be formed. The fringes always run perpendicular to the direction


FIGURE 7.7 (a) Pointwise filtering arrangement. (b) Wholefield filtering arrangement. (c) Fourier filtering arrangement. (d) Path difference from a speckle pair.

of displacement, and the fringe width p is inversely proportional to the displacement; that is, p = λz/|d|. Thus, both the direction and magnitude of the displacement at each interrogation region on the specklegram are obtained. The sign ambiguity is still not resolved. It can be resolved by giving a known linear displacement to the photographic plate before the second exposure. By obtaining the magnitude and direction of the displacement at a large number of points on the specklegram, a displacement map on the specklegram is generated, which is then translated to the object surface through the magnification of the imaging system.
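A hedged Python sketch of the pointwise evaluation is given below; the observation distance and measured fringe width are assumed values, and the direction of d follows from the fringe orientation, since the fringes run normal to the displacement.

```python
# Pointwise filtering: fringe width p = lambda * z / |d|  =>  |d| = lambda * z / p.
# The values below are assumed for illustration only.

wavelength = 632.8e-9   # m
z = 1.2                 # specklegram-to-observation-plane distance (m, assumed)
p = 4.0e-3              # measured fringe width of the Young's pattern (m, assumed)

d_magnitude = wavelength * z / p
print(f"|d| at this interrogation point = {d_magnitude * 1e6:.1f} um")
# The displacement direction is perpendicular to the fringes; its sign remains
# ambiguous unless a known plate shift was added before the second exposure.
```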


The contrast of the Young's fringes is influenced by several factors, the most important of which are nonuniform displacement and missing speckle pairs in the region of illumination. Further, it must be kept in mind that the halo distribution is nonuniform and the background intensity may not be constant. Therefore, the Young's fringes do not have unit contrast and the positions of their maxima and minima are shifted. This shift introduces errors in the calculation of the displacement. Methods have been developed to correct for this.

7.7.2 WHOLEFIELD FILTERING

We consider an arrangement shown in Figure 7.7b. This is a 4f configuration. The filtering is carried out at the FT plane by an aperture. The image of the specklegram is formed at unit magnification on the output plane by the light allowed through the filtering aperture. The image contains the fringes that represent constant in-plane displacements in the direction of the filtering aperture. In order to understand the working of the wholefield filtering technique, let us place an aperture of appropriate size at a position xf along the x direction. All of the identical pair scatterers (speckles) on the specklegram will diffract light in phase in the direction of the filtering aperture if their separation is such that the waves diffracted by these point scatterers have a path difference of integral multiples of λ; that is,

dx sin θ = mλ, m = 0, ±1, ±2, ±3, . . . (7.19)

where dx is the x component of the displacement vector and sin θ = xf/f. Thus, we obtain

dx = mλf/xf (7.20a)

These areas therefore appear bright in the image, forming bright fringes that are loci of constant dx. Similarly, when the filtering aperture is placed at yf along the y axis, we obtain dy:

dy = m′λf/yf (7.20b)

These fringes are loci of constant dy. In other words, the fringes represent the contours of constant in-plane displacement components, which are separated by incremental displacements Δdx and Δdy, where

Δdx = λf/xf (7.21a)

Δdy = λf/yf (7.21b)

This is a wholefield method: the fringes are formed over the whole object surface. In order to obtain dx and dy, we need to know m and m′, and hence regions on the object where no displacement has taken place. It can also be seen that the sensitivity of the method is variable, being maximum when the filtering aperture is placed at the periphery of the diffraction halo; the increase in sensitivity follows the decrease in the available light for image formation.
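The sketch below (assumed wavelength and transform-lens focal length) evaluates Equation 7.21a for several aperture positions and shows how the per-fringe displacement increment shrinks as the aperture moves towards the edge of the halo:

```python
# Wholefield filtering sensitivity, Eq. 7.21a: one fringe corresponds to an
# in-plane displacement increment of lambda * f / x_f.  Assumed values below.

wavelength = 632.8e-9   # m
f = 0.5                 # focal length of the transform lenses (m, assumed)

for x_f in (5e-3, 10e-3, 20e-3, 40e-3):   # filtering-aperture position (m)
    increment = wavelength * f / x_f
    print(f"x_f = {x_f*1e3:5.1f} mm  ->  displacement per fringe = "
          f"{increment*1e6:6.2f} um")
```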


7.7.3 FOURIER FILTERING: MEASUREMENT OF OUT-OF-PLANE DISPLACEMENT

It has been shown that a longitudinal or axial displacement ε of the object results in a radial displacement of the speckle pattern, Δr = εr/z. The displacement magnitude ε is obtained by Fourier filtering. The specklegram is illuminated by a parallel beam, as shown in Figure 7.7c, and an aperture is placed on the optical axis for filtering. A lens placed just behind the aperture images the specklegram at the observation plane. All of these point pairs (identical scatterers) will diffract the incident light in phase at the filtering plane if Δr sin θ = mλ, where sin θ = r/z1 and λ is the wavelength of light used for filtering (Figure 7.7c). Thus,

Δr sin θ = (εr/z)(r/z1) = mλ

or

εr²/zz1 = mλ (7.22)

This indicates that a circular fringe pattern is observed at the observation plane. The displacement ε is obtained by measuring the radii of fringes of different orders as follows:

ε = nλzz1/(rm+n² − rm²) (7.23)

where rm+n and rm are the radii of the (m + n)th- and mth-order circular fringes, respectively.
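A small numerical sketch of Equation 7.23 (all geometry and ring radii are assumed values) again illustrates the weak out-of-plane sensitivity of speckle photography:

```python
# Out-of-plane displacement from Fourier filtering, Eq. 7.23:
# epsilon = n * lambda * z * z1 / (r_{m+n}^2 - r_m^2).  Assumed values below.

wavelength = 632.8e-9   # m
z = 1.0                 # object-to-recording-plane distance (m, assumed)
z1 = 0.8                # specklegram-to-filtering-plane distance (m, assumed)
n = 2                   # number of orders between the two measured rings
r_m = 12e-3             # radius of the m-th order ring (m, assumed)
r_mn = 20e-3            # radius of the (m + n)-th order ring (m, assumed)

epsilon = n * wavelength * z * z1 / (r_mn**2 - r_m**2)
print(f"axial displacement epsilon = {epsilon * 1e3:.2f} mm")
# ~4 mm of axial motion is needed to produce these rings, confirming that the
# method responds only to rather large out-of-plane displacements.
```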

7.8 SPECKLE PHOTOGRAPHY WITH VIBRATING OBJECTS: IN-PLANE VIBRATION

Let us consider an object that is executing an in-plane vibration with amplitude A(x, y). The object is imaged on a photographic plate. Owing to the in-plane vibration, a speckle stretches to a line of length 2A(x, y)m, where m is the magnification. As mentioned earlier, a sinusoidally vibrating object spends more time at the positions of maximum displacements. Therefore, the speckle line has a nonuniform brightness distribution. However, if the recording is made on a high-contrast photographic plate, the speckle line will have a uniform density after development. We therefore assume that the speckles are uniformly elongated to 2A(x, y)m. When such a specklegram (time-average recording) is pointwise-filtered, the halo distribution is modulated by sinc²(2μmA). The zeros of this function occur where

2μmA = n for n = ±1, ±2, ±3, . . .

or

A(x, y) = nλz/(2mxfn) (7.24)

where xfn is the position of the nth fringe. From this expression, the amplitude of vibration at any location on the specklegram can be found.
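As an illustration of Equation 7.24 (assumed geometry and assumed positions of the observed zeros), consistent values of the vibration amplitude should be obtained from successive orders:

```python
# In-plane vibration amplitude from a time-average specklegram, Eq. 7.24:
# A = n * lambda * z / (2 * m * x_fn).  All measurement values are assumed.

wavelength = 632.8e-9   # m
z = 1.0                 # specklegram-to-observation-plane distance (m, assumed)
m = 0.5                 # imaging magnification (assumed)
zero_positions = {1: 8.0e-3, 2: 16.1e-3, 3: 24.0e-3}   # x_fn of the n-th zero (m)

for n, x_fn in zero_positions.items():
    A = n * wavelength * z / (2 * m * x_fn)
    print(f"n = {n}: A = {A * 1e6:.1f} um")
# The three orders give nearly the same amplitude (~79 um), as they should.
```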


In fact, when an object vibrates, the speckles on the regions that are moving smear out and the contrast of the speckles is reduced. The nodal regions where there is no movement have unit contrast. Therefore, one observes a low-contrast vibration mode pattern, with the nodal lines appearing dark. A visual speckle interferometer was proposed by Burch and later modified by Stetson. This interferometer uses a reference beam to visualize out-of-plane vibration amplitudes.

7.9 SENSITIVITY OF SPECKLE PHOTOGRAPHY

The halo size is governed by the speckle size, which, in turn, is governed by the F# of the imaging lens. For the speckle displacement to be measurable, at least one fringe should be formed within a halo. In other words, the fringe width must be equal to or less than the halo diameter. Indeed, one fringe will be formed within the halo when the speckle shifts by an amount equal to its average size. This probably is the lower limit of the displacement that speckle photography can sense. The upper limit is also set by the speckle size. When the fringe width becomes equal to the speckle size, the fringes will not be discernible. In fact, one must choose a fringe width of approximately 10 times the speckle size for it to be discernible. This therefore sets an upper limit for speckle displacement measurement. For wholefield filtering, similar limits apply. The quality of fringes in wholefield filtering is considerably poorer than in pointwise filtering.

One of the serious drawbacks of speckle photography is that the positions of the maxima of the fringes in the fringe pattern are shifted owing to nonuniform halo distribution. The same is true for the minima positions when some background is present. This effect introduces errors in the measured values of the displacements. Methods are available to correct for these errors.

7.10 PARTICLE IMAGE VELOCIMETRY

Speckle photography of seeded flows carried out with pulsed lasers or scanned laser beams provides information about the flow (spatial variation of the velocity of flow—in fact the velocity of seeds). The technique is known as particle image velocimetry (PIV). First, a record is made with a short-duration laser pulse so that the motion is frozen, and then the second exposure is made a short time later. The seeds (particles) will have moved during this time to new locations depending on their velocities. The double-exposure record is pointwise-filtered to generate the velocity map.

7.11 WHITE-LIGHT SPECKLE PHOTOGRAPHY

Most objects are good candidates for white-light speckle photography, since the surface structure is quite adequate for this. However, this property is enhanced by coating the surface with a retro-reflective paint. The images of the embedded glass balls in retro-reflective paint act as speckles. Deformation studies can therefore be carried out using white-light speckles based on principles similar to those applied to laser speckles.


7.12 SHEAR SPECKLE PHOTOGRAPHY

It is obvious that speckle photography is essentially a tool to measure in-plane displacement or the in-plane component of deformation, since it has very low sensitivity to the out-of-plane component. It is also used to measure bodily tilts. A stress analyst is often interested in strain rather than displacement. Strain is obtained from displacement data by numerical differentiation, which is error-prone. However, strain can be obtained by optical differentiation, that is, by obtaining displacement values at two closely separated points. With a double-exposure specklegram, this can be achieved by interrogating it with two parallel-displaced narrow beams. Each beam produces Young's-type fringes in the diffraction halo, and the superposition of the two patterns, from each illumination beam, will generate a moiré pattern, from which strain can be calculated (Chapter 9). Alternatively, two states of the object can be recorded on two separate films/plates. During filtering, one plate is displaced with respect to the other, to introduce shear. Again, a moiré pattern is formed, from which strain is calculated. Both of these techniques provide a variable shear by varying the beam separation or specklegram displacement.

When two narrow beams are used for illumination, the diffraction halos are spatially displaced. For low-angle diffraction, the overlap region may be too small to provide a meaningful moiré pattern. On the other hand, when two double-exposure films/plates are used, the underlying assumption that the object has been identically loaded for each double-exposure record may be difficult to realize. It is therefore desirable to record a single-shear double-exposure specklegram. For this purpose, a pair of wedge plates is used in front of the imaging lens: each wedge plate carries a sheet polarizer. The transmission axes of the polarizers are crossed. The object is illuminated either with circularly polarized light or with linearly polarized light with its azimuth at 45° to the polarizer axis. A double-exposure record, with the object loaded in between, can be recorded. Interrogation of such a record with a narrow beam will generate a moiré pattern. The drawback of this method is that it has a fixed shear determined by the wedge angles. In addition, the Young's fringes have a nonuniform intensity, are noisy,

FIGURE 7.8 Moiré pattern due to superposition of two Young’s fringe patterns.


and are few in number, and hence the moiré fringes are of poor contrast. Figure 7.8 shows a moiré pattern obtained using such an arrangement.

7.13 SPECKLE INTERFEROMETRY

The speckle phenomenon itself is essentially an interference phenomenon. However, when a reference beam is added to the speckle pattern to code its phase, the technique is then termed speckle interferometry. Speckle interferometry was first applied to measure in-plane displacement by Leendertz. The sensitivity of the measurement could be increased over that of speckle photography, which was limited by the speckle size. The basic theory was borrowed from holographic interferometry, since the phase difference introduced by deformation is governed by the same equation, namely, δ = (k2 − k1) · d. When the object is illuminated with two beams with directions symmetric with respect to the object normal (also the optical axis) and observation is made along the optical axis, the arrangement generates fringes that are contours of constant in-plane displacement. The fringes are called correlation fringes—the reason for this name will become obvious in later sections.

The schematic of the set-up to measure in-plane displacement is shown in Figure 7.9. The object is illuminated by two plane waves incident symmetrically at angles θ and −θ with respect to the optical axis. An image of the object is made on the recording material, say a photographic plate. The object is deformed, and a second exposure is made on the same plate. This double-exposure record, on filtering, generates fringes representing the in-plane component along the x direction. The whole process can be explained mathematically as follows.

Let us consider that the object is illuminated by unit-amplitude plane waves, which are represented by e^(ikx sin θ) and e^(−ikx sin θ) at the plane of the object. The net amplitude of the waves at the plane of the object is e^(ikx sin θ) + e^(−ikx sin θ). Let the scattering process from the object be represented by 𝒪(x, y), which is a complex function and includes the surface characteristics. Thus, the field on the object surface just after scattering is

𝒪(x, y)(e^(ikx sin θ) + e^(−ikx sin θ))


FIGURE 7.9 Configuration for in-plane displacement measurement.


The amplitude at the image plane is obtained through the superposition integral; that is,

u(xi, yi) = ∫∫ 𝒪(x, y)(e^(ikx sin θ) + e^(−ikx sin θ)) h(xi − mx, yi − my) dx dy
          = a1(xi, yi) e^(iφ1(xi, yi)) + a2(xi, yi) e^(iφ2(xi, yi)) (7.25)

where h(x, y) is the impulse function and m is the magnification of the imaging lens. a1(xi, yi), φ1(xi, yi), a2(xi, yi), φ2(xi, yi) are the amplitudes and phases of the speckled waves at the image plane due to each of the illuminating waves. When the object is displaced by dx in the x direction, the amplitude in the image may be written as

u′(xi, yi) = ∫∫ 𝒪(x + dx, y)(e^(ik(x+dx) sin θ) + e^(−ik(x+dx) sin θ)) h(xi − mx, yi − my) dx dy
           = a1(xi, yi) e^(iφ1(xi, yi)) e^(ikdx sin θ) + a2(xi, yi) e^(iφ2(xi, yi)) e^(−ikdx sin θ) (7.26)

This equation has been derived under the tacit assumption that 𝒪(x + dx, y) = 𝒪(x, y). This assumption implies that there is no speckle displacement at the object surface due to the motion and that the surface characteristics do not change over a distance dx. In fact, there is a speckle displacement, but we assume that it is much smaller than the speckle size.

Now, we can write the intensities recorded in the two exposures as

I1(xi, yi) = a1² + a2² + 2a1a2 cos(φ1 − φ2) = a1² + a2² + 2a1a2 cos φ (7.27a)

I2(xi, yi) = a1² + a2² + 2a1a2 cos(φ + 2kdx sin θ) (7.27b)

For subsequent analyses of the various techniques, we will be writing the intensity distribution in the second exposure as

I2(xi, yi) = a1² + a2² + 2a1a2 cos(φ + δ) (7.28)

where δ is the phase introduced by the deformation. In this case,

δ = 2kdx sin θ = (4π/λ)dx sin θ

Later, we will use different arguments to prove that δ is indeed equal to 2kdx sin θ for the experimental configuration shown in Figure 7.9. For the present, we mention that φ1, φ2, a1, a2 are random variables, since 𝒪(x, y) is a random variable. Therefore, φ = φ1 − φ2 is also a random variable.

The total intensity recorded will be

I1 + I2 = 2a1² + 2a2² + 4a1a2 cos(φ + δ/2) cos(δ/2) (7.29)


In fact, examination of Equations 7.27a and 7.28 for I1 and I2 reveals that when δ = 2mπ, the equations are identical—the speckles in the two exposures are fully correlated. When δ = (2m + 1)π, the speckles are uncorrelated. This correlation provides the basis for fringe formation.

It should be noted that the contrast of the fringes is very poor in this arrangement, mainly because of the very strong granular bias term 2(a1² + a2²). We will discuss other arrangements in which this bias term has been isolated, thereby improving the contrast of the fringes. First, let us prove that the phase change δ due to an in-plane displacement dx is indeed given by 2kdx sin θ. The phase change δ due to a displacement d can be expressed as

δ = δ2 − δ1 = (k2 − k1) · d − (k2 − k′1) · d = (k′1 − k1) · d (7.30)

where d = dxi + dyj + dzk, and k′1 and k1 are the propagation vectors of the illumination beams. The equation for the phase difference δ2 = (k2 − k1) · d was derived in Chapter 5, and is also applicable to speckle interferometry. From the geometry of the experimental configuration of Figure 7.9, k′1 and k1 are expressed as

k′1 = (2π/λ)(sin θ i − cos θ k) (7.31a)

k1 = (2π/λ)(−sin θ i − cos θ k) (7.31b)

Thus, the vector k′1 − k1 = (2π/λ)(2 sin θ)i lies in the plane of the object. Hence,

δ = (k′1 − k1) · d = ((2π/λ)2 sin θ i) · (dxi + dyj + dzk) = (2π/λ)2dx sin θ

The phase δ is thus governed by the in-plane component dx of the displacement d. Bright fringes are formed when

(2π/λ)2dx sin θ = 2mπ

Therefore,

dx = mλ/(2 sin θ) (7.32)

The arrangement is sensitive only to the in-plane component of displacement lying in the plane of the illumination beams. Consecutive in-plane fringes differ by λ/(2 sin θ). Obviously, the sensitivity can be varied over a very wide range from 0 to λ/2 per fringe by varying the interbeam angle. Furthermore, this particular arrangement has the following features:

• As can be seen from the theoretical analysis, the contribution of the out-of-plane displacement component dz has been fully compensated for collimated illumination. This is due to the fact that both waves suffer equal phase changes arising from out-of-plane displacement, and hence the net phase change is zero.

• The arrangement is not sensitive to the y component dy. However, dy can be measured by rotating the experimental arrangement through 90°.

• It offers variable sensitivity.

The disadvantage is that the fringes obtained are of low contrast.
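As a rough numerical illustration of this variable sensitivity (Equation 7.32), the sketch below evaluates the in-plane displacement per fringe for several illumination half-angles; the He-Ne wavelength is an assumed example value.

```python
import math

# In-plane sensitivity of the Leendertz arrangement, Eq. 7.32: consecutive
# fringes correspond to a displacement increment of lambda / (2 sin(theta)).

wavelength = 632.8e-9   # m (assumed He-Ne illumination)

for theta_deg in (5, 15, 30, 45, 60):
    increment = wavelength / (2 * math.sin(math.radians(theta_deg)))
    print(f"theta = {theta_deg:2d} deg  ->  displacement per fringe = "
          f"{increment * 1e6:.2f} um")
```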

7.14 CORRELATION COEFFICIENT IN SPECKLE INTERFEROMETRY

The correlation coefficient of two random variables X and Y is defined by

ρXY = (〈XY〉 − 〈X〉〈Y〉)/(σXσY) (7.33)

where σX² = 〈X²〉 − 〈X〉², σY² = 〈Y²〉 − 〈Y〉², and the angular brackets indicate the ensemble average. If X and Y are uncorrelated, then 〈XY〉 = 〈X〉〈Y〉 and the correlation coefficient is zero, as expected. The correlation coefficient of the random variables I1(xi, yi) and I2(xi, yi), the intensities recorded in the first and second exposures, is given by

ρ(δ) = (〈I1I2〉 − 〈I1〉〈I2〉)/([〈I1²〉 − 〈I1〉²]^(1/2)[〈I2²〉 − 〈I2〉²]^(1/2)) (7.34)

where the intensities I1 and I2 are expressed as follows:

I1(x, y) = a1² + a2² + 2a1a2 cos φ = i1 + i2 + 2√(i1i2) cos φ (7.35a)

I2(x, y) = i1 + i2 + 2√(i1i2) cos(φ + δ) (7.35b)

In these expressions, the intensities i1 and i2 refer to the intensities at the image plane due to individual illuminating beams. Equation 7.34 for the correlation coefficient is evaluated by noting the following:

• The intensities i1, i2, and the phase φ are independent random variables, and hence can be averaged separately.
• 〈cos φ〉 = 〈cos(φ + δ)〉 = 0.
• 〈i1²〉 = 2〈i1〉², and similarly 〈i2²〉 = 2〈i2〉².

When the intensity values I1 and I2 are substituted into Equation 7.34 for the correlation coefficient and the averages are taken, we obtain

ρ(δ) = (〈i1〉² + 〈i2〉² + 2〈i1〉〈i2〉 cos δ)/(〈i1〉 + 〈i2〉)² (7.36)

The correlation coefficient depends on the intensities of the beams and the phase introduced by deformation. If we assume that 〈i1〉 = r〈i2〉, that is, one beam is r


times stronger than the other, the correlation coefficient takes the simpler form

ρ(δ) = (1 + r² + 2r cos δ)/(1 + r)² (7.37)

This has a maximum value of unity when δ = 2mπ and a minimum value of (1 − r)²/(1 + r)² when δ = (2m + 1)π. The minimum value will be zero when r = 1, that is, when the average intensities of the beams are equal. The correlation coefficient then varies between 0 and 1 as the value of δ varies over the record. This situation is completely at variance with holographic interferometry, where the reference beam is taken as being stronger than the object beam.
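The short sketch below evaluates Equation 7.37 for a few beam ratios r (chosen arbitrarily for illustration) and confirms that only equal-intensity beams drive the minimum correlation to zero, which is what gives the best fringe visibility:

```python
import math

# Correlation coefficient of the two exposures, Eq. 7.37:
# rho(delta) = (1 + r**2 + 2*r*cos(delta)) / (1 + r)**2, with <i1> = r <i2>.

def rho(delta, r):
    return (1 + r**2 + 2 * r * math.cos(delta)) / (1 + r)**2

for r in (1.0, 2.0, 5.0):
    rho_max = rho(0.0, r)        # fully correlated speckles, delta = 2*m*pi
    rho_min = rho(math.pi, r)    # decorrelated speckles, delta = (2m + 1)*pi
    print(f"r = {r:3.1f}: rho_max = {rho_max:.2f}, rho_min = {rho_min:.2f}")
```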

7.15 OUT-OF-PLANE SPECKLE INTERFEROMETER

An interferometer for measuring out-of-plane displacement is shown in Figure 7.10. This is a Michelson interferometer in which one of the mirrors has been replaced by the object under study. It therefore has a reference wave that is smooth or specular. The lens L2 makes an image of the object at the recording plane. The record consists of an interference pattern between the smooth reference wave and the speckle field in the image of the object. The second exposure is recorded on the same plate after the object has been loaded. This double-exposure specklegram (interferogram), when filtered, yields fringes that are contours of constant out-of-plane displacement. As before, the intensity distribution in the first exposure is

I1(xi, yi) = a1² + r0² + 2a1r0 cos(φ1 − φr) = a1² + r0² + 2a1r0 cos φ (7.38)


FIGURE 7.10 Configuration for out-of-plane displacement measurement.


where a1, φ1, φ are random variables. The second exposure records the intensity distribution given by

I2(xi, yi) = a1² + r0² + 2a1r0 cos(φ + δ) (7.39)

where the phase difference δ, introduced by the deformation, is given by

δ = (k2 − k1) · d = (2π/λ)2dz

The phase difference depends only on the out-of-plane displacement component dz. Bright fringes are formed wherever

(2π/λ)2dz = 2mπ

or

dz = mλ/2 (7.40)

Thus, consecutive fringes are separated by out-of-plane displacements of λ/2. Again, the fringes are of low contrast. However, it may be noted that, as a consequence of the imaging geometry and customized configurations, only one of the components of deformation is sensed, unlike in holographic interferometry, where the phase difference δ introduced by deformation is invariably dependent on all three components. Further, the correlation fringes are localized at the plane of the specklegram, unlike in holographic interferometry, where the fringe pattern is localized in space in general.

7.16 IN-PLANE MEASUREMENT: DUFFY'S METHOD

In the method due to Leendertz described earlier, the object is illuminated symmetrically with respect to the surface normal, and observation is made along the bisector of the angle enclosed by the illuminating beams. This method, which is also called the dual-illumination, single-observation-direction method, has very high sensitivity but yields poor-contrast fringes.

It is also possible to illuminate the object along one direction and make an observation along two different directions symmetric with respect to the optical axis, which is also along the local normal to the object. This method is due to Duffy, and was developed in the context of moiré gauging. It is also known as the single-illumination, dual-observation-direction method. Figure 7.11a shows a schematic of the experimental arrangement. The object is illuminated at an angle θ, and a two-aperture mask is placed in front of the lens. The apertures enclose an angle of 2α at the object distance. The lens makes an image of the object via each aperture—these images are perfectly superposed.

Each wave passing through the aperture generates a speckled image with a speckle size λb/D, where D is the aperture size. These waves are superposed obliquely, and hence each speckle is modulated by a fringe pattern when recorded. The fringe spacing


FIGURE 7.11 (a) Duffy’s two-aperture arrangement. (b) Modulation of a speckle.

in the speckle is λb/p, where p is the separation between the two apertures. This is shown in Figure 7.11b. When the object is deformed, these fringes shift in the speckle. When the deformation is such that a fringe moves by one period or a multiple thereof, it is then exactly superposed on the earlier recorded position: the fringe contrast is then high. The region will diffract strongly on filtering, and hence these areas will appear bright. On the other hand, if the deformation is such that the fringe pattern moves by half a fringe width or an odd multiple of a half-period, the new pattern will fall midway in the earlier recorded pattern, resulting in almost complete washout of the fringe pattern. These regions will not diffract, or will diffract poorly, and hence will appear dark on filtering. Therefore, bright and dark fringes correspond to regions where the displacement is an integral multiple of λb/p = λ/(2 sin α) and an odd integral multiple of λb/2p, respectively.
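The sketch below (assumed image distance, aperture diameter, and aperture separation) compares the speckle size λb/D with the fringe spacing λb/p inside each speckle for Duffy's arrangement; the number of fringes per speckle is simply p/D:

```python
# Duffy's two-aperture arrangement: speckle size ~ lambda*b/D and fringe
# spacing inside each speckle ~ lambda*b/p.  Geometry values are assumed.

wavelength = 632.8e-9   # m
b = 0.20                # image distance (m, assumed)
D = 5e-3                # diameter of each aperture (m, assumed)
p = 25e-3               # separation between the two apertures (m, assumed)

speckle_size = wavelength * b / D
fringe_spacing = wavelength * b / p
print(f"speckle size   = {speckle_size * 1e6:.1f} um")
print(f"fringe spacing = {fringe_spacing * 1e6:.1f} um")
print(f"fringes per speckle ~ {speckle_size / fringe_spacing:.0f}")   # = p / D
```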

We write the amplitudes of the waves via each aperture as

a11 = a0 e^(iφ11) (7.41a)

a12 = a0 e^(i(φ12 + 2πβx)) (7.41b)


where β = p/λb is the spatial frequency of the fringe pattern in the speckle pattern. The intensity recorded in the first exposure is

I1(x, y) = |a11 + a12|² = a0² + a0² + 2a0² cos(φ1 + 2πβx), φ1 = φ12 − φ11 (7.42)

Similarly, when the object is loaded, the waves acquire additional phase changes δ1 and δ2, respectively; that is, they can be expressed as

a21 = a0 e^(i(φ11 + δ1)) (7.43a)

a22 = a0 e^(i(φ12 + 2πβx + δ2)) (7.43b)

The intensity distribution recorded in the second exposure is then

I2(x, y) = |a21 + a22|² = a0² + a0² + 2a0² cos(φ1 + 2πβx + δ) (7.44)

where δ = δ2 − δ1 is the phase difference introduced by the deformation. The specklegram is now ascribed a transmittance t(x, y), given by

t(x, y) = t0 − βT(I1 + I2) (7.45)

where β is a constant and T the exposure time. Information about the deformation is extracted by filtering.

7.17 FILTERING

The specklegram is placed in a set-up as shown in Figure 7.12 and is illuminated by a collimated beam, say, a unit-amplitude wave. The field at the FT plane is ℱ[t(x, y)]. This consists of a halo distribution with a very strong central peak, as shown in the figure. Owing to the grating-like structure in each speckle, the halo distribution is modified: it has a central halo (zero order) and ±1-order halos. The zero order arises as a consequence of the terms a0² + a0², which do not carry any information. Since the speckle size is now larger, owing to the smaller apertures used for imaging, the halo size (zero-order halo) shrinks. The filtering is done by choosing any one of the first-order halos. A fringe pattern of almost unit contrast is obtained, since the halo (zero order), which carries no information, has been isolated.

7.17.1 FRINGE FORMATION

The phase differences δ2 and δ1, due to the deformation, experienced by the waves passing through the apertures can be expressed as

δ2 = (k′2 − k1) · d

δ1 = (k2 − k1) · d

Hence, the phase difference δ = δ2 − δ1 is given by

δ = (k′2 − k2) · d (7.46)


FIGURE 7.12 Wholefield filtering of a specklegram recorded with Duffy’s arrangement.

This phase difference generates an interference pattern. Since the illumination and observation beams lie in the (x, z) plane, the wave vectors k′2 and k2 can be expressed as

k′2 = (2π/λ)(sin α i + cos α k)

k2 = (2π/λ)(−sin α i + cos α k)

Thus,

δ = (k′2 − k2) · d = (2π/λ)2dx sin α

Bright fringes are formed wherever

(2π/λ)2dx sin α = 2mπ

or

dx = mλ/(2 sin α) (7.47)

This result is similar to that obtained earlier for the Leendertz method, except that the angle θ is replaced by the angle α. Obviously, α cannot take very large values—the magnitude of α is determined by the lens aperture or by F#. Therefore, the method has intrinsically poor sensitivity. At the same time, the speckle size is very large compared with that in the Leendertz method, and hence the range of in-plane displacement measurement is large. The method yields high-contrast fringes owing to the removal of the unwanted speckled field by the grating-like structure formed during recording.

It is indeed very simple and easy to extend Duffy's method to measure both components of the in-plane displacement simultaneously. We describe here two configurations. In one, an aperture configuration as shown in Figure 7.13a is used. The apertures are located at the four corners of a square. In another configuration,


FIGURE 7.13 (a, b) Aperture configurations. (c, d) Corresponding halo distributions.

shown in Figure 7.13b, only three apertures are used: these apertures lie on the vertices of a right-angled triangle. The corresponding halo distributions at the FT plane are shown in Figure 7.13c and 7.13d, respectively.

Filtering through the halos indicated as dx or dy yields the component dx or dy. It is thus seen that both components, either with the same sensitivity (when the apertures are placed at equal distances along the x and y directions) or with different sensitivities (when the aperture separations are different), can be obtained from a single double-exposure specklegram.

Leendertz's and Duffy's methods can be combined: the object is illuminated by symmetric collimated beams and the specklegram is recorded using an apertured lens. This combination extends the range and sensitivity of in-plane displacement measurement from a small value dictated by Leendertz's method to a large value governed by Duffy's method. On filtering the double-exposure specklegram, both systems of fringes are observed simultaneously.

7.18 OUT-OF-PLANE DISPLACEMENT MEASUREMENT

A schematic of a configuration to measure the out-of-plane component of deformation is shown in Figure 7.14. The object is illuminated by a collimated beam at an angle θ. A two-aperture mask is placed in front of the imaging lens. One of the apertures carries a ground-glass plate, which is illuminated by the unexpanded laser beam to generate a diffuse reference wave. The object is imaged via the other aperture. At the image plane, an interference pattern between the diffuse reference wave and the speckled object image is recorded. This constitutes the first exposure. The object is loaded, and


FIGURE 7.14 Configuration for out-of-plane displacement measurement when θ = 0.

the second exposure is made on the same plate. This constitutes a double-exposure specklegram.

On filtering via one of the first-order halos, a fringe pattern depicting contours of constant out-of-plane displacement is obtained. This can be seen as follows. The phase difference δ due to deformation is again given by

δ = (k2 − k1) · d

where

k2 = (2π/λ)k, k1 = (2π/λ)(−sin θ i + cos θ k)

Therefore,

δ = (2π/λ)[sin θ dx + (1 + cos θ)dz] (7.48)

The arrangement senses both components, dx and dz. However, when the object is illuminated normally (i.e., θ = 0), the phase difference δ = (2π/λ)2dz. The phase difference depends only on the out-of-plane component. Bright fringes are formed wherever

(2π/λ)2dz = 2mπ

or

dz = mλ/2 (7.49)

Consecutive bright fringes are separated by λ/2 in out-of-plane displacement changes.

7.19 SIMULTANEOUS MEASUREMENT OF OUT-OF-PLANE AND IN-PLANE DISPLACEMENT COMPONENTS

By a judicious choice of aperture configuration, it is possible to obtain in-plane and out-of-plane displacement components from a single double-exposure specklegram. One such aperture configuration, along with the halo distribution, is shown


FIGURE 7.15 Simultaneous measurement of in-plane and out-of-plane displacement components: (a) aperture configuration; (b) halo distribution.

in Figure 7.15. It can be seen that the halo AB or AB−1 yields the y component of the in-plane displacement, and the halo BC1 or BC−1 yields the x component. The halos BG or BG−1, or AG1 or AG−1, or CG1 or CG−1 yield the z component. Obviously, all three components of the displacement vector can be retrieved from a single double-exposure specklegram. In fact, redundancy is built into the arrangement.

7.20 OTHER POSSIBILITIES FOR APERTURING THE LENS

In addition to obtaining all components of the deformation vector from a single double-exposure specklegram, aperturing can be used to multiplex the information record; that is, several states of the object can be stored and the information retrieved from the specklegram. The multiplexing can be done in two ways: (i) frequency modulation, where the apertures are shifted laterally on the lens after each exposure;


(ii) θ-multiplexing (modulation), where the apertures are shifted angularly. These methods can also be combined. The amount of information that can be recorded and retrieved depends on lens size, aperture size, the spatial frequency content of the object, and the dynamic range of the recording material. In some experiments, in addition to measurement of displacement components, slope information is also recorded and retrieved later.

7.21 DUFFY'S ARRANGEMENT: ENHANCED SENSITIVITY

The sensitivity of Duffy's arrangement is limited by lens aperture. This limitation, however, can be overcome by modification of the recording set-up, as shown in Figure 7.16. The object is illuminated normally, and is observed along two symmetric directions: the beams are folded and directed to a pair of mirrors and then onto a two-aperture mask in front of the lens. The image of the object is thus made via these folded paths.

In this arrangement, the speckle size is governed by the aperture diameter (as in the case with the other methods); the fringe frequency is determined by the aperture separation; and the sensitivity is governed by the angle θ, which can be varied over a large range, and is not restricted by the lens aperture.

The phase difference δ is given by

δ = δ2 − δ1 = (k2 − k1) · d − (k′2 − k1) · d = (k2 − k′2) · d = (2π/λ)2dx sin θ (7.50)

The in-plane fringes are extracted by filtering using one of the first-order halos. The technique introduces perspective errors, and also shear when large objects are studied at larger angles. The perspective error, however, can be reduced using a pair of prisms to decrease the convergence.


FIGURE 7.16 Configuration for in-plane measurement with enhanced sensitivity.


7.22 SPECKLE INTERFEROMETRY—SHAPE MEASUREMENT/CONTOURING

The following methods are available for contouring using speckle interferometry:

• Change of direction of the illumination beam between exposures
• Change of wavelength between exposures: dual-wavelength technique
• Change of refractive index of the surrounding medium between exposures: dual-refractive-index technique
• Rotation of the object between exposures in an in-plane sensitive configuration

These methods will be discussed in Section 7.28, where we present a technique that allows the contour fringes to be obtained in real time.

7.23 SPECKLE SHEAR INTERFEROMETRY

So far, we have discussed techniques that measure the displacement components only. As mentioned earlier, a stress analyst is usually interested in strains rather than displacements. The strain is obtained by fitting the displacement data numerically and then differentiating it. This procedure could lead to large errors. Therefore, methods have been investigated that can yield fringe patterns representing the derivatives of the displacement. This is achieved with speckle shear interferometry. Since all speckle techniques for displacement measurement use subjective speckles (i.e., image plane recordings), we restrict ourselves to shear at the image plane. The shear methods are grouped under the five categories listed in Table 7.1.

7.23.1 THE MEANING OF SHEAR

Shear essentially means shift. When an object is imaged via two identical paths, as in a two-aperture arrangement, the images are perfectly superposed; there is no shear, even though there are two images. Since the imaging is via two independent paths, the two images can be manipulated independently. In linear shear, one image is

TABLE 7.1
Shear Methods Used in Speckle Interferometry

Shear Types                      Phase Difference Leading to Fringe Formation
Lateral shear or linear shear    δ(x + Δx, y + Δy) − δ(x, y)
Rotational shear                 δ(r, θ + Δθ) − δ(r, θ)
Radial shear                     δ(r ± Δr, θ) − δ(r, θ)
Inversion shear                  δ(x, y) − δ(−x, −y)
Folding shear                    δ(x, y) − δ(−x, y): folding about y axis
                                 δ(x, y) − δ(x, −y): folding about x axis


laterally shifted in any desired direction by an appropriate amount. In interferometry, one image acts as a reference for the other, and hence there is no need to supply an additional reference wave. With shear, we compare the response of an object to the external agencies at any point with that of a shifted point.

In rotational shear, one image is rotated, usually about the optical axis, by a small angle with respect to the other image. In radial shear, one image is either radially contracted or radially expanded with respect to the other image. The inversion shear allows a point at (x, y) to be compared with another point at (−x, −y). This is equivalent to a rotational shear of π. In folding shear, a point is compared with its mirror image: the image may be taken about the y axis or the x axis.

7.24 METHODS OF SHEARING

One of the most commonly used methods of shearing employs the Michelson interferometer, where the object is seen via two independent paths, OABAD and OACAD, as shown in Figure 7.17a. When the mirrors M1 and M2 are normal to each other, and at equal distances from A, the two images are perfectly superposed. Tilting one of the mirrors, as shown, displaces one image in the direction of the arrow: the images


FIGURE 7.17 Shearing with (a) a Michelson interferometer and (b) Duffy's arrangement with a wedge.


are linearly separated. A detailed analysis of this arrangement reveals that a large spherical aberration is introduced owing to the use of a beam-splitting cube, as the beams travel in glass a distance of three times the size of the cube.


FIGURE 7.18 Shearing methods for (a) lateral shear, (b) radial shear, (c) rotational shear, and (d) inversion shear.


All of the shear methods described in Table 7.1 can be conveniently implemented when an aperture mask is placed in front of an imaging lens. For linear shear, a pair of plane parallel plates, wedges (Figure 7.17b), or a biprism have been used. Shearing has also been done with gratings. If the imaging lens is cut into two halves that can be translated in the lens's own plane or along the axis, it makes an excellent shearing device: both functions of shearing and imaging are performed by the same device. In fact, a diffractive optical element can be designed that performs both functions of imaging and shearing. Figure 7.18 shows the schematics of introducing various shear types using an aperture mask and additional optical components.

We now present two configurations based on the Michelson interferometer that can be used to introduce both lateral shear and folding shear. Figure 7.19a shows a conventional arrangement except that the mirrors are replaced by right-angled prisms. Lateral shear is introduced by translation of one of the prisms. The configuration shown in Figure 7.19b is used for folding shear. It uses only one right-angled prism. Depending on the orientation of the prism, folding about the x or the y axis can be achieved.

7.25 THEORY OF SPECKLE SHEAR INTERFEROMETRY

As pointed out earlier, in shear interferometry, a point on the object is imaged as two points or two points on the object are imaged as a single point. One therefore obtains either object plane shear or image plane shear: these are related through the magnification of the imaging lens.

Let a1 and a2 be the amplitudes at any point on the image plane due to two points (xo, yo) and (xo + Δxo, yo + Δyo) at the object plane. The intensity distribution recorded at the image plane is given by

I1(x, y) = a1² + a2² + 2a1a2 cos φ, φ = φ2 − φ1 (7.51a)

After the object has been loaded, the deformation vectors at the two points are represented by d(xo, yo) and d(xo + Δxo, yo + Δyo), respectively. Therefore, the waves from these two points arrive at the image point with a phase difference δ = δ2 − δ1. The intensity distribution in the second exposure can therefore be expressed as

I2(x, y) = a1² + a2² + 2a1a2 cos(φ + δ), δ = δ2 − δ1 (7.51b)


FIGURE 7.19 Michelson interferometer for (a) lateral shear and (b) folding shear.


The total intensity recorded is I1(x, y) + I2(x, y), and hence the amplitude transmittance of the double-exposure specklegram is

t(x, y) = t0 − βT [I1(x, y) + I2(x, y)] (7.52)

On filtering, a fringe pattern is obtained representing the derivatives of the displacement components, as will be shown later.

7.26 FRINGE FORMATION

7.26.1 THE MICHELSON INTERFEROMETER

The phase difference δ can be expressed, assuming shear only along the x direction (Figure 7.17a), as

δ = (k2 − k1) · d(xo + Δxo, yo) − (k2 − k1) · d(xo, yo)
  ≈ (k2 − k1) · (∂d/∂x) Δxo  (7.53)

Now, substituting

k2 = (2π/λ)k̂,  k1 = (2π/λ)(−sin θ î − cos θ k̂)

into Equation 7.53, we obtain

δ ≈ (2π/λ)[sin θ (∂dx/∂x) + (1 + cos θ)(∂dz/∂x)] Δxo  (7.54)

Bright fringes are formed wherever

sin θ (∂dx/∂x) + (1 + cos θ)(∂dz/∂x) = mλ/Δxo,  m = 0, ±1, ±2, ±3, . . .  (7.55)

The fringe pattern has contributions from both the strain ∂dx/∂x and the slope ∂dz/∂x. However, when the object is illuminated normally (i.e., θ = 0), the fringe pattern represents a partial x-slope pattern only; that is,

∂dz/∂x = mλ/(2Δxo)  (7.56)

The fringe pattern corresponding to the partial y-slope ∂dz/∂y is obtained when a shear is applied along the y direction.
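A minimal numerical sketch, assuming a He–Ne wavelength of 632.8 nm and an object-plane shear of 5 mm (both arbitrary illustrative values), shows how Equation 7.56 converts a fringe order into a partial slope:

```python
# Sketch: slope increment per fringe for the normal-illumination Michelson
# shear arrangement, Eq. 7.56 (wavelength and shear are assumed values).
wavelength = 632.8e-9   # He-Ne wavelength in metres (assumed)
shear = 5e-3            # object-plane shear, Delta x_o, in metres (assumed)

slope_per_fringe = wavelength / (2 * shear)   # partial slope for m = 1
for m in range(4):
    print(f"m = {m}: d(dz)/dx = {m * slope_per_fringe:.2e} rad")
```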


7.26.2 THE APERTURED LENS ARRANGEMENT

The phase difference δ can again be expressed, assuming shear only along the x direction (Figure 7.17b), as

δ = (k2 − k1) · d(xo + Δxo, yo) − (k′2 − k1) · d(xo, yo)
  ≈ (k2 − k1) · d(xo, yo) + (k2 − k1) · (∂d/∂x) Δxo − (k′2 − k1) · d(xo, yo)
  ≈ (k2 − k′2) · d(xo, yo) + (k2 − k1) · (∂d/∂x) Δxo

δ ≈ (2π/λ) 2dx sin α + (2π/λ)[sin θ (∂dx/∂x) + (1 + cos θ)(∂dz/∂x)] Δxo  (7.57)

Comparison with the expression for the Michelson interferometer reveals that there is an in-plane-component-dependent term in addition to the usual expression. This term arises owing to the two apertures separated by a distance, an arrangement that has been shown to be inherently sensitive to the in-plane component. An interesting aspect of aperturing of a lens, as has been pointed out earlier, is its ability to measure simultaneously the in-plane and out-of-plane displacement components and their derivatives. Figure 7.20 shows photographs of out-of-plane displacement, partial x-slope, and partial y-slope fringe patterns of a defective pipe obtained from the same double-exposure specklegram.

FIGURE 7.20 Interferograms from a double-exposure specklegram: (a) out-of-plane displacement fringe pattern; (b) partial x-slope fringe pattern; (c) partial y-slope fringe pattern.

FIGURE 7.21 An experimental arrangement in shear interferometry that is insensitive to in-plane displacement.

7.27 SHEAR INTERFEROMETRY WITHOUT INFLUENCE OF THE IN-PLANE COMPONENT

It has been shown earlier that shear interferometry performed with an aperture mask in front of the imaging lens always yields a fringe pattern that is due to the combined effect of the in-plane displacement component and the derivatives of the displacement. At the same time, it is desirable to have an aperture mask for obtaining high-contrast fringe patterns. In order to retain this advantage and eliminate the in-plane displacement-component sensitivity, the configuration is modified as shown in Figure 7.21.

The object is illuminated normally and viewed axially. The object beam is divided and arranged to pass through the apertures to form the two images. Shear is introduced by placing a wedge plate in front of an aperture. It could just as well be introduced by tilting any one of the mirrors M1, M2, or M3. The mirror combination M2, M3 compensates for any extra path difference. Fringe formation is now governed by

∂dz/∂x = mλ/(2Δxo)  (7.58)

This arrangement gives a pure partial x-slope fringe pattern.

7.28 ELECTRONIC SPECKLE PATTERN INTERFEROMETRY

The speckle size in speckle interferometry can be controlled by the F# of the imaging lens. Further, the size can be doubled by adding a reference beam axially. It is thus possible to use electronic detectors, which have limited resolution, for recording instead of photographic emulsions. The use of electronic detectors avoids the messy wet development process. Further, the processing is done at video rates, making the technique almost real-time. Photographic emulsions, as mentioned earlier, integrate the light intensity falling on them. In speckle techniques with photographic recording, the two exposures were added in succession, and then techniques were developed to remove the undesired DC component. In electronic detection, the two exposures are handled independently, and subtraction removes the DC component. Phase-shifting techniques are easily incorporated, and hence deformation maps can be presented almost in real time. The availability of fast PCs and high-density CCD detectors makes the technique of electronic detection very attractive. In fact, electronic speckle pattern interferometry (ESPI) is an alternative to holographic interferometry, and perhaps will replace it in industrial environments.

It might be argued that all of the techniques discussed under speckle interferometry and speckle shear interferometry can be adopted simply by replacing the recording medium by an electronic detector. However, this is not the case, since the resolution of electronic detectors is limited to the range of 50–100 lines/mm, and hence speckle sizes in the range of 10–20 μm are required.
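The following back-of-the-envelope sketch uses the commonly quoted estimate for the mean subjective speckle diameter, σ ≈ 1.2λ(1 + M)F# (an approximation assumed here, together with an assumed wavelength of 532 nm and unit magnification), to indicate which lens apertures give speckles in the 10–20 μm range:

```python
# Sketch: lens apertures giving roughly 10-20 um speckles on the detector.
# Uses the commonly quoted estimate sigma ~ 1.2 * lambda * (1 + M) * F#
# (assumed here; wavelength and magnification are also assumed values).
wavelength = 532e-9   # metres
M = 1.0               # imaging magnification

for f_number in (4, 5.6, 8, 11, 16, 22):
    sigma = 1.2 * wavelength * (1 + M) * f_number
    print(f"F/{f_number}: mean speckle size ~ {sigma * 1e6:.1f} um")
```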

7.28.1 OUT-OF-PLANE DISPLACEMENT MEASUREMENT

Figure 7.22 shows one of several configurations used for measuring the out-of-plane component. The reference beam is added axially such that it appears to emerge from the center of the exit pupil of the imaging lens. The speckle size is matched with the pixel size by controlling the lens aperture. The intensities of both the object and the reference beams at the CCD plane are adjusted to be equal. The first frame is stored in the frame-grabber, and the second frame, captured after loading of the object, is subtracted pixel by pixel. The difference signal is rectified and then sent to the monitor to display the fringe pattern.

This procedure can be described mathematically as follows. The intensity recorded in the first frame is given by

I1(x, y) = a1²(x, y) + a2²(x, y) + 2a1(x, y)a2(x, y) cos φ,  φ = φ2 − φ1  (7.59a)

where we have taken a speckled reference wave. The output of the detector is assumed to be proportional to the intensity incident on it. The intensity recorded in the second frame is

I2(x, y) = a1²(x, y) + a2²(x, y) + 2a1(x, y)a2(x, y) cos(φ + δ)  (7.59b)

FIGURE 7.22 A configuration for electronic speckle pattern interferometry (ESPI). BS, beam-splitter.

where the phase difference δ introduced by deformation is given by δ = (k2 − k1) · d. The subtracted signal I2 − I1 will generate a voltage signal ΔV as follows:

ΔV ∝ I2 − I1 = 2a1(x, y)a2(x, y)[cos(φ + δ) − cos φ]
             = 4a1(x, y)a2(x, y) sin(φ + δ/2) sin(δ/2)  (7.60)

The brightness on the monitor will be proportional to the voltage signal ΔV (difference signal) from the detector, and hence

B = 4℘a1(x, y)a2(x, y) sin(φ + δ/2) sin(δ/2)  (7.61)

where ℘ is the constant of proportionality. As δ varies, sin(δ/2) will vary between −1 and 1. The negative values of sin(δ/2) will appear dark on the monitor, resulting in loss of signal. This loss is avoided by rectifying the signal before it is sent to the monitor. The brightness B is thus given by

B = 4℘a1(x, y)a2(x, y) |sin(φ + δ/2) sin(δ/2)|  (7.62)

The brightness will be zero when δ/2 = mπ, that is, δ = 2mπ, with m = 0, ±1, ±2, . . . . This means that the speckle regions in the speckle pattern that are correlated will appear dark. This is due to the difference operation. Also as a result of this operation, undesirable terms are eliminated. Phase-shifting is easily incorporated by reflecting the reference wave from a PZT-mounted mirror.
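A minimal simulation sketch of Equations 7.59 through 7.62, assuming random speckle phases and a simple tilt-like deformation phase, reproduces the dark correlation fringes produced by subtraction and rectification:

```python
import numpy as np

# Sketch of the subtraction operation of Eqs. 7.59-7.62 on synthetic data:
# random speckle phases, an assumed tilt-like deformation phase of six fringes.
rng = np.random.default_rng(0)
N = 256
a1 = rng.rayleigh(1.0, (N, N))             # object speckle amplitude
a2 = rng.rayleigh(1.0, (N, N))             # (speckled) reference amplitude
phi = rng.uniform(0, 2 * np.pi, (N, N))    # random speckle phase

x = np.linspace(0, 1, N)
delta = 2 * np.pi * 6 * x[None, :]         # deformation phase (assumed)

I1 = a1**2 + a2**2 + 2 * a1 * a2 * np.cos(phi)           # first frame, Eq. 7.59a
I2 = a1**2 + a2**2 + 2 * a1 * a2 * np.cos(phi + delta)   # second frame, Eq. 7.59b

brightness = np.abs(I2 - I1)               # subtraction followed by rectification
# Columns where delta is a multiple of 2*pi remain dark (correlated speckle);
# averaging down the columns makes the six-fringe modulation visible.
print(np.round(brightness.mean(axis=0)[::16], 2))
```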

7.28.2 IN-PLANE DISPLACEMENT MEASUREMENT

In-plane displacement can be measured using the arrangement due to Leendertz. The sensitivity can be varied by changing the interbeam angle. Unfortunately, the configurations based on aperturing the lens do not work in ESPI, because the fringe pattern generated by the pair of apertures is not resolved by the CCD camera.

7.28.3 VIBRATION ANALYSIS

ESPI is an excellent tool for studying the vibration modes of an object. It can be used to measure extremely small, moderate, and large vibration amplitudes. The arrangement used is the one suited for out-of-plane displacement measurement. The object is excited acoustically, or by a directly attached PZT driven by a function generator, so that a large frequency range can be scanned and the response of the object studied. Since the video rates are very slow compared with the resonance frequencies, the pattern observed on the monitor represents time-average fringes. The intensity distribution is given by

[J0((4π/λ)A(x, y))]²

where A(x, y) is the amplitude of vibration. However, when the reference wave is also modulated at the frequency of object excitation, the intensity distribution in the fringe pattern can be expressed as

I(x, y) ∝ [J0((4π/λ){[A(x, y)]² + ar² − 2A(x, y)ar cos(φ − φr)}^{1/2})]²  (7.63)

where ar and φr are the amplitude and phase of the reference mirror. Obviously, when the object and reference mirror vibrate in phase, the intensity distribution is proportional to

[J0((4π/λ)[A(x, y) − ar])]²

The zero-order fringe now occurs where A(x, y) = ar. Therefore, large amplitudes of vibration can be measured. However, if very small vibration amplitudes are to be measured, the frequency of reference-wave modulation is taken slightly different from that of the object vibration, but still within the video frequency. Because of this, the phase of the reference wave varies with time. The intensity distribution is now proportional to

[J0((4π/λ){[A(x, y)]² + ar² − 2A(x, y)ar cos[φ − φr(t)]}^{1/2})]²

Since the phase φr(t) varies with time, the argument of the Bessel function varies between A(x, y) + ar and A(x, y) − ar, and hence the intensity on the monitor will fluctuate. However, if A(x, y) = 0, then the argument of the Bessel function remains constant and there is no fluctuation or flicker. Only at those locations where the flicker occurs will the amplitude of vibration be nonzero, thereby allowing very small vibration amplitudes to be detected.
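A short sketch, assuming a wavelength of 632.8 nm and a reference-modulation amplitude of 0.5 μm (illustrative values only), evaluates the two time-average fringe functions discussed above; with in-phase reference modulation the bright zero-order fringe moves to A(x, y) = ar:

```python
import numpy as np
from scipy.special import j0

# Sketch of the time-average fringe functions (assumed values throughout).
wavelength = 632.8e-9
A = np.linspace(0.0, 2.0e-6, 9)    # vibration amplitudes A(x, y), 0 to 2 um
a_r = 0.5e-6                       # reference-mirror modulation amplitude

plain = j0(4 * np.pi / wavelength * A) ** 2              # unmodulated reference
in_phase = j0(4 * np.pi / wavelength * (A - a_r)) ** 2   # in-phase modulated reference

for amp, p, q in zip(A, plain, in_phase):
    print(f"A = {amp * 1e6:4.2f} um   J0^2 = {p:6.4f}   modulated J0^2 = {q:6.4f}")
# With in-phase modulation the bright zero-order fringe sits at A = a_r.
```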

7.28.4 MEASUREMENT ON SMALL OBJECTS

ESPI has been used for studying the performance of a variety of objects, ranging in size from large to small. There is considerable interest in evaluating the performance of small objects, particularly microelectromechanical systems (MEMS), in real time. MEMS are the result of the integration of mechanical elements, sensors, actuators, and electronics on a common silicon substrate through microfabrication technology. They are used in a number of fields, including telecommunications, computers, aerospace, automobiles, biomedicine, and micro-optics. ESPI for the inspection and characterization of MEMS should not alter the integrity or the mechanical behavior of the devices. Since MEMS have an overall size of up to a few millimeters, a high-spatial-resolution measuring system is required; that is, a long-working-distance microscope with a combination of different magnification objective lenses is incorporated in ESPI. A schematic of a configuration for microscopic ESPI is shown in Figure 7.23a. Instead of a normal camera lens, a long-working-distance microscope is used for imaging on the CCD array. Phase-shifting is accomplished by a PZT-driven mirror. The MEMS device chosen for study in this example is a pressure transducer. The diaphragm is normally etched out in silicon; the deflection of the diaphragm due to the applied pressure is measured using Wheatstone circuitry. However, the deflection profile can also be measured using ESPI; in fact, ESPI is used to calibrate the pressure transducer. Figure 7.23b–d show the results of measurement when pressure is applied to the sensor between two frames captured by the CCD camera. Figure 7.23e shows the deflection profile of the pressure sensor.

7.28.5 SHEAR ESPI MEASUREMENT

Again, only the Michelson-interferometer-based shear configurations can be used in ESPI. Other methods of shearing, such as aperture masks with wedges, produce fringes in speckles that are too fine to be resolved by a CCD detector. As mentioned earlier, the fringe pattern carries information about the derivatives of the in-plane and out-of-plane components. This can be seen from the expression

δ ≈ (2π/λ)[sin θ (∂dx/∂x) + (1 + cos θ)(∂dz/∂x)] Δxo

where θ is the angle that the illumination beam makes with the z axis. Obviously, pure partial slope fringes are obtained when θ = 0.

When an in-plane-sensitive configuration is used and shear ESPI is performed, it is possible to obtain both strain and partial slope fringe patterns. Assuming illumination by one beam at a time, we can express the phase differences introduced by deformation in a shear ESPI set-up as

δ1 ≈ (2π/λ)[sin θ (∂dx/∂x) + (1 + cos θ)(∂dz/∂x)] Δxo

δ2 ≈ (2π/λ)[−sin θ (∂dx/∂x) + (1 + cos θ)(∂dz/∂x)] Δxo

Obviously, when we subtract these two expressions, we obtain strain fringes, and if we add them, we obtain partial slope fringes.
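A minimal sketch with assumed values for the strain, slope, illumination angle, and shear confirms that the difference of δ1 and δ2 isolates the strain term while their sum isolates the slope term:

```python
import numpy as np

# Sketch: separating strain and slope from the two phase differences above
# (all numerical values are assumed for illustration).
wavelength = 532e-9
theta = np.deg2rad(30.0)    # illumination angle
shear = 4e-3                # Delta x_o
strain = 1e-4               # d(dx)/dx
slope = 5e-4                # d(dz)/dx

k = 2 * np.pi / wavelength
delta1 = k * ( np.sin(theta) * strain + (1 + np.cos(theta)) * slope) * shear
delta2 = k * (-np.sin(theta) * strain + (1 + np.cos(theta)) * slope) * shear

strain_out = (delta1 - delta2) / (2 * k * np.sin(theta) * shear)
slope_out = (delta1 + delta2) / (2 * k * (1 + np.cos(theta)) * shear)
print(strain_out, slope_out)   # recovers the assumed 1e-4 and 5e-4
```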

7.29 CONTOURING IN ESPI

All of the contouring methods discussed in Chapter 6 (Section 6.1.8) can be easily incorporated in ESPI. We give a brief account of these methods and show how they can be adopted in ESPI.



FIGURE 7.23 (a) Schematic of an ESPI system for studying small objects. (b) Correlation fringes. (c) Unwrapped fringes. (d) Wrapped phase. (e) Deflection profile of the sensor. (Courtesy of Dr N. Krishna Mohan, IIT Madras.)

7.29.1 CHANGE OF DIRECTION OF ILLUMINATION

The object is illuminated by a diverging wave from a point source, and the usual ESPI set-up is used. The first frame is grabbed and stored. The illumination point source is then shifted laterally slightly so as to change the direction of illumination. The second frame captured is subtracted from the stored frame, and the contour fringes are displayed on the monitor. The contour interval is given by

λL/(2 sin θ Δs) = λ/(2 sin θ Δφ)  (7.64)


where Δs is the lateral shift of the point source that causes an angular shift of Δφ, and L is the distance between the point source and the object plane.

7.29.2 CHANGE OF WAVELENGTH

The dual-wavelength method requires two sources of slightly different wavelength or a single source that can be tuned. Two frames are grabbed and subtracted, each frame with one of the wavelengths of light. True depth contours separated by Δz are generated by this method, where the separation Δz is given by

Δz = λeff/2 = λ1λ2/(2|λ1 − λ2|)  (7.65)

where λeff is the effective (synthetic) wavelength.

7.29.3 CHANGE OF MEDIUM SURROUNDING THE OBJECT

Here, the medium surrounding the object is changed between exposures. Subtraction yields true depth contours. In fact, the method is equivalent to the dual-wavelength method, since the wavelength in the medium changes when its refractive index is changed. We can thus arrive at the same result by writing the wavelength in terms of the refractive index and the vacuum wavelength. The contour interval Δz is given by

Δz = λ/(2|n1 − n2|) = λ/(2Δn)  (7.66)

Here, Δn is the change in refractive index when one medium is replaced by the other.

7.29.4 TILT OF THE OBJECT

This is a new contouring method, and is applicable to speckle interferometry only. In this method, an in-plane-sensitive configuration is used, that is, Leendertz's configuration. The object is rotated by a small amount between exposures. This converts the depth information, through the rotation, into an in-plane displacement to which the set-up is sensitive. The depth contour interval Δz is given by

Δz = λ/(2 sin θ sin Δφ) ≈ λ/(2 sin θ Δφ)  (7.67)

where 2θ is the interbeam angle of the illumination waves and Δφ is the angle of rotation. Several modifications of this technique have been published.
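The following sketch evaluates the contour intervals of Equations 7.64 through 7.67 for one assumed set of parameters (wavelength, geometry, and tilt chosen purely for illustration):

```python
import numpy as np

# Sketch: contour intervals of Eqs. 7.64-7.67 for one assumed parameter set.
lam = 532e-9                                  # wavelength (assumed)

theta = np.deg2rad(30.0)                      # illumination / interbeam half-angle (assumed)
L, ds = 0.5, 1e-3                             # source distance and lateral source shift (assumed)
dz_illum = lam * L / (2 * np.sin(theta) * ds)           # Eq. 7.64

lam1, lam2 = 532.0e-9, 532.2e-9               # assumed wavelength pair
dz_dual = lam1 * lam2 / (2 * abs(lam1 - lam2))          # Eq. 7.65

n1, n2 = 1.000, 1.001                         # assumed refractive indices
dz_medium = lam / (2 * abs(n1 - n2))                    # Eq. 7.66

dphi = np.deg2rad(0.05)                       # assumed object tilt
dz_tilt = lam / (2 * np.sin(theta) * dphi)              # Eq. 7.67

for name, dz in [("illumination shift", dz_illum), ("dual wavelength", dz_dual),
                 ("medium change", dz_medium), ("object tilt", dz_tilt)]:
    print(f"{name:18s}: contour interval = {dz * 1e6:7.1f} um")
```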

7.30 SPECIAL TECHNIQUES

7.30.1 USE OF RETRO-REFLECTIVE PAINT

One of the features of speckle techniques is that they can be performed on surfaces without any treatment. However, treatment of the surface does help in several cases: for example, coating a dark surface with white paint improves the contrast of fringes. When a retro-reflective paint is used, it is possible to enhance the sensitivity of in-plane measurement by a factor of two, while the sensitivity remains practically unchanged for out-of-plane displacement measurements. Figure 7.24 shows a schematic of the experimental arrangement used to measure the in-plane displacement component.

FIGURE 7.24 In-plane displacement measurement with enhanced sensitivity.

It can easily be shown that the phase change introduced due to deformation can be written as

δ = (2π/λ) 4dx sin θ

This shows that the sensitivity is enhanced by a factor of two. It can be seen that this arrangement can also be used for shear ESPI. The shear is introduced by tilting one of the mirrors. The phase difference introduced is given by

δ2 ≈ (2π/λ)[4dx sin θ + 2(sin θ (∂dx/∂x) + cos θ (∂dz/∂x))Δxo]  (7.68)

It can be seen that there is increased sensitivity for the in-plane derivative but almost no change of sensitivity for the out-of-plane displacement derivative.

7.31 SPATIAL PHASE-SHIFTING

It has been pointed out that if the object wave changes during the time interval required for temporal phase-shifting, the results of such a measurement are likely to be grossly erroneous. In such a situation, one uses spatial phase-shifting, where the information is extracted from a single interferogram. This requires the presence of carrier fringes. Fortunately, these can easily be introduced in Duffy's set-up, and the fringe period can also be varied. Moreover, it has been shown that the sensitivity of Duffy's configuration can be increased. This arrangement is therefore best suited for spatial phase-shifting. It has been used for measuring in-plane displacement, out-of-plane displacement, and shear, as well as for contouring.
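One common way of implementing spatial phase-shifting, assumed here purely for illustration, is to set the carrier at four pixels per fringe so that three neighbouring pixels form a 90° phase-shifted set; a minimal sketch of this scheme is:

```python
import numpy as np

# Sketch: spatial phase-shifting with a carrier of four pixels per fringe,
# so that three neighbouring pixels form a 90-degree phase-shifted set.
# The deformation phase and modulation values are assumed for illustration.
N = 512
x = np.arange(N)
carrier = np.pi / 2 * x                          # 90 degrees per pixel
phase = 1.5 * np.sin(2 * np.pi * x / N)          # slowly varying phase to recover
I = 1.0 + 0.8 * np.cos(carrier + phase)          # one row of the interferogram

I1, I2, I3 = I[:-2], I[1:-1], I[2:]              # three neighbouring samples
wrapped = np.arctan2(I1 + I3 - 2 * I2, I1 - I3)  # three-sample, 90-degree formula
recovered = np.unwrap(wrapped - carrier[:-2])    # remove the carrier, unwrap 2*pi jumps
print("max phase error (rad):", np.abs(recovered - phase[:-2]).max())
```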

BIBLIOGRAPHY

1. J. C. Dainty (Ed.), Laser Speckle and Related Phenomena, Springer-Verlag, Berlin, 1975.
2. R. K. Erf (Ed.), Speckle Metrology, Academic Press, New York, 1978.
3. M. Françon (Ed.), Laser Speckle and Applications in Optics, Academic Press, New York, 1979.
4. R. S. Sirohi (Ed.), Selected Papers on Speckle Metrology, Milestone Series MS 35, SPIE, Bellingham, WA, 1991.
5. R. S. Sirohi (Ed.), Speckle Metrology, Marcel Dekker, New York, 1993.
6. P. Meinlschmidt, K. D. Hinsch, and R. S. Sirohi (Eds.), Selected Papers on Electronic Speckle Pattern Interferometry, Milestone Series MS 132, SPIE, Bellingham, WA, 1996.
7. P. K. Rastogi (Ed.), Digital Speckle Pattern Interferometry and Related Techniques, Wiley-VCH, Weinheim, 2000.
8. W. Steinchen and L. Yang, Digital Shearography: Theory and Application of Digital Speckle Pattern Shearing Interferometry, Optical Engineering Press, SPIE, Washington, DC, 2003.

ADDITIONAL READING

1. J. A. Leendertz, Interferometric displacement measurement on scattering surfaceutilizing speckle effect, J. Phys. E: Sci. Instrum., 3, 214–218, 1970.

2. J. N. Butters and J. A. Leendertz, Speckle pattern and holographic techniques inengineering metrology, Opt. Laser Technol., 3, 26–30, 1971.

3. J. N. Butters and J. A. Leendertz, A double exposure technique for speckle patterninterferometry, J. Phys., 4, 272–279, 1971.

4. J. N. Butters and J. A. Leendertz, Holographic and video techniques applied toengineering measurement, Trans. Inst. Meas. Control, 4, 349–354, 1971.

5. A. Macovski, S. D. Ramsey and L. F. Schaefer, Time-lapse interferometry andcontouring using television systems, Appl. Opt., 10, 2722–2727, 1971.

6. H. J. Tiziani, Application of speckling for in-plane vibration analysis, Opt. Acta, 18,891–902, 1971.

7. H. J. Tiziani, A study of the use of laser speckle to measure small tilts of opticallyrough surfaces accurately, Opt. Commun., 5, 271–276, 1972.

8. D. E. Duffy, Moiré gauging of in-plane displacement using double aperture imaging,Appl. Opt., 11, 1778–1781, 1972.

9. E. Archbold and A. E. Ennos, Displacement measurement from double-exposure laserphotographs, Opt. Acta, 19, 253–271, 1972.

10. J. A. Leendertz and J. N. Butters, An image-shearing speckle-pattern interferometerfor measuring bending moments, J. Phys. E: Sci. Instrum., 6, 1107–1110, 1973.

11. Y. Y. Hung, A speckle-shearing interferometer: A tool for measuring derivatives ofsurface displacements, Opt. Commun., 11, 132–135, 1974.

12. H. M. Pedersen, O. J. Løkberg and B. M. Foerre, Holographic vibration measurementusing a TV speckle interferometer with silicon target vidicon, Opt. Commun., 12,421–426, 1974.


13. K. A. Stetson, Analysis of double-exposure speckle photography with two-beamillumination, J. Opt. Soc. Am., 64, 857–861, 1974.

14. G. Cloud, Practical speckle interferometry for measuring in-plane deformation, Appl.Opt., 14, 878–884, 1975.

15. P. M. Boone, Determination of slope and strain contours by double-exposure shearinginterferometry, Exp. Mech., 15, 295–302, 1975.

16. M. Francon, P. Koulev and M. May, Detection of small displacements of diffuse objectsilluminated by a speckle pattern, Opt. Commun., 13, 138–141, 1975.

17. R. Jones and J. N. Butters, Some observations on the direct comparison of the geom-etry of two objects using speckle pattern interferometric contouring, J. Phys. E: Sci.Instrum., 8, 231–234, 1975.

18. C. Forno, White-light speckle photography for measuring deformation, strain, andshape, Opt. Laser Technol., 7, 217–221, 1975.

19. D. A. Gregory, Basic physical principles of defocused speckle photography: A tilttopology inspection technique, Opt. Laser Technol., 8, 201–213, 1976.

20. R. Jones, The design and application of a speckle pattern interferometer for total planestrain field measurement, Opt. Laser Technol., 8, 215–219, 1976.

21. R. P. Khetan and F. P. Chiang, Strain analysis by one-beam laser speckle interferometry.1: Single aperture method, Appl. Opt., 15, 2205–2215, 1976.

22. F. P. Chiang and R. M. Juang, Vibration analysis of plate and shell by laser speckleinterferometry, Opt. Acta, 23, 997–1009, 1976.

23. K. Hogmoen and O. J. Løkberg, Detection and measurement of small vibrations usingelectronic speckle pattern interferometry, Appl. Opt., 16, 1869–1875, 1977.

24. K. P. Miyake and H. Tamura, Observation and explanation of the localization of fringes formed with double-exposure speckle photograph, In Applications of Holography and Optical Data Processing (ed. E. Marrom, A. A. Friesem, and E. Wiener), 333–339, Pergamon Press, Oxford, 1977.

25. C. Wykes, De-correlation effects in speckle-pattern interferometry 1. Wavelengthchange dependent de-correlation with application to contouring and surface roughnessmeasurement, Opt. Acta, 24, 517–532, 1977.

26. T. J. Cookson, J. N. Butters, and H. C. Pollard, Pulsed lasers in electronic specklepattern interferometry, Opt. Laser Technol., 10, 119–124, 1978.

27. J. Brdicko, M. D. Olson, and C. R. Hazell, Theory for surface displacement and strainmeasurements by laser speckle interferometry, Opt. Acta, 25, 963–989, 1978.

28. Y. Y. Hung, I. M. Daniel, and R. E. Rowlands, Full-field optical strain measurementhaving postrecording sensitivity and direction selectivity, Exp. Mech., 18, 56–60,1978.

29. O. J. Løkberg, K. Hogmoen, and O. M. Holje, Vibration measurement on the humanear drum in vivo, Appl. Opt., 18, 763–765, 1979.

30. J. Brdicko, M. D. Olson and C. R. Hazell, New aspects of surface displacement andstrain analysis by speckle interferometry, Exp. Mech., 19, 160–165, 1979.

31. Y.Y. Hung and A. J. Durelli, Simultaneous measurement of three displacement deriva-tives using a multiple image-shearing interferometric camera, J. Strain Anal., 14,81–88, 1979.

32. J. Holoubek and J. Ruzek, A strain analysis technique by scattered-light specklephotography, Opt. Acta, 26, 43–54, 1979.

33. G. E. Slettemoen, General analysis of fringe contrast in electronic speckle patterninterferometry, Opt. Acta, 26, 313–327, 1979.

34. R. P. Khetan and F. P. Chiang, Strain analysis by one-beam laser speckle interferometry.2: Multiaperture method, Appl. Opt., 18, 2175–2186, 1979.


35. O. J. Løkberg, Electronic speckle pattern interferometry, Phys. Technol., 11, 16–22,1980.

36. G. H. Kaufmann, On the numerical processing of speckle photograph fringes, Opt.Laser Technol., 12, 207–209, 1980.

37. G. E. Slettemoen, Electronic speckle pattern interferometric system based on aspeckle reference beam, Appl. Opt., 19, 616–623, 1980.

38. S. Nakadate, T. Yatagai and H. Saito, Electronic speckle-pattern interferometry usingdigital image processing techniques, Appl. Opt., 19, 1879–1883, 1980.

39. S. Nakadate, T. Yatagai and H. Saito, Digital speckle-pattern shearing interferometry,Appl. Opt., 19, 4241–4246, 1980.

40. G. H. Kaufmann, A. E. Ennos, B. Gale and D. J. Pugh, An electro-optical read-outsystem for analysis of speckle photographs, J. Phys. E: Sci. Instrum., 13, 579–584,1980.

41. I. Yamaguchi, Speckle displacement and decorrelation in the diffraction and imagefields for small object deformation, Opt. Acta, 28, 1359–1376, 1981.

42. O. J. Løkberg and G. E. Slettemoen, Interferometric comparison of displacements byelectronic speckle pattern interferometry, Appl. Opt., 20, 2630–2634, 1981.

43. I. Yamaguchi, A laser-speckle strain gauge, J. Phys. E: Sci. Instrum., 14, 1270–1273,1981.

44. G. K. Jaisingh and F. P. Chiang, Contouring by laser speckle, Appl. Opt., 20, 3385–3387, 1981.

45. C. S. Vikram and K. Vedam, Speckle photography of lateral sinusoidal vibrations:Error due to varying halo intensity, Appl. Opt., 20, 3388–3391, 1981.

46. R. Jones and C. Wykes, General parameters for the design and optimization ofelectronic speckle pattern interferometers, Opt. Acta, 28, 949–972, 1982.

47. C. Wykes, Use of electronic speckle pattern interferometry (ESPI) in the measurementof static and dynamic surface displacements, Opt. Eng., 21, 400–406, 1982.

48. R. Krishna Murthy, R. S. Sirohi and M. P. Kothiyal, Detection of defects in plates anddiaphragms using split lens speckle-shearing interferometer, NDT International (UK),15, 329–33, 1982.

49. A. Asundi and F. P. Chiang, Theory and applications of the white light speckle methodfor strain analysis, Opt. Eng., 21, 570–580, 1982.

50. R. K. Mohanty, C. Joenathan, and R. S. Sirohi, Speckle shear interferometry withdouble dove prisms, Opt. Commun., 47, 27–30, 1983.

51. T. D. Dudderar, J. A. Gilbert, A. J. Boehnlein and M. E. Schultz, Application of fiberoptics to speckle metrology – a feasibility study, Exp. Mech., 23, 289–297, 1983.

52. D. W. Robinson, Automatic fringe analysis with a computer image-processing system,Appl. Opt., 22, 2169–2176, 1983.

53. D. K. Sharma, R. S. Sirohi, and M. P. Kothiyal, Simultaneous measurement of slopeand curvature with three aperture speckle interferometer, Appl. Opt., 23, 1542–1546,1984.

54. C. Shakher and G. V. Rao, Use of holographic optical elements in speckle metrology,Appl. Opt., 23, 4592–4595, 1984.

55. R. Krishna Murthy, R. K. Mohanty, R. S. Sirohi, and M. P. Kothiyal, Radialspeckle shearing interferometer and its engineering applications, Optik, 67, 85–94,1984.

56. C. Joenathan, R. K. Mohanty, and R. S. Sirohi, On the methods of multiplexing inspeckle shear interferometry, Optik, 69, 8–12, 1984.

57. R. K. Mohanty, C. Joenathan, and R. S. Sirohi, Speckle interferometric methods ofmeasuring out of plane displacements, Opt. Lett., 9, 475–477, 1984.


58. R. S. Sirohi, Speckle shear interferometry—A review, J. Opt. (India), 13, 95–113,1984.

59. B. P. Holownia, Non-destructive testing of overlap shear joints using electronic specklepattern interferometry, Opt. Lasers Eng., 6, 79–90, 1985.

60. C. Joenathan, R. K. Mohanty, and R. S. Sirohi, Hololens in speckle and speckle shearinterferometry, Appl. Opt., 24, 1294–1298, 1985.

61. R. K. Mohanty, C. Joenathan, and R. S. Sirohi, Multiplexing in speckle shearinterferometry – An optimal method, Appl. Opt., 24, 2043–2044, 1985.

62. S. Nakadate and H. Saito, Fringe scanning speckle-pattern interferometry, Appl. Opt.,24, 2172–2180, 1985.

63. K. Creath, Phase-shifting speckle interferometry, Appl. Opt., 24, 3053–3058, 1985.

64. R. K. Mohanty, C. Joenathan, and R. S. Sirohi, Speckle and speckle shear interferometers combined for simultaneous determination of out of plane displacement and slope, Appl. Opt., 24, 3106–3109, 1985.

65. O. J. Løkberg, J. T. Malmo, and G. E. Slettemoen, Interferometric measurements ofhigh temperature objects by electronic speckle pattern interferometry, Appl. Opt., 24,3167–3172, 1985.

66. K. A. Stetson and W. R. Brohinsky, Electrooptic holography and its application tohologram interferometry, Appl. Opt., 24, 3631–3637, 1985.

67. R. Erbeck, Fast image processing with a microcomputer applied to speckle photogra-phy, Appl. Opt., 24, 3838–3841, 1985.

68. F. P. Chiang and D. W. Li, Random (speckle) patterns for displacement and strainmeasurement: Some recent advances, Opt. Eng., 24, 936–943, 1985.

69. K. Creath and G. E. Slettemoen, Vibration-observation techniques for digital speckle-pattern interferometry, J. Opt. Soc. Am. A, 2, 1629–1636, 1985.

70. C. Joenathan, R. K. Mohanty, and R. S. Sirohi, Contouring by speckle interferometry,Opt. Letts., 10, 579–581, 1985.

71. R. K. Mohanty, C. Joenathan, and R. S. Sirohi, NDT speckle rotational shearinterferometry, NDT International (UK), 18, 203–205, 1985.

72. R. K. Mohanty, C. Joenathan, and R. S. Sirohi, High sensitivity tilt measurement byspeckle shear interferometry, Appl. Opt., 25, 1661–1664, 1986.

73. J. R. Tyrer, Critical review of recent developments in electronic speckle patterninterferometry, Proc. SPIE, 604, 95–111, 1986.

74. J. M. Huntley, An image processing system for the analysis of speckle photographs,J. Phys. E: Sci. Instrum., 19, 43–49, 1986.

75. C. Joenathan, C. S. Narayanamurthy, and R. S. Sirohi, Radial and rotational slopecontours in speckle shear interferometry, Opt. Commun., 56, 309–312, 1986.

76. C. Joenathan and R. S. Sirohi, Elimination of errors in speckle photography, Appl.Opt., 25, 1791–1794, 1986.

77. S. Nakadate, Vibration measurement using phase-shifting speckle pattern interferom-etry, Appl. Opt., 25, 4162–4167, 1986.

78. A. R. Ganesan, C. Joenathan and R. S. Sirohi, Real-time comparative digital specklepattern interferometry, Opt. Commun., 64, 501–506, 1987.

79. F. Ansari and G. Ciurpita, Automated fringe measurement in speckle photography,Appl. Opt., 26, 1688–1692, 1987.

80. R. Hoefling and W. Osten, Displacement measurement by image-processed specklepatterns, J. Mod. Opt., 34, 607–617, 1987.

81. O. J. Løkberg and J. T. Malmo, Detection of defects in composite materials by TVholography, NDT International (UK), 21, 223–228, 1988.

82. S. Winther, 3D Strain Measurements using ESPI, Opt. Lasers Eng., 8, 45–57, 1988.


83. M. Owner-Petersen and P. Damgaard Jensen, Computer-aided electronic specklepattern interferometry (ESPI): Deformation analysis by fringe manipulation, NDTInternational (UK), 21, 422–426, 1988.

84. C. Joenathan, C. S. Narayanamurthy, and R. S. Sirohi, Localization of fringes inspeckle photography that are due to axial motion of the diffuse object, J. Opt. Soc.Am. A, 5, 1035–1040, 1988.

85. R. W. T. Preater, Measuring rotating component in-plane strain using conventionalpulsed ESPI and optical fibres, Proc. SPIE, 952, 245–250, 1988.

86. A. R. Ganesan and R. S. Sirohi, New method of contouring using digital speckle patterninterferometry (DSPI), Proc. SPIE, 954, 327–332, 1988.

87. O. J. Løkberg and J. T. Malmo, Long-distance electronic speckle pattern interferom-etry, Opt. Eng., 27, 150–156, 1988.

88. Boxiang Lu, H. Abendroth, H. Eggers, E. Ziolkowski, and X. Yang, Real timeinvestigation of rotating objects using ESPI system, Proc. SPIE, 1026, 218–221, 1988.

89. V. Parthiban, C. Joenathan, and R. S. Sirohi, A simple inverting interferometer withholo elements, Appl. Opt., 27, 1913–1914, 1988.

90. A. R. Ganesan, D. K. Sharma, and M. P. Kothiyal, Universal digital speckle shearinginterferometer, Appl. Opt., 27, 4731–4734, 1988.

91. J. Davies and C. Buckberry, Applications of fibre optic TV holography system to thestudy of large automotive structures, Proc. SPIE, 1162, 279–291, 1989.

92. R. J. Pryputniewicz and K.A. Stetson, Measurement of vibration patterns using electro-optic holography, Proc. SPIE, 1162, 456–467, 1989.

93. V. M. Chaudhari, A. R. Ganesan, C. Shakher, P. B. Godbole, and R. S. Sirohi, Inves-tigations of in-plane stresses on bolted flange joint using digital speckle patterninterferometry, Opt. Lasers Eng., 11, 257–264, 1989.

94. E. Vikhagen, Vibration measurement using phase shifting TV-holography and digitalimage processing, Opt. Commun., 69, 214–218, 1989.

95. M. Kujawinska, A. Spik, and D.W. Robinson, Quantitative analysis of transient eventsby ESPI, Proc. SPIE, 1121, 416–423, 1989.

96. W. Arnold and K. D. Hinsch, Parallel optical evaluation of double-exposure recordsin optical metrology, Appl. Opt., 28, 726–729, 1989.

97. J. M. Huntley, Speckle photography fringe analysis:Assessment of current algorithms,Appl. Opt., 28, 4316–4322, 1989.

98. K. D. Hinsch, Fringe positions in double-exposure speckle photography, Appl. Opt.,28, 5298–5304, 1989.

99. A. J. Moore and J. R. Tyrer, An electronic speckle pattern interferometer for completein-plane displacement measurement, Meas. Sci. Technol., 1, 1024–1030, 1990.

100. R. P. Tatam, J. C. Davies, C. H. Buckberry, and J. D. C. Jones, Holographic surfacecontouring using wavelength modulation of laser diodes, Opt. Laser Technol, 22,317–321, 1990.

101. G. Gulker, K. Hinsch, C. Holscher, A. Kramer, and H. Neunaber, Electronic specklepattern interferometry system for in situ deformation monitoring on buildings, Opt.Eng., 29, 816–820, 1990.

102. E. Vikhagen, Nondestructive testing by use of TV holography and deformation phasegradient calculation, Appl. Opt., 29, 137–144, 1990.

103. M. Owner-Petersen, Decorrelation and fringe visibility: On the limiting behavior ofvarious electronic speckle-pattern correlation interferometers, J. Opt. Soc. Am. A, 8,1082–1089, 1991.

104. M. Owner-Petersen, Digital speckle pattern shearing interferometry: Limitations andprospects, Appl. Opt., 30, 2730–2738, 1991.


105. C. Joenathan and B. M. Khorana, A simple and modified ESPI system, Optik, 88,169–171, 1991.

106. H. Kadono, S. Toyooka, and Y. Iwasaki, Speckle-shearing interferometry using aliquid-crystal cell as a phase modulator, J. Opt. Soc. Am. A, 8, 2001–2008, 1991.

107. R. Hoefling, P. Aswendt, W. Totzauer, and W. Jüptner, DSPI: A tool for analysingthermal strains on ceramic and composite materials, Proc. SPIE, 1508, 135–142, 1991.

108. S. Takemoto, Geophysical applications of holographic and ESPI techniques, Proc.SPIE, 1553, 168–174, 1991.

109. M. M. Ratnam, W. T. Evans, and J. R. Tyrer, Measurement of thermal expansion of apiston using holographic and electronic speckle pattern interferometry, Opt. Eng., 31,61–69, 1992.

110. S. Ellingsrud and G. O. Rosvold, Analysis of a data-based TV-holography system usedto measure small vibration amplitudes, J. Opt. Soc. Am. A, 9, 237–251, 1992.

111. R. R. Vera, D. Kerr, and F. M. Santoyo, Electronic speckle contouring, J. Opt. Soc.Amer. A, 9, 2000–2008, 1992.

112. R. S. Sirohi and N. Krishna Mohan, Speckle interferometry for deformation measure-ment, J. Mod. Opt., 39, 1293–1300, 1992.

113. A. R. Ganesan, B. C. Tan, and R. S. Sirohi, Tilt measurement using digital specklepattern interferometry, Opt. Laser Technol., 24, 257–261, 1992.

114. G. Gulker, O. Haack, K. D. Hinsch, C. Holscher, J. Kuls, and W. Platen, Two-wavelength electronic speckle-pattern interferometry for the analysis of discontinuousdeformation fields, Appl. Opt, 31, 4519–4521, 1992.

115. R. Spooren, Double-pulse subtraction TV holography, Opt. Eng., 31, 1000–1007, 1992.

116. R. Rodriguez-Vera, D. Kerr, and F. Mendoza-Santoyo, Electronic speckle contouring, J. Opt. Soc. Am. A, 9, 2000–2007, 1992.

117. X. Peng, H. Y. Diao, Y. L. Zou, and H. Tiziani, A novel approach to determine decorrelation effect in a dual-beam electronic speckle pattern interferometer, Optik, 90, 129–133, 1992.

118. I. Yamaguchi, Z. B. Liu, J. Kato, and J. Y. Liu, Active phase-shifting speckle inter-ferometer, In Fringe ’93: Proceedings of 2nd International Workshop on AutomaticProcessing of Fringe Pattern (ed. W. Jüptner and W. Osten), 103–108, AkademieVerlag, Bremen, 1993.

119. P. Aswendt and R. Hoefling, Speckle interferometry for analysing anisotropic thermalexpansion—Application to specimens and components, Composites, 24, 611–617,1993.

120. R. Spooren, A. Aksnes Dyrseth, and M. Vaz, Electronic shear interferometry:application of a (double-)pulsed laser, Appl. Opt., 32, 4719–4727, 1993.

121. A. J. P. von Haasteren and H. J. Frankena, Real time displacement measurement usinga multi-camera phase stepping interferometer, In Fringe ’93: Proceedings of 2nd Inter-national Workshop on Automatic Processing of Fringe Patterns (ed. W. Jüptner andW. Osten), 417–422, Akademie Verlag, Bremen, 1993.

122. G. Pedrini, B. Pfister, and H. Tiziani, Double pulse-electronic speckle patterninterferometry, J. Mod. Opt., 40, 89–96, 1993.

123. N. Krishna Mohan, H. Saldner, and N.-E. Molin, Electronic speckle pattern interfer-ometry for simultaneous measurement of out-of-plane displacement and slope, Opt.Lett., 18, 1861–1863, 1993.

124. J. Kato, I. Yamaguchi, and Q. Ping, Automatic deformation analysis by a TV speckleinterferometer using a laser diode, Appl. Opt., 32, 77–83, 1993.

125. D. Paoletti and G. S. Spagnolo, Automatic digital speckle pattern interferometrycontouring in artwork surface inspection, Opt. Eng., 32, 1348–1353, 1993.


126. B. F. Pouet and S. Krishnaswamy,Additive/subtractive decorrelated electronic specklepattern interferometry, Opt. Eng., 32, 1360–1368, 1993.

127. R. S. Sirohi and N. Krishna Mohan, In-plane displacement measurement configurationwith two fold sensitivity, Appl. Opt. 32, 6387–6390, 1993.

128. C. Joenathan, B. Franze, and H. J. Tiziani, Oblique incidence and observationelectronic speckle-pattern interferometry, Appl. Opt., 33, 7307–7311, 1994.

129. A. R. Ganesan, P. Meinlschmidt, and K. Hinsch,Vibration mode separation using com-parative electronic speckle pattern interferometry (ESPI), Opt. Commun., 107, 28–34,1994.

130. J. Sivaganthan, A. R. Ganesan, B. C. Tan, K. S. Law, and R. S. Sirohi, Study of steelweldments using electronic speckle pattern interferometry, J. Test. Eval., 22, 42–44,1994.

131. P. R. Sreedhar, N. Krishna Mohan, and R. S. Sirohi, Real time speckle photography withtwo wave mixing in photorefractive BaTiO3 crystal, Opt. Eng., 33, 1989–1995, 1994.

132. G. Jin, N. Bao, and P. S. Chung, Applications of a novel phase shift method using acomputer-controlled polarization mechanism, Opt. Eng., 33, 2733–2737, 1994.

133. O. J. Løkberg, Sound in flight: Measurement of sound fields by use of TV holography,Appl. Opt., 33, 2574–2584, 1994.

134. H. M. Pedersen, O. J. Løkberg, H.Valø, and G. Wang, Detection of non-sinusoidal peri-odic vibrations using phase modulated TV-holography, Opt. Commun., 104, 271–276,1994.

135. R. S. Sirohi and N. Krishna Mohan, Role of lens aperturing in speckle metrology,J. Sci. Ind. Res., 54, 67–74, 1995.

136. G. Gulker and K. D. Hinsch, Detection of microstructure changes by electronic specklepattern interferometry, Opt. Lasers Eng., 26, 1348–1353, 1996.

137. R. S. Sirohi, N. Krishna Mohan, and T. Santhanakrishnan, Optical configuration formeasurement in speckle interferometry, Opt. Lett., 21, 1958–1959, 1996.

138. N. Krishna Mohan, T. Santhanakrishnan, P. Senthilkumaran, and R. S. Sirohi, Simul-taneous implementation of Leendertz and Duffy methods for in-plane displacementmeasurement, Opt. Commun., 124, 235–239, 1996.

139. P. K. Rastogi, Measurement of in plane strains using electronic speckle and electronicspeckle shearing pattern interferometry, J. Mod. Opt., 43, 1577–1581, 1996.

140. M. Sjödahl, Accuracy in electronic speckle photography, Appl. Opt., 36, 2875–2885,1997.

141. T. Bothe, J. Burke, and H. Helmers, Spatial phase shifting in electronic specklepattern interferometry: Minimization of phase reconstruction errors, Appl. Opt., 36,5310–5316, 1997.

142. R. S. Sirohi, J. Burke, H. Helmers, and K. D. Hinsch, Spatial phase shifting for pure in-plane displacement and displacement-derivative measurements in electronic specklepattern interferometry (ESPI), Appl. Opt., 36, 5787–5791, 1997.

143. P. K. Rastogi, An electronic pattern speckle shearing interferometer for measurementof surface slope variations of three-dimensional objects, Opt. Lasers Eng., 26, 93–100,1997.

144. K. S. Kim, J. H. Kim, J. K. Lee, and S. S. Jarng, Measurement of thermal expansioncoefficients by electronic speckle pattern interferometry at high temperature, J. Mater.Sci. Lett., 16, 1753–1756, 1997.

145. P. K. Rastogi, Determination of surface strains by speckle shear photography, Opt.Lasers Eng., 29, 103–116, 1998.

146. R. S. Sirohi and N. Krishna Mohan, An in-plane insensitive multiaperture speckleshear interferometer for slope measurement, Opt. Laser Technol., 29, 415–417, 1997.


147. P. K. Rastogi, Speckle shearing photography: A tool for direct measurement of surfacestrains, Appl. Opt., 37, 1292–1298, 1998.

148. J. N. Petzing and J. R. Tyrer, Recent developments and applications in electronicspeckle pattern interferometry, J. Strain Anal. Eng. Des., 33, 153–169, 1998.

149. C. Joenathan, B. Franze, P. Haible, and H. J. Tiziani, Speckle interferometry withtemporal phase evaluation for measuring large-object deformations, Appl. Opt., 37,2608–2614, 1998.

150. J. Zhang, Two-dimensional in-plane electronic speckle pattern interferometer and itsapplication to residual stress determination, Opt. Eng., 37, 2402, 1998.

151. J. Zhang and T. C. Chong, Fiber electronic speckle pattern interferometry and itsapplications in residual stress measurements, Appl. Opt., 37, 6707–6715, 1998.

152. A. J. Moore, D. P. Hand, J. S. Barton, and J. D. C. Jones, Transient deformationmeasurement with electronic speckle pattern interferometry and a high speed camera,Appl. Opt., 38, 1159–1162, 1999.

153. J. M. Huntley, G. H. Kaufmann, and D. Kerr, Phase-shifted dynamic speckle patterninterferometry at 1 kHz, Appl. Opt., 38, 6556–6563, 1999.

154. A. Andersson, A. Runnemalm, and M. Sjödahl, Digital speckle-pattern interferome-try: Fringe retrieval for large in-plane deformations with digital speckle photography,Appl. Opt., 38, 5408–5412, 1999.

155. B. B. García, A. J. Moore, C. Pérez-López, L. Wang, and T. Tschudi, Transient defor-mation measurement with electronic speckle pattern interferometry by use of a holo-graphic optical element for spatial phase stepping, Appl. Opt., 38, 5944–5947, 1999.

156. H. A. Aebischer and S. Waldner, A simple and effective method for filteringspeckle-interferometric phase fringe patterns, Opt. Commun., 162, 205–210, 1999.

157. F. Chen, G. M. Brown, and M. Song, Overview of three-dimensional shape measure-ment using optical methods, Opt. Eng., 39, 10–22, 2000.

158. H. Van der Auweraer, H. Steinbichler, C. Haberstok, R. Freymann, and D. Storer,Integration of pulsed-laser ESPI with spatial domain modal analysis: Results from theSALOME project, Proc. SPIE, 4072, 313, 2000.

159. B. Kemper, D. Dirksen, W. Avenhaus, A. Merker, and G. von Bally, Endoscopic double-pulse electronic-speckle-pattern interferometer for technical and medical intracavity inspection, Appl. Opt., 39, 3899–3905, 2000.

160. H. Van der Auweraer, H. Steinbichler, C. Haberstok, R. Freymann, D. Storer, andV. Linet, Industrial applications of pulsed-laser ESPI vibration analysis, Proc. SPIE,4359, 490–496, 2001.

161. A. Federico and G. H. Kaufmann, Comparative study of wavelet thresholding meth-ods for denoising electronic speckle pattern interferometry fringes, Opt. Eng., 40,2598–2604, 2001.

162. A. Kishen, V. M. Murukeshan, V. Krishnakumar, and A. Asundi, Analysis on the natureof thermally induced deformation in human dentine by electronic speckle patterninterferometry (ESPI), J. Dentistry, 29(8), 531–537, 2001.

163. C. Vial-Edwards, I. Lira, A. Martinez, and M. Münzenmayer, Electronic speckle pat-tern interferometry analysis of tensile tests of semihard copper sheets, Exp. Mech., 41,58–62, 2001.

164. D. Karlsson, G. Zacchi, and A. Axelsson, Electronic speckle pattern interferometry: Atool for determining diffusion and partition coefficients for proteins in gels, Biotechnol.Prog., 18, 1423–1430, 2002.

165. H. Van der Auweraer, H. Steinbichler, S. Vanlanduit, C. Haberstok, R. Freymann,D. Storer, andV. Linet,Application of stroboscopic and pulsed-laser electronic speckle


pattern interferometry (ESPI) to modal analysis problems, Meas. Sci. Technol., 13,451–463, 2002.

166. C. Shakher, R. Kumar, S. K. Singh, and S. A. Kazmi, Application of wavelet filter-ing for vibration analysis using digital speckle pattern interferometry, Opt. Eng., 41,176–180, 2002.

167. G. H. Kaufmann and G. E. Galizzi, Phase measurement in temporal speckle patterninterferometry: Comparison between the phase-shifting and the Fourier transformmethods, Appl. Opt., 41, 7254–7263, 2002.

168. L. Yang and A. Ettemeyer, Strain measurement by three-dimensional electronicspeckle pattern interferometry: Potentials, limitations, and applications, Opt. Eng.,42, 1257–1266, 2003.

169. V. Madjarova, H. Kadono, and S. Toyooka, Dynamic electronic speckle pattern inter-ferometry (DESPI) phase analyses with temporal Hilbert transform, Opt. Exp., 11,617–623, 2003.

170. A. Martínez, R. Rodríguez-Vera, J. A. Rayas, and H. J. Puga, Fracture detection bygrating moiré and in-plane ESPI techniques, Opt. Lasers Eng., 39, 525–536, 2003.

171. N. Krishna Mohan and P. K. Rastogi, Recent developments in digital speckle patterninterferometry, Opt. Lasers Eng., 40, 439–445, 2003.

172. F. Chen, W. D. Luo, M. Dale, A. Petniunas, P. Harwood, and G. M. Brown, High-speedESPI and related techniques: Overview and its application in the automotive industry,Opt. Lasers Eng., 40, 459–485, 2003.

173. X. Li, K. Wang, and B. Deng, Matched correlation sequence analysis in temporalspeckle pattern interferometry, Opt. Laser Technol., 36, 315–322, 2004.

174. R. R. Cordero, A. Martínez, R. Rodríguez-Vera, and P. Roth, Uncertainty evaluation ofdisplacements measured by electronic speckle-pattern interferometry, Opt. Commun.,241, 279–292, 2004.

175. Y. Fu, C. J. Tay, C. Quan, and L. J. Chen, Temporal wavelet analysis for deformationand velocity measurement in speckle interferometry, Opt. Eng., 43, 2780–2787, 2004.

176. T. R. Moore, A simple design for an electronic speckle pattern interferometer, Am. J.Phys., 72, 1380–1384, 2004.

177. Y. Y. Hung and H. P. Ho, Shearography: An optical measurement technique andapplications, Mater. Sci. Eng. R: Rep., 49(3), 61–87, 2005.

178. P. D. Ruiz, J. M. Huntley, and R. D. Wildman, Depth-resolved wholefield displace-ment measurement by wavelength-scanning electronic speckle pattern interferometry,Appl. Opt., 44, 3945–3953, 2005.

179. G. A. Albertazzi, Radial metrology with electronic speckle pattern interferometry,J. Hologr. Speckle, 3, 117–124, 2006.

180. H. Joost and K. D. Hinsch, Sound field monitoring by tomographic electronic specklepattern interferometry, Opt. Commun., 259, 492–498, 2006.

181. E. A. Barbosa and A. C. L. Lino, Multi-wavelength electronic speckle patterninterferometry for surface shape measurement, Appl. Opt., 46, 2624–2631, 2007.


8 Photoelasticity

So far, we have described techniques that are based on interference or diffraction of light, and even on the geometrical theory of light. The phenomena of interference and diffraction are manifestations of wave nature, and hence are not peculiar to light alone. On the other hand, light falls in a very small portion of a very wide electromagnetic spectrum. Electromagnetic waves are transverse waves: the electric and magnetic field vectors are orthogonal and vibrate perpendicular to the direction of propagation in free space or in an isotropic medium. In fact, the electric field vector E, magnetic field vector B, and propagation vector form an orthogonal triplet. Photo effects, such as vision and the recording of images on a photographic emulsion, are attributable to E, and hence when dealing with light we shall be concerned only with this field vector and not with B.

Let us consider a plane wave propagating along the z direction, with the electric vector confined to the (y, z) plane. The tip of the electric vector describes a line in the (y, z) plane as the wave propagates. The (y, z) plane is called the plane of vibration and the (x, z) plane the plane of polarization. Such a wave is called a plane polarized wave. The light emitted by an incandescent lamp or a fluorescent tube is not plane polarized, since the waves emitted by the source, although plane polarized, are randomly oriented. Such a wave is called unpolarized or natural light. We can obtain polarized light from unpolarized light. This will be discussed in Section 8.4.

8.1 SUPERPOSITION OF TWO PLANE POLARIZED WAVES

Let us consider two orthogonally polarized plane waves propagating in the z direction: in one wave, the E vector vibrates along the y direction, and in the other, along the x direction. These waves are described as follows:

Ey(z; t) = E0y cos(ωt − kz + δy)  (8.1)

Ex(z; t) = E0x cos(ωt − kz + δx)  (8.2)

where δy and δx are the phases of the waves. These waves satisfy the wave equation. Owing to the superposition principle, the sum of the waves will also satisfy the wave equation. In general, a wave will have both x and y components and can be written as

E(z; t) = iEx(z; t) + jEy(z; t) (8.3)


We wish to find out what path the tip of the electric vector traces when the waves are superposed. To do this, we introduce a variable τ = ωt − kz and express the plane waves as

Ey(z; t) = E0y(cos τ cos δy − sin τ sin δy)

Ex(z; t) = E0x(cos τ cos δx − sin τ sin δx)

From these equations, we obtain

(Ey/E0y) sin δx − (Ex/E0x) sin δy = cos τ sin(δx − δy)  (8.4a)

(Ey/E0y) cos δx − (Ex/E0x) cos δy = sin τ sin(δx − δy)  (8.4b)

Squaring both the left-hand and right-hand sides of Equations 8.4a and 8.4b and summing them, we obtain

(Ey/E0y)² + (Ex/E0x)² − 2(Ey/E0y)(Ex/E0x) cos δ = sin²δ  (8.5)

where δ = δx − δy. This equation represents an ellipse. The ellipse is inscribed in a rectangle of sides 2E0y and 2E0x that are parallel to the y and x axes, respectively. Hence, in the general case of the propagation of a monochromatic wave, the tip of its electric vector traces out an ellipse in any z plane. Such a wave is called elliptically polarized. Since it is a propagating wave, the tip of the E vector traces out a spiral. The tip of the E vector can rotate either clockwise or anticlockwise in the plane. These cases are termed right-hand polarization (the E vector rotates clockwise when facing the source of light) and left-hand polarization (the E vector rotates anticlockwise when facing the source), respectively.

It can be shown that the direction of rotation is governed by the sign of the phase difference δ. Let us consider a moment of time t0 when ωt0 − kz + δy = 0. At this moment, Ey = E0y, Ex = E0x cos(δx − δy) = E0x cos δ, and dEx/dt = −ωE0x sin δ. The rate of change of Ex, dEx/dt, is negative when 0 < δ < π and positive when π < δ < 2π. Obviously, the former case corresponds to the right-hand polarized wave and the latter to the left-hand polarized wave.
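A minimal sketch with assumed amplitudes E0x = 1, E0y = 0.6 and δ = π/3 traces the tip of the E vector, verifies Equation 8.5 numerically, and checks the handedness rule just stated:

```python
import numpy as np

# Sketch: trace the tip of the E vector for assumed amplitudes and phase difference,
# check Eq. 8.5, and check the handedness rule stated above.
E0x, E0y = 1.0, 0.6
delta = np.pi / 3                        # delta = delta_x - delta_y, here 0 < delta < pi

tau = np.linspace(0.0, 2 * np.pi, 1000)  # tau = omega*t - k*z at a fixed z
Ey = E0y * np.cos(tau)                   # take delta_y = 0 for convenience
Ex = E0x * np.cos(tau + delta)

lhs = (Ey / E0y) ** 2 + (Ex / E0x) ** 2 - 2 * (Ey / E0y) * (Ex / E0x) * np.cos(delta)
print("Eq. 8.5 satisfied:", np.allclose(lhs, np.sin(delta) ** 2))

# At the instant Ey = E0y (tau = 0), dEx/dt is proportional to -sin(delta):
print("sign of dEx/dt at Ey = E0y:", -np.sin(delta))   # negative -> right-hand polarized
```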

There are two special cases of elliptical polarization, which are known as plane or linear polarization and circular polarization. Various polarization states of light are shown in Figure 8.1.

8.2 LINEAR POLARIZATION

When the phase difference between the two waves is a multiple of π, we obtain a linearly polarized wave. When both waves are in phase (i.e., δ = 2mπ, with m = 0, 1, 2, . . .), the E vector traces a line in the first and third quadrants. When they are in antiphase (i.e., δ = (2m + 1)π), the E vector traces a line in the second and fourth quadrants.



FIGURE 8.1 Polarization states of a sinusoidal electromagnetic wave for different values ofδ: (a) elliptical; (b) linear; (c) circular; (d) elliptical.

8.3 CIRCULAR POLARIZATION

When the phase difference between the two waves is an odd multiple of π/2 [i.e.,δ = (2m + 1)π/2], the wave is elliptically polarized, but its major and minor axesare now aligned parallel to the x and y axes. However, if the amplitudes of the twowaves are equal, then it becomes a circularly polarized wave. It is right-hand circu-larly polarized if the phase difference is π/2, 5π/2, 9π/2, . . . and left-hand circularlypolarized when δ = 3π/2, 7π/2, 11π/2, . . . . Therefore, two conditions must be metto obtain circularly polarized light: the phase difference must be an odd multiple ofπ/2 and the amplitudes of the waves must be equal.

8.4 PRODUCTION OF POLARIZED LIGHT

Natural light is unpolarized, with the E vector taking all possible orientations ran-domly. We can, however, resolve the E-vector orientations into two components:one oscillating in the plane of incidence (p-polarized) and the other orthogonal to this(s-polarized). The amplitudes of these components at any instant are equal. It can alsobe seen that, in general, it is possible to obtain elliptically polarized wave by intro-ducing a phase difference between the orthogonally polarized components. However,most often a linearly polarized wave is required. Fortunately, there are a numberof ways to obtain a linearly polarized wave from natural light. These methods arebased on

1. Reflection at a dielectric interface
2. Refraction at a dielectric interface


3. Double refraction
4. Dichroism
5. Scattering

Of these, the first four methods are used for making practical polarizers—the devices that produce a linearly polarized wave from natural light.

8.4.1 REFLECTION

When a beam of light is incident on the air–dielectric interface at an angle of incidence θB, called the Brewster angle, the reflected beam is linearly polarized: the E vector oscillates in the plane perpendicular to the plane of incidence. This is also called an s-polarized beam. The transmitted beam is partially polarized. The Brewster angle θB is governed by the refractive index n of the dielectric medium. For reflection at an air–dielectric interface, it is given by the relation

tan θB = n

It may be noted that reflection at θB at an air–dielectric interface fixes the direction of vibration of the electric vector in the reflected beam, and hence is used for calibration of polarizers, among other things.
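As a quick numerical illustration of the relation tan θB = n, the refractive indices below are typical values assumed only for the example:

import numpy as np

# Brewster angle at an air-dielectric interface: tan(theta_B) = n
for name, n in [("glass", 1.50), ("water", 1.33), ("diamond", 2.42)]:
    theta_B = np.degrees(np.arctan(n))
    print(f"{name}: n = {n:.2f}, Brewster angle = {theta_B:.1f} deg")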

8.4.2 REFRACTION

As mentioned earlier, when the light is incident at the angle θB, the reflected beamis p-polarized and the transmitted beam is partially polarized. The transmitted beamhas less of a p-component, since some of this has been removed by reflection atthe angle θB. However, if several successive reflections are allowed at a number ofplane parallel plates aligned at θB, most of the p-polarized light will be removed byreflection, and the transmitted beam is then s-polarized. Such a polarizer is known asa pile-of-plates polarizer. This polarizer is often used with high-power lasers.

8.4.3 DOUBLE REFRACTION

There is a class of crystals in which an incident beam is decomposed into two linearlyorthogonally polarized beams inside the crystal. The structure of the crystal supportstwo orthogonally polarized beams. These are anisotropic crystals. In such crystals,there is a direction along which there is no decomposition. This is known as theoptic axis. Some crystals have only one optic axis, and are called uniaxial crystals;others have two optic axes, and are called biaxial crystals. From the point of viewof polarizers or other polarization components, uniaxial crystals are of importance,and hence we will discuss them in more detail. Two well-known examples of uniaxialcrystals are calcite and quartz.

Let us consider a plate of a uniaxial crystal on which a beam of light is obliquely incident. This beam is decomposed into two orthogonally polarized beams inside the plate. One beam obeys Snell's law of refraction, and is called the ordinary beam


FIGURE 8.2 A Glan–Thompson polarizer.

(o-beam); the other beam does not obey this law, and is called the extraordinary beam (e-beam). The refractive index no of the o-beam is independent of the direction of propagation, whereas that of the e-beam, ne, varies with this direction, taking extreme values in a direction orthogonal to the optic axis. If no > ne, the crystal is a negative uniaxial crystal: calcite is one such crystal (no = 1.658 and ne = 1.486 for the yellow sodium wavelength). The quartz crystal is a positive uniaxial crystal, since ne > no (no = 1.544 and ne = 1.553). The optic axis is the fast axis in calcite, and the axis orthogonal to it is the slow axis. The E vector in the o-beam oscillates in a plane that is perpendicular to the principal section of the crystal. The principal section contains the optic axis and the direction of propagation. The E vector of the e-beam lies in the principal plane.

Since there are two linearly polarized beams inside the crystal, it is easy to obtain a linearly polarized beam by eliminating one of them. Fortunately, owing to the angle-dependent refractive index of the e-beam and the availability of media of refractive index intermediate to no and ne, it is possible to remove the o-beam by total internal reflection in a calcite crystal. One of the early devices based on this principle is the Nicol prism. Its more versatile companion is the Glan–Thompson prism, which is shown in Figure 8.2. It has two halves, cemented by Canada balsam. When an unpolarized beam is incident on the polarizer, the outgoing light beam is linearly polarized. However, a linearly polarized beam will be completely blocked if the transmission axis of the polarizer is orthogonal to the polarization of the beam. Polarizers obtained from anisotropic crystals are generally small, but have very high extinction ratios. Such polarizers are not generally used for photo-elastic work, which requires large polarizers, since photo-elastic models are usually moderately large.

8.4.3.1 Phase Plates

Besides obtaining polarizers from these crystals, we can also obtain phase plates.These produce a fixed but wavelength-dependent phase difference between the twocomponents. Let us consider a plane parallel plate of a uniaxial crystal with the opticaxis lying in the surface of the plate.A linearly polarized beam is incident normally onthis plate. This beam is decomposed into two beams, which propagate with differentvelocities along the slow and fast axes of the plate. Let the refractive indices alongthese axes be no and ne, respectively. A plate of thickness d will introduce a path dif-ference |(no − ne)|d between the two waves. Therefore, any required path difference


can be introduced between the two waves by an appropriate choice of plate thickness of a given anisotropic crystal.

8.4.3.2 Quarter-Wave Plate

In a quarter-wave plate, the plate thickness d is chosen so as to introduce a path difference of λ/4 or an odd multiple thereof, that is, (2m + 1)λ/4, where m is an integer. In other words, such a plate introduces a phase difference of a quarter-wave. Therefore,

d = (2m + 1)λ / (4|no − ne|)

For a half-wave plate, the path difference introduced is λ/2 or (2m + 1)λ/2.

A quarter-wave plate is used to convert a linearly polarized beam into a circularly polarized beam. It is oriented such that the E vector of the incident beam makes an angle of 45◦ with either the fast or the slow axis of the quarter-wave plate. The components in the plate are then of equal amplitude, and the plate introduces a path difference of λ/4 between these components. The outgoing beam is thus circularly polarized. The handedness of circular polarization can be changed by rotating the plate by 90◦ about the axis of the optical beam.
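A minimal sketch of the quarter-wave-plate thickness formula, assuming the calcite indices quoted above and an illustrative wavelength of 589 nm:

# Quarter-wave plate thickness d = (2m + 1) * lam / (4 * |no - ne|)
lam = 589e-9            # wavelength in metres (assumed for the example)
n_o, n_e = 1.658, 1.486 # calcite indices quoted in the text

for m in range(3):
    d = (2 * m + 1) * lam / (4 * abs(n_o - n_e))
    print(f"m = {m}: d = {d * 1e6:.3f} micrometres")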

8.4.3.3 Half-Wave Plate

A half-wave plate, on the other hand, rotates the plane of polarization of a linearly polarized beam. For example, if a beam of linearly polarized light with its azimuth at 45◦ to the fast axis is incident on a half-wave plate, its azimuth is rotated by 90◦. In other words, the beam emerges still linearly polarized, but the orientation of the E vector is rotated by 90◦.

8.4.3.4 Compensators

Phase plates are devices that introduce fixed phase differences. In some applications, itis necessary either to introduce a path difference that could be varied or to compensatefor a path difference. This is achieved by compensators. There are two well-knowncompensators: the Babinet and the Soleil–Babinet compensators. The Babinet com-pensator consists of two wedge plates with their optic axes orthogonal to each other,as shown in Figure 8.3a. The role of the o- and e-beams changes when the beams passfrom one wedge to the other. The path difference introduced by the compensator isgiven by |no − ne|[d2( y) − d1( y)], where d2( y) and d1( y) are the thicknesses of thetwo wedge plates at any position (0, y). Obviously, the path difference varies alongthe y direction on the wedge plate.

If a constant path difference between the two beams is required, the Soleil–Babinetcompensator (Figure 8.3b) is used. This again consists of two elements: one is a plateand the other is a combination of two identical wedge plates forming a plane parallelplate. The optic axes in the plate and the wedge combination are orthogonal. Thethickness of the plate formed as a result of the wedge combination is varied by slidingone wedge over the other. Thus, the thickness difference d2 − d1 remains constantover the whole surface, where d2 and d1 are the thicknesses of the plate and the wedgecombination, respectively.


FIGURE 8.3 (a) Babinet compensator. (b) Soleil–Babinet compensator.

8.4.4 DICHROISM

There are anisotropic crystals that are characterized by different absorption coeffi-cients with respect to o- and e-beams. For example, a tourmaline crystal stronglyabsorbs an o-beam. Therefore, we can obtain an e-polarized beam when a beam ofnatural light passes through a sufficiently thick plate of this crystal. Very large polar-izers based on selective absorption are available as sheets, and are known as sheetpolarizers or Polaroids. These are the ones often used in photo-elastic work.

8.4.5 SCATTERING

Light scattered by particles is partially polarized. However, polarizers based on scattering are not used in practice.

8.5 MALUS’S LAW

Consider a linearly polarized light beam incident on a polarizer. The E vector of the beam makes an angle θ with the transmission axis of the polarizer. The beam is resolved into two components, one parallel to the transmission axis and the other perpendicular to it. The component perpendicular to the transmission axis is blocked. Therefore, the amplitude of the light transmitted by the polarizer is E(θ) = E0 cos θ. Hence, the intensity of the transmitted light is given by I(θ) = I0 cos² θ, where I0 is the intensity of the incident beam. This is a statement of Malus's law. It can be seen that a polarizer could also be used as an attenuator in the beam.
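A small numerical illustration of Malus's law; the incident intensity and the angles are arbitrary values chosen for the example:

import numpy as np

# Malus's law: I(theta) = I0 * cos^2(theta)
I0 = 1.0
for theta_deg in (0, 30, 45, 60, 90):
    I = I0 * np.cos(np.radians(theta_deg))**2
    print(f"theta = {theta_deg:2d} deg -> I/I0 = {I:.3f}")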

8.6 THE STRESS-OPTIC LAW

The phenomenon of double refraction or optical anisotropy may also occur in certainisotropic materials, such as glass and plastics, when subjected to stress or strain. Thiscondition is temporary, and disappears when the stress is removed. This phenomenonwas first observed by Brewster, and forms the basis of photoelasticity. In photoe-lasticity, models of objects are cast or fabricated from isotropic materials, and are


then subjected to stress. The stress produces physical deformations that completelyalter the initial isotropic character of the material. We can then characterize the mate-rial with three principal refractive indices, which are along the principal axes ofthe stress.

The relationships between the principal indices of refraction ni of a temporarily birefringent material and the principal stresses σi were formulated by Maxwell, and are given by

n1 − n0 = C1σ1 + C2(σ2 + σ3) (8.6a)

n2 − n0 = C1σ2 + C2(σ3 + σ1) (8.6b)

n3 − n0 = C1σ3 + C2(σ1 + σ2) (8.6c)

where n0 is the refractive index of the unstressed (isotropic) material and C1, C2 are constants depending on the material.

For materials under general triaxial stress, the stress-optic law is expressed as

n1 − n2 = C(σ1 − σ2) (8.7a)

n2 − n3 = C(σ2 − σ3) (8.7b)

n1 − n3 = C(σ1 − σ3) (8.7c)

where C = C1 − C2 is the stress-optic coefficient of the photo-elastic material.

Let us now consider a plate of isotropic material. This could be subjected to either (a) a uniaxial state of stress or (b) a biaxial state of stress. In the first case, σ2 = σ3 = 0, and hence n2 = n3. The stress-optic law takes the very simple form

n1 − n2 = Cσ1 (8.8)

The plate behaves like a uniaxial crystal. When the plate is subjected to a biaxial state of stress (i.e., σ3 = 0), the stress-optic law takes the form

n1 − n2 = C(σ1 − σ2) (8.9a)

n2 − n3 = Cσ2 (8.9b)

n1 − n3 = Cσ1 (8.9c)

The plate behaves like a biaxial crystal.

Now let us assume that a beam of linearly polarized light of wavelength λ is incident normally on a plate of photo-elastic material of thickness d. Within the plate, there are two linearly polarized beams, one vibrating in the (x, z) plane and the other in the (y, z) plane. While traversing the plate, these two waves acquire a phase difference, the value of which at the exit surface is given by

δ = (2π/λ)|n1 − n2| d = (2πC/λ)(σ1 − σ2) d    (8.10)


The phase change δ depends linearly on the difference of the principal stresses and on the thickness of the plate, and inversely on the wavelength of the light used. If the beam strikes the plate at an angle θ, the phase difference is given by

δ = (2πC/λ)(σ1 − σ2 cos² θ) d sec θ    (8.11)

In photo-elastic practice, it is more convenient to write Equation 8.10 in the form

σ1 − σ2 = m fσ / d    (8.12)

where m = δ/2π is the fringe order and fσ = λ/C is the material fringe value for a given wavelength of light. This relationship is known as the stress-optic law.

The principal stress difference σ1 − σ2 in a two-dimensional model can be determined by measuring the fringe order m, if the material fringe value fσ of the material is known or obtained by calibration. The fringe order at each point in the photo-elastic model can be measured by observing the model in a polariscope.
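A minimal sketch of Equation 8.12; the material fringe value and model thickness below are assumed illustrative numbers, not data from the text:

# Stress-optic law (Eq. 8.12): sigma1 - sigma2 = m * f_sigma / d
f_sigma = 11e3      # material fringe value, N/m per fringe (assumed)
d = 6e-3            # model thickness, m (assumed)

for m in (0.5, 1.0, 2.5, 4.0):          # observed fringe orders
    delta_sigma = m * f_sigma / d        # principal stress difference, Pa
    print(f"m = {m:.1f} -> sigma1 - sigma2 = {delta_sigma / 1e6:.2f} MPa")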

At this juncture, it should be mentioned that a plate of thickness d and refractive index n0 introduces a phase delay of k(n0 − 1)d, with k = 2π/λ. When the plate is stressed, the linearly polarized components travel with different speeds and acquire phase delays k(n1 − 1)d1 and k(n2 − 1)d1, where d1 is the thickness of the stressed plate and is related to the thickness d of the unstressed plate by

d1 = d[1 − (ν/E)(σ1 + σ2)]    (8.13)

where E and ν are the Young's modulus and Poisson's ratio of the material. This change in thickness, d1 − d, is very important in interferometry and also in holophotoelasticity.

8.7 THE STRAIN-OPTIC LAW

The stress-strain relationships for a material exhibiting perfectly linear elastic behavior under a two-dimensional state of stress are:

ε1 = (1/E)(σ1 − νσ2)    (8.14a)

ε2 = (1/E)(σ2 − νσ1)    (8.14b)

From Equations 8.14a,b, the difference between the principal stresses is

σ1 − σ2 = [E/(1 + ν)](ε1 − ε2)    (8.15)

Substituting this into the stress-optic law, we obtain

ε1 − ε2 = m fε / d    (8.16)


where fε = fσ(1 + ν)/E is the material fringe value in terms of strain. The relationship given in Equation 8.16 is known as the strain-optic law in photoelasticity.

8.8 METHODS OF ANALYSIS

The optical system most often used for stress analysis is a polariscope. It takes avariety of forms, depending on the end use. However, in general, a polariscope consistsof a light source, a device to produce polarized light called a polarizer, a model, anda second polarizer called an analyzer. In addition, it may contain a set of lenses,quarter-wave plates, and photographic or recording equipment. We will discuss theoptical systems of plane polariscopes and circular polariscopes.

8.8.1 PLANE POLARISCOPE

The plane polariscope consists of a light source, a light filter, collimating opticsto provide a collimated beam, a polarizer, an analyzer, a lens and photographicequipment as shown in Figure 8.4. The model is placed between the polarizer and theanalyzer. The polarizer and the analyzer are crossed, thus producing a dark field.

Let the transmission axis of the polarizer be along the y direction. The amplitude of the wave just behind the polarizer is given by

Ey(z; t) = E0y cos(ωt − kz)

where k = 2π/λ. The field incident on the model is also given by this expression,except that z refers to the plane of the model. Let us also assume that one of theprincipal stress directions makes an angle α with the transmission direction of thepolarizer (i.e., the y axis). The incident field just at the entrance face of the model splitsinto two components that are orthogonally polarized and vibrate in the planes of σ1and σ2. The amplitudes of these components are E0y cos α and E0y sin α, respectively.

FIGURE 8.4 Schematic of a plane polariscope.


The amplitudes of these components at the exit face of the model are

E1(z; t) = E0y cos α cos[ωt − kz − k(n1 − 1)d] = E0y cos α cos(ωt − kz + δy)    (8.17a)

E2(z; t) = E0y sin α cos[ωt − kz − k(n2 − 1)d] = E0y sin α cos(ωt − kz + δy − δ)    (8.17b)

The model introduces a phase difference

δ = (2π/λ)(n1 − n2)d = (2π/fσ)(σ1 − σ2)d

between these components. The analyzer resolves these components further into components along and perpendicular to its direction of transmission, which is along the x direction. The components along the direction of transmission are allowed through, and produce a photo-elastic pattern, while those in the orthogonal direction are blocked. The net transmitted amplitude is

E1 sin α − E2 cos α = (E0y/2) sin 2α [cos(ωt − kz + δy) − cos(ωt − kz + δy − δ)]
                    = E0y sin 2α sin(δ/2) sin(ωt − kz + δy − δ/2)    (8.18)

This also represents a wave of amplitude E0y sin 2α sin(δ/2) propagating along the z direction. The intensity of this wave is therefore

I = E0y² sin² 2α sin²(δ/2) = I0 sin² 2α sin²[π(σ1 − σ2)d/fσ]    (8.19)

The intensity of the transmitted beam is governed by α, the orientation of the prin-cipal stress direction with respect to the polarizer’s transmission axis, and the phaseretardation δ. The transmitted intensity will be zero when sin 2α sin(δ/2) = 0. In otherwords, the transmitted intensity is zero when either sin 2α = 0 or sin(δ/2) = 0. Whensin 2α = 0, the angle α = 0 or π/2. In either case, one of the principal stress directionsis aligned with the polarizer’s transmission axis. Therefore, these dark fringes givethe directions of the principal stresses at any point on the model, and are known asisoclinics or isoclinic fringes.

When sin(δ/2) = 0, δ = 2mπ, or

σ1 − σ2 = m fσ / d    (8.20)

The transmitted intensity is zero when σ1 − σ2 is an integral multiple of fσ/d. There-fore, the fringes are loci of constant σ1 − σ2, and are referred to as isochromatics.The adjacent isochromatics differ by fσ/d. When white light is used for illuminationof the model, these fringes are colored; each color corresponds to a constant value ofσ1 − σ2, hence the name isochromatics.
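The sketch below evaluates Equation 8.19 over a grid, using an arbitrary smooth stress field and assumed material constants, to show how both families of dark fringes arise (isoclinics where sin 2α = 0, isochromatics where sin(δ/2) = 0):

import numpy as np

# Dark-field plane polariscope intensity (Eq. 8.19)
f_sigma, d, I0 = 11e3, 6e-3, 1.0                 # assumed fringe value and thickness
y, x = np.mgrid[-1:1:400j, -1:1:400j]

sigma_diff = 4e6 * np.exp(-(x**2 + y**2))        # sigma1 - sigma2 in Pa (illustrative field)
alpha = 0.5 * np.arctan2(y, x)                   # principal stress direction (illustrative)

delta = 2.0 * np.pi * sigma_diff * d / f_sigma   # phase retardation
I = I0 * np.sin(2 * alpha)**2 * np.sin(delta / 2)**2

# Dark pixels are either isoclinics or isochromatics
print("fraction of nearly dark pixels:", np.mean(I < 0.01))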


It can be seen that both the isoclinics and the isochromatics appear simultaneously in a plane polariscope. It is desirable to separate these fringe patterns. A circular polariscope performs this function and provides only isochromatics.

The isoclinics are the loci of points at which the directions of the principal stressesare parallel to the transmission axes of the polarizer and the analyzer. The isoclinicpattern is independent of the magnitude of the load applied to the model and thematerial fringe value. When white light is used for illumination, the isoclinics appeardark in contrast to the isochromatics, which, with the exception of the zero-orderfringe, are colored. In regions where the directions of the principal stresses do notvary greatly from point to point, the isoclinics appear as wide diffuse bands. Theisoclinics do not intersect each other except at an isotropic point, which is a point wherethe principal stresses are equal in magnitude and sign, that is, σ1 − σ2 = 0. Further,at a point on a shear-free boundary where the stress parallel to the boundary has amaximum or a minimum value, the isoclinic intersects the boundary orthogonally.

8.8.2 CIRCULAR POLARISCOPE

It can be seen that the isoclinics appear because a linearly polarized light wave isincident on the model. They will disappear if the light incident on the model iscircularly polarized. Therefore, a circular polarizer, which is a combination of a linearpolarizer and a quarter-wave plate at 45◦ azimuth, is required. Further, to analyze thislight, we also need a circular analyzer. Therefore, a circular polariscope consists ofa light source, collimating optics, a polarizer, two quarter-wave plates, an analyzer,and recording optics, as shown in Figure 8.5.

The model is placed between the two quarter-wave plates. Since these plates aredesigned with slow and fast axes, they can be arranged in two ways, namely with axesparallel or with axes crossed. Similarly, the transmission axes of the polarizer and theanalyzer can be parallel or crossed. Therefore, there are four ways of assembling a

FIGURE 8.5 Schematic of a circular polariscope: P, polarizer; Q1, Q2, quarter-wave plates; M, model; A, analyzer; for other symbols, see text.


TABLE 8.1
Four Configurations of a Circular Polariscope

Configuration    Polarizer and Analyzer Axes    Quarter-Wave Plate Axes    Field
1                Parallel                       Parallel                   Dark
2                Parallel                       Crossed                    Bright
3                Crossed                        Parallel                   Bright
4                Crossed                        Crossed                    Dark

circular polariscope: two of these configurations give a dark field and the remaining two a bright field at the output, as shown in Table 8.1.

We now consider a configuration that has the analyzer and the polarizer crossed and the quarter-wave plates in parallel, resulting in a bright-field output.

Let the transmission axis of the polarizer be along the y direction. The field transmitted by the polarizer is given by

Ey(z; t) = E0y cos(ωt − kz)

This field is split into two components, which propagate along the fast and slow axes of the quarter-wave plate. The field at the exit face of the first quarter-wave plate is

E1(z; t) = (E0y/√2) cos[ωt − kz − k(n′ − 1)d′] = (E0y/√2) cos(ωt − kz + ψ1)    (8.21a)

E2(z; t) = (E0y/√2) cos[ωt − kz − k(n″ − 1)d′] = (E0y/√2) cos(ωt − kz + ψ1 − π/2) = (E0y/√2) sin(ωt − kz + ψ1)    (8.21b)

where the phase difference π/2, introduced by the quarter-wave plate of thickness d′, is expressed as

(2π/λ)(n″ − n′)d′ = π/2

and

ψ1 = −(2π/λ)(n′ − 1)d′

This indicates that n′ corresponds to the fast axis of the quarter-wave plate. This field is now incident on the model, and hence becomes further decomposed along the directions of the principal stresses. We assume that one of the principal stress directions makes an angle α with the polarizer's transmission axis (i.e., with the y axis). The field at the entrance face of the model as decomposed along the σ1 and σ2 directions is now given by

Eσ1(z; t) = (E0y/√2) cos(π/4 − α) cos(ωt − kz + ψ1) + (E0y/√2) sin(π/4 − α) sin(ωt − kz + ψ1)
          = (E0y/√2) cos(ωt − kz + ψ1 − π/4 + α) = (E0y/√2) cos τ    (8.22a)

Eσ2(z; t) = −(E0y/√2) sin(π/4 − α) cos(ωt − kz + ψ1) + (E0y/√2) cos(π/4 − α) sin(ωt − kz + ψ1)
          = (E0y/√2) sin(ωt − kz + ψ1 − π/4 + α) = (E0y/√2) sin τ    (8.22b)

The field amplitudes at the exit face of the model are given by

Eσ1(z; t) = (E0y/√2) cos(τ + ψ2)    (8.23a)

Eσ2(z; t) = (E0y/√2) sin(τ + ψ2 + δ)    (8.23b)

where

ψ2 = −(2π/λ)(n1 − 1)d,    δ = (2π/λ)(n1 − n2)d

We now decompose these fields along the axes of the second quarter-wave plate, which are inclined at 45◦ and −45◦ to the y axis. These are given by

E′1(z; t) = (E0y/√2) cos(π/4 − α) cos(τ + ψ2) − (E0y/√2) sin(π/4 − α) sin(τ + ψ2 + δ)    (8.24a)

E′2(z; t) = (E0y/√2) sin(π/4 − α) cos(τ + ψ2) + (E0y/√2) cos(π/4 − α) sin(τ + ψ2 + δ)    (8.24b)


We now assume that both quarter-wave plates are aligned in parallel; that is, their fast and slow axes are parallel. The field at the exit face of the second plate is

E′1(z; t) = (E0y/√2) cos(π/4 − α) cos(τ + ψ2 + ψ1) − (E0y/√2) sin(π/4 − α) sin(τ + ψ2 + δ + ψ1)    (8.25a)

E′2(z; t) = (E0y/√2) sin(π/4 − α) cos(τ + ψ2 + ψ1 − π/2) + (E0y/√2) cos(π/4 − α) sin(τ + ψ2 + δ + ψ1 − π/2)
          = (E0y/√2) sin(π/4 − α) sin(τ + ψ2 + ψ1) − (E0y/√2) cos(π/4 − α) cos(τ + ψ2 + δ + ψ1)    (8.25b)

Since the analyzer is crossed, it transmits the components along the x direction. Therefore, the amplitude of the transmitted wave is given by

E′(z; t) = (E0y/2)[cos(π/4 − α) cos(τ + ψ2 + ψ1) − sin(π/4 − α) sin(τ + ψ2 + δ + ψ1)
          − sin(π/4 − α) sin(τ + ψ2 + ψ1) + cos(π/4 − α) cos(τ + ψ2 + δ + ψ1)]    (8.26)

Equation 8.26 can be rewritten as

E′(z; t) = (E0y/2)[cos(τ + ψ2 + ψ1 + π/4 − α) + cos(τ + ψ2 + δ + ψ1 + π/4 − α)]
          = E0y cos(δ/2) cos(τ + ψ2 + δ/2 + ψ1 + π/4 − α)    (8.27)

Again, this represents a wave with an amplitude E0y cos(δ/2). Therefore, the transmitted intensity is given by

I = I0 cos2(δ/2) (8.28)

When there is no stress distribution, δ = 0, and the transmitted intensity is maximum and uniform. This therefore represents a bright-field configuration. It can be seen that when the quarter-wave plates are crossed, the field exiting from the second plate is


FIGURE 8.6 Isochromatics in (a) a dark field and (b) a bright field.

given by

E′1(z; t) = (E0y/√2) cos(π/4 − α) cos(τ + ψ2 + ψ1 − π/2) − (E0y/√2) sin(π/4 − α) sin(τ + ψ2 + δ + ψ1 − π/2)    (8.29a)

E′2(z; t) = (E0y/√2) sin(π/4 − α) cos(τ + ψ2 + ψ1) + (E0y/√2) cos(π/4 − α) sin(τ + ψ2 + δ + ψ1)    (8.29b)

The intensity transmitted by the analyzer is now given by

I = I0 sin2(δ/2) (8.30)

This represents a dark field, since the intensity is zero when there is no stress distribution in the model. Equations 8.28 and 8.30 show that the intensity of light emerging from the analyzer in a circular polariscope is a function of the difference of principal stresses σ1 − σ2 only. The isoclinics have been eliminated.
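The following Jones-calculus sketch reproduces Equations 8.28 and 8.30 numerically. It is not the book's derivation, only a check under the stated arrangements; the axis conventions, retardation, and isoclinic angle are assumptions chosen for the example:

import numpy as np

def rot(t):
    # 2x2 rotation matrix
    return np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])

def retarder(delta, theta):
    # Jones matrix of a linear retarder (retardation delta, fast axis at angle theta)
    return rot(theta) @ np.diag([1.0, np.exp(-1j * delta)]) @ rot(-theta)

def intensity(delta, alpha, qwp2_angle, analyzer_angle):
    e = np.array([0.0, 1.0])                        # polarizer along y
    e = retarder(np.pi / 2, np.pi / 4) @ e          # first quarter-wave plate at 45 deg
    e = retarder(delta, alpha) @ e                  # stressed model, retardation delta
    e = retarder(np.pi / 2, qwp2_angle) @ e         # second quarter-wave plate
    a = np.array([np.cos(analyzer_angle), np.sin(analyzer_angle)])
    return abs(a @ e) ** 2

delta, alpha = 1.3, 0.4     # arbitrary retardation and isoclinic angle
# Crossed polarizer/analyzer, parallel quarter-wave plates -> Eq. 8.28 (bright field)
print(intensity(delta, alpha, np.pi / 4, 0.0), np.cos(delta / 2)**2)
# Crossed polarizer/analyzer, crossed quarter-wave plates -> Eq. 8.30 (dark field)
print(intensity(delta, alpha, -np.pi / 4, 0.0), np.sin(delta / 2)**2)

The result is independent of alpha, which is the numerical counterpart of the statement that the isoclinics are eliminated.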

In a dark-field configuration, the dark fringes occur wherever δ = 2mπ (m = 0, 1, 2, . . .), and they correspond to the integer isochromatic fringe orders m = 0, 1, 2, 3, . . . , respectively. An example of this fringe pattern is shown in Figure 8.6a. However, for a bright-field configuration, the dark fringes are obtained when δ = (2m + 1)π. These correspond to isochromatic fringes of half order, that is, m = 1/2, 3/2, 5/2, . . . . An example of a bright-field fringe pattern is shown in Figure 8.6b.

8.9 EVALUATION PROCEDURE

Directions of principal stresses at any point in the model are determined using a plane polariscope. The polarizer and analyzer are rotated about the optical axis until the isoclinic passes through the point of interest. The inclination of the transmission axis gives the principal stress direction. The principal stress difference σ1 − σ2 in the dark field is given by

σ1 − σ2 = m fσ / d    (8.31a)

and that in the bright field by

σ1 − σ2 = (m + 1/2) fσ / d    (8.31b)

Therefore, the material fringe value fσ must be known before σ1 − σ2 can be calculated. fσ is obtained by calibration. A circular disk of the same photo-elastic material and thickness is used as a model, and is diametrically loaded. The fringe order at the center of the disk is measured, and fσ is calculated using the formula

fσ = 8F / (πDn)    (8.32)

where F is the applied force, D is the diameter of the disk, and n is the measured fringe order at the center of the disk. There are other calibration methods that use tensile loading or bending.
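A minimal calibration sketch using the relation above; the load, disk diameter, and fringe order are invented illustrative values:

import math

# Circular-disk calibration: f_sigma = 8F / (pi * D * n)
F = 500.0        # applied load, N (assumed)
D = 60e-3        # disk diameter, m (assumed)
n = 2.4          # fringe order observed at the disk centre (assumed)

f_sigma = 8 * F / (math.pi * D * n)
print(f"material fringe value f_sigma = {f_sigma / 1e3:.2f} kN/m per fringe")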

The principal stress difference σ1 − σ2 in an arbitrary model is then found by usingthis value of the material fringe value and the order m of the isochromatics. However,the order m is to be counted from m = 0, which is normally not known. It can easilybe found, if it exists in the model, by using white-light illumination, since m = 0is an achromatic fringe whereas the higher-order fringes are colored. Therefore, apolariscope is usually equipped with both white-light and monochromatic sources:the white-light source for locating the zero-order isochromatic and the monochromaticsource for counting the higher-order isochromatics. If the principal stress differenceis desired at a point where neither a bright isochromatic nor a dark isochromaticpasses, some method of measuring fractional fringe order must be implemented.There are methods to measure fractional fringe orders, which are discussed in thenext section.

8.10 MEASUREMENT OF FRACTIONAL FRINGE ORDER

The methods described here assume that the directions of the principal stresses areknown. In one method, a Babinet or a Soleil–Babinet compensator is used. The princi-pal axes of the compensator are aligned along the directions of the principal stresses.An additional phase difference can then be introduced to shift the dark isochromaticsto the point of interest, and this additional phase shift is read from the compensator.Another method makes use of a quarter-wave plate for compensation, and is knownas Tardy’s method.

8.10.1 TARDY’S METHOD

This makes use of a plane polariscope in the dark-field configuration, where the intensity distribution is given by I = I0 sin² 2α sin²(δ/2). The observation field contains both


FIGURE 8.7 Tardy's method of compensation.

the isochromatics and the isoclinics. Let us now assume that we wish to measure thefractional isochromatics order at a point P, as shown in Figure 8.7. Since this is adark-field configuration, the integral isochromatic orders correspond to dark fringes,and in order to work with the dark fringes, we need to make the region at and aroundthe point P bright. For this purpose, the polarizer–analyzer combination is rotatedby 45◦ so that their transmission axes make angles of 45◦ with the directions ofthe principal stresses. The intensity distribution is now given by I = I0 sin2(δ/2). Aquarter-wave plate is now inserted between the model and the analyzer in such a waythat its principal axes are parallel to the transmission axes of the polarizer and theanalyzer. We can now shift the isochromatics by rotation of the analyzer by an anglethat is related to the phase shift. In order to understand the working of this principle,we proceed as follows.

We express the amplitude of the wave transmitted by the polarizer as

Ep = E0p cos(ωt − kz) (8.33)

Since the polarizer–analyzer combination has its transmission axes at 45◦ to the principal stress directions in the model, this field is resolved along these directions. The components of the field at the exit face of the model are then expressed as

Eσ1 = (E0p/√2) cos[ωt − kz − k(n1 − 1)d] = (E0p/√2) cos(ωt − kz + ψ1)    (8.34a)

Eσ2 = (E0p/√2) cos[ωt − kz − k(n2 − 1)d] = (E0p/√2) cos(ωt − kz + ψ1 + δ)    (8.34b)


where δ = k(n1 − n2)d. The field transmitted by the analyzer, which is crossed to the polarizer, is

EA = Eσ2/√2 − Eσ1/√2 = (E0p/2)[cos(ωt − kz + ψ1 + δ) − cos(ωt − kz + ψ1)]
   = E0p sin(δ/2) sin(ωt − kz + ψ1 + δ/2)    (8.35)

As mentioned earlier, this describes a wave of amplitude E0p sin(δ/2), and hence the intensity of the transmitted light is given by I = I0 sin²(δ/2), as expected. We now introduce a quarter-wave plate after the model and align its axes parallel to the transmission axes of the polarizer and the analyzer. The field components along the fast and slow axes of the quarter-wave plate are

Ef = Eσ2/√2 + Eσ1/√2    (8.36a)

Es = Eσ2/√2 − Eσ1/√2    (8.36b)

The field components after passage through the quarter-wave plate can be expressed as

Ef = (E0p/2){cos[ωt − kz + ψ1 − k(n′1 − 1)d′] + cos[ωt − kz + ψ1 + δ − k(n′1 − 1)d′]}
   = (E0p/2)[cos τ + cos(τ + δ)]    (8.37a)

where τ = ωt − kz + ψ1 − k(n′1 − 1)d′ and

Es = (E0p/2){cos[ωt − kz + ψ1 + δ − k(n′2 − 1)d′] − cos[ωt − kz + ψ1 − k(n′2 − 1)d′]}
   = (E0p/2)[cos(τ + δ − π/2) − cos(τ − π/2)] = (E0p/2)[sin(τ + δ) − sin τ]    (8.37b)

where

k(n′2 − n′1)d′ = π/2

The analyzer will transmit only the Es component. However, if the analyzer is rotated by an angle χ from this position, then the component of the field along the transmission axis of the analyzer is Es cos χ + Ef sin χ. After substitution of Es and Ef, and a little trigonometry, we obtain the amplitude of the wave transmitted through the analyzer as

EA = E0p sin(χ + δ/2) cos(τ + δ/2)    (8.38)

This represents a wave with an amplitude E0p sin(χ + δ/2). Therefore, the intensity of the wave is given by I = I0 sin²(χ + δ/2). For the mth isochromatic, δ = 2mπ, and hence the intensity at this location must be I = I0 sin² χ, which is evidently zero when χ = 0. Further, the intensity at the same location will also be zero when χ = π, but then the mth isochromatic will have moved to the (m + 1)th isochromatic. In other words, a rotation of the analyzer by π shifts the isochromatics by one order. Therefore, if the analyzer is rotated by an angle χp to shift the isochromatics to the point P, then the fractional order at that point must be χp/π.
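A small sketch of the Tardy compensation arithmetic; the fringe order and analyzer rotation below are hypothetical readings, not data from the text:

# Tardy compensation: fractional order = chi_p / pi
m_lower = 3              # nearest integer-order dark fringe below the point (assumed)
chi_p_deg = 68.0         # measured analyzer rotation, degrees (assumed)

fraction = chi_p_deg / 180.0          # chi_p / pi, with chi_p expressed in radians
m_total = m_lower + fraction
print(f"fractional order = {fraction:.3f}, total fringe order = {m_total:.3f}")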

8.11 PHASE-SHIFTING

Phase-shifting is a technique for the automatic evaluation of phase maps from the intensity data, and has been described in detail in Chapter 4. However, it assumes a special significance in photoelasticity, since both beams participating in interference travel along the same path and the phase of one cannot be changed independently of the other, as is done in the other interferometric methods. We therefore present methods of phase-shifting in photoelasticity.

8.11.1 ISOCLINICS COMPUTATION

For this purpose, a bright-field plane polariscope is used. The transmitted intensity in this configuration is given by

I = I0 − I0 sin² 2α sin²(δ/2) = I0[1 − sin²(δ/2)(1 − cos² 2α)]
  = I0[1 − (1/2) sin²(δ/2)(1 − cos 4α)] = IB + V cos 4α    (8.39)

When the whole polariscope is rotated by an angle β, the transmitted intensity can be expressed as

I = IB + V cos 4(α − β) (8.40)

Both IB and V depend on the value of the isochromatic parameter, and consequently on the wavelength of light used. However, the isoclinic parameter does not depend on the wavelength. The phase of the isoclinics is obtained using a four-step algorithm with the intensity data obtained at βi = (i − 1)π/8, with i = 1, . . . , 4, and the relation

tan 4α = (I4 − I2) / (I3 − I1)    (8.41)

The phase of the isoclinics is obtained from this relation, except in regions where the modulation V is very small. Since V depends on δ, the low-modulation areas depend on


the wavelength. If monochromatic light is used, there may be several areas where the value of δ makes the modulation unusable. This problem, however, can be overcome by using a white-light source, since the low-modulation areas corresponding to a given wavelength will be high-modulation areas for another wavelength. Hence, the modulation is kept high enough for use, except at the zero-order fringe, where the modulation is obviously zero.
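A minimal sketch of the four-step isoclinic algorithm (Equations 8.40 and 8.41), applied to synthetic intensities generated with assumed values of IB, V, and α:

import numpy as np

IB, V, alpha_true = 0.6, 0.3, np.deg2rad(22.0)     # assumed illustrative values
betas = np.arange(4) * np.pi / 8                   # beta_i = (i - 1) * pi / 8

# Intensities from Eq. 8.40: I_i = IB + V cos 4(alpha - beta_i)
I1, I2, I3, I4 = IB + V * np.cos(4 * (alpha_true - betas))

# Eq. 8.41: recovers alpha (modulo the 45-deg ambiguity of 4*alpha)
alpha = 0.25 * np.arctan((I4 - I2) / (I3 - I1))
print(np.rad2deg(alpha))                           # about 22 deg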

8.11.2 COMPUTATION OF ISOCHROMATICS

As can be seen from Tardy’s method of compensation, the isochromatic fringe at apoint can be shifted by rotation of the analyzer from the crossed position: a rotation ofπ shifts the isochromatic by one order. This provides a nice method of phase-shifting,but it suffers from drawbacks: the principal axes must be known beforehand and theisochromatics are calculated for a fixed value of the isoclinic parameter. The phaseof the isochromatics can be obtained pixel by pixel by taking intensity data at fourpositions of the analyzer, say, at 0◦, 45◦, 90◦, and 135◦.

We now present a method that is free from these shortcomings. It takes intensity data at several orientations of a circular polariscope. We present in Table 8.2 the configurations, along with the corresponding expressions for the transmitted intensity.

From the eight transmitted intensity data, the phase of the isochromatic pattern is computed at each pixel using the relation

tan δ = [(I1 − I2) cos 2α + (I5 − I6) sin 2α] / {(1/2)[(I4 − I3) + (I8 − I7)]}    (8.42)

TABLE 8.2
Polariscope Configurations and the Corresponding Transmitted Intensities

No.   Polariscope Configuration     Transmitted Intensity
1     Pπ/2 Qπ/4 Qπ/4 Aπ/4           I1 = (I0/2)(1 + cos 2α sin δ)
2     Pπ/2 Qπ/4 Q−π/4 Aπ/4          I2 = (I0/2)(1 − cos 2α sin δ)
3     Pπ/2 Qπ/4 Q−π/4 A0            I3 = (I0/2)(1 − cos δ)
4     Pπ/2 Qπ/4 Qπ/4 A0             I4 = (I0/2)(1 + cos δ)
5     P−π/4 Qπ/2 Qπ/2 A0            I5 = (I0/2)(1 + sin 2α sin δ)
6     P−π/4 Qπ/2 Q0 Aπ/2            I6 = (I0/2)(1 − sin 2α sin δ)
7     P−π/4 Qπ/2 Q0 Aπ/4            I7 = (I0/2)(1 − cos δ)
8     P−π/4 Qπ/2 Qπ/2 Aπ/4          I8 = (I0/2)(1 + cos δ)


It can be seen that I3 and I4 are theoretically equal to I7 and I8, respectively. However, in practice, owing to polariscope imperfections, they may differ, and hence all four values are used in the algorithm.
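A minimal sketch of the eight-step evaluation of Equation 8.42, applied to synthetic intensities constructed from the Table 8.2 expressions with assumed values of α and δ:

import numpy as np

I0 = 1.0
alpha_true, delta_true = np.deg2rad(15.0), 2.1      # illustrative values

# Synthetic intensities from the Table 8.2 expressions
I1 = 0.5 * I0 * (1 + np.cos(2 * alpha_true) * np.sin(delta_true))
I2 = 0.5 * I0 * (1 - np.cos(2 * alpha_true) * np.sin(delta_true))
I3 = I7 = 0.5 * I0 * (1 - np.cos(delta_true))
I4 = I8 = 0.5 * I0 * (1 + np.cos(delta_true))
I5 = 0.5 * I0 * (1 + np.sin(2 * alpha_true) * np.sin(delta_true))
I6 = 0.5 * I0 * (1 - np.sin(2 * alpha_true) * np.sin(delta_true))

alpha = alpha_true                                   # isoclinic angle, e.g. from Eq. 8.41
num = (I1 - I2) * np.cos(2 * alpha) + (I5 - I6) * np.sin(2 * alpha)
den = 0.5 * ((I4 - I3) + (I8 - I7))
delta = np.arctan2(num, den)                         # wrapped isochromatic phase
print(delta, delta_true)                             # recovers 2.1 rad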

The Fourier transform method can also be used for phase evaluation. The carrier fringe pattern is introduced by a birefringent wedge plate of an appropriate wedge angle. Usually, a carrier frequency of 3–5 lines/mm is adequate. The plate is placed close to the model.

8.12 BIREFRINGENT COATING METHOD: REFLECTION POLARISCOPE

The use of a birefringent coating on the surface of an object extends photoelasticityto the measurement of surface strains on opaque objects and eliminates the need tomake models. In this method, a thin layer of a birefringent material is bonded ontothe surface of the object. Assuming the adhesion to be good, the displacements onthe surface of the object on loading are transferred to the coating, which inducesbirefringence in the latter. The strain-induced birefringence is observed in reflection.In order to obtain good reflected intensity, either the surface of the object is polishedto make it naturally reflecting or some reflective particles are added to the cementthat bonds the birefringent coating to the surface of the object. Figure 8.8 shows aschematic of a reflection polariscope used with birefringent coatings.

Such an instrument can be used either as a plane polariscope or as a circular polariscope. The isochromatics obtained with a circular polariscope give the difference between the principal stresses in the coating, that is,

(σ1 − σ2)c = m fσc / (2d)    (8.43)

where d is the thickness and fσc the fringe value of the coating. Since the light travels through nearly the same region twice, the effective thickness is 2d. The principal strains are related to the principal stresses through Hooke's law. We thus obtain the

FIGURE 8.8 Schematic of a reflection polariscope.


difference of the principal strains as

ε1 − ε2 = [(1 + νc)/Ec](σ1 − σ2)c    (8.44)

where Ec and νc are the elastic constants of the birefringent coating material. Similarly, we can express the difference of the principal strains at the surface of the object as

ε1 − ε2 = [(1 + νo)/Eo](σ1 − σ2)o    (8.45)

Assuming that the strains in the coating and at the surface of the object are the same, we have

(σ1 − σ2)o = (Eo/Ec)[(1 + νc)/(1 + νo)](σ1 − σ2)c    (8.46)

Separation of stresses in the coating is accomplished by the oblique-incidence method. Hence, the principal strains in the coating are calculated. Having obtained these, the principal stresses at the surface of the object are obtained from the following equations:

σ1o = [Eo/(1 − νo²)](ε1 + νoε2)    (8.47a)

σ2o = [Eo/(1 − νo²)](ε2 + νoε1)    (8.47b)

The analysis is based on the assumption that the strains in the coating and at the surface of the object are the same.
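A short numerical sketch combining Equations 8.43 and 8.46; every number below is an assumed illustrative value, since coating and object properties are not specified in the text:

# Birefringent coating on an opaque object (reflection polariscope)
m = 1.8                   # observed fringe order in the coating (assumed)
f_sigma_c = 7e3           # coating material fringe value, N/m (assumed)
d = 2e-3                  # coating thickness, m (assumed)
E_c, nu_c = 2.5e9, 0.38   # coating elastic constants (assumed)
E_o, nu_o = 70e9, 0.33    # object elastic constants, e.g. an aluminium part (assumed)

diff_c = m * f_sigma_c / (2 * d)                          # Eq. 8.43: (sigma1 - sigma2) in coating
diff_o = (E_o / E_c) * (1 + nu_c) / (1 + nu_o) * diff_c   # Eq. 8.46: at the object surface
print(f"coating: {diff_c / 1e6:.2f} MPa, object surface: {diff_o / 1e6:.1f} MPa")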

8.13 HOLOPHOTOELASTICITY

Separation of stresses requires that σ1, σ2, or σ1 + σ2 be known in addition to σ1 − σ2obtained from photoelasticity. The sum of the principal stresses is obtained interfero-metrically, for example by using a Mach–Zehnder interferometer. On the other hand,holophotoelasticity provides fringe patterns belonging to σ1 − σ2 and σ1 + σ2 simul-taneously, thereby effecting easy separation of stresses. However, the method requirescoherent light for illumination. Here, we use holography to record the waves transmit-ted through the model and later reconstruct this record to extract the information. Wecan indeed use the technique in two ways. In one method, we obtain only isochromat-ics, and hence the method is equivalent to a circular polariscope; it also provides theflexibility of leisurely evaluation of the fringe pattern. In the other method, both theisochromatic and isopachic fringe patterns are obtained. This method, which requirestwo exposures, is termed double-exposure holophotoelasticity, while the first methodis a single-exposure method.


8.13.1 SINGLE-EXPOSURE HOLOPHOTOELASTICITY

The experimental arrangement is shown in Figure 8.9. The model is already stressed,and hence is birefringent. The light from a laser is generally polarized with its E vectorvibrating in the vertical plane when the beam propagates in the horizontal plane. Thebeam is expanded and collimated. In the case where the laser output is randomlypolarized, a polarizer is used, followed by a quarter-wave plate oriented at 45◦. Inbrief, the model is illuminated by a circularly polarized wave. The reference wave isalso circularly polarized and of the same handedness, so that both components areinterferometrically recorded.

The components of the wave just after the model can be expressed as

Eσ1(z; t) = (E0y/√2) cos(τ + ψ2)    (8.48a)

Eσ2(z; t) = (E0y/√2) sin(τ + ψ2 + δ)    (8.48b)

Consistent with the treatment presented in Chapter 6, we write these components as

Eσ1(z; t) = Re{(E0y/√2) exp[i(τ1 + ψ2)]}    (8.49a)

Eσ2(z; t) = Re{(E0y/√2) exp[i(τ1 + ψ2 + π/2 + δ)]}    (8.49b)

with

ψ2 = −(2π/λ)(n1 − 1)d1,    δ = (2π/λ)(n1 − n2)d1

FIGURE 8.9 An experimental arrangement for single-exposure holophotoelasticity.


and τ1 does not have any time dependence—which has been ignored since a monochromatic wave is used for illumination—and d1 is the thickness of the stressed model. As usual, Re{ } denotes the real part. Similarly, the reference wave components are written as

Er1 = Re{ar exp(iφr)}    (8.50a)

Er2 = Re{ar exp[i(φr + π/2)]}    (8.50b)

Since these components are orthogonally polarized, they will interfere with the respective components—essentially, we record two holograms. The recorded intensity is given by

I = |Eσ1 + Er1|² + |Eσ2 + Er2|²    (8.51)

This record, on processing, is a hologram. Assuming linear recording and illumination with a reference beam that releases two beams, these beams interfere and generate an intensity distribution of the type

I = I′0 |exp[i(τ1 + ψ2)] + exp[i(τ1 + ψ2 + δ)]|² = I0(1 + cos δ)    (8.52)

This is the intensity distribution as obtained in a bright-field circular polariscope. Itmay be noted that a quarter-wave plate–analyzer combination is not placed behindthe model during recording. The state of polarization in the reference wave servesthe function of this assembly. If the state of polarization in the reference wave isorthogonal to that in the object wave from the model, that is, the reference wave isof opposite handedness, then the isochromatics pattern corresponding to a dark-fieldcircular polariscope will be obtained.

8.13.2 DOUBLE-EXPOSURE HOLOPHOTOELASTICITY

The experimental arrangement is similar to that shown in Figure 8.9. The model isilluminated by a circularly polarized wave, and another circularly polarized wave ofthe same handedness is used as a reference wave. The first exposure is made with themodel unstressed and the second exposure with the model stressed. During the firstexposure, the model is isotropic. However, to be consistent with our earlier treatment,we write the amplitudes of the object and reference waves recorded during the firstexposure as

E1(z; t) = Re{(E0y/√2) exp[i(τ1 + ψ0)]}    (8.53a)

E2(z; t) = Re{(E0y/√2) exp[i(τ1 + ψ0 + π/2)]}    (8.53b)

with

ψ0 = −(2π/λ)(n0 − 1)d


and

Er1 = Re{ar exp(iφr)}    (8.50a)

Er2 = Re{ar exp[i(φr + π/2)]}    (8.50b)

In the second exposure, we record two waves from the stressed model. These waves are represented as

Eσ1(z; t) = Re{(E0y/√2) exp[i(τ1 + ψ2)]}    (8.54a)

Eσ2(z; t) = Re{(E0y/√2) exp[i(τ1 + ψ2 + π/2 + δ)]}    (8.54b)

with

ψ2 = −(2π/λ)(n1 − 1)d1,    δ = (2π/λ)(n1 − n2)d1

The reference waves are the same as those used in the first exposure. As explained earlier, we record two holograms in the second exposure. The total intensity recorded can be written as

I = |E1 + Er1|² + |E2 + Er2|² + |Eσ1 + Er1|² + |Eσ2 + Er2|²    (8.55)

The amplitudes of the waves of interest, on reconstruction of the double-exposure hologram, are proportional to

2 exp[i(τ1 + ψ0)] + exp[i(τ1 + ψ2)] + exp[i(τ1 + ψ2 + δ)]    (8.56)

These three waves interfere to generate a system of isochromatic (σ1 − σ2) and isopachic (σ1 + σ2) fringe patterns. The intensity distribution in the interferogram is given by

I = I0{1 + 2 cos[(2ψ2 + δ − 2ψ0)/2] cos(δ/2) + cos²(δ/2)}    (8.57)

Before proceeding further, we need to know what 2ψ2 + δ − 2ψ0 represents. Substituting for ψ2, δ, and ψ0, we obtain

2ψ2 + δ − 2ψ0 = −(2π/λ)[2(n1 − 1)d1 − (n1 − n2)d1 − 2(n0 − 1)d]
              = −(2π/λ)[(n1 + n2)d1 − 2n0d − 2Δd]
              = −(2π/λ)[(n1 − n0)d + (n2 − n0)d + (n1 + n2)Δd − 2Δd]    (8.58)

where Δd = d1 − d is the change in thickness of the plate.


Assuming the birefringence to be small, so that n1 + n2 can be replaced by 2n0, substituting for n1 − n0 and n2 − n0, and using Equation 8.13, we obtain

2ψ2 + δ − 2ψ0 = −(2π/λ)[(C1 + C2)(σ1 + σ2)d − 2(n0 − 1)(ν/E)(σ1 + σ2)d]
              = −(2π/λ){[(C1 + C2) − 2(n0 − 1)(ν/E)](σ1 + σ2)d}
              = −(2π/λ)[(C′1 + C′2)(σ1 + σ2)d] = (2π/λ)C′(σ1 + σ2)d    (8.59)

It is thus seen that the argument of the cosine in the second term of Equation 8.57 depends only on the sum of the principal stresses, and hence generates an isopachic fringe pattern. We can rewrite Equation 8.57 as

I = I0{1 + 2 cos[(π/λ)C′(σ1 + σ2)d] cos[(π/λ)C(σ1 − σ2)d] + cos²[(π/λ)C(σ1 − σ2)d]}    (8.60)

It can be seen that the second term in Equation 8.60 contains information about the isopachics, and the second and third terms contain information about the isochromatics. Figure 8.10 shows an interferogram depicting both types of fringes. We will now examine Equation 8.60 and study the formation of the isochromatics and isopachics.

Since we are using a bright-field configuration, the dark isochromatics will occur when

(π/λ)C(σ1 − σ2)d = (2n + 1)π/2,  with n an integer    (8.61)

FIGURE 8.10 Interferogram showing both the isochromatics (broad fringes) and isopachics. Note the phase shift of π when the isopachics cross an isochromatic.


However, the intensity of the dark isochromatics is not zero but I0. The bright isochromatics occur when (π/λ)C(σ1 − σ2)d = nπ, and the intensity in the bright isochromatics is given by

I = 2I0{1 + (−1)^n cos[(π/λ)C′(σ1 + σ2)d]}    (8.62)

The intensity of the bright isochromatics is modulated by the isopachics. Let us first consider a bright isochromatic of even order, whose intensity is given by

I = 2I0{1 + cos[(π/λ)C′(σ1 + σ2)d]}    (8.63)

The intensity in the bright isochromatics will be zero whenever

(π/λ)C′(σ1 + σ2)d = (2K + 1)π,  with K = 0, 1, 2, 3, . . .    (8.64)

The integer K gives the order of the isopachic. When this condition is satisfied, the isochromatics will have zero intensity. Therefore, the isopachics modulate the bright isochromatics. Let us now see what happens to the next-order bright isochromatic. Obviously, the intensity distribution in this isochromatic will be

I = 2I0{1 − cos[(π/λ)C′(σ1 + σ2)d]}    (8.65)

FIGURE 8.11 Simplified combined isochromatic and isopachic pattern.


Substituting the condition for the K th dark isopachic in this equation, gives a maxi-mum intensity of 4I0. This shows that the K th isopachic has changed by one half-orderin going over from one bright isochromatic to the next. This interpretation is sim-ple, and remains valid when the two families of fringes are nearly perpendicular,as shown in Figure 8.11. In the other extreme case, where the isochromatics andisopachics are parallel to each other, this analysis breaks down. It is therefore advis-able to use some method to separate out these two fringe patterns. The influence ofbirefringence can be eliminated by passing the beam twice through the model anda Faraday rotator, thereby eliminating the isochromatic pattern. For the model, onecan also use materials such as poly-methyl methacrylate (PMMA), which has almostno or very little birefringence. For such a model, only the isopachic pattern will beobserved. Holophotoelasticity can also be performed in real time, which offers certainadvantages.

8.14 THREE-DIMENSIONAL PHOTOELASTICITY

Photo-elastic methods, thus far described, cannot be used for investigations of objectsunder a three-dimensional state of stress. When a polarized wave propagates throughsuch an object, assumed to be transparent, it integrates the polarization changes overthe distance of travel. The integrated optical effect is so complex that it is impossibleto analyze it or relate it to the stresses that produced it. There are, however, severalmethods for such investigations. Of these, we discuss two, namely the frozen-stressmethod and the scattered-light method. The former is restricted in its application tostatic cases of loading by external forces.

8.14.1 THE FROZEN-STRESS METHOD

The frozen-stress method is possibly the most powerful method of experimen-tal stress analysis. It takes advantage of the multiphase nature of plastics used asmodel materials. The procedure for stress freezing consists of heating the modelto a temperature slightly above the critical temperature and then cooling it slowlyto room temperature, typically at less than 2◦C/h under the desired loading con-dition. The load may be applied to the model either before or after reaching thecritical temperature. Extreme care is taken to ensure that the model is subjectedto correct loading, since spurious stresses due to bending and gravitational loadmay be induced as a result of the low rigidity of the model material at the criticaltemperature.

After the model has been cooled to room temperature, the elastic deformationresponsible for the optical anisotropy is permanently locked. The model is now cutinto thin slices for examination under the polariscope. The optical anisotropy is nor-mally not disturbed during slicing if the operation is carried out at high speedsand under coolant conditions. Of two methods of data collection and interpreta-tion from these slices—sub-slicing and oblique incidence—the latter is the morepractical.


FIGURE 8.12 Scattering of an unpolarized beam by a scatterer at P.

8.14.2 SCATTERED-LIGHT PHOTOELASTICITY

When a beam of light passes through a medium containing fine particles dispersed in the volume, part of the beam is scattered. The intensity of the scattered light, when the particles are much smaller than the wavelength of light, varies as ω⁴, where ω is the circular frequency of the light waves. This phenomenon was investigated in detail by Rayleigh and is called Rayleigh scattering. The familiar red sunset and blue sky are due to scattering from gaseous molecules in the atmosphere. Further, the light from the blue sky is partially linearly polarized. In some observation directions, the scattered light is linearly polarized.

Consider a scattering center located at a point P, as shown in Figure 8.12. Let the incident light be unpolarized light, which can be resolved into two orthogonal linearly polarized components with random phases. The incident component vibrating in the (y, z) plane, when absorbed, will set the particle (rather, the electrons in the particle) vibrating along the y direction. The re-radiated wave will have zero amplitude along the y direction. On the other hand, if the particle is oscillating along the x direction, the re-radiated wave will have zero amplitude in that direction. Thus, when the observation direction lies along the y direction in the (x, y) plane passing through the point P, the scattered wave will be plane polarized. The particle acts as a polarizer.

Let us now assume that the incident wave is linearly polarized, with the E vector vibrating in the (y, z) plane. The electrons in the particle will oscillate along the y direction. The re-radiated wave will have zero amplitude when observed along the y axis. The particle thus acts as an analyzer. This picture is equivalent to placing a polarizer and an analyzer anywhere in the model. Therefore, stress information can be obtained without freezing the stress and slicing the model. The scattered-light method therefore provides a nondestructive means of optical slicing in three dimensions.

8.15 EXAMINATION OF THE STRESSED MODEL IN SCATTERED LIGHT

8.15.1 UNPOLARIZED INCIDENT LIGHT

Let us consider a stressed model in the path of a narrow beam of unpolarized light. We assume that there are a large number of scatterers in the model. Let us now consider


the light scattered by a scatterer at point P inside the model when the observation direction is perpendicular to the incident beam. The scattered light is resolved into components along the directions of principal stresses σ2 and σ3, as shown in Figure 8.13. In traversing a distance PQ in the model, these two orthogonally polarized components acquire a phase difference. If an analyzer is placed in the observation path, the transmitted intensity will depend on the phase difference acquired. Since the incident beam is unpolarized, there is no influence of the transverse distance AP in the model. Assume that the transmitted intensity is zero for a certain location P of the scatterer; this occurs when the phase difference is a multiple of 2π. As the beam is moved to illuminate another scatterer at point P′ in the same plane along the line of sight, the transmitted intensity will undergo cyclic variation between minima and maxima, depending on the additional phase acquired when traversing the distance PP′. It is, however, assumed that the directions of the principal stresses σ2 and σ3 do not change over the distance PP′.

Let m1 and m2 be the fringe orders when the light scattered from scatterers at points P and P′ is analyzed. Then

x1(σ2 − σ3) = m1 f (8.66a)

x2(σ2 − σ3) = m2 f (8.66b)

where x1 = PQ and x2 = P′Q. Therefore, we obtain

σ2 − σ3 = (dm/dx) f (8.67)

The principal stress difference at any point along the observation direction is proportional to the gradient of the fringe order.
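As a quick numerical illustration of Equation 8.67, the sketch below estimates the principal-stress difference from fringe orders recorded at two nearby scatterers; the fringe orders, path lengths, and material fringe value are invented for the example.

```python
def stress_difference(m1, m2, x1, x2, f):
    """sigma2 - sigma3 from the fringe-order gradient dm/dx (Equation 8.67)."""
    return f * (m2 - m1) / (x2 - x1)

# Fringe orders 3.0 and 4.5 observed for path lengths 10 mm and 14 mm,
# with an assumed material fringe value f = 11 N/mm per fringe.
print(stress_difference(3.0, 4.5, 10.0, 14.0, 11.0))   # 4.125 N/mm^2
```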

FIGURE 8.13 Stressed model: illumination by an unpolarized beam and observation through an analyzer.


8.15.2 LINEARLY POLARIZED INCIDENT BEAM

We now consider another situation of a linearly polarized beam incident on the model, as shown in Figure 8.14. We assume for the sake of simplicity that the principal stress directions are along the x and y axes. The transmission axis of the polarizer makes an angle α with the x axis. The incident wave of amplitude E0 is resolved along the x and y directions. These linearly polarized components travel with different velocities in the model, and hence pick up a phase difference δ. Thus, at any plane normal to the direction of propagation, the state of polarization of the wave in general will be elliptical. It can be expressed as

Ex²/(E0² cos² α) + Ey²/(E0² sin² α) − [2ExEy/(E0² cos α sin α)] cos δ = sin² δ (8.68)

where Ex and Ey are the components along the x and y directions, respectively. The major axis of the ellipse makes an angle ψ with the x axis, where

tan 2ψ = tan 2α cos δ (8.69)

When δ = 2pπ, p = 0, ±1, ±2, ±3, ±4, . . . , the state of polarization of the wave is linear, with orientation ψ = ±α. For positive values of the integer p, the state of polarization of the wave at any plane is the same as that of the incident wave. In general, a scatterer at point P in any plane is excited by an elliptically polarized wave. In scattered-light photoelasticity, we are looking in the model normal to the direction of propagation; that is, the observation is confined to the plane of the elliptically polarized light. If the observation is made in the direction along the major axis of the ellipse, the amplitude of the re-radiated wave received by the observer will be proportional to the magnitude of the minor axis of the ellipse, and hence a minimum. On the other hand, if the observation direction coincides with the minor axis, the intensity will be a maximum.
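Equation 8.69 is convenient for tracking how the ellipse rotates as the phase difference δ accumulates along the beam. A minimal sketch, with the polarizer angle chosen arbitrarily:

```python
import numpy as np

alpha = np.radians(30.0)                    # assumed polarizer angle with the x axis
for delta_deg in (0.0, 60.0, 120.0, 180.0):
    delta = np.radians(delta_deg)
    psi = 0.5 * np.arctan(np.tan(2 * alpha) * np.cos(delta))   # Equation 8.69
    print(delta_deg, round(np.degrees(psi), 2))
# delta = 0 gives psi = alpha; delta = 180 deg gives psi = -alpha.
```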

As the beam propagates in the stressed model, the ellipse just described continues to rotate as the phase difference changes. Therefore, the observation made in the scattered light normal to the direction of the incident beam will show a variation in intensity along the length of the model in the direction of the incident beam. There is no

FIGURE 8.14 Stressed model: illumination by a linearly polarized beam.

FIGURE 8.15 (a) Scattered-light pattern of a disk under radial compression. (b) Schematic showing the directions of illumination and observation. (From L. S. Srinath, Scattered Light Photoelasticity, Tata McGraw-Hill, New Delhi, 1983. With permission.)

influence of birefringence in the model on traverse through distance PQ. Figure 8.15 shows a scattered-light stress pattern of a disk under radial compression. The directions of the incident beam and that of the scattered light are also shown in Figure 8.15.

Owing to the weak intensity of the scattered light, an intense incident beam is used. This beam is generated by a high-pressure mercury lamp with suitable collimating optics. The laser is an attractive alternative, since the beam can then be used without optics. The model is placed in a tank containing an index-matching liquid to avoid refraction and polarization changes at the model surfaces. To facilitate proper adjustments, the model in the tank is mounted on a stage capable of translational and rotational movement.

BIBLIOGRAPHY

1. M. Frocht, Photoelasticity, Wiley, New York, 1941.
2. H. T. Jessop and F. C. Harris, Photoelasticity: Principles and Methods, Dover, New York, 1949.
3. E. G. Coker and L. N. G. Filon, A Treatise on Photoelasticity, Cambridge University Press, New York, 1957.
4. A. J. Durelli and W. F. Riley, Introduction to Photomechanics, Prentice-Hall, Englewood Cliffs, NJ, 1965.
5. A. S. Holister, Experimental Stress Analysis, Cambridge University Press, New York, 1967.
6. A. Kuske and G. Robertson, Photoelastic Stress Analysis, Wiley, New York, 1974.
7. J. W. Dally and W. F. Riley, Experimental Stress Analysis, McGraw-Hill, New York, 1978.
8. H. Aben, Integrated Photoelasticity, McGraw-Hill, New York, 1979.
9. A. Lagarde (Ed.), Optical Methods in Mechanics of Solids, Sijthoff en Noordhoff, Alphen aan den Rijn, The Netherlands, 1981.
10. L. S. Srinath, Scattered Light Photoelasticity, Tata McGraw-Hill, New Delhi, 1983.
11. R. S. Sirohi and M. P. Kothiyal, Optical Components, Systems, and Measurement Techniques, Marcel Dekker, New York, 1991.
12. A. S. Kobayashi (Ed.), Handbook on Experimental Mechanics, Society for Experimental Mechanics, Bethel, CT, 1993.
13. K. Ramesh, Digital Photoelasticity, Springer-Verlag, Berlin, 2000.

ADDITIONAL READING

1. D. C. Drucker, The method of oblique incidence in photoelasticity, Proc. SESA, VIII, 1, 51–66, 1950.
2. K. Ito, New model material for photoelasticity and photoplasticity, Exp. Mech., 2, 373–376, 1962.
3. A. S. Redner, New oblique-incidence method for direct photoelastic measurements of principal strains, Exp. Mech., 3, 67–72, 1963.
4. A. Robert and E. Guillemet, New scattered light method in three dimensional photoelasticity, Br. J. Appl. Phys., 15, 567–578, 1964.
5. M. Nishida and H. Saito, A new interferometric method of two dimensional stress analysis, Proc. SESA, 21, 366–376, 1964.
6. R. O'Regan, New method of determining strain on the surface of a body with photoelastic coating, Exp. Mech., 5, 241–246, 1965.
7. J. W. Dally and E. R. Erisman, An analytic separation method for photoelasticity, Exp. Mech., 6, 493–499, 1966.
8. A. J. Robert, New methods in photoelasticity, Exp. Mech., 7, 224–232, 1967.
9. M. E. Fourney, Application of holography to photoelasticity, Exp. Mech., 8, 33–38, 1968.
10. D. Hovanesian, V. Broic, and R. L. Powell, A new experimental stress-optic method: Stress-holo-interferometry, Exp. Mech., 8, 362–368, 1968.
11. H. H. M. Chau, Holographic interferometer for isopachic stress analysis, Rev. Sci. Instrum., 39, 1789–1792, 1968.
12. W. Wetzels, Holographie als Hilfsmittel zur Isopachenbestimmung, Optik, 27, 271–272, 1968.
13. E. von Hopp and G. Wutzke, The application of holography in plane photoelasticity, Materialprufung, 11, 409–415, 1969.
14. L. S. Srinath, Analysis of scattered-light methods in photoelasticity, Exp. Mech., 9, 463–468, 1969.
15. E. von Hopp and G. Wutzke, Holographic determination of the principal stresses in plane models, Materialprufung, 12, 13–22, 1970.
16. M. E. Fourney and K. V. Mate, Further applications of holography to photoelasticity, Exp. Mech., 10, 177–186, 1970.
17. R. C. Sampson, A stress-optic law for photoelastic analysis of orthotropic composites, Exp. Mech., 10, 210–215, 1970.
18. D. Post, Photoelastic fringe multiplication for ten fold increase in sensitivity, Exp. Mech., 10, 305–312, 1970.
19. D. C. Holloway and R. H. Johnson, Advancements in holographic photoelasticity, Exp. Mech., 11, 57–63, 1971.


20. B. Chatelain, Holographic photoelasticity: Independent observation of the isochromatic and isopachic fringes for a single model subjected to only one process, Opt. Laser Technol., 5, 201–204, 1973.
21. S. Redner, New automatic polariscope system, Exp. Mech., 14, 486–491, 1974.
22. J. Ebbeni, J. Coenen, and H. Hermanne, New analysis of holophotoelastic patterns and their application, J. Strain Anal., 11, 11–17, 1976.
23. J. S. Parks and R. J. Sanford, On the role of material and optical properties in complete photoelastic analysis, Exp. Mech., 16, 441–447, 1976.
24. H. Uozato and R. Nagata, Holographic photoelasticity by using dual hologram method, Jpn J. Appl. Phys., 16, 95–100, 1977.
25. M. Arcan, Z. Hashin, and A. Voloshin, A method to produce uniform plane stress states with applications, Exp. Mech., 18, 141–145, 1978.
26. J. F. Doyle and H. T. Denyluk, Integrated photoelasticity for axisymmetric problems, Exp. Mech., 18, 215–220, 1978.
27. J. W. Dally and R. J. Sanford, Classification of stress intensity factors from isochromatic fringe patterns, Exp. Mech., 18, 441–448, 1978.
28. P. S. Theocaris and E. E. Goloutos, A unified interpretation of interferometric and holographic fringe patterns in photoelasticity, J. Strain Anal., 13, 95, 1978.
29. D. Post, Photoelasticity, Exp. Mech., 19, 176–192, 1979.
30. S. B. Mazurikiewicz and J. T. Pindera, Integrated photoelastic method—Application to photoelastic isodynes, Exp. Mech., 19, 225–234, 1979.
31. R. K. Mueller and L. R. Saackel, Complete automatic analysis of photoelastic fringes, Exp. Mech., 19, 245–251, 1979.
32. J. W. Dally, Dynamic photoelastic studies on fracture, Exp. Mech., 19, 349–361, 1979.
33. Y. Seguchi, Y. Tomita, and M. Wanatabe, Computer-aided fringe-pattern analyser—A case of photoelastic fringe, Exp. Mech., 19, 362–370, 1979.
34. R. J. Sanford, Application of the least squares method to photoelastic analysis, Exp. Mech., 20, 192–197, 1980.
35. W. F. Swinson, J. L. Turner, and W. F. Ranson, Designing with scattered light photoelasticity, Exp. Mech., 20, 397–402, 1980.
36. J. W. Dally, An introduction to dynamic photoelasticity, Exp. Mech., 20, 409–416, 1980.
37. J. Cernosek, Three dimensional photoelasticity by stress freezing, Exp. Mech., 20, 417–426, 1980.
38. R. J. Sanford, Photoelastic holography—A modern tool for stress analysis, Exp. Mech., 20, 427–436, 1980.
39. H. A. Gomide and C. P. Burger, Three dimensional strain distribution in upset rings by photo-plastic simulation, Exp. Mech., 21, 361–370, 1981.
40. D. G. Berghaus, Simplification for scattered-light photoelasticity when using the unpolarised incident beam, Exp. Mech., 22, 394–400, 1982.
41. C. P. Burger and H. A. Gomide, Three-dimensional strain in rolled slabs by photo-plastic simulation, Exp. Mech., 22, 441–447, 1982.
42. R. Prabhakaran, Photo-orthotropic-elasticity: A new technique for stress analysis of composites, Opt. Eng., 21, 679–688, 1982.
43. C. W. Smith, W. H. Peters, and G. C. Kirby, Crack-tip measurements in photoelastic models, Exp. Mech., 22, 448–453, 1982.
44. C. P. Burger and A. S. Voloshin, Half-fringe photoelasticity: A new instrument for wholefield stress analysis, ISA Trans., 22, 85–95, 1982.
45. R. B. Agarwal and L. W. Teufel, Epon 828 epoxy: A new photoelastic-model material, Exp. Mech., 23, 30–35, 1983.


46. K. A. Jacob, V. Dayal, and B. Ranganayakamma, On stress analysis of anisotropic composites through transmission optical patterns: Isochromatics and isopachics, Exp. Mech., 23, 49–54, 1983.
47. J. Komorowski and J. Stupnicki, Strain measurement with asymmetric oblique-incidence polariscope for birefringent coatings, Exp. Mech., 23, 171–176, 1983.
48. F. Zhang, S. M. Zhao, and B. Chen, A digital image processing system for photoelastic stress analysis, Proc. SPIE, 814, 806–809, 1987.
49. T. Y. Chen and C. E. Taylor, Computerised fringe analysis in photomechanics, Exp. Mech., 29, 323–329, 1989.
50. I. V. Zhavoronok, V. V. Nemchinov, S. A. Litvin, A. M. Skanavi, U. V. Pavlov, and V. S. Evsenev, Automatization of measurement and processing of experimental data in photoelasticity, Proc. SPIE, 1554A, 371–379, 1991.
51. J. T. Pindera, Scattered light optical isodynes basis for 3-D isodyne stress analysis, Proc. SPIE, 1554A, 458–471, 1991.
52. C. Quan, P. J. Bryanston-Cross, and T. R. Judge, Photoelasticity stress analysis using carrier fringes and FFT techniques, Opt. Lasers Eng., 18, 79–108, 1993.
53. A. Asundi and M. R. Sajan, Digital dynamic photoelasticity, Opt. Lasers Eng., 20, 135–140, 1994.
54. M. Wolna, Polymer materials in practical uses of photoelasticity, Opt. Eng., 34, 3427–3432, 1995.
55. S. J. Haake, E. A. Patterson, and Z. F. Wang, 2D and 3D separation of stresses using automated photoelasticity, Exp. Mech., 36, 269–276, 1996.
56. K. Ramesh and S. S. Deshmukh, Three fringes photoelasticity: Use of colour image processing hardware to automate ordering of isochromatics, Strain, 32, 79–86, 1996.
57. T. Kihara, A measurement method of scattered light photoelasticity using unpolarized light, Exp. Mech., 37, 39–44, 1997.
58. J.-C. Dupré and A. Lagarde, Photoelastic analysis of a three-dimensional specimen by optical slicing and digital image processing, Exp. Mech., 37, 393–397, 1997.
59. A. D. Nurse, Full-field automated photoelasticity by use of a three-wavelength approach to phase-stepping, Appl. Opt., 36, 5781–5786, 1997.
60. G. Petrucci, Full field automatic evaluation of isoclinic parameter in white light, Exp. Mech., 37, 420–426, 1997.
61. J. A. Quiroga and A. González-Cona, Phase measuring algorithms for extraction of information of photoelastic fringe patterns, In Fringe'97, Automatic Processing of Fringe Patterns (ed. W. Jüptner and W. Osten), 77–83, Akademie Verlag, Berlin, 1997.
62. J. A. Quiroga and A. Gonzalez-Cona, Phase measuring algorithm for extraction of isochromatics of photoelastic fringe patterns, Appl. Opt., 36, 8397–8402, 1997.
63. W. Ji and E. A. Patterson, Simulation of errors in automated photoelasticity, Exp. Mech., 38, 132–139, 1998.
64. M. J. Ekman and A. D. Nurse, Absolute determination of the isochromatic parameter by load-stepping photoelasticity, Exp. Mech., 38, 189–195, 1998.
65. E. A. Patterson and Z. W. Wang, Simultaneous observation of phase-stepped images for automated photoelasticity, J. Strain Anal. Eng. Des., 33, 1–15, 1998.
66. A. Ajovalasit, B. Barone, and G. Petrucci, A method for reducing the influence of quarter-wave plate errors in phase stepping photoelasticity, J. Strain Anal. Eng. Des., 33, 207–216, 1998.
67. K. Ramesh and S. K. Mangal, Data acquisition techniques in digital photoelasticity: A review, Opt. Lasers Eng., 30, 53–75, 1998.
68. A. Asundi, L. Tong, and C. G. Boay, Dynamic phase-shifting photoelasticity, Appl. Opt., 40, 3654–3658, 2001.


69. D. K. Tamrakar and K. Ramesh, Simulation of error in digital photoelasticity by Jones calculus, Strain, 37, 105–112, 2001.
70. S. Barone, G. Burriesci, and G. Petrucci, Computer aided photoelasticity by an optimum phase stepping method, Exp. Mech., 42, 132–139, 2002.
71. K. Ramesh and G. Lewis, Digital photoelasticity: Advanced techniques and applications, Appl. Mech. Rev., 55, B69–B71, 2002.
72. G. Calvert, J. Lesniak, and M. Honlet, Applications of modern automated photoelasticity to industrial problems, Insight, 44(4), 1–4, 2002.
73. T. Kihara, An arctangent unwrapping technique of photoelasticity using linearly polarized light at three wavelengths, Strain, 39, 65–71, 2003.
74. I. A. Jones and P. Wang, Complete fringe order determination in digital photoelasticity using fringe combination matching, Strain, 39, 121–130, 2003.
75. J. W. Hobbs, R. J. Greene, and E. A. Patterson, A novel instrument for transient photoelasticity, Exp. Mech., 43, 403–409, 2003.
76. V. Sai Prasad, K. R. Madhu, and K. Ramesh, Towards effective phase unwrapping in digital photoelasticity, Opt. Lasers Eng., 42(4), 421–436, 2004.
77. P. Pinit and E. Umezaki, Full-field determination of principal-stress directions using photoelasticity with plane polarized RGB lights, Opt. Rev., 12, 228–232, 2005.
78. S. Yoneyama and H. Kikuta, Phase-stepping photoelasticity by use of retarders with arbitrary retardation, Exp. Mech., 46, 289–296, 2006.
79. E. Patterson, P. Brailly, and M. Taroni, High frequency quantitative photoelasticity applied to jet engine components, Exp. Mech., 46, 661–668, 2006.
80. K. Ashokan and K. Ramesh, A novel approach for ambiguity removal in isochromatic phase map in digital photoelasticity, Meas. Sci. Technol., 17, 2891–2896, 2006.
81. A. Ajovalasit, G. Petrucci, and M. Scafidi, Phase shifting photoelasticity in white light, Opt. Lasers Eng., 45, 596–611, 2007.
82. P. Pinit and E. Umezaki, Digitally wholefield analysis of isoclinic parameter in photoelasticity by four-step color phase-stepping technique, Opt. Lasers Eng., 45, 795–807, 2007.


9 The Moiré Phenomenon

9.1 INTRODUCTION

When two periodic patterns are superposed, a moiré pattern is formed. This is in fact a very common phenomenon. Moiré patterns are formed by periodic structures of lines or stripes. In general, the superposed patterns should have opaque and transparent regions. Although moiré patterns can be observed with superposition of periodic objects of nearly the same periodicity, the most commonly used objects are linear gratings with equal opaque and transparent regions, often called Ronchi gratings. Sometimes, circular gratings with equal opaque and transparent regions are used. The moiré phenomenon is a mechanical effect, and the formation of moiré fringes is best explained by mechanical superposition of the gratings. However, when the period of these periodic structures is very fine, diffraction effects play a very significant role.

The fringes formed in holographic interferometry (HI) can be considered as a moiré pattern between the primary grating structures belonging to the initial and final states of the object. These primary grating structures are a result of interference between the object wave and the reference wave. Two-wavelength interferometric fringes are also moiré fringes. In general, the moiré pattern can be regarded as a mathematical solution to the interference of two periodic functions. The holo-diagram, a tool developed by Abramson to deal with several issues in HI, is a beautiful device for studying fringe formation and fringe control using the moiré phenomenon.

We can explain the moiré phenomenon either using the indicial equation or by the superposition of two sinusoidal gratings. We will follow both approaches, beginning with the indicial equation.

9.2 THE MOIRÉ FRINGE PATTERN BETWEEN TWO LINEAR GRATINGS

Let us consider a line grating with lines running parallel to the y axis and having period b (Figure 9.1a). This grating is described as follows:

x = mb (9.1)

where the various lines in the grating are identified by the index m, which takes values 0, ±1, ±2, ±3, . . . . A second grating of period a is inclined at an angle θ with the


y axis as shown in Figure 9.1b. The lines in this grating are represented by

y = x cot θ − na/sin θ (9.2)

where a/sin θ is the intercept with the y axis and the index n takes values 0, ±1, ±2, ±3, . . . , and hence identifies various lines in the grating. On superposing the two line gratings, the moiré fringe pattern formed is governed by the indicial equation

m ± n = p (9.3)

where p is another integer that takes values 0, ±1, ±2, ±3, . . . . The plus sign in the indicial equation generates a sum moiré pattern, which usually has high frequency, and the minus sign generates difference moiré fringes, which are the moiré patterns that are most frequently used and observed. We will be using the difference moiré pattern unless mentioned otherwise. We obtain the equation of the moiré fringes by eliminating m and n from Equations 9.1 through 9.3 as

y = x(b cos θ − a)/(b sin θ) + pa/sin θ (9.4)

This is shown in Figure 9.1c. Equation 9.4 can be written in a more familiar form like Equation 9.2 as

y = x cot φ + pd/sin φ (9.5)

This implies that the moiré pattern is a grating of period d that is inclined at an angle φ with the y axis, where

d = ab/(a² + b² − 2ab cos θ)^(1/2) (9.6a)

sin φ = b sin θ/(a² + b² − 2ab cos θ)^(1/2) = (d/a) sin θ (9.6b)

It is interesting to study moiré patterns for the following two situations:


FIGURE 9.1 (a) Vertical grating. (b) Inclined grating. (c) Moiré pattern as a result of superposition.


9.2.1 a ≠ b BUT θ = 0

This is a well-known situation of pitch mismatch: the moiré fringes run parallel to the grating lines. The moiré fringe spacing is given by d = ab/|a − b|. Here a − b represents the pitch mismatch. If the gratings are of nearly the same pitch, a ≈ b, then d = a²/|a − b|. Figures 9.2a and 9.2b show gratings of pitches a and b, respectively, with grating elements parallel to the y axis. The moiré fringes due to pitch mismatch are shown in Figure 9.2c. Physically, the moiré spacing is the distance over which the pitch mismatch accumulates to the pitch of the grating itself. When the gratings are of equal period, the moiré spacing is infinite. This arrangement is therefore called the infinite fringe mode.

9.2.2 a = b BUT θ ≠ 0

This is referred to as an angular mismatch between two identical gratings. It results in the formation of a moiré pattern with a period d = a/[2 sin(θ/2)] and an orientation with the y axis given by φ = π/2 + θ/2. In fact, the moiré fringes run parallel to the bisector of the larger enclosed angle between the gratings.
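Equations 9.6a and 9.6b are easy to evaluate numerically. The short sketch below uses illustrative pitches and angles (not taken from any particular experiment) and reproduces the two limiting cases just discussed; note that arcsin returns the principal value, so φ and π − φ describe the same fringe direction.

```python
import numpy as np

def moire_period_and_angle(a, b, theta):
    """Moire period d and orientation phi with the y axis (Equations 9.6a and 9.6b)."""
    root = np.sqrt(a**2 + b**2 - 2.0 * a * b * np.cos(theta))
    d = a * b / root
    phi = np.arcsin(np.clip(b * np.sin(theta) / root, -1.0, 1.0))
    return d, phi

# Pitch mismatch only (Section 9.2.1): theta = 0 gives d = ab/|a - b|
d1, _ = moire_period_and_angle(a=0.105, b=0.100, theta=0.0)
print(d1)                                   # 2.1, i.e. ab/|a - b|

# Angular mismatch only (Section 9.2.2): a = b gives d = a/[2 sin(theta/2)]
d2, phi2 = moire_period_and_angle(a=0.100, b=0.100, theta=np.radians(2.0))
print(d2, np.degrees(phi2))                 # ~2.87 and ~89 deg (fringes along the bisector)
```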

Moiré fringe formation is easily appreciated when we work in the Fourier domain. In the Fourier domain, a sinusoidal grating of finite size generates three spectra (spots): the spots lie on a line that passes through the origin and is perpendicular to the grating elements. This is due to the fact that the spectrum of a real grating (intensity grating) is centrosymmetric. We need therefore consider only one half of the spectrum. The spots lie on a line that is perpendicular to the grating elements (lines), and the distance between two consecutive spots is proportional to the frequency of the grating. The second grating also generates a spectrum, which is rotated by an angle equal to the angle between the two gratings. When two gratings are superposed, the difference Δr between the two spots (Figure 9.3) generates a grating. If Δr lies within the visibility circle (a spectrum lying in this circle will generate a grating that can be seen by the unaided eye), a moiré pattern is formed: the pitch of the moiré fringes is inversely proportional to the length Δr, and the orientation is normal to it. Obviously, when two gratings of equal period are superposed with an angular mismatch, a moiré fringe pattern is formed with fringes running parallel to the bisector of the angle between these gratings and with spacing inversely proportional to the angular mismatch.
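The same result can be obtained directly from this frequency-vector picture: each grating contributes a spatial-frequency vector, and the moiré corresponds to their difference Δr. A minimal sketch, with illustrative numbers only:

```python
import numpy as np

a, b, theta = 0.100, 0.100, np.radians(2.0)             # equal pitches, small angular mismatch

k1 = np.array([1.0 / b, 0.0])                           # grating 1: lines parallel to the y axis
k2 = np.array([np.cos(theta) / a, -np.sin(theta) / a])  # grating 2: rotated by theta
delta_r = k2 - k1                                       # difference spot of Figure 9.3

print(1.0 / np.linalg.norm(delta_r))   # moire pitch ~2.87, the same as a/[2 sin(theta/2)]
```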


FIGURE 9.2 (a) Vertical grating. (b) Another vertical grating with a different pitch. (c) Moiré pattern as a result of pitch mismatch.


FIGURE 9.3 Spectra of two inclined sinusoidal gratings.

9.3 THE MOIRÉ FRINGE PATTERN BETWEEN A LINEAR GRATING AND A CIRCULAR GRATING

As mentioned earlier, moiré fringes are produced by the superposition of periodic structures. The superposition of a linear grating with a circular grating is of some academic interest: we consider moiré formation by two such gratings. We take a linear grating of period b with its elements running parallel to the y axis. It is represented, as in Equation 9.1, by

x = mb for m = 0, ±1, ±2, ±3, . . .

A circular grating of period a is centered at the origin of the coordinate system, and hence can be represented as

x² + y² = a²n² for n = 0, ±1, ±2, ±3, . . . (9.7)

The indicial equation is m ± n = p. Therefore, the moiré pattern for m − n = p is given by

x/b − (x² + y²)^(1/2)/a = p

or

(x² + y²)/a² = p² + x²/b² − 2xp/b (9.8)

This expression can be rewritten as

x²(1/a² − 1/b²) + y²/a² + 2xp/b − p² = 0 (9.9)

Equation 9.9 represents a hyperbola, ellipse, or parabola, depending on the relative grating periods. The moiré pattern for a = b is shown in Figure 9.4, which shows parabolic moiré fringes. The Fourier transform (FT) of a circular grating lies on a circle; hence the overlap of a linear grating with a circular grating produces a moiré that has a very wide FT spectrum.


FIGURE 9.4 Moiré pattern between a linear grating and a circular grating.

9.4 MOIRÉ BETWEEN SINUSOIDAL GRATINGS

So far, we have looked into the process of moiré formation using line gratings. Such gratings are seldom used in practice. One normally uses gratings that are binary, since they are very easy to produce. However, when high-frequency gratings are used, they are generally fabricated interferometrically, and hence have a sinusoidal profile. Even binary gratings can be Fourier-decomposed into sinusoidal components, so it is instructive to see how the moiré pattern of sinusoidal gratings is formed.

Sinusoidal gratings can be recorded either on the same film or on two separate films, which are then overlapped, as was done with line gratings. We examine these two situations separately. Let us consider a grating defined by a transmittance function t1(x):

t1(x) = t0[1 − M cos(2πx/b)] = t0(1 − M cos 2πμ0x) (9.10)

where t0 is the bias transmission, M is the modulation, b is the period, and μ0 is the spatial frequency of the grating. The grating elements run parallel to the y axis. When M = 1, the grating has unit contrast and its transmission function lies between 0 and 2t0. Let us now take another sinusoidal grating that is inclined with respect to the first grating. Its transmission function t2(x, y) is expressed as

t2(x, y) = t0{1 − M cos[(2π/a)(x cos θ − y sin θ)]} = t0[1 − M cos 2π(μx − νy)] (9.11)


The grating is inclined at an angle θ with the y axis and has a period a. Its spatial frequencies along the x and y directions are μ and ν, respectively, such that μ² + ν² = 1/a². Further, the gratings are assumed to have the same modulation.

When the gratings are exposed on the same transparency, the transmission function of the positive transparency, with proper processing, can be taken as being proportional to the sum of the two transmission functions:

t(x, y) ∝ t1(x) + t2(x, y) = 2t0{1 − M cos π[(μ + μ0)x − νy] cos π[(μ − μ0)x − νy]} (9.12)

This transmission function corresponds to a grating that is modulated by a low-frequency grating (i.e., the moiré pattern). Bright moiré fringes are formed when

cos π[(μ − μ0)x − νy] = −1

or

(μ − μ0)x − νy = 2m + 1 (9.13)

where m is an integer. Similarly, dark moiré fringes are formed when

cos π[(μ − μ0)x − νy] = 1, or (μ − μ0)x − νy = 2m

The moiré fringes are inclined at an angle φ with the y axis such that

cot φ = (μ − μ0)/ν = (cos θ − a/b)/sin θ (9.14)

When a = b, we have

cot φ = (cos θ − 1)/sin θ (9.15)

The period of the moiré pattern is also obtained as

d = [(μ − μ0)² + ν²]^(−1/2) = ab/(a² + b² − 2ab cos θ)^(1/2) (9.16)

These are the same formulae that were obtained for line gratings.

When the gratings are recorded on separate films and the moiré pattern due to their overlap is observed, the transmission function is obtained by multiplication of their respective transmission functions: t(x, y) = t1(x)t2(x, y). The moiré pattern is then obtained following the procedure explained earlier.
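A quick numerical check of this behaviour, assuming two sinusoidal gratings as in Equations 9.10 and 9.11 with arbitrarily chosen pitches and modulation: adding the two transmission functions produces a fine carrier whose envelope varies at the moiré frequency |μ − μ0| predicted by Equation 9.12.

```python
import numpy as np

t0, M = 0.5, 1.0
b, a = 0.10, 0.11                      # illustrative pitches; theta = 0 for simplicity
mu0, mu = 1.0 / b, 1.0 / a

x = np.linspace(0.0, 5.0, 5000)
t_sum = t0 * (1 - M * np.cos(2 * np.pi * mu0 * x)) + t0 * (1 - M * np.cos(2 * np.pi * mu * x))

# Equivalent product form of Equation 9.12: carrier at (mu + mu0)/2, envelope at (mu - mu0)/2
t_prod = 2 * t0 * (1 - M * np.cos(np.pi * (mu + mu0) * x) * np.cos(np.pi * (mu - mu0) * x))

print(np.allclose(t_sum, t_prod))              # True: the two forms are identical
print("moire period:", 1.0 / abs(mu - mu0))    # ab/|a - b| = 1.1
```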


9.5 MOIRÉ BETWEEN REFERENCE AND DEFORMED GRATINGS

When the moiré phenomenon is used for metrology, one of the gratings is mounted on the object that is subjected to deformation. One then observes the moiré pattern formed between the deformed grating and the reference (undeformed) grating. One can also obtain a moiré pattern between two deformed gratings (i.e., when two deformed states of the object are compared). It is thus instructive to study moiré formation from deformed and undeformed gratings and learn how to extract information about the deformation from the moiré pattern.

The deformation is represented by a function f (x, y), which is assumed to be slowly varying. The deformed grating can be expressed as

x + f (x, y) = mb for m = 0, ±1, ±2, ±3, ±4, . . . (9.17a)

This grating is superposed on a reference grating represented by

x = nb for n = 0, ±1, ±2, ±3, ±4, . . . (9.17b)

This results in a moiré pattern represented by

f (x, y) = pb for p = 0, ±1, ±2, ±3, ±4, . . . (9.18)

The moiré fringes represent a contour map of f (x, y) with period b. Here, the elements in both gratings run parallel to the y axis, with the deformed grating exhibiting slow variation. We can also obtain moiré between the deformed grating and a reference grating that is oriented at an angle θ with the y axis. That is, the gratings are expressed as

x + f (x, y) = mb

and

y = x cot θ − nb/sin θ (9.19)

The moiré pattern, when the grating is inclined by a small angle such that cos θ ≈ 1 and sin θ ≈ θ, is given by

y + f (x, y)/θ = pb/θ (9.20)

This describes a moiré grating with the period and the distortion function magnified by a factor 1/θ.

When two distorted gratings are superposed, the moiré pattern gives the difference between the two distortion functions. This difference can also be magnified when the finite-fringe mode of moiré formation is used.

The derivative of the distortion function is obtained by observing the moiré pattern from the deformed grating and its displaced replica. For example, let us consider the deformed grating represented as in Equation 9.17a:

x + f (x, y) = mb


Its replica has been displaced along the x direction by Δx, and hence is represented by

x + Δx + f (x + Δx, y) = nb (9.21)

When these gratings are superposed, the moiré pattern is given by

Δx + f (x + Δx, y) − f (x, y) = pb (9.22)

In the limit when the lateral shift Δx is small, we obtain

Δx + [∂f (x, y)/∂x] Δx = pb (9.23)

The first term is a constant, and just represents a shift of the moiré pattern. The moiré pattern thus displays the partial x derivative of the distortion function f (x, y). As mentioned earlier, the moiré effect can be magnified by 1/θ using the finite-fringe mode.
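As a small numerical illustration of Equation 9.23 (the distortion function and shift below are invented for the example), the fringe order obtained from the shifted-replica moiré is essentially the x derivative of f scaled by the shift:

```python
import numpy as np

def f(x, y):
    """Illustrative slowly varying distortion function."""
    return 0.02 * x**2 + 0.01 * y

b, dx = 0.1, 0.5          # grating pitch and replica shift, same length units
x, y = 3.0, 1.0

p_exact = (dx + f(x + dx, y) - f(x, y)) / b          # Equation 9.22
p_deriv = (dx + 0.04 * x * dx) / b                   # Equation 9.23, with df/dx = 0.04 x
print(p_exact, p_deriv)                              # 5.65 vs 5.60: close for small shifts
```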

Pitch mismatch between the gratings can also be used to magnify the effect of distortion. As an example, we consider a distorted grating and a reference grating given by

x + f (x, y) = mb

and

x = na (9.24)

The moiré pattern is given by

x + [a/|a − b|] f (x, y) = [ab/|a − b|] p (9.25)

The period of the moiré pattern is ab/|a − b|, and the distortion function has been magnified by a/|a − b|.

9.6 MOIRÉ PATTERN WITH DEFORMED SINUSOIDAL GRATING

The transmission function of a deformed sinusoidal grating is given by

t1(x, y) = A0 + A1 cos{(2π/b)[x − f (x, y)]} (9.26)

where A0 and A1 are constants specifying the bias transmission and the modulation of the grating, and f (x, y) represents the distortion of the grating. The reference grating, oriented at an angle θ, is represented by

t2(x, y) = B0 + B1 cos[(2π/a)(x cos θ − y sin θ)] (9.27)

where B0 and B1 are constants.


9.6.1 MULTIPLICATIVE MOIRÉ PATTERN

Moiré fringes are formed when the angle θ is small, the deformation f (x, y) varies slowly in space, and the periods of the two gratings are nearly equal; in general, a = Nb, where N is an integer. The transmission function for multiplicative moiré is the product of the transmission functions of the gratings:

t(x, y) = t1(x, y) t2(x, y) = A0B0 + A1B0 cos{(2π/b)[x − f (x, y)]} + A0B1 cos[(2π/a)(x cos θ − y sin θ)]
+ A1B1 cos{(2π/b)[x − f (x, y)]} cos[(2π/a)(x cos θ − y sin θ)] (9.28)

For simplicity, the contrasts of the gratings are assumed to be the same (i.e., A0 = B0 and A1 = B1). Then,

t(x, y) = A0² + A0A1 cos{(2π/b)[x − f (x, y)]} + A0A1 cos[(2π/a)(x cos θ − y sin θ)]
+ (A1²/2)(cos{2π[x(1/b + cos θ/a) − y sin θ/a − f (x, y)/b]}
+ cos{2π[x(1/b − cos θ/a) + y sin θ/a − f (x, y)/b]}) (9.29)

In Equation 9.29, the first term is a DC term; the second, third, and fourth terms are the carriers; and the fifth term represents the moiré pattern. Moiré fringes are formed wherever

x(1/b − cos θ/a) + y sin θ/a − f (x, y)/b = p (9.30)

The moiré fringes are deformed straight lines as a result of f (x, y), the local deformation being f (x, y)/θ when θ is small.

9.6.2 ADDITIVE MOIRÉ PATTERN

An additive moiré pattern is obtained when the transmission functions of the individual gratings are added. Therefore, the transmission function, assuming gratings of the same modulation, is given by

t(x, y) = t1(x, y) + t2(x, y) = 2A0 + A1 cos{(2π/b)[x − f (x, y)]} + A1 cos[(2π/a)(x cos θ − y sin θ)] (9.31)


This can be written as

t(x, y) = 2A0 + 2A1 cos{π[x(1/b + cos θ/a) − y sin θ/a − f (x, y)/b]}
× cos{π[x(1/b − cos θ/a) + y sin θ/a − f (x, y)/b]} (9.32)

The second cosine term represents the moiré pattern, which modulates the carrier grating. Owing to the cosine variation of the moiré term, the phase of the carrier changes intrinsically at the crossover point. The visibility of the moiré pattern is generally poor, and imaging optics are required to resolve the carrier grating.

9.7 CONTRAST IMPROVEMENT OF THE ADDITIVE MOIRÉ PATTERN

It has been mentioned that the contrast of the additive moiré pattern is poor. In order to observe the fringe pattern, it is necessary either to resolve the carrier grating or to use nonlinear recording. Alternatively, one could use an FT processor to filter out the required information. The transparency t(x, y) is placed at the input of the FT processor, and the spectrum, which consists of the zero order and two first orders, is observed at the filter plane. Either of the first orders is filtered out and used for imaging. The intensity distribution at the output plane is proportional to

A1² cos²{π[x(1/b − cos θ/a) + y sin θ/a − f (x, y)/b]}
= (A1²/2)(1 + cos{2π[x(1/b − cos θ/a) + y sin θ/a − f (x, y)/b]}) (9.33)

This represents a unit-contrast moiré pattern.

9.8 MOIRÉ PHENOMENON FOR MEASUREMENT

The moiré phenomenon has been used extensively for the measurement of length and angle. It has also been applied to the study of object deformation and to shape measurement. We will now investigate how to employ this tool in the area of experimental solid mechanics.

9.9 MEASUREMENT OF IN-PLANE DISPLACEMENT

9.9.1 REFERENCE AND MEASUREMENT GRATINGS OF EQUAL PITCH AND ALIGNED PARALLEL TO EACH OTHER

A measurement grating is bonded to the object, and the reference grating is aligned parallel to it. In-plane displacement in the direction of the grating vector (the direction normal to the grating elements) causes the period of the bonded grating to change


from b to a. The moiré fringes formed will have a period d = ab/|a − b|. The normal strain ε as measured by the moiré method is |a − b|/a. Therefore, ε = b/d: the ratio of the grating period to that of the moiré pattern. At this juncture, it should be noted that the moiré method measures the Lagrangian (engineering) strain. However, if the deformation is small, the Lagrangian and Eulerian strains are practically equal.

The shear strain is obtained likewise: the measurement grating (the grating bonded on the object), as a result of shear, is inclined at an angle θ with the reference grating, resulting in the formation of a moiré pattern. Moiré fringes are formed wherever

x(1 − cos θ) + y sin θ = pb (9.34)

The period d of the moiré fringes for very small rotation is b/θ. In fact, the shear strain, when the rotation is small and also when the strains are small, is equal to θ. Thus, the shear strain γ is given by

γ = b/d (9.35)

The shear strain is also obtained as the ratio of the grating pitch to that of the moiré. This is valid for the homogeneous normal strain and simple shear strain.
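As a simple worked example of these pitch-ratio relations (the numbers are invented for illustration):

```python
# Strain from the moire between measurement and reference gratings (Section 9.9.1).
b = 1.0 / 25.0        # reference grating pitch: 25 lines/mm, i.e. 0.04 mm
d = 2.0               # observed moire fringe spacing, mm

epsilon = b / d       # normal strain, epsilon = b/d
gamma = b / d         # shear strain for a small rotation, Equation 9.35
print(epsilon, gamma)     # 0.02 in both cases, i.e. 2% strain
```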

9.9.2 TWO-DIMENSIONAL IN-PLANE DISPLACEMENT MEASUREMENT

The analysis of the moiré pattern becomes quite easy when it is recognized that the moiré fringes are the contours of a constant displacement component—the so-called isothetics. The zero-order moiré fringe runs through regions where the periods of the reference and measurement gratings are equal (i.e., the displacement component is zero). Similarly, the Nth-order moiré fringe runs through the regions where the displacement component is N times the period of the grating. If the reference grating has its grating vector along the x direction, the moiré fringes represent loci of constant u displacement (i.e., u = Na). If the v component of the displacement is to be measured, both the reference grating and the measurement grating are aligned to have their grating vectors along the y direction. The moiré fringes are now the loci of constant v component, and hence v = N′a, where N′ is the moiré fringe order. To obtain both u and v components simultaneously, a cross grating may be used. The u and v components are isolated by optical filtering.

Let us consider an example where the moiré pattern is obtained when both the reference and measurement gratings are aligned with their grating vectors along the x direction. The moiré pattern then represents the u displacement component. Since strain is a derivative of the displacement, absolute displacement measurement is not necessary. Hence, it is not necessary to assign the absolute fringe order to the moiré fringes; instead, the orders are assigned arbitrarily. However, it should be kept in mind that increasing orders represent tensile strain and decreasing orders compressive strain. For the sake of analysis, the moiré fringes are assigned orders 8, 7, 6, 5, 4 in the moiré pattern shown in Figure 9.5. We now wish to obtain the strain at a point A. Through A, a line parallel to the x axis is drawn, and a displacement curve (the moiré fringe order versus the position on the object) is drawn as shown in Figure 9.5: this curve represents the u displacement on the line over the object. We


FIGURE 9.5 In-plane displacement and normal strain measurement from the moiré pattern.

now draw the tangent at the point A. The slope of the tangent at this point gives the normal strain along the x direction. That is,

tan θ = ∂N/∂x = (1/a) ∂u/∂x

or

∂u/∂x = εx = a ∂N/∂x (9.36)

In fact, the strain field along this line on the object can be obtained from this curve. If another line parallel to the y axis but passing through A is drawn and the displacement


curve plotted as shown in Figure 9.5, we can obtain

∂u/∂y = a ∂N/∂y

at point A. Similarly, we can obtain

∂v/∂y = εy = a ∂N′/∂y

from the v family of displacement fringes obtained by conducting the experiment with the gratings aligned with their grating vectors parallel to the y axis. Also, we obtain the quantity

a ∂N′/∂x

The shear strain γxy is then calculated from

γxy = a(∂N/∂y + ∂N′/∂x) (9.37)

Thus, both the normal strains and the shear strain can be obtained from the moiré phenomenon. However, it is very clear from this analysis that the following two factors are very important: first, correct assignment of the fringe orders to the moiré fringes; second, accurate generation of the displacement curve. Assignment of the fringe orders is not trivial, and requires considerable effort and knowledge. The rules of topography of continuous surfaces govern the order of fringes. Adjacent fringes differ by plus or minus one fringe order, except in zones of local fringe maxima or minima, where they can have equal fringe orders. Local maxima and minima are usually distinguished by closed loops or intersecting fringes; in topography, such contours represent hills and saddle-like features, respectively. Fringes of unequal orders cannot intersect. The fringe order at any point is unique and independent of the path of the fringe count used to reach that point. A fringe can be assigned any number, since absolute displacement information is not required for strain measurement. Relative displacement can be determined using an arbitrary datum.

For generating the displacement curve accurately, one needs a large number of data points, and so methods have been found to increase the number of moiré fringes, and hence the data points, for the same loading conditions. This can be accomplished by (i) pitch mismatch, (ii) angular mismatch, or (iii) their combination.

The strains can also be obtained by shearing: the moiré patterns representing the displacement fringes are sheared to obtain the fringes corresponding to the strain.
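A minimal sketch of how Equations 9.36 and 9.37 might be applied once fringe orders have been assigned and interpolated onto a grid. The fringe-order fields N and N′ below are synthetic and the pitch is assumed; in practice they come from the u and v moiré patterns.

```python
import numpy as np

a = 0.04                                   # assumed grating pitch, mm
x = np.linspace(0.0, 10.0, 101)            # mm
y = np.linspace(0.0, 5.0, 51)              # mm
X, Y = np.meshgrid(x, y, indexing="ij")

N_u = (1.0e-3 * X + 2.0e-4 * Y) / a        # synthetic u-family fringe orders (u = a*N)
N_v = (5.0e-4 * Y + 1.0e-4 * X) / a        # synthetic v-family fringe orders (v = a*N')

eps_x = a * np.gradient(N_u, x, axis=0)                                      # Equation 9.36
eps_y = a * np.gradient(N_v, y, axis=1)
gamma_xy = a * (np.gradient(N_u, y, axis=1) + np.gradient(N_v, x, axis=0))   # Equation 9.37

print(eps_x.mean(), eps_y.mean(), gamma_xy.mean())   # 1.0e-3, 5.0e-4, 3.0e-4
```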

9.9.3 HIGH-SENSITIVITY IN-PLANE DISPLACEMENT MEASUREMENT

The sensitivity of in-plane displacement measurement depends on the period of the grating. In moiré work, low-frequency gratings are usually employed, and the analysis is based on geometrical optics. However, with fine-period gratings, although


FIGURE 9.6 High-sensitivity in-plane measurement set-up: mirrors M5, M6, M7, and M8 generate two beams in the (y, z) plane.

the sensitivity is increased, diffraction effects are prominent, and consequently quasi-monochromatic, spatially coherent light needs to be used. Indeed, with laser radiation, in-plane displacement measurements with a very high degree of accuracy can be made. A cross grating, say 1200 lines/mm in either direction, recorded holographically and aluminum-coated for reflection, is bonded onto the surface under test. The use of high-density gratings puts a limitation on the size of the object that can be examined. The grating is illuminated by four collimated beams, as shown in Figure 9.6. Two beams lie in the (x, z) plane and two in the (y, z) plane, and make angles such that first-order diffracted beams propagate along the z axis. These beams, on interference, generate fringe patterns characteristic of the u and v families of the displacement.

To understand the working of the technique, let us consider a one-dimensional grating with its grating vector along the x direction. The grating is recorded holographically by interference between two plane waves, one propagating along the z axis and the other making an angle θ with it but lying in the (x, z) plane. The period of the grating is b, or its spatial frequency μ = (sin θ)/λ (i.e., b sin θ = λ). This grating is bonded onto the surface of the object. When the grating is illuminated normally by a collimated beam, the first-order diffracted beams make angles of θ and −θ with the normal to the grating. Let us assume that the object is loaded, resulting in distortions in the grating period. Let the modified spatial frequency be μ(x). The grating is illuminated symmetrically at angles of θ and −θ by two plane waves that can be represented by R e^(2πiμx) and R e^(−2πiμx), where R is the amplitude of the wave. These plane waves will be diffracted by the grating, and the diffracted field can be expressed as

R(e^(2πiμx) + e^(−2πiμx)) × (1/2){1 + cos[2πμ(x)x]} (9.38)


Collecting terms of interest (i.e., those terms representing waves that propagate along the z axis), we obtain

(1/4)R(e^(2πi[μ−μ(x)]x) + e^(−2πi[μ−μ(x)]x)) (9.39)

The intensity distribution in the image is

I(x) = (1/4)R² cos²{2π[μ − μ(x)]x} (9.40)

This represents a moiré pattern. The moiré fringe width is

d = (1/2)[a a(x)/|a − a(x)|] = (1/2)Na (9.41)

where a(x) is the period of the deformed grating and N is the order of the moiré fringe. It should be noted that the sensitivity of the method is twice as large as would be obtained with the same grating using the conventional method. This is due to the fact that the deformed grating in the +1 and −1 diffraction orders is being compared. This increase in sensitivity by a factor of two has been explained by Post as being due to moiré formation between a grating on the test surface and a virtual grating of twice the frequency formed by interference between the two beams, thus providing a multiplicative factor of two. Use of higher diffraction orders results in increased fringe multiplication.

An arrangement to measure the u and v components of the displacement simultaneously uses a high-frequency cross grating bonded onto the surface of the object. The grating is illuminated simultaneously and symmetrically along the x and y directions to generate four beams travelling axially. The moiré fringes representing the u and v displacement components are then obtained by interference of these four beams. Figure 9.7 shows the u and v families of fringes for an electronic component that was heated to show the influence of heat (generated internally during operation) on the packaging. A cross grating with 1200 lines/mm is transferred to the face of the electronic component, and He–Ne laser light is used for illumination.

9.10 MEASUREMENT OF OUT-OF-PLANE COMPONENT AND CONTOURING

The moiré technique is well suited for the measurement of in-plane displacement components: the sensitivity is controlled by the period of the grating. Further, the techniques used for moiré formation are either pitch mismatch or angular mismatch. Therefore, moiré formation when measuring out-of-plane displacement will also be governed by these techniques. Consequently, the moiré method is not as sensitive for out-of-plane measurement as for in-plane measurement. Out-of-plane displacement and surface topography can be measured by the shadow moiré method or the projection moiré method.

We will discuss these methods in detail.


FIGURE 9.7 Interferograms showing (a) u-displacement and (b) v-displacement fringes at the face of an electronic component when energized. (Courtesy of Fu Yu, National University of Singapore.)

9.10.1 THE SHADOW MOIRÉ METHOD

As the name suggests, the moiré pattern is formed between the grating and its shadow on the object. The shadow grating will be distorted by the object topography, and hence moiré fringes from the distorted and reference gratings are observed.

9.10.1.1 Parallel Illumination and Parallel Observation

Figure 9.8 shows an arrangement for shadow moiré that relies on parallel illumination and observation; a reference grating is placed on the object. Without loss of generality, we may assume that the point A on the object surface is in contact with the grating. The grating is illuminated by a collimated beam incident at an angle α with the normal to the grating surface (i.e., the z axis). It is viewed from infinity at an angle β. It is obvious that the grating elements contained in a distance AB occupy a distance AD on the object surface. The elements on AD will form a moiré pattern with the grating elements contained in distance AC. Let us assume that AB and AC have p and q grating elements, respectively. Therefore, AB = pa, and AC = qa = pb. From geometry, BC = AC − AB = (q − p)a = Na for N = 0, ±1, ±2, . . .; N is the order of the moiré fringes.

Therefore, we may write

z(x, y)(tan α + tan β) = Na

or

z(x, y) = Na/(tan α + tan β) (9.42)

where z(x, y) is the depth as measured from the grating. This is the governing equation of this method. It can be seen that the moiré fringes are contours of equal depth measured from the grating. If the viewing is along the normal to the grating (i.e., β = 0),


FIGURE 9.8 Schematic for shadow moiré.

then z(x, y) = Na/tan α. Alternatively, the grating may be illuminated normally and viewed obliquely. Then z(x, y) = Na/tan β. The assumption that the viewing is done with a collimated beam is not always valid. However, when the object under study is small and the camera is placed sufficiently far away, this requirement is nearly met, although the method is then not suited for large objects.
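A minimal numerical illustration of Equation 9.42 for oblique collimated illumination and normal viewing (the grating pitch and angle are chosen arbitrarily):

```python
import numpy as np

a = 0.5                                      # grating pitch, mm
alpha, beta = np.radians(45.0), 0.0          # oblique illumination, normal viewing

for N in range(1, 4):                        # moire fringe orders
    z = N * a / (np.tan(alpha) + np.tan(beta))
    print(N, z)                              # each fringe is a 0.5 mm depth contour here
```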

9.10.1.2 Spherical-Wave Illumination and Camera at Finite Distance

The assumption made earlier that both the source and the camera are at infinity limits the application of the method to the study of small objects. However, when divergent illumination is used, larger objects can be studied. In general, the source and the camera may be located at different distances from the reference grating. However, a special case where the source and the camera are at equal distances from the grating is of considerable practical importance, and hence is discussed here in detail.

Let the point source S and the camera be at a distance L from the grating surface, and let their separation be P, as shown in Figure 9.9. The object is illuminated by a divergent wave from a point source. As before, the number of grating elements p contained in AB on the grating are projected onto AD on the object surface. These elements interact with the q elements in AC, thus producing a moiré pattern.

If we assume that an Nth-order moiré fringe is observed at the point D, then

BC = AC − AB = (q − p)a = Na (9.43)

But BC = z(x, y)(tan α′ + tan β′), where z(x, y) is the depth of the point D from the grating. Therefore, we obtain

z(x, y) = Na/(tan α′ + tan β′) (9.44)

Here, α′ and β′ vary over the surface of the grating or over the surface of the object. From Figure 9.9, we have tan α′ = x/[L + z(x, y)] and tan β′ = (P − x)/[L + z(x, y)].


FIGURE 9.9 Shadow moiré method with point source illumination and point receiving.

Substituting these expressions into Equation 9.44, we obtain

z(x, y) = Na/[x/(L + z) + (P − x)/(L + z)] = Na[L + z(x, y)]/P

Rearranging, we obtain

z(x, y) = NaL/(P − Na) = Na/(P/L − Na/L) (9.45)

The ratio P/L is called the base-to-height ratio. This is an extremely simple formula. In fact, it is this simplicity that makes this technique attractive over the one in which the source and the camera are placed at different distances from the grating. The distance Δz(x, y) between adjacent moiré fringes (i.e., ΔN = 1) is

Δz(x, y) = (La/P)(1 + z/L)² (9.46)

It can be seen that the fringe spacing is not constant, but increases with depth. When z(x, y)/L ≪ 1, the fringe spacing is practically constant and is given by Δz(x, y) = La/P. The moiré fringes then represent true depth contours.
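The sketch below evaluates Equations 9.45 and 9.46 for an illustrative geometry (the pitch, height, and separation are invented), showing how little the fringe spacing departs from La/P while z remains much smaller than L:

```python
import numpy as np

a, L, P = 0.5, 1000.0, 200.0          # grating pitch, source/camera height, separation (mm)

N = np.arange(1, 6)                   # moire fringe orders
z = N * a * L / (P - N * a)           # depth of each fringe, Equation 9.45
dz = (L * a / P) * (1 + z / L) ** 2   # local fringe spacing, Equation 9.46

for n, zi, dzi in zip(N, z, dz):
    print(n, round(zi, 3), round(dzi, 3))   # spacing stays close to L*a/P = 2.5 mm
```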

Although the method is applicable to the study of large objects, it is essential to correct for perspective error. This error arises because the actual coordinates (x, y) of


a point appear as (xa, ya). From Figure 9.9, the actual coordinates (x, y) and apparent coordinates (xa, ya) are related by

(xa − x)/z(x, y) = (P − xa)/L

or

x = xa − (z/L)(P − xa) = xa(1 + z/L) − zP/L (9.47a)

Similarly,

y = ya(1 + z/L) − zP/L (9.47b)

Using this arrangement, the shadow moiré method can be applied to the study (obviously limited by the size of the grating) of very large structures.
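As an illustration of the perspective correction of Equation 9.47a (the geometry and numbers are assumed, not taken from the text):

```python
L, P = 1000.0, 200.0     # camera height and source-camera separation, mm
z = 20.0                 # local depth of the surface point, mm
xa = 150.0               # apparent x coordinate seen by the camera, mm

x = xa * (1 + z / L) - z * P / L     # Equation 9.47a
print(xa, "->", x)                   # 150.0 -> 149.0 mm
```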

Since the fringe spacing is not constant over the object depth, the sensitivity of themethod may not be adequate to map out surfaces that are strongly curved or very steep.Therefore, the use of a composite grating has been suggested. The composite gratingconsists of two parallel superposed gratings with two discretely different periods.This thus provides two sensitivities.

One of the problems of the shadow moiré method is that the moiré fringes do not localize on the surface of the object. Also, the contrast of the fringes is not constant. Fringes of maximum contrast are obtained when the surface under test is nearly flat. The moiré fringes become fuzzy when objects of steep curvature are examined. As a precaution, objects with steep curvature should be placed as close to the grating as possible, and the depth of field of the photographic objective should be large enough to focus the grating and its shadow simultaneously.

9.10.2 AUTOMATIC SHAPE DETERMINATION

When the distance between the grating and the object surface changes, the moiré fringes shift. From the direction of the shift, which can differ locally, information about the local surface gradient can be obtained. In one approach to automatic surface determination, a grating placed on the object is illuminated obliquely, and a moiré pattern is recorded as shown in Figure 9.10. A number of moiré patterns are recorded by shifting the grating axially. The phase of the moiré fringes is calculated using well-known phase-shift techniques. The object topography is then calculated using the appropriate expression.

FIGURE 9.10 Automatic shape determination.
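The text does not prescribe a particular phase-shift algorithm; the sketch below uses the standard four-step algorithm on synthetic moiré frames as one possible choice. The array sizes, the synthetic phase, and the π/2 step are illustrative assumptions, and the conversion of the recovered phase to depth depends on the geometry actually used.

```python
import numpy as np

# Four synthetic moiré intensity frames with pi/2 phase steps (illustrative only)
x = np.linspace(0, 1, 256)
y = np.linspace(0, 1, 256)
X, Y = np.meshgrid(x, y)
true_phase = 6 * np.pi * (X**2 + 0.5 * Y)           # stand-in for the moiré phase
frames = [1 + 0.8 * np.cos(true_phase + k * np.pi / 2) for k in range(4)]

I1, I2, I3, I4 = frames
wrapped = np.arctan2(I4 - I2, I1 - I3)              # four-step formula, wrapped phase
unwrapped = np.unwrap(np.unwrap(wrapped, axis=0), axis=1)

print("wrapped phase range (rad):", wrapped.min(), wrapped.max())
print("unwrapped phase range (rad):", unwrapped.min(), unwrapped.max())
```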

9.10.3 PROJECTION MOIRÉ

There are several differences between the shadow moiré and projection moiré techniques. In projection moiré, a higher-frequency grating is used, and the grating is projected on the object surface and is re-imaged on a reference grating that is identical to the projection grating and aligned parallel to it. The moiré fringes are formed at the plane of the reference grating. The projection moiré method also provides a certain flexibility that is not possible in the shadow moiré method. We will examine the projection moiré method in the following two cases:

• The optical axes of the projection and imaging (observation) systems are parallel.
• The optical axes are inclined with respect to each other.

Referring to Figure 9.11, a grating G1 of period b is imaged on a reference plane, where its period is Mb, M being the magnification of the projection system. If the object surface is plane and located on this reference plane, the projected grating will have a constant period. This grating is re-imaged on the reference grating G2. Assuming the projection and imaging systems to be identical, the pitch of the imaged grating will be equal to that of G2 and the grating elements will lie parallel to each other owing to their initial alignment. Hence, no moiré pattern will be formed. However, if the surface is curved, the projection grating on the surface will have a varying period, and hence a moiré pattern is formed. We can examine this in exactly the same manner as applied to shadow moiré; that is, the grating of period Mb at the reference plane is illuminated by a spherical wave from the exit pupil of the projection lens. As described in the shadow moiré method, we can write

z(x, y) = NMb/(tan α′ + tan β′) = NMb[L + z(x, y)]/P (9.48)

Equation 9.48, on rearranging, becomes

z(x, y) = NMbL/(P − NMb) (9.49)


FIGURE 9.11 Projection moiré method.

Writing the magnification of the imaging system as M = (L − f)/f, where f is the focal length, we obtain

z(x, y) = NbL(L − f)/[fP − Nb(L − f)] (9.50)

Here, z(x, y) corresponds to the depth of the object at the point where the Nth-order moiré fringe is formed. It can be seen that the moiré fringe spacing is not constant. However, when Mb ≪ P, the moiré fringes are equally spaced.
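A short numerical check of Equation 9.50 is given below; all parameter values are assumed for illustration and simply show how the depth per fringe order behaves when Mb is small compared with P.

```python
# Illustrative evaluation of Equation 9.50 (all values assumed)
b = 0.05e-3   # period of the projection grating G1, m
f = 50e-3     # focal length of the projection/imaging lenses, m
L = 1.0       # distance from the lenses to the reference plane, m
P = 0.3       # separation of the projection and imaging lenses, m

M = (L - f) / f                         # magnification used in the text
for N in range(1, 6):
    z = N * b * L * (L - f) / (f * P - N * b * (L - f))   # Equation 9.50
    print(f"N = {N}: z = {z*1e3:.3f} mm  (Mb = {M*b*1e3:.3f} mm, P = {P*1e3:.0f} mm)")
```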

The disadvantage of this method is that only very small objects can be studied. However, large objects also can be studied when the optical axes of the projection system and the imaging system are inclined, as shown in Figure 9.12. The grating can be imaged on the reference plane where the Scheimpflug condition is satisfied. However, the magnification is not constant, and consequently the period of the grating on the reference plane varies. This problem can be solved by using a special grating whose period remains constant on projection at the reference plane.

The technique can be used for the comparison of two objects if, instead of the reference grating, a photographic record of the projected grating is made first with one object and then with the object replaced by another object. The records can be made on the same film (additive moiré) or on two separate films (multiplicative moiré). Likewise, one can study out-of-plane deformation of the object by recording gratings corresponding to two states of the object.

FIGURE 9.12 Projection moiré method with oblique incidence and observation.

As should be obvious, the grating plays a very important role: the sensitivity depends on the grating period. Therefore, several gratings may be required to maintain accuracy over a steeply curved surface. Further, for phase-shifting, the grating needs to be shifted. The use of programmed LCD panels for generating gratings of variable period, with a sinusoidal or binary transmission function, and arbitrarily inclined, makes automatic shape determination considerably more convenient.

9.10.4 LIGHT-LINE PROJECTION WITH TDI MODE OF OPERATION OF CCD CAMERA

If depth determination of an object is to be carried out over the whole 360°, then several measurements are needed. These measurements are made by changing the viewing and projection directions. An alternative approach is to unwrap the object surface. This is particularly straightforward if the object has axial symmetry. The object is placed on a turntable with its axis coinciding with the axis of rotation. A beam of light in the form of a line is projected on the surface, as shown in Figure 9.13a. An image is made on a CCD camera; the image of the projected line will be smeared out owing to rotational motion. On the other hand, if the laser is operated in the pulse mode and the camera in TDI (time-delay and integration) mode, then the object surface will be unwrapped and it will be covered with equidistant lines. The line width will be governed by the width of the projected line and the line separation by the pulse rate. If the object departs from axial symmetry, the pitch of the grating will not be uniform, but will vary, depending on the magnitude of departure from the nominal cylindrical surface.

FIGURE 9.13 TDI mode of a CCD camera: (a) line scan of an object; (b) calculation of the shift of the line due to out-of-plane displacement; (c) unwrapped image of a cylindrical object with a dent; (d) moiré pattern showing the dented region. (Courtesy of Dr. M. R. Sajan.)

Let us consider a section of the object, as shown in Figure 9.13b. It is illuminated at an angle θ with the optical axis. The dashed line is the nominal circular section and the solid line represents the surface of the object at that section. The point B on the surface is imaged at B′, which would also be the image of the point C if the object were not deformed. Further, if the object were not deformed, the point A would be imaged on the axis at A′; instead, the point B is now illuminated. Therefore, the image of the line on the object will be distorted. From the figure, we have AC = BC tan θ. The out-of-plane displacement d is BC. Thus d = AC cot θ. Figure 9.13c shows a distorted image on the unwrapped object. This distortion is seen vividly when a moiré is formed with an internally generated electronic grating, as shown in Figure 9.13d. The moiré condition, as before, is AC = Np, where p is the grating pitch. Therefore, d = Np cot θ. The out-of-plane displacement is obtained from the moiré pattern. Phase-shifting can easily be introduced by a shift of the laser diode.

If the pulse width is larger than the scanning time for one column of detectors, the bright line will be smeared. Hence, the pulse width is determined according to the selected scanning frequency. For example, for a scanning frequency of 50,000 lines/min, that is, a scanning time T = 1.2 ms/line, the suggested pulse width should be less than 10% of T, that is, 0.12 ms. Figure 9.13c was obtained with a TDI camera with 192 pixels per column (line) and has 165 lines. In the TDI mode, the charge of each pixel on a line is added to itself as it moves from one end of the detector to the other. There is thus a time delay from the moment a line of pixels is first illuminated to the instant when it is read into the buffer, and the charge of the pixel is integrated over 165 lines.
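The two working relations of this subsection, d = Np cot θ for the out-of-plane displacement and the pulse-width limit of roughly 10% of the line-scan time, can be evaluated directly. In the sketch below the pitch and illumination angle are assumed values, while the 50,000 lines/min figure is the one quoted above.

```python
import math

# Displacement from the moiré order (d = N p cot(theta)); values assumed
p = 0.5e-3                     # electronic grating pitch, m
theta = math.radians(30)       # illumination angle with the optical axis
for N in range(1, 4):
    d = N * p / math.tan(theta)
    print(f"N = {N}: out-of-plane displacement d = {d*1e3:.2f} mm")

# Pulse-width check for the scanning rate quoted in the text
lines_per_min = 50_000
T = 60.0 / lines_per_min       # scanning time per line, s (1.2 ms)
pulse_width_max = 0.1 * T      # keep the pulse below ~10% of T
print(f"T = {T*1e3:.2f} ms/line, suggested pulse width < {pulse_width_max*1e3:.2f} ms")
```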

9.10.5 COHERENT PROJECTION METHOD

So far, we have discussed projection moiré methods in which a grating is projected on the object surface, and its image is formed by another optical system on an identical grating. The illumination is incoherent and the method requires two physical gratings. However, it is possible to create a grating structure on the object by interference between two coherent waves. We shall consider two distinct situations: one where two plane (collimated) waves interfere, generating equidistant equiphasal plane surfaces in space, and the other where interference between two spherical waves is used to generate interference surfaces (hyperboloidal surfaces).

9.10.5.1 Interference between Two Collimated Beams

There are several ways of generating two inclined collimated beams. Let us consider two plane waves, as shown in Figure 9.14, which enclose a small angle Δθ between them. Assuming the amplitudes of these waves to be equal, the total amplitude u(x, y) at the (x, z) plane can be written as

u(x, y) = exp{i(2π/λ)[x sin(θ − Δθ/2) + z cos(θ − Δθ/2)]} + exp{i(2π/λ)[x sin(θ + Δθ/2) + z cos(θ + Δθ/2)]} (9.51)

where θ represents the mean direction. When such an amplitude distribution is recorded, the intensity distribution in the record is

I = I0{1 + cos[(2π/λ)(2 sin(Δθ/2))(x cos θ + z sin θ)]} = I0{1 + cos[(2π/a)(x cos θ + z sin θ)]} (9.52)

where a = λ/[2 sin(Δθ/2)]. Thus, a grating of period a is formed on the object surface. The period of the grating along the x direction is a/cos θ and that along the z direction is a/sin θ. This grating is photographed normally. If the object surface is a plane lying in the (x, y) plane, then the grating period is ax = a/cos θ, and the period in the photographic record is Max, where M is the magnification.

FIGURE 9.14 Coherent projection method.

In order to visualize the moiré formation, we note that, in collimated illumination, the fringe width along the x direction remains unchanged, irrespective of the depth of the object. However, there is an in-plane shift that varies linearly with depth. So, if the object depth changes by Δz, the x shift is Δz tan θ. If this shift is equal to the grating period ax, then a moiré fringe is formed. The moiré fringe interval is thus governed by ax = Δz tan θ = a/cos θ. This leads to Δz = a/sin θ. In other words, a moiré fringe is formed wherever the object depth changes by a/sin θ. This is thus a neat method of measuring out-of-plane displacement.

We again have the possibilities of either additive moiré or multiplicative moiré. The first exposure is made with the object in its initial state and the second exposure when it is deformed. The moiré pattern gives the out-of-plane displacement. Alternatively, a reference record can be made of the grating on a reference plane, and the second record made with dual illumination of the object surface. This way, the surface topography can be obtained. The requirement is that the grating period be resolved. Therefore, the interbeam angle should be only a few degrees: this will give a grating structure of about 100 lines/mm, which can be resolved by a good imaging system. Obviously, the size of the object that can be studied depends on the size of the collimated beams, which is limited by the collimator optics.
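A small calculation, with an assumed He–Ne wavelength and an interbeam angle of a few degrees, reproduces the grating structure of roughly 100 lines/mm mentioned above and the corresponding depth change per moiré fringe.

```python
import math

# Assumed values: He-Ne wavelength, a small interbeam angle, a mean direction
lam = 632.8e-9                      # wavelength, m
d_theta = math.radians(3.6)         # interbeam angle (a few degrees)
theta = math.radians(45)            # mean direction of the two beams

a = lam / (2 * math.sin(d_theta / 2))   # grating period formed on the object
freq = 1e-3 / a                         # lines per mm
ax = a / math.cos(theta)                # period along the x direction
dz = a / math.sin(theta)                # depth change per moiré fringe

print(f"a  = {a*1e6:.2f} um  (~{freq:.0f} lines/mm)")
print(f"ax = {ax*1e6:.2f} um, depth per fringe dz = {dz*1e6:.2f} um")
```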

9.10.5.2 Interference between Two Spherical Waves

In order to cover large objects, interference between diverging waves is utilized. One simple method employs a single-mode 50 : 50 fiber-optic beam-splitter (coupler).


The laser beam is coupled to one port of the coupler, and the beam is equally divided into two output ports. The separation between the output fibers can be adjusted to achieve the desired fringe width. Unfortunately, the surfaces of constant intensity in the interference pattern are not plane surfaces but hyperboloids. The period of the grating on the reference plane thus varies, but can be calculated given the coordinates of the point sources. This method can therefore be used with large objects.

9.10.6 MEASUREMENT OF VIBRATION AMPLITUDES

Both the shadow moiré and projection moiré methods can be used to measure the amplitude of vibration. We use the method described earlier in which two plane waves are superposed on the object surface. A record of this will be a grating on the surface of the object, with intensity distribution given by Equation 9.52. Let us now assume that the object surface executes simple harmonic motion of amplitude w(x, y) at frequency ω. We can express this as

z = z0 + w(x, y) sin ωt (9.53)

where z0 represents the static position of the surface. Owing to surface motion, the phases of the two interfering beams at the surface will change with time; consequently, the instantaneous intensity distribution in the record will be

I(x, t) = I0{1 + cos[(2π/a)[x cos θ + (z0 + w(x, y) sin ωt) sin θ]]} (9.54)

This intensity distribution is integrated over a period T much longer than the period of vibration 2π/ω; that is,

I(x) = (1/T) ∫₀^T I(x, t) dt = I0{1 + J0[(2π/a)w(x, y) sin θ] cos[(2π/a)(x cos θ + z0 sin θ)]} (9.55)

where J0(x) is the Bessel function of zero order. This again represents a grating with modulation given by the Bessel function. Information about the vibration amplitude w(x, y) can be extracted from this record by optical filtering. We assume that the record has an amplitude transmittance proportional to the intensity distribution. The record is placed at the input plane of an FT processor, and one of the first orders is allowed for image formation. The intensity distribution at the output plane is proportional to

[J0((2π/a)w(x, y) sin θ)]² (9.56)

The intensity distribution exhibits maxima and minima. The amplitude of vibration is obtained as explained for time-average HI in Chapter 6.
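As an illustration of Equation 9.56, the sketch below evaluates the J0² modulation and converts the zeros of J0 into the vibration amplitudes at which dark fringes appear; the grating period and beam direction are assumed values.

```python
import numpy as np
from scipy.special import j0, jn_zeros

# Assumed fringe parameters
a = 10e-6                     # period of the interference grating, m
theta = np.radians(45)        # mean beam direction

# Visibility of the time-averaged grating, Equation 9.56
w = np.linspace(0, 10e-6, 500)            # trial vibration amplitudes
arg = (2 * np.pi / a) * w * np.sin(theta)
visibility = j0(arg) ** 2

# Dark fringes occur where the Bessel argument equals a zero of J0
zeros = jn_zeros(0, 3)                                   # first three zeros of J0
w_dark = zeros * a / (2 * np.pi * np.sin(theta))         # corresponding amplitudes
print("vibration amplitudes at the first dark fringes (um):",
      np.round(w_dark * 1e6, 3))
```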


9.10.7 REFLECTION MOIRÉ METHOD

Unlike the shadow moiré and projection moiré methods, the reflection moiré method yields information about slope and curvature. In the problem of flexure of thin plates, the second derivatives of the deflection are related to the bending moments and twist. The deflection data obtained from shadow moiré or projection moiré need to be differentiated twice in order to obtain curvature, resulting in inaccuracies. Therefore, it is necessary to obtain slope or curvature data so that either only a single differentiation need be performed or differentiation is eliminated altogether. The reflection moiré method serves this purpose. The only disadvantage is that the specimen has to be specular.

Figure 9.15 shows a schematic of the experimental set-up for reflection moiré, also known as Ligtenberg's arrangement. The object is an edge-clamped plate. The surface of the plate is mirror-like so that a virtual image of the reference grating is formed. Consider a ray from a point D on the grating that, after reflection from point P on the object, meets a point I at the image plane, as shown in Figure 9.15. In other words, point I is the image of point D as formed by reflection on the plate surface. When the plate is deformed, point P moves to point P′. If the local slope is φ, then point I now receives the ray from point E: the image of point E is formed at I again. In reality, an image of the grating as seen on reflection from the plate is recorded. This image itself may contain distortions due to nonflatness of the plate surface. It can be shown that, owing to self-healing, these distortions do not influence moiré formation. After the plate is deformed (loaded), the image is again recorded on the same photographic film/plate. Owing to loading, the grating image will be further distorted, thus forming a moiré pattern.

Following the arguments presented earlier, a grating AE, after loading, is superposed on another grating AD, leading to moiré formation due to pitch mismatch. Let there be p grating elements in AE and q elements in AD. If the Nth-order moiré fringe appears at point I, then

ED = AE − AD = pb − qb = (p − q)b = Nb (9.57)

FIGURE 9.15 Reflection moiré: Ligtenberg's arrangement.


But

ED = EC − CD = L tan(θ + 2φ) − L tan θ = L(tan θ + tan 2φ)/(1 − tan θ tan 2φ) − L tan θ

Assuming φ to be very small, which is usually the case, we write tan 2φ ≈ 2φ. Therefore,

ED = 2Lφ(1 + tan²θ)/(1 − 2φ tan θ) = Nb (9.58)

Further, 2φ tan θ ≪ 1, and, writing the partial x slope φ = ∂w/∂x, we have

∂w/∂x = Nb/[2L(1 + tan²θ)] = Nb/[2L(1 + x²/L′²)] (9.59)

The slope depends on the value of x. In order to eliminate the dependence of ∂w/∂x on x, a curved grating is used. However, Ligtenberg's method utilizing curved gratings also suffers from several disadvantages: the large dimensions of the curved surface gratings, the necessity of using gratings of low spatial frequency to obtain a moiré pattern of good contrast, and the limitation to static loading problems and to models of relatively large flexures. The disadvantages of Ligtenberg's method are largely removed by the Rieder–Ritter arrangement. A cross-grating is used so that ∂w/∂x and ∂w/∂y are recorded simultaneously. These can be separated by optical filtering.
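Equation 9.59 converts a fringe order directly into a slope value. The following sketch, with assumed values for b, L, and L′, shows the x-dependence that the curved grating (or the Rieder–Ritter arrangement) is meant to remove.

```python
# Assumed Ligtenberg-type geometry (illustrative values only)
b  = 1e-3      # grating period, m
L  = 0.5       # plate-to-grating distance, m
Lp = 0.5       # distance L' used in tan(theta) = x/L', m

def slope_from_order(N, x):
    """Equation 9.59: partial slope dw/dx for the Nth-order moiré fringe at x."""
    return N * b / (2 * L * (1 + (x / Lp) ** 2))

for x in (0.0, 0.05, 0.10):    # field positions, m
    print(f"x = {x*1e3:5.1f} mm: dw/dx per fringe order = {slope_from_order(1, x):.3e}")
```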

Further, as with other moiré methods, the sensitivity is dependent on the grating period. It is therefore desirable to have an arrangement in which the grating period can be varied easily. Figure 9.16 shows an arrangement where a grating is projected onto a ground-glass screen. The projected grating is imaged using the Rieder–Ritter arrangement. The pitch of the grating can be varied by changing the magnification during projection of the grating.

Several other arrangements have been suggested to obtain slope and curvature. A diffraction grating is placed at or near the focal plane of the imaging lens. The diffraction grating produces sheared fields. Double exposure, before and after plate loading, is performed. In another interesting arrangement, an inclined plane parallel plate is placed near the focal plane of the imaging lens. The image is recorded in reflection: two reflected beams, which are sheared, produce a grating in the image plane. Loading of the plate deforms this grating. Therefore, double-exposure recording, before and after plate loading, results in the formation of a moiré pattern that is due to the slope variation. When two plates are used in tandem and properly aligned such that three sheared beams participate in interference, curvature fringes are obtained.

FIGURE 9.16 Reflection moiré: grating with variable pitch.

These methods are coherent methods requiring laser radiation. The permissible size of the object depends on the collimating optics, and hence these methods are limited to small to moderate-size objects. As mentioned earlier, the surface of the plate (object) need not be optically flat. The departure from flatness causes some distortions in the grating. However, the moiré phenomenon has a self-healing property, and thereby these distortions cancel out, and the moiré fringes are due solely to the deformation of the plate caused by loading.

9.11 SLOPE DETERMINATION FOR DYNAMIC EVENTS

So far, we have described methods that are applicable only to static problems. Pulse illumination is frequently used for the examination of dynamic processes. Grating images corresponding to successive states of the object are recorded. However, it is necessary to record the reference grating each time as well, in order to generate the moiré pattern. This makes these methods rather troublesome to apply to the study of dynamic events.

Several methods have been developed whereby the images of the reference grating and the deformed grating from the surface under investigation are recorded simultaneously. In one such arrangement (Figure 9.17), the beam splitter provides two paths for the imaging. One image is formed by reflection from the object surface T and the other image is formed by reflection from a mirror surface M. Therefore, both images of the grating can be recorded in a single exposure (pulse illumination), and hence an additive moiré pattern can be observed for the various states of the object. This arrangement, however, does not exploit the healing property of the moiré phenomenon. Since, in general, the object surface in its initial state and the mirror surface are not identical, the grating formed by reflection from the object surface will carry some distortions, which will show up as moiré fringes even for the unloaded state of the object. In other words, an initial fringe-free field is not obtained. This problem is solved in the arrangement shown in Figure 9.18. The grating G1 is a replica of the image formed by reflection from the object surface. Therefore, the images of gratings G1 and G2 will be identical, and an initial fringe-free field is obtained. However, for each object surface, a new grating G1 has to be produced and aligned.

FIGURE 9.17 Arrangement for slope measurement.

FIGURE 9.18 Modified arrangement for slope measurement.

9.12 CURVATURE DETERMINATION FOR DYNAMIC EVENTS

The arrangements shown in Figures 9.17 and 9.18 can be modified to provide for additional shear by incorporating a Mach–Zehnder interferometer. Figure 9.19 shows this arrangement. The images are sheared by the tilt given to one of the mirrors.


FIGURE 9.19 Arrangement for curvature measurement.

9.13 SURFACE TOPOGRAPHY WITH REFLECTION MOIRÉ METHOD

The methods described for slope and curvature measurements exploit the imaging of a reference grating when the separation between the grating and the object surface is large. However, when the reference grating is placed very close to the object surface, the moiré pattern is formed between the reference grating and its image. Under certain conditions, the moiré fringes are loci of constant depth. To explain the formation of the moiré pattern, we consider the arrangement shown in Figure 9.20.

FIGURE 9.20 Surface topography via reflection moiré.

Grating G1 (which essentially is an image of G2) is viewed through grating G2. Let the gratings G1 and G2 be described by

x = mb

x = na



respectively, where m and n are integers and the gratings have different periods b and a, respectively. The camera is located at the position (xc, zc). The x coordinate of any point projected from grating G1 to grating G2 is obtained from

(xc − x)/zc = (xc − mb)/(zc + d)

or

x = [bzc/(zc + d)]m + [d/(zc + d)]xc (9.60)

Using the indicial equation m − n = p, we obtain an equation for the moiré pattern:

x = [abzc/((a − b)zc + ad)]p + [ad/((a − b)zc + ad)]xc (9.61)

In general, d is a function of x and y: d(x, y). Let us now consider another experimental arrangement (Figure 9.21): a grating is placed on a reflecting surface, and an image of the grating is formed by reflection. Owing to imaging, both the period and the separation of the image grating will vary spatially. The moiré pattern is observed as a result of interaction of the image grating with the reference grating.

FIGURE 9.21 Reflection moiré for contouring.

We now assume that the reflecting surface departs very little from the plane surface and that the reference grating is in contact with the surface. Then, it is safe to assume that the periods of both the reference and the image gratings are equal: a = b. The moiré pattern is then described by

x = [azc/d(x, y)]p + xc (9.62)

Since d(x, y) is the separation between the two gratings, the depth z(x, y) of the surface from the reference grating will be half of it: z(x, y) = d(x, y)/2. Therefore, we write

z(x, y) = azc p/[2(x − xc)] (9.63)

Further, if the area to be examined is very small and the illumination source is far away obliquely, then

z(x, y) = −(azc/2xc)p (9.64)

The moiré fringes now represent true depth contours.

We now apply this technique to study moiré pattern formation when the grating is placed on a spherical surface of radius of curvature R. Let us assume that the reference grating is placed at z = 0. The equation of the spherical surface is

x² + y² + z² + 2zR = 0 (9.65)

The distance z(x, y) between the grating and the surface for large R is

z = −(x² + y²)/(2R)

Substituting for z(x, y), the equation of the moiré pattern is now given by

x² + y² = (azcR/xc)p (9.66)

The moiré fringes are circles of radii √[(azcR/xc)p]. The radii vary as √p, as in a Fresnel zone plate. Figures 9.22a and 9.22b show photographs of such moiré patterns taken by placing gratings of 200 lines/inch and 500 lines/inch, respectively, on a concave mirror surface.

FIGURE 9.22 Contour maps of a spherical surface: (a) with a grating of 200 lines/inch; (b) with a grating of 500 lines/inch.
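A brief numerical sketch of Equation 9.66 shows the zone-plate-like growth of the fringe radii with √p; the grating period corresponds to 200 lines/inch, while the distances zc, xc and the radius R are assumed values.

```python
import numpy as np

# Assumed values for the zone-plate-like contour pattern of Equation 9.66
a  = 25.4e-3 / 200      # grating period for 200 lines/inch, m
zc = 0.5                # camera distance from the grating plane, m
xc = 0.1                # lateral camera offset, m
R  = 2.0                # radius of curvature of the spherical surface, m

p = np.arange(1, 6)                     # moiré fringe numbers
radii = np.sqrt(a * zc * R / xc * p)    # fringe radii grow as sqrt(p)
print("fringe radii (mm):", np.round(radii * 1e3, 2))
```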

9.14 TALBOT PHENOMENON

When a periodic object is illuminated by a coherent monochromatic beam, its image is formed at specific planes called the self-image planes or Talbot planes. The effect was first observed by Talbot in 1836 and its theory was worked out by Rayleigh in 1881. Self-imaging is due to diffraction and can be observed with periodic objects that satisfy the Montgomery condition. A linear (1D) grating is one such object. For a 1D grating of spatial frequency μ, illuminated by a collimated beam of wavelength λ, the self-image planes are equidistant and are located at distances N/(μ²λ) from the object, where N = 0, 1, 2, 3, . . . gives the order of the Talbot planes. In other words, the transverse periodicity of the object manifests itself as longitudinal periodicity. The imaging is called self-imaging because no imaging devices are used. A two-dimensional (cross) grating with the same spatial frequency μ in both directions also self-images at the planes located at N/(μ²λ) from the grating.

9.14.1 TALBOT EFFECT IN COLLIMATED ILLUMINATION

To explain Talbot imaging, let us consider a 1D grating whose transmittance is given by

t(x) = (1/2)(1 + cos 2πμx)

This grating is placed at the plane z = 0 and is illuminated by a collimated beam of amplitude A, as shown in Figure 9.23. The amplitude of the wave just behind the grating (z = 0 plane) will be given by

u(x, z = 0) = (1/2)A(1 + cos 2πμx) (9.67)

Using the Fresnel diffraction approach, the amplitude at any plane z is obtained as

u(x1, z) = (A/2) exp(ikz)[1 + exp(−iπμ²λz) cos 2πμx1] (9.68)

FIGURE 9.23 Formation of Talbot images in collimated illumination.


The amplitude distribution at any plane z will be identical to the grating transmittance function, except for a constant multiplicative phase factor, if exp(−iπμ²λz) = 1. This condition is satisfied when

πμ²λz = 2Nπ for N = 0, 1, 2, 3, . . . (9.69)

The planes at which this condition is satisfied are the Talbot planes. However, when N takes half-integer values, we still obtain the transmittance function of a sinusoidal grating, but it is phase-shifted by π. Thus, the Talbot planes are separated by zT = 1/(μ²λ). In the case of collimated illumination, the Talbot planes are equispaced.

9.14.2 CUT-OFF DISTANCE

The Talbot images are formed as a result of constructive interference among the diffracted waves at successive Talbot planes. For an infinite grating illuminated by an infinitely large beam, the various diffracted waves will continue to overlap, irrespective of distance, thus producing an infinite number of Talbot images. In a practical situation, both the grating and the beam are of finite dimensions. Therefore, the diffracted waves will cease to overlap after a certain distance; consequently, no Talbot images are formed after this distance. Let us consider a grating of linear dimension D and spatial frequency μ illuminated by a beam of size D. The cut-off distance zc is defined as the distance from the grating over which the first-order diffracted beam deviates by half the beam size. This is given by zc = D/(2μλ).
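For orientation, the sketch below evaluates the Talbot plane separation zT = 1/(μ²λ) and the cut-off distance zc = D/(2μλ); the wavelength, grating frequency, and grating size are assumed values.

```python
# Assumed grating/illumination parameters
lam = 632.8e-9          # wavelength, m
mu  = 10e3              # grating spatial frequency, lines/m (10 lines/mm)
D   = 25e-3             # grating (and beam) size, m

zT = 1.0 / (mu**2 * lam)        # separation of Talbot planes (collimated case)
zc = D / (2 * mu * lam)         # cut-off distance for self-imaging
print(f"Talbot plane separation zT = {zT*1e3:.1f} mm")
print(f"cut-off distance zc = {zc:.2f} m  (~{zc/zT:.0f} Talbot planes)")
```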

9.14.3 TALBOT EFFECT IN NONCOLLIMATED ILLUMINATION

Let us assume that a point source is placed at (0, 0) and that the grating is at the plane z = R. The grating is illuminated by a divergent spherical wave from the point source. The amplitude of the wave just behind the grating is

u(x, R) = (A/2R) exp(ikR) exp[i(k/2R)x²](1 + cos 2πμx) (9.70)

This is valid under the paraxial approximation. Using the Fresnel diffraction approximation, the amplitude at a plane distant z from the grating is

u(x1, R + z) = [A/2(R + z)] exp[ik(R + z)] exp[ikx1²/2(R + z)] × {1 + exp[−iπμ²λRz/(R + z)] cos[2πμRx1/(R + z)]} (9.71)

This expression represents a transmittance function of the grating of spatial frequency μ′ = μR/(R + z), provided that

exp[−iπμ²λRz/(R + z)] = 1


This yields the self-image plane distances (zT)s as

(zT)s = 2N/(μ²λ − 2N/R) for N = 1, 2, 3, . . . (9.72)

The spacing between successive Talbot planes increases with the order N. The period of the grating also increases as if it were geometrically projected. Similarly, when the grating is illuminated by a convergent spherical wave, the successive Talbot planes come closer and the spatial frequency increases. This is valid until some distance from the point of convergence.
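Equation 9.72 is easy to tabulate. With an assumed wavelength, grating frequency, and source distance R, the sketch below lists the first few self-image plane positions and the geometrically projected grating frequency μ′ = μR/(R + (zT)s) at each of them.

```python
# Assumed parameters: point source at distance R from the grating
lam = 632.8e-9          # wavelength, m
mu  = 10e3              # grating frequency, lines/m
R   = 0.5               # source-to-grating distance, m

for N in range(1, 5):
    zT_s = 2 * N / (mu**2 * lam - 2 * N / R)     # Equation 9.72
    mu_s = mu * R / (R + zT_s)                   # projected grating frequency at that plane
    print(f"N = {N}: (zT)s = {zT_s*1e3:6.2f} mm, "
          f"grating frequency there = {mu_s/1e3:.2f} lines/mm")
```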

9.14.4 TALBOT EFFECT FOR MEASUREMENT

Instead of using a projection system for projecting a grating on the object surface for either shadow moiré or projection moiré work, the projection can be done without a lens but using the Talbot effect. This, however, necessitates the use of laser radiation. The additive moiré method can then be applied to measure out-of-plane deformation. The shape of the object can also be obtained by imaging the Talbot grating on another identical grating.

When a grating of higher frequency is used, several Talbot planes may intersect the object surface. A moiré pattern of high contrast will be formed at the Talbot planes. Therefore, objects of large depth can be topographically examined using this technique.

9.14.4.1 Temperature Measurement

Several optical methods, such as holographic interferometry, moiré deflectometry, schlieren photography, and laser speckle photography, have been used for the measurement of flame temperature. These are noncontact methods. Talbot interferometry with a circular grating has been proposed as another method of temperature profiling of flames. Two identical circular gratings (containing equispaced concentric circles) are used in the experiment. They are aligned with their centers collinear with the beam direction and are placed such that the second grating lies on the Talbot image of the first grating. The flame is placed between the two gratings. For a centrosymmetric flame, the center of the flame lies on the line joining the centers of the gratings. In such a case, the rays bend away from the center: the Talbot image of the circular grating takes an elliptical shape, which forms a moiré pattern with the second grating. The angle of deflection is calculated from the moiré pattern. Using the inverse Abel transform, the change in refractive index, (n − n0)/n0, is obtained, where n and n0 are the refractive indices at a point in the flame and of the ambient atmosphere, respectively. This value is then used to calculate the temperature using the formula

T = T0 / {[(n − n0)/n0][(3PA + 2RT0)/(3PA)] + 1}


where T0 is the temperature at the ambient condition for which the refractive index is n0, P is the pressure, R is the gas constant, and A is the molar refractivity of air at the wavelength of the light used in the experiment. Temperatures measured at various locations in the flame with Talbot interferometry and with thermocouples are in good agreement.

The sensitivity of the method depends nonlinearly on the pitch of the circular grating.
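A minimal sketch of the temperature formula is given below. The ambient conditions and the molar refractivity of air are assumed round numbers, and the function name flame_temperature is illustrative; negative values of (n − n0)/n0 correspond to the hot, less dense gas in the flame.

```python
# Illustrative evaluation of the temperature formula (all values assumed)
T0 = 296.0          # ambient temperature, K
P  = 1.013e5        # pressure, Pa
R  = 8.314          # gas constant, J/(mol K)
A  = 4.4e-6         # molar refractivity of air, m^3/mol (approximate)

def flame_temperature(dn_over_n0):
    """T = T0 / { [(n - n0)/n0] (3PA + 2RT0)/(3PA) + 1 }"""
    factor = (3 * P * A + 2 * R * T0) / (3 * P * A)
    return T0 / (dn_over_n0 * factor + 1)

# A hot region has n < n0, so (n - n0)/n0 is negative
for dn in (-5e-5, -1e-4, -2e-4):
    print(f"(n - n0)/n0 = {dn:+.1e}  ->  T = {flame_temperature(dn):7.1f} K")
```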

9.14.4.2 Measurement of the Focal Length of a Lens and the Long Radius of Curvature of a Surface

Talbot interferometry has been used for measuring the focal length of a positive lens and power variations for a multifocus lens. It is also a convenient technique for measuring the radii of curvature of shallow concave surfaces.




Index

Abbé’s theory, 49
ABCD matrix applied to Gaussian beams, 8–12
Active pixel sensors (APS), 89
Additive moiré pattern, 247–248
Airy disk, 51
Airy formula, 28, 150
Airy intensity distribution, 58
Amplitude
    amplitude/intensity-based sensors, 4
    interference by division of, 20–21
Angular mismatch, 251
Apertured lens arrangement, speckle metrology, 175–176, 182–183
APS. See Active pixel sensors (APS)
Argon–ion laser, 106
Automatic shape determination, moiré, 257
Average speckle size, 149–151
Axicon, 12, 13

Babinet compensators, 206–207
Barium titanate (BaTiO3), 97
Base-to-height ratio, 256
Beam-expanders, 107
Beam-splitters, 107
Beam waist size, 6
Beer’s law, 4
Bessel beams/function, 12–13, 50, 113
Biaxial crystals, 204
Birefringent coating method, 222–223
Bismuth germanium oxide (BGO), 98
Bismuth silicon oxide (BSO), 98
Blazed grating, 95
Brewster angle, 204

Carré method, 63–64
CCD. See Charge-coupled device (CCD)
Cellulose acetate butyrate (CAB), 95
CGHs. See Computer generated holograms (CGHs)
Charge-coupled device (CCD), 29, 79, 86–89, 151
Circular polariscope, 212–216
    configurations of, 213
Circular polarization, 203
CMOS. See Complementary metal-oxide semiconductor (CMOS) sensors
Coherent projection method, 262–264
    interference between two collimated beams, 262–263
    interference between two spherical waves, 263–264
Coherent waves, generation, 20–21
    division of amplitude, 20
    division of wavefront, 20
Compensators, 206–207
    Babinet compensators, 206–207
    Soleil–Babinet compensators, 206–207
Complementary metal-oxide semiconductor (CMOS) sensors, 79, 88
Computer generated holograms (CGHs), 16, 52
    Spiral phase plates (SPPs), 16
Contouring
    in ESPI, 187–189
    holographic, 129–132
    moiré measurement, 253–267
Converging spherical waves, 3
Correlation coefficient in speckle interferometry, 167–168
Cylindrical waves, 2–3

DC background, 59
Deformed and reference gratings, moiré between, 245–246
Deformed sinusoidal grating, moiré pattern with, 246–248
Detection in optical measurement, 79–98. See also Image detectors; Photodiodes
    active pixel sensors (APS), 89
    charge-coupled device (CCD), 29, 79, 86–89, 151
    complementary metal-oxide semiconductor (CMOS) sensors, 79, 88
    detector characteristics, 79–80
    interline transfer (IT) sensors, 88
    Lead zirconate titanate (PZT)-mounted mirror, 68–69
    Metal-insulator–semiconductor (MIS) capacitors, 86
    Metal-oxide–semiconductor (MOS) capacitor, 86
    photoconductors, 80–81
        extrinsic, 80
        intrinsic, 80
    photodiodes, 81–84
    photomultiplier tube (PMT), 79, 84
    time-delay and integration (TDI) mode of operation, 89
DIC. See Differential interference contrast (DIC) microscopy
Dichroism, 207
Dichromated gelatin, 94–95
Differential interference contrast (DIC) microscopy, 35
Diffraction, 43–58
    action of a lens, 45
    computer generated holograms (CGHs), 16, 52
    diffractive optical components (DOEs), 52–56
    efficiency, 57
        holographic interferometry, 105
    Fraunhofer diffraction, 44–45
    Fresnel diffraction, 43–44
    grating, motion of, 72
    resolving power of optical systems, 57–58
Diffractive optical components (DOEs), 52–56
    Mach–Zehnder interferometer using, 53
Digital holography, 132–135
    interferometry, 135–136
    reconstruction, 133–135
    recording, 132–133
Direction, sensors based on change in, 5
Displacement vector, measurement, in HI, 115–116
Diverging spherical waves, 3
DOEs. See Diffractive optical components (DOEs)
Doppler interferometry, 36–37
Doppler-shifted light, 5
Doppler velocimetry, 37
Double-exposure HI, 61, 109–111
Double-exposure holophotoelasticity, 223, 225–229
Double-exposure specklegram, 163
Double refraction, 204–207
    Babinet compensators, 206–207
    compensators, 206–207
    dichroism, 207
    extra-ordinary beam (e-beam), 204
    Glan–Thompson polarizer, 205
    half-wave plate, 206
    ordinary beam, 204
    phase plates, 205–206
    quarter-wave plate, 206
    scattering, 207
    Soleil–Babinet compensators, 206–207
Doughnut mode, 14
Dove prism, 51
Dual-illumination, single-observation-direction method, 169
Dual-illumination method, in HI, 132
Dual-refractive-index method, in HI, 131–132
Dual-wavelength interferometry, 30–31
Dual-wavelength method, in HI, 129–131
Duffy’s method, 169–171, 176
Dynamic events
    curvature determination for, 268–269
    slope determination for, 267–268

Edge dislocation, 14
Efficiency, diffraction, 57
Electronic speckle pattern interferometry (ESPI), 183–187
    contouring in ESPI, 187–189
        direction of illumination, change of, 188–189
        medium surrounding the object, change of, 189
        object tilt, 189
        wavelength, change of, 189
    in-plane displacement measurement, 185
    measurement on small objects, 186–187
    out-of-plane displacement measurement, 184–185
    shear ESPI measurement, 187
    vibration analysis, 185–186
ESPI. See Electronic speckle pattern interferometry (ESPI)
Experimental arrangement, holographic interferometry, 105–108
Extra-ordinary beam (e-beam), 205
Extrinsic photoconductivity, 80–81

Fabry–Perot interferometer, 21
Fast Fourier transform (FFT) algorithm, 134
Ferroelectric crystals, 97–98
    Strontium barium niobate (SBN), 97
FFT. See Fast Fourier transform (FFT) algorithm
Fiber-optic Doppler velocimetry, 38
Fiber-optic interferometers, 37–38
Filtering, speckle metrology, 158, 171–173
Fizeau interferometers, 37
Foucault knife-edge test, 49
Fourier–Bessel series, 56
Fourier filtering, 159
    out-of-plane displacement, measurement, 161
Fourier transform method, 45–49, 72–73, 134
Fractional fringe order, measurement, 217–220
Frame transfer (FT) sensors, 88
Fraunhofer diffraction, 44–45
Frequency measurement, sensors based on, 5
Fresnel approximation, 133
Fresnel diffraction, 43–44
Fresnel–Kirchhoff diffraction theory, 44
Fringe formation, 171–173, 181–183. See also under Speckle metrology
    in HI, 115–116
Fringe interferometry, 33
Fringe skeletonization, 60–61
Frozen-stress method, 229–230
FT. See Frame transfer (FT) sensors

Gabor holography, 104
Gaussian beams, 6–8
    ABCD matrix applied to, 8–12
    mode matching, 12
    propagation in free space, 9–10
    propagation through a thin lens, 10–11
Gaussian parameter, 10
Gelatin, as recording material, 94–95
    dichromated gelatin, 94–95
Glan–Thompson polarizer, 205
Gratings, 30, 53
    blazed grating, 95
    inclined linear grating, 54
    phase grating, 56–57
    sinusoidal grating, 55–56
Guoy’s phase, 7

Half divergence angle, 7
Half-wave plates (HWPs), 70, 206
He–Cd laser, 107
He–Ne laser, 106, 114
Helmholtz equation, 2, 6
Heterodyne interferometry, 4, 31–32
    heterodyne HI, 127–129
HI. See Holographic interferometry (HI)
HNDT. See Holographic nondestructive testing (HNDT)
HOEs. See Holographic optical elements (HOEs)
Holographic interferometry (HI)/Holograms, 31–32, 101–136. See also Digital holography; Lasers
    beam-expanders, 107
    beam-splitters, 107
    contouring/shape measurement, 129–132
        dual-illumination method, 132
        dual-refractive-index method, 131–132
        dual-wavelength method, 129–131
    diffraction efficiency, 105
    displacement vector, measurement, 115–116
    double-exposure HI, 109–110
    experimental arrangement, 105–108
    Fast Fourier transform (FFT) algorithm, 134
    fringe formation, 115–116
    heterodyne HI, 127–129
    hologram recording, 102–103
    large vibration amplitudes, measurement, 117–120
        frequency modulation of reference wave, 117–119
        phase modulation of reference beam, 119–120
    object loading, 116–117
    object-illumination beam, 107
    photoelasticity, 132
    real-time HI, 108–109, 115
    reconstruction, 103–104
    recording materials, 108
    reference beam, 107
    reference wave angle, choice of, 104–105
    reference wave intensity, choice of, 105
    reflection HI, 125–127
    sandwich HI, 123–125
    sensitivity, extending, 127–129
    single- and double-exposure HI, comparison, 111
    stroboscopic illumination/stroboscopic HI, 120
    time-average HI, 110–114, 115
    two-reference-beam HI, 121–122
    types, 105, 106
    very small vibration amplitudes, measurement, 117
Holographic nondestructive testing (HNDT), 117
Holographic optical elements (HOEs), 52
Holography (coherent recording), 91–92
Holography, recording media for, 91
Holophotoelasticity, 223–229
    double-exposure holophotoelasticity, 223, 225–229
    single-exposure method, 223–225
Hurter–Driffield (H&D) curve, 92–93
Hyperboloidal surface, 262

Image detectors, 84–89
Image formation by lens, 45–49
In-plane displacement measurement, 248–253
    high-sensitivity, 251–253
    in speckle metrology, 174–175
    two-dimensional, 249–251
Inclined grating, 240
Information carriers, waves as, 3–5
    amplitude/intensity-based sensors, 4
Inorganic photochromics, 96
Interference equation, 59–60
Interferometry/Interferometers, 4, 29–39. See also Holographic interferometry; Optical interference
    differential interference contrast (DIC) microscopy, 35
    Doppler interferometry, 36–37
    dual-wavelength, 30–31
    fiber-optic interferometers, 37–38
    heterodyne, 31–32
    interference microscopy, 34–36
    Mirau interferometer, 34–35
    multiple-beam shear, 33
    phase-conjugation interferometers, 38–39
    polarization, 33–34
    shear, 32–33
    shearing, 35
    two-beam, 29
    white light, 31
Interferogram, 254
    u-displacement fringes, 254
    v-displacement fringes, 254
Interline transfer (IT) sensors, 88
Intrinsic photoconductivity, 80–81
Isochromatics, 211
    in a bright field, 216
    computation, 221–222
    in a dark field, 216
Isoclinics, 212
    computation, 220–221
IT. See Interline transfer (IT) sensors

Lagrangian strain, 249
Lasers, 106–107
    Argon–ion laser, 106
    He–Cd laser, 107
    He–Ne laser, 106
    laser beam, 5–6
    Laser–Doppler anemometers/velocimeters (LDA/LDV), 5
    Nd:YAG semiconductor pumped, first-harmonic green, 107
    Ruby laser, 106
    semiconductor or diode lasers (LDs), 107
Lateral shear interferometry, 15–16
Lead zirconate titanate (PZT)-mounted mirror, 68–69
Leendertz’s methods, 173
Lens
    action of, 45
    Fourier transformation by, 45–49
    image formation by, 45–49
Light-line projection with TDI mode of operation of CCD camera, 260–262
Ligtenberg’s arrangement, 265–266
Linear gratings, moiré fringe pattern between, 239–242
Linear polarization, 202–203
Linearly polarized incident beam, 232–233
Line gratings, 243
Lithium niobate (LiNbO3), 97
Lithium tantalate (LiTaO3), 97
Lummer–Gerchke plate, 21

Mach–Zehnder interferometer, 69, 223
    using DOEs, 53
Malus’s law, 4–5, 207
MEMS. See Microelectromechanical systems (MEMS)
Metal-insulator–semiconductor (MIS) capacitors, 86
Metal-oxide–semiconductor (MOS) capacitor, 86
Michelson interferometers, 24–25, 37
Michelson stellar interferometer, 23–24, 29
Microelectromechanical systems (MEMS), 186
Micro-Michelson interferometer, 34
Microscopy, interference, 34–36
Mirau interferometer, 30, 34–35
Mirrors, 50
MIS. See Metal-insulator–semiconductor (MIS) capacitors
Mixed screw-edge dislocation, 14
Mode matching, Gaussian beams, 12
Modified Michelson interferometer, 30
Moiré phenomenon, 239–275. See also Shadow moiré method
    a = b but θ ≠ 0, 241
    a ≠ b but θ = 0, 241
    additive moiré pattern, 247–248
    automatic shape determination, 257
    coherent projection method, 262–264
    contouring, measurement, 253–267
    with deformed sinusoidal grating, 246–248
    dynamic events
        curvature determination for, 268–269
        slope determination for, 267–268
    fringe pattern between a linear grating and a circular grating, 242–243
    fringe pattern between two linear gratings, 239–242
    high-sensitivity in-plane displacement measurement, 251–253
    in-plane displacement, measurement, 248–253
    light-line projection with TDI mode of operation of CCD camera, 260–262
    for measurement, 248
    multiplicative moiré pattern, 247
    out-of-plane component, measurement, 253–267
    projection moiré, 257–260
    between reference and deformed gratings, 245–246
    reflection moiré method, 265–267
        surface topography with, 269–271
    between sinusoidal gratings, 243–244
    Talbot phenomenon, 271–275
        cut-off distance, 273
    two-dimensional in-plane displacement measurement, 249–251
    vibration amplitudes, measurement, 264
Monochromatic waves
    dislocation in, 14
        edge, 14
        mixed screw edge, 14
        screw, 14
    interference between two, 21–25
MOS. See Metal–oxide–semiconductor (MOS) capacitor
Multiple-beam interference, 25–29
    division of amplitude, 27–28
    division of wavefront, 25–26
    in reflection, 29
    in transmission, 27–28
Multiple-beam shear interferometry, 33
Multiple-charge optical vortices, 14
Multiplicative moiré pattern, 247

Nd:YAG semiconductor pumped, first-harmonic green, 107
Nicol prism, 205
Nomarski interferometer, 35
Noncollimated illumination, Talbot effect in, 273–274
Nondiffracting beams, Bessel beams, 12–13
Nye and Berry terminology, 14

Object-illumination beam, 107
Objective speckle pattern, 150
Optic axis, 204
Optical filtering, 49–50
    4f arrangement for, 49
Optical interference, 19–39
    between two plane monochromatic waves, 21–25
    conditions for obtaining, 20
    Michelson interferometer, 23–25
    multiple-beam interference, 25–29
    Young’s double-slit experiment, 22–24
Optical metrology, 1
    components in, 50–57
        reflective, 50
        refractive, 50–52
        diffractive optical components, 52–56
Optical pointers, 5
Optical sensing, 3
Optical vortex interferometer, 15
Ordinary beam, 204
Organic photochromics, 96
Out-of-plane component, moiré measurement, 253–267
Out-of-plane displacement measurement in speckle metrology, 173–174
Out-of-plane speckle interferometer, 168–169

Particle image velocimetry (PIV), 162
Path difference from a speckle pair, 159
PBS. See Polarization beam-splitter (PBS)
Phase-conjugation interferometers, 38–39
Phase-contrast microscopy, 49
Phase-evaluation methods, 59–74
    double-exposure holographic interferometry, 61
    fringe skeletonization, 60–61
    interference equation, 59–60
    phase-shifting with unknown but constant phase-step, 63–66
    quasi-heterodyning, 62
    spatial heterodyning, 73–74
    spatial phase-shifting, 66–68
    temporal heterodyning, 61–62
Phase gratings, 56–57
Phase measurement, sensors based on, 4
Phase plates, 205–206
Phase shifting evaluation methods, 62–66, 68–72, 220–222
    half-wave plates (HWPs), 70
    motion of a diffraction grating, 72
    Polarization beam-splitter (PBS), 70
    PZT-mounted mirror, 68–69
    quarter-wave plates (QWPs), 70, 206
    rotation of polarization component, 70–72
    tilt of glass plate, 69–70
    use of CGH, 72
Photochromics, 96–97
    inorganic, 96
    organic, 96
Photoconductor detectors, 80–81
Photodiodes, 81–84
    avalanche diode, 81
    p–i–n photodiode, 85
    p–n technology, 81
    Schottky photodiodes, 83
    simple p–n junction diode, 81
Photoelasticity, 201–233. See also Malus’s law; Polarized light
    analysis methods, 210–216
    birefringent coating method, 222–223
    circular polarization, 203
    evaluation procedure, 216–217
    fractional fringe order, measurement, 217–220
    holographic, 132
    holophotoelasticity, 223–229
    linear polarization, 202–203
    Mach–Zehnder interferometer, 223
    phase-shifting, 220–222
    plane polarized waves, superposition, 201–202
    scattered-light photoelasticity, 230
    strain-optic law, 209–210
    stressed model examination in scattered light, 230–233
    stress-optic law, 207–209
    Tardy’s method, 217–220
    three-dimensional photoelasticity, 229–230
Photographic emulsions (silver halides), 92
Photographic film/plate, 90–94
Photography (incoherent recording), 91–92
    recording media for, 91
Photography, speckle, 155–158
Photomultiplier tube (PMT), 79, 84
Photopolymers, 95–96
Photoresists, 95
    negative photoresists, 95
    positive photoresists, 95
Pitch mismatch, 251
PIV. See Particle image velocimetry (PIV)
Plane polariscope, 210–212
Plane polarized waves, superposition, 201–202
Plane waves, 2–3
PMT. See Photomultiplier tube (PMT)
p–n technology, 81–83
Pointwise filtering, 158–160
    Fourier filtering, 159
    out-of-plane displacement, measurement, 161
    path difference from a speckle pair, 159
    wholefield filtering arrangement, 159–160
Polarization
    circular polarization, 203
    linear polarization, 202–203
    sensors based on, 5
Polarization beam-splitter (PBS), 70
Polarization interferometers, 30, 33–34, 70
Polarization phase-shifters, 71
Polarized light, production, 203–207
    double refraction, 204–207
    reflection, 204
    refraction, 204
Poly-methyl methacrylate (PMMA), 95
Poly-N-vinylcarbazole (PVK), 96
Prisms, 51
    Dove prism, 51
Projection moiré method, 257–260
Propagation distance, 9
Propagation in free space, Gaussian beams, 9–10
Propagation through a thin lens, Gaussian beams, 10–11
PZT. See Lead zirconate titanate (PZT)

Quarter-wave plates (QWPs), 70, 206
Quasi-heterodyning, 62
    phase shift method, 62, 63
    phase-step method, 62

Rayleigh criterion, resolving power, 57
Rayleigh interferometer, 20, 29
Rayleigh range, 6
Rayleigh scattering, 230
Real-time HI, 108–109, 115
Recording materials in optical measurement, 89–98
    dichromated gelatin, 94–95
    ferroelectric crystals, 97–98
    holographic recording materials, 108
    photochromics, 96–97
    photographic emulsions (silver halides), 92
    photographic film/plate, 90–94
    photopolymers, 95–96
    photoresists, 95
    Poly-methyl methacrylate (PMMA), 95
    Poly-N-vinylcarbazole (PVK), 96
    recording media for photography and holography, 91
    thermoplastics, 96
    2,4,7-Trinitro-9-fluorenone (TNF), 96
Reference and deformed gratings, moiré between, 245–246
Reference beam, 107
Reference wave, 225–226
    angle, choice of, 104–105
    intensity, choice of, 105
Reflection, 204
    interference pattern in, 29
Reflection HI, 125–127
Reflection moiré method, 265–267
    surface topography with, 269–271
Reflection polariscope, 222–223
Refraction, 204
Resolving power of optical systems, 57–58
    Airy intensity distribution, 58
    Rayleigh criterion, 57–58
Retro-reflective paint, 189–190
Rieder–Ritter arrangement, 266
Ronchi gratings, 239
Ruby laser, 106

Sandwich HI, 123–125
Savart plates, 33–34
Scattered light method, 230
Scattered-light photoelasticity, 230
    stressed model examination in, 230–233
        linearly polarized incident beam, 232–233
        unpolarized incident light, 230–231
Scattering, 207
Schottky photodiodes, 83
Screw dislocation, 14
Self-imaging, 271
Semiconductor or diode lasers (LDs), 107
Sensors
    amplitude/intensity-based sensors, 4
    based on change of direction, 5
    based on frequency measurement, 5
    based on phase measurement, 4
    based on polarization, 5
Shadow moiré method, 254–257
    parallel illumination and parallel observation, 254–255
    spherical-wave illumination and camera at finite distance, 255–257
Shear ESPI measurement, 187
Shear interferometry, 30, 32–33
Shear methods used in speckle interferometry, 177–180
    Duffy’s arrangement, 178
    inversion shear, 179
    lateral shear, 179
    meaning of shear, 177–178
    Michelson interferometer, 178
    radial shear, 179
    rotational shear, 179
    theory of, 180–181
Shear speckle photography, 163–164
Shearing interferometry, 35
Single-exposure HI, 111
Single-exposure specklegram, 156
Single-exposure holophotoelasticity, 223, 224–225
Single-illumination, dual-observation-direction method, 169
Singular beams, 13–17. See also Monochromatic waves
Sinusoidal grating, 55–56
    moiré between, 243–244
Smartt interferometer, 20
Snell’s law of refraction, 204
Soleil–Babinet compensators, 206–207
Spatial heterodyning, 73–74
Spatial phase-shifting, 66–68, 190–191
    90°, 67
    120°, 67
Speckle interferometry, 30, 164–167
    correlation coefficient in, 167–168
    out-of-plane, 168–169
Speckle metrology, 149–191. See also Electronic speckle pattern interferometry; Shear methods
    aperturing the lens, 175–176
    average speckle size, 149–151
        objective speckle pattern, 150
        subjective speckle pattern, 150–151
    Duffy’s arrangement, 176
    evaluation methods, 158–161
        pointwise filtering, 158–160
    filtering, 171–173
        fringe formation, 171–173
        Leendertz’s methods, 173
    fringe formation, 181–183
        apertured lens arrangement, 182–183
        Michelson interferometer, 181
    in-plane displacement components, 174–175
    in-plane measurement, 169–171
    out-of-plane displacement measurement, 173–174
    particle image velocimetry (PIV), 162
    retro-reflective paint use, 189–190
    shape measurement/contouring, 177
    shear speckle photography, 163–164
    spatial phase-shifting, 190–191
    speckle phenomenon, 149
    speckle photography
        sensitivity of, 162
        with vibrating objects, 161–162
    speckle photography, 155–158
    white-light speckle photography, 162
    without influence of in-plane component, 183
Speckle pattern, formation of, 150
Speckle pattern and surface motion, 152–155
    linear motion in plane of the surface, 152
    object tilt, 153–155
    out-of-plane displacement, 152–153
Spherical waves, 2
    converging, 3
    diverging, 3
Spiral interferometry, 15
Spiral phase plates (SPPs), 16
Spot size, 6
SPPs. See Spiral phase plates (SPPs)
Staybelite, 96
Stokes relations, 29
Strain-optic law, 209–210
Stressed model examination in scattered light, 230–233
Stress-optic law, 207–209
Stroboscopic illumination/stroboscopic HI, 120
Strontium barium niobate (SBN), 97
Subjective speckle pattern, 150–151
    superposition, 151

Talbot phenomenon, 271–275
    cut-off distance, 273
    for measurement, 274–275
    in collimated illumination, 272–273
    in noncollimated illumination, 273–274
    temperature measurement, 274–275
Tardy’s method, 217–220
TDI. See Time-delay and integration (TDI)
Temperature measurement, Talbot phenomenon, 274–275
Temporal heterodyning, 61–62
Temporal phase-shifting, 66
Thermoplastics, 96
    record-erase cycle for, 97
Three-dimensional photoelasticity, 229–230
    frozen-stress method, 229–230
Time-average HI, 110–114, 115
Time-delay and integration (TDI), 89
Topological charge, 13
Transmission, interference pattern in, 27–28
Transmittance function, 243
Triethanolamine, 126
2,4,7-Trinitro-9-fluorenone (TNF), 96
Two-beam interferometers, 37
Two-reference-beam HI, 121–122
Twyman–Green interferometer, 30

Unpolarized incident light, 230–231

van Cittert–Zernike theorem, 23
Vertical grating, 240
Vibration amplitudes, measurement, 264
Vibration analysis, electronic speckle pattern interferometry, 185–186
Vortex filaments, 14
Vortices/Vortex generation, 14–16

Wave equation, 1–2
Wavefront, interference by division of, 20
Waves, 1–17
    as information carriers, 3–5
    cylindrical waves, 2–3
    plane waves, 2
    spherical waves, 2
White light interferometry, 31
White-light speckle photography, 162
Wholefield filtering arrangement, 159–160
Wollaston prism, 33–35
Wyco interferometer, 30

Young’s double-slit experiment, 22–24

Zero-order fringe, 120
Zygo interferometer, 30
