
I Executive Summary

I.A Point Design

Task 2 consists of a hardware component and an algorithm component. We plan to use two 500 m PCF optical fibers per telescope to transport the two polarizations of light to the combiner. The optical path in the two arms can differ by over 100 m, so the laser's wavelength will be stabilized to 1 part in 10¹². Because the fibers are above ground, their lengths need to be continuously monitored while we are observing fringes from a faint target. As a result, we plan to use an ultra-low power (picowatts) laser metrology beam in addition to a "razor edge" filter that blocks all but 10⁻⁶ of the 1.06 μm laser light while transmitting 97% of the starlight from 1.1 to 1.8 μm. The metrology will extend from the beam combiner all the way to the optical fiducial that defines the baseline vector at the telescope. The beam combiner will use all the light from 1.1 μm to 1.8 μm and both polarizations. Doubling the number of photons detected per coherent integration time of about 37 msec at 1.6 μm doubles the SNR because the fringe is photon starved, and doubling the SNR cuts the required integration time by 4X. The net result is a noise in V² and differential phase of 0.0003 in 300 sec of integration with two 1.8 m telescopes. A total of ~30,000 sec of integration on the target (300 sec on 100 baselines) will provide near full UV plane coverage for a high SNR image. A cryogenic NIR array will be used as the fringe detector.

We also propose two new image reconstruction algorithms. At the proposer's briefing, MIT/LL, speaking for DARPA, made two interesting statements. One is that closure phase is a quantity that is invariant under image translation, but differential phase is not such a clean measurement: if one tries to calculate phase from differential phase, one needs to know the position of the target with respect to the baseline to 5 nanoradians. This might be the origin of the government requirement for 5 nanoradian pointing of the telescope. The second statement by MIT/LL was that no one had done an end-to-end imaging simulation of differential-phase imaging. We have developed a variant of the GSF algorithm that uses differential phase and V² information in a way that does not require knowledge of the phase center to 5 nrad. A simulated image reconstruction with noise is in the proposal body. We propose to study a second passive-interferometer imaging concept (a combination of hardware and algorithm) for an eventual operational GeoSat imaging facility that can produce an image in up to 100 times less integration time on the target, making possible a high quality image in under 1 hr. This is a variant of the mosaic imaging planned for the sub-millimeter ALMA array, but the huge SNR benefits only apply in the optical/near IR.

I.B Tasks

Most of the hardware is standard off-the-shelf, except the detector, which has a long lead time. The hardware design activities will be mainly requirements derivation, hardware approach identification, and preliminary design, at the levels appropriate for the SRR. In algorithms, we plan during phase 1 to more thoroughly explore our modified GSF algorithm. GSF can often be successful in reconstructing images when small parts of the UV plane information are missing. In the case of the Galileo interferometer, the use of an existing 1.5 m telescope means that the shortest baselines will not be available because we can't get the two telescopes close enough together. We will explore combining AO images from the Palomar 5 m and the interferometer to partially fill in the missing UV points. Last of all, we want to further develop our second algorithm, mosaic imaging, to more clearly define the requirements of a differential phase imaging system that can produce an image in < 1 hr.


I.C Risks (none)

I.D Cost and Schedule

The total cost for Task 2 is $5.5M, with $500K in phase 1. The cost to get to PDR is an additional $500K, and $1M to CDR, with the remaining $3.5M (which includes $400K of reserves) to finish. PDR would occur ~1 month after the start of phase 2, CDR 3 to 4 months after that, and ship to site 12.5 months after the start of phase 2.

I.E Summary Slides


II Detailed Proposal Information

II.A Point Design

II.A.1 Overview of the Concept

(Note: The narrative in this section is common to all three of our submitted proposals for the three tasks in the BAA, and provides the overall context for each. The material specific to this task begins in section II.A.2.)

We are proposing a variant of long-baseline differential phase imaging that is, under some circumstances, significantly faster than the default differential phase concept described in the DARPA BAA. We also derive, from first principles, the requirements for the long baseline passive interferometer that would enable this imaging concept. We begin this introduction with a description of a passive long baseline interferometer and a requirements flow down. We follow that with an SNR analysis of the differential phase interferometry concept from OPC, which is the basis of the DARPA BAA. We then end with a description of mosaic interferometric imaging, a new concept that is compatible with differential phase interferometry (as well as with more traditional closure phase imaging), but offers a factor of 100 to 1000 advantage in the integration time needed to produce an image.

II.A.1.a Fringe Tracking vs. Passive Interferometry

Most long-baseline interferometers, including the ones built by various members of this team (the Mark I through Mark III interferometers on Mt. Wilson, the 120 meter-baseline Palomar Testbed Interferometer at Palomar Observatory, the Navy Optical Interferometer in Flagstaff, and the Keck Interferometer on Mauna Kea), are active "fringe tracking" interferometers that directly measure and track the phase of interference of light between two or more telescopes.

Fringe tracking is the interferometric equivalent of adaptive optics (AO), where the fluctuations due to the atmosphere are measured and then corrected in real time. The limiting magnitude for active fringe tracking is similar to the limiting magnitude for natural guide star AO. However, a major difference between long-baseline interferometry and AO for faint objects is that, while AO can use a laser guide star to provide the photons for the wavefront sensor, there is no laser guide star equivalent for long baseline interferometry. Furthermore, in natural guide star AO, an additional requirement is that the star not be resolved by a subaperture of the wavefront sensor in order to not degrade sensitivity. In interferometry, the equivalent requirement is that the star being fringe-tracked not be resolved by the interferometer baseline. When the baseline is a few hundred meters, a typical Geosync satellite will be highly resolved, with a fringe visibility of 1-2%, with the consequence that the number of photons required to track fringes is insufficient by many orders of magnitude. OPC, under DARPA funding, studied differential phase imaging, which circumvents the need to track fringes while still obtaining some phase information.

Prior to the OPC concept it had been widely known that a "passive" interferometer can measure fringe visibility (or fringe amplitude) even when the fringe SNR is < 1 in an atmospheric coherence time. It is also widely known that for complex images (such as a 100x100 pixel image of a 10-meter satellite with 10 cm resolution) one cannot reconstruct the image without any phase information. So while a passive interferometer can measure fringe visibility for targets much fainter than the fringe tracking limit, the measurement of fringe amplitude alone is not sufficient to reconstruct an image. The importance of the OPC differential phase concept is that it provides a way to obtain phase information on targets too faint for active fringe tracking. How one goes from differential phase to a reconstructed image was not described at the DARPA proposer's briefing. In fact it was stated that an end-to-end simulation resulting in a reconstructed image had not been done. We describe our approach to reconstructing an image in some detail in the Task 2 proposal. But first we describe the requirements of a "passive" interferometer.


II.A.1.b Passive Long Baseline Interferometer

In an active fringe tracking interferometer, slight errors in our knowledge of the baseline vector or of the internal paths of the interferometer don't matter once we lock on to the fringe. But in a passive interferometer there is no real-time feedback, and one can potentially spend 1000 sec integrating on the target only to find out later that the delay was in error by 1 mm and hence the visibility was too small. The requirements for a passive interferometer can be derived from two items, the interferometer equation and the atmosphere.

The interferometer equation can be written as:

    x = B⃗ · ŝ + D        (1)

where x is the target-light fringe delay seen by the fringe detector, B⃗ is the baseline vector, ŝ is the unit vector to the target (star or satellite), and D is the 'internal' delay, the portion of x that is internal to the interferometer. The relative delay x is the full optical pathlength difference (OPD) sensed by the fringe detector. The current high resolution imaging application requires that the fringe be dispersed on the detector. This achieves two things: a) the coherence length of the interfered light is increased by dividing the spectral band into narrower channels, and b) the "u-v plane" containing the image information is filled out more completely (which implies the assumption that the image is color-independent). The baseline vector is measured using an 'external metrology' system involving commercial off-the-shelf laser trackers. The unit vector ŝ has to be obtained by doing astrometry with the interferometer. The delay D is the portion of x that is on the interferometer side of the baseline, and is measured using laser metrology.
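To make the bookkeeping in Equation (1) concrete, here is a minimal sketch that evaluates the geometric term B⃗ · ŝ and the residual delay seen by the fringe detector for an assumed geometry (a 500 m east-west baseline and a target 15 deg from zenith); the numbers are illustrative, not design values.

```python
import numpy as np

# Illustrative evaluation of Equation (1): x = B . s_hat + D
# All numbers below are assumptions for illustration only.
B = np.array([500.0, 0.0, 0.0])          # baseline vector, meters (east, north, up)
zenith_angle = np.radians(15.0)          # target 15 deg from zenith, in the east-up plane
s_hat = np.array([np.sin(zenith_angle), 0.0, np.cos(zenith_angle)])  # unit vector to target

geometric_delay = np.dot(B, s_hat)       # ~129 m of external delay for this geometry
D = -geometric_delay + 50e-6             # internal delay set by the delay lines, leaving 50 um

x = np.dot(B, s_hat) + D                 # residual delay seen by the fringe detector
print(f"geometric delay = {geometric_delay:.3f} m, residual x = {x*1e6:.1f} um")
```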

Atmospheric fluctuations are usually described by a Kolmogorov turbulence model with an outer scale term. Phase errors grow as the 5/6 power of baseline length until the baseline approaches the outer scale length, beyond which the fluctuations (approximately) saturate. If we assume 1 arcsec seeing and a turbulence outer scale of 50 m – a plausible value based on measurements at several sites – then the maximum atmospheric piston fluctuations are about 40 μm (rms). This much phase error would "wash out" the fringe and destroy the visibility. To mitigate this, each measurement must be done within an atmospheric coherence time (τ₀, or about 20 msec in the near IR) and the fringe must be spectrally dispersed to increase the coherence length of the light. At each telescope configuration (u-v point), a large number of such measurements are taken and averaged.

An ideal "passive" interferometer would have its baseline vector and internal delay known to better than the uncertainty in the atmospheric delay. When we image an object at 100 pixels across the target, we have to do so in a wavelength interval Δλ/λ smaller than about 1/100 to avoid blending of adjacent points in the UV plane. Within each spectral channel the coherence length of the light is ~λ²/Δλ, which at 1.6 μm is ~160 μm. Since this is larger than the expected atmospheric fluctuations, we see that the coherence length specified by the desired resolution will drive the requirement on knowledge of the terms in Equation (1).
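As a quick check of those numbers (a rough sketch using the relations just quoted, with the channel width set by the desired ~100-pixel image):

```python
# Coherence length per spectral channel, L_coh ~ lambda^2 / d_lambda,
# with the channel width set by the ~100-resolution-element image (d_lambda ~ lambda/100).
lam = 1.6e-6                      # observing wavelength, meters
d_lam = lam / 100                 # spectral channel width
L_coh = lam**2 / d_lam            # = 100 * lambda = 160e-6 m
print(f"coherence length per channel ~ {L_coh*1e6:.0f} um")      # ~160 um

# Compare with the assumed atmospheric piston fluctuation of ~40 um rms:
atm_piston_rms = 40e-6
print(f"coherence length / atmospheric piston ~ {L_coh/atm_piston_rms:.0f}x")
```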

II.A.1.c Definition of the Baseline Vector

What do we mean when we say we know the baseline vector to about 100 μm when the telescope is 1.5 m in diameter, 10,000 times larger? In radio astronomy a similar problem exists, where the telescope is 25 m in diameter and the baseline is known to 1 mm. In radio astronomy the baseline vector is often taken to be the vector between the pivot points of the telescopes. That is, an ideal Alt-Az telescope with a perfect azimuth bearing and an elevation axis intersecting the azimuth axis will have a pivot point that is fixed relative to the earth, and that point serves as one end of the baseline vector. While this model is sufficient for radio interferometry it is not adequate for optical interferometry. The requirement for the Galileo project of metrology at ~160 μm is somewhat tighter than what is routinely done in radio astronomy.


It's also important to define the baseline vector in a way that the design of the metrology system that measures it is not too complicated.

Figure 1: Retro-reflecting fiducials define the reference points in an optical interferometer. A flight brassboard multi-cornercube fiducial built for the Space Interferometry Mission is shown on right.

Optical interferometers, especially in space, require a definition of the baseline vector with much higher accuracy, sometimes at the sub-nanometer level. For SIM (the Space Interferometry Mission) we adopted a definition of the baseline vector that satisfies Equation (1) with sub-nanometer accuracy, based on the use of optical fiducials, shown in Figure 1. This approach will also work well for Galileo. Optical retroreflectors define the end points of the baseline vector, and the internal delay D is defined by the path difference in the two arms of the interferometer measured to those same optical fiducials that define the baseline vector. When using detached retroreflectors as the fiducials defining the baseline, a number of important simplifications occur that we list here as points to keep in mind:

- the optical fiducials do not have to be at the center of the telescope aperture;
- the optical fiducials do not have to be at the pivot point of the telescope mount;
- the telescope mount azimuth and elevation axes do not have to intersect or be perpendicular to each other;
- the azimuth axis does not have to be vertical;
- the optical fiducials completely define the baseline, so long as D is measured to those same fiducials.

Thus, to measure the baseline, the fiducials are interrogated by laser beams from a wide range of angles. While a single corner cube cannot provide a clear aperture over the entire needed range of angles, as shown in Figure 1, one can build a "compound" corner cube composed of multiple corner cubes or spherical cat's eyes with a common vertex. For this project the vertices of the compound corner cubes must be calibrated to better than 100 μm accuracy. This is readily achievable: for the SIM project we had CSIRO build a compound corner cube with better than 10 μm fabrication accuracy, which we then calibrated to < 0.1 μm knowledge – orders of magnitude better than the current requirements.

As seen in Equation (1), all three components of the baseline vector are important. So in addition to the telescope (x,y) location knowledge stipulated in the DARPA BAA, the z location is in fact even more important. For a target within 15 degrees of zenith, the interferometric delay x is four times more sensitive to the z component of the baseline vector than to the (x,y) components. Our Task 1 proposal shows how we will achieve the required ~100 μm, 3-axis baseline vector accuracy.
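The factor of four follows from differentiating Equation (1) with respect to the baseline components; a minimal sketch for the assumed geometry (target 15 deg from zenith) is below.

```python
import numpy as np

# Sensitivity of the delay x = B . s_hat to errors in the baseline components.
# d(x)/d(B_z) = cos(zenith angle); d(x)/d(B_horizontal) <= sin(zenith angle).
zenith_angle = np.radians(15.0)           # assumed: target within 15 deg of zenith

dz_sensitivity = np.cos(zenith_angle)     # ~0.97 um of delay error per um of B_z error
dxy_sensitivity = np.sin(zenith_angle)    # <=0.26 um of delay error per um of horizontal error

print(f"z sensitivity / horizontal sensitivity ~ {dz_sensitivity/dxy_sensitivity:.1f}x")  # ~4x
```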


II.A.1.d Target Astrometry

Equation (1) shows the importance of knowing the values of B and D to ~100 μm, and it also shows the importance of knowing ŝ, the target position, with high accuracy. Stars in the sky are almost stationary. Typical 11 mag stars have proper motions of a few milliarcsec per year. Once we correct for proper motion and parallax (and the various relativity effects) the stars are essentially stationary in the sky. The satellite, however, is not in an orbit that is stable at the imaging resolution of a 500 meter-baseline interferometer. The simplest model of the satellite's motion is a Keplerian orbit around the center of the Earth. But that ignores the gravitational effect of the Moon. And it ignores the Sun and the other major planets like Jupiter. Last of all, it ignores solar radiation pressure on the spacecraft. The solar flux varies by about 0.1% when sunspots rotate in and out of view, but an even larger effect is that the spacecraft rotates. A geo-satellite whose body is constantly pointed at the Earth while its solar panels rotate to follow the Sun presents a time-varying surface facing the Sun and hence a time-varying acceleration due to solar radiation pressure. A crude calculation shows a geo-satellite has an orbit stable to ~1 m only for a few days. At the level of 1 m, the orbit of the satellite has to be updated on a daily basis or perhaps more often. JPL is the only institution to have demonstrated the ~0.5 milliarcsec (2.5 nrad) astrometry (ref Shaklan) from a ground based telescope needed to measure satellite orbits with this level of precision.

II.A.1.e The D term and Disturbance Rejection

The use of optical fiducials to define the baseline vector and the interrogation of the same fiducials with internal metrology make Equation (1) powerful in capturing the performance of the interferometer. So far we've discussed requirements on the knowledge of the baseline vector and the unit vector to the target. The last term in Equation (1) is D, the internal optical path difference in the arms of the interferometer. The average value of D during each observation must be known to the same level as the baseline (of order 100 μm). But in addition, the high frequency changes in D must be known to much better accuracy (< λ/10, or about 100 nm) if the measurement is to be atmosphere-limited. To better facilitate our discussion, consider the differential of Equation (1):

    δx = δB⃗ · ŝ + B⃗ · δŝ + δD        (2)

Equation (2) can be used to describe either the changes in, or the errors in, the delay measured by the interferometer. If we focus on disturbances, it says that the delay can change when the baseline vector (length or direction) changes, the satellite moves, or the internal path changes. During a single measurement (about 20 msec) the satellite motion is negligible, so we consider the other two terms. These both can arise from various sources of disturbances and vibrations.

Every long baseline interferometer with moderate to large telescopes has experienced serious vibration problems with the telescopes. Often large telescopes, like the Keck 10 m telescopes, were not originally built with interferometry in mind, and many dozens of sources of mechanical disturbance had to be identified and vibration isolated. Similarly, an optical fiber that is 500 m long will be an extremely sensitive acoustic and thermal sensor. For fibers buried 10 ft below ground level, the optical path fluctuations can be very small and very slow, but a 0.1 °C change in the temperature of 500 m of fiber with 1e-5 CTE can change the optical path by 500 μm. Faster changes in pathlength can be caused by mechanical disturbances of the fiber from other environmental or instrumental effects. The obvious solution is to monitor these optical pathlength changes with metrology and remove them with a high speed, controllable optical delay line.
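The 500 μm figure is just the product of fiber length, effective CTE, and temperature change; a one-line check with the numbers quoted above:

```python
# Optical path change of the fiber from a small temperature drift:
# delta_L = L * CTE * delta_T, using the values quoted in the text.
L = 500.0          # fiber length, meters
cte = 1e-5         # effective coefficient of thermal expansion, 1/degC
dT = 0.1           # temperature change, degC

delta_L = L * cte * dT
print(f"optical path change ~ {delta_L*1e6:.0f} um")   # ~500 um, hundreds of waves at 1.6 um
```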

When the optical fiducial moves, Equation (2) shows that it can affect the measured delay in two ways: via the internal delay change δD as well as via the baseline vector change through the δB⃗ · ŝ term. The former is directly measured by internal metrology. The latter is measured by attaching a 3-axis accelerometer to the fiducial. The high frequency motions of each end of the baseline (and hence the vector itself) are given by the accelerometer readings.


We plan to use a 3-axis accelerometer so that we don't have to mechanically align the accelerometer's sensitive axis to the direction of starlight, and, equally important, it lets us reject in software the cross-coupling of vibration from the X and Y axes into the Z axis.

Measuring in real time the length of the 500 m of fiber is necessary because the fiber will not be buried underground. Normally one thinks of doing laser metrology with milliwatt beams of laser light in order to monitor this length in real time. Inside the fiber this would correspond to 10¹⁵ photons/sec. None of this light, however, can be allowed to produce photoelectrons in the fringe detector, which is receiving target photons at a meager rate of 10 photons/pixel/τ₀. Our approach is described in more detail in the fiber metrology section II.A.2.a.

II.A.1.f Mosaic Interferometric Imaging to Reduce Integration Time

To image a complex object one needs near full UV plane coverage. With a single baseline this means moving one telescope to many positions. Mosaic interferometric imaging can reduce the total integration time by potentially a factor of 1000. Mosaic interferometric imaging is used at the large sub-mm array ALMA, a joint project between Europe, the US, and Japan. The huge reduction in integration time, however, is only present at Vis-NIR wavelengths. Mosaic imaging is used when the target being imaged is larger than the diffraction limit of one telescope.

To see the dramatic impact of mosaic interferometric imaging, we consider three scenarios. The first involves two telescopes of the same diameter D; the second involves two larger telescopes of diameter ND, which resolve the target into N×N pixels; and the third is an intermediate with N telescopes of diameter D and one telescope of diameter ND. In this comparison, we will assume that, in the wavelength band being used, the target spans one diffraction limit of the smaller telescopes. This is in fact the case with the 1.8 m telescope in the near-IR looking at a 10 m object in Geosync. We now ask, what is the gain in the image acquisition speed in going from scenario 1 to 2 and then to 3?

The standard imaging approach involves measuring the fringe at a number of baselines. The SNR of the measurement at each baseline improves with the square root of the integration time t_b at that baseline, or, inversely, the integration time required to achieve a given SNR, all else equal, goes as the square of the desired SNR. The total required imaging time t_tot is then given by t_b times the number of baselines n_b:

    t_tot = t_b × n_b ∝ SNR² × n_b ∝ n_b / (N_ph × V²)        (3)

where N_ph is the total number of photons collected by the two telescopes during t_b, and V is the typical visibility of the fringe. In scenario 2, the larger telescopes now resolve the target into N×N pixels. At the focus of each telescope we place an N×N fiber bundle and combine the individual fibers. The individual fibers, each of which sees only a part of the target, have an improved object visibility relative to the scenario 1 case, by a factor of N, so that V² is increased by a factor of N²: we will call this the visibility gain, G_vis. Furthermore, the photons from each segment of the target seen by one fiber are being collected by an aperture that is N² times bigger in area than in scenario 1: we will call this the photon gain, G_phot. Finally, since the larger telescopes are already imaging the target to N×N pixels, the number of baseline changes needed to cover the UV plane and synthesize down to the required resolution is reduced by a factor of N×N accordingly: we will call this the baseline gain, G_base. So, in all, scenario 2 ends up with an overall image acquisition time advantage of N⁶:


    t_tot^mosaic = t_tot × (1/G_phot) × (1/G_vis) × (1/G_base) ≈ t_tot × (1/N²) × (1/N²) × (1/N²) = t_tot / N⁶        (for scenario 2)        (4)

The above equation says that, relative to the conventional approach of scenario 1 with two small telescopes, scenario 2 with the two N-times-larger diameter telescopes has an advantage of N⁶ in integration time. In our proposal the smaller telescope is the Palomar 60 inch telescope. The big telescope at the Palomar Observatory could be the 200 inch Hale telescope, which is 3.3X larger. Here the comparison would be between two 60 inch telescopes and two 200 inch telescopes used in mosaic imaging. Based on the above analysis, the advantage would be (3.33)⁶, more than 1000X.

Less dramatic but still quite significant gains can be accomplished with the more modest scenario 3. As an example of this scenario, suppose the large telescope is the Palomar 200 inch, used along with nine 60 inch smaller telescopes. The 200 inch would be used in mosaic mode with a 3x3 fiber array in its focal plane. If we combine each of the 9 smaller telescopes with each fiber from the large telescope, the photon advantage would be about 5 (or (10+1)/(1+1)). The visibility advantage would be 10 multiplied by an intensity mismatch factor of 2√(I₁I₂)/(I₁+I₂) ≈ 0.6, or about 6, and the baseline advantage would still be 10. The total advantage would be about 300X.
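The scaling arguments above reduce to a few multiplications. The sketch below reproduces the scenario 2 and scenario 3 gains quoted in the text; the telescope sizes are those named above, and the rounded area ratio of 10 follows the text's own approximation.

```python
import math

# Scenario 2: two telescopes N times larger than the 60-inch reference (200/60 ~ 3.33).
N = 200 / 60
gain_scenario2 = N**6                      # G_phot * G_vis * G_base = N^2 * N^2 * N^2
print(f"scenario 2 gain ~ {gain_scenario2:.0f}x")          # > 1000x

# Scenario 3: one 200-inch in mosaic mode plus nine 60-inch telescopes.
area_ratio = 10.0                          # ~(200/60)^2, rounded as in the text
G_phot = (area_ratio + 1) / (1 + 1)        # ~5x more photons per combined pair
mismatch = 2 * math.sqrt(area_ratio) / (area_ratio + 1)    # beam intensity mismatch ~0.6
G_vis = 10 * mismatch                      # ~6x gain in V^2 after the mismatch penalty
G_base = 10                                # ~10x fewer baseline moves
print(f"scenario 3 gain ~ {G_phot * G_vis * G_base:.0f}x") # ~300x
```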

These are rough calculations, and experimental demonstrations would be outside the scope of this DARPA call. But the usefulness of producing an image in a couple of hours versus weeks or months for DOD needs is so large that we feel this should be explored at least in computer simulation.

II.A.2 Task 2 Design and Operational Concept

II.A.2.a Fibers and Fiber Metrology

As described in section II.A.1.e (the D term), we need to measure the internal optical path from the beam combiner, through the optical fiber, to the optical fiducial that defines the baseline. We plan to use 2 optical fibers for the two orthogonal polarizations, and to use PCF fiber to span the wavelength range from 1.06 μm to 1.8 μm. What we call internal metrology (from the beam combining splitter to the corner cube at the telescope) is split into segments: from the beam combiner to an intermediate fiducial just before the Herriott delay line on each arm, from that intermediate fiducial to a similar intermediate fiducial at the output of the fiber at the telescope, and from the outside intermediate fiducial to the baseline corner cube. Thus there is one differential and four point-to-point segments in all, as shown here:

The "freespace" parts of this metrology system at both ends are conceptually similar. The major challenge is to do the metrology simultaneously with stellar observations while keeping the 1.06 μm light out of the stellar fringe detector.

Our approach is to use low light-level metrology, nanowatts instead of milliwatts, together with a "razor edge" filter that can block all but 10⁻⁶ of the light from the metrology laser (at 1064 nm) while transmitting > 95% of the light between 1.1 μm and 2.0 μm. The choice of 1.06 μm for metrology is also mandated by the narrow line width needed for the metrology laser. At the maximum baseline of 500 m, the two arms of the interferometer will differ by 130 m for a target that is 15 deg from zenith.


If we want to measure a 200 m length with < 2 nm jitter, the laser wavelength/frequency has to be stable to 1 part in 10¹¹ on a short time scale. We plan to lock an NPRO YAG laser to a temperature-stabilized Fabry-Perot cavity in a small vacuum chamber to < 100 Hz (a few parts in 10¹²). This is similar to prior work by the USNO team. As mentioned before, "normal" nanometer metrology lasers use ~1 milliwatt of laser power. Nanowatt and picowatt metrology is possible by making use of the heterodyne gain when the 'local oscillator' (LO) light is made much brighter than the signal. Quantum limited optical heterodyne detection is achieved when the photon noise of the LO is greater than the detector noise of the photodiode/op-amp. In theory 10 nm precision in 1 msec is possible with < 10⁷ photons/sec, which would be attenuated to 10 photons/s by the razor edge filter. Only a small fraction of these ten 1.06 μm photons per second will fall on the spectrometer pixels between 1.1 and 1.8 μm.
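Both requirements just quoted are simple ratios; the sketch below checks the fractional stability implied by 2 nm over 200 m and the metrology photon rate left after the razor edge filter, using the numbers from the text.

```python
# Laser stability needed to measure a 200 m path to < 2 nm:
path = 200.0                 # meters of uncommon path to be measured
jitter = 2e-9                # allowed measurement jitter, meters
frac_stability = jitter / path
print(f"required fractional stability ~ {frac_stability:.0e}")        # ~1e-11

# Metrology photons leaking toward the science detector after the razor edge filter:
metrology_rate = 1e7         # photons/s needed for ~10 nm precision in 1 msec (heterodyne)
blocking = 1e-6              # razor edge filter transmission at 1.064 um
print(f"leaked metrology photons ~ {metrology_rate * blocking:.0f} photons/s")   # ~10
```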

In addition to "relative" metrology with nanometer resolution, we need a quasi-absolute metrology system, because when the telescopes are moved, disconnecting and reconnecting the fibers can change their length by millimeters: a 0.5 mm change in 500 m of fiber is a 1 part per million change. This could easily happen by pulling on the fiber or by a small, 0.2 °C change in the fiber temperature. We plan to use an absolute metrology system to find the absolute path length after the telescope moves. The system can measure the optical path in the fiber and the whole internal metrology path to 200 nm in a 1-second integration. The fiber dispersion will be measured in the lab. We are currently not planning to recalibrate the dispersion every time we move the telescopes.

Figure 2: The fiber metrology concept. Also shown are the RazorEdge filter characteristics.

II.A.2.b Coarse Delay Line (Herriott multi-pass cell)

The main products of the interferometer are the measurements of the fringe visibility and differential phase. Visibility is lost with optical pathlength difference (OPD, or delay) when the combined light is broadband. It is also lost, even for narrow-band combined light, when there is OPD jitter due to vibrations or thermal changes in the fibers or other parts of the optical train.


Thus, delay compensation and stabilization are two of the main functions of an interferometer. For Galileo these are achieved by the combined effect of the coarse and continuous delay lines. In this section we describe the coarse delay line.

As mentioned above, a target that is 15 deg from zenith will have a delay of 130 m at the maximum baseline of 500 m, and the imaging considerations place the requirement on delay compensation at about 100 μm. In other words, the 'DC' part of the delay compensation must bring 130 m of delay down to less than 100 μm, a dynamic range of over 10⁶. Our coarse delay line, based on the Herriott multi-pass cell architecture, is designed to cover part of this range, bringing a maximum delay of 130 m down to a maximum delay of 4 m.

Fiber based delay lines are undesirable due to the large difference in dispersion between two arms with different lengths of fiber. However, a long, double-pass, air-gap delay line is also undesirable due to its large footprint. As such, the use of a Herriott-type multi-pass cell as a delay line is ideal for our purposes.

The basic Herriott multipass cell uses two spherical mirrors with an off-axis hole in one mirror acting as both an injection port and an exit port. If a properly focused light beam is injected into the cell through the injection port, the beam will resonate within the cell until it exits from the same hole without diffraction. As the beam resonates between the mirrors, the transverse position of the beam on each mirror advances by an angular step, θ, defined by the radius of curvature (ROC) of the mirrors and the distance between them, d. The number of round trip passes is increased by slightly increasing d, which directly increases θ. Each integer increase in the number of round trips effectively increases the path length through the cell by 2d. Thus small changes in d can result in large changes in delay.

JPL has extensive experience in the use of Herriott multipass cells. The Lab has recently developed, tested, and flown a Herriott multi-pass cell in the Tunable Laser Spectrometer (TLS) instrument on the Mars Science Laboratory. This spectrometer utilizes multiple injection and exit ports, on the near and far mirror respectively, to allow four distinct lasers to resonate inside the cell. With mirrors of 0.5 meter ROC, 227 mm diameter, and a separation of 228 mm, a total path length of 18.9 meters for each channel was obtained in a very compact package.

Figure 3: JPL-designed and built Herriott Multipass cell in the Tunable Laser Spectrometer (TLS). Images from Tarsitano and Webster, Applied Optics 2007.

A Herriott cell with a mirror separation between 1 and 2 meters can act as a delay line with path lengths of 4 meters and greater, where the maximum path length is limited only by the size and placement of the injection and exit holes. Our design calls for a maximum delay length of 130 meters with 2 channels.


The maximum expected number of reflections in the Galileo coarse delay line is approximately 74. We have received confirmation from Barr Associates (a division of Materion) that coating reflectivities of greater than 99.8% over the waveband 1.1–1.8 μm are within their capabilities. Thus, the minimum throughput is expected to be 0.998⁷⁴, or about 86.7%.
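The delay granularity and throughput follow directly from the cell geometry and coating numbers quoted above; a brief check (the round-trip counts chosen below are illustrative):

```python
# Herriott cell delay: each additional round trip adds ~2*d of path.
d = 2.0                                  # mirror separation, meters (design range is 1-2 m)
for round_trips in (2, 16, 32):          # illustrative configurations
    print(f"{round_trips} round trips -> ~{2 * d * round_trips:.0f} m of delay")

# Throughput after the maximum expected number of reflections:
reflectivity = 0.998                     # coating reflectivity quoted by the vendor
n_reflections = 74                       # maximum expected reflections in the coarse delay line
print(f"minimum throughput ~ {reflectivity**n_reflections:.2f}")     # ~0.86
```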

The nominal design of the coarse delay line requires mechanical movement of one mirror by 250 mm with 0.25 μm resolution. This can be accomplished with a PI M-414 High-Load Precision Stage. In addition, as the distance between the mirrors changes, the proper angle of beam injection and the angle of the exiting beam also change. Both input and output fibers will require a precision hexapod mechanism with about 15 arcminutes of range and 1 milliarcsecond resolution. The position of the input and exit ports remains the same regardless of mirror separation.

II.A.2.c Continuous Delay Line

The maximum uncompensated delay after the Herriott cell is 4 m. Our continuous delay line (CDL) design has the dynamic range and bandwidth to reduce this delay to nanometers, better than the levels required by Galileo. The CDL architecture follows the approach used by JPL at a number of ground based interferometers, as well as the Space Interferometry Mission. These all use a series of nested servo systems in order to achieve a large dynamic range and are capable of high bandwidth operation. Our baseline concept for the CDL is based on an existing brassboard optical delay line left over from the Space Interferometry Mission (SIM).

The SIM brassboard optical delay line uses three layers of actuation: a cart on a track for long travel, supporting a voice coil actuated cage containing the beam train optics, with one of the optics actively controlled by a PZT stack. With each layer of actuation providing about three orders of magnitude of dynamic range, the requirements for the Galileo CDL are easily met. The unique features of the SIM brassboard include preloaded wheels to maintain positive contact with the track, rugged construction, and light weight. In all these areas the SIM delay line exceeds the CDL requirements because it was designed for picometer interferometry and launch survivability. The brassboard ODL was taken through full launch qualification level vibration and thermal vacuum testing. Before and after such testing it was able to meet the demanding nanometer control requirements, as evidenced by the lab data shown in the figure, indicating the 1.4 nm stabilization capability of the unit. We plan to use this existing delay line with only a few modifications, mainly using longer rails (2 m instead of 1.5 m) to accommodate the larger travel required.

Figure 4: The SIM optical delay line brassboard is planned for use as the Continuous Delay Line (CDL). Three levels of actuation allow rugged, high-bandwidth, continuous delay actuation from meters down to a few nanometers. Right, lab data showing SIM brassboard optical delay line performance at < 2 nm rms, better than the requirements of Galileo.


Since the CDL actuation capability exceeds the Galileo requirements, we will be limited by the atmosphere. In controlling the CDL, the error signal will be based on knowledge from a combination of sensors. Knowledge of the required 'DC delay' will be based on target astrometry, external metrology, and internal 'absolute' metrology. Knowledge of the internal pathlength changes will come from internal 'relative' metrology. In both of these cases the residual error will be small compared with the atmospheric effects.
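The nested-servo idea (a fast, short-stroke stage that hands accumulated stroke off to progressively slower, longer stages) can be illustrated with a few lines of logic. The sketch below is a simplified picture of range offloading, not the actual Galileo control law; the stage ranges are representative assumptions, not design values.

```python
# Simplified nested-servo offload: the fast stage absorbs each correction, and when it
# nears the end of its stroke its accumulated position is handed to the next, slower stage.
# Ranges below are representative assumptions, not Galileo design values.
STAGES = [
    {"name": "PZT",        "range": 10e-6},   # ~tens of microns, fastest stage
    {"name": "voice coil", "range": 10e-3},   # ~centimeter-class stroke
    {"name": "cart",       "range": 2.0},     # meters of travel on the rail
]
positions = [0.0 for _ in STAGES]

def apply_correction(delta):
    """Apply a delay correction, offloading accumulated stroke to the slower stages."""
    positions[0] += delta                              # fast stage takes the correction first
    for i, stage in enumerate(STAGES[:-1]):
        if abs(positions[i]) > 0.8 * stage["range"]:   # nearing the end of this stage's stroke
            positions[i + 1] += positions[i]           # hand the offset to the next stage
            positions[i] = 0.0                         # recenter the fast stage

# Example: a slow 2 mm drift arriving in 1 um steps ends up carried by the voice coil.
for _ in range(2000):
    apply_correction(1e-6)
print({s["name"]: f"{p:.4g} m" for s, p in zip(STAGES, positions)})
```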

II.A.2.d The Beam Combiner and Metrology

The beam combiner receives separately the S- and P-polarized beams from the two telescopes. There are two combined beams, from the two sides of each beam splitter, for each polarization, so that in the end there are four combined beams. These are then dispersed using a prism and focused on a single detector. The detector sees fringes along the dispersion axis. Visibility and phase are obtained by modulating the delay using the CDL. Since the CDL is common to both the S and P beams, it is servoed based on one of them and the residual delay error in the other is compensated by a simple piezo driven 'delta delay line.'

The beam combiner also houses the free-space internal metrology beam launchers for the S and P beams. Using special masks, the metrology beams going to the fixed and mobile sides are separated. These beams go out, reflect off of fiducials, and return to the beam launchers where their phase is measured for metrology. Our beam combiner concept is shown in Figure 5, with the insets showing the metrology beam launcher and the compound beam structure. The compound beam contains the target light in the center, and internal metrology as two pairs of 'pencil' beams in a cross pattern outside it. JPL has already built the brassboard beam launchers, designed for the SIM mission, and demonstrated metrology accuracy at the picometer level, well beyond the needs of Galileo.

Figure 5: The beam combiner concept, showing the free space internal metrology scheme and dispersed fringe layout.

One aspect not shown in the figure is polarization control. While there will be a single polarization from the target arriving from each fiber, the output polarization will be linear but random and will need to be corrected. There are two options for the fiber: polarization maintaining and regular fiber. We prefer regular fiber because with polarization maintaining fiber the light that is launched into the wrong mode will have a large delay difference.


We will design polarization control into the system.

II.A.2.e The Fringe Detector

The modulated fringes are detected in a cryogenic infrared camera that is designed to operate in the atmospheric J and H transmission bands (1.17–1.33 microns and 1.49–1.78 microns, respectively). The detector is a Hawaii-2RG (H2RG) array from Teledyne Scientific and Imaging. The H2RG is a Mercury-Cadmium-Telluride (HgCdTe) detector with a bandgap engineered to about 2.5 microns cutoff wavelength. This HgCdTe layer is hybridized to a CMOS multiplexer which offers highly flexible readout modes well suited for high speed fringe sensing. The H2RG is an optimal detector, unsurpassed in overall performance in its range of operation:

- Quantum efficiency ≥ 85% at J and H, and ≥ 70% as short as V;
- Dark current < 0.1 e-/pix/sec for 100 K operation;
- Read noise < 5 e- rms with multiple non-destructive reads.

To save cost, we plan to use a so-called "engineering grade" detector, which has certain blemishes but offers sufficient properly operating real estate for our non-imaging application.

The detector cryostat will be cooled using a closed-cycle Joule-Thomson cooler. Preliminary estimates show that a Polycold Joule-Thomson compressor and cold-head system using the PT-14 gas blend will deliver about 10 Watts of cooling capacity at a temperature of 92 K, obviating the need for cryogens. The detector itself will be hot biased and temperature stabilized at about 100 K. At this operating point the detector dark current will be negligible (about 0.1 e-/s). The detector mount, cold baffles, and a 1.78 μm blocking filter will be mounted directly to the refrigerator cold head. The blocking filter keeps any thermal radiation between our longest operating band and the detector long wavelength cutoff at 2.5 μm from hitting the detector.

Figure 6: Left, the Teledyne Hawaii 2RG array has exceptional performance in the bands of interest. Right, a compact NIR camera cryostat similar to the one proposed is shown. Light enters through a window in the snout on right.

The entire 2k × 2k pixel H2RG detector is divided into 32 independent strips, each with its own readout tap. We will format the fringe spectra in identical locations along 4 of these strips, clocking each out using a simple subarray readout mode. All reads are conducted in synchronism with the fringe modulation. More complicated readout schemes can allow the spectra to be read at different rates if needed.


At nominal pixel rates of 400 kHz, a 1×100 pixel spectrum can be read at the required 200 Hz (maximum) frame rate, leaving adequate time for multiple non-destructive reads to average the effective read noise to ~5 e- rms, using for example the Fowler-8 scheme.
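The timing margin claimed above is easy to verify from the pixel rate and frame rate quoted in the text (Fowler-8 sampling is taken here to mean 8 read pairs, i.e. 16 non-destructive reads per frame):

```python
# Readout timing check for a 1x100 pixel subarray spectrum.
pixel_rate = 400e3        # pixels/s
pixels_per_read = 100     # one dispersed spectrum
frame_rate = 200.0        # required maximum frame rate, Hz

time_per_read = pixels_per_read / pixel_rate          # 250 us per non-destructive read
reads_per_frame = (1.0 / frame_rate) / time_per_read  # ~20 reads fit in one 5 ms frame
print(f"{time_per_read*1e6:.0f} us per read, {reads_per_frame:.0f} reads per frame")

# Fowler-8 needs 16 non-destructive reads (8 at the start, 8 at the end of the ramp),
# so it fits within the 200 Hz frame with margin.
print("Fowler-8 feasible:", reads_per_frame >= 16)
```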

II.A.2.f Computers

Computers and electronics will be chosen based on previous successful implementations of real-time systems for interferometers of similar scope and scale to Galileo. This greatly reduces development risk by choosing components and software layers which have already been proven to meet the timing and performance requirements for Galileo.

The real-time computing software will be based on an existing transportable framework developed specifically to allow developing and deploying full-featured, more reliable, and more flexible systems than is possible with one-off developments. Current deployments include the Keck Interferometer and the Palomar P1640 CAL interferometric wavefront sensor subsystem, as well as several testbeds at JPL, including APEP, which allows development of advanced coronagraph techniques.

The application software framework supports incremental development by organizing hardware control and higher-level functionality as loosely-coupled modules. Each component can be developed and tested standalone or in a subset of other components, which may include simulations of future modules. This greatly reduces development risk by allowing unit testing and component integration during the full development cycle.

The real-time operating system will be Xenomai, a Linux-based environment. This specific environment has been in continuous use on real-time systems at JPL for the last several years, and is specified as a cost-effective, high-performance solution meeting the timing requirements for Galileo. Other operating systems, including VxWorks, can be selected if required.

The Compact PCI (cPCI) bus and form factor will be used for the real-time control, telemetry, and command computers. The cPCI bus will be packaged into a rack-mount crate; similar hardware has been deployed for production use at Palomar Observatory on the P1640 CAL interferometric wavefront sensor subsystem.

II.A.2.g The Image Reconstruction Algorithm

If one had full UV coverage with both amplitude and phase information, then according to the van Cittert-Zernike theorem a single Fourier transform would reconstruct the image. Because we do not have the full phase information in the UV plane, the baseline image reconstruction approach we plan to use is a variation of the Gerchberg-Saxton-Fienup (GSF) algorithm. The following chart describes the overall flow.


In the traditional GSF algorithm, we set numerical constraints: in the image plane every pixel is forced to be real and positive, and zero outside a region where we know the target does not extend, and in the Fourier plane the values of the fringe amplitudes are set to the measured values. The fringe measurements from multiple spectral channels carry extra differential phase information. In fact, the unknown portion of the phases is what atmospheric turbulence introduces. For a given baseline, we model the phase shift due to the atmosphere for spectral channel i using an affine model, φᵢ_atmos = θ + kᵢ·d, where kᵢ is the wavenumber at the center of channel i, while θ and d parameterize the atmospheric effect. For the differential phase information across multiple spectral channels, we impose a functional constraint instead; i.e., we set the constraint on the phases in the UV plane while leaving an overall constant and slope unconstrained for the UV points of each baseline. The idea is to use the GSF iteration approach to drive the overall phase constant and slope toward the true values. To keep the overall constant and slope in the phase unconstrained, in the UV plane we use the phase of the Fourier transform of the image after imposing the constraint to estimate an overall phase and slope; i.e., we find the best overall phase and slope so that the measured complex visibilities match the Fourier transform of an image satisfying the constraint in the image plane.

The figure at the right side above is one of our preliminary results after 5000 iterations in reconstructing an image using complex visibility measurements with 1% RMS error in both the real and imaginary parts. The original image is shown at the left side. These two pictures are more than 95% correlated.

The Gerchberg-Saxton approach usually does not converge very fast. A better algorithm focusing on estimating the atmospheric effect parameters may be more efficient. We are currently researching this possibility.
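As a concrete illustration of the iteration described above, the sketch below implements a toy version of the modified GSF loop: image-plane positivity and support constraints, Fourier-plane amplitude replacement, and, in place of absolute phases, a per-baseline fit of an overall phase offset and slope across spectral channels. It is a simplified stand-in for the proposed algorithm, not the flight code: the synthetic target, the fully sampled UV grid, and the convention that each row of the grid stands in for the spectral channels of one baseline are all assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Toy setup: synthetic target and idealized, fully sampled UV data ---
n = 64
y, x = np.mgrid[:n, :n]
truth = ((abs(x - 32) < 10) & (abs(y - 32) < 3)).astype(float)    # "satellite body"
truth += ((abs(x - 32) < 2) & (abs(y - 32) < 14)) * 0.5           # "solar panels"
support = np.zeros((n, n), bool)
support[16:48, 16:48] = True                                      # known support region

V_true = np.fft.fft2(truth)
measured_amp = np.abs(V_true)                  # |V| (from the V^2 measurements)

# Differential phase data: each "baseline" (row) is known only up to an unknown
# atmospheric offset theta and slope d across its channels k_i (affine model above).
k = np.arange(n)
atmos = rng.uniform(-0.5, 0.5, (n, 1)) + rng.uniform(-0.02, 0.02, (n, 1)) * k
measured_phase = np.angle(V_true) + atmos

def constrain_phase(meas, model):
    """Shift each baseline's measured phases by the best-fit theta + k_i*d so that
    they agree with the current image's phases (uses only differential phase)."""
    diff = np.angle(np.exp(1j * (model - meas)))       # wrapped residual per UV point
    A = np.vstack([np.ones(n), k]).T                   # design matrix [1, k_i]
    coef, *_ = np.linalg.lstsq(A, diff.T, rcond=None)  # independent fit per row
    return meas + (A @ coef).T

# --- Modified GSF iteration ---
img = rng.random((n, n)) * support                     # random starting image
for _ in range(300):
    F = np.fft.fft2(img)
    phase = constrain_phase(measured_phase, np.angle(F))
    F = measured_amp * np.exp(1j * phase)              # Fourier-plane constraint
    img = np.real(np.fft.ifft2(F))
    img = np.clip(img, 0, None) * support              # image-plane constraints

# The differential-phase constraint leaves the absolute position free, so compare
# against the truth allowing for a translation.
xc = np.abs(np.fft.ifft2(np.fft.fft2(img) * np.conj(np.fft.fft2(truth))))
shift = np.unravel_index(np.argmax(xc), xc.shape)
print("reconstruction done; best-match translation (pixels):", shift)
```

Because only differential phase is used, the reconstruction is recovered up to a translation, which is exactly why no ultra-precise phase center (5 nrad pointing) is needed.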

II.A.3 Key Features

The key features of our Task 2 proposal are:

1) Use of reflected target light in 2 NIR atmospheric windows, 1.2 μm and 1.65 μm, to increase SNR. Both spectral bands use the same PCF fiber. Use two fibers per arm for both polarizations.

2) End-to-end realtime internal metrology in 3 legs. There are two intermediate fiducials between the beam combiner and the optical fiducial at the telescope.

3) Realtime metrology concurrent with starlight by doing metrology at nanowatt-picowatt levels.

4) Absolute metrology with 200-300 nm accuracy, used when moving telescopes.

5) NIR fringe detection with the best detector currently available. Progress is being made in this area and lower read noise detectors may be available in a few years.


6) A Herriott multi-bounce delay line that will be adjustable in increments of ~4 m, followed by a continuous delay line that has 4 m of continuous motion with nanometer resolution. The Herriott cell will have specialty coatings with reflectivity ~99.6-99.8% in the 1.2 and 1.65 μm bands.

7) We have developed an imaging algorithm that uses V² and differential phase, a modification of the Gerchberg-Saxton-Fienup approach that uses the traditional "numerical" constraints of positivity and zero outside the support in the image plane and the numerical constraint of V² in the Fourier plane, augmented by a "functional" constraint that uses differential phase without the need for an ultra-precise phase center. This approach relaxes the need to point the "array" with 5 nrad accuracy.

8) We have introduced a new concept of mosaic interferometric imaging that, in a future operational version of this facility, could image a GeoSat 100 times faster than two 1.5 m telescopes.

II.A.4 Requirements Flow Down

This is our first cut at estimating the SNR of our version of the Galileo interferometer and its ability to reconstruct images. We assumed the satellite is an 11 Vmag object with a solar-like spectrum, making it slightly brighter in the near IR: 9.6 Hmag (1.6 μm) and 9.8 Jmag (1.2 μm). The assumed photon throughput calculation appears in Figure 7 below. The atmospheric coherence time τ₀ was assumed to be 10 msec at λ = 0.55 μm and scaled to 1.2 and 1.65 μm. The read noise for the detector was assumed to be 5 e- per pixel per sample. It would take on average 300 seconds to obtain a standard deviation (σ) in V² of 3e-4. We simulated the effect of 3e-4 noise in V² on image reconstruction and the results are in the following figures. Note that noise in V² is very different than noise in V.

Figure 7: Basic throughput and photometric calculation for the Galileo interferometer.

We expect about 230 photons in one τ₀ (40 msec) at 1.65 μm, both polarizations (1.8 m telescopes). This resulted in a V² noise of 0.033 in one τ₀ and a V² noise of 3.6e-4 in 300 sec. If the visibility was 0.03, the SNR would be 0.20 in 40 msec. V² is the product of a complex phasor and its complex conjugate. Differential phase is the product of two different phasors; they would be expected to have similar noise properties. At low SNR, the variance of the V² estimator is independent of V. However, when we generate an image, we Fourier transform the phasor, not the square of the phasor. While the noise in V² is independent of V, the noise in V is not. After applying the proper corrections, we arrive at the following simulated images that examine the effects of various levels of noise in V² and V on an image of a satellite (in this case PanAmSat).
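The 300-second figure follows from averaging independent coherent samples; a rough check with the per-τ₀ numbers quoted above (assuming the per-sample V² errors average down as 1/√N):

```python
import math

# Averaging of per-sample V^2 noise over a 300 s integration.
sigma_per_sample = 0.033      # V^2 noise in one coherent sample (one tau_0)
tau0 = 0.040                  # coherence time at 1.65 um, seconds
t_int = 300.0                 # total integration per baseline, seconds

n_samples = t_int / tau0                          # 7500 independent samples
sigma_avg = sigma_per_sample / math.sqrt(n_samples)
print(f"V^2 noise after {t_int:.0f} s ~ {sigma_avg:.1e}")   # ~3.8e-4, close to the quoted 3.6e-4
```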

Galileo Photometrics (contents of Figure 7)

Throughput budget:
Component            #      Per surface   Transmission    Comments
Telescope            4      0.92          0.716           enh. Alum for UV AO RGS
AO transmission      1      0.6           0.600
AO Strehl            1      0.6           0.600
Fiber inject         4      0.98          0.922           after AO, all silver or better
Fiber mode match     1      0.92          0.920           using PIAA
1 polarization       1      1             1.000
Fiber attenuation    1.5    0.6           0.813           fiber loss in dB/km and # km
Met injection loss   4      0.97          0.885
Herriott DL          70     0.997         0.810
Continuous DL        4      0.98          0.922
Beam combiner        10     0.98          0.817
Detector QE          1      0.8           0.800
Det optics           10     0.98          0.817
Total transmission                        0.063 @ 1.6 um

Band photometry:
Band     lambda (um)   dlambda (um)   e0 (W/m2/um)   ph/s/m2/um   ph/s/m2/nm   ph/s/m2
J band   1.26          2.60E-01       3.40E-09       2.16E+10     2.16E+03     5.61E+09
H band   1.64          3.30E-01       1.18E-09       9.75E+09     9.75E+02     3.22E+09

Target (11 Vmag, solar spectrum) and atmosphere:
Band   Mag difference   Mag @ band   Atmos tau0 (ms)   Strehl
V-J    1.148            9.852        25.5              0.5
V-H    1.429            9.571        37.4              0.6

Telescope: Keck Outrigger, 1.8 m diameter, 2.54 m2 area

Calculation of photons per tau0 (per polarization, per beam splitter side):
Band   lambda (um)   Mag    ph/sec/m2   phot/tau0, 2 tel   x Strehl /25   photons   ph/pol/BS side
J      1.20          9.85   642806      83431              2781           175       44
H      1.65          9.57   477416      90805              3632           228       57
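The short sketch below checks the averaging arithmetic quoted above and illustrates the propagation from V² noise to V noise; the 1/sqrt(N) averaging and the first-order propagation sigma(V) ~ sigma(V²)/(2V) are our assumptions for the illustration, not results taken from the proposal's simulation.

    import math

    sigma_v2_per_tau0 = 0.033    # per-coherence-time V^2 noise quoted in the text
    tau0_s = 0.040               # coherence time used above (40 msec)
    t_int_s = 300.0              # integration time per baseline

    n_samples = t_int_s / tau0_s                        # ~7500 independent tau0 samples
    sigma_v2_avg = sigma_v2_per_tau0 / math.sqrt(n_samples)
    print(f"averaged V^2 noise: {sigma_v2_avg:.1e}")    # ~3.8e-4, close to the quoted 3.6e-4

    # First-order propagation from V^2 to V (sigma_V ~ sigma_V2 / (2 V)): the
    # fractional noise in V depends on V even though the noise in V^2 does not.
    for v in (0.5, 0.1, 0.03):
        sigma_v = sigma_v2_avg / (2.0 * v)
        print(f"V = {v:4.2f}: sigma_V ~ {sigma_v:.1e}")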


At least by eye, an image reconstructed with full UV plane coverage and a V² noise of 3e-4 is quite good, similar to an image reconstructed with uniform noise in V of 1%. These simulations were only intended to study noise propagation from the V² measurement to the image; we did not simultaneously run the functionally constrained Fienup algorithm using differential phase. The differential phase simulation assumed 0.01 noise in V.

The proposed interferometer measures about 50 spectral channels at once: ~25 at 1.2 μm and ~25 at 1.65 μm. A 10 m satellite resolved to 10 cm has 10^4 pixels and would require ~5,000 points in the UV plane with visibility and phase (each UV sample supplies two numbers, an amplitude and a phase). With ~50 spectral channels per baseline, this means the telescope has to be moved to at least 100 different locations. The total integration time (on target) to generate this image would be 500 minutes, or a bit over 8 hours. As we mentioned earlier, a future operational facility with one medium sized telescope, say a 3.6 m, combined with 4 movable 1.8 m telescopes, could produce an image of similar quality in less than one hour.
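The arithmetic behind these observing-time numbers is summarized in the short sketch below; it simply restates the counts given in the text (pixel count, UV samples, spectral channels, 300 sec per baseline) rather than introducing any new model.

    # Illustrative bookkeeping for the UV-coverage and observing-time estimate above
    satellite_size_m = 10.0
    resolution_m = 0.10
    pixels = (satellite_size_m / resolution_m) ** 2             # 1e4 resolution elements
    uv_points_needed = pixels / 2                               # amplitude + phase per UV sample -> ~5000
    spectral_channels = 50                                      # ~25 at 1.2 um + ~25 at 1.65 um
    telescope_positions = uv_points_needed / spectral_channels  # ~100 baselines
    t_per_baseline_s = 300.0                                    # for sigma(V^2) ~ 3e-4
    total_s = telescope_positions * t_per_baseline_s
    print(pixels, uv_points_needed, telescope_positions, total_s / 3600.0)
    # -> 10000.0 5000.0 100.0 and ~8.3 hours of on-target integration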

Almost all geo-satellites will have the body pointed at the Earth, but the solar panels will follow the Sun. An object that is constantly changing its morphology is a much worse problem for an aperture synthesis interferometer than for an ordinary camera. In an ordinary camera, or in a set of dilute-aperture telescopes where all the UV points are measured simultaneously, motion of part of the object results in blurring of the moving parts, but the parts of the object that are not moving will not be degraded. When the UV plane is measured serially in time, however, as is the case when the telescopes are moved to cover the UV plane, major changes in the morphology can ruin the entire image. The solar panels as seen from the Earth rotate the least when a GeoSat is observed at midnight, because the projected width of the panels changes as cos(θ), with θ measured from local midnight. At twilight the projected width changes as sin(θ), and we might have only 10~20 min to take an image. If we assume we have 2 hrs around midnight while the GeoSat's morphology is stationary, it will take at least 5 nights to generate the above image. (This does not include the time needed to transport the telescope to 100 different positions.)
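The following toy calculation is our own illustration of the cos(θ)/sin(θ) argument above, under the simplifying assumption that the panels' projected width goes as cos of the hour angle from local midnight; the specific numbers are only meant to show why the usable window is hours around midnight but only tens of minutes near twilight.

    import math

    def width_variation(theta_center_deg, window_min, n=200):
        """Peak-to-peak variation of the projected panel width (|cos(theta)|, as a
        fraction of full width) over a time window centered at the given hour angle.
        The hour angle advances 360 deg per 24 h, with theta = 0 at local midnight."""
        rate_deg_per_min = 360.0 / (24.0 * 60.0)
        half_deg = 0.5 * window_min * rate_deg_per_min
        widths = [abs(math.cos(math.radians(theta_center_deg + half_deg * (2 * i / (n - 1) - 1))))
                  for i in range(n)]
        return max(widths) - min(widths)

    print(f"2 hr around midnight: {width_variation(0.0, 120):.3f}")   # ~0.034 of full width
    print(f"20 min near twilight: {width_variation(80.0, 20):.3f}")   # ~0.086, a comparable change in 1/6 the time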

II.A.5 Summary, Where We Exceed Government Specs

By using both polarizations we increase the number of photons per coherence time by 2X, and in photon-starved mode that is a 4X decrease in integration time. By using both 1.2 μm and 1.6 μm light we decrease the required integration time by a further 2X, for a total of 8X faster operation. The use of 1.8 m rather than 1.5 m telescopes also increases the number of photons by 1.4X, potentially a 2X improvement in integration time.
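The sketch below spells out the scaling assumed in the paragraph above: in the photon-starved regime the fringe SNR per coherent sample scales linearly with the photon count, so integration time scales as the inverse square of any gain in photons per sample, while the two spectral bands are treated as independent measurements that are averaged (a linear reduction in time). This is our reading of the proposal's argument, expressed as simple arithmetic.

    # Integration-time speed-up factors in the photon-starved regime (illustrative)
    pol_photon_gain = 2.0                         # both polarizations -> 2x photons per coherent sample
    time_gain_pol = pol_photon_gain ** 2          # SNR ~ N  =>  time ~ 1/N^2  =>  4x
    time_gain_dual_band = 2.0                     # two independent bands averaged -> 2x less time
    total_gain = time_gain_pol * time_gain_dual_band   # 8x faster operation

    area_gain = (1.8 / 1.5) ** 2                  # 1.8 m vs 1.5 m apertures -> ~1.44x photons
    time_gain_aperture = area_gain ** 2           # potentially ~2x less integration time

    print(total_gain, round(area_gain, 2), round(time_gain_aperture, 2))   # 8.0 1.44 2.07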

At the proposer's briefing, MIT/LL said that closure phase is a quantity that is invariant to translation of the object being imaged, and that differential phase is not. This may be the reason why the government has a 5 nrad telescope pointing requirement. We have developed a variant of the Fienup algorithm in which differential phase is incorporated as a "functional" rather than a numerical constraint. This makes our version of differential phase invariant to target translation.
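As a brief justification of the translation-invariance statement above (our own standard derivation, not text from the briefing): by the Fourier shift theorem, translating the object by \(\Delta\mathbf{x}\) multiplies each visibility by a phase ramp,
\[
  \tilde V(\mathbf{u}) \;\to\; \tilde V(\mathbf{u})\, e^{-2\pi i\, \mathbf{u}\cdot \Delta\mathbf{x}},
  \qquad \mathbf{u} = \mathbf{B}/\lambda .
\]
On a closure triangle with \(\mathbf{u}_{12}+\mathbf{u}_{23}+\mathbf{u}_{31}=0\) the added ramps cancel,
\[
  \phi_{12}+\phi_{23}+\phi_{31} \;\to\; \phi_{12}+\phi_{23}+\phi_{31}
  - 2\pi\,(\mathbf{u}_{12}+\mathbf{u}_{23}+\mathbf{u}_{31})\cdot\Delta\mathbf{x}
  \;=\; \phi_{12}+\phi_{23}+\phi_{31},
\]
whereas the differential phase between two wavelengths on the same baseline \(\mathbf{B}\) picks up
\[
  \Delta\phi \;\to\; \Delta\phi - 2\pi\,\mathbf{B}\cdot\Delta\mathbf{x}\!\left(\tfrac{1}{\lambda_1}-\tfrac{1}{\lambda_2}\right),
\]
which vanishes only if the phase center \(\Delta\mathbf{x}\) is known to high accuracy.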

II.B Metric Checklists

Metric                                       Preliminary Govt. Value    Proposer Value
Aperture Diameter                            1.5 meters                 1.8 meters
Telescope Throughput                         70%                        72%
Polarization Purity Maintenance              95%                        95%
AO Strehl                                    0.4                        0.6 @ 1.65 μm
Coupling Efficiency into Optical Fiber       48.5%                      92% with PIAA
Mobile Telescope Overall Efficiency          15.5%                      22%
Pointing/Tracking Control Accuracy           5 nrad                     33 nrad (λ/20D @ 1.2 μm)
Minimum/Maximum Coelevation Angle            17.5°/62.5°                20°/90°
Pointing/Tracking Control Slew Rate          35 μrad/sec                100 μrad/sec
Azimuthal Angle Range                        10°                        360°
Absolute Position Measurement Accuracy       ≤ 0.5 mm                   115 μm
Absolute Spatial Coverage of System          500 m diameter             500 m
Max. Time to Move and Restart                5 min                      5 min
Max. Time to Acquire Position Meas.          30 sec                     1 min

Additional Information                       Proposer Value
End to End Photon Throughput to Computer     6.3%
H mag for 11 Vmag (solar spectrum) object    9.57 Hmag
Photonic Crystal Fiber Passband              1.1-1.8 μm, both polarizations

II.C Program Structure

The program schedule appears elsewhere in this proposal. The phase 1 program is structured to protect phase 2 against schedule risk in procuring long-lead specialty items and in achieving first fringes during system integration and test. The phase 1 milestones are as follows:

1. Project kickoff (KO) with the sponsor

2. Team Meeting 1: Detailed Operational Concept (KO + 2 wks.)

3. Team Meeting 2: Preliminary requirements at WBS Level 3 (KO + 5 wks.)

4. Team Meeting 3: Preliminary design and refined WBS Level 3 Requirements (KO + 9 wks.)

5. Project SRR with the sponsor (KO + 13 wks.)

The tasks between these milestones are described elsewhere in this proposal.

II.D Prior Relevant Work

Our JPL team pioneered modern stellar interferometry in the U.S. JPL and USNO together built the first two-telescope long-baseline stellar interferometers in the U.S. Three operational long-baseline interferometers were built on Mt. Wilson prior to 1990, and three after 1990: the Palomar Testbed Interferometer, with a 120 m baseline at 2.2 μm; the NOI at Flagstaff, with a ~400 m baseline and multiple beam combination; and the Keck Interferometer, linking the two 10 m telescopes on Mauna Kea. The USNO part of this team has extensive experience in precision clocks and in the use of long-distance optical fibers to synchronize clocks at different locations to ~1 part in 10^15.

The members of this team, in developing the technology for the Space Interferometry Mission (SIM) over the last decade, pushed the state of the art in many areas of optical astronomy. A few of our accomplishments include picometer displacement metrology, picometer Michelson interferometry, nanometer-class optical path stabilization, and micrometer-class absolute metrology. We demonstrated star tracking at the level of a few tens of micro-arcseconds, again the best in the world.

Figure 8: The internal metrology beam launcher brassboard, developed for the Space Interferometry Mission (SIM), demonstrated picometer differential (a-minus-b) metrology accuracy in various testbeds. Since it is essentially identical to the architecture planned for our Galileo proposal, we plan to modify and use two of the existing SIM internal metrology beam launcher brassboards.

Specific to Galileo, we have already built and flight qualified metrology beam launchers and continuous delay lines originally meant for the SIM program but now usable on this project. These were shown in figures in the Task 2 design narrative. Here we show the internal metrology beam launcher in more detail, as well as the astrometric beam combiner. The astrometric beam combiner featured all the components of our proposed beam combiner, but operated at the picometer level and included precision angle tracking within it. It was designed to survive launch. While many aspects of flight qualification are not necessary for Galileo, they do demonstrate this group's ability to build interferometers that are not only the most precise in the world, but also rugged and reliable enough for field use.


Figure 9: The astrometric beam combiner built by JPL for the Space Interferometry Mission (SIM) was designed for picometer interferometry, with requirements surpassing those needed for Galileo. It is shown on the right as it is being prepared for flight qualification vibration testing.

Figure 10: The Micro-arcsecond Metrology (MAM) Testbed, built by the JPL team for the Space Interferometry Mission, included a pseudo star and test article interferometer and demonstrated micro-arcsecond stellar interferometry.

II.E Risks and Risk Reduction

In phase 1, we see essentially no risk.

In phase 2, the principal risk is schedule risk. Cost risk is significantly reduced because we will be borrowing the

telescope free of charge from USNO for the demonstration.