Research Article
Optical Flow Inversion for Remote Sensing Image Dense Registration and Sensor's Attitude Motion High-Accuracy Measurement
Chong Wang,1,2 Zheng You,1,2 Fei Xing,1,2 Borui Zhao,1,2 Bin Li,1,2 Gaofei Zhang,1,2 and Qingchang Tao1,2
1 Department of Precision Instrument, Tsinghua University, Beijing 100084, China
2 The State Key Laboratory of Precision Measurement Technology and Instruments, Tsinghua University, Beijing 100084, China

Correspondence should be addressed to Fei Xing; xingfei@mail.tsinghua.edu.cn

Received 18 September 2013; Revised 4 November 2013; Accepted 4 November 2013; Published 16 January 2014

Academic Editor: Bo Shen

Copyright © 2014 Chong Wang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
It has been discovered that image motions and optical flows usually become much more nonlinear and anisotropic in space-borne cameras with a large field of view, especially when perturbations or jitters exist. The phenomenon arises from the fact that the attitude motion greatly affects the image of the three-dimensional planet. In this paper, utilizing these characteristics, an optical flow inversion method is proposed for high-accuracy measurement of remote sensor attitude motion. The principle of the new method is that angular velocities can be measured precisely by rebuilding certain nonuniform optical flows. The primary step of the method is to determine the relative displacements and deformations between the overlapped images captured by different detectors; a novel dense subpixel image registration approach is developed toward this goal. On that basis, the optical flow can be rebuilt and high-accuracy attitude measurements are successfully fulfilled. In the experiment, a remote sensor and its original photographs are investigated, and the results validate that the method is highly reliable and highly accurate over a broad frequency band.
1. Introduction
For remote sensors in dynamic imaging, one important technology is image motion compensation. Actually, determining image motion velocity precisely is a very hard problem. In [1, 2], optical correlators are utilized to measure image motion in real time based on a sequence of mildly smeared images with low exposure. This technique is appropriate to situations in which the whole image velocity field is uniform. Some other blind motion estimation algorithms [3–5] have been applied to image postprocessing; they can roughly detect inhomogeneous image motion but lack real-time performance because of their complexity. As for space imaging, in order to avoid motion blurring, image motion velocity needs to be computed in real time according to the current physical information about the spacecraft's orbit and attitude motion, which can be obtained by space-borne sensors such as star trackers, gyroscopes, and GPS.
Wang et al. developed a computational model for image motion vectors and presented an error budget analysis in [6]. They focused on small field of view (FOV) space cameras used in push-broom imaging with small attitude angles. In that situation the nonlinearity of the image motion velocity field is not significant. However, for cameras with larger FOV, image motion velocity fields are definitely nonlinear and anisotropic, because the geometry of the planet greatly modulates the moving images. Under these circumstances, the detectors need to be controlled separately to keep their time series synchronized with the instantaneous image velocities.
Hindawi Publishing Corporation, Mathematical Problems in Engineering, Volume 2014, Article ID 432613, 16 pages, http://dx.doi.org/10.1155/2014/432613

The time-phase relations between the photos belonging to different detectors are affected by optical flows, which are uniquely determined by the behavior of the image velocity field over a specific period. Several phenomena of moving-image variation and distortion due to optical flow have been reported [7–10]. References [7, 8] describe the camera of NASA's High Resolution Imaging Science Experiment (HiRISE) aboard the Mars Reconnaissance Orbiter (MRO), which photographs Mars at a resolution of 0.3 m/pixel. Fourteen staggered parallel CCDs, overlapped by 48 pixels at each end, make up the entire field of view. Although adjacent detectors overlap by equal numbers of physical pixels, their lapped image pixels are not equal and vary with time, because spacecraft jitters cause undulating optical flows within the interlaced areas [8]. In addition, we found that when large FOV remote sensors perform stereoscopic imaging with large pitch angles, the lapped images belonging to the marginal detectors are bound to exceed or lose several hundred pixels compared to their physical overlaps; furthermore, this unexpected quantity decreases significantly for the detectors mounted at the central region of the focal plane.
Although nonuniform optical flow brings many troubles to image processing, such as registration, resampling, interconnection, and geometrical rectification, it permits us to measure the spacecraft attitude motion with very high accuracy over a broad bandwidth, a target that is nearly impossible for conventional space-borne sensors to realize. Precision attitude motion measurement is very useful for remote sensing image processing, especially for image restoration from motion blurring, as studied in [11, 12]. Combining the measurements with optical flow models, the dynamic point spread functions (PSF) can be estimated and set as the convolution kernels in nonblind deconvolution algorithms.
The behavior of optical flow characterizes the entire two-dimensional flow field of an image's motion and variation. In [13], optical flow estimation based on image sequences of the same aurora determines the flow field and thereby provides access to the phase space, the key information for understanding the physical mechanism of the aurora. To improve the accuracy of optical flow estimation, a two-step matching paradigm is applied in [14]: first, a coarse distribution of motion vectors is measured with a simple frame-to-frame correlation technique, the digital symmetric phase-only filter (SPOF); after that, a subpixel-accuracy estimate is achieved by sequential tree-reweighted max-product message passing (TRW-S) optimization. Similarly, Sakaino [15] overcame the disadvantages of optical flow determination when moving objects of different shapes and sizes move against a complicated background, where the image intensity between frames may violate the common assumption of image brightness constancy, by introducing brightness change models as constraints in place of the regular constancy assumption. However, unlike the case of continuous image sequences, if we merely obtain several images of the identical moving objects captured by different detectors with long intervals, the former techniques do not work well for optical flow estimation, for lack of information about the imaging process of the instrument.
In this paper, a new optical flow inversion method is proposed for precise attitude measurement. Unlike the situations in [13–15], image sequences of video do not exist for transmission-type remote sensors; instead, there are image pairs of the same earth scene captured by different TDI CCD detectors in push-broom fashion. The time intervals between the independent image formations corresponding to the overlapped detectors are much longer than the interval between sequential frames in video, whose frame rates usually exceed tens of frames per second (fps). However, we can model optical flows based on the working mechanism of the instrument and image processing techniques, rather than estimating them from frame sequences of a specific detector. The contents of this paper are organized as follows: in Section 2, an analytical model of the image motion velocity field is established, which is applicable to dynamic imaging of the three-dimensional planet surface by large FOV remote sensors. The phenomenon of moving image deformation due to optical flow is investigated in Section 3; based on a rough inversion of optical flow, a novel method for dense image registration is developed to measure the subpixel offsets between the lapped images captured by adjacent detectors. In Section 4, an attitude motion measuring method based on precise optical flow inversion is studied, and the results of the experiment support the whole theory.
2. Image Velocity Field Analysis
Suppose that a large FOV camera is performing push-broom imaging of the earth; the scenario is illustrated in Figure 1. The planet's surface cannot be regarded as a local plane but as a three-dimensional ellipsoid, since it may greatly influence the image motion and time-varying deformation when complicated relative motion exists between the imager and the earth.

In order to set up the model of space imaging, some coordinate systems need to be defined as follows:
(1) $I(O_e\text{-}XYZ)$: the inertial frame of the earth. For convenience we choose the J2000 frame here. The origin $O_e$ is located at the earth center.

(2) $C(o_s\text{-}x'_1x'_2x'_3)$: the frame of the camera. Axis $o_sx'_3$ is the optical axis, and the origin $o_s$ is the center of the exit pupil.

(3) $O(o_s\text{-}u_1u_2u_3)$: the orbit frame. Axis $o_su_3$ passes through the center of the earth, and axis $o_su_2$ is perpendicular to the instant orbit plane.

(4) $B(o_s\text{-}x_by_bz_b)$: the body frame of the satellite.

(5) $P(o\text{-}xyz)$: the frame of photography. The origin $o$ is the center of the photo. Axis $ox$ points in the column direction and axis $oy$ points in the row direction.

(6) $F(o'\text{-}x'y'z')$: the frame of the focal plane. Axes $o'x'$ and $o'y'$ lie in the focal plane; they are respectively parallel to $o_sx'_2$ and $o_sx'_1$. Axis $o'z'$ coincides with the optical axis.

(7) $E(O_e\text{-}x_ey_ez_e)$: the Terrestrial Reference Frame (TRF). Axis $O_ey_e$ points to the North Pole, and axis $O_ex_e$ passes through the intersection of the Greenwich meridian and the equator.
According to Figure 1, $\partial_0$ is the ground track of the satellite, and $\partial_1$ and $\partial_2$ are the ground traces corresponding to two fixed boresights in the FOV, which are far away from $\partial_0$ when the imager holds a large attitude angle.

Figure 1: The analysis of dynamic imaging for the three-dimensional planet.

Obviously, the shapes and lengths of $\partial_1$ and $\partial_2$ also differ notably during push-broom imaging, which implies that the geometrical structure of the image is time-varying as well as nonuniform. Furthermore, it will be seen later that the deforming rates depend mainly on the planet's apparent motion as observed by the camera.
Considering an object point $p$ on the earth, its position vector relative to $O_e$ is denoted as $\vec\rho_p$. As a convention in the following discussions, ${}^{I}\vec\rho_p$ represents the vector measured in frame $I$, and accordingly ${}^{C}\vec\rho_p$ is the same vector measured in frame $C$. We select one unit vector $\vec\tau$ which is tangent to the surface of the earth at $p$. Let $\vec r(x_1,x_2,x_3)$ be the position vector of $p$ relative to $o_s$; then ${}^{C}\vec r$ and ${}^{C}\dot{\vec r}$ characterize the apparent motion of $p$. Assume that the image point $p'$ is formed on the focal plane with coordinates $(x'_1,x'_2,x'_3)$ in frame $C$. Generally, the optical systems of space cameras are well designed and free from optical aberrations, and the static PSF is approximate to the diffraction limit [16, 17]; thus, following [18], we have

$$x'_i=\beta x_i\quad(i=1,2),\qquad x'_3=f',\tag{1}$$

where $f'$ is the effective focal length, the lateral magnification of $p'$ is $\beta=(-1)^{m-1}\,f'/(\vec r\cdot\vec e_3)$, $m$ is the number of intermediate images in the optical system, and $\vec e_i$ $(i=1,2,3)$ is the basis of $C$.

Let $\vec r_s$ be the position vector of the satellite relative to $O_e$; then $\vec r=\vec\rho-\vec r_s$. In imaging, the flight trajectory of the satellite platform in $I$ can be treated as a Keplerian orbit, as illustrated in Figure 2. Given the orbit elements ($i_0$, inclination; $\Omega$, longitude of the ascending node; $\omega$, argument of perigee; $a$, semimajor axis; $e$, eccentricity; $M_t$, mean anomaly at epoch), we implement the Newton–Raphson method to solve (2) and get the eccentric anomaly $E$ from the given mean anomaly $M_t=M_0+n(t-t_0)$, where $n=2\pi/P$ and $P$ is the orbit period [11]:

$$M_t-(E-e\sin E)=0.\tag{2}$$
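The Newton–Raphson solution of (2) can be sketched as follows. This is a minimal illustration rather than the authors' flight code; the function name, starting guess, and tolerances are our own choices.

```python
import math

def eccentric_anomaly(M_t, e, tol=1e-12, max_iter=50):
    """Solve Kepler's equation M_t - (E - e*sin(E)) = 0 for the
    eccentric anomaly E by Newton-Raphson, starting from E0 = M_t."""
    E = M_t
    for _ in range(max_iter):
        f = M_t - (E - e * math.sin(E))        # residual of Eq. (2)
        dE = f / (1.0 - e * math.cos(E))       # Newton step: -f / f'(E)
        E += dE
        if abs(dE) < tol:
            break
    return E
```

For the near-circular orbits of earth-observation satellites ($e \ll 1$), the iteration converges in a handful of steps from the initial guess $E_0 = M_t$.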
Figure 2: Orbital motion of the remote sensor.

In frame $O$,

$$ {}^{O}\vec r_s=\begin{pmatrix}a(\cos E-e)\\ b\sin E\\ 0\end{pmatrix},\qquad {}^{O}\vec v_s=\frac{n}{1-e\cos E}\begin{pmatrix}-a\sin E\\ b\cos E\\ 0\end{pmatrix}.\tag{3}$$
The coordinate transformation matrix between $O$ and $I$ is

$$\mathbf T_{OI}=\begin{pmatrix} C_\omega C_\Omega-S_\omega C_{i_0}S_\Omega & -S_\omega C_\Omega-C_\omega C_{i_0}S_\Omega & S_{i_0}S_\Omega\\ C_\omega S_\Omega+S_\omega C_{i_0}C_\Omega & -S_\omega S_\Omega+C_\omega C_{i_0}C_\Omega & -S_{i_0}C_\Omega\\ S_\omega S_{i_0} & C_\omega S_{i_0} & C_{i_0}\end{pmatrix}.\tag{4}$$

For simplicity we write $C_\alpha=\cos\alpha$, $S_\alpha=\sin\alpha$.

In engineering, the coordinate transfer matrix $\mathbf T_{OI}$ can also be derived from real-time GPS measurements. The base vectors of frame $O$ in $I$ are $\hat u_3=-{}^{I}\vec r_s/|\vec r_s|$, $\hat u_2=({}^{I}\vec v_s\times{}^{I}\vec r_s)/|\vec v_s\times\vec r_s|$, and $\hat u_1=\hat u_2\times\hat u_3$; then $\mathbf T_{OI}=(\hat u_1\ \hat u_2\ \hat u_3)^{-1}$ and

$${}^{I}\vec r_s=\mathbf T_{OI}\cdot{}^{O}\vec r_s,\qquad {}^{I}\vec v_s=\mathbf T_{OI}\cdot{}^{O}\vec v_s.\tag{5}$$
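Building the orbit-frame basis from an instantaneous GPS state can be sketched as below; the helper names are ours, and we follow the text's formula $\mathbf T_{OI}=(\hat u_1\ \hat u_2\ \hat u_3)^{-1}$, noting that for an orthonormal basis the inverse is simply the transpose.

```python
import numpy as np

def orbit_frame_basis(r_s, v_s):
    """Base vectors of the orbit frame O expressed in the inertial
    frame I, built from an instantaneous GPS state (r_s, v_s)."""
    u3 = -r_s / np.linalg.norm(r_s)       # points from satellite toward earth center
    u2 = np.cross(v_s, r_s)
    u2 /= np.linalg.norm(u2)              # normal to the instant orbit plane
    u1 = np.cross(u2, u3)                 # completes the right-handed triad
    return u1, u2, u3

def T_OI_from_state(r_s, v_s):
    """T_OI = (u1 u2 u3)^{-1} as written in the text; for an
    orthonormal basis the inverse equals the transpose."""
    u1, u2, u3 = orbit_frame_basis(r_s, v_s)
    return np.linalg.inv(np.column_stack([u1, u2, u3]))
```

Since the three base vectors are mutually orthogonal unit vectors by construction, the resulting matrix is orthogonal, which is a convenient self-check on noisy GPS inputs.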
Associating the equation of the boresight with the ellipsoid surface of the earth in $C$ yields

$$\frac{X^2+Z^2}{A_e^2}+\frac{Y^2}{B_e^2}=1,\qquad \frac{X-X_s}{s_1}=\frac{Y-Y_s}{s_2}=\frac{Z-Z_s}{s_3}.\tag{6}$$

Here $A_e=6378.137$ km and $B_e=6356.752$ km are the lengths of the earth's semimajor and semiminor axes, and $s_i$ $(i=1,2,3)$ are the components of the unit vector along ${}^{I}\vec r$. We write the solution of (6) as ${}^{I}\vec\rho=(X,Y,Z)^{T}$. Hence ${}^{I}\vec r={}^{I}\vec\rho-{}^{I}\vec r_s$ and ${}^{C}\vec r=\mathbf M\cdot\mathbf A\cdot\mathbf T_{OI}^{-1}\cdot{}^{I}\vec r$, where $\mathbf M$ is the coordinate transformation matrix from frame $B$ to frame $C$ (a constant matrix for fixed installation) and $\mathbf A$ is the attitude matrix of the satellite; according to the 1-2-3 rotation order, we have

$$\mathbf A=\mathbf R_\psi\cdot\mathbf R_\theta\cdot\mathbf R_\varphi,\tag{7}$$
in which

$$\mathbf R_\psi=\begin{pmatrix}\cos\psi_t & \sin\psi_t & 0\\ -\sin\psi_t & \cos\psi_t & 0\\ 0 & 0 & 1\end{pmatrix},\quad \mathbf R_\theta=\begin{pmatrix}\cos\theta_t & 0 & -\sin\theta_t\\ 0 & 1 & 0\\ \sin\theta_t & 0 & \cos\theta_t\end{pmatrix},\quad \mathbf R_\varphi=\begin{pmatrix}1 & 0 & 0\\ 0 & \cos\varphi_t & \sin\varphi_t\\ 0 & -\sin\varphi_t & \cos\varphi_t\end{pmatrix},\tag{8}$$

where $\varphi_t$, $\theta_t$, and $\psi_t$ are, in order, the real-time roll, pitch, and yaw angles at moment $t$. The velocity of $p$ in $C$ can be written in the following scalar form:

$$v_i={}^{C}\dot{\vec r}\cdot\vec e_i\quad(i=1,2,3).\tag{9}$$
Thus the velocity of the image point $p'$ will be

$$\dot x'_i=\dot\beta x_i+\beta\dot x_i=(-1)^{m}\frac{f'v_3}{(\vec r\cdot\vec e_3)^2}\,x_i+(-1)^{m-1}\frac{f'}{\vec r\cdot\vec e_3}\,v_i\quad(i=1,2).\tag{10}$$

Substituting (2)–(9) into (10), the velocity vector of the image point $\mathbf V'=(\dot x'_1,\dot x'_2)^{T}$ can be expressed as an explicit function of several variables, that is,

$$\mathbf V'=\mathbf V\,(i_0,\Omega,\omega,e,M_{t_0},\varphi_t,\theta_t,\psi_t,\dot\varphi_t,\dot\theta_t,\dot\psi_t,x'_1,x'_2).\tag{11}$$

For conciseness, this analytical expression of $\mathbf V'$ is omitted here.
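Solving the boresight–ellipsoid system (6) amounts to intersecting the line of sight with the ellipsoid, which reduces to a quadratic in the ray parameter. A minimal sketch follows (our own helper, using the axis convention written in (6), with lengths in meters):

```python
import math

def los_ellipsoid_intersection(p_s, s, A_e=6378.137e3, B_e=6356.752e3):
    """Intersect the boresight ray p_s + t*s with the earth ellipsoid
    (X^2 + Z^2)/A_e^2 + Y^2/B_e^2 = 1 (axis convention of Eq. (6)).
    Returns the near intersection point in meters, or None on a miss."""
    Xs, Ys, Zs = p_s
    s1, s2, s3 = s
    a = (s1 * s1 + s3 * s3) / A_e**2 + s2 * s2 / B_e**2
    b = 2.0 * ((Xs * s1 + Zs * s3) / A_e**2 + Ys * s2 / B_e**2)
    c = (Xs * Xs + Zs * Zs) / A_e**2 + Ys * Ys / B_e**2 - 1.0
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return None                              # line of sight misses the earth
    t = (-b - math.sqrt(disc)) / (2.0 * a)       # smaller root: near-side surface
    return (Xs + t * s1, Ys + t * s2, Zs + t * s3)
```

Of the two quadratic roots, the smaller positive one is the visible (near-side) surface point; the larger corresponds to the ray exiting the far side of the ellipsoid.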
The orbit elements can be determined from instantaneous GPS data; besides, they can also be calculated with sufficient accuracy by celestial mechanics [19]. On the other hand, the attitude angles $\varphi_t$, $\theta_t$, and $\psi_t$ can be roughly measured by the star trackers and GPS. Meanwhile, their time rates $\dot\varphi_t$, $\dot\theta_t$, and $\dot\psi_t$ satisfy the following relation:

$$\begin{pmatrix}\omega_1\\ \omega_2\\ \omega_3\end{pmatrix}=\mathbf R_\psi\begin{pmatrix}0\\ 0\\ \dot\psi_t\end{pmatrix}+\mathbf R_\psi\mathbf R_\theta\left[\begin{pmatrix}0\\ \dot\theta_t\\ 0\end{pmatrix}+\mathbf R_\varphi\begin{pmatrix}\dot\varphi_t\\ 0\\ 0\end{pmatrix}\right].\tag{12}$$

$\omega_1$, $\omega_2$, and $\omega_3$ are the three components of the remote sensor's angular velocity ${}^{C}\vec\omega_s$ relative to the orbital frame $O$, resolved in frame $C$. These can be roughly measured by space-borne gyroscopes or other attitude sensors.
It is easy to verify from (11) that the instantaneous image velocity field on the focal plane is significantly nonlinear and anisotropic for large FOV remote sensors, especially when they perform large-angle attitude maneuvering, for example, side-looking by swinging or stereoscopic looking by pitching. Under these circumstances, in order to acquire photos with high spatial, temporal, and spectral resolution, image motion velocity control strategies should be executed in real time [20] based on auxiliary data measured by reliable space-borne sensors [21, 22]. In detail, for TDI CCD cameras, the line rates of the detectors must be controlled to synchronize with the local image velocity moduli during exposure so as to avoid along-track motion blurring, and the attitude of the remote sensor should be regulated in time to keep the detectors' push-broom direction aligned with the direction of image motion to avoid cross-track motion blurring.
3. Optical Flow Rough Inversion and Dense Image Registration
Optical flow is another important physical model, carrying the whole energy and information of moving images in dynamic imaging. A specific optical flow trajectory is an integral curve which is always tangent to the image velocity field; thus we have

$$x'_1(T)=\int_0^T \dot x'_1(x'_1,x'_2,t)\,dt,\qquad x'_2(T)=\int_0^T \dot x'_2(x'_1,x'_2,t)\,dt.\tag{13}$$
Since (13) are coupled nonlinear integral equations, we convert them to numerical form and solve them iteratively:

$$x'_i(0)=x'_i(t)\big|_{t=0},$$
$$x'_j(n)=x'_j(n-1)+\frac{1}{2}\left\{\dot x'_j\left[x'_1(n-1),x'_2(n-1),n\right]+\dot x'_j\left[x'_1(n-1),x'_2(n-1),n-1\right]\right\}\Delta t\quad(j=1,2;\ n\in\mathbb Z^{+}).\tag{14}$$
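The update (14) is a trapezoid-style step that averages the velocity at the old and new time instants. A minimal sketch, with our own function name; the velocity model `v` is caller-supplied, standing in for the analytical field (11):

```python
def integrate_flow(v, x1_0, x2_0, T, dt):
    """Integrate an optical flow trajectory (Eq. (13)) with the
    trapezoid-style update of Eq. (14). v(x1, x2, t) is a caller-
    supplied velocity model returning the pair (v1, v2)."""
    x1, x2 = x1_0, x2_0
    t = 0.0
    while t < T:
        h = min(dt, T - t)
        v_old = v(x1, x2, t)
        v_new = v(x1, x2, t + h)   # both evaluated at the old position, per (14)
        x1 += 0.5 * (v_old[0] + v_new[0]) * h
        x2 += 0.5 * (v_old[1] + v_new[1]) * h
        t += h
    return x1, x2
```

Because the averaged velocity is still evaluated at the old position, the scheme is second-order in time only; the position error stays small as long as the step $\Delta t$ is small relative to the variation of the field.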
It is evident that the algorithm has sufficient precision so long as the time step $\Delta t$ is small enough. It can be inferred from (13) that a strongly nonlinear image velocity field may distort optical flows so much that the geometrical structure of the image behaves irregularly. Therefore, if we intend to invert the information of optical flow to measure the attitude motion, the general formula of image deformation due to the optical flows should be deduced.
3.1. Time-Varying Image Deformation in Dynamic Imaging. Firstly, we investigate some differential characteristics of the moving image of an extended object on the earth's surface. As shown in Figure 1, a microspatial variation of $p$ along $\vec\tau$ on the curved surface can be expressed as $\delta\vec\rho_p=\delta l\,\vec\tau$. Its conjugate image is

$$\delta x'_i=\delta\beta\,x_i+\beta\,\delta x_i.\tag{15}$$
We expand the term $\delta\beta$:

$$\delta\beta=(-1)^{m-1}\left[\frac{f'}{(\vec r+\delta\vec r)\cdot\vec e_3}-\frac{f'}{\vec r\cdot\vec e_3}\right]=(-1)^{m-1}\frac{f'}{\vec r\cdot\vec e_3}\sum_{k=1}^{\infty}(-1)^{k}\left(\frac{\delta\vec r\cdot\vec e_3}{\vec r\cdot\vec e_3}\right)^{k}\approx(-1)^{m}\frac{f'(\vec\tau\cdot\vec e_3)\,\delta l}{(\vec r\cdot\vec e_3)^2}.\tag{16}$$
Taking the derivative with respect to the variable $t$ of both sides of (15), we have

$$\delta\dot x'_i=\delta\dot\beta\,x_i+\delta\beta\,\dot x_i+\dot\beta\,\delta x_i+\beta\,\delta\dot x_i.\tag{17}$$

According to (16), we know that $\delta\dot\beta\approx0$. On the other hand, the variation of $\vec r$ can be expressed through a series of coordinate transformations, that is,

$${}^{C}(\delta\vec r)=\delta l\left[\mathbf M\mathbf A\mathbf T_{OI}^{-1}\mathbf T_{EI}\,{}^{E}\vec\tau\right].\tag{18}$$
Notice that ${}^{E}\vec\tau$ is a fixed tangent vector of the earth's surface at the object point $p$, which is time-invariant and specifies an orientation of the motionless scene on the earth.

Consequently,

$$\left(\frac{{}^{C}\delta\dot{\vec r}}{\delta l}\right)_{\vec\tau}=\left(\dot{\mathbf M}\mathbf A\mathbf T_{OI}^{-1}\mathbf T_{EI}+\mathbf M\dot{\mathbf A}\mathbf T_{OI}^{-1}\mathbf T_{EI}+\mathbf M\mathbf A\,\dot{\overline{\mathbf T_{OI}^{-1}}}\,\mathbf T_{EI}+\mathbf M\mathbf A\mathbf T_{OI}^{-1}\dot{\mathbf T}_{EI}\right){}^{E}\vec\tau,\tag{19}$$

where the coordinate transformation matrix from frame $E$ to $I$ is

$$\mathbf T_{EI}=\begin{pmatrix}\cos H_p & 0 & -\sin H_p\\ 0 & 1 & 0\\ \sin H_p & 0 & \cos H_p\end{pmatrix}.\tag{20}$$
Let $\omega_e$ be the angular rate of the earth and $\alpha_p$ the longitude of $p$; then the hour angle of $p$ at time $t$ is $H_p(t)=\mathrm{GST}+\alpha_p+\omega_e t$, in which GST represents Greenwich sidereal time.

The microscale image deformation of the extended scene on the earth along the direction of $\vec\tau$ during $t_1\sim t_2$ can be written as

$$\left[\delta x'_i\right]^{t_2}_{\vec\tau}-\left[\delta x'_i\right]^{t_1}_{\vec\tau}=\int_{t_1}^{t_2}\left(\delta\dot x'_i\right)_{\vec\tau}dt.\tag{21}$$
From (17) we have

$$\frac{(\delta\dot x'_i)_{\vec\tau}}{\delta l}=\frac{\delta\beta}{\delta l}\,\dot x_i+\dot\beta\,\frac{\delta x_i}{\delta l}+\beta\,\frac{\delta\dot x_i}{\delta l}.\tag{22}$$
According to (16), (18), and (19), we obtain the terms in (22):

$$\frac{\delta\beta}{\delta l}=(-1)^{m}\frac{f'\,({}^{C}\vec\tau\cdot\vec e_3)}{(\vec r\cdot\vec e_3)^2},\qquad \frac{\delta x_i}{\delta l}=\left(\mathbf M\mathbf A\mathbf T_{OI}^{-1}\mathbf T_{EI}\,{}^{E}\vec\tau\right)\cdot\vec e_i,$$
$$\frac{\delta\dot x_i}{\delta l}=\left(\frac{{}^{C}\delta\dot{\vec r}}{\delta l}\right)_{\vec\tau}\cdot\vec e_i+\left(\frac{{}^{C}\delta\vec r}{\delta l}\right)_{\vec\tau}\cdot\dot{\vec e}_i.\tag{23}$$
Furthermore, if the camera is fixed to the satellite platform, then $\dot{\mathbf M}=\mathbf 0$ and $\dot{\vec e}_i=\vec 0$. Consequently, (22) becomes
$$\begin{aligned}\mathcal F_i(t,\vec\tau)&=\frac{(\delta\dot x'_i)_{\vec\tau}}{\delta l}\\ &=(-1)^{m}\frac{f'\,({}^{C}\vec\tau\cdot\vec e_3)}{(\vec r\cdot\vec e_3)^2}\,\dot x_i+(-1)^{m}\frac{f'\,(\dot{\vec r}\cdot\vec e_3)}{(\vec r\cdot\vec e_3)^2}\left(\mathbf M\mathbf A\mathbf T_{OI}^{-1}\mathbf T_{EI}\,{}^{E}\vec\tau\right)\cdot\vec e_i\\ &\quad+(-1)^{m-1}\frac{f'}{\vec r\cdot\vec e_3}\left(\mathbf M\dot{\mathbf A}\mathbf T_{OI}^{-1}\mathbf T_{EI}+\mathbf M\mathbf A\,\dot{\overline{\mathbf T_{OI}^{-1}}}\,\mathbf T_{EI}+\mathbf M\mathbf A\mathbf T_{OI}^{-1}\dot{\mathbf T}_{EI}\right){}^{E}\vec\tau\cdot\vec e_i.\end{aligned}\tag{24}$$
For the motionless scene on the earth's surface, ${}^{E}\vec\tau$ is a time-independent but space-dependent unit tangent vector, which also represents a specific orientation on the ground. The physical meaning of the function $\mathcal F_i(t,\vec\tau)$ is the image deformation of a unit-length curve on the curved surface along the direction of ${}^{E}\vec\tau$ in a unit time interval, that is, the instantaneous space-time deforming rate of the image of the object along ${}^{E}\vec\tau$.

Consequently, in dynamic imaging, the macroscopic deformation of the moving image can be derived from the integral of $\mathcal F_i(t,\vec\tau)$ over space and time. Referring to Figure 1, let $\Gamma$ be an arbitrary curve of the extended object on the earth, let $\Gamma'$ be its image, let $p,q\in\Gamma$ be two arbitrary points with Gaussian images $p',q'\in\Gamma'$, and let ${}^{E}\vec\tau=\mathbf T(s)$ be a vector-valued function of the arc length $s$, which is time-invariant in frame $E$ and gives the tangent vectors along the curve.
So the image deformation taking place during $t_1\sim t_2$ can be described as

$$\left[(x'_p)_i\right]^{t_2}_{t_1}-\left[(x'_q)_i\right]^{t_2}_{t_1}=\int_{\Gamma}\int_{t_1}^{t_2}\mathcal F_i\circ\mathbf T\,dt\,ds,\tag{25}$$

in which $\mathcal F_i\circ\mathbf T=\mathcal F_i[t,\mathbf T(s)]$.
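Numerically, the double integral in (25) is a nested quadrature over arc length and time. A minimal midpoint-rule sketch follows; the function names are ours, and `F` and `tangent` are caller-supplied models standing in for the deforming rate (24) and the tangent field $\mathbf T(s)$.

```python
def image_deformation(F, tangent, s_grid, t_grid):
    """Midpoint-rule approximation of the double integral in Eq. (25):
    the integral over the curve (arc length s) and over time t of
    F(t, tangent(s)). F and tangent are caller-supplied models."""
    total = 0.0
    for k in range(len(s_grid) - 1):
        ds = s_grid[k + 1] - s_grid[k]
        tau = tangent(0.5 * (s_grid[k] + s_grid[k + 1]))  # tangent at segment midpoint
        for j in range(len(t_grid) - 1):
            dt = t_grid[j + 1] - t_grid[j]
            t_mid = 0.5 * (t_grid[j] + t_grid[j + 1])
            total += F(t_mid, tau) * dt * ds              # midpoint contribution
    return total
```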
Now, in terms of (24) and (25), we can see that the image deformation is also anisotropic and nonlinear; it depends not only on the optical flow's evolution but also on the geometry of the scene.
3.2. Dense Image Registration through Optical Flow Prediction. As mentioned in the preceding sections, optical flow is the most precise model for describing image motion and time-varying deformation. Conversely, it is possible to invert optical flow with high accuracy if the image motion and deformation can be detected. As we know, the low-frequency signal components of angular velocity are easy to sense precisely with attitude sensors such as gyroscopes and star trackers, but the higher-frequency components are hard to measure with high accuracy. Yet in practice, perturbations from high-frequency jittering are the critical cause of motion blurring and local image deformations, since the influences of the low-frequency components of attitude motion are easier to restrain in imaging by regulating the remote sensor.
Since (13) and (25) are very sensitive to the attitude motion, the angular velocity can be measured with high resolution as well as broad frequency bandwidth so long as the image motion and deformation are determined with a certain precision. Fortunately, the lapped images of the overlapped detectors meet this need, because they were captured in turn as the same parts of the optical flow passed through these adjacent detectors sequentially. Without losing generality, we will investigate the most common form of CCD layout, in which two rows of detectors are arranged in parallel. The time-phase relations of image formation due to optical flow evolution are illustrated in Figure 3, where the moving image elements $\alpha_1,\alpha_2,\ldots$ (in the left gap) and $\beta_1,\beta_2,\ldots$ (in the right gap) are captured first at the same time, since their optical flows pass through the prior detectors. However, because of nonuniform optical flows, they will not be captured simultaneously by the posterior detectors. Therefore, the geometrical structures of the photographs will be time-varying and nonlinear. It is evident from Figure 3 that the displacements and relative deformations in frame $C$ between the lapped images can be determined by measuring the offsets of the sample image element pairs in frame $P$.
Let $\Delta y'=\Delta x'_1$ and $\Delta x'=\Delta x'_2$ be the relative offsets of the same object's image on the two photos; they are all calibrated in $C$ or $F$. We will measure them by image registration.
As far as image registration methods are concerned, one of the hardest problems is complex deformation, which tends to weaken the similarity between the referenced images and the sensed images, so that it may introduce large deviations from the true values or even lead to algorithm failure. Some typical methods have been studied in [23–25]. Generally, most of them concentrate on several simple deformation forms, such as affine, shear, translation, rotation, or their combinations, instead of investigating more sophisticated dynamic deformation models. In [26–30], effective approaches have been proposed to increase the accuracy and robustness of the algorithms, with reasonable models built on the specific properties of the target images.
For conventional template-based registration methods, once a template has been extracted from the referenced image, the information about gray values, shape, and frequency spectrum does not increase, since no additional physical information is offered. But such information has actually changed by the time the optical flows arrive at the posterior detectors; therefore, the cross-correlations between the templates and the sensed images certainly decrease. So, in order to detect the minor image motions and complex deformations between the lapped images, high-accuracy registration is indispensable, which means that a more precise model should be implemented. We treat it using a technique called template reconfiguration. In summary, the method is built on the idea of keeping the information about the optical flows complete.
Figure 3: Nonlinear image velocity field and optical flow trajectories influence the time-phase relations between the lapped images captured by the adjacent overlapped detectors.
In operation, as indicated in Figure 3, we take the lapped images captured by the detectors in the prior array as the referenced images and the images captured by the posterior detectors as the sensed images. Firstly, we rebuild the optical flows based on the rough measurements of the space-borne sensors and then reconfigure the original templates to construct new templates whose morphologies are closer to the corresponding parts of the sensed images. With this process, the information about the imaging procedure can be added into the new templates so as to increase their degree of similarity to the sensed images. The method can dramatically raise the accuracy of dense registration, such that the high-accuracy offsets between the lapped image pairs can be determined.
In the experiment we examined Mapping Satellite-1, a Chinese surveying satellite operating in a 500 km sun-synchronous orbit, which is used for high-accuracy photogrammetry [31]; its structure is shown in Figure 4. One of its effective payloads, the three-line-array panchromatic CCD camera, has good geometrical accuracy: the ground pixel resolution is superior to 5 m, the spectral range is 0.51 μm ~ 0.69 μm, and the swath is 60 km. Another payload, the high resolution camera, possesses a Cook-TMA optical system, which gives a wide field of view [16, 17], and its panchromatic spatial resolution can reach 2 m.
In engineering, for the purpose of improving image quality and surveying precision, high-accuracy measurements of jitter and attitude motion are essential for posterior processing. Thus, here we investigate the images and the auxiliary data of the large FOV high resolution camera to deal with the problem. The experimental photographs were captured with 10° side-looking.

Figure 4: The structure of Mapping Satellite-1 and its effective payloads.

The focal plane of the camera consists of 8 panchromatic TDI CCD detectors, and there are $\eta=96$ physically lapped pixels between each pair of adjacent detectors.
The scheme of the processing for registering one image element $\chi$ is illustrated in Figure 5.
Step 1. Set the original lapped image strips (the images acquired directly by the detectors, without any postprocessing) in frame $C$.
Step 2. Compute the deformations of all image elements on the referenced template with respect to their optical flow trajectories.

We extract the original template from the referenced image, denoted as $T_1$, which consists of $N^2$ square elements, that is, $\dim(T_1)=N\times N$. Let $\chi$ be its central element and $w$ the width of each element; here $w=8.75$ μm. Before the moving image is captured by the posterior detector, the current shapes and energy distributions of its elements can be predicted, in terms of (25), from the optical flow based on the auxiliary data of the remote sensor.
In order to simplify the algorithm, a first-order approximation is allowed without introducing significant errors. This approximation means that the shape of every image element is always quadrilateral. Linear interpolations are carried out to determine the four sides according to the deformations along the radial directions of the vertexes, as shown in Figure 5. The unit radial vectors in frame $C$ are denoted by $\vec\tau'_1\sim\vec\tau'_4$:

$$\vec\tau'_1=\frac{\sqrt2}{2}\vec e_1-\frac{\sqrt2}{2}\vec e_2,\qquad \vec\tau'_3=-\frac{\sqrt2}{2}\vec e_1+\frac{\sqrt2}{2}\vec e_2,$$
$$\vec\tau'_2=\frac{\sqrt2}{2}\vec e_1+\frac{\sqrt2}{2}\vec e_2,\qquad \vec\tau'_4=-\frac{\sqrt2}{2}\vec e_1-\frac{\sqrt2}{2}\vec e_2.\tag{26}$$
Suppose image point p′ is the center of an arbitrary element Σ′ in T_1, and let Σ be the area element on the earth surface conjugate to Σ′.

Figure 5: Optical flow prediction and template reconfiguration (referenced image of the prior CCD; sensed image of the posterior CCD).

The four unit radial vectors of the vertexes on Σ, τ_1 ∼ τ_4, are conjugate to τ′_1 ∼ τ′_4 and tangent to the earth surface at p. From the geometrical relations we have

\[
{}^{C}\tau_{i} = (-1)^{m}\,\frac{\mathbf{r}' \times \tau'_{i} \times {}^{C}\mathbf{n}_{p}}{\left|\mathbf{r}' \times \tau'_{i} \times {}^{C}\mathbf{n}_{p}\right|},
\]
\[
{}^{E}\tau_{i} = \mathbf{T}_{EI}^{-1}\mathbf{T}_{OI}\mathbf{A}^{-1}\mathbf{M}^{-1}\,{}^{C}\tau_{i}, \qquad
{}^{C}\mathbf{n}_{p} = \mathbf{M}\mathbf{A}\mathbf{T}_{OI}^{-1}\mathbf{T}_{EI}\,{}^{E}\mathbf{n}_{p},
\tag{27}
\]
where ^E n_p is the unit normal vector of Σ at p. We predict the deformations along τ_1 ∼ τ_4 during t_1 ∼ t_2 according to the measurements of the GPS, star trackers, and gyroscopes, as explained in Figure 6; t_1 is the imaging time on the prior detector and t_2 is the imaging time on the posterior detector:
\[
\left[\delta x'_{1}\right]^{\Delta t}_{\tau_{k}} = \left[\delta x'_{1}\right]^{t_{2}}_{\tau_{k}} - \left[\delta x'_{1}\right]^{t_{1}}_{\tau_{k}}, \qquad
\left[\delta x'_{2}\right]^{\Delta t}_{\tau_{k}} = \left[\delta x'_{2}\right]^{t_{2}}_{\tau_{k}} - \left[\delta x'_{2}\right]^{t_{1}}_{\tau_{k}}
\quad (k = 1 \sim 4).
\tag{28}
\]
The shape of the deformed image Σ′_{t₂} can be obtained through linear interpolation with

\[
\left[\delta\mathbf{r}'\right]^{\Delta t}_{\tau_{k}} = \left( \left[\delta x'_{1}\right]^{\Delta t}_{\tau_{k}},\ \left[\delta x'_{2}\right]^{\Delta t}_{\tau_{k}} \right).
\tag{29}
\]
Step 3. Reconfigure the referenced template T_1 according to the optical flow prediction, and then obtain a new template T_2.

Let T′_1 be the deformed image of T_1 computed in Step 2, and let χ = B_{ij} be the central element of T′_1, where the integers i and j are, respectively, the row and column numbers of B_{ij}. The gray value l_{ij} of each element in T′_1 is equal to that of its counterpart in T_1 with the same indexes. In addition, we initialize a null template T_0 whose shape and orientation are identical to T_1; the central element of T_0 is denoted by T_{ij}.
Figure 6: Deformation of a single element (the element Σ′ at t₁ and t₂, its center p′, the radial vectors τ′_1 ∼ τ′_4, and the vertex displacements [δr′]^{Δt}_{τ_k}).
Then we cover T_0 upon T′_1 and let their centers coincide, that is, T_{ij} = B_{ij}, as shown in Figure 7. Denote the vertexes of T′_1 as V^k_{ij} (k = 1 ∼ 4). The connective relation for adjacent elements can then be expressed by V^1_{ij} = V^2_{i,j−1} = V^3_{i−1,j−1} = V^4_{i−1,j}.
Next we reassign the gray value h′_{ij} to T_{ij} (i = 1⋯N, j = 1⋯N) in sequence to construct a new template T_2. The process simulates the image resampling when the optical flow arrives at the posterior detector, as indicated in Figure 3. That is,

\[
h'_{ij} = \sum_{m=i-1}^{i+1} \sum_{n=j-1}^{j+1} \eta_{mn} l_{mn}.
\tag{30}
\]
The weight coefficient is η_{mn} = S_{mn}/w², where S_{mn} is the area of the polygon formed by the intersection of B_{mn} with T_{ij}.
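The weights of (30) reduce to clipping each deformed quadrilateral B_{mn} against the square cell T_{ij} and taking the polygon area. A minimal sketch, assuming the cell is axis-aligned with side w and B_{mn} is convex (Sutherland–Hodgman clipping; function names are illustrative):

```python
def clip_to_square(poly, xlo, ylo, w):
    """Sutherland-Hodgman clipping of a convex polygon (list of (x, y))
    against the axis-aligned square [xlo, xlo+w] x [ylo, ylo+w]."""
    for fixed, axis, keep_le in ((xlo, 0, False), (xlo + w, 0, True),
                                 (ylo, 1, False), (ylo + w, 1, True)):
        inside = lambda p: (p[axis] <= fixed) if keep_le else (p[axis] >= fixed)
        out = []
        for k in range(len(poly)):
            a, b = poly[k], poly[(k + 1) % len(poly)]
            if inside(a):
                out.append(a)
            if inside(a) != inside(b):  # edge crosses the clip line
                t = (fixed - a[axis]) / (b[axis] - a[axis])
                out.append((a[0] + t * (b[0] - a[0]), a[1] + t * (b[1] - a[1])))
        poly = out
        if not poly:
            return []
    return poly

def area(poly):
    """Shoelace formula for the area of a simple polygon."""
    s = 0.0
    for k in range(len(poly)):
        x1, y1 = poly[k]
        x2, y2 = poly[(k + 1) % len(poly)]
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

def weight(b_mn, origin, w):
    """eta_mn = S_mn / w^2 of Eq. (30): normalized overlap of element B_mn
    with the square cell T_ij whose lower-left corner is at `origin`."""
    return area(clip_to_square(b_mn, origin[0], origin[1], w)) / w ** 2
```

For example, a unit square shifted by a quarter cell overlaps its cell with weight 0.75, and an undeformed element has weight 1.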
Figure 7: Template reconfiguration (the null template element T_{ij} of T_0 covering the deformed elements B_{mn} of T′_1, with the shared vertexes V^k_{ij}).
Step 4. Compute the normalized cross-correlation coefficients between T_2 and the sensed image, and then determine the subpixel offset of T_2 relative to the sensed image in frame P.
Firstly, for this method the search space on the sensed image can be contracted considerably, since the optical flow trajectories of the referenced elements have been predicted in Step 2. Assume that the search space is T_s, with dim(T_s) = M × M. When T_{ij} moves to the pixel (n_1, n_2) on T_s, the normalized cross-correlation (NCC) coefficient is given by

\[
\gamma(n_{1}, n_{2}) =
\frac{\sum_{x,y} \left[ g(x,y) - \bar{g}_{x,y} \right] \left[ h(x - n_{1}, y - n_{2}) - \bar{h} \right]}
{\left\{ \sum_{x,y} \left[ g(x,y) - \bar{g}_{x,y} \right]^{2} \sum_{x,y} \left[ h(x - n_{1}, y - n_{2}) - \bar{h} \right]^{2} \right\}^{0.5}},
\tag{31}
\]
where ḡ_{x,y} is the mean gray value of the segment of T_s that is masked by T_2, and h̄ is the mean of T_2. Equation (31) requires approximately N²(M − N + 1)² additions and N²(M − N + 1)² multiplications, whereas the FFT algorithm needs about 12M²log₂M real multiplications and 18M²log₂M real additions/subtractions [32, 33].
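For illustration, (31) can be evaluated directly in the spatial domain as follows (a naive sketch with square windows; constant windows, which would zero a denominator, are not guarded against):

```python
import numpy as np

def ncc_surface(sensed, template):
    """Spatial-domain normalized cross-correlation (Eq. (31)) of an N x N
    template against every N x N window of an M x M sensed image."""
    M, N = sensed.shape[0], template.shape[0]
    h = template - template.mean()
    hn = np.sqrt((h * h).sum())
    out = np.empty((M - N + 1, M - N + 1))
    for n1 in range(M - N + 1):
        for n2 in range(M - N + 1):
            g = sensed[n1:n1 + N, n2:n2 + N]
            g = g - g.mean()              # mean of the masked segment of T_s
            gn = np.sqrt((g * g).sum())
            out[n1, n2] = (g * h).sum() / (gn * hn)
    return out
```

The integer-pixel peak (k, m) is then `np.unravel_index(np.argmax(out), out.shape)`; an exact match gives a coefficient of 1.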
At the beginning we take M = 101, N = 7 and compute the NCC coefficients; when M is much larger than N, the calculation in the spatial domain is efficient. Suppose that the peak value γ_max is attained at the coordinate (k, m), k, m ∈ Z, in the sensed window. We then reduce the search space to a smaller one of dimension 47 × 47 centered on T_s(k, m). Next, subpixel registration is realized by the phase correlation algorithm with larger M and N, to suppress the systematic errors owing to the deficiency of detailed textures in the photograph; here we take M = 47, N = 23. Let the subpixel offset between the two registering image elements be denoted by δ_x and δ_y in frame P.
The phase correlation algorithm in the frequency domain becomes more efficient as N approaches M, with both at larger scales [28]. Moreover, the Fourier coefficients are normalized to unit magnitude prior to computing the correlation, so that the correlation is based only on phase information and is insensitive to changes in image intensity [27, 29].
Let G(u, v) be the 2D Discrete Fourier Transform (DFT) of the sensed window; then we have

\[
\mathbf{G}(u, v) = \sum_{x=-(N-1)/2}^{(N-1)/2} \sum_{y=-(N-1)/2}^{(N-1)/2} g(x, y)\, W_{M}^{ux} W_{M}^{vy},
\]
\[
\mathbf{H}(u, v) = \sum_{x=-(N-1)/2}^{(N-1)/2} \sum_{y=-(N-1)/2}^{(N-1)/2} h(x, y)\, W_{N}^{ux} W_{N}^{vy}.
\tag{32}
\]
Here

\[
W_{N} = \exp\left( -j\,\frac{2\pi}{N} \right).
\tag{33}
\]
The cross-phase spectrum is given by

\[
\mathbf{R}(u, v) = \frac{\mathbf{G}(u, v)\,\mathbf{H}^{*}(u, v)}{\left| \mathbf{G}(u, v)\,\mathbf{H}^{*}(u, v) \right|} = \exp\left( j\phi(u, v) \right),
\tag{34}
\]
where H* is the complex conjugate of H. By the inverse Discrete Fourier Transform (IDFT) we have

\[
\gamma(n_{1}, n_{2}) = \frac{1}{N^{2}} \sum_{u=-(N-1)/2}^{(N-1)/2} \sum_{v=-(N-1)/2}^{(N-1)/2} \mathbf{R}(u, v)\, W_{N}^{-u n_{1}} W_{N}^{-v n_{2}}.
\tag{35}
\]
Figure 8: Dense image registration for overlapped image strips: CCD1 versus CCD2 (Gap 1, the left two) and CCD3 versus CCD4 (Gap 3, the right two).
Suppose that the new peak γ_max appears at (k′, m′), k′, m′ ∈ Z. Referring to [27], we have the relation

\[
\gamma_{\max}(k', m') \approx \frac{\lambda}{N^{2}}\,
\frac{\sin\left[ \pi (k' + \delta_{x}) \right] \sin\left[ \pi (m' + \delta_{y}) \right]}
{\sin\left[ (\pi/N)(k' + \delta_{x}) \right] \sin\left[ (\pi/N)(m' + \delta_{y}) \right]}.
\tag{36}
\]
The right side gives the spatial distribution of the normalized cross-correlation coefficients; (δ_x, δ_y) can therefore be measured from it. In practice the constant λ ≤ 1; it tends to decrease when small noise exists and equals unity in the ideal case.
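A compact sketch of the frequency-domain steps (32)–(35), using numpy's FFT in place of the explicit sums (equivalent up to index conventions; the wrap-around mapping of the peak to a signed offset is an assumption of this sketch, and subpixel refinement via (36) is omitted):

```python
import numpy as np

def phase_correlate(g, h):
    """Phase correlation of two equally sized windows.
    Returns the correlation surface and the integer peak offset of g
    relative to h (circular-shift convention)."""
    G = np.fft.fft2(g)
    H = np.fft.fft2(h)
    cross = G * np.conj(H)
    # Normalize to unit magnitude: correlation uses phase only (Eq. (34)).
    R = cross / np.maximum(np.abs(cross), 1e-12)
    corr = np.real(np.fft.ifft2(R))                      # Eq. (35)
    k, m = np.unravel_index(np.argmax(corr), corr.shape)
    N = g.shape[0]
    # Map wrapped indices to signed offsets.
    if k > N // 2:
        k -= N
    if m > N // 2:
        m -= N
    return corr, (k, m)
```

For a noise-free circular shift the surface is a unit impulse at the shift, which is why the peak of (36) concentrates as the windows become similar.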
Step 5. Dense registration is executed for the overlapped image strips.

Repeating Steps 1∼4, we register the along-track sample images selected from the referenced images to the sensed image; the maximal sample rate can reach line-by-line. The continuous procedure is shown in Figure 8, in which the image pairs are marked.
The curves of the relative offsets in frame P are shown in Figures 9 and 10. Let col_r, row_r be the column and row indexes of image elements on the referenced image, and let col_s, row_s be the indexes of the same elements on the sensed image. The total number of columns of each detector is Q = 4096 pix, and the vertical distance between the two detector arrays is D = 18.4975 mm. According to the results of registration, we get the offsets
Figure 9: The offsets of the overlapped images captured by CCD1 and CCD2 (cross-track and along-track offsets, in pixels, versus image rows; samples S11 and S22 are marked).
Figure 10: The offsets of the overlapped images captured by CCD3 and CCD4 (cross-track and along-track offsets, in pixels, versus image rows; samples S31 and S32 are marked).
of the images at the nth gap: δ_{nx} (cross track) and δ_{ny} (along track) in frame P, and Δx′_n, Δy′_n (in mm) in frame F:

\[
\delta_{nx} = \mathrm{col}_{r} + \mathrm{col}_{s} - Q - \eta_{n}, \qquad
\Delta x'_{n} = \Delta\left(x'_{2}\right)_{n} = \delta_{nx} \cdot w,
\]
\[
\delta_{ny} = \mathrm{row}_{s} - \mathrm{row}_{r} - \frac{D}{w}, \qquad
\Delta y'_{n} = \Delta\left(x'_{1}\right)_{n} = \delta_{ny} \cdot w + D.
\tag{37}
\]
Four samples, S11, S12, S31, and S32, are examined; their data are listed in Table 1.
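The pixel-to-millimeter conversion of (37) can be checked numerically; a small sketch (the function name and the choice of millimeter units are illustrative), reproducing the S11 row of Table 1:

```python
W_MM = 8.75e-3    # element width w = 8.75 um, expressed in mm
D_MM = 18.4975    # vertical distance D between the detector arrays, in mm

def gap_offsets_mm(delta_nx_pix, delta_ny_pix, w=W_MM, d=D_MM):
    """Frame-F offsets of Eq. (37): cross-track dx' = delta_nx * w and
    along-track dy' = delta_ny * w + D, from frame-P offsets in pixels."""
    return delta_nx_pix * w, delta_ny_pix * w + d
```

With the S11 registration offsets (−25.15 px, −5.39 px) this yields (−0.2200625 mm, 18.4503 mm), matching Table 1.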
S11 and S31 are images of the same object, captured in order by CCD1 and CCD2 (Gap 1); S12 and S32 were captured, respectively, by CCD3 and CCD4 (Gap 3). Referring to the auxiliary data, S11 and S31 were captured at the same time, whereas S12 and S32 were captured at different times, which means that the along-track speeds of the two moving images were quite different. Moreover, the cross-track image offsets in Gap 1 and Gap 3 vary greatly, which shows that the optical flows were distorted unevenly and deflected away from the along-track direction. On the other
Table 1: The offsets between overlapped images.

Sample | Row no. (pixel) | δ_{nx} (pixel) | Δx′_n (mm) | δ_{ny} (pixel) | Δy′_n (mm)
S11 | 258 | −25.15 | −0.2200625 | −5.39 | 18.4503
S12 | 423 | −23.78 | −0.2080750 | −7.36 | 18.4331
S31 | 266 | −12.85 | −0.1124375 | −7.66 | 18.4304
S32 | 436 | −12.97 | −0.1134875 | −6.87 | 18.4374
hand, it can be seen in Figures 9 and 10 that the fluctuation of the image offsets in Gap 1 is greater in magnitude than that in Gap 3. All these facts indicate that the distorted optical flows can be detected from a large number of image offsets. We will see later that the nonlinear distribution of the data strengthens the well-posedness of the optical flow inversion algorithm.
4. Remote Sensor Attitude Motion Measurement
In this section the attitude velocity of the remote sensor is resolved by the optical flow inversion method. The results of dense registration provide the fixed-solution conditions for the optical flow equations.
4.1. The Principle of Optical Flow Inversion. For clarity, in frame C the two coordinate components of the image displacement of the kth sample element belonging to the nth overlapped strip pair are written as Δx′_{nk}, Δy′_{nk}. From (13) and (25) it is easy to show that the contributions to the optical flow from the orbital motion and the earth's inertial movement vary only slightly in the short term, so that the corresponding displacements can be regarded as piecewise constants s_x, s_y.
Let τ_{ij}, t_{ij} be, in order, the two sequential imaging times of the jth image sample on the overlapped detectors at the ith gap; they are usually recorded in the auxiliary data of the remote sensor. Hence, for every image element, the number of discrete states in the optical flow tracing is

\[
N_{ij} = \left[ \frac{t_{ij} - \tau_{ij}}{\Delta t} \right] \in \mathbb{Z}^{+} \quad (i = 1 \cdots n,\ j = 1 \cdots m),
\tag{38}
\]
where n is the number of CCD gaps, m is the number of sample groups, and Δt is the time step. We put samples with the same j index into the same group, in which the samples are captured by the prior detectors simultaneously.
We expand (11), substitute it into (14) and (13), and then arrange the scalar optical flow inversion equations in terms of the three axial angular velocity components ω_1, ω_2, and ω_3 (the variables of the inverse problem), yielding the linear optical flow equations.
Figure 11: Coefficient determination according to the current location of the image (the locus of the optical flow crosses the CCDs; within each segment of the region, c_{iμκ} = const).
For the lth group of samples:

\[
\sum_{i=l}^{N_{1l}} \left( c^{i}_{1l1}\omega^{i}_{1} + c^{i}_{1l2}\omega^{i}_{2} + c^{i}_{1l3}\omega^{i}_{3} \right) = \Delta x'_{1l} - s_{x1},
\]
\[
\sum_{i=l}^{N_{1l}} \left( d^{i}_{1l1}\omega^{i}_{1} + d^{i}_{1l2}\omega^{i}_{2} + d^{i}_{1l3}\omega^{i}_{3} \right) = \Delta y'_{1l} - s_{y1},
\]
\[
\vdots
\]
\[
\sum_{i=l}^{N_{nl}} \left( c^{i}_{nl1}\omega^{i}_{1} + c^{i}_{nl2}\omega^{i}_{2} + c^{i}_{nl3}\omega^{i}_{3} \right) = \Delta x'_{nl} - s_{xn},
\]
\[
\sum_{i=l}^{N_{nl}} \left( d^{i}_{nl1}\omega^{i}_{1} + d^{i}_{nl2}\omega^{i}_{2} + d^{i}_{nl3}\omega^{i}_{3} \right) = \Delta y'_{nl} - s_{yn}.
\tag{39}
\]
Suppose that the sampling process stops after m groups have been formed. The coefficients are

\[
c^{i}_{\mu\nu\kappa} = \Xi_{\kappa}\left( \mu,\ \left\lceil \frac{i - \nu + 1}{N_{\mu\nu}}\,\mathcal{N} \right\rceil \right), \qquad
d^{i}_{\mu\nu\kappa} = \Lambda_{\kappa}\left( \mu,\ \left\lceil \frac{i - \nu + 1}{N_{\mu\nu}}\,\mathcal{N} \right\rceil \right) \quad (\kappa = 1, 2, 3).
\tag{40}
\]
Here

\[
\Xi_{k} = \begin{pmatrix}
\xi_{11k} & \xi_{12k} & \cdots & \xi_{1\mathcal{N}k} \\
\xi_{21k} & \xi_{22k} & \cdots & \xi_{2\mathcal{N}k} \\
\vdots & \vdots & & \vdots \\
\xi_{n1k} & \xi_{n2k} & \cdots & \xi_{n\mathcal{N}k}
\end{pmatrix}, \qquad
\Lambda_{k} = \begin{pmatrix}
\lambda_{11k} & \lambda_{12k} & \cdots & \lambda_{1\mathcal{N}k} \\
\lambda_{21k} & \lambda_{22k} & \cdots & \lambda_{2\mathcal{N}k} \\
\vdots & \vdots & & \vdots \\
\lambda_{n1k} & \lambda_{n2k} & \cdots & \lambda_{n\mathcal{N}k}
\end{pmatrix}.
\tag{41}
\]
To reduce the complexity of the algorithm, all possible values of the coefficients are stored in the matrixes Ξ_k and Λ_k. The accuracy is guaranteed because the coefficients for images moving into the same piece of region are almost equal to an identical constant over a short period, as explained in Figure 11.
It has been mentioned that the optical flow is not sensitive to the satellite's orbital motion and earth rotation in the short term; the possible values are therefore assigned by the functions

\[
\xi_{ijk} = \xi_{k}\left( a, e, i_{0}, \Omega, \omega, x'_{q}, y'_{q}, \Delta t \right), \qquad
\lambda_{ijk} = \lambda_{k}\left( a, e, i_{0}, \Omega, \omega, x'_{q}, y'_{q}, \Delta t \right),
\]
\[
i = 1 \sim n, \quad j = 1 \sim \mathcal{N}, \quad q = 1 \sim \mathcal{N}.
\tag{42}
\]
Here 𝒩 is the number of constant-valued segments in the region encompassing all possible optical flow trajectories. The orbital elements and the integration step size Δt are common to all the functions. Furthermore, when long-term measurements are executed, Ξ_k and Λ_k only need to be renewed according to the current parameters.

The coefficient matrix of the optical flow equations for the jth (1 ≤ j ≤ m) group can be written as
\[
\mathbf{C}_{j} =
\begin{pmatrix}
c^{1}_{1j1} & c^{1}_{1j2} & c^{1}_{1j3} & \cdots & c^{N_{1j}}_{1j1} & c^{N_{1j}}_{1j2} & c^{N_{1j}}_{1j3} & \cdots & 0 & 0 \\
d^{1}_{1j1} & d^{1}_{1j2} & d^{1}_{1j3} & \cdots & d^{N_{1j}}_{1j1} & d^{N_{1j}}_{1j2} & d^{N_{1j}}_{1j3} & \cdots & 0 & 0 \\
\vdots & & & & & & & & & \vdots \\
c^{1}_{qj1} & c^{1}_{qj2} & c^{1}_{qj3} & \cdots & \cdots & \cdots & \cdots & c^{N_{qj}}_{qj1} & c^{N_{qj}}_{qj2} & c^{N_{qj}}_{qj3} \\
d^{1}_{qj1} & d^{1}_{qj2} & d^{1}_{qj3} & \cdots & \cdots & \cdots & \cdots & d^{N_{qj}}_{qj1} & d^{N_{qj}}_{qj2} & d^{N_{qj}}_{qj3} \\
\vdots & & & & & & & & & \vdots \\
c^{1}_{nj1} & c^{1}_{nj2} & c^{1}_{nj3} & \cdots & c^{N_{nj}}_{nj1} & c^{N_{nj}}_{nj2} & c^{N_{nj}}_{nj3} & \cdots & & 0 \\
d^{1}_{nj1} & d^{1}_{nj2} & d^{1}_{nj3} & \cdots & d^{N_{nj}}_{nj1} & d^{N_{nj}}_{nj2} & d^{N_{nj}}_{nj3} & \cdots & & 0
\end{pmatrix}_{2n \times 3N_{qj}},
\tag{43}
\]
where N_{qj} = max{N_{1j}, …, N_{nj}}. Consequently, organizing the equations of all groups, the global coefficient matrix takes the form
\[
\mathbf{C} =
\begin{pmatrix}
\left[\mathbf{C}_{1}\right]_{2n \times 3N_{q1}} & 0 & \cdots & 0 \\
0 & \left[\mathbf{C}_{2}\right]_{2n \times 3N_{q2}} & \cdots & 0 \\
\vdots & & \ddots & \vdots \\
0 & \cdots & \left[\mathbf{C}_{m}\right]_{2n \times 3N_{qm}} & 0
\end{pmatrix}_{2nm \times 3N_{\max}}.
\tag{44}
\]
C is a quasi-diagonal partitioned matrix; every subblock has 2n rows. The maximal number of columns of C is 3N_max, where N_max = max{N_{q1}, …, N_{qm}}.
The unknown variables are as follows:

\[
\left[\boldsymbol{\Omega}\right]_{3N_{\max} \times 1} =
\left[ \omega^{1}_{1}\ \omega^{1}_{2}\ \omega^{1}_{3}\ \cdots\ \omega^{N_{\max}}_{1}\ \omega^{N_{\max}}_{2}\ \omega^{N_{\max}}_{3} \right]^{T}.
\tag{45}
\]
The constants are as follows:

\[
\Delta\mathbf{u}_{2mn \times 1} =
\left[ \Delta x'_{11}\ \Delta y'_{11}\ \cdots\ \Delta x'_{n1}\ \Delta y'_{n1}\ \cdots\ \Delta x'_{1m}\ \Delta y'_{1m}\ \cdots\ \Delta x'_{nm}\ \Delta y'_{nm} \right]^{T},
\]
\[
\mathbf{s}_{2mn \times 1} =
\left[ s_{x1}\ s_{y1}\ \cdots\ s_{xn}\ s_{yn}\ \cdots\ s_{x1}\ s_{y1}\ \cdots\ s_{xn}\ s_{yn} \right]^{T}.
\tag{46}
\]
Figure 12: The flow chart of the attitude motion measurement. (1) Select the original template T_1 centered on the κth sampling pixel from the referenced image captured by the prior CCD; (2) predict the deformation and displacement of every element via optical flow prediction based on the auxiliary data of the satellite, and reconstruct a new deformed image T′_1; (3) reconfigure the deformed image via the image resampling process to form a new template T_2; (4) use the normalized cross-correlation algorithm to register T_2 on the sensed image captured by the posterior CCD; (5) measure the relative offsets in the photography frame between T_2 and the sensed window; (6) compute the precise offsets in the image frame between T_1 and the sensed window by adding the optical flow prediction; (7) repeat with κ = κ + 1 until κ = N_max, then use the offset data as the fixed-solution conditions for the optical flow inversion equations and solve the inverse problem for the angular velocity ω, for validation and further usage.
Δu has been measured by image dense registration, and s can be determined from the auxiliary data of the sensors. The global equations are expressed by

\[
\mathbf{C}_{2mn \times 3N_{\max}} \cdot \left[\boldsymbol{\Omega}\right]_{3N_{\max} \times 1}
= \Delta\mathbf{u}_{2mn \times 1} - \mathbf{s}_{2mn \times 1}.
\tag{47}
\]
For this problem it is easy to verify that the conditions (1) 2nm > 3N_max and (2) rank(C) = 3N_max are easily met in practical work. To solve (47), well-posedness is the critical issue of the inverse problem. The strong nonlinearity and anisotropy of the optical flow greatly reduce the correlation between the coefficients in C and thereby increase the well-posedness of the solution. The least-squares solution of (47) is

\[
\left[\boldsymbol{\Omega}\right] = \left( \mathbf{C}^{T}\mathbf{C} \right)^{-1} \mathbf{C}^{T} \left( \Delta\mathbf{u} - \mathbf{s} \right).
\tag{48}
\]
The well-posedness can be examined by a Singular Value Decomposition (SVD) of C. Consider the nonnegative definite matrix CᵀC, whose eigenvalues are ordered λ₁ ≥ λ₂ ≥ ⋯ ≥ λ_{3N_max}. Then

\[
\mathbf{C} = \mathbf{U} \left[\boldsymbol{\sigma}\right] \mathbf{V}^{T},
\tag{49}
\]

where U_{2mn×2mn} and V_{3N_max×3N_max} are unit orthogonal matrices and the singular values are σ_i = √λ_i. The well-posedness of the solution is acceptable if the condition number κ(C) = σ₁/σ_{3N_max} ≤ tol.
Associating the inverse-problem solving of Section 4 with the preliminary information acquisition of Section 3, the whole algorithm for the remote sensor's attitude measurement is illustrated in the flow chart of Figure 12.
4.2. Experimental Results and Validation. In the experiment, 72940 samples on 7 image strip pairs were involved. To keep the values in Ξ and Λ nearly invariant, we redistributed these samples into 20 subspaces and solved for the three axial components of the angular velocity. According to Shannon's sampling theorem, the measurable frequency f_c is expected to reach half of the line rate of the TDI CCD; for this experiment, f_c ≈ 1.749 kHz. The ω_i ∼ t curves for 0 s ∼ 0.148 s are shown in Figure 13.

In this period, ω_{2,max} = 0.001104°/s and ω_{1,max} = 0.001194°/s. The signal ω_3(t) fluctuates around the mean value ω̄_3 = 0.01752°/s. It is not hard to infer that high frequency jitters
Figure 13: Solutions for the angular velocities of the remote sensor (ω_1, ω_2, ω_3 in °/s versus imaging time in s; ω_1 and ω_2 scaled by 10⁻³).
were perturbing the remote sensor. Besides, compared to the signals ω_1(t) and ω_2(t), the low frequency components in ω_3(t) are higher in magnitude. Actually, for this remote sensor the satellite yaw angle is regulated in real time to compensate for the image rotation on the focal plane, so that the detectors can always scan along the direction of image motion. Based on the auxiliary data, the image motion velocity vector V of the central pixel in the FOV can be computed, so the optimal yaw motion in principle is
\[
\psi^{*}_{t} = \frac{V_{y'}}{V_{x'}}, \qquad
\omega^{*}_{3}(t) = \dot{\psi}^{*}_{t} = \frac{\dot{V}_{y'} V_{x'} - V_{y'} \dot{V}_{x'}}{V_{x'}^{2}}.
\tag{50}
\]
The mean value of ω*_3(t) is ω̄*_3 = 0.01198°/s. We attribute Δω*_3 = ω̄_3 − ω̄*_3 = 0.00554°/s to the error of satellite attitude control.

In order to validate the measurement, the technique of template reconfiguration was implemented again, to check the expected phenomenon that, based on the high-accuracy attitude information, the correlations between the new templates and T_s should be further improved. In addition, the distribution of γ near γ_max should become more compact, which is easy to understand, since much more useful information about the remote sensor's motion is introduced into the template reconstructions, increasing the similarities between the overlapped images.
Unlike the processing in image dense registration, in the validation phase larger original templates are selected. Let T_1 be the referenced image template centered at the examined element, T_2 the new template reconfigured by the rough prediction of optical flow, T̃_2 the new template reconfigured based on the precise attitude motion measurement, and T_s the template on the sensed image centered at the registration pixel. For all templates, M = N = 101. The distributions of the normalized cross-correlation coefficients corresponding to the referenced template centered on the sample selected in row No. 1000 of the No. 7 CCD, with the sensed image belonging to the No. 8 CCD, are illustrated in Figure 14.
Here (a) shows the situation for T_1 and T_s, (b) for T_2 and T_s, and (c) for T̃_2 and T_s. The compactness of the data is characterized by the peak value γ_max and the location variances σ²_x, σ²_y:
\[
\sigma_{x}^{2} = \frac{\sum_{i=1}^{M} \sum_{j=1}^{M} \gamma_{ij} \cdot \left( i - x_{\max} \right)^{2}}{\sum_{i=1}^{M} \sum_{j=1}^{M} \gamma_{ij}}, \qquad
\sigma_{y}^{2} = \frac{\sum_{i=1}^{M} \sum_{j=1}^{M} \gamma_{ij} \cdot \left( j - y_{\max} \right)^{2}}{\sum_{i=1}^{M} \sum_{j=1}^{M} \gamma_{ij}},
\tag{51}
\]

where x_max and y_max are, respectively, the column and row numbers of the peak-valued location.
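These compactness statistics of (51) are straightforward to compute; a sketch (the row/column index convention is an assumption here, and γ is assumed nonnegative):

```python
import numpy as np

def compactness(gamma):
    """Peak value and location variances (Eq. (51)) of an M x M
    correlation surface gamma, assumed nonnegative."""
    x_max, y_max = np.unravel_index(np.argmax(gamma), gamma.shape)
    ii, jj = np.indices(gamma.shape)
    total = gamma.sum()
    var_x = (gamma * (ii - x_max) ** 2).sum() / total
    var_y = (gamma * (jj - y_max) ** 2).sum() / total
    return gamma.max(), var_x, var_y
```

A surface concentrated at a single point has zero variance; spreading mass away from the peak increases it, which is what Table 2 quantifies for cases (a)–(c).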
In case (a), γ_max(a) = 0.893, with standard deviations σ_x(a) = 5.653 and σ_y(a) = 8.192; in case (b), γ_max(b) = 0.918, σ_x(b) = 4.839, and σ_y(b) = 6.686; in case (c), γ_max(c) = 0.976, and the deviations sharply shrink to σ_x(c) = 3.27 and σ_y(c) = 4.06. In Table 2, other samples taken at intervals of 1000 rows are also examined; these samples can be regarded as independent of each other.
Judging from the results, the performance in case (c) is better than in case (b) and much better than in case (a), since the precise attitude motion measurements enhance the precision of the optical flow inversion and thus improve the similarities between the new templates and the sensed images. Note that, although in case (b) the variances decrease only slightly, as analyzed in Section 3.2, compared to case (a) the offsets of the centroids from the peaks have been corrected well by use of the rough optical flow predictions.
4.3. Summary and Discussion. From the preceding sections we can see that, compared to ordinary NCC, the precision of image registration is greatly improved, owing to the technique of template reconfiguration. By applying the auxiliary data of the space-borne sensors to optical flow prediction, the relative deformations between the overlapped image pairs can be computed with considerable accuracy; they are then used to estimate the gray values of the corresponding parts of the sensed images and to construct a new template for registration. The space-borne sensors give the middle and low frequency components of the imager's attitude motion with excellent precision, so, compared to classical direct template-based registration algorithms, the similarity between the reconfigured template and the sensed images increases greatly. Furthermore, the minor deformations attributed to high frequency jitters can be detected by subpixel registration between the reconfigured templates and the sensed images. This is the exact basis of high frequency jitter measurement by optical flow inversion.
5. Conclusion
In this paper, optical flows and time-varying image deformation in dynamic space imaging are analyzed in detail. The nonlinear and anisotropic image motion velocities and optical flows are utilized to strengthen the well-posedness of the inverse problem of precise attitude measurement by the optical
Figure 14: Normalized cross-correlations comparison. ((a) shows the distribution of γ obtained by applying the direct NCC algorithm; (b) shows the distribution of γ after template reconfiguration with rough optical flow prediction; (c) shows the distribution of γ derived from posterior template reconfiguration with the high-accuracy sensor attitude measurement. It can be noticed that, from left to right, the values of γ become concentrated around the peak-value location.)
Table 2: Correlation coefficients distribution for registration templates.

Row number | γ_max (a, b, c) | σ_x (a, b, c) | σ_y (a, b, c)
No. 1000 | 0.893, 0.918, 0.976 | 5.653, 4.839, 3.27 | 8.192, 6.686, 4.06
No. 2000 | 0.807, 0.885, 0.929 | 8.704, 6.452, 2.13 | 6.380, 7.342, 5.71
No. 3000 | 0.832, 0.940, 0.988 | 4.991, 3.023, 1.55 | 7.704, 4.016, 1.93
No. 4000 | 0.919, 0.935, 0.983 | 5.079, 3.995, 3.61 | 5.873, 5.155, 3.85
No. 5000 | 0.865, 0.922, 0.951 | 5.918, 4.801, 2.37 | 6.151, 2.371, 2.57
No. 6000 | 0.751, 0.801, 0.907 | 12.57, 9.985, 7.89 | 14.66, 8.213, 2.06
No. 7000 | 0.759, 0.846, 0.924 | 11.63, 10.84, 7.14 | 12.71, 8.267, 4.90
No. 8000 | 0.884, 0.900, 0.943 | 8.125, 3.546, 5.42 | 8.247, 6.770, 2.88
flow inversion method. For the purpose of determining the fixed-solution conditions of the optical flow equations, information-based image registration algorithms are proposed. We apply rough optical flow prediction to improve the efficiency and accuracy of dense image registration. Based on the registration results, the attitude motions of the remote sensor during imaging are measured by the precise optical flow inversion method. The experiment on a remote sensor showed that the measurements achieve very high accuracy as well as a broad bandwidth. This method can be used extensively in remote sensing missions such as image strip splicing, geometrical rectification, and nonblind image restoration, to promote surveying precision and resolving power.
Conflict of Interests
The authors declare that they have no financial or personal relationships with other people or organizations that can inappropriately influence their work; there is no professional or other personal interest of any nature or kind in any product, service, and/or company that could be construed as influencing the position presented in, or the review of, this paper.
Acknowledgments
This work is supported by the National High Technology Research and Development Program of China (863 Program) (Grants nos. 2012AA121503, 2013AA12260, and 2012AA120603) and the National Natural Science Foundation of China (Grant no. 61377012).
References
[1] V. Tchernykh, M. Beck, and K. Janschek, "An embedded optical flow processor for visual navigation using optical correlator technology," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '06), pp. 67–72, Beijing, China, October 2006.

[2] K. Janschek and V. Tchernykh, "Optical correlator for image motion compensation in the focal plane of a satellite camera," in Proceedings of the 15th IFAC Symposium on Automatic Control in Aerospace, Bologna, Italy, 2001.

[3] W. Priedhorsky and J. J. Bloch, "Optical detection of rapidly moving objects in space," Applied Optics, vol. 44, no. 3, pp. 423–433, 2005.

[4] T. Brox and J. Malik, "Large displacement optical flow: descriptor matching in variational motion estimation," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 33, no. 3, pp. 500–513, 2011.
16 Mathematical Problems in Engineering
[5] B. Feng, P. P. Bruyant, P. H. Pretorius, et al., "Estimation of the rigid-body motion from three-dimensional images using a generalized center-of-mass points approach," IEEE Transactions on Nuclear Science, vol. 53, no. 5, pp. 2712–2718, 2006.
[6] J. Wang, P. Yu, C. Yan, J. Ren, and B. He, "Space optical remote sensor image motion velocity vector computational modeling, error budget and synthesis," Chinese Optics Letters, vol. 3, no. 7, pp. 414–417, 2005.
[7] A. S. McEwen, M. E. Banks, N. Baugh, et al., "The High Resolution Imaging Science Experiment (HiRISE) during MRO's Primary Science Phase (PSP)," Icarus, vol. 205, no. 1, pp. 2–37, 2010.
[8] F. Ayoub, S. Leprince, R. Binet, K. W. Lewis, O. Aharonson, and J.-P. Avouac, "Influence of camera distortions on satellite image registration and change detection applications," in Proceedings of the IEEE International Geoscience and Remote Sensing Symposium (IGARSS '08), pp. II-1072–II-1075, Boston, Mass, USA, 2008.
[9] S. Leprince, S. Barbot, F. Ayoub, and J.-P. Avouac, "Automatic and precise orthorectification, coregistration, and subpixel correlation of satellite images, application to ground deformation measurements," IEEE Transactions on Geoscience and Remote Sensing, vol. 45, no. 6, pp. 1529–1558, 2007.
[10] S. Leprince, P. Muse, and J.-P. Avouac, "In-flight CCD distortion calibration for pushbroom satellites based on subpixel correlation," IEEE Transactions on Geoscience and Remote Sensing, vol. 46, no. 9, pp. 2675–2683, 2008.
[11] Y. Yitzhaky, R. Milberg, S. Yohaev, and N. S. Kopeika, "Comparison of direct blind deconvolution methods for motion-blurred images," Applied Optics, vol. 38, no. 20, pp. 4325–4332, 1999.
[12] R. C. Hardie, K. J. Barnard, and R. Ordonez, "Fast super-resolution with affine motion using an adaptive Wiener filter and its application to airborne imaging," Optics Express, vol. 19, no. 27, pp. 26208–26231, 2011.
[13] E. M. Blixt, J. Semeter, and N. Ivchenko, "Optical flow analysis of the aurora borealis," IEEE Geoscience and Remote Sensing Letters, vol. 3, no. 1, pp. 159–163, 2006.
[14] M. G. Mozerov, "Constrained optical flow estimation as a matching problem," IEEE Transactions on Image Processing, vol. 22, no. 5, pp. 2044–2055, 2013.
[15] H. Sakaino, "A semitransparency-based optical-flow method with a point trajectory model for particle-like video," IEEE Transactions on Image Processing, vol. 21, no. 2, pp. 441–450, 2012.
[16] D. Korsch, "Closed form solution for three-mirror telescopes, corrected for spherical aberration, coma, astigmatism, and field curvature," Applied Optics, vol. 11, no. 12, pp. 2986–2987, 1972.
[17] G. Naletto, V. da Deppo, M. G. Pelizzo, R. Ragazzoni, and E. Marchetti, "Optical design of the wide angle camera for the Rosetta mission," Applied Optics, vol. 41, no. 7, pp. 1446–1453, 2002.
[18] M. Born, E. Wolf, A. B. Bhatia, and P. C. Clemmow, Principles of Optics: Electromagnetic Theory of Propagation, Interference and Diffraction of Light, 7th edition, 1999.
[19] H. Schaub and J. L. Junkins, Analytical Mechanics of Space Systems, AIAA Education Series, 2002.
[20] C. Wang, F. Xing, J. H. Wang, and Z. You, "Optical flows method for lightweight agile remote sensor design and instrumentation," in International Symposium on Photoelectronic Detection and Imaging, vol. 8908 of Proceedings of the SPIE, 2013.
[21] T. Sun, F. Xing, and Z. You, "Optical system error analysis and calibration method of high-accuracy star trackers," Sensors, vol. 13, no. 4, pp. 4598–4623, 2013.
[22] T. Sun, F. Xing, Z. You, and M. Wei, "Motion-blurred star acquisition method of the star tracker under high dynamic conditions," Optics Express, vol. 21, no. 17, pp. 20096–20110, 2013.
[23] L. Younes, "Combining geodesic interpolating splines and affine transformations," IEEE Transactions on Image Processing, vol. 15, no. 5, pp. 1111–1119, 2006.
[24] B. Zitova and J. Flusser, "Image registration methods: a survey," Image and Vision Computing, vol. 21, no. 11, pp. 977–1000, 2003.
[25] Z. L. Song, S. Li, and T. F. George, "Remote sensing image registration approach based on a retrofitted SIFT algorithm and Lissajous-curve trajectories," Optics Express, vol. 18, no. 2, pp. 513–522, 2010.
[26] V. Arevalo and J. Gonzalez, "Improving piecewise linear registration of high-resolution satellite images through mesh optimization," IEEE Transactions on Geoscience and Remote Sensing, vol. 46, no. 11, pp. 3792–3803, 2008.
[27] Z. Levi and C. Gotsman, "D-Snake: image registration by as-similar-as-possible template deformation," IEEE Transactions on Visualization and Computer Graphics, vol. 19, no. 2, pp. 331–343, 2013.
[28] R. J. Althof, M. G. J. Wind, and J. T. Dobbins III, "A rapid and automatic image registration algorithm with subpixel accuracy," IEEE Transactions on Medical Imaging, vol. 16, no. 3, pp. 308–316, 1997.
[29] W. Tong, "Subpixel image registration with reduced bias," Optics Letters, vol. 36, no. 5, pp. 763–765, 2011.
[30] Y. Bentoutou, N. Taleb, K. Kpalma, and J. Ronsin, "An automatic image registration for applications in remote sensing," IEEE Transactions on Geoscience and Remote Sensing, vol. 43, no. 9, pp. 2127–2137, 2005.
[31] L. S. Ming, L. Yan, and L. Jindong, "Mapping Satellite-1 transmission type photogrammetric and remote sensing satellite," Journal of Remote Sensing, vol. 16, supplement, pp. 10–16, 2012 (Chinese).
[32] J. P. Lewis, "Fast template matching," Vision Interface, vol. 95, pp. 120–123, 1995.
[33] H. Foroosh, J. B. Zerubia, and M. Berthod, "Extension of phase correlation to subpixel registration," IEEE Transactions on Image Processing, vol. 11, no. 3, pp. 188–200, 2002.
Mars Reconnaissance Orbiter (MRO) in the High Resolution Imaging Science Experiment (HiRISE) missions of NASA. It takes pictures of Mars with a resolution of 0.3 m/pixel. Fourteen staggered parallel CCDs, overlapped by 48 pixels at each end, fulfill the entire field of view. Although adjacent detectors overlap by equal numbers of physical pixels, their lapped image pixels are not equal and vary with time, because spacecraft jitters cause undulating optical flows within the interlaced areas [8]. In addition, we found that when large FOV remote sensors perform stereoscopic imaging with large pitch angles, the lapped images belonging to the marginal detectors are bound to exceed or lose several hundred pixels compared to their physical overlaps. Furthermore, this unexpected quantity decreases significantly for the detectors mounted at the central region of the focal plane.
Although nonuniform optical flow brings many troubles in image processing, such as registration, resampling, interconnection, and geometrical rectification, it permits us to measure the spacecraft attitude motion with very high accuracy over a broad bandwidth, a target that is nearly impossible for conventional space-borne sensors. Precision attitude motion measurement is very useful for remote sensing image processing, especially for image restoration from motion blurring, as studied in [11, 12]. Associating the measurements with optical flow models, the dynamic point spread functions (PSF) can be estimated and set as the convolution kernels in nonblind deconvolution algorithms.
The behavior of optical flow characterizes the entire two-dimensional flow field of an image's motion and variation. In [13], optical flow estimation based on image sequences of the same aurora determines the flow field and thereby provides access to the phase space, the important information for understanding the physical mechanism of the aurora. To improve the accuracy of optical flow estimation, a two-step matching paradigm is applied in [14]: firstly, the coarse distribution of motion vectors is measured with a simple frame-to-frame correlation technique, also known as the digital symmetric phase-only filter (SPOF); after that, a subpixel-accuracy estimate is achieved by the sequential tree-reweighted max-product message passing (TRW-S) optimization. Similarly, Sakaino overcame the disadvantages of optical flow determination when moving objects with different shapes and sizes move against a complicated background, where the image intensity between frames may violate the common assumption of image brightness constancy, by introducing image brightness change models as constraints [15]. However, unlike with continuous image sequences, if we merely obtain several images of the identical moving objects captured by different detectors with long intervals, the former techniques do not work well for optical flow estimation, for lack of information about the imaging process of the instrument.
In this paper, a new optical flow inversion method is proposed for precise attitude measurement. Unlike the situations in [13–15], video image sequences do not exist for transmission-type remote sensors; instead, image pairs of the same earth scene are captured by different TDI CCD detectors in push-broom fashion. The time intervals between the independent image formations of the overlapped detectors are much longer than the interval between sequential frames in video, for which the frame rates usually exceed tens of frames per second (fps). However, we can model optical flows based on the working mechanism of the instrument and image processing techniques, rather than estimating them from frame sequences of a specific detector. The contents of this paper are organized as follows: in Section 2, an analytical model of the image motion velocity field is established, which is applicable to dynamic imaging of the three-dimensional planet surface by large FOV remote sensors. The phenomenon of moving image deformation due to optical flow is investigated in Section 3; based on rough inversion of optical flow, a novel method for dense image registration is developed to measure the subpixel offsets between the lapped images captured by adjacent detectors. In Section 4, an attitude motion measuring method based on precise optical flow inversion is studied, and the results of the experiment support the theory well.
2. Image Velocity Field Analysis
Suppose that a large FOV camera is performing push-broom imaging of the earth; the scenario is illustrated in Figure 1. The planet's surface cannot be regarded as a local plane but must be treated as a three-dimensional ellipsoid, since the curvature greatly influences the image motion and time-varying deformation when complicated relative motion exists between the imager and the earth.
In order to set up the model of space imaging, some coordinate systems need to be defined as follows:

(1) $I(O_e - XYZ)$: the inertial frame of the earth. For convenience, here we choose the J2000 frame. The origin $O_e$ is located at the earth center.

(2) $C(o_s - x'_1 x'_2 x'_3)$: the frame of the camera. Axis $o_s x'_3$ is the optical axis, and the origin $o_s$ is the center of the exit pupil.

(3) $O(o_s - u_1 u_2 u_3)$: the orbit frame. Axis $o_s u_3$ passes through the center of the earth, and axis $o_s u_2$ is perpendicular to the instant orbit plane.

(4) $B(o_s - x_b y_b z_b)$: the body frame of the satellite.

(5) $P(o - xyz)$: the frame of photography. The origin $o$ is the center of the photo. Axis $ox$ points in the column direction, and axis $oy$ points in the row direction.

(6) $F(o' - x'y'z')$: the frame of the focal plane. Axes $o'x'$ and $o'y'$ lie in the focal plane; they are respectively parallel to $o_s x'_2$ and $o_s x'_1$. Axis $o'z'$ coincides with the optical axis.

(7) $E(O_e - x_e y_e z_e)$: the Terrestrial Reference Frame (TRF). Axis $O_e y_e$ points to the North Pole, and axis $O_e x_e$ passes through the intersection of the Greenwich meridian and the equator.
According to Figure 1, $\partial_0$ is the ground track of the satellite, and $\partial_1$ and $\partial_2$ are the ground traces corresponding to two fixed boresights in the FOV, which lie far from $\partial_0$ when the imager holds a large attitude angle. Obviously, the shapes and lengths of $\partial_1$ and $\partial_2$ also differ notably during push-broom, which implies that the geometrical structure of the image is time-varying as well as nonuniform. Furthermore, it will be seen later that the deforming rates mainly depend on the planet's apparent motion observed by the camera.

Figure 1: The analysis of dynamic imaging for the three-dimensional planet.
Considering an object point $p$ on the earth, its position vector relative to $O_e$ is denoted as $\rho_p$. As a convention in the following discussions, ${}^{I}\rho_p$ represents the vector measured in frame $I$, and accordingly ${}^{C}\rho_p$ is the same vector measured in frame $C$. We select one unit vector $\tau$ which is tangent to the surface of the earth at $p$. Let $r(x_1, x_2, x_3)$ be the position vector of $p$ relative to $o_s$; then ${}^{C}r$ and ${}^{C}\dot{r}$ characterize the apparent motion of $p$. Assume that the image point $p'$ is formed on the focal plane with coordinates $(x'_1, x'_2, x'_3)$ in frame $C$. Generally, the optical systems of space cameras are well designed and free from optical aberrations, and the static PSF is approximate to the diffraction limit [16, 17]; thus, following [18], we have

$$x'_i = \beta x_i \ (i = 1, 2), \qquad x'_3 = f', \tag{1}$$
where $f'$ is the effective focal length and $\beta = (-1)^{m-1}\,(f'/(r \cdot e_3))$ is the lateral magnification of $p'$; $m$ is the number of intermediate images in the optical system, and $e_i\ (i = 1, 2, 3)$ is the base of $C$.

Let $r_s$ be the position vector of the satellite relative to $O_e$; then $r = \rho - r_s$. During imaging, the flight trajectory of the satellite platform in $I$ can be treated as a Keplerian orbit, as illustrated in Figure 2. Given the orbit elements ($i_0$, inclination; $\Omega$, longitude of ascending node; $\omega$, argument of perigee; $a$, semimajor axis; $e$, eccentricity; $M_t$, mean anomaly at epoch), we implement the Newton-Raphson method to solve (2) and get the eccentric anomaly $E$ from the given mean anomaly $M_t = M_0 + n(t - t_0)$, where $n = 2\pi/P$ and $P$ is the orbit period [19]:

$$M_t - (E - e \sin E) = 0. \tag{2}$$
Figure 2: Orbital motion of the remote sensor.
In frame $O$,

$${}^{O}r_s = \begin{pmatrix} a(\cos E - e) \\ b \sin E \\ 0 \end{pmatrix}, \qquad {}^{O}v_s = \begin{pmatrix} -a \sin E \\ b \cos E \\ 0 \end{pmatrix} \frac{n}{1 - e \cos E}, \tag{3}$$

where $b = a\sqrt{1 - e^2}$ is the semiminor axis of the orbit.
The coordinate transform matrix between $O$ and $I$ is

$$\mathbf{T}_{OI} = \begin{pmatrix} C_\omega C_\Omega - S_\omega C_{i_0} S_\Omega & -S_\omega C_\Omega - C_\omega C_{i_0} S_\Omega & S_{i_0} S_\Omega \\ C_\omega S_\Omega + S_\omega C_{i_0} C_\Omega & -S_\omega S_\Omega + C_\omega C_{i_0} C_\Omega & -S_{i_0} C_\Omega \\ S_\omega S_{i_0} & C_\omega S_{i_0} & C_{i_0} \end{pmatrix}. \tag{4}$$
For simplicity we write $C_\alpha = \cos\alpha$ and $S_\alpha = \sin\alpha$.

In engineering, the coordinate transfer matrix $\mathbf{T}_{OI}$ can also be derived from real-time GPS measurements. The base vectors of frame $O$ in $I$ are $\hat{u}_3 = -{}^{I}r_s / |r_s|$, $\hat{u}_2 = ({}^{I}v_s \times {}^{I}r_s) / |v_s \times r_s|$, and $\hat{u}_1 = \hat{u}_2 \times \hat{u}_3$; then $\mathbf{T}_{OI} = (\hat{u}_1, \hat{u}_2, \hat{u}_3)^{-1}$ and

$${}^{I}r_s = \mathbf{T}_{OI} \cdot {}^{O}r_s, \qquad {}^{I}v_s = \mathbf{T}_{OI} \cdot {}^{O}v_s. \tag{5}$$
Associating the equation of the boresight with the ellipsoid surface of the earth in $C$ yields

$$\frac{X^2 + Z^2}{A_e^2} + \frac{Y^2}{B_e^2} = 1, \qquad \frac{X - X_s}{s_1} = \frac{Y - Y_s}{s_2} = \frac{Z - Z_s}{s_3}. \tag{6}$$
Here $A_e = 6378.137$ km and $B_e = 6356.752$ km are the lengths of the earth's semimajor axis and semiminor axis, and $s_i\ (i = 1, 2, 3)$ are the components of the unit vector along ${}^{I}r$. We write the solution of (6) as ${}^{I}\rho = (X, Y, Z)^T$. Hence ${}^{I}r = {}^{I}\rho - {}^{I}r_s$ and ${}^{C}r = \mathbf{M} \cdot \mathbf{A} \cdot \mathbf{T}_{OI}^{-1} \cdot {}^{I}r$, where $\mathbf{M}$ is the coordinate transformation matrix from frame $B$ to frame $C$ (a constant matrix for a fixed installation) and $\mathbf{A}$ is the attitude matrix of the satellite; according to the 1-2-3 rotating order, we have

$$\mathbf{A} = \mathbf{R}_\psi \cdot \mathbf{R}_\theta \cdot \mathbf{R}_\varphi, \tag{7}$$
in which

$$\mathbf{R}_\psi = \begin{pmatrix} \cos\psi_t & \sin\psi_t & 0 \\ -\sin\psi_t & \cos\psi_t & 0 \\ 0 & 0 & 1 \end{pmatrix}, \qquad \mathbf{R}_\theta = \begin{pmatrix} \cos\theta_t & 0 & -\sin\theta_t \\ 0 & 1 & 0 \\ \sin\theta_t & 0 & \cos\theta_t \end{pmatrix}, \qquad \mathbf{R}_\varphi = \begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos\varphi_t & \sin\varphi_t \\ 0 & -\sin\varphi_t & \cos\varphi_t \end{pmatrix}, \tag{8}$$
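The composition in (7)-(8) can be sketched directly (a hypothetical helper of ours, not code from the paper):

```python
import numpy as np

def attitude_matrix(phi, theta, psi):
    """A = R_psi @ R_theta @ R_phi, the 1-2-3 rotation order of (7)-(8)."""
    c, s = np.cos, np.sin
    R_phi = np.array([[1, 0, 0],
                      [0,  c(phi), s(phi)],
                      [0, -s(phi), c(phi)]])        # roll, about axis 1
    R_theta = np.array([[c(theta), 0, -s(theta)],
                        [0, 1, 0],
                        [s(theta), 0,  c(theta)]])  # pitch, about axis 2
    R_psi = np.array([[ c(psi), s(psi), 0],
                      [-s(psi), c(psi), 0],
                      [0, 0, 1]])                   # yaw, about axis 3
    return R_psi @ R_theta @ R_phi
```

Each factor is orthonormal, so the inverse of the attitude matrix is simply its transpose.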
where $\varphi_t$, $\theta_t$, and $\psi_t$ are, in order, the real-time roll angle, pitch angle, and yaw angle at moment $t$. The velocity of $p$ in $C$ can be written in the following scalar form:

$$\dot{x}_i = {}^{C}\dot{r} \cdot e_i \quad (i = 1, 2, 3). \tag{9}$$
Thus the velocity of the image point $p'$ will be

$$\dot{x}'_i = \dot{\beta} x_i + \beta \dot{x}_i = (-1)^m \frac{f' (\dot{r} \cdot e_3)}{(r \cdot e_3)^2}\, x_i + (-1)^{m-1} \frac{f'}{r \cdot e_3}\, \dot{x}_i \quad (i = 1, 2). \tag{10}$$

Substituting (2)–(9) into (10), the velocity vector of the image point $V' = (\dot{x}'_1, \dot{x}'_2)^T$ can be expressed as an explicit function of several variables; that is,

$$V' = V(i_0, \Omega, \omega, e, M_t;\ \varphi_t, \theta_t, \psi_t;\ \dot{\varphi}_t, \dot{\theta}_t, \dot{\psi}_t;\ x'_1, x'_2). \tag{11}$$

For conciseness, the analytical expression of $V'$ is omitted here.
The orbit elements can be determined from instantaneous GPS data; they can also be calculated with sufficient accuracy by celestial mechanics [19]. On the other hand, the attitude angles $\varphi_t$, $\theta_t$, and $\psi_t$ can be roughly measured by the star trackers and GPS. Meanwhile, their time rates $\dot{\varphi}_t$, $\dot{\theta}_t$, and $\dot{\psi}_t$ satisfy

$$\begin{pmatrix} \omega_1 \\ \omega_2 \\ \omega_3 \end{pmatrix} = \mathbf{R}_\psi \begin{pmatrix} 0 \\ 0 \\ \dot{\psi}_t \end{pmatrix} + \mathbf{R}_\psi \mathbf{R}_\theta \left[ \begin{pmatrix} 0 \\ \dot{\theta}_t \\ 0 \end{pmatrix} + \mathbf{R}_\varphi \begin{pmatrix} \dot{\varphi}_t \\ 0 \\ 0 \end{pmatrix} \right]. \tag{12}$$

Here $\omega_1$, $\omega_2$, and $\omega_3$ are the three components of the remote sensor's angular velocity ${}^{C}\omega_s$ relative to the orbital frame $O$, resolved in frame $C$. They can be roughly measured by space-borne gyroscopes or other attitude sensors.
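For a numerical check of this kinematic relation, the right-hand side of (12) can be evaluated directly; a sketch with our own names, reusing the elemental rotations of (8):

```python
import numpy as np

def body_rates(phi, theta, psi, dphi, dtheta, dpsi):
    """Angular velocity components (omega1, omega2, omega3) from the Euler
    angles and their time rates, 1-2-3 rotation order as in (8) and (12)."""
    c, s = np.cos, np.sin
    R_phi = np.array([[1, 0, 0], [0, c(phi), s(phi)], [0, -s(phi), c(phi)]])
    R_theta = np.array([[c(theta), 0, -s(theta)], [0, 1, 0], [s(theta), 0, c(theta)]])
    R_psi = np.array([[c(psi), s(psi), 0], [-s(psi), c(psi), 0], [0, 0, 1]])
    return (R_psi @ np.array([0.0, 0.0, dpsi])
            + R_psi @ R_theta @ (np.array([0.0, dtheta, 0.0])
                                 + R_phi @ np.array([dphi, 0.0, 0.0])))

# Sanity check: a pure yaw rate appears only on the third component
w = body_rates(0.0, 0.0, 0.5, 0.0, 0.0, 0.02)
```

A pure roll rate at zero pitch and yaw likewise lands entirely on the first component, which is the expected behavior for this rotation order.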
It is easy to verify from (11) that the instantaneous image velocity field on the focal plane is significantly nonlinear and anisotropic for large FOV remote sensors, especially when they perform large-angle attitude maneuvering, for example, side-looking by swing or stereoscopic looking by pitching. Under these circumstances, in order to acquire photos with high spatial, temporal, and spectral resolution, image motion velocity control strategies should be executed in real time [20], based on auxiliary data measured by reliable space-borne sensors [21, 22]. In detail, for TDI CCD cameras, the line rates of the detectors must be controlled to synchronize with the local image velocity moduli during exposure, so as to avoid along-track motion blurring; the attitude of the remote sensor should be regulated in time to keep the detectors' push-broom direction aligned with the direction of image motion, so as to avoid cross-track motion blurring.
3. Optical Flow Rough Inversion and Dense Image Registration
Optical flow is another important physical model, carrying the whole energy and information of moving images in dynamic imaging. A specific optical flow trajectory is an integral curve which is always tangent to the image velocity field; thus we have

$$x'_1(T) = \int_0^T \dot{x}'_1(x'_1, x'_2, t)\, dt, \qquad x'_2(T) = \int_0^T \dot{x}'_2(x'_1, x'_2, t)\, dt. \tag{13}$$
Since (13) are coupled nonlinear integral equations, we convert them to numerical form and solve them iteratively:

$$x'_i(0) = x'_i(t)\big|_{t=0},$$
$$x'_j(n) = x'_j(n-1) + \frac{1}{2} \left\{ \dot{x}'_j\left[x'_1(n-1), x'_2(n-1), n\right] + \dot{x}'_j\left[x'_1(n-1), x'_2(n-1), n-1\right] \right\} \Delta t \quad (j = 1, 2;\ n \in \mathbb{Z}^+). \tag{14}$$
It is evident that the algorithm has enough precision so long as the time step $\Delta t$ is small enough. It can be inferred from (13) that a strongly nonlinear image velocity field may distort optical flows so much that the geometrical structure of the image behaves irregularly. Therefore, if we intend to invert the information of optical flow to measure the attitude motion, the general formula of image deformation due to the optical flows should be deduced.
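The iteration (14) amounts to stepping along the trajectory with a time-averaged velocity sample; a small sketch (function and variable names are ours):

```python
import numpy as np

def integrate_flow(velocity, x0, T, dt):
    """March an optical-flow trajectory as in (14): advance the point by the
    average of the velocity field sampled at the previous position at two
    successive time instants."""
    x = np.array(x0, dtype=float)
    trajectory = [x.copy()]
    for n in range(1, int(round(T / dt)) + 1):
        v_prev = np.asarray(velocity(x[0], x[1], (n - 1) * dt))
        v_next = np.asarray(velocity(x[0], x[1], n * dt))
        x = x + 0.5 * (v_prev + v_next) * dt
        trajectory.append(x.copy())
    return np.array(trajectory)

# Usage: a uniform velocity field reproduces straight-line motion exactly
path = integrate_flow(lambda x1, x2, t: (1.0, 0.5), (0.0, 0.0), 1.0, 0.01)
```

For a genuinely nonuniform field the accuracy is first order in the spatial sampling, which is why the step size must be kept small, as noted above.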
3.1. Time-Varying Image Deformation in Dynamic Imaging. Firstly, we investigate some differential characteristics of the moving image of an extended object on the earth's surface. As shown in Figure 1, a micro spatial variation of $p$ along $\tau$ on the curved surface can be expressed as $\delta\rho_p = \delta l\, \tau$. Its conjugated image is

$$\delta x'_i = \delta\beta\, x_i + \beta\, \delta x_i. \tag{15}$$
We expand the term $\delta\beta$:

$$\delta\beta = (-1)^{m-1} \left[ \frac{f'}{(r + \delta r) \cdot e_3} - \frac{f'}{r \cdot e_3} \right] = (-1)^{m-1} \frac{f'}{r \cdot e_3} \sum_{k=1}^{\infty} (-1)^k \left( \frac{\delta r \cdot e_3}{r \cdot e_3} \right)^k \approx (-1)^m f' \frac{(\tau \cdot e_3)\, \delta l}{(r \cdot e_3)^2}. \tag{16}$$
Taking derivatives with respect to the variable $t$ on both sides of (15), we have

$$\delta\dot{x}'_i = \delta\dot{\beta}\, x_i + \delta\beta\, \dot{x}_i + \dot{\beta}\, \delta x_i + \beta\, \delta\dot{x}_i. \tag{17}$$
According to (16), we know that $\delta\dot{\beta} \approx 0$. On the other hand, the variation of $\dot{r}$ can be expressed through a series of coordinate transformations; that is,

$${}^{C}(\delta\dot{r}) = \delta l\, \frac{d}{dt}\left[ \mathbf{M}\mathbf{A}\mathbf{T}_{OI}^{-1}\mathbf{T}_{EI}\, {}^{E}\tau \right]. \tag{18}$$

Notice that ${}^{E}\tau$ is a fixed tangent vector of the earth's surface at the object point $p$, which is time-invariant and specifies an orientation of the motionless scene on the earth. Consequently,

$$\left( \frac{{}^{C}\delta\dot{r}}{\delta l} \right)_\tau = \left( \dot{\mathbf{M}}\mathbf{A}\mathbf{T}_{OI}^{-1}\mathbf{T}_{EI} + \mathbf{M}\dot{\mathbf{A}}\mathbf{T}_{OI}^{-1}\mathbf{T}_{EI} + \mathbf{M}\mathbf{A}\dot{\mathbf{T}}_{OI}^{-1}\mathbf{T}_{EI} + \mathbf{M}\mathbf{A}\mathbf{T}_{OI}^{-1}\dot{\mathbf{T}}_{EI} \right) {}^{E}\tau, \tag{19}$$
where the coordinate transform matrix from frame $E$ to $I$ is

$$\mathbf{T}_{EI} = \begin{pmatrix} \cos H_p & 0 & -\sin H_p \\ 0 & 1 & 0 \\ \sin H_p & 0 & \cos H_p \end{pmatrix}. \tag{20}$$
Let $\omega_e$ be the angular rate of the earth and $\alpha_p$ the longitude of $p$ on the earth; then the hour angle of $p$ at time $t$ is $H_p(t) = \mathrm{GST} + \alpha_p + \omega_e t$, in which GST represents Greenwich sidereal time.

The microscale image deformation of the extended scene on the earth along the direction of $\tau$ during $t_1 \sim t_2$ can be written as

$$\left[\delta x'_i\right]^{t_2}_{\tau} - \left[\delta x'_i\right]^{t_1}_{\tau} = \int_{t_1}^{t_2} (\delta\dot{x}'_i)_\tau\, dt. \tag{21}$$
From (17) we have

$$\frac{(\delta\dot{x}'_i)_\tau}{\delta l} = \frac{\delta\beta}{\delta l}\,\dot{x}_i + \dot{\beta}\,\frac{\delta x_i}{\delta l} + \beta\,\frac{\delta\dot{x}_i}{\delta l}. \tag{22}$$

According to (16), (18), and (19), we obtain the terms in (22):

$$\frac{\delta\beta}{\delta l} = (-1)^m f' \frac{{}^{C}\tau \cdot e_3}{(r \cdot e_3)^2}, \qquad \frac{\delta x_i}{\delta l} = \left( \mathbf{M}\mathbf{A}\mathbf{T}_{OI}^{-1}\mathbf{T}_{EI}\, {}^{E}\tau \right) \cdot e_i, \qquad \frac{\delta\dot{x}_i}{\delta l} = \left( \frac{{}^{C}\delta\dot{r}}{\delta l} \right)_\tau \cdot e_i + \left( \frac{{}^{C}\delta r}{\delta l} \right)_\tau \cdot \dot{e}_i. \tag{23}$$
Furthermore, if the camera is fixed to the satellite platform, then $\dot{\mathbf{M}} = 0$ and $\dot{e}_i = 0$. Consequently, (22) becomes

$$\begin{aligned} \mathcal{F}_i(t, \tau) &= \frac{(\delta\dot{x}'_i)_\tau}{\delta l} \\ &= (-1)^m f' \frac{{}^{C}\tau \cdot e_3}{(r \cdot e_3)^2}\, \dot{x}_i + (-1)^m f' \frac{\dot{r} \cdot e_3}{(r \cdot e_3)^2} \left( \mathbf{M}\mathbf{A}\mathbf{T}_{OI}^{-1}\mathbf{T}_{EI}\, {}^{E}\tau \right) \cdot e_i \\ &\quad + (-1)^{m-1} \frac{f'}{r \cdot e_3} \left( \mathbf{M}\dot{\mathbf{A}}\mathbf{T}_{OI}^{-1}\mathbf{T}_{EI} + \mathbf{M}\mathbf{A}\dot{\mathbf{T}}_{OI}^{-1}\mathbf{T}_{EI} + \mathbf{M}\mathbf{A}\mathbf{T}_{OI}^{-1}\dot{\mathbf{T}}_{EI} \right) {}^{E}\tau \cdot e_i. \end{aligned} \tag{24}$$
For the motionless scene on the earth's surface, ${}^{E}\tau$ is a time-independent but space-dependent unit tangent vector, which meanwhile represents a specific orientation on the ground. Moreover, the physical meaning of the function $\mathcal{F}_i(t, \tau)$ is the image deformation of a unit-length curve on the curved surface along the direction of ${}^{E}\tau$ in a unit time interval; that is, the instantaneous space-time deforming rate of the image of the object along ${}^{E}\tau$.

Consequently, in dynamic imaging, macroscopic deformation of the moving image can be derived from the integral of $\mathcal{F}_i(t, \tau)$ over space and time. Referring to Figure 1, let $\Gamma$ be an arbitrary curve of the extended object on the earth and $\Gamma'$ its image, and let $p, q \in \Gamma$ be two arbitrary points with Gaussian images $p', q' \in \Gamma'$. Let ${}^{E}\tau = \mathbf{T}(s)$ be a vector-valued function of the arc length $s$, which is time-invariant in frame $E$ and gives the tangent vectors along the curve.
So the image deformation taking place during $t_1 \sim t_2$ can be described as

$$\left[ (x'_p)_i \right]^{t_2}_{t_1} - \left[ (x'_q)_i \right]^{t_2}_{t_1} = \int_\Gamma \int_{t_1}^{t_2} \mathcal{F}_i \circ \mathbf{T}\, dt\, ds, \tag{25}$$

in which $\mathcal{F}_i \circ \mathbf{T} = \mathcal{F}_i[t, \mathbf{T}(s)]$.
Now, in terms of (24) and (25), we can see that the image deformation is also anisotropic and nonlinear, depending not only on the optical flow's evolution but also on the geometry of the scene.
3.2. Dense Image Registration through Optical Flow Prediction. As mentioned in the preceding sections, optical flow is the most precise model for describing image motion and time-varying deformation. Conversely, it is possible to invert optical flow with high accuracy if the image motion and deformation can be detected. As we know, the low-frequency signal components of angular velocity are easy to sense precisely with attitude sensors such as gyroscopes and star trackers, but the higher-frequency components are hard to measure with high accuracy. Yet it is precisely the perturbations from high-frequency jittering that are the critical cause of motion blurring and local image deformations, since the influences of the low-frequency components of attitude motion are easier to restrain during imaging by regulating the remote sensor.
Since (13) and (25) are very sensitive to the attitude motion, the angular velocity can be measured with high resolution as well as broad frequency bandwidth, so long as the image motion and deformation are determined with a certain precision. Fortunately, the lapped images of the overlapped detectors meet this need, because they are captured in turn as the same parts of the optical flow pass through the adjacent detectors sequentially. Without losing generality, we investigate the most common form of CCD layout, in which two rows of detectors are arranged in parallel. The time-phase relations of image formation due to optical flow evolution are illustrated in Figure 3, where the moving image elements $\alpha_1, \alpha_2, \ldots$ (in the left gap) and $\beta_1, \beta_2, \ldots$ (in the right gap) are first captured at the same time, since their optical flows pass through the prior detectors. However, because of nonuniform optical flows, they will not be captured simultaneously by the posterior detectors. Therefore, the geometrical structures of the photographs will be time-varying and nonlinear. It is evident from Figure 3 that the displacements and relative deformations in frame $C$ between the lapped images can be determined by measuring the offsets of the sample image element pairs in frame $P$.
Let $\Delta y' = \Delta x'_1$ and $\Delta x' = \Delta x'_2$ be the relative offsets of the same object's image on the two photos; they are all calibrated in $C$ or $F$. We will measure them by image registration.
As far as image registration methods are concerned, one of the hardest problems is complex deformation, which tends to weaken the similarity between the referenced images and the sensed images, so that it might introduce large deviations from the true values or even lead to algorithm failure. Some typical methods have been studied in [23–25]. Generally, most of them concentrate on several simple deforming forms, such as affine, shear, translation, rotation, or their combinations, instead of investigating more sophisticated dynamic deforming models. In [26–30], some effective approaches have been proposed to increase the accuracy and robustness of algorithms according to reasonable models of the specific properties of the objective images.
For conventional template-based registration methods, once a template has been extracted from the referenced image, the information about gray values, shape, and frequency spectrum does not increase, since no additional physical information resources are offered. But actually such information has changed by the time the optical flows arrive at the posterior detectors; therefore, the cross-correlations between the templates and the sensed images certainly decrease. So, in order to detect the minor image motions and complex deformations between the lapped images, high-accuracy registration is indispensable, which means that a more precise model should be implemented. We treat this with a technique called template reconfiguration. In summary, the method is built on the idea of keeping the information about the optical flows complete.
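As a baseline for comparison, plain template matching by normalized cross-correlation with a parabolic peak refinement (cf. [32, 33]) can be sketched as follows; this is our illustrative code, not the template-reconfiguration algorithm itself:

```python
import numpy as np

def ncc_map(template, image):
    """Normalized cross-correlation of a template over an image (valid region)."""
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.sqrt((t * t).sum())
    out = np.zeros((image.shape[0] - th + 1, image.shape[1] - tw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            w = image[i:i + th, j:j + tw]
            wz = w - w.mean()
            denom = np.sqrt((wz * wz).sum()) * t_norm
            out[i, j] = (wz * t).sum() / denom if denom > 0 else 0.0
    return out

def subpixel_peak(c):
    """Integer argmax of a correlation surface plus a 1D parabolic fit per axis."""
    i, j = np.unravel_index(np.argmax(c), c.shape)
    di = dj = 0.0
    if 0 < i < c.shape[0] - 1:
        denom = c[i - 1, j] - 2 * c[i, j] + c[i + 1, j]
        if denom != 0:
            di = 0.5 * (c[i - 1, j] - c[i + 1, j]) / denom
    if 0 < j < c.shape[1] - 1:
        denom = c[i, j - 1] - 2 * c[i, j] + c[i, j + 1]
        if denom != 0:
            dj = 0.5 * (c[i, j - 1] - c[i, j + 1]) / denom
    return i + di, j + dj
```

Reconfiguring the template before this matching step, as described above, raises the correlation peak and thus the reliability of the subpixel offset.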
Mathematical Problems in Engineering 7
Figure 3: Nonlinear image velocity field and optical flow trajectories influence the time-phase relations between the lapped images captured by the adjacent overlapped detectors (prior and posterior CCDs).
In operating, as indicated in Figure 3, take the lapped images captured by the detectors in the prior array as the referenced images and the images captured by the posterior detectors as the sensed images. Firstly, we rebuild the optical flows based on the rough measurements of the space-borne sensors and then reconfigure the original templates to construct new templates whose morphologies are closer to the corresponding parts of the sensed images. With this process, the information about the imaging procedures can be added into the new templates so as to increase their degree of similarity to the sensed images. The method may dramatically raise the accuracy of dense registration, such that the high-accuracy offsets between the lapped image pairs can be determined.
In the experiment we examined Mapping Satellite-1, a Chinese surveying satellite operating in a 500 km sun-synchronous orbit, which is used for high-accuracy photogrammetry [31]; its structure is shown in Figure 4. One of the effective payloads, the three-line-array panchromatic CCD camera, has good geometrical accuracy: its ground pixel resolution is better than 5 m, its spectral range is 0.51 μm ∼ 0.69 μm, and its swath is 60 km. Another payload, the high resolution camera, is designed with a Cook-TMA optical system, which gives a wide field of view [16, 17]; its panchromatic spatial resolution reaches 2 m.
In engineering, for the purpose of improving the image quality and surveying precision, high-accuracy measurements of jitter and attitude motion are essential for posterior processing. Thus, here we investigate the images and the auxiliary data of the large-FOV high resolution camera to deal with the problem. The experimental photographs were captured with 10° side looking. The focal plane of the camera
Figure 4: The structure of Mapping Satellite-1 and its effective payloads (high resolution panchromatic camera and optical axis).
consists of 8 panchromatic TDI CCD detectors, and there are η = 96 physically lapped pixels between each pair of adjacent detectors.
The scheme of the processing in registering one image element χ is illustrated in Figure 5.
Step 1. Set the original lapped image strips (the images which were acquired directly by the detectors, without any postprocessing) in frame C.
Step 2. Compute the deformations of all image elements on the referenced template with respect to their optical flow trajectories.
We extract the original template from the referenced image, denoted as T_1, which consists of N² square elements; that is, dim(T_1) = N × N. Let χ be its central element and w the width of each element; here w = 8.75 μm. Before the moving image is captured by the posterior detector, in terms of (25), the current shapes and energy distributions of the elements can be predicted by the optical flow based on the auxiliary data of the remote sensor.
In order to simplify the algorithm, a first-order approximation is allowed without introducing significant errors. This approximation means that the shape of every image element is always quadrilateral. Linear interpolations are carried out to determine the four sides according to the deformations along the radial directions of the vertexes, as shown in Figure 5. The unit radial vectors in frame C are denoted by τ′_1 ∼ τ′_4:
\[
\tau'_1 = \frac{\sqrt{2}}{2}\mathbf{e}_1 - \frac{\sqrt{2}}{2}\mathbf{e}_2, \qquad
\tau'_2 = \frac{\sqrt{2}}{2}\mathbf{e}_1 + \frac{\sqrt{2}}{2}\mathbf{e}_2,
\]
\[
\tau'_3 = -\frac{\sqrt{2}}{2}\mathbf{e}_1 + \frac{\sqrt{2}}{2}\mathbf{e}_2, \qquad
\tau'_4 = -\frac{\sqrt{2}}{2}\mathbf{e}_1 - \frac{\sqrt{2}}{2}\mathbf{e}_2.
\tag{26}
\]
Suppose image point p′ is the center of an arbitrary element Σ′ in T_1. Let Σ be the area element on the earth surface which is conjugate to Σ′. The four unit radial vectors of the vertexes
Figure 5: Optical flow prediction and template reconfiguration (templates T_0, T_1, T′_1, T_2, and T_s; referenced image of prior CCD, sensed image of posterior CCD).
on Σ, τ_1 ∼ τ_4, are conjugate to τ′_1 ∼ τ′_4 and tangent to the earth surface at p. From the geometrical relations we have
\[
{}^{C}\tau_i = (-1)^{m}\,\frac{\mathbf{r}' \times \tau'_i \times {}^{C}\mathbf{n}_p}{\left|\mathbf{r}' \times \tau'_i \times {}^{C}\mathbf{n}_p\right|},
\]
\[
{}^{E}\tau_i = \mathbf{T}_{EI}^{-1}\mathbf{T}_{OI}\mathbf{A}^{-1}\mathbf{M}^{-1}\,{}^{C}\tau_i, \qquad
{}^{C}\mathbf{n}_p = \mathbf{M}\mathbf{A}\mathbf{T}_{OI}^{-1}\mathbf{T}_{EI}\,{}^{E}\mathbf{n}_p,
\tag{27}
\]
where {}^{E}n_p is the unit normal vector of Σ at p. We predict the deformations along τ_1 ∼ τ_4 during t_1 ∼ t_2 according to the measurements of the GPS, star trackers, and gyroscopes, as explained in Figure 6; t_1 is the imaging time on the prior detector and t_2 is the imaging time on the posterior detector:
\[
\left[\delta x'_1\right]^{\Delta t}_{\tau_k} = \left[\delta x'_1\right]^{t_2}_{\tau_k} - \left[\delta x'_1\right]^{t_1}_{\tau_k}, \qquad
\left[\delta x'_2\right]^{\Delta t}_{\tau_k} = \left[\delta x'_2\right]^{t_2}_{\tau_k} - \left[\delta x'_2\right]^{t_1}_{\tau_k}
\quad (k = 1 \sim 4).
\tag{28}
\]
The shape of the deformed image Σ′_{t_2} can be obtained through linear interpolation with
\[
\left[\delta \mathbf{r}'\right]^{\Delta t}_{\tau_k} = \left(\left[\delta x'_1\right]^{\Delta t}_{\tau_k},\ \left[\delta x'_2\right]^{\Delta t}_{\tau_k}\right).
\tag{29}
\]
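The first-order vertex update implied by (26)–(29) can be sketched numerically: each corner of a square element is displaced along its radial unit vector by the predicted scalar increment. A minimal sketch; the function name and the scalar-displacement signature are our illustrative assumptions, not the authors' code:

```python
import numpy as np

# First-order deformation of one square image element of side w: each of the
# four vertices moves along its radial direction tau'_1..tau'_4 of (26) by the
# predicted optical-flow increment [delta r']^{Delta t}_{tau_k} of (28)-(29).

def deform_element(center, w, radial_disp):
    """center: (x, y); w: element width; radial_disp: four scalars, the
    predicted displacement magnitudes along tau'_1..tau'_4 (assumed inputs)."""
    tau = np.array([[ 1, -1],    # tau'_1 = ( sqrt2/2, -sqrt2/2)
                    [ 1,  1],    # tau'_2
                    [-1,  1],    # tau'_3
                    [-1, -1]]) / np.sqrt(2)   # tau'_4
    # undeformed corners sit at half-diagonal distance w/sqrt(2) from center
    verts = np.asarray(center) + (w / np.sqrt(2)) * tau
    return verts + tau * np.asarray(radial_disp)[:, None]
```

With zero radial displacements this returns the undeformed square; equal positive displacements give a uniform dilation, and unequal ones give the general quadrilateral assumed by the first-order model.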
Step 3. Reconfigure the referenced template T_1 according to the optical flow prediction and then get a new template T_2.

Let T′_1 be the deformed image of T_1 computed in Step 2. Let χ = B_{ij} be the central element of T′_1; integers i and j are, respectively, the row number and column number of B_{ij}. The gray value l_{ij} of each element in T′_1 is equal to its counterpart in T_1 with the same indexes. In addition, we initialize a null template T_0 whose shape and orientation are identical to T_1; the central element of T_0 is denoted by T_{ij}.
Figure 6: Deformation of a single element.
Then we cover T_0 upon T′_1 and let their centers coincide, that is, T_{ij} = B_{ij}, as shown in Figure 7. Denote the vertexes of T′_1 as V^k_{ij} (k = 1 ∼ 4). Therefore the connective relation for adjacent elements can be expressed by V^1_{ij} = V^2_{i,j−1} = V^3_{i−1,j−1} = V^4_{i−1,j}.
Next we reassign the gray value h′_{ij} to T_{ij} (i = 1 ⋯ N, j = 1 ⋯ N) in sequence to construct a new template T_2. The process is just a simulation of the image resampling that occurs when the optical flow arrives at the posterior detector, as indicated in Figure 3. That is,
\[
h'_{ij} = \sum_{m=i-1}^{i+1} \sum_{n=j-1}^{j+1} \eta_{mn}\, l_{mn},
\tag{30}
\]
where the weight coefficient η_{mn} = S_{mn}/w², and S_{mn} is the area of the intersecting polygon of B_{mn} with T_{ij}.
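The area weights of (30) can be computed by clipping each deformed quadrilateral B_mn against the square cell T_ij and taking the shoelace area of the intersection. A pure-Python sketch under the stated first-order (quadrilateral) assumption; the helper names are ours:

```python
# Sketch of the resampling step (30): the gray value of the null-template cell
# T_ij is the area-weighted sum of the 3x3 neighboring deformed cells B_mn,
# with weights eta_mn = S_mn / w^2, S_mn = area(B_mn ∩ T_ij).

def clip(poly, axis, value, keep_less):
    """Sutherland-Hodgman: clip a convex polygon against one AABB side."""
    out = []
    n = len(poly)
    for i in range(n):
        a, b = poly[i], poly[(i + 1) % n]
        ina = (a[axis] <= value) if keep_less else (a[axis] >= value)
        inb = (b[axis] <= value) if keep_less else (b[axis] >= value)
        if ina:
            out.append(a)
        if ina != inb:  # edge crosses the clip line: add the intersection
            t = (value - a[axis]) / (b[axis] - a[axis])
            out.append((a[0] + t * (b[0] - a[0]), a[1] + t * (b[1] - a[1])))
    return out

def area(poly):
    """Shoelace formula."""
    s = 0.0
    for i in range(len(poly)):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % len(poly)]
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

def resample_cell(cell_box, neighbors, w):
    """cell_box = (xmin, ymin, xmax, ymax) of T_ij; neighbors = list of
    (quad_vertices, gray) pairs for the deformed cells B_mn; w = cell width."""
    h = 0.0
    for quad, gray in neighbors:
        p = list(quad)
        for axis, value, keep_less in ((0, cell_box[0], False), (0, cell_box[2], True),
                                       (1, cell_box[1], False), (1, cell_box[3], True)):
            if not p:
                break
            p = clip(p, axis, value, keep_less)
        h += (area(p) / w ** 2) * gray if p else 0.0
    return h
```

For an undeformed cell exactly covering T_ij the weight is 1 and the gray value passes through unchanged; a half-overlapping cell contributes half its gray value.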
Figure 7: Template reconfiguration (null template T_0 covering the deformed image T′_1; cells B_{mn} with vertexes V^k_{ij}).
Step 4. Compute the normalized cross-correlation coefficients between T_2 and the sensed image, and then determine the subpixel offset of T_2 relative to the sensed image in frame P.

Firstly, for this method the search space on the sensed image can be contracted considerably, since the optical flow trajectories of the referenced elements have been predicted in Step 2. Assume that the search space is T_s, with dim(T_s) = M × M. When T_2 moves to the pixel (n_1, n_2) on T_s, the normalized cross-correlation (NCC) coefficient is given by
\[
\gamma\left(n_1, n_2\right) = \frac{\sum_{x,y}\left[g(x,y) - \bar{g}_{x,y}\right]\left[h\left(x-n_1, y-n_2\right) - \bar{h}\right]}
{\left\{\sum_{x,y}\left[g(x,y) - \bar{g}_{x,y}\right]^{2}\ \sum_{x,y}\left[h\left(x-n_1, y-n_2\right) - \bar{h}\right]^{2}\right\}^{0.5}},
\tag{31}
\]
where ḡ_{x,y} is the mean gray value of the segment of T_s that is masked by T_2 and h̄ is the mean of T_2. Equation (31) requires approximately N²(M − N + 1)² additions and N²(M − N + 1)² multiplications, whereas an FFT-based algorithm needs about 12M² log₂ M real multiplications and 18M² log₂ M real additions/subtractions [32, 33].
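A direct spatial-domain evaluation of (31) can be sketched as follows (illustrative only; g is the M × M search window, h the N × N template, both assumed square, with the local mean ḡ recomputed under each mask placement):

```python
import numpy as np

# Direct spatial-domain NCC map per (31); O(N^2 (M-N+1)^2) work, as the
# complexity count in the text states.

def ncc_map(g, h):
    M, N = g.shape[0], h.shape[0]
    hz = h - h.mean()                      # h - h_bar
    denom_h = np.sqrt((hz ** 2).sum())
    out = np.empty((M - N + 1, M - N + 1))
    for n1 in range(M - N + 1):
        for n2 in range(M - N + 1):
            seg = g[n1:n1 + N, n2:n2 + N]
            sz = seg - seg.mean()          # g - g_bar under the current mask
            out[n1, n2] = (sz * hz).sum() / (np.sqrt((sz ** 2).sum()) * denom_h)
    return out
```

When the template is an exact cutout of the window, the map peaks at exactly 1 at the cutout location, which is the γ_max sought in Step 4.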
At the beginning we take M = 101, N = 7 and compute the NCC coefficients; when M is much larger than N, the calculation in the spatial domain is efficient. Suppose that the peak value γ_max is taken at the coordinate (k, m), k, m ∈ ℤ, in the sensed window. We then reduce the search space to a smaller one with dimension 47 × 47, centered on T_s(k, m). Next, the subpixel registration is realized by a phase correlation algorithm with larger M and N to suppress the system errors owing to the deficiency of detailed textures on the photo; here we take M = 47, N = 23. Let the subpixel offset between the two registering image elements be denoted as δ_x and δ_y in frame P.
The phase correlation algorithm in the frequency domain becomes more efficient as N approaches M and both have larger scales [28]. Moreover, the Fourier coefficients are normalized to unit magnitude prior to computing the correlation, so that the correlation is based only on phase information and is insensitive to changes in image intensity [27, 29].
Let G(u, v) be the 2D Discrete Fourier Transform (DFT) of the sensed window; then we have
\[
G(u, v) = \sum_{x=-(N-1)/2}^{(N-1)/2}\ \sum_{y=-(N-1)/2}^{(N-1)/2} g(x, y)\, W_M^{ux} W_M^{vy},
\]
\[
H(u, v) = \sum_{x=-(N-1)/2}^{(N-1)/2}\ \sum_{y=-(N-1)/2}^{(N-1)/2} h(x, y)\, W_N^{ux} W_N^{vy}.
\tag{32}
\]
Here
\[
W_N = \exp\left(-j\frac{2\pi}{N}\right).
\tag{33}
\]
The cross-phase spectrum is given by
\[
R(u, v) = \frac{G(u, v)\, H^{*}(u, v)}{\left|G(u, v)\, H^{*}(u, v)\right|} = \exp\left(j\phi(u, v)\right),
\tag{34}
\]
where H* is the complex conjugate of H. By the inverse Discrete Fourier Transform (IDFT) we have
\[
\gamma\left(n_1, n_2\right) = \frac{1}{N^2} \sum_{u=-(N-1)/2}^{(N-1)/2}\ \sum_{v=-(N-1)/2}^{(N-1)/2} R(u, v)\, W_N^{-u n_1} W_N^{-v n_2}.
\tag{35}
\]
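Equations (32)–(35) amount to the standard phase-correlation recipe: normalize the cross spectrum to unit magnitude so the correlation depends on phase only. With numpy's FFT and equal window sizes (M = N, as in the validation phase) a sketch is:

```python
import numpy as np

# Phase correlation per (32)-(35). For an integer circular shift the inverse
# transform of the unit-magnitude cross spectrum is a delta at the shift.

def phase_correlation(g, h):
    G = np.fft.fft2(g)
    H = np.fft.fft2(h)
    R = G * np.conj(H)
    R /= np.maximum(np.abs(R), 1e-12)    # normalize to unit magnitude (34)
    gamma = np.real(np.fft.ifft2(R))     # correlation surface (35)
    peak = np.unravel_index(np.argmax(gamma), gamma.shape)
    return gamma, peak
```

For a subpixel shift the delta spreads into the Dirichlet-kernel shape of (36) below, from which δ_x and δ_y are then estimated.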
Figure 8: Dense image registration for lapped image strips: CCD1 versus CCD2 (Gap 1, the left two) and CCD3 versus CCD4 (Gap 3, the right two).
Suppose that the new peak γ_max appears at (k′, m′), k′, m′ ∈ ℤ. Referring to [27], we have the following relation:
\[
\gamma_{\max}\left(k', m'\right) \approx \frac{\lambda}{N^2}\,
\frac{\sin\left[\pi\left(k' + \delta_x\right)\right]\ \sin\left[\pi\left(m' + \delta_y\right)\right]}
{\sin\left[(\pi/N)\left(k' + \delta_x\right)\right]\ \sin\left[(\pi/N)\left(m' + \delta_y\right)\right]}.
\tag{36}
\]
The right side presents the spatial distribution of the normalized cross-correlation coefficients; therefore (δ_x, δ_y) can be measured on that basis. In practice the constant λ ≤ 1; it tends to decrease when small noise exists and equals unity in the ideal case.
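Given samples of γ at integer lags around the peak, the subpixel offset can be recovered by fitting the one-parameter Dirichlet-kernel model of (36). A brute-force grid fit with a free amplitude (absorbing λ) is the simplest robust sketch; this estimator choice is ours, not necessarily the authors':

```python
import numpy as np

# One-dimensional fit of the Dirichlet-kernel profile of (36).

def dirichlet(k, delta, N):
    a = np.pi * (k + delta)
    num, den = np.sin(a), np.sin(a / N)
    with np.errstate(divide="ignore", invalid="ignore"):
        # L'Hopital limit where the denominator vanishes (value N at the peak)
        d = np.where(np.abs(den) < 1e-12, N * np.cos(a) / np.cos(a / N), num / den)
    return d / N

def fit_delta(samples, ks, N, grid=np.linspace(-0.5, 0.5, 2001)):
    """samples: gamma at integer lags ks around the peak; returns delta-hat."""
    best, best_err = 0.0, np.inf
    for delta in grid:
        model = dirichlet(ks, delta, N)
        scale = samples @ model / (model @ model)   # free amplitude (lambda)
        err = np.sum((samples - scale * model) ** 2)
        if err < best_err:
            best, best_err = delta, err
    return best
```

Closed-form two-sample ratio estimators for this profile also exist and are cheaper, but the grid fit makes the connection to (36) explicit.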
Step 5. Execute dense registration for the lapped image strips.

Repeating Steps 1∼4, we register the along-track sample images selected from the referenced images to the sensed image. The maximal sample rate can reach up to line-by-line. The continuous procedure is shown in Figure 8, in which the image pairs are marked. The curves of the relative offsets in P are shown in Figures 9 and 10.
Let col_r, row_r be the column and row indexes of image elements on the referenced image, and let col_s, row_s be the indexes of the same elements on the sensed image. The total number of columns of each detector is Q = 4096 pixels, and the vertical distance between the two detector arrays is D = 18.4975 mm. According to the results of registration, we get the offsets
Figure 9: The offsets of lapped images captured by CCD1 and CCD2 (cross-track and along-track offsets in pixels versus image rows; samples S11 and S12 are marked).
Figure 10: The offsets of lapped images captured by CCD3 and CCD4 (cross-track and along-track offsets in pixels versus image rows; samples S31 and S32 are marked).
of the images at the nth gap: δ^n_x (cross track) and δ^n_y (along track) in frame P, and Δx′_n, Δy′_n (mm) in frame F:
\[
\delta^{n}_{x} = \mathrm{col}_r + \mathrm{col}_s - Q - \eta_n, \qquad
\Delta x'_n = \Delta\left(x'_2\right)_n = \delta^{n}_{x} \cdot w,
\]
\[
\delta^{n}_{y} = \mathrm{row}_s - \mathrm{row}_r - \frac{D}{w}, \qquad
\Delta y'_n = \Delta\left(x'_1\right)_n = \delta^{n}_{y} \cdot w + D.
\tag{37}
\]
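Plugging registration indexes into (37) reproduces the Table 1 values below; a small sketch with the constants from the text (w = 8.75 μm, Q = 4096, D = 18.4975 mm, η = 96). The sample index values in the usage are illustrative:

```python
# Conversion (37) from registration indexes to physical offsets.
w = 8.75e-3          # element width, mm
Q = 4096             # columns per detector, pixels
D = 18.4975          # vertical distance between the detector arrays, mm

def offsets(col_r, col_s, row_r, row_s, eta=96):
    """col_s/row_s may be fractional (subpixel registration results)."""
    dn_x = col_r + col_s - Q - eta        # cross-track offset, pixels
    dn_y = row_s - row_r - D / w          # along-track offset, pixels
    return dn_x * w, dn_y * w + D         # (Delta x'_n, Delta y'_n), mm
```

For example, a cross-track offset of −25.15 pixels maps to Δx′ = −0.2200625 mm, and an along-track offset of −5.39 pixels maps to Δy′ ≈ 18.4503 mm, matching sample S11 in Table 1.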
Four pixels, S11, S12, S31, and S32, are examined; their data are listed in Table 1.

S11 and S12 are images of objects captured in order by CCD1 and CCD2 (Gap 1); S31 and S32 were captured, respectively, by CCD3 and CCD4 (Gap 3). Referring to the auxiliary data, S11 and S31 were captured at the same time, whereas S12 and S32 were captured at different times, which means that the along-track speeds of the two moving images were quite different. Moreover, the cross-track image offsets in Gap 1 and Gap 3 differ greatly, which shows that the optical flows were also distorted unevenly and deflected away from the along-track direction. On the other
Table 1: The offsets between overlapped images.

Sample | Row no. (pixel) | δ^n_x (pixel) | Δx′_n (mm)  | δ^n_y (pixel) | Δy′_n (mm)
S11    | 258             | −25.15        | −0.2200625  | −5.39         | 18.4503
S12    | 423             | −23.78        | −0.2080750  | −7.36         | 18.4331
S31    | 266             | −12.85        | −0.1124375  | −7.66         | 18.4304
S32    | 436             | −12.97        | −0.1134875  | −6.87         | 18.4374
hand, it can be seen in Figures 9 and 10 that the fluctuation of the image offsets in Gap 1 is greater in magnitude than that in Gap 3. All these facts indicate that the distorted optical flows can be detected from a large number of image offsets. We will see later that the nonlinear distribution of the data strengthens the well-posedness of the optical flow inversion algorithm.
4. Remote Sensor Attitude Motion Measurement
In this section the attitude velocity of the remote sensor is resolved by using the optical flow inversion method. The results of dense registration are applied to provide the fixed-solution conditions for the optical flow equations.
4.1. The Principle of Optical Flow Inversion. For clarity, in frame C the two coordinate components of the image displacement of the kth sample element belonging to the nth lapped strip pair are written as Δx′_{n,k}, Δy′_{n,k}. From (13) and (25) it is easy to show that the contributions to the optical flow owing to the orbital motion and the earth's inertial movement vary very slightly in the short term, such that the corresponding displacements can be regarded as piecewise constants s_x, s_y.
Let τ_{ij}, t_{ij} be, in order, the two sequential imaging times of the jth image sample on the overlapped detectors of the ith gap. They are usually recorded in the auxiliary data of the remote sensor. Hence, for every image element, the number of discrete statuses in optical flow tracing is
\[
N_{ij} = \left[\frac{t_{ij} - \tau_{ij}}{\Delta t}\right] \in \mathbb{Z}^{+} \quad (i = 1 \cdots n,\ j = 1 \cdots m),
\tag{38}
\]
where n is the number of CCD gaps, m is the number of sample groups, and Δt is the time step. We set samples with the same j index into the same group, in which the samples are captured by the prior detectors simultaneously.
We expand (11), substitute it into (14) and (13), and then arrange the scalar optical flow inversion equations in terms of the three axial angular velocity components ω_1, ω_2, and ω_3 (the variables of the inverse problem), yielding the linear optical flow equations.
Figure 11: Coefficient determination according to the current location of the image (locus of optical flow across the CCDs; c^i_{μκ} = const within each segment).
For the lth group of samples:
\[
\sum_{i=l}^{N_{1l}} c^{i}_{1l1}\omega^{i}_{1} + c^{i}_{1l2}\omega^{i}_{2} + c^{i}_{1l3}\omega^{i}_{3} = \Delta x'_{1l} - s_{x1},
\]
\[
\sum_{i=l}^{N_{1l}} d^{i}_{1l1}\omega^{i}_{1} + d^{i}_{1l2}\omega^{i}_{2} + d^{i}_{1l3}\omega^{i}_{3} = \Delta y'_{1l} - s_{y1},
\]
\[
\vdots
\]
\[
\sum_{i=l}^{N_{nl}} c^{i}_{nl1}\omega^{i}_{1} + c^{i}_{nl2}\omega^{i}_{2} + c^{i}_{nl3}\omega^{i}_{3} = \Delta x'_{nl} - s_{xn},
\]
\[
\sum_{i=l}^{N_{nl}} d^{i}_{nl1}\omega^{i}_{1} + d^{i}_{nl2}\omega^{i}_{2} + d^{i}_{nl3}\omega^{i}_{3} = \Delta y'_{nl} - s_{yn}.
\tag{39}
\]
Suppose that the sampling process stops after m groups have been formed. The coefficients are as follows:
\[
c^{i}_{\mu\nu\kappa} = \Xi_{\kappa}\left(\mu,\ \left\lceil \frac{i - \nu + 1}{N_{\mu\nu}}\,\mathcal{N} \right\rceil\right), \qquad
d^{i}_{\mu\nu\kappa} = \Lambda_{\kappa}\left(\mu,\ \left\lceil \frac{i - \nu + 1}{N_{\mu\nu}}\,\mathcal{N} \right\rceil\right) \quad (\kappa = 1, 2, 3).
\tag{40}
\]
Here
\[
\Xi_{\kappa} = \begin{pmatrix}
\xi_{11\kappa} & \xi_{12\kappa} & \cdots & \xi_{1\mathcal{N}\kappa} \\
\xi_{21\kappa} & \xi_{22\kappa} & \cdots & \xi_{2\mathcal{N}\kappa} \\
\vdots & & & \vdots \\
\xi_{n1\kappa} & \xi_{n2\kappa} & \cdots & \xi_{n\mathcal{N}\kappa}
\end{pmatrix}, \qquad
\Lambda_{\kappa} = \begin{pmatrix}
\lambda_{11\kappa} & \lambda_{12\kappa} & \cdots & \lambda_{1\mathcal{N}\kappa} \\
\lambda_{21\kappa} & \lambda_{22\kappa} & \cdots & \lambda_{2\mathcal{N}\kappa} \\
\vdots & & & \vdots \\
\lambda_{n1\kappa} & \lambda_{n2\kappa} & \cdots & \lambda_{n\mathcal{N}\kappa}
\end{pmatrix}.
\tag{41}
\]
As for the algorithm, to reduce the complexity, all possible values of the coefficients are stored in the matrices Ξ_κ and Λ_κ. The accuracy is guaranteed because the coefficients for images moving into the same piece of region are almost equal to an identical constant over a short period, as explained in Figure 11. It has been mentioned that the optical flow is not sensitive to the satellite's orbit motion and earth rotation in the short term; namely, the possible values are assigned by the following functions:
\[
\xi_{ij\kappa} = \xi_{\kappa}\left(a, e, i_0, \Omega, \omega, x'_q, y'_q, \Delta t\right), \qquad
\lambda_{ij\kappa} = \lambda_{\kappa}\left(a, e, i_0, \Omega, \omega, x'_q, y'_q, \Delta t\right)
\]
\[
\left(i = 1 \sim n,\ j = 1 \sim \mathcal{N},\ q = 1 \sim \mathcal{N}\right).
\tag{42}
\]
Here 𝒩 is the number of constant-valued segments in the region encompassing all the possible optical flow trajectories. The orbital elements and the integral step size Δt are common to all functions. Furthermore, when long-term measurements are executed, Ξ_κ and Λ_κ only need to be renewed according to the current parameters.

The coefficient matrix of the optical flow equations for the jth (1 ≤ j ≤ m) group can be written as
\[
\mathbf{C}_j =
\begin{pmatrix}
c^{1}_{1j1} & c^{1}_{1j2} & c^{1}_{1j3} & \cdots & c^{N_{1j}}_{1j1} & c^{N_{1j}}_{1j2} & c^{N_{1j}}_{1j3} & \cdots & 0 & 0 \\
d^{1}_{1j1} & d^{1}_{1j2} & d^{1}_{1j3} & \cdots & d^{N_{1j}}_{1j1} & d^{N_{1j}}_{1j2} & d^{N_{1j}}_{1j3} & \cdots & 0 & 0 \\
\vdots & & & & & & & & & \vdots \\
c^{1}_{qj1} & c^{1}_{qj2} & c^{1}_{qj3} & \cdots & \cdots & \cdots & c^{N_{qj}}_{qj1} & c^{N_{qj}}_{qj2} & c^{N_{qj}}_{qj3} & \\
d^{1}_{qj1} & d^{1}_{qj2} & d^{1}_{qj3} & \cdots & \cdots & \cdots & d^{N_{qj}}_{qj1} & d^{N_{qj}}_{qj2} & d^{N_{qj}}_{qj3} & \\
\vdots & & & & & & & & & \vdots \\
c^{1}_{nj1} & c^{1}_{nj2} & c^{1}_{nj3} & \cdots & c^{N_{nj}}_{nj1} & c^{N_{nj}}_{nj2} & c^{N_{nj}}_{nj3} & \cdots & 0 & \\
d^{1}_{nj1} & d^{1}_{nj2} & d^{1}_{nj3} & \cdots & d^{N_{nj}}_{nj1} & d^{N_{nj}}_{nj2} & d^{N_{nj}}_{nj3} & \cdots & 0 &
\end{pmatrix}_{2n \times 3N_{qj}},
\tag{43}
\]
where N_{qj} = max{N_{1j}, …, N_{nj}}.
Consequently, as we organize the equations for all groups, the global coefficient matrix is given in the following form:
\[
\mathbf{C} =
\begin{pmatrix}
\left[\mathbf{C}_1\right]_{2n \times 3N_{q1}} & 0 & \cdots & 0 \\
0 & \left[\mathbf{C}_2\right]_{2n \times 3N_{q2}} & \cdots & 0 \\
\vdots & & \ddots & \vdots \\
0 & \cdots & 0 & \left[\mathbf{C}_m\right]_{2n \times 3N_{qm}}
\end{pmatrix}_{2nm \times 3N_{\max}}.
\tag{44}
\]
C is a quasidiagonal partitioned matrix; every subblock has 2n rows. The maximal number of columns of C is 3N_max, with N_max = max{N_{q1}, …, N_{qm}}.
The unknown variables are
\[
\left[\boldsymbol{\Omega}\right]_{3N_{\max}\times 1} = \left[\omega^{1}_{1}\ \omega^{1}_{2}\ \omega^{1}_{3}\ \cdots\ \omega^{N_{\max}}_{1}\ \omega^{N_{\max}}_{2}\ \omega^{N_{\max}}_{3}\right]^{T},
\tag{45}
\]
and the constants are
\[
\Delta\mathbf{u}_{2mn\times 1} = \left[\Delta x'_{11}\ \Delta y'_{11}\ \cdots\ \Delta x'_{n1}\ \Delta y'_{n1}\ \cdots\ \Delta x'_{1m}\ \Delta y'_{1m}\ \cdots\ \Delta x'_{nm}\ \Delta y'_{nm}\right]^{T},
\]
\[
\mathbf{s}_{2mn\times 1} = \left[s_{x1}\ s_{y1}\ \cdots\ s_{xn}\ s_{yn}\ \cdots\ s_{x1}\ s_{y1}\ \cdots\ s_{xn}\ s_{yn}\right]^{T}.
\tag{46}
\]
Figure 12: The flow chart of the attitude motion measurement (preliminary information acquisition: template selection, optical flow prediction, template reconfiguration, and NCC registration; followed by inverse problem solving for the angular velocity ω).
Δu has been measured by image dense registration; s can be determined by the auxiliary data of the sensors. The global equations are expressed by
\[
\mathbf{C}_{2mn\times 3N_{\max}} \cdot \left[\boldsymbol{\Omega}\right]_{3N_{\max}\times 1} = \Delta\mathbf{u}_{2mn\times 1} - \mathbf{s}_{2mn\times 1}.
\tag{47}
\]
For this problem it is easy to verify that the conditions (1) 2nm > 3N_max and (2) rank(C) = 3N_max are easily met in practical work. To solve (47), well-posedness is the critical issue of the inverse problem. The strong nonlinearity and anisotropy of the optical flow greatly reduce the correlation between the coefficients in C and thereby increase the well-posedness of the solution. The least-squares solution of (47) is
\[
\left[\boldsymbol{\Omega}\right] = \left(\mathbf{C}^{T}\mathbf{C}\right)^{-1}\mathbf{C}^{T}\left(\Delta\mathbf{u} - \mathbf{s}\right).
\tag{48}
\]
The well-posedness can be examined by a Singular Value Decomposition (SVD) of C. Consider the nonnegative definite matrix CᵀC, whose eigenvalues are given in order λ_1 ≥ λ_2 ≥ ⋯ ≥ λ_{3N_max}:
\[
\mathbf{C} = \mathbf{U}\left[\boldsymbol{\sigma}\right]\mathbf{V}^{T},
\tag{49}
\]
where U_{2mn×2mn} and V_{3N_max×3N_max} are unit orthogonal matrices and the singular values are σ_i = √λ_i. The well-posedness of the solution is acceptable if the condition number κ(C) = σ_1/σ_{3N_max} ≤ tol.
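With numpy, the least-squares solve (48) and the SVD screening (49) can be sketched on a synthetic system; the sizes below are illustrative only, and a real C would come from (43)–(44):

```python
import numpy as np

# Synthetic overdetermined system standing in for (47): 2nm rows, 3Nmax
# columns, with 2nm > 3Nmax as the text requires.
rng = np.random.default_rng(0)
n_rows, n_cols = 40, 9
C = rng.standard_normal((n_rows, n_cols))
omega_true = rng.standard_normal(n_cols)
rhs = C @ omega_true                     # Delta u - s, noise-free here

# SVD screening (49): condition number kappa = sigma_1 / sigma_{3Nmax}
sigma = np.linalg.svd(C, compute_uv=False)
kappa = sigma[0] / sigma[-1]

# Least-squares solution (48); lstsq is numerically preferable to forming
# (C^T C)^{-1} C^T explicitly, though both give the same minimizer here.
omega, *_ = np.linalg.lstsq(C, rhs, rcond=None)
```

In the noise-free, full-rank case the recovered ω equals ω_true; in practice one would reject solutions whose κ(C) exceeds the chosen tolerance.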
Associating the process of inverse problem solving in Section 4 with the process of preliminary information acquisition in Section 3, the whole algorithm for the remote sensor's attitude measurement is illustrated in the flow chart in Figure 12.
4.2. Experimental Results and Validation. In the experiment, 72940 samples on 7 image strip pairs were involved. To keep the values in Ξ and Λ nearly invariant, we redistributed these samples into 20 subspaces and solved for the three axial components of the angular velocity. According to Shannon's sampling theorem, the measurable frequency f_c is expected to reach up to half of the line rate of the TDI CCD; for the experiment f_c ≈ 1.749 kHz. The ω_i ∼ t curves over 0 s ∼ 0.148 s are shown in Figure 13.

In this period, ω_{2,max} = 0.001104°/s and ω_{1,max} = 0.001194°/s. The signal of ω_3(t) fluctuates around the mean value ω̄_3 = 0.01752°/s. It is not hard to infer that high frequency jitters
Figure 13: Solutions for the angular velocities of the remote sensor (ω_1, ω_2, and ω_3 in °/s versus imaging time in s).
were perturbing the remote sensor. Besides, compared to the signals of ω_1(t) and ω_2(t), the low frequency components in ω_3(t) are higher in magnitude. Actually, for this remote sensor, the satellite yaw angle needs to be regulated in real time to compensate for the image rotation on the focal plane, such that the detectors can always scan along the direction of image motion. Based on the auxiliary data, the image motion velocity vector V of the central pixel in the FOV can be computed, so the optimal yaw motion in principle is
\[
\psi^{*}_{t} = \frac{V_{y'}}{V_{x'}}, \qquad
\omega^{*}_{3}(t) = \dot{\psi}^{*}_{t} = \frac{\dot{V}_{y'} V_{x'} - V_{y'} \dot{V}_{x'}}{V^{2}_{x'}}.
\tag{50}
\]
The mean value of ω*_3(t) is ω̄*_3 = 0.01198°/s. We attribute Δω*_3 = ω̄_3 − ω̄*_3 = 0.00554°/s to the error of the satellite attitude control.

In order to validate the measurement, the technique of template reconfiguration was implemented again to check the expected phenomenon that, based on the high-accuracy information, the correlations between the new templates and T_s should be further improved. In addition, the distribution of γ near γ_max should become more compact, which is easy to understand, since much more useful information about the remote sensor's motion is introduced into the template reconstructions and increases the similarities between the lapped images.
Unlike the processing in image dense registration, in the validation phase larger original templates are selected. Let T_1 be the referenced image template centered at the examined element, T_2 the new template reconfigured by the rough prediction of optical flow, T̂_2 the new template reconfigured based on the precision attitude motion measurement, and T_s the template on the sensed image centered at the registration pixel. For all templates M = N = 101. The distributions of the normalized cross-correlation coefficients, corresponding to the referenced template centered on the sample selected in the No. 1000 row of the No. 7 CCD with the sensed image from the No. 8 CCD, are illustrated in Figure 14: (a) shows the situation for T_1 and T_s, (b) for T_2 and T_s, and (c) for T̂_2 and T_s. The compactness of the data is characterized by the peak value γ_max and the location variances σ²_x, σ²_y:
\[
\sigma^{2}_{x} = \frac{\sum_{i=1}^{M}\sum_{j=1}^{M} \gamma_{ij} \cdot \left(i - x_{\max}\right)^{2}}{\sum_{i=1}^{M}\sum_{j=1}^{M} \gamma_{ij}}, \qquad
\sigma^{2}_{y} = \frac{\sum_{i=1}^{M}\sum_{j=1}^{M} \gamma_{ij} \cdot \left(j - y_{\max}\right)^{2}}{\sum_{i=1}^{M}\sum_{j=1}^{M} \gamma_{ij}},
\tag{51}
\]
where x_max and y_max are, respectively, the column and row number of the peak-valued location.
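The compactness statistics (51) are γ-weighted second moments about the peak; a sketch (this assumes γ ≥ 0 over the window, e.g. after clamping negative NCC values, which the paper does not spell out):

```python
import numpy as np

# Peak-compactness statistics (51): gamma-weighted variances of the
# correlation surface about the peak location.

def compactness(gamma):
    M = gamma.shape[0]
    i, j = np.meshgrid(np.arange(M), np.arange(M), indexing="ij")
    xmax, ymax = np.unravel_index(np.argmax(gamma), gamma.shape)
    total = gamma.sum()
    var_x = (gamma * (i - xmax) ** 2).sum() / total
    var_y = (gamma * (j - ymax) ** 2).sum() / total
    return var_x, var_y
```

A surface concentrated in a single pixel gives zero variance, while a flat surface gives the variance of the index grid itself; the validation in Table 2 reports the square roots σ_x, σ_y of these quantities.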
In case (a), γ_max(a) = 0.893, with standard deviations σ_x(a) = 5.653 and σ_y(a) = 8.192; in case (b), γ_max(b) = 0.918, σ_x(b) = 4.839, and σ_y(b) = 6.686; in case (c), γ_max(c) = 0.976, and the variance sharply shrinks to σ_x(c) = 3.27, σ_y(c) = 4.06. In Table 2 some other samples, spaced 1000 rows apart, are also examined; these samples can be regarded as independent of each other.

Judging from the results, the performances in case (c) are better than those in case (b) and much better than those in case (a), since the precise attitude motion measurements enhance the precision of the optical flow inversion and so improve the similarities between the new templates and the sensed images. Note that, although in case (b) the variance decreases only slightly, as analyzed in Section 3.2, compared to case (a) the offsets of the centroids from the peaks have been corrected well by the use of the rough optical flow predictions.
4.3. Summary and Discussions. From the preceding sections we can see that, compared to ordinary NCC, the precision of image registration is greatly improved, which is attributed to the assistance of the technique of template reconfiguration. By applying the auxiliary data from the space-borne sensors to optical flow prediction, the relative deformations between the lapped image pairs can be computed with considerable accuracy. Afterwards, they are used to estimate the gray values of the corresponding parts of the sensed images and help us construct a new template for registration. As we know, the space-borne sensors may give the middle and low frequency components of the imager's attitude motion with excellent precision. Thus, compared to the classical direct template-based registration algorithms, the similarity between the reconfigured template and the sensed images may greatly increase. Furthermore, the minor deformations attributed to high frequency jitters can be detected by using subpixel registration between the reconfigured templates and the sensed images. This point of view is the exact basis of high frequency jitter measurement with optical flow inversion.
5. Conclusion

In this paper, optical flows and time-varying image deformation in space dynamic imaging are analyzed in detail. The nonlinear and anisotropic image motion velocity and optical flows are utilized to strengthen the well-posedness of the inverse problem of precise attitude measurement by the optical
Mathematical Problems in Engineering 15
[Figure 14 panels (a), (b), and (c): correlation surfaces plotted over spatial domain X (pix) versus spatial domain Y (pix).]
Figure 14: Normalized cross-correlation comparison. ((a) shows the distribution of γ by applying the direct NCC algorithm; (b) shows the distribution of γ after template reconfiguration with optical flow prediction; (c) shows the distribution of γ derived from posterior template reconfiguration with high-accurate sensor attitude measurement. It can be noticed that the values of γ tend to be distributed uniformly around the peak value location from left to right.)
Table 2: Correlation coefficient distributions for the registration templates.

Row number   γ_max (a, b, c)        σ_x (a, b, c)          σ_y (a, b, c)
No. 1000     0.893, 0.918, 0.976    5.653, 4.839, 0.327    8.192, 6.686, 0.406
No. 2000     0.807, 0.885, 0.929    8.704, 6.452, 0.213    6.380, 7.342, 0.571
No. 3000     0.832, 0.940, 0.988    4.991, 3.023, 0.155    7.704, 4.016, 0.193
No. 4000     0.919, 0.935, 0.983    5.079, 3.995, 0.361    5.873, 5.155, 0.385
No. 5000     0.865, 0.922, 0.951    5.918, 4.801, 0.237    6.151, 2.371, 0.257
No. 6000     0.751, 0.801, 0.907    12.57, 9.985, 0.789    14.66, 8.213, 0.206
No. 7000     0.759, 0.846, 0.924    11.63, 10.84, 0.714    12.71, 8.267, 0.490
No. 8000     0.884, 0.900, 0.943    8.125, 3.546, 0.542    8.247, 6.770, 0.288
flow inversion method. For the purpose of determining the conditions for fixed solutions of the optical flow equations, information-based image registration algorithms are proposed. We apply rough optical flow prediction to improve the efficiency and accuracy of dense image registration. Based on the results of registration, the attitude motions of remote sensors during imaging are measured by using the precise optical flow inversion method. The experiment on a remote sensor showed that the measurements are achieved with very high accuracy as well as with broad bandwidth. This method can be used extensively in remote sensing missions, such as image strip splicing, geometrical rectification, and nonblind image restoration, to promote the surveying precision and resolving power.
Conflict of Interests
The authors declare that they have no financial or personal relationships with other people or organizations that can inappropriately influence their work; there is no professional or other personal interest of any nature or kind in any product, service, and/or company that could be construed as influencing the position presented in, or the review of, this paper.
Acknowledgments
This work is supported by the National High Technology Research and Development Program of China (863 Program) (Grant no. 2012AA121503, Grant no. 2013AA12260, and Grant no. 2012AA120603) and the National Natural Science Foundation of China (Grant no. 61377012).
References
[1] V. Tchernykh, M. Beck, and K. Janschek, "An embedded optical flow processor for visual navigation using optical correlator technology," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '06), pp. 67–72, Beijing, China, October 2006.
[2] K. Janschek and V. Tchernykh, "Optical correlator for image motion compensation in the focal plane of a satellite camera," in Proceedings of the 15th IFAC Symposium on Automatic Control in Aerospace, Bologna, Italy, 2001.
[3] W. Priedhorsky and J. J. Bloch, "Optical detection of rapidly moving objects in space," Applied Optics, vol. 44, no. 3, pp. 423–433, 2005.
[4] T. Brox and J. Malik, "Large displacement optical flow: descriptor matching in variational motion estimation," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 33, no. 3, pp. 500–513, 2011.
[5] B. Feng, P. P. Bruyant, P. H. Pretorius, et al., "Estimation of the rigid-body motion from three-dimensional images using a generalized center-of-mass points approach," IEEE Transactions on Nuclear Science, vol. 53, no. 5, pp. 2712–2718, 2006.
[6] J. Wang, P. Yu, C. Yan, J. Ren, and B. He, "Space optical remote sensor image motion velocity vector computational modeling, error budget and synthesis," Chinese Optics Letters, vol. 3, no. 7, pp. 414–417, 2005.
[7] A. S. McEwen, M. E. Banks, N. Baugh, et al., "The High Resolution Imaging Science Experiment (HiRISE) during MRO's primary science phase (PSP)," Icarus, vol. 205, no. 1, pp. 2–37, 2010.
[8] F. Ayoub, S. Leprince, R. Binet, K. W. Lewis, O. Aharonson, and J.-P. Avouac, "Influence of camera distortions on satellite image registration and change detection applications," in Proceedings of the IEEE International Geoscience and Remote Sensing Symposium (IGARSS '08), pp. II-1072–II-1075, Boston, Mass, USA, 2008.
[9] S. Leprince, S. Barbot, F. Ayoub, and J.-P. Avouac, "Automatic and precise orthorectification, coregistration, and subpixel correlation of satellite images, application to ground deformation measurements," IEEE Transactions on Geoscience and Remote Sensing, vol. 45, no. 6, pp. 1529–1558, 2007.
[10] S. Leprince, P. Muse, and J.-P. Avouac, "In-flight CCD distortion calibration for pushbroom satellites based on subpixel correlation," IEEE Transactions on Geoscience and Remote Sensing, vol. 46, no. 9, pp. 2675–2683, 2008.
[11] Y. Yitzhaky, R. Milberg, S. Yohaev, and N. S. Kopeika, "Comparison of direct blind deconvolution methods for motion-blurred images," Applied Optics, vol. 38, no. 20, pp. 4325–4332, 1999.
[12] R. C. Hardie, K. J. Barnard, and R. Ordonez, "Fast super-resolution with affine motion using an adaptive Wiener filter and its application to airborne imaging," Optics Express, vol. 19, no. 27, pp. 26208–26231, 2011.
[13] E. M. Blixt, J. Semeter, and N. Ivchenko, "Optical flow analysis of the aurora borealis," IEEE Geoscience and Remote Sensing Letters, vol. 3, no. 1, pp. 159–163, 2006.
[14] M. G. Mozerov, "Constrained optical flow estimation as a matching problem," IEEE Transactions on Image Processing, vol. 22, no. 5, pp. 2044–2055, 2013.
[15] H. Sakaino, "A semitransparency-based optical-flow method with a point trajectory model for particle-like video," IEEE Transactions on Image Processing, vol. 21, no. 2, pp. 441–450, 2012.
[16] D. Korsch, "Closed form solution for three-mirror telescopes, corrected for spherical aberration, coma, astigmatism, and field curvature," Applied Optics, vol. 11, no. 12, pp. 2986–2987, 1972.
[17] G. Naletto, V. da Deppo, M. G. Pelizzo, R. Ragazzoni, and E. Marchetti, "Optical design of the wide angle camera for the Rosetta mission," Applied Optics, vol. 41, no. 7, pp. 1446–1453, 2002.
[18] M. Born, E. Wolf, A. B. Bhatia, and P. C. Clemmow, Principles of Optics: Electromagnetic Theory of Propagation, Interference and Diffraction of Light, 7th edition, 1999.
[19] H. Schaub and J. L. Junkins, Analytical Mechanics of Space Systems, AIAA Education Series, 2002.
[20] C. Wang, F. Xing, J. H. Wang, and Z. You, "Optical flows method for lightweight agile remote sensor design and instrumentation," in International Symposium on Photoelectronic Detection and Imaging, vol. 8908 of Proceedings of the SPIE, 2013.
[21] T. Sun, F. Xing, and Z. You, "Optical system error analysis and calibration method of high-accuracy star trackers," Sensors, vol. 13, no. 4, pp. 4598–4623, 2013.
[22] T. Sun, F. Xing, Z. You, and M. Wei, "Motion-blurred star acquisition method of the star tracker under high dynamic conditions," Optics Express, vol. 21, no. 17, pp. 20096–20110, 2013.
[23] L. Younes, "Combining geodesic interpolating splines and affine transformations," IEEE Transactions on Image Processing, vol. 15, no. 5, pp. 1111–1119, 2006.
[24] B. Zitova and J. Flusser, "Image registration methods: a survey," Image and Vision Computing, vol. 21, no. 11, pp. 977–1000, 2003.
[25] Z. L. Song, S. Li, and T. F. George, "Remote sensing image registration approach based on a retrofitted SIFT algorithm and Lissajous-curve trajectories," Optics Express, vol. 18, no. 2, pp. 513–522, 2010.
[26] V. Arevalo and J. Gonzalez, "Improving piecewise linear registration of high-resolution satellite images through mesh optimization," IEEE Transactions on Geoscience and Remote Sensing, vol. 46, no. 11, pp. 3792–3803, 2008.
[27] Z. Levi and C. Gotsman, "D-Snake: image registration by as-similar-as-possible template deformation," IEEE Transactions on Visualization and Computer Graphics, vol. 19, no. 2, pp. 331–343, 2013.
[28] R. J. Althof, M. G. J. Wind, and J. T. Dobbins III, "A rapid and automatic image registration algorithm with subpixel accuracy," IEEE Transactions on Medical Imaging, vol. 16, no. 3, pp. 308–316, 1997.
[29] W. Tong, "Subpixel image registration with reduced bias," Optics Letters, vol. 36, no. 5, pp. 763–765, 2011.
[30] Y. Bentoutou, N. Taleb, K. Kpalma, and J. Ronsin, "An automatic image registration for applications in remote sensing," IEEE Transactions on Geoscience and Remote Sensing, vol. 43, no. 9, pp. 2127–2137, 2005.
[31] L. S. Ming, L. Yan, and L. Jindong, "Mapping Satellite-1 transmission type photogrammetric and remote sensing," Journal of Remote Sensing, vol. 16, supplement, pp. 10–16, 2012 (Chinese).
[32] J. P. Lewis, "Fast template matching," Vision Interface, vol. 95, pp. 120–123, 1995.
[33] H. Foroosh, J. B. Zerubia, and M. Berthod, "Extension of phase correlation to subpixel registration," IEEE Transactions on Image Processing, vol. 11, no. 3, pp. 188–200, 2002.
Figure 1: The analysis of dynamic imaging for the three-dimensional planet.
imager holds a large attitude angle. Obviously, the shapes and lengths of ∂₁ and ∂₂ also have notable differences during push-broom, which implies that the geometrical structure of the image is time-varying as well as nonuniform. Furthermore, it can be discovered later that the deforming rates mainly depend on the planet's apparent motion observed by the camera.
Considering an object point p on the earth, its position vector relative to O_e is denoted as ρ_p. As a convention in the following discussions, ᴵρ_p represents the vector measured in frame I and, accordingly, ᶜρ_p is the same vector measured in frame C. We select one unit vector τ which is tangent to the surface of the earth at p. Let r(x₁, x₂, x₃) be the position vector of p relative to o_s; then ᶜr and ᶜṙ characterize the apparent motion of p. Assume that the image point p′ is formed on the focal plane with coordinates (x′₁, x′₂, x′₃) in frame C. Generally, the optical systems of space cameras are well designed and free from optical aberrations, and the static PSF is approximate to the diffraction limit [16, 17]; thus, following [18], we have

x′_i = β x_i  (i = 1, 2),   x′₃ = f′,  (1)

where f′ is the effective focal length, the lateral magnification of p′ is β = (−1)^(m−1) f′/(r · ê₃), m is the number of intermediate images in the optical system, and ê_i (i = 1, 2, 3) is the base of C.
Let r_s be the position vector of the satellite relative to O_e; then r = ρ − r_s. In imaging, the flight trajectory of the satellite platform in I can be treated as a Keplerian orbit, as illustrated in Figure 2. Given the orbit elements (i₀, inclination; Ω, longitude of the ascending node; ω, argument of perigee; a, semimajor axis; e, eccentricity; M_t, mean anomaly at epoch), we implement the Newton-Raphson method to solve (2) and get the eccentric anomaly E from the given mean anomaly M_t = M₀ + n(t − t₀), where n = 2π/P and P is the orbit period [11]:

M_t − (E − e sin E) = 0.  (2)
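A minimal Newton-Raphson sketch for (2); the tolerance, iteration cap, and starting guess are our assumptions, not values from the paper:

```python
import math

def eccentric_anomaly(M_t, e, tol=1e-12, max_iter=50):
    """Newton-Raphson solution of Kepler's equation (2):
    M_t - (E - e*sin(E)) = 0, solved for the eccentric anomaly E."""
    E = M_t if e < 0.8 else math.pi  # common starting guess
    for _ in range(max_iter):
        f = M_t - (E - e * math.sin(E))   # residual of (2)
        df = -(1.0 - e * math.cos(E))     # derivative d f / d E
        dE = -f / df
        E += dE
        if abs(dE) < tol:
            break
    return E
```

Because |df/dE| = 1 − e cos E is bounded away from zero for elliptic orbits (e < 1), the iteration converges rapidly from either starting guess.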
Figure 2: Orbital motion of the remote sensor (the sketch shows the orbit plane, the equatorial plane, the perigee, and the orbit elements i, Ω, ω, and semimajor axis a).
In frame O,

ᴼr_s = ( a(cos E − e), b sin E, 0 )ᵀ,
ᴼv_s = ( −a sin E, b cos E, 0 )ᵀ · n/(1 − e cos E).  (3)

The coordinate transform matrix between O and I is

T_OI = ( CωCΩ − SωCi₀SΩ    −SωCΩ − CωCi₀SΩ     Si₀SΩ
         CωSΩ + SωCi₀CΩ    −SωSΩ + CωCi₀CΩ    −Si₀CΩ
         SωSi₀              CωSi₀               Ci₀   ).  (4)

For simplicity we write Cα = cos α and Sα = sin α.
In engineering, the coordinate transfer matrix T_OI can also be derived from the real-time measurements of GPS. The base vectors of frame O in I are

û₃ = −ᴵr_s/|r_s|,   û₂ = (ᴵv_s × ᴵr_s)/|v_s × r_s|,   û₁ = û₂ × û₃;

then T_OI = (û₁, û₂, û₃)⁻¹ and

ᴵr_s = T_OI · ᴼr_s,   ᴵv_s = T_OI · ᴼv_s.  (5)
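The GPS-based construction of the orbital frame basis in (5) can be sketched as follows. Stacking the basis vectors as columns is our convention; since the basis is orthonormal, inverting the stacked matrix reduces to a transpose:

```python
import numpy as np

def orbital_frame_basis(r_s, v_s):
    """Build the orbital-frame basis (u1, u2, u3) from the GPS-measured
    inertial position r_s and velocity v_s, per Eq. (5):
      u3 = -r_s/|r_s|,  u2 = (v_s x r_s)/|v_s x r_s|,  u1 = u2 x u3.
    Returns the 3x3 matrix whose columns are u1, u2, u3."""
    r_s, v_s = np.asarray(r_s, float), np.asarray(v_s, float)
    u3 = -r_s / np.linalg.norm(r_s)           # nadir direction
    h = np.cross(v_s, r_s)                    # anti-orbit-normal
    u2 = h / np.linalg.norm(h)
    u1 = np.cross(u2, u3)                     # completes right-handed triad
    return np.column_stack((u1, u2, u3))
```

For a circular equatorial orbit, u3 points toward the earth's center and u1 lies along the flight direction, which is the familiar LVLH-style frame.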
Associating the equation of the boresight with the ellipsoid surface of the earth in C yields

(X² + Z²)/A_e² + Y²/B_e² = 1,
(X − X_s)/s₁ = (Y − Y_s)/s₂ = (Z − Z_s)/s₃.  (6)

Here A_e = 6378.137 km and B_e = 6356.752 km are the lengths of the earth's semimajor and semiminor axes, and s_i (i = 1, 2, 3) are the components of the unit vector of ᴵr. We write the solution of (6) as ᴵρ = (X, Y, Z)ᵀ. Hence ᴵr = ᴵρ − ᴵr_s and ᶜr = M · A · T⁻¹_OI · ᴵr, where M is the coordinate transformation matrix from frame B to frame C (a constant matrix for fixed installation) and A is the attitude matrix of the satellite; according to the 1-2-3 rotating order, we have

A = Rψ · Rθ · Rφ,  (7)
in which

Rψ = (  cos ψ_t   sin ψ_t   0
       −sin ψ_t   cos ψ_t   0
        0         0         1 ),

Rθ = (  cos θ_t   0   −sin θ_t
        0         1    0
        sin θ_t   0    cos θ_t ),

Rφ = (  1    0          0
        0    cos φ_t    sin φ_t
        0   −sin φ_t    cos φ_t ),  (8)

where φ_t, θ_t, and ψ_t are, in order, the real-time roll angle, pitch angle, and yaw angle at moment t. The velocity of p in C can be written in the following scalar form:

ṙ_i = ᶜṙ · ê_i  (i = 1, 2, 3).  (9)
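The composition A = Rψ · Rθ · Rφ of (7)-(8) can be sketched directly (the function name is ours):

```python
import numpy as np

def attitude_matrix(phi, theta, psi):
    """Attitude matrix A = R_psi @ R_theta @ R_phi for the 1-2-3
    rotation order of Eqs. (7)-(8): roll phi, pitch theta, yaw psi."""
    c, s = np.cos, np.sin
    R_psi = np.array([[ c(psi),  s(psi), 0.0],
                      [-s(psi),  c(psi), 0.0],
                      [ 0.0,     0.0,    1.0]])
    R_theta = np.array([[ c(theta), 0.0, -s(theta)],
                        [ 0.0,      1.0,  0.0     ],
                        [ s(theta), 0.0,  c(theta)]])
    R_phi = np.array([[1.0,  0.0,     0.0   ],
                      [0.0,  c(phi),  s(phi)],
                      [0.0, -s(phi),  c(phi)]])
    return R_psi @ R_theta @ R_phi
```

Each factor is orthogonal with unit determinant, so A is a proper rotation for any angle triple.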
Thus the velocity of the image point p′ will be

ẋ′_i = β̇ x_i + β ẋ_i
     = (−1)^m (f′ ṙ₃/(r · ê₃)²) x_i + (−1)^(m−1) (f′/(r · ê₃)) ẋ_i  (i = 1, 2).  (10)

Substituting (2)–(9) into (10), the velocity vector of the image point, v′ = (ẋ′₁, ẋ′₂)ᵀ, can be expressed as an explicit function of several variables, that is,

v′ = v(i₀, Ω, ω, e, M_t; φ_t, θ_t, ψ_t, φ̇_t, θ̇_t, ψ̇_t; x′₁, x′₂).  (11)

For conciseness, this analytical expression of v′ is omitted here.
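The ground-point step feeding r into (10) is the ray-ellipsoid intersection of (6), which reduces to a quadratic in the ray parameter. A sketch under the axis convention of (6) as written (Y along the polar axis, which we take from the equation's form); the helper name is ours:

```python
import numpy as np

A_E, B_E = 6378.137, 6356.752  # earth semi-axes in km, Eq. (6)

def boresight_ground_point(r_sat, s):
    """Intersect the boresight ray X(t) = r_sat + t*s with the ellipsoid
    (X^2 + Z^2)/A_e^2 + Y^2/B_e^2 = 1 of Eq. (6). Returns the near
    intersection, i.e. the imaged ground point, in km."""
    d = np.array([1/A_E**2, 1/B_E**2, 1/A_E**2])  # quadric diagonal
    r, s = np.asarray(r_sat, float), np.asarray(s, float)
    a = np.sum(d * s * s)
    b = 2.0 * np.sum(d * r * s)
    c = np.sum(d * r * r) - 1.0
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        raise ValueError("boresight does not intersect the ellipsoid")
    t = (-b - np.sqrt(disc)) / (2.0 * a)  # smaller root: near side
    return r + t * s
```

Taking the smaller root selects the earth-facing intersection rather than the point on the far side of the planet.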
The orbit elements can be determined according to instantaneous GPS data. Besides, they can also be calculated with sufficient accuracy by celestial mechanics [19]. On the other hand, the attitude angles φ_t, θ_t, and ψ_t can be roughly measured by the star trackers and GPS. Meanwhile, their time rates φ̇_t, θ̇_t, and ψ̇_t have the following relations:

( ω₁, ω₂, ω₃ )ᵀ = Rψ (0, 0, ψ̇_t)ᵀ + Rθ [ (0, θ̇_t, 0)ᵀ + Rφ (φ̇_t, 0, 0)ᵀ ].  (12)
Here ω₁, ω₂, and ω₃ are the three components of the remote sensor's angular velocity relative to the orbital frame O, resolved in frame C. They can be roughly measured by space-borne gyroscopes or other attitude sensors.
It is easy to verify from (11) that the instantaneous image velocity field on the focal plane appears significantly nonlinear and anisotropic for large-FOV remote sensors, especially
when they are applied to perform large-angle attitude maneuvering, for example, in side-looking by swing, stereoscopic looking by pitching, and so forth. Under these circumstances, in order to acquire photos with high spatial, temporal, and spectral resolution, image motion velocity control strategies should be executed in real time [20], based on auxiliary data measured by reliable space-borne sensors [21, 22]. In detail, for TDI CCD cameras, the line rates of the detectors must be controlled to synchronize with the local image velocity modules during exposure so as to avoid along-track motion blurring, and the attitude of the remote sensor should be regulated in time to keep the detectors' push-broom direction aligned with the direction of image motion so as to avoid cross-track motion blurring.
3. Optical Flow Rough Inversion and Dense Image Registration
Optical flow is another important physical model, carrying the whole energy and information of moving images in dynamic imaging. A specific optical flow trajectory is an integral curve which is always tangent to the image velocity field; thus we have

x′₁(T) = ∫₀ᵀ ẋ′₁(x′₁, x′₂, t) dt,
x′₂(T) = ∫₀ᵀ ẋ′₂(x′₁, x′₂, t) dt.  (13)
Since (13) are coupled nonlinear integral equations, we convert them to numerical forms and solve them iteratively:

x′_i(0) = x′_i(t)|_{t=0},
x′_j(n) = x′_j(n − 1) + (1/2){ẋ′_j[x′₁(n − 1), x′₂(n − 1), n] + ẋ′_j[x′₁(n − 1), x′₂(n − 1), n − 1]} Δt
  (j = 1, 2; n ∈ Z⁺).  (14)
It is evident that the algorithm has enough precision so long as the time-interval step size Δt is small enough. It can be inferred from (13) that a strongly nonlinear image velocity field may distort the optical flows so much that the geometrical structure of the image behaves irregularly. Therefore, if we intend to invert the information of optical flow to measure the attitude motion, the general formula of image deformation due to the optical flows should be deduced.
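The iteration (14) is a trapezoid-like update along the trajectory. A sketch, where the velocity-field callable stands in for the analytical expression (11) and is therefore an assumption of this illustration:

```python
import numpy as np

def integrate_flow(x0, velocity, T, dt):
    """Numerically integrate one optical-flow trajectory, Eqs. (13)-(14):
    x(n) = x(n-1) + 0.5*[v(x(n-1), t_n) + v(x(n-1), t_{n-1})]*dt.
    `velocity` is any callable (x, t) -> (v1, v2) supplying the image
    velocity field; returns the sampled trajectory as an array."""
    x = np.asarray(x0, float)
    traj = [x.copy()]
    n_steps = int(round(T / dt))
    for n in range(1, n_steps + 1):
        v_prev = np.asarray(velocity(x, (n - 1) * dt), float)
        v_curr = np.asarray(velocity(x, n * dt), float)
        x = x + 0.5 * (v_prev + v_curr) * dt  # trapezoid-like step of (14)
        traj.append(x.copy())
    return np.array(traj)
```

Halving Δt roughly quarters the local truncation error, which is the sense in which the scheme "has enough precision so long as Δt is small enough."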
3.1. Time-Varying Image Deformation in Dynamic Imaging. Firstly, we will investigate some differential characteristics of the moving image of an extended object on the earth's surface. As shown in Figure 1, a microspatial variation of p along τ on the curved surface can be expressed as δρ_p = δl τ. Its conjugated image is

δx′_i = δβ x_i + β δx_i.  (15)
We expand the term δβ:

δβ = (−1)^(m−1) [ f′/((r + δr) · ê₃) − f′/(r · ê₃) ]
   = (−1)^(m−1) (f′/(r · ê₃)) Σ_{k=1}^∞ (−1)^k (δr · ê₃/(r · ê₃))^k
   ≈ (−1)^m f′ (τ · ê₃) δl/(r · ê₃)².  (16)
Taking derivatives with respect to the variable t on both sides of (15), we have

δẋ′_i = δβ̇ x_i + δβ ẋ_i + β̇ δx_i + β δẋ_i.  (17)

According to (16), we know that δβ̇ ≈ 0. On the other hand, the variation of r can be expressed through a series of coordinate transformations, that is,

ᶜ(δr) = δl [M A T⁻¹_OI T_EI ᴱτ].  (18)
Notice that ᴱτ is a fixed tangent vector of the earth's surface at the object point p, which is time-invariant and specifies an orientation of the motionless scene on the earth.
Consequently,

(ᶜδṙ/δl)_τ = (Ṁ A T⁻¹_OI T_EI + M Ȧ T⁻¹_OI T_EI + M A Ṫ⁻¹_OI T_EI + M A T⁻¹_OI Ṫ_EI) ᴱτ,  (19)

where the coordinate transform matrix from frame E to I is

T_EI = (  cos H_p   0   −sin H_p
          0         1    0
          sin H_p   0    cos H_p ).  (20)
Let ω_e be the angular rate of the earth and α_p the longitude of p on the earth; then the hour angle of p at time t is H_p(t) = GST + α_p + ω_e t, in which GST represents Greenwich sidereal time.
The microscale image deformation of the extended scene on the earth along the direction of τ during t₁ ~ t₂ can be written as

[δx′_i]_τ(t₂) − [δx′_i]_τ(t₁) = ∫_{t₁}^{t₂} (δẋ′_i)_τ dt.  (21)
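The matrix T_EI of (20), driven by the hour angle H_p(t), can be evaluated as follows; the numeric value of ω_e is our assumption (the standard earth rotation rate), and all angles are taken in radians:

```python
import numpy as np

OMEGA_E = 7.2921159e-5  # earth rotation rate in rad/s (assumed value)

def T_EI(gst, alpha_p, t):
    """Rotation matrix from frame E to frame I per Eq. (20), with the
    hour angle H_p(t) = GST + alpha_p + omega_e * t; `gst` is the
    Greenwich sidereal time at t = 0 and `alpha_p` the longitude of p."""
    H = gst + alpha_p + OMEGA_E * t
    return np.array([[ np.cos(H), 0.0, -np.sin(H)],
                     [ 0.0,       1.0,  0.0      ],
                     [ np.sin(H), 0.0,  np.cos(H)]])
```

Its time derivative Ṫ_EI, needed in (19), follows by differentiating the entries and multiplying by ω_e.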
From (17) we have

(δẋ′_i)_τ/δl = (δβ/δl) ẋ_i + β̇ (δx_i/δl) + β (δẋ_i/δl).  (22)
According to (16), (18), and (19), we obtain the terms in (22):

δβ/δl = (−1)^m f′ (ᶜτ · ê₃)/(r · ê₃)²,
δx_i/δl = (M A T⁻¹_OI T_EI ᴱτ) · ê_i,
δẋ_i/δl = (ᶜδṙ/δl)_τ · ê_i + (ᶜδr/δl)_τ · ê̇_i.  (23)
Furthermore, if the camera is fixed to the satellite platform, then Ṁ = 0 and ê̇_i = 0. Consequently, (22) becomes

F_i(t, τ) = (δẋ′_i)_τ/δl
  = (−1)^m f′ ((ᶜτ · ê₃)/(r · ê₃)²) ẋ_i
  + (−1)^m f′ ((ṙ · ê₃)/(r · ê₃)²) (M A T⁻¹_OI T_EI ᴱτ) · ê_i
  + (−1)^(m−1) (f′/(r · ê₃)) (M Ȧ T⁻¹_OI T_EI + M A Ṫ⁻¹_OI T_EI + M A T⁻¹_OI Ṫ_EI) ᴱτ · ê_i.  (24)
For the motionless scene on the earth's surface, ᴱτ is a time-independent but space-dependent unit tangent vector, which meanwhile represents a specific orientation on the ground. Moreover, the physical meaning of the function F_i(t, τ) is the image deformation of a unit-length curve on the curved surface along the direction of ᴱτ in a unit time interval, that is, the instantaneous space-time deforming rate of the image of the object along ᴱτ.
Consequently, in dynamic imaging, macroscopic deformation of the moving image can be derived from the integral of F_i(t, τ) in space and time. Referring to Figure 1, let Γ be an arbitrary curve of the extended object on the earth, let Γ′ be its image, let p, q ∈ Γ be two arbitrary points, and let their Gaussian images be p′, q′ ∈ Γ′. Let ᴱτ = T(s) be a vector-valued function of the variable s (the arc length), which is time-invariant in frame E and gives the tangent vectors along the curve.
So the image deformation taking place during t₁ ~ t₂ is able to be described as

[(x′_p)_i]_{t₁}^{t₂} − [(x′_q)_i]_{t₁}^{t₂} = ∫_Γ ∫_{t₁}^{t₂} F_i ∘ T dt ds,  (25)

in which F_i ∘ T = F_i[t, T(s)].
Now, in terms of (24) and (25), we can see that the image deformation is also anisotropic and nonlinear, depending not only on the optical flow's evolution but also on the geometry of the scene.
3.2. Dense Image Registration through Optical Flow Prediction. As mentioned in the preceding sections, optical flow is the most precise model for describing image motion and time-varying deformation. Conversely, it is possible to invert optical flow with high accuracy if the image motion and deformation can be detected. As we know, the low-frequency signal components of angular velocity are easier to sense precisely by attitude sensors such as gyroscopes and star trackers, but the higher-frequency components are hard to measure with high accuracy. Actually, however, perturbations from high-frequency jittering are the critical cause of motion blurring and local image deformations, since the influences brought by the low-frequency components of attitude motion are easier to restrain in imaging by regulating the remote sensors.
Since (13) and (25) are very sensitive to the attitude motion, the angular velocity is able to be measured with high resolution as well as broad frequency bandwidth so long as the image motion and deformation are determined with a certain precision. Fortunately, the lapped images of the overlapped detectors meet the needs, because they were captured in turn as the same parts of the optical flow pass through these adjacent detectors sequentially. Without losing generality, we will investigate the most common form of CCD layout, for which two rows of detectors are arranged in parallel. The time-phase relations of image formation due to optical flow evolution are illustrated in Figure 3, where the moving image elements α₁, α₂, ... (in the left gap) and β₁, β₂, ... (in the right gap) are captured firstly at the same time, since their optical flows pass through the prior detectors. However, because of nonuniform optical flows, they will not be captured simultaneously by the posterior detectors. Therefore, the geometrical structures of the photographs will be time-varying and nonlinear. It is evident from Figure 3 that the displacements and relative deformations in frame C between the lapped images can be determined by measuring the offsets of the sample image element pairs in frame P.
Let Δy′ = Δx′₁ and Δx′ = Δx′₂ be the relative offsets of the same object's image on the two photos; they are all calibrated in C or F. We will measure them by image registration.
As far as image registration methods are concerned, one of the hardest problems is complex deformation, which is prone to weaken the similarity between the referenced images and the sensed images, so that it might introduce large deviations from the true values or even lead to algorithm failure. Some typical methods have been studied in [23–25]. Generally, most of them concentrated on several simple deforming forms, such as affine, shear, translation, rotation, or their combinations, instead of investigating more sophisticated dynamic deforming models. In [26–30], some effective approaches have been proposed to increase the accuracy and robustness of the algorithms, with reasonable models built according to the specific properties of the objective images.
For conventional template-based registration methods, once a template has been extracted from the referenced image, the information about gray values, shape, and frequency spectrum does not increase, since no additional physical information resources would be offered. But actually such information has changed by the time the optical flows arrive at the posterior detectors. Therefore, the cross-correlations between the templates and the sensed images certainly reduce. So, in order to detect the minor image motions and complex deformations between the lapped images, high-accurate registration is indispensable, which means that a more precise model should be implemented. We treat it using a technique called template reconfiguration. In summary, the method is established on the idea of keeping the completeness of the information about the optical flows.
Figure 3: Nonlinear image velocity field and optical flow trajectories influence the time-phase relations between the lapped images captured by the adjacent overlapped detectors.
In operating, as indicated in Figure 3, take the lapped images captured by the detectors in the prior array as the referenced images and the images captured by the posterior detectors as the sensed images. Firstly, we will rebuild the optical flows based on the rough measurements of the space-borne sensors and then reconfigure the original templates to construct new templates whose morphologies are more approximate to the corresponding parts on the sensed images. With this process, the information about the imaging procedures is able to be added into the new templates so as to increase the degree of similarity to the sensed images. The method may dramatically raise the accuracy of dense registration, such that the high-accurate offsets between the lapped image pairs are able to be determined.
In the experiment, we examined Mapping Satellite-1, a Chinese surveying satellite operating in a 500 km height sun-synchronous orbit, which is used for high-accurate photogrammetry [31]; its structure is shown in Figure 4. One of the effective payloads, the three-line-array panchromatic CCD cameras, has good geometrical accuracy: the ground pixel resolution is superior to 5 m, the spectral range is 0.51 μm ~ 0.69 μm, and the swath is 60 km. Another payload, the high resolution camera, is designed with a Cook-TMA optical system, which gives a wide field of view [16, 17], and its panchromatic spatial resolution can reach 2 m.
In engineering, for the purpose of improving the image quality and surveying precision, high-accuracy measurements of jitter and attitude motion are essential for posterior processing. Thus, here we investigate the images and the auxiliary data of the large-FOV high resolution camera to deal with the problem. The experimental photographs were captured with 10° side looking. The focal plane of the camera
Figure 4: The structure of Mapping Satellite-1 and its effective payloads.
consists of 8 panchromatic TDI CCD detectors, and there are η = 96 physical lapped pixels between each pair of adjacent detectors.
The scheme of the processing in registering one image element χ is illustrated in Figure 5.
Step 1. Set the original lapped image strips (the images which were acquired directly by the detectors and without any postprocessing) in frame C.
Step 2. Compute the deformations of all image elements on the referenced template with respect to their optical flow trajectories.
We extract the original template from the referenced image, denoted as T₁, which consists of N² square elements, that is, dim(T₁) = N × N. Let χ be its central element and w the width of each element; here w = 8.75 μm. Before the moving image is captured by the posterior detector, in terms of (25), the current shapes and energy distribution of the elements can be predicted by the optical flow based on the auxiliary data of the remote sensor.
In order to simplify the algorithm, a first order approximation is allowed without introducing significant errors. This approximation means that the shape of every image element is always quadrilateral. Linear interpolations are carried out to determine the four sides according to the deformations along the radial directions of the vertexes, as shown in Figure 5. The unit radial vectors are denoted by $\tau'_1 \sim \tau'_4$ in frame $\mathcal{C}$:
\[
\tau'_1 = \frac{\sqrt{2}}{2}e_1 - \frac{\sqrt{2}}{2}e_2, \qquad \tau'_2 = \frac{\sqrt{2}}{2}e_1 + \frac{\sqrt{2}}{2}e_2,
\]
\[
\tau'_3 = -\frac{\sqrt{2}}{2}e_1 + \frac{\sqrt{2}}{2}e_2, \qquad \tau'_4 = -\frac{\sqrt{2}}{2}e_1 - \frac{\sqrt{2}}{2}e_2. \tag{26}
\]
Suppose image point $p'$ is the center of an arbitrary element $\Sigma'$ in $T_1$, and let $\Sigma$ be the area element on the earth surface which is conjugate to $\Sigma'$. The four unit radial vectors of the vertexes
8 Mathematical Problems in Engineering
Figure 5: Optical flow prediction and template reconfiguration.
on $\Sigma$, $\tau_1 \sim \tau_4$, are conjugate to $\tau'_1 \sim \tau'_4$ and tangent to the earth surface at $p$. From the geometrical relations we have
\[
{}^{\mathcal{C}}\tau_i = (-1)^m \frac{r' \times \tau'_i \times {}^{\mathcal{C}}n_p}{\left| r' \times \tau'_i \times {}^{\mathcal{C}}n_p \right|},
\]
\[
{}^{E}\tau_i = T_{EI}^{-1} T_{OI} A^{-1} M^{-1}\, {}^{\mathcal{C}}\tau_i, \qquad {}^{\mathcal{C}}n_p = M A T_{OI}^{-1} T_{EI}\, {}^{E}n_p, \tag{27}
\]
where ${}^{E}n_p$ is the unit normal vector of $\Sigma$ at $p$. We predict the deformations along $\tau_1 \sim \tau_4$ during $t_1 \sim t_2$ according to the measurements of the GPS, star trackers, and gyroscopes, as explained in Figure 6; $t_1$ is the imaging time on the prior detector and $t_2$ is the imaging time on the posterior detector:
\[
[\delta x'_1]^{\Delta t}_{\tau_k} = [\delta x'_1]^{t_2}_{\tau_k} - [\delta x'_1]^{t_1}_{\tau_k}, \qquad
[\delta x'_2]^{\Delta t}_{\tau_k} = [\delta x'_2]^{t_2}_{\tau_k} - [\delta x'_2]^{t_1}_{\tau_k} \quad (k = 1 \sim 4). \tag{28}
\]
The shape of the deformed image $\Sigma'^{t_2}$ can be obtained through linear interpolation with
\[
[\delta r']^{\Delta t}_{\tau_k} = \left( [\delta x'_1]^{\Delta t}_{\tau_k},\; [\delta x'_2]^{\Delta t}_{\tau_k} \right). \tag{29}
\]
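The first-order vertex prediction described above can be sketched in a few lines: each vertex slides along its unit radial direction of (26) by the predicted radial displacement, so the element stays quadrilateral. This is an illustrative sketch only; the names `deformed_quad`, `half_diag`, and `radial_disp` are ours, and in the real pipeline the displacements would come from (28).

```python
import math

def deformed_quad(center, half_diag, radial_disp):
    """Predict the deformed quadrilateral of one image element.

    center      -- (x, y) of the element center p'
    half_diag   -- undeformed center-to-vertex distance
    radial_disp -- the four radial displacements [delta r']_{tau_k}, k = 1..4
    """
    s = math.sqrt(2) / 2
    # unit radial vectors tau'_1 .. tau'_4 of (26)
    taus = [(s, -s), (s, s), (-s, s), (-s, -s)]
    cx, cy = center
    verts = []
    for (tx, ty), d in zip(taus, radial_disp):
        r = half_diag + d          # vertex moves along its radial direction
        verts.append((cx + tx * r, cy + ty * r))
    return verts
```

With zero displacements the element is recovered unchanged; nonzero entries stretch or shrink individual corners, which is exactly the first-order deformation model.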
Step 3. Reconfigure the referenced template $T_1$ according to the optical flow prediction and then get a new template $T_2$.

Let $T'_1$ be the deformed image of $T_1$ computed in Step 2. Let $\chi = B_{ij}$ be the central element of $T'_1$; integers $i$ and $j$ are, respectively, the row number and the column number of $B_{ij}$. The gray value $l_{ij}$ of each element in $T'_1$ is equal to its counterpart in $T_1$ with the same indexes. In addition, we initialize a null template $T_0$ whose shape and orientation are identical to $T_1$; the central element of $T_0$ is denoted by $T_{ij}$.
Figure 6: Deformation of a single element.
Then we cover $T_0$ upon $T'_1$ and let their centers coincide, that is, $T_{ij} = B_{ij}$, as shown in Figure 7. Denote the vertexes of $T'_1$ as $V^k_{ij}$ $(k = 1 \sim 4)$. Therefore the connective relation for adjacent elements can be expressed by $V^1_{ij} = V^2_{i,j-1} = V^3_{i-1,j-1} = V^4_{i-1,j}$.
Next we reassign the gray value $h'_{ij}$ to $T_{ij}$ $(i = 1 \cdots N,\ j = 1 \cdots N)$ in sequence to construct a new template $T_2$. The process is just a simulation of the image resampling when the optical flow arrives at the posterior detector, as indicated in Figure 3. That is,
\[
h'_{ij} = \sum_{m=i-1}^{i+1} \sum_{n=j-1}^{j+1} \eta_{mn} l_{mn}, \tag{30}
\]
where the weight coefficient $\eta_{mn} = S_{mn}/w^2$ and $S_{mn}$ is the area of the intersecting polygon of $B_{mn}$ with $T_{ij}$.
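The area-weighted resampling of (30) can be sketched as follows, assuming each deformed element $B_{mn}$ and each cell $T_{ij}$ is given as a counterclockwise vertex list; the intersection area $S_{mn}$ is obtained here with standard Sutherland-Hodgman polygon clipping. The function names are illustrative, not from the paper.

```python
def polygon_area(poly):
    """Unsigned shoelace area of a polygon given as [(x, y), ...]."""
    a = 0.0
    for (x1, y1), (x2, y2) in zip(poly, poly[1:] + poly[:1]):
        a += x1 * y2 - x2 * y1
    return abs(a) / 2.0

def clip_polygon(subject, clip):
    """Sutherland-Hodgman clipping of `subject` by the convex polygon `clip`
    (both counterclockwise vertex lists); returns the intersection polygon."""
    def inside(p, a, b):
        return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0]) >= 0
    def intersect(p, q, a, b):
        dx1, dy1 = q[0] - p[0], q[1] - p[1]
        dx2, dy2 = b[0] - a[0], b[1] - a[1]
        t = ((a[0] - p[0]) * dy2 - (a[1] - p[1]) * dx2) / (dx1 * dy2 - dy1 * dx2)
        return (p[0] + t * dx1, p[1] + t * dy1)
    out = list(subject)
    for a, b in zip(clip, clip[1:] + clip[:1]):
        src, out = out, []
        for p, q in zip(src, src[1:] + src[:1]):
            if inside(q, a, b):
                if not inside(p, a, b):
                    out.append(intersect(p, q, a, b))
                out.append(q)
            elif inside(p, a, b):
                out.append(intersect(p, q, a, b))
        if not out:
            break
    return out

def resample_gray(cell, neighbors, w):
    """h'_ij of (30): sum of S_mn / w^2 * l_mn over the 3x3 neighborhood.
    `neighbors` is a list of (quadrilateral B_mn, gray value l_mn)."""
    h = 0.0
    for quad, l in neighbors:
        clipped = clip_polygon(quad, cell)
        if len(clipped) >= 3:
            h += polygon_area(clipped) / w ** 2 * l
    return h
```

For example, a deformed element of gray value 100 overlapping half of a unit cell contributes 50 to that cell, matching $\eta_{mn} = S_{mn}/w^2 = 0.5$.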
Figure 7: Template reconfiguration.
Step 4. Compute the normalized cross-correlation coefficients between $T_2$ and the sensed image and then determine the subpixel offset of $T_2$ relative to the sensed image in frame $\mathcal{P}$.

Firstly, for this method the search space on the sensed image can be contracted considerably, since the optical flow trajectories for the referenced elements have been predicted in Step 2. Assume that the search space is $T_s$, $\dim(T_s) = M \times M$. When $T_{ij}$ moves to the pixel $(n_1, n_2)$ on $T_s$, the normalized cross-correlation (NCC) coefficient is given by
\[
\gamma(n_1, n_2) = \frac{\sum_{x,y} \left[ g(x,y) - \bar{g}_{xy} \right] \left[ h(x - n_1, y - n_2) - \bar{h} \right]}
{\left\{ \sum_{x,y} \left[ g(x,y) - \bar{g}_{xy} \right]^2 \sum_{x,y} \left[ h(x - n_1, y - n_2) - \bar{h} \right]^2 \right\}^{0.5}}, \tag{31}
\]
where $\bar{g}_{xy}$ is the mean gray value of the segment of $T_s$ that is masked by $T_2$ and $\bar{h}$ is the mean of $T_2$. Equation (31) requires approximately $N^2(M - N + 1)^2$ additions and $N^2(M - N + 1)^2$ multiplications, whereas the FFT algorithm needs about $12M^2\log_2 M$ real multiplications and $18M^2\log_2 M$ real additions/subtractions [32, 33].
At the beginning we take $M = 101$, $N = 7$ and compute the NCC coefficient; when $M$ is much larger than $N$, the calculation in the spatial domain is efficient. Suppose that the peak value $\gamma_{\max}$ is taken at the coordinate $(k, m)$, $k, m \in \mathbb{Z}$, in the sensed window. Hence we reduce the search space to a smaller one with dimension $47 \times 47$ centered on $T_s(k, m)$. Next, the subpixel registration is realized by the phase correlation algorithm with larger $M$ and $N$ to suppress the system errors owing to the deficiency of detailed textures on the photo; here we take $M = 47$, $N = 23$. Let the subpixel offset between the two registering image elements be denoted as $\delta_x$ and $\delta_y$ in frame $\mathcal{P}$.
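A minimal numpy sketch of this coarse stage: compute (31) in the spatial domain for every placement of the template and locate the integer-pixel peak around which the 47 × 47 refinement window would be cut. The synthetic data and the name `ncc_map` are ours; the sizes follow the paper's $M = 101$, $N = 7$.

```python
import numpy as np

def ncc_map(search, template):
    """Normalized cross-correlation (31) of `template` over every placement
    inside `search` (square 2D float arrays, M >= N), spatial-domain version."""
    M, N = search.shape[0], template.shape[0]
    t = template - template.mean()
    t_norm = np.sqrt((t * t).sum())
    out = np.empty((M - N + 1, M - N + 1))
    for i in range(M - N + 1):
        for j in range(M - N + 1):
            win = search[i:i + N, j:j + N]
            g = win - win.mean()          # local mean removed, as in (31)
            denom = np.sqrt((g * g).sum()) * t_norm
            out[i, j] = (g * t).sum() / denom if denom > 0 else 0.0
    return out

# Coarse search: the peak of the NCC map gives the integer-pixel location.
rng = np.random.default_rng(0)
search = rng.standard_normal((101, 101))
template = search[40:47, 60:67].copy()    # template cut at true offset (40, 60)
gamma = ncc_map(search, template)
peak = np.unravel_index(np.argmax(gamma), gamma.shape)
```

Because the template is an exact crop, the map attains its maximum value 1 at the true placement.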
The phase correlation algorithm in the frequency domain becomes more efficient as $N$ approaches $M$ and both have larger scales [28]. Moreover, the Fourier coefficients are normalized to unit magnitude prior to computing the correlation, so that the correlation is based only on phase information and is insensitive to changes in image intensity [27, 29].
Let $\mathbf{G}(u, v)$ be the 2D Discrete Fourier Transform (DFT) of the sensed window; then we have
\[
\mathbf{G}(u, v) = \sum_{x=-(N-1)/2}^{(N-1)/2} \sum_{y=-(N-1)/2}^{(N-1)/2} g(x, y)\, W_M^{ux} W_M^{vy},
\]
\[
\mathbf{H}(u, v) = \sum_{x=-(N-1)/2}^{(N-1)/2} \sum_{y=-(N-1)/2}^{(N-1)/2} h(x, y)\, W_N^{ux} W_N^{vy}. \tag{32}
\]
Here
\[
W_N = \exp\left( \frac{-j2\pi}{N} \right). \tag{33}
\]
The cross-phase spectrum is given by
\[
\mathbf{R}(u, v) = \frac{\mathbf{G}(u, v)\mathbf{H}^*(u, v)}{\left| \mathbf{G}(u, v)\mathbf{H}^*(u, v) \right|} = \exp\left( j\phi(u, v) \right), \tag{34}
\]
where $\mathbf{H}^*$ is the complex conjugate of $\mathbf{H}$. By the inverse Discrete Fourier Transform (IDFT) we have
\[
\gamma(n_1, n_2) = \frac{1}{N^2} \sum_{u=-(N-1)/2}^{(N-1)/2} \sum_{v=-(N-1)/2}^{(N-1)/2} \mathbf{R}(u, v)\, W_N^{-un_1} W_N^{-vn_2}. \tag{35}
\]
Figure 8: Dense image registration for lapped image strips: CCD1 versus CCD2 (Gap 1, the left two) and CCD3 versus CCD4 (Gap 3, the right two).
Suppose that the new peak $\gamma_{\max}$ appears at $(k', m')$, $k', m' \in \mathbb{Z}$. Referring to [27], we have the following relation:
\[
\gamma_{\max}(k', m') \approx \frac{\lambda}{N^2} \cdot \frac{\sin\left[ \pi(k' + \delta_x) \right] \sin\left[ \pi(m' + \delta_y) \right]}
{\sin\left[ (\pi/N)(k' + \delta_x) \right] \sin\left[ (\pi/N)(m' + \delta_y) \right]}. \tag{36}
\]
The right side presents the spatial distribution of the normalized cross-correlation coefficients; therefore $(\delta_x, \delta_y)$ can be measured based on it. In practice, the constant $\lambda \le 1$; it tends to decrease when small noise exists and equals unity in ideal cases.
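Steps (32)-(35) amount to standard phase correlation, which can be sketched with numpy's FFT; the integer peak located below would then be refined to $(\delta_x, \delta_y)$ by fitting (36) around it (omitted here). `phase_correlate` is an illustrative name, and a circularly shifted random window stands in for a real sensed/template pair.

```python
import numpy as np

def phase_correlate(g, h):
    """Phase correlation of two same-size windows, (32)-(35): normalize the
    cross spectrum to unit magnitude, inverse-transform, and locate the peak.
    Returns the integer shift of h relative to g."""
    G = np.fft.fft2(g)
    H = np.fft.fft2(h)
    R = G * np.conj(H)
    R /= np.maximum(np.abs(R), 1e-12)   # keep phase only, as in (34)
    corr = np.fft.ifft2(R).real         # spatial correlation surface, (35)
    k, m = np.unravel_index(np.argmax(corr), corr.shape)
    N = g.shape[0]
    # map wrapped indices to signed shifts
    return (k if k <= N // 2 else k - N, m if m <= N // 2 else m - N)

rng = np.random.default_rng(1)
base = rng.standard_normal((47, 47))
shifted = np.roll(base, (3, -5), axis=(0, 1))  # known circular shift
shift = phase_correlate(shifted, base)
```

Because only the phase is kept, the correlation surface is a sharp impulse at the shift, which is exactly why the distribution in (36) is so compact.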
Step 5. Dense registration is executed for the lapped image strips.

Repeating Step 1 ~ Step 4, we register the along-track sample images selected from the referenced images to the sensed image. The maximal sample rate can reach up to line-by-line. The continuous procedure is shown in Figure 8, in which the image pairs are marked.

The curves of the relative offsets in $\mathcal{P}$ are shown in Figures 9 and 10.
Let $\mathrm{col}_r$, $\mathrm{row}_r$ be the column and row indexes of image elements on the referenced image and let $\mathrm{col}_s$, $\mathrm{row}_s$ be the indexes of the same elements on the sensed image. The total number of columns of each detector is $Q = 4096$ pix, and the vertical distance between the two detector arrays is $D = 18.4975$ mm. According to the results of registration we get the offsets
Figure 9: The offsets of lapped images captured by CCD1 and CCD2 (cross-track and along-track curves for samples S11 and S22).
Figure 10: The offsets of lapped images captured by CCD3 and CCD4 (cross-track and along-track curves for samples S31 and S32).
of the images at the $n$th gap, $\delta_{nx}$ (cross track) and $\delta_{ny}$ (along track) in frame $\mathcal{P}$, and $\Delta x'_n$, $\Delta y'_n$ (mm) in frame $\mathcal{F}$:
\[
\delta_{nx} = \mathrm{col}_r + \mathrm{col}_s - Q - \eta_n, \qquad \Delta x'_n = \Delta(x'_2)_n = \delta_{nx} \cdot w,
\]
\[
\delta_{ny} = \mathrm{row}_s - \mathrm{row}_r - \frac{D}{w}, \qquad \Delta y'_n = \Delta(x'_1)_n = \delta_{ny} \cdot w + D. \tag{37}
\]
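Equation (37) is a direct bookkeeping step; a sketch with the constants given above ($w = 8.75\,\mu$m, $Q = 4096$, $D = 18.4975$ mm, $\eta = 96$), with an illustrative function name:

```python
# Constants taken from the paper's experiment
W_MM = 8.75e-3   # pixel width w in mm
Q = 4096         # columns per detector
D_MM = 18.4975   # along-track distance D between the detector arrays, mm
ETA = 96         # physically lapped pixels at each gap

def gap_offsets(col_r, row_r, col_s, row_s, eta_n=ETA):
    """Convert registration indexes into offsets (37):
    (delta_nx, delta_ny) in pixels and (dx, dy) in mm in frame F."""
    delta_nx = col_r + col_s - Q - eta_n
    delta_ny = row_s - row_r - D_MM / W_MM
    return delta_nx, delta_ny, delta_nx * W_MM, delta_ny * W_MM + D_MM
```

Feeding in index pairs that reproduce the S11 registration result ($\delta_{nx} = -25.15$, $\delta_{ny} = -5.39$) returns the Table 1 values $\Delta x' = -0.2200625$ mm and $\Delta y' \approx 18.4503$ mm.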
Four pixels, S11, S12, S31, and S32, are examined; their data are listed in Table 1.

S11 and S31 are images of the same object, which was captured in order by CCD1 and CCD2 (Gap 1); S12 and S32 were captured, respectively, by CCD3 and CCD4 (Gap 3). Referring to the auxiliary data, S11 and S31 were captured at the same time, while S12 and S32 were captured at different times, which means that the along-track speeds of the two moving images were quite different. Moreover, the cross-track image offsets in Gap 1 and Gap 3 vary considerably, which shows that the optical flows were also distorted unevenly and deflected away from the along-track direction. On the other
Table 1: The offsets between overlapped images.

Sample | Row no. (pixel) | $\delta_{nx}$ (pixel) | $\Delta x'_n$ (mm) | $\delta_{ny}$ (pixel) | $\Delta y'_n$ (mm)
S11 | 258 | -25.15 | -0.2200625 | -5.39 | 18.4503
S12 | 423 | -23.78 | -0.2080750 | -7.36 | 18.4331
S31 | 266 | -12.85 | -0.1124375 | -7.66 | 18.4304
S32 | 436 | -12.97 | -0.1134875 | -6.87 | 18.4374
hand, it has been discovered in Figures 9 and 10 that the fluctuation of the image offsets taking place in Gap 1 is greater in magnitude than that in Gap 3. All these facts indicate that the distorted optical flows can be detected from a large number of image offsets. We will see later that the nonlinear distribution of the data strengthens the well-posedness of the optical flow inversion algorithm.
4. Remote Sensor Attitude Motion Measurement
In this section the attitude velocity of the remote sensor is resolved by using the optical flow inversion method. The results of the dense registration are applied to produce the conditions of fixed solution for the optical flow equations.
4.1. The Principle of Optical Flow Inversion. For clarity, in frame $\mathcal{C}$ the two coordinate components of the image displacement of the $k$th sample element belonging to the $n$th lapped strip pair are written as $\Delta x'_{nk}$, $\Delta y'_{nk}$. From (13) and (25) it is easy to show that the contributions to the optical flow owing to the orbital motion and the earth's inertial movement vary only slightly in the short term, such that the corresponding displacements can be regarded as piecewise constants $s_x$, $s_y$.
Let $\tau_{ij}$, $t_{ij}$ be, in order, the two sequential imaging times of the $j$th image sample on the overlapped detectors at the $i$th gap. They are usually recorded in the auxiliary data of the remote sensor. Hence, for every image element, the number of discrete states in the optical flow tracing will be
\[
N_{ij} = \left[ \frac{t_{ij} - \tau_{ij}}{\Delta t} \right] \in \mathbb{Z}^+ \quad (i = 1 \cdots n,\ j = 1 \cdots m), \tag{38}
\]
where $n$ is the number of CCD gaps, $m$ is the number of sample groups, and $\Delta t$ is the time step. We set samples with the same $j$ index into the same group, in which the samples are captured by the prior detectors simultaneously.
We expand (11), substitute it into (14) and (13), and then arrange the scalar optical flow inversion equations in terms of the three axial angular velocity components $\omega_1$, $\omega_2$, and $\omega_3$ (the variables in the inverse problem), yielding the linear optical flow equations.
Figure 11: Coefficient determination according to the current location of the image.
For the $l$th group of samples,
\[
\sum_{i=l}^{N_{1l}} c^i_{1l1}\omega^i_1 + c^i_{1l2}\omega^i_2 + c^i_{1l3}\omega^i_3 = \Delta x'_{1l} - s_{x1},
\]
\[
\sum_{i=l}^{N_{1l}} d^i_{1l1}\omega^i_1 + d^i_{1l2}\omega^i_2 + d^i_{1l3}\omega^i_3 = \Delta y'_{1l} - s_{y1},
\]
\[
\vdots
\]
\[
\sum_{i=l}^{N_{nl}} c^i_{nl1}\omega^i_1 + c^i_{nl2}\omega^i_2 + c^i_{nl3}\omega^i_3 = \Delta x'_{nl} - s_{xn},
\]
\[
\sum_{i=l}^{N_{nl}} d^i_{nl1}\omega^i_1 + d^i_{nl2}\omega^i_2 + d^i_{nl3}\omega^i_3 = \Delta y'_{nl} - s_{yn}. \tag{39}
\]
Suppose that the sampling process does not stop until $m$ groups have been founded. The coefficients are as follows:
\[
c^i_{\mu\nu\kappa} = \Xi_\kappa\left( \mu, \left\lceil \frac{i - \nu + 1}{N_{\mu\nu}/\mathcal{N}} \right\rceil \right), \qquad
d^i_{\mu\nu\kappa} = \Lambda_\kappa\left( \mu, \left\lceil \frac{i - \nu + 1}{N_{\mu\nu}/\mathcal{N}} \right\rceil \right) \quad (\kappa = 1, 2, 3). \tag{40}
\]
Here
\[
\Xi_k = \begin{pmatrix}
\xi_{11k} & \xi_{12k} & \cdots & \xi_{1\mathcal{N}k} \\
\xi_{21k} & \xi_{22k} & \cdots & \xi_{2\mathcal{N}k} \\
\cdots & & & \cdots \\
\xi_{n1k} & \xi_{n2k} & \cdots & \xi_{n\mathcal{N}k}
\end{pmatrix}, \qquad
\Lambda_k = \begin{pmatrix}
\lambda_{11k} & \lambda_{12k} & \cdots & \lambda_{1\mathcal{N}k} \\
\lambda_{21k} & \lambda_{22k} & \cdots & \lambda_{2\mathcal{N}k} \\
\cdots & & & \cdots \\
\lambda_{n1k} & \lambda_{n2k} & \cdots & \lambda_{n\mathcal{N}k}
\end{pmatrix}. \tag{41}
\]
As for the algorithm, to reduce the complexity all the possible values of the coefficients are stored in the matrixes $\Xi_k$ and $\Lambda_k$. The accuracy is guaranteed because the coefficients for the images moving into the same piece of region are almost equal to an identical constant within a short period, which is explained in Figure 11.

It has been mentioned that the optical flow is not sensitive to the satellite's orbit motion and the earth rotation in the short term; namely, the possible values are assigned by the following functions:
\[
\xi_{ijk} = \xi_k(a, e, i_0, \Omega, \omega, x'_q, y'_q, \Delta t), \qquad
\lambda_{ijk} = \lambda_k(a, e, i_0, \Omega, \omega, x'_q, y'_q, \Delta t),
\]
\[
i = 1 \sim n, \quad j = 1 \sim \mathcal{N}, \quad q = 1 \sim \mathcal{N}. \tag{42}
\]
Here $\mathcal{N}$ is the number of constant-valued segments in the region encompassing all the possible optical flow trajectories. The orbital elements and the integration step size $\Delta t$ are common to all the functions. Furthermore, when long-term measurements are executed, $\Xi_k$ and $\Lambda_k$ only need to be renewed according to the current parameters.

The coefficient matrix of the optical flow equations for the $j$th $(1 \le j \le m)$ group can be written as
\[
\mathbf{C}_j = \begin{pmatrix}
c^1_{1j1} & c^1_{1j2} & c^1_{1j3} & \cdots & c^{N_{1j}}_{1j1} & c^{N_{1j}}_{1j2} & c^{N_{1j}}_{1j3} & \cdots & 0 & 0 \\
d^1_{1j1} & d^1_{1j2} & d^1_{1j3} & \cdots & d^{N_{1j}}_{1j1} & d^{N_{1j}}_{1j2} & d^{N_{1j}}_{1j3} & \cdots & 0 & 0 \\
\cdots & \cdots & \cdots & \cdots & \cdots & \cdots & \cdots & \cdots & \cdots & \cdots \\
c^1_{qj1} & c^1_{qj2} & c^1_{qj3} & \cdots & \cdots & \cdots & \cdots & c^{N_{qj}}_{qj1} & c^{N_{qj}}_{qj2} & c^{N_{qj}}_{qj3} \\
d^1_{qj1} & d^1_{qj2} & d^1_{qj3} & \cdots & \cdots & \cdots & \cdots & d^{N_{qj}}_{qj1} & d^{N_{qj}}_{qj2} & d^{N_{qj}}_{qj3} \\
\cdots & \cdots & \cdots & \cdots & \cdots & \cdots & \cdots & \cdots & \cdots & \cdots \\
c^1_{nj1} & c^1_{nj2} & c^1_{nj3} & \cdots & c^{N_{nj}}_{nj1} & c^{N_{nj}}_{nj2} & c^{N_{nj}}_{nj3} & \cdots & 0 & 0 \\
d^1_{nj1} & d^1_{nj2} & d^1_{nj3} & \cdots & d^{N_{nj}}_{nj1} & d^{N_{nj}}_{nj2} & d^{N_{nj}}_{nj3} & \cdots & 0 & 0
\end{pmatrix}_{2n \times 3N_{qj}}, \tag{43}
\]
where $N_{qj} = \max\{N_{1j}, \ldots, N_{nj}\}$.

Consequently, as we organize the equations for all the groups, the global coefficient matrix is given in the following form:
\[
\mathbf{C} = \begin{pmatrix}
[\mathbf{C}_1]_{2n \times 3N_{q1}} & 0 & \cdots & 0 \\
0 & [\mathbf{C}_2]_{2n \times 3N_{q2}} & \cdots & 0 \\
& & \ddots & \\
0 & \cdots & 0 & [\mathbf{C}_m]_{2n \times 3N_{qm}}
\end{pmatrix}_{2nm \times 3N_{\max}}. \tag{44}
\]
$\mathbf{C}$ is a quasidiagonal partitioned matrix; every subblock has $2n$ rows, and the maximal number of columns of $\mathbf{C}$ is $3N_{\max}$, $N_{\max} = \max\{N_{q1}, \ldots, N_{qm}\}$.
The unknown variables are as follows:
\[
[\boldsymbol{\Omega}]_{3N_{\max} \times 1} = \left[ \omega^1_1\ \omega^1_2\ \omega^1_3\ \cdots\ \omega^{N_{\max}}_1\ \omega^{N_{\max}}_2\ \omega^{N_{\max}}_3 \right]^T. \tag{45}
\]
The constants are as follows:
\[
\Delta\mathbf{u}_{2mn \times 1} = \left[ \Delta x'_{11}\ \Delta y'_{11}\ \cdots\ \Delta x'_{n1}\ \Delta y'_{n1}\ \cdots\ \Delta x'_{1m}\ \Delta y'_{1m}\ \cdots\ \Delta x'_{nm}\ \Delta y'_{nm} \right]^T,
\]
\[
\mathbf{s}_{2mn \times 1} = \left[ s_{x1}\ s_{y1}\ \cdots\ s_{xn}\ s_{yn}\ \cdots\ s_{x1}\ s_{y1}\ \cdots\ s_{xn}\ s_{yn} \right]^T. \tag{46}
\]
Figure 12: The flow chart of the attitude motion measurement (template selection, optical flow prediction, template reconfiguration, NCC registration, offset measurement, and inverse problem solving for the angular velocity $\vec{\omega}$).
$\Delta\mathbf{u}$ has been measured by the image dense registration; $\mathbf{s}$ can be determined from the auxiliary data of the sensors. The global equations are expressed by
\[
\mathbf{C}_{2mn \times 3N_{\max}} \cdot [\boldsymbol{\Omega}]_{3N_{\max} \times 1} = \Delta\mathbf{u}_{2mn \times 1} - \mathbf{s}_{2mn \times 1}. \tag{47}
\]
As for this problem, it is easy to verify that the conditions (1) $2nm > 3N_{\max}$ and (2) $\mathrm{rank}(\mathbf{C}) = 3N_{\max}$ are easily met in practical work. To solve (47), well-posedness is the critical issue for the inverse problem. The strong nonlinearity and anisotropy of the optical flow greatly reduce the correlation between the coefficients in $\mathbf{C}$ and thereby increase the well-posedness of the solution. The least-squares solution of (47) can be obtained:
\[
[\boldsymbol{\Omega}] = \left( \mathbf{C}^T\mathbf{C} \right)^{-1} \mathbf{C}^T \left( \Delta\mathbf{u} - \mathbf{s} \right). \tag{48}
\]
The well-posedness can be examined by applying the Singular Value Decomposition (SVD) to $\mathbf{C}$. Consider the nonnegative definite matrix $\mathbf{C}^T\mathbf{C}$, whose eigenvalues are given in order $\lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_{3N_{\max}}$:
\[
\mathbf{C} = \mathbf{U}[\sigma]\mathbf{V}^T, \tag{49}
\]
where $\mathbf{U}_{2mn \times 2mn}$ and $\mathbf{V}_{3N_{\max} \times 3N_{\max}}$ are unit orthogonal matrices and the singular values are $\sigma_i = \sqrt{\lambda_i}$. The well-posedness of the solution is acceptable if the condition number $\kappa(\mathbf{C}) = \sigma_1/\sigma_{3N_{\max}} \le tol$.
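Equations (47)-(49) can be sketched with numpy: solve the overdetermined system in the least-squares sense and reject ill-conditioned cases via the singular values. The dimensions, the tolerance, and the made-up angular velocity values below are illustrative, not the experiment's.

```python
import numpy as np

def solve_angular_velocity(C, du, s, tol=1e6):
    """Least-squares solution (48) of the global equations (47), preceded by
    the SVD-based well-posedness check of (49): the solution is rejected if
    the condition number sigma_1 / sigma_min exceeds `tol`."""
    sigma = np.linalg.svd(C, compute_uv=False)
    cond = sigma[0] / sigma[-1]
    if cond > tol:
        raise ValueError(f"ill-posed system, condition number {cond:.3g}")
    omega, *_ = np.linalg.lstsq(C, du - s, rcond=None)
    return omega

# Illustrative dimensions only: 2nm = 8 equations, 3 * N_max = 3 unknowns.
rng = np.random.default_rng(2)
C = rng.standard_normal((8, 3))
omega_true = np.array([1e-3, -2e-3, 1.7e-2])   # hypothetical values, deg/s
du = C @ omega_true
s = np.zeros(8)
omega = solve_angular_velocity(C, du, s)
```

With noise-free synthetic data the least-squares estimate recovers the generating angular velocities exactly, which is the consistency one would check before applying the solver to measured offsets.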
Associating the process of inverse problem solving in Section 4 with the process of preliminary information acquisition in Section 3, the whole algorithm for the remote sensor's attitude measurement is illustrated in the flow chart in Figure 12.
4.2. Experimental Results and Validation. In the experiment, 72940 samples on 7 image strip pairs were involved. To keep the values in $\Xi$ and $\Lambda$ nearly invariant, we redistributed these samples into 20 subspaces and solved for the three axial components of the angular velocity. According to Shannon's sampling theorem, the measurable frequency $f_c$ is expected to reach up to half of the line rate of the TDI CCD; for the experiment, $f_c \approx 1.749$ kHz. The $\omega_i \sim t$ curves over 0 s ~ 0.148 s are shown in Figure 13.

In this period, $\omega_{2\max} = 0.001104^\circ/\mathrm{s}$ and $\omega_{1\max} = 0.001194^\circ/\mathrm{s}$.
The signal of $\omega_3(t)$ fluctuates around the mean value $\bar{\omega}_3 = 0.01752^\circ/\mathrm{s}$. It is not hard to infer that high frequency jitters
Figure 13: Solutions for the angular velocities of the remote sensor ($\omega_1$, $\omega_2$, $\omega_3$ in deg/s over the imaging time).
were perturbing the remote sensor; besides, compared to the signals of $\omega_1(t)$ and $\omega_2(t)$, the low frequency components in $\omega_3(t)$ are higher in magnitude. Actually, for this remote sensor the satellite yaw angle needs to be regulated in real time to compensate for the image rotation on the focal plane, such that the detectors can always scan along the direction of the image motion. Based on the auxiliary data, the image motion velocity vector $\mathbf{V}$ of the central pixel in the FOV can be computed, so the optimal yaw motion in principle will be
\[
\psi^*_t = \frac{V_{y'}}{V_{x'}}, \qquad
\omega^*_3(t) = \dot{\psi}^*_t = \frac{\dot{V}_{y'} V_{x'} - V_{y'} \dot{V}_{x'}}{V^2_{x'}}. \tag{50}
\]
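The derivative in (50) is a plain quotient rule; a one-function sketch with made-up velocity values (the real $V_{x'}$, $V_{y'}$ and their rates come from the auxiliary data):

```python
def optimal_yaw_rate(vx, vy, vx_dot, vy_dot):
    """Optimal yaw rate of (50): omega3* = d/dt (V_y' / V_x')
    = (V_y'_dot * V_x' - V_y' * V_x'_dot) / V_x'^2."""
    return (vy_dot * vx - vy * vx_dot) / vx ** 2
```

For instance, $V_{x'} = 2$, $V_{y'} = 1$, $\dot{V}_{x'} = 0.1$, $\dot{V}_{y'} = 0.3$ (arbitrary units) gives $(0.3 \cdot 2 - 1 \cdot 0.1)/4 = 0.125$.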
The mean value of $\omega^*_3(t)$ is $\bar{\omega}^*_3 = 0.01198^\circ/\mathrm{s}$. We attribute $\Delta\omega^*_3 = \bar{\omega}_3 - \bar{\omega}^*_3 = 0.00554^\circ/\mathrm{s}$ to the error of the satellite attitude control.

In order to validate the measurement, the technique of template reconfiguration was implemented again to check the expected phenomenon that, based on the high-accuracy information, the correlations between the new templates and $T_s$ should be further improved. In addition, the distribution of $\gamma$ near $\gamma_{\max}$ is expected to become more compact, which is easy to understand since much more useful information about the remote sensor's motion is introduced into the template reconstructions and increases the similarities between the lapped images.
Unlike the processing in the image dense registration, in the validation phase larger original templates are selected. Let $T_1$ be the referenced image template centered at the examined element, $T_2$ the new template reconfigured by the rough prediction of the optical flow, $\hat{T}_2$ the new template reconfigured based on the precision attitude motion measurement, and $T_s$ the template on the sensed image centered at the registration pixel. For all templates $M = N = 101$. The distributions of the normalized cross-correlation coefficients corresponding to the referenced template centered on the sample selected in the No. 1000 row of the No. 7 CCD, with the sensed image belonging to the No. 8 CCD, are illustrated in Figure 14.
(a) shows the situation for $T_1$ and $T_s$, (b) for $T_2$ and $T_s$, and (c) for $\hat{T}_2$ and $T_s$. The compactness of the data is characterized by the peak value $\gamma_{\max}$ and the location variances $\sigma^2_x$, $\sigma^2_y$:
\[
\sigma^2_x = \frac{\sum_{i=1}^{M} \sum_{j=1}^{M} \gamma_{ij} \cdot (i - x_{\max})^2}{\sum_{i=1}^{M} \sum_{j=1}^{M} \gamma_{ij}}, \qquad
\sigma^2_y = \frac{\sum_{i=1}^{M} \sum_{j=1}^{M} \gamma_{ij} \cdot (j - y_{\max})^2}{\sum_{i=1}^{M} \sum_{j=1}^{M} \gamma_{ij}}, \tag{51}
\]
where $x_{\max}$ and $y_{\max}$ are, respectively, the column and row numbers of the peak-valued location.
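The compactness statistics of (51) can be sketched directly on an NCC coefficient map; here the first array index plays the role of $x$, and the name `compactness` is ours.

```python
import numpy as np

def compactness(gamma):
    """Peak value and the location variances (51) of an NCC coefficient map:
    gamma-weighted second moments of the distance from the peak location."""
    total = gamma.sum()
    i, j = np.indices(gamma.shape)
    x_max, y_max = np.unravel_index(np.argmax(gamma), gamma.shape)
    var_x = (gamma * (i - x_max) ** 2).sum() / total
    var_y = (gamma * (j - y_max) ** 2).sum() / total
    return gamma.max(), var_x, var_y
```

An ideal impulse-like map gives zero variances; mass spread away from the peak, as in the direct-NCC case (a), inflates $\sigma^2_x$, $\sigma^2_y$.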
In case (a), $\gamma_{\max}(a) = 0.893$, with standard deviations $\sigma_x(a) = 5.653$ and $\sigma_y(a) = 8.192$; in case (b), $\gamma_{\max}(b) = 0.918$, $\sigma_x(b) = 4.839$, and $\sigma_y(b) = 6.686$; in case (c), $\gamma_{\max}(c) = 0.976$, and the variance sharply shrinks to $\sigma_x(c) = 3.27$, $\sigma_y(c) = 4.06$. In Table 2 some other samples, at intervals of 1000 rows, are also examined. The samples can be regarded as independent of each other.
Judging from the results, the performances in case (c) are better than those in case (b) and much better than those in case (a), since the precise attitude motion measurements enhance the precision of the optical flow inversion so as to improve the similarities between the new templates and the sensed images. Note that although in case (b) the variance decreases only slightly, as analyzed in Section 3.2, compared to case (a) the offsets of the centroids from the peaks have been corrected well by the use of the rough optical flow predictions.
4.3. Summary and Discussions. In terms of the preceding sections, we can see that, compared to the ordinary NCC, the precision of the image registration is greatly improved, owing to the technique of template reconfiguration. Applying the auxiliary data from the space-borne sensors to the optical flow prediction, the relative deformations between the lapped image pairs can be computed with considerable accuracy. Afterwards, they are used to estimate the gray values of the corresponding parts on the sensed images and help construct a new template for registration. As we know, the space-borne sensors may give the middle and low frequency components of the imager's attitude motion with excellent precision. Thus, compared to the classical direct template-based registration algorithms, the similarity between the reconfigured template and the sensed images may greatly increase. Furthermore, the minor deformations attributed to high frequency jitters can be detected by subpixel registration between the reconfigured templates and the sensed images. This point of view is the exact basis of high frequency jitter measurement with optical flow inversion.
5. Conclusion

In this paper, the optical flows and the time-varying image deformation in space dynamic imaging are analyzed in detail. The nonlinear and anisotropic image motion velocity and optical flows are utilized to strengthen the well-posedness of the inverse problem of attitude precise measurement by optical
Figure 14: Normalized cross-correlations comparison. ((a) shows the distribution of $\gamma$ obtained by applying the direct NCC algorithm; (b) shows the distribution of $\gamma$ after template reconfiguration with optical flow prediction; (c) shows the distribution of $\gamma$ derived from posterior template reconfiguration with the high-accuracy sensor attitude measurement. It can be noticed that, from left to right, the values of $\gamma$ tend to concentrate around the peak value location.)
Table 2: Correlation coefficients distribution for registration templates.

Row number | $\gamma_{\max}$ (a, b, c) | $\sigma_x$ (a, b, c) | $\sigma_y$ (a, b, c)
No. 1000 | 0.893, 0.918, 0.976 | 5.653, 4.839, 3.27 | 8.192, 6.686, 4.06
No. 2000 | 0.807, 0.885, 0.929 | 8.704, 6.452, 2.13 | 6.380, 7.342, 5.71
No. 3000 | 0.832, 0.940, 0.988 | 4.991, 3.023, 1.55 | 7.704, 4.016, 1.93
No. 4000 | 0.919, 0.935, 0.983 | 5.079, 3.995, 3.61 | 5.873, 5.155, 3.85
No. 5000 | 0.865, 0.922, 0.951 | 5.918, 4.801, 2.37 | 6.151, 2.371, 2.57
No. 6000 | 0.751, 0.801, 0.907 | 12.57, 9.985, 7.89 | 14.66, 8.213, 2.06
No. 7000 | 0.759, 0.846, 0.924 | 11.63, 10.84, 7.14 | 12.71, 8.267, 4.90
No. 8000 | 0.884, 0.900, 0.943 | 8.125, 3.546, 5.42 | 8.247, 6.770, 2.88
flow inversion method. For the purpose of determining the conditions of fixed solutions of the optical flow equations, information-based image registration algorithms are proposed. We apply rough optical flow prediction to improve the efficiency and accuracy of the dense image registration. Based on the results of the registration, the attitude motions of the remote sensor during imaging are measured by using the precise optical flow inversion method. The experiment on a remote sensor showed that the measurements are achieved with very high accuracy as well as with a broad bandwidth. This method can be used extensively in remote sensing missions, such as image strip splicing, geometrical rectification, and nonblind image restoration, to promote the surveying precision and the resolving power.
Conflict of Interests

The authors declare that they have no financial or personal relationships with other people or organizations that can inappropriately influence their work; there is no professional or other personal interest of any nature or kind in any product, service, and/or company that could be construed as influencing the position presented in, or the review of, this paper.
Acknowledgments

This work is supported by the National High Technology Research and Development Program of China (863 Program) (Grants no. 2012AA121503, 2013AA12260, and 2012AA120603) and the National Natural Science Foundation of China (Grant no. 61377012).
References
[1] V. Tchernykh, M. Beck, and K. Janschek, "An embedded optical flow processor for visual navigation using optical correlator technology," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '06), pp. 67–72, Beijing, China, October 2006.
[2] K. Janschek and V. Tchernykh, "Optical correlator for image motion compensation in the focal plane of a satellite camera," in Proceedings of the 15th IFAC Symposium on Automatic Control in Aerospace, Bologna, Italy, 2001.
[3] W. Priedhorsky and J. J. Bloch, "Optical detection of rapidly moving objects in space," Applied Optics, vol. 44, no. 3, pp. 423–433, 2005.
[4] T. Brox and J. Malik, "Large displacement optical flow: descriptor matching in variational motion estimation," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 33, no. 3, pp. 500–513, 2011.
16 Mathematical Problems in Engineering
[5] B. Feng, P. P. Bruyant, P. H. Pretorius, et al., "Estimation of the rigid-body motion from three-dimensional images using a generalized center-of-mass points approach," IEEE Transactions on Nuclear Science, vol. 53, no. 5, pp. 2712–2718, 2006.
[6] J. Wang, P. Yu, C. Yan, J. Ren, and B. He, "Space optical remote sensor image motion velocity vector computational modeling, error budget and synthesis," Chinese Optics Letters, vol. 3, no. 7, pp. 414–417, 2005.
[7] A. S. McEwen, M. E. Banks, N. Baugh, et al., "The high resolution imaging science experiment (HiRISE) during MRO's primary science phase (PSP)," Icarus, vol. 205, no. 1, pp. 2–37, 2010.
[8] F. Ayoub, S. Leprince, R. Binet, K. W. Lewis, O. Aharonson, and J.-P. Avouac, "Influence of camera distortions on satellite image registration and change detection applications," in Proceedings of the IEEE International Geoscience and Remote Sensing Symposium (IGARSS '08), pp. II-1072–II-1075, Boston, Mass, USA, 2008.
[9] S. Leprince, S. Barbot, F. Ayoub, and J.-P. Avouac, "Automatic and precise orthorectification, coregistration, and subpixel correlation of satellite images, application to ground deformation measurements," IEEE Transactions on Geoscience and Remote Sensing, vol. 45, no. 6, pp. 1529–1558, 2007.
[10] S. Leprince, P. Muse, and J.-P. Avouac, "In-flight CCD distortion calibration for pushbroom satellites based on subpixel correlation," IEEE Transactions on Geoscience and Remote Sensing, vol. 46, no. 9, pp. 2675–2683, 2008.
[11] Y. Yitzhaky, R. Milberg, S. Yohaev, and N. S. Kopeika, "Comparison of direct blind deconvolution methods for motion-blurred images," Applied Optics, vol. 38, no. 20, pp. 4325–4332, 1999.
[12] R. C. Hardie, K. J. Barnard, and R. Ordonez, "Fast super-resolution with affine motion using an adaptive Wiener filter and its application to airborne imaging," Optics Express, vol. 19, no. 27, pp. 26208–26231, 2011.
[13] E. M. Blixt, J. Semeter, and N. Ivchenko, "Optical flow analysis of the aurora borealis," IEEE Geoscience and Remote Sensing Letters, vol. 3, no. 1, pp. 159–163, 2006.
[14] M. G. Mozerov, "Constrained optical flow estimation as a matching problem," IEEE Transactions on Image Processing, vol. 22, no. 5, pp. 2044–2055, 2013.
[15] H. Sakaino, "A semitransparency-based optical-flow method with a point trajectory model for particle-like video," IEEE Transactions on Image Processing, vol. 21, no. 2, pp. 441–450, 2012.
[16] D. Korsch, "Closed form solution for three-mirror telescopes, corrected for spherical aberration, coma, astigmatism, and field curvature," Applied Optics, vol. 11, no. 12, pp. 2986–2987, 1972.
[17] G. Naletto, V. da Deppo, M. G. Pelizzo, R. Ragazzoni, and E. Marchetti, "Optical design of the wide angle camera for the Rosetta mission," Applied Optics, vol. 41, no. 7, pp. 1446–1453, 2002.
[18] M. Born, E. Wolf, A. B. Bhatia, and P. C. Clemmow, Principles of Optics: Electromagnetic Theory of Propagation, Interference and Diffraction of Light, 7th edition, 1999.
[19] H. Schaub and J. L. Junkins, Analytical Mechanics of Space Systems, AIAA Education Series, 2002.
[20] C. Wang, F. Xing, J. H. Wang, and Z. You, "Optical flows method for lightweight agile remote sensor design and instrumentation," in International Symposium on Photoelectronic Detection and Imaging, vol. 8908 of Proceedings of the SPIE, 2013.
[21] T. Sun, F. Xing, and Z. You, "Optical system error analysis and calibration method of high-accuracy star trackers," Sensors, vol. 13, no. 4, pp. 4598–4623, 2013.
[22] T. Sun, F. Xing, Z. You, and M. Wei, "Motion-blurred star acquisition method of the star tracker under high dynamic conditions," Optics Express, vol. 21, no. 17, pp. 20096–20110, 2013.
[23] L. Younes, "Combining geodesic interpolating splines and affine transformations," IEEE Transactions on Image Processing, vol. 15, no. 5, pp. 1111–1119, 2006.
[24] B. Zitova and J. Flusser, "Image registration methods: a survey," Image and Vision Computing, vol. 21, no. 11, pp. 977–1000, 2003.
[25] Z. L. Song, S. Li, and T. F. George, "Remote sensing image registration approach based on a retrofitted SIFT algorithm and Lissajous-curve trajectories," Optics Express, vol. 18, no. 2, pp. 513–522, 2010.
[26] V. Arevalo and J. Gonzalez, "Improving piecewise linear registration of high-resolution satellite images through mesh optimization," IEEE Transactions on Geoscience and Remote Sensing, vol. 46, no. 11, pp. 3792–3803, 2008.
[27] Z. Levi and C. Gotsman, "D-snake: image registration by as-similar-as-possible template deformation," IEEE Transactions on Visualization and Computer Graphics, vol. 19, no. 2, pp. 331–343, 2013.
[28] R. J. Althof, M. G. J. Wind, and J. T. Dobbins III, "A rapid and automatic image registration algorithm with subpixel accuracy," IEEE Transactions on Medical Imaging, vol. 16, no. 3, pp. 308–316, 1997.
[29] W. Tong, "Subpixel image registration with reduced bias," Optics Letters, vol. 36, no. 5, pp. 763–765, 2011.
[30] Y. Bentoutou, N. Taleb, K. Kpalma, and J. Ronsin, "An automatic image registration for applications in remote sensing," IEEE Transactions on Geoscience and Remote Sensing, vol. 43, no. 9, pp. 2127–2137, 2005.
[31] L. S. Ming, L. Yan, and L. Jindong, "Mapping satellite-1 transmission type photogrammetric and remote sensing," Journal of Remote Sensing, vol. 16, supplement, pp. 10–16, 2012 (Chinese).
[32] J. P. Lewis, "Fast template matching," Vision Interface, vol. 95, pp. 120–123, 1995.
[33] H. Foroosh, J. B. Zerubia, and M. Berthod, "Extension of phase correlation to subpixel registration," IEEE Transactions on Image Processing, vol. 11, no. 3, pp. 188–200, 2002.
Figure 2: Orbital motion of remote sensor.
In frame O:

$${}^{O}\vec{r}_{s}=\begin{pmatrix} a(\cos E-e)\\ b\sin E\\ 0\end{pmatrix},\qquad {}^{O}\vec{v}_{s}=\begin{pmatrix} -a\sin E\\ b\cos E\\ 0\end{pmatrix}\frac{n}{1-e\cos E}. \tag{3}$$

The coordinate transform matrix between O and I is

$$\mathbf{T}_{OI}=\begin{pmatrix} C_{\omega}C_{\Omega}-S_{\omega}C_{i}S_{\Omega} & -S_{\omega}C_{\Omega}-C_{\omega}C_{i}S_{\Omega} & S_{i}S_{\Omega}\\ C_{\omega}S_{\Omega}+S_{\omega}C_{i}C_{\Omega} & -S_{\omega}S_{\Omega}+C_{\omega}C_{i}C_{\Omega} & -S_{i}C_{\Omega}\\ S_{\omega}S_{i} & C_{\omega}S_{i} & C_{i}\end{pmatrix}. \tag{4}$$
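As a quick numerical cross-check of (4) as reconstructed here (the standard 3-1-3 rotation sequence over Ω, i, ω), the explicit entries can be compared against the composed elementary rotations. This is an illustrative sketch, not from the original paper; the orbit angles are hypothetical:

```python
import numpy as np

def Rz(a):
    # rotation of a vector by +a about axis 3
    return np.array([[np.cos(a), -np.sin(a), 0],
                     [np.sin(a),  np.cos(a), 0],
                     [0, 0, 1]])

def Rx(a):
    # rotation of a vector by +a about axis 1
    return np.array([[1, 0, 0],
                     [0, np.cos(a), -np.sin(a)],
                     [0, np.sin(a),  np.cos(a)]])

def t_oi(Omega, inc, omega):
    """T_OI of (4) as the 3-1-3 composition Rz(Omega) Rx(inc) Rz(omega)."""
    return Rz(Omega) @ Rx(inc) @ Rz(omega)

def t_oi_explicit(Omega, inc, omega):
    """Entry-by-entry form of (4), using C_alpha = cos(alpha), S_alpha = sin(alpha)."""
    C, S = np.cos, np.sin
    CW, SW, Ci, Si, CO, SO = C(omega), S(omega), C(inc), S(inc), C(Omega), S(Omega)
    return np.array([
        [CW*CO - SW*Ci*SO, -SW*CO - CW*Ci*SO,  Si*SO],
        [CW*SO + SW*Ci*CO, -SW*SO + CW*Ci*CO, -Si*CO],
        [SW*Si,             CW*Si,             Ci]])

args = (0.7, 1.2, -0.4)   # hypothetical (Omega, i, omega) in radians
T = t_oi(*args)
assert np.allclose(T, t_oi_explicit(*args))
```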
For simplicity we write $C_{\alpha}=\cos\alpha$, $S_{\alpha}=\sin\alpha$.

In engineering, the coordinate transfer matrix $\mathbf{T}_{OI}$ can also be derived from real-time GPS measurements. The base vectors of frame O expressed in I are

$$\hat{u}_{3}=-\frac{{}^{I}\vec{r}_{s}}{|\vec{r}_{s}|},\qquad \hat{u}_{2}=\frac{{}^{I}\vec{v}_{s}\times{}^{I}\vec{r}_{s}}{|\vec{v}_{s}\times\vec{r}_{s}|},\qquad \hat{u}_{1}=\hat{u}_{2}\times\hat{u}_{3},$$

so that $\mathbf{T}_{OI}=(\hat{u}_{1},\hat{u}_{2},\hat{u}_{3})^{-1}$ and

$${}^{I}\vec{r}_{s}=\mathbf{T}_{OI}\cdot{}^{O}\vec{r}_{s},\qquad {}^{I}\vec{v}_{s}=\mathbf{T}_{OI}\cdot{}^{O}\vec{v}_{s}. \tag{5}$$
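The GPS route of (5) can be sketched as follows: the three base vectors are assembled from the measured inertial position and velocity, and the matrix is inverted. This is a minimal illustration with numpy; the near-circular LEO state is hypothetical:

```python
import numpy as np

def t_oi_from_gps(r_s, v_s):
    """Build T_OI from inertial position r_s and velocity v_s per (5):
    u3 is the nadir direction, u2 the normalized v x r, u1 = u2 x u3."""
    u3 = -r_s / np.linalg.norm(r_s)
    w = np.cross(v_s, r_s)
    u2 = w / np.linalg.norm(w)
    u1 = np.cross(u2, u3)
    # T_OI = (u1, u2, u3)^(-1); the columns are orthonormal, so the
    # inverse equals the transpose
    U = np.column_stack((u1, u2, u3))
    return np.linalg.inv(U)

# Hypothetical satellite state in km and km/s
r_s = np.array([6871.0, 0.0, 0.0])
v_s = np.array([0.0, 7.6, 0.0])
T = t_oi_from_gps(r_s, v_s)
```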
Associating the equation of the boresight with the ellipsoid surface of the earth in I yields

$$\frac{X^{2}+Z^{2}}{A_{e}^{2}}+\frac{Y^{2}}{B_{e}^{2}}=1,\qquad \frac{X-X_{s}}{s_{1}}=\frac{Y-Y_{s}}{s_{2}}=\frac{Z-Z_{s}}{s_{3}}. \tag{6}$$
Here $A_{e}=6378.137$ km and $B_{e}=6356.752$ km are the lengths of the earth's semimajor and semiminor axes, and $s_{i}$ ($i=1,2,3$) are the components of the unit boresight vector in I. We write the solution of (6) as ${}^{I}\vec{\rho}=(X,Y,Z)^{T}$. Hence ${}^{I}\vec{r}={}^{I}\vec{\rho}-{}^{I}\vec{r}_{s}$ and ${}^{C}\vec{r}=\mathbf{M}\cdot\mathbf{A}\cdot\mathbf{T}_{OI}^{-1}\cdot{}^{I}\vec{r}$, where $\mathbf{M}$ is the coordinate transformation matrix from frame B to frame C (a constant matrix for fixed installation) and $\mathbf{A}$ is the attitude matrix of the satellite. According to the 1-2-3 rotating order, we have
$$\mathbf{A}=\mathbf{R}_{\psi}\cdot\mathbf{R}_{\theta}\cdot\mathbf{R}_{\varphi}, \tag{7}$$
in which

$$\mathbf{R}_{\psi}=\begin{pmatrix}\cos\psi_{t} & \sin\psi_{t} & 0\\ -\sin\psi_{t} & \cos\psi_{t} & 0\\ 0 & 0 & 1\end{pmatrix},\qquad \mathbf{R}_{\theta}=\begin{pmatrix}\cos\theta_{t} & 0 & -\sin\theta_{t}\\ 0 & 1 & 0\\ \sin\theta_{t} & 0 & \cos\theta_{t}\end{pmatrix},\qquad \mathbf{R}_{\varphi}=\begin{pmatrix}1 & 0 & 0\\ 0 & \cos\varphi_{t} & \sin\varphi_{t}\\ 0 & -\sin\varphi_{t} & \cos\varphi_{t}\end{pmatrix}, \tag{8}$$
where $\varphi_{t}$, $\theta_{t}$, and $\psi_{t}$ are, in order, the real-time roll, pitch, and yaw angles at moment $t$. The velocity of $p$ in C can be written in the following scalar form:

$$\dot{x}_{i}={}^{C}\dot{\vec{r}}\cdot\vec{e}_{i}\quad (i=1,2,3). \tag{9}$$

Thus the velocity of the image point $p'$ will be

$$\dot{x}'_{i}=\dot{\beta}x_{i}+\beta\dot{x}_{i}=(-1)^{m}\frac{f'(\dot{\vec{r}}\cdot\vec{e}_{3})}{(\vec{r}\cdot\vec{e}_{3})^{2}}x_{i}+(-1)^{m-1}\frac{f'}{\vec{r}\cdot\vec{e}_{3}}\dot{x}_{i}\quad (i=1,2). \tag{10}$$
Substituting (2)–(9) into (10), the velocity vector of the image point $\vec{V}'=(\dot{x}'_{1},\dot{x}'_{2})^{T}$ can be expressed as an explicit function of several variables, that is,

$$\vec{V}'=\vec{V}(i_{0},\Omega,\omega,e,M,t_{0};\ \varphi_{t},\theta_{t},\psi_{t},\dot{\varphi}_{t},\dot{\theta}_{t},\dot{\psi}_{t};\ x'_{1},x'_{2}). \tag{11}$$
For conciseness, this analytical expression of $\vec{V}'$ is omitted here.

The orbit elements can be determined from instantaneous GPS data; they can also be calculated with sufficient accuracy by celestial mechanics [19]. On the other hand, the attitude angles $\varphi_{t}$, $\theta_{t}$, and $\psi_{t}$ can be roughly measured by the star trackers and GPS. Meanwhile, their time rates $\dot{\varphi}_{t}$, $\dot{\theta}_{t}$, and $\dot{\psi}_{t}$ satisfy the following relations:
$$\begin{pmatrix}\omega_{1}\\ \omega_{2}\\ \omega_{3}\end{pmatrix}=\mathbf{R}_{\psi}\begin{pmatrix}0\\ 0\\ \dot{\psi}_{t}\end{pmatrix}+\mathbf{R}_{\psi}\mathbf{R}_{\theta}\left[\begin{pmatrix}0\\ \dot{\theta}_{t}\\ 0\end{pmatrix}+\mathbf{R}_{\varphi}\begin{pmatrix}\dot{\varphi}_{t}\\ 0\\ 0\end{pmatrix}\right]. \tag{12}$$
$\omega_{1}$, $\omega_{2}$, and $\omega_{3}$ are the three components of the remote sensor's angular velocity ${}^{C}\vec{\omega}_{s}$ relative to the orbital frame O, resolved in frame C. They can be roughly measured by space-borne gyroscopes or other attitude sensors.
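The mapping from Euler-angle rates to angular velocity components can be sketched numerically. The rotation matrices follow (8); the form of the rate relation is the standard 1-2-3 Euler kinematics as reconstructed in (12), and the test angles and rates are hypothetical:

```python
import numpy as np

def R_psi(p):    # yaw: rotation about axis 3, cf. (8)
    return np.array([[np.cos(p), np.sin(p), 0],
                     [-np.sin(p), np.cos(p), 0],
                     [0, 0, 1]])

def R_theta(t):  # pitch: rotation about axis 2
    return np.array([[np.cos(t), 0, -np.sin(t)],
                     [0, 1, 0],
                     [np.sin(t), 0, np.cos(t)]])

def R_phi(f):    # roll: rotation about axis 1
    return np.array([[1, 0, 0],
                     [0, np.cos(f), np.sin(f)],
                     [0, -np.sin(f), np.cos(f)]])

def omega_from_rates(phi, theta, psi, dphi, dtheta, dpsi):
    """Angular velocity components from attitude angles and rates, cf. (12)."""
    return (R_psi(psi) @ np.array([0.0, 0.0, dpsi])
            + R_psi(psi) @ R_theta(theta)
              @ (np.array([0.0, dtheta, 0.0]) + R_phi(phi) @ np.array([dphi, 0.0, 0.0])))

# At zero attitude angles the Euler rates map to omega directly
w = omega_from_rates(0.0, 0.0, 0.0, 0.01, -0.02, 0.005)
```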
It is easy to verify from (11) that the instantaneous image velocity field on the focal plane appears significantly nonlinear and anisotropic for large-FOV remote sensors, especially
when they are applied to perform large-angle attitude maneuvering, for example, in side-looking by swing or stereoscopic looking by pitching. Under these circumstances, in order to acquire photos with high spatial, temporal, and spectral resolution, image motion velocity control strategies should be executed in real time [20] based on auxiliary data measured by reliable space-borne sensors [21, 22]. In detail, for TDI CCD cameras, the line rates of the detectors must be controlled to synchronize with the local image velocity modules during exposure so as to avoid along-track motion blurring; the attitude of the remote sensor should be regulated in time to maintain the detectors' push-broom direction aiming at the direction of image motion to avoid cross-track motion blurring.
3. Optical Flow Rough Inversion and Dense Image Registration
Optical flow is another important physical model, carrying the whole energy and information of moving images in dynamic imaging. A specific optical flow trajectory is an integral curve which is always tangent to the image velocity field; thus we have

$$x'_{1}(T)=\int_{0}^{T}\dot{x}'_{1}(x'_{1},x'_{2},t)\,dt,\qquad x'_{2}(T)=\int_{0}^{T}\dot{x}'_{2}(x'_{1},x'_{2},t)\,dt. \tag{13}$$
Since (13) are coupled nonlinear integral equations, we convert them to numerical form and solve them iteratively:

$$x'_{i}(0)=x'_{i}(t)\big|_{t=0},$$
$$x'_{j}(n)=x'_{j}(n-1)+\frac{1}{2}\left\{\dot{x}'_{j}\left[x'_{1}(n-1),x'_{2}(n-1),n\right]+\dot{x}'_{j}\left[x'_{1}(n-1),x'_{2}(n-1),n-1\right]\right\}\Delta t\quad (j=1,2;\ n\in\mathbb{Z}^{+}). \tag{14}$$
It is evident that the algorithm has enough precision as long as the time step $\Delta t$ is small enough. It can be inferred from (13) that a strongly nonlinear image velocity field may distort optical flows so much that the geometrical structure of the image has irregular behaviors. Therefore, if we intend to invert the information of optical flow to measure the attitude motion, the general formula of image deformation due to the optical flows should be deduced.
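The iteration (14) is a trapezoid-rule step applied to the coupled system (13). A minimal sketch, where the image velocity field is a hypothetical stand-in (the real field comes from (11)):

```python
def integrate_flow(v1, v2, x0, n_steps, dt):
    """Integrate an optical flow trajectory per (13)-(14):
    x(n) = x(n-1) + 0.5*[v(x(n-1), t_n) + v(x(n-1), t_{n-1})]*dt."""
    x1, x2 = x0
    traj = [(x1, x2)]
    for n in range(1, n_steps + 1):
        t_prev, t_cur = (n - 1) * dt, n * dt
        dx1 = 0.5 * (v1(x1, x2, t_cur) + v1(x1, x2, t_prev)) * dt
        dx2 = 0.5 * (v2(x1, x2, t_cur) + v2(x1, x2, t_prev)) * dt
        x1, x2 = x1 + dx1, x2 + dx2
        traj.append((x1, x2))
    return traj

# Hypothetical nonuniform velocity field (pixels per second)
v1 = lambda x1, x2, t: 120.0 + 0.5 * t   # along-track drift with a slow ramp
v2 = lambda x1, x2, t: 0.02 * x1         # weak cross-track coupling
traj = integrate_flow(v1, v2, (0.0, 0.0), n_steps=1000, dt=1e-3)
```

Because the trapezoid rule is exact for a velocity linear in $t$, the along-track coordinate after 1 s equals the analytic integral $120 + 0.25 = 120.25$ pixels here.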
3.1. Time-Varying Image Deformation in Dynamic Imaging. Firstly, we investigate some differential characteristics of the moving image of an extended object on the earth surface. As shown in Figure 1, a microspatial variation of $p$ along $\vec{\tau}$ on the curved surface can be expressed as $\delta\vec{\rho}_{p}=\delta l\,\vec{\tau}$. Its conjugated image is

$$\delta x'_{i}=\delta\beta\,x_{i}+\beta\,\delta x_{i}. \tag{15}$$
We expand the term $\delta\beta$:

$$\delta\beta=(-1)^{m-1}\left[\frac{f'}{(\vec{r}+\delta\vec{r})\cdot\vec{e}_{3}}-\frac{f'}{\vec{r}\cdot\vec{e}_{3}}\right]=(-1)^{m-1}\frac{f'}{\vec{r}\cdot\vec{e}_{3}}\sum_{k=1}^{\infty}(-1)^{k}\left(\frac{\delta\vec{r}\cdot\vec{e}_{3}}{\vec{r}\cdot\vec{e}_{3}}\right)^{k}\approx(-1)^{m}\frac{f'(\vec{\tau}\cdot\vec{e}_{3})\,\delta l}{(\vec{r}\cdot\vec{e}_{3})^{2}}. \tag{16}$$
Taking derivatives with respect to the variable $t$ on both sides of (15), we have

$$\delta\dot{x}'_{i}=\delta\dot{\beta}\,x_{i}+\delta\beta\,\dot{x}_{i}+\dot{\beta}\,\delta x_{i}+\beta\,\delta\dot{x}_{i}. \tag{17}$$
According to (16), we know that $\delta\dot{\beta}\approx 0$. On the other hand, the variation of $\vec{r}$ can be expressed through a series of coordinate transformations, that is,

$${}^{C}(\delta\vec{r})=\delta l\,\left[\mathbf{M}\mathbf{A}\mathbf{T}_{OI}^{-1}\mathbf{T}_{EI}\,{}^{E}\vec{\tau}\right]. \tag{18}$$
Notice that ${}^{E}\vec{\tau}$ is a fixed tangent vector of the earth surface at object point $p$, which is time-invariant and specifies an orientation of the motionless scene on the earth.
Consequently
$$\left(\frac{{}^{C}\delta\dot{\vec{r}}}{\delta l}\right)_{\tau}=\left(\dot{\mathbf{M}}\mathbf{A}\mathbf{T}_{OI}^{-1}\mathbf{T}_{EI}+\mathbf{M}\dot{\mathbf{A}}\mathbf{T}_{OI}^{-1}\mathbf{T}_{EI}+\mathbf{M}\mathbf{A}\dot{\mathbf{T}}_{OI}^{-1}\mathbf{T}_{EI}+\mathbf{M}\mathbf{A}\mathbf{T}_{OI}^{-1}\dot{\mathbf{T}}_{EI}\right){}^{E}\vec{\tau}, \tag{19}$$
where the coordinate transform matrix from frame E to I is

$$\mathbf{T}_{EI}=\begin{pmatrix}\cos H_{p} & 0 & -\sin H_{p}\\ 0 & 1 & 0\\ \sin H_{p} & 0 & \cos H_{p}\end{pmatrix}. \tag{20}$$
Let $\omega_{e}$ be the angular rate of the earth and $\alpha_{p}$ the longitude of $p$ on the earth; then the hour angle of $p$ at time $t$ is $H_{p}(t)=\mathrm{GST}+\alpha_{p}+\omega_{e}t$, in which GST represents Greenwich sidereal time.

The microscale image deformation of the extended scene on the earth along the direction of $\vec{\tau}$ during $t_{1}\sim t_{2}$ can be written as
$$\left[\delta x'_{i}\right]^{t_{2}}_{\tau}-\left[\delta x'_{i}\right]^{t_{1}}_{\tau}=\int_{t_{1}}^{t_{2}}\left(\delta\dot{x}'_{i}\right)_{\tau}dt. \tag{21}$$
From (17) we have

$$\frac{\left(\delta\dot{x}'_{i}\right)_{\tau}}{\delta l}=\frac{\delta\beta}{\delta l}\dot{x}_{i}+\dot{\beta}\frac{\delta x_{i}}{\delta l}+\beta\frac{\delta\dot{x}_{i}}{\delta l}. \tag{22}$$
According to (16), (18), and (19), we obtain the terms in (22):

$$\frac{\delta\beta}{\delta l}=(-1)^{m}\frac{f'({}^{C}\vec{\tau}\cdot\vec{e}_{3})}{(\vec{r}\cdot\vec{e}_{3})^{2}},\qquad \frac{\delta x_{i}}{\delta l}=\left(\mathbf{M}\mathbf{A}\mathbf{T}_{OI}^{-1}\mathbf{T}_{EI}\,{}^{E}\vec{\tau}\right)\cdot\vec{e}_{i},\qquad \frac{\delta\dot{x}_{i}}{\delta l}=\left(\frac{{}^{C}\delta\dot{\vec{r}}}{\delta l}\right)_{\tau}\cdot\vec{e}_{i}+\left(\frac{{}^{C}\delta\vec{r}}{\delta l}\right)_{\tau}\cdot\dot{\vec{e}}_{i}. \tag{23}$$
Furthermore, if the camera is fixed to the satellite platform, then $\dot{\mathbf{M}}=0$ and $\dot{\vec{e}}_{i}=0$. Consequently, (22) becomes

$$\begin{aligned}\mathcal{F}_{i}(t,\vec{\tau})&=\frac{\left(\delta\dot{x}'_{i}\right)_{\tau}}{\delta l}\\ &=(-1)^{m}\frac{f'({}^{C}\vec{\tau}\cdot\vec{e}_{3})}{(\vec{r}\cdot\vec{e}_{3})^{2}}\dot{x}_{i}+(-1)^{m}\frac{f'(\dot{\vec{r}}\cdot\vec{e}_{3})}{(\vec{r}\cdot\vec{e}_{3})^{2}}\left(\mathbf{M}\mathbf{A}\mathbf{T}_{OI}^{-1}\mathbf{T}_{EI}\,{}^{E}\vec{\tau}\right)\cdot\vec{e}_{i}\\ &\quad+(-1)^{m-1}\frac{f'}{\vec{r}\cdot\vec{e}_{3}}\left(\mathbf{M}\dot{\mathbf{A}}\mathbf{T}_{OI}^{-1}\mathbf{T}_{EI}+\mathbf{M}\mathbf{A}\dot{\mathbf{T}}_{OI}^{-1}\mathbf{T}_{EI}+\mathbf{M}\mathbf{A}\mathbf{T}_{OI}^{-1}\dot{\mathbf{T}}_{EI}\right){}^{E}\vec{\tau}\cdot\vec{e}_{i}.\end{aligned} \tag{24}$$
For the motionless scene on the earth surface, ${}^{E}\vec{\tau}$ is a time-independent but space-dependent unit tangent vector which meanwhile represents a specific orientation on the ground. Moreover, the physical meaning of the function $\mathcal{F}_{i}(t,\vec{\tau})$ is the image deformation of a unit-length curve on the curved surface along the direction of ${}^{E}\vec{\tau}$ in a unit time interval, that is, the instantaneous space-time deforming rate of the image of the object along ${}^{E}\vec{\tau}$.

Consequently, in dynamic imaging, macroscopic deformation of the moving image can be derived from the integral of $\mathcal{F}_{i}(t,\vec{\tau})$ in space and time. Referring to Figure 1, let $\Gamma$ be an arbitrary curve of the extended object on the earth and $\Gamma'$ its image, and let two arbitrary points $p,q\in\Gamma$ have Gaussian images $p',q'\in\Gamma'$. Let ${}^{E}\vec{\tau}=\mathbf{T}(s)$ be a vector-valued function of the arc length $s$, which is time-invariant in frame E and gives the tangent vectors along the curve.
So the image deformation taking place during $t_{1}\sim t_{2}$ can be described as

$$\left[(x'_{p})_{i}\right]^{t_{2}}_{t_{1}}-\left[(x'_{q})_{i}\right]^{t_{2}}_{t_{1}}=\int_{\Gamma}\int_{t_{1}}^{t_{2}}\mathcal{F}_{i}\circ\mathbf{T}\,dt\,ds, \tag{25}$$

in which $\mathcal{F}_{i}\circ\mathbf{T}=\mathcal{F}_{i}[t,\mathbf{T}(s)]$.
Now, in terms of (24) and (25), we can see that the image deformation is also anisotropic and nonlinear, depending not only on the optical flow's evolution but also on the geometry of the scene.
3.2. Dense Image Registration through Optical Flow Prediction. As mentioned in the preceding sections, optical flow is the most precise model for describing image motion and time-varying deformation. Conversely, it is possible to invert optical flow with high accuracy if the image motion and deformation can be detected. As we know, the low-frequency signal components of angular velocity are easy to sense precisely with attitude sensors such as gyroscopes and star trackers, but the higher-frequency components are hard to measure with high accuracy. Actually, however, perturbations from high-frequency jittering are the critical cause of motion blurring and local image deformations, since the influences of the low-frequency components of attitude motion are easier to restrain in imaging by regulating the remote sensor.
Since (13) and (25) are very sensitive to the attitude motion, the angular velocity can be measured with high resolution as well as broad frequency bandwidth so long as the image motion and deformation are determined with a certain precision. Fortunately, the lapped images of the overlapped detectors meet this need, because they are captured in turn as the same parts of the optical flow pass through the adjacent detectors sequentially. Without losing generality, we investigate the most common form of CCD layout, in which two rows of detectors are arranged in parallel. The time-phase relations of image formation due to optical flow evolution are illustrated in Figure 3, where the moving image elements $\alpha_{1},\alpha_{2},\ldots$ (in the left gap) and $\beta_{1},\beta_{2},\ldots$ (in the right gap) are captured first, at the same time, as their optical flows pass through the prior detectors. However, because of nonuniform optical flows, they will not be captured simultaneously by the posterior detectors. Therefore, the geometrical structures of the photographs are time-varying and nonlinear. It is evident from Figure 3 that the displacements and relative deformations in frame C between the lapped images can be determined by measuring the offsets of the sample image element pairs in frame P.
Let $\Delta y'=\Delta x'_{1}$ and $\Delta x'=\Delta x'_{2}$ be the relative offsets of the same object's image on the two photos; they are all calibrated in C or F. We will measure them by image registration.
As far as image registration methods are concerned, one of the hardest problems is complex deformation, which is prone to weaken the similarity between the referenced images and sensed images, so that it might introduce large deviations from the true values or even lead to algorithm failure. Some typical methods have been studied in [23–25]. Generally, most of them concentrate on several simple deforming forms, such as affine, shear, translation, rotation, or their combinations, instead of investigating more sophisticated dynamic deforming models. In [26–30], some effective approaches have been proposed to increase the accuracy and robustness of algorithms according to reasonable models matched to the specific properties of the objective images.
For conventional template-based registration methods, once a template has been extracted from the referenced image, the information about gray values, shape, and frequency spectrum does not increase, since no additional physical information resources are offered. But actually such information has changed by the time the optical flows arrive at the posterior detectors; therefore, the cross-correlations between the templates and the sensed images certainly decrease. So, in order to detect the minor image motions and complex deformations between the lapped images, high-accurate registration is indispensable, which means that a more precise model should be implemented. We treat this with a technique called template reconfiguration. In summary, the method is established on the idea of keeping the completeness of the information about the optical flows.
Figure 3: Nonlinear image velocity field and optical flow trajectories influence the time-phase relations between the lapped images captured by the adjacent overlapped detectors.
In operating, as indicated in Figure 3, take the lapped images captured by the detectors in the prior array as the referenced images and the images captured by the posterior detectors as the sensed images. Firstly, we rebuild the optical flows based on the rough measurements of the space-borne sensors and then reconfigure the original templates to construct new templates whose morphologies are more approximate to the corresponding parts of the sensed images. Through this process, the information about the imaging procedure is added into the new templates so as to increase their degree of similarity to the sensed images. The method can dramatically raise the accuracy of dense registration, such that the high-accurate offsets between the lapped image pairs can be determined.
In the experiment we examined Mapping Satellite-1, a Chinese surveying satellite operating in a 500 km sun-synchronous orbit, which is used for high-accurate photogrammetry [31]; its structure is shown in Figure 4. One of the effective payloads, the three-line-array panchromatic CCD camera, has good geometrical accuracy: its ground pixel resolution is superior to 5 m, its spectral range is 0.51 μm ~ 0.69 μm, and its swath is 60 km. Another payload, the high resolution camera, is designed with a Cook-TMA optical system, which gives a wide field of view [16, 17], and its panchromatic spatial resolution can reach 2 m.
In engineering, for the purpose of improving image quality and surveying precision, high-accuracy measurements of jitter and attitude motion are essential for posterior processing. Thus, here we investigate the images and the auxiliary data of the large-FOV high resolution camera to deal with the problem. The experimental photographs were captured with 10° side looking. The focal plane of the camera
Figure 4: The structure of Mapping Satellite-1 and its effective payloads.
consists of 8 panchromatic TDI CCD detectors, and there are $\eta=96$ physically lapped pixels between each pair of adjacent detectors.
The scheme of the processing in registering one image element $\chi$ is illustrated in Figure 5.
Step 1. Set the original lapped image strips (the images which were acquired directly by the detectors and without any postprocessing) in frame C.
Step 2. Compute the deformations of all image elements on the referenced template with respect to their optical flow trajectories.
We extract the original template from the referenced image, denoted as $T_{1}$, which consists of $N^{2}$ square elements; that is, $\dim(T_{1})=N\times N$. Let $\chi$ be its central element and $w$ the width of each element; here $w=8.75$ μm. Before the moving image is captured by the posterior detector, in terms of (25), the current shapes and energy distribution of the elements can be predicted by the optical flow based on the auxiliary data of the remote sensor.
In order to simplify the algorithm, a first order approximation is allowed without introducing significant errors. This approximation means that the shape of every image element is always quadrilateral. Linear interpolations are carried out to determine the four sides according to the deformations along the radial directions of the vertexes, as shown in Figure 5. The unit radial vectors are denoted by $\vec{\tau}'_{1}\sim\vec{\tau}'_{4}$ in frame C:
$$\vec{\tau}'_{1}=\frac{\sqrt{2}}{2}\vec{e}_{1}-\frac{\sqrt{2}}{2}\vec{e}_{2},\qquad \vec{\tau}'_{2}=\frac{\sqrt{2}}{2}\vec{e}_{1}+\frac{\sqrt{2}}{2}\vec{e}_{2},\qquad \vec{\tau}'_{3}=-\frac{\sqrt{2}}{2}\vec{e}_{1}+\frac{\sqrt{2}}{2}\vec{e}_{2},\qquad \vec{\tau}'_{4}=-\frac{\sqrt{2}}{2}\vec{e}_{1}-\frac{\sqrt{2}}{2}\vec{e}_{2}. \tag{26}$$
Suppose the image point $p'$ is the center of an arbitrary element $\Sigma'$ in $T_{1}$, and let $\Sigma$ be the area element on the earth surface which is conjugate to $\Sigma'$. The four unit radial vectors of the vertexes
Figure 5: Optical flow prediction and template reconfiguration.
on $\Sigma$, $\vec{\tau}_{1}\sim\vec{\tau}_{4}$, are conjugate to $\vec{\tau}'_{1}\sim\vec{\tau}'_{4}$ and tangent to the earth surface at $p$. From the geometrical relations we have

$${}^{C}\vec{\tau}_{i}=(-1)^{m}\frac{\vec{r}'\times\vec{\tau}'_{i}\times{}^{C}\vec{n}_{p}}{\left|\vec{r}'\times\vec{\tau}'_{i}\times{}^{C}\vec{n}_{p}\right|},\qquad {}^{E}\vec{\tau}_{i}=\mathbf{T}_{EI}^{-1}\mathbf{T}_{OI}\mathbf{A}^{-1}\mathbf{M}^{-1}\,{}^{C}\vec{\tau}_{i},\qquad {}^{C}\vec{n}_{p}=\mathbf{M}\mathbf{A}\mathbf{T}_{OI}^{-1}\mathbf{T}_{EI}\,{}^{E}\vec{n}_{p}, \tag{27}$$
where ${}^{E}\vec{n}_{p}$ is the unit normal vector of $\Sigma$ at $p$. We predict the deformations along $\vec{\tau}_{1}\sim\vec{\tau}_{4}$ during $t_{1}\sim t_{2}$ according to the measurements of GPS, star trackers, and gyroscopes, as explained in Figure 6; $t_{1}$ is the imaging time on the prior detector and $t_{2}$ is the imaging time on the posterior detector:
$$\left[\delta x'_{1}\right]^{\Delta t}_{\tau_{k}}=\left[\delta x'_{1}\right]^{t_{2}}_{\tau_{k}}-\left[\delta x'_{1}\right]^{t_{1}}_{\tau_{k}},\qquad \left[\delta x'_{2}\right]^{\Delta t}_{\tau_{k}}=\left[\delta x'_{2}\right]^{t_{2}}_{\tau_{k}}-\left[\delta x'_{2}\right]^{t_{1}}_{\tau_{k}}\quad (k=1\sim 4). \tag{28}$$
The shape of the deformed image $\Sigma'_{t_{2}}$ can be obtained through linear interpolation with

$$\left[\delta\vec{r}'\right]^{\Delta t}_{\tau_{k}}=\left(\left[\delta x'_{1}\right]^{\Delta t}_{\tau_{k}},\left[\delta x'_{2}\right]^{\Delta t}_{\tau_{k}}\right). \tag{29}$$
Step 3. Reconfigure the referenced template $T_{1}$ according to the optical flow prediction and then get a new template $T_{2}$.

Let $T'_{1}$ be the deformed image of $T_{1}$ computed in Step 2. Let $\chi=B_{ij}$ be the central element of $T'_{1}$; integers $i$ and $j$ are, respectively, the row number and column number of $B_{ij}$. The gray value $l_{ij}$ of each element in $T'_{1}$ is equal to its counterpart in $T_{1}$ with the same indexes. In addition, we initialize a null template $T_{0}$ whose shape and orientation are identical to $T_{1}$; the central element of $T_{0}$ is denoted by $T_{ij}$.
Figure 6: Deformation of single element.
Then we cover $T_{0}$ upon $T'_{1}$ and let their centers coincide, that is, $T_{ij}=B_{ij}$, as shown in Figure 7. Denote the vertexes of $T'_{1}$ as $V^{k}_{ij}$ ($k=1\sim 4$). Therefore, the connective relation for adjacent elements can be expressed by $V^{1}_{ij}=V^{2}_{i,j-1}=V^{3}_{i-1,j-1}=V^{4}_{i-1,j}$.

Next we reassign the gray value $h'_{ij}$ to $T_{ij}$ ($i=1,\ldots,N$; $j=1,\ldots,N$) in sequence to construct a new template $T_{2}$. The process is just a simulation of image resampling when the optical flow arrives at the posterior detector, as indicated in Figure 3. That is,
$$h'_{ij}=\sum_{m=i-1}^{i+1}\sum_{n=j-1}^{j+1}\eta_{mn}l_{mn}, \tag{30}$$

with weight coefficient $\eta_{mn}=S_{mn}/w^{2}$, where $S_{mn}$ is the area of the intersecting polygon of $B_{mn}$ with $T_{ij}$.
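The resampling rule (30) weights each deformed source element by the area it shares with the target element. A sketch of how $S_{mn}$ can be obtained, using Sutherland-Hodgman clipping against the target square and the shoelace formula; the element geometry in the example is hypothetical:

```python
def clip_polygon(subject, clip_rect):
    """Sutherland-Hodgman clipping of polygon `subject` (list of (x, y))
    against an axis-aligned rectangle (xmin, ymin, xmax, ymax)."""
    xmin, ymin, xmax, ymax = clip_rect
    edges = [(lambda p: p[0] >= xmin, 0, xmin),
             (lambda p: p[0] <= xmax, 0, xmax),
             (lambda p: p[1] >= ymin, 1, ymin),
             (lambda p: p[1] <= ymax, 1, ymax)]
    poly = subject
    for inside, axis, level in edges:
        out = []
        for i in range(len(poly)):
            a, b = poly[i], poly[(i + 1) % len(poly)]
            if inside(a):
                out.append(a)
            if inside(a) != inside(b):          # edge crosses the clip line
                t = (level - a[axis]) / (b[axis] - a[axis])
                out.append((a[0] + t * (b[0] - a[0]), a[1] + t * (b[1] - a[1])))
        poly = out
        if not poly:
            break
    return poly

def area(poly):
    """Shoelace area of a simple polygon."""
    s = 0.0
    for i in range(len(poly)):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % len(poly)]
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

def resample_element(w, cells, target):
    """Gray value h'_ij per (30): the area-weighted sum over the deformed
    source elements B_mn, given as (polygon, gray) pairs."""
    return sum(area(clip_polygon(poly, target)) / w**2 * gray
               for poly, gray in cells)

# Hypothetical 1x1 target covered half by a gray-100 and half by a gray-200 cell
w = 1.0
cells = [([(0, 0), (0.5, 0), (0.5, 1), (0, 1)], 100.0),
         ([(0.5, 0), (1, 0), (1, 1), (0.5, 1)], 200.0)]
h = resample_element(w, cells, (0, 0, 1, 1))    # -> 150.0
```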
Figure 7: Template reconfiguration.
Step 4. Compute the normalized cross-correlation coefficients between $T_{2}$ and the sensed image and then determine the subpixel offset of $T_{2}$ relative to the sensed image in frame P.
Firstly, for this method, the search space on the sensed image can be contracted considerably, since the optical flow trajectories of the referenced elements have been predicted in Step 2. Assume that the search space is $T_{s}$, $\dim(T_{s})=M\times M$. When $T_{ij}$ moves to the pixel $(n_{1},n_{2})$ on $T_{s}$, the normalized cross-correlation (NCC) coefficient is given by

$$\gamma(n_{1},n_{2})=\frac{\sum_{x,y}\left[g(x,y)-\bar{g}_{x,y}\right]\left[h(x-n_{1},y-n_{2})-\bar{h}\right]}{\left\{\sum_{x,y}\left[g(x,y)-\bar{g}_{x,y}\right]^{2}\sum_{x,y}\left[h(x-n_{1},y-n_{2})-\bar{h}\right]^{2}\right\}^{0.5}}, \tag{31}$$
where $\bar{g}_{xy}$ is the mean gray value of the segment of $T_s$ that is masked by $T_2$, and $\bar{h}$ is the mean of $T_2$. Equation (31) requires approximately $N^2(M - N + 1)^2$ additions and $N^2(M - N + 1)^2$ multiplications, whereas the FFT algorithm needs about $12M^2\log_2 M$ real multiplications and $18M^2\log_2 M$ real additions/subtractions [32, 33].
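A direct spatial-domain evaluation of (31) can be sketched as follows (a minimal NumPy illustration; the function name is ours, and no search-space contraction is applied here):

```python
import numpy as np

def ncc_surface(search, template):
    # gamma(n1, n2) per (31): the N x N template slid over the M x M search window
    M, N = search.shape[0], template.shape[0]
    hbar = template.mean()
    hd = template - hbar
    hnorm = np.sqrt((hd * hd).sum())
    out = np.zeros((M - N + 1, M - N + 1))
    for n1 in range(M - N + 1):
        for n2 in range(M - N + 1):
            seg = search[n1:n1 + N, n2:n2 + N]
            gd = seg - seg.mean()          # mean of the segment masked by the template
            denom = np.sqrt((gd * gd).sum()) * hnorm
            out[n1, n2] = (gd * hd).sum() / denom if denom > 0 else 0.0
    return out
```

The integer peak is then `np.unravel_index(np.argmax(out), out.shape)`; the loop structure makes the quoted $N^2(M-N+1)^2$ operation count explicit.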
At the beginning we take $M = 101$, $N = 7$ and compute the NCC coefficient; when $M$ is much larger than $N$, the calculation in the spatial domain is efficient. Suppose that the peak value $\gamma_{\max}$ is taken at the coordinate $(k, m)$, $k, m \in \mathbb{Z}$, in the sensed window. Hence we reduce the search space to a smaller one with dimension $47 \times 47$ centered on $T_s(k, m)$. Next, the subpixel registration is realized by the phase correlation algorithm with larger $M$ and $N$ to suppress the system errors owing to the deficiency of detailed textures on the photo. Here we take $M = 47$, $N = 23$. Let the subpixel offset between the two registering image elements be denoted as $\delta_x$ and $\delta_y$ in frame P.
The phase correlation algorithm in the frequency domain becomes more efficient as $N$ approaches $M$ and both have larger scales [28]. Moreover, the Fourier coefficients are normalized to unit magnitude prior to computing the correlation, so that the correlation is based only on phase information and is insensitive to changes in image intensity [27, 29].
Let $\mathbf{G}(u, v)$ be the 2D Discrete Fourier Transform (DFT) of the sensed window; then we have

$$\mathbf{G}(u, v) = \sum_{x=-(N-1)/2}^{(N-1)/2}\ \sum_{y=-(N-1)/2}^{(N-1)/2} g(x, y)\, W_M^{ux} W_M^{vy},$$
$$\mathbf{H}(u, v) = \sum_{x=-(N-1)/2}^{(N-1)/2}\ \sum_{y=-(N-1)/2}^{(N-1)/2} h(x, y)\, W_N^{ux} W_N^{vy} \quad (32)$$
Here

$$W_N = \exp\left(-j\frac{2\pi}{N}\right) \quad (33)$$
The cross-phase spectrum is given by

$$\mathbf{R}(u, v) = \frac{\mathbf{G}(u, v)\,\mathbf{H}^*(u, v)}{\left|\mathbf{G}(u, v)\,\mathbf{H}^*(u, v)\right|} = \exp\left(j\phi(u, v)\right) \quad (34)$$
where $\mathbf{H}^*$ is the complex conjugate of $\mathbf{H}$. By the inverse Discrete Fourier Transform (IDFT) we have

$$\gamma(n_1, n_2) = \frac{1}{N^2} \sum_{u=-(N-1)/2}^{(N-1)/2}\ \sum_{v=-(N-1)/2}^{(N-1)/2} \mathbf{R}(u, v)\, W_N^{-un_1} W_N^{-vn_2} \quad (35)$$
Figure 8: Dense image registration for lapped image strips: CCD1 versus CCD2 (Gap 1, the left two) and CCD3 versus CCD4 (Gap 3, the right two).
Suppose that the new peak $\gamma_{\max}$ appears at $(k', m')$, $k', m' \in \mathbb{Z}$; referring to [27] we have the following relation:

$$\gamma_{\max}(k', m') \approx \frac{\lambda}{N^2} \cdot \frac{\sin\left[\pi(k' + \delta_x)\right]\sin\left[\pi(m' + \delta_y)\right]}{\sin\left[(\pi/N)(k' + \delta_x)\right]\sin\left[(\pi/N)(m' + \delta_y)\right]} \quad (36)$$

The right side represents the spatial distribution of the normalized cross-correlation coefficients; therefore $(\delta_x, \delta_y)$ can be measured from it. In practice the constant $\lambda \le 1$; it tends to decrease when small noise exists and equals unity in the ideal case.
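The frequency-domain pipeline of (34)–(35) can be sketched with NumPy FFTs (a minimal illustration under our own naming; the subpixel step, i.e. fitting the kernel shape of (36) around the integer peak, is not shown):

```python
import numpy as np

def phase_correlate(g, h):
    # cross-phase spectrum (34) and its IDFT (35); returns the correlation
    # surface and the integer peak location (k', m')
    G = np.fft.fft2(g)
    H = np.fft.fft2(h)
    R = G * np.conj(H)
    R /= np.abs(R) + 1e-12        # normalize to unit magnitude: phase-only correlation
    gamma = np.real(np.fft.ifft2(R))
    peak = np.unravel_index(np.argmax(gamma), gamma.shape)
    return gamma, peak
```

For a purely integer shift the surface is (up to noise) a delta at the shift; the subpixel offsets $(\delta_x, \delta_y)$ then follow by fitting (36) to the values of $\gamma$ around the peak.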
Step 5. Dense registration is executed for the lapped image strips.

Repeating Step 1 $\sim$ Step 4, we register the along-track sample images selected from the referenced image to the sensed image. The maximal sample rate can reach up to line-by-line. The continuous procedure is shown in Figure 8, in which the image pairs are marked.

The curves of the relative offsets in P are shown in Figures 9 and 10.
Let $\mathrm{col}_r$, $\mathrm{row}_r$ be the column and row indexes of image elements on the referenced image, and let $\mathrm{col}_s$, $\mathrm{row}_s$ be the indexes of the same elements on the sensed image. The total number of columns of each detector is $Q = 4096$ pix, and the vertical distance between the two detector arrays is $D = 18.4975$ mm. According to the results of registration we get the offsets
Figure 9: The offsets of lapped images captured by CCD1 and CCD2 (cross-track and along-track offsets versus image rows; samples S11 and S22 are marked).
Figure 10: The offsets of lapped images captured by CCD3 and CCD4 (cross-track and along-track offsets versus image rows; samples S31 and S32 are marked).
of images at the $n$th gap: $\delta_{nx}$ (cross track), $\delta_{ny}$ (along track) in frame P, and $\Delta x'_n$, $\Delta y'_n$ (mm) in frame F:

$$\delta_{nx} = \mathrm{col}_r + \mathrm{col}_s - Q - \eta_n,$$
$$\Delta x'_n = \Delta(x'_2)_n = \delta_{nx} \cdot w,$$
$$\delta_{ny} = \mathrm{row}_s - \mathrm{row}_r - \frac{D}{w},$$
$$\Delta y'_n = \Delta(x'_1)_n = \delta_{ny} \cdot w + D. \quad (37)$$
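The conversion in (37) can be sketched numerically. In the snippet below the pixel pitch $w = 0.00875$ mm (8.75 µm) is inferred from the ratios in Table 1 rather than stated in this section, and the inter-detector column bias $\eta_n$ is left at zero for illustration since its value is not given here:

```python
def gap_offsets(col_r, row_r, col_s, row_s,
                Q=4096, D=18.4975, w=0.00875, eta_n=0.0):
    # (37): pixel offsets (delta_nx, delta_ny) in frame P and millimetre
    # displacements (dx_mm, dy_mm) in frame F.
    # w = pixel pitch in mm (8.75 um, inferred from Table 1);
    # eta_n = inter-detector column bias, assumed 0 for illustration.
    delta_nx = col_r + col_s - Q - eta_n      # cross track (pixel)
    delta_ny = row_s - row_r - D / w          # along track (pixel)
    return delta_nx, delta_nx * w, delta_ny, delta_ny * w + D
```

With $\delta_{nx} = -25.15$ pix this reproduces $\Delta x'_n = -0.2200625$ mm, matching the S11 row of Table 1.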
Four pixels, S11, S12, S31, and S32, are examined. Their data are listed in Table 1.

S11 and S31 are the images of the same object, which was captured in order by CCD1 and CCD2 (Gap 1); S12 and S32 were captured, respectively, by CCD3 and CCD4 (Gap 3). Referring to the auxiliary data, S11 and S31 were captured at the same time, while S12 and S32 were captured at different times, which means that the along-track speeds of the two moving images were quite different. Moreover, the cross-track image offsets in Gap 1 and Gap 3 vary greatly, which indicates that the optical flows were also distorted unevenly and deflected away from the along-track direction. On the other
Table 1: The offsets between overlapped images.

Sample | Row no. (pixel) | $\delta_{nx}$ (pixel) | $\Delta x'_n$ (mm) | $\delta_{ny}$ (pixel) | $\Delta y'_n$ (mm)
S11 | 258 | -25.15 | -0.2200625 | -5.39 | 18.4503
S12 | 423 | -23.78 | -0.2080750 | -7.36 | 18.4331
S31 | 266 | -12.85 | -0.1124375 | -7.66 | 18.4304
S32 | 436 | -12.97 | -0.1134875 | -6.87 | 18.4374
hand, it has been discovered in Figures 9 and 10 that the fluctuation of the image offsets taking place in Gap 1 is greater in magnitude than that in Gap 3. All these facts indicate that the distorted optical flows can be detected from plenty of image offsets. We will see later that the nonlinear distribution of the data strengthens the well-posedness of the optical flow inversion algorithm.
4. Remote Sensor Attitude Motion Measurement

In this section the attitude velocity of the remote sensor is resolved by using the optical flow inversion method. The results of dense registration are applied to produce the conditions of fixed solution for the optical flow equations.
4.1. The Principle of Optical Flow Inversion. For clarity, in frame C the two coordinate components of the image displacement of the $k$th sample element belonging to the $n$th lapped strip pair are written as $\Delta x'_{nk}$, $\Delta y'_{nk}$. From (13) and (25) it is easy to show that the contributions to the optical flow owing to orbital motion and the earth's inertial movement vary only very slightly in the short term, such that the corresponding displacements can be regarded as piecewise constants $s_x$, $s_y$.
Let $\tau_{ij}$, $t_{ij}$ be, in order, the two sequential imaging times of the $j$th image sample on the overlapped detectors in the $j$th gap. They are usually recorded in the auxiliary data of the remote sensor. Hence, for every image element, the number of discrete statuses in optical flow tracing will be

$$N_{ij} = \left[\frac{t_{ij} - \tau_{ij}}{\Delta t}\right] \in \mathbb{Z}^{+} \quad (i = 1 \cdots n,\ j = 1 \cdots m) \quad (38)$$

where $n$ is the number of CCD gaps, $m$ is the number of sample groups, and $\Delta t$ is the time step. We set samples with the same $j$ index into the same group, in which the samples are captured by the prior detectors simultaneously.
We expand (11), substitute it into (14) and (13), and then arrange the scalar optical flow inversion equations in terms of the three axial angular velocity components $\omega_1$, $\omega_2$, and $\omega_3$ (the variables in the inverse problem), yielding the linear optical flow equations.
Figure 11: Coefficient determination according to the current location of the image (along the locus of the optical flow across the CCDs, the coefficients $c^i_{\mu\kappa}$ are treated as constant within each segment).
For the $l$th group of samples:

$$\sum_{i=l}^{N_{1l}} \left(c^i_{1l1}\omega^i_1 + c^i_{1l2}\omega^i_2 + c^i_{1l3}\omega^i_3\right) = \Delta x'_{1l} - s_{x1},$$
$$\sum_{i=l}^{N_{1l}} \left(d^i_{1l1}\omega^i_1 + d^i_{1l2}\omega^i_2 + d^i_{1l3}\omega^i_3\right) = \Delta y'_{1l} - s_{y1},$$
$$\vdots$$
$$\sum_{i=l}^{N_{nl}} \left(c^i_{nl1}\omega^i_1 + c^i_{nl2}\omega^i_2 + c^i_{nl3}\omega^i_3\right) = \Delta x'_{nl} - s_{xn},$$
$$\sum_{i=l}^{N_{nl}} \left(d^i_{nl1}\omega^i_1 + d^i_{nl2}\omega^i_2 + d^i_{nl3}\omega^i_3\right) = \Delta y'_{nl} - s_{yn}. \quad (39)$$
Suppose that the sample process stops once $m$ groups have been founded. The coefficients are as follows:

$$c^i_{\mu\nu\kappa} = \Xi_\kappa\left(\mu, \left\lceil \frac{i - \nu + 1}{N_{\mu\nu}}\,\mathcal{N} \right\rceil\right),$$
$$d^i_{\mu\nu\kappa} = \Lambda_\kappa\left(\mu, \left\lceil \frac{i - \nu + 1}{N_{\mu\nu}}\,\mathcal{N} \right\rceil\right) \quad (\kappa = 1, 2, 3) \quad (40)$$
Here

$$\Xi_\kappa = \begin{pmatrix} \xi_{11\kappa} & \xi_{12\kappa} & \cdots & \xi_{1\mathcal{N}\kappa} \\ \xi_{21\kappa} & \xi_{22\kappa} & \cdots & \xi_{2\mathcal{N}\kappa} \\ \vdots & & & \vdots \\ \xi_{n1\kappa} & \xi_{n2\kappa} & \cdots & \xi_{n\mathcal{N}\kappa} \end{pmatrix},\qquad
\Lambda_\kappa = \begin{pmatrix} \lambda_{11\kappa} & \lambda_{12\kappa} & \cdots & \lambda_{1\mathcal{N}\kappa} \\ \lambda_{21\kappa} & \lambda_{22\kappa} & \cdots & \lambda_{2\mathcal{N}\kappa} \\ \vdots & & & \vdots \\ \lambda_{n1\kappa} & \lambda_{n2\kappa} & \cdots & \lambda_{n\mathcal{N}\kappa} \end{pmatrix} \quad (41)$$
As for the algorithm, to reduce the complexity all possible values of the coefficients are stored in the matrices $\Xi_\kappa$ and $\Lambda_\kappa$. The accuracy is guaranteed because the coefficients for images moving into the same piece of region are almost equal to an identical constant over a short period, as explained in Figure 11.
It has been mentioned that the optical flow is not sensitive to the satellite's orbit motion and the earth's rotation in the short term; namely, the possible values are assigned by the following functions:

$$\xi_{ijk} = \xi_k\left(a, e, i_0, \Omega, \omega, x'_q, y'_q, \Delta t\right),$$
$$\lambda_{ijk} = \lambda_k\left(a, e, i_0, \Omega, \omega, x'_q, y'_q, \Delta t\right),$$
$$i = 1 \sim n,\quad j = 1 \sim \mathcal{N},\quad q = 1 \sim \mathcal{N}. \quad (42)$$
Here $\mathcal{N}$ is the number of constant-valued segments in the region encompassing all the possible optical flow trajectories. The orbital elements and the integral step size $\Delta t$ are common to all functions. Furthermore, when long-term measurements are executed, $\Xi_\kappa$ and $\Lambda_\kappa$ only need to be renewed according to the current parameters.

The coefficient matrix of the optical flow equations for the $j$th ($1 \le j \le m$) group can be written as
$$\mathbf{C}_j = \begin{pmatrix}
c^1_{1j1} & c^1_{1j2} & c^1_{1j3} & \cdots & c^{N_{1j}}_{1j1} & c^{N_{1j}}_{1j2} & c^{N_{1j}}_{1j3} & \cdots & 0 & 0 \\
d^1_{1j1} & d^1_{1j2} & d^1_{1j3} & \cdots & d^{N_{1j}}_{1j1} & d^{N_{1j}}_{1j2} & d^{N_{1j}}_{1j3} & \cdots & 0 & 0 \\
\cdots & \cdots & \cdots & \cdots & \cdots & \cdots & \cdots & \cdots & \cdots & \cdots \\
c^1_{qj1} & c^1_{qj2} & c^1_{qj3} & \cdots & \cdots & \cdots & c^{N_{qj}}_{qj1} & c^{N_{qj}}_{qj2} & c^{N_{qj}}_{qj3} & \\
d^1_{qj1} & d^1_{qj2} & d^1_{qj3} & \cdots & \cdots & \cdots & d^{N_{qj}}_{qj1} & d^{N_{qj}}_{qj2} & d^{N_{qj}}_{qj3} & \\
\cdots & \cdots & \cdots & \cdots & \cdots & \cdots & \cdots & \cdots & \cdots & \cdots \\
c^1_{nj1} & c^1_{nj2} & c^1_{nj3} & \cdots & \cdots & c^{N_{nj}}_{nj1} & c^{N_{nj}}_{nj2} & c^{N_{nj}}_{nj3} & \cdots & 0 \\
d^1_{nj1} & d^1_{nj2} & d^1_{nj3} & \cdots & \cdots & d^{N_{nj}}_{nj1} & d^{N_{nj}}_{nj2} & d^{N_{nj}}_{nj3} & \cdots & 0
\end{pmatrix}_{2n \times 3N_{qj}} \quad (43)$$

where $N_{qj} = \max\{N_{1j}, \ldots, N_{nj}\}$.

Consequently, as we organize the equations for all groups,
the global coefficient matrix is given in the following form:

$$\mathbf{C} = \begin{pmatrix}
\left[\mathbf{C}_1\right]_{2n \times 3N_{q1}} & 0 & \cdots & \cdots & 0 \\
0 & \left[\mathbf{C}_2\right]_{2n \times 3N_{q2}} & 0 & \cdots & 0 \\
& & \ddots & & \\
0 & \cdots & & \left[\mathbf{C}_m\right]_{2n \times 3N_{qm}} & 0
\end{pmatrix}_{2nm \times 3N_{\max}} \quad (44)$$

$\mathbf{C}$ is a quasidiagonal partitioned matrix; every subblock has $2n$ rows. The maximal number of columns of $\mathbf{C}$ is $3N_{\max}$, with $N_{\max} = \max\{N_{q1}, \ldots, N_{qm}\}$.
The unknown variables are as follows:

$$[\Omega]_{3N_{\max} \times 1} = \left[\omega^1_1\ \omega^1_2\ \omega^1_3 \cdots \omega^{N_{\max}}_1\ \omega^{N_{\max}}_2\ \omega^{N_{\max}}_3\right]^T \quad (45)$$
The constants are as follows:

$$\Delta\mathbf{u}_{2mn \times 1} = \left[\Delta x'_{11}\ \Delta y'_{11} \cdots \Delta x'_{n1}\ \Delta y'_{n1} \cdots \Delta x'_{1m}\ \Delta y'_{1m} \cdots \Delta x'_{nm}\ \Delta y'_{nm}\right]^T,$$
$$\mathbf{s}_{2mn \times 1} = \left[s_{x1}\ s_{y1} \cdots s_{xn}\ s_{yn} \cdots s_{x1}\ s_{y1} \cdots s_{xn}\ s_{yn}\right]^T \quad (46)$$
Figure 12: The flow chart of the attitude motion measurement. (Flow: (1) acquire the auxiliary data of the satellite as preliminary information; (2) select the original template $T_1$ centered on the $\kappa$th sampling pixel from the referenced image captured by the prior CCD; (3) predict the deformation and displacement of every element via optical flow prediction based on the auxiliary data, reconstructing the deformed image $T'_1$; (4) reconfigure the deformed image via the image resampling process to form a new template $T_2$; (5) use the normalized cross-correlation algorithm to register $T_2$ on the sensed image captured by the posterior CCD and measure the relative offsets in the sensed window; (6) compute the precise offset in the photography frame between $T_2$ and the sensed window by adding the optical flow prediction; (7) if $\kappa = N_{\max}$, utilize the offset data as the fixed solution conditions for the optical flow inversion equations and solve the inverse problem for the angular velocity $\vec{\omega}$, for validation and further usage; otherwise set $\kappa = \kappa + 1$ and repeat from (2).)
$\Delta\mathbf{u}$ has been measured by image dense registration; $\mathbf{s}$ can be determined from the auxiliary data of the sensors. The global equations are expressed by

$$\mathbf{C}_{2mn \times 3N_{\max}} \cdot [\Omega]_{3N_{\max} \times 1} = \Delta\mathbf{u}_{2mn \times 1} - \mathbf{s}_{2mn \times 1} \quad (47)$$
As for this problem, it is easy to verify that the conditions (1) $2nm > 3N_{\max}$ and (2) $\mathrm{rank}(\mathbf{C}) = 3N_{\max}$ are easily met in practical work. To solve (47), well-posedness is the critical issue of the inverse problem. The strong nonlinearity and anisotropy of the optical flow greatly reduce the correlation between the coefficients in $\mathbf{C}$ and thereby increase the well-posedness of the solution. The least-squares solution of (47) can be obtained:

$$[\Omega] = \left(\mathbf{C}^T\mathbf{C}\right)^{-1}\mathbf{C}^T\left(\Delta\mathbf{u} - \mathbf{s}\right) \quad (48)$$
The well-posedness can be examined by applying Singular Value Decomposition (SVD) to $\mathbf{C}$. Consider the nonnegative definite matrix $\mathbf{C}^T\mathbf{C}$, whose eigenvalues are given in order $\lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_{3N_{\max}}$:

$$\mathbf{C} = \mathbf{U}[\sigma]\mathbf{V}^T \quad (49)$$

where $\mathbf{U}_{2mn \times 2mn}$ and $\mathbf{V}_{3N_{\max} \times 3N_{\max}}$ are unit orthogonal matrices and the singular values are $\sigma_i = \sqrt{\lambda_i}$. The well-posedness of the solution is acceptable if the condition number $\kappa(\mathbf{C}) = \sigma_1/\sigma_{3N_{\max}} \le tol$.
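The guarded least-squares step of (48)–(49) can be sketched in NumPy (an illustrative function of our own; `tol` is an arbitrary example threshold):

```python
import numpy as np

def solve_angular_velocity(C, du, s, tol=1.0e6):
    # least-squares solution (48) guarded by the SVD condition-number
    # check of (49): kappa(C) = sigma_1 / sigma_min must not exceed tol
    sigma = np.linalg.svd(C, compute_uv=False)
    kappa = sigma[0] / sigma[-1]
    if kappa > tol:
        raise ValueError("ill-posed system, condition number %.3g" % kappa)
    omega, *_ = np.linalg.lstsq(C, du - s, rcond=None)
    return omega, kappa
```

For a consistent overdetermined system ($2nm > 3N_{\max}$, full column rank) the recovered $[\Omega]$ matches the generating angular velocities.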
Associating the process of inverse problem solving in Section 4 with the process of preliminary information acquisition in Section 3, the whole algorithm for remote sensor attitude measurement is illustrated in the flow chart in Figure 12.
4.2. Experimental Results and Validation. In the experiment, 72,940 samples on 7 image strip pairs were involved. To keep the values in $\Xi$ and $\Lambda$ nearly invariant, we redistributed these samples into 20 subspaces and solved for the three axial components of the angular velocity. According to Shannon's sampling theorem, the measurable frequency $f_c$ is expected to reach up to half of the line rate of the TDI CCD; for the experiment, $f_c \approx 1.749$ kHz. The $\omega_i \sim t$ curves for 0 s $\sim$ 0.148 s are shown in Figure 13.

In this period, $\omega_{2\max} = 0.001104^\circ/\mathrm{s}$ and $\omega_{1\max} = 0.001194^\circ/\mathrm{s}$. The signal of $\omega_3(t)$ fluctuates around the mean value $\bar{\omega}_3 = 0.01752^\circ/\mathrm{s}$. It is not hard to infer that high-frequency jitters
Figure 13: Solutions for the angular velocities of the remote sensor ($\omega_1$, $\omega_2$, $\omega_3$ in $^\circ$/s versus imaging time in s).
were perturbing the remote sensor. Besides, compared to the signals of $\omega_1(t)$ and $\omega_2(t)$, the low-frequency components in $\omega_3(t)$ are higher in magnitude. Actually, for this remote sensor the satellite yaw angle needs to be regulated in real time to compensate for the image rotation on the focal plane, such that the detectors can always scan along the direction of image motion. Based on the auxiliary data, the image motion velocity vector $\mathbf{V}$ of the central pixel in the FOV can be computed, so the optimal yaw motion in principle will be

$$\psi^*_t = \frac{V_{y'}}{V_{x'}},\qquad \omega^*_3(t) = \dot{\psi}^*_t = \frac{\dot{V}_{y'}V_{x'} - V_{y'}\dot{V}_{x'}}{V^2_{x'}} \quad (50)$$
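Given sampled velocity components from the auxiliary data, (50) can be evaluated numerically; the sketch below (our own helper, not from the paper) takes the time derivatives by central finite differences:

```python
import numpy as np

def optimal_yaw_rate(vx, vy, dt):
    # omega_3*(t) per (50): quotient rule for d/dt (V_y' / V_x'),
    # with the time derivatives from central finite differences
    vx = np.asarray(vx, dtype=float)
    vy = np.asarray(vy, dtype=float)
    vx_dot = np.gradient(vx, dt)
    vy_dot = np.gradient(vy, dt)
    return (vy_dot * vx - vy * vx_dot) / vx**2
```

For a constant $V_{x'}$ and linearly growing $V_{y'}$, the yaw rate reduces to $\dot{V}_{y'}/V_{x'}$, a constant, as the quotient rule predicts.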
The mean value of $\omega^*_3(t)$ is $\bar{\omega}^*_3 = 0.01198^\circ/\mathrm{s}$. We attribute $\Delta\omega^*_3 = \bar{\omega}_3 - \bar{\omega}^*_3 = 0.00554^\circ/\mathrm{s}$ to the error of the satellite attitude control.

In order to validate the measurement, the technique of template reconfiguration was implemented again to check the expected phenomenon that, based on the high-accuracy information, the correlations between the new templates and $T_s$ should be further improved. In addition, the distribution of $\gamma$ near $\gamma_{\max}$ should become more compact, which is easy to understand, since much more useful information about the remote sensor's motion is introduced into the template reconstruction and increases the similarities between the lapped images.
Unlike the processing in image dense registration, in the validation phase larger original templates are selected. Let $T_1$ be the referenced image template centered at the examined element, $T_2$ the new template reconfigured by rough prediction of the optical flow, $\hat{T}_2$ the new template reconfigured based on the precise attitude motion measurement, and $T_s$ the template on the sensed image centered at the registration pixel. For all templates $M = N = 101$. The distributions of the normalized cross-correlation coefficients corresponding to the referenced template centered on the sample selected in the No. 1000 row belonging to the No. 7 CCD, with the sensed image belonging to the No. 8 CCD, are illustrated in Figure 14.
Figure 14(a) shows the situation for $T_1$ and $T_s$; (b) for $T_2$ and $T_s$; and (c) for $\hat{T}_2$ and $T_s$. The compactness of the data is characterized by the peak value $\gamma_{\max}$ and the location variances $\sigma^2_x$, $\sigma^2_y$:
$$\sigma^2_x = \frac{\sum_{i=1}^{M}\sum_{j=1}^{M} \gamma_{ij} \cdot (i - x_{\max})^2}{\sum_{i=1}^{M}\sum_{j=1}^{M} \gamma_{ij}},\qquad
\sigma^2_y = \frac{\sum_{i=1}^{M}\sum_{j=1}^{M} \gamma_{ij} \cdot (j - y_{\max})^2}{\sum_{i=1}^{M}\sum_{j=1}^{M} \gamma_{ij}} \quad (51)$$

where $x_{\max}$ and $y_{\max}$ are, respectively, the column and row numbers of the peak-valued location.
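The compactness statistics of (51) can be sketched directly on a correlation surface (a minimal NumPy illustration with our own naming; here array axis 0 plays the role of one coordinate and axis 1 the other, an illustrative convention):

```python
import numpy as np

def compactness(gamma):
    # peak value and location standard deviations per (51)
    # for a correlation surface gamma (axis 0 as x, axis 1 as y here)
    M1, M2 = gamma.shape
    i, j = np.meshgrid(np.arange(M1), np.arange(M2), indexing="ij")
    x_max, y_max = np.unravel_index(np.argmax(gamma), gamma.shape)
    total = gamma.sum()
    sigma_x = np.sqrt((gamma * (i - x_max) ** 2).sum() / total)
    sigma_y = np.sqrt((gamma * (j - y_max) ** 2).sum() / total)
    return gamma.max(), sigma_x, sigma_y
```

A surface concentrated at a single peak yields zero variances, while a flatter surface yields larger ones, mirroring the (a)–(c) comparison below.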
In case (a), $\gamma_{\max}(a) = 0.893$, with standard deviations $\sigma_x(a) = 5.653$ and $\sigma_y(a) = 8.192$; in case (b), $\gamma_{\max}(b) = 0.918$, $\sigma_x(b) = 4.839$, and $\sigma_y(b) = 6.686$; in case (c), $\gamma_{\max}(c) = 0.976$, and the variance sharply shrinks to $\sigma_x(c) = 3.27$, $\sigma_y(c) = 4.06$. In Table 2 some other samples, at intervals of 1000 rows, are also examined. The samples can be regarded as independent of each other.
Judging from the results, the performance in case (c) is better than that in case (b) and much better than that in case (a), since the precise attitude motion measurements enhance the precision of the optical flow inversion and so improve the similarities between the new templates and the sensed images. Note that although in case (b) the variance decreases only slightly, as analyzed in Section 3.2, compared to case (a) the offsets of the centroids from the peaks have been corrected well by the use of the rough optical flow predictions.
4.3. Summary and Discussion. In terms of the preceding sections we can see that, compared to ordinary NCC, the precision of image registration is greatly improved, which is attributed to the assistance of the technique of template reconfiguration. By implementing the auxiliary data from the space-borne sensors in the optical flow prediction, the relative deformations between the lapped image pairs can be computed with considerable accuracy. Afterwards, they are used to estimate the gray values of the corresponding parts of the sensed images and help us construct a new template for registration. As we know, the space-borne sensors may give the middle- and low-frequency components of the imager's attitude motion with excellent precision. Thus, compared to classical direct template-based registration algorithms, the similarity between the reconfigured template and the sensed images may greatly increase. Furthermore, the minor deformations attributed to high-frequency jitters can be detected by subpixel registration between the reconfigured templates and the sensed images. This point of view is the exact basis of high-frequency jitter measurement with optical flow inversion.
5. Conclusion

In this paper, optical flows and time-varying image deformation in space dynamic imaging are analyzed in detail. The nonlinear and anisotropic image motion velocity and optical flows are utilized to strengthen the well-posedness of the inverse problem of precise attitude measurement by optical
Figure 14: Normalized cross-correlations comparison. ((a) shows the distribution of $\gamma$ obtained by applying the direct NCC algorithm; (b) shows the distribution of $\gamma$ after template reconfiguration with optical flow prediction; (c) shows the distribution of $\gamma$ derived from posterior template reconfiguration with the high-accuracy sensor attitude measurement. It can be noticed that, from left to right, the values of $\gamma$ become concentrated ever more tightly around the peak-value location.)
Table 2: Correlation coefficients distribution for registration templates.

Row number | $\gamma_{\max}$ (a, b, c) | $\sigma_x$ (a, b, c) | $\sigma_y$ (a, b, c)
No. 1000 | 0.893, 0.918, 0.976 | 5.653, 4.839, 3.27 | 8.192, 6.686, 4.06
No. 2000 | 0.807, 0.885, 0.929 | 8.704, 6.452, 2.13 | 6.380, 7.342, 5.71
No. 3000 | 0.832, 0.940, 0.988 | 4.991, 3.023, 1.55 | 7.704, 4.016, 1.93
No. 4000 | 0.919, 0.935, 0.983 | 5.079, 3.995, 3.61 | 5.873, 5.155, 3.85
No. 5000 | 0.865, 0.922, 0.951 | 5.918, 4.801, 2.37 | 6.151, 2.371, 2.57
No. 6000 | 0.751, 0.801, 0.907 | 12.57, 9.985, 7.89 | 14.66, 8.213, 2.06
No. 7000 | 0.759, 0.846, 0.924 | 11.63, 10.84, 7.14 | 12.71, 8.267, 4.90
No. 8000 | 0.884, 0.900, 0.943 | 8.125, 3.546, 5.42 | 8.247, 6.770, 2.88
flow inversion method. For the purpose of determining the conditions of fixed solutions of the optical flow equations, information-based image registration algorithms are proposed. We apply rough optical flow prediction to improve the efficiency and accuracy of dense image registration. Based on the results of registration, the attitude motions of remote sensors during imaging are measured by using the precise optical flow inversion method. The experiment on a remote sensor showed that the measurements are achieved with very high accuracy as well as with broad bandwidth. This method can be used extensively in remote sensing missions such as image strip splicing, geometrical rectification, and non-blind image restoration to promote surveying precision and resolving power.
Conflict of Interests

The authors declare that they have no financial or personal relationships with other people or organizations that could inappropriately influence their work; there is no professional or other personal interest of any nature or kind in any product, service, and/or company that could be construed as influencing the position presented in, or the review of, this paper.
Acknowledgments

This work is supported by the National High Technology Research and Development Program of China (863 Program) (Grants no. 2012AA121503, 2013AA12260, and 2012AA120603) and the National Natural Science Foundation of China (Grant no. 61377012).
References
[1] V. Tchernykh, M. Beck, and K. Janschek, "An embedded optical flow processor for visual navigation using optical correlator technology," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '06), pp. 67-72, Beijing, China, October 2006.
[2] K. Janschek and V. Tchernykh, "Optical correlator for image motion compensation in the focal plane of a satellite camera," in Proceedings of the 15th IFAC Symposium on Automatic Control in Aerospace, Bologna, Italy, 2001.
[3] W. Priedhorsky and J. J. Bloch, "Optical detection of rapidly moving objects in space," Applied Optics, vol. 44, no. 3, pp. 423-433, 2005.
[4] T. Brox and J. Malik, "Large displacement optical flow: descriptor matching in variational motion estimation," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 33, no. 3, pp. 500-513, 2011.
[5] B. Feng, P. P. Bruyant, P. H. Pretorius, et al., "Estimation of the rigid-body motion from three-dimensional images using a generalized center-of-mass points approach," IEEE Transactions on Nuclear Science, vol. 53, no. 5, pp. 2712-2718, 2006.
[6] J. Wang, P. Yu, C. Yan, J. Ren, and B. He, "Space optical remote sensor image motion velocity vector computational modeling, error budget and synthesis," Chinese Optics Letters, vol. 3, no. 7, pp. 414-417, 2005.
[7] A. S. McEwen, M. E. Banks, N. Baugh, et al., "The High Resolution Imaging Science Experiment (HiRISE) during MRO's primary science phase (PSP)," Icarus, vol. 205, no. 1, pp. 2-37, 2010.
[8] F. Ayoub, S. Leprince, R. Binet, K. W. Lewis, O. Aharonson, and J.-P. Avouac, "Influence of camera distortions on satellite image registration and change detection applications," in Proceedings of the IEEE International Geoscience and Remote Sensing Symposium (IGARSS '08), pp. II-1072 to II-1075, Boston, Mass, USA, 2008.
[9] S. Leprince, S. Barbot, F. Ayoub, and J.-P. Avouac, "Automatic and precise orthorectification, coregistration, and subpixel correlation of satellite images, application to ground deformation measurements," IEEE Transactions on Geoscience and Remote Sensing, vol. 45, no. 6, pp. 1529-1558, 2007.
[10] S. Leprince, P. Muse, and J.-P. Avouac, "In-flight CCD distortion calibration for pushbroom satellites based on subpixel correlation," IEEE Transactions on Geoscience and Remote Sensing, vol. 46, no. 9, pp. 2675-2683, 2008.
[11] Y. Yitzhaky, R. Milberg, S. Yohaev, and N. S. Kopeika, "Comparison of direct blind deconvolution methods for motion-blurred images," Applied Optics, vol. 38, no. 20, pp. 4325-4332, 1999.
[12] R. C. Hardie, K. J. Barnard, and R. Ordonez, "Fast super-resolution with affine motion using an adaptive Wiener filter and its application to airborne imaging," Optics Express, vol. 19, no. 27, pp. 26208-26231, 2011.
[13] E. M. Blixt, J. Semeter, and N. Ivchenko, "Optical flow analysis of the aurora borealis," IEEE Geoscience and Remote Sensing Letters, vol. 3, no. 1, pp. 159-163, 2006.
[14] M. G. Mozerov, "Constrained optical flow estimation as a matching problem," IEEE Transactions on Image Processing, vol. 22, no. 5, pp. 2044-2055, 2013.
[15] H. Sakaino, "A semitransparency-based optical-flow method with a point trajectory model for particle-like video," IEEE Transactions on Image Processing, vol. 21, no. 2, pp. 441-450, 2012.
[16] D. Korsch, "Closed form solution for three-mirror telescopes, corrected for spherical aberration, coma, astigmatism, and field curvature," Applied Optics, vol. 11, no. 12, pp. 2986-2987, 1972.
[17] G. Naletto, V. da Deppo, M. G. Pelizzo, R. Ragazzoni, and E. Marchetti, "Optical design of the wide angle camera for the Rosetta mission," Applied Optics, vol. 41, no. 7, pp. 1446-1453, 2002.
[18] M. Born, E. Wolf, A. B. Bhatia, and P. C. Clemmow, Principles of Optics: Electromagnetic Theory of Propagation, Interference and Diffraction of Light, 7th edition, 1999.
[19] H. Schaub and J. L. Junkins, Analytical Mechanics of Space Systems, AIAA Education Series, 2002.
[20] C. Wang, F. Xing, J. H. Wang, and Z. You, "Optical flows method for lightweight agile remote sensor design and instrumentation," in International Symposium on Photoelectronic Detection and Imaging, vol. 8908 of Proceedings of SPIE, 2013.
[21] T. Sun, F. Xing, and Z. You, "Optical system error analysis and calibration method of high-accuracy star trackers," Sensors, vol. 13, no. 4, pp. 4598-4623, 2013.
[22] T. Sun, F. Xing, Z. You, and M. Wei, "Motion-blurred star acquisition method of the star tracker under high dynamic conditions," Optics Express, vol. 21, no. 17, pp. 20096-20110, 2013.
[23] L. Younes, "Combining geodesic interpolating splines and affine transformations," IEEE Transactions on Image Processing, vol. 15, no. 5, pp. 1111-1119, 2006.
[24] B. Zitova and J. Flusser, "Image registration methods: a survey," Image and Vision Computing, vol. 21, no. 11, pp. 977-1000, 2003.
[25] Z. L. Song, S. Li, and T. F. George, "Remote sensing image registration approach based on a retrofitted SIFT algorithm and Lissajous-curve trajectories," Optics Express, vol. 18, no. 2, pp. 513-522, 2010.
[26] V. Arevalo and J. Gonzalez, "Improving piecewise linear registration of high-resolution satellite images through mesh optimization," IEEE Transactions on Geoscience and Remote Sensing, vol. 46, no. 11, pp. 3792-3803, 2008.
[27] Z. Levi and C. Gotsman, "D-Snake: image registration by as-similar-as-possible template deformation," IEEE Transactions on Visualization and Computer Graphics, vol. 19, no. 2, pp. 331-343, 2013.
[28] R. J. Althof, M. G. J. Wind, and J. T. Dobbins III, "A rapid and automatic image registration algorithm with subpixel accuracy," IEEE Transactions on Medical Imaging, vol. 16, no. 3, pp. 308-316, 1997.
[29] W. Tong, "Subpixel image registration with reduced bias," Optics Letters, vol. 36, no. 5, pp. 763-765, 2011.
[30] Y. Bentoutou, N. Taleb, K. Kpalma, and J. Ronsin, "An automatic image registration for applications in remote sensing," IEEE Transactions on Geoscience and Remote Sensing, vol. 43, no. 9, pp. 2127-2137, 2005.
[31] L. S. Ming, L. Yan, and L. Jindong, "Mapping satellite-1 transmission type photogrammetric and remote sensing," Journal of Remote Sensing, vol. 16, supplement, pp. 10-16, 2012 (Chinese).
[32] J. P. Lewis, "Fast template matching," Vision Interface, vol. 95, pp. 120-123, 1995.
[33] H. Foroosh, J. B. Zerubia, and M. Berthod, "Extension of phase correlation to subpixel registration," IEEE Transactions on Image Processing, vol. 11, no. 3, pp. 188-200, 2002.
Mathematical Problems in Engineering 5
when they are applied to perform large-angle attitude maneuvering, for example, in side looking by swing or stereoscopic looking by pitching. Under these circumstances, in order to acquire photos with high spatial, temporal, and spectral resolution, image motion velocity control strategies should be executed in real time [20] based on auxiliary data measured by reliable space-borne sensors [21, 22]. In detail, for TDI CCD cameras, the line rates of the detectors must be controlled to synchronize with the local image velocity modules during exposure so as to avoid along-track motion blurring; the attitude of the remote sensor should be regulated in time to keep the detectors' push-broom direction aimed at the direction of image motion, so as to avoid cross-track motion blurring.
3. Optical Flow Rough Inversion and Dense Image Registration
Optical flow is another important physical model, carrying the whole energy and information of moving images in dynamic imaging. A specific optical flow trajectory is an integral curve which is always tangent to the image velocity field; thus we have

$$x'_1(T) = \int_0^T \dot{x}'_1\left(x'_1, x'_2, t\right) dt,\qquad x'_2(T) = \int_0^T \dot{x}'_2\left(x'_1, x'_2, t\right) dt. \tag{13}$$
Since (13) are coupled nonlinear integral equations, we convert them to numerical forms and solve them iteratively:

$$x'_i(0) = x'_i(t)\big|_{t=0},$$
$$x'_j(n) = x'_j(n-1) + \frac{1}{2}\left\{\dot{x}'_j\left[x'_1(n-1), x'_2(n-1), n\right] + \dot{x}'_j\left[x'_1(n-1), x'_2(n-1), n-1\right]\right\}\Delta t,\qquad \left(j = 1, 2;\ n \in \mathbb{Z}^+\right). \tag{14}$$
It is evident that the algorithm has sufficient precision so long as the time step Δt is small enough. It can be inferred from (13) that a strongly nonlinear image velocity field may distort optical flows so much that the geometrical structure of the image exhibits irregular behavior. Therefore, if we intend to invert the information of optical flow to measure the attitude motion, the general formula of image deformation due to the optical flows should be deduced.
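As a sketch of the iteration in (14), the following Python fragment traces one trajectory through an assumed image velocity field; the rigid-rotation field and all names are illustrative choices of ours, picked only because the circular trajectories are easy to check:

```python
import math

def integrate_flow(v, x0, t0, t1, dt):
    """Trace one optical flow trajectory per the scheme in (14): the
    velocity samples at times t and t + dt, both evaluated at the
    previous position, are averaged over each step."""
    x1, x2 = x0
    t = t0
    while t < t1:
        v_new = v(x1, x2, t + dt)
        v_old = v(x1, x2, t)
        x1 += 0.5 * (v_new[0] + v_old[0]) * dt
        x2 += 0.5 * (v_new[1] + v_old[1]) * dt
        t += dt
    return x1, x2

# Hypothetical velocity field: rigid rotation at 1 rad/s about the origin.
rotation = lambda x1, x2, t: (-x2, x1)

# A quarter turn maps (1, 0) close to (0, 1); the radius is nearly preserved
# when dt is small, illustrating the step-size remark in the text.
x1, x2 = integrate_flow(rotation, (1.0, 0.0), 0.0, math.pi / 2, 1e-3)
```

Shrinking `dt` further reduces both the angular and radial drift, which is exactly the precision/step-size trade-off noted above.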
3.1. Time-Varying Image Deformation in Dynamic Imaging. Firstly, we investigate some differential characteristics of the moving image of an extended object on the earth's surface. As shown in Figure 1, a micro spatial variation of $p$ along $\vec{\tau}$ on the curved surface can be expressed as $\delta\vec{\rho}_p = \delta l\,\vec{\tau}$. Its conjugated image is

$$\delta x'_i = \delta\beta\, x_i + \beta\,\delta x_i. \tag{15}$$
We expand the term $\delta\beta$:

$$\delta\beta = (-1)^m\left[\frac{f'}{(\vec{r}+\delta\vec{r})\cdot\vec{e}_3} - \frac{f'}{\vec{r}\cdot\vec{e}_3}\right] = (-1)^{m-1}\frac{f'}{\vec{r}\cdot\vec{e}_3}\sum_{k=1}^{\infty}(-1)^k\left(\frac{\delta\vec{r}\cdot\vec{e}_3}{\vec{r}\cdot\vec{e}_3}\right)^{k} \approx (-1)^m f'\,\frac{\left(\vec{\tau}\cdot\vec{e}_3\right)\delta l}{\left(\vec{r}\cdot\vec{e}_3\right)^2}. \tag{16}$$
Taking derivatives with respect to the variable $t$ on both sides of (15), we have

$$\delta\dot{x}'_i = \delta\dot{\beta}\,x_i + \delta\beta\,\dot{x}_i + \dot{\beta}\,\delta x_i + \beta\,\delta\dot{x}_i. \tag{17}$$
According to (16), we know that $\delta\dot{\beta} \approx 0$. On the other hand, the variation of $\vec{r}$ can be expressed through a series of coordinate transformations, that is,

$${}^{C}(\delta\vec{r}) = \delta l\left[\mathbf{M}\mathbf{A}\mathbf{T}_{OI}^{-1}\mathbf{T}_{EI}\,{}^{E}\vec{\tau}\right]. \tag{18}$$

Notice that ${}^{E}\vec{\tau}$ is a fixed tangent vector of the earth surface at object point $p$, which is time-invariant and specifies an orientation of the motionless scene on the earth.
Consequently,

$$\left(\frac{{}^{C}\delta\dot{\vec{r}}}{\delta l}\right)_{\tau} = \left(\dot{\mathbf{M}}\mathbf{A}\mathbf{T}_{OI}^{-1}\mathbf{T}_{EI} + \mathbf{M}\dot{\mathbf{A}}\mathbf{T}_{OI}^{-1}\mathbf{T}_{EI} + \mathbf{M}\mathbf{A}\dot{\mathbf{T}}_{OI}^{-1}\mathbf{T}_{EI} + \mathbf{M}\mathbf{A}\mathbf{T}_{OI}^{-1}\dot{\mathbf{T}}_{EI}\right){}^{E}\vec{\tau}, \tag{19}$$
where the coordinate transform matrix from frame $E$ to $I$ is

$$\mathbf{T}_{EI} = \begin{pmatrix} \cos H_p & 0 & -\sin H_p \\ 0 & 1 & 0 \\ \sin H_p & 0 & \cos H_p \end{pmatrix}. \tag{20}$$
Let $\omega_e$ be the angular rate of the earth and $\alpha_p$ the longitude of $p$ on the earth; then the hour angle of $p$ at time $t$ is $H_p(t) = \mathrm{GST} + \alpha_p + \omega_e t$, in which GST represents Greenwich sidereal time.

The microscale image deformation of the extended scene on the earth along the direction of $\vec{\tau}$ during $t_1 \sim t_2$ can be written as
$$\left[\delta x'_i\right]^{t_2}_{\tau} - \left[\delta x'_i\right]^{t_1}_{\tau} = \int_{t_1}^{t_2}\left(\delta\dot{x}'_i\right)_{\tau}\,dt. \tag{21}$$
From (17) we have

$$\frac{\left(\delta\dot{x}'_i\right)_{\tau}}{\delta l} = \frac{\delta\beta}{\delta l}\dot{x}_i + \dot{\beta}\frac{\delta x_i}{\delta l} + \beta\frac{\delta\dot{x}_i}{\delta l}. \tag{22}$$
According to (16), (18), and (19), we obtain the terms in (22):

$$\frac{\delta\beta}{\delta l} = (-1)^m f'\,\frac{{}^{C}\vec{\tau}\cdot\vec{e}_3}{\left(\vec{r}\cdot\vec{e}_3\right)^2},$$
$$\frac{\delta x_i}{\delta l} = \mathbf{M}\mathbf{A}\mathbf{T}_{OI}^{-1}\mathbf{T}_{EI}\,{}^{E}\vec{\tau}\cdot\vec{e}_i,$$
$$\frac{\delta\dot{x}_i}{\delta l} = \left(\frac{{}^{C}\delta\dot{\vec{r}}}{\delta l}\right)_{\tau}\cdot\vec{e}_i + \left(\frac{{}^{C}\delta\vec{r}}{\delta l}\right)_{\tau}\cdot\dot{\vec{e}}_i. \tag{23}$$
Furthermore, if the camera is fixed to the satellite platform, then $\dot{\mathbf{M}} = 0$ and $\dot{\vec{e}}_i = 0$. Consequently, (22) becomes
$$\begin{aligned} \mathcal{F}_i(t, \tau) &= \frac{\left(\delta\dot{x}'_i\right)_{\tau}}{\delta l}\\ &= (-1)^m f'\,\frac{{}^{C}\vec{\tau}\cdot\vec{e}_3}{\left(\vec{r}\cdot\vec{e}_3\right)^2}\,\dot{x}_i + (-1)^m f'\,\frac{\dot{\vec{r}}\cdot\vec{e}_3}{\left(\vec{r}\cdot\vec{e}_3\right)^2}\,\mathbf{M}\mathbf{A}\mathbf{T}_{OI}^{-1}\mathbf{T}_{EI}\,{}^{E}\vec{\tau}\cdot\vec{e}_i\\ &\quad + (-1)^{m-1}\frac{f'}{\vec{r}\cdot\vec{e}_3}\left(\mathbf{M}\dot{\mathbf{A}}\mathbf{T}_{OI}^{-1}\mathbf{T}_{EI} + \mathbf{M}\mathbf{A}\dot{\mathbf{T}}_{OI}^{-1}\mathbf{T}_{EI} + \mathbf{M}\mathbf{A}\mathbf{T}_{OI}^{-1}\dot{\mathbf{T}}_{EI}\right){}^{E}\vec{\tau}\cdot\vec{e}_i. \end{aligned} \tag{24}$$
For the motionless scene on the earth surface, ${}^{E}\vec{\tau}$ is a time-independent but space-dependent unit tangent vector, which meanwhile represents a specific orientation on the ground. Moreover, the physical meaning of the function $\mathcal{F}_i(t, \tau)$ is the image deformation of a unit-length curve on the curved surface along the direction of ${}^{E}\vec{\tau}$ in unit time interval; that is, the instantaneous space-time deforming rate of the image of the object along ${}^{E}\vec{\tau}$.

Consequently, in dynamic imaging, macroscopic deformation of the moving image can be derived from the integral of $\mathcal{F}_i(t, \tau)$ over space and time. Referring to Figure 1, let $\Gamma$ be an arbitrary curve of the extended object on the earth, let $\Gamma'$ be its image, let $p, q \in \Gamma$ be two arbitrary points, and let their Gaussian images be $p', q' \in \Gamma'$. Let ${}^{E}\vec{\tau} = \mathbf{T}(s)$ be a vector-valued function of the arc length $s$, which is time-invariant in frame $E$ and gives the tangent vectors along the curve.
So the image deformation taking place during $t_1 \sim t_2$ can be described as

$$\left[\left(x'_p\right)_i\right]^{t_2}_{t_1} - \left[\left(x'_q\right)_i\right]^{t_2}_{t_1} = \int_{\Gamma}\int_{t_1}^{t_2}\mathcal{F}_i\circ\mathbf{T}\,dt\,ds, \tag{25}$$

in which $\mathcal{F}_i\circ\mathbf{T} = \mathcal{F}_i\left[t, \mathbf{T}(s)\right]$.
Now, in terms of (24) and (25), we can see that the image deformation is also anisotropic and nonlinear; it depends not only on the optical flow's evolution but also on the geometry of the scene.
3.2. Dense Image Registration through Optical Flow Prediction. As mentioned in the preceding sections, optical flow is the most precise model for describing image motion and time-varying deformation. Conversely, it is possible to invert optical flow with high accuracy if the image motion and deformation can be detected. As we know, the low frequency components of angular velocity are easy to sense precisely with attitude sensors such as gyroscopes and star trackers, but the higher frequency components are hard to measure with high accuracy. Actually, however, perturbations from high frequency jittering are the critical cause of motion blurring and local image deformations, since the influences brought by the low frequency components of attitude motion are easier to restrain in imaging by regulating the remote sensor.
Since (13) and (25) are very sensitive to the attitude motion, the angular velocity can be measured with high resolution as well as broad frequency bandwidth, so long as the image motion and deformation are determined with a certain precision. Fortunately, the lapped images of the overlapped detectors meet this need, because they were captured in turn as the same parts of the optical flow passed through the adjacent detectors sequentially. Without losing generality, we will investigate the most common form of CCD layout, in which two rows of detectors are arranged in parallel. The time-phase relations of image formation due to optical flow evolution are illustrated in Figure 3, where the moving image elements $\alpha_1, \alpha_2, \ldots$ (in the left gap) and $\beta_1, \beta_2, \ldots$ (in the right gap) are captured first at the same time, since their optical flows pass through the prior detectors. However, because of the nonuniform optical flows, they will not be captured simultaneously by the posterior detectors. Therefore, the geometrical structures of the photographs will be time-varying and nonlinear. It is evident from Figure 3 that the displacements and relative deformations in frame $C$ between the lapped images can be determined by measuring the offsets of the sample image element pairs in frame $P$.

Let $\Delta y' = \Delta x'_1$, $\Delta x' = \Delta x'_2$ be the relative offsets of the same object's image on the two photos; they are all calibrated in $C$ or $F$. We will measure them by image registration.
As far as image registration methods are concerned, one of the hardest problems is complex deformation, which tends to weaken the similarity between the referenced images and sensed images, so that it might introduce large deviations from the true values or even lead to algorithm failure. Some typical methods have been studied in [23–25]. Generally, most of them concentrate on several simple deformation forms, such as affine, shear, translation, rotation, or their combinations, instead of investigating more sophisticated dynamic deformation models. In [26–30], some effective approaches have been proposed to increase the accuracy and robustness of algorithms, using reasonable models matched to the specific properties of the objective images.
For conventional template-based registration methods, once a template has been extracted from the referenced image, the information about gray values, shape, and frequency spectrum does not increase, since no additional physical information is offered. But actually such information has changed by the time the optical flows arrive at the posterior detectors. Therefore, the cross-correlations between the templates and sensed images certainly decrease. So, in order to detect the minor image motions and complex deformations between the lapped images, high-accuracy registration is indispensable, which means that a more precise model should be implemented. We treat this with a technique called template reconfiguration. In summary, the method is built on the idea of keeping the information about the optical flows complete.
Figure 3: Nonlinear image velocity field and optical flow trajectories influence the time-phase relations between the lapped images captured by the adjacent overlapped detectors.
In operation, as indicated in Figure 3, we take the lapped images captured by the detectors in the prior array as the referenced images and the images captured by the posterior detectors as the sensed images. Firstly, we rebuild the optical flows based on the rough measurements of the space-borne sensors, and then we reconfigure the original templates to construct new templates whose morphologies are closer to the corresponding parts of the sensed images. With this process, the information about the imaging procedure is added into the new templates so as to increase their degree of similarity to the sensed images. The method may dramatically raise the accuracy of dense registration, such that the high-accuracy offsets between the lapped image pairs can be determined.
In the experiment we examined Mapping Satellite-1, a Chinese surveying satellite operating in a 500 km sun-synchronous orbit, which is used for high-accuracy photogrammetry [31]; its structure is shown in Figure 4. One of the effective payloads, the three-line-array panchromatic CCD camera, has good geometrical accuracy: its ground pixel resolution is better than 5 m, its spectral range is 0.51 μm ~ 0.69 μm, and its swath is 60 km. Another payload, the high resolution camera, possesses a Cook-TMA optical system, which gives a wide field of view [16, 17], and its panchromatic spatial resolution can reach 2 m.
In engineering, for the purpose of improving image quality and surveying precision, high-accuracy measurements of jitter and attitude motion are essential for posterior processing. Thus, here we investigate the images and auxiliary data of the large FOV high resolution camera to deal with the problem. The experimental photographs were captured with 10° side looking. The focal plane of the camera
Figure 4: The structure of Mapping Satellite-1 and its effective payloads.
consists of 8 panchromatic TDI CCD detectors, and there are $\eta = 96$ physically lapped pixels between each pair of adjacent detectors.
The scheme of the processing for registering one image element $\chi$ is illustrated in Figure 5.
Step 1. Set the original lapped image strips (the images acquired directly by the detectors, without any postprocessing) in frame $C$.
Step 2. Compute the deformations of all image elements on the referenced template with respect to their optical flow trajectories.
We extract the original template from the referenced image, denoted as $T_1$, which consists of $N^2$ square elements; that is, $\dim(T_1) = N \times N$. Let $\chi$ be its central element and $w$ the width of each element; here $w = 8.75$ μm. Before the moving image is captured by the posterior detector, its current shapes and energy distribution can, in terms of (25), be predicted by the optical flow based on the auxiliary data of the remote sensor.
In order to simplify the algorithm, a first order approximation is allowed without introducing significant errors. This approximation means that the shape of every image element is always quadrilateral. Linear interpolations are carried out to determine the four sides according to the deformations along the radial directions of the vertexes, as shown in Figure 5. The unit radial vectors in frame $C$ are denoted by $\vec{\tau}'_1 \sim \vec{\tau}'_4$:

$$\vec{\tau}'_1 = \frac{\sqrt{2}}{2}\vec{e}_1 - \frac{\sqrt{2}}{2}\vec{e}_2,\qquad \vec{\tau}'_3 = -\frac{\sqrt{2}}{2}\vec{e}_1 + \frac{\sqrt{2}}{2}\vec{e}_2,$$
$$\vec{\tau}'_2 = \frac{\sqrt{2}}{2}\vec{e}_1 + \frac{\sqrt{2}}{2}\vec{e}_2,\qquad \vec{\tau}'_4 = -\frac{\sqrt{2}}{2}\vec{e}_1 - \frac{\sqrt{2}}{2}\vec{e}_2. \tag{26}$$
Figure 5: Optical flow prediction and template reconfiguration.

Suppose image point $p'$ is the center of an arbitrary element $\Sigma'$ in $T_1$. Let $\Sigma$ be the area element on the earth surface which is conjugate to $\Sigma'$. The four unit radial vectors of the vertexes on $\Sigma$, $\vec{\tau}_1 \sim \vec{\tau}_4$, are conjugate to $\vec{\tau}'_1 \sim \vec{\tau}'_4$ and tangent to the earth surface at $p$. From the geometrical relations we have
$${}^{C}\vec{\tau}_i = (-1)^m\,\frac{\vec{r}\,'\times\vec{\tau}'_i\times{}^{C}\vec{n}_p}{\left|\vec{r}\,'\times\vec{\tau}'_i\times{}^{C}\vec{n}_p\right|},$$
$${}^{E}\vec{\tau}_i = \mathbf{T}_{EI}^{-1}\mathbf{T}_{OI}\mathbf{A}^{-1}\mathbf{M}^{-1}\,{}^{C}\vec{\tau}_i,$$
$${}^{C}\vec{n}_p = \mathbf{M}\mathbf{A}\mathbf{T}_{OI}^{-1}\mathbf{T}_{EI}\,{}^{E}\vec{n}_p, \tag{27}$$
where ${}^{E}\vec{n}_p$ is the unit normal vector of $\Sigma$ at $p$. We predict the deformations along $\vec{\tau}_1 \sim \vec{\tau}_4$ during $t_1 \sim t_2$ according to the measurements of GPS, star trackers, and gyroscopes, as explained in Figure 6; $t_1$ is the imaging time on the prior detector and $t_2$ is the imaging time on the posterior detector:
$$\left[\delta x'_1\right]^{\Delta t}_{\tau_k} = \left[\delta x'_1\right]^{t_2}_{\tau_k} - \left[\delta x'_1\right]^{t_1}_{\tau_k},$$
$$\left[\delta x'_2\right]^{\Delta t}_{\tau_k} = \left[\delta x'_2\right]^{t_2}_{\tau_k} - \left[\delta x'_2\right]^{t_1}_{\tau_k},\qquad (k = 1 \sim 4). \tag{28}$$
The shape of the deformed image $\Sigma'_{t_2}$ can be obtained through linear interpolation with

$$\left[\delta\vec{r}\,'\right]^{\Delta t}_{\tau_k} = \left(\left[\delta x'_1\right]^{\Delta t}_{\tau_k},\ \left[\delta x'_2\right]^{\Delta t}_{\tau_k}\right). \tag{29}$$
Step 3. Reconfigure the referenced template $T_1$ according to the optical flow prediction, and then get a new template $T_2$.

Let $T'_1$ be the deformed image of $T_1$ computed in Step 2. Let $\chi = B_{ij}$ be the central element of $T'_1$; the integers $i$ and $j$ are, respectively, the row number and column number of $B_{ij}$. The gray value $l_{ij}$ of each element in $T'_1$ is equal to its counterpart in $T_1$ with the same indexes. In addition, we initialize a null template $T_0$ whose shape and orientation are identical to $T_1$; the central element of $T_0$ is denoted by $T_{ij}$.
Figure 6: Deformation of a single element.
Then we cover $T_0$ upon $T'_1$ and let their centers coincide, that is, $T_{ij} = B_{ij}$, as shown in Figure 7. Denote the vertexes of $T'_1$ as $V^k_{ij}$ $(k = 1 \sim 4)$. Therefore, the connective relation for adjacent elements can be expressed by $V^1_{ij} = V^2_{i,j-1} = V^3_{i-1,j-1} = V^4_{i-1,j}$.
Next we reassign the gray value $h'_{ij}$ to $T_{ij}$ $(i = 1 \cdots N,\ j = 1 \cdots N)$ in sequence to construct a new template $T_2$. The process is just a simulation of image resampling when the optical flow arrives at the posterior detector, as indicated in Figure 3. That is,

$$h'_{ij} = \sum_{m=i-1}^{i+1}\ \sum_{n=j-1}^{j+1}\eta_{mn}\,l_{mn}. \tag{30}$$
The weight coefficient is $\eta_{mn} = S_{mn}/w^2$, where $S_{mn}$ is the area of the intersecting polygon of $B_{mn}$ with $T_{ij}$.
Figure 7: Template reconfiguration.
Step 4. Compute the normalized cross-correlation coefficients between $T_2$ and the sensed image, and then determine the subpixel offset of $T_2$ relative to the sensed image in frame $P$.

Firstly, for this method the search space on the sensed image can be contracted substantially, since the optical flow trajectories for the referenced elements have been predicted in Step 2. Assume that the search space is $T_s$, $\dim(T_s) = M \times M$. When $T_{ij}$ moves to the pixel $(n_1, n_2)$ on $T_s$, the normalized cross-correlation (NCC) coefficient is given by
$$\gamma(n_1, n_2) = \frac{\sum_{x,y}\left[g(x,y) - \bar{g}_{x,y}\right]\left[h(x - n_1, y - n_2) - \bar{h}\right]}{\left\{\sum_{x,y}\left[g(x,y) - \bar{g}_{x,y}\right]^2\,\sum_{x,y}\left[h(x - n_1, y - n_2) - \bar{h}\right]^2\right\}^{0.5}}, \tag{31}$$
where $\bar{g}_{x,y}$ is the mean gray value of the segment of $T_s$ that is masked by $T_2$, and $\bar{h}$ is the mean of $T_2$. Equation (31) requires approximately $N^2(M - N + 1)^2$ additions and $N^2(M - N + 1)^2$ multiplications, whereas the FFT algorithm needs about $12M^2\log_2 M$ real multiplications and $18M^2\log_2 M$ real additions/subtractions [32, 33].
At the beginning we take $M = 101$, $N = 7$ and compute the NCC coefficients; when $M$ is much larger than $N$, the calculation in the spatial domain is efficient. Suppose that the peak value $\gamma_{\max}$ is taken at the coordinate $(k, m)$, $k, m \in \mathbb{Z}$, in the sensed window. We then reduce the search space to a smaller one of dimension $47 \times 47$ centered on $T_s(k, m)$. Next, the subpixel registration is realized by a phase correlation algorithm with larger $M$ and $N$, to suppress the system errors owing to the deficiency of detailed textures on the photo; here we take $M = 47$, $N = 23$. Let the subpixel offset between the two registering image elements be denoted as $\delta_x$ and $\delta_y$ in frame $P$.
The phase correlation algorithm in the frequency domain becomes more efficient as $N$ approaches $M$ and both have larger scales [28]. Moreover, the Fourier coefficients are normalized to unit magnitude prior to computing the correlation, so that the correlation is based only on phase information and is insensitive to changes in image intensity [27, 29].
Let $\mathbf{G}(u, v)$ be the 2D Discrete Fourier Transform (DFT) of the sensed window; then we have

$$\mathbf{G}(u, v) = \sum_{x=-(N-1)/2}^{(N-1)/2}\ \sum_{y=-(N-1)/2}^{(N-1)/2} g(x, y)\,W_M^{ux}W_M^{vy},$$
$$\mathbf{H}(u, v) = \sum_{x=-(N-1)/2}^{(N-1)/2}\ \sum_{y=-(N-1)/2}^{(N-1)/2} h(x, y)\,W_N^{ux}W_N^{vy}. \tag{32}$$
Here

$$W_N = \exp\left(-j\frac{2\pi}{N}\right). \tag{33}$$
The cross-phase spectrum is given by

$$\mathbf{R}(u, v) = \frac{\mathbf{G}(u, v)\,\mathbf{H}^*(u, v)}{\left|\mathbf{G}(u, v)\,\mathbf{H}^*(u, v)\right|} = \exp\left(j\phi(u, v)\right), \tag{34}$$
where $\mathbf{H}^*$ is the complex conjugate of $\mathbf{H}$. By the inverse Discrete Fourier Transform (IDFT) we have
$$\gamma(n_1, n_2) = \frac{1}{N^2}\sum_{u=-(N-1)/2}^{(N-1)/2}\ \sum_{v=-(N-1)/2}^{(N-1)/2}\mathbf{R}(u, v)\,W_N^{-un_1}W_N^{-vn_2}. \tag{35}$$
Figure 8: Dense image registration for lapped image strips: CCD1 versus CCD2 (Gap 1, the left two) and CCD3 versus CCD4 (Gap 3, the right two).
Suppose that the new peak $\gamma_{\max}$ appears at $(k', m')$, $k', m' \in \mathbb{Z}$; referring to [27], we have the following relation:

$$\gamma_{\max}(k', m') \approx \frac{\lambda}{N^2}\,\frac{\sin\left[\pi\left(k' + \delta_x\right)\right]\sin\left[\pi\left(m' + \delta_y\right)\right]}{\sin\left[(\pi/N)\left(k' + \delta_x\right)\right]\sin\left[(\pi/N)\left(m' + \delta_y\right)\right]}. \tag{36}$$

The right side presents the spatial distribution of the normalized cross-correlation coefficients; therefore $(\delta_x, \delta_y)$ can be measured based on it. In practice, the constant $\lambda \le 1$; it tends to decrease when small noise exists and equals unity in ideal cases.
Step 5. Execute dense registration for the lapped image strips.

Repeating Step 1 ~ Step 4, we register the along-track sample images selected from the referenced images to the sensed image; the maximal sample rate can reach line-by-line. The continuous procedure is shown in Figure 8, in which the image pairs are marked.

The curves of the relative offsets in $P$ are shown in Figures 9 and 10.
Let $\mathrm{col}_r$, $\mathrm{row}_r$ be the column and row indexes of image elements on the referenced image, and let $\mathrm{col}_s$, $\mathrm{row}_s$ be the indexes of the same elements on the sensed image. The total number of columns of each detector is $Q = 4096$ pix, and the vertical distance between the two detector arrays is $D = 18.4975$ mm. According to the results of registration, we get the offsets
Figure 9: The offsets of lapped images captured by CCD1 and CCD2.
Figure 10: The offsets of lapped images captured by CCD3 and CCD4.
of the images at the $n$th gap, $\delta^n_x$ (cross track) and $\delta^n_y$ (along track) in frame $P$, and $\Delta x'_n$, $\Delta y'_n$ (mm) in frame $F$:

$$\delta^n_x = \mathrm{col}_r + \mathrm{col}_s - Q - \eta_n,\qquad \Delta x'_n = \Delta\left(x'_2\right)_n = \delta^n_x\cdot w,$$
$$\delta^n_y = \mathrm{row}_s - \mathrm{row}_r - \frac{D}{w},\qquad \Delta y'_n = \Delta\left(x'_1\right)_n = \delta^n_y\cdot w + D. \tag{37}$$
Four samples, S11, S12, S31, and S32, are examined; their data are listed in Table 1.
S11 and S31 are the images of the same object, which was captured in order by CCD1 and CCD2 (Gap 1); S12 and S32 were captured, respectively, by CCD3 and CCD4 (Gap 3). Referring to the auxiliary data, S11 and S31 were captured at the same time, while S12 and S32 were captured at different times, which means that the along-track speeds of the two moving images were quite different. Moreover, the cross-track image offsets in Gap 1 and Gap 3 vary so much that the optical flows were evidently also distorted unevenly and deflected away from the along-track direction. On the other
Table 1: The offsets between overlapped images.

Sample | Row no. (pixel) | δⁿx (pixel) | Δx′n (mm) | δⁿy (pixel) | Δy′n (mm)
S11    | 258             | −25.15      | −0.2200625 | −5.39      | 18.4503
S12    | 423             | −23.78      | −0.2080750 | −7.36      | 18.4331
S31    | 266             | −12.85      | −0.1124375 | −7.66      | 18.4304
S32    | 436             | −12.97      | −0.1134875 | −6.87      | 18.4374
hand, it can be seen in Figures 9 and 10 that the fluctuation of the image offsets in Gap 1 is greater in magnitude than that in Gap 3. All these facts indicate that the distorted optical flows can be detected from plenty of image offsets. We will see later that the nonlinear distribution of the data strengthens the well-posedness of the optical flow inversion algorithm.
4. Remote Sensor Attitude Motion Measurement

In this section, the attitude velocity of the remote sensor is resolved by using the optical flow inversion method. The results of dense registration are applied to produce the conditions of fixed solution for the optical flow equations.
4.1. The Principle of Optical Flow Inversion. For clarity, in frame $C$ the two coordinate components of the image displacement of the $k$th sample element belonging to the $n$th lapped strip pair are written as $\Delta x'_{n,k}$, $\Delta y'_{n,k}$. From (13) and (25), it is easy to show that the contributions to the optical flow owing to orbital motion and the earth's inertial movement vary only very slightly in the short term, such that the corresponding displacements can be regarded as piecewise constants $s_x$, $s_y$.

Let $\tau_{ij}$, $t_{ij}$ be, in order, the two sequential imaging times of the $j$th image sample on the overlapped detectors in the $i$th gap. They are usually recorded in the auxiliary data of the remote sensor. Hence, for every image element, the number of discrete states in optical flow tracing will be
$$N_{ij} = \left[\frac{t_{ij} - \tau_{ij}}{\Delta t}\right] \in \mathbb{Z}^+,\qquad (i = 1 \cdots n,\ j = 1 \cdots m), \tag{38}$$
where $n$ is the number of CCD gaps, $m$ is the number of sample groups, and $\Delta t$ is the time step. We put samples with the same $j$ index into the same group, in which the samples are captured by the prior detectors simultaneously.
We expand (11), substitute it into (14) and (13), and then arrange the scalar optical flow inversion equations in terms of the three axial angular velocity components $\omega_1$, $\omega_2$, and $\omega_3$ (the variables of the inverse problem), yielding the linear optical flow equations.
Figure 11: Coefficients determination according to the current location of the image.
For the $l$th group of samples:

$$\sum_{i=l}^{N_{1l}} c^i_{1l1}\omega^i_1 + c^i_{1l2}\omega^i_2 + c^i_{1l3}\omega^i_3 = \Delta x'_{1l} - s_{x1},$$
$$\sum_{i=l}^{N_{1l}} d^i_{1l1}\omega^i_1 + d^i_{1l2}\omega^i_2 + d^i_{1l3}\omega^i_3 = \Delta y'_{1l} - s_{y1},$$
$$\vdots$$
$$\sum_{i=l}^{N_{nl}} c^i_{nl1}\omega^i_1 + c^i_{nl2}\omega^i_2 + c^i_{nl3}\omega^i_3 = \Delta x'_{nl} - s_{xn},$$
$$\sum_{i=l}^{N_{nl}} d^i_{nl1}\omega^i_1 + d^i_{nl2}\omega^i_2 + d^i_{nl3}\omega^i_3 = \Delta y'_{nl} - s_{yn}. \tag{39}$$
Suppose that the sampling process stops once $m$ groups have been formed. The coefficients are as follows:

$$c_{\mu\nu\kappa}^{i}=\Xi_{\kappa}\left(\mu,\left\lceil\frac{i-\nu+1}{N_{\mu\nu}}\,\mathcal{N}\right\rceil\right),\qquad d_{\mu\nu\kappa}^{i}=\Lambda_{\kappa}\left(\mu,\left\lceil\frac{i-\nu+1}{N_{\mu\nu}}\,\mathcal{N}\right\rceil\right)\quad(\kappa=1,2,3).\tag{40}$$
12 Mathematical Problems in Engineering
Here

$$\Xi_{k}=\begin{pmatrix}\xi_{11k}&\xi_{12k}&\cdots&\xi_{1\mathcal{N}k}\\ \xi_{21k}&\xi_{22k}&\cdots&\xi_{2\mathcal{N}k}\\ \vdots&&&\vdots\\ \xi_{n1k}&\xi_{n2k}&\cdots&\xi_{n\mathcal{N}k}\end{pmatrix},\qquad
\Lambda_{k}=\begin{pmatrix}\lambda_{11k}&\lambda_{12k}&\cdots&\lambda_{1\mathcal{N}k}\\ \lambda_{21k}&\lambda_{22k}&\cdots&\lambda_{2\mathcal{N}k}\\ \vdots&&&\vdots\\ \lambda_{n1k}&\lambda_{n2k}&\cdots&\lambda_{n\mathcal{N}k}\end{pmatrix}.\tag{41}$$
As for the algorithm, to reduce the complexity, all possible values of the coefficients are stored in the matrices $\Xi_k$ and $\Lambda_k$. The accuracy is guaranteed because the coefficients for images moving into the same piece of the region are almost equal to an identical constant over a short period, as explained in Figure 11.
It has been mentioned that the optical flow is not sensitive to the satellite's orbital motion and the earth's rotation in the short term; namely, the possible values are assigned by the following functions:

$$\begin{aligned}
\xi_{ijk}&=\xi_{k}\left(a,e,i_{0},\Omega,\omega,x'_{q},y'_{q},\Delta t\right),\\
\lambda_{ijk}&=\lambda_{k}\left(a,e,i_{0},\Omega,\omega,x'_{q},y'_{q},\Delta t\right),
\end{aligned}\qquad i=1\sim n,\ j=1\sim\mathcal{N},\ q=1\sim\mathcal{N}.\tag{42}$$
Here $\mathcal{N}$ is the number of constant-valued segments in the region encompassing all possible optical flow trajectories. The orbital elements and the integral step size $\Delta t$ are common to all functions. Furthermore, when long-term measurements are executed, $\Xi_k$ and $\Lambda_k$ only need to be renewed according to the current parameters.

The coefficient matrix of the optical flow equations for the $j$th ($1\le j\le m$) group can be written as
$$\mathbf{C}_{j}=\begin{pmatrix}
c_{1j1}^{1}&c_{1j2}^{1}&c_{1j3}^{1}&\cdots&c_{1j1}^{N_{1j}}&c_{1j2}^{N_{1j}}&c_{1j3}^{N_{1j}}&\cdots&0&0\\
d_{1j1}^{1}&d_{1j2}^{1}&d_{1j3}^{1}&\cdots&d_{1j1}^{N_{1j}}&d_{1j2}^{N_{1j}}&d_{1j3}^{N_{1j}}&\cdots&0&0\\
\vdots&&&&&&&&&\vdots\\
c_{qj1}^{1}&c_{qj2}^{1}&c_{qj3}^{1}&\cdots&\cdots&\cdots&\cdots&c_{qj1}^{N_{qj}}&c_{qj2}^{N_{qj}}&c_{qj3}^{N_{qj}}\\
d_{qj1}^{1}&d_{qj2}^{1}&d_{qj3}^{1}&\cdots&\cdots&\cdots&\cdots&d_{qj1}^{N_{qj}}&d_{qj2}^{N_{qj}}&d_{qj3}^{N_{qj}}\\
\vdots&&&&&&&&&\vdots\\
c_{nj1}^{1}&c_{nj2}^{1}&c_{nj3}^{1}&\cdots&\cdots&c_{nj1}^{N_{nj}}&c_{nj2}^{N_{nj}}&c_{nj3}^{N_{nj}}&\cdots&0\\
d_{nj1}^{1}&d_{nj2}^{1}&d_{nj3}^{1}&\cdots&\cdots&d_{nj1}^{N_{nj}}&d_{nj2}^{N_{nj}}&d_{nj3}^{N_{nj}}&\cdots&0
\end{pmatrix}_{2n\times 3N_{qj}},\tag{43}$$

where $N_{qj}=\max\{N_{1j},\ldots,N_{nj}\}$.

Consequently, as we organize the equations for all groups, the global coefficient matrix will be given in the following form:
$$\mathbf{C}=\begin{pmatrix}
[\mathbf{C}_{1}]_{2n\times 3N_{q1}}&0&\cdots&\cdots&0\\
0&[\mathbf{C}_{2}]_{2n\times 3N_{q2}}&0&\cdots&0\\
&&\ddots&&\\
&&&[\mathbf{C}_{j}]_{2n\times 3N_{qj}}&\\
&&&&\ddots\\
&&&&[\mathbf{C}_{m}]_{2n\times 3N_{qm}}
\end{pmatrix}_{2nm\times 3N_{\max}}.\tag{44}$$

$\mathbf{C}$ is a quasidiagonal partitioned matrix; every subblock has $2n$ rows. The total number of columns of $\mathbf{C}$ is $3N_{\max}$, where $N_{\max}=\max\{N_{q1},\ldots,N_{qm}\}$.
The unknown variables are as follows:

$$[\Omega]_{3N_{\max}\times 1}=\left[\omega_{1}^{1}\ \omega_{2}^{1}\ \omega_{3}^{1}\ \cdots\ \omega_{1}^{N_{\max}}\ \omega_{2}^{N_{\max}}\ \omega_{3}^{N_{\max}}\right]^{T}.\tag{45}$$

The constants are as follows:

$$\begin{aligned}
\Delta\mathbf{u}_{2mn\times 1}&=\left[\Delta x'_{11}\ \Delta y'_{11}\ \cdots\ \Delta x'_{n1}\ \Delta y'_{n1}\ \cdots\ \Delta x'_{1m}\ \Delta y'_{1m}\ \cdots\ \Delta x'_{nm}\ \Delta y'_{nm}\right]^{T},\\
\mathbf{s}_{2mn\times 1}&=\left[s_{x1}\ s_{y1}\ \cdots\ s_{xn}\ s_{yn}\ \cdots\ s_{x1}\ s_{y1}\ \cdots\ s_{xn}\ s_{yn}\right]^{T}.
\end{aligned}\tag{46}$$
[Figure 12: The flow chart of the attitude motion measurement. Preliminary information acquisition: (1) select the original template $T_1$ centered on the $\kappa$th sampling pixel from the referenced image captured by the prior CCD; (2) predict the deformation and displacement of every element via optical flow prediction based on the auxiliary data of the satellite, and reconstruct a new deformed image $T'_1$; (3) reconfigure the deformed image via an image resampling process to form a new template $T_2$; (4) use the normalized cross-correlation algorithm to register $T_2$ on the sensed image captured by the posterior CCD; (5) measure the relative offsets in the image frame between $T_1$ and the sensed window; (6) compute the precise offset in the photography frame between $T_2$ and the sensed window by adding the optical flow prediction; iterate $\kappa=\kappa+1$ until $\kappa=N_{\max}$. Inverse problem solving: (7) use the offset data as the fixed solution conditions for the optical flow inversion equations and solve for the angular velocity $\vec{\omega}$, for validation and further usages.]
$\Delta\mathbf{u}$ has been measured by image dense registration; $\mathbf{s}$ can be determined from the auxiliary data of the sensors. The global equations are expressed by

$$\mathbf{C}_{2mn\times 3N_{\max}}\cdot[\Omega]_{3N_{\max}\times 1}=\Delta\mathbf{u}_{2mn\times 1}-\mathbf{s}_{2mn\times 1}.\tag{47}$$
As for this problem, it is easy to verify that the conditions (1) $2nm>3N_{\max}$ and (2) $\operatorname{rank}(\mathbf{C})=3N_{\max}$ are easily met in practical work. To solve (47), well-posedness is the critical issue for the inverse problem. The strong nonlinearity and anisotropy of the optical flow greatly reduce the correlation between the coefficients in $\mathbf{C}$ and meanwhile increase the well-posedness of the solution. The least-squares solution of (47) can be obtained:

$$[\Omega]=\left(\mathbf{C}^{T}\mathbf{C}\right)^{-1}\mathbf{C}^{T}(\Delta\mathbf{u}-\mathbf{s}).\tag{48}$$
The well-posedness can be examined by applying Singular Value Decomposition (SVD) to $\mathbf{C}$. Consider the nonnegative definite matrix $\mathbf{C}^{T}\mathbf{C}$, whose eigenvalues are given in order $\lambda_1\ge\lambda_2\ge\cdots\ge\lambda_{3N_{\max}}$:

$$\mathbf{C}=\mathbf{U}[\sigma]\mathbf{V}^{T},\tag{49}$$

where $\mathbf{U}_{2mn\times 2mn}$ and $\mathbf{V}_{3N_{\max}\times 3N_{\max}}$ are unit orthogonal matrices, and the singular values are $\sigma_i=\sqrt{\lambda_i}$. The well-posedness of the solution is acceptable if the condition number $\kappa(\mathbf{C})=\sigma_1/\sigma_{3N_{\max}}\le tol$.
Associating the process of inverse problem solving in Section 4 with the process of preliminary information acquisition in Section 3, the whole algorithm for the remote sensor's attitude measurement is illustrated in the flow chart in Figure 12.
4.2. Experimental Results and Validation. In the experiment, 72940 samples on 7 image strip pairs were involved. To keep the values in $\Xi$ and $\Lambda$ nearly invariant, we distributed these samples into 20 subspaces and solved for the three axial components of the angular velocity. According to Shannon's sampling theorem, the measurable frequency $f_c$ is expected to reach up to half of the line rate of the TDI CCD; for the experiment, $f_c\approx 1.749\,\mathrm{kHz}$. The $\omega_i\sim t$ curves over $0\,\mathrm{s}\sim 0.148\,\mathrm{s}$ are shown in Figure 13.

In this period, $\omega_{2\max}=0.001104^{\circ}/\mathrm{s}$ and $\omega_{1\max}=0.001194^{\circ}/\mathrm{s}$. The signal of $\omega_3(t)$ fluctuates around the mean value $\bar{\omega}_3=0.01752^{\circ}/\mathrm{s}$. It is not hard to infer that high frequency jitters
[Figure 13: Solutions for the angular velocities of the remote sensor. Three panels plot $\omega_1$, $\omega_2$ (scale $\times 10^{-3}$, deg/s) and $\omega_3$ (deg/s) against imaging time (s).]
were perturbing the remote sensor. Besides, compared with the signals of $\omega_1(t)$ and $\omega_2(t)$, the low frequency components in $\omega_3(t)$ are higher in magnitude. Actually, for this remote sensor, the satellite yaw angle needs to be regulated in real time to compensate for the image rotation on the focal plane, such that the detectors can always scan along the direction of image motion. Based on the auxiliary data, the image motion velocity vector $\vec{V}$ of the central pixel in the FOV can be computed, so the optimal yaw motion in principle will be

$$\psi_{t}^{*}=\frac{v_{y'}}{v_{x'}},\qquad \omega_{3}^{*}(t)=\dot{\psi}_{t}^{*}=\frac{\dot{v}_{y'}v_{x'}-v_{y'}\dot{v}_{x'}}{v_{x'}^{2}}.\tag{50}$$
The mean value of $\omega_3^{*}(t)$ is $\bar{\omega}_3^{*}=0.01198^{\circ}/\mathrm{s}$. We attribute $\Delta\omega_3^{*}=\bar{\omega}_3-\bar{\omega}_3^{*}=0.00554^{\circ}/\mathrm{s}$ to the error of satellite attitude control.

In order to validate the measurement, the technique of template reconfiguration was implemented again to check the expected phenomenon that, based on the high-accuracy information, the correlations between the new templates and $T_s$ should be further improved. In addition, the distribution of $\gamma$ near $\gamma_{\max}$ should become more compact, which is easy to understand, since much more useful information about the remote sensor's motion is introduced into the template reconstructions and increases the similarities between the lapped images.
Unlike the processing in image dense registration, in the validation phase larger original templates are selected. Let $T_1$ be the referenced image template centered at the examined element, $T_2$ the new template reconfigured by the rough prediction of optical flow, $\hat{T}_2$ the new template reconfigured based on the precision attitude motion measurement, and $T_s$ the template on the sensed image centered at the registration pixel. For all templates, $M=N=101$. The distributions of the normalized cross-correlation coefficients corresponding to the referenced template centered on the sample selected in the No. 1000 row belonging to the No. 7 CCD, with the sensed image belonging to the No. 8 CCD, are illustrated in Figure 14.

Panel (a) shows the situation for $T_1$ and $T_s$, (b) for $T_2$ and $T_s$, and (c) for $\hat{T}_2$ and $T_s$. The compactness of the data is characterized by the peak value $\gamma_{\max}$ and the location variances $\sigma_x^2$, $\sigma_y^2$:
$$\sigma_x^{2}=\frac{\sum_{i=1}^{M}\sum_{j=1}^{M}\gamma_{ij}\cdot(i-x_{\max})^{2}}{\sum_{i=1}^{M}\sum_{j=1}^{M}\gamma_{ij}},\qquad
\sigma_y^{2}=\frac{\sum_{i=1}^{M}\sum_{j=1}^{M}\gamma_{ij}\cdot(j-y_{\max})^{2}}{\sum_{i=1}^{M}\sum_{j=1}^{M}\gamma_{ij}},\tag{51}$$

where $x_{\max}$ and $y_{\max}$ are, respectively, the column and row number of the peak-valued location.
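Equation (51) amounts to a $\gamma$-weighted second moment about the peak; a sketch on a synthetic, sharply peaked correlation surface (the Gaussian test surface is illustrative):

```python
import numpy as np

def peak_compactness(gamma):
    """Peak value and location variances of a correlation surface, per eq. (51).
    Here gamma[i, j] is indexed so that i runs along x and j along y."""
    i, j = np.indices(gamma.shape)
    x_max, y_max = np.unravel_index(np.argmax(gamma), gamma.shape)
    total = gamma.sum()
    var_x = (gamma * (i - x_max) ** 2).sum() / total
    var_y = (gamma * (j - y_max) ** 2).sum() / total
    return gamma.max(), var_x, var_y

# A 101x101 Gaussian surface with per-axis variance 4: the weighted second
# moments recover roughly that value, and the peak is 1 at the center.
ax = np.arange(101)
g = np.exp(-0.5 * ((ax[:, None] - 50) ** 2 + (ax[None, :] - 50) ** 2) / 4.0)
print(float(peak_compactness(g)[0]))  # 1.0
```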
In case (a), $\gamma_{\max}(a)=0.893$, with standard deviations $\sigma_x(a)=5.653$ and $\sigma_y(a)=8.192$; in case (b), $\gamma_{\max}(b)=0.918$, $\sigma_x(b)=4.839$, and $\sigma_y(b)=6.686$; in case (c), $\gamma_{\max}(c)=0.976$, and the variance sharply shrinks to $\sigma_x(c)=3.27$, $\sigma_y(c)=4.06$. In Table 2, some other samples at intervals of 1000 rows are also examined. The samples can be regarded as independent of each other.

Judging from the results, the performances in case (c) are better than those in case (b) and much better than those in case (a), since the precise attitude motion measurements enhance the precision of the optical flow inversion so as to improve the similarities between the new templates and the sensed images. Note that, although in case (b) the variance decreases only slightly, as analyzed in Section 3.2, the offsets of the centroids from the peaks have been corrected well compared with case (a) by the use of the rough optical flow predictions.
4.3. Summary and Discussions. From the preceding sections we can see that, compared with ordinary NCC, the precision of image registration is greatly improved, owing to the assistance of the template reconfiguration technique. By implementing the auxiliary data from the space-borne sensors in the optical flow prediction, the relative deformations between the lapped image pairs can be computed with considerable accuracy. Afterwards, they are used to estimate the gray values of the corresponding parts of the sensed images and to help construct a new template for registration. As we know, the space-borne sensors may give the middle and low frequency components of the imager's attitude motion with excellent precision. Thus, compared with classical direct template-based registration algorithms, the similarity between the reconfigured template and the sensed images may greatly increase. Furthermore, the minor deformations attributed to high frequency jitters can be detected by subpixel registration between the reconfigured templates and the sensed images. This point of view is the exact basis of high frequency jitter measurement with optical flow inversion.
5. Conclusion

In this paper, optical flows and time-varying image deformation in space dynamic imaging are analyzed in detail. The nonlinear and anisotropic image motion velocity and optical flows are utilized to strengthen the well-posedness of the inverse problem of precise attitude measurement by the optical
[Three correlation-coefficient distribution panels (a), (b), and (c); axes: spatial domain X and Y in pixels, from 0 to 100.]
Figure 14: Normalized cross-correlations comparison. ((a) shows the distribution of $\gamma$ from the direct NCC algorithm; (b) shows the distribution of $\gamma$ after template reconfiguration with optical flow prediction; (c) shows the distribution of $\gamma$ derived from posterior template reconfiguration with the high-accuracy sensor attitude measurement. It can be noticed that, from left to right, the values of $\gamma$ tend to concentrate around the peak-value location.)
Table 2: Correlation coefficients distribution for registration templates.

Row number    gamma_max (a, b, c)     sigma_x (a, b, c)      sigma_y (a, b, c)
No. 1000      0.893, 0.918, 0.976     5.653, 4.839, 3.27     8.192, 6.686, 4.06
No. 2000      0.807, 0.885, 0.929     8.704, 6.452, 2.13     6.380, 7.342, 5.71
No. 3000      0.832, 0.940, 0.988     4.991, 3.023, 1.55     7.704, 4.016, 1.93
No. 4000      0.919, 0.935, 0.983     5.079, 3.995, 3.61     5.873, 5.155, 3.85
No. 5000      0.865, 0.922, 0.951     5.918, 4.801, 2.37     6.151, 2.371, 2.57
No. 6000      0.751, 0.801, 0.907     12.57, 9.985, 7.89     14.66, 8.213, 2.06
No. 7000      0.759, 0.846, 0.924     11.63, 10.84, 7.14     12.71, 8.267, 4.90
No. 8000      0.884, 0.900, 0.943     8.125, 3.546, 5.42     8.247, 6.770, 2.88
flow inversion method. For the purpose of determining the conditions of fixed solutions of the optical flow equations, information-based image registration algorithms are proposed. We apply rough optical flow prediction to improve the efficiency and accuracy of dense image registration. Based on the results of registration, the attitude motions of remote sensors during imaging are measured by using the precise optical flow inversion method. The experiment on a remote sensor showed that the measurements are achieved with very high accuracy as well as broad bandwidth. This method can be extensively used in remote sensing missions, such as image strip splicing, geometrical rectification, and nonblind image restoration, to promote surveying precision and resolving power.
Conflict of Interests
The authors declare that they have no financial or personal relationships with other people or organizations that could inappropriately influence their work; there is no professional or other personal interest of any nature or kind in any product, service, and/or company that could be construed as influencing the position presented in, or the review of, this paper.
Acknowledgments
This work is supported by the National High Technology Research and Development Program of China (863 Program) (Grants no. 2012AA121503, 2013AA12260, and 2012AA120603) and the National Natural Science Foundation of China (Grant no. 61377012).
References
[1] V. Tchernykh, M. Beck, and K. Janschek, "An embedded optical flow processor for visual navigation using optical correlator technology," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '06), pp. 67–72, Beijing, China, October 2006.
[2] K. Janschek and V. Tchernykh, "Optical correlator for image motion compensation in the focal plane of a satellite camera," in Proceedings of the 15th IFAC Symposium on Automatic Control in Aerospace, Bologna, Italy, 2001.
[3] W. Priedhorsky and J. J. Bloch, "Optical detection of rapidly moving objects in space," Applied Optics, vol. 44, no. 3, pp. 423–433, 2005.
[4] T. Brox and J. Malik, "Large displacement optical flow: descriptor matching in variational motion estimation," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 33, no. 3, pp. 500–513, 2011.
[5] B. Feng, P. P. Bruyant, P. H. Pretorius, et al., "Estimation of the rigid-body motion from three-dimensional images using a generalized center-of-mass points approach," IEEE Transactions on Nuclear Science, vol. 53, no. 5, pp. 2712–2718, 2006.
[6] J. Wang, P. Yu, C. Yan, J. Ren, and B. He, "Space optical remote sensor image motion velocity vector computational modeling, error budget and synthesis," Chinese Optics Letters, vol. 3, no. 7, pp. 414–417, 2005.
[7] A. S. McEwen, M. E. Banks, N. Baugh, et al., "The high resolution imaging science experiment (HiRISE) during MRO's primary science phase (PSP)," Icarus, vol. 205, no. 1, pp. 2–37, 2010.
[8] F. Ayoub, S. Leprince, R. Binet, K. W. Lewis, O. Aharonson, and J.-P. Avouac, "Influence of camera distortions on satellite image registration and change detection applications," in Proceedings of the IEEE International Geoscience and Remote Sensing Symposium (IGARSS '08), pp. II1072–II1075, Boston, Mass, USA, 2008.
[9] S. Leprince, S. Barbot, F. Ayoub, and J.-P. Avouac, "Automatic and precise orthorectification, coregistration, and subpixel correlation of satellite images, application to ground deformation measurements," IEEE Transactions on Geoscience and Remote Sensing, vol. 45, no. 6, pp. 1529–1558, 2007.
[10] S. Leprince, P. Muse, and J.-P. Avouac, "In-flight CCD distortion calibration for pushbroom satellites based on subpixel correlation," IEEE Transactions on Geoscience and Remote Sensing, vol. 46, no. 9, pp. 2675–2683, 2008.
[11] Y. Yitzhaky, R. Milberg, S. Yohaev, and N. S. Kopeika, "Comparison of direct blind deconvolution methods for motion-blurred images," Applied Optics, vol. 38, no. 20, pp. 4325–4332, 1999.
[12] R. C. Hardie, K. J. Barnard, and R. Ordonez, "Fast super-resolution with affine motion using an adaptive Wiener filter and its application to airborne imaging," Optics Express, vol. 19, no. 27, pp. 26208–26231, 2011.
[13] E. M. Blixt, J. Semeter, and N. Ivchenko, "Optical flow analysis of the aurora borealis," IEEE Geoscience and Remote Sensing Letters, vol. 3, no. 1, pp. 159–163, 2006.
[14] M. G. Mozerov, "Constrained optical flow estimation as a matching problem," IEEE Transactions on Image Processing, vol. 22, no. 5, pp. 2044–2055, 2013.
[15] H. Sakaino, "A semitransparency-based optical-flow method with a point trajectory model for particle-like video," IEEE Transactions on Image Processing, vol. 21, no. 2, pp. 441–450, 2012.
[16] D. Korsch, "Closed form solution for three-mirror telescopes, corrected for spherical aberration, coma, astigmatism, and field curvature," Applied Optics, vol. 11, no. 12, pp. 2986–2987, 1972.
[17] G. Naletto, V. da Deppo, M. G. Pelizzo, R. Ragazzoni, and E. Marchetti, "Optical design of the wide angle camera for the Rosetta mission," Applied Optics, vol. 41, no. 7, pp. 1446–1453, 2002.
[18] M. Born, E. Wolf, A. B. Bhatia, and P. C. Clemmow, Principles of Optics: Electromagnetic Theory of Propagation, Interference and Diffraction of Light, 7th edition, 1999.
[19] H. Schaub and J. L. Junkins, Analytical Mechanics of Space Systems, AIAA Education Series, 2002.
[20] C. Wang, F. Xing, J. H. Wang, and Z. You, "Optical flows method for lightweight agile remote sensor design and instrumentation," in International Symposium on Photoelectronic Detection and Imaging, vol. 8908 of Proceedings of SPIE, 2013.
[21] T. Sun, F. Xing, and Z. You, "Optical system error analysis and calibration method of high-accuracy star trackers," Sensors, vol. 13, no. 4, pp. 4598–4623, 2013.
[22] T. Sun, F. Xing, Z. You, and M. Wei, "Motion-blurred star acquisition method of the star tracker under high dynamic conditions," Optics Express, vol. 21, no. 17, pp. 20096–20110, 2013.
[23] L. Younes, "Combining geodesic interpolating splines and affine transformations," IEEE Transactions on Image Processing, vol. 15, no. 5, pp. 1111–1119, 2006.
[24] B. Zitova and J. Flusser, "Image registration methods: a survey," Image and Vision Computing, vol. 21, no. 11, pp. 977–1000, 2003.
[25] Z. L. Song, S. Li, and T. F. George, "Remote sensing image registration approach based on a retrofitted SIFT algorithm and Lissajous-curve trajectories," Optics Express, vol. 18, no. 2, pp. 513–522, 2010.
[26] V. Arevalo and J. Gonzalez, "Improving piecewise linear registration of high-resolution satellite images through mesh optimization," IEEE Transactions on Geoscience and Remote Sensing, vol. 46, no. 11, pp. 3792–3803, 2008.
[27] Z. Levi and C. Gotsman, "D-snake: image registration by as-similar-as-possible template deformation," IEEE Transactions on Visualization and Computer Graphics, vol. 19, no. 2, pp. 331–343, 2013.
[28] R. J. Althof, M. G. J. Wind, and J. T. Dobbins III, "A rapid and automatic image registration algorithm with subpixel accuracy," IEEE Transactions on Medical Imaging, vol. 16, no. 3, pp. 308–316, 1997.
[29] W. Tong, "Subpixel image registration with reduced bias," Optics Letters, vol. 36, no. 5, pp. 763–765, 2011.
[30] Y. Bentoutou, N. Taleb, K. Kpalma, and J. Ronsin, "An automatic image registration for applications in remote sensing," IEEE Transactions on Geoscience and Remote Sensing, vol. 43, no. 9, pp. 2127–2137, 2005.
[31] L. S. Ming, L. Yan, and L. Jindong, "Mapping satellite-1 transmission type photogrammetric and remote sensing," Journal of Remote Sensing, vol. 16, supplement, pp. 10–16, 2012 (Chinese).
[32] J. P. Lewis, "Fast template matching," Vision Interface, vol. 95, pp. 120–123, 1995.
[33] H. Foroosh, J. B. Zerubia, and M. Berthod, "Extension of phase correlation to subpixel registration," IEEE Transactions on Image Processing, vol. 11, no. 3, pp. 188–200, 2002.
Furthermore, if the camera is fixed to the satellite platform, then $\dot{\mathbf{M}}=\mathbf{0}$, $\dot{\bar{e}}_i=0$. Consequently, (22) becomes

$$\begin{aligned}
\mathscr{F}_{i}(t,\tau)={}&\frac{(\delta'_{i})_{\tau}}{\delta l}\\
={}&(-1)^{m}f'\,\frac{(\dot{\bar{C}}_{\tau}\cdot\bar{e}_{3})(\bar{r}\cdot\bar{e}_{i})}{(\bar{r}\cdot\bar{e}_{3})^{2}}
+(-1)^{m}f'\,\frac{\bar{r}\cdot\bar{e}_{i}}{(\bar{r}\cdot\bar{e}_{3})^{2}}\left(\mathbf{M}\mathbf{A}\mathbf{T}_{OI}^{-1}\mathbf{T}_{EI}\,\bar{e}_{i}\right)\cdot\bar{E}_{\tau}\\
&+\frac{(-1)^{m-1}f'}{\bar{r}\cdot\bar{e}_{3}}\left(\mathbf{M}\dot{\mathbf{A}}\mathbf{T}_{OI}^{-1}\mathbf{T}_{EI}+\mathbf{M}\mathbf{A}\dot{\mathbf{T}}_{OI}^{-1}\mathbf{T}_{EI}+\mathbf{M}\mathbf{A}\mathbf{T}_{OI}^{-1}\dot{\mathbf{T}}_{EI}\right)\bar{E}_{\tau}\cdot\bar{e}_{i}.
\end{aligned}\tag{24}$$
For the motionless scene on the earth's surface, $\bar{E}_{\tau}$ is a time-independent but space-dependent unit tangent vector, which meanwhile represents a specific orientation on the ground. Moreover, the physical meaning of the function $\mathscr{F}_{i}(t,\tau)$ is the image deformation of a unit-length curve on the curved surface along the direction of $\bar{E}_{\tau}$ in a unit time interval; that is, the instantaneous space-time deforming rate of the image of the object along $\bar{E}_{\tau}$.

Consequently, in dynamic imaging, the macroscopic deformation of the moving image can be derived from the integral of $\mathscr{F}_{i}(t,\tau)$ in space and time. Referring to Figure 1, let $\Gamma$ be an arbitrary curve of the extended object on the earth, let $\Gamma'$ be its image, let $p,q\in\Gamma$ be two arbitrary points, and let their Gaussian images be $p',q'\in\Gamma'$. Let $\bar{E}_{\tau}=\bar{T}(s)$ be a vector-valued function of the arc length $s$, which is time-invariant in frame $E$ and gives the tangent vectors along the curve.

So the image deformation taking place during $t_1\sim t_2$ can be described as

$$\left[(x'_{p})_{i}\right]_{t_{1}}^{t_{2}}-\left[(x'_{q})_{i}\right]_{t_{1}}^{t_{2}}=\int_{\Gamma}\int_{t_{1}}^{t_{2}}\mathscr{F}_{i}\circ\bar{T}\,dt\,ds,\tag{25}$$

in which $\mathscr{F}_{i}\circ\bar{T}=\mathscr{F}_{i}[t,\bar{T}(s)]$.

Now, in terms of (24) and (25), we can see that the image deformation is also anisotropic and nonlinear, depending not only on the optical flow's evolution but also on the geometry of the scene.
3.2. Dense Image Registration through Optical Flow Prediction. As mentioned in the preceding sections, optical flow is the most precise model for describing image motion and time-varying deformation. Conversely, it is possible to invert optical flow with high accuracy if the image motion and deformation can be detected. As we know, the low frequency signal components of the angular velocity are easy to sense precisely with attitude sensors such as gyroscopes and star trackers, but the higher frequency components are hard to measure with high accuracy. In fact, however, perturbations from high frequency jittering are the critical cause of motion blurring and local image deformations, since the influences of the low frequency components of attitude motion are easier to restrain in imaging by regulating the remote sensor.
Since (13) and (25) are very sensitive to the attitude motion, the angular velocity can be measured with high resolution as well as broad frequency bandwidth so long as the image motion and deformation are determined with a certain precision. Fortunately, the lapped images of the overlapped detectors meet this need, because they are captured in turn as the same parts of the optical flow pass through the adjacent detectors sequentially. Without losing generality, we will investigate the most common form of CCD layout, in which two rows of detectors are arranged in parallel. The time-phase relations of image formation due to optical flow evolution are illustrated in Figure 3, where the moving image elements $\alpha_1,\alpha_2,\ldots$ (in the left gap) and $\beta_1,\beta_2,\ldots$ (in the right gap) are captured first at the same time, since their optical flows pass through the prior detectors. However, because of the nonuniform optical flows, they will not be captured simultaneously by the posterior detectors. Therefore, the geometrical structures of the photographs will be time-varying and nonlinear. It is evident from Figure 3 that the displacements and relative deformations in frame $C$ between the lapped images can be determined by measuring the offsets of the sample image element pairs in frame $P$.
Let $\Delta y'=\Delta x'_1$ and $\Delta x'=\Delta x'_2$ be the relative offsets of the same object's image on the two photos; they are all calibrated in $C$ or $F$. We will measure them by image registration.
As far as image registration methods are concerned, one of the hardest problems is complex deformation, which tends to weaken the similarity between the referenced images and the sensed images, so that it may introduce large deviations from the true values or even lead to algorithm failure. Some typical methods have been studied in [23–25]. Generally, most of them concentrate on several simple deformation forms, such as affine, shear, translation, rotation, or their combinations, instead of investigating more sophisticated dynamic deformation models. In [26–30], some effective approaches have been proposed to increase the accuracy and robustness of the algorithms, with reasonable models built according to the specific properties of the object images.
For conventional template-based registration methods, once a template has been extracted from the referenced image, its information about gray values, shape, and frequency spectrum does not increase, since no additional physical information resources are offered. But actually such information has changed by the time the optical flows arrive at the posterior detectors; therefore, the cross-correlations between the templates and the sensed images certainly decrease. So, in order to detect the minor image motions and complex deformations between the lapped images, high-accuracy registration is indispensable, which means that a more precise model should be implemented. We treat it using a technique called template reconfiguration. In summary, the method is built on the idea of keeping complete the information about the optical flows.
[Figure 3: Nonlinear image velocity field and optical flow trajectories influence the time-phase relations between the lapped images captured by the adjacent overlapped detectors. Labels: axes $x'$, $y'$; prior and posterior CCDs; elements $\alpha_1,\alpha_2$, $\beta_1,\beta_2$; offset $\Delta x'$; overlap $\eta$.]
In operation, as indicated in Figure 3, we take the lapped images captured by the detectors in the prior array as the referenced images and the images captured by the posterior detectors as the sensed images. Firstly, we rebuild the optical flows based on the rough measurements of the space-borne sensors and then reconfigure the original templates to construct new templates whose morphologies are more approximate to the corresponding parts of the sensed images. With this process, information about the imaging procedure is added into the new templates so as to increase their degree of similarity to the sensed images. The method may dramatically raise the accuracy of dense registration, such that the high-accuracy offsets between the lapped image pairs can be determined.
In the experiment we examined Mapping Satellite-1, a Chinese surveying satellite operating in a 500 km sun-synchronous orbit, which is used for high-accurate photogrammetry [31]; its structure is shown in Figure 4. One of the effective payloads, the three-line-array panchromatic CCD camera, has good geometrical accuracy: its ground pixel resolution is superior to 5 m, its spectral range is 0.51 μm ∼ 0.69 μm, and its swath is 60 km. Another payload, the high resolution camera, is designed with a Cook-TMA optical system, which gives a wide field of view [16, 17], and its panchromatic spatial resolution can reach 2 m.
In engineering, for the purpose of improving the image quality and surveying precision, high-accuracy measurements of jitter and attitude motion are essential for posterior processing. Thus here we investigate the images and the auxiliary data of the large FOV high resolution camera to deal with the problem. The experimental photographs were captured with 10° side looking. The focal plane of the camera
Figure 4: The structure of Mapping Satellite-1 and its effective payloads.
consists of 8 panchromatic TDI CCD detectors, and there are η = 96 physically lapped pixels between each other.
The scheme of the processing in registering one image element χ is illustrated in Figure 5.
Step 1. Set the original lapped image strips (the images which were acquired directly by the detectors, without any postprocessing) in frame C.
Step 2. Compute the deformations of all image elements on the referenced template with respect to their optical flow trajectories.
We extract the original template from the referenced image, denoted as $T_1$, which consists of $N^2$ square elements, that is, $\dim(T_1) = N \times N$. Let χ be its central element and $w$ the width of each element; here $w = 8.75$ μm. Before the moving image is captured by the posterior detector, in terms of (25), the current shapes and energy distribution of the elements can be predicted by the optical flow based on the auxiliary data of the remote sensor.
In order to simplify the algorithm, a first order approximation is allowed without introducing significant errors. This approximation means that the shape of every image element is always quadrilateral. Linear interpolations are carried out to determine the four sides according to the deformations along the radial directions of the vertexes, as shown in Figure 5. The unit radial vectors are denoted by $\tau'_1 \sim \tau'_4$ in frame C:

$$\tau'_1 = \frac{\sqrt{2}}{2}\mathbf{e}_1 - \frac{\sqrt{2}}{2}\mathbf{e}_2, \qquad \tau'_3 = -\frac{\sqrt{2}}{2}\mathbf{e}_1 + \frac{\sqrt{2}}{2}\mathbf{e}_2,$$
$$\tau'_2 = \frac{\sqrt{2}}{2}\mathbf{e}_1 + \frac{\sqrt{2}}{2}\mathbf{e}_2, \qquad \tau'_4 = -\frac{\sqrt{2}}{2}\mathbf{e}_1 - \frac{\sqrt{2}}{2}\mathbf{e}_2. \tag{26}$$
Suppose the image point $p'$ is the center of an arbitrary element $\Sigma'$ in $T_1$. Let $\Sigma$ be the area element on the earth surface which is conjugate to $\Sigma'$. The four unit radial vectors of the vertexes
Figure 5: Optical flow prediction and template reconfiguration.
on $\Sigma$, $\tau_1 \sim \tau_4$, are conjugate to $\tau'_1 \sim \tau'_4$ and tangent to the earth surface at $p$. From the geometrical relations we have

$${}^{C}\tau_i = (-1)^m \frac{\mathbf{r}' \times \tau'_i \times {}^{C}\mathbf{n}_p}{\left|\mathbf{r}' \times \tau'_i \times {}^{C}\mathbf{n}_p\right|},$$
$${}^{E}\tau_i = \mathbf{T}^{-1}_{EI}\mathbf{T}_{OI}\mathbf{A}^{-1}\mathbf{M}^{-1}\,{}^{C}\tau_i,$$
$${}^{C}\mathbf{n}_p = \mathbf{M}\mathbf{A}\mathbf{T}^{-1}_{OI}\mathbf{T}_{EI}\,{}^{E}\mathbf{n}_p, \tag{27}$$
where ${}^{E}\mathbf{n}_p$ is the unit normal vector of $\Sigma$ at $p$. We predict the deformations along $\tau_1 \sim \tau_4$ during $t_1 \sim t_2$ according to the measurements of GPS, star trackers, and gyroscopes, as explained in Figure 6; $t_1$ is the imaging time on the prior detector and $t_2$ is the imaging time on the posterior detector:

$$[\delta x'_1]^{\Delta t}_{\tau_k} = [\delta x'_1]^{t_2}_{\tau_k} - [\delta x'_1]^{t_1}_{\tau_k},$$
$$[\delta x'_2]^{\Delta t}_{\tau_k} = [\delta x'_2]^{t_2}_{\tau_k} - [\delta x'_2]^{t_1}_{\tau_k} \quad (k = 1 \sim 4). \tag{28}$$
The shape of the deformed image $\Sigma'_{t_2}$ can be obtained through linear interpolation with

$$[\delta \mathbf{r}']^{\Delta t}_{\tau_k} = \left([\delta x'_1]^{\Delta t}_{\tau_k},\ [\delta x'_2]^{\Delta t}_{\tau_k}\right). \tag{29}$$
Step 3. Reconfigure the referenced template $T_1$ according to the optical flow prediction and then get a new template $T_2$.

Let $T'_1$ be the deformed image of $T_1$ computed in Step 2, and let $\chi = B_{ij}$ be the central element of $T'_1$; integers $i$ and $j$ are, respectively, the row number and column number of $B_{ij}$. The gray value $l_{ij}$ of each element in $T'_1$ is equal to its counterpart in $T_1$ with the same indexes. In addition, we initialize a null template $T_0$ whose shape and orientation are identical to $T_1$; the central element of $T_0$ is denoted by $T_{ij}$.
Figure 6: Deformation of a single element.
Then we cover $T_0$ upon $T'_1$ and let their centers coincide, that is, $T_{ij} = B_{ij}$, as shown in Figure 7. Denote the vertexes of $T'_1$ as $V^k_{ij}$ $(k = 1 \sim 4)$. Therefore the connective relation for adjacent elements can be expressed by $V^1_{ij} = V^2_{i,j-1} = V^3_{i-1,j-1} = V^4_{i-1,j}$.
Next we reassign the gray value $h'_{ij}$ to $T_{ij}$ $(i = 1 \cdots N,\ j = 1 \cdots N)$ in sequence to construct a new template $T_2$. The process is a simulation of the image resampling that occurs when the optical flow arrives at the posterior detector, as indicated in Figure 3. That is,

$$h'_{ij} = \sum_{m=i-1}^{i+1}\sum_{n=j-1}^{j+1} \eta_{mn}\, l_{mn}. \tag{30}$$
Weight coefficient $\eta_{mn} = S_{mn}/w^2$, where $S_{mn}$ is the area of the intersecting polygon of $B_{mn}$ with $T_{ij}$.
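The area-weighted resampling in (30) can be sketched as follows. This is a minimal illustration, not the authors' code: it assumes convex quadrilateral elements, and obtains the overlap areas $S_{mn}$ by Sutherland-Hodgman clipping followed by the shoelace formula (all function names are ours).

```python
import numpy as np

def clip_polygon(subject, clip):
    # Sutherland-Hodgman: clip a convex polygon against a convex window;
    # both are counter-clockwise lists of (x, y) vertices.
    def inside(p, a, b):
        return (b[0]-a[0])*(p[1]-a[1]) - (b[1]-a[1])*(p[0]-a[0]) >= 0
    def intersect(p, q, a, b):
        x1, y1, x2, y2 = *p, *q
        x3, y3, x4, y4 = *a, *b
        den = (x1-x2)*(y3-y4) - (y1-y2)*(x3-x4)
        t = ((x1-x3)*(y3-y4) - (y1-y3)*(x3-x4)) / den
        return (x1 + t*(x2-x1), y1 + t*(y2-y1))
    output = list(subject)
    for i in range(len(clip)):
        a, b = clip[i], clip[(i+1) % len(clip)]
        polygon, output = output, []
        for j in range(len(polygon)):
            p, q = polygon[j], polygon[(j+1) % len(polygon)]
            if inside(q, a, b):
                if not inside(p, a, b):
                    output.append(intersect(p, q, a, b))
                output.append(q)
            elif inside(p, a, b):
                output.append(intersect(p, q, a, b))
        if not output:
            break
    return output

def polygon_area(poly):
    # Shoelace formula; degenerate polygons have zero area.
    if len(poly) < 3:
        return 0.0
    x = np.array([v[0] for v in poly])
    y = np.array([v[1] for v in poly])
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

def resample_element(square, cells, grays, w):
    # h'_ij = sum_mn (S_mn / w^2) * l_mn over the 3x3 neighbourhood, as in (30).
    return sum(polygon_area(clip_polygon(quad, square)) / w**2 * l
               for quad, l in zip(cells, grays))
```

For instance, a unit cell shifted by half a pixel in both directions overlaps the target element by an area of 0.25, so its gray value enters $h'_{ij}$ with weight 0.25.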
Figure 7: Template reconfiguration.
Step 4. Compute normalized cross-correlation coefficients between $T_2$ and the sensed image, and then determine the subpixel offset of $T_2$ relative to the sensed image in frame P.
Firstly, for this method the search space on the sensed image can be contracted considerably, since the optical flow trajectories for the referenced elements have been predicted in Step 2. Assume that the search space is $T_s$, with $\dim(T_s) = M \times M$. When $T_{ij}$ moves to the pixel $(n_1, n_2)$ on $T_s$, the normalized cross-correlation (NCC) coefficient is given by
$$\gamma(n_1, n_2) = \frac{\sum_{x,y}\left[g(x,y) - \bar{g}_{xy}\right]\left[h(x-n_1, y-n_2) - \bar{h}\right]}{\left\{\sum_{x,y}\left[g(x,y) - \bar{g}_{xy}\right]^2 \sum_{x,y}\left[h(x-n_1, y-n_2) - \bar{h}\right]^2\right\}^{0.5}}, \tag{31}$$
where $\bar{g}_{xy}$ is the mean gray value of the segment of $T_s$ that is masked by $T_2$ and $\bar{h}$ is the mean of $T_2$. Equation (31) requires approximately $N^2(M-N+1)^2$ additions and $N^2(M-N+1)^2$ multiplications, whereas an FFT implementation needs about $12M^2\log_2 M$ real multiplications and $18M^2\log_2 M$ real additions/subtractions [32, 33].
At the beginning we take $M = 101$, $N = 7$ and compute the NCC coefficient; when $M$ is much larger than $N$, the calculation in the spatial domain is efficient. Suppose that the peak value $\gamma_{\max}$ is taken at the coordinate $(k, m)$, $k, m \in \mathbb{Z}$, in the sensed window. We then reduce the search space to a smaller one of dimension $47 \times 47$ centered on $T_s(k, m)$. Next, the subpixel registration is realized by the phase correlation algorithm with larger $M$ and $N$ to suppress the system errors owing to the deficiency of detailed textures on the photo; here we take $M = 47$, $N = 23$. Let the subpixel offset between the two registering image elements be denoted as $\delta_x$ and $\delta_y$ in frame P.
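The coarse spatial-domain NCC search of (31) can be sketched as below. This is a simplified illustration under the assumption of square windows; the function names are ours, not from the paper.

```python
import numpy as np

def ncc_map(template, search):
    # Normalized cross-correlation (31) of an N x N template over every
    # placement inside an M x M search window, computed in the spatial domain.
    N = template.shape[0]
    M = search.shape[0]
    t = template - template.mean()
    t_norm = np.sqrt((t**2).sum())
    out = np.zeros((M - N + 1, M - N + 1))
    for n1 in range(M - N + 1):
        for n2 in range(M - N + 1):
            g = search[n1:n1+N, n2:n2+N]
            gz = g - g.mean()
            den = np.sqrt((gz**2).sum()) * t_norm
            out[n1, n2] = (gz * t).sum() / den if den > 0 else 0.0
    return out

def coarse_peak(template, search):
    # Locate the coarse NCC peak; the paper then contracts the search
    # window around it and refines by phase correlation.
    c = ncc_map(template, search)
    k, m = np.unravel_index(np.argmax(c), c.shape)
    return int(k), int(m), c[k, m]
```

A template cut directly out of the search window is recovered at its true location with a peak coefficient of exactly 1.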
The phase correlation algorithm in the frequency domain becomes more efficient as $N$ approaches $M$ and both have larger scales [28]. Moreover, the Fourier coefficients are normalized to unit magnitude prior to computing the correlation, so that the correlation is based only on phase information and is insensitive to changes in image intensity [27, 29].
Let $\mathbf{G}(u, v)$ be the 2D Discrete Fourier Transform (DFT) of the sensed window; then we have

$$\mathbf{G}(u,v) = \sum_{x=-(N-1)/2}^{(N-1)/2}\ \sum_{y=-(N-1)/2}^{(N-1)/2} g(x,y)\, W_M^{ux} W_M^{vy},$$
$$\mathbf{H}(u,v) = \sum_{x=-(N-1)/2}^{(N-1)/2}\ \sum_{y=-(N-1)/2}^{(N-1)/2} h(x,y)\, W_N^{ux} W_N^{vy}. \tag{32}$$
Here

$$W_N = \exp\left(-j\frac{2\pi}{N}\right). \tag{33}$$
The cross-phase spectrum is given by

$$\mathbf{R}(u,v) = \frac{\mathbf{G}(u,v)\,\mathbf{H}^{*}(u,v)}{\left|\mathbf{G}(u,v)\,\mathbf{H}^{*}(u,v)\right|} = \exp\left(j\phi(u,v)\right), \tag{34}$$
where $\mathbf{H}^{*}$ is the complex conjugate of $\mathbf{H}$. By the inverse Discrete Fourier Transform (IDFT) we have

$$\gamma(n_1, n_2) = \frac{1}{N^2}\sum_{u=-(N-1)/2}^{(N-1)/2}\ \sum_{v=-(N-1)/2}^{(N-1)/2} \mathbf{R}(u,v)\, W_N^{-un_1} W_N^{-vn_2}. \tag{35}$$
Figure 8: Dense image registration for the lapped image strips: CCD1 versus CCD2 (Gap 1, the left two) and CCD3 versus CCD4 (Gap 3, the right two).
Suppose that the new peak $\gamma_{\max}$ appears at $(k', m')$, $k', m' \in \mathbb{Z}$; referring to [27] we have the following relation:

$$\gamma_{\max}(k', m') \approx \frac{\lambda}{N^2} \cdot \frac{\sin\left[\pi(k' + \delta_x)\right]\sin\left[\pi(m' + \delta_y)\right]}{\sin\left[(\pi/N)(k' + \delta_x)\right]\sin\left[(\pi/N)(m' + \delta_y)\right]}. \tag{36}$$
The right side presents the spatial distribution of the normalized cross-correlation coefficients; therefore $(\delta_x, \delta_y)$ can be measured based on it. In practice the constant $\lambda \le 1$; it tends to decrease when small noise exists and equals unity in ideal cases.
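The phase correlation core of (34)–(35) can be sketched with FFTs as below. This is a hedged illustration for equal-sized square windows; the paper's subpixel step additionally fits the kernel (36) to the correlation surface, which we omit here.

```python
import numpy as np

def phase_correlate(g, h):
    # Cross-phase spectrum (34) and its inverse DFT (35); the peak of the
    # resulting surface locates the integer translation between g and h.
    G = np.fft.fft2(g)
    H = np.fft.fft2(h)
    R = G * np.conj(H)
    R /= np.abs(R) + 1e-12          # unit magnitude: phase-only correlation
    gamma = np.real(np.fft.ifft2(R))
    peak = np.unravel_index(np.argmax(gamma), gamma.shape)
    return gamma, peak
```

For a pure cyclic shift $g(x, y) = h(x - d_1, y - d_2)$ the surface is (up to noise) a delta function at $(d_1, d_2)$, which is what makes the estimator insensitive to intensity changes.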
Step 5. Dense registration is executed for the lapped image strips.

Repeating Steps 1∼4, we register the along-track sample images selected from the referenced images to the sensed image. The maximal sample rate can reach up to line-by-line. The continuous procedure is shown in Figure 8, in which the image pairs are marked.
The curves of the relative offsets in frame P are shown in Figures 9 and 10.
Let $\mathrm{col}_r$, $\mathrm{row}_r$ be the column and row indexes of image elements on the referenced image, and let $\mathrm{col}_s$, $\mathrm{row}_s$ be the indexes of the same elements on the sensed image. The total number of columns of each detector is $Q = 4096$ pix, and the vertical distance between the two detector arrays is $D = 18.4975$ mm. According to the results of the registration we get the offsets
Figure 9: The offsets of lapped images captured by CCD1 and CCD2.
Figure 10: The offsets of lapped images captured by CCD3 and CCD4.
of the images at the $n$th gap: $\delta_{nx}$ (cross track) and $\delta_{ny}$ (along track) in frame P, and $\Delta x'_n$, $\Delta y'_n$ (mm) in frame F:

$$\delta_{nx} = \mathrm{col}_r + \mathrm{col}_s - Q - \eta_n, \qquad \Delta x'_n = \Delta(x'_2)_n = \delta_{nx} \cdot w,$$
$$\delta_{ny} = \mathrm{row}_s - \mathrm{row}_r - \frac{D}{w}, \qquad \Delta y'_n = \Delta(x'_1)_n = \delta_{ny} \cdot w + D. \tag{37}$$
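Equation (37) translates directly into code. The sketch below is ours (the function name and the example column/row indexes are illustrative assumptions); $w$ and $D$ are in millimetres as in the text.

```python
def gap_offsets(col_r, row_r, col_s, row_s, eta_n,
                Q=4096, D=18.4975, w=8.75e-3):
    # Offsets at one detector gap per (37): pixel offsets in frame P and
    # millimetre displacements in frame F.
    delta_nx = col_r + col_s - Q - eta_n        # cross track (pixels)
    delta_ny = row_s - row_r - D / w            # along track (pixels)
    return delta_nx, delta_ny, delta_nx * w, delta_ny * w + D
```

Feeding in registration indexes consistent with sample S11 ($\delta_{nx} = -25.15$, $\delta_{ny} = -5.39$) reproduces the Table 1 values $\Delta x'_n = -0.2200625$ mm and $\Delta y'_n = 18.4503$ mm.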
Four pixels, S11, S12, S31, and S32, are examined; their data are listed in Table 1.
S11 and S31 are the images of the same object, which was captured in order by CCD1 and CCD2 (Gap 1); S12 and S32 were captured, respectively, by CCD3 and CCD4 (Gap 3). Referring to the auxiliary data, S11 and S31 were captured at the same time, while S12 and S32 were captured at different times, which means that the along-track speeds of the two moving images were quite different. Moreover, the cross-track image offsets in Gap 1 and Gap 3 vary greatly, which indicates that the optical flows were also distorted unevenly and deflected away from the along-track direction. On the other
Table 1: The offsets between overlapped images.

Sample | Row no. (pixel) | δ_nx (pixel) | Δx′_n (mm) | δ_ny (pixel) | Δy′_n (mm)
S11 | 258 | −25.15 | −0.2200625 | −5.39 | 18.4503
S12 | 423 | −23.78 | −0.2080750 | −7.36 | 18.4331
S31 | 266 | −12.85 | −0.1124375 | −7.66 | 18.4304
S32 | 436 | −12.97 | −0.1134875 | −6.87 | 18.4374
hand, it has been observed in Figures 9 and 10 that the fluctuation of the image offsets taking place in Gap 1 is greater in magnitude than in Gap 3. All these facts indicate that the distorted optical flows can be detected from plenty of image offsets. We will see later that the nonlinear distribution of the data strengthens the well-posedness of the optical flow inversion algorithm.
4. Remote Sensor Attitude Motion Measurement

In this section the attitude velocity of the remote sensor is resolved by using the optical flow inversion method. The results of the dense registration are applied to produce the conditions of fixed solution for the optical flow equations.
4.1. The Principle of Optical Flow Inversion. For clarity, in frame C the two coordinate components of the image displacement of the $k$th sample element belonging to the $n$th lapped strip pair are written as $\Delta x'_{nk}$, $\Delta y'_{nk}$. From (13) and (25) it is easy to show that the contributions to the optical flow owing to the orbital motion and the earth's inertial movement vary very slightly in the short term, such that the corresponding displacements can be regarded as piecewise constants $s_x$, $s_y$.
Let $\tau_{ij}$, $t_{ij}$ be, in order, the two sequential imaging times of the $j$th image sample on the overlapped detectors in the $i$th gap; they are usually recorded in the auxiliary data of the remote sensor. Hence for every image element the number of discrete states in the optical flow tracing will be

$$N_{ij} = \left[\frac{t_{ij} - \tau_{ij}}{\Delta t}\right] \in \mathbb{Z}^{+} \quad (i = 1 \cdots n,\ j = 1 \cdots m), \tag{38}$$
where $n$ is the number of CCD gaps, $m$ is the number of sample groups, and $\Delta t$ is the time step. We set samples with the same $j$ index into the same group, in which the samples are captured by the prior detectors simultaneously.
We expand (11), substitute it into (14) and (13), and then arrange the scalar optical flow inversion equations in terms of the three axial angular velocity components $\omega_1$, $\omega_2$, and $\omega_3$ (the variables in the inverse problem), yielding the linear optical flow equations.
Figure 11: Coefficient determination according to the current location of the image.
For the $l$th group of samples:

$$\sum_{i=l}^{N_{1l}} c^{i}_{1l1}\omega^{i}_{1} + c^{i}_{1l2}\omega^{i}_{2} + c^{i}_{1l3}\omega^{i}_{3} = \Delta x'_{1l} - s_{x1},$$
$$\sum_{i=l}^{N_{1l}} d^{i}_{1l1}\omega^{i}_{1} + d^{i}_{1l2}\omega^{i}_{2} + d^{i}_{1l3}\omega^{i}_{3} = \Delta y'_{1l} - s_{y1},$$
$$\vdots$$
$$\sum_{i=l}^{N_{nl}} c^{i}_{nl1}\omega^{i}_{1} + c^{i}_{nl2}\omega^{i}_{2} + c^{i}_{nl3}\omega^{i}_{3} = \Delta x'_{nl} - s_{xn},$$
$$\sum_{i=l}^{N_{nl}} d^{i}_{nl1}\omega^{i}_{1} + d^{i}_{nl2}\omega^{i}_{2} + d^{i}_{nl3}\omega^{i}_{3} = \Delta y'_{nl} - s_{yn}. \tag{39}$$
Suppose that the sampling process stops after $m$ groups have been founded. The coefficients are as follows:

$$c^{i}_{\mu\nu\kappa} = \Xi_{\kappa}\left(\mu,\ \left\lceil \frac{(i - \nu + 1)\,\mathcal{N}}{N_{\mu\nu}} \right\rceil\right), \qquad d^{i}_{\mu\nu\kappa} = \Lambda_{\kappa}\left(\mu,\ \left\lceil \frac{(i - \nu + 1)\,\mathcal{N}}{N_{\mu\nu}} \right\rceil\right) \quad (\kappa = 1, 2, 3). \tag{40}$$
Here

$$\Xi_{k} = \begin{pmatrix} \xi_{11k} & \xi_{12k} & \cdots & \xi_{1\mathcal{N}k} \\ \xi_{21k} & \xi_{22k} & \cdots & \xi_{2\mathcal{N}k} \\ \vdots & & & \vdots \\ \xi_{n1k} & \xi_{n2k} & \cdots & \xi_{n\mathcal{N}k} \end{pmatrix}, \qquad \Lambda_{k} = \begin{pmatrix} \lambda_{11k} & \lambda_{12k} & \cdots & \lambda_{1\mathcal{N}k} \\ \lambda_{21k} & \lambda_{22k} & \cdots & \lambda_{2\mathcal{N}k} \\ \vdots & & & \vdots \\ \lambda_{n1k} & \lambda_{n2k} & \cdots & \lambda_{n\mathcal{N}k} \end{pmatrix}. \tag{41}$$
As for the algorithm, to reduce the complexity all possible values of the coefficients are stored in the matrixes $\Xi_k$ and $\Lambda_k$. The accuracy is guaranteed because the coefficients for the images moving into the same piece of region are almost equal to an identical constant in a short period, which is explained in Figure 11.
It has been mentioned that the optical flow is not sensitive to the satellite's orbit motion and earth rotation in the short term; namely, the possible values are assigned by the following functions:
$$\xi_{ijk} = \xi_{k}\left(a, e, i_0, \Omega, \omega, x'_q, y'_q, \Delta t\right),$$
$$\lambda_{ijk} = \lambda_{k}\left(a, e, i_0, \Omega, \omega, x'_q, y'_q, \Delta t\right),$$
$$i = 1 \sim n, \quad j = 1 \sim \mathcal{N}, \quad q = 1 \sim \mathcal{N}. \tag{42}$$
Here $\mathcal{N}$ is the number of constant-valued segments in the region encompassing all the possible optical flow trajectories. The orbital elements and the integral step size $\Delta t$ are common to all functions. Furthermore, when long term measurements are executed, $\Xi_k$ and $\Lambda_k$ only need to be renewed according to the current parameters.

The coefficient matrix of the optical flow equations for the $j$th $(1 \le j \le m)$ group can be written as
$$\mathbf{C}_{j} = \begin{pmatrix}
c^{1}_{1j1} & c^{1}_{1j2} & c^{1}_{1j3} & \cdots & c^{N_{1j}}_{1j1} & c^{N_{1j}}_{1j2} & c^{N_{1j}}_{1j3} & \cdots & 0 & 0 \\
d^{1}_{1j1} & d^{1}_{1j2} & d^{1}_{1j3} & \cdots & d^{N_{1j}}_{1j1} & d^{N_{1j}}_{1j2} & d^{N_{1j}}_{1j3} & \cdots & 0 & 0 \\
\vdots & & & & & & & & & \vdots \\
c^{1}_{qj1} & c^{1}_{qj2} & c^{1}_{qj3} & \cdots & \cdots & \cdots & c^{N_{qj}}_{qj1} & c^{N_{qj}}_{qj2} & c^{N_{qj}}_{qj3} \\
d^{1}_{qj1} & d^{1}_{qj2} & d^{1}_{qj3} & \cdots & \cdots & \cdots & d^{N_{qj}}_{qj1} & d^{N_{qj}}_{qj2} & d^{N_{qj}}_{qj3} \\
\vdots & & & & & & & & & \vdots \\
c^{1}_{nj1} & c^{1}_{nj2} & c^{1}_{nj3} & \cdots & \cdots & c^{N_{nj}}_{nj1} & c^{N_{nj}}_{nj2} & c^{N_{nj}}_{nj3} & \cdots & 0 \\
d^{1}_{nj1} & d^{1}_{nj2} & d^{1}_{nj3} & \cdots & \cdots & d^{N_{nj}}_{nj1} & d^{N_{nj}}_{nj2} & d^{N_{nj}}_{nj3} & \cdots & 0
\end{pmatrix}_{2n \times 3N_{qj}}, \tag{43}$$
where $N_{qj} = \max\{N_{1j}, \ldots, N_{nj}\}$. Consequently, as we organize the equations for all groups, the global coefficient matrix is given in the following form:
$$\mathbf{C} = \begin{pmatrix}
[\mathbf{C}_1]_{2n \times 3N_{q1}} & 0 & \cdots & \cdots & 0 \\
0 & [\mathbf{C}_2]_{2n \times 3N_{q2}} & 0 & \cdots & 0 \\
& & \ddots & & \vdots \\
0 & \cdots & & [\mathbf{C}_m]_{2n \times 3N_{qm}} & 0
\end{pmatrix}_{2nm \times 3N_{\max}}. \tag{44}$$
$\mathbf{C}$ is a quasidiagonal partitioned matrix; every subblock has $2n$ rows, and the maximal number of columns of $\mathbf{C}$ is $3N_{\max}$, where $N_{\max} = \max\{N_{q1}, \ldots, N_{qm}\}$.
The unknown variables are as follows:

$$[\boldsymbol{\Omega}]_{3N_{\max} \times 1} = \left[\omega^{1}_{1}\ \omega^{1}_{2}\ \omega^{1}_{3}\ \cdots\ \omega^{N_{\max}}_{1}\ \omega^{N_{\max}}_{2}\ \omega^{N_{\max}}_{3}\right]^{T}. \tag{45}$$
The constants are as follows:

$$\Delta\mathbf{u}_{2mn \times 1} = \left[\Delta x'_{11}\ \Delta y'_{11}\ \cdots\ \Delta x'_{n1}\ \Delta y'_{n1}\ \cdots\ \Delta x'_{1m}\ \Delta y'_{1m}\ \cdots\ \Delta x'_{nm}\ \Delta y'_{nm}\right]^{T},$$
$$\mathbf{s}_{2mn \times 1} = \left[s_{x1}\ s_{y1}\ \cdots\ s_{xn}\ s_{yn}\ \cdots\ s_{x1}\ s_{y1}\ \cdots\ s_{xn}\ s_{yn}\right]^{T}. \tag{46}$$
Figure 12: The flow chart of the attitude motion measurement. (Preliminary information acquisition: (1) select the original template T₁ centered on the κth sampling pixel from the referenced image captured by the prior CCD; (2) predict the deformation and displacement of every element via optical flow prediction based on the auxiliary data, and reconstruct a new deformed image T′₁; (3) reconfigure the deformed image via an image resampling process to form a new template T₂; (4) use the normalized cross-correlation algorithm to register T₂ on the sensed image captured by the posterior CCD; (5) measure the relative offsets in the photography frame between T₂ and the sensed window; (6) compute the precise offset in the image frame between T₁ and the sensed window by adding the optical flow prediction; repeat with κ = κ + 1 until κ = N_max. Inverse problem solving: (7) utilize the offsets data as the fixed solution conditions for the optical flow inversion equations and solve the angular velocity ω, for validation and further usages.)
$\Delta\mathbf{u}$ has been measured by the image dense registration; $\mathbf{s}$ can be determined by the auxiliary data of the sensors. The global equations are expressed by

$$\mathbf{C}_{2mn \times 3N_{\max}} \cdot [\boldsymbol{\Omega}]_{3N_{\max} \times 1} = \Delta\mathbf{u}_{2mn \times 1} - \mathbf{s}_{2mn \times 1}. \tag{47}$$
As for this problem, it is easy to verify that the conditions (1) $2nm > 3N_{\max}$ and (2) $\mathrm{rank}(\mathbf{C}) = 3N_{\max}$ are easily met in practical work. To solve (47), well-posedness is the critical issue for the inverse problem. The strong nonlinearity and anisotropy of the optical flow greatly reduce the correlation between the coefficients in $\mathbf{C}$ and thereby increase the well-posedness of the solution. The least-squares solution of (47) can be obtained as

$$[\boldsymbol{\Omega}] = \left(\mathbf{C}^{T}\mathbf{C}\right)^{-1}\mathbf{C}^{T}\left(\Delta\mathbf{u} - \mathbf{s}\right). \tag{48}$$
The well-posedness can be examined by applying Singular Value Decomposition (SVD) to $\mathbf{C}$. Consider the nonnegative definite matrix $\mathbf{C}^{T}\mathbf{C}$, whose eigenvalues are given in order $\lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_{3N_{\max}}$:

$$\mathbf{C} = \mathbf{U}[\boldsymbol{\sigma}]\mathbf{V}^{T}, \tag{49}$$

where $\mathbf{U}_{2mn \times 2mn}$ and $\mathbf{V}_{3N_{\max} \times 3N_{\max}}$ are unit orthogonal matrices and the singular values are $\sigma_i = \sqrt{\lambda_i}$. The well-posedness of the solution is acceptable if the condition number $\kappa(\mathbf{C}) = \sigma_1/\sigma_{3N_{\max}} \le tol$.
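Steps (47)–(49) amount to a conditioned least-squares solve. Below is a minimal numpy sketch (ours, not the authors' code; the tolerance value is an illustrative assumption):

```python
import numpy as np

def solve_angular_velocity(C, du, s, tol=1e6):
    # Check well-posedness via the SVD condition number (49), then return
    # the least-squares solution (48) of C * Omega = du - s.
    sigma = np.linalg.svd(C, compute_uv=False)
    kappa = sigma[0] / sigma[-1]
    if kappa > tol:
        raise ValueError("ill-posed system: cond(C) = %g" % kappa)
    omega, *_ = np.linalg.lstsq(C, du - s, rcond=None)
    return omega, kappa
```

With an overdetermined, full-rank $\mathbf{C}$ (conditions (1) and (2) above), the recovered $\boldsymbol{\Omega}$ matches the generating angular velocities exactly in the noise-free case.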
Associating the process of inverse problem solving in Section 4 with the process of preliminary information acquisition in Section 3, the whole algorithm for the remote sensor's attitude measurement is illustrated in the flow chart in Figure 12.
4.2. Experimental Results and Validation. In the experiment 72940 samples on 7 image strip pairs were involved. To keep the values in $\Xi$ and $\Lambda$ nearly invariant, we redistributed these samples into 20 subspaces and solved for the three axial components of the angular velocity. According to Shannon's sampling theorem, the measurable frequency $f_c$ is expected to reach up to half of the line rate of the TDI CCD; for the experiment $f_c \approx 1.749$ kHz. The $\omega_i \sim t$ curves for 0 s ∼ 0.148 s are shown in Figure 13.

In this period $\omega_{2\max} = 0.001104$°/s and $\omega_{1\max} = 0.001194$°/s. The signal of $\omega_3(t)$ fluctuates around the mean value $\bar{\omega}_3 = 0.01752$°/s. It is not hard to infer that high frequency jitters
Figure 13: Solutions for the angular velocities of the remote sensor.
were perturbing the remote sensor; besides, compared with the signals of $\omega_1(t)$ and $\omega_2(t)$, the low frequency components in $\omega_3(t)$ are higher in magnitude. Actually, for this remote sensor the satellite yaw angle needs to be regulated in real time to compensate for the image rotation on the focal plane, such that the detectors can always scan along the direction of image motion. Based on the auxiliary data, the image motion velocity vector $\mathbf{V}$ of the central pixel in the FOV can be computed, so the optimal yaw motion in principle will be
$$\psi^{*}_{t} = \frac{V_{y'}}{V_{x'}}, \qquad \omega^{*}_{3}(t) = \dot{\psi}^{*}_{t} = \frac{\dot{V}_{y'}V_{x'} - V_{y'}\dot{V}_{x'}}{V^{2}_{x'}}. \tag{50}$$
The mean value of $\omega^{*}_{3}(t)$ is $\bar{\omega}^{*}_{3} = 0.01198$°/s. We attribute $\Delta\omega^{*}_{3} = \bar{\omega}_3 - \bar{\omega}^{*}_{3} = 0.00554$°/s to the error of the satellite attitude control.

In order to validate the measurement, the technique of template reconfiguration was implemented again to check the expected phenomenon that, based on the high-accurate information, the correlations between the new templates and $T_s$ should be further improved. In addition, the distribution of $\gamma$ near $\gamma_{\max}$ should become more compact, which is easy to understand since much more useful information about the remote sensor's motion is introduced into the template reconstructions and increases the similarities between the lapped images.
Unlike the processing in the image dense registration, in the validation phase larger original templates are selected. Let $T_1$ be the referenced image template centered at the examined element, $T_2$ the new template reconfigured by the rough prediction of the optical flow, $\hat{T}_2$ the new template reconfigured based on the precise attitude motion measurement, and $T_s$ the template on the sensed image centered at the registration pixel. For all templates $M = N = 101$. The distributions of the normalized cross-correlation coefficients corresponding to the referenced template centered on the sample selected in the No. 1000 row of the No. 7 CCD, with the sensed image from the No. 8 CCD, are illustrated in Figure 14.
(a) shows the situation for $T_1$ and $T_s$, (b) for $T_2$ and $T_s$, and (c) for $\hat{T}_2$ and $T_s$. The compactness of the data is characterized by the peak value $\gamma_{\max}$ and the location variances $\sigma^2_x$, $\sigma^2_y$:

$$\sigma^{2}_{x} = \frac{\sum_{i=1}^{M}\sum_{j=1}^{M} \gamma_{ij} \cdot (i - x_{\max})^{2}}{\sum_{i=1}^{M}\sum_{j=1}^{M} \gamma_{ij}}, \qquad \sigma^{2}_{y} = \frac{\sum_{i=1}^{M}\sum_{j=1}^{M} \gamma_{ij} \cdot (j - y_{\max})^{2}}{\sum_{i=1}^{M}\sum_{j=1}^{M} \gamma_{ij}}, \tag{51}$$
where $x_{\max}$ and $y_{\max}$ are, respectively, the column and row numbers of the peak-valued location.
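The compactness statistics of (51) can be sketched as below; a hedged helper of ours, assuming a nonnegative correlation surface stored as a 2D array.

```python
import numpy as np

def peak_compactness(gamma):
    # Peak value and location variances (51) of a correlation surface;
    # index i runs over the first array axis, j over the second.
    i, j = np.indices(gamma.shape)
    x_max, y_max = np.unravel_index(np.argmax(gamma), gamma.shape)
    total = gamma.sum()
    var_x = (gamma * (i - x_max)**2).sum() / total
    var_y = (gamma * (j - y_max)**2).sum() / total
    return gamma.max(), var_x, var_y
```

A surface concentrated in a single cell gives zero variances, while a flat surface spreads the mass and the variances grow, matching the qualitative comparison between cases (a)–(c) below.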
In case (a), $\gamma_{\max}(a) = 0.893$, with standard deviations $\sigma_x(a) = 5.653$ and $\sigma_y(a) = 8.192$; in case (b), $\gamma_{\max}(b) = 0.918$, $\sigma_x(b) = 4.839$, and $\sigma_y(b) = 6.686$; in case (c), $\gamma_{\max}(c) = 0.976$, and the variance sharply shrinks to $\sigma_x(c) = 3.27$, $\sigma_y(c) = 4.06$. In Table 2 some other samples at 1000-row intervals are also examined; the samples can be regarded as independent of each other.
Judging from the results, the performances in case (c) are better than those in case (b) and much better than those in case (a), since the precise attitude motion measurements enhance the precision of the optical flow inversion so as to improve the similarities between the new templates and the sensed images. Note that, although in case (b) the variance decreases only slightly, as we have analyzed in Section 3.2, compared to case (a) the offsets of the centroids from the peaks have been corrected well by the use of the rough optical flow predictions.
4.3. Summary and Discussions. In terms of the preceding sections, we can see that, compared with the ordinary NCC, the precision of image registration is greatly improved, which is attributed to the technique of template reconfiguration. By implementing the auxiliary data from the space-borne sensors in the optical flow prediction, the relative deformations between the lapped image pairs can be computed with considerable accuracy. Afterwards, they are used to estimate the gray values of the corresponding parts on the sensed images and help us to construct a new template for registration. As we know, the space-borne sensors may give the middle and low frequency components of the imager's attitude motion with excellent precision. Thus, compared with the classical direct template based registration algorithms, the similarity between the reconfigured template and the sensed images may greatly increase. Furthermore, the minor deformations attributed to high frequency jitters can be detected by using subpixel registration between the reconfigured templates and the sensed images. This point of view is the exact basis of high frequency jitter measurement with optical flow inversion.
5. Conclusion

In this paper, optical flows and time-varying image deformation in space dynamic imaging are analyzed in detail. The nonlinear and anisotropic image motion velocity and optical flows are utilized to strengthen the well-posedness of the inverse problem of precise attitude measurement by optical
Figure 14: Normalized cross-correlations comparison. ((a) shows the distribution of γ obtained by applying the direct NCC algorithm; (b) shows the distribution of γ after template reconfiguration with the optical flow prediction; (c) shows the distribution of γ derived from the posterior template reconfiguration with the high-accurate sensor attitude measurement. It can be noticed that, from left to right, the values of γ tend to be distributed more and more compactly around the peak-value location.)
Table 2: Correlation coefficient distribution for the registration templates.

Row number | γmax (a, b, c)        | σ̃x (a, b, c)       | σ̃y (a, b, c)
No. 1000   | 0.893, 0.918, 0.976  | 5.653, 4.839, 3.27 | 8.192, 6.686, 4.06
No. 2000   | 0.807, 0.885, 0.929  | 8.704, 6.452, 2.13 | 6.380, 7.342, 5.71
No. 3000   | 0.832, 0.940, 0.988  | 4.991, 3.023, 1.55 | 7.704, 4.016, 1.93
No. 4000   | 0.919, 0.935, 0.983  | 5.079, 3.995, 3.61 | 5.873, 5.155, 3.85
No. 5000   | 0.865, 0.922, 0.951  | 5.918, 4.801, 2.37 | 6.151, 2.371, 2.57
No. 6000   | 0.751, 0.801, 0.907  | 12.57, 9.985, 7.89 | 14.66, 8.213, 2.06
No. 7000   | 0.759, 0.846, 0.924  | 11.63, 10.84, 7.14 | 12.71, 8.267, 4.90
No. 8000   | 0.884, 0.900, 0.943  | 8.125, 3.546, 5.42 | 8.247, 6.770, 2.88
flow inversion method. For the purpose of determining the conditions for fixed solutions of the optical flow equations, information-based image registration algorithms are proposed. We apply rough optical flow prediction to improve the efficiency and accuracy of dense image registration. Based on the registration results, the attitude motions of the remote sensor during imaging are measured by the precise optical flow inversion method. The experiment on a remote sensor showed that the measurements achieve very high accuracy as well as broad bandwidth. This method can be used extensively in remote sensing missions such as image strip splicing, geometrical rectification, and nonblind image restoration to improve surveying precision and resolving power.
Conflict of Interests
The authors declare that they have no financial or personal relationships with other people or organizations that could inappropriately influence their work; there is no professional or other personal interest of any nature or kind in any product, service, and/or company that could be construed as influencing the position presented in, or the review of, this paper.
Acknowledgments
This work is supported by the National High Technology Research and Development Program of China (863 Program) (Grant no. 2012AA121503, Grant no. 2013AA12260, and Grant no. 2012AA120603) and the National Natural Science Foundation of China (Grant no. 61377012).
References
[1] V. Tchernykh, M. Beck, and K. Janschek, "An embedded optical flow processor for visual navigation using optical correlator technology," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '06), pp. 67–72, Beijing, China, October 2006.
[2] K. Janschek and V. Tchernykh, "Optical correlator for image motion compensation in the focal plane of a satellite camera," in Proceedings of the 15th IFAC Symposium on Automatic Control in Aerospace, Bologna, Italy, 2001.
[3] W. Priedhorsky and J. J. Bloch, "Optical detection of rapidly moving objects in space," Applied Optics, vol. 44, no. 3, pp. 423–433, 2005.
[4] T. Brox and J. Malik, "Large displacement optical flow: descriptor matching in variational motion estimation," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 33, no. 3, pp. 500–513, 2011.
[5] B. Feng, P. P. Bruyant, P. H. Pretorius, et al., "Estimation of the rigid-body motion from three-dimensional images using a generalized center-of-mass points approach," IEEE Transactions on Nuclear Science, vol. 53, no. 5, pp. 2712–2718, 2006.
[6] J. Wang, P. Yu, C. Yan, J. Ren, and B. He, "Space optical remote sensor image motion velocity vector computational modeling, error budget and synthesis," Chinese Optics Letters, vol. 3, no. 7, pp. 414–417, 2005.
[7] A. S. McEwen, M. E. Banks, N. Baugh, et al., "The High Resolution Imaging Science Experiment (HiRISE) during MRO's Primary Science Phase (PSP)," Icarus, vol. 205, no. 1, pp. 2–37, 2010.
[8] F. Ayoub, S. Leprince, R. Binet, K. W. Lewis, O. Aharonson, and J.-P. Avouac, "Influence of camera distortions on satellite image registration and change detection applications," in Proceedings of the IEEE International Geoscience and Remote Sensing Symposium (IGARSS '08), pp. II-1072–II-1075, Boston, Mass, USA, 2008.
[9] S. Leprince, S. Barbot, F. Ayoub, and J.-P. Avouac, "Automatic and precise orthorectification, coregistration, and subpixel correlation of satellite images, application to ground deformation measurements," IEEE Transactions on Geoscience and Remote Sensing, vol. 45, no. 6, pp. 1529–1558, 2007.
[10] S. Leprince, P. Muse, and J.-P. Avouac, "In-flight CCD distortion calibration for pushbroom satellites based on subpixel correlation," IEEE Transactions on Geoscience and Remote Sensing, vol. 46, no. 9, pp. 2675–2683, 2008.
[11] Y. Yitzhaky, R. Milberg, S. Yohaev, and N. S. Kopeika, "Comparison of direct blind deconvolution methods for motion-blurred images," Applied Optics, vol. 38, no. 20, pp. 4325–4332, 1999.
[12] R. C. Hardie, K. J. Barnard, and R. Ordonez, "Fast super-resolution with affine motion using an adaptive Wiener filter and its application to airborne imaging," Optics Express, vol. 19, no. 27, pp. 26208–26231, 2011.
[13] E. M. Blixt, J. Semeter, and N. Ivchenko, "Optical flow analysis of the aurora borealis," IEEE Geoscience and Remote Sensing Letters, vol. 3, no. 1, pp. 159–163, 2006.
[14] M. G. Mozerov, "Constrained optical flow estimation as a matching problem," IEEE Transactions on Image Processing, vol. 22, no. 5, pp. 2044–2055, 2013.
[15] H. Sakaino, "A semitransparency-based optical-flow method with a point trajectory model for particle-like video," IEEE Transactions on Image Processing, vol. 21, no. 2, pp. 441–450, 2012.
[16] D. Korsch, "Closed form solution for three-mirror telescopes, corrected for spherical aberration, coma, astigmatism, and field curvature," Applied Optics, vol. 11, no. 12, pp. 2986–2987, 1972.
[17] G. Naletto, V. da Deppo, M. G. Pelizzo, R. Ragazzoni, and E. Marchetti, "Optical design of the wide angle camera for the Rosetta mission," Applied Optics, vol. 41, no. 7, pp. 1446–1453, 2002.
[18] M. Born, E. Wolf, A. B. Bhatia, and P. C. Clemmow, Principles of Optics: Electromagnetic Theory of Propagation, Interference and Diffraction of Light, 7th edition, 1999.
[19] H. Schaub and J. L. Junkins, Analytical Mechanics of Space Systems, AIAA Education Series, 2002.
[20] C. Wang, F. Xing, J. H. Wang, and Z. You, "Optical flows method for lightweight agile remote sensor design and instrumentation," in International Symposium on Photoelectronic Detection and Imaging, vol. 8908 of Proceedings of the SPIE, 2013.
[21] T. Sun, F. Xing, and Z. You, "Optical system error analysis and calibration method of high-accuracy star trackers," Sensors, vol. 13, no. 4, pp. 4598–4623, 2013.
[22] T. Sun, F. Xing, Z. You, and M. Wei, "Motion-blurred star acquisition method of the star tracker under high dynamic conditions," Optics Express, vol. 21, no. 17, pp. 20096–20110, 2013.
[23] L. Younes, "Combining geodesic interpolating splines and affine transformations," IEEE Transactions on Image Processing, vol. 15, no. 5, pp. 1111–1119, 2006.
[24] B. Zitova and J. Flusser, "Image registration methods: a survey," Image and Vision Computing, vol. 21, no. 11, pp. 977–1000, 2003.
[25] Z. L. Song, S. Li, and T. F. George, "Remote sensing image registration approach based on a retrofitted SIFT algorithm and Lissajous-curve trajectories," Optics Express, vol. 18, no. 2, pp. 513–522, 2010.
[26] V. Arevalo and J. Gonzalez, "Improving piecewise linear registration of high-resolution satellite images through mesh optimization," IEEE Transactions on Geoscience and Remote Sensing, vol. 46, no. 11, pp. 3792–3803, 2008.
[27] Z. Levi and C. Gotsman, "D-Snake: image registration by as-similar-as-possible template deformation," IEEE Transactions on Visualization and Computer Graphics, vol. 19, no. 2, pp. 331–343, 2013.
[28] R. J. Althof, M. G. J. Wind, and J. T. Dobbins III, "A rapid and automatic image registration algorithm with subpixel accuracy," IEEE Transactions on Medical Imaging, vol. 16, no. 3, pp. 308–316, 1997.
[29] W. Tong, "Subpixel image registration with reduced bias," Optics Letters, vol. 36, no. 5, pp. 763–765, 2011.
[30] Y. Bentoutou, N. Taleb, K. Kpalma, and J. Ronsin, "An automatic image registration for applications in remote sensing," IEEE Transactions on Geoscience and Remote Sensing, vol. 43, no. 9, pp. 2127–2137, 2005.
[31] L. S. Ming, L. Yan, and L. Jindong, "Mapping Satellite-1 transmission type photogrammetric and remote sensing," Journal of Remote Sensing, vol. 16, supplement, pp. 10–16, 2012 (Chinese).
[32] J. P. Lewis, "Fast template matching," Vision Interface, vol. 95, pp. 120–123, 1995.
[33] H. Foroosh, J. B. Zerubia, and M. Berthod, "Extension of phase correlation to subpixel registration," IEEE Transactions on Image Processing, vol. 11, no. 3, pp. 188–200, 2002.
Figure 3: Nonlinear image velocity field and optical flow trajectories influence the time-phase relations between the overlapped images captured by the adjacent overlapped detectors.
In operation, as indicated in Figure 3, take the overlapped images captured by the detectors in the prior array as the reference images and the images captured by the posterior detectors as the sensed images. Firstly, we rebuild the optical flows based on the rough measurements of the space-borne attitude sensors and then reconfigure the original templates to construct new templates whose morphologies are closer to the corresponding parts of the sensed images. Through this process, information about the imaging procedure is added into the new templates so as to increase their degree of similarity to the sensed images. The method may dramatically raise the accuracy of dense registration, such that the high-accuracy offsets between the overlapped image pairs can be determined.
In the experiment we examined Mapping Satellite-1, a Chinese surveying satellite operating in a 500 km sun-synchronous orbit, which is used for high-accuracy photogrammetry [31]; its structure is shown in Figure 4. One of its effective payloads, the three-line-array panchromatic CCD camera, has good geometrical accuracy: the ground pixel resolution is better than 5 m, the spectral range is 0.51 μm ∼ 0.69 μm, and the swath is 60 km. The other payload, the high resolution camera, is designed with a Cook-TMA optical system, which gives a wide field of view [16, 17]; its panchromatic spatial resolution can reach 2 m.
In engineering, for the purpose of improving image quality and surveying precision, high-accuracy measurements of jitter and attitude motion are essential for posterior processing. Thus, here we investigate the images and the auxiliary data of the large-FOV high resolution camera to deal with this problem. The experimental photographs were captured with 10° side looking. The focal plane of the camera
Figure 4: The structure of Mapping Satellite-1 and its effective payloads.
consists of 8 panchromatic TDI CCD detectors, and there are η = 96 physically overlapped pixels between adjacent detectors.
The scheme of the processing for registering one image element χ is illustrated in Figure 5.
Step 1. Set the original overlapped image strips (the images acquired directly by the detectors, without any postprocessing) in frame C.
Step 2. Compute the deformations of all image elements on the reference template with respect to their optical flow trajectories.
We extract the original template from the reference image, denoted as T_1, which consists of N² square elements; that is, dim(T_1) = N × N. Let χ be its central element and w the width of each element; here w = 8.75 μm. Before the moving image is captured by the posterior detector, in terms of (25), the current shapes and energy distributions of the elements can be predicted by the optical flow based on the auxiliary data of the remote sensor.
In order to simplify the algorithm, a first-order approximation is allowed without introducing significant errors. This approximation means that the shape of every image element is always quadrilateral. Linear interpolations are carried out to determine the four sides according to the deformations along the radial directions of the vertexes, as shown in Figure 5. The unit radial vectors are denoted by τ′_1 ∼ τ′_4 in frame C:

$$
\tau'_1 = \frac{\sqrt{2}}{2}\,e_1 - \frac{\sqrt{2}}{2}\,e_2, \qquad
\tau'_3 = -\frac{\sqrt{2}}{2}\,e_1 + \frac{\sqrt{2}}{2}\,e_2,
$$
$$
\tau'_2 = \frac{\sqrt{2}}{2}\,e_1 + \frac{\sqrt{2}}{2}\,e_2, \qquad
\tau'_4 = -\frac{\sqrt{2}}{2}\,e_1 - \frac{\sqrt{2}}{2}\,e_2. \tag{26}
$$
Suppose image point p′ is the center of an arbitrary element Σ′ in T_1, and let Σ be the area element on the earth's surface that is conjugate to Σ′. The four unit radial vectors of the vertexes
Figure 5: Optical flow prediction and template reconfiguration.
on Σ, τ_1 ∼ τ_4, are conjugate to τ′_1 ∼ τ′_4 and tangent to the earth's surface at p. From the geometrical relations we have

$$
{}^{C}\boldsymbol{\tau}_i = (-1)^{m}\,
\frac{\mathbf{r}' \times \boldsymbol{\tau}'_i \times {}^{C}\mathbf{n}_p}
{\left|\mathbf{r}' \times \boldsymbol{\tau}'_i \times {}^{C}\mathbf{n}_p\right|}, \qquad
{}^{E}\boldsymbol{\tau}_i = \mathbf{T}_{EI}^{-1}\mathbf{T}_{OI}\mathbf{A}^{-1}\mathbf{M}^{-1}\,{}^{C}\boldsymbol{\tau}_i, \qquad
{}^{C}\mathbf{n}_p = \mathbf{M}\mathbf{A}\mathbf{T}_{OI}^{-1}\mathbf{T}_{EI}\,{}^{E}\mathbf{n}_p, \tag{27}
$$

where ^E n_p is the unit normal vector of Σ at p. We predict the deformations along τ_1 ∼ τ_4 during t_1 ∼ t_2 according to the measurements of the GPS, star trackers, and gyroscopes, as explained in Figure 6; t_1 is the imaging time on the prior detector and t_2 is the imaging time on the posterior detector.
$$
\left[\delta x'_1\right]^{\Delta t}_{\tau_k} = \left[\delta x'_1\right]^{t_2}_{\tau_k} - \left[\delta x'_1\right]^{t_1}_{\tau_k}, \qquad
\left[\delta x'_2\right]^{\Delta t}_{\tau_k} = \left[\delta x'_2\right]^{t_2}_{\tau_k} - \left[\delta x'_2\right]^{t_1}_{\tau_k}
\quad (k = 1 \sim 4). \tag{28}
$$

The shape of the deformed image Σ′_{t_2} can be obtained through linear interpolation with

$$
\left[\delta \mathbf{r}'\right]^{\Delta t}_{\tau_k} =
\left(\left[\delta x'_1\right]^{\Delta t}_{\tau_k},\ \left[\delta x'_2\right]^{\Delta t}_{\tau_k}\right). \tag{29}
$$
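To make the first-order deformation model of (26)–(29) concrete, the sketch below (numpy) displaces the four vertexes of one square element along the diagonal unit radial vectors; everything except the element width w = 8.75 μm is a hypothetical illustration, not the paper's data.

```python
import numpy as np

# First-order element deformation sketch for (26)-(29).
# A square element of width w stays a quadrilateral: each vertex slides
# along its own diagonal unit radial vector tau'_1..tau'_4 by the
# predicted radial displacement [delta r']^{Delta t}_{tau_k}.
w = 8.75e-3                      # element width (mm), from the text
p = np.array([0.0, 0.0])         # element centre p' (hypothetical)

s = np.sqrt(2.0) / 2.0
tau = np.array([[s, -s], [s, s], [-s, s], [-s, -s]])  # tau'_1..tau'_4, eq. (26)

# Undeformed vertexes sit at half the diagonal (w*sqrt(2)/2) from the centre.
verts_t1 = p + (w * s) * tau

# Hypothetical radial displacements accumulated over t1 -> t2 (mm).
dr = np.array([1.2e-4, -0.8e-4, 0.5e-4, 1.0e-4])
verts_t2 = verts_t1 + dr[:, None] * tau               # eqs. (28)-(29)

# The radial directions are unit vectors, so each vertex moves by |dr_k|.
assert np.allclose(np.linalg.norm(tau, axis=1), 1.0)
```

The four sides of the deformed element are then taken as straight segments between these vertexes, which is exactly the linear-interpolation step described above.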
Step 3. Reconfigure the reference template T_1 according to the optical flow prediction and then obtain a new template T_2.

Let T′_1 be the deformed image of T_1 computed in Step 2, and let χ = B_ij be the central element of T′_1; integers i and j are, respectively, the row and column numbers of B_ij. The gray value l_ij of each element in T′_1 is equal to that of its counterpart in T_1 with the same indexes. In addition, we initialize a null template T_0 whose shape and orientation are identical to T_1; the central element of T_0 is denoted by T_ij.
Figure 6: Deformation of a single element.
Then we cover T_0 upon T′_1 and let their centers coincide, that is, T_ij = B_ij, as shown in Figure 7. Denote the vertexes of the elements of T′_1 as V^k_ij (k = 1 ∼ 4). The connective relation for adjacent elements can then be expressed by V^1_{i,j} = V^2_{i,j−1} = V^3_{i−1,j−1} = V^4_{i−1,j}.
Next we reassign the gray value h′_ij to T_ij (i = 1 ⋯ N, j = 1 ⋯ N) in sequence to construct the new template T_2. The process simulates the image resampling that occurs when the optical flow arrives at the posterior detector, as indicated in Figure 3. That is,

$$
h'_{ij} = \sum_{m=i-1}^{i+1}\ \sum_{n=j-1}^{j+1} \eta_{mn}\, l_{mn}, \tag{30}
$$

where the weight coefficient η_mn = S_mn/w², and S_mn is the area of the intersecting polygon of B_mn with T_ij.
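A minimal sketch of the resampling rule (30), assuming for simplicity that the deformed elements are axis-aligned rectangles (the paper uses general quadrilaterals); the geometry and gray values below are made up for illustration.

```python
import numpy as np

# Area-weighted resampling of (30): the new gray value h'_ij is the
# weighted sum of the 3x3 neighbouring deformed elements B_mn that
# intersect the target cell T_ij, with weights eta_mn = S_mn / w^2.

def overlap_area(a, b):
    """Intersection area of two axis-aligned rectangles (xmin, ymin, xmax, ymax)."""
    dx = min(a[2], b[2]) - max(a[0], b[0])
    dy = min(a[3], b[3]) - max(a[1], b[1])
    return max(dx, 0.0) * max(dy, 0.0)

def resample_cell(target, cells, grays, w):
    """h'_ij = sum_mn eta_mn * l_mn, eta_mn = S_mn / w^2."""
    eta = np.array([overlap_area(target, c) / w**2 for c in cells])
    return float(np.dot(eta, grays))

w = 1.0
# Target cell T_ij and a 3x3 block of source cells B_mn shifted by +0.25.
target = (0.0, 0.0, 1.0, 1.0)
cells = [(i - 1 + 0.25, j - 1 + 0.25, i + 0.25, j + 0.25)
         for i in range(3) for j in range(3)]
grays = np.arange(9, dtype=float) * 10.0

# For a subpixel shift the overlap weights form a partition of unity,
# so h is a proper weighted average of the neighbouring gray values.
h = resample_cell(target, cells, grays, w)
```

The real implementation replaces `overlap_area` with a polygon-clipping routine for the quadrilaterals produced in Step 2.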
Figure 7: Template reconfiguration.
Step 4. Compute the normalized cross-correlation coefficients between T_2 and the sensed image, and then determine the subpixel offset of T_2 relative to the sensed image in frame P.

Firstly, with this method the search space on the sensed image can be contracted considerably, since the optical flow trajectories of the reference elements have been predicted in Step 2. Assume that the search space is T_s with dim(T_s) = M × M. When T_ij moves to the pixel (n_1, n_2) on T_s, the normalized cross-correlation (NCC) coefficient is given by

$$
\gamma(n_1, n_2) =
\frac{\sum_{x,y}\left[g(x, y) - \bar{g}_{x,y}\right]\left[h(x - n_1, y - n_2) - \bar{h}\right]}
{\left\{\sum_{x,y}\left[g(x, y) - \bar{g}_{x,y}\right]^{2}\ \sum_{x,y}\left[h(x - n_1, y - n_2) - \bar{h}\right]^{2}\right\}^{0.5}}, \tag{31}
$$
where ḡ_{x,y} is the mean gray value of the segment of T_s that is masked by T_2, and h̄ is the mean of T_2. Equation (31) requires approximately N²(M − N + 1)² additions and N²(M − N + 1)² multiplications, whereas the FFT algorithm needs about 12M² log₂M real multiplications and 18M² log₂M real additions/subtractions [32, 33].
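For illustration, a direct spatial-domain evaluation of (31) might look as follows (a sketch in numpy; the planted-template data are hypothetical, and variable names follow the text):

```python
import numpy as np

# Direct spatial-domain NCC map, eq. (31): slide an N x N template h
# over an M x M search window g and normalise by the windowed energies.

def ncc_map(g, h):
    M, N = g.shape[0], h.shape[0]
    hbar = h.mean()
    out = np.full((M - N + 1, M - N + 1), -np.inf)
    for n1 in range(M - N + 1):
        for n2 in range(M - N + 1):
            seg = g[n1:n1 + N, n2:n2 + N]
            a = seg - seg.mean()      # g - gbar_{x,y} over the masked segment
            b = h - hbar              # h - hbar
            denom = np.sqrt((a * a).sum() * (b * b).sum())
            if denom > 0:
                out[n1, n2] = (a * b).sum() / denom
    return out

rng = np.random.default_rng(0)
g = rng.random((32, 32))
h = g[10:17, 12:19].copy()            # plant a 7x7 template at (10, 12)
gamma = ncc_map(g, h)
peak = np.unravel_index(np.argmax(gamma), gamma.shape)

# The peak of gamma recovers the planted location with coefficient ~1.
assert peak == (10, 12)
```

The double loop makes the N²(M − N + 1)² operation count quoted above explicit; the FFT route trades this for the M² log₂M terms.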
At the beginning we take M = 101, N = 7 and compute the NCC coefficient; when M is much larger than N, the calculation in the spatial domain is efficient. Suppose that the peak value γmax is taken at the coordinate (k, m), k, m ∈ ℤ, in the sensed window. We then reduce the search space into a smaller one of dimension 47 × 47 centered on T_s(k, m). Next, subpixel registration is realized by the phase correlation algorithm with larger M and N to suppress the system errors owing to the deficiency of detailed textures on the photo; here we take M = 47, N = 23. Let the subpixel offset between the two registering image elements be denoted as δ_x and δ_y in frame P.
The phase correlation algorithm in the frequency domain becomes more efficient as N approaches M when both are large [28]. Moreover, the Fourier coefficients are normalized to unit magnitude prior to computing the correlation, so that the correlation is based only on phase information and is insensitive to changes in image intensity [27, 29].
Let G(u, v) be the 2D Discrete Fourier Transform (DFT) of the sensed window; then we have

$$
\mathbf{G}(u, v) = \sum_{x=-(N-1)/2}^{(N-1)/2}\ \sum_{y=-(N-1)/2}^{(N-1)/2} g(x, y)\, W_{M}^{ux} W_{M}^{vy},
\qquad
\mathbf{H}(u, v) = \sum_{x=-(N-1)/2}^{(N-1)/2}\ \sum_{y=-(N-1)/2}^{(N-1)/2} h(x, y)\, W_{N}^{ux} W_{N}^{vy}. \tag{32}
$$

Here

$$
W_{N} = \exp\left(-j\,\frac{2\pi}{N}\right). \tag{33}
$$
The cross-phase spectrum is given by

$$
\mathbf{R}(u, v) = \frac{\mathbf{G}(u, v)\,\mathbf{H}^{*}(u, v)}{\left|\mathbf{G}(u, v)\,\mathbf{H}^{*}(u, v)\right|} = \exp\left(j\phi(u, v)\right), \tag{34}
$$

where H* is the complex conjugate of H. By the inverse Discrete Fourier Transform (IDFT) we have

$$
\gamma(n_1, n_2) = \frac{1}{N^{2}} \sum_{u=-(N-1)/2}^{(N-1)/2}\ \sum_{v=-(N-1)/2}^{(N-1)/2} \mathbf{R}(u, v)\, W_{N}^{-u n_1} W_{N}^{-v n_2}. \tag{35}
$$
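The pipeline of (32)–(35) can be sketched with FFT-based phase correlation. This toy example (numpy, hypothetical data) recovers a known cyclic integer shift; the subpixel part is handled by the peak model discussed next.

```python
import numpy as np

# Phase correlation following (32)-(35): normalise the cross spectrum to
# unit magnitude so the correlation depends on phase only, then invert;
# the correlation peak gives the (integer) shift between the windows.

def phase_correlation(g, h):
    G = np.fft.fft2(g)
    H = np.fft.fft2(h)
    R = G * np.conj(H)
    R /= np.abs(R) + 1e-12           # cross-phase spectrum, eq. (34)
    corr = np.real(np.fft.ifft2(R))  # eq. (35)
    return np.unravel_index(np.argmax(corr), corr.shape)

N = 23
rng = np.random.default_rng(1)
base = rng.random((N, N))
shift = (3, 5)
g = np.roll(base, shift, axis=(0, 1))   # sensed window: base shifted cyclically
h = base                                # template

assert phase_correlation(g, h) == shift
```

Because the spectrum is whitened, the result is insensitive to global intensity changes between g and h, which is the property the text attributes to [27, 29].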
Figure 8: Dense image registration for the overlapped image strips: CCD1 versus CCD2 (Gap 1, the left two) and CCD3 versus CCD4 (Gap 3, the right two).
Suppose that the new peak γmax appears at (k′, m′), k′, m′ ∈ ℤ. Referring to [27], we have the relation

$$
\gamma_{\max}(k', m') \approx \frac{\lambda}{N^{2}}\,
\frac{\sin\left[\pi\left(k' + \delta_x\right)\right]\, \sin\left[\pi\left(m' + \delta_y\right)\right]}
{\sin\left[(\pi/N)\left(k' + \delta_x\right)\right]\, \sin\left[(\pi/N)\left(m' + \delta_y\right)\right]}. \tag{36}
$$

The right side gives the spatial distribution of the normalized cross-correlation coefficients, so (δ_x, δ_y) can be measured from it. In practice the constant λ ≤ 1; it tends to decrease when small noise exists and equals unity in the ideal case.
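As a numerical illustration of (36) (with λ = 1, the ideal case), one can sample the model surface at integer peak coordinates and recover the subpixel offsets by fitting; the offsets below are made up, and the per-axis grid search is only one of several possible estimators.

```python
import numpy as np

# Evaluate the model (36) at integer coordinates and recover the subpixel
# offset by a 1D grid search per axis (the surface is separable in k', m').

def dirichlet(k, d, N):
    """One axis of (36): sin(pi(k+d)) / (N sin(pi(k+d)/N)); the 0/0 limit
    at k + d = 0 is taken as 1."""
    x = np.asarray(k + d, dtype=float)
    den = N * np.sin(np.pi * x / N)
    with np.errstate(invalid="ignore", divide="ignore"):
        r = np.sin(np.pi * x) / den
    return np.where(np.isclose(den, 0.0), 1.0, r)

N = 23
true_dx, true_dy = 0.31, -0.18            # hypothetical subpixel offsets
ks = np.arange(-2, 3)

# "Measured" correlation profiles through the peak row and column.
obs_x = dirichlet(ks, true_dx, N) * dirichlet(0, true_dy, N)
obs_y = dirichlet(0, true_dx, N) * dirichlet(ks, true_dy, N)

def fit(obs, N):
    cand = np.linspace(-0.5, 0.5, 2001)
    pred = dirichlet(ks[:, None], cand[None, :], N)        # (5, 2001)
    # Best least-squares amplitude per candidate absorbs the lambda factor.
    scale = (pred * obs[:, None]).sum(0) / (pred ** 2).sum(0)
    err = ((scale * pred - obs[:, None]) ** 2).sum(0)
    return cand[np.argmin(err)]

dx, dy = fit(obs_x, N), fit(obs_y, N)
assert abs(dx - true_dx) < 1e-3 and abs(dy - true_dy) < 1e-3
```

In practice the observed γ values carry noise, so λ < 1 and the fit is performed in the least-squares sense exactly as sketched here.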
Step 5. Execute dense registration for the overlapped image strips.

Repeating Steps 1∼4, we register the along-track sample images selected from the reference images to the sensed image; the maximal sample rate can reach line-by-line. The continuous procedure is shown in Figure 8, in which the image pairs are marked.

The curves of the relative offsets in P are shown in Figures 9 and 10.
Let col_r, row_r be the column and row indexes of image elements on the reference image, and let col_s, row_s be the indexes of the same elements on the sensed image. The total number of columns of each detector is Q = 4096 pix, and the vertical distance between the two detector arrays is D = 18.4975 mm. According to the results of registration, we get the offsets
Figure 9: The offsets of the overlapped images captured by CCD1 and CCD2 (cross-track and along-track offsets in pixels versus image row; marked points S11 and S22: cross track −25.15 pix at row 258 and −23.78 pix at row 423; along track −5.393 pix at row 258 and −7.363 pix at row 423).
Figure 10: The offsets of the overlapped images captured by CCD3 and CCD4 (marked points S31 and S32: cross track −12.85 pix at row 266 and −12.97 pix at row 436; along track −7.663 pix at row 266 and −6.869 pix at row 436).
of the images at the nth gap, δ^n_x (cross track) and δ^n_y (along track) in frame P, and Δx′_n, Δy′_n (mm) in frame F:

$$
\delta^{n}_{x} = \mathrm{col}_r + \mathrm{col}_s - Q - \eta_n, \qquad
\Delta x'_{n} = \Delta\left(x'_2\right)_n = \delta^{n}_{x} \cdot w,
$$
$$
\delta^{n}_{y} = \mathrm{row}_s - \mathrm{row}_r - \frac{D}{w}, \qquad
\Delta y'_{n} = \Delta\left(x'_1\right)_n = \delta^{n}_{y} \cdot w + D. \tag{37}
$$
Four pixels, S11, S12, S31, and S32, are examined; their data are listed in Table 1.
S11 and S31 are the images of the same object, captured in order by CCD1 and CCD2 (Gap 1); S12 and S32 were captured, respectively, by CCD3 and CCD4 (Gap 3). Referring to the auxiliary data, S11 and S31 were captured at the same time, while S12 and S32 were captured at different times, which means that the along-track speeds of the two moving images were quite different. Moreover, the cross-track image offsets in Gap 1 and Gap 3 differ greatly, which shows that the optical flows were distorted unevenly and deflected away from the along-track direction. On the other
Table 1: The offsets between overlapped images.

Sample | Row no. (pixel) | δ^n_x (pixel) | Δx′_n (mm)  | δ^n_y (pixel) | Δy′_n (mm)
S11    | 258             | −25.15        | −0.2200625  | −5.39         | 18.4503
S12    | 423             | −23.78        | −0.2080750  | −7.36         | 18.4331
S31    | 266             | −12.85        | −0.1124375  | −7.66         | 18.4304
S32    | 436             | −12.97        | −0.1134875  | −6.87         | 18.4374
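The printed values can be cross-checked against (37): with w = 8.75 μm and D = 18.4975 mm from the text, the recomputed Δx′_n and Δy′_n reproduce Table 1 to within the rounding of the printed digits.

```python
# Cross-check of Table 1 against (37): Delta x'_n = delta_x^n * w and
# Delta y'_n = delta_y^n * w + D, with the constants given in the text.
w = 8.75e-3   # pixel width (mm)
D = 18.4975   # distance between the two detector arrays (mm)

table1 = {    # sample: (delta_x^n [pix], delta_y^n [pix]) from Table 1
    "S11": (-25.15, -5.39),
    "S12": (-23.78, -7.36),
    "S31": (-12.85, -7.66),
    "S32": (-12.97, -6.87),
}

for name, (dx, dy) in table1.items():
    dx_mm = dx * w       # cross-track offset in frame F
    dy_mm = dy * w + D   # along-track offset in frame F
    print(f"{name}: {dx_mm:.7f} mm, {dy_mm:.4f} mm")
```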
hand, it has been discovered in Figures 9 and 10 that the fluctuation of the image offsets occurring in Gap 1 is greater in magnitude than that in Gap 3. All these facts indicate that the distorted optical flows can be detected from a large number of image offsets. We will see later that the nonlinear distribution of the data strengthens the well-posedness of the optical flow inversion algorithm.
4. Remote Sensor Attitude Motion Measurement
In this section the attitude velocity of the remote sensor is resolved by using the optical flow inversion method. The results of dense registration are applied to produce the conditions of fixed solution for the optical flow equations.
4.1. The Principle of Optical Flow Inversion. For clarity, in frame C the two coordinate components of the image displacement of the kth sample element belonging to the nth overlapped strip pair are written as Δx′_{nk}, Δy′_{nk}. From (13) and (25) it is easy to show that the contributions to the optical flow owing to orbital motion and the earth's inertial movement vary only slightly in the short term, such that the corresponding displacements can be regarded as piecewise constants s_x, s_y.

Let τ_ij, t_ij be, in order, the two sequential imaging times of the jth image sample on the overlapped detectors of the ith gap; they are usually recorded in the auxiliary data of the remote sensor. Hence for every image element the number of discrete states in the optical flow tracing is

$$
N_{ij} = \left[\frac{t_{ij} - \tau_{ij}}{\Delta t}\right] \in \mathbb{Z}^{+} \quad (i = 1 \cdots n,\ j = 1 \cdots m), \tag{38}
$$

where n is the number of CCD gaps, m is the number of sample groups, and Δt is the time step. We put samples with the same j index into the same group, in which the samples are captured by the prior detectors simultaneously.
We expand (11), substitute it into (14) and (13), and then arrange the scalar optical flow inversion equations in terms of the three axial angular velocity components ω_1, ω_2, and ω_3 (the unknowns of the inverse problem), yielding the linear optical flow equations.
Figure 11: Coefficient determination according to the current location of the image.
For the lth group of samples:

$$
\sum_{i=l}^{N_{1l}} \left( c^{i}_{1l1}\omega^{i}_{1} + c^{i}_{1l2}\omega^{i}_{2} + c^{i}_{1l3}\omega^{i}_{3} \right) = \Delta x'_{1l} - s_{x1},
$$
$$
\sum_{i=l}^{N_{1l}} \left( d^{i}_{1l1}\omega^{i}_{1} + d^{i}_{1l2}\omega^{i}_{2} + d^{i}_{1l3}\omega^{i}_{3} \right) = \Delta y'_{1l} - s_{y1},
$$
$$
\vdots
$$
$$
\sum_{i=l}^{N_{nl}} \left( c^{i}_{nl1}\omega^{i}_{1} + c^{i}_{nl2}\omega^{i}_{2} + c^{i}_{nl3}\omega^{i}_{3} \right) = \Delta x'_{nl} - s_{xn},
$$
$$
\sum_{i=l}^{N_{nl}} \left( d^{i}_{nl1}\omega^{i}_{1} + d^{i}_{nl2}\omega^{i}_{2} + d^{i}_{nl3}\omega^{i}_{3} \right) = \Delta y'_{nl} - s_{yn}. \tag{39}
$$
Suppose that the sampling process stops after m groups have been formed. The coefficients are

$$
c^{i}_{\mu\nu\kappa} = \Xi_{\kappa}\left(\mu,\ \left\lceil \frac{i - \nu + 1}{N_{\mu\nu}}\,\mathcal{N} \right\rceil\right), \qquad
d^{i}_{\mu\nu\kappa} = \Lambda_{\kappa}\left(\mu,\ \left\lceil \frac{i - \nu + 1}{N_{\mu\nu}}\,\mathcal{N} \right\rceil\right) \quad (\kappa = 1, 2, 3). \tag{40}
$$
Here

$$
\Xi_{k} = \begin{pmatrix}
\xi_{11k} & \xi_{12k} & \cdots & \xi_{1\mathcal{N}k} \\
\xi_{21k} & \xi_{22k} & \cdots & \xi_{2\mathcal{N}k} \\
\vdots & \vdots & & \vdots \\
\xi_{n1k} & \xi_{n2k} & \cdots & \xi_{n\mathcal{N}k}
\end{pmatrix}, \qquad
\Lambda_{k} = \begin{pmatrix}
\lambda_{11k} & \lambda_{12k} & \cdots & \lambda_{1\mathcal{N}k} \\
\lambda_{21k} & \lambda_{22k} & \cdots & \lambda_{2\mathcal{N}k} \\
\vdots & \vdots & & \vdots \\
\lambda_{n1k} & \lambda_{n2k} & \cdots & \lambda_{n\mathcal{N}k}
\end{pmatrix}. \tag{41}
$$
As for the algorithm, to reduce the complexity, all possible values of the coefficients are stored in the matrixes Ξ_k and Λ_k. The accuracy is guaranteed because the coefficients of the images moving within the same piece of region are almost equal to an identical constant over a short period, as explained in Figure 11.
It has been mentioned that the optical flow is not sensitive to the satellite's orbital motion and the earth's rotation in the short term; namely, the possible values are assigned by the following functions:

$$
\xi_{ijk} = \xi_{k}\left(a, e, i_0, \Omega, \omega, x'_q, y'_q, \Delta t\right), \qquad
\lambda_{ijk} = \lambda_{k}\left(a, e, i_0, \Omega, \omega, x'_q, y'_q, \Delta t\right),
$$
$$
i = 1 \sim n, \quad j = 1 \sim \mathcal{N}, \quad q = 1 \sim \mathcal{N}. \tag{42}
$$
Here 𝒩 is the number of constant-valued segments in the region encompassing all the possible optical flow trajectories. The orbital elements and the integral step size Δt are common to all the functions. Furthermore, when long-term measurements are executed, Ξ_k and Λ_k only need to be renewed according to the current parameters.

The coefficient matrix of the optical flow equations for the jth (1 ≤ j ≤ m) group can be written as
$$
\mathbf{C}_{j} =
\begin{pmatrix}
c^{1}_{1j1} & c^{1}_{1j2} & c^{1}_{1j3} & \cdots & c^{N_{1j}}_{1j1} & c^{N_{1j}}_{1j2} & c^{N_{1j}}_{1j3} & \cdots & 0 & 0 \\
d^{1}_{1j1} & d^{1}_{1j2} & d^{1}_{1j3} & \cdots & d^{N_{1j}}_{1j1} & d^{N_{1j}}_{1j2} & d^{N_{1j}}_{1j3} & \cdots & 0 & 0 \\
\vdots & & & & & & & & & \vdots \\
c^{1}_{qj1} & c^{1}_{qj2} & c^{1}_{qj3} & \cdots & \cdots & \cdots & c^{N_{qj}}_{qj1} & c^{N_{qj}}_{qj2} & c^{N_{qj}}_{qj3} & \\
d^{1}_{qj1} & d^{1}_{qj2} & d^{1}_{qj3} & \cdots & \cdots & \cdots & d^{N_{qj}}_{qj1} & d^{N_{qj}}_{qj2} & d^{N_{qj}}_{qj3} & \\
\vdots & & & & & & & & & \vdots \\
c^{1}_{nj1} & c^{1}_{nj2} & c^{1}_{nj3} & \cdots & c^{N_{nj}}_{nj1} & c^{N_{nj}}_{nj2} & c^{N_{nj}}_{nj3} & \cdots & 0 & \\
d^{1}_{nj1} & d^{1}_{nj2} & d^{1}_{nj3} & \cdots & d^{N_{nj}}_{nj1} & d^{N_{nj}}_{nj2} & d^{N_{nj}}_{nj3} & \cdots & 0 &
\end{pmatrix}_{2n \times 3N_{qj}}, \tag{43}
$$
where $N_{qj} = \max\{N_{1j}, \ldots, N_{nj}\}$. Consequently, as we organize the equations for all groups, the global coefficient matrix is given in the following form:
$$\mathbf{C} = \begin{pmatrix}
\left[\mathbf{C}_1\right]_{2n \times 3N_{q1}} & 0 & \cdots & 0 \\
0 & \left[\mathbf{C}_2\right]_{2n \times 3N_{q2}} & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & \left[\mathbf{C}_m\right]_{2n \times 3N_{qm}}
\end{pmatrix}_{2nm \times 3N_{\max}} \tag{44}$$
C is a quasidiagonal partitioned matrix; every subblock has 2n rows. The maximal number of columns of C is determined by $N_{\max} = \max\{N_{q1}, \ldots, N_{qm}\}$.
The unknown variables are as follows:

$$[\boldsymbol{\Omega}]_{3N_{\max} \times 1} = \left[\omega_{11}\; \omega_{12}\; \omega_{13}\; \cdots\; \omega_{N_{\max}1}\; \omega_{N_{\max}2}\; \omega_{N_{\max}3}\right]^T \tag{45}$$
The constants are as follows:

$$\Delta\mathbf{u}_{2mn \times 1} = \left[\Delta x'_{11}\; \Delta y'_{11}\; \cdots\; \Delta x'_{n1}\; \Delta y'_{n1}\; \cdots\; \Delta x'_{1m}\; \Delta y'_{1m}\; \cdots\; \Delta x'_{nm}\; \Delta y'_{nm}\right]^T,$$
$$\mathbf{s}_{2mn \times 1} = \left[s_{x1}\; s_{y1}\; \cdots\; s_{xn}\; s_{yn}\; \cdots\; s_{x1}\; s_{y1}\; \cdots\; s_{xn}\; s_{yn}\right]^T \tag{46}$$
Mathematical Problems in Engineering 13
[Figure 12: The flow chart of the attitude motion measurement. For each sampling pixel κ = 1, …, N_max: (1) select the original template T1 centered on the κth sampling pixel from the referenced image captured by the prior CCD; (2) predict the deformation and displacement of every element via optical flow prediction based on the auxiliary data of the satellite, and reconstruct a new deformed image T′1; (3) reconfigure the deformed image via an image resampling process to form a new template T2; (4) use the normalized cross-correlation algorithm to register T2 on the sensed image captured by the posterior CCD; (5) measure the relative offsets in the sensed window; (6) compute the precise offset in the sensed window by adding the optical flow prediction; (7) use the offset data as the fixed-solution conditions for the optical flow inversion equations and solve the inverse problem for the angular velocity ω. Preliminary information acquisition feeds the loop (κ = κ + 1 until κ = N_max); the results are kept for validation and further usage.]
Δu has been measured by the image dense registration; s can be determined from the auxiliary data of the sensors. The global equations are expressed by

$$\mathbf{C}_{2mn \times 3N_{\max}} \cdot [\boldsymbol{\Omega}]_{3N_{\max} \times 1} = \Delta\mathbf{u}_{2mn \times 1} - \mathbf{s}_{2mn \times 1} \tag{47}$$
For this problem, it is easy to verify that the conditions (1) $2nm > 3N_{\max}$ and (2) $\mathrm{rank}(\mathbf{C}) = 3N_{\max}$ are easily met in practical work. To solve (47), well-posedness is the critical issue for the inverse problem. The strong nonlinearity and anisotropy of the optical flow greatly reduce the correlation between the coefficients in C and thereby increase the well-posedness of the solution. The least-squares solution of (47) can be obtained as

$$[\boldsymbol{\Omega}] = \left(\mathbf{C}^T \mathbf{C}\right)^{-1} \mathbf{C}^T \left(\Delta\mathbf{u} - \mathbf{s}\right) \tag{48}$$
The well-posedness can be examined by applying Singular Value Decomposition (SVD) to C. Consider the nonnegative definite matrix $\mathbf{C}^T\mathbf{C}$, whose eigenvalues are given in order $\lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_{3N_{\max}}$:

$$\mathbf{C} = \mathbf{U}\left[\boldsymbol{\sigma}\right]\mathbf{V}^T \tag{49}$$

where $\mathbf{U}_{2mn \times 2mn}$ and $\mathbf{V}_{3N_{\max} \times 3N_{\max}}$ are unit orthogonal matrices and the singular values are $\sigma_i = \sqrt{\lambda_i}$. The well-posedness of the solution is acceptable if the condition number $\kappa(\mathbf{C}) = \sigma_1/\sigma_{3N_{\max}} \le tol$.
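As a concrete illustration, the least-squares solve of (48) together with the SVD conditioning check of (49) can be sketched as follows. This is a minimal NumPy example: the randomly generated C merely stands in for the actual optical flow coefficient matrix, and `tol` is an assumed threshold, not a value given in the paper.

```python
import numpy as np

def solve_angular_velocity(C, du, s, tol=1e6):
    """Least-squares solution of C @ omega = du - s, guarded by an
    SVD well-posedness check: kappa(C) = sigma_1 / sigma_min."""
    sigma = np.linalg.svd(C, compute_uv=False)   # singular values, descending
    kappa = sigma[0] / sigma[-1]                 # condition number of C
    if kappa > tol:
        raise ValueError(f"ill-posed system: kappa = {kappa:.3e}")
    # Equivalent to (C^T C)^{-1} C^T (du - s), but numerically safer.
    omega, *_ = np.linalg.lstsq(C, du - s, rcond=None)
    return omega, kappa

# Toy system: 2nm = 40 equations, 3 N_max = 9 unknowns, noise-free data.
rng = np.random.default_rng(1)
C = rng.standard_normal((40, 9))
omega_true = rng.standard_normal(9)
s = rng.standard_normal(40)
du = C @ omega_true + s
omega, kappa = solve_angular_velocity(C, du, s)
print(np.allclose(omega, omega_true))            # True
```

On noise-free, full-rank data the least-squares solution recovers the angular velocity vector exactly; with real registration noise the residual reflects the conditioning measured by κ(C).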
Associating the process of inverse problem solving in Section 4 with the process of preliminary information acquisition in Section 3, the whole algorithm for the remote sensor's attitude measurement is illustrated in the flow chart in Figure 12.
4.2. Experimental Results and Validation. In the experiment, 72,940 samples on 7 image strip pairs were involved. To keep the values in Ξ and Λ nearly invariant, we redistributed these samples into 20 subspaces and solved for the three axial components of the angular velocity. According to Shannon's sampling theorem, the measurable frequency $f_c$ is expected to reach up to half the line rate of the TDI CCD; for the experiment, $f_c \approx 17.49$ kHz. The $\omega_i \sim t$ curves over $0\,\mathrm{s} \sim 0.148\,\mathrm{s}$ are shown in Figure 13. In this period, $\omega_{2\max} = 0.001104^\circ/\mathrm{s}$ and $\omega_{1\max} = 0.001194^\circ/\mathrm{s}$. The signal of $\omega_3(t)$ fluctuates around the mean value $\bar{\omega}_3 = 0.01752^\circ/\mathrm{s}$. It is not hard to infer that high frequency jitters
[Figure 13: Solutions for the angular velocities of the remote sensor; ω₁, ω₂ (×10⁻³ deg/s), and ω₃ (deg/s) plotted against imaging time (s).]
were perturbing the remote sensor. Besides, compared with the signals of $\omega_1(t)$ and $\omega_2(t)$, the low frequency components in $\omega_3(t)$ are higher in magnitude. Actually, for this remote sensor the satellite yaw angle needs to be regulated in real time to compensate for the image rotation on the focal plane, such that the detectors can always scan along the direction of image motion. Based on the auxiliary data, the image motion velocity vector V of the central pixel in the FOV can be computed, so the optimal yaw motion in principle will be

$$\psi^*_t = \frac{V_{y'}}{V_{x'}}, \qquad \omega^*_3(t) = \dot{\psi}^*_t = \frac{\dot{V}_{y'} V_{x'} - V_{y'} \dot{V}_{x'}}{V^2_{x'}} \tag{50}$$
The mean value of $\omega^*_3(t)$ is $\bar{\omega}^*_3 = 0.01198^\circ/\mathrm{s}$. We attribute $\Delta\omega^*_3 = \bar{\omega}_3 - \bar{\omega}^*_3 = 0.00554^\circ/\mathrm{s}$ to the error of satellite attitude control.

In order to validate the measurement, the technique of template reconfiguration was implemented again to check the expected phenomenon that, based on the high-accuracy information, the correlations between the new templates and $T_s$ should be further improved. In addition, the distribution of γ near $\gamma_{\max}$ should become more compact, which is easy to understand since much more useful information about the remote sensor's motion is introduced into the template reconstructions and increases the similarities between the lapped images.
Unlike the processing in image dense registration, in the validation phase larger original templates are selected. Let $T_1$ be the referenced image template centered at the examining element, $T_2$ the new template reconfigured by the rough prediction of optical flow, $\hat{T}_2$ the new template reconfigured based on the precise attitude motion measurement, and $T_s$ the template on the sensed image centered at the registration pixel. For all templates, $M = N = 101$. The distributions of the normalized cross-correlation coefficients for the referenced template centered on the sample selected in the No. 1000 row of the No. 7 CCD, with the sensed image belonging to the No. 8 CCD, are illustrated in Figure 14: (a) shows the situation for $T_1$ and $T_s$, (b) for $T_2$ and $T_s$, and (c) for $\hat{T}_2$ and $T_s$. The compactness of the data is characterized by the peak value $\gamma_{\max}$ and the location variances $\sigma^2_x$, $\sigma^2_y$:
$$\sigma^2_x = \frac{\sum_{i=1}^{M}\sum_{j=1}^{M} \gamma_{ij} \cdot \left(i - x_{\max}\right)^2}{\sum_{i=1}^{M}\sum_{j=1}^{M} \gamma_{ij}}, \qquad
\sigma^2_y = \frac{\sum_{i=1}^{M}\sum_{j=1}^{M} \gamma_{ij} \cdot \left(j - y_{\max}\right)^2}{\sum_{i=1}^{M}\sum_{j=1}^{M} \gamma_{ij}} \tag{51}$$
where $x_{\max}$ and $y_{\max}$ are, respectively, the column and row numbers of the peak-valued location.
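Evaluating $\gamma_{\max}$, its location, and the compactness measures of (51) on a discrete correlation surface can be sketched as follows (NumPy assumed; the synthetic Gaussian surface merely stands in for a real NCC map):

```python
import numpy as np

def peak_and_variances(gamma):
    """Return gamma_max, its (column, row) location, and the location
    variances sigma_x^2, sigma_y^2 of eq. (51)."""
    j_max, i_max = np.unravel_index(np.argmax(gamma), gamma.shape)
    jj, ii = np.indices(gamma.shape)             # row and column index grids
    total = gamma.sum()
    var_x = (gamma * (ii - i_max) ** 2).sum() / total
    var_y = (gamma * (jj - j_max) ** 2).sum() / total
    return gamma.max(), (i_max, j_max), var_x, var_y

# Synthetic 101 x 101 surface: a compact peak gives small variances.
jj, ii = np.indices((101, 101))
gamma = np.exp(-((ii - 50) ** 2 + (jj - 40) ** 2) / (2 * 4.0 ** 2))
g_max, (x_max, y_max), var_x, var_y = peak_and_variances(gamma)
print(x_max, y_max)  # 50 40
```

For this isotropic Gaussian surface with spread 4 pixels, both variances come out near 16, mirroring how a more compact γ distribution yields smaller σ²_x and σ²_y.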
In case (a), $\gamma_{\max}(a) = 0.893$ with standard deviations $\sigma_x(a) = 5.653$ and $\sigma_y(a) = 8.192$; in case (b), $\gamma_{\max}(b) = 0.918$, $\sigma_x(b) = 4.839$, and $\sigma_y(b) = 6.686$; in case (c), $\gamma_{\max}(c) = 0.976$, and the variance sharply shrinks to $\sigma_x(c) = 3.27$ and $\sigma_y(c) = 4.06$. In Table 2, other samples at 1000-row intervals are also examined; the samples can be regarded as independent of each other.
Judging from the results, the performances in case (c) are better than those in case (b) and much better than those in case (a), since the precise attitude motion measurements enhance the precision of the optical flow inversion and thereby improve the similarities between the new templates and the sensed images. Note that although in case (b) the variance decreases only slightly, as analyzed in Section 3.2, compared with case (a) the offsets of the centroids from the peaks have been corrected well by the use of the rough optical flow predictions.
4.3. Summary and Discussion. From the preceding sections we can see that, compared with ordinary NCC, the precision of image registration is greatly improved thanks to the technique of template reconfiguration. By applying the auxiliary data from the space-borne sensors to the optical flow prediction, the relative deformations between the lapped image pairs can be computed with considerable accuracy. Afterwards, they are used to estimate the gray values of the corresponding parts on the sensed images and to construct a new template for registration. As we know, the space-borne sensors can provide the middle and low frequency components of the imager's attitude motion with excellent precision. Thus, compared with classical direct template-based registration algorithms, the similarity between the reconfigured template and the sensed images may greatly increase. Furthermore, the minor deformations attributed to high frequency jitters can be detected by subpixel registration between the reconfigured templates and the sensed images. This point of view is the exact basis of high frequency jitter measurement with optical flow inversion.
5. Conclusion

In this paper, optical flows and time-varying image deformation in space dynamic imaging are analyzed in detail. The nonlinear and anisotropic image motion velocity and optical flows are utilized to strengthen the well-posedness of the inverse problem of precise attitude measurement by the optical
[Figure 14: three panels (a), (b), and (c), each showing the spatial distribution of γ over a 101 × 101 pixel domain (spatial domain X and Y in pixels).]
Figure 14: Normalized cross-correlations comparison. ((a) the distribution of γ obtained by applying the direct NCC algorithm; (b) the distribution of γ after template reconfiguration with optical flow prediction; (c) the distribution of γ derived from posterior template reconfiguration with the high-accuracy sensor attitude measurement. It can be noticed that, from left to right, the values of γ tend to concentrate around the peak-value location.)
Table 2: Correlation coefficient distributions for the registration templates.

Row number | γ_max (a, b, c) | σ_x (a, b, c) | σ_y (a, b, c)
No. 1000 | 0.893, 0.918, 0.976 | 5.653, 4.839, 3.27 | 8.192, 6.686, 4.06
No. 2000 | 0.807, 0.885, 0.929 | 8.704, 6.452, 2.13 | 6.380, 7.342, 5.71
No. 3000 | 0.832, 0.940, 0.988 | 4.991, 3.023, 1.55 | 7.704, 4.016, 1.93
No. 4000 | 0.919, 0.935, 0.983 | 5.079, 3.995, 3.61 | 5.873, 5.155, 3.85
No. 5000 | 0.865, 0.922, 0.951 | 5.918, 4.801, 2.37 | 6.151, 2.371, 2.57
No. 6000 | 0.751, 0.801, 0.907 | 12.57, 9.985, 7.89 | 14.66, 8.213, 2.06
No. 7000 | 0.759, 0.846, 0.924 | 11.63, 10.84, 7.14 | 12.71, 8.267, 4.90
No. 8000 | 0.884, 0.900, 0.943 | 8.125, 3.546, 5.42 | 8.247, 6.770, 2.88
flow inversion method. For the purpose of determining the fixed-solution conditions of the optical flow equations, information-based image registration algorithms are proposed. We apply rough optical flow prediction to improve the efficiency and accuracy of dense image registration. Based on the results of registration, the attitude motions of remote sensors during imaging are measured by the precise optical flow inversion method. The experiment on a remote sensor showed that the measurements are achieved with very high accuracy as well as broad bandwidth. This method can be used extensively in remote sensing missions such as image strip splicing, geometrical rectification, and nonblind image restoration, to promote surveying precision and resolving power.
Conflict of Interests

The authors declare that they have no financial or personal relationships with other people or organizations that could inappropriately influence their work; there is no professional or other personal interest of any nature in any product, service, and/or company that could be construed as influencing the position presented in, or the review of, this paper.
Acknowledgments

This work is supported by the National High Technology Research and Development Program of China (863 Program) (Grants no. 2012AA121503, 2013AA12260, and 2012AA120603) and the National Natural Science Foundation of China (Grant no. 61377012).
References

[1] V. Tchernykh, M. Beck, and K. Janschek, "An embedded optical flow processor for visual navigation using optical correlator technology," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '06), pp. 67–72, Beijing, China, October 2006.
[2] K. Janschek and V. Tchernykh, "Optical correlator for image motion compensation in the focal plane of a satellite camera," in Proceedings of the 15th IFAC Symposium on Automatic Control in Aerospace, Bologna, Italy, 2001.
[3] W. Priedhorsky and J. J. Bloch, "Optical detection of rapidly moving objects in space," Applied Optics, vol. 44, no. 3, pp. 423–433, 2005.
[4] T. Brox and J. Malik, "Large displacement optical flow: descriptor matching in variational motion estimation," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 33, no. 3, pp. 500–513, 2011.
[5] B. Feng, P. P. Bruyant, P. H. Pretorius, et al., "Estimation of the rigid-body motion from three-dimensional images using a generalized center-of-mass points approach," IEEE Transactions on Nuclear Science, vol. 53, no. 5, pp. 2712–2718, 2006.
[6] J. Wang, P. Yu, C. Yan, J. Ren, and B. He, "Space optical remote sensor image motion velocity vector computational modeling, error budget and synthesis," Chinese Optics Letters, vol. 3, no. 7, pp. 414–417, 2005.
[7] A. S. McEwen, M. E. Banks, N. Baugh, et al., "The High Resolution Imaging Science Experiment (HiRISE) during MRO's Primary Science Phase (PSP)," Icarus, vol. 205, no. 1, pp. 2–37, 2010.
[8] F. Ayoub, S. Leprince, R. Binet, K. W. Lewis, O. Aharonson, and J.-P. Avouac, "Influence of camera distortions on satellite image registration and change detection applications," in Proceedings of the IEEE International Geoscience and Remote Sensing Symposium (IGARSS '08), pp. II-1072–II-1075, Boston, Mass, USA, 2008.
[9] S. Leprince, S. Barbot, F. Ayoub, and J.-P. Avouac, "Automatic and precise orthorectification, coregistration, and subpixel correlation of satellite images, application to ground deformation measurements," IEEE Transactions on Geoscience and Remote Sensing, vol. 45, no. 6, pp. 1529–1558, 2007.
[10] S. Leprince, P. Muse, and J.-P. Avouac, "In-flight CCD distortion calibration for pushbroom satellites based on subpixel correlation," IEEE Transactions on Geoscience and Remote Sensing, vol. 46, no. 9, pp. 2675–2683, 2008.
[11] Y. Yitzhaky, R. Milberg, S. Yohaev, and N. S. Kopeika, "Comparison of direct blind deconvolution methods for motion-blurred images," Applied Optics, vol. 38, no. 20, pp. 4325–4332, 1999.
[12] R. C. Hardie, K. J. Barnard, and R. Ordonez, "Fast super-resolution with affine motion using an adaptive Wiener filter and its application to airborne imaging," Optics Express, vol. 19, no. 27, pp. 26208–26231, 2011.
[13] E. M. Blixt, J. Semeter, and N. Ivchenko, "Optical flow analysis of the aurora borealis," IEEE Geoscience and Remote Sensing Letters, vol. 3, no. 1, pp. 159–163, 2006.
[14] M. G. Mozerov, "Constrained optical flow estimation as a matching problem," IEEE Transactions on Image Processing, vol. 22, no. 5, pp. 2044–2055, 2013.
[15] H. Sakaino, "A semitransparency-based optical-flow method with a point trajectory model for particle-like video," IEEE Transactions on Image Processing, vol. 21, no. 2, pp. 441–450, 2012.
[16] D. Korsch, "Closed form solution for three-mirror telescopes, corrected for spherical aberration, coma, astigmatism, and field curvature," Applied Optics, vol. 11, no. 12, pp. 2986–2987, 1972.
[17] G. Naletto, V. da Deppo, M. G. Pelizzo, R. Ragazzoni, and E. Marchetti, "Optical design of the wide angle camera for the Rosetta mission," Applied Optics, vol. 41, no. 7, pp. 1446–1453, 2002.
[18] M. Born, E. Wolf, A. B. Bhatia, and P. C. Clemmow, Principles of Optics: Electromagnetic Theory of Propagation, Interference and Diffraction of Light, 7th edition, 1999.
[19] H. Schaub and J. L. Junkins, Analytical Mechanics of Space Systems, AIAA Education Series, 2002.
[20] C. Wang, F. Xing, J. H. Wang, and Z. You, "Optical flows method for lightweight agile remote sensor design and instrumentation," in International Symposium on Photoelectronic Detection and Imaging, vol. 8908 of Proceedings of the SPIE, 2013.
[21] T. Sun, F. Xing, and Z. You, "Optical system error analysis and calibration method of high-accuracy star trackers," Sensors, vol. 13, no. 4, pp. 4598–4623, 2013.
[22] T. Sun, F. Xing, Z. You, and M. Wei, "Motion-blurred star acquisition method of the star tracker under high dynamic conditions," Optics Express, vol. 21, no. 17, pp. 20096–20110, 2013.
[23] L. Younes, "Combining geodesic interpolating splines and affine transformations," IEEE Transactions on Image Processing, vol. 15, no. 5, pp. 1111–1119, 2006.
[24] B. Zitova and J. Flusser, "Image registration methods: a survey," Image and Vision Computing, vol. 21, no. 11, pp. 977–1000, 2003.
[25] Z. L. Song, S. Li, and T. F. George, "Remote sensing image registration approach based on a retrofitted SIFT algorithm and Lissajous-curve trajectories," Optics Express, vol. 18, no. 2, pp. 513–522, 2010.
[26] V. Arevalo and J. Gonzalez, "Improving piecewise linear registration of high-resolution satellite images through mesh optimization," IEEE Transactions on Geoscience and Remote Sensing, vol. 46, no. 11, pp. 3792–3803, 2008.
[27] Z. Levi and C. Gotsman, "D-snake: image registration by as-similar-as-possible template deformation," IEEE Transactions on Visualization and Computer Graphics, vol. 19, no. 2, pp. 331–343, 2013.
[28] R. J. Althof, M. G. J. Wind, and J. T. Dobbins III, "A rapid and automatic image registration algorithm with subpixel accuracy," IEEE Transactions on Medical Imaging, vol. 16, no. 3, pp. 308–316, 1997.
[29] W. Tong, "Subpixel image registration with reduced bias," Optics Letters, vol. 36, no. 5, pp. 763–765, 2011.
[30] Y. Bentoutou, N. Taleb, K. Kpalma, and J. Ronsin, "An automatic image registration for applications in remote sensing," IEEE Transactions on Geoscience and Remote Sensing, vol. 43, no. 9, pp. 2127–2137, 2005.
[31] L. S. Ming, L. Yan, and L. Jindong, "Mapping satellite-1 transmission type photogrammetric and remote sensing," Journal of Remote Sensing, vol. 16, supplement, pp. 10–16, 2012 (Chinese).
[32] J. P. Lewis, "Fast template matching," Vision Interface, vol. 95, pp. 120–123, 1995.
[33] H. Foroosh, J. B. Zerubia, and M. Berthod, "Extension of phase correlation to subpixel registration," IEEE Transactions on Image Processing, vol. 11, no. 3, pp. 188–200, 2002.
[Figure 5: Optical flow prediction and template reconfiguration. The referenced image of the prior CCD (templates T0, T1, T′1) is mapped to the sensed image of the posterior CCD (templates T2, Ts); element vertices are labeled 1–4.]
on Σ, τ₁ ~ τ₄ are conjugate to τ′₁ ~ τ′₄ and tangent to the earth surface at p. From the geometrical relations we have

$$^{C}\boldsymbol{\tau}_i = (-1)^m \frac{\mathbf{r}' \times \boldsymbol{\tau}'_i \times {}^{C}\mathbf{n}_p}{\left|\mathbf{r}' \times \boldsymbol{\tau}'_i \times {}^{C}\mathbf{n}_p\right|},$$
$$^{E}\boldsymbol{\tau}_i = \mathbf{T}^{-1}_{EI}\,\mathbf{T}_{OI}\,\mathbf{A}^{-1}\,\mathbf{M}^{-1}\; {}^{C}\boldsymbol{\tau}_i,$$
$$^{C}\mathbf{n}_p = \mathbf{M}\,\mathbf{A}\,\mathbf{T}^{-1}_{OI}\,\mathbf{T}_{EI}\; {}^{E}\mathbf{n}_p \tag{27}$$
where $^{E}\mathbf{n}_p$ is the unit normal vector of Σ at p. We predict the deformations along τ₁ ~ τ₄ during t₁ ~ t₂ according to the measurements of GPS, star trackers, and gyroscopes, as explained in Figure 6; t₁ is the imaging time on the prior detector and t₂ is the imaging time on the posterior detector:
$$\left[\delta x'_1\right]^{\Delta t}_{\tau_k} = \left[\delta x'_1\right]^{t_2}_{\tau_k} - \left[\delta x'_1\right]^{t_1}_{\tau_k}, \qquad
\left[\delta x'_2\right]^{\Delta t}_{\tau_k} = \left[\delta x'_2\right]^{t_2}_{\tau_k} - \left[\delta x'_2\right]^{t_1}_{\tau_k} \quad (k = 1 \sim 4) \tag{28}$$
The shape of the deformed image $\Sigma'_{t_2}$ can be obtained through linear interpolation with

$$\left[\delta\mathbf{r}'\right]^{\Delta t}_{\tau_k} = \left(\left[\delta x'_1\right]^{\Delta t}_{\tau_k},\; \left[\delta x'_2\right]^{\Delta t}_{\tau_k}\right) \tag{29}$$
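The differencing of (28) followed by the interpolation of (29) can be sketched as follows (NumPy assumed; the corner displacement values are illustrative only, and bilinear interpolation is used as a simple instance of the linear interpolation the text calls for):

```python
import numpy as np

# Corner displacements [dx'_1, dx'_2] of tau_1..tau_4 at t1 and t2 (eq. (28)).
d_t1 = np.array([[0.00, 0.00], [0.01, 0.00], [0.01, 0.02], [0.00, 0.02]])
d_t2 = np.array([[0.05, 0.01], [0.07, 0.01], [0.07, 0.04], [0.05, 0.04]])
d_dt = d_t2 - d_t1                      # [delta r']^{Delta t}_{tau_k}, eq. (29)

def bilinear(corners, a, b):
    """Interpolate corner values (ordered tau_1..tau_4 counterclockwise
    from the first vertex) at local coordinates a, b in [0, 1]."""
    c1, c2, c3, c4 = corners
    return (1 - a) * (1 - b) * c1 + a * (1 - b) * c2 + a * b * c3 + (1 - a) * b * c4

center = bilinear(d_dt, 0.5, 0.5)       # deformation at the element center
print(center)
```

At the element center the interpolated deformation is simply the average of the four corner differences.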
Step 3. Reconfigure the referenced template T₁ according to the optical flow prediction and then get a new template T₂.

Let T′₁ be the deformed image of T₁ computed in Step 2. Let χ = B_ij be the central element of T′₁; integers i and j are, respectively, the row and column numbers of B_ij. The gray value l_ij of each element in T′₁ is equal to its counterpart in T₁ with the same indexes. In addition, we initialize a null template T₀ whose shape and orientation are identical to T₁; the central element of T₀ is denoted by T_ij.
[Figure 6: Deformation of a single element. The vertices 1–4 of an element at t₁ map to 1′–4′ at t₂; the displacements $[\delta\mathbf{r}']^{\Delta t}_{\tau_1} \sim [\delta\mathbf{r}']^{\Delta t}_{\tau_4}$ carry the boundary $\Sigma'_{t_1}$ to $\Sigma'_{t_2}$, with p′ and τ′₁ ~ τ′₄ marked.]
Then we cover T₀ upon T′₁ and let their centers coincide, that is, $T_{ij} = B_{ij}$, as shown in Figure 7. Denote the vertexes of T′₁ as $V^k_{ij}$ (k = 1 ~ 4). Therefore, the connective relation for adjacent elements can be expressed by $V^1_{ij} = V^2_{i,j-1} = V^3_{i-1,j-1} = V^4_{i-1,j}$.
Next, we reassign the gray value $h'_{ij}$ to $T_{ij}$ ($i = 1, \ldots, N$, $j = 1, \ldots, N$) in sequence to construct a new template T₂. The process is a simulation of the image resampling that occurs when the optical flow arrives at the posterior detector, as indicated in Figure 3. That is,

$$h'_{ij} = \sum_{m=i-1}^{i+1} \sum_{n=j-1}^{j+1} \eta_{mn} l_{mn} \tag{30}$$
The weight coefficient $\eta_{mn} = S_{mn}/w^2$, where $S_{mn}$ is the area of the intersecting polygon of $B_{mn}$ with $T_{ij}$.
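The area-weighted resampling of (30) can be sketched as follows. This is a simplified NumPy example: the overlap areas $S_{mn}$ are supplied directly as an assumed 3×3 array, rather than computed by the polygon clipping that the geometric step would actually provide.

```python
import numpy as np

def reassign_gray(l, S, i, j, w=1.0):
    """Gray value h'_ij as the weighted sum over the 3x3 neighbourhood
    of deformed elements, with weights eta_mn = S_mn / w^2 (eq. (30))."""
    h = 0.0
    for m in range(i - 1, i + 2):
        for n in range(j - 1, j + 2):
            h += (S[m - i + 1, n - j + 1] / w ** 2) * l[m, n]
    return h

l = np.arange(25, dtype=float).reshape(5, 5)   # toy deformed-image gray values
S = np.full((3, 3), 1.0 / 9.0)                 # equal overlap areas summing to w^2
h = reassign_gray(l, S, i=2, j=2)
print(h)  # 12.0, the mean of the 3x3 neighbourhood around l[2, 2]
```

With equal overlap areas the reassignment degenerates to a 3×3 box average; skewed overlap areas shift the weight toward the elements that actually cover $T_{ij}$.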
[Figure 7: Template reconfiguration. The null template T₀ (element $T_{ij}$) is overlaid on the deformed image T′₁ (elements $B_{i\pm1,j\pm1}$ with vertices $V^1_{ij}$ ~ $V^4_{ij}$).]
Step 4. Compute the normalized cross-correlation coefficients between T₂ and the sensed image, and then determine the subpixel offset of T₂ relative to the sensed image in frame P.

Firstly, for this method the search space on the sensed image can be contracted considerably, since the optical flow trajectories for the referenced elements have been predicted in Step 2. Assume that the search space is $T_s$, with $\dim(T_s) = M \times M$. When $T_{ij}$ moves to the pixel $(n_1, n_2)$ on $T_s$, the normalized cross-correlation (NCC) coefficient is given by
$$\gamma(n_1, n_2) = \frac{\sum_{x,y}\left[g(x,y) - \bar{g}_{xy}\right]\left[h(x - n_1, y - n_2) - \bar{h}\right]}{\left\{\sum_{x,y}\left[g(x,y) - \bar{g}_{xy}\right]^2 \sum_{x,y}\left[h(x - n_1, y - n_2) - \bar{h}\right]^2\right\}^{0.5}} \tag{31}$$
where $\bar{g}_{xy}$ is the mean gray value of the segment of $T_s$ that is masked by T₂ and $\bar{h}$ is the mean of T₂. Equation (31) requires approximately $N^2(M - N + 1)^2$ additions and $N^2(M - N + 1)^2$ multiplications, whereas the FFT algorithm needs about $12M^2\log_2 M$ real multiplications and $18M^2\log_2 M$ real additions/subtractions [32, 33].
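A direct spatial-domain evaluation of (31) can be sketched as follows (NumPy assumed; `template` plays the role of T₂ and `window` the role of $T_s$, with the offset convention simplified to top-left patch coordinates):

```python
import numpy as np

def ncc(window, template, n1, n2):
    """Normalized cross-correlation of eq. (31) at offset (n1, n2):
    the template is compared with the window patch it currently masks."""
    N = template.shape[0]
    patch = window[n1:n1 + N, n2:n2 + N]
    g = patch - patch.mean()
    h = template - template.mean()
    return (g * h).sum() / np.sqrt((g ** 2).sum() * (h ** 2).sum())

rng = np.random.default_rng(2)
window = rng.standard_normal((101, 101))        # M = 101 search space
template = window[30:37, 40:47].copy()          # N = 7 template, true offset (30, 40)
scores = [[ncc(window, template, a, b) for b in range(95)] for a in range(95)]
peak = tuple(int(v) for v in np.unravel_index(np.argmax(scores), (95, 95)))
print(peak)  # (30, 40)
```

Because the template is an exact cutout, γ reaches 1 at the true offset; with real imagery the peak is lower and its neighbourhood spreads, which is exactly what the variances of (51) later quantify.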
At the beginning we take M = 101, N = 7 and compute the NCC coefficients; when M is much larger than N, the calculation in the spatial domain is efficient. Suppose that the peak value $\gamma_{\max}$ is taken at the coordinate $(k, m)$, $k, m \in \mathbb{Z}$, in the sensed window. We then reduce the search space into a smaller one with dimension 47 × 47 centered on $T_s(k, m)$. Next, the subpixel registration is realized by the phase correlation algorithm with larger M and N to suppress the system errors owing to the deficiency of detailed textures in the photograph; here we take M = 47, N = 23. Let the subpixel offset between the two registering image elements be denoted as $\delta_x$ and $\delta_y$ in frame P.

The phase correlation algorithm in the frequency domain becomes more efficient as N approaches M and both have larger scales [28]. Moreover, the Fourier coefficients are normalized to unit magnitude prior to computing the correlation, so that the correlation is based only on phase information and is insensitive to changes in image intensity [27, 29].
Let G(u, v) be the 2D Discrete Fourier Transform (DFT) of the sensed window; then we have

$$\mathbf{G}(u, v) = \sum_{x=-(N-1)/2}^{(N-1)/2} \sum_{y=-(N-1)/2}^{(N-1)/2} g(x, y)\, W^{ux}_M W^{vy}_M,$$
$$\mathbf{H}(u, v) = \sum_{x=-(N-1)/2}^{(N-1)/2} \sum_{y=-(N-1)/2}^{(N-1)/2} h(x, y)\, W^{ux}_N W^{vy}_N \tag{32}$$

Here

$$W_N = \exp\left(-j\frac{2\pi}{N}\right) \tag{33}$$
The cross-phase spectrum is given by

$$\mathbf{R}(u, v) = \frac{\mathbf{G}(u, v)\,\mathbf{H}^*(u, v)}{\left|\mathbf{G}(u, v)\,\mathbf{H}^*(u, v)\right|} = \exp\left(j\phi(u, v)\right) \tag{34}$$

where H* is the complex conjugate of H. By the inverse Discrete Fourier Transform (IDFT) we have

$$\gamma(n_1, n_2) = \frac{1}{N^2} \sum_{u=-(N-1)/2}^{(N-1)/2} \sum_{v=-(N-1)/2}^{(N-1)/2} \mathbf{R}(u, v)\, W^{-un_1}_N W^{-vn_2}_N \tag{35}$$
Figure 8: Dense image registration for lapped image strips: CCD1 versus CCD2 (Gap 1, the left two) and CCD3 versus CCD4 (Gap 3, the right two).
Suppose that the new peak $\gamma_{\max}$ appears at $(k', m')$, $k', m' \in \mathbb{Z}$; referring to [27] we have the following relation:

$$\gamma_{\max}(k', m') \approx \frac{\lambda}{N^2} \cdot \frac{\sin\left[\pi\left(k' + \delta_x\right)\right]\,\sin\left[\pi\left(m' + \delta_y\right)\right]}{\sin\left[(\pi/N)\left(k' + \delta_x\right)\right]\,\sin\left[(\pi/N)\left(m' + \delta_y\right)\right]} \tag{36}$$

The right side presents the spatial distribution of the normalized cross-correlation coefficients; therefore $(\delta_x, \delta_y)$ can be measured based on it. In practice the constant λ ≤ 1; it tends to decrease when small noise exists and equals unity in ideal cases.
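The phase correlation pipeline of (32)–(35) amounts to normalizing the cross-spectrum to unit magnitude before the inverse transform. A compact sketch for the integer-shift case (NumPy FFT assumed; for a pure cyclic shift the correlation peak lands exactly on the shift, whereas a subpixel shift spreads the peak as described by (36)):

```python
import numpy as np

def phase_correlation(g, h):
    """Cross-phase spectrum R = G H* / |G H*| (eq. (34)) followed by an
    inverse DFT (eq. (35)); returns the real correlation surface."""
    G = np.fft.fft2(g)
    H = np.fft.fft2(h)
    R = G * np.conj(H)
    R /= np.abs(R) + 1e-12          # unit magnitude: phase-only correlation
    return np.real(np.fft.ifft2(R))

rng = np.random.default_rng(3)
h = rng.standard_normal((23, 23))               # N = 23 template
g = np.roll(h, shift=(5, 3), axis=(0, 1))       # cyclically shifted copy
gamma = phase_correlation(g, h)
peak = tuple(int(v) for v in np.unravel_index(np.argmax(gamma), gamma.shape))
print(peak)  # (5, 3)
```

Because only the phase survives the normalization, the surface is a near-perfect impulse at the shift and is insensitive to intensity scaling, which is the property [27, 29] exploit for subpixel fitting.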
Step 5. Dense registration is executed for the lapped image strips.

Repeating Steps 1–4, we register the along-track sample images selected from the referenced images to the sensed image. The maximal sample rate can reach up to line-by-line. The continuous procedure is shown in Figure 8, in which the image pairs are marked. The curves of the relative offsets in P are shown in Figures 9 and 10.

Let $\mathrm{col}_r$, $\mathrm{row}_r$ be the column and row indexes of image elements on the referenced image, and let $\mathrm{col}_s$, $\mathrm{row}_s$ be the indexes of the same elements on the sensed image. The total number of columns of each detector is Q = 4096 pix, and the vertical distance between the two detector arrays is D = 18.4975 mm. According to the results of registration, we get the offsets
[Figure 9: The offsets of lapped images captured by CCD1 and CCD2 (samples S11 and S22), plotted against image rows (pixels). Cross track: −25.15 pixels at row 258 and −23.78 pixels at row 423; along track: −5.393 pixels at row 258 and −7.363 pixels at row 423.]
[Figure 10: The offsets of lapped images captured by CCD3 and CCD4 (samples S31 and S32), plotted against image rows (pixels). Cross track: −12.85 pixels at row 266 and −12.97 pixels at row 436; along track: −7.663 pixels at row 266 and −6.869 pixels at row 436.]
of the images at the nth gap, $\delta_{nx}$ (cross track) and $\delta_{ny}$ (along track) in frame P, and $\Delta x'_n$, $\Delta y'_n$ (mm) in frame F:

$$\delta_{nx} = \mathrm{col}_r + \mathrm{col}_s - Q - \eta_n, \qquad \Delta x'_n = \Delta\left(x'_2\right)_n = \delta_{nx} \cdot w,$$
$$\delta_{ny} = \mathrm{row}_s - \mathrm{row}_r - \frac{D}{w}, \qquad \Delta y'_n = \Delta\left(x'_1\right)_n = \delta_{ny} \cdot w + D \tag{37}$$
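Relations (37) convert the registration offsets into focal plane displacements. A quick numerical check against the S11 sample of Table 1 (the pixel pitch w is not quoted at this point in the text; w = 0.00875 mm is an assumed value, chosen because it reproduces the tabulated numbers):

```python
# Eq. (37): physical offsets from registration offsets.
w = 0.00875          # mm, assumed pixel pitch (inferred from Table 1)
D = 18.4975          # mm, vertical distance between the detector arrays

def physical_offsets(delta_x_pix, delta_y_pix):
    """Return (dx', dy') in mm for cross- and along-track pixel offsets."""
    dx = delta_x_pix * w          # cross track:  dx' = delta_nx * w
    dy = delta_y_pix * w + D      # along track:  dy' = delta_ny * w + D
    return dx, dy

dx, dy = physical_offsets(-25.15, -5.39)   # sample S11
print(round(dx, 7), round(dy, 4))  # -0.2200625 18.4503
```

The same conversion reproduces the other rows of Table 1, which supports the assumed pitch; note also that D/w = 2114 is an integer number of lines.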
Four pixels, S11, S12, S31, and S32, are examined; their data are listed in Table 1. S11 and S31 are the images of the same object, captured in order by CCD1 and CCD2 (Gap 1); S12 and S32 were captured, respectively, by CCD3 and CCD4 (Gap 3). Referring to the auxiliary data, S11 and S31 were captured at the same time, while S12 and S32 were captured at different times, which means that the along-track speeds of the two moving images were quite different. Moreover, the cross-track image offsets in Gap 1 and Gap 3 differ greatly, which shows that the optical flows were also distorted unevenly and deflected away from the along-track direction. On the other
Mathematical Problems in Engineering 11
Table 1 The offsets between overlapped images
Sample Row no(pixel)
120575119899119909
(pixel)Δ119909
1015840
119899
(mm)120575119899
119910
(pixel)Δ119910
1015840
119899
(mm)
S11 258 minus2515 minus02200625 minus539 184503
S12 423 minus2378 minus02080750 minus736 184331
S31 266 minus1285 minus01124375 minus766 184304
S32 436 minus1297 minus01134875 minus687 184374
hand it is has been discovered in Figures 9 and 10 that thefluctuation of image offsets taking place in Gap 1 is greaterin magnitude than in Gap 3 All the facts indicate that thedistorted optical flows can be detected from a plenty of imageoffsets We will see later that the nonlinear distribution of thedata strengthens the well-posedness of optical flow inversionalgorithm
4. Remote Sensor Attitude Motion Measurement

In this section, the attitude velocity of the remote sensor is resolved by using the optical flow inversion method. The results of dense registration are applied to provide the fixed-solution conditions for the optical flow equations.
4.1. The Principle of Optical Flow Inversion. For clarity, in frame C the two coordinate components of the image displacement of the kth sample element belonging to the nth lapped strip pair are written as Δx′_nk, Δy′_nk. From (13) and (25) it is easy to show that the contributions to the optical flow owing to the orbital motion and the earth's inertial movement vary only very slightly in the short term, so that the corresponding displacements can be regarded as piecewise constants s_x, s_y.

Let τ_ij, t_ij be, in order, the two sequential imaging times of the jth image sample on the overlapped detectors of the ith gap; they are usually recorded in the auxiliary data of the remote sensor. Hence, for every image element, the number of discrete states in optical flow tracing will be
\[
N_{ij} = \left[ \frac{t_{ij} - \tau_{ij}}{\Delta t} \right] \in \mathbb{Z}^{+}
\quad (i = 1, \ldots, n;\; j = 1, \ldots, m), \tag{38}
\]
where n is the number of CCD gaps, m is the number of sample groups, and Δt is the time step. We set samples with the same j index into the same group, in which the samples are captured simultaneously by the prior detectors.
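A minimal sketch of (38), assuming the bracket denotes the floor operation; the imaging times and the step Δt below are illustrative values, not flight data:

```python
import math

# Number of discrete states per element in optical flow tracing, per (38).
# tau_ij and t_ij are the two sequential imaging times read from the
# auxiliary data; dt is the integration time step.
def n_states(tau_ij, t_ij, dt):
    return math.floor((t_ij - tau_ij) / dt)

# e.g. a 5.29 ms tracing interval at a 0.286 ms step -> 18 states
```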
We expand (11), substitute it into (14) and (13), and then arrange the scalar optical flow inversion equations in terms of the three axial angular velocity components ω_1, ω_2, and ω_3 (the variables of the inverse problem), yielding the linear optical flow equations.
Figure 11: Coefficient determination according to the current location of the image (the locus of the optical flow crosses the CCDs within a bound δ_max over the baseline D; within each region the coefficients c_iμκ are approximately constant).
For the lth group of samples:

\[
\sum_{i=1}^{N_{1l}} \left( c^{i}_{1l1}\,\omega^{i}_{1} + c^{i}_{1l2}\,\omega^{i}_{2} + c^{i}_{1l3}\,\omega^{i}_{3} \right) = \Delta x'_{1l} - s_{x1},
\]
\[
\sum_{i=1}^{N_{1l}} \left( d^{i}_{1l1}\,\omega^{i}_{1} + d^{i}_{1l2}\,\omega^{i}_{2} + d^{i}_{1l3}\,\omega^{i}_{3} \right) = \Delta y'_{1l} - s_{y1},
\]
\[
\vdots
\]
\[
\sum_{i=1}^{N_{nl}} \left( c^{i}_{nl1}\,\omega^{i}_{1} + c^{i}_{nl2}\,\omega^{i}_{2} + c^{i}_{nl3}\,\omega^{i}_{3} \right) = \Delta x'_{nl} - s_{xn},
\]
\[
\sum_{i=1}^{N_{nl}} \left( d^{i}_{nl1}\,\omega^{i}_{1} + d^{i}_{nl2}\,\omega^{i}_{2} + d^{i}_{nl3}\,\omega^{i}_{3} \right) = \Delta y'_{nl} - s_{yn}. \tag{39}
\]
Suppose that the sampling process stops after m groups have been formed. The coefficients are as follows:

\[
c^{i}_{\mu\nu\kappa} = \Xi_{\kappa}\!\left( \mu, \left\lceil \frac{i - \nu + 1}{N_{\mu\nu}}\,\mathcal{N} \right\rceil \right), \qquad
d^{i}_{\mu\nu\kappa} = \Lambda_{\kappa}\!\left( \mu, \left\lceil \frac{i - \nu + 1}{N_{\mu\nu}}\,\mathcal{N} \right\rceil \right)
\quad (\kappa = 1, 2, 3). \tag{40}
\]
Here

\[
\Xi_{k} = \begin{pmatrix}
\xi_{11k} & \xi_{12k} & \cdots & \xi_{1\mathcal{N}k} \\
\xi_{21k} & \xi_{22k} & \cdots & \xi_{2\mathcal{N}k} \\
\vdots & \vdots & & \vdots \\
\xi_{n1k} & \xi_{n2k} & \cdots & \xi_{n\mathcal{N}k}
\end{pmatrix}, \qquad
\Lambda_{k} = \begin{pmatrix}
\lambda_{11k} & \lambda_{12k} & \cdots & \lambda_{1\mathcal{N}k} \\
\lambda_{21k} & \lambda_{22k} & \cdots & \lambda_{2\mathcal{N}k} \\
\vdots & \vdots & & \vdots \\
\lambda_{n1k} & \lambda_{n2k} & \cdots & \lambda_{n\mathcal{N}k}
\end{pmatrix}. \tag{41}
\]
As for the algorithm, to reduce the complexity, all possible values of the coefficients are stored in the matrices Ξ_k and Λ_k. The accuracy is guaranteed because the coefficients of the images moving into the same piece of region are almost equal to an identical constant over a short period, as explained in Figure 11.

It has been mentioned that the optical flow is not sensitive to the satellite's orbital motion and the earth's rotation in the short term; namely, the possible values are assigned by the following functions:
\[
\xi_{ijk} = \xi_{k}\left( a, e, i_{0}, \Omega, \omega, x'_{q}, y'_{q}, \Delta t \right), \qquad
\lambda_{ijk} = \lambda_{k}\left( a, e, i_{0}, \Omega, \omega, x'_{q}, y'_{q}, \Delta t \right),
\]
\[
i = 1 \sim n, \quad j = 1 \sim \mathcal{N}, \quad q = 1 \sim \mathcal{N}. \tag{42}
\]
Here 𝒩 is the number of constant-valued segments in the region encompassing all the possible optical flow trajectories. The orbital elements and the integration step size Δt are common to all functions. Furthermore, when long-term measurements are executed, Ξ_k and Λ_k only need to be renewed according to the current parameters.

The coefficient matrix of the optical flow equations for the jth (1 ≤ j ≤ m) group can be written as
\[
\mathbf{C}_{j} = \begin{pmatrix}
c^{1}_{1j1} & c^{1}_{1j2} & c^{1}_{1j3} & \cdots & c^{N_{1j}}_{1j1} & c^{N_{1j}}_{1j2} & c^{N_{1j}}_{1j3} & \cdots & 0 & 0 \\
d^{1}_{1j1} & d^{1}_{1j2} & d^{1}_{1j3} & \cdots & d^{N_{1j}}_{1j1} & d^{N_{1j}}_{1j2} & d^{N_{1j}}_{1j3} & \cdots & 0 & 0 \\
\vdots & & & & & & & & & \vdots \\
c^{1}_{qj1} & c^{1}_{qj2} & c^{1}_{qj3} & \cdots & \cdots & \cdots & c^{N_{qj}}_{qj1} & c^{N_{qj}}_{qj2} & c^{N_{qj}}_{qj3} & {} \\
d^{1}_{qj1} & d^{1}_{qj2} & d^{1}_{qj3} & \cdots & \cdots & \cdots & d^{N_{qj}}_{qj1} & d^{N_{qj}}_{qj2} & d^{N_{qj}}_{qj3} & {} \\
\vdots & & & & & & & & & \vdots \\
c^{1}_{nj1} & c^{1}_{nj2} & c^{1}_{nj3} & \cdots & c^{N_{nj}}_{nj1} & c^{N_{nj}}_{nj2} & c^{N_{nj}}_{nj3} & \cdots & 0 & {} \\
d^{1}_{nj1} & d^{1}_{nj2} & d^{1}_{nj3} & \cdots & d^{N_{nj}}_{nj1} & d^{N_{nj}}_{nj2} & d^{N_{nj}}_{nj3} & \cdots & 0 & {}
\end{pmatrix}_{2n \times 3N_{qj}}, \tag{43}
\]

where N_qj = max{N_1j, …, N_nj}.

Consequently, as we organize the equations for all groups, the global coefficient matrix will be given in the following form:
\[
\mathbf{C} = \begin{pmatrix}
[\mathbf{C}_{1}]_{2n \times 3N_{q1}} & 0 & \cdots & 0 \\
0 & [\mathbf{C}_{2}]_{2n \times 3N_{q2}} & \cdots & 0 \\
\vdots & & \ddots & \vdots \\
0 & \cdots & 0 & [\mathbf{C}_{m}]_{2n \times 3N_{qm}}
\end{pmatrix}_{2nm \times 3N_{\max}}. \tag{44}
\]

C is a quasidiagonal partitioned matrix; every sub-block has 2n rows. The maximal number of columns of C is 3N_max, where N_max = max{N_q1, …, N_qm}.
The unknown variables are as follows:

\[
[\boldsymbol{\Omega}]_{3N_{\max} \times 1} = \left[ \omega^{1}_{1}\; \omega^{1}_{2}\; \omega^{1}_{3}\; \cdots\; \omega^{N_{\max}}_{1}\; \omega^{N_{\max}}_{2}\; \omega^{N_{\max}}_{3} \right]^{T}. \tag{45}
\]
The constants are as follows:

\[
\Delta\mathbf{u}_{2mn \times 1} = \left[ \Delta x'_{11}\; \Delta y'_{11}\; \cdots\; \Delta x'_{n1}\; \Delta y'_{n1}\; \cdots\; \Delta x'_{1m}\; \Delta y'_{1m}\; \cdots\; \Delta x'_{nm}\; \Delta y'_{nm} \right]^{T},
\]
\[
\mathbf{s}_{2mn \times 1} = \left[ s_{x1}\; s_{y1}\; \cdots\; s_{xn}\; s_{yn}\; \cdots\; s_{x1}\; s_{y1}\; \cdots\; s_{xn}\; s_{yn} \right]^{T}. \tag{46}
\]
Figure 12: The flow chart of the attitude motion measurement. Preliminary information acquisition: (1) read the auxiliary data of the satellite; (2) select the original template T1 centered on the κth sampling pixel from the referenced image captured by the prior CCD; (3) predict the deformation and displacement of every element via optical flow prediction based on the auxiliary data, and reconstruct a new deformed image T′1 in the image frame between T1 and the photography frame; (4) reconfigure the deformed image via an image resampling process to form a new template T2; (5) use the normalized cross-correlation algorithm to register T2 on the sensed image captured by the posterior CCD and measure the relative offsets in the sensed window; (6) compute the precise offset in the sensed window by adding the optical flow prediction; repeat with κ = κ + 1 until κ = Nmax. Inverse problem solving: (7) utilize the offset data as the fixed-solution conditions for the optical flow inversion equations and solve for the angular velocity ω, for validation and further usages.
Δu has been measured by image dense registration; s can be determined from the auxiliary data of the sensors. The global equations are expressed by

\[
\mathbf{C}_{2mn \times 3N_{\max}} \cdot [\boldsymbol{\Omega}]_{3N_{\max} \times 1} = \Delta\mathbf{u}_{2mn \times 1} - \mathbf{s}_{2mn \times 1}. \tag{47}
\]
As for this problem, it is easy to verify that the conditions (1) 2nm > 3N_max and (2) rank(C) = 3N_max are easily met in practical work. To solve (47), well-posedness is the critical issue of the inverse problem. The strong nonlinearity and anisotropy of the optical flow greatly reduce the correlation between the coefficients in C and meanwhile increase the well-posedness of the solution. The least-squares solution of (47) can be obtained:
\[
[\boldsymbol{\Omega}] = \left( \mathbf{C}^{T}\mathbf{C} \right)^{-1} \mathbf{C}^{T} \left( \Delta\mathbf{u} - \mathbf{s} \right). \tag{48}
\]
The well-posedness can be examined by applying Singular Value Decomposition (SVD) to C. Consider the nonnegative definite matrix C^T C, whose eigenvalues are given in order λ_1 ≥ λ_2 ≥ ⋯ ≥ λ_{3N_max}:

\[
\mathbf{C} = \mathbf{U} [\boldsymbol{\sigma}] \mathbf{V}^{T}, \tag{49}
\]

where U_{2mn×2mn} and V_{3N_max×3N_max} are unit orthogonal matrices and the singular values are σ_i = √λ_i. The well-posedness of the solution is acceptable if the condition number κ(C) = σ_1/σ_{3N_max} ≤ tol.
Associating the process of inverse problem solving in Section 4 with the process of preliminary information acquisition in Section 3, the whole algorithm for the remote sensor's attitude measurement is illustrated in the flow chart in Figure 12.
4.2. Experimental Results and Validation. In the experiment, 72,940 samples on 7 image strip pairs were involved. To keep the values in Ξ and Λ nearly invariant, we distributed these samples into 20 subspaces and solved for the three axial components of the angular velocity. According to Shannon's sampling theorem, the measurable frequency f_c is expected to reach up to half of the line rate of the TDI CCD; for the experiment, f_c ≈ 1.749 kHz. The ω_i ∼ t curves over 0 s ∼ 0.148 s are shown in Figure 13.

In this period, ω_2max = 0.001104°/s and ω_1max = 0.001194°/s. The signal of ω_3(t) fluctuates around the mean value ω̄_3 = 0.01752°/s. It is not hard to infer that high-frequency jitters
Figure 13: Solutions for the angular velocities of the remote sensor (ω_1(t) and ω_2(t) on a ±1 × 10⁻³ deg/s scale and ω_3(t) between 0.016 and 0.018 deg/s, versus imaging time in seconds).
were perturbing the remote sensor. Besides, compared to the signals of ω_1(t) and ω_2(t), the low-frequency components in ω_3(t) are higher in magnitude. Actually, for this remote sensor, the satellite yaw angle needs to be regulated in real time to compensate for the image rotation on the focal plane, such that the detectors can always scan along the direction of image motion. Based on the auxiliary data, the image motion velocity vector V of the central pixel in the FOV can be computed, so the optimal yaw motion will in principle be

\[
\psi^{*}_{t} = \frac{V_{y'}}{V_{x'}}, \qquad
\omega^{*}_{3}(t) = \dot{\psi}^{*}_{t} = \frac{\dot{V}_{y'} V_{x'} - V_{y'} \dot{V}_{x'}}{V^{2}_{x'}}. \tag{50}
\]
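A small numerical sketch of (50): given sampled image-motion velocity components, the ideal yaw rate follows from the quotient rule. The velocity profiles below are synthetic placeholders, not the actual auxiliary data:

```python
import numpy as np

# Numerical evaluation of the ideal yaw rate (50) from sampled
# image-motion velocity components (synthetic, units arbitrary).
t = np.linspace(0.0, 0.148, 500)
Vx = np.full_like(t, 120.0)              # along-scan component, constant here
Vy = 0.025 * t + 0.001                   # slowly drifting cross component

psi = Vy / Vx                            # psi*(t)
dVx = np.gradient(Vx, t)                 # numerical time derivatives
dVy = np.gradient(Vy, t)
omega3 = (dVy * Vx - Vy * dVx) / Vx**2   # quotient rule, as in (50)
# with constant Vx this reduces to dVy / Vx = 0.025 / 120
```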
The mean value of ω*_3(t) is ω̄*_3 = 0.01198°/s. We attribute Δω*_3 = ω̄_3 − ω̄*_3 = 0.00554°/s to the error of the satellite attitude control.

In order to validate the measurement, the technique of template reconfiguration was implemented again to check the expected phenomenon that, based on the high-accuracy information, the correlations between the new templates and T_s should be further improved. In addition, the distribution
template reconfiguration was implemented again to checkthe expected phenomenon that based on the high-accurateinformation the correlations between the new templates and119879119904should be further improved In addition the distribution
of 120574 near 120574max is going to become more compact which iseasy to be understood since much more useful informationabout remote sensorrsquos motion is introduced into templatereconstructions and increases the similarities between thelapped images
Unlike the processing in image dense registration in thevalidation phase larger original templates are selected Let 119879
1
be the referenced image template which centered at the exam-ining element 119879
2the new template reconfigured by rough
prediction of optical flow 2the new template reconfigured
based on precision attitude motion measurement and 119879119904the
template on sensed image which centered at the registrationpixel For all templates 119872 = 119873 = 101 The distributions ofthe normalized cross-correlation coefficients correspondingto the referenced template centered on the sampled selectedin 1198731199001000 row belonging to 1198731199007 CCD with sensed imagebelonging to1198731199008 CCD are illustrated in Figure 14
Panel (a) shows the situation for T_1 and T_s, (b) for T_2 and T_s, and (c) for T̃_2 and T_s. The compactness of the data is characterized by the peak value γ_max and the location variances σ²_x, σ²_y:
\[
\sigma^{2}_{x} = \frac{\sum_{i=1}^{M} \sum_{j=1}^{M} \gamma_{ij} \cdot (i - x_{\max})^{2}}{\sum_{i=1}^{M} \sum_{j=1}^{M} \gamma_{ij}}, \qquad
\sigma^{2}_{y} = \frac{\sum_{i=1}^{M} \sum_{j=1}^{M} \gamma_{ij} \cdot (j - y_{\max})^{2}}{\sum_{i=1}^{M} \sum_{j=1}^{M} \gamma_{ij}}, \tag{51}
\]
where x_max and y_max are, respectively, the column and row numbers of the peak-valued location.
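The compactness statistics of (51) can be sketched as follows (a minimal NumPy illustration; the function name is ours, and the index convention, i against x_max and j against y_max, simply follows the equation as written):

```python
import numpy as np

def compactness(gamma):
    """Peak value and location standard deviations of an M x M
    correlation surface, per (51)."""
    M = gamma.shape[0]
    x_max, y_max = np.unravel_index(np.argmax(gamma), gamma.shape)
    i = np.arange(M)[:, None]            # first index of the surface
    j = np.arange(M)[None, :]            # second index of the surface
    total = gamma.sum()
    var_x = (gamma * (i - x_max) ** 2).sum() / total
    var_y = (gamma * (j - y_max) ** 2).sum() / total
    return gamma.max(), np.sqrt(var_x), np.sqrt(var_y)
```

A surface concentrated in a single pixel gives zero variance; spreading mass away from the peak increases σ_x, σ_y, which is exactly the compactness comparison made between cases (a), (b), and (c).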
In case (a), γ_max(a) = 0.893, with standard deviations σ_x(a) = 5.653 and σ_y(a) = 8.192; in case (b), γ_max(b) = 0.918, σ_x(b) = 4.839, and σ_y(b) = 6.686; in case (c), γ_max(c) = 0.976, while the variance sharply shrinks to σ_x(c) = 3.27 and σ_y(c) = 4.06. In Table 2, some other samples at intervals of 1000 rows are also examined; these samples can be regarded as independent of each other.

Judging from the results, the performances in case (c) are better than those in case (b) and much better than those in case (a), since the precise attitude motion measurements enhance the precision of the optical flow inversion and thereby improve the similarities between the new templates and the sensed images. Note that although in case (b) the variance decreases only slightly, as analyzed in Section 3.2, the offsets of the centroids from the peaks have been corrected well, compared to case (a), by the use of the rough optical flow predictions.
4.3. Summary and Discussion. From the preceding sections we can see that, compared to ordinary NCC, the precision of image registration is greatly improved, which is attributed to the assistance of the template reconfiguration technique. By applying the auxiliary data from the space-borne sensors to optical flow prediction, the relative deformations between the lapped image pairs can be computed with considerable accuracy. Afterwards, they are used to estimate the gray values of the corresponding parts of the sensed images and to help construct a new template for registration. As we know, the space-borne sensors can give the middle- and low-frequency components of the imager's attitude motion with excellent precision; thus, compared to classical direct template-based registration algorithms, the similarity between the reconfigured template and the sensed images may greatly increase. Furthermore, the minor deformations attributed to high-frequency jitters can be detected by subpixel registration between the reconfigured templates and the sensed images. This point of view is the exact basis of high-frequency jitter measurement with optical flow inversion.
5. Conclusion

In this paper, the optical flows and the time-varying image deformation in space dynamic imaging are analyzed in detail. The nonlinear and anisotropic image motion velocity and optical flows are utilized to strengthen the well-posedness of the inverse problem of precise attitude measurement by the optical
Figure 14: Normalized cross-correlations comparison, each panel spanning 0-100 pix in spatial domain X and Y. ((a) shows the distribution of γ obtained by applying the direct NCC algorithm; (b) shows the distribution of γ after template reconfiguration with optical flow prediction; (c) shows the distribution of γ derived from posterior template reconfiguration with the high-accuracy sensor attitude measurement. It can be noticed that, from (a) to (c), the values of γ tend to concentrate around the peak-value location.)
Table 2: Correlation coefficients distribution for registration templates.

Row number   γ_max ∼ (a, b, c)      σ_x ∼ (a, b, c)       σ_y ∼ (a, b, c)
No. 1000     0.893, 0.918, 0.976    5.653, 4.839, 3.27    8.192, 6.686, 4.06
No. 2000     0.807, 0.885, 0.929    8.704, 6.452, 2.13    6.380, 7.342, 5.71
No. 3000     0.832, 0.940, 0.988    4.991, 3.023, 1.55    7.704, 4.016, 1.93
No. 4000     0.919, 0.935, 0.983    5.079, 3.995, 3.61    5.873, 5.155, 3.85
No. 5000     0.865, 0.922, 0.951    5.918, 4.801, 2.37    6.151, 2.371, 2.57
No. 6000     0.751, 0.801, 0.907    12.57, 9.985, 7.89    14.66, 8.213, 2.06
No. 7000     0.759, 0.846, 0.924    11.63, 10.84, 7.14    12.71, 8.267, 4.90
No. 8000     0.884, 0.900, 0.943    8.125, 3.546, 5.42    8.247, 6.770, 2.88
flow inversion method. For the purpose of determining the fixed-solution conditions of the optical flow equations, information-based image registration algorithms are proposed. We apply rough optical flow prediction to improve the efficiency and accuracy of dense image registration. Based on the results of the registration, the attitude motions of the remote sensor during imaging are measured by using the precise optical flow inversion method. The experiment on a remote sensor showed that the measurements are achieved with very high accuracy as well as with broad bandwidth. This method can be used extensively in remote sensing missions, such as image strip splicing, geometrical rectification, and nonblind image restoration, to promote surveying precision and resolving power.
Conflict of Interests

The authors declare that they have no financial or personal relationships with other people or organizations that could inappropriately influence their work; there is no professional or other personal interest of any nature or kind in any product, service, and/or company that could be construed as influencing the position presented in, or the review of, this paper.
Acknowledgments

This work is supported by the National High Technology Research and Development Program of China (863 Program) (Grants nos. 2012AA121503, 2013AA12260, and 2012AA120603) and the National Natural Science Foundation of China (Grant no. 61377012).
References

[1] V. Tchernykh, M. Beck, and K. Janschek, "An embedded optical flow processor for visual navigation using optical correlator technology," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '06), pp. 67-72, Beijing, China, October 2006.
[2] K. Janschek and V. Tchernykh, "Optical correlator for image motion compensation in the focal plane of a satellite camera," in Proceedings of the 15th IFAC Symposium on Automatic Control in Aerospace, Bologna, Italy, 2001.
[3] W. Priedhorsky and J. J. Bloch, "Optical detection of rapidly moving objects in space," Applied Optics, vol. 44, no. 3, pp. 423-433, 2005.
[4] T. Brox and J. Malik, "Large displacement optical flow: descriptor matching in variational motion estimation," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 33, no. 3, pp. 500-513, 2011.
[5] B. Feng, P. P. Bruyant, P. H. Pretorius, et al., "Estimation of the rigid-body motion from three-dimensional images using a generalized center-of-mass points approach," IEEE Transactions on Nuclear Science, vol. 53, no. 5, pp. 2712-2718, 2006.
[6] J. Wang, P. Yu, C. Yan, J. Ren, and B. He, "Space optical remote sensor image motion velocity vector computational modeling, error budget and synthesis," Chinese Optics Letters, vol. 3, no. 7, pp. 414-417, 2005.
[7] A. S. McEwen, M. E. Banks, N. Baugh, et al., "The High Resolution Imaging Science Experiment (HiRISE) during MRO's Primary Science Phase (PSP)," Icarus, vol. 205, no. 1, pp. 2-37, 2010.
[8] F. Ayoub, S. Leprince, R. Binet, K. W. Lewis, O. Aharonson, and J.-P. Avouac, "Influence of camera distortions on satellite image registration and change detection applications," in Proceedings of the IEEE International Geoscience and Remote Sensing Symposium (IGARSS '08), pp. II1072-II1075, Boston, Mass, USA, 2008.
[9] S. Leprince, S. Barbot, F. Ayoub, and J.-P. Avouac, "Automatic and precise orthorectification, coregistration, and subpixel correlation of satellite images, application to ground deformation measurements," IEEE Transactions on Geoscience and Remote Sensing, vol. 45, no. 6, pp. 1529-1558, 2007.
[10] S. Leprince, P. Muse, and J.-P. Avouac, "In-flight CCD distortion calibration for pushbroom satellites based on subpixel correlation," IEEE Transactions on Geoscience and Remote Sensing, vol. 46, no. 9, pp. 2675-2683, 2008.
[11] Y. Yitzhaky, R. Milberg, S. Yohaev, and N. S. Kopeika, "Comparison of direct blind deconvolution methods for motion-blurred images," Applied Optics, vol. 38, no. 20, pp. 4325-4332, 1999.
[12] R. C. Hardie, K. J. Barnard, and R. Ordonez, "Fast super-resolution with affine motion using an adaptive Wiener filter and its application to airborne imaging," Optics Express, vol. 19, no. 27, pp. 26208-26231, 2011.
[13] E. M. Blixt, J. Semeter, and N. Ivchenko, "Optical flow analysis of the aurora borealis," IEEE Geoscience and Remote Sensing Letters, vol. 3, no. 1, pp. 159-163, 2006.
[14] M. G. Mozerov, "Constrained optical flow estimation as a matching problem," IEEE Transactions on Image Processing, vol. 22, no. 5, pp. 2044-2055, 2013.
[15] H. Sakaino, "A semitransparency-based optical-flow method with a point trajectory model for particle-like video," IEEE Transactions on Image Processing, vol. 21, no. 2, pp. 441-450, 2012.
[16] D. Korsch, "Closed form solution for three-mirror telescopes, corrected for spherical aberration, coma, astigmatism, and field curvature," Applied Optics, vol. 11, no. 12, pp. 2986-2987, 1972.
[17] G. Naletto, V. da Deppo, M. G. Pelizzo, R. Ragazzoni, and E. Marchetti, "Optical design of the wide angle camera for the Rosetta mission," Applied Optics, vol. 41, no. 7, pp. 1446-1453, 2002.
[18] M. Born, E. Wolf, A. B. Bhatia, and P. C. Clemmow, Principles of Optics: Electromagnetic Theory of Propagation, Interference and Diffraction of Light, 7th edition, 1999.
[19] H. Schaub and J. L. Junkins, Analytical Mechanics of Space Systems, AIAA Education Series, 2002.
[20] C. Wang, F. Xing, J. H. Wang, and Z. You, "Optical flows method for lightweight agile remote sensor design and instrumentation," in International Symposium on Photoelectronic Detection and Imaging, vol. 8908 of Proceedings of the SPIE, 2013.
[21] T. Sun, F. Xing, and Z. You, "Optical system error analysis and calibration method of high-accuracy star trackers," Sensors, vol. 13, no. 4, pp. 4598-4623, 2013.
[22] T. Sun, F. Xing, Z. You, and M. Wei, "Motion-blurred star acquisition method of the star tracker under high dynamic conditions," Optics Express, vol. 21, no. 17, pp. 20096-20110, 2013.
[23] L. Younes, "Combining geodesic interpolating splines and affine transformations," IEEE Transactions on Image Processing, vol. 15, no. 5, pp. 1111-1119, 2006.
[24] B. Zitova and J. Flusser, "Image registration methods: a survey," Image and Vision Computing, vol. 21, no. 11, pp. 977-1000, 2003.
[25] Z. L. Song, S. Li, and T. F. George, "Remote sensing image registration approach based on a retrofitted SIFT algorithm and Lissajous-curve trajectories," Optics Express, vol. 18, no. 2, pp. 513-522, 2010.
[26] V. Arevalo and J. Gonzalez, "Improving piecewise linear registration of high-resolution satellite images through mesh optimization," IEEE Transactions on Geoscience and Remote Sensing, vol. 46, no. 11, pp. 3792-3803, 2008.
[27] Z. Levi and C. Gotsman, "D-snake: image registration by as-similar-as-possible template deformation," IEEE Transactions on Visualization and Computer Graphics, vol. 19, no. 2, pp. 331-343, 2013.
[28] R. J. Althof, M. G. J. Wind, and J. T. Dobbins III, "A rapid and automatic image registration algorithm with subpixel accuracy," IEEE Transactions on Medical Imaging, vol. 16, no. 3, pp. 308-316, 1997.
[29] W. Tong, "Subpixel image registration with reduced bias," Optics Letters, vol. 36, no. 5, pp. 763-765, 2011.
[30] Y. Bentoutou, N. Taleb, K. Kpalma, and J. Ronsin, "An automatic image registration for applications in remote sensing," IEEE Transactions on Geoscience and Remote Sensing, vol. 43, no. 9, pp. 2127-2137, 2005.
[31] L. S. Ming, L. Yan, and L. Jindong, "Mapping satellite-1 transmission type photogrammetric and remote sensing," Journal of Remote Sensing, vol. 16, supplement, pp. 10-16, 2012 (Chinese).
[32] J. P. Lewis, "Fast template matching," Vision Interface, vol. 95, pp. 120-123, 1995.
[33] H. Foroosh, J. B. Zerubia, and M. Berthod, "Extension of phase correlation to subpixel registration," IEEE Transactions on Image Processing, vol. 11, no. 3, pp. 188-200, 2002.
Figure 7: Template reconfiguration (the neighboring grid blocks B_{i-1,j-1} … B_{i+1,j+1} with vertices V¹–V⁴ around the sub-template T_ij; the original template T_0 is deformed into T′_1).
Step 4. Compute the normalized cross-correlation coefficients between T_2 and the sensed image, and then determine the subpixel offset of T_2 relative to the sensed image in frame P.

Firstly, for this method, the search space on the sensed image can be contracted considerably, since the optical flow trajectories of the referenced elements have been predicted in Step 2. Assume that the search space is T_s, with dim(T_s) = M × M. When T_ij moves to the pixel (n_1, n_2) on T_s, the normalized cross-correlation (NCC) coefficient is given by
\[
\gamma(n_{1}, n_{2}) =
\frac{\sum_{x,y} \left[ g(x,y) - \bar{g}_{xy} \right] \left[ h(x - n_{1},\, y - n_{2}) - \bar{h} \right]}
{\left\{ \sum_{x,y} \left[ g(x,y) - \bar{g}_{xy} \right]^{2} \sum_{x,y} \left[ h(x - n_{1},\, y - n_{2}) - \bar{h} \right]^{2} \right\}^{0.5}}, \tag{31}
\]
where ḡ_{xy} is the mean gray value of the segment of T_s that is masked by T_2 and h̄ is the mean of T_2. Equation (31) requires approximately N²(M − N + 1)² additions and N²(M − N + 1)² multiplications, whereas the FFT algorithm needs about 12M² log₂M real multiplications and 18M² log₂M real additions/subtractions [32, 33].
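A direct spatial-domain evaluation of (31) can be sketched as follows. This is a minimal NumPy illustration; the function name, the toy images, and the choice of cutting the template out of the search window are our assumptions, not the flight data or the paper's implementation:

```python
import numpy as np

def ncc_surface(g, h):
    """Normalized cross-correlation of template h over search window g,
    evaluated directly in the spatial domain as in (31)."""
    M, N = g.shape[0], h.shape[0]           # square windows assumed
    h_zero = h - h.mean()                    # zero-mean template
    denom_h = np.sqrt((h_zero ** 2).sum())
    out = np.zeros((M - N + 1, M - N + 1))
    for n1 in range(M - N + 1):
        for n2 in range(M - N + 1):
            patch = g[n1:n1 + N, n2:n2 + N]
            p_zero = patch - patch.mean()    # local mean of the masked segment
            denom = np.sqrt((p_zero ** 2).sum()) * denom_h
            out[n1, n2] = (p_zero * h_zero).sum() / denom if denom > 0 else 0.0
    return out

# Toy usage with M = 101, N = 7 as in the text: plant the template inside
# the window and recover its offset from the correlation peak.
rng = np.random.default_rng(0)
g = rng.random((101, 101))
h = g[40:47, 52:59].copy()                   # 7 x 7 template cut from g
gamma = ncc_surface(g, h)
peak = np.unravel_index(np.argmax(gamma), gamma.shape)
# peak == (40, 52), where gamma equals 1 exactly
```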
At the beginning we take M = 101, N = 7 and compute the NCC coefficient; when M is much larger than N, the calculation in the spatial domain is efficient. Suppose that the peak value γ_max is taken at the coordinate (k, m), k, m ∈ Z, in the sensed window. We then reduce the search space to a smaller one with a dimension of 47 × 47 centered on T_s(k, m). Next, the subpixel registration is realized by the phase correlation algorithm with larger M and N to suppress the system errors owing to the deficiency of detailed textures on the photograph; here we take M = 47, N = 23. Let the subpixel offset between the two registering image elements be denoted as δ_x and δ_y in frame P.

The phase correlation algorithm in the frequency domain becomes more efficient as N approaches M and both have larger scales [28]. Moreover, the Fourier coefficients are normalized to unit magnitude prior to computing the correlation, so that the correlation is based only on phase information and is insensitive to changes in image intensity [27, 29].
Let G(u, v) be the 2D Discrete Fourier Transform (DFT) of the sensed window; then we have

\[
\mathbf{G}(u, v) = \sum_{x=-(N-1)/2}^{(N-1)/2} \sum_{y=-(N-1)/2}^{(N-1)/2} g(x, y)\, W^{ux}_{M} W^{vy}_{M},
\]
\[
\mathbf{H}(u, v) = \sum_{x=-(N-1)/2}^{(N-1)/2} \sum_{y=-(N-1)/2}^{(N-1)/2} h(x, y)\, W^{ux}_{N} W^{vy}_{N}. \tag{32}
\]
Here

\[
W_{N} = \exp\left( -j \frac{2\pi}{N} \right). \tag{33}
\]
The cross-phase spectrum is given by

\[
\mathbf{R}(u, v) = \frac{\mathbf{G}(u, v)\, \mathbf{H}^{*}(u, v)}{\left| \mathbf{G}(u, v)\, \mathbf{H}^{*}(u, v) \right|} = \exp\left( j\phi(u, v) \right), \tag{34}
\]
whereHlowast is the complex conjugate ofH By inverse DiscreteFourier Transform (IDFT) we have
120574 (1198991 119899
2) =
1
1198732
(119873minus1)2
sum119906=minus(119873minus1)2
(119873minus1)2
sumV=minus(119873minus1)2
R (119906 V)119882minus1199061198991
119873119882
minusV1198992
119873
(35)
10 Mathematical Problems in Engineering
Figure 8: Dense image registration for lapped image strips: CCD1 versus CCD2 (Gap 1, the left two) and CCD3 versus CCD4 (Gap 3, the right two).
Suppose that the new peak $\gamma_{\max}$ appears at $(k', m')$, $k', m' \in \mathbb{Z}$; referring to [27], we have the following relation:
\[
\gamma_{\max}(k', m') \approx \frac{\lambda}{N^2} \cdot \frac{\sin\left[\pi\left(k' + \delta_x\right)\right] \sin\left[\pi\left(m' + \delta_y\right)\right]}{\sin\left[(\pi/N)\left(k' + \delta_x\right)\right] \sin\left[(\pi/N)\left(m' + \delta_y\right)\right]}.
\tag{36}
\]
The right side represents the spatial distribution of the normalized cross-correlation coefficients; therefore, $(\delta_x, \delta_y)$ can be measured on that basis. In practice, the constant $\lambda \le 1$; it tends to decrease when small noise exists and equals unity in the ideal case.
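As a numerical illustration of how $(\delta_x, \delta_y)$ can be recovered from the correlation surface, one may fit the Dirichlet-kernel profile of (36) along each axis. The following one-dimensional sketch (our own, with $\lambda = 1$) scans candidate offsets over a fine grid:

```python
import numpy as np

def dirichlet(t, N):
    # One axis of eq. (36): sin(pi t) / (N sin(pi t / N)), with the t -> 0 limit 1.
    t = np.asarray(t, dtype=float)
    num = np.sin(np.pi * t)
    den = N * np.sin(np.pi * t / N)
    small = np.abs(den) < 1e-12
    den = np.where(small, 1.0, den)
    return np.where(small, 1.0, num / den)

def recover_offset(samples, ks, N):
    # Least-squares scan for the subpixel offset d minimizing
    # || samples - dirichlet(ks + d, N) ||^2 over a fine grid.
    grid = np.linspace(-0.5, 0.5, 2001)
    errs = [np.sum((samples - dirichlet(ks + d, N)) ** 2) for d in grid]
    return float(grid[int(np.argmin(errs))])
```

In practice a closed-form two-point ratio estimate (cf. [33]) avoids the scan; the grid search is used here only to keep the demonstration transparent.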
Step 5. Dense registration is executed for the lapped image strips.

Repeating Step 1 through Step 4, we register the along-track sample images selected from the referenced images to the sensed image. The maximal sample rate can reach up to line-by-line. The continuous procedure is shown in Figure 8, in which the image pairs are marked.
The curves of relative offsets in frame P are shown in Figures 9 and 10.
Let $\mathrm{col}_r$, $\mathrm{row}_r$ be the column and row indexes of image elements on the referenced image, and let $\mathrm{col}_s$, $\mathrm{row}_s$ be the indexes of the same elements on the sensed image. The total number of columns of each detector is $Q = 4096$ pixels, and the vertical distance between the two detector arrays is $D = 18.4975$ mm. According to the results of registration, we get the offsets
Figure 9: The offsets of lapped images captured by CCD1 and CCD2 (cross-track and along-track offsets in pixels versus image rows, with samples marked).
Figure 10: The offsets of lapped images captured by CCD3 and CCD4 (cross-track and along-track offsets in pixels versus image rows, with samples marked).
of images at the $n$th gap, $\delta_x^n$ (cross track) and $\delta_y^n$ (along track) in frame P, and $\Delta x'_n$, $\Delta y'_n$ (mm) in frame F:
\[
\delta_x^n = \mathrm{col}_r + \mathrm{col}_s - Q - \eta_n,
\qquad
\Delta x'_n = \Delta(x'_2)_n = \delta_x^n \cdot w,
\]
\[
\delta_y^n = \mathrm{row}_s - \mathrm{row}_r - \frac{D}{w},
\qquad
\Delta y'_n = \Delta(x'_1)_n = \delta_y^n \cdot w + D.
\tag{37}
\]
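Under the values reported here ($D = 18.4975$ mm, and a pixel pitch of $w = 0.00875$ mm, the latter inferred from the $\Delta x'_n/\delta^n_x$ ratios in Table 1 rather than stated explicitly), the conversion in (37) from registered pixel offsets to physical offsets in frame F is simple arithmetic:

```python
# Conversion from registered pixel offsets to physical offsets, per eq. (37).
W_PIXEL = 0.00875   # mm per pixel (assumed; inferred from Table 1 ratios)
D_ARRAYS = 18.4975  # mm, vertical distance between the two detector arrays

def to_frame_f(delta_x_px, delta_y_px):
    # delta_x_px, delta_y_px: registered offsets in pixels (frame P).
    dx_mm = delta_x_px * W_PIXEL
    dy_mm = delta_y_px * W_PIXEL + D_ARRAYS
    return dx_mm, dy_mm
```

For sample S11, `to_frame_f(-25.15, -5.39)` reproduces the Table 1 entries ($-0.2200625$ mm, $18.4503$ mm).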
Four pixels, S11, S12, S31, and S32, are examined. Their data are listed in Table 1.

S11 and S31 are the images of the same object, captured in order by CCD1 and CCD2 (Gap 1); S12 and S32 were captured respectively by CCD3 and CCD4 (Gap 3). Referring to the auxiliary data, S11 and S31 were captured at the same time, whereas S12 and S32 were captured at different times, which means that the along-track speeds of the two moving images were quite different. Moreover, the cross-track image offsets in Gap 1 and Gap 3 differ considerably, which indicates that the optical flows were also distorted unevenly and deflected away from the along-track direction. On the other
Table 1: The offsets between overlapped images.

Sample | Row no. (pixel) | $\delta_x^n$ (pixel) | $\Delta x'_n$ (mm) | $\delta_y^n$ (pixel) | $\Delta y'_n$ (mm)
S11 | 258 | −25.15 | −0.2200625 | −5.39 | 18.4503
S12 | 423 | −23.78 | −0.2080750 | −7.36 | 18.4331
S31 | 266 | −12.85 | −0.1124375 | −7.66 | 18.4304
S32 | 436 | −12.97 | −0.1134875 | −6.87 | 18.4374
hand, it has been observed in Figures 9 and 10 that the fluctuation of image offsets taking place in Gap 1 is greater in magnitude than that in Gap 3. All these facts indicate that the distorted optical flows can be detected from plenty of image offsets. We will see later that the nonlinear distribution of the data strengthens the well-posedness of the optical flow inversion algorithm.
4. Remote Sensor Attitude Motion Measurement

In this section, the attitude velocity of the remote sensor is resolved by using the optical flow inversion method. The results of dense registration are applied to produce the fixed-solution conditions for the optical flow equations.
4.1. The Principle of Optical Flow Inversion. For clarity, in frame C, the two coordinate components of the image displacement of the $k$th sample element belonging to the $n$th lapped strip pair are written as $\Delta x'_{n,k}$, $\Delta y'_{n,k}$. From (13) and (25), it is easy to show that the contributions to optical flow owing to the orbital motion and the earth's inertial movement vary only very slightly in the short term, such that the corresponding displacements can be regarded as piecewise constants $s_x$, $s_y$.

Let $\tau_{ij}$, $t_{ij}$ be, in order, the two sequential imaging times of the $j$th image sample on the overlapped detectors in the $i$th gap. They are usually recorded in the auxiliary data of the remote sensor. Hence, for every image element, the quantity of discrete states in optical flow tracing will be
\[
N_{ij} = \left[\frac{t_{ij} - \tau_{ij}}{\Delta t}\right] \in \mathbb{Z}^{+} \quad (i = 1, \cdots, n;\; j = 1, \cdots, m),
\tag{38}
\]
where $n$ is the number of CCD gaps, $m$ is the number of sample groups, and $\Delta t$ is the time step. We set samples with the same $j$ index into the same group, in which the samples are captured by the prior detectors simultaneously.
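The bookkeeping in (38) simply counts integration steps between the two imaging times; a one-line sketch (our own, taking $[\cdot]$ as the floor, since the paper does not spell out the rounding):

```python
import math

def n_states(tau_ij, t_ij, dt):
    # Number of discrete states traced between the two imaging times, eq. (38).
    return math.floor((t_ij - tau_ij) / dt)
```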
We expand (11), substitute it into (14) and (13), and then arrange the scalar optical flow inversion equations in terms of the three axial angular velocity components $\omega_1$, $\omega_2$, and $\omega_3$ (the variables in the inverse problem), yielding the linear optical flow equations.
Figure 11: Coefficient determination according to the current location of the image (along the locus of optical flow across the CCDs, within each small region of size $\delta_{\max}$ the coefficients $c_{i\mu\kappa}$ remain approximately constant).
For the $l$th group of samples:
\[
\sum_{i=l}^{N_{1l}} c^{i}_{1l1}\omega^{i}_{1} + c^{i}_{1l2}\omega^{i}_{2} + c^{i}_{1l3}\omega^{i}_{3} = \Delta x'_{1l} - s_{x1},
\]
\[
\sum_{i=l}^{N_{1l}} d^{i}_{1l1}\omega^{i}_{1} + d^{i}_{1l2}\omega^{i}_{2} + d^{i}_{1l3}\omega^{i}_{3} = \Delta y'_{1l} - s_{y1},
\]
\[
\vdots
\]
\[
\sum_{i=l}^{N_{nl}} c^{i}_{nl1}\omega^{i}_{1} + c^{i}_{nl2}\omega^{i}_{2} + c^{i}_{nl3}\omega^{i}_{3} = \Delta x'_{nl} - s_{xn},
\]
\[
\sum_{i=l}^{N_{nl}} d^{i}_{nl1}\omega^{i}_{1} + d^{i}_{nl2}\omega^{i}_{2} + d^{i}_{nl3}\omega^{i}_{3} = \Delta y'_{nl} - s_{yn}.
\tag{39}
\]
Suppose that the sampling process stops after $m$ groups have been formed. The coefficients are as follows:
\[
c^{i}_{\mu\nu\kappa} = \Xi_{\kappa}\left(\mu, \left\lceil \frac{(i - \nu + 1)\,\mathcal{N}}{N_{\mu\nu}} \right\rceil\right),
\qquad
d^{i}_{\mu\nu\kappa} = \Lambda_{\kappa}\left(\mu, \left\lceil \frac{(i - \nu + 1)\,\mathcal{N}}{N_{\mu\nu}} \right\rceil\right)
\quad (\kappa = 1, 2, 3).
\tag{40}
\]
12 Mathematical Problems in Engineering
Here
\[
\Xi_{k} = \begin{pmatrix}
\xi_{11k} & \xi_{12k} & \cdots & \xi_{1\mathcal{N}k} \\
\xi_{21k} & \xi_{22k} & \cdots & \xi_{2\mathcal{N}k} \\
\cdots & & & \cdots \\
\xi_{n1k} & \xi_{n2k} & \cdots & \xi_{n\mathcal{N}k}
\end{pmatrix},
\qquad
\Lambda_{k} = \begin{pmatrix}
\lambda_{11k} & \lambda_{12k} & \cdots & \lambda_{1\mathcal{N}k} \\
\lambda_{21k} & \lambda_{22k} & \cdots & \lambda_{2\mathcal{N}k} \\
\cdots & & & \cdots \\
\lambda_{n1k} & \lambda_{n2k} & \cdots & \lambda_{n\mathcal{N}k}
\end{pmatrix}.
\tag{41}
\]
As for the algorithm, to reduce the complexity, all possible values of the coefficients are stored in the matrices $\Xi_k$ and $\Lambda_k$. The accuracy is guaranteed because the coefficients for the images moving into the same piece of region are almost equal to an identical constant within a short period, as explained in Figure 11.
It has been mentioned that the optical flow is not sensitive to the satellite's orbital motion and the earth's rotation in the short term; namely, the possible values are assigned by the following functions:
\[
\xi_{ijk} = \xi_{k}\left(a, e, i_{0}, \Omega, \omega, x'_{q}, y'_{q}, \Delta t\right),
\qquad
\lambda_{ijk} = \lambda_{k}\left(a, e, i_{0}, \Omega, \omega, x'_{q}, y'_{q}, \Delta t\right),
\]
\[
i = 1 \sim n, \quad j = 1 \sim \mathcal{N}, \quad q = 1 \sim \mathcal{N}.
\tag{42}
\]
Here $\mathcal{N}$ is the number of constant-valued segments in the region encompassing all the possible optical flow trajectories. The orbital elements and the integration step size $\Delta t$ are common to all functions. Furthermore, when long-term measurements are executed, $\Xi_k$ and $\Lambda_k$ only need to be renewed according to the current parameters.

The coefficient matrix of the optical flow equations for the $j$th ($1 \le j \le m$) group can be written as
\[
\mathbf{C}_{j} =
\begin{pmatrix}
c^{1}_{1j1} & c^{1}_{1j2} & c^{1}_{1j3} & \cdots & c^{N_{1j}}_{1j1} & c^{N_{1j}}_{1j2} & c^{N_{1j}}_{1j3} & \cdots & 0 & 0 \\
d^{1}_{1j1} & d^{1}_{1j2} & d^{1}_{1j3} & \cdots & d^{N_{1j}}_{1j1} & d^{N_{1j}}_{1j2} & d^{N_{1j}}_{1j3} & \cdots & 0 & 0 \\
\cdots & \cdots & \cdots & \cdots & \cdots & \cdots & \cdots & \cdots & \cdots & \cdots \\
c^{1}_{qj1} & c^{1}_{qj2} & c^{1}_{qj3} & \cdots & \cdots & \cdots & \cdots & c^{N_{qj}}_{qj1} & c^{N_{qj}}_{qj2} & c^{N_{qj}}_{qj3} \\
d^{1}_{qj1} & d^{1}_{qj2} & d^{1}_{qj3} & \cdots & \cdots & \cdots & \cdots & d^{N_{qj}}_{qj1} & d^{N_{qj}}_{qj2} & d^{N_{qj}}_{qj3} \\
\cdots & \cdots & \cdots & \cdots & \cdots & \cdots & \cdots & \cdots & \cdots & \cdots \\
c^{1}_{nj1} & c^{1}_{nj2} & c^{1}_{nj3} & \cdots & \cdots & c^{N_{nj}}_{nj1} & c^{N_{nj}}_{nj2} & c^{N_{nj}}_{nj3} & \cdots & 0 \\
d^{1}_{nj1} & d^{1}_{nj2} & d^{1}_{nj3} & \cdots & \cdots & d^{N_{nj}}_{nj1} & d^{N_{nj}}_{nj2} & d^{N_{nj}}_{nj3} & \cdots & 0
\end{pmatrix}_{2n \times 3N_{qj}},
\tag{43}
\]
where $N_{qj} = \max\{N_{1j}, \ldots, N_{nj}\}$.

Consequently, as we organize the equations for all groups, the global coefficient matrix is given in the following form:
\[
\mathbf{C} =
\begin{pmatrix}
\left[\mathbf{C}_{1}\right]_{2n \times 3N_{q1}} & 0 & \cdots & 0 \\
0 & \left[\mathbf{C}_{2}\right]_{2n \times 3N_{q2}} & \cdots & 0 \\
\vdots & & \ddots & \vdots \\
0 & \cdots & 0 & \left[\mathbf{C}_{m}\right]_{2n \times 3N_{qm}}
\end{pmatrix}_{2nm \times 3N_{\max}}.
\tag{44}
\]
$\mathbf{C}$ is a quasidiagonal partitioned matrix; every subblock has $2n$ rows. The maximal number of columns of $\mathbf{C}$ is $3N_{\max}$, where $N_{\max} = \max\{N_{q1}, \ldots, N_{qm}\}$.
The unknown variables are as follows:
\[
[\boldsymbol{\Omega}]_{3N_{\max} \times 1} = \left[\omega^{1}_{1}\; \omega^{1}_{2}\; \omega^{1}_{3}\; \cdots\; \omega^{N_{\max}}_{1}\; \omega^{N_{\max}}_{2}\; \omega^{N_{\max}}_{3}\right]^{T}.
\tag{45}
\]
The constants are as follows:
\[
\Delta\mathbf{u}_{2mn \times 1} = \left[\Delta x'_{11}\; \Delta y'_{11}\; \cdots\; \Delta x'_{n1}\; \Delta y'_{n1}\; \cdots\; \Delta x'_{1m}\; \Delta y'_{1m}\; \cdots\; \Delta x'_{nm}\; \Delta y'_{nm}\right]^{T},
\]
\[
\mathbf{s}_{2mn \times 1} = \left[s_{x1}\; s_{y1}\; \cdots\; s_{xn}\; s_{yn}\; \cdots\; s_{x1}\; s_{y1}\; \cdots\; s_{xn}\; s_{yn}\right]^{T}.
\tag{46}
\]
Figure 12: The flow chart of the attitude motion measurement. (Preliminary information acquisition: (1) select the original template $T_1$ centered on the $\kappa$th sampling pixel from the referenced image captured by the prior CCD; (2) predict the deformation and displacement of every element in the image frame via optical flow prediction based on the auxiliary data of the satellite, and reconstruct a new deformed image $T'_1$; (3) reconfigure the deformed image via an image resampling process to form a new template $T_2$; (4) use the normalized cross-correlation algorithm to register $T_2$ on the sensed image captured by the posterior CCD; (5) measure the relative offsets in the sensed window; (6) compute the precise offset in the sensed window by adding the optical flow prediction; repeat with $\kappa = \kappa + 1$ until $\kappa = N_{\max}$. Inverse problem solving: (7) utilize the offset data as the fixed-solution conditions for the optical flow inversion equations and solve the angular velocity $\vec{\omega}$, for validation and further usage.)
$\Delta\mathbf{u}$ has been measured by image dense registration; $\mathbf{s}$ can be determined from the auxiliary data of the sensors. The global equations are expressed by
\[
\mathbf{C}_{2mn \times 3N_{\max}} \cdot [\boldsymbol{\Omega}]_{3N_{\max} \times 1} = \Delta\mathbf{u}_{2mn \times 1} - \mathbf{s}_{2mn \times 1}.
\tag{47}
\]
As for this problem, it is easy to verify that the conditions (1) $2nm > 3N_{\max}$ and (2) $\mathrm{rank}(\mathbf{C}) = 3N_{\max}$ are easily met in practical work. To solve (47), well-posedness is the critical issue for the inverse problem. The strong nonlinearity and anisotropy of the optical flow greatly reduce the correlation between the coefficients in $\mathbf{C}$ and meanwhile increase the well-posedness of the solution. The least-squares solution of (47) can be obtained:
\[
[\boldsymbol{\Omega}] = \left(\mathbf{C}^{T}\mathbf{C}\right)^{-1}\mathbf{C}^{T}\left(\Delta\mathbf{u} - \mathbf{s}\right).
\tag{48}
\]
The well-posedness can be examined by applying the Singular Value Decomposition (SVD) to $\mathbf{C}$. Consider the nonnegative definite matrix $\mathbf{C}^{T}\mathbf{C}$, whose eigenvalues are given in order $\lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_{3N_{\max}}$:
\[
\mathbf{C} = \mathbf{U}[\boldsymbol{\sigma}]\mathbf{V}^{T},
\tag{49}
\]
where $\mathbf{U}_{2mn \times 2mn}$ and $\mathbf{V}_{3N_{\max} \times 3N_{\max}}$ are unit orthogonal matrices, and the singular values are $\sigma_i = \sqrt{\lambda_i}$. The well-posedness of the solution is acceptable if the condition number $\kappa(\mathbf{C}) = \sigma_1/\sigma_{3N_{\max}} \le tol$.
Associating the process of inverse problem solving in Section 4 with the process of preliminary information acquisition in Section 3, the whole algorithm for the remote sensor's attitude measurement is illustrated in the flow chart in Figure 12.
4.2. Experimental Results and Validation. In the experiment, 72940 samples on 7 image strip pairs were involved. To keep the values in $\Xi$ and $\Lambda$ nearly invariant, we redistributed these samples into 20 subspaces and solved for the three axial components of the angular velocity. According to Shannon's sampling theorem, the measurable frequency $f_c$ is expected to reach up to half of the line rate of the TDI CCD; for the experiment, $f_c \approx 1.749$ kHz. The $\omega_i \sim t$ curves over $0\,\mathrm{s} \sim 0.148\,\mathrm{s}$ are shown in Figure 13.

In this period, $\omega_{2\max} = 0.001104\,^{\circ}/\mathrm{s}$ and $\omega_{1\max} = 0.001194\,^{\circ}/\mathrm{s}$. The signal of $\omega_3(t)$ fluctuates around the mean value $\bar{\omega}_3 = 0.01752\,^{\circ}/\mathrm{s}$. It is not hard to infer that high frequency jitters
Figure 13: Solutions for the angular velocities of the remote sensor ($\omega_1$, $\omega_2$, and $\omega_3$ in $^{\circ}/\mathrm{s}$ versus imaging time in s).
were perturbing the remote sensor. Besides, compared with the signals of $\omega_1(t)$ and $\omega_2(t)$, the low frequency components in $\omega_3(t)$ are higher in magnitude. Actually, for this remote sensor, the satellite yaw angle needs to be regulated in real time to compensate for the image rotation on the focal plane, such that the detectors can always scan along the direction of image motion. Based on the auxiliary data, the image motion velocity vector $\mathbf{V}$ of the central pixel in the FOV can be computed, so the optimal yaw motion in principle will be
\[
\psi^{*}_{t} = \frac{v_{y'}}{v_{x'}},
\qquad
\omega^{*}_{3}(t) = \dot{\psi}^{*}_{t} = \frac{\dot{v}_{y'}\, v_{x'} - v_{y'}\, \dot{v}_{x'}}{v^{2}_{x'}}.
\tag{50}
\]
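Equation (50) is the quotient rule applied to the instantaneous yaw $\psi^* = v_{y'}/v_{x'}$; a quick numerical check (our own, with made-up velocity profiles) against a central finite difference:

```python
def yaw_rate(vx, vy, ax, ay):
    # omega3* = d/dt (vy/vx) = (ay*vx - vy*ax) / vx**2, per eq. (50).
    return (ay * vx - vy * ax) / vx ** 2

# Made-up smooth velocity profiles for the check (not from the paper).
def vx(t):
    return 1.0 + 0.1 * t

def vy(t):
    return 0.02 * t - 0.005 * t * t

eps = 1e-6
t0 = 0.5
fd = (vy(t0 + eps) / vx(t0 + eps) - vy(t0 - eps) / vx(t0 - eps)) / (2 * eps)
analytic = yaw_rate(vx(t0), vy(t0), 0.1, 0.02 - 0.01 * t0)
```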
The mean value of $\omega^{*}_{3}(t)$ is $\bar{\omega}^{*}_{3} = 0.01198\,^{\circ}/\mathrm{s}$. We attribute $\Delta\omega^{*}_{3} = \bar{\omega}_{3} - \bar{\omega}^{*}_{3} = 0.00554\,^{\circ}/\mathrm{s}$ to the error of the satellite attitude control.

In order to validate the measurement, the technique of template reconfiguration was implemented again to check the expected phenomenon that, based on the high-accuracy information, the correlations between the new templates and $T_s$ should be further improved. In addition, the distribution of $\gamma$ near $\gamma_{\max}$ should become more compact, which is easy to understand, since much more useful information about the remote sensor's motion is introduced into the template reconstructions and increases the similarities between the lapped images.
Unlike the processing in image dense registration, in the validation phase larger original templates are selected. Let $T_1$ be the referenced image template centered at the examined element, $T_2$ the new template reconfigured by rough prediction of optical flow, $\hat{T}_2$ the new template reconfigured based on the precision attitude motion measurement, and $T_s$ the template on the sensed image centered at the registration pixel. For all templates, $M = N = 101$. The distributions of the normalized cross-correlation coefficients, corresponding to the referenced template centered on the sample selected in the No. 1000 row belonging to the No. 7 CCD, with the sensed image belonging to the No. 8 CCD, are illustrated in Figure 14.
Figure 14(a) shows the situation for $T_1$ and $T_s$; (b) for $T_2$ and $T_s$; and (c) for $\hat{T}_2$ and $T_s$. The compactness of the data is characterized by the peak value $\gamma_{\max}$ and the location variances $\sigma^2_x$, $\sigma^2_y$:
\[
\sigma^{2}_{x} = \frac{\sum_{i=1}^{M}\sum_{j=1}^{M} \gamma_{ij} \cdot \left(i - x_{\max}\right)^{2}}{\sum_{i=1}^{M}\sum_{j=1}^{M} \gamma_{ij}},
\qquad
\sigma^{2}_{y} = \frac{\sum_{i=1}^{M}\sum_{j=1}^{M} \gamma_{ij} \cdot \left(j - y_{\max}\right)^{2}}{\sum_{i=1}^{M}\sum_{j=1}^{M} \gamma_{ij}},
\tag{51}
\]
where $x_{\max}$ and $y_{\max}$ are, respectively, the column and row numbers of the peak-valued location.
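The spread statistics in (51) are ordinary weighted second moments about the peak; a small sketch (our own):

```python
import numpy as np

def peak_spread(gamma):
    # Weighted variances of a correlation surface about its peak, eq. (51).
    # The first index is treated as x (columns) and the second as y (rows),
    # matching the convention stated after the equation.
    M = gamma.shape[0]
    x_max, y_max = np.unravel_index(np.argmax(gamma), gamma.shape)
    i, j = np.meshgrid(np.arange(M), np.arange(M), indexing="ij")
    total = gamma.sum()
    var_x = (gamma * (i - x_max) ** 2).sum() / total
    var_y = (gamma * (j - y_max) ** 2).sum() / total
    return var_x, var_y
```

A sharply peaked surface yields small variances, which is exactly the compactness reported below for case (c).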
In case (a), $\gamma_{\max}(a) = 0.893$, with standard deviations $\sigma_x(a) = 5.653$ and $\sigma_y(a) = 8.192$; in case (b), $\gamma_{\max}(b) = 0.918$, $\sigma_x(b) = 4.839$, and $\sigma_y(b) = 6.686$; in case (c), $\gamma_{\max}(c) = 0.976$, and the variance sharply shrinks to $\sigma_x(c) = 3.27$, $\sigma_y(c) = 4.06$. In Table 2, some other samples at intervals of 1000 rows are also examined. The samples can be regarded as independent of each other.
Judging from the results, the performance in case (c) is better than that in case (b) and much better than that in case (a), since the precise attitude motion measurements enhance the precision of optical flow inversion so as to improve the similarities between the new templates and the sensed images. Note that, although in case (b) the variance decreases only slightly, as analyzed in Section 3.2, compared with case (a) the offsets of the centroids from the peaks have been corrected well by the use of the rough optical flow predictions.
4.3. Summary and Discussion. From the preceding sections, we can see that, compared with ordinary NCC, the precision of image registration is greatly improved, which is attributed to the assistance of the technique of template reconfiguration. By applying the auxiliary data from the space-borne sensors to optical flow prediction, the relative deformations between the lapped image pairs can be computed with considerable accuracy. Afterwards, they are used to estimate the gray values of the corresponding parts of the sensed images and help to construct a new template for registration. As we know, the space-borne sensors may give the middle and low frequency components of the imager's attitude motion with excellent precision. Thus, compared with classical direct template-based registration algorithms, the similarity between the reconfigured template and the sensed images may greatly increase. Furthermore, the minor deformations attributed to high frequency jitters can be detected by using subpixel registration between the reconfigured templates and the sensed images. This point of view is the exact basis of high frequency jitter measurement with optical flow inversion.
5. Conclusion

In this paper, optical flows and time-varying image deformation in space dynamic imaging are analyzed in detail. The nonlinear and anisotropic image motion velocity and optical flows are utilized to strengthen the well-posedness of the inverse problem of precise attitude measurement by the optical
Figure 14: Normalized cross-correlations comparison. ((a) shows the distribution of $\gamma$ obtained by applying the direct NCC algorithm; (b) shows the distribution of $\gamma$ after template reconfiguration with optical flow prediction; (c) shows the distribution of $\gamma$ derived from posterior template reconfiguration with the high-accuracy sensor's attitude measurement. It can be noticed that, from left to right, the values of $\gamma$ tend to be concentrated around the peak-value location.)
Table 2: Correlation coefficients distribution for registration templates.

Row number | $\gamma_{\max}$ (a, b, c) | $\sigma_x$ (a, b, c) | $\sigma_y$ (a, b, c)
No. 1000 | 0.893, 0.918, 0.976 | 5.653, 4.839, 3.27 | 8.192, 6.686, 4.06
No. 2000 | 0.807, 0.885, 0.929 | 8.704, 6.452, 2.13 | 6.380, 7.342, 5.71
No. 3000 | 0.832, 0.940, 0.988 | 4.991, 3.023, 1.55 | 7.704, 4.016, 1.93
No. 4000 | 0.919, 0.935, 0.983 | 5.079, 3.995, 3.61 | 5.873, 5.155, 3.85
No. 5000 | 0.865, 0.922, 0.951 | 5.918, 4.801, 2.37 | 6.151, 2.371, 2.57
No. 6000 | 0.751, 0.801, 0.907 | 12.57, 9.985, 7.89 | 14.66, 8.213, 2.06
No. 7000 | 0.759, 0.846, 0.924 | 11.63, 10.84, 7.14 | 12.71, 8.267, 4.90
No. 8000 | 0.884, 0.900, 0.943 | 8.125, 3.546, 5.42 | 8.247, 6.770, 2.88
flow inversion method. For the purpose of determining the fixed-solution conditions of the optical flow equations, information-based image registration algorithms are proposed. We apply rough optical flow prediction to improve the efficiency and accuracy of dense image registration. Based on the results of registration, the attitude motions of remote sensors during imaging are measured by using the precise optical flow inversion method. The experiment on a remote sensor showed that the measurements are achieved with very high accuracy as well as with broad bandwidth. This method can be used extensively in remote sensing missions such as image strip splicing, geometrical rectification, and non-blind image restoration to promote surveying precision and resolving power.
Conflict of Interests

The authors declare that they have no financial or personal relationships with other people or organizations that can inappropriately influence their work; there is no professional or other personal interest of any nature or kind in any product, service, and/or company that could be construed as influencing the position presented in, or the review of, this paper.
Acknowledgments

This work is supported by the National High Technology Research and Development Program of China (863 Program) (Grant no. 2012AA121503, Grant no. 2013AA12260, and Grant no. 2012AA120603) and the National Natural Science Foundation of China (Grant no. 61377012).
References

[1] V. Tchernykh, M. Beck, and K. Janschek, "An embedded optical flow processor for visual navigation using optical correlator technology," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '06), pp. 67–72, Beijing, China, October 2006.
[2] K. Janschek and V. Tchernykh, "Optical correlator for image motion compensation in the focal plane of a satellite camera," in Proceedings of the 15th IFAC Symposium on Automatic Control in Aerospace, Bologna, Italy, 2001.
[3] W. Priedhorsky and J. J. Bloch, "Optical detection of rapidly moving objects in space," Applied Optics, vol. 44, no. 3, pp. 423–433, 2005.
[4] T. Brox and J. Malik, "Large displacement optical flow: descriptor matching in variational motion estimation," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 33, no. 3, pp. 500–513, 2011.
[5] B. Feng, P. P. Bruyant, P. H. Pretorius, et al., "Estimation of the rigid-body motion from three-dimensional images using a generalized center-of-mass points approach," IEEE Transactions on Nuclear Science, vol. 53, no. 5, pp. 2712–2718, 2006.
[6] J. Wang, P. Yu, C. Yan, J. Ren, and B. He, "Space optical remote sensor image motion velocity vector computational modeling, error budget and synthesis," Chinese Optics Letters, vol. 3, no. 7, pp. 414–417, 2005.
[7] A. S. McEwen, M. E. Banks, N. Baugh, et al., "The high resolution imaging science experiment (HiRISE) during MRO's primary science phase (PSP)," Icarus, vol. 205, no. 1, pp. 2–37, 2010.
[8] F. Ayoub, S. Leprince, R. Binet, K. W. Lewis, O. Aharonson, and J.-P. Avouac, "Influence of camera distortions on satellite image registration and change detection applications," in Proceedings of the IEEE International Geoscience and Remote Sensing Symposium (IGARSS '08), pp. II1072–II1075, Boston, Mass, USA, 2008.
[9] S. Leprince, S. Barbot, F. Ayoub, and J.-P. Avouac, "Automatic and precise orthorectification, coregistration, and subpixel correlation of satellite images, application to ground deformation measurements," IEEE Transactions on Geoscience and Remote Sensing, vol. 45, no. 6, pp. 1529–1558, 2007.
[10] S. Leprince, P. Muse, and J.-P. Avouac, "In-flight CCD distortion calibration for pushbroom satellites based on subpixel correlation," IEEE Transactions on Geoscience and Remote Sensing, vol. 46, no. 9, pp. 2675–2683, 2008.
[11] Y. Yitzhaky, R. Milberg, S. Yohaev, and N. S. Kopeika, "Comparison of direct blind deconvolution methods for motion-blurred images," Applied Optics, vol. 38, no. 20, pp. 4325–4332, 1999.
[12] R. C. Hardie, K. J. Barnard, and R. Ordonez, "Fast super-resolution with affine motion using an adaptive Wiener filter and its application to airborne imaging," Optics Express, vol. 19, no. 27, pp. 26208–26231, 2011.
[13] E. M. Blixt, J. Semeter, and N. Ivchenko, "Optical flow analysis of the aurora borealis," IEEE Geoscience and Remote Sensing Letters, vol. 3, no. 1, pp. 159–163, 2006.
[14] M. G. Mozerov, "Constrained optical flow estimation as a matching problem," IEEE Transactions on Image Processing, vol. 22, no. 5, pp. 2044–2055, 2013.
[15] H. Sakaino, "A semitransparency-based optical-flow method with a point trajectory model for particle-like video," IEEE Transactions on Image Processing, vol. 21, no. 2, pp. 441–450, 2012.
[16] D. Korsch, "Closed form solution for three-mirror telescopes, corrected for spherical aberration, coma, astigmatism, and field curvature," Applied Optics, vol. 11, no. 12, pp. 2986–2987, 1972.
[17] G. Naletto, V. da Deppo, M. G. Pelizzo, R. Ragazzoni, and E. Marchetti, "Optical design of the wide angle camera for the Rosetta mission," Applied Optics, vol. 41, no. 7, pp. 1446–1453, 2002.
[18] M. Born, E. Wolf, A. B. Bhatia, and P. C. Clemmow, Principles of Optics: Electromagnetic Theory of Propagation, Interference and Diffraction of Light, 7th edition, 1999.
[19] H. Schaub and J. L. Junkins, Analytical Mechanics of Space Systems, AIAA Education Series, 2002.
[20] C. Wang, F. Xing, J. H. Wang, and Z. You, "Optical flows method for lightweight agile remote sensor design and instrumentation," in International Symposium on Photoelectronic Detection and Imaging, vol. 8908 of Proceedings of the SPIE, 2013.
[21] T. Sun, F. Xing, and Z. You, "Optical system error analysis and calibration method of high-accuracy star trackers," Sensors, vol. 13, no. 4, pp. 4598–4623, 2013.
[22] T. Sun, F. Xing, Z. You, and M. Wei, "Motion-blurred star acquisition method of the star tracker under high dynamic conditions," Optics Express, vol. 21, no. 17, pp. 20096–20110, 2013.
[23] L. Younes, "Combining geodesic interpolating splines and affine transformations," IEEE Transactions on Image Processing, vol. 15, no. 5, pp. 1111–1119, 2006.
[24] B. Zitova and J. Flusser, "Image registration methods: a survey," Image and Vision Computing, vol. 21, no. 11, pp. 977–1000, 2003.
[25] Z. L. Song, S. Li, and T. F. George, "Remote sensing image registration approach based on a retrofitted SIFT algorithm and Lissajous-curve trajectories," Optics Express, vol. 18, no. 2, pp. 513–522, 2010.
[26] V. Arevalo and J. Gonzalez, "Improving piecewise linear registration of high-resolution satellite images through mesh optimization," IEEE Transactions on Geoscience and Remote Sensing, vol. 46, no. 11, pp. 3792–3803, 2008.
[27] Z. Levi and C. Gotsman, "D-snake: image registration by as-similar-as-possible template deformation," IEEE Transactions on Visualization and Computer Graphics, vol. 19, no. 2, pp. 331–343, 2013.
[28] R. J. Althof, M. G. J. Wind, and J. T. Dobbins III, "A rapid and automatic image registration algorithm with subpixel accuracy," IEEE Transactions on Medical Imaging, vol. 16, no. 3, pp. 308–316, 1997.
[29] W. Tong, "Subpixel image registration with reduced bias," Optics Letters, vol. 36, no. 5, pp. 763–765, 2011.
[30] Y. Bentoutou, N. Taleb, K. Kpalma, and J. Ronsin, "An automatic image registration for applications in remote sensing," IEEE Transactions on Geoscience and Remote Sensing, vol. 43, no. 9, pp. 2127–2137, 2005.
[31] L. S. Ming, L. Yan, and L. Jindong, "Mapping satellite-1 transmission type photogrammetric and remote sensing," Journal of Remote Sensing, vol. 16, supplement, pp. 10–16, 2012 (Chinese).
[32] J. P. Lewis, "Fast template matching," Vision Interface, vol. 95, pp. 120–123, 1995.
[33] H. Foroosh, J. B. Zerubia, and M. Berthod, "Extension of phase correlation to subpixel registration," IEEE Transactions on Image Processing, vol. 11, no. 3, pp. 188–200, 2002.
10 Mathematical Problems in Engineering
Figure 8: Dense image registration for lapped image strips: CCD1 versus CCD2 (Gap 1, the left two) and CCD3 versus CCD4 (Gap 3, the right two).
Suppose that the new peak γ_max appears at (k′, m′), k′, m′ ∈ ℤ. Referring to [27], we have the following relation:

\gamma_{\max}(k', m') \approx \frac{\lambda}{N^2} \cdot \frac{\sin[\pi(k' + \delta_x)]\,\sin[\pi(m' + \delta_y)]}{\sin[(\pi/N)(k' + \delta_x)]\,\sin[(\pi/N)(m' + \delta_y)]}.    (36)

The right side presents the spatial distribution of the normalized cross-correlation coefficients; therefore (δ_x, δ_y) can be measured on that basis. In practice the constant λ ≤ 1: it equals unity in the ideal case and tends to decrease when small noise exists.
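The peak model of Eq. (36) is a product of two Dirichlet-like kernels and can be evaluated numerically. The sketch below is illustrative only (the helper names are hypothetical, and λ and N are chosen arbitrarily); it handles the removable singularity at the peak, where the ratio tends to N:

```python
import math

def _dirichlet(u, N):
    # sin(pi*u) / sin(pi*u/N); at the removable singularity the
    # limit is N*cos(pi*u)/cos(pi*u/N) (equals N at u = 0).
    den = math.sin(math.pi * u / N)
    if abs(den) < 1e-12:
        return N * math.cos(math.pi * u) / math.cos(math.pi * u / N)
    return math.sin(math.pi * u) / den

def ncc_peak_model(k, m, dx, dy, N, lam=1.0):
    """Normalized cross-correlation near the peak per Eq. (36)."""
    return lam / N**2 * _dirichlet(k + dx, N) * _dirichlet(m + dy, N)
```

With zero subpixel shift the model returns exactly λ at the origin, and the coefficient magnitude decays away from the peak, which is what makes (δ_x, δ_y) recoverable from the measured distribution.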
Step 5. Dense registration is executed for the lapped image strips.

Repeating Step 1 ∼ Step 4, we register the along-track sample images selected from the referenced images to the sensed image. The maximal sample rate can reach up to line-by-line. The continuous procedure is shown in Figure 8, in which the image pairs are marked.
The curves of relative offsets in 𝒫 are shown in Figures 9 and 10.

Let col_r, row_r be the column and row indexes of image elements on the referenced image, and let col_s, row_s be the indexes of the same elements on the sensed image. The total number of columns of each detector is Q = 4096 pixels, and the vertical distance between the two detector arrays is D = 18.4975 mm. According to the results of registration, we get the offsets
Figure 9: The offsets of lapped images captured by CCD1 and CCD2 (cross-track and along-track offsets in pixels versus image row, 50–500; marked samples: S11 at row 258, −25.15 pix cross-track and −5.393 pix along-track; S22 at row 423, −23.78 pix cross-track and −7.363 pix along-track).
Figure 10: The offsets of lapped images captured by CCD3 and CCD4 (cross-track and along-track offsets in pixels versus image row, 50–500; marked samples: S31 at row 266, −12.85 pix cross-track and −7.663 pix along-track; S32 at row 436, −12.97 pix cross-track and −6.869 pix along-track).
of images at the n-th gap: δ_x^n (cross track), δ_y^n (along track) in frame 𝒫, and Δx′_n, Δy′_n (mm) in frame ℱ:

\delta_x^n = \mathrm{col}_r + \mathrm{col}_s - Q - \eta_n, \qquad
\Delta x'_n = \Delta(x'_2)_n = \delta_x^n \cdot w,

\delta_y^n = \mathrm{row}_s - \mathrm{row}_r - \frac{D}{w}, \qquad
\Delta y'_n = \Delta(x'_1)_n = \delta_y^n \cdot w + D.    (37)
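Equation (37) is plain arithmetic once the registration indexes are known. The sketch below is an illustration, not code from the paper: the function name is hypothetical, and a pixel pitch of w = 0.00875 mm is an assumption made here because it reproduces the Δx′ and Δy′ values listed in Table 1.

```python
def gap_offsets(col_r, row_r, col_s, row_s, Q, D, w, eta_n):
    """Offsets of Eq. (37) for one sample element at the n-th CCD gap.

    col/row are element indexes on the referenced (r) and sensed (s)
    images; Q is the number of columns per detector; D is the along-track
    detector spacing (mm); w is the pixel pitch (mm, assumed value);
    eta_n is the per-gap column bias.
    """
    delta_x = col_r + col_s - Q - eta_n   # cross-track offset (pixels) in frame P
    dX = delta_x * w                      # cross-track displacement (mm) in frame F
    delta_y = row_s - row_r - D / w       # along-track offset (pixels) in frame P
    dY = delta_y * w + D                  # along-track displacement (mm) in frame F
    return delta_x, dX, delta_y, dY
```

For instance, inputs yielding δ_x^1 = −25.15 pix and δ_y^1 = −5.39 pix give Δx′ = −0.2200625 mm and Δy′ = 18.4503 mm, matching sample S11 in Table 1.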
Four pixels, S11, S12, S31, and S32, are examined. Their data are listed in Table 1.

S11 and S31 are the images of the same object, which was captured in order by CCD1 and CCD2 (Gap 1); S12 and S32 were captured, respectively, by CCD3 and CCD4 (Gap 3). Referring to the auxiliary data, S11 and S31 were captured at the same time, while S12 and S32 were captured at different times, which means that the along-track speeds of the two moving images were quite different. Moreover, the cross-track image offsets in Gap 1 and Gap 3 differ considerably, which indicates that the optical flows were distorted unevenly and deflected away from the along-track direction. On the other
Table 1: The offsets between overlapped images.

Sample | Row no. (pixel) | δ_x^n (pixel) | Δx′_n (mm)  | δ_y^n (pixel) | Δy′_n (mm)
S11    | 258             | −25.15        | −0.2200625  | −5.39         | 18.4503
S12    | 423             | −23.78        | −0.2080750  | −7.36         | 18.4331
S31    | 266             | −12.85        | −0.1124375  | −7.66         | 18.4304
S32    | 436             | −12.97        | −0.1134875  | −6.87         | 18.4374
hand, it has been observed in Figures 9 and 10 that the fluctuation of the image offsets taking place in Gap 1 is greater in magnitude than that in Gap 3. All these facts indicate that the distorted optical flows can be detected from plenty of image offsets. We will see later that the nonlinear distribution of the data strengthens the well-posedness of the optical flow inversion algorithm.
4. Remote Sensor Attitude Motion Measurement
In this section, the attitude velocity of the remote sensor is resolved by using the optical flow inversion method. The results of dense registration are applied to produce the fixed-solution conditions for the optical flow equations.
4.1. The Principle of Optical Flow Inversion. For clarity, in frame 𝒞 the two coordinate components of the image displacement of the k-th sample element belonging to the n-th lapped strip pair are written as Δx′_{n,k}, Δy′_{n,k}. From (13) and (25), it is easy to show that the contributions to optical flow owing to orbital motion and the Earth's inertial movement vary only slightly in the short term, such that the corresponding displacements can be regarded as piecewise constants s_x, s_y.
Let τ_{ij}, t_{ij} be, in order, the two sequential imaging times of the j-th image sample on the overlapped detectors in the i-th gap. They are usually recorded in the auxiliary data of the remote sensor. Hence, for every image element, the quantity of discrete statuses in optical flow tracing will be

N_{ij} = \left[ \frac{t_{ij} - \tau_{ij}}{\Delta t} \right] \in \mathbb{Z}^{+} \quad (i = 1, \dots, n;\; j = 1, \dots, m),    (38)

where n is the number of CCD gaps, m is the number of sample groups, and Δt is the time step. We set samples with the same j index into the same group, in which the samples are captured by the prior detectors simultaneously.
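Equation (38) simply counts whole time steps between the two imaging times recorded in the auxiliary data. A tiny sketch (the function name is hypothetical):

```python
import math

def num_statuses(tau_ij, t_ij, dt):
    """Quantity of discrete statuses in optical flow tracing, Eq. (38):
    the integer part of (t_ij - tau_ij) / dt, required to be positive."""
    n = math.floor((t_ij - tau_ij) / dt)
    if n <= 0:
        raise ValueError("t_ij must exceed tau_ij by at least one time step")
    return n
```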
We expand (11), substitute it into (14) and (13), and then arrange the scalar optical flow inversion equations in terms of the three axial angular velocity components ω_1, ω_2, and ω_3 (the variables in the inverse problem), yielding the linear optical flow equations.
Figure 11: Coefficient determination according to the current location of the image (along the locus of optical flow between the CCDs, the coefficients c^i_{μνκ} are constant within each region).
For the l-th group of samples:

\sum_{i=1}^{N_{1l}} \left( c^{i}_{1l1}\,\omega^{i}_{1} + c^{i}_{1l2}\,\omega^{i}_{2} + c^{i}_{1l3}\,\omega^{i}_{3} \right) = \Delta x'_{1l} - s_{x1},

\sum_{i=1}^{N_{1l}} \left( d^{i}_{1l1}\,\omega^{i}_{1} + d^{i}_{1l2}\,\omega^{i}_{2} + d^{i}_{1l3}\,\omega^{i}_{3} \right) = \Delta y'_{1l} - s_{y1},

\vdots

\sum_{i=1}^{N_{nl}} \left( c^{i}_{nl1}\,\omega^{i}_{1} + c^{i}_{nl2}\,\omega^{i}_{2} + c^{i}_{nl3}\,\omega^{i}_{3} \right) = \Delta x'_{nl} - s_{xn},

\sum_{i=1}^{N_{nl}} \left( d^{i}_{nl1}\,\omega^{i}_{1} + d^{i}_{nl2}\,\omega^{i}_{2} + d^{i}_{nl3}\,\omega^{i}_{3} \right) = \Delta y'_{nl} - s_{yn}.    (39)
Suppose that the sampling process stops after m groups have been formed. The coefficients are as follows:

c^{i}_{\mu\nu\kappa} = \Xi_{\kappa}\!\left(\mu,\; \left\lceil \frac{i - \nu + 1}{N_{\mu\nu}}\,\mathcal{N} \right\rceil\right), \qquad
d^{i}_{\mu\nu\kappa} = \Lambda_{\kappa}\!\left(\mu,\; \left\lceil \frac{i - \nu + 1}{N_{\mu\nu}}\,\mathcal{N} \right\rceil\right) \quad (\kappa = 1, 2, 3).    (40)
Here

\Xi_{\kappa} = \begin{pmatrix}
\xi_{11\kappa} & \xi_{12\kappa} & \cdots & \xi_{1\mathcal{N}\kappa} \\
\xi_{21\kappa} & \xi_{22\kappa} & \cdots & \xi_{2\mathcal{N}\kappa} \\
\vdots & & & \vdots \\
\xi_{n1\kappa} & \xi_{n2\kappa} & \cdots & \xi_{n\mathcal{N}\kappa}
\end{pmatrix}, \qquad
\Lambda_{\kappa} = \begin{pmatrix}
\lambda_{11\kappa} & \lambda_{12\kappa} & \cdots & \lambda_{1\mathcal{N}\kappa} \\
\lambda_{21\kappa} & \lambda_{22\kappa} & \cdots & \lambda_{2\mathcal{N}\kappa} \\
\vdots & & & \vdots \\
\lambda_{n1\kappa} & \lambda_{n2\kappa} & \cdots & \lambda_{n\mathcal{N}\kappa}
\end{pmatrix}.    (41)
As for the algorithm, to reduce the complexity, all possible values of the coefficients are stored in the matrices Ξ_κ and Λ_κ. The accuracy is guaranteed because the coefficients for the images moving into the same piece of region are almost equal to an identical constant over a short period, as explained in Figure 11.
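One plausible reading of the lookup in Eq. (40) is sketched below; this is an interpretation, not the authors' code. The optical flow trajectory is split into 𝒩 constant-valued segments, and the segment index for step i of sample (μ, ν) is ⌈(i − ν + 1)/N_{μν} · 𝒩⌉, which then addresses the precomputed table Ξ_κ (and likewise Λ_κ). All names are hypothetical.

```python
import math
import numpy as np

def coeff_lookup(Xi, i, nu, N_mu_nu, mu, Nseg):
    """Pick the coefficient for tracing step i of sample (mu, nu) from the
    precomputed table Xi (n gaps x Nseg constant-valued segments)."""
    seg = math.ceil((i - nu + 1) / N_mu_nu * Nseg)
    seg = min(max(seg, 1), Nseg)      # clamp to a valid segment index
    return Xi[mu - 1, seg - 1]        # tables are 1-indexed in the text
```

Because every step inside a segment maps to the same table entry, the coefficient functions of Eq. (42) only need to be evaluated once per segment rather than once per step.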
It has been mentioned that the optical flow is not sensitive to the satellite's orbital motion and the Earth's rotation in the short term; namely, the possible values are assigned by the following functions:
\xi_{ijk} = \xi_{k}\left(a, e, i_{0}, \Omega, \omega, x'_{q}, y'_{q}, \Delta t\right),

\lambda_{ijk} = \lambda_{k}\left(a, e, i_{0}, \Omega, \omega, x'_{q}, y'_{q}, \Delta t\right),

i = 1 \sim n, \quad j = 1 \sim \mathcal{N}, \quad q = 1 \sim \mathcal{N}.    (42)
Here 𝒩 is the number of constant-valued segments in the region encompassing all the possible optical flow trajectories. The orbital elements and the integral step size Δt are common to all the functions. Furthermore, when long-term measurements are executed, Ξ_κ and Λ_κ only need to be renewed according to the current parameters.

The coefficient matrix of the optical flow equations for the j-th (1 ≤ j ≤ m) group can be written as
\mathbf{C}_{j} = \begin{pmatrix}
c^{1}_{1j1} & c^{1}_{1j2} & c^{1}_{1j3} & \cdots & c^{N_{1j}}_{1j1} & c^{N_{1j}}_{1j2} & c^{N_{1j}}_{1j3} & \cdots & 0 & 0 \\
d^{1}_{1j1} & d^{1}_{1j2} & d^{1}_{1j3} & \cdots & d^{N_{1j}}_{1j1} & d^{N_{1j}}_{1j2} & d^{N_{1j}}_{1j3} & \cdots & 0 & 0 \\
\vdots & & & & & & & & & \vdots \\
c^{1}_{qj1} & c^{1}_{qj2} & c^{1}_{qj3} & \cdots & \cdots & \cdots & c^{N_{qj}}_{qj1} & c^{N_{qj}}_{qj2} & c^{N_{qj}}_{qj3} \\
d^{1}_{qj1} & d^{1}_{qj2} & d^{1}_{qj3} & \cdots & \cdots & \cdots & d^{N_{qj}}_{qj1} & d^{N_{qj}}_{qj2} & d^{N_{qj}}_{qj3} \\
\vdots & & & & & & & & & \vdots \\
c^{1}_{nj1} & c^{1}_{nj2} & c^{1}_{nj3} & \cdots & c^{N_{nj}}_{nj1} & c^{N_{nj}}_{nj2} & c^{N_{nj}}_{nj3} & \cdots & 0 \\
d^{1}_{nj1} & d^{1}_{nj2} & d^{1}_{nj3} & \cdots & d^{N_{nj}}_{nj1} & d^{N_{nj}}_{nj2} & d^{N_{nj}}_{nj3} & \cdots & 0
\end{pmatrix}_{2n \times 3N_{qj}},    (43)
where N_{qj} = max{N_{1j}, …, N_{nj}}. Consequently, as we organize the equations for all groups, the global coefficient matrix takes the following form:
\mathbf{C} = \begin{pmatrix}
[\mathbf{C}_{1}]_{2n \times 3N_{q1}} & 0 & \cdots & \cdots & 0 \\
0 & [\mathbf{C}_{2}]_{2n \times 3N_{q2}} & 0 & \cdots & 0 \\
& & \ddots & & \\
0 & \cdots & \cdots & [\mathbf{C}_{m}]_{2n \times 3N_{qm}} & 0
\end{pmatrix}_{2nm \times 3N_{\max}}.    (44)

C is a quasidiagonal partitioned matrix; every subblock has 2n rows, and the number of columns of C is 3N_max, where N_max = max{N_{q1}, …, N_{qm}}.
The unknown variables are as follows:

[\Omega]_{3N_{\max} \times 1} = \left[\, \omega^{1}_{1}\;\; \omega^{1}_{2}\;\; \omega^{1}_{3}\;\; \cdots\;\; \omega^{N_{\max}}_{1}\;\; \omega^{N_{\max}}_{2}\;\; \omega^{N_{\max}}_{3} \,\right]^{T}.    (45)
The constants are as follows:

\Delta \mathbf{u}_{2mn \times 1} = \left[\, \Delta x'_{11}\;\; \Delta y'_{11}\;\; \cdots\;\; \Delta x'_{n1}\;\; \Delta y'_{n1}\;\; \cdots\;\; \Delta x'_{1m}\;\; \Delta y'_{1m}\;\; \cdots\;\; \Delta x'_{nm}\;\; \Delta y'_{nm} \,\right]^{T},

\mathbf{s}_{2mn \times 1} = \left[\, s_{x1}\;\; s_{y1}\;\; \cdots\;\; s_{xn}\;\; s_{yn}\;\; \cdots\;\; s_{x1}\;\; s_{y1}\;\; \cdots\;\; s_{xn}\;\; s_{yn} \,\right]^{T}.    (46)
Figure 12: The flow chart of the attitude motion measurement. (Preliminary information acquisition: (1) select the original template T_1 centered on the κ-th sampling pixel from the referenced image captured by the prior CCD; (2) predict the deformation and displacement of every element via optical flow prediction based on the auxiliary data of the satellite, and reconstruct a new deformed image T′_1 in the image frame; (3) reconfigure the deformed image via an image resampling process to form a new template T_2 in the photography frame; (4) use the normalized cross-correlation algorithm to register T_2 on the sensed image captured by the posterior CCD; (5) measure the relative offsets in the sensed window; (6) compute the precise offset in the sensed window by adding the optical flow prediction; repeat with κ = κ + 1 until κ = N_max. Inverse problem solving: (7) utilize the offsets data as the fixed-solution conditions for the optical flow inversion equations and solve the angular velocity ω, for validation and further usages.)
Δu has been measured by image dense registration; s can be determined from the auxiliary data of the sensors. The global equations are expressed by

\mathbf{C}_{2mn \times 3N_{\max}} \cdot [\Omega]_{3N_{\max} \times 1} = \Delta \mathbf{u}_{2mn \times 1} - \mathbf{s}_{2mn \times 1}.    (47)
As for this problem, it is easy to verify that the conditions (1) 2nm > 3N_max and (2) rank(C) = 3N_max are easily met in practical work. To solve (47), well-posedness is the critical issue for the inverse problem. Strong nonlinearity and anisotropy of the optical flow greatly reduce the correlation between the coefficients in C and meanwhile increase the well-posedness of the solution. The least-squares solution of (47) can be obtained:

[\Omega] = \left(\mathbf{C}^{T} \mathbf{C}\right)^{-1} \mathbf{C}^{T} \left(\Delta \mathbf{u} - \mathbf{s}\right).    (48)
The well-posedness can be examined by applying Singular Value Decomposition (SVD) to C. Consider the nonnegative definite matrix C^T C, whose eigenvalues are given in order λ_1 ≥ λ_2 ≥ ⋯ ≥ λ_{3N_max}:

\mathbf{C} = \mathbf{U} [\sigma] \mathbf{V}^{T},    (49)

where U_{2mn×2mn} and V_{3N_max×3N_max} are unit orthogonal matrices and the singular values are σ_i = √λ_i. The well-posedness of the solution is acceptable if the condition number κ(C) = σ_1/σ_{3N_max} ≤ tol.
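Equations (47)–(49) amount to a least-squares solve guarded by a condition-number check. A minimal NumPy sketch (function name and tolerance are hypothetical, not from the paper):

```python
import numpy as np

def solve_angular_velocity(C, du, s, tol=1e6):
    """Least-squares solution of Eq. (47)-(48) with the SVD-based
    well-posedness check of Eq. (49): kappa(C) = sigma_1/sigma_min <= tol."""
    sigma = np.linalg.svd(C, compute_uv=False)   # singular values, descending
    if sigma[-1] == 0 or sigma[0] / sigma[-1] > tol:
        raise np.linalg.LinAlgError("ill-posed: condition number exceeds tol")
    # np.linalg.lstsq computes the same minimizer as (C^T C)^-1 C^T (du - s)
    # but via SVD, which is numerically safer than forming the normal equations.
    omega, *_ = np.linalg.lstsq(C, du - s, rcond=None)
    return omega
```

Using `lstsq` rather than explicitly inverting C^T C avoids squaring the condition number, which matters precisely when κ(C) is near the tolerance.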
Associating the process of inverse problem solving in Section 4 with the process of preliminary information acquisition in Section 3, the whole algorithm for the remote sensor's attitude measurement is illustrated in the flow chart in Figure 12.
4.2. Experimental Results and Validation. In the experiment, 72940 samples on 7 image strip pairs were involved. To keep the values in Ξ and Λ nearly invariant, we distributed these samples into 20 subspaces and solved for the three axial components of the angular velocity. According to Shannon's sampling theorem, the measurable frequency f_c is expected to reach up to half of the line rate of the TDI CCD; for the experiment, f_c ≈ 1.749 kHz. The ω_i ∼ t curves for 0 s ∼ 0.148 s are shown in Figure 13.

In this period, ω_2max = 0.001104°/s and ω_1max = 0.001194°/s. The signal of ω_3(t) fluctuates around the mean value ω̄_3 = 0.01752°/s. It is not hard to infer that high frequency jitters
Figure 13: Solutions for the angular velocities of the remote sensor (ω_1, ω_2, and ω_3 in °/s versus imaging time; ω_1 and ω_2 are on the order of 10⁻³ °/s, while ω_3 varies between about 0.016 and 0.018 °/s).
were perturbing the remote sensor. Besides, compared to the signals of ω_1(t) and ω_2(t), the low frequency components in ω_3(t) are higher in magnitude. Actually, for this remote sensor, the satellite yaw angle needs to be regulated in real time to compensate for the image rotation on the focal plane, such that the detectors can always scan along the direction of image motion. Based on the auxiliary data, the image motion velocity vector V of the central pixel in the FOV can be computed, so the optimal yaw motion in principle will be
\psi^{*}_{t} = \frac{V_{y'}}{V_{x'}}, \qquad
\omega^{*}_{3}(t) = \dot{\psi}^{*}_{t} = \frac{\dot{V}_{y'} V_{x'} - V_{y'} \dot{V}_{x'}}{V^{2}_{x'}}.    (50)
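Equation (50) is simply the quotient rule applied to ψ* = V_y′/V_x′; a one-line sketch (hypothetical name):

```python
def optimal_yaw_rate(vy, vx, vy_dot, vx_dot):
    """Rate of the image-rotation-compensating yaw angle, Eq. (50):
    d/dt (vy/vx) = (vy'. * vx - vy * vx'.) / vx**2 (quotient rule)."""
    return (vy_dot * vx - vy * vx_dot) / vx**2
```

For example, with a constant cross-scan speed V_x′ and a linearly growing V_y′, the yaw rate is the constant slope ratio, as the quotient rule predicts.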
The mean value of ω*_3(t) is ω̄*_3 = 0.01198°/s. We attribute Δω*_3 = ω̄_3 − ω̄*_3 = 0.00554°/s to the error of satellite attitude control.

In order to validate the measurement, the technique of template reconfiguration was implemented again to check the expected phenomenon that, based on the high-accuracy information, the correlations between the new templates and T_s should be further improved. In addition, the distribution of γ near γ_max should become more compact, which is easy to understand, since much more useful information about the remote sensor's motion is introduced into the template reconstructions and increases the similarities between the lapped images.
Unlike the processing in image dense registration, in the validation phase larger original templates are selected. Let T_1 be the referenced image template centered at the examined element; T_2 the new template reconfigured by rough prediction of optical flow; T̃_2 the new template reconfigured based on the precise attitude motion measurement; and T_s the template on the sensed image centered at the registration pixel. For all templates, M = N = 101. The distributions of the normalized cross-correlation coefficients, corresponding to the referenced template centered on the sample selected in row No. 1000 of the No. 7 CCD with the sensed image belonging to the No. 8 CCD, are illustrated in Figure 14.
Panel (a) shows the situation for T_1 and T_s, (b) for T_2 and T_s, and (c) for T̃_2 and T_s. The compactness of the data is characterized by the peak value γ_max and the location variances σ²_x, σ²_y:
\sigma^{2}_{x} = \frac{\sum_{i=1}^{M} \sum_{j=1}^{M} \gamma_{ij} \cdot (i - x_{\max})^{2}}{\sum_{i=1}^{M} \sum_{j=1}^{M} \gamma_{ij}}, \qquad
\sigma^{2}_{y} = \frac{\sum_{i=1}^{M} \sum_{j=1}^{M} \gamma_{ij} \cdot (j - y_{\max})^{2}}{\sum_{i=1}^{M} \sum_{j=1}^{M} \gamma_{ij}},    (51)
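The variances of Eq. (51) are γ-weighted second moments of the correlation surface about its peak. A NumPy sketch (hypothetical name; γ is indexed here as [i, j] = (column, row) to mirror the formula):

```python
import numpy as np

def peak_location_variances(gamma):
    """Location variances of Eq. (51): weighted second moments of the
    correlation surface gamma about its peak (i_max, j_max)."""
    g = np.asarray(gamma, dtype=float)
    i_max, j_max = np.unravel_index(np.argmax(g), g.shape)  # (x_max, y_max)
    i, j = np.meshgrid(np.arange(g.shape[0]), np.arange(g.shape[1]),
                       indexing="ij")
    total = g.sum()
    var_x = (g * (i - i_max) ** 2).sum() / total
    var_y = (g * (j - j_max) ** 2).sum() / total
    return var_x, var_y
```

A surface concentrated at a single point yields zero variance; the more the correlation mass spreads away from the peak, the larger σ²_x and σ²_y become, which is exactly the compactness measure used in Table 2.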
where x_max and y_max are, respectively, the column and row number of the peak-valued location.
In case (a), γ_max(a) = 0.893, with standard deviations σ_x(a) = 5.653 and σ_y(a) = 8.192; in case (b), γ_max(b) = 0.918, σ_x(b) = 4.839, and σ_y(b) = 6.686; in case (c), γ_max(c) = 0.976, and the deviations sharply shrink to σ_x(c) = 3.27 and σ_y(c) = 4.06. In Table 2, some other samples at 1000-row intervals are also examined. The samples can be regarded as independent of each other.
Judging from the results, the performance in case (c) is better than that in case (b) and much better than that in case (a), since the precise attitude motion measurements enhance the precision of optical flow inversion and thus improve the similarities between the new templates and the sensed images. Note that, although in case (b) the variance decreases only slightly, as analyzed in Section 3.2, compared to case (a) the offsets of the centroids from the peaks have been corrected well by the use of the rough optical flow predictions.
4.3. Summary and Discussion. In terms of the preceding sections, we can see that, compared to ordinary NCC, the precision of image registration is greatly improved, which is attributed to the technique of template reconfiguration. Implementing the auxiliary data from the space-borne sensors in optical flow prediction, the relative deformations between the lapped image pairs can be computed with considerable accuracy. Afterwards, they are used to estimate the gray values of the corresponding parts on the sensed images and help us to construct a new template for registration. As we know, the space-borne sensors may give the middle and low frequency components of the imager's attitude motion with excellent precision. Thus, compared to classical direct template based registration algorithms, the similarity between the reconfigured template and the sensed images may greatly increase. Furthermore, the minor deformations attributed to high frequency jitters can be detected by using subpixel registration between the reconfigured templates and the sensed images. This point of view is the exact basis of high frequency jitter measurement with optical flow inversion.
5. Conclusion

In this paper, optical flows and time-varying image deformations in space dynamic imaging are analyzed in detail. The nonlinear and anisotropic image motion velocity and optical flows are utilized to strengthen the well-posedness of the inverse problem of precise attitude measurement by the optical
Figure 14: Normalized cross-correlations comparison ((a) shows the distribution of γ obtained by applying the direct NCC algorithm; (b) shows the distribution of γ after template reconfiguration with optical flow prediction; (c) shows the distribution of γ derived from posterior template reconfiguration with the high-accuracy sensor's attitude measurement; the axes are the spatial domains X and Y in pixels, 0–100. It can be noticed that, from left to right, the values of γ tend to concentrate around the peak-value location.)
Table 2: Correlation coefficients distribution for registration templates.

Row number | γ_max (a, b, c)     | σ_x (a, b, c)      | σ_y (a, b, c)
No. 1000   | 0.893, 0.918, 0.976 | 5.653, 4.839, 3.27 | 8.192, 6.686, 4.06
No. 2000   | 0.807, 0.885, 0.929 | 8.704, 6.452, 2.13 | 6.380, 7.342, 5.71
No. 3000   | 0.832, 0.940, 0.988 | 4.991, 3.023, 1.55 | 7.704, 4.016, 1.93
No. 4000   | 0.919, 0.935, 0.983 | 5.079, 3.995, 3.61 | 5.873, 5.155, 3.85
No. 5000   | 0.865, 0.922, 0.951 | 5.918, 4.801, 2.37 | 6.151, 2.371, 2.57
No. 6000   | 0.751, 0.801, 0.907 | 12.57, 9.985, 7.89 | 14.66, 8.213, 2.06
No. 7000   | 0.759, 0.846, 0.924 | 11.63, 10.84, 7.14 | 12.71, 8.267, 4.90
No. 8000   | 0.884, 0.900, 0.943 | 8.125, 3.546, 5.42 | 8.247, 6.770, 2.88
flow inversion method. For the purpose of determining the fixed-solution conditions of the optical flow equations, information-based image registration algorithms are proposed. We apply rough optical flow prediction to improve the efficiency and accuracy of dense image registration. Based on the results of registration, the attitude motions of remote sensors during imaging are measured by using the precise optical flow inversion method. The experiment on a remote sensor showed that the measurements are achieved with very high accuracy as well as broad bandwidth. This method can be used extensively in remote sensing missions, such as image strip splicing, geometrical rectification, and nonblind image restoration, to promote surveying precision and resolving power.
Conflict of Interests

The authors declare that they have no financial or personal relationships with other people or organizations that can inappropriately influence their work; there is no professional or other personal interest of any nature or kind in any product, service, and/or company that could be construed as influencing the position presented in, or the review of, this paper.
Acknowledgments

This work is supported by the National High Technology Research and Development Program of China (863 Program) (Grant no. 2012AA121503, Grant no. 2013AA12260, and Grant no. 2012AA120603) and the National Natural Science Foundation of China (Grant no. 61377012).
References
[1] V. Tchernykh, M. Beck, and K. Janschek, "An embedded optical flow processor for visual navigation using optical correlator technology," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '06), pp. 67–72, Beijing, China, October 2006.
[2] K. Janschek and V. Tchernykh, "Optical correlator for image motion compensation in the focal plane of a satellite camera," in Proceedings of the 15th IFAC Symposium on Automatic Control in Aerospace, Bologna, Italy, 2001.
[3] W. Priedhorsky and J. J. Bloch, "Optical detection of rapidly moving objects in space," Applied Optics, vol. 44, no. 3, pp. 423–433, 2005.
[4] T. Brox and J. Malik, "Large displacement optical flow: descriptor matching in variational motion estimation," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 33, no. 3, pp. 500–513, 2011.
[5] B. Feng, P. P. Bruyant, P. H. Pretorius, et al., "Estimation of the rigid-body motion from three-dimensional images using a generalized center-of-mass points approach," IEEE Transactions on Nuclear Science, vol. 53, no. 5, pp. 2712–2718, 2006.
[6] J. Wang, P. Yu, C. Yan, J. Ren, and B. He, "Space optical remote sensor image motion velocity vector computational modeling, error budget and synthesis," Chinese Optics Letters, vol. 3, no. 7, pp. 414–417, 2005.
[7] A. S. McEwen, M. E. Banks, N. Baugh, et al., "The High Resolution Imaging Science Experiment (HiRISE) during MRO's Primary Science Phase (PSP)," Icarus, vol. 205, no. 1, pp. 2–37, 2010.
[8] F. Ayoub, S. Leprince, R. Binet, K. W. Lewis, O. Aharonson, and J.-P. Avouac, "Influence of camera distortions on satellite image registration and change detection applications," in Proceedings of the IEEE International Geoscience and Remote Sensing Symposium (IGARSS '08), pp. II-1072–II-1075, Boston, Mass, USA, 2008.
[9] S. Leprince, S. Barbot, F. Ayoub, and J.-P. Avouac, "Automatic and precise orthorectification, coregistration, and subpixel correlation of satellite images, application to ground deformation measurements," IEEE Transactions on Geoscience and Remote Sensing, vol. 45, no. 6, pp. 1529–1558, 2007.
[10] S. Leprince, P. Muse, and J.-P. Avouac, "In-flight CCD distortion calibration for pushbroom satellites based on subpixel correlation," IEEE Transactions on Geoscience and Remote Sensing, vol. 46, no. 9, pp. 2675–2683, 2008.
[11] Y. Yitzhaky, R. Milberg, S. Yohaev, and N. S. Kopeika, "Comparison of direct blind deconvolution methods for motion-blurred images," Applied Optics, vol. 38, no. 20, pp. 4325–4332, 1999.
[12] R. C. Hardie, K. J. Barnard, and R. Ordonez, "Fast super-resolution with affine motion using an adaptive Wiener filter and its application to airborne imaging," Optics Express, vol. 19, no. 27, pp. 26208–26231, 2011.
[13] E. M. Blixt, J. Semeter, and N. Ivchenko, "Optical flow analysis of the aurora borealis," IEEE Geoscience and Remote Sensing Letters, vol. 3, no. 1, pp. 159–163, 2006.
[14] M. G. Mozerov, "Constrained optical flow estimation as a matching problem," IEEE Transactions on Image Processing, vol. 22, no. 5, pp. 2044–2055, 2013.
[15] H. Sakaino, "A semitransparency-based optical-flow method with a point trajectory model for particle-like video," IEEE Transactions on Image Processing, vol. 21, no. 2, pp. 441–450, 2012.
[16] D. Korsch, "Closed form solution for three-mirror telescopes, corrected for spherical aberration, coma, astigmatism, and field curvature," Applied Optics, vol. 11, no. 12, pp. 2986–2987, 1972.
[17] G. Naletto, V. da Deppo, M. G. Pelizzo, R. Ragazzoni, and E. Marchetti, "Optical design of the wide angle camera for the Rosetta mission," Applied Optics, vol. 41, no. 7, pp. 1446–1453, 2002.
[18] M. Born, E. Wolf, A. B. Bhatia, and P. C. Clemmow, Principles of Optics: Electromagnetic Theory of Propagation, Interference and Diffraction of Light, 7th edition, 1999.
[19] H. Schaub and J. L. Junkins, Analytical Mechanics of Space Systems, AIAA Education Series, 2002.
[20] C. Wang, F. Xing, J. H. Wang, and Z. You, "Optical flows method for lightweight agile remote sensor design and instrumentation," in International Symposium on Photoelectronic Detection and Imaging, vol. 8908 of Proceedings of SPIE, 2013.
[21] T. Sun, F. Xing, and Z. You, "Optical system error analysis and calibration method of high-accuracy star trackers," Sensors, vol. 13, no. 4, pp. 4598–4623, 2013.
[22] T. Sun, F. Xing, Z. You, and M. Wei, "Motion-blurred star acquisition method of the star tracker under high dynamic conditions," Optics Express, vol. 21, no. 17, pp. 20096–20110, 2013.
[23] L. Younes, "Combining geodesic interpolating splines and affine transformations," IEEE Transactions on Image Processing, vol. 15, no. 5, pp. 1111–1119, 2006.
[24] B. Zitova and J. Flusser, "Image registration methods: a survey," Image and Vision Computing, vol. 21, no. 11, pp. 977–1000, 2003.
[25] Z. L. Song, S. Li, and T. F. George, "Remote sensing image registration approach based on a retrofitted SIFT algorithm and Lissajous-curve trajectories," Optics Express, vol. 18, no. 2, pp. 513–522, 2010.
[26] V. Arevalo and J. Gonzalez, "Improving piecewise linear registration of high-resolution satellite images through mesh optimization," IEEE Transactions on Geoscience and Remote Sensing, vol. 46, no. 11, pp. 3792–3803, 2008.
[27] Z. Levi and C. Gotsman, "D-snake: image registration by as-similar-as-possible template deformation," IEEE Transactions on Visualization and Computer Graphics, vol. 19, no. 2, pp. 331–343, 2013.
[28] R. J. Althof, M. G. J. Wind, and J. T. Dobbins III, "A rapid and automatic image registration algorithm with subpixel accuracy," IEEE Transactions on Medical Imaging, vol. 16, no. 3, pp. 308–316, 1997.
[29] W. Tong, "Subpixel image registration with reduced bias," Optics Letters, vol. 36, no. 5, pp. 763–765, 2011.
[30] Y. Bentoutou, N. Taleb, K. Kpalma, and J. Ronsin, "An automatic image registration for applications in remote sensing," IEEE Transactions on Geoscience and Remote Sensing, vol. 43, no. 9, pp. 2127–2137, 2005.
[31] L. S. Ming, L. Yan, and L. Jindong, "Mapping satellite-1 transmission type photogrammetric and remote sensing," Journal of Remote Sensing, vol. 16, supplement, pp. 10–16, 2012 (Chinese).
[32] J. P. Lewis, "Fast template matching," Vision Interface, vol. 95, pp. 120–123, 1995.
[33] H. Foroosh, J. B. Zerubia, and M. Berthod, "Extension of phase correlation to subpixel registration," IEEE Transactions on Image Processing, vol. 11, no. 3, pp. 188–200, 2002.
Mathematical Problems in Engineering 11
Table 1: The offsets between overlapped images.

Sample | Row no. (pixel) | δ_nx (pixel) | Δx′_n (mm) | δ_ny (pixel) | Δy′_n (mm)
S11    | 258             | −25.15       | −0.2200625 | −5.39        | 18.4503
S12    | 423             | −23.78       | −0.2080750 | −7.36        | 18.4331
S31    | 266             | −12.85       | −0.1124375 | −7.66        | 18.4304
S32    | 436             | −12.97       | −0.1134875 | −6.87        | 18.4374
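The pixel and millimeter columns of Table 1 are mutually consistent under a single detector pixel pitch, which can be inferred from the first row (the pitch is our inference from the table, not a value stated in the text):

```python
# Table 1, sample S11: -25.15 px corresponds to -0.2200625 mm along x
pitch = -0.2200625 / -25.15          # inferred pixel pitch, mm per pixel
print(round(pitch * 1000, 3))        # 8.75 (micrometers per pixel)

rows = [(-25.15, -0.2200625), (-23.78, -0.2080750),
        (-12.85, -0.1124375), (-12.97, -0.1134875)]
# Every delta_x / Delta_x' pair in the table converts with the same pitch
print(all(abs(px * pitch - mm) < 1e-9 for px, mm in rows))  # True
```

The y-column behaves the same way once the fixed inter-CCD gap is subtracted, which is why Δy′_n stays near a constant while δ_ny varies.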
hand, it has been discovered in Figures 9 and 10 that the fluctuation of the image offsets taking place in Gap 1 is greater in magnitude than that in Gap 3. All these facts indicate that the distorted optical flows can be detected from a large number of image offsets. We will see later that the nonlinear distribution of the data strengthens the well-posedness of the optical flow inversion algorithm.
4. Remote Sensor Attitude Motion Measurement
In this section, the attitude velocity of the remote sensor is resolved by using the optical flow inversion method. The results of dense registration are applied to produce the fixed-solution conditions for the optical flow equations.
4.1. The Principle of Optical Flow Inversion. For clarity, in frame C the two coordinate components of the image displacement of the $k$th sample element belonging to the $n$th lapped strip pair are written as $\Delta x'_{nk}$ and $\Delta y'_{nk}$. From (13) and (25), it is easy to show that the contributions to the optical flow owing to the orbital motion and the earth's inertial movement vary only slightly in the short term, so that the corresponding displacements can be regarded as piecewise constants $s_x$ and $s_y$.
Let $\tau_{ij}$ and $t_{ij}$ be, in order, the two sequential imaging times of the $j$th image sample on the overlapped detectors of the $i$th gap. They are usually recorded in the auxiliary data of the remote sensor. Hence, for every image element, the number of discrete states in the optical flow tracing is

$$N_{ij} = \left[\frac{t_{ij} - \tau_{ij}}{\Delta t}\right] \in \mathbb{Z}^{+} \quad (i = 1, \dots, n;\ j = 1, \dots, m), \quad (38)$$
where $n$ is the number of CCD gaps, $m$ is the number of sample groups, and $\Delta t$ is the time step. We put samples with the same $j$ index into the same group, in which the samples are captured by the prior detectors simultaneously.
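As a concrete illustration, the discretization (38) is just an integer floor of the elapsed time over the step size; the timestamps below are hypothetical, not taken from the sensor's auxiliary data:

```python
import math

def n_steps(t_ij, tau_ij, dt):
    """Number of discrete states for optical flow tracing, Eq. (38):
    the integer part of the elapsed time between the two sequential
    imaging instants divided by the integration step dt."""
    return int(math.floor((t_ij - tau_ij) / dt))

# Hypothetical auxiliary-data timestamps (seconds) for one image element
tau, t = 0.0120, 0.0177   # first and second imaging instants
dt = 0.0002               # integration time step
print(n_steps(t, tau, dt))  # 28
```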
We expand (11), substitute it into (14) and (13), and then arrange the scalar optical flow inversion equations in terms of the three axial angular velocity components $\omega_1$, $\omega_2$, and $\omega_3$ (the variables of the inverse problem), yielding the linear optical flow equations.
[Figure 11: Coefficients determination according to the current location of the image. The sketch shows the locus of the optical flow across the CCDs, the maximal offset $\delta_{\max}$, and a region $D$ over which $c_{i\mu\kappa} = \text{const}$.]
For the $l$th group of samples:

$$\sum_{i=1}^{N_{1l}} \left(c^{i}_{1l1}\,\omega^{i}_{1} + c^{i}_{1l2}\,\omega^{i}_{2} + c^{i}_{1l3}\,\omega^{i}_{3}\right) = \Delta x'_{1l} - s_{x1},$$
$$\sum_{i=1}^{N_{1l}} \left(d^{i}_{1l1}\,\omega^{i}_{1} + d^{i}_{1l2}\,\omega^{i}_{2} + d^{i}_{1l3}\,\omega^{i}_{3}\right) = \Delta y'_{1l} - s_{y1},$$
$$\vdots$$
$$\sum_{i=1}^{N_{nl}} \left(c^{i}_{nl1}\,\omega^{i}_{1} + c^{i}_{nl2}\,\omega^{i}_{2} + c^{i}_{nl3}\,\omega^{i}_{3}\right) = \Delta x'_{nl} - s_{xn},$$
$$\sum_{i=1}^{N_{nl}} \left(d^{i}_{nl1}\,\omega^{i}_{1} + d^{i}_{nl2}\,\omega^{i}_{2} + d^{i}_{nl3}\,\omega^{i}_{3}\right) = \Delta y'_{nl} - s_{yn}. \quad (39)$$
Suppose that the sampling process stops once $m$ groups have been formed. The coefficients are as follows:

$$c^{i}_{\mu\nu\kappa} = \Xi_{\kappa}\!\left(\mu,\ \left\lceil \frac{(i - \nu + 1)\,\mathcal{N}}{N_{\mu\nu}} \right\rceil\right), \qquad d^{i}_{\mu\nu\kappa} = \Lambda_{\kappa}\!\left(\mu,\ \left\lceil \frac{(i - \nu + 1)\,\mathcal{N}}{N_{\mu\nu}} \right\rceil\right) \quad (\kappa = 1, 2, 3). \quad (40)$$
Here

$$\Xi_{k} = \begin{pmatrix} \xi_{11k} & \xi_{12k} & \cdots & \xi_{1\mathcal{N}k} \\ \xi_{21k} & \xi_{22k} & \cdots & \xi_{2\mathcal{N}k} \\ \vdots & & & \vdots \\ \xi_{n1k} & \xi_{n2k} & \cdots & \xi_{n\mathcal{N}k} \end{pmatrix}, \qquad \Lambda_{k} = \begin{pmatrix} \lambda_{11k} & \lambda_{12k} & \cdots & \lambda_{1\mathcal{N}k} \\ \lambda_{21k} & \lambda_{22k} & \cdots & \lambda_{2\mathcal{N}k} \\ \vdots & & & \vdots \\ \lambda_{n1k} & \lambda_{n2k} & \cdots & \lambda_{n\mathcal{N}k} \end{pmatrix}. \quad (41)$$
As for the algorithm, to reduce the complexity, all possible values of the coefficients are stored in the matrices $\Xi_k$ and $\Lambda_k$. The accuracy is guaranteed because the coefficients for images moving within the same piece of region are almost equal to an identical constant over a short period, as explained in Figure 11.
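A minimal sketch of this lookup scheme, assuming the segment-index mapping reconstructed in (40); the table sizes and stored values are illustrative stand-ins, not the sensor's coefficients:

```python
import math
import numpy as np

n_gaps, n_seg = 3, 5                 # n CCD gaps, N constant-valued segments
rng = np.random.default_rng(0)
# Precomputed coefficient tables, Eq. (41): one n x N matrix per axis k
Xi = [rng.standard_normal((n_gaps, n_seg)) for _ in range(3)]

def coeff_c(i, mu, nu, N_munu, kappa):
    """c^i_{mu,nu,kappa} via the lookup of Eq. (40): map the time index i
    onto one of the N piecewise-constant segments and read the stored value."""
    seg = math.ceil((i - nu + 1) * n_seg / N_munu)   # segment index in 1..N
    return Xi[kappa - 1][mu - 1, seg - 1]

# Time steps falling in the same segment share one stored coefficient
print(coeff_c(1, mu=2, nu=1, N_munu=15, kappa=3)
      == coeff_c(3, mu=2, nu=1, N_munu=15, kappa=3))  # True
```

This is what makes the inversion cheap: the coefficients are read from small tables instead of being re-derived from the imaging geometry at every time step.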
It has been mentioned that the optical flow is not sensitive to the satellite's orbital motion and the earth's rotation in the short term; namely, the possible values are assigned by the following functions:

$$\xi_{ijk} = \xi_{k}\left(a, e, i_{0}, \Omega, \omega, x'_{q}, y'_{q}, \Delta t\right),$$
$$\lambda_{ijk} = \lambda_{k}\left(a, e, i_{0}, \Omega, \omega, x'_{q}, y'_{q}, \Delta t\right),$$
$$i = 1 \sim n, \quad j = 1 \sim \mathcal{N}, \quad q = 1 \sim \mathcal{N}. \quad (42)$$
Here $\mathcal{N}$ is the number of constant-valued segments in the region encompassing all the possible optical flow trajectories. The orbital elements and the integration step size $\Delta t$ are common to all the functions. Furthermore, when long-term measurements are executed, $\Xi_k$ and $\Lambda_k$ only need to be renewed according to the current parameters.

The coefficient matrix of the optical flow equations for the $j$th ($1 \le j \le m$) group can be written as
$$\mathbf{C}_{j} = \begin{pmatrix}
c^{1}_{1j1} & c^{1}_{1j2} & c^{1}_{1j3} & \cdots & c^{N_{1j}}_{1j1} & c^{N_{1j}}_{1j2} & c^{N_{1j}}_{1j3} & \cdots & 0 & 0 \\
d^{1}_{1j1} & d^{1}_{1j2} & d^{1}_{1j3} & \cdots & d^{N_{1j}}_{1j1} & d^{N_{1j}}_{1j2} & d^{N_{1j}}_{1j3} & \cdots & 0 & 0 \\
\vdots & & & & & & & & & \vdots \\
c^{1}_{qj1} & c^{1}_{qj2} & c^{1}_{qj3} & \cdots & \cdots & \cdots & c^{N_{qj}}_{qj1} & c^{N_{qj}}_{qj2} & c^{N_{qj}}_{qj3} & \\
d^{1}_{qj1} & d^{1}_{qj2} & d^{1}_{qj3} & \cdots & \cdots & \cdots & d^{N_{qj}}_{qj1} & d^{N_{qj}}_{qj2} & d^{N_{qj}}_{qj3} & \\
\vdots & & & & & & & & & \vdots \\
c^{1}_{nj1} & c^{1}_{nj2} & c^{1}_{nj3} & \cdots & c^{N_{nj}}_{nj1} & c^{N_{nj}}_{nj2} & c^{N_{nj}}_{nj3} & \cdots & 0 & \\
d^{1}_{nj1} & d^{1}_{nj2} & d^{1}_{nj3} & \cdots & d^{N_{nj}}_{nj1} & d^{N_{nj}}_{nj2} & d^{N_{nj}}_{nj3} & \cdots & 0 &
\end{pmatrix}_{2n \times 3N_{qj}}, \quad (43)$$
where $N_{qj} = \max\{N_{1j}, \dots, N_{nj}\}$.

Consequently, as we organize the equations of all the groups, the global coefficient matrix is given in the following form:
$$\mathbf{C} = \begin{pmatrix}
\left[\mathbf{C}_{1}\right]_{2n \times 3N_{q1}} & 0 & \cdots & \cdots & 0 \\
0 & \left[\mathbf{C}_{2}\right]_{2n \times 3N_{q2}} & 0 & \cdots & 0 \\
 & & \ddots & & \\
 & & & \left[\mathbf{C}_{j}\right]_{2n \times 3N_{qj}} & \\
 & & & & \ddots \\
0 & \cdots & & & \left[\mathbf{C}_{m}\right]_{2n \times 3N_{qm}}
\end{pmatrix}_{2nm \times 3N_{\max}}. \quad (44)$$
$\mathbf{C}$ is a quasidiagonal partitioned matrix; every subblock has $2n$ rows. The maximal number of columns of $\mathbf{C}$ is $3N_{\max}$, where $N_{\max} = \max\{N_{q1}, \dots, N_{qm}\}$.
The unknown variables are as follows:

$$[\boldsymbol{\Omega}]_{3N_{\max} \times 1} = \left[\omega^{1}_{1}\ \omega^{1}_{2}\ \omega^{1}_{3}\ \cdots\ \omega^{N_{\max}}_{1}\ \omega^{N_{\max}}_{2}\ \omega^{N_{\max}}_{3}\right]^{T}. \quad (45)$$
The constants are as follows:

$$\Delta\mathbf{u}_{2mn \times 1} = \left[\Delta x'_{11}\ \Delta y'_{11}\ \cdots\ \Delta x'_{n1}\ \Delta y'_{n1}\ \cdots\ \Delta x'_{1m}\ \Delta y'_{1m}\ \cdots\ \Delta x'_{nm}\ \Delta y'_{nm}\right]^{T},$$
$$\mathbf{s}_{2mn \times 1} = \left[s_{x1}\ s_{y1}\ \cdots\ s_{xn}\ s_{yn}\ \cdots\ s_{x1}\ s_{y1}\ \cdots\ s_{xn}\ s_{yn}\right]^{T}. \quad (46)$$
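One plausible reading of the assembly (43)–(46) — each group's equations start at time step 1 and are zero-padded to the width 3N_max spanned by the unknown vector — can be sketched with toy sizes and random stand-in coefficients:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 2, 3                        # CCD gaps and sample groups
N_q = [4, 5, 3]                    # N_qj for each group j
N_max = max(N_q)                   # global number of time steps

# One 2n x 3*N_qj coefficient block per group, as in Eq. (43)
blocks = [rng.standard_normal((2 * n, 3 * Nq)) for Nq in N_q]

# Global matrix, Eq. (44): stack the group blocks row-wise and zero-pad
# each one to the full width 3*N_max of the unknown vector (45)
C = np.zeros((2 * n * m, 3 * N_max))
for j, B in enumerate(blocks):
    C[2 * n * j : 2 * n * (j + 1), : B.shape[1]] = B

print(C.shape)  # (12, 15), i.e. 2nm rows by 3*N_max columns
```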
[Figure 12: The flow chart of the attitude motion measurement. Inputs: the auxiliary data of the satellite. Preliminary information acquisition: (1) selecting the original template T1 centered on the κth sampling pixel from the referenced image captured by the prior CCD; (2) predicting the deformation and displacement of every element via optical flow prediction, in the image frame between T1 and the auxiliary data, and then reconstructing a new deformed image T′1; (3) reconfiguring the deformed image via an image resampling process to form a new template T2; (4) using the normalized cross-correlation algorithm to register T2 on the sensed image captured by the posterior CCD; (5) measuring the relative offsets in the sensed window; (6) computing the precise offset in the photography frame between T2 and the sensed window by adding the optical flow prediction; if κ < Nmax, set κ = κ + 1 and return to step (1). Inverse problem solving: (7) utilizing the offsets data as the fixed-solution conditions for the optical flow inversion equations and solving for the angular velocity ω, for validation and further usages.]
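Step (4) of the flow chart relies on normalized cross-correlation; a minimal sketch of the zero-mean NCC and the exhaustive window search (toy arrays standing in for the CCD imagery) is:

```python
import numpy as np

def ncc(template, window):
    """Normalized cross-correlation of two equal-sized patches (zero-mean form)."""
    t = template - template.mean()
    w = window - window.mean()
    return float(np.sum(t * w) / np.sqrt(np.sum(t ** 2) * np.sum(w ** 2)))

rng = np.random.default_rng(3)
T2 = rng.random((31, 31))               # stands in for the reconfigured template
sensed = rng.random((60, 60))           # stands in for the posterior-CCD image
sensed[10:41, 15:46] = T2               # embed the template at offset (10, 15)

# Exhaustive search for the best-matching window on the sensed image
scores = np.array([[ncc(T2, sensed[r:r + 31, c:c + 31]) for c in range(30)]
                   for r in range(30)])
r0, c0 = np.unravel_index(scores.argmax(), scores.shape)
print(r0, c0)  # 10 15: the registration location
```

In practice the search is accelerated with running-sum tables (Lewis [32]) rather than the naive double loop above.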
$\Delta\mathbf{u}$ has been measured by image dense registration; $\mathbf{s}$ can be determined from the auxiliary data of the sensors. The global equations are expressed by

$$\mathbf{C}_{2mn \times 3N_{\max}} \cdot [\boldsymbol{\Omega}]_{3N_{\max} \times 1} = \Delta\mathbf{u}_{2mn \times 1} - \mathbf{s}_{2mn \times 1}. \quad (47)$$
As for this problem, it is easy to verify that the conditions (1) $2nm > 3N_{\max}$ and (2) $\mathrm{rank}(\mathbf{C}) = 3N_{\max}$ are easily met in practical work. To solve (47), well-posedness is the critical issue of the inverse problem. The strong nonlinearity and anisotropy of the optical flow greatly reduce the correlation between the coefficients in $\mathbf{C}$ and thereby strengthen the well-posedness of the solution. The least-squares solution of (47) can be obtained as

$$[\boldsymbol{\Omega}] = \left(\mathbf{C}^{T}\mathbf{C}\right)^{-1}\mathbf{C}^{T}\left(\Delta\mathbf{u} - \mathbf{s}\right). \quad (48)$$
The well-posedness can be examined by applying the Singular Value Decomposition (SVD) to $\mathbf{C}$. Consider the nonnegative definite matrix $\mathbf{C}^{T}\mathbf{C}$, whose eigenvalues are given in order $\lambda_{1} \ge \lambda_{2} \ge \cdots \ge \lambda_{3N_{\max}}$:

$$\mathbf{C} = \mathbf{U}\,[\boldsymbol{\sigma}]\,\mathbf{V}^{T}, \quad (49)$$

where $\mathbf{U}_{2mn \times 2mn}$ and $\mathbf{V}_{3N_{\max} \times 3N_{\max}}$ are unit orthogonal matrices and the singular values are $\sigma_{i} = \sqrt{\lambda_{i}}$. The well-posedness of the solution is acceptable if the condition number $\kappa(\mathbf{C}) = \sigma_{1}/\sigma_{3N_{\max}} \le tol$.
Associating the process of inverse-problem solving in Section 4 with the process of preliminary information acquisition in Section 3, the whole algorithm of the remote sensor's attitude measurement is illustrated in the flow chart of Figure 12.
4.2. Experimental Results and Validation. In the experiment, 72,940 samples on 7 image strip pairs were involved. To keep the values in $\Xi$ and $\Lambda$ nearly invariant, we redistributed these samples into 20 subspaces and solved for the three axial components of the angular velocity. According to Shannon's sampling theorem, the measurable frequency $f_c$ is expected to reach up to half of the line rate of the TDI CCD; for the experiment, $f_c \approx 1.749\,\mathrm{kHz}$. The $\omega_i \sim t$ curves over $0\,\mathrm{s} \sim 0.148\,\mathrm{s}$ are shown in Figure 13.

In this period, $\omega_{2\max} = 0.001104^{\circ}/\mathrm{s}$ and $\omega_{1\max} = 0.001194^{\circ}/\mathrm{s}$. The signal of $\omega_{3}(t)$ fluctuates around the mean value $\bar{\omega}_{3} = 0.01752^{\circ}/\mathrm{s}$. It is not hard to infer that high-frequency jitters
[Figure 13: Solutions for the angular velocities of the remote sensor: $\omega_1$, $\omega_2$ (on a $10^{-3}$ scale), and $\omega_3$ in deg/s versus imaging time in seconds.]
were perturbing the remote sensor. Besides, compared to the signals of $\omega_{1}(t)$ and $\omega_{2}(t)$, the low-frequency components of $\omega_{3}(t)$ are higher in magnitude. Actually, for this remote sensor, the satellite yaw angle needs to be regulated in real time to compensate for the image rotation on the focal plane, such that the detectors can always scan along the direction of the image motion. Based on the auxiliary data, the image motion velocity vector $\mathbf{V}$ of the central pixel of the FOV can be computed, so the optimal yaw motion in principle will be

$$\psi^{*}_{t} = \frac{V_{y'}}{V_{x'}}, \qquad \omega^{*}_{3}(t) = \dot{\psi}^{*}_{t} = \frac{\dot{V}_{y'}\,V_{x'} - V_{y'}\,\dot{V}_{x'}}{V^{2}_{x'}}. \quad (50)$$
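Equation (50) can be checked numerically: the optimal yaw rate is the time derivative of the velocity ratio, and differentiating the ratio directly agrees with the quotient-rule form. The velocity signals below are smooth synthetic curves, not flight data:

```python
import numpy as np

t = np.linspace(0.0, 0.148, 741)             # imaging interval, 0.2 ms step
Vx = 7000.0 + 5.0 * np.sin(2 * np.pi * t)    # synthetic focal-plane velocity
Vy = 2.0 + 1.5 * np.cos(2 * np.pi * t)       # components (arbitrary units)

psi_star = Vy / Vx                           # optimal yaw angle, Eq. (50)
omega3_star = np.gradient(psi_star, t)       # its rate via finite differences

# The quotient rule of Eq. (50) gives the same rate on interior points
Vx_dot, Vy_dot = np.gradient(Vx, t), np.gradient(Vy, t)
quotient = (Vy_dot * Vx - Vy * Vx_dot) / Vx ** 2
print(np.allclose(omega3_star[1:-1], quotient[1:-1], atol=1e-8))  # True
```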
The mean value of $\omega^{*}_{3}(t)$ is $\bar{\omega}^{*}_{3} = 0.01198^{\circ}/\mathrm{s}$. We attribute $\Delta\omega^{*}_{3} = \bar{\omega}_{3} - \bar{\omega}^{*}_{3} = 0.00554^{\circ}/\mathrm{s}$ to the error of the satellite attitude control.

In order to validate the measurement, the technique of template reconfiguration was implemented again to check the expected phenomenon that, based on the high-accuracy information, the correlations between the new templates and $T_s$ should be further improved. In addition, the distribution of $\gamma$ near $\gamma_{\max}$ is going to become more compact, which is easy to understand, since much more useful information about the remote sensor's motion is introduced into the template reconstructions and increases the similarities between the lapped images.
Unlike the processing in image dense registration, in the validation phase larger original templates are selected. Let $T_1$ be the referenced image template centered at the examined element, $T_2$ the new template reconfigured by the rough prediction of the optical flow, $\hat{T}_2$ the new template reconfigured based on the precision attitude motion measurement, and $T_s$ the template on the sensed image centered at the registration pixel. For all the templates, $M = N = 101$. The distributions of the normalized cross-correlation coefficients corresponding to the referenced template centered on the sample selected in the No. 1000 row belonging to the No. 7 CCD, with the sensed image belonging to the No. 8 CCD, are illustrated in Figure 14: (a) shows the situation for $T_1$ and $T_s$, (b) for $T_2$ and $T_s$, and (c) for $\hat{T}_2$ and $T_s$. The compactness of the data is characterized by the peak value $\gamma_{\max}$ and the location variances $\sigma^{2}_{x}$, $\sigma^{2}_{y}$:
$$\sigma^{2}_{x} = \frac{\sum_{i=1}^{M}\sum_{j=1}^{M} \gamma_{ij} \cdot (i - x_{\max})^{2}}{\sum_{i=1}^{M}\sum_{j=1}^{M} \gamma_{ij}}, \qquad \sigma^{2}_{y} = \frac{\sum_{i=1}^{M}\sum_{j=1}^{M} \gamma_{ij} \cdot (j - y_{\max})^{2}}{\sum_{i=1}^{M}\sum_{j=1}^{M} \gamma_{ij}}, \quad (51)$$
where $x_{\max}$ and $y_{\max}$ are, respectively, the column and row numbers of the peak-valued location.
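The compactness statistics (51) are simply $\gamma$-weighted variances about the peak; a sketch on a synthetic Gaussian-shaped surface standing in for the NCC output:

```python
import numpy as np

M = 101
i, j = np.mgrid[1:M + 1, 1:M + 1]        # 1-based indices, as in Eq. (51)
x_max, y_max = 51, 51                    # peak-valued location
# Synthetic correlation surface standing in for the NCC output gamma
gamma = np.exp(-(i - x_max) ** 2 / (2 * 4.0 ** 2)
               - (j - y_max) ** 2 / (2 * 6.0 ** 2))

w = gamma / gamma.sum()                  # normalized weights of Eq. (51)
sigma2_x = np.sum(w * (i - x_max) ** 2)
sigma2_y = np.sum(w * (j - y_max) ** 2)

# A more compact peak (smaller sigma) indicates better-matched templates
print(round(float(np.sqrt(sigma2_x)), 2), round(float(np.sqrt(sigma2_y)), 2))  # 4.0 6.0
```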
In case (a), $\gamma_{\max}(a) = 0.893$, with standard deviations $\sigma_{x}(a) = 5.653$ and $\sigma_{y}(a) = 8.192$; in case (b), $\gamma_{\max}(b) = 0.918$, $\sigma_{x}(b) = 4.839$, and $\sigma_{y}(b) = 6.686$; in case (c), $\gamma_{\max}(c) = 0.976$, and the variance sharply shrinks to $\sigma_{x}(c) = 3.27$ and $\sigma_{y}(c) = 4.06$. In Table 2, some other samples at intervals of 1000 rows are also examined; these samples can be regarded as independent of each other.

Judging from the results, the performance in case (c) is better than that in case (b) and much better than that in case (a), since the precise attitude motion measurements enhance the precision of the optical flow inversion so as to improve the similarities between the new templates and the sensed images. Note that, although in case (b) the variance decreases only slightly, as analyzed in Section 3.2, compared to case (a) the offsets of the centroids from the peaks have been corrected well by the use of the rough optical flow predictions.
4.3. Summary and Discussion. From the preceding sections, we can see that, compared to the ordinary NCC, the precision of the image registration is greatly improved, which is attributed to the assistance of the technique of template reconfiguration. By applying the auxiliary data from the space-borne sensors to the optical flow prediction, the relative deformations between the lapped image pairs can be computed with considerable accuracy. Afterwards, they are used to estimate the gray values of the corresponding parts of the sensed images and to help construct a new template for registration. As we know, the space-borne sensors may give the middle- and low-frequency components of the imager's attitude motion with excellent precision. Thus, compared to the classical direct template-based registration algorithms, the similarity between the reconfigured template and the sensed images may greatly increase. Furthermore, the minor deformations attributed to high-frequency jitters can be detected by subpixel registration between the reconfigured templates and the sensed images. This point of view is the exact basis of high-frequency jitter measurement with optical flow inversion.
5. Conclusion
In this paper, optical flows and the time-varying image deformation in space dynamic imaging are analyzed in detail. The nonlinear and anisotropic image motion velocity and optical flows are utilized to strengthen the well-posedness of the inverse problem of attitude precise measurement by the optical
[Figure 14: Normalized cross-correlations comparison. ((a) shows the distribution of $\gamma$ obtained by applying the direct NCC algorithm; (b) shows the distribution of $\gamma$ after template reconfiguration with the optical flow prediction; (c) shows the distribution of $\gamma$ derived from the posterior template reconfiguration with the high-accuracy sensor attitude measurement. It can be noticed that, from left to right, the values of $\gamma$ tend to be distributed ever more compactly around the peak-value location.)]
Table 2: Correlation coefficients distribution for the registration templates.

Row number | γmax (a, b, c)      | σx (a, b, c)        | σy (a, b, c)
No. 1000   | 0.893, 0.918, 0.976 | 5.653, 4.839, 3.27  | 8.192, 6.686, 4.06
No. 2000   | 0.807, 0.885, 0.929 | 8.704, 6.452, 2.13  | 6.380, 7.342, 5.71
No. 3000   | 0.832, 0.940, 0.988 | 4.991, 3.023, 1.55  | 7.704, 4.016, 1.93
No. 4000   | 0.919, 0.935, 0.983 | 5.079, 3.995, 3.61  | 5.873, 5.155, 3.85
No. 5000   | 0.865, 0.922, 0.951 | 5.918, 4.801, 2.37  | 6.151, 2.371, 2.57
No. 6000   | 0.751, 0.801, 0.907 | 12.57, 9.985, 7.89  | 14.66, 8.213, 2.06
No. 7000   | 0.759, 0.846, 0.924 | 11.63, 10.84, 7.14  | 12.71, 8.267, 4.90
No. 8000   | 0.884, 0.900, 0.943 | 8.125, 3.546, 5.42  | 8.247, 6.770, 2.88
flow inversion method. For the purpose of determining the fixed-solution conditions of the optical flow equations, information-based image registration algorithms are proposed. We apply the rough optical flow prediction to improve the efficiency and accuracy of the dense image registration. Based on the results of the registration, the attitude motions of the remote sensor during imaging are measured by the precise optical flow inversion method. The experiment on a remote sensor showed that the measurements are achieved with very high accuracy as well as a broad bandwidth. This method can be used extensively in remote sensing missions such as image strip splicing, geometrical rectification, and nonblind image restoration, to promote the surveying precision and resolving power.
Conflict of Interests

The authors declare that they have no financial or personal relationships with other people or organizations that can inappropriately influence their work; there is no professional or other personal interest of any nature or kind in any product, service, and/or company that could be construed as influencing the position presented in, or the review of, this paper.
Acknowledgments

This work is supported by the National High Technology Research and Development Program of China (863 Program) (Grants nos. 2012AA121503, 2013AA12260, and 2012AA120603) and the National Natural Science Foundation of China (Grant no. 61377012).
References

[1] V. Tchernykh, M. Beck, and K. Janschek, "An embedded optical flow processor for visual navigation using optical correlator technology," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '06), pp. 67–72, Beijing, China, October 2006.
[2] K. Janschek and V. Tchernykh, "Optical correlator for image motion compensation in the focal plane of a satellite camera," in Proceedings of the 15th IFAC Symposium on Automatic Control in Aerospace, Bologna, Italy, 2001.
[3] W. Priedhorsky and J. J. Bloch, "Optical detection of rapidly moving objects in space," Applied Optics, vol. 44, no. 3, pp. 423–433, 2005.
[4] T. Brox and J. Malik, "Large displacement optical flow: descriptor matching in variational motion estimation," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 33, no. 3, pp. 500–513, 2011.
[5] B. Feng, P. P. Bruyant, P. H. Pretorius, et al., "Estimation of the rigid-body motion from three-dimensional images using a generalized center-of-mass points approach," IEEE Transactions on Nuclear Science, vol. 53, no. 5, pp. 2712–2718, 2006.
[6] J. Wang, P. Yu, C. Yan, J. Ren, and B. He, "Space optical remote sensor image motion velocity vector computational modeling, error budget and synthesis," Chinese Optics Letters, vol. 3, no. 7, pp. 414–417, 2005.
[7] A. S. McEwen, M. E. Banks, N. Baugh, et al., "The High Resolution Imaging Science Experiment (HiRISE) during MRO's Primary Science Phase (PSP)," Icarus, vol. 205, no. 1, pp. 2–37, 2010.
[8] F. Ayoub, S. Leprince, R. Binet, K. W. Lewis, O. Aharonson, and J.-P. Avouac, "Influence of camera distortions on satellite image registration and change detection applications," in Proceedings of the IEEE International Geoscience and Remote Sensing Symposium (IGARSS '08), pp. II1072–II1075, Boston, Mass, USA, 2008.
[9] S. Leprince, S. Barbot, F. Ayoub, and J.-P. Avouac, "Automatic and precise orthorectification, coregistration, and subpixel correlation of satellite images, application to ground deformation measurements," IEEE Transactions on Geoscience and Remote Sensing, vol. 45, no. 6, pp. 1529–1558, 2007.
[10] S. Leprince, P. Muse, and J.-P. Avouac, "In-flight CCD distortion calibration for pushbroom satellites based on subpixel correlation," IEEE Transactions on Geoscience and Remote Sensing, vol. 46, no. 9, pp. 2675–2683, 2008.
[11] Y. Yitzhaky, R. Milberg, S. Yohaev, and N. S. Kopeika, "Comparison of direct blind deconvolution methods for motion-blurred images," Applied Optics, vol. 38, no. 20, pp. 4325–4332, 1999.
[12] R. C. Hardie, K. J. Barnard, and R. Ordonez, "Fast super-resolution with affine motion using an adaptive Wiener filter and its application to airborne imaging," Optics Express, vol. 19, no. 27, pp. 26208–26231, 2011.
[13] E. M. Blixt, J. Semeter, and N. Ivchenko, "Optical flow analysis of the aurora borealis," IEEE Geoscience and Remote Sensing Letters, vol. 3, no. 1, pp. 159–163, 2006.
[14] M. G. Mozerov, "Constrained optical flow estimation as a matching problem," IEEE Transactions on Image Processing, vol. 22, no. 5, pp. 2044–2055, 2013.
[15] H. Sakaino, "A semitransparency-based optical-flow method with a point trajectory model for particle-like video," IEEE Transactions on Image Processing, vol. 21, no. 2, pp. 441–450, 2012.
[16] D. Korsch, "Closed form solution for three-mirror telescopes, corrected for spherical aberration, coma, astigmatism, and field curvature," Applied Optics, vol. 11, no. 12, pp. 2986–2987, 1972.
[17] G. Naletto, V. da Deppo, M. G. Pelizzo, R. Ragazzoni, and E. Marchetti, "Optical design of the wide angle camera for the Rosetta mission," Applied Optics, vol. 41, no. 7, pp. 1446–1453, 2002.
[18] M. Born, E. Wolf, A. B. Bhatia, and P. C. Clemmow, Principles of Optics: Electromagnetic Theory of Propagation, Interference and Diffraction of Light, 7th edition, 1999.
[19] H. Schaub and J. L. Junkins, Analytical Mechanics of Space Systems, AIAA Education Series, 2002.
[20] C. Wang, F. Xing, J. H. Wang, and Z. You, "Optical flows method for lightweight agile remote sensor design and instrumentation," in International Symposium on Photoelectronic Detection and Imaging, vol. 8908 of Proceedings of the SPIE, 2013.
[21] T. Sun, F. Xing, and Z. You, "Optical system error analysis and calibration method of high-accuracy star trackers," Sensors, vol. 13, no. 4, pp. 4598–4623, 2013.
[22] T. Sun, F. Xing, Z. You, and M. Wei, "Motion-blurred star acquisition method of the star tracker under high dynamic conditions," Optics Express, vol. 21, no. 17, pp. 20096–20110, 2013.
[23] L. Younes, "Combining geodesic interpolating splines and affine transformations," IEEE Transactions on Image Processing, vol. 15, no. 5, pp. 1111–1119, 2006.
[24] B. Zitova and J. Flusser, "Image registration methods: a survey," Image and Vision Computing, vol. 21, no. 11, pp. 977–1000, 2003.
[25] Z. L. Song, S. Li, and T. F. George, "Remote sensing image registration approach based on a retrofitted SIFT algorithm and Lissajous-curve trajectories," Optics Express, vol. 18, no. 2, pp. 513–522, 2010.
[26] V. Arevalo and J. Gonzalez, "Improving piecewise linear registration of high-resolution satellite images through mesh optimization," IEEE Transactions on Geoscience and Remote Sensing, vol. 46, no. 11, pp. 3792–3803, 2008.
[27] Z. Levi and C. Gotsman, "D-snake: image registration by as-similar-as-possible template deformation," IEEE Transactions on Visualization and Computer Graphics, vol. 19, no. 2, pp. 331–343, 2013.
[28] R. J. Althof, M. G. J. Wind, and J. T. Dobbins III, "A rapid and automatic image registration algorithm with subpixel accuracy," IEEE Transactions on Medical Imaging, vol. 16, no. 3, pp. 308–316, 1997.
[29] W. Tong, "Subpixel image registration with reduced bias," Optics Letters, vol. 36, no. 5, pp. 763–765, 2011.
[30] Y. Bentoutou, N. Taleb, K. Kpalma, and J. Ronsin, "An automatic image registration for applications in remote sensing," IEEE Transactions on Geoscience and Remote Sensing, vol. 43, no. 9, pp. 2127–2137, 2005.
[31] L. S. Ming, L. Yan, and L. Jindong, "Mapping satellite-1 transmission type photogrammetric and remote sensing," Journal of Remote Sensing, vol. 16, supplement, pp. 10–16, 2012 (Chinese).
[32] J. P. Lewis, "Fast template matching," Vision Interface, vol. 95, pp. 120–123, 1995.
[33] H. Foroosh, J. B. Zerubia, and M. Berthod, "Extension of phase correlation to subpixel registration," IEEE Transactions on Image Processing, vol. 11, no. 3, pp. 188–200, 2002.
12 Mathematical Problems in Engineering
Here
Ξ119896= (
12058511119896
12058512119896
sdot sdot sdot 1205851N119896
12058521119896
12058522119896
sdot sdot sdot 1205852N119896
sdot sdot sdot sdot sdot sdot
1205851198991119896
1205851198992119896
sdot sdot sdot 120585119899N119896
)
Λ119896= (
12058211119896
12058212119896
sdot sdot sdot 1205821N119896
12058221119896
12058222119896
sdot sdot sdot 1205822N119896
sdot sdot sdot sdot sdot sdot
1205821198991119896
1205821198992119896
sdot sdot sdot 120582119899N119896
)
(41)
As for the algorithm to reduce the complexity all possiblevalues for the coefficients are stored in the matrixes Ξ
119896and
Λ119896 The accuracy is guaranteed because the coefficients for
the images moving into the same piece of region are almostequal to an identical constant in a short period which isexplained in Figure 11
It has beenmentioned that the optical flow is not sensitiveto satellitersquos orbit motion and earth rotation in a short term
namely the possible values are assigned by the followingfunctions
120585119894119895119896= 120585
119896(119886 119890 119894
0 Ω 120596 119909
1015840
119902 119910
1015840
119902 Δ119905)
120582119894119895119896= 120582
119896(119886 119890 119894
0 Ω 120596 119909
1015840
119902 119910
1015840
119902 Δ119905)
119894 = 1 sim 119899 119895 = 1 sim N 119902 = 1 sim N
(42)
HereN is the number of constant-valued segments in theregion encompassing all the possible optical flow trajectoriesThe orbital elements and integral step size Δ119905 are commonto all functions Furthermore when long termmeasurementsare executed Ξ
119896and Λ
119896only need to be renewed according
to the current parametersThe coefficientmatrix of the optical flow equations for 119895th
(1 le 119895 le 119898) group can be written as
$$
\mathbf{C}_j =
\begin{pmatrix}
c_{1j1}^{1} & c_{1j2}^{1} & c_{1j3}^{1} & \cdots & c_{1j1}^{N_{1j}} & c_{1j2}^{N_{1j}} & c_{1j3}^{N_{1j}} & \cdots & 0 & 0 \\
d_{1j1}^{1} & d_{1j2}^{1} & d_{1j3}^{1} & \cdots & d_{1j1}^{N_{1j}} & d_{1j2}^{N_{1j}} & d_{1j3}^{N_{1j}} & \cdots & 0 & 0 \\
\cdots & \cdots & \cdots & \cdots & \cdots & \cdots & \cdots & \cdots & \cdots & \cdots \\
c_{qj1}^{1} & c_{qj2}^{1} & c_{qj3}^{1} & \cdots & \cdots & \cdots & \cdots & c_{qj1}^{N_{qj}} & c_{qj2}^{N_{qj}} & c_{qj3}^{N_{qj}} \\
d_{qj1}^{1} & d_{qj2}^{1} & d_{qj3}^{1} & \cdots & \cdots & \cdots & \cdots & d_{qj1}^{N_{qj}} & d_{qj2}^{N_{qj}} & d_{qj3}^{N_{qj}} \\
\cdots & \cdots & \cdots & \cdots & \cdots & \cdots & \cdots & \cdots & \cdots & \cdots \\
c_{nj1}^{1} & c_{nj2}^{1} & c_{nj3}^{1} & \cdots & c_{nj1}^{N_{nj}} & c_{nj2}^{N_{nj}} & c_{nj3}^{N_{nj}} & \cdots & \cdots & 0 \\
d_{nj1}^{1} & d_{nj2}^{1} & d_{nj3}^{1} & \cdots & d_{nj1}^{N_{nj}} & d_{nj2}^{N_{nj}} & d_{nj3}^{N_{nj}} & \cdots & \cdots & 0
\end{pmatrix}_{2n \times 3N_{qj}}
\quad (43)
$$
where $N_{qj} = \max\{N_{1j}, \ldots, N_{nj}\}$. Consequently, as we organize the equations for all groups, the global coefficient matrix is given in the following form:
$$
\mathbf{C} =
\begin{pmatrix}
[\mathbf{C}_1]_{2n \times 3N_{q1}} & 0 & \cdots & \cdots & 0 \\
0 & [\mathbf{C}_2]_{2n \times 3N_{q2}} & 0 & \cdots & 0 \\
\vdots & & \ddots & & \vdots \\
0 & \cdots & \cdots & 0 & [\mathbf{C}_m]_{2n \times 3N_{qm}}
\end{pmatrix}_{2nm \times 3N_{\max}}
\quad (44)
$$
$\mathbf{C}$ is a quasidiagonal partitioned matrix; every subblock has $2n$ rows, and the maximal number of columns of $\mathbf{C}$ is $3N_{\max} = \max\{3N_{q1}, \ldots, 3N_{qm}\}$. The unknown variables are as follows:
$$
[\boldsymbol{\Omega}]_{3N_{\max} \times 1} = \bigl[\,\omega_{11}\;\; \omega_{12}\;\; \omega_{13}\;\; \cdots\;\; \omega_{N_{\max}1}\;\; \omega_{N_{\max}2}\;\; \omega_{N_{\max}3}\,\bigr]^{T}
\quad (45)
$$
The constants are as follows:
$$
\Delta\mathbf{u}_{2mn \times 1} = \bigl[\,\Delta x'_{11}\;\; \Delta y'_{11}\;\; \cdots\;\; \Delta x'_{n1}\;\; \Delta y'_{n1}\;\; \cdots\;\; \Delta x'_{1m}\;\; \Delta y'_{1m}\;\; \cdots\;\; \Delta x'_{nm}\;\; \Delta y'_{nm}\,\bigr]^{T},
$$
$$
\mathbf{s}_{2mn \times 1} = \bigl[\,s_{x1}\;\; s_{y1}\;\; \cdots\;\; s_{xn}\;\; s_{yn}\;\; \cdots\;\; s_{x1}\;\; s_{y1}\;\; \cdots\;\; s_{xn}\;\; s_{yn}\,\bigr]^{T}
\quad (46)
$$
Mathematical Problems in Engineering 13
[Figure 12: The flow chart of the attitude motion measurement. The chart loops over the sampling pixels $\kappa = 1, \ldots, N_{\max}$: (1) select the original template $T_1$, centered on the $\kappa$th sampling pixel, from the referenced image captured by the prior CCD; (2) predict the deformation and displacement of every element via optical flow prediction based on the auxiliary data of the satellite, and reconstruct a new deformed image $T'_1$; (3) reconfigure the deformed image via an image resampling process to form a new template $T_2$; (4) use the normalized cross-correlation algorithm to register $T_2$ on the sensed image captured by the posterior CCD; (5) measure the relative offsets in the sensed window; (6) compute the precise offset in the sensed window by adding the optical flow prediction; (7) use the offset data as the fixed solution conditions for the optical flow inversion equations and solve the inverse problem for the angular velocity $\vec{\omega}$, for validation and further usage.]
$\Delta\mathbf{u}$ has been measured by image dense registration; $\mathbf{s}$ can be determined from the auxiliary data of the sensors. The global equations are expressed by
$$
\mathbf{C}_{2mn \times 3N_{\max}} \cdot [\boldsymbol{\Omega}]_{3N_{\max} \times 1} = \Delta\mathbf{u}_{2mn \times 1} - \mathbf{s}_{2mn \times 1}
\quad (47)
$$
For this problem it is easy to verify that the conditions (1) $2nm > 3N_{\max}$ and (2) $\operatorname{rank}(\mathbf{C}) = 3N_{\max}$ are readily met in practical work. To solve (47), well-posedness is the critical issue for the inverse problem. The strong nonlinearity and anisotropy of the optical flow greatly reduce the correlation between the coefficients in $\mathbf{C}$ and thereby improve the well-posedness of the solution. The least-squares solution of (47) can be obtained as
$$
[\boldsymbol{\Omega}] = \bigl(\mathbf{C}^{T}\mathbf{C}\bigr)^{-1}\mathbf{C}^{T}\,(\Delta\mathbf{u} - \mathbf{s})
\quad (48)
$$
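As an illustrative sketch (not the authors' implementation), the normal-equation solution (48) of the overdetermined system (47) can be reproduced with NumPy on a toy system; the dimensions m, n, and N_max below are invented for the example:

```python
import numpy as np

# Toy dimensions (assumptions for illustration, not the paper's values):
# m groups, n detector pairs, N_max constant-velocity segments.
m, n, N_max = 3, 4, 2
rng = np.random.default_rng(0)

# Stand-in for the quasidiagonal coefficient matrix C of (44): 2mn x 3N_max.
C = rng.standard_normal((2 * m * n, 3 * N_max))

omega_true = rng.standard_normal(3 * N_max)  # segment-wise angular velocity components
s = rng.standard_normal(2 * m * n)           # offsets known from auxiliary data
du = C @ omega_true + s                      # registration offsets, consistent with (47)

# Least-squares solution (48): Omega = (C^T C)^{-1} C^T (du - s)
omega = np.linalg.solve(C.T @ C, C.T @ (du - s))
```

Because 2mn > 3N_max and this C has full column rank, `omega` recovers `omega_true` exactly; with noisy registration offsets it would be the least-squares estimate instead.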
The well-posedness can be examined by applying Singular Value Decomposition (SVD) to $\mathbf{C}$. Consider the nonnegative definite matrix $\mathbf{C}^{T}\mathbf{C}$, whose eigenvalues are given in order $\lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_{3N_{\max}}$:
$$
\mathbf{C} = \mathbf{U}[\boldsymbol{\sigma}]\mathbf{V}^{T}
\quad (49)
$$
where $\mathbf{U}_{2mn \times 2mn}$ and $\mathbf{V}_{3N_{\max} \times 3N_{\max}}$ are unit orthogonal matrices and the singular values are $\sigma_i = \sqrt{\lambda_i}$. The well-posedness of the solution is acceptable if the condition number $\kappa(\mathbf{C}) = \sigma_1 / \sigma_{3N_{\max}} \le tol$.
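A sketch of this condition-number check, assuming a generic full-rank coefficient matrix and an arbitrary tolerance (the paper does not specify a value for $tol$):

```python
import numpy as np

rng = np.random.default_rng(1)
C = rng.standard_normal((24, 6))  # toy 2mn x 3N_max coefficient matrix

# Singular values of C, returned in descending order sigma_1 >= ... >= sigma_{3N_max}.
sigma = np.linalg.svd(C, compute_uv=False)

kappa = sigma[0] / sigma[-1]      # condition number kappa(C) = sigma_1 / sigma_{3N_max}
tol = 1.0e6                       # assumed tolerance, for illustration only
well_posed = kappa <= tol
```

For a full-rank matrix this ratio coincides with NumPy's 2-norm condition number `np.linalg.cond(C)`.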
Associating the process of inverse problem solving in Section 4 with the process of preliminary information acquisition in Section 3, the whole algorithm for the remote sensor's attitude measurement is illustrated in the flow chart in Figure 12.
4.2. Experimental Results and Validation. In the experiment, 72940 samples on 7 image strip pairs were involved. To keep the values in $\Xi$ and $\Lambda$ nearly invariant, we distributed these samples into 20 subspaces and solved for the three axial components of the angular velocity. According to Shannon's sampling theorem, the measurable frequency $f_c$ is expected to reach up to half of the line rate of the TDI CCD; for the experiment, $f_c \approx 1.749\,\mathrm{kHz}$. The $\omega_i \sim t$ curves over $0\,\mathrm{s} \sim 0.148\,\mathrm{s}$ are shown in Figure 13.

In this period, $\omega_{2\max} = 0.001104^{\circ}/\mathrm{s}$ and $\omega_{1\max} = 0.001194^{\circ}/\mathrm{s}$. The signal of $\omega_3(t)$ fluctuates around the mean value $\bar{\omega}_3 = 0.01752^{\circ}/\mathrm{s}$. It is not hard to infer that high frequency jitters
[Figure 13: Solutions for the angular velocities of the remote sensor. Three stacked panels plot $\omega_1$ and $\omega_2$ ($\times 10^{-3}$ deg/s) and $\omega_3$ (deg/s) against imaging time from 0.02 s to 0.14 s.]
were perturbing the remote sensor; besides, compared to the signals of $\omega_1(t)$ and $\omega_2(t)$, the low frequency components in $\omega_3(t)$ are higher in magnitude. Actually, for this remote sensor the satellite yaw angle needs to be regulated in real time to compensate for the image rotation on the focal plane, such that the detectors can always scan along the direction of image motion. Based on the auxiliary data, the image motion velocity vector $\mathbf{V}$ of the central pixel in the FOV can be computed, so the optimal yaw motion in principle will be
$$
\psi^{*}_{t} = \frac{V_{y'}}{V_{x'}}, \qquad
\omega^{*}_{3}(t) = \dot{\psi}^{*}_{t} = \frac{\dot{V}_{y'}\,V_{x'} - V_{y'}\,\dot{V}_{x'}}{V^{2}_{x'}}
\quad (50)
$$
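The yaw-rate expression in (50) is just the quotient rule applied to $\psi^{*} = V_{y'}/V_{x'}$, which can be checked numerically; the smooth image-motion signals below are invented for the test, not flight data:

```python
import numpy as np

t = np.linspace(0.0, 0.148, 1000)            # imaging interval used in the experiment
v_x = 2.0 + 0.1 * np.sin(2 * np.pi * 5 * t)  # synthetic V_x'(t), strictly positive
v_y = 0.03 * np.cos(2 * np.pi * 5 * t)       # synthetic V_y'(t)

psi = v_y / v_x                              # optimal yaw angle, first line of (50)

# Second line of (50): quotient rule, with derivatives from central differences.
omega3_star = (np.gradient(v_y, t) * v_x - v_y * np.gradient(v_x, t)) / v_x**2
```

Differentiating `psi` directly gives the same curve up to finite-difference error, confirming the two lines of (50) are consistent.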
The mean value of $\omega^{*}_{3}(t)$ is $\bar{\omega}^{*}_{3} = 0.01198^{\circ}/\mathrm{s}$. We attribute $\Delta\omega^{*}_{3} = \bar{\omega}_{3} - \bar{\omega}^{*}_{3} = 0.00554^{\circ}/\mathrm{s}$ to the error of the satellite attitude
control.

In order to validate the measurement, the technique of template reconfiguration was implemented again to check the expected phenomenon that, based on the high-accuracy information, the correlations between the new templates and $T_s$ should be further improved. In addition, the distribution of $\gamma$ near $\gamma_{\max}$ should become more compact. This is easy to understand, since much more useful information about the remote sensor's motion is introduced into the template reconstructions, which increases the similarities between the lapped images.
Unlike the processing in image dense registration, larger original templates are selected in the validation phase. Let $T_1$ be the referenced image template centered at the examining element, $T_2$ the new template reconfigured by rough prediction of optical flow, $\tilde{T}_2$ the new template reconfigured based on the precise attitude motion measurement, and $T_s$ the template on the sensed image centered at the registration pixel. For all templates, $M = N = 101$. The distributions of the normalized cross-correlation coefficients, corresponding to the referenced template centered on the sample selected in the No. 1000 row belonging to the No. 7 CCD, with the sensed image belonging to the No. 8 CCD, are illustrated in Figure 14.
Panel (a) shows the situation for $T_1$ and $T_s$; (b) for $T_2$ and $T_s$; and (c) for $\tilde{T}_2$ and $T_s$. The compactness of the data is characterized by the peak value $\gamma_{\max}$ and the location variances $\sigma^2_x$, $\sigma^2_y$:
$$
\sigma^{2}_{x} = \frac{\sum_{i=1}^{M}\sum_{j=1}^{M} \gamma_{ij} \cdot (i - x_{\max})^{2}}{\sum_{i=1}^{M}\sum_{j=1}^{M} \gamma_{ij}}, \qquad
\sigma^{2}_{y} = \frac{\sum_{i=1}^{M}\sum_{j=1}^{M} \gamma_{ij} \cdot (j - y_{\max})^{2}}{\sum_{i=1}^{M}\sum_{j=1}^{M} \gamma_{ij}}
\quad (51)
$$
where $x_{\max}$ and $y_{\max}$ are, respectively, the column and row numbers of the peak-valued location.
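The compactness statistics of (51) transcribe directly to NumPy; in this sketch the $i$ index runs along columns and $j$ along rows, matching the convention that $x_{\max}$/$y_{\max}$ are the peak's column/row:

```python
import numpy as np

def compactness(gamma):
    """Peak value and location variances (51) of an M x M correlation surface."""
    y_max, x_max = np.unravel_index(np.argmax(gamma), gamma.shape)  # row, column of peak
    i = np.arange(gamma.shape[1])[None, :]   # column index, paired with x_max
    j = np.arange(gamma.shape[0])[:, None]   # row index, paired with y_max
    w = gamma.sum()
    var_x = (gamma * (i - x_max) ** 2).sum() / w
    var_y = (gamma * (j - y_max) ** 2).sum() / w
    return gamma.max(), var_x, var_y
```

For an isotropic Gaussian-shaped surface of width $\sigma$, both $\sigma_x$ and $\sigma_y$ come back close to $\sigma$, which is how the tabulated deviations can be read: smaller values mean a correlation peak that is more sharply localized.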
In case (a), $\gamma_{\max}(a) = 0.893$, with standard deviations $\sigma_x(a) = 5.653$ and $\sigma_y(a) = 8.192$; in case (b), $\gamma_{\max}(b) = 0.918$, $\sigma_x(b) = 4.839$, and $\sigma_y(b) = 6.686$; in case (c), $\gamma_{\max}(c) = 0.976$, while the deviations sharply shrink to $\sigma_x(c) = 3.27$ and $\sigma_y(c) = 4.06$. In Table 2, other samples taken at 1000-row intervals are also examined; these samples can be regarded as independent of each other.
Judging from the results, the performance in case (c) is better than that in case (b) and much better than that in case (a), since the precise attitude motion measurements enhance the precision of the optical flow inversion and so improve the similarities between the new templates and the sensed images. Note that although in case (b) the variances decrease only slightly, as analyzed in Section 3.2, the offsets of the centroids from the peaks have been corrected well, compared to case (a), by the use of the rough optical flow predictions.
4.3. Summary and Discussions. From the preceding sections we can see that, compared to ordinary NCC, the precision of image registration is greatly improved, thanks to the technique of template reconfiguration. By applying the auxiliary data from the space-borne sensors to optical flow prediction, the relative deformations between the lapped image pairs can be computed with considerable accuracy. These deformations are then used to estimate the gray values of the corresponding parts of the sensed images and help construct a new template for registration. As we know, the space-borne sensors can provide the middle and low frequency components of the imager's attitude motion with excellent precision. Thus, compared to classical direct template-based registration algorithms, the similarity between the reconfigured template and the sensed images may greatly increase. Furthermore, the minor deformations attributed to high frequency jitters can be detected by subpixel registration between the reconfigured templates and the sensed images. This point of view is the exact basis of high frequency jitter measurement with optical flow inversion.
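The registration step at the core of this pipeline — NCC matching of a (possibly reconfigured) template against the sensed image — can be sketched at integer-pixel resolution as follows; this is a brute-force illustration, not the authors' optimized implementation, and subpixel refinement would follow:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation coefficient gamma of two equal-sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

def register(template, sensed):
    """Exhaustive integer-pixel NCC search of `template` inside `sensed`.

    Returns ((row, col), gamma_max) for the best-matching window.
    """
    th, tw = template.shape
    best_gamma, best_rc = -2.0, (0, 0)
    for r in range(sensed.shape[0] - th + 1):
        for c in range(sensed.shape[1] - tw + 1):
            g = ncc(template, sensed[r:r + th, c:c + tw])
            if g > best_gamma:
                best_gamma, best_rc = g, (r, c)
    return best_rc, best_gamma
```

When the template is cut directly from the sensed image, the search returns the cut location with $\gamma \approx 1$; reconfiguring the template before this search is what raises $\gamma_{\max}$ in the experiments above.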
5. Conclusion

In this paper, optical flows and time-varying image deformation in space dynamic imaging are analyzed in detail. The nonlinear and anisotropic image motion velocity and optical flows are utilized to strengthen the well-posedness of the inverse problem of precise attitude measurement by the optical
[Figure 14: Normalized cross-correlations comparison. Each panel spans spatial domains X and Y of 0–100 pix. (a) shows the distribution of $\gamma$ obtained by applying the direct NCC algorithm; (b) shows the distribution of $\gamma$ after template reconfiguration with optical flow prediction; (c) shows the distribution of $\gamma$ derived from posterior template reconfiguration with the high-accuracy sensor attitude measurement. It can be noticed that, from left to right, the values of $\gamma$ tend to concentrate around the peak-value location.]
Table 2: Correlation coefficient distributions for the registration templates.

Row number | $\gamma_{\max}$ (a, b, c) | $\sigma_x$ (a, b, c) | $\sigma_y$ (a, b, c)
No. 1000 | 0.893, 0.918, 0.976 | 5.653, 4.839, 3.27 | 8.192, 6.686, 4.06
No. 2000 | 0.807, 0.885, 0.929 | 8.704, 6.452, 2.13 | 6.380, 7.342, 5.71
No. 3000 | 0.832, 0.940, 0.988 | 4.991, 3.023, 1.55 | 7.704, 4.016, 1.93
No. 4000 | 0.919, 0.935, 0.983 | 5.079, 3.995, 3.61 | 5.873, 5.155, 3.85
No. 5000 | 0.865, 0.922, 0.951 | 5.918, 4.801, 2.37 | 6.151, 2.371, 2.57
No. 6000 | 0.751, 0.801, 0.907 | 12.57, 9.985, 7.89 | 14.66, 8.213, 2.06
No. 7000 | 0.759, 0.846, 0.924 | 11.63, 10.84, 7.14 | 12.71, 8.267, 4.90
No. 8000 | 0.884, 0.900, 0.943 | 8.125, 3.546, 5.42 | 8.247, 6.770, 2.88
flow inversion method. For the purpose of determining the conditions of fixed solutions of the optical flow equations, information-based image registration algorithms are proposed. We apply rough optical flow prediction to improve the efficiency and accuracy of dense image registration. Based on the results of the registration, the attitude motions of remote sensors during imaging are measured by the precise optical flow inversion method. The experiment on a remote sensor showed that the measurements are achieved with very high accuracy as well as broad bandwidth. This method can be used extensively in remote sensing missions, such as image strip splicing, geometrical rectification, and nonblind image restoration, to promote surveying precision and resolving power.
Conflict of Interests

The authors declare that they have no financial or personal relationships with other people or organizations that could inappropriately influence their work; there is no professional or other personal interest of any nature in any product, service, and/or company that could be construed as influencing the position presented in, or the review of, this paper.
Acknowledgments

This work is supported by the National High Technology Research and Development Program of China (863 Program) (Grant nos. 2012AA121503, 2013AA12260, and 2012AA120603) and the National Natural Science Foundation of China (Grant no. 61377012).
[8] F Ayoub S Leprince R Binet K W Lewis O Aharonson andJ-P Avouac ldquoInfluence of camera distortions on satellite imageregistration and change detection applicationsrdquo in Proceedingsof the IEEE International Geoscience and Remote Sensing Sympo-sium (IGARSS rsquo08) pp II1072ndashII1075 BostonMass USA 2008
[9] S Leprince S Barbot F Ayoub and J-P Avouac ldquoAutomaticand precise orthorectification coregistration and subpixel cor-relation of satellite images application to ground deformationmeasurementsrdquo IEEE Transactions on Geoscience and RemoteSensing vol 45 no 6 pp 1529ndash1558 2007
[10] S Leprince PMuse and J-P Avouac ldquoIn-flight CCDdistortioncalibration for pushbroom satellites based on subpixel correla-tionrdquo IEEE Transactions on Geoscience and Remote Sensing vol46 no 9 pp 2675ndash2683 2008
[11] Y Yitzhaky RMilberg S Yohaev andN S Kopeika ldquoCompar-ison of direct blind deconvolution methods for motion-blurredimagesrdquo Applied Optics vol 38 no 20 pp 4325ndash4332 1999
[12] R C Hardie K J Barnard and R Ordonez ldquoFast super-resolutionwith affinemotion using an adaptivewiener filter andits application to airborne imagingrdquo Optics Express vol 19 no27 pp 26208ndash26231 2011
[13] E M Blixt J Semeter and N Ivchenko ldquoOptical flow analysisof the aurora borealisrdquo IEEE Geoscience and Remote SensingLetters vol 3 no 1 pp 159ndash163 2006
[14] M G Mozerov ldquoConstrained optical flow estimation as amatching problemrdquo IEEE Transactions on Image Processing vol22 no 5 pp 2044ndash2055 2013
[15] H Sakaino ldquoA semitransparency-based optical-flow methodwith a point trajectory model for particle-like videordquo IEEETransactions on Image Processing vol 21 no 2 pp 441ndash4502012
[16] D Korsch ldquoClosed form solution for three-mirror telescopescorrected for spherical aberration coma astigmatism and fieldcurvaturerdquo Applied Optics vol 11 no 12 pp 2986ndash2987 1972
[17] G Naletto V da Deppo M G Pelizzo R Ragazzoni and EMarchetti ldquoOptical design of the wide angle camera for theRosetta missionrdquo Applied Optics vol 41 no 7 pp 1446ndash14532002
[18] M Born EWolf A B Bhatia and P C Clemmow Principles ofOptics Electromagnetic Theory of Propagation Interference andDiffraction of Light 7th edition 1999
[19] H Schaub and J L Junkins Analytical Mechanics of SpaceSystems AIAA Education Series 2002
[20] CWang F Xing J HWang andZ You ldquoOptical flowsmethodfor lightweight agile remote sensor design and instrumenta-tionrdquo in International Symposium on Photoelectronic Detectionand Imaging vol 8908 of Proceeding of the SPIE 2013
[21] T Sun F Xing and Z You ldquoOptical system error analysis andcalibration method of high-accuracy star trackersrdquo Sensors vol13 no 4 pp 4598ndash4623 2013
[22] T Sun F Xing Z You and M Wei ldquoMotion-blurred staracquisition method of the star tracker under high dynamicconditionsrdquoOptics Express vol 21 no 17 pp 20096ndash20110 2013
[23] L Younes ldquoCombining geodesic interpolating splines and affinetransformationsrdquo IEEETransactions on Image Processing vol 15no 5 pp 1111ndash1119 2006
[24] B Zitova and J Flusser ldquoImage registration methods a surveyrdquoImage and Vision Computing vol 21 no 11 pp 977ndash1000 2003
[25] Z L Song S Li and T F George ldquoRemote sensing imageregistration approach based on a retrofitted SIFT algorithm andLissajous-curve trajectoriesrdquo Optics Express vol 18 no 2 pp513ndash522 2010
[26] V Arevalo and J Gonzalez ldquoImproving piecewise linear regis-tration of high-resolution satellite images through mesh opti-mizationrdquo IEEETransactions onGeoscience andRemote Sensingvol 46 no 11 pp 3792ndash3803 2008
[27] Z Levi and C Gotsman ldquoD-snake image registration by as-similar-as-possible template deformationrdquo IEEE Transactionson Visualization and Computer Graphics vol 19 no 2 pp 331ndash343 2013
[28] R J Althof M G J Wind and J T Dobbins III ldquoA rapid andautomatic image registration algorithmwith subpixel accuracyrdquoIEEE Transactions on Medical Imaging vol 16 no 3 pp 308ndash316 1997
[29] W Tong ldquoSubpixel image registrationwith reduced biasrdquoOpticsLetters vol 36 no 5 pp 763ndash765 2011
[30] Y Bentoutou N Taleb K Kpalma and J Ronsin ldquoAn automaticimage registration for applications in remote sensingrdquo IEEETransactions on Geoscience and Remote Sensing vol 43 no 9pp 2127ndash2137 2005
[31] L S Ming L Yan and L Jindong ldquoMapping satellite-1 trans-mission type photogrammetric and remote sensingrdquo Journal ofRemote Sensing vol 16 supplement pp 10ndash16 2012 (Chinese)
[32] J P Lewis ldquoFast template matchingrdquo Vision Interface vol 95pp 120ndash123 1995
[33] H Foroosh J B Zerubia and M Berthod ldquoExtension ofphase correlation to subpixel registrationrdquo IEEETransactions onImage Processing vol 11 no 3 pp 188ndash200 2002
Submit your manuscripts athttpwwwhindawicom
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
MathematicsJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Mathematical Problems in Engineering
Hindawi Publishing Corporationhttpwwwhindawicom
Differential EquationsInternational Journal of
Volume 2014
Applied MathematicsJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Probability and StatisticsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Journal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Mathematical PhysicsAdvances in
Complex AnalysisJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
OptimizationJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
CombinatoricsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014
International Journal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Operations ResearchAdvances in
Journal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Function Spaces
Abstract and Applied AnalysisHindawi Publishing Corporationhttpwwwhindawicom Volume 2014
International Journal of Mathematics and Mathematical Sciences
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Algebra
Discrete Dynamics in Nature and Society
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Decision SciencesAdvances in
Discrete MathematicsJournal of
Hindawi Publishing Corporationhttpwwwhindawicom
Volume 2014 Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Stochastic AnalysisInternational Journal of
14 Mathematical Problems in Engineering
Figure 13: Solutions for the angular velocities of the remote sensor. [Three panels versus imaging time, 0.02–0.14 s: ω₁(t) and ω₂(t) on the order of 10⁻³ deg/s, and ω₃(t) near 0.016–0.018 deg/s.]
were perturbing the remote sensor; besides, compared to the signals of ω₁(t) and ω₂(t), the low frequency components in ω₃(t) are higher in magnitude. Actually, for this remote sensor the satellite yaw angle needs to be regulated in real time to compensate for the image rotation on the focal plane, such that the detectors can always scan along the direction of image motion. Based on the auxiliary data, the image motion velocity vector V of the central pixel in the FOV can be computed, so the optimal yaw motion in principle will be

$$\psi^{*}_{t} = \frac{V_{y'}}{V_{x'}}, \qquad \omega^{*}_{3}(t) = \dot{\psi}^{*}_{t} = \frac{\dot{V}_{y'}\,V_{x'} - V_{y'}\,\dot{V}_{x'}}{V_{x'}^{2}}. \quad (50)$$

The mean value of ω₃*(t) is ω̄₃* = 0.01198 °/s. We attribute Δω₃* = ω̄₃ − ω̄₃* = 0.00554 °/s to the error of satellite attitude control.

In order to validate the measurement, the technique of template reconfiguration was implemented again to check the expected phenomenon that, based on the high-accuracy information, the correlations between the new templates and T_s should be further improved. In addition, the distribution of γ near γ_max should become more compact, which is easy to understand, since much more useful information about the remote sensor's motion is introduced into the template reconstructions and increases the similarities between the lapped images.

Unlike the processing in image dense registration, in the validation phase larger original templates are selected. Let T₁ be the reference image template centered at the examined element, T₂ the new template reconfigured by rough prediction of optical flow, T̂₂ the new template reconfigured based on the precise attitude motion measurement, and T_s the template on the sensed image centered at the registration pixel. For all templates, M = N = 101. The distributions of the normalized cross-correlation coefficients corresponding to the reference template centered on the sample selected in row No. 1000, belonging to the No. 7 CCD, with the sensed image belonging to the No. 8 CCD, are illustrated in Figure 14.
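Numerically, the optimal yaw rate in (50) is just the time derivative of the velocity-component ratio. The following sketch evaluates it with hypothetical, synthetic velocity samples (illustrative stand-ins, not the sensor's actual auxiliary data) and cross-checks the quotient-rule form against direct differentiation of ψ*:

```python
import numpy as np

# Hypothetical sampled image-motion velocity components of the central FOV
# pixel (synthetic stand-ins for the auxiliary data; not the paper's values).
t = np.linspace(0.0, 0.14, 141)              # imaging time (s)
vx = 1.0 + 0.05 * np.sin(2 * np.pi * 5 * t)  # V_x' (pix/s)
vy = 0.02 * np.cos(2 * np.pi * 5 * t)        # V_y' (pix/s)

# Optimal yaw angle: psi* = V_y' / V_x'
psi = vy / vx

# Eq. (50): omega_3* = (dV_y'/dt * V_x' - V_y' * dV_x'/dt) / V_x'^2
dvx = np.gradient(vx, t)
dvy = np.gradient(vy, t)
omega3 = (dvy * vx - vy * dvx) / vx**2

# The quotient rule agrees with differentiating psi* directly (interior points).
assert np.allclose(omega3[1:-1], np.gradient(psi, t)[1:-1], atol=1e-3)
```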
Panel (a) shows the situation for T₁ and T_s, (b) for T₂ and T_s, and (c) for T̂₂ and T_s. The compactness of the data is characterized by the peak value γ_max and the location variances σ²_x and σ²_y:

$$\sigma_{x}^{2} = \frac{\sum_{i=1}^{M}\sum_{j=1}^{M}\gamma_{ij}\,(i - x_{\max})^{2}}{\sum_{i=1}^{M}\sum_{j=1}^{M}\gamma_{ij}}, \qquad \sigma_{y}^{2} = \frac{\sum_{i=1}^{M}\sum_{j=1}^{M}\gamma_{ij}\,(j - y_{\max})^{2}}{\sum_{i=1}^{M}\sum_{j=1}^{M}\gamma_{ij}}, \quad (51)$$

where x_max and y_max are, respectively, the column and row numbers of the peak-value location.

In case (a), γ_max(a) = 0.893, with standard deviations σ_x(a) = 5.653 and σ_y(a) = 8.192; in case (b), γ_max(b) = 0.918, σ_x(b) = 4.839, and σ_y(b) = 6.686; in case (c), γ_max(c) = 0.976, and the variances sharply shrink to σ_x(c) = 3.27 and σ_y(c) = 4.06. In Table 2, further samples at intervals of 1000 rows are also examined; the samples can be regarded as independent of each other.
Judging from the results, the performance in case (c) is better than in case (b) and much better than in case (a), since the precise attitude motion measurements enhance the precision of optical flow inversion and so improve the similarities between the new templates and the sensed images. Note that, although in case (b) the variance decreases only slightly compared to case (a), as analyzed in Section 3.2, the offsets of the centroids from the peaks have been corrected well by the rough optical flow predictions.
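The compactness statistics of (51) are straightforward to evaluate from a correlation surface. A minimal sketch follows; the function name `compactness` and the synthetic Gaussian surfaces are illustrative assumptions, not the paper's data:

```python
import numpy as np

def compactness(gamma):
    # Peak value, peak location, and location variances per Eq. (51);
    # index i <-> x and index j <-> y, following the paper's notation.
    i, j = np.indices(gamma.shape)
    x_max, y_max = np.unravel_index(np.argmax(gamma), gamma.shape)
    total = gamma.sum()
    var_x = (gamma * (i - x_max) ** 2).sum() / total
    var_y = (gamma * (j - y_max) ** 2).sum() / total
    return gamma.max(), (x_max, y_max), var_x, var_y

# Illustrative 101x101 surfaces: a sharper correlation peak at (50, 50)
# yields smaller location variances, i.e., a more compact distribution.
i, j = np.indices((101, 101))
broad = np.exp(-((i - 50) ** 2 + (j - 50) ** 2) / (2 * 20.0 ** 2))
narrow = np.exp(-((i - 50) ** 2 + (j - 50) ** 2) / (2 * 5.0 ** 2))
_, loc, vx_b, vy_b = compactness(broad)
_, _, vx_n, vy_n = compactness(narrow)
assert loc == (50, 50) and vx_n < vx_b and vy_n < vy_b
```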
4.3. Summary and Discussions. From the preceding sections we can see that, compared to ordinary NCC, the precision of image registration is greatly improved, which is attributed to the technique of template reconfiguration. By applying the auxiliary data from the space-borne sensors to optical flow prediction, the relative deformations between the lapped image pairs can be computed with considerable accuracy. Afterwards, they are used to estimate the gray values of the corresponding parts of the sensed images and to construct a new template for registration. As we know, the space-borne sensors can give the middle and low frequency components of the imager's attitude motion with excellent precision. Thus, compared to classical direct template-based registration algorithms, the similarity between the reconfigured template and the sensed images may greatly increase. Furthermore, the minor deformations attributed to high frequency jitters can be detected by subpixel registration between the reconfigured templates and the sensed images. This point of view is the exact basis of high frequency jitter measurement with optical flow inversion.
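As a point of reference for the "ordinary NCC" baseline discussed above, direct template matching with a parabolic subpixel peak refinement can be sketched as follows. This is a toy illustration on a synthetic image, not the paper's registration pipeline:

```python
import numpy as np

def ncc(template, window):
    # Normalized cross-correlation coefficient of two equal-size patches.
    t = template - template.mean()
    w = window - window.mean()
    denom = np.sqrt((t * t).sum() * (w * w).sum())
    return (t * w).sum() / denom if denom > 0 else 0.0

def match(image, template):
    # Exhaustive search: gamma surface over all template placements.
    M, N = template.shape
    H, W = image.shape
    gamma = np.zeros((H - M + 1, W - N + 1))
    for r in range(gamma.shape[0]):
        for c in range(gamma.shape[1]):
            gamma[r, c] = ncc(template, image[r:r + M, c:c + N])
    return gamma, np.unravel_index(np.argmax(gamma), gamma.shape)

def subpixel(gamma, peak):
    # 1-D parabolic interpolation around an interior integer peak, per axis.
    r, c = peak
    a, b, d = gamma[r - 1, c], gamma[r, c], gamma[r + 1, c]
    dr = 0.5 * (a - d) / (a - 2 * b + d)
    a, b, d = gamma[r, c - 1], gamma[r, c], gamma[r, c + 1]
    dc = 0.5 * (a - d) / (a - 2 * b + d)
    return r + dr, c + dc

rng = np.random.default_rng(0)
image = rng.random((60, 60))
template = image[20:31, 25:36].copy()  # 11x11 patch cut out at (20, 25)
gamma, peak = match(image, template)
assert peak == (20, 25)                # exact copy: gamma peaks at 1 there
```

The parabolic refinement shifts the integer peak by at most half a pixel in each axis; the paper's template reconfiguration then further sharpens and recenters the γ distribution before this subpixel step.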
5. Conclusion
In this paper, optical flows and the time-varying image deformation in space dynamic imaging are analyzed in detail. The nonlinear and anisotropic image motion velocity and optical flows are utilized to strengthen the well-posedness of the inverse problem of precise attitude measurement by the optical flow inversion method. For the purpose of determining the conditions for fixed solutions of the optical flow equations, information-based image registration algorithms are proposed. We apply rough optical flow prediction to improve the efficiency and accuracy of dense image registration. Based on the registration results, the attitude motions of remote sensors during imaging are measured using the precise optical flow inversion method. The experiment on a remote sensor showed that the measurements achieve very high accuracy as well as broad bandwidth. This method can be used extensively in remote sensing missions, such as image strip splicing, geometrical rectification, and nonblind image restoration, to promote surveying precision and resolving power.

Figure 14: Normalized cross-correlation comparison (spatial domain X and Y in pixels, 0–100). (a) The distribution of γ obtained with the direct NCC algorithm; (b) the distribution of γ after template reconfiguration with optical flow prediction; (c) the distribution of γ derived from posterior template reconfiguration with the high-accuracy sensor attitude measurement. From (a) to (c), the values of γ become increasingly concentrated around the peak-value location.

Table 2: Correlation coefficient distributions for the registration templates (each cell lists the values for cases a, b, c).

Row number   γ_max (a, b, c)        σ_x (a, b, c)         σ_y (a, b, c)
No. 1000     0.893, 0.918, 0.976    5.653, 4.839, 3.27    8.192, 6.686, 4.06
No. 2000     0.807, 0.885, 0.929    8.704, 6.452, 2.13    6.380, 7.342, 5.71
No. 3000     0.832, 0.940, 0.988    4.991, 3.023, 1.55    7.704, 4.016, 1.93
No. 4000     0.919, 0.935, 0.983    5.079, 3.995, 3.61    5.873, 5.155, 3.85
No. 5000     0.865, 0.922, 0.951    5.918, 4.801, 2.37    6.151, 2.371, 2.57
No. 6000     0.751, 0.801, 0.907    12.57, 9.985, 7.89    14.66, 8.213, 2.06
No. 7000     0.759, 0.846, 0.924    11.63, 10.84, 7.14    12.71, 8.267, 4.90
No. 8000     0.884, 0.900, 0.943    8.125, 3.546, 5.42    8.247, 6.770, 2.88
Conflict of Interests
The authors declare that they have no financial or personal relationships with other people or organizations that can inappropriately influence their work; there is no professional or other personal interest of any nature or kind in any product, service, and/or company that could be construed as influencing the position presented in, or the review of, this paper.
Acknowledgments
This work is supported by the National High Technology Research and Development Program of China (863 Program) (Grants no. 2012AA121503, no. 2013AA12260, and no. 2012AA120603) and the National Natural Science Foundation of China (Grant no. 61377012).
References
[1] V. Tchernykh, M. Beck, and K. Janschek, "An embedded optical flow processor for visual navigation using optical correlator technology," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '06), pp. 67–72, Beijing, China, October 2006.
[2] K. Janschek and V. Tchernykh, "Optical correlator for image motion compensation in the focal plane of a satellite camera," in Proceedings of the 15th IFAC Symposium on Automatic Control in Aerospace, Bologna, Italy, 2001.
[3] W. Priedhorsky and J. J. Bloch, "Optical detection of rapidly moving objects in space," Applied Optics, vol. 44, no. 3, pp. 423–433, 2005.
[4] T. Brox and J. Malik, "Large displacement optical flow: descriptor matching in variational motion estimation," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 33, no. 3, pp. 500–513, 2011.
[5] B. Feng, P. P. Bruyant, P. H. Pretorius, et al., "Estimation of the rigid-body motion from three-dimensional images using a generalized center-of-mass points approach," IEEE Transactions on Nuclear Science, vol. 53, no. 5, pp. 2712–2718, 2006.
[6] J. Wang, P. Yu, C. Yan, J. Ren, and B. He, "Space optical remote sensor image motion velocity vector computational modeling, error budget and synthesis," Chinese Optics Letters, vol. 3, no. 7, pp. 414–417, 2005.
[7] A. S. McEwen, M. E. Banks, N. Baugh, et al., "The High Resolution Imaging Science Experiment (HiRISE) during MRO's primary science phase (PSP)," Icarus, vol. 205, no. 1, pp. 2–37, 2010.
[8] F. Ayoub, S. Leprince, R. Binet, K. W. Lewis, O. Aharonson, and J.-P. Avouac, "Influence of camera distortions on satellite image registration and change detection applications," in Proceedings of the IEEE International Geoscience and Remote Sensing Symposium (IGARSS '08), pp. II1072–II1075, Boston, Mass, USA, 2008.
[9] S. Leprince, S. Barbot, F. Ayoub, and J.-P. Avouac, "Automatic and precise orthorectification, coregistration, and subpixel correlation of satellite images, application to ground deformation measurements," IEEE Transactions on Geoscience and Remote Sensing, vol. 45, no. 6, pp. 1529–1558, 2007.
[10] S. Leprince, P. Muse, and J.-P. Avouac, "In-flight CCD distortion calibration for pushbroom satellites based on subpixel correlation," IEEE Transactions on Geoscience and Remote Sensing, vol. 46, no. 9, pp. 2675–2683, 2008.
[11] Y. Yitzhaky, R. Milberg, S. Yohaev, and N. S. Kopeika, "Comparison of direct blind deconvolution methods for motion-blurred images," Applied Optics, vol. 38, no. 20, pp. 4325–4332, 1999.
[12] R. C. Hardie, K. J. Barnard, and R. Ordonez, "Fast super-resolution with affine motion using an adaptive Wiener filter and its application to airborne imaging," Optics Express, vol. 19, no. 27, pp. 26208–26231, 2011.
[13] E. M. Blixt, J. Semeter, and N. Ivchenko, "Optical flow analysis of the aurora borealis," IEEE Geoscience and Remote Sensing Letters, vol. 3, no. 1, pp. 159–163, 2006.
[14] M. G. Mozerov, "Constrained optical flow estimation as a matching problem," IEEE Transactions on Image Processing, vol. 22, no. 5, pp. 2044–2055, 2013.
[15] H. Sakaino, "A semitransparency-based optical-flow method with a point trajectory model for particle-like video," IEEE Transactions on Image Processing, vol. 21, no. 2, pp. 441–450, 2012.
[16] D. Korsch, "Closed form solution for three-mirror telescopes, corrected for spherical aberration, coma, astigmatism, and field curvature," Applied Optics, vol. 11, no. 12, pp. 2986–2987, 1972.
[17] G. Naletto, V. da Deppo, M. G. Pelizzo, R. Ragazzoni, and E. Marchetti, "Optical design of the wide angle camera for the Rosetta mission," Applied Optics, vol. 41, no. 7, pp. 1446–1453, 2002.
[18] M. Born, E. Wolf, A. B. Bhatia, and P. C. Clemmow, Principles of Optics: Electromagnetic Theory of Propagation, Interference and Diffraction of Light, 7th edition, 1999.
[19] H. Schaub and J. L. Junkins, Analytical Mechanics of Space Systems, AIAA Education Series, 2002.
[20] C. Wang, F. Xing, J. H. Wang, and Z. You, "Optical flows method for lightweight agile remote sensor design and instrumentation," in International Symposium on Photoelectronic Detection and Imaging, vol. 8908 of Proceedings of the SPIE, 2013.
[21] T. Sun, F. Xing, and Z. You, "Optical system error analysis and calibration method of high-accuracy star trackers," Sensors, vol. 13, no. 4, pp. 4598–4623, 2013.
[22] T. Sun, F. Xing, Z. You, and M. Wei, "Motion-blurred star acquisition method of the star tracker under high dynamic conditions," Optics Express, vol. 21, no. 17, pp. 20096–20110, 2013.
[23] L. Younes, "Combining geodesic interpolating splines and affine transformations," IEEE Transactions on Image Processing, vol. 15, no. 5, pp. 1111–1119, 2006.
[24] B. Zitova and J. Flusser, "Image registration methods: a survey," Image and Vision Computing, vol. 21, no. 11, pp. 977–1000, 2003.
[25] Z. L. Song, S. Li, and T. F. George, "Remote sensing image registration approach based on a retrofitted SIFT algorithm and Lissajous-curve trajectories," Optics Express, vol. 18, no. 2, pp. 513–522, 2010.
[26] V. Arevalo and J. Gonzalez, "Improving piecewise linear registration of high-resolution satellite images through mesh optimization," IEEE Transactions on Geoscience and Remote Sensing, vol. 46, no. 11, pp. 3792–3803, 2008.
[27] Z. Levi and C. Gotsman, "D-Snake: image registration by as-similar-as-possible template deformation," IEEE Transactions on Visualization and Computer Graphics, vol. 19, no. 2, pp. 331–343, 2013.
[28] R. J. Althof, M. G. J. Wind, and J. T. Dobbins III, "A rapid and automatic image registration algorithm with subpixel accuracy," IEEE Transactions on Medical Imaging, vol. 16, no. 3, pp. 308–316, 1997.
[29] W. Tong, "Subpixel image registration with reduced bias," Optics Letters, vol. 36, no. 5, pp. 763–765, 2011.
[30] Y. Bentoutou, N. Taleb, K. Kpalma, and J. Ronsin, "An automatic image registration for applications in remote sensing," IEEE Transactions on Geoscience and Remote Sensing, vol. 43, no. 9, pp. 2127–2137, 2005.
[31] L. S. Ming, L. Yan, and L. Jindong, "Mapping satellite-1 transmission type photogrammetric and remote sensing," Journal of Remote Sensing, vol. 16, supplement, pp. 10–16, 2012 (Chinese).
[32] J. P. Lewis, "Fast template matching," Vision Interface, vol. 95, pp. 120–123, 1995.
[33] H. Foroosh, J. B. Zerubia, and M. Berthod, "Extension of phase correlation to subpixel registration," IEEE Transactions on Image Processing, vol. 11, no. 3, pp. 188–200, 2002.
[31] L S Ming L Yan and L Jindong ldquoMapping satellite-1 trans-mission type photogrammetric and remote sensingrdquo Journal ofRemote Sensing vol 16 supplement pp 10ndash16 2012 (Chinese)
[32] J P Lewis ldquoFast template matchingrdquo Vision Interface vol 95pp 120ndash123 1995
[33] H Foroosh J B Zerubia and M Berthod ldquoExtension ofphase correlation to subpixel registrationrdquo IEEETransactions onImage Processing vol 11 no 3 pp 188ndash200 2002
16 Mathematical Problems in Engineering
[5] B. Feng, P. P. Bruyant, P. H. Pretorius et al., "Estimation of the rigid-body motion from three-dimensional images using a generalized center-of-mass points approach," IEEE Transactions on Nuclear Science, vol. 53, no. 5, pp. 2712–2718, 2006.
[6] J. Wang, P. Yu, C. Yan, J. Ren, and B. He, "Space optical remote sensor image motion velocity vector computational modeling, error budget and synthesis," Chinese Optics Letters, vol. 3, no. 7, pp. 414–417, 2005.
[7] A. S. McEwen, M. E. Banks, N. Baugh et al., "The high resolution imaging science experiment (HiRISE) during MRO's primary science phase (PSP)," Icarus, vol. 205, no. 1, pp. 2–37, 2010.
[8] F. Ayoub, S. Leprince, R. Binet, K. W. Lewis, O. Aharonson, and J.-P. Avouac, "Influence of camera distortions on satellite image registration and change detection applications," in Proceedings of the IEEE International Geoscience and Remote Sensing Symposium (IGARSS '08), pp. II-1072–II-1075, Boston, Mass, USA, 2008.
[9] S. Leprince, S. Barbot, F. Ayoub, and J.-P. Avouac, "Automatic and precise orthorectification, coregistration, and subpixel correlation of satellite images, application to ground deformation measurements," IEEE Transactions on Geoscience and Remote Sensing, vol. 45, no. 6, pp. 1529–1558, 2007.
[10] S. Leprince, P. Muse, and J.-P. Avouac, "In-flight CCD distortion calibration for pushbroom satellites based on subpixel correlation," IEEE Transactions on Geoscience and Remote Sensing, vol. 46, no. 9, pp. 2675–2683, 2008.
[11] Y. Yitzhaky, R. Milberg, S. Yohaev, and N. S. Kopeika, "Comparison of direct blind deconvolution methods for motion-blurred images," Applied Optics, vol. 38, no. 20, pp. 4325–4332, 1999.
[12] R. C. Hardie, K. J. Barnard, and R. Ordonez, "Fast super-resolution with affine motion using an adaptive Wiener filter and its application to airborne imaging," Optics Express, vol. 19, no. 27, pp. 26208–26231, 2011.
[13] E. M. Blixt, J. Semeter, and N. Ivchenko, "Optical flow analysis of the aurora borealis," IEEE Geoscience and Remote Sensing Letters, vol. 3, no. 1, pp. 159–163, 2006.
[14] M. G. Mozerov, "Constrained optical flow estimation as a matching problem," IEEE Transactions on Image Processing, vol. 22, no. 5, pp. 2044–2055, 2013.
[15] H. Sakaino, "A semitransparency-based optical-flow method with a point trajectory model for particle-like video," IEEE Transactions on Image Processing, vol. 21, no. 2, pp. 441–450, 2012.
[16] D. Korsch, "Closed form solution for three-mirror telescopes, corrected for spherical aberration, coma, astigmatism, and field curvature," Applied Optics, vol. 11, no. 12, pp. 2986–2987, 1972.
[17] G. Naletto, V. da Deppo, M. G. Pelizzo, R. Ragazzoni, and E. Marchetti, "Optical design of the wide angle camera for the Rosetta mission," Applied Optics, vol. 41, no. 7, pp. 1446–1453, 2002.
[18] M. Born, E. Wolf, A. B. Bhatia, and P. C. Clemmow, Principles of Optics: Electromagnetic Theory of Propagation, Interference and Diffraction of Light, 7th edition, 1999.
[19] H. Schaub and J. L. Junkins, Analytical Mechanics of Space Systems, AIAA Education Series, 2002.
[20] C. Wang, F. Xing, J. H. Wang, and Z. You, "Optical flows method for lightweight agile remote sensor design and instrumentation," in International Symposium on Photoelectronic Detection and Imaging, vol. 8908 of Proceedings of the SPIE, 2013.
[21] T. Sun, F. Xing, and Z. You, "Optical system error analysis and calibration method of high-accuracy star trackers," Sensors, vol. 13, no. 4, pp. 4598–4623, 2013.
[22] T. Sun, F. Xing, Z. You, and M. Wei, "Motion-blurred star acquisition method of the star tracker under high dynamic conditions," Optics Express, vol. 21, no. 17, pp. 20096–20110, 2013.
[23] L. Younes, "Combining geodesic interpolating splines and affine transformations," IEEE Transactions on Image Processing, vol. 15, no. 5, pp. 1111–1119, 2006.
[24] B. Zitova and J. Flusser, "Image registration methods: a survey," Image and Vision Computing, vol. 21, no. 11, pp. 977–1000, 2003.
[25] Z. L. Song, S. Li, and T. F. George, "Remote sensing image registration approach based on a retrofitted SIFT algorithm and Lissajous-curve trajectories," Optics Express, vol. 18, no. 2, pp. 513–522, 2010.
[26] V. Arevalo and J. Gonzalez, "Improving piecewise linear registration of high-resolution satellite images through mesh optimization," IEEE Transactions on Geoscience and Remote Sensing, vol. 46, no. 11, pp. 3792–3803, 2008.
[27] Z. Levi and C. Gotsman, "D-snake: image registration by as-similar-as-possible template deformation," IEEE Transactions on Visualization and Computer Graphics, vol. 19, no. 2, pp. 331–343, 2013.
[28] R. J. Althof, M. G. J. Wind, and J. T. Dobbins III, "A rapid and automatic image registration algorithm with subpixel accuracy," IEEE Transactions on Medical Imaging, vol. 16, no. 3, pp. 308–316, 1997.
[29] W. Tong, "Subpixel image registration with reduced bias," Optics Letters, vol. 36, no. 5, pp. 763–765, 2011.
[30] Y. Bentoutou, N. Taleb, K. Kpalma, and J. Ronsin, "An automatic image registration for applications in remote sensing," IEEE Transactions on Geoscience and Remote Sensing, vol. 43, no. 9, pp. 2127–2137, 2005.
[31] L. S. Ming, L. Yan, and L. Jindong, "Mapping satellite-1 transmission type photogrammetric and remote sensing," Journal of Remote Sensing, vol. 16, supplement, pp. 10–16, 2012 (Chinese).
[32] J. P. Lewis, "Fast template matching," Vision Interface, vol. 95, pp. 120–123, 1995.
[33] H. Foroosh, J. B. Zerubia, and M. Berthod, "Extension of phase correlation to subpixel registration," IEEE Transactions on Image Processing, vol. 11, no. 3, pp. 188–200, 2002.