
Technical report from Automatic Control at Linköpings universitet

Simultaneous Navigation and Synthetic Aperture Radar Focusing

Zoran Sjanic, Fredrik Gustafsson
Division of Automatic Control
E-mail: [email protected], [email protected]

11th June 2013

Report no.: LiTH-ISY-R-3063
Submitted to Transactions on Aerospace and Electronic Systems – TAES

Address:
Department of Electrical Engineering
Linköpings universitet
SE-581 83 Linköping, Sweden

WWW: http://www.control.isy.liu.se


Technical reports from the Automatic Control group in Linköping are available from http://www.control.isy.liu.se/publications.

Abstract: Synthetic Aperture Radar (SAR) equipment is a radar imaging system that can be used to create high resolution images of a scene by utilising the movement of a flying platform. Knowledge of the platform's trajectory is essential to get good and focused images. An emerging application field is real-time SAR imaging using small and cheap platforms with poorer navigation systems, implying unfocused images. This contribution investigates a joint estimation of the trajectory and the SAR image.

Keywords: Optimisation, navigation, Synthetic Aperture Radar, auto-focusing


Simultaneous Navigation and Synthetic Aperture Radar Focusing

Zoran Sjanic, Fredrik Gustafsson
Division of Automatic Control
Department of Electrical Engineering
Linköping University
SE-581 83 Linköping, Sweden
Email: {zoran, fredrik}@isy.liu.se

Abstract—Synthetic Aperture Radar (SAR) equipment is a radar imaging system that can be used to create high resolution images of a scene by utilising the movement of a flying platform. Knowledge of the platform's trajectory is essential to get good and focused images. An emerging application field is real-time SAR imaging using small and cheap platforms with poorer navigation systems, implying unfocused images. This contribution investigates a joint estimation of the trajectory and the SAR image.

Keywords: Optimisation, navigation, Synthetic Aperture Radar, auto-focusing

I. INTRODUCTION

A general method for creating high resolution radar images from low resolution radar data, or real aperture images, is to use the relative motion between the radar antenna and the imaged scene and integrate all the partial real aperture images taken along the flown trajectory [1]. Traditionally, this operation is performed in the frequency domain using FFT-like methods, e.g., the Fourier-Hankel method [2]–[4] or the ω-K migration methods [5]–[7]. The common denominator of these methods is that they assume that the aircraft's (or antenna's) flown path is linear, which is generally not the case in practice. If the trajectory is not linear, the integration will result in an unfocused image. It is possible to partly correct for the deviation from a linear trajectory, but then the methods become computationally inefficient. Another approach is to perform the integration in the time domain by solving the back-projection integral [8]. This process is illustrated in Figure 1, where two points are imaged. Each column of the low resolution real aperture radar image at the bottom is back-projected to the sub images, which means that the column is mapped to a two-dimensional sub image. These sub images are in turn summed into the final synthetic aperture radar image at the top. Simulation plots of this setup are depicted in Figure 2. Even in this process it is assumed that the radar antenna's flown path is linear with constant altitude and heading, but the method can be extended to non-linear tracks as well; however, exact inversion is then not guaranteed. The main disadvantage of this method is the large number of operations required to create an image: the complexity is proportional to O(NKM) for a K × M pixel image using an aperture with N positions. However, by means of a coordinate transformation, an approximation to exact back-projection can be performed, which is called Fast Factorised Back-projection, see [9].

Figure 1: Illustration of the back-projection method for creating SAR images (raw radar data, i.e., the real aperture image, is back-projected to sub images, which are summed into the SAR image). The figure is not to scale.

Figure 2: Real and synthetic aperture radar images of the two-point scene. (a) Real aperture radar image of the two points (range dimension versus azimuth dimension). (b) Synthetic aperture radar image of the two points (range pixels versus azimuth pixels).

The complexity of this algorithm is proportional to KM log N operations, which for large N implies an important saving. With this faster algorithm it should be possible to create images in real time, possibly in dedicated hardware. Since back-projection algorithms depend on knowledge of the antenna's position in order to produce focused images, the image focus can be measured and used for estimation of the trajectory. An example of this is depicted in Figure 3, where two point targets are imaged.

Figure 3: Example SAR images with differently perturbed trajectories. (a) Focused SAR image of two point targets. (b) Unfocused SAR image of two point targets with σ = 0.5. (c) Unfocused SAR image of two point targets with σ = 1.5. (d) Unfocused SAR image of two point targets with σ = 3.

In Figure 3a, a linear path is simulated, which results in a perfectly focused image. In the other three images, variation in the range position was added as N(0, σ²), with σ ∈ {0.5, 1.5, 3}, and the images were created under the assumption that the path was linear. This gives unfocused images, as depicted. Traditional methods for auto-focusing are mostly open-loop methods where either SAR images or raw radar data are used, for example Phase Gradient Auto-focus (PGA), [10]–[14]. The significant common denominator of all these methods is that the image is created under the assumption of a linear flight trajectory, and focusing is done afterwards in an open-loop way, discarding any available flight path information. This is a consequence of the off-line image generating process, where the trajectory is no longer of interest. In a setup where SAR images are generated on-line, an idea is to use the information from the image focus and from the navigation system, such as measured accelerations, together in a sensor fusion framework, and to try to obtain the best solution to both image focusing and navigation simultaneously. Viewed this way, the SAR problem is closely related to Simultaneous Localisation and Mapping (SLAM), [15], [16], where a map of the unknown environment is estimated jointly with the platform's position. The SLAM problem has been well studied during recent years and many different solution methods have been proposed. One method that has been quite successful is to solve the SLAM problem in a sensor fusion framework. In the SAR application, the map of the environment from SLAM corresponds to the unknown scene that is imaged, which can be seen as a two-dimensional map of point reflectors. The problem of positioning the platform is the same as in SLAM. However, the main difference is that we consider

Figure 4: Top: SAR architecture where navigation data are used in an open-loop manner. Middle: SAR architecture where navigation and SAR data are used together in a decentralised sensor fusion framework. Bottom: SAR architecture where navigation and SAR data are used together in a centralised sensor fusion framework. (Each block diagram combines IMU, GPS, vision and radar sensors with the navigation state and the SAR image.)

a non-parametric SAR image rather than a parametric map of point reflectors, which would be too restrictive an assumption in SAR imaging. That is, although there are many conceptual similarities between joint navigation and mapping, state-of-the-art SLAM algorithms cannot be applied directly here.

This contribution applies a sensor fusion framework, where the SAR image together with a focus measure is interpreted as a "sensor" that contains information about the position of the platform. The image creating and auto-focusing methods described above are illustrated in Figure 4. The method based on sensor fusion can be implemented in a centralised or decentralised manner. In this work we focus on the decentralised approach.

The outline is as follows. Section II summarises the notation and gives a high-level mathematical formulation of the approach. Section III introduces the navigation framework and the system and measurement models used. Section IV describes the image focus measures that will be used in the auto-focus procedure. In Section V an optimisation framework and methods are introduced and their usage is explained. Numerical examples for simulated images are covered in Section VI and for real SAR data in Section VII. Finally, conclusions and future work are discussed in Section VIII.


II. NOTATION AND PROBLEM FORMULATION

Let the complex raw radar data be denoted z_t(R), where t is the time index and z_t(R) denotes the returned radio energy corresponding to distance R from the radar to the scene. Further, let x_t denote the state vector of the platform, which includes position and orientation (the radar pose). The back-projection method of producing the images, see Figure 1, can alternatively be expressed as an integration per image pixel. For each pixel (i, j) in a complex-valued image I, the total energy from each radar pulse is integrated by summing all the values from the raw data given the range from the platform to the point in the scene corresponding to the pixel (i, j). This can be expressed as

I_{ij} = \sum_{t=1}^{N} z_t(\lVert p_t - s_{ij} \rVert_2)    (1)

where p_t is the 3D position of the radar and s_{ij} is the coordinate of the scene point that is mapped to the pixel (i, j) in the image. Now, for a SAR system on a UAV platform, the pose cannot be assumed to be known. Instead, we have access to an estimated state \hat{x}_t, and the estimated SAR image becomes

\hat{I}_{ij} = \sum_{t=1}^{N} z_t(\lVert \hat{p}_t - s_{ij} \rVert_2).    (2)

This estimated SAR image will be out of focus, since all the contributions from the raw data will now be scattered due to the error in the position estimate; see Figure 3 for an example of this.
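To make (1) and (2) concrete, the following minimal Python sketch evaluates the back-projection sum per pixel. It is an illustration only: the raw-data interpolator z, the position array and the scene grid are hypothetical names, not part of the paper's implementation.

import numpy as np

def backproject(z, positions, scene):
    """Back-projection sum of Eq. (1)/(2).

    z(t, R): callable returning the complex raw-data sample(s) for
             pulse index t at range(s) R (in practice an interpolation
             into the range-compressed data z_t(R)).
    positions: (N, 3) array of (estimated) radar positions p_t.
    scene: (K, M, 3) array of scene-point coordinates s_ij.
    """
    N = positions.shape[0]
    K, M = scene.shape[:2]
    image = np.zeros((K, M), dtype=complex)
    for t in range(N):
        # Range from the platform at time t to every scene point.
        R = np.linalg.norm(scene - positions[t], axis=2)
        image += z(t, R)  # accumulate this pulse's contribution
    return image

Using the estimated positions \hat{p}_t instead of the true ones yields the estimated image \hat{I} of (2).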

The key idea in this contribution is to perform a parametric focusing. To enable this, we will make use of a focus measure F(I) with the property that

F(\hat{I}) > F(I^0), \quad \hat{I} \neq I^0,    (3a)

x^0_{1:N} = \arg\min_{x_{1:N}} F(\hat{I})    (3b)

where I^0 denotes the true SAR image and x^0_{1:N} the true state sequence.

We will, however, not optimise the focus with respect to the pose blindly. We will optimise the focus jointly with the filtering problem, in the sense that the states should obey the state dynamics and observations as well as possible.

As already noted, building up an image of KM pixels with an aperture of N time points requires a huge computational effort (O(NKM)). It may seem that an outer loop that performs focusing will increase the computational burden by at least an order of magnitude. However, we will show that the gradient of the focus measure can be computed efficiently, using the chain rule

\frac{\partial F(x)}{\partial x} = \frac{\partial F}{\partial I}\, \frac{\partial I}{\partial R}\, \frac{\partial R}{\partial x}.    (4)

The first and third factors can be derived analytically, while the second one can be computed from the already computed sub-images with only a little overhead.

In order to evaluate the performance of the estimation methods, some performance measures are needed. A popular measure for a parameter estimate is the Root Mean Square Error (RMSE), defined as

\mathrm{RMSE}(\hat{\theta}) = \sqrt{\frac{\sum_{k=1}^{N} (\hat{\theta}_k - \theta^0)^2}{N}}    (5)

where \hat{\theta}_1, \dots, \hat{\theta}_N are unbiased estimates of the true scalar parameter \theta^0. To assess the quality of the obtained SAR images, the power of the error image can be used, defined as

P = \frac{\sum_{i=1}^{K} \sum_{j=1}^{M} |\hat{I}_{ij} - I^0_{ij}|^2}{KM}    (6)

where \hat{I} is the K × M complex SAR image obtained with the estimation procedure and I^0 is the perfectly focused SAR image, i.e., the one created with the true trajectory.
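A minimal sketch of the two performance measures, assuming NumPy arrays of Monte Carlo estimates and complex images (variable names are illustrative):

import numpy as np

def rmse(theta_hat, theta_0):
    # Eq. (5): RMSE over N unbiased estimates of a scalar parameter.
    theta_hat = np.asarray(theta_hat)
    return np.sqrt(np.mean((theta_hat - theta_0) ** 2))

def error_image_power(I_hat, I_0):
    # Eq. (6): mean power of the complex error image.
    return np.mean(np.abs(I_hat - I_0) ** 2)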

III. NAVIGATION FRAMEWORK

An Inertial Navigation System (INS) in an aircraft integrates accelerometer and gyroscope data and corrects the state with aiding sensors, such as a barometer and GPS, using general dynamics and measurement equations

x_{t+1} = f(x_t, w_t)    (7a)

y_t = h(x_t) + e_t    (7b)

where x_t are the states of the system, w_t denotes the process noise with covariance Q_t, e_t is the measurement noise with covariance R_t, and y_t are the measurements. Usually an Extended Kalman Filter is applied to estimate the states, see e.g. [17]. In this work a simplified, yet useful, model of the dynamics is assumed, which gives simpler expressions in the algorithms.

A. Aircraft Model

In this setup, the following 2-DOF linear time-discrete INS dynamics is used, [17],

x_{t+1} = F x_t + G w_t    (8a)

F = \begin{bmatrix} I_2 & T_s I_2 & \frac{T_s^2}{2} I_2 \\ 0_{2\times 2} & I_2 & T_s I_2 \\ 0_{2\times 2} & 0_{2\times 2} & I_2 \end{bmatrix}    (8b)

G = \begin{bmatrix} \frac{T_s^3}{6} I_2 \\ \frac{T_s^2}{2} I_2 \\ T_s I_2 \end{bmatrix}    (8c)

x_t = [X_t \; Y_t \; v^X_t \; v^Y_t \; a^X_t \; a^Y_t]^T    (8d)

w_t = [w^X_t \; w^Y_t]^T    (8e)

where T_s is the sampling time, X is the position in the azimuth direction, Y is the position in the range direction, v^X and v^Y are the velocities in the X- and Y-directions, respectively, and a^X and a^Y are the corresponding accelerations. This model is used for the whole trajectory. Since the model is linear and time invariant, the stationary Kalman filter can be used to estimate x_t, giving \hat{x}_t and its corresponding covariance P_t.
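The model (8) is straightforward to construct numerically; a sketch (function name and the choice of sampling time are ours):

import numpy as np

def ins_model(Ts):
    """F and G of the 2-DOF linear INS model, Eq. (8)."""
    I2, Z2 = np.eye(2), np.zeros((2, 2))
    F = np.block([
        [I2, Ts * I2, Ts**2 / 2 * I2],
        [Z2, I2,      Ts * I2      ],
        [Z2, Z2,      I2           ],
    ])
    G = np.vstack([Ts**3 / 6 * I2, Ts**2 / 2 * I2, Ts * I2])
    return F, G

The state ordering follows (8d): x = [X, Y, v^X, v^Y, a^X, a^Y]^T.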


Parameter     | Accuracy (1-σ) | Stationary accuracy (1-σ)
Position      | 3 m            | 0.093 m
Velocity      | 0.4 m/s        | 0.012 m/s
Acceleration  | 0.06 m/s²      | 0.015 m/s²

Table I: Accuracy and stationary accuracy for the navigation parameters.

B. Navigation Performance

Since the system is linear and time invariant, the covariance of the estimate will converge to the stationary covariance \bar{P}. This covariance can be calculated from

\bar{P} = F \bar{P} F^T - F \bar{P} H^T (H \bar{P} H^T + R)^{-1} H \bar{P} F^T + G Q G^T    (9)

where F and G are defined above, and H is the linearised measurement equation h(x_t). In our case it is chosen as H = I_6, since we can assume that all states are measured by the navigation system. For a typical navigation system used in a UAV, the performance of these parameters (assumed to be measurement noise) can be summarised according to Table I. The system noise covariance Q, which represents disturbances on the states, such as wind turbulence, can be taken as diag{0.25, 0.25}. With these values, the stationary covariance is as given in the third column of Table I.
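The stationary covariance can be obtained by simply iterating (9) to convergence; a sketch using the values of Table I (the sampling time T_s = 0.1 s and the iteration count are our assumptions):

import numpy as np

def stationary_covariance(F, G, H, Q, R, iters=2000):
    """Iterate the Riccati recursion (9) until it converges."""
    P = np.eye(F.shape[0])
    for _ in range(iters):
        S = H @ P @ H.T + R
        P = (F @ P @ F.T
             - F @ P @ H.T @ np.linalg.solve(S, H @ P @ F.T)
             + G @ Q @ G.T)
    return P

F, G = ins_model(Ts=0.1)                          # from the earlier sketch
H = np.eye(6)                                     # all states measured
R = np.diag([3, 3, 0.4, 0.4, 0.06, 0.06]) ** 2    # Table I, (1-sigma)^2
Q = np.diag([0.25, 0.25])
P_bar = stationary_covariance(F, G, H, Q, R)
print(np.sqrt(np.diag(P_bar)))   # compare with the third column of Table I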

IV. FOCUS MEASURES

We here review and compare two common focus measures.

A. Two Entropy Measures

One common focus measure in the SAR and image processing literature is the image entropy, calculated as

E_1(I) = -\sum_{k=1}^{256} q_k \log_2(q_k)    (10)

where q_k is an approximated grey-level distribution of the K × M grey-scale image |I|, where I is the complex-valued SAR image. It can be obtained from the image histogram calculated as

q_k = \frac{\#\{(i,j) : |I_{ij}| \in [k-1, k]\}}{KM}    (11a)

k \in [1, 256].    (11b)

The more focused the image is, the lower the entropy, see for example [14] or [18]. Histograms for the images in Figure 3 are given in Figure 5; note the log scale on the y-axis. An alternative definition of entropy, more commonly used in the SAR context, is, [14], [18], [19],

E_2(I) = -\sum_{i=1}^{K} \sum_{j=1}^{M} q_{ij} \ln(q_{ij})    (12a)

q_{ij} = \frac{|I_{ij}|^2}{\sum_{i=1}^{K} \sum_{j=1}^{M} |I_{ij}|^2}.    (12b)

Figure 5: Histograms for the images in Figure 3 (log(p_i) versus normalised intensity; note the log scale on the y-axis). (a) Focused image. (b) Unfocused image with σ = 0.5. (c) Unfocused image with σ = 1.5. (d) Unfocused image with σ = 3.

B. Focus Measure Performance

The entropy 1 and entropy 2 focus measures are tested and compared on the SAR images in Figure 6, and the results are depicted in Figure 7, where the 1σ, 2σ and 3σ standard deviation ellipses are also drawn. These images are chosen since they are more informative than the image in Figure 3 and they represent both a structured and an unstructured scene. In this simulation setup the state noise in model (8) is set to zero, i.e., the trajectory is completely deterministic. This is done in order to illustrate the focus measure functions F_i in a two-dimensional plot, since the trajectory, and consequently the focus measure, is then only dependent on the initial values. In these figures it can be seen that entropy 2 has a convex and well-behaved shape around the true value of the initial state. However, it is very flat along the velocity direction, which indicates that it is very difficult to estimate that particular state. The entropy 1 measure has, on the other hand, a sharp minimum at the correct value of the initial state, but many local minima. This means that the two entropy measures complement each other well and can be used in combination to obtain the global minimum of the focus measure.

V. SEARCH METHODS

As demonstrated in Section IV-B, the entropy 2 measure can be used as a coarse first step in the optimisation to come close to the global minimum, and then entropy 1 can be used to obtain the global minimum. Note that the special structure of the problem (8) allows for an unconstrained solution. This is because the constraints representing the trajectory can be taken into account while calculating the gradient of the cost function, as will be demonstrated in Section V-C.


Figure 6: SAR images with a more informative scene than in Figures 2 and 3 (range pixels versus azimuth pixels). (a) SAR image of the structured scene. (b) SAR image of the unstructured scene.

Figure 7: Focus measures for the images in Figure 6, with standard deviation ellipses (range direction acceleration error [m/s²] versus azimuth direction velocity error [m/s]). (a) Entropy 1 for the structured scene. (b) Entropy 2 for the structured scene. (c) Entropy 1 for the unstructured scene. (d) Entropy 2 for the unstructured scene.

A. Joint Optimization of Trajectory and Focus

Since the focus of the image depends on the unknown trajectory, one solution is to solve the following minimisation problem

\hat{\theta} = \arg\min_{\theta}\; \gamma_F E_i(x_{0:N}) + \gamma_s \left( \sum_{t=1}^{N} \lVert y_t - h(x_t) \rVert^2_{R_t^{-1}} + \lVert w_t \rVert^2_{Q_t^{-1}} \right)    (13a)

s.t. \quad \theta = [x_0^T,\; w_{1:N}^T]^T    (13b)

x_{t+1} = f(x_t, w_t)    (13c)

where γ_F and γ_s are weights (with γ_F + γ_s = 1), and the measurement equation h(x) and the system dynamics f(x, w) are defined as in Section III. In Equation (13a), E_i(x_{0:N}), i ∈ {1, 2}, is a function of the SAR image I created from the radar measurements and answers the question "how focused is the image?" according to Section IV.
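A sketch of how the loss (13a) can be assembled for the linear model of Section III, reusing the hypothetical backproject and entropy_2 helpers from the earlier sketches (h(x) = x, i.e. H = I, and a known altitude Z0 are our assumptions):

import numpy as np

def loss(theta, z, scene, y, F, G, Rinv, Qinv, gamma_F, gamma_s, N, Z0):
    """Weighted focus measure plus filtering residuals, Eq. (13a).

    theta packs the initial state and the process noise sequence:
    theta = [x_0, w_1, ..., w_N], as in (13b).
    """
    nx, nw = F.shape[0], G.shape[1]
    x = theta[:nx].copy()
    w = theta[nx:].reshape(N, nw)
    positions = np.empty((N, 3))
    resid = 0.0
    for t in range(N):
        positions[t] = (x[0], x[1], Z0)       # radar position p_t
        e = y[t] - x                          # h(x) = x here
        resid += e @ Rinv @ e + w[t] @ Qinv @ w[t]
        x = F @ x + G @ w[t]                  # enforce the dynamics (13c)
    image = backproject(z, positions, scene)
    return gamma_F * entropy_2(image) + gamma_s * resid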

B. Gradient Search

Gradient search methods will be exemplified here with a couple of examples with different trajectories and errors in them. Only two states and their initial values are considered (v^X_0 and a^Y_0), for illustrative purposes. In general, the minimisation should be applied to these states for all, or at least some, of the time instants along the trajectory. Such an example will be studied later.

A gradient search can, for the general problem min_θ g(θ), be formulated as

\theta^{k+1} = \theta^k - \mu_k H(\theta^k)^{-1} \nabla g(\theta^k)    (14a)

\nabla g(\theta) = \frac{\partial}{\partial \theta} g(\theta)    (14b)

where μ_k is the step size with μ_0 = 1, g(θ) is the loss function as in (13a), and H(θ) is some (positive definite) matrix. The initial estimate, θ^0, can be taken as the usual estimate from the navigation system. In the simplest case H can be chosen as the identity matrix, and the procedure becomes a pure gradient search. The disadvantage of such a procedure is slow convergence, especially if the function to be minimised is ridge-like, as the entropy 1 focus measure is. If H is chosen as the Hessian of g, the procedure becomes a Newton search. The Newton search has fast convergence and is preferable if the Hessian is available. In many cases the Hessian is either not available or very difficult to obtain, as in the case considered here, and approximate methods must be applied. One option is a quasi-Newton method, and BFGS in particular, where the Hessian is approximated by utilising gradients of the function during the search, see [20]. The general gradient search procedure is summarised in Algorithm 1.

In all these procedures it is essential to obtain the gradient of the loss function. Because of the special structure of the focus measure function and the SAR processing algorithm, the complete analytical gradient is hard to obtain. For example, for the entropy 1 measure it is hard to differentiate the histogram of the image. In this case numerical methods must be used.


However, for the entropy 2 measure it is possible to obtain analytical expressions for most parts of the gradient, enabling very efficient algorithms; this will be described in the next subsection.

Algorithm 1 Gradient search procedure

Input: Initial value of the optimisation parameters θ^0, raw radar data, tolerance thresholds ε_1, ε_2, ε_3
Output: Solution \hat{θ}, focused SAR image

k := 0
repeat
    Calculate the gradient of the cost function, ∇g(θ^k)
    Calculate the (approximate) Hessian, H(θ^k)
    μ_k := 1
    repeat
        θ^{k+1} := θ^k − μ_k H(θ^k)^{−1} ∇g(θ^k)
        μ_k := μ_k / 2
    until g(θ^{k+1}) < g(θ^k)
    k := k + 1
until ||θ^k − θ^{k−1}||_2 < ε_1 or ||∇g(θ^{k−1})|| < ε_2 or ||g(θ^k) − g(θ^{k−1})||_2 < ε_3
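A direct Python transcription of Algorithm 1 (illustrative; the callables g and grad_g, and optionally an approximate Hessian, are supplied by the user):

import numpy as np

def gradient_search(g, grad_g, theta0, hess=None,
                    eps1=1e-6, eps2=1e-6, eps3=1e-9, max_iter=100):
    """Damped (quasi-)Newton search of Algorithm 1.

    With hess=None the identity is used, i.e. a pure gradient search.
    """
    theta = np.asarray(theta0, dtype=float)
    for _ in range(max_iter):
        d = grad_g(theta)
        H = hess(theta) if hess is not None else np.eye(theta.size)
        step = np.linalg.solve(H, d)
        mu, g_old = 1.0, g(theta)
        theta_new = theta - mu * step
        while g(theta_new) >= g_old and mu > 1e-12:
            mu /= 2.0                    # halve the step until descent
            theta_new = theta - mu * step
        if (np.linalg.norm(theta_new - theta) < eps1
                or np.linalg.norm(d) < eps2
                or abs(g(theta_new) - g_old) < eps3):
            return theta_new
        theta = theta_new
    return theta

For the BFGS variant mentioned above, scipy.optimize.minimize(g, theta0, jac=grad_g, method='BFGS') provides a standard implementation.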

C. Calculating the Gradient

The calculations to obtain an analytical gradient of the entropy 2 function will now be presented. The key is the chain rule for gradient calculation,

\frac{\partial E_2}{\partial x} = \frac{\partial E_2}{\partial q}\, \frac{\partial q}{\partial |I|}\, \frac{\partial |I|}{\partial R}\, \frac{\partial R}{\partial x}.    (15)

In order to apply the chain rule, first the decomposition chain of the entropy 2 focus measure will be considered, and then all partial derivatives will be presented.

The first factor to be differentiated is the entropy 2 focus measure

E_2 = -\sum_{i=1}^{K} \sum_{j=1}^{M} q_{ij} \ln q_{ij} = -\sum_{i=1}^{KM} q_i \ln q_i    (16)

where the last equality is simply a reformulation of the double sum obtained by vectorising the image.

In the second factor, each q_i is obtained by

q_i = \frac{|I_i|^2}{\sum_j |I_j|^2}    (17)

meaning that q_i is a function of the absolute value of the complex-valued SAR image.

Figure 8: SAR geometry, showing the platform (at X_t, Y_t), the imaged scene with a point target (at X_m), the ranges R_m and R, the altitude Z_0 and the angle Ψ. The figure is not to scale.

For the third factor, we need to obtain the derivative ∂|I|/∂R. In the creation of the image, a back-projection sum is evaluated and all partial images are summed up. Each partial image is a function of one column in the RAR image and of the range from the platform to each pixel in the SAR image, see Section II. Unfortunately it is not easy, if at all possible, to obtain an analytical expression for the derivative ∂|I|/∂R. However, this value can simply be obtained during image creation by means of numerical differentiation. The cost of that procedure lies in memory demand and execution time, which are both doubled. But this increase in cost is independent of the state, and thus constant no matter how many parameters the optimisation is performed over. A straightforward numerical gradient would instead give a cost that increases linearly with the number of parameters.
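One way to realise this is sketched below, under our own simplifying assumption of a single global range perturbation dR (reusing the z and backproject conventions from the earlier sketch): a second, range-perturbed image is accumulated alongside the nominal one, which doubles memory and time exactly as described in the text.

import numpy as np

def backproject_with_range_derivative(z, positions, scene, dR=0.1):
    """Image and forward-difference estimate of d|I|/dR."""
    K, M = scene.shape[:2]
    image = np.zeros((K, M), dtype=complex)
    image_pert = np.zeros((K, M), dtype=complex)
    for t in range(positions.shape[0]):
        R = np.linalg.norm(scene - positions[t], axis=2)
        image += z(t, R)            # nominal contribution
        image_pert += z(t, R + dR)  # contribution with perturbed range
    dabsI_dR = (np.abs(image_pert) - np.abs(image)) / dR
    return image, dabsI_dR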

The last factor that needs to be calculated is the gradient of the range as a function of the states, R(x_t). To calculate an analytical expression for this function, some SAR geometry preliminaries are needed. In order to express the range as a function of the states, the geometry setup in Figure 8 can be considered. From the figure it can be seen that the range R_t can, with the help of Pythagoras' theorem, be expressed as

R_t = \sqrt{(X_m - X_t)^2 + (R_g - Y_t)^2 + (Z_0 - Z_t)^2}    (18a)

R_g = \sqrt{R_m^2 - Z_0^2}    (18b)

i.e., as a function of the trajectory. Note that Z_t, the altitude of the platform, is assumed to be known here. This can be achieved by measuring it with, for example, a barometric sensor, which is always done in aircraft applications. Here the exact expression for the range along the trajectory is used, unlike in most of the SAR literature, where approximate, linearised expressions are used, see for example [12]. This is because in a low frequency SAR application, as the one considered here, the ratio between range and trajectory length is not negligible, due to the lobe width. If approximate expressions were used, too large errors would be introduced at the beginning and end of the trajectory.

Next, the dynamical model (8) can be used to express the position states appearing in the range expression above as a function of any other state, by using

x_t = F^{t-k} x_k, \quad t > k.    (19)

Note that the state noise term w_t is neglected in the following, since it is equivalent to optimise over the noise and over the acceleration states a_t; the latter are used here to simplify the expressions and reduce the number of variables in the problem. This gives that the positions can be expressed as

X_t = X_k + T_s (t-k)\, v^X_k + \frac{T_s^2 (t-k)^2}{2}\, a^X_k    (20a)

Y_t = Y_k + T_s (t-k)\, v^Y_k + \frac{T_s^2 (t-k)^2}{2}\, a^Y_k    (20b)


If these expressions are used in (18a), we can easily obtain the partial derivatives of the range with respect to the velocities and accelerations at arbitrary time points.

Now we have everything needed to calculate the gradient of the focus measure with respect to the trajectory states. The partial derivatives are, in turn (for t > k),

\frac{\partial E_2}{\partial q_i} = -\ln q_i - 1    (21a)

\frac{\partial q_i}{\partial |I_j|} = \begin{cases} \dfrac{2|I_i| \sum |I|^2 - 2|I_i|^3}{\left(\sum |I|^2\right)^2}, & i = j \\ -\dfrac{2|I_j|\,|I_i|^2}{\left(\sum |I|^2\right)^2}, & i \neq j \end{cases}    (21b)

\frac{\partial R_t}{\partial v^X_k} = -\frac{(X_m - X_t)\, T_s (t-k)}{R_t}    (21c)

\frac{\partial R_t}{\partial v^Y_k} = -\frac{(R_g - Y_t)\, T_s (t-k)}{R_t}    (21d)

\frac{\partial R_t}{\partial a^X_k} = -\frac{(X_m - X_t)\, T_s^2 (t-k)^2}{2 R_t}    (21e)

\frac{\partial R_t}{\partial a^Y_k} = -\frac{(R_g - Y_t)\, T_s^2 (t-k)^2}{2 R_t}    (21f)

and ∂|I|/∂R is numerically calculated during image formation. Now, at least for the entropy 2 focus measure, we can calculate the gradient (semi-)analytically and use it in the minimisation procedure. The second term in (13a) is easy to differentiate, since it is a quadratic form and h(x) is a linear function in this case.
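The analytical factors (21a)-(21f) translate directly into code; a sketch (the dense Jacobian of (21b) is formed explicitly here, which is only feasible for small images):

import numpy as np

def dE2_dq(q):
    # Eq. (21a); q are the normalised pixel powers of Eq. (17).
    return -np.log(q) - 1.0

def dq_dabsI(I):
    """Dense Jacobian of Eq. (21b), J[i, j] = dq_i / d|I_j|."""
    a = np.abs(I).ravel()
    S = np.sum(a ** 2)
    J = -2.0 * np.outer(a ** 2, a) / S ** 2     # i != j term
    J[np.diag_indices_from(J)] += 2.0 * a / S   # diagonal correction, i = j
    return J

def dR_dstates(Xm, Rg, Z0, X, Y, Z, Ts, t, k):
    """Range (18a) and its derivatives (21c)-(21f), for t > k."""
    Rt = np.sqrt((Xm - X) ** 2 + (Rg - Y) ** 2 + (Z0 - Z) ** 2)
    d = Ts * (t - k)
    dR_dvX = -(Xm - X) * d / Rt              # (21c)
    dR_dvY = -(Rg - Y) * d / Rt              # (21d)
    dR_daX = -(Xm - X) * d ** 2 / (2 * Rt)   # (21e)
    dR_daY = -(Rg - Y) * d ** 2 / (2 * Rt)   # (21f)
    return Rt, dR_dvX, dR_dvY, dR_daX, dR_daY

The full gradient (15) is then the product of these factors with the numerically stored ∂|I|/∂R.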

VI. NUMERICAL EXAMPLES FOR SIMULATED IMAGES

In order to demonstrate the behaviour of the gradient search for this setup, the SAR image from Figure 6a is used in two different experiments.

A. Two-Dimensional Optimization

To be able to illustrate the convergence of the solution, only two optimisation variables are considered here, θ = [v^X_0, a^Y_0]^T, and the algorithm is initiated with random starting points θ^0 based on the stationary covariance of the states in the system. Those initial values are [100.005, 0.005]^T, [99.99, 0.01]^T, [99.995, 0.02]^T, [100.02, −0.01]^T and [100.005, −0.035]^T.

The trajectories generated with these initial values are illustrated in Figure 9. In Figure 10a, the gradient search based on the entropy 2 measure is illustrated, and we can see that the solutions converge to the flat, ridge-like area close to the correct acceleration, but not necessarily to the correct velocity. In Figure 10b, the gradient search using the entropy 1 measure is depicted. In this case the algorithm is initiated with the solution from the entropy 2 search. It can be seen that this minimisation strategy works quite well, although one solution is stuck in a local minimum; in that case the velocity error is the largest of all the errors. Note also that only the focus measure is used to find the estimate of the states, i.e., γ_s is set to zero while γ_F is set to one in Equation (13a).

Figure 9: Trajectory examples (range direction [m] versus azimuth direction [m]) for different initial values of the velocity v^X_0 and the acceleration a^Y_0 (trajectories T1–T5).

It is interesting to see how the image created with the solution that is stuck in the local minimum of the entropy 1 measure compares with the unfocused image that the search is initialised with. As illustrated in Figure 11, the image created with the values from the minimisation procedure is very close to the focused image and much better than the unfocused image the search is initialised with. The probable explanation is that small azimuth direction velocity errors do not influence the final image much, due to quantisation effects. However, the estimate of the navigation states is not correct.

B. High-Dimensional Optimization

In the second example a more realistic setup is used. The optimisation problem to be solved is

\hat{\theta} = \arg\min_{\theta}\; \gamma_F E_{1,2}(x_{0:N}) + \gamma_s \sum_{t=1}^{N} \lVert a^{mY}_t - a^Y_t \rVert^2_{R_t^{-1}}    (22a)

s.t. \quad \theta = [v^X_0,\; a^Y_0,\; a^Y_{\lfloor N/4 \rfloor},\; a^Y_{\lfloor N/2 \rfloor},\; a^Y_{\lfloor 3N/4 \rfloor}]^T    (22b)

\gamma_F = 0.99    (22c)

\gamma_s = 0.01    (22d)

x_{t+1} = F x_t    (22e)

[X_0 \;\; Y_0 \;\; v^Y_0 \;\; a^X_0]^T = [0 \;\; 0 \;\; 0 \;\; 0]^T    (22f)

a^X_t = 0, \quad t \in \{0 : N\}    (22g)

a^Y_t = \begin{cases} a^Y_0, & t \in \{1 : \lfloor N/4 \rfloor - 1\} \\ a^Y_{\lfloor N/4 \rfloor}, & t \in \{\lfloor N/4 \rfloor + 1 : \lfloor N/2 \rfloor - 1\} \\ a^Y_{\lfloor N/2 \rfloor}, & t \in \{\lfloor N/2 \rfloor + 1 : \lfloor 3N/4 \rfloor - 1\} \\ a^Y_{\lfloor 3N/4 \rfloor}, & t \in \{\lfloor 3N/4 \rfloor + 1 : N\} \end{cases}    (22h)

P_0 = \infty \cdot I_2    (22i)

where a^{mY} is the measured acceleration in the Y-direction, with additive white Gaussian noise with R_t = 0.002² m²/s⁴, and E_{1,2}(x_{0:N}) is either entropy 2 or entropy 1, exactly as in the previous example. Here it is assumed that the Y-direction acceleration changes in a step-like manner only a few times during the SAR image generation and that the amplitude of the steps is arbitrary. It is also assumed that the acceleration in the X-direction varies slowly, due to the platform's inherent inertia in this direction, so it can be taken as zero. The meaning of the constraint in (22i) is that there is no prior information about the initial values of the trajectory. Another, spline-like, interpretation of the setup in (22) is to find the best trajectory by fitting second order polynomials between four points evenly spaced along the trajectory. Both scenes from Figure 6 are used, and 30 Monte Carlo simulations are performed in order to evaluate the performance of the estimation procedure.

Figure 10: Search trajectories (range direction acceleration error [m/s²] versus azimuth direction velocity error [m/s]) for five different initial values of x_0 (T1–T5), using the two entropy measures. (a) Entropy 2 focus measure. (b) Entropy 1 focus measure, with x_0 given by the entropy 2 gradient search.
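A sketch of the piecewise-constant parameterisation (22b)/(22h): the five-element θ is expanded into a full Y-acceleration profile (the exact knot indexing at the interval boundaries in (22h) is approximated):

import numpy as np

def expand_accel_profile(theta, N):
    """theta = [vX0, aY0, aY_{N/4}, aY_{N/2}, aY_{3N/4}] -> (vX0, aY[0..N])."""
    vX0 = theta[0]
    knots = [0, N // 4, N // 2, 3 * N // 4]
    aY = np.empty(N + 1)
    for level, start in zip(theta[1:], knots):
        aY[start:] = level      # each knot overwrites the remaining tail
    return vX0, aY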

The resulting RMSE of the parameters and the mean value of the error image power are presented in Table II and Table III for the structured and unstructured scenes, respectively. Here, the actual acceleration is presented instead of the process noise value, since it is more physically interesting. It can be noticed that the improvement in RMSE after further minimisation with entropy 1 is not very big; it is of the magnitude of 10⁻⁵. This suggests that the extra minimisation step with entropy 1 can be skipped if a faster procedure is sought.

Figure 11: Resulting images from the minimisation procedure with starting point [100.02, −0.01]^T. (a) Image created with an error in velocity of 0.02 m/s and in acceleration of −0.01 m/s². (b) Image created with an error in velocity of 0.014 m/s and in acceleration of −0.0003 m/s². (c) Focused image as a reference.


Figure 12: SAR images created with noisy position data. (a) SAR image of the structured scene. (b) SAR image of the unstructured scene.

Parameter              | RMSE (opt. with E_2) | RMSE (opt. with E_1)
v^X_0                  | 7.05·10⁻³ m/s        | 7.04·10⁻³ m/s
a^Y_0                  | 9.94·10⁻⁴ m/s²       | 9.15·10⁻⁴ m/s²
a^Y_{⌊N/4⌋}            | 6.51·10⁻⁴ m/s²       | 6.34·10⁻⁴ m/s²
a^Y_{⌊N/2⌋}            | 6.89·10⁻⁴ m/s²       | 6.84·10⁻⁴ m/s²
a^Y_{⌊3N/4⌋}           | 6.02·10⁻⁴ m/s²       | 6.03·10⁻⁴ m/s²
Mean error image power | 149.6                | 126.9

Table II: RMSE for the estimated parameters and the mean value of the error image power for the structured scene.


In Figure 12, a noisy position (one of the 30 noise realisations) is used for the image generation. We see that both images are unfocused and that the image of the unstructured scene is quite bad; all the dominant targets are blurred. In Figure 13 the images after minimisation with entropy 2 and entropy 1 are depicted (for the same noise realisation as above). Here it can be seen that any improvement in the image from the extra minimisation with entropy 1 is impossible to see with the naked eye, i.e., the improvement of the navigation states does not visibly improve the images. This could be expected from the results of the MC simulations.

The resulting estimates of the parameters and the error image power after the entropy 1 minimisation, for the two scenes and this particular realisation of the noise, are presented in Table IV.

VII. EXAMPLE WITH REAL SAR IMAGE

Here, we illustrate the estimation results using data from the CARABAS II system [21] collected in western Sweden.

Parameter              | RMSE (opt. with E_2) | RMSE (opt. with E_1)
v^X_0                  | 11.2·10⁻³ m/s        | 11.2·10⁻³ m/s
a^Y_0                  | 11.61·10⁻⁴ m/s²      | 10.98·10⁻⁴ m/s²
a^Y_{⌊N/4⌋}            | 6.63·10⁻⁴ m/s²       | 6.52·10⁻⁴ m/s²
a^Y_{⌊N/2⌋}            | 9.31·10⁻⁴ m/s²       | 8.86·10⁻⁴ m/s²
a^Y_{⌊3N/4⌋}           | 7.77·10⁻⁴ m/s²       | 7.58·10⁻⁴ m/s²
Mean error image power | 1348                 | 1242

Table III: RMSE for the estimated parameters and the mean value of the error image power for the unstructured scene.

Figure 13: Resulting images from the gradient search minimisation. (a) Structured scene after minimisation with entropy 2 as focus measure. (b) Structured scene after minimisation with entropy 1 as focus measure. (c) Unstructured scene after minimisation with entropy 2 as focus measure. (d) Unstructured scene after minimisation with entropy 1 as focus measure.

Parameter          | Structured scene  | Unstructured scene
v^X_0              | 9.236·10⁻³ m/s    | 10.13·10⁻³ m/s
a^Y_0              | −3.057·10⁻⁴ m/s²  | −1.707·10⁻⁴ m/s²
a^Y_{⌊N/4⌋}        | −0.553·10⁻⁴ m/s²  | −1.733·10⁻⁴ m/s²
a^Y_{⌊N/2⌋}        | 11.15·10⁻⁴ m/s²   | 10.25·10⁻⁴ m/s²
a^Y_{⌊3N/4⌋}       | 1.384·10⁻⁴ m/s²   | 1.234·10⁻⁴ m/s²
Error image power  | 51.37             | 53.12

Table IV: Error in the estimated parameters for the two scenes after the entropy 1 minimisation procedure.

The trajectory and the SAR image obtained by the proposed estimation method are compared to the GPS trajectory, which is assumed to be the ground truth. Part of the SAR image used for the estimation is illustrated in Figure 14, where the GPS-based trajectory is used to generate the image.

Figure 14: SAR image for the real data case, created with the GPS-based trajectory (assumed to be ground truth).

For the real data case, the optimisation problem to be solved is formulated according to

\hat{\theta} = \arg\min_{\theta}\; \gamma_F E_2(x_{0:N}) + \gamma_s \sum_{t=1}^{N} \lVert a^m_t - a_t \rVert^2_{R_t^{-1}}    (23a)

s.t. \quad \theta = [v^X_0,\; v^Y_0,\; a^X_0,\; a^Y_0,\; a^X_i,\; a^Y_i]^T    (23b)

i \in \lfloor \{1 : 199\} N/200 \rfloor    (23c)

\gamma_F = 0.36    (23d)

\gamma_s = 0.64    (23e)

x_{t+1} = F x_t    (23f)

[X_0 \;\; Y_0]^T = [0 \;\; 0]^T    (23g)

a_i = \begin{cases} a_0, & i \in \{1 : \lfloor N/200 \rfloor - 1\} \\ a_{\lfloor N/200 \rfloor}, & i \in \{\lfloor N/200 \rfloor + 1 : \lfloor N/100 \rfloor - 1\} \\ \vdots \\ a_{\lfloor 199N/200 \rfloor}, & i \in \{\lfloor 199N/200 \rfloor + 1 : N\} \end{cases}    (23h)

P_0 = \infty \cdot I_4    (23i)

where the variables are defined as

a_t = [a^X_t,\; a^Y_t]^T    (24a)

a^m_t = [a^{mX}_t,\; a^{mY}_t]^T    (24b)

R_t = \mathrm{diag}\{R^X_t, R^Y_t\} = \mathrm{diag}\{0.1, 0.1\}\ [\mathrm{m^2/s^4}]    (24c)

Note that in this case only the entropy 2 focus measure is used, due to the computational load of calculating the numerical gradient for the entropy 1 measure. However, according to the results in Section VI, the improvement of the estimates from additional optimisation with entropy 1 is small, and it is therefore omitted here. Also, the weights and the covariance of the acceleration measurements are seen as tuning parameters.

Figure 15: SAR image for the real data case, obtained with the optimisation procedure.

Figure 16: Error in position (X- and Y-position, [m]) between the GPS and the estimated trajectory, as a function of time [s].

The results from the optimisation procedure, which takes five steps to converge, are illustrated in Figure 15, where the estimated trajectory is used to generate the image, and in Figure 16, where the error between the GPS and the estimated trajectory is shown. That error can be compared to the error of the trajectory with the initial values of the parameters, θ^0, shown in Figure 17. It can be seen from these two plots that the improvement in the Y-direction is much smaller than the improvement in the X-direction. Only the entropy 2 optimisation is shown: the total cost function and the focus measure are depicted in Figures 18 and 19, respectively. The image resulting from the estimated trajectory is hard to distinguish from the GPS-based trajectory image, except that it is shifted in the range direction. This ambiguity is, unfortunately, unobservable in the auto-focusing process, i.e., the method is invariant to translations of the image.

Figure 17: Error in position (X- and Y-position, [m]) between the GPS and the initial trajectory, as a function of time [s].

Figure 18: Total cost function as a function of the iteration number.

Figure 19: Entropy 2 value as a function of the iteration number. The entropy 2 value for the GPS-based image is also shown for comparison.

VIII. CONCLUSIONS

A method is presented, based on a decentralised sensor fusion framework, which is intended to provide better focused SAR images on future cheap and small SAR platforms. The approach is based on jointly optimising a focus measure and the error in the navigation states. As was concluded from the simulation examples with a simple scene and from the real SAR data, the method works fairly well, although not all states are observable. Nevertheless, even if the whole navigation state vector cannot be corrected, the resulting SAR image is much more focused after optimisation than the original one. An important theoretical contribution, needed to reach the requirements on computational complexity, is an analytical expression for the gradient of a focus measure. This result enables an implementation of gradient search algorithms that add only marginal complexity to the SAR imaging process.

ACKNOWLEDGMENTS

The authors would like to thank Lars Ulander and Anders Gustavsson from the Swedish Defence Research Agency (FOI) in Linköping, Sweden, for providing the real CARABAS II data and for their help with it. Our thanks go also to Hans Hellsten from Saab Electronic Defence Systems in Göteborg, Sweden, for all the help with the SAR system theory.

This work was supported by the Industry Excellence Center LINK-SIC, funded by the Swedish Governmental Agency for Innovation Systems, VINNOVA, and Saab AB.

REFERENCES

[1] L. J. Cutrona, W. E. Vivian, E. N. Leith, and G. O. Hall, "A high-resolution radar combat-surveillance system," IRE Transactions on Military Electronics, vol. MIL-5, no. 2, pp. 127–131, April 1961.

[2] J. A. Fawcett, "Inversion of N-dimensional spherical averages," SIAM Journal on Applied Mathematics, vol. 45, no. 2, pp. 336–341, 1985. [Online]. Available: http://www.jstor.org/stable/2101820

[3] H. Hellsten and L. E. Andersson, "An inverse method for the processing of synthetic aperture radar data," Inverse Problems, vol. 3, no. 1, p. 111, 1987. [Online]. Available: http://stacks.iop.org/0266-5611/3/i=1/a=013

[4] L. E. Andersson, "On the determination of a function from spherical averages," SIAM Journal on Mathematical Analysis, vol. 19, no. 1, pp. 214–232, 1988. [Online]. Available: http://link.aip.org/link/?SJM/19/214/1

[5] C. Cafforio, C. Prati, and F. Rocca, "SAR data focusing using seismic migration techniques," IEEE Transactions on Aerospace and Electronic Systems, vol. 27, no. 2, pp. 194–207, March 1991.

[6] F. Rocca, "Synthetic aperture radar: a new application for wave equation techniques," Stanford Exploration Project SEP-56, pp. 167–189, 1987. [Online]. Available: http://sepwww.stanford.edu/oldreports/sep56/56 13.pdf

[7] A. S. Milman, "SAR imaging by Omega-K migration," International Journal of Remote Sensing, vol. 14, no. 10, pp. 1965–1979, 1993.

[8] F. Natterer, The Mathematics of Computerised Tomography. New York: Wiley, 1986.

[9] L. M. H. Ulander, H. Hellsten, and G. Stenström, "Synthetic-aperture radar processing using fast factorized back-projection," IEEE Transactions on Aerospace and Electronic Systems, vol. 39, no. 3, pp. 760–776, July 2003.

[10] C. Oliver and S. Quegan, Understanding Synthetic Aperture Radar Images, ser. The SciTech Radar and Defense Series. SciTech, 2004.

[11] D. E. Wahl, P. H. Eichel, D. C. Ghiglia, and C. V. J. Jakowatz, "Phase gradient autofocus – a robust tool for high resolution SAR phase correction," IEEE Transactions on Aerospace and Electronic Systems, vol. 30, no. 3, pp. 827–835, July 1994.

[12] M. Xing, X. Jiang, R. Wu, F. Zhou, and Z. Bao, "Motion compensation for UAV SAR based on raw radar data," IEEE Transactions on Geoscience and Remote Sensing, vol. 47, no. 8, pp. 2870–2883, August 2009.

[13] J. R. Fienup, "Phase error correction by shear averaging," in Signal Recovery and Synthesis. Optical Society of America, June 1989, pp. 134–137.

[14] R. L. Morrison, Jr. and D. C. Munson, Jr., "An experimental study of a new entropy-based SAR autofocus technique," in Proceedings of the International Conference on Image Processing, ICIP 2002, vol. 2, September 2002, pp. II-441–4.

[15] H. Durrant-Whyte and T. Bailey, "Simultaneous localization and mapping: Part I," IEEE Robotics & Automation Magazine, vol. 13, no. 2, pp. 99–110, June 2006.

[16] T. Bailey and H. Durrant-Whyte, "Simultaneous localization and mapping (SLAM): Part II," IEEE Robotics & Automation Magazine, vol. 13, no. 3, pp. 108–117, September 2006.

[17] J. Farrell and M. Barth, The Global Positioning System and Inertial Navigation. McGraw-Hill Professional, 1999.

[18] L. Xi, L. Guosui, and J. Ni, "Autofocusing of ISAR images based on entropy minimization," IEEE Transactions on Aerospace and Electronic Systems, vol. 35, no. 4, pp. 1240–1252, October 1999.

[19] A. F. Yegulalp, "Minimum entropy SAR autofocus," in 7th Adaptive Sensor Array Processing Workshop, March 1999.

[20] J. Nocedal and S. J. Wright, Numerical Optimization. New York: Springer, 2006.

[21] H. Hellsten, L. M. Ulander, A. Gustavsson, and B. Larsson, "Development of VHF CARABAS II SAR," in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, vol. 2747, June 1996, pp. 48–60.

Zoran Sjanic has been a PhD student at the Division of Automatic Control, Department of Electrical Engineering, Linköping University, since 2008. He received the M.Sc. degree in computer science and engineering in 2001 and the Tekn.Lic. degree in Automatic Control in 2011, both from Linköping University. His research interests are sensor fusion for navigation of manned and unmanned aircraft, Simultaneous Localisation and Mapping (SLAM) and nonlinear estimation methods.

He has also been employed by Saab Aeronautics in Linköping, Sweden, since 2001, where he works at the Navigation Department as a systems engineer. He worked as a technical manager for the navigation system in both the Gripen fighter aircraft and the Skeldar UAV before starting his PhD studies.

Fredrik Gustafsson has been professor in Sensor Informatics at the Department of Electrical Engineering, Linköping University, since 2005. He received the M.Sc. degree in electrical engineering in 1988 and the Ph.D. degree in Automatic Control in 1992, both from Linköping University. During 1992-1999 he held various positions in automatic control, and during 1999-2005 he held a professorship in Communication Systems. His research interests are in stochastic signal processing, adaptive filtering and change detection, with applications to communication, vehicular, airborne, and audio systems. He is a co-founder of the companies NIRA Dynamics (automotive safety systems), Softube (audio effects) and SenionLab (indoor positioning systems).

He was an associate editor for IEEE Transactions on Signal Processing 2000-2006 and is currently an associate editor for IEEE Transactions on Aerospace and Electronic Systems and EURASIP Journal on Applied Signal Processing. He was awarded the Arnberg prize by the Royal Swedish Academy of Science (KVA) in 2004, elected member of the Royal Academy of Engineering Sciences (IVA) in 2007, elevated to IEEE Fellow in 2011, and awarded the Harry Rowe Mimno Award 2011 for the tutorial "Particle Filter Theory and Practice with Positioning Applications", which was published in the AESS Magazine in July 2010.