
The Dynamic Allan Variance

Lorenzo Galleani, Senior Member, IEEE, and Patrizia Tavella, Member, IEEE

Abstract—We present and discuss the dynamic Allan variance, a measure of the time-varying stability of an atomic clock. First, the dynamic Allan variance is mathematically defined, then its behavior is extensively tested on simulated and experimental data. The results prove the validity and the effectiveness of the proposed new tool.

I. Introduction

Atomic clocks are the undisputed reference for the measurement of time, and their performance in terms of cost, reliability, efficiency, and lifetime, in addition to accuracy and stability, allows their use in the most demanding modern applications, such as navigation systems, telecommunication networks, and tests of fundamental physics. A fundamental characteristic of an atomic clock is its stability, which is defined through the Allan variance, a standard quantity for the characterization of atomic clocks, recommended by international standardization organizations [1]–[3]. Other variances are also used [4]–[7], which are generally estimated on the longest possible data series. Nevertheless, it is well known that atomic clocks undergo sudden failures, are influenced by environmental factors such as temperature and humidity, and eventually age and break down like any physical device. Consequently, the stability of an atomic clock varies with time, and it therefore becomes important to introduce a quantity that can represent the time-varying stability of the clock. We present the dynamic Allan variance (DAVAR), a transform that defines the stability of an atomic clock as a function of time.

In [8]–[10] we introduced the DAVAR, and we have carried out some preliminary analysis and simulations to prove its validity for the characterization of atomic clocks under nonstationary behavior. In this article, we present and discuss the mathematical foundations of the DAVAR, and we show its effectiveness by performing extensive numerical analyses on simulated and experimental data. A discussion of similar dynamical extensions of other types of variances, such as the Hadamard, the time variance, and the structure functions, follows.

We point out that, since the introduction of the DAVAR, some applications and implementations have been developed showing the interest of the time community. In particular:

• The DAVAR is routinely used for the characterization of the clocks onboard the GIOVE-A and GIOVE-B satellites, the first experimental satellites of the Galileo system [11]; the DAVAR is also part of the algorithm that generates the reference time scale of the Galileo system.
• The DAVAR is currently used by the U.S. Naval Research Laboratory for the monitoring and testing of the GPS clocks [12].
• The DAVAR has been proposed as a tool to monitor the quartz clocks onboard the space probes for the NASA far space missions to Mars, Mercury, and Pluto [13].
• The DAVAR has been implemented in STABLE32, the commercial software generally used for clock stability analysis [14]. A recent update is available for downloading with an improved version of the DAVAR that features increased graphic capabilities.
• The DAVAR has been implemented in a free Matlab software package for clock data analysis, developed by the U.S. Naval Research Laboratory [15].

Besides time and frequency, we believe the DAVAR can be applied to any time series of measurements. The Allan variance on which the DAVAR is based has in fact already been proposed as a statistical tool for all fields of metrology [16].

The article is structured as follows. In Section II, we review the classical Allan variance and establish our notation. In Section III, we present the dynamic Allan variance. We give both the continuous-time and discrete-time formulations, and we also show how to estimate the DAVAR from experimental data. The DAVAR is then applied to simulated data and experimental data, and the obtained results prove its validity. Finally, we describe an algorithm that can be implemented to compute the DAVAR.

To help the reader in following our derivations, in Table I we give a glossary of symbols¹ and acronyms used throughout the article.

II. The Allan Variance

The model of the ideal oscillator signal is given by the sinusoidal function [17]

u(t) = U_0 \sin(2\pi\nu_0 t)    (1)

Manuscript received August 5, 2007; accepted August 5, 2008. L. Galleani is with Politecnico di Torino, Torino, Italy (email: [email protected]). P. Tavella is with Istituto Nazionale di Ricerca Metrologica (INRIM), Torino, Italy (email: [email protected]). Digital Object Identifier 10.1109/TUFFC.2009.1064

¹ We use bold letters for stochastic quantities.


where U_0 is the nominal amplitude and ν_0 the nominal frequency. In reality, both the amplitude and frequency are subjected to deterministic and random fluctuations, and hence the model becomes

u(t) = (U_0 + \epsilon(t)) \sin(2\pi\nu_0 t + \phi(t)).    (2)

The quantities ε(t) and ϕ(t) represent the amplitude and phase fluctuations, respectively. In the case of high-precision atomic clocks, the amplitude fluctuations ε(t) are negligible, and the stability of the oscillator is characterized through the stability of the instantaneous phase ϕ(t). In particular, it is important to define the instantaneous frequency ν(t)

\nu(t) = \nu_0 + \frac{1}{2\pi}\frac{d\phi(t)}{dt}.    (3)

The relative deviation of the instantaneous frequency from its nominal value ν_0 is the normalized frequency deviation.

y(t) = \frac{\nu(t) - \nu_0}{\nu_0}    (4)

Another fundamental quantity for clock characterization is the phase deviation x(t), connected to y(t) through the relationship

y(t) = \frac{dx(t)}{dt}.    (5)

The frequency deviation y(t) is dimensionless, while x(t) has the dimension of time. Both x(t) and y(t) are random quantities, and they are referred to as clock phase and clock frequency noise, respectively.

The standard way to evaluate the stability of an oscillator is the Allan variance [4], [18]

\sigma_y^2(\tau) = \frac{1}{2}\left\langle \Delta^2(t_k, \tau) \right\rangle    (6)

where τ > 0 is the observation interval, and

\Delta(t_k, \tau) = \bar{y}_{t_k+\tau} - \bar{y}_{t_k}.    (7)

The averaged measure \bar{y}_{t_k} at time t_k is given by

\bar{y}_{t_k} = \frac{1}{\tau}\int_{t_k - \tau}^{t_k} y(t)\, dt = \frac{x(t_k) - x(t_k - \tau)}{\tau}.    (8)

The operator ⟨ ⟩ in (6) denotes ensemble averaging, which, in practice, is performed through a time average, under the assumption of ergodicity.

If the Allan variance is to be estimated on N discrete samples x[n] of the phase deviation x(t), then the time axis t is sampled as

t = n\tau_0    (9)

where τ_0 is the sampling time. Consequently, the τ axis is discretized as

\tau = k\tau_0.    (10)

Because k = 1, 2, …, τ_0 can also be interpreted as the minimum observation interval. Consequently, the discrete-time Allan variance becomes

\sigma_y^2[k] = \frac{1}{2 k^2 \tau_0^2 (N - 2k)} \sum_{n=0}^{N-2k-1} \left( x[n+2k] - 2x[n+k] + x[n] \right)^2.    (11)
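For readers who want to reproduce (11) numerically, the following Python sketch (using NumPy; the function name allan_deviation and its interface are our own, not part of any standard package) estimates the Allan deviation from a vector of phase samples.

```python
import numpy as np

def allan_deviation(x, tau0):
    """Estimate the Allan deviation from phase samples x[n], following (11).

    x    : 1-D array of phase-deviation samples
    tau0 : sampling time (minimum observation interval)
    Returns (taus, adev), the observation intervals and the estimated deviation.
    """
    x = np.asarray(x, dtype=float)
    N = x.size
    taus, adev = [], []
    for k in range(1, (N - 1) // 2 + 1):
        # Second differences x[n+2k] - 2 x[n+k] + x[n], for n = 0 .. N-2k-1.
        d2 = x[2 * k:] - 2.0 * x[k:N - k] + x[:N - 2 * k]
        avar = np.mean(d2 ** 2) / (2.0 * (k * tau0) ** 2)
        taus.append(k * tau0)
        adev.append(np.sqrt(avar))
    return np.array(taus), np.array(adev)
```

Applied to a unit-variance white Gaussian phase sequence with tau0 = 1, the output follows the τ^{-1} slope typical of white phase noise.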

If the measurements are continuous with time we can write

\Delta(t, \tau) = h_\tau(t) * y(t) = \int_{-\infty}^{\infty} h_\tau(t - t')\, y(t')\, dt'    (12)

where the star sign stands for convolution, and h_τ(t) is the Allan window, defined as

h_\tau(t) = \begin{cases} -\dfrac{1}{\tau}, & 0 < t \le \tau \\ \dfrac{1}{\tau}, & -\tau < t \le 0 \\ 0, & \text{elsewhere} \end{cases}.    (13)

Using this notation, the Allan variance becomes


TABLE I. Glossary of symbols and acronyms.

Acronym/Symbol: Description
DAVAR: Dynamic Allan variance
DADEV: Dynamic Allan deviation
t: Continuous time
n: Discrete time
τ: Observation interval (continuous time)
k: Observation interval (discrete time)
τ_0: Sampling time (minimum observation interval)
x(t): Phase deviation (continuous time)
x[n]: Phase deviation (discrete time)
y(t): Normalized frequency deviation (continuous time)
y[n]: Normalized frequency deviation (discrete time)
σ_y²(τ): Allan variance (continuous time)
σ_y²[k]: Allan variance (discrete time)
σ_y²(t, τ): Dynamic Allan variance (continuous time)
T: Window length of the DAVAR (continuous time)
σ_y²[n, k]: Dynamic Allan variance (discrete time)
N: Window length of the DAVAR (discrete time)
σ̂_y²(t, τ): Estimated DAVAR (continuous time)
σ̂_y²[n, k]: Estimated DAVAR (discrete time)


\sigma_y^2(\tau) = \frac{1}{2}\left\langle \Delta^2(t, \tau) \right\rangle.    (14)

The dynamic Allan variance is an extension of the Allan variance.

III. The Dynamic Allan Variance

The dynamic Allan variance is defined in a very intuitive way. If we suspect that a random signal x(t) representing the phase or frequency error (or any other physical quantity) has time-varying properties, we can repeat the evaluation of the Allan variance at different epochs. Then we collect all the variances obtained at every epoch, and we plot the result in a single 3-D graph. This procedure corresponds to the definition of the dynamic Allan variance. The procedure can be summarized as follows:

1. Fix the analysis epoch at t = t_1.
2. Truncate the signal x(t) with a window of length T centered at t_1.
3. Evaluate the Allan variance σ_y(t_1, τ).
4. Choose another analysis epoch, say t = t_2, and repeat from step 2. The new epoch t_2 can be chosen so that the corresponding truncated signal overlaps with the truncated signal related to the previous epoch t_1.

At the end of the above procedure, the collection of variances σ_y(t_1, τ), …, σ_y(t_M, τ), related to the M different epochs t_k and to the different observation intervals τ, gives a measure of the instantaneous stability of x(t). In what follows we derive the DAVAR, both for the continuous and discrete-time cases, by following the above procedure. We then propose an estimator for the DAVAR and test it on simulated and experimental data.

A. Continuous-Time Formulation

We obtain the continuous-time formulation of the DAVAR by starting from the signal y(t) that represents the frequency deviation. We first truncate the signal on the interval

t - T/2 \le t' \le t + T/2    (15)

by means of a rectangular window

y_T(t, t') = y(t')\, P_T(t - t')    (16)

where y_T(t, t′) is the truncated signal and P_T(t′) is the rectangular window of length T, defined as

P_T(t) = \begin{cases} 1, & |t| \le T/2 \\ 0, & \text{elsewhere} \end{cases}.    (17)

For any window, the epoch t is a fixed parameter that represents the center of the analysis window P_T(t − t′), and the support variable t′ describes the elapsing time inside the window. We now build the increment process Δ(t, t′, τ) by convolving the truncated signal with the Allan window h_τ(t′)

\Delta(t, t', \tau) = \int_{-\infty}^{\infty} h_\tau(t' - t'')\, y_T(t, t'')\, dt''    (18)

where the variables are constrained by

t - (T/2 - \tau) \le t' \le t + (T/2 - \tau)    (19)

0 < \tau \le \tau_{\max}.    (20)

The bound τ_max is the maximum observation interval over which the Allan variance can be estimated with the data contained in the window T, and it can be chosen for example as

\tau_{\max} = T/3.    (21)

We now square the increment and average in time with respect to t′.

\hat{\sigma}_y^2(t, \tau) = \frac{1}{2}\left\langle \Delta^2(t, t', \tau) \right\rangle_{t'}    (22)

= \frac{1}{2(T - 2\tau)} \int_{t - T/2 + \tau}^{t + T/2 - \tau} \Delta^2(t, t', \tau)\, dt'.    (23)

The quantity \hat{\sigma}_y^2(t, \tau) is still random, because it changes with every realization of Δ(t, t′, τ), and it basically represents the instantaneous stability of the individual realization of the process y(t).

We define the dynamic Allan variance as the ensemble average (expectation value) of (22)

\sigma_y^2(t, \tau) = \frac{1}{2} E\left[ \left\langle \Delta^2(t, t', \tau) \right\rangle_{t'} \right]    (24)

where we still have

0 < \tau \le \tau_{\max}    (25)

and the constraint on t′ is not necessary anymore because it is already embedded in the integration limits of (23). Now σ_y²(t, τ) is a deterministic quantity for every t and τ. We also define the dynamic Allan deviation, or DADEV, as

\sigma_y(t, \tau) = \sqrt{\sigma_y^2(t, \tau)}.    (26)

We point out that the length T of the analysis window plays a fundamental role in the evaluation of the DAVAR,


and the effect of different choices of T will be discussed in Section III-E. Moreover, we note that other types of analysis windows can be chosen, such as Hanning, Hamming, Gaussian, or triangular. The choice of the window is a subject that may be worth investigating, because changing the window could improve the performance of the DAVAR.

It is interesting to formulate the DAVAR as a function of the phase deviation x(t). Using (5), we rewrite the increment process given in (18) as

\Delta(t, t', \tau) = \frac{1}{\tau}\int_{t'}^{t'+\tau} y(t, t'')\, dt'' - \frac{1}{\tau}\int_{t'-\tau}^{t'} y(t, t'')\, dt''    (27)

= \frac{1}{\tau}\left[ x(t' + \tau) - 2x(t') + x(t' - \tau) \right]    (28)

where we use y(t, t′′) instead of y_T(t, t′′) because in the time interval t − (T/2 − τ) ≤ t′ ≤ t + (T/2 − τ) they are identical. Substitution into (24) gives

\sigma_y^2(t, \tau) = \frac{1}{2\tau^2(T - 2\tau)} \int_{t - T/2 + \tau}^{t + T/2 - \tau} E\left[ \left( x(t' + \tau) - 2x(t') + x(t' - \tau) \right)^2 \right] dt'    (29)

with, as usual

0 < \tau \le \tau_{\max}.    (30)

B. Discrete-Time Formulation

The dynamic Allan variance given in (29) is continuous both in t and τ. We now derive the discrete-time DAVAR, which is necessary when dealing with discrete-time measurements. First, the time axis t and the observation interval τ are sampled, by using (9) and (10), respectively. Consequently, the following variables also are discretized:

t' = m\tau_0, \qquad T = N\tau_0.    (31)

By substituting (9), (10), and (31) in (29), and by replacing the integral with a summation, we obtain

\sigma_y^2(n\tau_0, k\tau_0) = \frac{1}{2 k^2 \tau_0^2 (N - 2k)} \sum_{m = n - N/2 + k}^{n + N/2 - k - 1} E\left[ \left( x((m+k)\tau_0) - 2x(m\tau_0) + x((m-k)\tau_0) \right)^2 \right].    (32)

It is better to rewrite this expression with the standard notation of discrete-time signals [19]

\sigma_y^2[n, k] = \frac{1}{2 k^2 \tau_0^2 (N - 2k)} \sum_{m = n - N/2 + k}^{n + N/2 - k - 1} E\left[ \left( x[m+k] - 2x[m] + x[m-k] \right)^2 \right]    (33)

where x[m] = x(mτ_0). We can readily see the connection to the definition of the discrete-time Allan variance, (11). As a matter of fact, (33) can be obtained directly from (11) by introducing the truncation process, obtained using the rectangular window. We have instead derived it from the continuous-time formulation, pointing out the immediate connection between the 2 formulations. Moreover, we note that it is possible to write the DAVAR from the frequency y[n] by working on (23).

If we remove the expectation value, then (33) becomes an (unbiased [20]) estimator of the DAVAR, and it can hence be applied to a series of measured data

\hat{\sigma}_y^2[n, k] = \frac{1}{2 k^2 \tau_0^2 (N - 2k)} \sum_{m = n - N/2 + k}^{n + N/2 - k - 1} \left( x[m+k] - 2x[m] + x[m-k] \right)^2.    (34)

Similarly, (23) represents the estimate of the DAVAR obtained from a single realization of continuous-time measurements x(t).

We note that to evaluate the DAVAR of a frequency signal y[n], (34) can still be used, provided that the corresponding phase signal x[n] is obtained from y[n] by numerical integration, according to (5).
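As an illustration of (34), the following Python sketch (our own, hypothetical implementation, not the authors' code) evaluates σ̂_y²[n, k] at a single epoch n from a vector of phase samples; a frequency series y[n] would first be converted to phase with x = tau0 * np.cumsum(y), in the spirit of (5).

```python
import numpy as np

def davar_at_epoch(x, n, N, tau0):
    """Estimate the DAVAR (34) at epoch n, with a rectangular window of
    N samples (N even) centered at n.

    Returns a 1-D array whose entry k-1 is sigma_y^2 at tau = k * tau0,
    for k = 1 .. floor(N/3), cf. (21).
    """
    x = np.asarray(x, dtype=float)
    k_max = N // 3
    out = np.full(k_max, np.nan)
    for k in range(1, k_max + 1):
        # Centers m = n - N/2 + k, ..., n + N/2 - k - 1, as in (34).
        m = np.arange(n - N // 2 + k, n + N // 2 - k)
        # Keep only triplets (m-k, m, m+k) that fall inside the data record.
        m = m[(m - k >= 0) & (m + k < x.size)]
        if m.size == 0:
            continue
        d2 = x[m + k] - 2.0 * x[m] + x[m - k]
        out[k - 1] = np.mean(d2 ** 2) / (2.0 * (k * tau0) ** 2)
    return out
```

Repeating the call over a grid of epochs and stacking the results yields the 3-D DAVAR surface discussed in the following sections.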

C. Simulation Results

The dynamic Allan variance is a representation of the time-varying stability of an atomic clock, and more generally of a stochastic process. The goal of the DAVAR is to track changes in the stability of a clock, pointing out sudden variations, cyclic behaviors, and slow variations, among the many possible nonstationarities. In this section, we apply the DAVAR to simulated nonstationary processes, and we show that it is a highly effective way to characterize changes in stability.

In particular, we consider one stationary situation and 5 nonstationary situations. The stationary case is a white Gaussian phase noise, which we generate using the following model:


x[n] = \sigma\, \phi[n]    (35)

where φ[n] is a (real) white Gaussian sequence, whose mean μ_φ and autocorrelation function R_φ are given by

\mu_\phi = E[\phi[n]] = 0    (36)

R_\phi[n_1, n_2] = E[\phi[n_1]\,\phi[n_2]] = \delta[n_1 - n_2].    (37)

Here δ[n] indicates the Kronecker delta function, defined as

\delta[n] = \begin{cases} 1, & n = 0 \\ 0, & n \ne 0 \end{cases}.    (38)

The result of the application of the DAVAR to this case is given in Section III-C-1.

The first 4 nonstationary cases, discussed in Sections III-C-2 to III-C-5, are described by the following model of the discrete phase deviation x[n]:

x[n] = \sigma[n]\, \phi[n]    (39)

where σ[n] is a function of time. By using (36), (37), and (39), we see that the variance at time n of x[n] is given by

\sigma_x^2[n] = E[x^2[n]] = \sigma^2[n].    (40)

We obtain different nonstationarities by changing the function σ[n], which plays the role of an instantaneous standard deviation. The results show how the DAVAR tracks the nonstationarities.
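As a minimal sketch of how such test signals can be generated (our own Python construction, not the authors' simulation code), the phase deviation follows (39), with σ[n] chosen for instance as the step and cyclic profiles introduced in the subsections below.

```python
import numpy as np

rng = np.random.default_rng(0)
Nx = 1000                              # record length used in the examples
phi = rng.standard_normal(Nx)          # white Gaussian sequence phi[n], cf. (36)-(37)
n = np.arange(Nx)

# Step standard deviation: sigma1 for n < n0, sigma2 afterwards, cf. (41).
sigma_step = np.where(n < 500, 1.0, 2.0)

# Cyclic standard deviation: sigma0 + Delta_s * cos(2*pi*fs*n), cf. (44).
sigma_cyclic = 1.0 + 0.2 * np.cos(2.0 * np.pi * 2e-3 * n)

# Nonstationary phase deviations x[n] = sigma[n] * phi[n], cf. (39).
x_step = sigma_step * phi
x_cyclic = sigma_cyclic * phi
```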

In the fifth and last nonstationary case, described in Section III-C-6, we consider a process x[n] made by the sum of 2 noises, namely a white Gaussian frequency noise and a time-varying Gaussian random walk frequency noise. The random walk frequency noise is obtained by adding increments whose standard deviation increases linearly with time.

The results for the 6 test cases are reported in Figs. 1–24. Every analyzed case has 4 pictures:

1. The phase deviation x[n]. We choose a sampling time τ_0 = 1, and we plot a single realization of the simulated phase deviation.

2. The Allan deviation σ_y[k] of x[n], obtained by using (11).

3. The dynamic Allan deviation σ_y[n, k]. This is the ensemble average defined in (33). The DADEV is obtained by simulating a large number of realizations of the random process x[n], and by averaging out all the DADEV estimates σ̂_y[n, k] computed for each of them.

4. The estimate of the dynamic Allan deviation σ̂_y[n, k], (34), obtained from the realization x[n] reported in the first plot.

Because the continuous-time notation is more familiar than the discrete-time one, in all the pictures we use t for time, rather than n, and τ for the observation interval, rather than k.

1) Stationary Case: White Gaussian Noise: In Fig. 1, we show a realization of the white Gaussian sequence x[n] governed by (35) with σ = 1, while in Fig. 2, we report the corresponding Allan deviation. We see that, as expected, the Allan deviation has a slope proportional to τ^{-1}, which indicates a white phase noise. We also see that σ_y(τ = 1) = √3, as expected in the case of white phase noise [18]. Because x[n] is a stationary random process, we expect the instantaneous stability to be constant with time as well. This is precisely what happens, as can be noticed from Fig. 3, where the dynamic Allan deviation is shown. The DADEV is constant with time, and changes with τ only, with the typical slope of white phase noise, and it has the same slope as the Allan deviation of Fig. 2. Fig. 4 shows the estimate of the DADEV of the single realization reported in Fig. 1, obtained with a window of N = 100 samples. The estimator exhibits a variance in the estimate, which is a function of the window length N. The shorter the window is, the higher the variance, and vice versa. In future work, we will discuss the concept of a confidence interval for the DAVAR and also how to represent the confidence on a 3-D picture in an understandable way.
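As a quick check of this value, note that for the white phase noise (35) the three samples entering the second difference in (11) are independent with variance σ², so that

E\left[\left(x[n+2k] - 2x[n+k] + x[n]\right)^2\right] = (1 + 4 + 1)\,\sigma^2 = 6\sigma^2,
\qquad
\sigma_y^2(\tau) = \frac{6\sigma^2}{2k^2\tau_0^2} = \frac{3\sigma^2}{\tau^2},

which, with σ = 1 and τ_0 = 1, gives σ_y(τ) = √3/τ: the τ^{-1} slope of Fig. 2 and the value √3 at τ = 1.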

2) Nonstationary Case 1: Step Variance: In Fig. 5, we show a realization of a nonstationary white noise x[n] that follows the model given in (39). In this case the time-varying standard deviation takes the following values:

\sigma[n] = \begin{cases} \sigma_1, & n < n_0 \\ \sigma_2, & n \ge n_0 \end{cases}    (41)

where σ_1 = 1, σ_2 = 2, and n_0 = 500. In Fig. 6, we show the corresponding Allan deviation. To our surprise, we see that it has a constant slope with no evidence of the step variance. It is the slope of a pure white phase noise, with a variance corresponding to the average of the 2 variances over the 2 periods. We now apply our method and compute the dynamic Allan deviation, which is shown in Fig. 7, while the estimate obtained on the sequence of Fig. 5 for a window of length N = 100 is represented in Fig. 8. As expected, the DADEV highlights the presence of a nonstationary noise, and we can infer that the signal is composed of the sequence of 2 stationary white phase noises. The transition happens in the middle of the signal, and we see that it is not perfectly sharp, but there is instead a transition region from one noise to the other. This smooth transition is a typical behavior of the DADEV, and its shape is a function of the length of the analysis window. To track fast variations in time, we need a short window; while to keep the variance of the estimate low, we need a long window. This is an evident tradeoff, and hence we have to adapt our choice from time to time in a


reasonable way (we analyze this tradeoff in Section III-E). The estimate of the DADEV shown in Fig. 8 points out again 2 distinguishable stationary periods, even if the estimate has an uncertainty that is due to the limited amount of data inside the window.

3) Nonstationary Case 2: Bump Variance: We now choose the time-varying variance in (39) to be constant on 3 consecutive time intervals

\sigma[n] = \begin{cases} \sigma_1, & n < n_1 \\ \sigma_2, & n_1 \le n < n_2 \\ \sigma_1, & n \ge n_2 \end{cases}    (42)

where σ_1 = 1, σ_2 = 2, n_1 = 400, and n_2 = 600. In Fig. 9, we show a realization of the corresponding signal x[n] generated using (39). We clearly notice the change in variance. In Fig. 10, the Allan deviation is shown, and again we see that there is no visible trace of the nonstationarity going on in the signal. From the Allan deviation, we would hence conclude that the signal is a pure stationary white noise. Now we observe the DADEV of Fig. 11, and we immediately see the nonstationarity. The corresponding estimate for a window of length N = 100 samples is given in Fig. 12. We see that the DADEV recognizes the 3 stationary regions of x[n]. The transition from one region to another is again not sharp, but on the contrary has a slightly curved shape. This is a consequence, as for the previous case, of the length of the analysis window.

4) Nonstationary Case 3: Linear Standard Deviation: In this case we choose the standard deviation to be linearly varying with time

\sigma[n] = \sigma_1 + \frac{\sigma_2 - \sigma_1}{N_x}\, n    (43)

where σ_1 = 1, σ_2 = 10, and N_x = 1000. We expect this variation to influence the instantaneous stability of the phase error x[n], generated according to (39). The signal x[n] is shown in Fig. 13, where we notice the variation with time of the standard deviation.


Fig. 1. Stationary white Gaussian phase noise x[n] generated using the model given in (35).

Fig. 2. Allan deviation of the process shown in Fig. 1. Notice the typical slope of white phase noise.

Fig. 3. Dynamic Allan deviation of the signal x[n], a realization of which is shown in Fig. 1. As expected, the DADEV is constant with time and has the typical slope of white phase noise.

Fig. 4. Estimate of the dynamic Allan deviation shown in Fig. 3, obtained from the realization of Fig. 1 with a window of N = 100 samples. Notice the variation of the estimate.


In Fig. 14, the Allan deviation is represented, and again we see the same result as in the previous cases, that is, the Allan deviation does not reveal the nonstationarity of the phase deviation x[n]. The dynamic Allan deviation, shown in Fig. 15, tracks the nonstationarity. Besides the fluctuations due to the estimation process, the estimated DADEV, reported in Fig. 16 for a window length of N = 100 samples, identifies the change in the noise variance.

5) Nonstationary Case 4: Cyclic Variance: A cyclostationary behavior can be modeled by a time-varying standard deviation that takes the values

\sigma[n] = \sigma_0 + \Delta_s \cos(2\pi f_s n)    (44)

where σ_0 = 1, Δ_s = 0.2, and f_s = 2 × 10^{-3}. The random process x[n], generated using (39), is shown in Fig. 17, where we notice a certain periodicity of the noise variance. In Fig. 18, the Allan deviation is represented, and again we notice that it is not different from the previous nonstationary cases. It is clearly impossible to detect the periodicity of the noise from the Allan variance. We instead observe the dynamic Allan deviation of Fig. 19, where the cyclic variation is immediately recognizable. From the picture we can also estimate the period of oscillation, that is, 1/f_s = 500 samples. The estimate is given in Fig. 20, for a window of N = 100 samples.

6) Nonstationary Case 5: Time-Varying Noise: Finally, we consider a nonstationary frequency deviation y[n] made by the sum of 2 different noises, namely, a white Gaussian frequency noise and a time-varying random-walk Gaussian frequency noise. The random-walk frequency noise is obtained by adding Gaussian increments that have a standard deviation that increases linearly with time (the standard deviation increases by a factor of 50%). The phase deviation x[n] is obtained by numerically


Fig. 5. Noise x[n] generated according to (39) with the time-varying standard deviation given by (41). Notice the change in variance at n = 500.

Fig. 6. Allan deviation of x[n] from Fig. 5. The Allan deviation has the typical slope of white phase noise and does not point out the nonstationarity in x[n].

Fig. 7. Dynamic Allan deviation of the signal x[n], a realization of which is shown in Fig. 5. The DADEV highlights in a clear way the change in variance of the noise.

Fig. 8. Estimate of the DADEV shown in Fig. 7, obtained from the realization of Fig. 5 with a window of N = 100 samples. Notice how the nonstationarity is clearly visible, even if embedded in the uncertainty of the DADEV estimate.


integrating the frequency signal generated by adding the 2 noises. In Fig. 21, we show y[n]. From the signal in time it is very hard to understand whether the clock is behaving in a stationary or nonstationary way. The corresponding Allan deviation is shown in Fig. 22, and we see that it points out the presence of a white frequency noise and a random-walk frequency noise. It is not possible to detect the time-varying behavior of the random-walk noise, which has been averaged out by the Allan deviation. The dynamic Allan deviation, shown in Fig. 23, gives a much better representation of the nonstationarity of the noise y[n]. We immediately see that there is an increase in the variance of the noise for large observation intervals, while for small observation intervals the variance is constant with time. In Fig. 24, the estimate of the dynamic Allan deviation is represented. Again we notice the increase in variance highlighted by the DADEV.

We point out that, by looking at the Allan deviation of Fig. 22, we would be tempted to identify a frequency drift, because the slope of the Allan deviation is roughly τ^{+1} for τ > 100. Consequently, we would remove such a nonexistent drift from the original time series y[n]. This conclusion would be erroneous, as proved by the DADEV of Fig. 24.

D. Experimental Results

We now discuss 2 sets of experimental data, the first belonging to a rubidium clock undergoing space flight certification tests, and the second obtained from a GPS clock. In both cases, the use of the dynamic Allan variance reveals the presence of a nonstationary behavior of the clock.

1) Case 1: Rubidium Clock: In Fig. 25, the frequency data y(t) of a rubidium clock are shown. The clock is undergoing a test campaign for space certification, which results in a highly nonstationary time series. As a matter of fact, the signal y(t) shows several outliers, and also a mean value μ(t) that changes with time, sometimes in an abrupt way, as happens for example around t = 9.5 × 10^5 s. We also expect the clock stability to be a function of time.


Fig. 9. Nonstationary signal x[n] generated using (39) and (42). Notice the change of variance in the middle of the signal.

Fig. 10. Allan deviation of x[n] from Fig. 9. We again see the typical slope of white phase noise, without evident traces of the nonstationarity in x[n].

Fig. 11. Dynamic Allan deviation of the signal x[n], a realization of which is shown in Fig. 9. The DADEV represents the change in variance in a clear and intuitive way.

Fig. 12. Estimate of the DADEV shown in Fig. 11, obtained from the realization of Fig. 9 with a window of length N = 100. Notice that the change in stationarity is clearly visible.


In Fig. 26, we show the Allan deviation σ_y(τ) of the clock data. The nonstationarities are not visible from this picture, which looks like the stability of a typical rubidium clock, with some mixed behavior at long observation intervals that is hard to interpret. In Fig. 27, we represent the dynamic Allan deviation σ_y(t, τ) of the frequency data y(t). The picture immediately gives the feeling of a clock whose behavior is highly nonstationary. We see many localized events that are due both to the outliers and to the abrupt variations of the mean value μ(t) of the frequency data. Such localized variations impact the clock stability, but while in the classical Allan deviation plot they are averaged together, in the DADEV representation they are clearly visible at the time they occur. Also, by observing the DADEV at small τ values, we spot a linear trend in the stability: the clock performance is getting worse as time goes by. This linear variation is very similar to the one discussed with simulated data in Section III-C-4.

Finally, in Fig. 28, we show the DADEV of the frequency data y(t), preprocessed to eliminate the outliers. Also, an estimate μ̂(t) of the mean value has been subtracted; μ̂(t) has been obtained by using a sliding estimator of the mean with a window of 100 samples. The number of nonstationary events in the instantaneous stability is now reduced, and the linear trend in the stability is more evident.

2) Case 2: GPS Clock: In Fig. 29, we show the frequency data of a GPS clock². From the time signal, we do not see any strange behavior of the clock. In Fig. 30, we show the corresponding dynamic Allan deviation, from which we immediately spot the increase in variance at large observation intervals, while for small τ the variance seems essentially constant. This situation is very similar to the numerical example discussed in Section III-C-6, where we built and analyzed a time-varying noise with a standard deviation that increases with time. It is possible that a small change in the clock parameters has happened, which has made the clock stability a function of time.


Fig. 13. Random signal x[n] generated by using (39) and (43). The picture shows the steady increase in the variance.

Fig. 14. Allan deviation of x[n] from Fig. 13. The Allan deviation does not point out the change in variance of the nonstationary noise x[n].

Fig. 15. Dynamic Allan deviation of the signal x[n], a realization of which is shown in Fig. 13. The DADEV represents the nonstationarity in the signal and highlights the change in variance.

Fig. 16. Estimate of the DADEV given in Fig. 15, obtained from the realization of Fig. 13 with a window of length N = 100. The change in clock stability is clearly visible.

² Data obtained from https://goby.nrl.navy.mil/IGstime/igst.php


We also point out that, in the frequency signal of Fig. 29, there is a block of missing data, around t = 54035 days. To properly evaluate the corresponding dynamic Allan deviation, we have used an improved estimation algorithm designed for the case of missing data [11].

E. The Window Effect

The length and the type of the analysis window can be freely chosen in the dynamic Allan variance. In this article, we consider a rectangular window of length T (corresponding to N samples in discrete time), but other choices are possible. In practice, in fixing the length of the window, we are constrained by 2 opposite needs:

1. To quickly track fast variations in the signal, the window must be short.

2. To have a good confidence, that is, a low variance of the estimate, the window must be long. The greater the number of samples in the computation of the Allan variance, in fact, the lower the variance of the estimate.

Therefore, a tradeoff exists in the choice of the window length. To show this window effect we consider the nonstationary signal described in Section III-C-3, and we choose different window lengths. First, we use a short window of N = 50 samples. The resulting DADEV, shown in Fig. 31, quickly tracks the variation in the noise, but has a poor confidence in the estimate, due to the reduced amount of data captured by the short window. In Fig. 32, we show the DADEV computed with a long window of N = 200 samples. The confidence in the estimate of the DADEV is highly increased, due to the long window that captures a larger amount of data, but the nonstationarity in the signal is ascribed to a range of time epochs before and after it really happens. This effect is due to the long window, which tracks the nonstationarity of the signal far away from the analysis time (the center of the window). Finally, Fig. 16 shows the effect of a medium-length window of N = 100 samples. This choice represents a good tradeoff between tracking capabilities and variance reduction.


Fig. 17. Random signal x[n] generated using (39) and (44). The periodic change in the standard deviation is noticeable.

Fig. 18. Allan deviation of the signal shown in Fig. 17. The Allan deviation does not highlight the periodic nonstationarity of the signal.

Fig. 19. Dynamic Allan deviation of the signal x[n], a realization of which is shown in Fig. 17. The DADEV perfectly represents the cyclic change in the variance of x[n].

Fig. 20. Estimate of the DADEV given in Fig. 19, obtained from the realization of Fig. 17 with a window of length N = 100. The estimate indicates the cyclic change in the variance of x[n].


F. Algorithm Description

In Fig. 33, we provide a flowchart with a Matlab-like pseudo-code³, which describes a possible implementation of the discrete-time estimator of the DAVAR, as given in (34). Because it is common to work with phase measurements, we describe an algorithm that uses the sampled phase deviation x[n] as input. An algorithm that works with the frequency offset y[n] can be obtained with a slight modification, that is, the time series y[n] must be numerically integrated, according to (5), to obtain its corresponding phase x[n]. The input data of the function are:

• A vector x made of N_x samples of the phase offset x[n], where x(1) is the first element and x(N_x) the last.
• A vector t of N_t elements, describing the epochs t. This is a subset of 1, 2, …, N_x (hence N_t ≤ N_x).
• An integer N that represents the length of the rectangular window P_T(t) used to slice the data. In the definition of the discrete-time DAVAR given in (33), N has to be even. A similar definition can be obtained when N is odd. In Fig. 33, we describe an algorithm that works for any choice of N.

The output of the function is the N_τ × N_t matrix S, where N_τ is an integer identifying the maximum observation interval used for the estimation of the Allan variance at every epoch t. In the flowchart shown, N_τ = ⌊N/3⌋, where ⌊·⌋ rounds the argument to the nearest integer toward −∞. The pseudo-code uses the function floor(x), which computes ⌊x⌋. The code assumes the presence of the function allan, whose inputs are the vector x and the sampling time τ_0, and whose output is the Allan deviation estimated for the set of discrete-time observation intervals 1, 2, …, N_τ.

If the truncation of the signal happens where no experimental data are available (at the beginning and at the end of the data series), a NaN (Not A Number) padding is carried out.


Fig. 21. Frequency signal made by the sum of a white frequency noise plus a time-varying random-walk frequency noise.

Fig. 22. Allan deviation of the frequency data shown in Fig. 21.

Fig. 23. Dynamic Allan deviation of the frequency data shown in Fig. 21.

Fig. 24. Estimate of the dynamic Allan deviation of the frequency data shown in Fig. 21.

³ A free Matlab implementation of the DAVAR is available at www.inrim.it/tf/ts/clock_behavior.shtml.


Fig. 25. Frequency data y(t) of a rubidium clock undergoing a test campaign for space certification.

Fig. 26. Allan deviation σ_y(τ) of the frequency data shown in Fig. 25.

Fig. 27. Dynamic Allan deviation of the frequency data shown in Fig. 25. Notice the linear trend in the stability that can be seen for small τ values.

Fig. 28. Dynamic Allan deviation of the frequency data shown in Fig. 25. The outliers have been removed, and the estimated mean value has been subtracted.

Fig. 29. Frequency data of a GPS clock.

Fig. 30. Dynamic Allan deviation of the clock data shown in Fig. 29.


In the DAVAR computation given in (33), all the phase triplets x[m + k], x[m], x[m − k] that contain at least one NaN value are thrown away. The NaN padding may be avoided by an appropriate choice of the analysis epochs (vector t) for any window length N.
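The following Python sketch (our own rendering of the logic just described, not the authors' Matlab code; averaging over the surviving triplets is an assumption on our part) builds the N_τ × N_t matrix S by sliding the window over the chosen epochs, padding with NaN at the borders, and discarding the triplets that contain a NaN.

```python
import numpy as np

def davar_matrix(x, epochs, N, tau0):
    """Sketch of the windowed DAVAR computation described in Section III-F.

    x      : 1-D array of phase samples x[n]
    epochs : analysis epochs (indices into x)
    N      : rectangular window length, in samples
    Returns the N_tau x N_t matrix S, with S[k-1, j] holding the estimated
    sigma_y^2 at observation interval k * tau0 and epoch epochs[j].
    """
    x = np.asarray(x, dtype=float)
    n_tau = N // 3                                   # N_tau = floor(N/3)
    S = np.full((n_tau, len(epochs)), np.nan)
    for j, t in enumerate(epochs):
        # Slice the window centered at t; samples outside the record become NaN.
        idx = np.arange(t - N // 2, t + N // 2)
        inside = (idx >= 0) & (idx < x.size)
        w = np.where(inside, x[np.clip(idx, 0, x.size - 1)], np.nan)
        for k in range(1, n_tau + 1):
            d2 = w[2 * k:] - 2.0 * w[k:w.size - k] + w[:w.size - 2 * k]
            d2 = d2[~np.isnan(d2)]                   # drop triplets containing NaN
            if d2.size:
                S[k - 1, j] = np.mean(d2 ** 2) / (2.0 * (k * tau0) ** 2)
    return S
```

Plotting the square root of S as a surface over the epochs and the observation intervals k * tau0 gives the 3-D DADEV pictures shown throughout the article.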

IV. Conclusions

The stability of an atomic clock can be a function of time, the reasons being manifold. First, the stability may be affected by changes of temperature, humidity, and other environmental factors. Then, the clock is composed of physical devices that are subject to aging and consequently change their behavior with time. Finally, sudden breakdowns may occur. We have introduced the dynamic Allan variance, or DAVAR, a quantity that represents the stability of an atomic clock with respect to time.

In this article, we have characterized the dynamic Allan variance through both analytical and simulation tools, and we have reported 2 examples with experimental data. We are currently working on large sets of experimental data (preliminary results were presented in [9], [10]), and on the unambiguous interpretation of the DAVAR estimates.

Further developments in progress concern the evaluation of the level of confidence of the DAVAR estimates and the possible use of the DAVAR as an anomaly detector, which can be of interest in those applications that suffer a serious failure when the clock itself has a failure [21].

Another issue is the implementation of a real-time DAVAR estimator. The DAVAR requires in fact the definition of a window of length T that truncates the data series at any analysis epoch t. When we discussed the estimator of the DAVAR, we assumed a complete series of experimental data to be available for the evaluation of the instantaneous stability by post-processing. The DAVAR is tagged at epoch t, and the evaluation is performed on a window centered at t, and therefore extending from t − T/2 to t + T/2. In certain applications, such as time scales or satellite clock control, a real-time evaluation of the DAVAR may be necessary. In this case, the estimator of the DAVAR should be implemented in a real-time fashion that allows the processing of experimental data one by one, and that updates the DAVAR stability plot whenever new data are available. The algorithm implementing the DAVAR estimator that we have presented in this article is well suited for this real-time implementation. Also, a waterfall diagram representation is feasible and could be of interest for real-time monitoring of the clock status.

In addition, a possible reversed use of the DAVAR was indicated as being of great interest by manufacturers of measuring instruments, who may need to demonstrate that a given stability is not reached only once by their products (under the best metrological conditions), but is instead a stationary performance maintained over time. An almost-constant DAVAR, not varying with time, can be the right tool to prove the long-lasting quality of a clock or an electronic device.

Another possible application of the DAVAR is the stability analysis of primary frequency standards. Today, these devices offer the most accurate realization of the SI second, and they are used to steer International Atomic Time. The DAVAR can be used to evaluate the reproducibility of their performance during their construction and operation.

Finally, we point out that the dynamic evaluation of the Allan variance may be applied to any type of variance. In certain cases, in fact, the characterization of a clock takes advantage of the modified Allan variance, the Hadamard variance, or structure functions [6], [7], [22]. All those stability indicators may be generalized to a dynamic definition by following the same steps described here for the dynamic Allan variance.

Acknowledgments

The authors are grateful to CNES Toulouse for providing the experimental data analyzed in Section III-D-1.


Fig. 31. Dynamic Allan deviation of a nonstationary signal of the type described in Section III-C-3. The window length is N = 50.

Fig. 32. Dynamic Allan deviation of a nonstationary signal of the type described in Section III-C-3. The window length is N = 200.


References

[1] Definitions of Physical Quantities for Fundamental Frequency and Time Metrology, IEEE Standard 1139, 1999.

[2] ITU Handbook, Selection and Use of Precise Frequency and Time Systems, International Telecommunication Union–Radiocommunication ITU-R, Geneva, 1997.

[3] ITU-R Recommendation TF 538–3, Measures for Random Instabilities in Frequency and Time (Phase), vol. 2000, TF Series, International Telecommunication Union–Radiocommunication ITU-R, Geneva, 2001.

[4] D. W. Allan, "Statistics of atomic frequency standards," Proc. IEEE, vol. 54, no. 2, pp. 221–230, 1966.

[5] C. A. Greenhall, D. A. Howe, and D. B. Percival, "Total variance, an estimator of long term frequency stability," IEEE Trans. Ultrason. Ferroelectr. Freq. Control, vol. 46, pp. 1183–1191, Sep. 1999.

[6] W. C. Lindsey and C. M. Chie, "Theory of oscillator instability based upon structure functions," Proc. IEEE, vol. 64, pp. 1652–1666, Dec. 1976.

[7] F. Vernotte, E. Lantz, J. Groslambert, and J. J. Gagnepain, "Oscillator noise analysis: Multivariance measurement," IEEE Trans. Instrum. Meas., vol. 42, pp. 342–350, Apr. 1993.

[8] L. Galleani and P. Tavella, "Interpretation of the dynamic Allan variance of nonstationary clock data," presented at IEEE FCS-EFTF 2007, Geneva, Switzerland, May 29–Jun. 1, 2007.

[9] L. Galleani and P. Tavella, "Tracking nonstationarities in clock noises using the dynamic Allan variance," in Proc. IEEE FCS-PTTI, Vancouver, Canada, 2005, pp. 29–31.

[10] L. Galleani and P. Tavella, "The characterization of clock behavior with the dynamic Allan variance," in Proc. IEEE FCS-EFTF, Tampa, FL, 2003, pp. 5–8.

[11] I. Sesia, L. Galleani, and P. Tavella, "Implementation of the dynamic Allan variance for the Galileo System Test Bed V2," presented at IEEE FCS-EFTF 2007, Geneva, Switzerland, May 29–Jun. 1, 2007.

[12] Naval Center for Space Technology, "Monthly analysis report–October 2006," U.S. Naval Research Laboratory, Washington, DC, 2006.

[13] M. Miranian, G. L. Weaver, and M. J. Reinhart, "An ensemble of ultra-stable quartz oscillators to improve spacecraft on-board frequency stability," presented at the 38th Annu. Precise Time and Time Interval (PTTI) Meeting, Virginia, Dec. 5–7, 2006.

[14] STABLE32. Hamilton Technical Services. [Online]. Available: www.wriley.com.

[15] CANVAS (Clock Analysis, Visualization, and Archiving System). [Online]. Available: https://goby.nrl.navy.


Fig. 33. Flowchart describing a software implementation of the dynamic Allan variance. The algorithm is described in Section III-F.


[16] D. W. Allan, "Should the classical variance be used as a basic measure in standards metrology?" IEEE Trans. Instrum. Meas., vol. 36, no. 2, pp. 646–654, 1987.

[17] P. Kartaschoff, Frequency and Time. Burlington, MA: Academic Press, 1978.

[18] D. W. Allan, "Time and frequency (time-domain) characterization, estimation, and prediction of precision clocks and oscillators," IEEE Trans. Ultrason. Ferroelectr. Freq. Control, vol. 34, pp. 647–654, Nov. 1987.

[19] A. V. Oppenheim, R. W. Schafer, and J. R. Buck, Discrete-Time Signal Processing. Upper Saddle River, NJ: Prentice-Hall, 1999.

[20] S. M. Kay, Fundamentals of Statistical Signal Processing, Volume I: Estimation Theory. Upper Saddle River, NJ: Prentice-Hall, 1993.

[21] E. Nunzi, L. Galleani, P. Tavella, and P. Carbone, "Detection of anomalies in the behavior of atomic clocks," IEEE Trans. Instrum. Meas., vol. 56, pp. 523–528, Apr. 2007.

[22] D. Howe, R. Beard, C. Greenhall, F. Vernotte, and W. Riley, "A total estimator of the Hadamard function used for GPS operations," in Proc. 32nd Precise Time and Time Interval Meeting, Nov. 2000, pp. 255.

Lorenzo Galleani received his B.S. degree in electrical engineering in 1997, and his Ph.D. degree in electrical and communication engineering in 2001, both from Politecnico di Torino, Italy, where he is an assistant professor in the Electrical Department. He has done research on the time-frequency representation of nonstationary signals and systems and on the statistical characterization of atomic clocks. He is currently involved in the design of the reference time scale for the European navigation system Galileo.

Patrizia Tavella received her B.S. degree in physics from the University of Torino and her Ph.D. degree in metrology from Politecnico di Torino, Italy. She is currently a senior scientist with the Italian Metrology Institute in Torino, Italy. Her main interests are mathematical and statistical models applied to atomic time scale algorithms. She is currently involved in the development of the European navigation system Galileo. She collaborates with the Bureau International des Poids et Mesures, and she chairs the working groups on international atomic time and algorithms of the Consultative Committee for Time and Frequency.
