FREQUENCY ANALYSIS USING FFT
CHAPTER-1
INTRODUCTION TO FREQUENCY ANALYSIS
Frequency analysis is a descriptive statistical method that shows the number of occurrences
of each response chosen by the respondents. When using frequency analysis, SPSS Statistics
can also calculate the mean, median and mode to help users analyze the results and draw
conclusions.
The object of frequency analysis is to break down a complex signal into its
components at various frequencies, and it can be shown that this is possible for all practical
signals. The word "components" can be interpreted in several different ways, however, and it
is important to clarify the differences between them.
Mathematicians and theoretically inclined engineers tend to interpret "components" as the
results of a Fourier analysis, while practical engineers often think in terms of measurements
made with filters tuned to different frequencies. It is one of the purposes of this book to
clarify the relationship between these two approaches. Another is to give a simple pictorial
interpretation of Fourier analysis which greatly facilitates the understanding of fundamental
relationships, in particular the connection between mathematical theory and practical analysis
by both analog and digital means.
The approach taken is to consider frequency components as vectors (or phasors) rotating in a
complex plane rather than as sums of sines and/or cosines. A typical (co-)sinusoidal
component is represented initially as the sum of two contra-rotating vectors (representing
positive and negative frequency, respectively). This gives a meaning to the mathematically
useful concept of negative frequency, and a two-sided frequency domain gives a valuable
symmetry with the time domain. It also allows for the simple pictorial representation of
Fourier analysis.
CHAPTER-2
THEORETICAL FREQUENCY ANALYSIS
BASIC CONCEPTS:
Complex Notation
Fig. 2.1 represents a two-dimensional vector F in the so-called "complex plane". This has a
"real" component a directed along the Real Axis and an "imaginary" component jb directed
along the Imaginary Axis. The vector as a whole is represented as the sum of these, viz.:

F = a + jb -----------------------------------(1)

Note that b alone, as a real number, would lie along the Real Axis, but that multiplication by j
has the effect of rotating it through π/2 radians. Accordingly, a further multiplication by j
would result in a further rotation through π/2, so that the vector of length b would then lie
along the negative Real Axis as shown in Fig. 2.1. Hence, multiplication by j² corresponds to
a multiplication by −1, and j can thus be interpreted as √−1.
The complex plane shown here is turned through 90° as compared with the conventional
representation with the Real Axis horizontal. This is done purely to simplify interpretation of
the Real Axis as being in the plane of the paper, rather than at right angles to it in 3-
dimensional diagrams.
In many cases it is desirable to represent F in terms of its amplitude |F| and phase angle φ
instead of its real and imaginary components, and from Fig. 2.1 it can be seen that the
relationships between these two sets of coordinates are:

a = |F| cos φ
b = |F| sin φ
|F| = √(a² + b²) ------------------------------(2)

From Equations (1) and (2) it follows that:

F = |F| (cos φ + j sin φ) ------------------------------(3)

and since it is well-known (Euler's relation) that:

cos φ + j sin φ = e^jφ -----------------------------(4)

the most concise way of representing F in terms of its amplitude and phase is as the complex
exponential

F = |F| e^jφ ------------------------------(5)
For a vector and its complex conjugate, the real parts and amplitudes have the same sign,
while the imaginary parts and phase angles have opposite sign. The absolute values of the
equivalent components are the same.
Consider now uniformly rotating vectors, i.e. vectors whose amplitude |F| is a constant and
whose phase angle φ is a linearly varying function of time,
i.e.

φ = ωt + φ₀

where ω is a constant angular frequency (in radians/s) and φ₀ is the "initial" phase angle at
time zero.
Normally, the frequency will be expressed as circular frequency f in revolutions/s (hertz)
rather than ω in radians/s, and thus

ω = 2πf -------------------------------------(6)

It follows from the above that e^jφ₀ is a unit vector (amplitude = 1) with angular orientation
φ₀, and that e^j(2πft + φ₀) (where ω = 2πf as defined in (6)) is a unit vector rotating at
frequency f Hz and with angular orientation φ₀ at time zero.
Vector (i.e. phasor) multiplication is simplest when the vectors are expressed in the form of
Equation (5), and for two vectors F₁ = |F₁| e^jφ₁ and F₂ = |F₂| e^jφ₂ the product is obtained
simply as:

F₁ · F₂ = |F₁| e^jφ₁ · |F₂| e^jφ₂ = |F₁| |F₂| e^j(φ₁ + φ₂) -------------------------------------(7)

i.e. the amplitude of the product is equal to the product of the two amplitudes, while the phase
is equal to the sum of the phases.
In particular, multiplication by a fixed unit vector e^jφ has no effect on the amplitude but
adds φ to the phase angle (i.e. rotates the vector through an angle φ), while multiplication by
the rotating unit vector e^j2πft causes a vector to rotate at frequency f.
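The amplitude and phase rules of Equation (7) are easy to check numerically. The following sketch uses Python's standard cmath module (an illustration only; the report's own examples are in MATLAB):

```python
import cmath

# Two phasors in the amplitude-and-phase form of Equation (5).
F1 = 3.0 * cmath.exp(1j * 0.4)   # |F1| = 3, phase 0.4 rad
F2 = 2.0 * cmath.exp(1j * 1.1)   # |F2| = 2, phase 1.1 rad

# Equation (7): amplitudes multiply, phases add.
product = F1 * F2
print(abs(product))          # ~ 6.0
print(cmath.phase(product))  # ~ 1.5

# Multiplication by a fixed unit vector e^(j*phi) rotates without scaling.
phi = 0.7
rotated = F1 * cmath.exp(1j * phi)
print(abs(rotated))          # ~ 3.0 (unchanged)
print(cmath.phase(rotated))  # ~ 0.4 + 0.7 = 1.1
```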
Delta Functions
Another mathematical concept of which considerable use will be made Is the Dirac delta
function, also known as the "unit impulse". A typical delta function, located on an X-axis at
= may be represented as o ( - ). it has the property that Its value is zero
everywhere except at, where its value is infinite. it has the further property, however, that the
result of integrating over any range of the X-axis which includes is a unit area. it can be
considered as the limiting case of any pulse with unit area whose length is made infinitely
short at the same time as its height is made infinitely large, while retaining the unit area. The
unit delta function can be weighted by a scaling factor (with or without physical dimensions)
so that the result of integrating over it gives the value of the weighting. The delta function
provides a means of treating functions which are infinitely narrowly localised on an axis at
the same time as other functions which are distributed along the axis. A typical case is that of
a single discrete frequency component which is to be represented in terms of its "power
spectral density". Because of the infinitely narrow localisation of a discrete frequency
component on a frequency axis, its spectral density (power per unit frequency), will be
infinitely high, but since it represents a certain finite power, it can be considered as a delta
function weighted by this value of power.
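The limiting-pulse picture can be sketched numerically: a rectangular pulse of width w and height 1/w has unit area for every w, and as w shrinks, integrating f(x) against it picks out f(x₀) (the sifting property). A minimal NumPy illustration (NumPy is an assumption; the report's own code is MATLAB):

```python
import numpy as np

# Integral of f(x) * pulse(x - x0), where the pulse has width w and height 1/w.
# Because the pulse height is the constant 1/w, the integral reduces to the
# average of f over the pulse, which tends to f(x0) as w -> 0.
def sift(f, x0, w, n=200001):
    x = np.linspace(x0 - w / 2, x0 + w / 2, n)
    return f(x).mean()

for w in (1.0, 0.1, 0.001):
    print(w, sift(np.cos, 0.5, w))   # tends to cos(0.5) ~ 0.8776
```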
CHAPTER-3
Fourier series
In mathematics, a Fourier series is a way to represent a wave-like function as the sum of
simple sine waves. More formally, it decomposes any periodic function or periodic signal
into the sum of a (possibly infinite) set of simple oscillating functions, namely sines and
cosines (or, equivalently, complex exponentials). The discrete-time Fourier transform is a
periodic function, often defined in terms of a Fourier series. The Z-transform, another
example of application, reduces to a Fourier series for the important case |z|=1. Fourier series
are also central to the original proof of the Nyquist–Shannon sampling theorem. The study of
Fourier series is a branch of Fourier analysis.
(The first four partial sums of the Fourier series for a square wave )
Definition
In this section, s(x) denotes a function of the real variable x, and s is integrable on an interval
[x0, x0 + P], for real numbers x0 and P. We will attempt to represent s in that interval as an
infinite sum, or series, of harmonically related sinusoidal functions. Outside the interval, the
series is periodic with period P (frequency 1/P). It follows that if s also has that property, the
approximation is valid on the entire real line. We can begin with a finite summation (or
partial sum):
s_N(x) = A₀/2 + ∑_{n=1}^{N} A_n · sin(2πnx/P + φ_n)
s_N(x) is a periodic function with period P. Using the identities:

sin(2πnx/P + φ_n) ≡ sin(φ_n) cos(2πnx/P) + cos(φ_n) sin(2πnx/P)

sin(2πnx/P + φ_n) ≡ Re{ (1/i) · e^{i(2πnx/P + φ_n)} }
                  = (1/2i) · e^{i(2πnx/P + φ_n)} + ( (1/2i) · e^{i(2πnx/P + φ_n)} )*
we can also write the function in these equivalent forms:
s_N(x) = a₀/2 + ∑_{n=1}^{N} ( a_n cos(2πnx/P) + b_n sin(2πnx/P) ) = ∑_{n=−N}^{N} c_n · e^{i2πnx/P}

with a_n = A_n sin(φ_n) and b_n = A_n cos(φ_n), where

c_n ≝ (A_n / 2i) e^{iφ_n} = (1/2)(a_n − i b_n)  for n > 0
c₀ = (1/2) a₀                                   for n = 0
c_n = c*_{|n|}                                  for n < 0
where the coefficients (known as Fourier coefficients) are computed as follows:

a_n = (2/P) ∫_{x₀}^{x₀+P} s(x) · cos(2πnx/P) dx

b_n = (2/P) ∫_{x₀}^{x₀+P} s(x) · sin(2πnx/P) dx

c_n = (1/P) ∫_{x₀}^{x₀+P} s(x) · e^{−i2πnx/P} dx
s_N(x) approximates s(x) on [x₀, x₀ + P], and the approximation improves as N → ∞. The
infinite sum s_∞(x) is called the Fourier series representation of s. In engineering
applications, the Fourier series is generally presumed to converge everywhere except at
discontinuities, since the functions encountered in engineering are better behaved than the
ones that mathematicians can provide as counter-examples to this presumption. In particular,
the Fourier series converges absolutely and uniformly to s(x) whenever the derivative of s(x)
(which may not exist everywhere) is square integrable. If a function is square-integrable
on the interval [x₀, x₀ + P], then the Fourier series converges to the function at almost every
point. Convergence of Fourier series also depends on the function having a finite number of
maxima and minima, which is one of the Dirichlet conditions for Fourier series. See
Convergence of Fourier series. It is possible to define Fourier coefficients for more general
functions or distributions, in which case convergence in norm or weak convergence is
usually of interest.
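The coefficient integrals above can be checked numerically. In the sketch below (Python/NumPy, an illustration only), s(x) = cos(2π·3x/P) is a pure third harmonic, so the only nonzero coefficient should be a₃ = 1:

```python
import numpy as np

P, x0 = 2.0, 0.0
x = np.linspace(x0, x0 + P, 100000, endpoint=False)
dx = P / x.size
s = np.cos(2 * np.pi * 3 * x / P)   # a pure third harmonic

def a(n):
    # a_n = (2/P) * integral over one period of s(x) cos(2*pi*n*x/P)
    return (2 / P) * np.sum(s * np.cos(2 * np.pi * n * x / P)) * dx

def b(n):
    # b_n = (2/P) * integral over one period of s(x) sin(2*pi*n*x/P)
    return (2 / P) * np.sum(s * np.sin(2 * np.pi * n * x / P)) * dx

print(a(3))   # ~ 1.0
print(a(2))   # ~ 0.0
print(b(3))   # ~ 0.0
```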
Example 1: a simple Fourier series
Plot of a periodic identity function, a sawtooth wave
Animated plot of the first five successive partial Fourier series
We now use the formula above to give a Fourier series expansion of a very simple function.
Consider a sawtooth wave
s(x) = x/π,  for −π < x < π
s(x + 2πk) = s(x),  for −∞ < x < ∞ and k ∈ ℤ
In this case, the Fourier coefficients are given by
a_n = (1/π) ∫_{−π}^{π} s(x) cos(nx) dx = 0,  n ≥ 0

b_n = (1/π) ∫_{−π}^{π} s(x) sin(nx) dx
    = −(2/(πn)) cos(nπ) + (2/(π²n²)) sin(nπ)
    = 2(−1)^{n+1} / (πn),  n ≥ 1
It can be proven that the Fourier series converges to s(x) at every point x where s is
differentiable, and therefore:

s(x) = a₀/2 + ∑_{n=1}^{∞} [ a_n cos(nx) + b_n sin(nx) ]
     = (2/π) ∑_{n=1}^{∞} ((−1)^{n+1} / n) sin(nx),  for x − π ∉ 2πℤ --------------(8)
When x = π, the Fourier series converges to 0, which is the half-sum of the left- and right-
limit of s at x = π. This is a particular instance of the Dirichlet theorem for Fourier series.
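The convergence behaviour of Eq. (8) can be observed directly by evaluating partial sums (a Python/NumPy sketch, an illustration only):

```python
import numpy as np

# Partial sums of Eq. (8) for the sawtooth s(x) = x/pi on (-pi, pi).
def s_partial(x, N):
    n = np.arange(1, N + 1)
    return (2 / np.pi) * np.sum(((-1) ** (n + 1) / n) * np.sin(np.outer(x, n)), axis=1)

for N in (10, 100, 1000):
    print(N, s_partial(np.array([1.0]), N)[0])   # tends to s(1) = 1/pi ~ 0.3183

# At the discontinuity x = pi the series converges to 0,
# the half-sum of the left and right limits.
print(s_partial(np.array([np.pi]), 1000)[0])     # ~ 0
```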
Example 2: Fourier's motivation
The Fourier series expansion of our function in example 1 looks much less simple than the
formula s(x) = x/π, and so it is not immediately apparent why one would need this Fourier
series. While there are many applications, we cite Fourier's motivation of solving the heat
equation. For example, consider a metal plate in the shape of a square whose side measures π
meters, with coordinates (x, y) ∈ [0, π] × [0, π]. If there is no heat source within the plate, and
if three of the four sides are held at 0 degrees Celsius, while the fourth side, given by y = π, is
maintained at the temperature T(x, π) = x degrees Celsius, for x in (0, π), then one
can show that the stationary heat distribution (or the heat distribution after a long period of
time has elapsed) is given by
T(x, y) = 2 ∑_{n=1}^{∞} ((−1)^{n+1} / n) · sin(nx) · sinh(ny) / sinh(nπ)
Here, sinh is the hyperbolic sine function. This solution of the heat equation is obtained by
multiplying each term of Eq.8 by sinh(ny)/sinh(nπ). While our example function s(x) seems
to have a needlessly complicated Fourier series, the heat distribution T(x, y) is nontrivial. The
function T cannot be written as a closed-form expression. This method of solving the heat
problem was made possible by Fourier's work.
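Partial sums of this solution show the expected behaviour (a Python/NumPy sketch, an illustration only; truncating at 200 terms is an arbitrary choice):

```python
import numpy as np

# Partial sum of the heat-distribution series T(x, y) on the square plate.
def T(x, y, N=200):
    n = np.arange(1, N + 1)
    return 2 * np.sum(((-1) ** (n + 1) / n) * np.sin(n * x)
                      * np.sinh(n * y) / np.sinh(n * np.pi))

print(T(1.0, 0.0))        # 0.0: the edge y = 0 is held at 0 degrees
print(T(1.0, np.pi / 2))  # an interior temperature strictly between 0 and 1
```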
Other common notations
The notation c_n is inadequate for discussing the Fourier coefficients of several different
functions. Therefore it is customarily replaced by a modified form of the function (s, in this
case), such as ŝ or S, and functional notation often replaces subscripting:
s_∞(x) = ∑_{n=−∞}^{∞} ŝ(n) · e^{i2πnx/P}

       = ∑_{n=−∞}^{∞} S[n] · e^{j2πnx/P}   (common engineering notation)
In engineering, particularly when the variable x represents time, the coefficient sequence is
called a frequency domain representation. Square brackets are often used to emphasize that
the domain of this function is a discrete set of frequencies.
Another commonly used frequency domain representation uses the Fourier series
coefficients to modulate a Dirac comb:

S(f) ≝ ∑_{n=−∞}^{∞} S[n] · δ(f − n/P)
where f represents a continuous frequency domain. When the variable x has units of seconds,
f has units of hertz. The "teeth" of the comb are spaced at multiples (i.e. harmonics) of 1/P,
which is called the fundamental frequency. s_∞(x) can be recovered from this representation
by an inverse Fourier transform:
F⁻¹{S(f)} = ∫_{−∞}^{∞} S(f) · e^{i2πfx} df

          = ∑_{n=−∞}^{∞} S[n] · ∫_{−∞}^{∞} δ(f − n/P) e^{i2πfx} df

          = ∑_{n=−∞}^{∞} S[n] · e^{i2πnx/P} ≝ s_∞(x)
The constructed function S(f) is therefore commonly referred to as a Fourier transform.
CHAPTER-4
Fourier transform (DFT, FFT)
There are several common conventions for defining the Fourier transform f̂ of an integrable
function f. This article will use the following definition:

f̂(ε) = ∫_{−∞}^{∞} f(x) e^{−2πixε} dx,  for any real number ε

When the independent variable x represents time (with SI unit of seconds), the transform
variable ε represents frequency (in hertz). Under suitable conditions, f is determined by f̂ via
the inverse transform:

f(x) = ∫_{−∞}^{∞} f̂(ε) e^{2πiεx} dε,  for any real number x

The statement that f can be reconstructed from f̂ is known as the Fourier inversion theorem,
and was first introduced in Fourier's Analytical Theory of Heat, although what would be
considered a proof by modern standards was not given until much later. The functions f and f̂
are often referred to as a Fourier integral pair or Fourier transform pair.
For other common conventions and notations, including using the angular frequency ω
instead of the frequency ε , see Other conventions and Other notations below. The Fourier
transform on Euclidean space is treated separately, in which the variable x often represents
position and ε momentum.
Discrete Fourier transform
In mathematics, the discrete Fourier transform (DFT) converts a finite list of equally
spaced samples of a function into the list of coefficients of a finite combination of complex
sinusoids, ordered by their frequencies, that has those same sample values. It can be said to
convert the sampled function from its original domain (often time or position along a line) to
the frequency domain.
The sequence of N complex numbers x₀, x₁, …, x_{N−1} is transformed into an N-periodic
sequence of complex numbers:

X_k ≝ ∑_{n=0}^{N−1} x_n · e^{−2πikn/N},  k ∈ ℤ (integers) ------------------------(9)
Each X_k is a complex number that encodes both the amplitude and phase of a sinusoidal
component of the sequence. The sinusoid's frequency is k cycles per N samples. Its amplitude
and phase are:

|X_k| / N = √( Re(X_k)² + Im(X_k)² ) / N

arg(X_k) = atan2( Im(X_k), Re(X_k) ) = −i · ln( X_k / |X_k| )
where atan2 is the two-argument form of the arctan function. Due to periodicity, the
customary domain of k actually computed is [0, N−1]. That is always the case when the DFT
is implemented via the fast Fourier transform algorithm. But other common domains are
[−N/2, N/2−1] (N even) and [−(N−1)/2, (N−1)/2] (N odd), as when the left and right halves
of an FFT output sequence are swapped.
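The amplitude/phase relations and the swapped-halves domain can be demonstrated with NumPy (an illustrative sketch; the bin index k0, amplitude, and phase below are arbitrary choices):

```python
import numpy as np

# A real cosine at k0 cycles per N samples, amplitude A, phase 'phase'.
N, k0, A, phase = 64, 5, 2.0, 0.6
n = np.arange(N)
x = A * np.cos(2 * np.pi * k0 * n / N + phase)

X = np.fft.fft(x)

# For a real cosine the energy splits between bins k0 and N - k0, so each
# bin carries amplitude A/2: hence 2|X_k0|/N recovers A.
print(2 * abs(X[k0]) / N)   # ~ 2.0
print(np.angle(X[k0]))      # ~ 0.6

# fftshift swaps the left and right halves of the output, turning the
# domain [0, N-1] into [-N/2, N/2-1].
freqs = np.fft.fftshift(np.fft.fftfreq(N))
print(freqs[0])             # -0.5, i.e. -N/2 cycles per N samples
```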
The transform is sometimes denoted by the symbol F, as in X = F{x} or F(x) or Fx.
Eq.9 can be interpreted or derived in various ways, for example:
It completely describes the discrete-time Fourier transform (DTFT) of an N-periodic
sequence, which comprises only discrete frequency components. (Using the DTFT
with periodic data)
It can also provide uniformly spaced samples of the continuous DTFT of a finite
length sequence. (Sampling the DTFT)
It is the cross correlation of the input sequence, xn, and a complex sinusoid at
frequency k/N. Thus it acts like a matched filter for that frequency.
It is the discrete analog of the formula for the coefficients of a Fourier series:
x_n = (1/N) ∑_{k=0}^{N−1} X_k · e^{i2πkn/N},  n ∈ ℤ -----------------(10)

which is also N-periodic. In the domain n ∈ [0, N−1], this is the inverse transform
of Eq. 9.
The normalization factor multiplying the DFT and IDFT (here 1 and 1/N) and the signs of
the exponents are merely conventions, and differ in some treatments. The only requirements
of these conventions are that the DFT and IDFT have opposite-sign exponents and that the
product of their normalization factors be 1/N. A normalization of 1/√N for both the DFT and
IDFT, for instance, makes the transforms unitary.
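These convention rules (opposite-sign exponents, normalization factors whose product is 1/N, and the unitary 1/√N variant) can be verified with NumPy, whose norm="ortho" option implements the unitary scaling (a sketch, not part of the original report):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(16) + 1j * rng.standard_normal(16)

# Default convention: factor 1 on the DFT, 1/N on the IDFT (product 1/N).
X = np.fft.fft(x)
print(np.allclose(np.fft.ifft(X), x))   # True: the round trip is the identity

# Unitary convention: 1/sqrt(N) on both transforms, so norms are preserved.
Xu = np.fft.fft(x, norm="ortho")
print(np.allclose(np.linalg.norm(Xu), np.linalg.norm(x)))   # True
```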
Properties
Completeness:
The discrete Fourier transform is an invertible, linear transformation
F :CN → CN
with Cdenoting the set of complex numbers. In other words, for any N > 0, an N-dimensional
complex vector has a DFT and an IDFT which are in turn N-dimensional complex vectors.
Orthogonality:
The vectors u_k = [ e^{(2πi/N)kn} ; n = 0, 1, …, N−1 ] form an orthogonal basis over the set
of N-dimensional complex vectors:

u_kᵀ u_{k'}* = ∑_{n=0}^{N−1} e^{(2πi/N)kn} · e^{−(2πi/N)k'n} = ∑_{n=0}^{N−1} e^{(2πi/N)(k−k')n} = N δ_{kk'}
where δ_{kk'} is the Kronecker delta. (In the last step, the summation is trivial if k = k', where
it is 1 + 1 + ⋯ = N, and otherwise is a geometric series that can be explicitly summed to obtain
zero.) This orthogonality condition can be used to derive the formula for the IDFT from the
definition of the DFT, and is equivalent to the unitarity property below.
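The orthogonality relation is easy to verify numerically (a NumPy sketch; np.vdot conjugates its first argument, which supplies the conjugation in u_kᵀ u_{k'}*):

```python
import numpy as np

N = 8
n = np.arange(N)

def u(k):
    # The k-th DFT basis vector e^{(2*pi*i/N) k n}, n = 0..N-1.
    return np.exp(2j * np.pi * k * n / N)

print(np.vdot(u(3), u(3)).real)   # 8.0 = N (the k == k' case)
print(abs(np.vdot(u(3), u(5))))   # ~ 0.0 (distinct frequencies are orthogonal)
```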
The Plancherel theorem and Parseval's theorem:
If X_k and Y_k are the DFTs of x_n and y_n respectively, then Parseval's theorem states:

∑_{n=0}^{N−1} x_n y_n* = (1/N) ∑_{k=0}^{N−1} X_k Y_k*

where the star denotes complex conjugation. The Plancherel theorem is a special case of
Parseval's theorem and states:

∑_{n=0}^{N−1} |x_n|² = (1/N) ∑_{k=0}^{N−1} |X_k|²

These theorems are also equivalent to the unitary condition below.
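Both identities can be checked on random vectors (a NumPy sketch, an illustration only):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 32
x = rng.standard_normal(N) + 1j * rng.standard_normal(N)
y = rng.standard_normal(N) + 1j * rng.standard_normal(N)
X, Y = np.fft.fft(x), np.fft.fft(y)

# Parseval: sum_n x_n y_n*  ==  (1/N) sum_k X_k Y_k*
print(np.allclose(np.sum(x * np.conj(y)), np.sum(X * np.conj(Y)) / N))   # True

# Plancherel (the special case y = x): energy matches up to the 1/N factor.
print(np.allclose(np.sum(np.abs(x) ** 2), np.sum(np.abs(X) ** 2) / N))   # True
```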
Periodicity
The periodicity can be shown directly from the definition:
X_{k+N} ≝ ∑_{n=0}^{N−1} x_n · e^{−(2πi/N)(k+N)n}
        = ∑_{n=0}^{N−1} x_n · e^{−(2πi/N)kn} · e^{−2πin}
        = ∑_{n=0}^{N−1} x_n · e^{−(2πi/N)kn} = X_k

since e^{−2πin} = 1 for integer n.
Similarly, it can be shown that the IDFT formula leads to a periodic extension.
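The periodicity X_{k+N} = X_k can be confirmed by evaluating the defining sum at shifted bin indices (a NumPy sketch, an illustration only):

```python
import numpy as np

N = 8
x = np.arange(N, dtype=float)   # an arbitrary test sequence
n = np.arange(N)

def dft_bin(k):
    # Eq. (9) evaluated for a single (not necessarily in-range) bin index k.
    return np.sum(x * np.exp(-2j * np.pi * k * n / N))

print(np.allclose(dft_bin(3), dft_bin(3 + N)))   # True
print(np.allclose(dft_bin(0), dft_bin(2 * N)))   # True
```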
Shift theorem
Multiplying x_n by a linear phase e^{(2πi/N)nm} for some integer m corresponds to a circular
shift of the output X_k: X_k is replaced by X_{k−m}, where the subscript is interpreted
modulo N (i.e., periodically). Similarly, a circular shift of the input x_n corresponds to
multiplying the output X_k by a linear phase. Mathematically, if {x_n} represents the vector x,
then:

if F({x_n})_k = X_k
then F({x_n · e^{(2πi/N)nm}})_k = X_{k−m}
and F({x_{n−m}})_k = X_k · e^{−(2πi/N)km}
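Both directions of the shift theorem can be verified with NumPy, where np.roll performs the circular (modulo-N) shift:

```python
import numpy as np

N, m = 16, 3
rng = np.random.default_rng(2)
x = rng.standard_normal(N)
n = np.arange(N)   # doubles as the bin index k below
X = np.fft.fft(x)

# A circular shift of the input multiplies the output by a linear phase.
shifted = np.roll(x, m)   # x_{n-m}, with the index taken modulo N
print(np.allclose(np.fft.fft(shifted),
                  X * np.exp(-2j * np.pi * n * m / N)))   # True

# A linear phase on the input circularly shifts the output.
modulated = x * np.exp(2j * np.pi * n * m / N)
print(np.allclose(np.fft.fft(modulated), np.roll(X, m)))  # True
```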
FFT (FAST FOURIER TRANSFORM)
Syntax
Y = fft(X)
Y = fft(X,n)
Y = fft(X,[],dim)
Y = fft(X,n,dim)
Definition
The functions Y = fft(X) and y = ifft(Y) implement the transform and inverse transform pair
given for vectors of length N by:

X(k) = ∑_{j=1}^{N} x(j) · ω_N^{(j−1)(k−1)}

x(j) = (1/N) ∑_{k=1}^{N} X(k) · ω_N^{−(j−1)(k−1)}

where ω_N = e^{−2πi/N} is an Nth root of unity.
Description
Y = fft(X) returns the discrete Fourier transform (DFT) of vector X, computed with a fast
Fourier transform (FFT) algorithm.
If X is a matrix, fft returns the Fourier transform of each column of the matrix.
If X is a multidimensional array, fft operates on the first nonsingleton dimension.
Y = fft(X,n) returns the n-point DFT. If the length of X is less than n, X is padded with
trailing zeros to length n. If the length of X is greater than n, the sequence X is truncated.
When X is a matrix, the length of the columns is adjusted in the same manner.
Y = fft(X,[],dim) and Y = fft(X,n,dim) apply the FFT operation across the dimension dim.
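For readers without MATLAB, the four call forms above have close NumPy analogues via the n and axis keywords of np.fft.fft (a sketch; NumPy is not part of the original report):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])

# Y = fft(X)
Y = np.fft.fft(x)
print(Y[0])                         # (10+0j): the DC bin is the sum of the samples

# Y = fft(X, n): n < length truncates, n > length zero-pads before transforming.
print(np.fft.fft(x, n=2)[0])        # (3+0j): only the first two samples remain
print(np.fft.fft(x, n=8).size)      # 8

# Y = fft(X, [], dim) / fft(X, n, dim): the axis keyword picks the dimension.
M = np.arange(6.0).reshape(2, 3)
print(np.fft.fft(M, axis=0).shape)  # (2, 3), transformed down the columns
```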
EXAMPLE: FFT IN MATLAB
(Fig. 1 – Fig. 5: the MATLAB FFT example program and its output plots; the images are not
reproduced here.)
ALGORITHM:
1. The FFT functions (fft, fft2, fftn, ifft, ifft2, ifftn) are based on a library called FFTW.
2. To compute an N-point DFT when N is composite (that is, when N = N₁N₂), the FFTW
library decomposes the problem using the Cooley-Tukey algorithm, which first computes N₁
transforms of size N₂, and then computes N₂ transforms of size N₁.
3. The decomposition is applied recursively to both the N₁- and N₂-point DFTs until the
problem can be solved using one of several machine-generated fixed-size "codelets."
4. The codelets in turn use several algorithms in combination, including a variation of
Cooley-Tukey, a prime factor algorithm, and a split-radix algorithm. The particular
factorization of N is chosen heuristically.
5. When N is a prime number, the FFTW library first decomposes an N-point problem into
three (N−1)-point problems using Rader's algorithm. It then uses the Cooley-Tukey
decomposition described above to compute the (N−1)-point DFTs.
6. For most N, real-input DFTs require roughly half the computation time of complex-input
DFTs. However, when N has large prime factors, there is little or no speed difference.
7. The execution time for fft depends on the length of the transform. It is fastest for powers of
two. It is almost as fast for lengths that have only small prime factors. It is typically several
times slower for lengths that are prime or which have large prime factors.
CHAPTER-5
CONCLUSION
There are many issues to consider when analyzing and measuring signals from plug-in DAQ
devices. Unfortunately, it is easy to make incorrect spectral measurements. Understanding the
basic computations involved in FFT-based measurement, knowing how to prevent
antialiasing, properly scaling and converting to different units, choosing and using windows
correctly, and learning how to use FFT-based functions for network measurement are all
critical to the success of analysis and measurement tasks. Being equipped with this
knowledge and using the tools discussed in this application note can bring you more success
with your individual application.