Image Data Fusion for Enhanced Situation Awareness

H.-U. Doehler, P. Hecker, R. Rodloff
DLR, Institute of Flight Guidance, Lilienthalplatz 7
D-38108 Braunschweig, Germany

1. Introduction.
Today's aircraft crews have to handle more and more complex situations. Especially during approach, landing, take-off and taxiing, the improvement of situational awareness can be regarded as a key task.¹ Particularly future military transport aircraft have to cope with new requirements such as autonomous, non-cooperative landing at unsupported (without D-GPS, MLS, ILS) airstrips down to CAT-I (minimum) or better, low level flight operations, ground mapping, precise air dropping, and search and rescue missions. In most cases these requirements have to be met under adverse weather conditions, without being detected by hostile observation.
Within this context, visual information provided by fusion of image data from onboard multispectral sensors with synthetic vision (SV), supported by an ATC interface and aircraft state data (position, attitude and speed), will become an important and helpful tool for aircraft guidance. The development of these so-called "Enhanced Vision Systems" (EVS) is an interdisciplinary task which requires a wide spectrum of different information technologies:
• modern data link technology for transmission of guidance information;
• complex data bases to provide terrain data for synthetic images and to support the imaging sensors (sensor characteristics, object signatures);
• high performance computer graphics systems to render synthetic images in real time;
• a new generation of onboard imaging sensors, like solid state infrared and especially a new kind of imaging radar, providing a real view through darkness and adverse weather;
• knowledge based image interpreters to convert sensor images into a symbolic description.
This paper presents the DLR concept for an integrated enhanced vision system. After the description of the basics in section 2, the image sensor characteristics are compared with the requirements of an Enhanced Vision System in section 3. First results concerning the experiments with the DASA HiVision radar and data fusion techniques are given in sections 4.2 and 4.3.

¹ Recommendation of the FAA Human Factors Team (A-1): "The FAA should require operators to increase flightcrews' understanding of and sensitivity to maintaining situation awareness, particularly: Position awareness with respect to the intended flight path and proximity to terrain, obstacles or traffic ...".
2. The DLR concept for enhanced vision.
Looking a little closer at the meaning of the terms "Enhanced Vision System" (EVS) and "Synthetic Vision System" (SVS), we find a rather indistinct situation: sometimes the whole field concerning "obstacle detection", "sensor simulation", "4D flight guidance display", "enhanced vision", etc. is subsumed under the headline "Synthetic Vision" /2,3/, and sometimes the same term is strictly reduced to computer generated images /1/. But on average one finds the two terms used as given in the definitions below.
It is amazing to see that the terms "EVS" and "SVS" are very often discussed in parallel, sometimes even treated as competing concepts. But as long as EVS and SVS are considered as isolated technologies, they have to face some critical questions, such as:

"EVS":
• no single sensor will really cover all possible weather situations;
• what happens if the imaging sensor fails?
• multispectral images are difficult, sometimes impossible, to interpret.

"SVS":
• unmodelled obstacles are not detectable;
• how can the integrity of the data base be monitored, and what happens if data base errors occur?
• how can such a data base be certified?
• the position accuracy of the synthetic image depends on the accuracy of the reference system.
Another somewhat restricted but quite interesting definition concerning the field of EVS / SVS terminology was coined by the Air Transport Association (ATA): "... a means to safely increase airport capacity and reduce runway incursions in low visibility conditions, without significant expansion of ground facilities". This definition seems to be mainly influenced by civil needs, but the renunciation of supporting ground facilities makes EVS and SVS technology also very interesting for military requirements.
Paper presented at the RTO SCI Symposium on "The Application of Information Technologies (Computer Science) to Mission Systems", held in Monterey, California, USA, 20-22 April 1998, and published in RTO MP-3.
To avoid the drawbacks of the isolated EVS and SVS technologies and to maintain the benefits of both, it seems to be absolutely necessary to integrate them into one system, which should be treated as a part of an artificial pilot assistant.

In order to avoid any confusion concerning the terms used throughout this paper, it seems to be worthwhile to define them:

"Sensor Vision" = sensor generated images (replaces the term "Enhanced Vision" in the conventional meaning).

"Synthetic Vision" = computer generated images based on navigation inputs, map information and/or terrain data bases.

"Enhanced Vision" = a concept which integrates "Sensor Vision", "Synthetic Vision", aircraft state and ATC data.
Fig. 2.1: Block diagram of the DLR "Integrated Enhanced Vision System". (Blocks: Sensor Vision (FLIR, LLL-TV, RADAR) delivers raw data to Vision Processing (remapping, restoration, segmentation, object extraction); Synthetic Vision renders a synthetic image from the terrain model using attitude and heading; Vision Fusion combines both for terrain object identification and obstacle detection.)
Figure 2.1 shows the basic idea of an integrated Enhanced Vision concept. Key elements are functional subblocks which interact through defined interfaces: the "Sensor Vision" block includes the imaging sensor(s) and possibly some image data preprocessing. The "Vision Processing" block is responsible for a first step of image processing concerning signal restoration, like noise reduction and various filtering techniques, scaling and transformation to convenient perspectives and, as the most challenging topic, feature or object extraction. The block "Synthetic Vision" represents all necessary technologies to set up a synthetic image derived from terrain data bases and navigation information.

The core part of the integrated Enhanced Vision System is represented by the block "Vision Fusion". A first and simple possibility for "Vision Fusion" could be an integration of sensor images and flight status data on the primary flight display, using a Head Down Display (HDD) or a Head Up Display (HUD), in order to increase the crew's situation awareness, but other possibilities are just as important:
• incompatibilities between sensed data and terrain data can be used for obstacle detection;
• a conformity check between sensed vision and synthetic vision can be used as an "Integrity Monitor" for the navigation system and the data bases (a minimal sketch of such a check is given below).
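To make the idea of such a conformity check concrete, the following sketch compares a sensed image with a synthetic image rendered for the same pose. It is only an illustration of the principle, not the DLR implementation; the normalized cross-correlation measure, the thresholds and the synthetic test data are assumptions made for this example.

```python
import numpy as np

def conformity_score(sensed: np.ndarray, synthetic: np.ndarray) -> float:
    """Normalized cross-correlation between a sensed and a synthetic image
    of the same scene (2-D grey-scale arrays of equal shape)."""
    s = (sensed - sensed.mean()) / (sensed.std() + 1e-9)
    t = (synthetic - synthetic.mean()) / (synthetic.std() + 1e-9)
    return float((s * t).mean())

def check_scene(sensed, synthetic, conformity_threshold=0.5, obstacle_sigma=3.0):
    """Illustrative "Vision Fusion" step: a low global conformity score hints at
    navigation or data base errors (integrity monitoring), while strong local
    returns that the synthetic image does not predict are flagged as potential
    obstacles."""
    score = conformity_score(sensed, synthetic)
    residual = sensed - synthetic
    z = (residual - residual.mean()) / (residual.std() + 1e-9)
    obstacle_mask = z > obstacle_sigma              # unexpected bright returns
    return score >= conformity_threshold, score, obstacle_mask

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    synthetic = rng.normal(size=(64, 64))
    sensed = synthetic + 0.2 * rng.normal(size=(64, 64))
    sensed[30:34, 30:34] += 5.0                     # an unmodelled "obstacle"
    ok, score, mask = check_scene(sensed, synthetic)
    print(f"integrity ok: {ok}, conformity: {score:.2f}, obstacle pixels: {mask.sum()}")
```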
The shaded blocks of Figure 2.1 are part of, or connections to, a cockpit interface and/or a pilot assistant system, which will not be discussed in this paper. But it is obvious that the obstacle detection carried out within the "Vision Fusion" subblock can be improved to an obstacle or vehicle identification if ATC information is included. The pilot assistant may condense these data to a representation of the traffic situation and, if the need arises, to a conflict prediction.
Synthetic Vision
    Advantage: easy image interpretation
    Disadvantage: no obstacle detection; reference system necessary; depends on data base reliability
+ Sensor Vision
    Advantage: obstacle detection; no reference system necessary; no data base necessary
    Disadvantage: difficult image interpretation
+ Aircraft status
    Advantage: already available single sensor information
+ ATC data
    Disadvantage: needs data link technology
= Integrated Enhanced Vision System

Table 2.1: Definition of the Integrated Enhanced Vision System.

Table 2.1 summarizes the concept of an integrated Enhanced Vision System. It is obvious that the disadvantages of "Synthetic Vision" and "Sensor Vision" cancel each other out and the advantages are increased. In the sense of a synergetic effect, both sum up to a new system quality in the fields of

Crew Assistance in connection with:
• adverse weather
• obstacle detection
• non-cooperative landing systems

Integrity Monitor for:
• navigation
• GPWS
• taxi guidance systems

Automatic Flight Guidance in terms of:
• image based navigation
• collision avoidance
• terrain avoidance, CFIT,

which would not be achieved if "Sensor Vision" or "Synthetic Vision" were used as isolated technologies.
3. EVS sensors: characteristics and requirements.
The requirements for suitable imaging EVS sensors depend strongly on the application. In the following table the civil requirements and the resulting sensor characteristics should be interpreted as the minimum performance, and the military features are meant as an add-on.
Civil operational requirements:
• reduction of minimum approach RVR
• reduction of minimum takeoff RVR
• safe taxi operations
• obstacle warning in critical phases
• CFIT warning
• integrity monitor for GPS supported navigation
Resulting requirements for imaging sensors: weather independent; sufficient resolution; image rate > 15 Hz; image delay < 200 ms; coverage minimum = HUD coverage.

Military operational requirements:
• low level flights
• landing on unsupported forward operating strips
• precise air dropping
• air surveillance
• ground mapping for surveillance
• search and rescue
Resulting requirements for imaging sensors: passive or "silent" sensor; autonomous, ground facilities not necessary; reconnaissance capability.

Table 3.1: Requirements on imaging EVS sensors. (RVR = Runway Visual Range; CFIT = Controlled Flight Into Terrain)
With the background of the requirements shown in Table 3.1 we should be able to qualify different types of sensors in terms of their EVS capability. A rather convenient method to distinguish between all possible sensor types is to refer to the operating wavelength, because this parameter allows a rather easy estimation for two of the most important sensor parameters: the penetration through the atmosphere (especially under foggy conditions) and the image quality in terms of resolution. The rule of thumb is rather simple: increasing wavelength improves the penetration and decreases the image resolution. But
of course there is no rule without exception: if the wavelength is much smaller than the water particle size (1 - 10 µm), the penetration increases again. This effect is used for the so-called "FogEye" receiver, which is optimized for UV wavelengths between 0.2 and 0.275 µm /4/. Table 3.2 compares the characteristics of sensors which might come into consideration for EVS applications.
A valid comparison of the all-weather capability is nearly impossible, because the technical realizations of image generation are too different. First of all, it seems quite clear that the active kinds of sensors are more successful in penetrating a foggy or rainy atmosphere than the passive ones. "Active" in this context is not only an illuminating kind of device, such as the active mmw radar, but also those sensors which need an emitting source to overcome the signal to
noise barrier, as for instance the UV sensor "FogEye" /4/ and, to a certain extent, the "passive" mmw (PMMW) camera, whose range of visibility can be significantly enlarged if additional mmw reflectors are positioned near the target /6/. If we take operational requirements into account we can define: "Active" sensor systems make use of additional means to increase the emission at or near the observed object. From this point of view the mmw sensor and the UV sensor are purely active; the others are optionally active, depending on the usage of illumination or emission increasing devices, such as IR emitting lamps or mmw reflectors, etc.
[Table 3.2: Characteristics of potential EVS sensors (UV: Ultra Violet sensor; PMMW: Passive MilliMeter Wave sensor). The table lists, for the UV /4/, video, IR, 94 GHz MMW /3,10/, 35 GHz MMW /5,11/ and PMMW /6,7,8/ sensors: the wavelength, the type of image (perspective or range-angle), active/passive operation, required ground facility, angular and range resolution, coverage and visibility range under CAT IIIa conditions.]
Notes: 1) UV source on the ground necessary; 2) direct range information not available; 3) 1 m square antenna aperture /7/; 4) calculated for a field of view (FOV) of 40°; 5) specific values are not available, but problems are not expected; 6) data not available; 7) numerical resolution 0.25°; beamwidth 0.8°.
On the other hand, a "passive" sensor is not necessarily the same as a "silent" one. For example, the active mmw sensor, as proposed by DASA Ulm /5/, has a greater range than the hostile ESM detection range. This is an example of an active but silent sensor, and it turns out that the meaning of the term "silent" is a matter of the mission.
But let us return to the "all weather" capability: a rather convincing method to define the "all weather" characteristic of a sensor was proposed by W.F. Horne et al. /3/: these authors measured the received power from the runway and the surrounding grass as a function of distance and various weather conditions. The variation of the received power between grass and runway was used as a measure for the contrast, and they defined a 3 dB difference as a threshold for sufficient visibility. The result of these investigations for a 35 GHz mmw sensor and an IR sensor is given in Table 3.2.
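The contrast criterion itself is easy to reproduce. The following snippet is only a sketch of that evaluation, assuming the received powers are available in watts; the sample values are made up for illustration and are not taken from /3/.

```python
import math

def contrast_db(p_runway_w: float, p_grass_w: float) -> float:
    """Contrast between runway and surrounding grass returns, in dB."""
    return abs(10.0 * math.log10(p_runway_w / p_grass_w))

def sufficient_visibility(p_runway_w: float, p_grass_w: float, threshold_db: float = 3.0) -> bool:
    """Horne et al. style criterion: the scene is usable if the
    runway/grass contrast exceeds the 3 dB threshold."""
    return contrast_db(p_runway_w, p_grass_w) >= threshold_db

# Illustrative values only (not measured data):
print(sufficient_visibility(2.0e-9, 1.2e-9))   # ~2.2 dB -> False
print(sufficient_visibility(4.0e-9, 1.2e-9))   # ~5.2 dB -> True
```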
Although the visibility data of the other types of sensors are not completely comparable, we can state that the mmw sensor seems to be the most successful one.

Another very important feature of the above listed sensors is the type of image generation: UV, IR, video and PMMW sensors generate a perspective 2D kind of image, which the human visual perception system is evolutionarily trained to process into a 3-D interpretation of the "outside world". The mmw sensor, on the other hand, delivers primarily information about the range and the angular direction of a certain object. This range-angle information can be transformed into an "out-of-the-window" view, but there is still a lack of information about the object's height or its vertical position /9/. The presentation of such images needs knowledge about the surrounding elevation, which is often estimated by the so-called "flat-earth assumption" /12/. On the other side, the mmw sensor delivers direct information about the distance to certain objects, which is extremely valuable for navigation, traffic analysis, and obstacle detection and avoidance respectively.

But whatever conclusions are drawn from these different kinds of image generation, it seems to be impossible to claim a general advantage for one of them. In connection with the above definition of the term "Enhanced Vision" as a fusion of "synthetic vision" and "sensor vision", the type of image generation is an important part of the system philosophy.

On the basis of the sensor requirements (Table 3.1) and the sensor characteristics (Table 3.2) we should be able to correlate sensor characteristics and requirements (Table 3.3).
[Table 3.3: Matrix of EVS requirements and sensor characteristics ("+": good; "0": fair; "-": poor). Rows: weather independent, sufficient resolution, image rate > 15 Hz, image delay < 200 ms, coverage (azimuth x elevation), passive sensor, "silent" sensor, autonomous, reconnaissance capability. Columns: UV sensor, video sensor, IR sensor, mmw sensor, pmmw sensor. 1) air to ground observation.]
Simply counting the "+" and "yes" entries, Table 3.3 gives the impression that the infrared sensors and the simple video cameras should be the most promising sensors for EVS applications. They offer nearly all features which might be valuable, except one: the ability to penetrate fog, snow and rain. Walter F. Horne et al. /3/ stated in their paper: "During approaches and landings in actual weather for 50 foot ceilings and 700-1000 foot visibility the millimeter wave image was not noticeably degraded. During these same in-weather approaches, the IR sensor was not able to provide an image any better than the human eye."

If EVS technology is mainly justified by an increase of the crew's (visual) situation awareness under adverse weather conditions, the all-weather capabilities of the sensors will become the most important characteristic. From this standpoint the UV, the mmw and the pmmw sensor should be taken into account. The UV sensor and the passive mmw sensor need some additional ground facilities (UV sources and mmw reflectors), the first one in general
and the latter one in adverse weather conditions /4,6/. Additional ground facilities might be no problem, especially if they are cheap and easy to install, but they restrict the EVS technology to certain scenarios with a ground based infrastructure, which might not always be available, especially in military environments.
waveform: FMCW
scanning principle: frequency scanning
centre frequency: 35 GHz
azimuth coverage: 41°
range resolution: 6 m (512 pixel / 3300 m)
instrumented range: 3.5 km

Table 3.4: Parameters of the DASA HiVision radar.
These were the reasons why DLR decided to use the "HiVision" mm-wave radar from DASA Ulm as the main sensor for EVS research. Technical descriptions are given in several papers /5,11/; therefore a short summary might be sufficient: the "HiVision" radar uses frequency modulated continuous wave (FMCW) technology with a frequency scanning antenna which covers an azimuth sector of 40°. Table 3.4 summarizes the parameters which were achieved during recent tests.
A very promising feature of this sensor, especially for military applications, is the ESM range within which the radar could be detected. With an output power of 100 mW / 500 mW, a high performance ESM receiver with a -80 dBm sensitivity would detect the radar within a distance of 2.3 / 5.2 km. On the other hand, the radar range against a person (radar cross section approx. 1 m²) is 6.9 / 10 km, which means that the hostile detection range is much smaller than the detection range of the radar itself /5/. Or in other words: the aircraft with an active HiVision radar can be detected earlier by "ears" than by an ESM receiver.
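These figures follow the expected power scaling: for a one-way link the ESM detection range grows with the square root of the transmitted power, while the two-way radar range grows only with its fourth root. The following lines are merely a consistency check of that scaling, not part of /5/; the 100 mW ranges are taken from the text above and the factor of five corresponds to the step to 500 mW.

```python
# One-way (ESM intercept) range scales with sqrt(P); two-way radar range with P**0.25.
power_ratio = 500 / 100           # 500 mW vs. 100 mW

esm_range_100mw = 2.3             # km, from the text above
radar_range_100mw = 6.9           # km, against a 1 m^2 target

print(f"predicted ESM range at 500 mW:   {esm_range_100mw * power_ratio ** 0.5:.1f} km")    # ~5.1 km
print(f"predicted radar range at 500 mW: {radar_range_100mw * power_ratio ** 0.25:.1f} km") # ~10.3 km
```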
4. Results

4.1 Experimental setup

Fig. 4.1: Multi sensor test van, equipped with the mmw radar (aluminium box on the top), a video and an IR sensor.

The DASA HiVision radar is a rather new development which was first investigated within a static environment /11/. The Institute of Flight Guidance at DLR Braunschweig started to take the first step from a static scenario to a dynamic one in 1996. In order to gain experience rather quickly, a Mercedes Benz van (Fig. 4.1) was equipped with a differential GPS receiver, with a Litton inertial reference unit (to determine the exact trajectory during the test) and a set of forward looking imaging sensors for the visible and the infrared channel. The visible channel was covered with a video camera of type FA 871 (0.4 - 1.0 µm wavelength) with a resolution of 581 x 756 pixels, manufactured by Grundig. For the infrared channel the AEGAIS camera manufactured by AIM Heilbronn (formerly AEG), covering the 3-5 µm wavelength with a spatial resolution of 256 x 256 pixels, was used. A set of typical images is shown in Fig. 4.3.

In parallel, DASA and DLR developed an airborne version of the HiVision radar for the DO 228 (Fig. 4.2). First flight tests are planned for the autumn of 1998.

Fig. 4.2: DASA HiVision radar mounted on top of a DO 228.

To avoid any misunderstanding: the "AWACS-like" position of the radar on top of the plane (Fig. 4.2) was only chosen to avoid any kind of disturbance between the already existing IR sensor at the nose of the plane and the radar. With a width of the radar antenna of 70 - 80 cm it should always be possible to find a position inside the radome, at least on medium sized or large transport aircraft.
4.2 Experiments with DASA HiVision Radar
Sensor data sets were recorded on Braunschweig airfield, driving a well defined course along taxiways and runways. The experiments were repeated at different times of the day (even at night) and under different weather conditions, such as sunny and cloudy sky, rain and fog. Furthermore, some artificial obstacles like other vans were placed on the track, and natural obstacles (Fig. 4.4) appeared at random. Fig. 4.3 shows a set of images acquired during a typical test run, looking along a concrete runway towards an area with parked airplanes. While Fig. 4.3 a) and b) show unprocessed images of the visible and the IR channel, images c) and d) show a radar out-the-window view (C-scope) and a radar map view (PPI-scope) calculated from the raw radar data, which are available in a range-angle (B-scope) format.
Fig. 4.3: A set of typical multispectral images of a Braunschweig airport taxiway. a: visible channel; b: IR channel; c: radar "out-the-window view"; d: radar map view.

Fig. 4.4: C-scope radar image with "obstacle" on the taxiway.
A very interesting example of the mmw radar's capability for obstacle detection is presented in Fig. 4.4. This picture shows the same taxiway of Braunschweig airport as Fig. 4.3c, but this time a certain obstacle was located in the region of the taxiway. Fig. 4.4 also emphasizes some problems which are connected with the interpretation of a radar image: the poor image resolution and the kind of image generation (range-angle), which contains no information about the object's height or its vertical position. The only information which is really transmitted by the radar image of Fig. 4.4 is a reflex in a part of the scene (the taxiway) where no reflex would be expected. The pilot has no idea about the kind of object, and due to the way of image generation he also has no information about the object's height and vertical position (cf. Fig. 4.8). In this special case the pilot would identify a bird sitting on the taxiway simply by looking out of the window. That is of course not the idea of an EVS sensor, but from the pure radar information nobody would really be able to decide whether the "obstacle" is under, ahead of, or above
the radar position. This problem of radar image interpretation is still open.
4.3 Data Fusion
An integrated Enhanced Vision System, as defined in section 2, requires the fusion of several data sources. In order to achieve that, all data have to be transformed into the same level of representation. Radar image data in particular can exist in different formats:

1. range-angle view (B-scope), Fig. 4.5 a;
2. map view (PPI-scope), Fig. 4.5 b;
3. out-the-window view (C-scope), Fig. 4.5 c.
Of special interest for radar images is the range-angle, or B-scope, representation. This type of picture contains two basic pieces of information: the distance (range) of a certain object relative to the radar head and the direction in terms of the azimuth angle. These "range-angle" pictures use the x-axis for the angle representation and the y-axis for the object (= reflector) distances. Figure 4.5 shows some examples of these different types of image data representations.
Fig. 4.5: Different radar image representations of the same scene. a: range-angle view (B-scope); b: PPI-scope or range / cross-range view; c: out-the-window view (C-scope).
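How the three representations relate can be sketched in a few lines. The conversion below from a B-scope array to a PPI map is straightforward polar-to-Cartesian resampling; the out-the-window projection additionally needs an assumed elevation for every return, here the flat-earth assumption mentioned in section 3. The grid sizes, the radar height and the focal length are arbitrary example values, not parameters of the HiVision radar.

```python
import numpy as np

def bscope_to_ppi(bscope, r_max_m, az_span_deg, grid_m=1.0):
    """Resample a B-scope image (rows = range bins, cols = azimuth bins)
    onto a Cartesian map (PPI-scope): x = r*sin(az), y = r*cos(az)."""
    n_rng, n_az = bscope.shape
    size = int(2 * r_max_m / grid_m)
    ppi = np.zeros((size, size))
    for i in range(n_rng):
        r = (i + 0.5) * r_max_m / n_rng
        for j in range(n_az):
            az = np.radians((j + 0.5) / n_az * az_span_deg - az_span_deg / 2)
            x, y = r * np.sin(az), r * np.cos(az)
            col = int((x + r_max_m) / grid_m)
            row = int(size - 1 - y / grid_m)           # y (ahead) grows upwards
            if 0 <= row < size and 0 <= col < size:
                ppi[row, col] = max(ppi[row, col], bscope[i, j])
    return ppi

def bscope_pixel_to_cscope(r, az_deg, radar_height_m, focal_px):
    """Project one B-scope return into an out-the-window view (C-scope),
    assuming the reflector lies on flat ground below the radar."""
    az = np.radians(az_deg)
    ground = np.sqrt(max(r**2 - radar_height_m**2, 0.0))   # horizontal distance
    x, y, z = ground * np.sin(az), radar_height_m, ground * np.cos(az)
    u = focal_px * x / z            # image column offset from the centre
    v = focal_px * y / z            # image row offset (below the horizon)
    return u, v
```

In practice the resampling would be done with proper interpolation; the nested loops above only make the underlying geometry explicit.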
Fig. 4.6a: Radar image with terrain overlay: radar map view.
Fig. 4.6b: Radar image with terrain overlay: radar out-the-window view.
A first, rather simple realization of an Enhanced Vision System deals with the fusion of radar images, which might be available in the PPI-scope format or as out-the-window view, with a terrain data base. Position and attitude delivered by the vehicle's navigation system and a 3-D terrain model are used to compute a wire frame overlay of terrain elements which can be presented in combination with the radar image. Figure 4.6 shows two examples of a radar out-the-window view (C-scope, Fig. 4.6b) and a radar map view (PPI-scope, Fig. 4.6a).
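A minimal sketch of such an overlay computation is given below: terrain vertices from the data base are transformed with the vehicle's position and attitude and projected into the sensor view, after which the terrain edges can be drawn on top of the radar image. The rotation convention, the pinhole projection and all numerical values are assumptions for this example rather than the actual DLR processing chain.

```python
import numpy as np

def attitude_matrix(roll, pitch, yaw):
    """Rotation from world axes (north, east, down) into body axes (z-y-x order)."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    rz = np.array([[cy, sy, 0], [-sy, cy, 0], [0, 0, 1]])
    ry = np.array([[cp, 0, -sp], [0, 1, 0], [sp, 0, cp]])
    rx = np.array([[1, 0, 0], [0, cr, sr], [0, -sr, cr]])
    return rx @ ry @ rz

def project_terrain(vertices_world, position, roll, pitch, yaw, focal_px):
    """Project 3-D terrain vertices (N x 3, world frame) into pixel offsets
    of a forward looking sensor; returns None for points behind the sensor."""
    r_wb = attitude_matrix(roll, pitch, yaw)
    pts_body = (vertices_world - position) @ r_wb.T       # into body axes
    pixels = []
    for x, y, z in pts_body:                               # x ahead, y right, z down
        if x <= 0:                                         # behind the sensor
            pixels.append(None)
            continue
        pixels.append((focal_px * y / x, focal_px * z / x))
    return pixels
```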
The fused radar images are a rather natural supplement to the type of information carried by the primary flight display (PFD). A tentative example of how to integrate such a fused image into a PFD is shown in Fig. 4.7.
Fusion techniques, especially in the field of image data fusion, are often devoted to the fusion of several image sources with different spectral sensitivity /13/. In a first step the emissions of all objects and radiation sources are transformed into the same spectral region in order to make them visible or detectable with a common tool, in many cases the human eye.

Disregarding the possibility of data reduction which might be achieved with an image fusion technique, the main reason for image data fusion is to improve the situational awareness of the pilot or the observer respectively. In /14/ the authors demonstrate a dramatic increase in reliability of target detection and observer performance if visible and thermal images are fused, either in grey scale or in colour.
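As an illustration of what such a fusion can look like, the snippet below combines a visible and a thermal image of the same scene, once as a weighted grey-scale average and once as a simple false-colour composite. This is a generic scheme, not the method used in /14/; the weights and the channel assignment are arbitrary choices for the example.

```python
import numpy as np

def fuse_grey(visible, thermal, w_vis=0.5):
    """Weighted grey-scale fusion of two registered images scaled to [0, 1]."""
    return np.clip(w_vis * visible + (1.0 - w_vis) * thermal, 0.0, 1.0)

def fuse_colour(visible, thermal):
    """Simple false-colour fusion: thermal drives the red channel,
    visible the green channel, their difference the blue channel."""
    blue = np.clip(0.5 + 0.5 * (visible - thermal), 0.0, 1.0)
    return np.dstack([thermal, visible, blue])

# usage with synthetic data (both images registered and normalised to [0, 1]):
rng = np.random.default_rng(1)
vis = rng.random((128, 128))
ir = rng.random((128, 128))
grey = fuse_grey(vis, ir)          # shape (128, 128)
colour = fuse_colour(vis, ir)      # shape (128, 128, 3)
```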
Fig. 4.7: Primary flight display with an integrated fused radar image.
If we concentrate for a moment on the EVS technique as a possibility to increase the pilot's or crew's situation awareness, again disregarding requirements for reconnaissance or remote control, we have to compete with a very famous "vision system": the pilot's eyes. In terms of situation awareness the EVS technology should "enhance" the ability of the human operator, but not replace him. With this background the design of an enhanced vision system, especially the applied fusion philosophy, has to be examined very carefully:

• What kind of additional sensor really enhances the human cognition?
• How useful is a sensor which covers the same, or a similar, spectral range as the human eye?
• What kind of representation (map or out-the-window view) should be used in what situation?
• What type of images, symbols, synthetic or sensor vision, should be displayed on a HUD?
Another rather important type of data fusion, especially for radar images, is related to a technique which could be called "data fusion in time and space". Besides the fusion of different data (images) from different sources, it is also possible to fuse different images from one sensor, taken from different positions and at different times. The SAR technique or computer tomography are good examples of how to improve the information of one sensor using data "fusion in space". For EVS applications this approach offers the following advantages:

• (determination of the object's height and vertical position by displacement vectors;)
• reduction of background noise (fusion in time);
• integration of single snapshots into an overall view (fusion in space).
Fig. 4.8: The radar imaging situation.
The first point is set in brackets because the authors feel that this advantage does not really exist, although it is mentioned in the literature /12/. Figure 4.8 shows the situation. The radar is only able to measure the distance to certain objects; information about the vertical position is not available. A possible solution to this problem could be a series of distance measurements taken from the moving radar, because the slant distance is a function of the object's vertical position relative to the radar head. This is the idea to overcome the lack of information about the vertical axis, but for the currently available radar systems this procedure fails, due to the rather large range errors, which are in the order of 3 - 6 m.
[Table 4.1: Errors of the vertical position measured by a moving radar, for radar heights of 200, 300 and 400 m and slant distances R2 of the second measurement between 900 m and 500 m. Starting distance between radar and object: 1000 m; range error of the radar: 6 m. 1): no numerical solution exists. 2): object not within the elevation range of the radar.]
The following example from Table 4.1 clarifies the basic procedure: the vertical distance between the radar head and the object might be 400 m; the slant distance R1 to the object is assumed to be 1000 m. In order to get information about the vertical position of the object, a second measurement at a slant distance of R2 = 900 m is carried out. Both distance measurements are afflicted with a range error of 6 m, which leads to an uncertainty of the vertical position between 260 m and 570 m.
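The geometry behind this estimate can be written down in a few lines. The sketch below assumes level motion straight towards a stationary object and an exactly known along-track displacement s between the two measurements, so that the vertical offset dh is the only unknown in sqrt(R1^2 - dh^2) - sqrt(R2^2 - dh^2) = s; perturbing R1 and R2 by the ±6 m range error then shows how strongly the recovered height scatters. The solver and the error sampling are illustrative assumptions, not the evaluation used for Table 4.1.

```python
import math

def height_offset(r1, r2, along_track, tol=1e-6):
    """Vertical offset dh between radar and object from two slant ranges r1 > r2
    taken `along_track` metres apart during level flight towards the object.
    Solves sqrt(r1^2 - dh^2) - sqrt(r2^2 - dh^2) = along_track by bisection;
    returns None if no real solution exists."""
    f = lambda dh: math.sqrt(r1**2 - dh**2) - math.sqrt(r2**2 - dh**2) - along_track
    lo, hi = 0.0, r2 - 1e-9
    if f(lo) > 0.0 or f(hi) < 0.0:      # f is monotonically increasing in dh
        return None
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) < 0.0 else (lo, mid)
    return 0.5 * (lo + hi)

# True geometry of the example: dh = 400 m, R1 = 1000 m, R2 = 900 m.
dh_true, r1_true, r2_true = 400.0, 1000.0, 900.0
s = math.sqrt(r1_true**2 - dh_true**2) - math.sqrt(r2_true**2 - dh_true**2)

# Apply the +-6 m range error to both measurements and look at the spread.
for e1 in (-6.0, 0.0, 6.0):
    for e2 in (-6.0, 0.0, 6.0):
        dh = height_offset(r1_true + e1, r2_true + e2, s)
        label = "no solution" if dh is None else f"{dh:.0f} m"
        print(f"R1 error {e1:+.0f} m, R2 error {e2:+.0f} m -> dh = {label}")
```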
Fig. 4.9: Fusion of radar images to an overall view of Braunschweig airfield (fusion in space, with a moving vehicle).
The other data fusion techniques mentioned above, fusion in time and fusion in space, are rather promising, particularly for the range-angle images of a radar. Fig. 4.9 shows some details of a fusion in space: the wire frame map in Fig. 4.9 represents the Braunschweig airfield, and the shaded segments indicate those parts of the map which are covered by the radar (PPI-scope) during taxiing. Using the information about the vehicle's position, which is given by an inertial reference unit or by means of GPS, every single radar image can be assembled into an overall view of the whole airfield. (For this special fusion technique the range-angle information of a radar picture is very advantageous.)
The image rate of the DASA HiVision radar is 16 Hz, which leads to a large overlap of the PPI-scope pictures. If the vehicle's velocity is rather slow compared with the frame rate, which is a typical situation during taxiing, the overlap between consecutive radar images covers a large common area. This feature can be used for noise reduction and to eliminate or to detect moving targets by means of an averaging technique, adding up weighted grey scale values. The effect of such a motion compensated fusion technique is threefold (a minimal sketch of the accumulation step follows below):

• noise reduction;
• enlargement of the image coverage;
• increase of spatial resolution.
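The accumulation itself can be sketched as follows: each PPI frame is transformed with the vehicle pose into a world-fixed grid and added to a running weighted average, so that stationary reflectors are reinforced while noise and moving targets are averaged out. The grid parameters, the exponential weighting and the nearest-neighbour resampling are assumptions made for this illustration, not the DLR implementation.

```python
import numpy as np

class WorldMosaic:
    """World-fixed grid (covering [0, size_m) east and north) that accumulates
    motion compensated PPI frames."""

    def __init__(self, size_m=4000.0, cell_m=2.0, alpha=0.1):
        n = int(size_m / cell_m)
        self.cell_m, self.alpha = cell_m, alpha
        self.grid = np.zeros((n, n))        # weighted grey-scale average
        self.hits = np.zeros((n, n))        # number of frames seen per cell

    def add_frame(self, ppi, vehicle_xy, heading_rad):
        """ppi: square map-view image centred on the vehicle, with cell_m metres
        per pixel; vehicle_xy: world position (east, north) in metres;
        heading_rad: vehicle heading, clockwise from north."""
        n_ppi = ppi.shape[0]
        half = n_ppi // 2
        c, s = np.cos(heading_rad), np.sin(heading_rad)
        for row in range(n_ppi):
            for col in range(n_ppi):
                # pixel position in the vehicle frame (x right, y ahead), metres
                xv = (col - half) * self.cell_m
                yv = (half - row) * self.cell_m
                # rotate by heading and shift into world coordinates
                east = vehicle_xy[0] + c * xv + s * yv
                north = vehicle_xy[1] - s * xv + c * yv
                gi, gj = int(north / self.cell_m), int(east / self.cell_m)
                if 0 <= gi < self.grid.shape[0] and 0 <= gj < self.grid.shape[1]:
                    # exponentially weighted average: stationary echoes persist,
                    # single-frame noise and moving targets fade out
                    self.grid[gi, gj] += self.alpha * (ppi[row, col] - self.grid[gi, gj])
                    self.hits[gi, gj] += 1
```

In practice one would resample with interpolation rather than per-pixel loops; the loops above only make the coordinate transform explicit.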
The result is shown in Fig. 4.10: Fig. 4.10a presents a detail of a raw PPI-scope picture. After fusion in space and time, the picture noise was nearly completely eliminated and the object itself can be identified as a row of lamps.

The fusion of radar images shown in Figure 4.9 refers to a situation with a fixed map and a moving vehicle. In this case the fusion procedure eliminates all moving objects and, of course, the noise. These features qualify this type of fusion as a "map generator".
Fig. 4.10: Noise reduction in a PPI-scope image. a: detail of a single radar image; b: the same part of the picture after fusion.
Reference system:    Fused type of image:    Result:
Cockpit              C-scope                 noise reduced "out-the-window view"
Geographic system    PPI-scope               overall map view

Table 4.2: Fusion in time and space for different reference systems.
The same fusion technique can be applied to an out-the-window view (C-scope) with a fixed vehicle and a moving map. Unfortunately, moving and co-moving objects are then also suppressed, but the visibility of fixed objects will increase. Table 4.2 compares these different fusion techniques.
5. Conclusion
The integrated enhanced vision concept as investigated at DLR demands the fusion of sensor vision, synthetic vision and additional information about the inner and outer status of the vehicle. In order to meet most weather situations, DLR decided to use the HiVision radar from DASA Ulm, which is a forward looking active, but from the military standpoint 'silent', mm-wave imaging sensor. Radar sensors are very promising in the field of weather penetration, but on the other side they are a great challenge concerning noise and resolution (angle and range), and they fail completely in delivering the vertical position of the depicted object. Data fusion techniques, such as an overlay with terrain data or fusion in time and space, may be applied to overcome the former difficulties. The restriction of the mmw technology to range-angle information and its impact on the architecture of the overall Enhanced Vision System is still a topic for research.
A very promising imaging technique in this field would be the total 3-D reconstruction of the outside world from 2-D (range-angle) radar images. This would require the adoption of signal reconstruction methods as they are known from computer tomography, by fusing radar images with different scanning directions. These image data could be acquired, for example, by at least two orthogonally mounted radar antennas or by a rotating one.

First experiments with the DASA HiVision radar as an enhanced vision sensor show some very promising results concerning range, image rate and resolution.
6. References
/1/ Hester, R.B.; Summers, L.G.; Todd, J.R.: "Seeing Through the Weather: Enhanced / Synthetic Vision Systems for Commercial Transports." SAE International Technical Paper Series 921973, Aerotech '92, 5-8 Oct 1992, p. 97-104.

/2/ "Enhanced and Synthetic Vision 1997", Jacques G. Verly, Editor, Proceedings of SPIE Vol. 3088, 1997.

/3/ Horne, W.F.; Ekiert, S.; Radke, J.; Tucker, R.R.; Hannan, C.H.; Zak, J.A.: "Synthetic Vision System Technology Demonstration Methodology and Results to Date", SAE International Technical Paper Series 921971, Aerotech '92, 5-8 Oct 1992, p. 1-19.

/4/ Nordwall, B.D.: "UV Sensor Proposed As Pilot Landing Aid", Aviation Week & Space Technology, August 11, 1997, p. 81-84.

/5/ Pirkl, M.; Tospann, F.-J.: "The HiVision MM-Wave Radar for Enhanced Vision Systems in Civil and Military Transport Aircraft." in Enhanced and Synthetic Vision 1997, Proc. of SPIE Vol. 3088, p. 8-18 (1997).

/6/ Shoucri, M.; Dow, G.S.; Hauss, B.; Lee, P.; Yujiri, L.: "Passive millimeter-wave camera for vehicle guidance in low-visibility conditions." Proc. of SPIE Vol. 2463, p. 2-9 (1995).

/7/ Appleby, R.; Price, S.; Gleed, D.G.: "Passive millimeter-wave imaging: seeing in very poor visibility", Proc. of SPIE Vol. 2463, p. 10-19 (1995).

/8/ Shoucri, M.; Dow, G.S.; Fornaca, S.; Hauss, B.; Yujiri, L.: "Passive millimeter-wave camera for enhanced vision systems." in Enhanced and Synthetic Vision 1996, J.G. Verly, Editor, Proc. of SPIE Vol. 2736, p. 2-8 (1996).

/9/ Doehler, H.-U.; Bollmeyer, D.: "Simulation of imaging radar for obstacle avoidance and enhanced vision." in Enhanced and Synthetic Vision 1997, J.G. Verly, Editor, Proc. of SPIE Vol. 3088, p. 64-73 (1997).

/10/ Bull; Franklin, M.; Taylor, Chr.; Neilson, G.: "Autonomous Landing Guidance System Validation" in Enhanced and Synthetic Vision 1997, J.G. Verly, Editor, Proc. of SPIE Vol. 3088, p. 19-25 (1997).

/11/ Tospann, F.-J.; Pirkl, M.; Grüner, W.: "Multifunction 35 GHz FMCW radar with frequency scanning antenna for synthetic vision applications." in Synthetic Vision for Vehicle Guidance and Control, J.G. Verly, Editor, Proc. of SPIE Vol. 2463, p. 28-37 (1995).

/12/ Pavel, M.; Sharma, R.K.: "Fusion of radar images: rectification without the flat earth assumption." in Enhanced and Synthetic Vision 1996, J.G. Verly, Editor, Proc. of SPIE Vol. 2736, p. 108-118 (1996).

/13/ Sweet, B.T.; Tiana, C.: "Image processing and fusion for landing guidance" in Enhanced and Synthetic Vision 1996, J.G. Verly, Editor, Proc. of SPIE Vol. 2736, p. 84-95 (1996).
/14/ Toet, A.; IJspeert, J.K.; Waxman, A.M.; Aguilar, M.: "Fusion of visible and thermal imagery improves situational awareness" in Enhanced and Synthetic Vision 1997, J.G. Verly, Editor, Proc. of SPIE Vol. 3088, p. 177-188 (1997).