Reconstructing the 3D world out of two frames, based on the camera pinhole model:

1. Calculating the Fundamental Matrix for each pair of frames.
2. Estimating the Essential Matrix using the camera's calibration information, and extracting the transformation between the frames from the Essential Matrix.
3. Calculating a first-order triangulation.
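A minimal sketch of these three steps, assuming matched point arrays `pts1`/`pts2` and an intrinsic matrix `K` as inputs. The function name and the use of OpenCV are illustrative assumptions, not the project's actual implementation:

```python
import numpy as np
import cv2

def reconstruct_two_frames(pts1, pts2, K):
    """pts1, pts2: Nx2 float arrays of matched pixel coordinates."""
    # 1. Fundamental matrix from the matched points; RANSAC rejects outliers.
    F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC)
    inl1, inl2 = pts1[mask.ravel() == 1], pts2[mask.ravel() == 1]

    # 2. Essential matrix from F and the calibration K, then the relative
    #    transformation (R, t) between the two frames.
    E = K.T @ F @ K
    _, R, t, _ = cv2.recoverPose(E, inl1, inl2, K)

    # 3. First-order (linear) triangulation of the inlier matches.
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t])
    pts4d = cv2.triangulatePoints(P1, P2, inl1.T, inl2.T)
    return (pts4d[:3] / pts4d[3]).T   # Nx3 points in the first camera's frame
```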

Laboratory of Computer Graphics & Multimedia, Department of Electrical Engineering, Technion

Reconstruction Results:

Simulation Scenarios: collision direction / same direction

(2) 3D Reconstruction

[Diagram: Matches → Fundamental Matrix → Estimating transformation between frames → Triangulation → 3D Reconstructed points]

(4) Collision Detection

[Diagram: Estimate dynamic points' scattering → Is there a collision?]

[Diagram: each of the N frames yields static feature points and dynamic feature points. From the static points only, the Fundamental Matrix is estimated for each of the N−1 frame pairs and the 3D world is reconstructed; the dynamic points are then reconstructed as well.]

Project goal: designing an algorithm for recognizing possible collision trajectories of vehicles, using video taken from a camera directed toward the rear of the driving direction.

Presented by: Adi Vainiger & Eyal Yaacoby, under the supervision of Netanel Ratner

SIFT vs. ASIFT

Though roughly 50x slower than SIFT, ASIFT was chosen because it is more accurate and finds more features.

(1) Feature Detection & Matching

[Diagram: Frame 1 and Frame 2 each pass through Feature Detection & Image Descriptors; the resulting interest points are matched between the frames, yielding matches.]

In this stage we find interest points and their descriptors, then match them between the two frames. This stage was implemented using the ASIFT algorithm.
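A hedged sketch of this stage with OpenCV. Plain SIFT stands in for ASIFT here, since ASIFT is not bundled with every OpenCV build; the poster's implementation used ASIFT itself:

```python
import cv2

def detect_and_match(img1, img2, ratio=0.75):
    """Return matched pixel coordinates between two grayscale frames."""
    sift = cv2.SIFT_create()
    # Interest points and descriptors for each frame.
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)

    # Match descriptors between the frames; Lowe's ratio test drops
    # ambiguous matches.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    pairs = matcher.knnMatch(des1, des2, k=2)
    good = [m for m, n in pairs if m.distance < ratio * n.distance]

    pts1 = [kp1[m.queryIdx].pt for m in good]
    pts2 = [kp2[m.trainIdx].pt for m in good]
    return pts1, pts2
```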

Collision recognition from a video – part A

System outline:

[Diagram: Frame i−N and Frame i → (1) Feature Detection and Matching → (2) 3D Reconstruction → (3) Recognition and Differentiation Between Static and Moving Objects → (4) Collision Detection → Alert]

The system takes a video from a camera angled relative to the direction of movement. For each time window (~2.5 seconds) in the video, the system looks at pairs of frames one second apart. Each such pair of frames is processed by stages 1 and 2. Once there are enough reconstructions, the algorithm performs stages 3 and 4.
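A sketch of this processing loop, reusing the `detect_and_match` and `reconstruct_two_frames` sketches above. The frame rate and the stage-(3)/(4) callbacks are illustrative assumptions; only the timing constants come from the poster text:

```python
import numpy as np

FPS = 25                  # assumed frame rate (not stated on the poster)
PAIR_GAP = FPS            # frame pairs one second apart
WINDOW = int(2.5 * FPS)   # ~2.5-second time window

def process_window(frames, K, classify_points, detect_collision):
    """frames: ~WINDOW grayscale images; K: intrinsic matrix.
    classify_points / detect_collision are hypothetical stage-(3) and
    stage-(4) callbacks (see the sketches in those sections)."""
    reconstructions = []
    # Stages 1-2 on every frame pair one second apart in the window.
    for i in range(len(frames) - PAIR_GAP):
        p1, p2 = detect_and_match(frames[i], frames[i + PAIR_GAP])
        pts3d = reconstruct_two_frames(np.float64(p1), np.float64(p2), K)
        reconstructions.append(pts3d)
    # Stages 3-4 once enough reconstructions have accumulated.
    static, dynamic = classify_points(reconstructions)
    return detect_collision(dynamic)
```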


Introduction: Driving is a task that requires dividing one's attention. One of its many challenges is identifying possible collision trajectories of vehicles approaching from behind. Thus, there is a need for a system that automatically recognizes vehicles that are about to collide with the user and warns him/her. Our solution is an algorithm that uses a video feed from a single simple camera, recognizes moving vehicles in the video, and predicts whether they are about to collide with the user. Part A of this project focuses on the algorithm itself, without taking real-time constraints into account.

(3) Recognition and Differentiation Between Static and Moving Objects

[Diagram: N−1 sets of 3D reconstructed points → Reconstructions Matching → Variance Calculation for each point → Static Feature Points / Dynamic Feature Points]

[Figure: synthetic testing environment (a 3D synthetic world).]

We match the reconstructions of each point across the frame pairs. Differentiating moving points from static points is based on the normalized variance of each point's matched reconstructions.

[Figure: a dynamic point's reconstruction shows high variance; a static point's reconstruction shows low variance. Ambiguity ranges from low to high.]

We normalize the variance by the angle and the distance from the camera, as the ambiguity correlates well with both.
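A minimal sketch of this classification, assuming each point's reconstructions across the N−1 frame pairs have already been matched into one array. The poster normalizes by angle and distance; here only a crude squared-distance normalization is shown, and the threshold is an illustrative assumption:

```python
import numpy as np

def classify_point(recons, threshold=0.05):
    """recons: (N-1, 3) array, one 3D reconstruction of the same point
    per frame pair. Returns 'dynamic' or 'static'."""
    recons = np.asarray(recons)
    variance = recons.var(axis=0).sum()          # total variance of the point
    depth = np.linalg.norm(recons.mean(axis=0))  # mean distance from the camera
    # Crude normalization: distant points triangulate more ambiguously,
    # so their raw variance is discounted. (Angle normalization omitted.)
    normalized = variance / depth**2
    return "dynamic" if normalized > threshold else "static"
```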

On a collision course, the lines between the camera centers and the object are almost parallel. Thus, the reconstructions will be very distant from one another, as shown in the results.

We estimate whether the dynamic points are moving toward the camera using their scattering throughout the reconstructions.
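A sketch of this test, following the near-parallel-rays reasoning above: a dynamic point whose reconstructions scatter widely is flagged as a possible collision. The scatter measure and threshold are illustrative assumptions, not the project's values:

```python
import numpy as np

def collision_suspected(dynamic_recons, scatter_threshold=1.0):
    """dynamic_recons: list of (N-1, 3) arrays, the matched
    reconstructions of each dynamic point."""
    for recons in dynamic_recons:
        recons = np.asarray(recons)
        # RMS distance of the reconstructions from their centroid.
        offsets = recons - recons.mean(axis=0)
        scatter = np.sqrt((offsets**2).sum(axis=1).mean())
        if scatter > scatter_threshold:
            # Large scatter: triangulation rays are nearly parallel,
            # consistent with an object on a collision course.
            return True
    return False
```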

 

Collision Detection Results:

*Scenario 4 is a collision scenario and the rest are non-collision scenarios.

Ideal results for the synthetic environment: 2% false negatives, 12% false positives.

Real movie results:

Example of a 3D reconstruction of the world

Static & moving object differentiation

*Red points – high variance → dynamic points. Green points – low variance → static points.

Conclusions:

On the synthetic environment, the system produces good results. When turning to real movies, we ran into several issues: matching features on dynamic objects did not work (due to rolling shutter), and the classification did not work well. However, under certain conditions we still get valuable results. Further research should allow much better results; we believe that a tracking algorithm can solve most of the issues that we saw.

Our thanks to Hovav Gazit and the CGM Lab for their support.
