Miroslav Hlaváč, Martin Kozák
27. 07. 2011
Fish position determination in 3D space by stereo vision
Project goals
• Design a low-budget system to determine the 3D position of fish in a water environment in real time
• Explore the capabilities of a two-camera system
• Explore the capabilities of the Kinect depth sensor
• Test both systems in different conditions
• Compare results from the cameras and the Kinect
• The designed system will be used to track differences in fish motion
Used equipment and software
• Aquarium (60×30×30 cm) – a similar one is planned to be used in the real application of this project
• Two Microsoft LifeCam Studio webcams
• Calibration object (chessboard)
• Kinect for Xbox 360
• Rubber testing object
• Matlab
Two cameras system
• The system of two cameras emulates human eyes
• We need to calibrate the cameras to determine the system parameters
• These parameters are then used to compute 3D coordinates from two different views of the scene (epipolar geometry)
Epipolar geometry
• We can determine the position of a point from one image, but to determine depth we need information from the second camera
• Select a point in the left image and find the corresponding point on the epipolar line in the right image
• Compute the 3D coordinates from those two points
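The last step above can be sketched as a linear (DLT) triangulation. This is a generic illustration, not the project's Matlab implementation; the projection matrices, baseline, and test point below are hypothetical placeholders.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.

    P1, P2 : 3x4 projection matrices of the left/right cameras.
    x1, x2 : (u, v) image coordinates of the corresponding points.
    """
    # Each correspondence contributes two rows of the homogeneous system A X = 0.
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The 3D point is the right singular vector of the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # de-homogenize

# Hypothetical setup: two identical cameras with a 100 mm baseline along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])                 # left camera at origin
P2 = np.hstack([np.eye(3), np.array([[-100.0], [0.0], [0.0]])])  # right camera
X_true = np.array([10.0, 20.0, 500.0])                        # point in mm

# Project the point into both views, then recover it by triangulation.
h = np.append(X_true, 1.0)
x1 = (P1 @ h)[:2] / (P1 @ h)[2]
x2 = (P2 @ h)[:2] / (P2 @ h)[2]
X_rec = triangulate(P1, P2, x1, x2)
print(np.round(X_rec, 3))  # recovers the original point
```

With noise-free correspondences the reconstruction is exact up to numerical precision; with real detections the SVD gives the least-squares solution.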
[Figure: extrinsic parameters – 3D plot of the calibrated positions of the Left Camera and Right Camera with their X, Y, Z axes]
Cameras calibration
• Two sets of parameters for the cameras:
– Extrinsic (rotation and translation between the cameras)
– Intrinsic (focal length, skew and pixel distortion for each camera)
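How the two parameter sets combine into a projection can be sketched as follows. All numeric values (focal lengths, skew, principal point, rotation, baseline) are hypothetical placeholders, not the calibration results from this project.

```python
import numpy as np

# Intrinsic parameters (placeholder values): focal lengths fx, fy,
# skew s, and principal point (cx, cy).
fx, fy, s, cx, cy = 800.0, 800.0, 0.0, 320.0, 240.0
K = np.array([[fx, s,  cx],
              [0., fy, cy],
              [0., 0., 1.]])

# Extrinsic parameters (placeholder): rotation R and translation t
# relating the world frame to this camera.
theta = np.deg2rad(5.0)  # small rotation about the y-axis
R = np.array([[ np.cos(theta), 0., np.sin(theta)],
              [ 0.,            1., 0.           ],
              [-np.sin(theta), 0., np.cos(theta)]])
t = np.array([[-100.0], [0.0], [0.0]])  # 100 mm baseline shift

# Full 3x4 projection matrix: P = K [R | t].
P = K @ np.hstack([R, t])

# Project a homogeneous world point [mm] into pixel coordinates.
X = np.array([10.0, 20.0, 500.0, 1.0])
u, v, w = P @ X
print(u / w, v / w)  # pixel coordinates of the projection
```

Once both cameras' P matrices are known from calibration, the same matrices feed directly into the triangulation step.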
Kinect
• Gaming device for the Xbox 360
• Projects an IR light pattern onto the scene through a special grid
• Computes depth information from the distortion of the projected pattern
Cameras results 1
[Figure: 3D plot of the reconstructed trajectory of the tracked point]
• Manual selection of corresponding points
• The white point on the rubber testing object is selected manually and its 3D trajectory computed
• 3D coordinate accuracy is ±0.5 mm
Camera results 2
• We developed an online tracking system – 7 fps
• Automatic corresponding-point selection
• Image thresholding
• Binary image opening to eliminate small distortions
• Computing the mean position of the white pixels gives the corresponding points in both images
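The automatic-selection steps above (threshold, morphological opening, mean of white pixels) can be sketched in Python. The frame and threshold are synthetic stand-ins for the project's Matlab pipeline, and the 3×3 opening is a minimal hand-rolled version.

```python
import numpy as np

def erode(mask):
    """3x3 binary erosion (zero-padded border)."""
    p = np.pad(mask, 1)
    h, w = mask.shape
    out = np.ones_like(mask)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out &= p[1 + dy : 1 + dy + h, 1 + dx : 1 + dx + w]
    return out

def dilate(mask):
    """3x3 binary dilation (zero-padded border)."""
    p = np.pad(mask, 1)
    h, w = mask.shape
    out = np.zeros_like(mask)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out |= p[1 + dy : 1 + dy + h, 1 + dx : 1 + dx + w]
    return out

def track_point(gray, thresh=200):
    """Threshold, open away speckle noise, return the white-pixel centroid."""
    mask = gray > thresh
    mask = dilate(erode(mask))  # morphological opening
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return None
    return float(xs.mean()), float(ys.mean())

# Synthetic frame: a bright 5x5 marker plus one isolated noise pixel.
frame = np.zeros((40, 40), dtype=np.uint8)
frame[10:15, 20:25] = 255  # the white marker
frame[30, 5] = 255         # single-pixel noise, removed by the opening
print(track_point(frame))  # -> (22.0, 12.0)
```

Running this independently on the left and right frames yields the pair of corresponding points fed to the triangulation.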
Kinect accuracy
• Dependence of the Kinect-measured distance on the real distance at different water depths
• The relation is independent of depth
• Kinect accuracy along the x-axis in water
• x-axis accuracy is ±3.5 pixels
[Figure: measured distance [cm] vs. real distance [cm] in water, with linear fit f(x) = 0.7475x − 0.01; object size [pixel] vs. shift from the center of view [cm]]
Kinect results
• We developed an online tracking system – 30 fps
• Maximum measurable depth in clear water is 40 cm
• Maximum measurable depth in dirty water is 20 cm
• The depth of the fish is obtained by depth thresholding
• Minimum measurable distance is 80 cm
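The depth-thresholding step can be sketched as follows. The depth map, threshold values, and blob are synthetic placeholders; a real Kinect delivers a per-pixel depth map in millimeters.

```python
import numpy as np

def segment_fish(depth_mm, near_mm, far_mm):
    """Keep pixels whose depth lies inside the volume of interest.

    depth_mm        : Kinect depth map (0 = no reading).
    near_mm, far_mm : hypothetical thresholds bracketing the aquarium.
    Returns the fish position (x, y, depth) and the binary mask.
    """
    mask = (depth_mm > near_mm) & (depth_mm < far_mm)
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return None, mask
    # Fish position: centroid of the in-range pixels plus their mean depth.
    pos = (float(xs.mean()), float(ys.mean()), float(depth_mm[mask].mean()))
    return pos, mask

# Synthetic depth map: background wall at 1200 mm, "fish" blob at 950 mm.
depth = np.full((60, 80), 1200, dtype=np.uint16)
depth[20:25, 30:40] = 950
pos, mask = segment_fish(depth, near_mm=800, far_mm=1100)
print(pos)  # centroid of the blob and its mean depth
```

Because the depth map is a direct output, no correspondence search or calibration is needed, which is what makes the 30 fps rate achievable.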
Kinect vs. cameras

Kinect
• No need for calibration (+)
• Depth map is a direct output (+)
• No dependence on color or outside light (+)
• Limited maximum water depth (-)
• IR-reflecting materials cause errors in the depth map (-)
• Lower accuracy in water (-)
• Minimum distance 80 cm (-)

Cameras
• Precision (+)
• Environment independence (+)
• Image segmentation (-)
• Localization of corresponding points (-)
• Calibration for each new system position (-)
• Requires more processing power (-)
Conclusion
• Both systems are usable for online 3D fish position determination in water
• We would recommend the Kinect in environments where accuracy is not the main concern, the water is shallow and clean, and more mobility is needed
• The cameras offer higher accuracy and environment independence, but they require more processing power (corresponding-point detection) and initial calibration
Acknowledgement
We would like to thank Ing. Petr Císař, Ph.D. for leading us through this project and for his advice.