Machine Vision Robotic Approach

Upload: hessam-jalali

Post on 05-Apr-2018


  • 7/31/2019 Machine Vision Robotic Approach

    1/42

    The automatic acquisition and analysis of images to obtain desired data for

    controlling a specific activity.

    A guidance system that gives a robot the ability to see what it is doing and react, as

    a human would, to changes in positioning.

    Systems that use video cameras, robots or other devices, and computers to visually

    analyze an operation or activity. Typical uses include automated inspection, optical

    character recognition and other non-contact applications.

    Machine vision is the application of computer vision to factory automation. Just as

    human inspectors working on assembly lines visually inspect parts to judge the

    quality of workmanship, so machine vision systems use digital cameras and image

    processing software to perform similar inspections. A machine vision system is a

    computer that makes decisions based on the analysis of digital images.

  • 2/42

    Vision is the Most Powerful Sense for Humans

    It provides an enormous amount of information about the environment and

    enables rich, intelligent interaction in dynamic environments.

  • 3/42

    Sensors divide into proprioceptive and exteroceptive.

    Exteroceptive sensors may be passive or active:

    Passive: thermal sensors; vision-based sensors (CCD, CMOS)

    Active: laser range finder; eye scanner (laser); sonar scanner

  • 4/42

    A CCD (charge-coupled device) is a chip of light-sensitive photocells used to

    create bitmap images.

    It is the most popular basic ingredient of robotic vision systems today.

    It is an array of light-sensitive picture elements, or pixels, usually with between

    20,000 and several million pixels in total.

    The basic light measurement is colorless: each pixel simply counts the total

    number of photons that strike it during the integration period.

    CCDs come in two arrangements: single chip and three chips.

  • 5/42

  • 6/42

    In a single-chip CCD, the pixels are grouped into 2x2 sets of four, and red, green, and

    blue dyes are applied to a color filter so that each individual pixel receives only

    light of one color.

    Normally two pixels measure green while one pixel each measures red and blue

    light intensity (GRGB, the Bayer pattern).

    Alternatively, a single chip can use the RGBE format, which replaces one of the

    green filters with emerald (a cyan-like color).

    NOTE: the effective number of pixels per color has been cut by a factor of four

    (two for green).
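    As a rough illustration of the mosaic layout above, the sketch below (assuming a GRGB 2x2 pattern and NumPy; the array contents are made up) extracts the per-color sample planes from a raw mosaic:

```python
import numpy as np

# Hypothetical GRGB (Bayer) mosaic: in each 2x2 cell, two pixels sample
# green and one each samples red and blue.
H, W = 4, 4  # tiny sensor for demonstration
mosaic = np.arange(H * W, dtype=float).reshape(H, W)  # raw intensity readings

# Layout assumed for each 2x2 cell:
#   G R
#   B G
green_a = mosaic[0::2, 0::2]  # green samples on even rows/cols
red     = mosaic[0::2, 1::2]  # red samples
blue    = mosaic[1::2, 0::2]  # blue samples
green_b = mosaic[1::2, 1::2]  # second green sample per cell

# Each color plane has only (H/2) x (W/2) samples: the per-color
# resolution is cut by a factor of four relative to the sensor.
print(red.shape)  # (2, 2)
```

    Note how green ends up with twice as many samples (the two green planes together) as red or blue, matching the GRGB weighting described above.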

  • 7/42

    A three-chip camera splits the incoming light into three complete (lower-intensity)

    copies. Three separate CCD chips receive the light, with a red, green, or blue filter

    over each entire chip.

    Silicon absorbs different wavelengths of light at different depths.

    A three-chip CCD can capture red, green, and blue light at every pixel location.

    By contrast, a single-chip mosaic sensor captures only 50% of the green and

    only 25% of the red and blue light.

  • 8/42

    The complementary metal-oxide-semiconductor (CMOS) chip is a significant departure

    from the CCD.

    In most CMOS devices, there are several transistors at each pixel that amplify the

    signal and move the charge along more conventional wires.

  • 9/42

    The CMOS approach is more flexible because each pixel can be read individually.

    CCD sensors, as mentioned above, create high-quality, low-noise images; CMOS

    sensors are traditionally more susceptible to noise.

    Because each pixel on a CMOS sensor has several transistors located next to it, the

    light sensitivity of a CMOS chip tends to be lower: many of the photons hitting the

    chip strike the transistors instead of the photodiode.

    CMOS traditionally consumes little power; implementing a sensor in CMOS yields a low-power sensor.

  • 10/42

    CMOS chips can be fabricated on just about any standard silicon production line, so

    they tend to be extremely inexpensive compared to CCD sensors.

    CCD sensors have been mass-produced for a longer period of time, so they are more mature. They tend to have higher quality and more pixels.

  • 11/42

    The key disadvantages of CCD and CMOS cameras lie primarily in the areas of

    inconstancy and dynamic range.

    A second class of disadvantages relates to the behavior of a CCD chip in

    environments with extreme illumination.

  • 12/42

    The eye scanner sensor uses a single-beam laser to model the environment.

  • 13/42

    It collects data in much the same way a sonar range finder does, but uses

    vision-based methods for data extraction and analysis when building 3D maps and

    localizing the robot, such as a 2D evidence grid.

  • 14/42

    DEPTHX was the first research robot to use this technique, exploring the Zacatón

    sinkhole in central Mexico (diameter: 110 m; depth: over 350 m).

  • 15/42

    Advantages:

    Lots of data can be extracted from the image

    Faster in some cases

    Disadvantages:

    Not accurate in all environments

    Because the sensor is passive (CCD and CMOS), the environment has a greater effect on the analysis

    Needs much more powerful processors (microcontrollers cannot be used in most

    cases)

  • 16/42

    Line-follower robot with a photocell sensor / Self-driving Volkswagen Golf GTi

  • 17/42

    Range sensing is extremely important in mobile robotics, as it is a basic input for

    successful obstacle avoidance.

    A number of sensors are popular in robotics explicitly for their ability to recover

    depth estimates: ultrasonic, laser range finder, optical range finder. It is therefore

    natural to attempt to implement ranging functionality using vision chips (CCD, CMOS) as well.

    A fundamental problem with visual images makes range finding relatively difficult:

    any vision chip collapses the 3D world onto the 2D image plane, thereby losing

    depth information.

    If one makes strong assumptions regarding the size of objects in the world, or their

    particular color or reflectance, then one can directly interpret the appearance of

    the 2D image to recover depth. But such assumptions are rarely valid in real-

    world robot applications.

  • 18/42

    The general solution is to recover depth by looking at several images of the

    scene.

    The images used must be different, so that taken together they provide additional

    information.

    One alternative is to create different images by changing the viewpoint; another

    is to change the camera geometry, such as the focus position.

    Depth from focus / depth from defocus is the basic concept of visual range

    finding.

  • 19/42

    Depth from focus relies on the fact that image properties change as a function of camera parameters.

    If the image plane is located at distance e from the lens, all light from an object

    voxel at distance d will be focused at a single point on the image plane, according

    to the thin-lens law:

        1/f = 1/d + 1/e

    When the image plane is not at e, the light from the object voxel will be cast on

    the image plane as a blur circle. With lens diameter L and image-plane displacement

    delta from e, the radius R of the circle can be characterized by the equation:

        R = L * delta / (2e)
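    A minimal numeric sketch of the thin-lens law (1/f = 1/d + 1/e) and the blur-circle radius (R = L*delta/(2e), with delta the image-plane displacement and L the lens diameter); the function names and example values are illustrative:

```python
# f: focal length, d: object distance, e: in-focus image distance,
# L: lens diameter, e_actual: actual image-plane distance (all metres).
def focused_image_distance(f, d):
    # thin lens: 1/f = 1/d + 1/e  ->  e = 1 / (1/f - 1/d)
    return 1.0 / (1.0 / f - 1.0 / d)

def blur_radius(L, f, d, e_actual):
    e = focused_image_distance(f, d)
    # blur circle grows with the image-plane displacement |e_actual - e|
    return (L / 2.0) * abs(e_actual - e) / e

f, L = 0.05, 0.02                          # 50 mm lens, 20 mm aperture
e_near = focused_image_distance(f, 1.0)    # object at 1 m
print(blur_radius(L, f, 1.0, e_near))      # 0.0 -- image plane exactly in focus
```

    Sweeping e_actual away from e makes the blur radius grow, which is exactly the cue that depth-from-defocus techniques measure.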

  • 20/42

    The distance to near objects can therefore be measured more accurately than the

    distance to far objects, just as in depth-from-focus techniques.

    The accuracy of the depth estimate increases with increasing baseline b.

    However, as b is increased, because the physical separation between the cameras

    grows, some objects may appear in one camera but not in the other. Such objects

    will not be ranged successfully.
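    Under the standard pinhole stereo model (assumed here; the slides give only the qualitative behavior), depth follows from disparity as z = b*f/disparity, which makes the accuracy trade-off concrete:

```python
# Illustrative stereo triangulation: depth z = b * f / disparity.
# b: baseline (m), f: focal length (pixels), disparity: pixels.
def stereo_depth(b, f, disparity):
    return b * f / disparity

# The same one-pixel disparity error matters less for near objects,
# because near objects produce large disparities:
print(stereo_depth(0.1, 700.0, 70.0))  # 1.0  (metres)
print(stereo_depth(0.1, 700.0, 7.0))   # 10.0 (metres)
```

    Doubling the baseline b doubles the disparity for a given depth, improving accuracy, at the cost of a smaller region visible to both cameras.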

  • 21/42

    (Figure: learning phase vs. test phase)

  • 22/42

  • 23/42

    An important aspect of vision-based sensing is that the vision chip can provide

    sensing modalities and cues that no other mobile-robot sensor provides.

    One such novel sensing modality is detecting and tracking color in the

    environment.

    Advantages of color detection:

    detection of color is a straightforward function of a single image;

    because color sensing provides a new, independent environmental cue, combining it with existing cues, such as data from stereo vision or laser range finding,

    can yield significant information gains.
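    A color detector really can be a straightforward function of a single image; a minimal sketch in NumPy (the channel test and thresholds are illustrative assumptions, not the slides' method):

```python
import numpy as np

def detect_red(image):
    """image: HxWx3 uint8 RGB array -> boolean mask of 'red enough' pixels."""
    r = image[..., 0].astype(int)
    g = image[..., 1].astype(int)
    b = image[..., 2].astype(int)
    # a pixel counts as red when the red channel strongly dominates
    return (r > 150) & (r - g > 60) & (r - b > 60)

img = np.zeros((2, 2, 3), dtype=np.uint8)
img[0, 0] = (200, 30, 30)       # one red pixel in an otherwise black image
mask = detect_red(img)
print(mask[0, 0], mask[1, 1])   # True False
```

    The per-pixel mask can then feed a tracker (e.g. the centroid of the mask) without any geometric reasoning, which is what makes color such a cheap, independent cue.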

  • 24/42

    Color Tracking Camera for Robots

  • 25/42

    Using a vision chip as a color-tracking sensor for line following

  • 26/42

    Using a vision chip as a color-tracking sensor for line following with collision avoidance

  • 27/42

    An autonomous mobile robot must be able to determine its relationship to the

    environment by making measurements with its sensors and then using those

    measured signals.

    Vision-based feature extraction can incur a significant computational cost,

    particularly in robots where the vision processing is performed by one of the robot's main processors.

    The method must operate in real time: mobile robots move through the

    environment, so the processing simply cannot be an off-line operation.

    The method must also be robust to real-world conditions outside the laboratory. This

    means that carefully controlled illumination assumptions and carefully painted

    objects are unacceptable requirements.

  • 28/42

    Vision-based interpretation is primarily about the challenge of reducing

    information.

    A sonar unit produces perhaps 50 bits of information per second. By contrast, a

    CCD camera can output 240 million bits per second.

    Image preprocessing: it is important to note that all vision-based sensors supply

    images with a significant amount of noise, so a first step usually consists of

    cleaning the image before launching any feature-extraction algorithm.
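    The camera figure can be sanity-checked with back-of-the-envelope arithmetic (the frame size, bit depth, and frame rate below are assumed values, since the slide states only the totals):

```python
sonar_bits_per_s = 50
# e.g. a 640 x 480 colour camera at 24 bits/pixel and 30 frames/s:
camera_bits_per_s = 640 * 480 * 24 * 30
print(camera_bits_per_s)                      # 221184000, on the order of 240 Mbit/s
print(camera_bits_per_s // sonar_bits_per_s)  # millions of times more data than sonar
```

    This several-million-fold gap is precisely why vision pipelines must aggressively reduce information before interpretation.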

  • 29/42

    A 3x3 kernel, weighted as:

        G = (1/16) * | 1  2  1 |
                     | 2  4  2 |
                     | 1  2  1 |

    Such a low-pass filter effectively removes high-frequency noise, which in turn

    makes subsequent feature extraction far more stable.
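    Applying this kernel is a plain 3x3 convolution; a self-contained sketch in NumPy (edge-replicating padding is an implementation choice made here for simplicity):

```python
import numpy as np

# The 3x3 Gaussian smoothing kernel described above.
kernel = np.array([[1, 2, 1],
                   [2, 4, 2],
                   [1, 2, 1]], dtype=float) / 16.0

def smooth(image):
    padded = np.pad(image.astype(float), 1, mode="edge")
    out = np.zeros(image.shape, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            # weighted average of the 3x3 neighbourhood
            out[i, j] = np.sum(padded[i:i+3, j:j+3] * kernel)
    return out

noisy = np.array([[0, 0, 0],
                  [0, 16, 0],
                  [0, 0, 0]], dtype=float)  # a single noise spike
print(smooth(noisy))  # the spike is spread out according to the kernel weights
```

    The total intensity is preserved (the weights sum to 1), but the isolated spike, a high-frequency component, is flattened into its neighbourhood, which is why later feature extractors behave more stably.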

  • 30/42

    (Figure: without Gaussian smoothing vs. with Gaussian smoothing)

  • 31/42

    Using a vision chip (infrared CCD) to extract specific objects based on color

  • 32/42

    The single most popular local feature extractor used by the mobile-robotics

    community is the edge detector.

    Edges define regions in the image plane where a significant change in image

    brightness takes place.

    Edge detection significantly reduces the amount of information in an image, and is

    therefore a useful potential feature during image interpretation.

    Optimal edge detection: Canny. The reference edge detector throughout

    the vision community was invented by John Canny in 1983.
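    A gradient-based measure illustrates the idea; the sketch below uses the Sobel operator, a simpler relative of Canny's detector (Canny additionally smooths first, thins edges by non-maximum suppression, and links them with hysteresis thresholding):

```python
import numpy as np

# Sobel kernels estimate the horizontal and vertical brightness gradient.
sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
sobel_y = sobel_x.T

def gradient_magnitude(image):
    img = np.pad(image.astype(float), 1, mode="edge")
    gx = np.zeros(image.shape)
    gy = np.zeros(image.shape)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            win = img[i:i+3, j:j+3]
            gx[i, j] = np.sum(win * sobel_x)
            gy[i, j] = np.sum(win * sobel_y)
    return np.hypot(gx, gy)

# A vertical step edge: the gradient is large only where brightness changes.
step = np.array([[0, 0, 255, 255]] * 4)
print(gradient_magnitude(step)[2])
```

    Thresholding this magnitude keeps a thin band of pixels around the step and discards the uniform regions, which is the information reduction the slides describe.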

  • 33/42

    The very first step is preprocessing: each input image is grayscaled and contour-

    filtered using the Canny edge detector.

    Exhaustive scanning: in an X x Y image with an N x M template, we first try to

    match the window defined by the rectangle (0, 0, N, M), then the one defined by

    (1, 0, N+1, M), and so on until reaching the end of the image at that scale.

    Random sampling: in an X x Y image with an N x M template, we select a fixed

    number of samples proportional to the size of the image. This scanning method

    accelerates the process at a sacrifice in precision.
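    The exhaustive scan described above can be sketched as follows (the matching score, counting agreeing binary pixels, is an illustrative stand-in for the actual template-matching measure):

```python
import numpy as np

def exhaustive_match(image, template):
    """Slide an N x M template over the image one pixel at a time."""
    N, M = template.shape
    best_score, best_pos = -1, None
    for y in range(image.shape[0] - N + 1):      # window (y, x, y+N, x+M)
        for x in range(image.shape[1] - M + 1):
            window = image[y:y+N, x:x+M]
            score = int(np.sum(window == template))  # agreeing pixels
            if score > best_score:
                best_score, best_pos = score, (y, x)
    return best_pos, best_score

img = np.zeros((6, 6), dtype=int)
img[2:4, 3:5] = 1                      # object embedded in the image
tmpl = np.ones((2, 2), dtype=int)
print(exhaustive_match(img, tmpl))     # ((2, 3), 4)
```

    The random-sampling variant would draw a fixed number of (y, x) positions instead of visiting all of them, trading completeness for speed exactly as the slide describes.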

  • 34/42

    In the offline experiments we use exhaustive scanning, because runtime performance is

    not an issue. The online version also uses exhaustive scanning; it could be made

    faster by using random sampling, but then not all positions in the image would be

    scanned in each frame.

    More templates mean a better definition of the class of interest, but also translate

    into a slower matching process. The templates are taken from photographs of the

    object of interest after contour-filtering them and extracting the relevant connected

    components.

  • 35/42

    The system was tested with an ActivMedia Robotics Pioneer 2 mobile robot.

    The online version (onboard the robot) uses the randomized scanning method

    previously described.

  • 36/42

  • 37/42

    Histograms serve as compact representations of an entire local region.

    Direct extraction: image histograms.

    If the robot or camera rotates, pixel positions change, although the new image is

    simply a rotation of the original. But we intend to extract image features via

    histogramming, and because a histogram is a function of the set of pixel values

    and not the position of each pixel, the process is pixel-position invariant.
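    The pixel-position invariance is easy to verify: rotating an image permutes pixel positions but leaves the set of values, and hence the histogram, unchanged:

```python
import numpy as np

img = np.array([[1, 2],
                [3, 4]])
rotated = np.rot90(img)  # same pixel values, different positions

hist_a, _ = np.histogram(img, bins=4, range=(1, 5))
hist_b, _ = np.histogram(rotated, bins=4, range=(1, 5))
print(np.array_equal(hist_a, hist_b))  # True
```

    The same argument covers any permutation of pixel positions, which is what makes histogram features attractive for a rotating robot.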

  • 38/42

    Original omnidirectional image / Unwarped planar reconstruction of a partial view

  • 39/42

  • 40/42

    The DARPA Grand Challenge is a United States government-sponsored competition

    that aims to create the first fully autonomous vehicles capable of completing an

    under-300-mile, off-road course in the Mojave Desert. The challenge took place

    for the first time on March 13, 2004 and was sponsored by the Defense Advanced

    Research Projects Agency of the U.S. Department of Defense.

  • 41/42

  • 42/42

    Autonomous Mobile Robots (Siegwart & Nourbakhsh, MIT Press)

    Artificial Intelligence Illuminated (Coppin, Jones and Bartlett Publishers)

    DARPA Grand Challenge documents

    Alice: An Information-Rich Autonomous Vehicle for High-Speed Desert Navigation

    (California Institute of Technology)

    Vision-Based Control of Mobile Robots (Johns Hopkins University)

    A Study of CMOS Cameras (Auburn University)

    A System for Vision-Based Human-Robot Interaction (Örebro University)

    A Method to Detect Victims in Search and Rescue Operations Using Template

    Matching (Simón Bolívar University)

    Real-Time Exploration in Underwater Tunnels (Carnegie Mellon University)