Perception: Introduction, Pattern Recognition, Image Formation, Image Processing, Summary
Post on 29-Dec-2015
Perception
• Introduction
• Pattern Recognition
• Image Formation
• Image Processing
• Summary
Introduction
• Perception is initiated by sensors.
• The focus here is on vision (as opposed to other senses such as hearing or touch).
• How do we process the information provided by sensors?
• What can we infer about the world from a sequence of sensor readings?
Processing Sensor Data
It has several uses:
• Manipulation.
• Navigation.
• Object Recognition.
Perception
• Introduction
• Pattern Recognition
• Image Formation
• Image Processing
• Summary
Recognizing Patterns
Definition. Pattern recognition is the “act of taking in raw data and taking an action based on the category of the pattern” (Pattern Classification, Duda, Hart, and Stork).
Input Signal → Computer (Pattern Recognition) → Action
A Particular Example
A fish-packing plant must sort incoming fish on a belt into two classes:
Salmon or Sea Bass
Steps:
a) Preprocessing (segmentation)
b) Feature extraction (measure features or properties)
c) Classification (make final decision)
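The three steps above can be sketched as a pipeline. This is a minimal illustration, not the lecture's implementation: the feature names (length, lightness) come from the Duda/Hart/Stork fish example, but the threshold value and data are invented.

```python
# A minimal sketch of the fish-sorting pipeline:
# preprocessing -> feature extraction -> classification.
# Threshold and sample values are illustrative assumptions.

def preprocess(image):
    """Segmentation: isolate one fish from the belt image (stubbed here)."""
    return image  # a real system would return a segmented region

def extract_features(region):
    """Measure properties of the segmented fish."""
    length = region["length"]        # e.g. in centimetres
    lightness = region["lightness"]  # e.g. mean pixel brightness
    return (length, lightness)

def classify(features, lightness_threshold=5.0):
    """Final decision: here, one feature and one threshold."""
    _, lightness = features
    return "salmon" if lightness < lightness_threshold else "sea bass"

fish = {"length": 60.0, "lightness": 3.2}
print(classify(extract_features(preprocess(fish))))  # -> salmon
```

A real plant would replace the stubbed segmentation and measured features with image-processing routines, but the staged structure stays the same.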
Figure 1.1
Figure 1.2
Figure 1.3
Decision Theory
Most of the time we assume “symmetry” in the cost (e.g., misclassifying salmon as sea bass is as bad as the reverse).
That is not always the case:
Case 1. A sea bass can with pieces of salmon.
Case 2. A salmon can with pieces of sea bass.
Decision Boundary
We will normally deal with several features at a time.
An object will be represented as a feature vector
X = (x1, x2)
Our problem then is to separate the space of feature values into a set of regions corresponding to the number of classes. The separating boundary is called the decision boundary.
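A decision boundary can be sketched for the two-feature case with a linear rule. The weights and class names below are arbitrary illustrations, not values from the lecture:

```python
# Sketch: a linear decision boundary splitting the 2-D feature
# space into two regions. The boundary is the line where
# w1*x1 + w2*x2 + b = 0; weights are invented for illustration.

def decide(x1, x2, w1=1.0, w2=-1.0, b=0.0):
    """Classify feature vector X = (x1, x2) by the sign of a linear function."""
    return "class A" if w1 * x1 + w2 * x2 + b > 0 else "class B"

print(decide(2.0, 1.0))  # x1 > x2, above the boundary -> class A
print(decide(1.0, 2.0))  # x1 < x2, below the boundary -> class B
```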
Figure 1.4
Generalization
The main goal of pattern classification is as follows:
To generalize, or suggest the class or action of objects as yet unseen.
Some complex decision boundaries are not good at generalization; some overly simple boundaries are not good either.
One must look for a tradeoff between performance and simplicity.
This tradeoff is at the core of statistical pattern recognition.
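The tradeoff can be made concrete with a toy comparison. Here a “complex” boundary that memorizes the training set is contrasted with a simple threshold rule; both the data and the threshold are invented for illustration:

```python
# Sketch of the performance/simplicity tradeoff. The memorizing
# boundary is perfect on the training set but says nothing useful
# about unseen points; the simple threshold rule generalizes.
# Training data (one feature per object) is made up.

train = [((1.0,), "A"), ((2.0,), "A"), ((6.0,), "B"), ((7.0,), "B")]

def memorizer(x):
    """'Complex' boundary: a lookup table over the training set."""
    for xi, label in train:
        if xi == x:
            return label
    return "unknown"  # fails on anything unseen

def threshold_rule(x, t=4.0):
    """'Simple' boundary: one threshold on the single feature."""
    return "A" if x[0] < t else "B"

print(memorizer((1.5,)))       # -> unknown
print(threshold_rule((1.5,)))  # -> A
```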
Figure 1.5
Figure 1.6
Designing Pattern Recognition Systems
Components in a system:
a) Sensing devices
Often a transducer such as a camera or microphone (features: bandwidth, resolution, sensitivity, distortion, latency, etc.)
b) Segmentation and grouping
Patterns must be segmented (and patterns may overlap).
c) Feature extraction
Extract features that simplify classification. Ideally, values are similar within the same category and different among categories. That is, we need distinguishing features (invariant to transformations).
Designing Pattern Recognition Systems
Components in a system:
d) Classification
Use feature vectors to assign an object to the right category. Ideally, determine the probability of category membership for an object. Learn to handle noise.
e) Post-processing
Use the output of the classifier to suggest an action. Classifier performance? Error rate? Minimize expected cost or “risk”.
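The “probability of category membership” in step (d) can be sketched with Bayes' rule on a single feature. The Gaussian class-conditional densities, means, and priors below are all invented for illustration:

```python
# Hedged sketch: P(class | x) via Bayes' rule, assuming Gaussian
# class-conditional densities over one feature. All parameters
# (means, stds, priors) are invented.

import math

def gaussian(x, mean, std):
    """Gaussian probability density at x."""
    return math.exp(-((x - mean) ** 2) / (2 * std ** 2)) / (std * math.sqrt(2 * math.pi))

classes = {
    "salmon":   {"mean": 3.0, "std": 1.0, "prior": 0.5},
    "sea bass": {"mean": 7.0, "std": 1.0, "prior": 0.5},
}

def posterior(x):
    """Posterior probability of each class given feature value x."""
    joint = {c: p["prior"] * gaussian(x, p["mean"], p["std"])
             for c, p in classes.items()}
    total = sum(joint.values())
    return {c: v / total for c, v in joint.items()}

post = posterior(3.5)
print(max(post, key=post.get))  # -> salmon
```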
Figure 1.7
Our focus of study today
Applying Pattern Recognition Systems
Steps:
1. Data Collection
Usually very time consuming.
2. Feature Choice
Prior knowledge is crucial.
3. Model Choice
Switch to new features or a new classifier if needed.
4. Training
Learn from example patterns.
5. Evaluation
Avoid overfitting.
Perception
• Introduction
• Image Formation
• Image Processing
• Summary
Image Formation
Image formation consists of creating a 2-D image of a scene.
We can do this with a pinhole camera.
The image is inverted under “perspective projection”.
Translation of Coordinates
Let (x, y, z) be a point in the image.
Let (X, Y, Z) be the corresponding point in the scene, with f the focal length.
Then,
-x/f = X/Z    -y/f = Y/Z
so
x = -fX/Z    y = -fY/Z
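The projection equations above translate directly into code. The point coordinates and focal length here are illustrative:

```python
# Perspective projection in a pinhole camera: scene point (X, Y, Z)
# maps to image point (x, y) = (-f*X/Z, -f*Y/Z). The minus signs
# are the image inversion; f is the focal length.

def project(X, Y, Z, f=1.0):
    """Project a scene point onto the image plane."""
    assert Z != 0, "scene point must not lie in the pinhole plane"
    return (-f * X / Z, -f * Y / Z)

print(project(2.0, 4.0, 2.0))  # -> (-1.0, -2.0)
```

Note that depth is lost: any scene point along the same ray through the pinhole projects to the same image point, which is why a single image cannot recover Z.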
Lenses
Real cameras use a lens, which gathers more light (but not all of it can be in sharp focus).
Scene points within a certain range of depths Z can be imaged in sharp focus.
This range is called the depth of field.
CCD Camera
The image plane is subdivided into pixels (typically 512 × 512).
The signal is modeled by the variation in image brightness over time.
Fig. 24.4a
Fig. 24.4b
Photometry of Image Formation
The brightness of a pixel is proportional to the amount of light directed toward the camera.
Reflected light can be of two types:
a. Diffusely reflected: light penetrates below the surface of the object and is re-emitted.
b. Specularly reflected: light is reflected from the outer surface of the object.
Photometry of Image Formation
Most surfaces reflect a combination of diffusely and specularly reflected light.
This is the key to “modeling” in computer graphics.
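One way computer graphics combines the two reflection types is a Phong-style split into a diffuse term and a specular term. This is a standard shading sketch, not from the slides, and the coefficients and shininess exponent are invented:

```python
# Sketch of combining diffuse and specular reflection (Phong-style).
# n_dot_l: cosine of angle between surface normal and light direction.
# r_dot_v: cosine of angle between mirror-reflection direction and viewer.
# k_d, k_s, and shininess are invented illustration values.

def brightness(n_dot_l, r_dot_v, k_d=0.7, k_s=0.3, shininess=10):
    """Combined diffuse + specular brightness of a surface point."""
    diffuse = k_d * max(n_dot_l, 0.0)              # matte component
    specular = k_s * max(r_dot_v, 0.0) ** shininess  # shiny highlight
    return diffuse + specular

# Facing the light, viewed along the mirror direction:
print(brightness(1.0, 1.0))  # -> 1.0
```

The shininess exponent narrows the specular highlight; a purely diffuse (Lambertian) surface is the special case k_s = 0.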
Perception
• Introduction
• Image Formation
• Image Processing
• Summary
Image Processing
One important step is “edge detection”.
Motivation:
Edge contours correspond to scene contours.
Image Processing
Typically there are problems:
• missing contours
• noisy contours
Edge Detection
Edges are curves in the image plane where there is a clear change of brightness.
How do we detect edges?
Consider the profile of image brightness along a 1-D cross-section perpendicular to an edge.
Solutions
1. Look for places where the derivative is large. (Many subsidiary peaks may show up.)
2. Combine differentiation with smoothing.
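Both solutions can be sketched on a 1-D brightness profile taken across an edge. The profile below is an invented noisy step; a real system would use a Gaussian smoothing filter rather than this simple moving average:

```python
# Edge detection on a 1-D brightness cross-section: a raw finite
# difference versus a difference taken after smoothing (3-sample
# moving average). The profile is an invented noisy step edge.

profile = [10, 11, 10, 11, 10, 50, 51, 50, 51, 50]

def derivative(signal):
    """Finite difference between neighbouring samples."""
    return [b - a for a, b in zip(signal, signal[1:])]

def smooth(signal):
    """3-sample moving average."""
    return [(signal[i - 1] + signal[i] + signal[i + 1]) / 3
            for i in range(1, len(signal) - 1)]

raw = derivative(profile)
smoothed = derivative(smooth(profile))

# The true edge (between samples 4 and 5) dominates the raw
# derivative, but noise creates subsidiary peaks of +/-1;
# after smoothing those subsidiary peaks shrink.
edge = max(range(len(raw)), key=lambda i: abs(raw[i]))
print(edge)                            # -> 4
print(abs(raw[0]), abs(smoothed[0]))   # noise peak shrinks after smoothing
```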
Extracting 3-D Information
Normally divided into three steps:
a. Segmentation.
b. Determining position and orientation. Important for navigation and manipulation.
c. Determining the shape of objects.
Stereopsis
The idea is similar to motion parallax, but we use images separated in space. Superposing the images would show a disparity in the location of image features.
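The disparity can be turned into depth. The relation below assumes two parallel pinhole cameras separated by a baseline b with focal length f; it is the standard stereo triangulation formula (not stated in the slides), and all numbers are illustrative:

```python
# Sketch of depth from stereo disparity. For two parallel pinhole
# cameras with baseline b and focal length f, a feature at image
# column x_left in one image and x_right in the other has
# disparity d = x_left - x_right and depth Z = f * b / d.
# f, b, and the image coordinates are invented values.

def depth_from_disparity(x_left, x_right, f=1.0, b=0.1):
    """Depth of a scene point from its disparity between two views."""
    d = x_left - x_right
    assert d > 0, "disparity must be positive for a point in front of the cameras"
    return f * b / d

print(depth_from_disparity(0.5, 0.25))  # large disparity -> nearby point
```

Nearby points show large disparity and distant points show small disparity, which is exactly the motion-parallax effect, realized by two simultaneous views instead of one moving camera.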
Perception
• Introduction
• Image Formation
• Image Processing
• Summary
Summary
• We need to extract information from sensor data for activities such as manipulation, navigation, and object recognition.
• A signal is modeled by the variation in image brightness over time.
• Reflected light can be diffusely reflected or specularly reflected.
• Stereopsis is similar to motion parallax.