
Page 1:

Lecture 14: Classification

Thursday 18 February 2010

Reading: Ch. 7.13 – 7.19

Last lecture: Spectral Mixture Analysis

Page 2:

Classification vs. Spectral Mixture Analysis

In SMA, image pixels were regarded as being mixed from various proportions of common materials. The goal was to find what those materials were in an image, and what their proportions were pixel by pixel.

In classification, the image pixels are regarded as grouping into thematically and spectrally distinct clusters (in DN space). Each pixel is tested to see what group it most closely resembles. The goal is to produce a map of the spatial distribution of each theme or unit.

[Figure: pixel clusters in DN space — Water (group 1), Forests (group 2), Desert (group 3)]

Page 3:

[Figure: multi-unit veg map; AVHRR images with pixels similar to vegetation flagged according to spectral distance, at tolerances of 2, 48, and 60.]

Page 4:

What is spectral similarity?

[Figure: two spectra A and B plotted as points in band space (bands X and Y).]

Spectral distance: $d(A,B) = \sqrt{\sum_i (A_i - B_i)^2}$

Spectral angle: $\theta = \cos^{-1}\left(\frac{A \cdot B}{\|A\|\,\|B\|}\right)$

Spectral contrast between similar objects is small.
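To make the two measures concrete, here is a minimal numpy sketch (the two-band spectra A and B are hypothetical). Distance responds to a uniform brightness change; angle does not, which is why similar objects under different illumination can still match by angle.

```python
import numpy as np

def spectral_distance(a, b):
    """Euclidean distance between two spectra in DN space."""
    return np.sqrt(np.sum((a - b) ** 2))

def spectral_angle(a, b):
    """Angle (radians) between two spectra; insensitive to overall brightness."""
    cos_theta = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.arccos(np.clip(cos_theta, -1.0, 1.0))

# Hypothetical two-band spectra: B is A uniformly shaded by a factor of 0.5
A = np.array([120.0, 80.0])
B = 0.5 * A
print(spectral_distance(A, B))  # ~72.1 : distance sees the brightness change
print(spectral_angle(A, B))     # 0.0   : angle does not
```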

Page 5:

Manual Classification

[Example image: Seattle]

1) association by spectral similarity of pixels into units
2) naming those units, generally using independent information:
   - reference spectra
   - field determinations
   - photo-interpretation

Page 6:

Basic steps in image classification:

1) Data reconnaissance and self-organization

2) Application of the classification algorithm

3) Validation

Page 7:

Reconnaissance and data organization

Reconnaissance: What is in the scene? What is in the image? What bands are available? What questions are you asking of the image? Can they be answered with image data? Are the data sufficient to distinguish what's in the scene?

Organization of data: How many data clusters in n-space can be recognized? What is the nature of the cluster borders? Do the clusters correspond to desired map units?

Page 8:

Classification algorithms

Unsupervised: separate the data into groups with clustering → classify the data into groups → assign a name to each group → satisfactory? If no, repeat the clustering; if yes, done.

Supervised: form images of the data → choose training pixels for each category → calculate statistical descriptors → satisfactory? If no, revise the training pixels; if yes, classify the data into the categories defined.

Page 9:

Unsupervised Classification: K-Means algorithm

- Pick the number of themes; set a distance tolerance
- 1st pixel defines the 1st theme
- Is the 2nd pixel within tolerance?
  - YES: redefine the theme
  - NO: define a 2nd theme
- Interrogate the 3rd pixel…
- Iterate, using the "found" themes as the new seed

How do you estimate the number of themes? It can be greater than the number of bands.
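As a sketch of the iteration in numpy: note this is generic K-means, and the slide's sequential, tolerance-based seeding (first pixel defines the first theme) is one way to pick the initial means; random seeding is substituted here for brevity.

```python
import numpy as np

def kmeans(pixels, k, n_iter=20, seed=0):
    """Plain iterative K-means over pixel spectra (n_pixels x n_bands)."""
    rng = np.random.default_rng(seed)
    # random seeding stands in for the slide's sequential seeding
    means = pixels[rng.choice(len(pixels), size=k, replace=False)].astype(float)
    for _ in range(n_iter):
        # distance from every pixel to every current theme mean
        d = np.linalg.norm(pixels[:, None, :] - means[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):  # move each mean to its cluster centroid
            if np.any(labels == j):
                means[j] = pixels[labels == j].mean(axis=0)
    return labels, means
```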

Page 10:

Supervised Classification: What are some algorithms?

- Parallelepiped
- Minimum Distance
- Maximum Likelihood
- Decision-Tree

"Hard" vs. "soft" classification

Hard: winner take all. Soft: the "answer" is expressed as the probability that x belongs to A, B, …

"Fuzzy" classification is very similar to spectral unmixing.

Page 11:

Parallelepiped Classifier

Assigns a DN range in each band for each class (parallelepiped)

Advantages: simple

Disadvantages: low accuracy. Especially when the distribution in DN space has strong covariance, large areas of the parallelepipeds may not be occupied by data, and the parallelepipeds may overlap.
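A minimal sketch of the box test in numpy (the per-class low/high DN bounds are hypothetical inputs that would come from training statistics):

```python
import numpy as np

def parallelepiped_classify(pixels, low, high, unclassified=-1):
    """Box test: a pixel joins the first class whose per-band DN range
    [low[c], high[c]] contains it in every band.
    low/high: hypothetical per-class bounds, shape (n_classes, n_bands)."""
    labels = np.full(len(pixels), unclassified)
    for c in range(len(low)):
        inside = np.all((pixels >= low[c]) & (pixels <= high[c]), axis=1)
        # first match wins; overlapping boxes make the assignment ambiguous
        labels[inside & (labels == unclassified)] = c
    return labels
```

The first-match rule in the sketch is exactly where the overlap disadvantage bites: a pixel falling inside two boxes keeps whichever class happens to be tested first.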

Page 12:

Minimum-Distance Classifier

Uses only the mean of each class. The unknown pixel is classified using its distance to each of the class means. The shortest distance wins.

[Figure: class means and the decision boundaries between them in DN space.]
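As a sketch, the whole classifier is a nearest-mean lookup; the decision boundaries in the figure are the perpendicular bisectors between class means.

```python
import numpy as np

def min_distance_classify(pixels, class_means):
    """Assign each pixel (n_pixels x n_bands) to the class whose
    mean (n_classes x n_bands) is nearest in DN space."""
    d = np.linalg.norm(pixels[:, None, :] - class_means[None, :, :], axis=2)
    return d.argmin(axis=1)
```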

Page 13:

Maximum Likelihood

The most commonly used classifier. A pixel is assigned to a class based on statistical probability.

Based on statistics (mean and covariance).

A (Bayesian) probability function is calculated from the inputs for classes established from training sites.

Each pixel is then judged as to the class to which it most probably belongs.

Page 14:

Maximum Likelihood

For each DN n-tuple in the image:

1) calculate the distance to each cluster mean

2) scale by the number of standard deviations in the direction of the n-tuple from the mean

3) construct rule images, pixel by pixel for each cluster, in which the number of standard deviations is recorded

4) threshold the rule images (null pixels too far from any cluster)

5) pick the best match (the least number of standard deviations) and record it in the appropriate pixel of the output image or map
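The "number of standard deviations in the direction of the n-tuple" is the Mahalanobis distance, so a minimal numpy sketch of steps 1–5 might look like this (the 3-standard-deviation threshold is a hypothetical choice):

```python
import numpy as np

def rule_images(pixels, means, covs):
    """Steps 1-3: for each cluster, record how many standard deviations
    each pixel lies from the cluster mean (Mahalanobis distance, which
    scales ordinary distance by the spread in the pixel's direction)."""
    rules = []
    for mu, cov in zip(means, covs):
        diff = pixels - mu
        inv = np.linalg.inv(cov)
        d2 = np.einsum('ij,jk,ik->i', diff, inv, diff)  # quadratic form per pixel
        rules.append(np.sqrt(d2))
    return np.stack(rules, axis=1)  # n_pixels x n_clusters

def pick_best(rules, max_std=3.0, null=-1):
    """Steps 4-5: null pixels too far from every cluster, then take
    the cluster with the fewest standard deviations."""
    labels = rules.argmin(axis=1)
    labels[rules.min(axis=1) > max_std] = null
    return labels
```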

Page 15:

Decision-Tree Classifier

A hierarchical classifier compares the data sequentially with carefully selected features.

Features are determined from the spectral distributions or separability of the classes.

There is no general procedure. Each decision tree or set of rules is custom-designed.

A decision tree that provides only two outcomes at each stage is called a “binary decision tree” (BDT) classifier.
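For illustration only, a toy BDT with hypothetical features and thresholds; as the slide says, every real tree is custom-designed for its scene and classes.

```python
def bdt_classify(ndvi, nir_dn):
    """A toy binary decision tree: one test, two outcomes at each stage.
    The features (NDVI, NIR DN) and thresholds are hypothetical."""
    if ndvi > 0.4:
        return "vegetation"
    if nir_dn < 30:
        return "water"  # dark in the near infrared
    return "bare ground"
```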

Page 16:

Pre-processing: dimension transformation

One goal: reduce the impact of topography on the outcome
- ratioing
- NDVI
- spectral angle

[Figure: points A and B in x–y band space lie along a line of constant ratio; replotted in x/y vs. y/z ratio space, A and B coincide.]
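A quick check of why ratioing helps: NDVI is a band ratio, so a uniform shading factor cancels. The DN values below are hypothetical.

```python
def ndvi(nir, red):
    """NDVI = (NIR - red) / (NIR + red). As a band ratio, a uniform
    brightness change (e.g. topographic shading) cancels out."""
    return (nir - red) / (nir + red)

nir, red = 90.0, 30.0              # hypothetical sunlit pixel
print(ndvi(nir, red))              # 0.5
print(ndvi(0.5 * nir, 0.5 * red))  # 0.5 - unchanged under 50% shading
```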

Page 17:

Validation

Photointerpretation: look at the original data. Does your map make sense to you?

Page 18:

Confusion matrices

Well-named. Also known as contingency tables or error matrices. Here's how they work…

[Error matrix: training areas A–F in columns, classified data in rows, with row sums, column sums, and the grand sum; the full matrix (p. 586, LKC 6th) is reproduced on the next page.]

All non-diagonal elements are errors.

Row sums give "commission" errors.

Column sums give "omission" errors.

Overall accuracy is the diagonal sum over the grand total.

This is the assessment only for the training areas. What do you do for the rest of the data?

Page 19:

Again: columns give the cover types used for training, and rows give the pixels actually classified into each category by the classifier. Column sums count the reference pixels in each class; row sums count the pixels classified as each class.

                      Training areas
                 A     B     C     D     E     F   Row sum
Classified  A   480    0     5     0     0     0     485
data        B     0   52     0    20     0     0      72
            C     0    0   313    40     0     0     353
            D     0   16     0   126     0     0     142
            E     0    0     0    38   342    79     459
            F     0    0    38    24    60   359     481
Column sums     480   68   356   248   402   438    1992 (grand sum)

Producer's accuracy: A: 480/480 = 100%, B: 52/68 = 76%, C: 313/356 = 88%, D: 126/248 = 51%, E: 342/402 = 85%, F: 359/438 = 82%

User's accuracy: A: 480/485 = 99%, B: 52/72 = 72%, C: 313/353 = 89%, D: 126/142 = 89%, E: 342/459 = 74%, F: 359/481 = 75%

Overall accuracy = diagonal sum / grand sum = 1672/1992 = 84%

Producer's accuracy: the number of correctly classified pixels used for training, divided by the number in the training area.

User's accuracy: the number of correctly classified pixels in each category, divided by the total number classified as that category.

Overall accuracy: the total number of correctly classified pixels, divided by the total number of reference pixels in all categories.
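A small numpy check of the accuracy arithmetic on the matrix above:

```python
import numpy as np

# Error matrix from the slide: rows = classified data, columns = training areas A-F
m = np.array([
    [480,   0,   5,   0,   0,   0],
    [  0,  52,   0,  20,   0,   0],
    [  0,   0, 313,  40,   0,   0],
    [  0,  16,   0, 126,   0,   0],
    [  0,   0,   0,  38, 342,  79],
    [  0,   0,  38,  24,  60, 359],
])

correct = np.diag(m)
producers = correct / m.sum(axis=0)  # correct / column (training) totals
users     = correct / m.sum(axis=1)  # correct / row (classified) totals
overall   = correct.sum() / m.sum()  # 1672 / 1992 = 0.84
print(producers.round(2), users.round(2), round(overall, 2), sep="\n")
```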

Page 20:

A basic problem with classification

What's actually on the ground: a meadow, a golf course, a cemetery. All three look similar (Theme A) because they are grassy.

Mystery pixel X is found to be spectrally similar to Theme A = grass.

We tend to want to classify by land use, and from the remote sensing perspective this may lead to ambiguity:

meadow = bear habitat
golf course ≠ bear habitat
cemetery ≠ bear habitat

I want to find bears. Bears like meadows. I train on a meadow (Theme A) and classify an image to see where the bears are. Pixel X is classified as similar to A. Will I find bears there?

Maybe not. 1) They might be somewhere else, even though they like the meadow. 2) A meadow, from the RS perspective, is a high fraction of green vegetation (GV); other things share this equivalence. Therefore, X may indeed belong to A spectrally, but not according to use.

Thought exercise: what would you need to do in order to classify by land use?

Page 21:

Next class: Radar