Upload: pamela-lindsey

Post on 05-Jan-2016

TRANSCRIPT

Page 1:

Ongoing software project, not “theory”
Encapsulated internals & interfaces
Today:
– Details of module internals
– Details of architecture & signaling/feedback

Page 2:

• Ongoing software project, not “theory”

• Encapsulated internals & interfaces

• Today:
– Details of module internals
– Details of architecture & signaling/feedback
– Single, clean, simple inputs
– (26 slides)

• Not yet:
– time
– noise
– robustness
– multiple/partial hypotheses

Page 3:

One “compressor”:
• Generic memory unit
• Learns about low-dim structure in high-dim data
• Converts live data between low-dim ↔ high-dim

Hierarchy of compressors:
• Each learns from the compressed & combined output of those below
• Bi-directional (feedback)
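The compressor-and-hierarchy structure above can be sketched as a minimal interface. The class and method names here are illustrative stand-ins, not the project's actual API, and the `compress`/`expand` bodies are trivial placeholders for the point-cloud machinery described on the following slides:

```python
class Compressor:
    """Generic memory unit: learns low-dim structure in high-dim data
    and converts live data in both directions."""
    def __init__(self, in_dim, out_dim):
        self.in_dim, self.out_dim = in_dim, out_dim

    def learn(self, x):
        pass                            # update internal model (stub)

    def compress(self, x):
        return x[:self.out_dim]         # high -> low (placeholder)

    def expand(self, y):
        return y + [0.0] * (self.in_dim - len(y))  # low -> high (placeholder)


class Hierarchy:
    """Each compressor learns from the combined, compressed output of
    those below it; feedback later flows back down the same path."""
    def __init__(self, layers):
        self.layers = layers

    def forward(self, x):
        outputs = []
        for c in self.layers:
            c.learn(x)
            x = c.compress(x)
            outputs.append(x)
        return outputs


h = Hierarchy([Compressor(6, 4), Compressor(4, 2)])
outs = h.forward([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
```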

Page 4:

Compressor Internals

• Quantizing & representing high-dim input
• Compressing
• Matching to previous compression
• Bi-directional mapping
• Probability estimation (P = p1 + p2 + …)

Page 5:

Quantizing & representing high-dim input

“Point” = position, weight, radius
Two point-clouds: mapping vs. learning (sync occasionally)

Online updates:
1. Find the 3 closest cloud-points
2. Choose the lightest
3. Move it to absorb the new point, preserving center of mass
4. Increase weight
5. Update radius
6. (Prune lightweight points)

Result: the point-cloud approximates the input cloud, with roughly equal weight per point
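The six update steps can be sketched as follows. The absorption and radius-update formulas are plausible guesses at the slide's intent (weighted move preserving center of mass, running-average radius), not the project's actual code:

```python
import math

def dist(a, b):
    return math.sqrt(sum((u - v) ** 2 for u, v in zip(a, b)))

def absorb(cloud, x, k=3, min_weight=0.1):
    # 1. find the k closest cloud-points
    nearest = sorted(cloud, key=lambda p: dist(p["pos"], x))[:k]
    # 2. choose the lightest of them
    p = min(nearest, key=lambda q: q["weight"])
    # 3. move it to absorb the new point, preserving center of mass
    w = p["weight"]
    p["pos"] = [(w * a + b) / (w + 1) for a, b in zip(p["pos"], x)]
    # 4. increase weight
    p["weight"] = w + 1
    # 5. update radius (running average of distance to absorbed input)
    p["radius"] = (w * p["radius"] + dist(p["pos"], x)) / (w + 1)
    # 6. prune lightweight points
    cloud[:] = [q for q in cloud if q["weight"] >= min_weight]

cloud = [{"pos": [0.0, 0.0], "weight": 1.0, "radius": 1.0},
         {"pos": [1.0, 0.0], "weight": 2.0, "radius": 1.0},
         {"pos": [0.0, 1.0], "weight": 2.0, "radius": 1.0}]
absorb(cloud, [0.1, 0.1])
```

Because the lightest nearby point moves, heavy points stay put and weight spreads roughly evenly across the cloud, which is the "equal weight per point" result the slide claims.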

Page 6:

Compressing high → low (ISOMAP)

1. Find local distances in high-dim space
2. Create long-range distances from the shortest piecewise path (“geodesic”)
3. Link “islands” until all Dij are defined
4. Diagonalize F(Dij) to get a low-dim cloud (arbitrary coordinates)
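These four steps are standard ISOMAP; a minimal sketch follows. F(Dij) is taken here to be the double-centered squared geodesic distances of classical MDS (an assumption — the slide does not define F), and step 3 is skipped because the toy graph is already connected:

```python
import numpy as np

def isomap(X, n_neighbors, out_dim):
    n = len(X)
    # 1. local distances in high-dim space (k-nearest-neighbor graph)
    D = np.sqrt(((X[:, None] - X[None, :]) ** 2).sum(-1))
    G = np.full((n, n), np.inf)
    np.fill_diagonal(G, 0.0)
    for i in range(n):
        for j in np.argsort(D[i])[1:n_neighbors + 1]:
            G[i, j] = G[j, i] = D[i, j]
    # 2. long-range distances from shortest piecewise paths (Floyd-Warshall)
    for k in range(n):
        G = np.minimum(G, G[:, k, None] + G[None, k, :])
    # 3. (island-linking skipped: this toy graph is already connected)
    # 4. diagonalize the double-centered squared geodesics (classical MDS)
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (G ** 2) @ J
    vals, vecs = np.linalg.eigh(B)
    idx = np.argsort(vals)[::-1][:out_dim]
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0.0))

# a 1-D arc embedded in 2-D should unroll into one dimension
t = np.linspace(0.0, np.pi, 12)
X = np.stack([np.cos(t), np.sin(t)], axis=1)
Y = isomap(X, n_neighbors=2, out_dim=1)
```

The recovered coordinate is linear in arc-length along the curve, up to the arbitrary sign and offset the slide notes ("arbitrary coordinates") — which is exactly why the next slide needs an alignment step.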

Page 7:

Keeping new maps consistent with old ones

The low-dim mapping is not always unique… so rotate & stretch the new cloud to minimize its distance from the old one (SVD).

(Figure: old cloud → new cloud → rotated new cloud)
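The rotate-&-stretch step is the standard orthogonal-Procrustes solution via SVD, sketched below with a uniform scale; the project's exact variant (e.g. reflection handling) may differ:

```python
import numpy as np

def align(new, old):
    # center both clouds, then rotate & uniformly stretch `new`
    nc, oc = new - new.mean(0), old - old.mean(0)
    U, S, Vt = np.linalg.svd(oc.T @ nc)
    R = U @ Vt                       # optimal rotation (Procrustes)
    s = S.sum() / (nc ** 2).sum()    # optimal uniform scale
    return s * nc @ R.T + old.mean(0)

# new cloud = old cloud rotated 90 degrees; alignment should undo it
old = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
Rot = np.array([[0.0, -1.0], [1.0, 0.0]])
new = old @ Rot.T
aligned = align(new, old)
```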

Page 8:

Mapping new points using point-clouds

1. Find the new point’s closest 4–5 neighbors
2. Express it as their center of mass (SVD)
3. Construct the low-dim output from the corresponding neighbors & weights
4. Also works mapping low → high

(Figure: the same neighbor weights w1–w4 applied in the high- and low-dim clouds)
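A sketch of the mapping, with the center-of-mass weights solved by least squares (a stand-in for the slide's SVD step; the weights-sum-to-one constraint is my assumption of what "center of mass" requires):

```python
import numpy as np

def map_point(x, high, low, k=4):
    # 1. closest k neighbors in the high-dim cloud
    idx = np.argsort(((high - x) ** 2).sum(1))[:k]
    # 2. express x as the neighbors' center of mass: solve for weights
    #    (a row of ones enforces that the weights sum to 1)
    A = np.vstack([high[idx].T, np.ones(k)])
    b = np.append(x, 1.0)
    w, *_ = np.linalg.lstsq(A, b, rcond=None)
    # 3. build the output from the corresponding low-dim neighbors
    return w @ low[idx]

# toy paired clouds where the low-dim coordinate is the first high-dim one
high = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
low = high[:, :1]
y = map_point(np.array([0.5, 0.5]), high, low)
```

Swapping the roles of `high` and `low` gives the low → high direction of step 4.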

Page 9:

Probability Estimation

Each point is the center of a Gaussian; the “probability” of a test point is the sum over the local Gaussians:

P = p1 + p2 + …,  where pi = exp(−0.5 ri² / Ri²) / (Ri^D · Ptot)

Probability = “closeness” to the manifold = how much to trust this point

… use it later in mixing estimates.
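The sum-of-Gaussians estimate can be sketched directly from the formula. Ptot is not defined on the slide; it is taken here as the number of cloud-points, which is an assumption:

```python
import math

def probability(x, cloud, dim):
    # cloud: list of (position, radius) pairs; Ptot here = number of points
    ptot = len(cloud)
    total = 0.0
    for pos, radius in cloud:
        r2 = sum((a - b) ** 2 for a, b in zip(pos, x))
        # pi = exp(-0.5 ri^2 / Ri^2) / (Ri^D * Ptot)
        total += math.exp(-0.5 * r2 / radius ** 2) / (radius ** dim * ptot)
    return total

cloud = [([0.0, 0.0], 1.0), ([2.0, 0.0], 1.0)]
p_on = probability([0.0, 0.0], cloud, dim=2)   # on the manifold
p_off = probability([0.0, 5.0], cloud, dim=2)  # far from it
```

Points near the cloud score high, points far from it score near zero — the "closeness to manifold" trust signal used in the mixing step later.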

Page 10:

Compressors interacting:

• Creating forward output
• Feedback mixed back in
• Settling

Page 11:

Creating output

• Map from high to low dim

• Expose result to all Compressors above

• Re-map output backwards to high dim

• Expose as feedback to Compressors below

Page 12:

Mix feedback into output:

1. Average feedback from above
2. Get probabilities of the feedback and of the module’s own output
3. Create a weighted mixture of them
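The three mixing steps can be sketched as below; `prob` is a stand-in argument for whatever probability estimator the module provides (the Gaussian one above, in this system), and the probability-ratio weighting is my reading of "weighted mixture":

```python
def mix(own_output, feedbacks, prob):
    # 1. average the feedback vectors from above
    fb = [sum(col) / len(feedbacks) for col in zip(*feedbacks)]
    # 2. probabilities of the averaged feedback and of the own output
    p_fb, p_own = prob(fb), prob(own_output)
    # 3. weighted mixture of the two
    w = p_fb / (p_fb + p_own)
    return [w * f + (1 - w) * o for f, o in zip(fb, own_output)]

# with equal probabilities the mixture is a plain average
mixed = mix([0.0, 0.0], [[1.0, 1.0], [3.0, 1.0]], prob=lambda v: 1.0)
```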

Page 13:

Updating and settling:

1. Expose the mixture as updated output, and map it downward as updated feedback
2. Iterate a few times to settle

--- done with description of system ---
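The settling loop — expose the mixture, map it down, repeat — can be sketched with toy stand-in maps; the real system iterates the full probability-weighted mix across the whole hierarchy, while here a fixed blend weight and a single up/down pair suffice to show the fixed point being reached:

```python
def settle(x, up, down, mix_weight=0.5, iters=5):
    output = up(x)                      # initial forward output
    deltas = []
    for _ in range(iters):
        feedback = down(output)         # map downward as updated feedback
        x_mix = [(1 - mix_weight) * a + mix_weight * b
                 for a, b in zip(x, feedback)]
        new_output = up(x_mix)          # expose mixture as updated output
        deltas.append(max(abs(a - b) for a, b in zip(new_output, output)))
        output = new_output
    return output, deltas

# toy maps: `up` averages pairs, `down` duplicates each value back
up = lambda v: [(v[0] + v[1]) / 2, (v[2] + v[3]) / 2]
down = lambda o: [o[0], o[0], o[1], o[1]]
out, deltas = settle([1.0, 3.0, 4.0, 8.0], up, down)
```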

Page 14:

General simulation results

• 3-layer hierarchy with 2→1 convergence
• Input is a 9×6 “pixel” space with random illumination
• Display low-dim output in 2-D color

Page 15:

Simple 1-dim illumination

How does each module map the input space?


Page 16: (results figure)

Page 17:

Toroidal 1-dim illumination

How does each module map the circular input space?


Page 18: (results figure)

Page 19:

2-dim spot illumination

How does each module map the 2-D input space?


Page 20: (results figure)

Page 21:

“Hallucinating” spots driven from above

1. Force activity at a single location in the top module
2. Let feedback move down
3. Look at what the lower modules think the input ought to be

Page 22: (results figure)

Page 23:

2-dim clustered spots (left & right)

How does each module map the 2-D input space?


Page 24: (results figure)

Page 25:

Next steps

Architecture:
– Time
– Reference problem
– Reference platform
– Integration method
– Separate streams for transforms vs. objects
– Get people involved!

Algorithms:
– Noise
– Multiple hypotheses
– Distributed representation
– “Neurons”
– Better quantization, mapping, robustness