TRANSCRIPT
Imitation and Social Intelligence for Synthetic Characters
Daphna Buchsbaum, MIT Media Lab and Icosystem Corporation
Bruce Blumberg, MIT Media Lab
Socially Intelligent Characters and Robots
• Able to learn by observing and interacting with humans, and each other
• Able to interpret others' actions, intentions and motivations - characters with Theory of Mind
• Prerequisite for cooperative behavior
Max and Morris
[Video: Max and Morris]
Max and Morris
• Max watches Morris using synthetic vision
• Can recognize and imitate Morris’s movements, by comparing them to his own movements (using his own movements as the model/example set)
• Uses movement recognition to bootstrap identifying simple motivations and goals and learning about new objects in the environment
Infant Imitation
• These interactions may help infants learn relationships between self and other
• ‘like me’ experiences
• Simulation Theory
Simulation Theory
• “To know a man is to walk a mile in his shoes”
• Understanding others using our own perceptual, behavioral and motor mechanisms
• We want to create a Simulation Theory-based social learning system for synthetic characters
Motor Representation: The Posegraph
• Nodes are poses
• Edges are allowable transitions
• A motor program generates a path through a graph of annotated poses
• Paths can be compared and classified
Related Work: Downie 2001 Masters Thesis; Arikan and Forsyth, SIGGRAPH 2002; Lee et al., SIGGRAPH 2002
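As a rough illustration of the slide above (not the authors' implementation; all names are hypothetical), a posegraph can be sketched as a directed graph whose nodes are poses and whose edges are allowable transitions, with a motor program as a path through it:

```python
# Hypothetical sketch of a posegraph: nodes are poses, edges are
# allowable transitions, and a motor program is a path through the graph.
from collections import defaultdict

class Posegraph:
    def __init__(self):
        # pose -> set of poses reachable in one transition
        self.edges = defaultdict(set)

    def add_transition(self, pose_a, pose_b):
        self.edges[pose_a].add(pose_b)

    def is_valid_path(self, path):
        """A motor program is valid if every consecutive pose pair is an edge."""
        return all(b in self.edges[a] for a, b in zip(path, path[1:]))

g = Posegraph()
g.add_transition("stand", "crouch")
g.add_transition("crouch", "jump")
print(g.is_valid_path(["stand", "crouch", "jump"]))  # True
print(g.is_valid_path(["stand", "jump"]))            # False
```

Because paths are just pose sequences, two motor programs can be compared or classified simply by comparing their paths, which is the property the later recognition slides rely on.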
Motor Representation: The Posegraph
• Multi-resolution graphs
• Nodes are movements
• Blending variants of ‘same’ motion
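One way to read the "blending variants" bullet (a hypothetical sketch, assuming poses are equal-length joint-angle vectors and variants are time-aligned):

```python
# Hypothetical sketch: variants of the 'same' motion are blended by
# interpolating corresponding joint angles, pose by pose.
def blend_poses(pose_a, pose_b, w=0.5):
    """Weighted average of two joint-angle vectors (w weights pose_a)."""
    return [w * a + (1 - w) * b for a, b in zip(pose_a, pose_b)]

def blend_motions(motion_a, motion_b, w=0.5):
    """Blend two equal-length pose sequences frame by frame."""
    return [blend_poses(pa, pb, w) for pa, pb in zip(motion_a, motion_b)]

print(blend_poses([0.0, 90.0], [10.0, 70.0]))  # [5.0, 80.0]
```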
Synthetic Vision
• Graphical camera captures Max’s viewpoint
• Enforces sensory honesty (occlusion)
Synthetic Vision
• Key body parts are color-coded
• Max locates them, and remembers their position relative to Morris’s root node.
• People watching a movement attend to end-effector locations
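The relative-position bookkeeping described above could look like this minimal sketch (function name and coordinate convention are assumptions, not from the talk):

```python
# Hypothetical sketch: record each color-coded body part's position
# relative to Morris's root node, so the observation does not depend
# on where Morris happens to be standing in the world.
def relative_positions(root, parts):
    """parts: dict of body-part name -> world-space (x, y, z) position."""
    rx, ry, rz = root
    return {name: (x - rx, y - ry, z - rz) for name, (x, y, z) in parts.items()}

rel = relative_positions((1.0, 0.0, 2.0), {"left_paw": (1.5, 0.5, 2.0)})
print(rel["left_paw"])  # (0.5, 0.5, 0.0)
```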
Parsing Motion
• Many different movements start and end in the same transitionary poses (Gleicher et al., 2003)
• These poses can be used as segment markers
Related Work: Bindiganavale and Badler, CAPTECH 1998; Fod, Mataric and Jenkins, Autonomous Robots 2002; Lieberman, Masters Thesis 2004
Movement Recognition
• Identify the best matching path through the posegraph
• Check if this path closely matches an already existing movement
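A toy version of this matching step (hypothetical; uses Python's difflib similarity ratio as a stand-in for whatever path-comparison metric the real system uses):

```python
# Hypothetical sketch: classify an observed pose path by comparing it
# with Max's own stored movements and keeping the closest match, if
# the match is close enough.
import difflib

def recognize(observed_path, known_movements, threshold=0.8):
    """known_movements: dict of movement name -> pose path."""
    best_name, best_score = None, 0.0
    for name, path in known_movements.items():
        score = difflib.SequenceMatcher(None, observed_path, path).ratio()
        if score > best_score:
            best_name, best_score = name, score
    return best_name if best_score >= threshold else None

known = {"jump": ["stand", "crouch", "jump", "stand"]}
print(recognize(["stand", "crouch", "jump", "stand"], known))  # jump
print(recognize(["sit", "stand"], known))                      # None
```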
Differing Movement Graphs
[Video: differing movement graphs]
Identifying Actions, Motivations and Goals
[Video: identifying actions, motivations and goals]
Action Identification
[Diagram: the action hierarchy, from top-level motivation systems down to action tuples (Trigger, Action, Object, Do-until)]
Representation of Action: Action TupleRepresentation of Action: Action Tuple
Object
Action
Do-until
Trigger Context in which the action can be performed
Optional object to perform action on
Anything from setting an internal variable to making a motor request.
Context in which action is completed
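The four-field tuple above might be encoded like this (a hypothetical sketch; the real system's triggers and motor requests are certainly richer than boolean callables):

```python
# Hypothetical encoding of the action-tuple representation: a trigger
# for when the action can start, an optional object, the action body,
# and a do-until condition for when it is complete.
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class ActionTuple:
    trigger: Callable[[], bool]    # context in which the action can start
    action: Callable[[], None]     # internal-variable change or motor request
    do_until: Callable[[], bool]   # context in which the action is complete
    obj: Optional[str] = None      # optional object to act on

    def can_start(self):
        return self.trigger()

eat = ActionTuple(trigger=lambda: True,
                  action=lambda: None,
                  do_until=lambda: False,
                  obj="food_dish")
print(eat.can_start())  # True
```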
Action Identification
• "Should-I" trigger
• "Can-I" trigger
Action Identification
• Find bottom-level actions that use matched movements
• Find all paths through the action hierarchy to the matching action
• Check "can-I" triggers to see which actions are possible
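Putting the three identification steps together as a toy pipeline (all data structures are hypothetical; the real action hierarchy and triggers are richer):

```python
# Hypothetical sketch of action identification: find leaf actions that
# use the matched movement, walk up the action hierarchy to collect
# candidate paths, then filter by "can-I" trigger results.
def identify_actions(matched_movement, leaf_actions, parent_of, can_i):
    """leaf_actions: dict of action -> movement it uses.
    parent_of: dict of action -> parent action (None at the top).
    can_i: dict of action -> bool ("can-I" trigger result)."""
    candidates = [a for a, m in leaf_actions.items() if m == matched_movement]
    paths = []
    for action in candidates:
        path, node = [], action
        while node is not None:          # walk up to the hierarchy root
            path.append(node)
            node = parent_of.get(node)
        paths.append(list(reversed(path)))
    # keep only paths whose every action is currently possible
    return [p for p in paths if all(can_i.get(a, False) for a in p)]

leaves = {"bite": "jaw_snap", "chew": "jaw_snap"}
parents = {"bite": "eat", "chew": "eat", "eat": None}
possible = {"bite": True, "chew": False, "eat": True}
print(identify_actions("jaw_snap", leaves, parents, possible))
# [['eat', 'bite']]
```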
[Video: action identification]
Learning About Objects
Contributions: What Max Can Do
• Parse a continuous stream of motion into individual movement units
• Classify observed movements as one of his own
• Identify observed actions, using his own action system
• Identify simple motivations and goals for an action
• Learn uses of objects through observation
Future Work: What Max Can't Currently Do
• Solve the correspondence problem
• Imitate characters with non-identical morphology
• Act on knowledge of a partner's goals (cooperative activity)
• Learn novel movements (they are currently ignored)
Harder Problems
• How do you use your knowledge?
– Limits of simulation theory
– Intentions vs consequences: The problem of the robot that eats for you
– What level of granularity do you attend to: wanting the object vs wanting to eat
Acknowledgements
• Members of the Synthetic Characters and Robotic Life Groups at the MIT Media Lab
• Advisor: Bruce Blumberg, MIT Media Lab
• Thesis Readers: Cynthia Breazeal, MIT Media Lab; Andrew Meltzoff, University of Washington
• Special Thanks To: Jesse Gray and Marc Downie