TRANSCRIPT
Invited Talk
The 9th International Conference on Machine Learning and Applications (ICMLA)
Washington DC, USA, 12-14 December 2010
Gheorghe Tecuci
Mihai Boicu, Dorin Marcu, David Schum, Kathryn Russell
Learning Agent Center, George Mason University
Abstract
We present the development and applications of Disciple cognitive
agents that integrate three complementary capabilities:
(1) They are able to learn, directly from their users, subject matter
expertise which currently takes years to establish, is lost when
experts separate from service, and is costly to replace;
(2) They can tutor new users in expert problem solving; and
(3) They can assist their users in solving complex problems in uncertain
and dynamic environments.
We first present how one can teach a Disciple agent to solve a familiar
problem: how to assess a PhD advisor.
Then we discuss the development of a Disciple agent for the highly
complex domain of intelligence analysis, where it helps intelligence
analysts to discover and evaluate evidence and hypotheses, by
generating Wigmorean probabilistic inference networks that link evidence
to hypotheses in argumentation structures that establish and defend the
relevance, the believability, and the inferential force or weight of evidence.
Overview
Research and Development Objectives
How a Disciple Agent is Taught and Learns
Cognitive Assistant for Intelligence Analysis:
Learning, Tutoring, and Analytic Assistance
Final Remarks
Building Expert Systems (Cognitive Agents)

"Rarely does a technology arise that offers such a wide range of
important benefits of this magnitude. Yet as the technology moved
through the phase of early adoption to general industry adoption, the
response has been cautious, slow, and 'linear' (rather than exponential)."
Edward Feigenbaum, "Tiger in a Cage: The Applications of
Knowledge-Based Systems," AAAI 1993 Invited Talk

[Diagram: a knowledge engineer, in dialog with the subject matter expert,
programs the expert system (knowledge base + inference engine), which
produces results.]

Building expert systems is hard because much of the expert's
problem-solving knowledge is tacit knowledge, which is very difficult
to capture.
Teaching as Alternative to Programming

Building an intelligent machine by programming is too difficult.

"Instead of trying to produce a program to simulate the adult mind,
why not rather try to produce one which simulates the child's?
If this were then subjected to an appropriate course of education
one would obtain the adult brain."
Alan Turing, "Computing Machinery and Intelligence," Mind, 59,
433-460, 1950.
Disciple Approach to Agent Development

An approach to developing learning and problem-solving agents that can
be taught by subject matter experts to become cognitive assistants.

The expert teaches the agent how to solve problems in a way that
resembles how the expert would teach a student, an apprentice, or a
collaborator. The agent continuously develops and refines its knowledge
base to capture and better represent the expert's knowledge and
problem-solving strategies.

Main ideas:
• Mixed-initiative problem solving
• Evidence-based reasoning
• Teaching and learning
• Multistrategy learning

Resulting agent capabilities:
• Learning expert knowledge
• Assisting experts and non-experts in problem solving
• Teaching students
Disciple Shell and Disciple Cognitive Assistants

A new paradigm for system development and maintenance:
[Diagram: the subject matter expert teaches the Disciple agent shell
(Disciple↔Expert); each resulting Disciple agent pairs the shell with
its learned knowledge base (KB).]
Sample Applications of the Disciple Agents
• Course of Action Critiquing
• Workaround Reasoning in Planning
• Strategic Center of Gravity Determination
• PhD Advisor Assessment, Web Believability Evaluation
• Higher Order Thinking Skills in History and Statistics
• Regulatory Compliance in Financial Services Industry
• Medical Triage and Medical Diagnosis
• Collaborative Emergency Response Planning
• Inquiry-based Learning, Evidence-based Teaching Evaluation
• Intelligence Analysis and Evidence-based Reasoning
Vision: Evolution of Software Development and Use
• Mainframe computers: software systems developed and used by
computer experts.
• Personal computers: software systems developed by computer experts
and used by persons who are not computer experts.
• Learning assistants (DISCIPLE): software systems developed and used
by persons who are not computer experts.
Vision: Use of Disciple Agents in Education

Personalized Learning: a Grand Challenge for the 21st Century
(US National Academy of Engineering, 2008).

• A subject matter expert teaches Disciple similarly to how the expert
would teach a student.
• Disciple behaves as a tutoring system, guiding the student through a
series of lessons and exercises.
• A student uses Disciple as an assistant, collaborating with it and
learning from its explicit reasoning.

[Diagram: the expert teaches the Disciple agent (KB); the Disciple agent
tutors students; student and Disciple collaborate. Photo: Army
Intelligence Center.]
Overview
Research and Development Objectives
How a Disciple Agent is Taught and Learns
Cognitive Assistant for Intelligence Analysis:
Learning, Tutoring, and Analytic Assistance
Final Remarks
Reasoning Model: Divide and Conquer

A problem P1 is solved by:
• successively reducing it to simpler and simpler problems;
• finding the solutions of the simplest problems;
• successively combining these solutions to obtain the solution of the
initial problem.

"One of the most highly developed skills in contemporary Western
civilization is dissection; the split-up of problems into their smallest
possible components. We are good at it. So good, we often forget to put
the pieces back together again." Alvin Toffler

"I Keep Six Honest..." by Rudyard Kipling
I keep six honest serving-men
(They taught me all I knew);
Their names are What and Why and When
And How and Where and Who.

[Diagram: problem reduction and solution synthesis — problem P1 is
reduced, through question/answer pairs, to sub-problems P11 … P1n, and
so on down to elementary problems; their solutions S11 … S1n are
successively combined, bottom-up, into the solution S1 of P1.]
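A minimal Python sketch of this reduction/synthesis model (the Problem
structure, the solve function, and the sample values are illustrative
assumptions, not Disciple's actual code):

```python
from dataclasses import dataclass, field
from typing import Callable, List, Optional

@dataclass
class Problem:
    statement: str
    question: str = ""        # question that guides the reduction
    answer: str = ""          # answer that justifies it
    subproblems: List["Problem"] = field(default_factory=list)
    synthesize: Optional[Callable[[List[str]], str]] = None
    solution: Optional[str] = None   # direct solution of an elementary problem

def solve(p: Problem) -> str:
    """Successively reduce, solve the simplest problems, combine bottom-up."""
    if not p.subproblems:            # elementary problem
        return p.solution
    return p.synthesize([solve(sp) for sp in p.subproblems])

# P1 is reduced to two elementary sub-problems whose solutions are combined.
p11 = Problem("P11", solution="S11")
p12 = Problem("P12", solution="S12")
p1 = Problem("P1", question="Q", answer="A", subproblems=[p11, p12],
             synthesize=lambda sols: "S1 from " + ", ".join(sols))
print(solve(p1))   # -> "S1 from S11, S12"
```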
PhD Advisor Assessment: Overall Logic
[Photo: Yves Kodratoff, PhD advisor]

Reduction (top-down): the problem "Assess whether John Doe is a good
PhD advisor for Bob Sharp in Artificial Intelligence" is reduced
(Necessary conditions satisfied? Yes — common area of interest, etc.)
to "Assess whether John Doe is a potential PhD advisor for Bob Sharp",
and then, by the main PhD advisor criteria (professional reputation,
learning experience, …, quality of student results), to problems such
as "Assess whether John Doe would be a good PhD advisor for Bob Sharp
with respect to professional reputation". This problem is further
reduced, by the sub-criteria of professional reputation, to assessments
with respect to reputation among peers, quality of student results, and
research funding, each assessed by the logic for an elementary criterion.

Synthesis (bottom-up): "It is almost certain that John Doe would be a
good PhD advisor for Bob Sharp with respect to reputation among peers",
"It is very likely that John Doe would be a good PhD advisor for Bob
Sharp with respect to research funding", …, are combined into "It is
very likely that John Doe would be a good PhD advisor for Bob Sharp",
and finally into "It is very likely that John Doe would be a good PhD
advisor for Bob Sharp in Artificial Intelligence".
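The bottom-up synthesis combines symbolic likelihoods. A hedged sketch
of one plausible synthesis function follows (the ordered scale and the
minimum, "weakest link", combination are illustrative assumptions; in
Disciple the synthesis functions are defined by the expert):

```python
# Ordered symbolic probability scale, weakest to strongest (assumed).
SCALE = ["no support", "likely", "very likely", "almost certain", "certain"]

def and_synthesis(assessments):
    """A composite assessment is only as strong as its weakest sub-assessment."""
    return min(assessments, key=SCALE.index)

peers   = "almost certain"   # ... with respect to reputation among peers
funding = "very likely"      # ... with respect to research funding

# Combined assessment with respect to professional reputation:
print(and_synthesis([peers, funding]))   # -> "very likely"
```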
Hybrid Knowledge Base = Ontology + Rules

Ontology: a hierarchical representation of the domain concepts.
[Diagram: ontology fragment — person subsumes employee and student;
university employee subsumes faculty member and staff member; faculty
member subsumes instructor and professor; professor subsumes full
professor, associate professor, and assistant professor; student
subsumes graduate student (M.S. student, Ph.D. student) and
undergraduate student (B.S. student); graduate research assistant and
teaching assistant also appear. Instances include John Doe, John Smith,
and Jane Austin (PhD advisors), Bob Sharp (a Ph.D. student), and
Joan Dean.]

Problem reduction and solution synthesis rules, specified with the
concepts from the ontology:
IF the problem to solve is P1g
   Condition
   Except-When Condition
   …
   Except-When Condition
THEN solve its sub-problems P11g … P1ng

Reasoning tree: finding the solution of a specific problem.
[Diagram: a reasoning tree in which problem P1 is reduced, through
question/answer pairs, to sub-problems P11 … P1n, and the solutions
S11 … S1n are combined into S1; each reduction step is generated by a
rule instantiated with elements of the ontology.]

Learned with an evolving ontology.
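A small Python sketch of such a hybrid knowledge base (the dictionaries
and the is_a helper are illustrative assumptions; concept and instance
names follow the slide):

```python
# Generalization hierarchy: concept -> parent concept.
PARENT = {
    "full professor": "professor",
    "associate professor": "professor",
    "assistant professor": "professor",
    "professor": "faculty member",
    "instructor": "faculty member",
    "faculty member": "university employee",
    "staff member": "university employee",
}
INSTANCE_OF = {"John Doe": "associate professor"}

def is_a(x, concept):
    """Climb the hierarchy from an instance or concept up to `concept`."""
    c = INSTANCE_OF.get(x, x)
    while c is not None:
        if c == concept:
            return True
        c = PARENT.get(c)
    return False

# A reduction rule whose applicability condition is expressed with
# ontology concepts; Except-When conditions would block its application.
rule = {
    "problem": "Assess whether ?O1 would be a good PhD advisor for ?O2",
    "condition": lambda o1: is_a(o1, "faculty member"),
    "except_when": [],
    "subproblems": ["Assess ?O1 with respect to professional reputation", "..."],
}
print(rule["condition"]("John Doe"))   # -> True
```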
Partially Learned Rules and Evolving Ontology
Partially learned rules with plausible version space (PVS) conditions:
IF the problem to solve is P1
   Main PVS Condition
   Except-When PVS Condition
THEN solve its sub-problems P11 … P1n

[Diagram: a reasoning tree whose reductions are generated by partially
learned rules; positive (+) and negative (−) examples refine the PVS
conditions.]
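A sketch of a PVS condition for one rule variable (the data layout, the
ancestor table, and the three-valued match are illustrative
assumptions): a value covered by the plausible lower bound certainly
satisfies the condition; a value covered only by the plausible upper
bound just plausibly satisfies it.

```python
# Concepts covering each instance, most specific to most general (assumed).
ANCESTORS = {
    "John Doe": ["associate professor", "professor", "faculty member", "person"],
    "Joan Dean": ["M.S. student", "graduate student", "student", "person"],
}
PVS = {"?O1": {"lower": "associate professor", "upper": "faculty member"}}

def match(var, value):
    concepts = ANCESTORS[value]
    if PVS[var]["lower"] in concepts:
        return "certain match"       # inside the plausible lower bound
    if PVS[var]["upper"] in concepts:
        return "plausible match"     # only inside the plausible upper bound
    return "no match"

print(match("?O1", "John Doe"))   # -> "certain match"
print(match("?O1", "Joan Dean"))  # -> "no match"
```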
Agent Development and Maintenance Tasks

Programming (Subject Matter Expert + Knowledge Engineer) — very
difficult and time-consuming:
• Model problem solving
• Develop ontology
• Define reasoning rules
• Verify and update rules

Teaching and Learning (Subject Matter Expert + Knowledge Engineer +
Learning Agent):
• Instruct the expert to use the model; import and develop the initial ontology
• Develop reasoning trees; specify instances & features; provide and
explain examples; explain errors; analyze the agent's solutions
• Learn ontological elements; learn reasoning rules; refine rules

Claim: much easier and faster because many tasks are performed by the
learning agent (cognitive assistant) itself.
Modeling, Learning, and Problem Solving

[Diagram: mixed-initiative problem solving over the knowledge base
(ontology + rules). Starting from a problem, the expert and the agent
build a reasoning tree. Accepted reasoning steps become examples that
drive rule refinement; rejected reasoning steps are explained and also
drive rule refinement; extensions of the reasoning tree are explained
and drive rule learning. The results are learned rules, refined rules,
and a refined ontology.]
Integrated Teaching and Learning

The subject matter expert helps the agent to learn (e.g., by providing
examples and explanations), and the agent helps the expert to teach it
(e.g., by asking relevant questions).

[Diagram: the expert provides input knowledge, problem-solving examples
and explanations, hints and answers to the agent's questions, and
explicit learning guidance; the agent exhibits problem-solving behavior,
attempts solutions to problems, asks questions, and provides explicit
teaching guidance.]
Rule Learning Method

Example: problem P1 is reduced, through Question Q and Answer A, to the
sub-problems P11 … P1n.

1. Ontology-based mixed-initiative understanding: the explanation of the
example is found in the knowledge base (e.g., the ontology fragment
ob1 --f1--> ob3 <--f2-- ob2).

2. Example and explanation reformulation: the instances are replaced
with variables, yielding a specific condition (?O1 is ob1, f1 ?O3;
?O2 is ob2, f2 ?O3; ?O3 is ob3).

3. Minimal and maximal ontology-based generalizations, guided by
analogical reasoning, produce the rule:
IF the problem to solve is P1g
   Applicability condition (plausible version space)
THEN solve its sub-problems P11g … P1ng
(with the generalized question Qg and answer Ag)
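A sketch of step 2 (example and explanation reformulation): each
instance in the explanation is replaced by a variable, yielding the
rule's specific condition. The variablize helper is an illustrative
assumption; it numbers variables by first occurrence.

```python
explanation = [
    ("John Doe", "is expert in", "Artificial Intelligence"),
    ("Bob Sharp", "is interested in", "Artificial Intelligence"),
]

def variablize(facts):
    """Replace each distinct instance with a ?On variable, in first-occurrence order."""
    mapping, condition = {}, []
    for subj, feature, obj in facts:
        for x in (subj, obj):
            if x not in mapping:
                mapping[x] = f"?O{len(mapping) + 1}"
        condition.append((mapping[subj], feature, mapping[obj]))
    return mapping, condition

mapping, condition = variablize(explanation)
print(mapping)
# {'John Doe': '?O1', 'Artificial Intelligence': '?O2', 'Bob Sharp': '?O3'}
print(condition)
# [('?O1', 'is expert in', '?O2'), ('?O3', 'is interested in', '?O2')]
```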
Modeling and Learning
1. Modeling: the expert makes explicit how to solve a problem.
2. Learning: the agent learns reduction rules (Rule1, Rule2).
Rule Learning
[Diagram: from a problem-reduction example, the agent learns a
reduction rule.]
Rule Learning

1. Ontology-based mixed-initiative understanding. Explanation found in
the knowledge base: John Doe is expert in Artificial Intelligence;
Bob Sharp is interested in Artificial Intelligence.

2. Example and explanation reformulation. Condition:
?O1 is John Doe, is expert in ?O3
?O2 is Bob Sharp, is interested in ?O3
?O3 is Artificial Intelligence
Minimal and Maximal Generalizations

Specific condition:
?O1 is John Doe, is expert in ?O3
?O2 is Bob Sharp, is interested in ?O3
?O3 is Artificial Intelligence

Each variable receives a plausible lower bound (LB, its most specific
generalization) and a plausible upper bound (UB, its most general
generalization), computed in the object and feature ontologies. The
upper bounds are constrained by the features' domains and ranges (e.g.,
"is expert in" and "is interested in" have the domain person and the
range research area).

[Diagram: ontology fragment — John Doe (an instance of PhD advisor and
associate professor) and Bob Sharp (an instance of PhD student)
generalize, through professor, faculty member, university employee,
employee, graduate student, and student, up to person; Artificial
Intelligence generalizes, through concepts such as PhD research area
and Computer Science, up to research area.]
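A sketch of computing the two bounds by climbing the ontology (the
hierarchy content follows the slide; the functions and the stopping
concept are illustrative assumptions):

```python
PARENT = {
    "John Doe": "associate professor",
    "associate professor": "professor",
    "professor": "faculty member",
    "faculty member": "university employee",
    "university employee": "employee",
    "employee": "person",
}

def minimal_generalization(instance):
    """Plausible lower bound: the most specific concept covering the instance."""
    return PARENT[instance]

def maximal_generalization(instance, most_general):
    """Plausible upper bound: climb to the most general concept allowed,
    e.g. the domain 'person' of the feature 'is expert in'."""
    c = instance
    while c != most_general and c in PARENT:
        c = PARENT[c]
    return c

print(minimal_generalization("John Doe"))            # -> "associate professor"
print(maximal_generalization("John Doe", "person"))  # -> "person"
```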
Solving, Critiquing, and Refinement

1. Solving: the agent applies learned rules to solve new problems,
generating examples of problem reductions.
2. Critiquing: the expert marks each generated reduction as correct or
incorrect, optionally with an explanation (for an incorrect example, a
failure explanation).
3. Refinement: the agent refines the rule with the correct (positive)
example, combining learning from examples, learning from explanations,
and learning by analogy and experimentation, and updates the knowledge
base.

Refinement strategy: PVS condition refinement (cf. Tom Mitchell,
"Version Spaces: A Candidate Elimination Approach to Rule Learning,"
IJCAI 1977).
Generalization with a Positive Example

Positive example:
?O1 is Dan Smith, is expert in ?O3
?O2 is Bob Sharp, is interested in ?O3
?O3 is Information Security

Dan Smith is an instance of full professor, a PhD advisor. The rule's
main condition is minimally generalized to cover the new positive
example (e.g., generalizing a lower bound from associate professor to
professor, which covers both), yielding the refined rule.
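A sketch of this minimal generalization with a positive example: the
lower bound grows to the least concept covering both the old instance
(associate professor, for John Doe) and the new one (full professor,
for Dan Smith). The ancestor table and helper are illustrative
assumptions.

```python
# Ancestor chains, from most specific to most general (assumed).
ANCESTORS = {
    "associate professor": ["associate professor", "professor", "faculty member"],
    "full professor": ["full professor", "professor", "faculty member"],
}

def least_common_generalization(c1, c2):
    """First concept on c1's chain that also covers c2."""
    for c in ANCESTORS[c1]:
        if c in ANCESTORS[c2]:
            return c

print(least_common_generalization("associate professor", "full professor"))
# -> "professor"
```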
Solving, Critiquing, and Refinement

1. Solving: the agent applies learned rules to solve new problems.
2. Critiquing: the expert marks one reduction as correct and another as
incorrect because Dan Smith plans to retire.
3. Refinement: the agent refines the rule with the positive example and
specializes it with the negative example.
The correct example refines the rule (learning from examples); the
incorrect example, with its failure explanation, specializes it
(learning from explanations, learning by analogy and experimentation).
The rule schema is:

IF we have to solve <Problem>
   Main PVS Condition
   Except-When PVS Condition
THEN solve <Sub-problem 1> … <Sub-problem m>

Refinement strategy: Except-When PVS condition.

Rule Specialization with a Negative Example

Negative example with the failure explanation: Dan Smith plans to
retire from George Mason University. The explanation is rewritten as
Except-When Condition 1:
?O1 is Dan Smith, plans to retire from ?O4
?O4 is George Mason University
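A sketch of this specialization step: the variablized failure
explanation is attached to the rule as an Except-When condition that
blocks future applications matching it. The structures and the mapping
are illustrative assumptions.

```python
rule = {
    "main_condition": [("?O1", "is expert in", "?O3"),
                       ("?O2", "is interested in", "?O3")],
    "except_when": [],
}

failure_explanation = [
    ("Dan Smith", "plans to retire from", "George Mason University"),
]

def add_except_when(rule, explanation, mapping):
    """Generalize the explanation with the rule's variables and attach it."""
    condition = [(mapping.get(s, s), f, mapping.get(o, o))
                 for s, f, o in explanation]
    rule["except_when"].append(condition)

add_except_when(rule, failure_explanation,
                {"Dan Smith": "?O1", "George Mason University": "?O4"})
print(rule["except_when"])
# -> [[('?O1', 'plans to retire from', '?O4')]]
```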
Solving, Critiquing, and Refinement

1. Solving: the agent applies learned rules to solve new problems.
2. Critiquing: the expert marks one reduction as correct and another as
incorrect because Jane Austin plans to move.
3. Refinement: the agent refines the rule with the positive example and
specializes it with the negative example.

Rule Specialization

Negative example with the failure explanation: Jane Austin plans to
move to Indiana University. The explanation is rewritten as Except-When
Condition 2:
?O1 is Jane Austin, plans to move to ?O4
?O4 is Indiana University
Solving, Modeling, and Learning

1. Solving: the agent applies learned rules to solve new problems.
2. Modeling: the expert extends the reasoning tree.
3. Learning: the agent learns a new rule from the extension.

Rule Learning
[Diagram: from the new reduction example, the agent learns a reduction
rule.]
Solving, Critiquing, and Refinement

1. Solving: the agent applies learned rules to solve new problems.
2. Critiquing: the expert marks one reduction as correct and another as
incorrect because of the "even chance" likelihood.
3. Refinement: the agent specializes the rule with the negative example
(the incorrect reduction), yielding the refined rule.
Synergy of Modeling, Learning, and Problem Solving

[Diagram: mixed-initiative reasoning connects modeling, learning, and
problem solving over the ontology + rules. The expert's examples and
creative solutions (with their context) feed modeling; rule-based
guidance and generated examples feed problem solving; accepted and
rejected reasoning steps and extended reasoning trees, with their
explained examples, drive rule learning and rule refinement, producing
learned rules, refined rules, and a refined ontology.]

Modeling, learning, and problem solving mutually support each other to
capture tacit knowledge.
Characterization of the Disciple Learning Method
• Uses a form of multistrategy learning that synergistically integrates
learning from examples, learning from explanations, and learning by analogy.
• Uses the explanation of the first example and analogical reasoning to
generate a much smaller version space than Mitchell's classical version
space method.
• Efficiently searches the version space, guided by explanations
obtained through mixed-initiative reasoning with the user (both the
upper and the lower bounds are generalized and specialized to converge
toward one another).
• Learns from only a few examples, in the context of an incomplete and
evolving ontology.
• Learns even in the presence of exceptions.
• Keeps minimally generalized examples and explanations to
automatically regenerate the rules when the ontology changes.
• Efficiently captures the expert's tacit knowledge, significantly
reducing the complexity of developing cognitive assistants.
• Applied to many complex real-world domains (intelligence analysis,
military strategy and planning, education, collaborative emergency
response, etc.).
Overview
Research and Development Objectives
How a Disciple Agent is Taught and Learns
Cognitive Assistant for Intelligence Analysis:
Learning, Tutoring, and Analytic Assistance
Final Remarks
Intelligence Analysis
• The goal of intelligence analysis is to answer complex questions for
decision-making, such as: Does Al Qaeda have nuclear weapons? Will the
United States be the world leader in alternative fuels within the next
decade?
• Complex arguments, requiring both imaginative and critical reasoning,
are necessary to establish and defend the relevance, the believability,
and the inferential force of evidence with respect to the questions asked.
• The answers are necessarily probabilistic in nature because our
evidence is always incomplete, usually inconclusive, frequently
ambiguous, commonly dissonant, and of varying degrees of believability.

David A. Schum, The Evidential Foundations of Probabilistic Reasoning,
Northwestern University Press, 1994, 2001.
John H. Wigmore, The Science of Judicial Proof: As Given by Logic,
Psychology, and General Experience and Illustrated in Judicial Trials,
Little, Brown & Co., Boston, 1937.
Disciple-LTA: Analyst's Cognitive Assistant

Learning: rapid acquisition and maintenance of intelligence analysis
expertise, which currently takes years to establish, is lost when
experts separate from service, and is costly to replace.

Tutoring: helps new intelligence analysts learn the reasoning processes
involved in making intelligence judgments and solving intelligence
analysis problems.

Analytic assistance: empowers the analysts through mixed-initiative
reasoning for hypothesis analysis, collaboration with other analysts
and experts, and sharing of information.
Sample Problem: Analysis of Wide-Area Motion Imagery

From: Mita Desai, Multi-entity activity discovery over large space-time
windows, DARPA,
http://www.darpa.mil/ipto/solicit/baa/BAA-09-55_ID01.pdf

Real-time analysis: compare tracks against known movement patterns, or
sets and sequences of events, and find matches that may indicate an
impending threat event (e.g., an ambush).

Forensic analysis: backtrack from a threat event (e.g., ambush, rocket
launch) and discover participants, possible related locations and
events, and movement patterns.
Discovery of Evidence, Hypotheses and Arguments

Evidence in search of hypotheses (abductive reasoning: P, possibly Q).
What threat does this evidence suggest? E*: evidence of road work at
the Al Batha highway junction at 1:17 AM suggests Hk: ambush threat at
the Al Batha highway junction at 1:17 AM, among competing explanations
such as road repair, traffic disruption, or not road work.

Hypotheses in search of evidence (deductive reasoning: P, necessarily
Q). Assuming that the threat is real, what other events or entities
should be observable? Hk: ambush threat implies Hc: ambush preparation,
Hi: ambush location, Ha: road blocking, and E: road work, which
identify potential items of evidence.

Evidential tests of hypotheses (inductive reasoning: P, probably Q).
What is the likelihood of the threat based on the available evidence?
The collected items of evidence lead to Hk: ambush threat very likely.
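A sketch of these three operations on the slide's ambush example (the
explanation and prediction tables, and the crude confirmation count
used as an evidential test, are illustrative assumptions):

```python
EXPLAINS = {   # abduction: hypothesis -> observations it would explain
    "ambush threat": ["road work at Al Batha junction at 1:17AM"],
    "road repair":   ["road work at Al Batha junction at 1:17AM"],
}
PREDICTS = {   # deduction: hypothesis -> what else should be observable
    "ambush threat": ["ambush preparation", "ambush location", "road blocking"],
}

def abduce(observation):                 # P, possibly Q
    return [h for h, obs in EXPLAINS.items() if observation in obs]

def deduce(hypothesis):                  # P, necessarily Q
    return PREDICTS.get(hypothesis, [])

def induce(hypothesis, evidence):        # P, probably Q (crude evidential test)
    predictions = deduce(hypothesis)
    confirmed = sum(1 for p in predictions if p in evidence)
    return f"{hypothesis}: {confirmed}/{len(predictions)} predictions confirmed"

print(abduce("road work at Al Batha junction at 1:17AM"))
# -> ['ambush threat', 'road repair']
print(induce("ambush threat", {"ambush preparation", "road blocking"}))
# -> ambush threat: 2/3 predictions confirmed
```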
Mixed-Initiative Multistrategy Learning
[Diagram: the same abductive, deductive, and inductive reasoning steps
as on the previous slide; from each, the agent learns a corresponding
abductive rule, deductive rule, and inductive rule.]
Wigmorean Network for Hypothesis Analysis

[Diagram: Assess H1 is reduced to Assess H11, Assess H12, and Assess
H13. Assess H12 is reduced to assessing the favoring evidence and the
disfavoring evidence for H12; assessing the extent to which an item of
evidence E1 (or E2) favors H12 is reduced to assessing the relevance of
E1 to H12 and the believability of E1, grounded in the items of
evidence E*i.]

Relevance answers the question: So what? How does this item of
information bear on what we are trying to prove or disprove?
(e.g., "If we believe E1 then H12 is almost certain.")
Wigmorean Network for Hypothesis Analysis

Believability answers the question: Can we believe what this item of
intelligence information is telling us?
(e.g., "It is likely that E1 is true.")
Wigmorean Network for Hypothesis Analysis

Inferential force or weight answers the question: How strong is this
item of relevant evidence in favoring or disfavoring the various
alternative hypotheses being entertained? The assessments are composed
bottom-up:
• Inferential force of E1 on H12: it is likely that E1 is true and, if
we believe E1, H12 is almost certain; therefore, based on E1, it is
likely that H12 is true. Based on E2, it is almost certain that H12 is
true.
• Inferential force of the favoring evidence on H12: based on the
favoring evidence, it is almost certain that H12 is true; based on the
disfavoring evidence, it is an even chance that H12 is false.
• Inferential force of the evidence on H12: it is very likely that H12
is true.
• With H11 almost certain and H13 very likely, the inferential force of
the evidence on H1 yields: it is very likely that H1 is true.
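A sketch of how these assessments might be composed on the ordered
symbolic scale (the min for combining believability with relevance and
the max for fusing favoring items are illustrative assumptions, not
necessarily Disciple's exact synthesis functions):

```python
SCALE = ["no support", "even chance", "likely", "very likely",
         "almost certain", "certain"]
idx = SCALE.index

def inferential_force(believability, relevance):
    """Evidence supports a hypothesis no more than the weaker of the two."""
    return min(believability, relevance, key=idx)

def fuse_favoring(forces):
    """The favoring evidence is as strong as its strongest item."""
    return max(forces, key=idx)

e1 = inferential_force("likely", "almost certain")          # E1 on H12
e2 = inferential_force("almost certain", "almost certain")  # E2 on H12
print(e1)                       # -> "likely"
print(fuse_favoring([e1, e2]))  # -> "almost certain"
```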
Believability of Evidence

Evidence ontology:
• tangible evidence: real tangible evidence, demonstrative tangible evidence
• testimonial evidence:
  – unequivocal testimonial evidence: based upon direct observation,
obtained at second hand, testimonial evidence based on opinion
  – equivocal testimonial evidence: completely equivocal,
probabilistically equivocal
• authoritative record
• missing evidence

Believability assessments:
• believability of tangible evidence E: authenticity of E, reliability
of E, accuracy of E
• believability of testimonial evidence E: the source's competence
(access, understandability) and the source's credibility (veracity,
objectivity, observational sensitivity)
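A sketch of assessing the believability of testimonial evidence from
these source attributes (the min-combination and the sample values are
illustrative assumptions):

```python
SCALE = ["no support", "even chance", "likely", "very likely",
         "almost certain", "certain"]

def weakest(*assessments):
    return min(assessments, key=SCALE.index)

competence  = weakest("certain", "almost certain")   # access, understandability
credibility = weakest("very likely",                 # veracity,
                      "almost certain",              # objectivity,
                      "very likely")                 # observational sensitivity
believability = weakest(competence, credibility)
print(believability)   # -> "very likely"
```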
Multi-Agent and Multi-Domain Problem Solving

[Diagram: a problem-reduction tree whose sub-trees are developed by
different agents, each assisting a user with expertise in a different
domain; the sub-solutions are synthesized into the solution of the
initial problem.]

The problem reduction / solution synthesis paradigm facilitates:
• collaboration between users assisted by their agents;
• solving problems requiring multi-domain expertise.
The Arch of Knowledge

[Diagram: the arch of knowledge, illustrated with Aristotle, Newton,
Locke, Herschel, Whewell, Peirce, Oldroyd, Schum, and Galileo. Evidence
in search of hypotheses (abductive reasoning) ascends from observations
to a hypothesis; hypotheses in search of evidence (deductive reasoning)
descends from the hypothesis to new observations, identifying potential
items of evidence; evidential tests of hypotheses (inductive reasoning)
ascend from the items of evidence to the likelihood of the hypothesis.]

Example. Abduction: E*, a report that a cesium-137 canister is missing,
suggests E: the canister is missing (rather than not missing); Ha: it
was stolen (rather than lost or used in a project); Hc: it was stolen
by a terrorist organization (rather than by a competitor or an
employee); He: to build a dirty bomb; and Hk: a dirty bomb will be set
off in the Washington DC area. Deduction: assuming Hk is true, the
chain of hypotheses He, Hc, Ha, and Ei identifies potential items of
evidence E*. Induction: the collected items of evidence support the
conclusion Hk: it is likely that a dirty bomb will be set off in the
Washington DC area.
The Arch of Knowledge Everywhere

Science: Observations of Events in Nature → Possible Hypotheses or
Explanations → New Observable Phenomena → New or Revised Theory

Law: Observations during Fact Investigation → Possible Charges or
Complaints → New Potential Evidence → Verdict

Intelligence Analysis: Observations of Events in the World → Possible
Hypotheses → New Potential Evidence → Likelihood of Hypotheses
Overview
Research and Development Objectives
How a Disciple Agent is Taught and Learns
Cognitive Assistant for Intelligence Analysis:
Learning, Tutoring, and Analytic Assistance
Final Remarks
Future Research Directions
• Natural language understanding and generation
• Abductive reasoning and learning
• Abstraction reasoning and learning
• Multistrategy reasoning and learning
• Integration of logic with different probability systems (Fuzzy,
Baconian, Naïve Bayes, Dempster-Shafer)
• Modeling evidence-based decision making
• Scaling up the developed methods and tools
• Application in a variety of domains (education, intelligence,
defense, medicine, etc.)
Research Vision for the Disciple Learning Assistants
[Diagram: the evolution from mainframe computers to personal computers
to learning assistants.]
Questions
Acknowledgements and Contact Information

This research was performed in the Learning Agents Center and was
supported by George Mason University and by several agencies of the
U.S. Government, including the Department of Defense, the National
Geospatial-Intelligence Agency, the Intelligence Community, the Air
Force Office of Scientific Research, the Air Force Research Laboratory,
the Defense Advanced Research Projects Agency, the National Science
Foundation, the U.S. Army War College, and the Joint Forces Staff
College. The U.S. Government is authorized to reproduce and distribute
reprints for Government purposes notwithstanding any copyright notation
thereon.

Contact information: Dr. Gheorghe Tecuci
Professor of Computer Science and Director of the Learning Agents Center
MSN 6B3, Learning Agents Center, George Mason Univ., Fairfax, VA 22030
[email protected] tel: 703 993 1722 http://lac.gmu.edu/