A Non-Probabilistic Generalization of the Agreement Theorem
Slide 1

A Non-Probabilistic Generalization of the Agreement Theorem
Slide 2

Knowledge

Ω – a state space
Π – a partition of Ω
Π(ω) – the element of Π that contains state ω

At ω the agent knows Π(ω) … and also any event E that contains Π(ω).
K(E) – the event that the agent knows E.
Slide 3

Knowledge

K(E) – the event that the agent knows E:
K(E) = {ω | Π(ω) ⊆ E}

The operator K : 2^Ω → 2^Ω satisfies:
1. K(Ω) = Ω
2. K(E) ∩ K(F) = K(E ∩ F)
3. K(E) ⊆ E
4. ¬K(E) = K(¬K(E))

Conversely: if K satisfies 1–4, then there exists a partition Π such that K(E) = {ω | Π(ω) ⊆ E}.

Jaakko Hintikka, Knowledge and Belief – An Introduction to the Logic of the Two Notions
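The partition-based knowledge operator can be checked by brute force on a small state space. A minimal sketch (the state space and partition are illustrative, not from the slides):

```python
# The knowledge operator induced by a partition, with the slide's four
# properties verified over every event of a small Ω.
from itertools import chain, combinations

OMEGA = frozenset(range(5))                              # state space Ω
PARTITION = [frozenset({0, 1}), frozenset({2, 3}), frozenset({4})]

def cell(w):
    """Π(ω): the partition element containing state ω."""
    return next(c for c in PARTITION if w in c)

def K(E):
    """K(E) = {ω | Π(ω) ⊆ E}: the event that the agent knows E."""
    return frozenset(w for w in OMEGA if cell(w) <= E)

EVENTS = [frozenset(s) for s in
          chain.from_iterable(combinations(sorted(OMEGA), r)
                              for r in range(len(OMEGA) + 1))]

assert K(OMEGA) == OMEGA                                 # 1. K(Ω) = Ω
assert all(K(E) & K(F) == K(E & F)                       # 2. K(E) ∩ K(F) = K(E ∩ F)
           for E in EVENTS for F in EVENTS)
assert all(K(E) <= E for E in EVENTS)                    # 3. K(E) ⊆ E
assert all(OMEGA - K(E) == K(OMEGA - K(E))               # 4. ¬K(E) = K(¬K(E))
           for E in EVENTS)
```

With E = {0, 1, 2}, for instance, K(E) = {0, 1}: the agent knows E exactly at the states whose whole cell lies inside E.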
Slide 4

Probability

P – a prior probability.
[figure: a partition of Ω whose cells carry prior probabilities 1/14, 2/14, 1/14, 4/14, 2/14, 2/14, 2/14]

Fix an event E. The posterior probability of E is the decision function
d: Ω → ℝ
d(ω) = P(E | Π(ω))
[figure: the posterior values on the cells are 2/3, 0, 2/3, 2/3, 2/3, 1/2, 1/2]

[d = p] – the event {ω | d(ω) = p}, e.g. [d = 2/3].
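The posterior decision function and its level events [d = p] can be sketched as follows (the prior, partition, and event here are illustrative, not the slide's figure):

```python
# d(ω) = P(E | Π(ω)) for a fixed event E, and the level events [d = p].
from fractions import Fraction

OMEGA = range(6)
PARTITION = [frozenset({0, 1}), frozenset({2, 3, 4}), frozenset({5})]
PRIOR = {w: Fraction(1, 6) for w in OMEGA}               # a uniform prior P
E = frozenset({1, 2, 3})                                 # the fixed event E

def cell(w):
    """Π(ω): the partition element containing ω."""
    return next(c for c in PARTITION if w in c)

def d(w):
    """d(ω) = P(E | Π(ω)) = P(E ∩ Π(ω)) / P(Π(ω))."""
    c = cell(w)
    return sum(PRIOR[x] for x in E & c) / sum(PRIOR[x] for x in c)

def level_event(p):
    """[d = p]: the event {ω | d(ω) = p}."""
    return frozenset(w for w in OMEGA if d(w) == p)
```

Here [d = 2/3] = {2, 3, 4}: the union of the cells on which the posterior equals 2/3, just as on the slide's example.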
Slide 5

Common Knowledge

Π1 – agent 1's partition
Π2 – agent 2's partition
Πc – a partition coarser than Π1 and Π2, and the finest among all such partitions: the common knowledge partition.

K(E) := K1(E) ∩ K2(E)
Kc(E) = ∩_{n≥1} Kⁿ(E)
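The common knowledge partition Πc can be computed as the finest common coarsening (meet) of Π1 and Π2: take connected components of the relation "shares a cell in Π1 or Π2". A sketch on assumed partitions:

```python
# The meet of two partitions: the finest partition coarser than both.
def meet(p1, p2, states):
    cells = [set(c) for c in p1] + [set(c) for c in p2]
    components, remaining = [], set(states)
    while remaining:
        comp = {remaining.pop()}
        grown = True
        while grown:                     # absorb every overlapping cell
            grown = False
            for c in cells:
                if c & comp and not c <= comp:
                    comp |= c
                    grown = True
        remaining -= comp
        components.append(frozenset(comp))
    return components

P1 = [frozenset({0, 1}), frozenset({2, 3}), frozenset({4})]
P2 = [frozenset({0, 1}), frozenset({2}), frozenset({3, 4})]
PC = meet(P1, P2, range(5))              # the common knowledge partition Πc
```

Kc(E) can then be read off as {ω | Πc(ω) ⊆ E}, which agrees with the iterated definition Kc(E) = ∩_{n≥1} Kⁿ(E).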
Slide 6

The probabilistic agreement theorem

P – a common prior probability
d1(ω) = P(E | Π1(ω))
d2(ω) = P(E | Π2(ω))

For all p1, p2: if Kc([d1 = p1] ∩ [d2 = p2]) ≠ ∅, then p1 = p2.

It is impossible … to agree … to disagree.
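The theorem can be verified by brute force on a toy example. A sketch with assumed numbers (a uniform common prior on four states, partitions whose meet is the trivial partition {Ω}):

```python
# Whenever both posteriors are common knowledge, they coincide.
from fractions import Fraction

OMEGA = frozenset(range(4))
P1 = [frozenset({0, 1}), frozenset({2, 3})]              # Π1
P2 = [frozenset({0, 2}), frozenset({1, 3})]              # Π2
PRIOR = {w: Fraction(1, 4) for w in OMEGA}               # common prior P
E = frozenset({0, 3})

def cell(partition, w):
    return next(c for c in partition if w in c)

def posterior(partition, w):
    c = cell(partition, w)
    return sum(PRIOR[x] for x in E & c) / sum(PRIOR[x] for x in c)

# Here the meet of Π1 and Π2 is the trivial partition {Ω}, so an event is
# common knowledge at some state only if it is all of Ω.
def common_knowledge(F):
    return F == OMEGA

for p1 in {posterior(P1, w) for w in OMEGA}:
    for p2 in {posterior(P2, w) for w in OMEGA}:
        both = (frozenset(w for w in OMEGA if posterior(P1, w) == p1)
                & frozenset(w for w in OMEGA if posterior(P2, w) == p2))
        if common_knowledge(both):
            assert p1 == p2              # the theorem's conclusion
```

In this example both posteriors equal 1/2 everywhere, so the posteriors are common knowledge and, as the theorem requires, they agree.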
Slide 7

The probabilistic agreement theorem

P – a common prior probability
d1(ω) = P(E | Π1(ω))
d2(ω) = P(E | Π2(ω))

For all p1, p2: if Kc([d1 = p1] ∩ [d2 = p2]) ≠ ∅, then p1 = p2.

A non-probabilistic version: replace the posteriors by decision functions d1, d2 taking values δ1, δ2, … in a set of decisions Δ. Which conditions on d1 and d2, conditions that the posterior probability functions satisfy, yield an agreement theorem?
Slide 8

Cave, J. (1983), Learning to agree, Economics Letters, 12.
Bacharach, M. (1985), Some extensions of a claim of Aumann in an axiomatic model of knowledge, J. Econom. Theory, 37(1).

Virtual decision functions

D : 2^Ω → Δ
A decision function di : Ω → Δ is derived from the virtual decision function D if
di(ω) = D(Πi(ω))

Interpretation: D(E) is the decision that would be made if E were the information given to the agent.

Agents are like-minded if all individual decision functions are derived from the same virtual decision function.
Slide 9

The Sure Thing Principle (STP)

"A businessman contemplates buying a certain piece of property. He considers the outcome of the next presidential election relevant. So, to clarify the matter to himself, he asks whether he would buy if he knew that the Democratic candidate were going to win, and decides that he would. Similarly, he considers whether he would buy if he knew that the Republican candidate were going to win, and again finds that he would. Seeing that he would buy in either event, he decides that he should buy, even though he does not know which event obtains, or will obtain, as we would ordinarily say. It is all too seldom that a decision can be arrived at on the basis of this principle, but, except possibly for the assumption of simple ordering, I know of no other extralogical principle governing decisions that finds such ready acceptance."

"The sure-thing principle cannot appropriately be accepted as a postulate in the sense that P1 is, because it would introduce new undefined technical terms referring to knowledge and possibility that would render it mathematically useless without still more postulates governing these terms. It will be preferable to regard the principle as a loose one that suggests certain formal postulates well articulated with P1."

Savage, L. J. (1954), The Foundations of Statistics.
Slide 10

Virtual decision functions

D : 2^Ω → Δ
A decision function di : Ω → Δ is derived from the virtual decision function D if
di(ω) = D(Πi(ω))

Agents are like-minded if all individual decision functions are derived from the same virtual decision function.

The virtual decision function D satisfies the STP when for any two disjoint events E, F: if D(E) = D(F) = δ, then D(E ∪ F) = δ.
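The STP is easy to check mechanically for a virtual decision function on a small state space. A sketch with two hypothetical decision rules (neither is from the slides): a strict-majority rule, which satisfies the STP, and a parity rule, which violates it.

```python
# Checking the sure thing principle for a virtual decision function D : 2^Ω → Δ.
from itertools import chain, combinations

OMEGA = frozenset(range(4))
SUSPECTS = frozenset({0, 1})             # hypothetical "guilty" states

def nonempty_events():
    return [frozenset(s) for s in
            chain.from_iterable(combinations(sorted(OMEGA), r)
                                for r in range(1, len(OMEGA) + 1))]

def satisfies_stp(D):
    """For all disjoint E, F: D(E) = D(F) = δ implies D(E ∪ F) = δ."""
    evs = nonempty_events()
    return all(D(E | F) == D(E)
               for E in evs for F in evs
               if not (E & F) and D(E) == D(F))

def majority(E):
    """'arrest' iff a strict majority of E lies in SUSPECTS."""
    return "arrest" if 2 * len(E & SUSPECTS) > len(E) else "release"

def parity(E):
    """A rule that violates the STP: decide by the parity of |E|."""
    return "arrest" if len(E) % 2 else "release"

assert satisfies_stp(majority)
assert not satisfies_stp(parity)
```

The parity rule fails because two disjoint odd events both yield "arrest" while their even union yields "release", exactly the situation the STP rules out.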
Slide 11

An agreement theorem

If the agents are like-minded with virtual decision function D, and D satisfies the STP, then it is impossible to agree to disagree. That is, if the decisions of the agents are common knowledge, then they coincide.
Slide 12

A detective story

A murder has been committed. To increase the chances of conviction, the chief of police puts two detectives on the case, with strict instructions to work independently and to exchange no information. The two, Alice and Bob, went to the same police school; so, given the same clues, they would reach the same conclusions. But as they will work independently, they will, presumably, not get the same clues. At the end of thirty days, each is to decide whom to arrest (possibly nobody).

Like-mindedness.
Slide 13

A detective story

On the night before the thirtieth day, they happen to meet … and get to talking about the case. True to their instructions, they exchange no substantive information, no clues; but … feel that there is no harm in telling each other whom they plan to arrest. Thus, … it is common knowledge between them whom each will arrest.

Conclusion: They arrest the same people; and this, in spite of knowing nothing about each other's clues.

Curtain
Slide 14

A detective story

Aumann (1988), Notes on interactive epistemology, unpublished.
Aumann (1999), Notes on interactive epistemology, IJGT.
Slide 15

Virtual decision functions are problematic. Syntactically, they involve knowledge that cannot be expressed in terms of the actual knowledge operators Ki. Semantically, at a given state ω the agent's knowledge is given by Πi(ω) and not by any other event. (Moses & Nachum (1990))

[figure: states ω and ω' with their information sets Πi(ω) and Πi(ω'), labelled Ki p, ¬Ki p and Ki ¬Ki p, ¬Ki ¬Ki p]

Is the STP captured? Is the agent more knowledgeable in ω than in ω'?

A remedy: how do virtual decision functions fit in a partitional knowledge setup?
Slide 16

Comparison of knowledge

A comparison can be intrapersonal and interstate: i knows at ω more than i knows at ω'. Or it can be interpersonal and intrastate: i knows at ω more than j knows at ω.

The latter holds when ω ∈ ¬Kj(E) ∪ Ki(E) for each E.

The event that i is more knowledgeable than j:
[i ≥ j] := ∩_E (¬Kj(E) ∪ Ki(E))
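In a partitional model the event [i ≥ j] can be computed directly from the definition, and it coincides with the cell-wise criterion Πi(ω) ⊆ Πj(ω). A sketch with assumed partitions:

```python
# [i ≥ j] := ∩_E (¬Kj(E) ∪ Ki(E)), compared with the cell-wise test.
from itertools import chain, combinations

OMEGA = frozenset(range(4))
PI_I = [frozenset({0, 1}), frozenset({2}), frozenset({3})]   # Πi
PI_J = [frozenset({0}), frozenset({1, 2, 3})]                # Πj

def cell(partition, w):
    return next(c for c in partition if w in c)

def K(partition, E):
    return frozenset(w for w in OMEGA if cell(partition, w) <= E)

ALL_EVENTS = [frozenset(s) for s in
              chain.from_iterable(combinations(sorted(OMEGA), r)
                                  for r in range(len(OMEGA) + 1))]

# Intersect ¬Kj(E) ∪ Ki(E) over every event E.
i_geq_j = OMEGA
for E in ALL_EVENTS:
    i_geq_j &= (OMEGA - K(PI_J, E)) | K(PI_I, E)

# Equivalent criterion: i is more knowledgeable at ω iff Πi(ω) ⊆ Πj(ω).
assert i_geq_j == frozenset(w for w in OMEGA
                            if cell(PI_I, w) <= cell(PI_J, w))
```

Here [i ≥ j] = {2, 3}: exactly the states at which i's information set is contained in j's.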
Slide 17

Interpersonal Sure Thing Principle (ISTP)

The decision functions (d1, …, dn) satisfy the ISTP if for each i and j:
Kj([di = δ] ∩ [i ≥ j]) ⊆ [dj = δ]
That is, if j knows that i is more knowledgeable and that i's decision is δ, then j's decision is δ as well.

[i = j] := [i ≥ j] ∩ [j ≥ i]

Proposition: If the decision functions satisfy the ISTP, then the agents are like-minded: for each i and j,
[i = j] ⊆ [di = dj]
Slide 18

Expandability

The decision functions (d1, …, dn) on the model (Ω, K1, …, Kn) are ISTP-expandable if for each expansion (Ω, K1, …, Kn, Kn+1) in which agent n+1 is an epistemic dummy, there exists a decision function dn+1 for agent n+1 such that (d1, …, dn, dn+1) satisfy the ISTP.

An agent is an epistemic dummy if it is common knowledge that each other agent is more knowledgeable.

Officer E. P. Dummy
Slide 19

A non-probabilistic generalization of the agreement theorem

Theorem: If the decision functions (d1, …, dn) on the model (Ω, K1, …, Kn) are ISTP-expandable, then the agents cannot agree to disagree.
Slide 20

Why ISTP?

[figure: Alice's ken and Binmore's ken, each shown as a list of sentences]

ken – the list of all sentences known to the agent.
The decision δ depends only on the ken.

Alice knows that Binmore is more knowledgeable (K ⊆ K', where K is Alice's ken and K' is Binmore's) and that Binmore's decision is δ. For each ken K' that is consistent with K and satisfies K ⊆ K', Binmore's decision is δ; so, by a sure-thing argument over these kens, Alice's decision should be δ as well.