PLAYING WITH HATS: THE SET THEORY OF GENERALIZED HAT PROBLEMS

Matthew J. Feller

May 12, 2014

Abstract

This project has two parts: the first part is an introduction to the important concepts of “hat problems,” and the second part is a summary of my own research in the area.

A “hat problem” is one where, given a set of people with colored hats placed on their heads, one asks whether there is a strategy (based only on the hats they can see) this group of people can use which will guarantee someone correctly guesses the color of his/her hat. The basic scenario is two people and two colors, where there is indeed a simple strategy guaranteed to be successful. The problems become mathematically interesting when the number of people is allowed to be infinite; the axiom of choice becomes important.


Source: people.virginia.edu/~mjf7eb/undergradthesis.pdf


Contents

I A Crash Course in Hat Problems

1 Introduction
   1.1 The Two Agent Problem
   1.2 Some Basic Notation
   1.3 General Format
   1.4 Basic Graph Theory
   1.5 The µ-Predictor
   1.6 The Gabay-O’Connor Theorem

2 The Denumerable Setting: One-Way Visibility
   2.1 Introduction
   2.2 Minimal Predictors for a Transitive Graph
   2.3 Finite-Error Predictors: Transitive Graphs
   2.4 Finite-Error Predictors: Arbitrary Graphs

3 Two “Teams” of Agents
   3.1 Two Finite Teams
   3.2 Two Infinite Teams

4 A Slight Twist: Agents That Do Not Know Where They Are

5 Predicting the Future

II New Research

6 A Finite Number of Teams
   6.1 Finitely Many Agents
   6.2 Finitely Many Infinite Teams

7 A Plane of Agents
   7.1 A Preliminary Lemma
   7.2 Full Knowledge of Directions
   7.3 Limited Knowledge of Directions
   7.4 Other Variations/Open Questions

8 The Gabay-O’Connor Theorem and AC


Part I

A Crash Course in Hat Problems

The first half of this paper introduces some of the basic concepts and results from Hardin and Taylor’s book The Mathematics of Coordinated Inference: A Study of Generalized Hat Problems [1]. The book covers a vast range of material, some of it involving concepts from topology and advanced set theory; we cover here the material which is hopefully more accessible to an undergraduate with only an introductory course in set theory. Specific emphasis is put on those concepts which are needed to understand the new research presented in the second half of this paper.

In Chapter 1, we give a basic introduction to hat problems. We start with the two person/two color problem and then set up the more general format. We then introduce the bit of graph theory we will need, give an important definition, and prove a basic but powerful result.

Chapter 2 explores the specific problem in which our set of agents is the natural numbers and everyone is “looking upwards,” that is, each agent can only see higher numbered agents. It turns out that we can characterize how much visibility there needs to be in order to get successful guessing strategies.

In Chapter 3, we investigate a special case where we have two “teams” of agents, and show that the existence of a successful strategy depends heavily on the number of colors.

Chapters 4 and 5 are quick mentions of concepts that are explored much more fully and carefully in Hardin and Taylor. Chapter 4 deals with the scenario in Chapter 2, except that now the agents do not know where in line they stand. Chapter 5 briefly sketches a fun result about “predicting the future,” whose proof and full statement is beyond the scope of this paper.

This paper skips most of the results about finite sets of agents, which can be found in the second chapter of Hardin and Taylor [1, p. 11]. That material is not particularly hard, but it turns out to be basically useless for the infinite case.

Before getting started, it is worth pointing out that most of the following results depend heavily on the nature of infinity and the axiom of choice. With finitely many people and finitely many colors, we could actually play the game if we wanted to (though it would not be much fun). In the infinite case, however, we require that each agent somehow sees an infinite amount of information and, as we will see, that there are well-orderings on sets of cardinality larger than ω. Thus the finite and infinite cases are almost unrelated, so, as pointed out in the previous paragraph, all results and intuition from the finite case are generally unhelpful for the infinite case.


1 Introduction

1.1 The Two Agent Problem

We start with the basic problem from which the rest of this material is derived. As is usually the case in these problems, the object of the game is for at least one player to guess the color of the hat on his/her head.

Imagine there are two players sitting in a room, facing one another. Let’s call them players A and B. Their adversary comes in with a bag of red hats and blue hats, placing a hat on each player’s head. They can see the other player’s hat, but not their own. Using only the color of the other player’s hat, they both try to guess the color of their own hat. If either one of them guesses right, they both win. If they both guess wrong, they both lose and their adversary wins. They are allowed to work out a strategy ahead of time, but they cannot communicate in any way once the hats are on their heads. They are not allowed to know the other’s guess before they give their own.

It is important to realize that this has nothing to do with probability, as probability implies uncertainty. The question we are interested in is whether there is some strategy we can find which guarantees, with absolute certainty, that somebody will guess correctly. At this point, it’s worth pausing to see if you can figure out what the successful strategy is for this basic problem before continuing to the next paragraph. There’s a certain way of thinking about the problem which makes the answer “obvious” and which is how we’re going to want to think about the more complicated problems.

Here is a winning strategy: Player A guesses the same color as the one it sees on Player B, and Player B guesses the opposite color of the one it sees on Player A. You could write out every scenario and see that it works, but the fastest way to see that it works (and to see why it works) is to notice that the hats can either be the same color, or different colors. If they are the same, Player A will be right, and if they are different, Player B will be right. (Notice that this also means one player is guaranteed to be wrong every time, too. We don’t care about that, though, as long as at least one person is right every time.)
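The case analysis above is small enough to check exhaustively by machine. Here is a short sketch (the function names are ours, not from the paper), with colors encoded as 0 (red) and 1 (blue):

```python
# A brute-force check of the two-player strategy described above.

def guess_A(hat_on_B):
    """Player A guesses the same color it sees on Player B."""
    return hat_on_B

def guess_B(hat_on_A):
    """Player B guesses the opposite of the color it sees on Player A."""
    return 1 - hat_on_A

def someone_right(hat_A, hat_B):
    """True if at least one player guesses its own hat correctly."""
    return guess_A(hat_B) == hat_A or guess_B(hat_A) == hat_B

# All four possible hat assignments produce at least one correct guess.
assert all(someone_right(a, b) for a in (0, 1) for b in (0, 1))
```

As the text notes, in every one of the four cases exactly one of the two players is right: A when the hats match, B when they differ.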

1.2 Some Basic Notation

Much of this notation should be familiar. For a set X, the power set of X is denoted P(X). The cardinality of X is denoted |X|. For a function f, the restriction of f to a set X is f|X. The set of all functions from a set A to a set K is denoted ^A K. For two sets X and Y, their set difference is X \ Y = {x ∈ X | x ∉ Y}, and the symmetric difference is X∆Y = (X \ Y) ∪ (Y \ X). If f and g are functions, f∆g = {x | f(x) ≠ g(x)}. Recall that each natural number is defined to be the set containing all lower natural numbers, starting with 0 = ∅ (1 = {0}, 2 = {0, 1}, etc.), and that the set of all natural numbers is called ω.
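These definitions can be illustrated directly in code (a small sketch of our own, modeling functions as Python dicts on a common domain):

```python
# Set difference, symmetric difference, and f∆g for functions.
X, Y = {0, 1, 2, 3}, {2, 3, 4}
diff = {x for x in X if x not in Y}          # X \ Y
sym = (X - Y) | (Y - X)                      # X ∆ Y

f = {0: 'red', 1: 'blue', 2: 'red'}
g = {0: 'red', 1: 'red', 2: 'red'}
f_delta_g = {x for x in f if f[x] != g[x]}   # f∆g = {x | f(x) ≠ g(x)}

assert diff == {0, 1}
assert sym == {0, 1, 4}
assert f_delta_g == {1}
```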


1.3 General Format

Many “hat problems” follow the same format, so we will start with some general definitions. There is usually a set A of agents (what we’ve been calling players so far), a set K of colors, and a set C = ^A K of colorings (the set of functions from A into K). Given an agent a ∈ A, let V(a) denote the subset of A that a can see. (Note: in every problem, a will not be able to see itself, so a ∉ V(a).) Define the following:

Definition 1.3.1. We define ≡a to be a relation on C where f ≡a g if f|V(a) = g|V(a); that is, if f and g look the same to a. This is clearly an equivalence relation.

In each problem, our goal is to come up with a strategy for each player a ∈ A. This should input a coloring and output a color (the agent sees the coloring and guesses its own hat color), so it should be a function Ga : C → K. However, if a cannot tell two colorings f and g apart, then we want the guess to be the same, so we require that if f ≡a g, then Ga(f) = Ga(g). This motivates the next definition:

Definition 1.3.2. Given an agent a, a guessing strategy for a is a function Ga : C → K such that if f ≡a g then Ga(f) = Ga(g). We say that a guesses correctly if Ga(f) = f(a).

Notice that, collectively, the agents guess a coloring. This leads us to our next definition.

Definition 1.3.3. Given a set of agents A, a set of colors K, and a visibility graph V on A, a predictor is a function P : C → C where, given a ∈ A, P(f)(a) = Ga(f), where Ga is a guessing strategy.

A predictor simply inputs the actual coloring and outputs the guessed coloring. This is probably the most important definition in here, so be sure you fully understand it. We will judge a predictor, loosely speaking, by “how many” agents are guaranteed to guess correctly. For our purposes, a minimal predictor is one where at least one agent is guaranteed to guess correctly. (This definition is less general than in Hardin and Taylor [1, p. 3], but, for the material we will cover, it is sufficient.)

1.4 Basic Graph Theory

This section will explain the basic graph theory we will use. Informally, a “graph” is a set of “vertices” and “edges” connecting those vertices. You can think of it as a bunch of dots (vertices) with lines (edges) connecting them. Formally, a set V of vertices can be any nonempty set. (For us, the set of vertices will always be the set A of agents.) For an undirected graph, the set of edges is some set E ⊆ P(V) where ∀x ∈ E, |x| = 2; that is, E is a set of pairs of vertices, and for a, b ∈ V, a and b are connected if and only if {a, b} ∈ E. For a directed graph, the set of edges E is some subset of V × V;


thus for a, b ∈ V, a and b are connected if and only if either (a, b) ∈ E or (b, a) ∈ E. The directed graph has the extra notion of a direction for each edge; instead of thinking of it as a line connecting two vertices, it is now more like an arrow from the first vertex to the second in the ordered pair. (When drawing a directed graph, where there is an edge from a to b and from b to a, one can either draw two arrows, or a single arrow with a head on each end.) A graph G is simply an ordered pair consisting of the set of vertices and the set of edges: G = (V, E). If G is a graph with a set of vertices V, we say G is a graph on V.

Figure 1: An undirected graph.

Throughout this paper, the set of vertices will be the set A of agents, and there will be a graph V on A which describes the visibility, i.e., which agents can see which other agents. In some cases the graph will be directed, where an agent a can see another agent b if and only if there is an edge from a to b. That is the obvious thing to do; if a can see b, draw an arrow from a to b. However, many of the problems we will be looking at have ω as the set of agents, where each agent can only see agents higher than it on the number line. Using a directed graph, every arrow would be pointed in the same direction, so we can simplify the problem a little by making the graph undirected and saying that a can see b if and only if (i) there is an edge connecting a and b and (ii) a < b. (In each problem or class of problems we will specify whether the graph is directed or undirected, and how exactly the condition for “sight” is defined.) If a can see b, we write aVb, and we can more formally define the set of agents that a can see as V(a) = {b | aVb}.

Some terminology: A graph is complete if every vertex is connected to every other vertex. A graph is an independent graph if no two vertices are connected. A set B is an independent set for a graph V if aVb for no two a, b ∈ B. A graph V is transitive if aVb and bVc implies aVc. For a graph V = (A, E), a subgraph W = (B, D) of V is a graph where B ⊆ A and D ⊆ E.
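These notions translate directly into code. Below is a minimal sketch under our own modeling assumptions: an undirected graph is a set of frozenset edges, and “a sees b” uses the one-way convention (a < b plus an edge) just described.

```python
def sees(a, b, E):
    """One-way visibility: a sees b iff a < b and {a, b} is an edge."""
    return a < b and frozenset((a, b)) in E

def is_independent_set(B, E):
    """B is independent if no two of its vertices are connected."""
    return all(frozenset((a, b)) not in E for a in B for b in B if a != b)

def is_transitive(V, E):
    """aVb and bVc implies aVc."""
    return all(not (sees(a, b, E) and sees(b, c, E)) or sees(a, c, E)
               for a in V for b in V for c in V)

path = {frozenset((0, 1)), frozenset((1, 2))}    # the path 0-1-2
triangle = path | {frozenset((0, 2))}

assert not is_transitive({0, 1, 2}, path)        # 0V1 and 1V2, but not 0V2
assert is_transitive({0, 1, 2}, triangle)
assert is_independent_set({0, 2}, path)
```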

1.5 The µ-Predictor

Here we define the predictor which will be the key to unlock the case where A is infinite. In virtually all cases, this definition requires the Axiom of Choice (AC); or, specifically, the assertion that all sets admit a well-ordering, which is equivalent to AC.


Figure 2: The red dots form an independent set in this graph.

Definition 1.5.1. Let A be the set of agents, let K be the set of colors, and for a ∈ A let ≡a be a’s equivalence relation on C as in Definition 1.3.1. (Recall that for f, g ∈ C, f ≡a g if and only if f and g are indistinguishable to a.) Let [f]a be the equivalence class of ≡a containing f. Assuming AC, every set has a well-ordering, so let ≺ be a well-ordering of C. Let 〈f〉a be the least element of [f]a in the ≺ ordering. We then have agent a guess the color 〈f〉a(a). We define the µ-predictor as M≺ : C → C where ∀a ∈ A, M≺(f)(a) = 〈f〉a(a).

While each agent is really only guessing its own color, we can also think of it as guessing the total coloring of A. Clearly they will want to choose a coloring which is consistent with what they see; the µ-predictor has them choosing the least such coloring, then making the guess that they are colored according to that coloring. The power of the µ-predictor will become apparent once we actually use it in the later sections.
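For finite A and K the definition can be carried out literally, with the lexicographic order standing in for the well-ordering (no choice is needed for a finite C). The following is a toy sketch of Definition 1.5.1 with our own names, using complete visibility:

```python
from itertools import product

AGENTS = range(3)
COLORS = (0, 1)
# C = all colorings; list order (lexicographic) serves as the well-ordering.
C = list(product(COLORS, repeat=len(AGENTS)))

def sees(a):
    """Complete visibility: agent a sees everyone but itself."""
    return [b for b in AGENTS if b != a]

def equivalent(f, g, a):
    """f ≡ₐ g iff f and g agree on everything a can see."""
    return all(f[b] == g[b] for b in sees(a))

def mu_predict(f):
    """Each agent guesses according to the least coloring consistent
    with what it sees -- the µ-predictor."""
    return tuple(next(g for g in C if equivalent(f, g, a))[a] for a in AGENTS)

assert mu_predict((1, 0, 1)) == (0, 0, 0)
```

With this particular ordering, the least coloring consistent with an agent’s view always assigns that agent color 0, so agent a is correct exactly when f(a) = 0 (above, only agent 1 is right). The predictor only shows its real power over an infinite A, where AC supplies the well-ordering.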

1.6 The Gabay-O’Connor Theorem

Before we get to more specific infinite hat problems, we explain and prove the Gabay-O’Connor Theorem, which is a very neat and general result.

Consider the hat problem with two colors and infinitely many agents who can all see each other. If we simply pair each agent off with another agent, we can have them all guess according to the strategy from the two-person game, guaranteeing one correct guess for each pair, and so infinitely many correct guesses. That was easy enough; will it work for more colors? What if the agents’ vision is slightly limited? Will there always be infinitely many incorrect guesses? These, and many similar questions, are all answered neatly by the Gabay-O’Connor theorem (so long as you believe the axiom of choice). It turns out that, in many cases, there is guaranteed to exist a finite-error predictor, a predictor where, no matter the coloring, only finitely many agents will guess wrong. That predictor is also “robust,” as we now define:

Definition 1.6.1. A predictor P is robust if, given a coloring f ∈ C, finitely many hats can be changed without changing any agent’s guess; that is, given any coloring g ∈ C, P(f) = P(g) if f∆g is finite.


Theorem 1.6.1 (Gabay, O’Connor). Suppose that the set A of agents and the set K of colors are arbitrary, and that every agent can see all but finitely many of the other agents’ hats. Then there is a robust finite-error predictor.

Proof. Given h, g ∈ C, say h ≈ g if h∆g is finite. It is easy to see that this is an equivalence relation on C. The axiom of choice gives us a function that chooses a representative from each equivalence class; if we call it Φ : C → C, then we have that Φ(h) ≈ h and if h ≈ g then Φ(h) = Φ(g). Now, each agent a ∈ A can see all but finitely many hats, so, given a coloring h, a knows the equivalence class of h and thus knows Φ(h). Thus we can let every agent guess according to the coloring Φ(h); that is, we define Ga(h) = Φ(h)(a) for each a. We can also just think of Φ as the predictor, as it is not hard to see that it fits our definition of a predictor in Definition 1.3.3. Given h ∈ C, h ≈ Φ(h), so h and Φ(h) agree on all but finitely many hats, thus all but finitely many agents will guess correctly. Thus we see that Φ is a finite-error predictor. Since changing finitely many hats does not change the equivalence class, Φ will still have the same output, so every agent would guess the same. Thus Φ is also robust.

It is important to note that while a finite error is guaranteed, the error is not bounded. We will prove this in the next section for the special case where A = ω. It is also worth reflecting on how powerful this result is. Much of that power comes from two crucial assumptions: the axiom of choice and the fact that the agents know an infinite amount of information. Those assumptions are fine for mathematicians, but it means we have moved away from games that could be played in the real world and plunged deep into the abstract world of infinite mathematics. It is an open question whether the Gabay-O’Connor theorem is equivalent to AC; this will be discussed more in the second half of this paper.

2 The Denumerable Setting: One-Way Visibility

2.1 Introduction

For the next two chapters, unless otherwise stated, our set of agents will be ω. The visibility graph V will be undirected, and a can see b if and only if a and b are connected by an edge and a < b; we call this one-way visibility, since if a sees b, b cannot see a. For example, if V is complete, then each agent sees every agent higher than it on the number line (and none below). In this setting, we have the following theorem.

Theorem 2.1.1. Let the set of agents A = ω, the set of colors K be arbitrary, and V be an undirected graph on A with one-way visibility. Then there is a predictor ensuring one correct guess iff there is a predictor ensuring infinitely many correct guesses.


Proof. The reverse implication is trivial. For the forward implication, we will use the contrapositive. Suppose P is a predictor where only finitely many agents guess correctly for a coloring f ∈ C. Then there is an n ∈ A such that every agent a ≥ n guesses incorrectly. We will now construct a coloring f′ such that all agents guess wrong using P. Starting at n and working down, if the agent m guesses right, change its hat color; otherwise leave it alone. Each agent can only see agents above it, so once it is guessing wrong, it will continue to guess wrong when any hats below it are changed. Thus, for the coloring we get at the end of this process, everyone guesses wrong.

We can construct the coloring f′ more formally as follows: Let f0 = f. Assuming 1 ≤ i ≤ n and that fi−1 is defined, define fi = fi−1 if agent n − i guesses incorrectly; otherwise change agent n − i’s hat and define fi to be the resulting coloring. Let f′ = fn, for which everyone guesses wrong, as explained above.
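The downward hat-flipping construction can be run concretely on a finite version of this setting. The sketch below uses our own names and an arbitrary stand-in predictor (each agent guesses the parity of the hats it can see); any strategy that depends only on higher agents would work the same way:

```python
# Finite sketch: agents 0..N-1, 2 colors, agent i sees exactly agents > i.
N = 5

def guess(m, coloring):
    """A stand-in guessing strategy: agent m guesses the parity of the
    hats it can see (any function of coloring[m+1:] would do)."""
    return sum(coloring[m + 1:]) % 2

def fool_everyone(f):
    """Work from the top agent down, flipping the hat of any agent that
    currently guesses correctly.  Flipping agent m's hat cannot change
    the guess of any agent above m, so wrong guesses stay wrong."""
    f = list(f)
    for m in range(N - 1, -1, -1):
        if guess(m, f) == f[m]:
            f[m] = 1 - f[m]
    return tuple(f)

f_prime = fool_everyone((0, 1, 1, 0, 1))
assert all(guess(m, f_prime) != f_prime[m] for m in range(N))
```

This is exactly the proof’s construction restricted to finitely many agents: because each guess depends only on hats above, processing from the top down freezes each agent into a wrong guess.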

This theorem is another huge sign that we are firmly entrenched in the world of infinite mathematics, where whatever intuition we might have about the finite setting is almost worthless.

We will proceed first by proving two theorems where V is assumed to be transitive, then generalizing those results to cases where V need not be transitive.

2.2 Minimal Predictors for a Transitive Graph

This theorem characterizes transitive graphs for which a minimal predictor exists.

Theorem 2.2.1. Consider the hat problem with a transitive graph V on ω with one-way visibility. The following are equivalent:

1. V contains an infinite path x0V x1V x2V · · · .

2. V contains an infinite complete subgraph.

3. For any set K of colors, there is a predictor ensuring infinitely many correct guesses.

4. For 2 colors, there is a predictor ensuring infinitely many correct guesses.

5. For 2 colors, there is a predictor ensuring one correct guess.

Proof. (1) =⇒ (2): This is immediate since V is transitive.

(2) =⇒ (3): Within the infinite complete subgraph W, each agent sees all higher agents in W, thus sees all but finitely many agents of W. So, by the Gabay-O’Connor theorem, there is a predictor for the agents in W where all but finitely many, and so infinitely many, guess correctly. The agents in W can use this strategy and ignore the agents not in W.

(3) =⇒ (4) and (4) =⇒ (5) are trivial.

(5) =⇒ (1): We will prove this by contrapositive. Assume V contains no infinite path and take any predictor P for 2 colors. Let B0 = ∅. Now, for


0 < α < ℵ1 let Bα = {x ∈ ω | V(x) ⊆ ∪{Bβ | β < α}} (the set of agents who cannot see anybody outside of the sets already defined). Then, for example, B1 = {a ∈ ω | V(a) = ∅} (the set of agents that cannot see anybody else). Let B = ∪{Bα | α < ℵ1}.

Claim 1: There is a coloring where everyone in B guesses wrong.

Proof: We will construct a coloring using transfinite induction. First, the agents in B1 cannot see anybody, so their guesses are determined. This means we can color them so that they guess wrong no matter what everyone else is colored. Assuming we have colored the agents in Bα so they are all wrong no matter what everyone else is colored, the agents in Bα+1 can only see agents in Bα, so we can color them to always be wrong no matter what everyone else is colored. We can apply a similar argument for limit ordinals. Following this procedure, we get a coloring where everyone in B guesses wrong, proving the claim.

Claim 2: B = ω.

Proof: Suppose not, so take x0 ∈ ω \ B.

If ∀β < ℵ1, Bβ ≠ Bβ+1, then we’d have an onto function φ : B → ℵ1 mapping, for each β ∈ ℵ1, the elements of Bβ+1 \ Bβ to β, implying that |B| ≥ ℵ1. But B ⊆ ω, so |B| ≤ ℵ0, a contradiction. So ∃β ∈ ℵ1 such that Bβ = Bβ+1. It then follows almost directly from the definition of Bγ (using transfinite induction) that ∀α ∈ ℵ1 with α ≥ β, Bα = Bβ, and thus B = Bβ.

If for some x ∈ ω, V(x) ⊆ Bβ = B, then x ∈ Bβ+1 = B by definition. Thus, given an n ≥ 0, if xn ∉ B, then ∃xn+1 ∈ V(xn) \ B. Since x0 ∉ B, we get an infinite chain x0V x1V x2V ···. This contradicts our assumption, proving the claim.

Combining the two claims, we get a coloring where everyone in ω guesses wrong according to P.

2.3 Finite-Error Predictors: Transitive Graphs

We can also characterize the transitive graphs for which there is a finite-error predictor. The Gabay-O’Connor theorem gives us some such graphs, but not all, it turns out. But, first, a quick lemma:

Lemma 2.3.1. Suppose we have a transitive graph V on ω that contains no infinite independent subgraph. Then every infinite set D ⊆ ω contains an infinite complete subgraph.

Proof. Given infinite D ⊆ ω, notice there cannot be infinitely many agents a ∈ D for which V(a) ∩ D = ∅, since otherwise those agents would form an infinite independent subgraph. Thus we can choose x0 ∈ D where x0 > a for every a ∈ D such that V(a) ∩ D = ∅. Then for n ≥ 0, suppose we have x0V x1V ···V xn in


D. Since xn ≥ x0, V(xn) ∩ D is nonempty, so let xn+1 ∈ V(xn) ∩ D. Doing this for all n ∈ ω gives us an infinite path x0V x1V ··· in D. Since V is transitive, this is an infinite complete subgraph of D.

Theorem 2.3.1. Suppose we have a transitive graph V on ω with one-way visibility. Then the following are equivalent:

1. V contains no infinite independent subgraph.

2. There exists a finite-error predictor for any set of colors K.

3. There exists a finite-error predictor for 2 colors.

Proof. (1) =⇒ (2): For each a ∈ ω, recall that ≡a is defined on C by f ≡a g iff f|V(a) = g|V(a). We will show that the µ-predictor from Section 1.5 is a finite-error predictor for V. (Recall that the µ-predictor works by fixing a well-ordering ≺ of C and, for a coloring f ∈ C, having each agent a ∈ ω guess the ≺-least coloring g such that f ≡a g. Refer back to Definition 1.5.1 if this is still fuzzy for you.)

Suppose, for contradiction, there is an infinite set D of agents guessing incorrectly for a coloring f by the µ-predictor. By Lemma 2.3.1, this means there is some infinite complete subgraph of D. Let B be the set of agents in that infinite complete subgraph. To get our contradiction, we will construct an infinite descending chain in the well-ordering ≺ of C. This will follow from the following two claims:

Claim 1: Given i, j ∈ B, if i < j then 〈f〉i ⪰ 〈f〉j.

Proof: Since i < j and they’re both in B, i sees j, so by transitivity V(i) ⊇ V(j). Thus [f]i ⊆ [f]j, since any coloring consistent with what i sees will be consistent with what j sees, too. Then 〈f〉i (the ≺-least element of [f]i) is in [f]j, so it is at least as large as 〈f〉j (the ≺-least element of [f]j). This proves the claim.

Claim 2: Given i, j ∈ B, if i < j, then 〈f〉i ≠ 〈f〉j.

Proof: Since i sees j and 〈f〉i is consistent with what i sees, 〈f〉i assigns j the correct color, f(j). That is, 〈f〉i(j) = f(j). But since j ∈ B ⊆ D, j guesses wrong. Thus 〈f〉j(j) ≠ f(j), so 〈f〉i ≠ 〈f〉j, proving the claim.

Let B be indexed B = {b0, b1, b2, ···} where b0 < b1 < b2 < ···. Then by the two claims, we have 〈f〉b0 ≻ 〈f〉b1 ≻ 〈f〉b2 ≻ ···, an infinite descending chain in the well-ordering ≺, a contradiction. This proves (1) =⇒ (2).

(2) =⇒ (3) is trivial.

(3) =⇒ (1): Given P, a finite-error predictor for 2 colors, suppose there exists an infinite independent subgraph. Let D be the set of agents in the subgraph. Color all of the agents in ω \ D arbitrarily. Given d ∈ D, d cannot see any other agents in D, so its guess is completely determined by the coloring of ω \ D. Thus we can color d so it guesses wrong according to P. Doing this for every agent in D gives us infinitely many agents guessing wrong, contradicting the fact that P is finite-error.


2.4 Finite-Error Predictors: Arbitrary Graphs

It turns out that the previous theorem is true even if we do not assume transitivity. It does take a bit of work to generalize the theorem, though.

Theorem 2.4.1. Suppose we have a graph V = (ω, E) with one-way visibility. Then the following are equivalent:

1. V contains no infinite independent subgraph.

2. There exists a finite-error predictor for any set of colors K.

3. There exists a finite-error predictor for 2 colors.

Proof. (2) =⇒ (3) and (3) =⇒ (1) are exactly as in the previous Theorem 2.3.1.

(1) =⇒ (2): We will inductively construct V′ = (ω, E′) where E′ ⊆ E (so V′ is a subgraph of V) which will be transitive and still not contain an infinite independent set. We will do this by carefully removing edges from E. Given that we successfully do that, the agents can play according to the finite-error predictor for V′ (by simply ignoring the extra edges), which is guaranteed by the previous Theorem 2.3.1.

Some quick terminology: We can think of ω as stretched out horizontally, starting with 0 on the left and running to infinity on the right. Recall that an edge is simply a pair of agents, but we often understand it as a line segment between the two agents. Because of this, given an edge {a, b} ∈ E, if a < b we call a the left endpoint and b the right endpoint.

We will construct E′ in stages, one stage for each n ∈ ω. At stage n we consider every edge {j, n} with n as a right endpoint, so stage n will consist of n steps, one for each j < n. Step j of stage n will entail deciding whether to include {j, n} in E′. Let En,j be the set of edges we have decided to keep by stage n, step j. E′ will be the union of all En,j.

Given n ∈ ω and j < n, if {j, n} ∉ E, then {j, n} ∉ En,j since we want E′ ⊆ E. If {j, n} ∈ E, we check whether there is some i < j such that {i, j} ∈ En,j−1 but {i, n} ∉ En,j−1. (Note: {i, j} and {i, n} will have already been considered by this point, so they will each be in E′ iff they are already in En,j−1.) If there is indeed such an i, we don’t include {j, n} in En,j, because then V′ would not be transitive. Only if there is no such i do we have {j, n} ∈ En,j.

It may already be apparent that V′ is transitive, but we will go through the reasoning to be sure. Suppose we have iV′jV′k. As noted during the construction, iV′j ⇐⇒ {i, j} ∈ Ek,j−1 and iV′k ⇐⇒ {i, k} ∈ Ek,j−1. Thus, if {i, k} ∉ E′, then {i, k} ∉ Ek,j−1 and we would not have included {j, k} in Ek,j, and thus it would not be in E′. So {i, k} ∈ E′, i.e. iV′k, so V′ is transitive.
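The stage/step construction of E′ translates directly into code. Below is a sketch with our own names, for a graph on an initial segment of ω with edges as frozensets; the running set of kept edges plays the role of the union of the En,j built so far, since each edge is decided exactly once.

```python
def transitive_subgraph(E, n_max):
    """Build E' ⊆ E as in the proof: at stage n, step j (< n), keep the
    edge {j, n} only if it lies in E and no i < j has a kept edge {i, j}
    without a kept companion edge {i, n}."""
    kept = set()
    for n in range(n_max):
        for j in range(n):
            if frozenset((j, n)) not in E:
                continue
            if any(frozenset((i, j)) in kept and frozenset((i, n)) not in kept
                   for i in range(j)):
                continue  # keeping {j, n} would break transitivity
            kept.add(frozenset((j, n)))
    return kept

E = {frozenset(e) for e in [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]}
Ep = transitive_subgraph(E, 4)
# {1, 3} and {2, 3} are dropped because {0, 3} is missing from E.
assert Ep == {frozenset((0, 1)), frozenset((0, 2)), frozenset((1, 2))}
```

In this small run the construction discards exactly the edges whose presence would make one-way visibility non-transitive, keeping the triangle on {0, 1, 2}.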

Now it remains to show that V′ contains no infinite independent set (IIS). This will be a little trickier. Suppose that V′ does contain an IIS; we will proceed by constructing what might be called a “leftmost” IIS in V′ and showing that this implies V had an IIS to begin with, contradicting the assumption.

Let x0 be the smallest element which appears in any IIS of V′ (that is, x0 is the minimum of the union of all IIS’s). For each n ≥ 1, let xn be the smallest


element greater than xn−1 which appears in an IIS with x0, x1, ···, xn−1 as the first n elements. Let D = {x0, x1, ···}. Given any xi, xj ∈ D with j > i, xj was chosen to be in an IIS with x0, x1, ···, xj−1, including xi, so {xi, xj} ∉ E′. Thus D is an IIS too. D is the “leftmost” IIS we were looking for.

Claim: For each y ∈ D, yV d for only finitely many d ∈ D.

Proof: Suppose we have y ∈ D such that yV d for infinitely many d ∈ D. Since D is independent under V′, all of those edges must have been thrown away. In particular, each edge must have been thrown away due to some i < y. Since we threw away infinitely many edges and there are not infinitely many natural numbers less than y, there must be infinitely many edges corresponding to the same i < y.

Recall that for a, b ∈ ω with a < b, an edge {b, d} with d > b is removed because of a only if aV′b but {a, d} ∉ E′. Thus there are infinitely many d ∈ D such that {i, d} ∉ E′. Let A be the set of such d ∈ D.

We can’t have i < x0, because then {i} ∪ A is an IIS, since {i, d} ∉ E′ for all d ∈ A, and A ⊆ D so the elements of A are independent from each other. But then i < x0 and i is in an IIS, contradicting the choice of x0. So choose the largest p ∈ ω such that xp < i. Let B = {x0, x1, · · · , xp, i} ∪ A. Each xm ∈ B is independent from the others since (B r {i}) ⊆ D. If for m ≤ p we have xmV′i, then since iV′y and V′ is transitive, we’d have xmV′y, which can’t happen since xm, y ∈ D; so {xm, i} ∉ E′ for all m ≤ p. And for all d ∈ A, {i, d} ∉ E′ by definition of A. Thus B is an IIS, and it has x0, x1, · · · , xp as the first p + 1 elements. Since i ∉ D, i ≠ xp+1, and by choice of p, xp+1 ≮ i, so i < xp+1. But then i should have been chosen as xp+1 instead of the actual xp+1, a contradiction. This proves the claim.

We can now easily construct an IIS under V. Start with y0 = x0, and for each n ≥ 1 let yn = xp+1, where xp is the largest element of D such that yn−1V xp. The resulting set is clearly infinite, and is independent because each element is chosen to be higher than anything in D that the previous elements can see. Thus we have an IIS under V, a contradiction.

3 Two “Teams” of Agents

In this chapter we look at the hat problem wherein the agents are split into two teams, where each agent can only see agents on the other team. First we prove the finite case, then apply it to the infinite case. We will generalize these theorems in the second part of this paper.

3.1 Two Finite Teams

Theorem 3.1.1. Suppose we have a finite set K of colors where |K| = k. Let there be two teams of agents: let team 2 have k − 1 agents, and let team 1 have as many agents as there are guessing strategies based on the coloring of team 2 (which is k^(k^(k−1))). Then, in the hat problem where each team can see all of the other and none of their own, there is a predictor ensuring at least one correct guess.

Proof. Let each member of team 1 guess by a different strategy, so that each guessing strategy possible for team 1 is followed by someone. We will show that this leaves very few possibilities for coloring team 2 such that everyone on team 1 guesses wrong.

Let team 1 have an arbitrary coloring f. Now take any k distinct colorings of team 2, call them g1, . . . , gk. Since every possible strategy is represented in team 1, in particular there is some agent a who guesses differently for each gi. Since we have k colorings gi and k colors, a’s guess must agree with f for one of the gi’s. Thus, we have shown that, given a coloring of team 1, there are at most k − 1 colorings of team 2 where everyone on team 1 guesses wrong. Let h1, . . . , hl be those colorings of team 2. Then we have the ith agent on team 2 guess according to hi, and, if there are more members than colorings, we have the extra agents guess the least color. This works because either someone on team 1 is right, or everyone on team 1 is wrong, and thus team 2 is colored by one of the hi’s, and the person on team 2 guessing according to that coloring is correct.
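For k = 2 the numbers are small enough to verify this argument exhaustively: team 2 has one agent, and team 1 has one agent for each of the 2^(2^1) = 4 strategies. The following Python sketch is our own illustration, not from the thesis:

```python
from itertools import product

K = [0, 1]                                   # k = 2 colors
# A team-1 strategy maps team 2's coloring (here a single hat) to a guess.
strategies = list(product(K, repeat=len(K))) # 4 strategies: s[g] = guess when team 2 wears g

def team1_guesses(g2):
    """Team 1 has one agent per strategy; agent i guesses strategies[i][g2]."""
    return [s[g2] for s in strategies]

def team2_guess(f1):
    """Team 2's lone agent: at most k - 1 = 1 team-2 color leaves all of
    team 1 wrong; guess that color (or anything if no such color exists)."""
    bad = [g for g in K if all(gs != c for gs, c in zip(team1_guesses(g), f1))]
    return bad[0] if bad else K[0]

# Exhaustive check over all 2**4 * 2 = 32 joint colorings: someone is always right.
for f1 in product(K, repeat=len(strategies)):     # team 1's hats
    for g2 in K:                                  # team 2's hat
        team1_right = any(gs == c for gs, c in zip(team1_guesses(g2), f1))
        assert team1_right or team2_guess(f1) == g2
```

Running the loop confirms the counting in the proof: for each team-1 coloring, at most one team-2 color defeats all of team 1, and team 2's single agent covers it.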

This proof may seem artificial, in that the size of the teams is specifically constructed to make it work. It could be interesting to explore the smallest number of people we actually need for a given k. Our focus is the infinite case, though, where this is not an issue.

3.2 Two Infinite Teams

In this section we look at an infinite version of this problem, first with finitely many colors, and then with ω2 colors.

Theorem 3.2.1. Suppose that we have a finite set of colors K with |K| = k, and a set of agents split into two copies of ω. Call these copies ω1 and ω2. Suppose that for n, m, nV m iff m > n and they are in different copies; that is, each agent n can only see agents m such that m > n and they are in the other copy. Then there is a predictor ensuring at least one correct guess.

Proof. Let N be the number of agents on team 1 for k colors in the previous theorem. Partition ω1 into teams of size N and partition ω2 into teams of size k − 1. Pair up each team in ω1 with a team on ω2 which the first team can completely see (i.e., the lowest agent of the second team must be strictly higher than the largest agent of the first team). Because each team in ω1 can see the team they are paired with, they can guess as they would in the previous theorem. The teams they are paired with cannot see back, however, so they must guess how the team they are paired with is colored. They do this by seeing which team 1 colorings show up infinitely many times in ω1 and guessing that the team they are paired with is colored by the least of those colorings.
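The pairing of teams can be made concrete. Below is a small Python sketch (the name `pair_teams` is ours); teams are taken as consecutive blocks, and we use a small N for illustration even though the actual N from the finite theorem is enormous. The sketch assumes N ≥ k − 1, which holds in context and guarantees distinct partner blocks.

```python
def pair_teams(N, k, num_pairs):
    """Team i of omega_1 occupies positions [i*N, (i+1)*N); pair it with the
    first block of k - 1 agents of omega_2 lying strictly above team i's
    largest member, so the omega_1 team can completely see its partner."""
    pairs = []
    for i in range(num_pairs):
        t1 = range(i * N, (i + 1) * N)
        start = (t1[-1] // (k - 1)) + 1          # first omega_2 block above max(t1)
        t2 = range(start * (k - 1), (start + 1) * (k - 1))
        pairs.append((list(t1), list(t2)))
    return pairs

# Every omega_1 team fully sees its partner (visibility is "strictly higher"):
for t1, t2 in pair_teams(N=4, k=3, num_pairs=10):
    assert min(t2) > max(t1)
```

Since agent n only sees agents m > n in the other copy, `min(t2) > max(t1)` is exactly the condition that every member of the ω1 team sees all of its partner team.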


Everyone on ω2 can see which team 1 colorings show up infinitely many times, so they are all guessing the same coloring for their paired teams. Since infinitely many teams in ω1 have that coloring, there will be infinitely many pairs of teams guessing exactly as they would in the finite case, guaranteeing at least one correct guess for each of those pairs.

Theorem 3.2.2. If there are ω2 colors, then there is no predictor ensuring at least one correct guess for the graph in the previous theorem.

Proof. The idea here is that since ω2 is relatively large, we can choose a color for each copy of ω which is larger than any of the guesses for the other copy.

Take any predictor P = (P1, P2), where Pi is the predictor producing the guesses of the agents in ωi. Given an ordinal α, let cα ∈ ωK be the function which is constantly α. We will color ω1 with cβ for some β ∈ ω1 and ω2 with cγ for some γ ∈ ω2.

Recall that a cardinal κ is regular if it cannot be written as ∪S for some set S where |S| < κ and ∀A ∈ S, |A| < κ. In particular, this means if we have some set X ⊆ κ with |X| < κ, then ∪{y < x | x ∈ X} ≠ κ, so ∃z ∈ κ such that ∀x ∈ X, z > x.

ω2 is regular, so we can choose some γ ∈ ω2 such that γ > P2(cα)(n) for all α ∈ ω1 and n ∈ ω. Then we can choose β ∈ ω1 such that β > P1(cγ)(n) for all n ∈ ω where P1(cγ)(n) ∈ ω1. (That last condition is there because P1(cγ)(n) need not be in ω1, but if it is not, then it certainly cannot equal β.) Then with the coloring (cβ, cγ), everyone is guessing too small a color.
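The cardinality estimate behind the choice of γ can be written out (suppressing the predictor's superscript): the set of guesses to be avoided has size at most

```latex
\bigl|\{\, P(c_\alpha)(n) \;:\; \alpha < \omega_1,\ n < \omega \,\}\bigr|
\;\le\; \aleph_1 \cdot \aleph_0 \;=\; \aleph_1 \;<\; \aleph_2 ,
```

so, since ω2 is regular, this set is bounded below ω2 and a suitable γ exists. The choice of β is the same argument one cardinal down, with only countably many guesses to dodge.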

The first of these theorems shows that if we remove the condition that V be transitive in Theorem 2.2.1, then (5) =⇒ (2) in that theorem fails. That is, given a finite set K of colors, the set of graphs for which there exists a minimal predictor is not simply the set of graphs which contain infinite complete subgraphs. The second theorem, however, shows that this graph does not satisfy (3) from Theorem 2.2.1. It is an open question whether (2) ⇐⇒ (3) as in Theorem 2.2.1 when V is not transitive. That is, it is unknown whether V contains an infinite complete subgraph if and only if for every set K of colors there is a minimal predictor for V.


4 A Slight Twist: Agents That Do Not Know Where They Are

Until now, we have been implicitly assuming that an agent has a way to identify itself. For example, in the problem involving two copies of ω, we assumed that each agent knew where it was, so that it could work out what team it is on and thus which agents to look at on the other copy of ω. In this short chapter we quickly look at what happens if we remove this assumption in one of the cases we have considered. This idea will come up more in the second part of this paper.

Theorem 4.0.3. Consider the scenario where the set of agents is ω and the set K of colors is arbitrary, where each agent sees all higher-numbered agents, but the agents do not know what number they are. Then there exists a finite-error predictor.

Proof. This paragraph is motivation for how we will prove this. The intuition here is that every agent can see the same infinite “tail,” so they can use something like the µ-predictor to guess the least coloring which is consistent with what they see. We have to be careful, though, as the following example shows: suppose the coloring f = 〈1, 1, 0, 1, 0, 1, 0, . . . 〉 (where 1, 0 alternate infinitely) is the least element of our well-ordering. Then, if we actually have that coloring, every agent colored 0 will see 〈1, 0, 1, 0, 1, 0, . . . 〉 and so will assume the coloring is f. But what place do they assume they are in? If they assume they are in the least position such that f is consistent with what they see, then they all guess 1 and are all wrong. If we have them assume they are in the second such position, then if 〈1, 1, 1, 0, 1, 0, 1, 0, . . . 〉 were the least element, we would again have a problem.

Given a ∈ ω, define an equivalence relation ≅a on the colorings, where f ≅a g when there is some n ∈ ω where (f(a+1), f(a+2), f(a+3), · · · ) = (g(n+1), g(n+2), g(n+3), · · · ); that is, a sees the same thing with f as n sees with g.
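For concreteness, this tail relation can be tested up to a finite horizon when colorings are eventually periodic, which is enough to exhibit the ambiguity from the motivating example. The following Python sketch, including the (prefix, period) encoding and the names `value` and `tails_match`, is our own illustration:

```python
def value(col, i):
    """col = (prefix, period): an eventually periodic coloring of omega."""
    prefix, period = col
    return prefix[i] if i < len(prefix) else period[(i - len(prefix)) % len(period)]

def tails_match(f, a, g, n, horizon=200):
    """Finite-horizon check of the tail relation via the pair (a, n): does
    agent a under f see the same sequence as agent n under g?  For eventually
    periodic colorings a large enough horizon makes this check exact."""
    return all(value(f, a + 1 + i) == value(g, n + 1 + i) for i in range(horizon))

f = ((1,), (1, 0))          # the coloring <1, 1, 0, 1, 0, 1, 0, ...>
assert tails_match(f, 2, f, 4)       # positions 2 and 4 see identical tails
assert not tails_match(f, 1, f, 2)   # position 1 sees a different tail
```

The two assertions show exactly the pitfall above: an agent colored 0 in this coloring cannot tell whether it sits at position 2 or position 4, since both positions see the same tail.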

Let ≺ be a well-ordering of ωK where all of the periodic colorings appear first. (This is the trick to resolve the issue explored in the previous paragraph.) Then, given a coloring f, we have agent a guess by choosing the ≺-least coloring g such that f ≅a g. Then a guesses g(n), where n is the number from the definition of ≅a in the previous paragraph. (If there are multiple possible n, as there will be for periodic functions, then choose the least one.)

Now take any h ∈ ωK. If h is eventually periodic, all of the agents in the periodic tail will guess correctly. Now suppose h is not eventually periodic. Let h′ be the ≺-least element such that h∆h′ is finite, and let m be the largest place where h and h′ disagree. Then every agent b > m will see that h′ is consistent with h, and thus will choose it, and they will also correctly guess their position since the function is not eventually periodic. Thus every b > m will guess correctly.


5 Predicting the Future

In this chapter we mention a result that has generated some discussion outside the world of math.

Theorem 5.0.4. Let R be the set of agents, and suppose that each agent x ∈ R can see the interval (a, x) for some a < x. Instead of having each agent guess only its own hat color, we have it guess the coloring of [x, +∞), and consider it to be correct if there is some b > x such that its guess is correct on [x, b). Then there is a predictor where the set of agents not guessing correctly is a well-ordered subset of R (by the natural ordering on R). (A well-ordered subset of R is countable and nowhere dense. An obvious example is ω. There is also the set {n − 1/m | m, n ∈ ω, m ≥ 1}, which shows that a well-ordered set, while “small,” can have some interesting structure.)

We omit the proof because we are not as interested in the specifics of the math here.

This theorem is usually interpreted with R being the continuum of time, with an agent for each point in time who can see the recent past and tries to guess the future. The theorem then says that very nearly every agent will correctly guess the near future. It is unlikely this has any direct practical applications, of course, since it requires each agent to have an infinite (in fact, uncountable) amount of information in knowing the coloring of the recent past, and the proof requires the axiom of choice, so we could not actually construct a predictor to use.

We mention it partially for its own curiosity, and also because it has caught the eye of philosophers. In particular, Alexander George of Amherst College has written a paper on how this result relates to the “philosophical problem of induction” (which is not specifically mathematical induction, but the act of extrapolating past experiences to future cases) [2]. George argues that, while this theorem is by no means a direct proof of induction, it does give justification for believing that the future is not completely unrelated to the past. It is amusing to think that the proof of such a simple proposition would lie hidden in the arcane depths of mathematical set theory.


Part II

New Research

This half of the paper is devoted to new research. In Chapter 6 we generalize the results from Chapter 3 to any finite number of teams. In Chapter 7 we explore a completely new problem, in which our set of agents is positioned on an infinite 2-dimensional grid. Finally, in Chapter 8 we discuss an open question posed in Hardin and Taylor, asking if the Gabay-O’Connor theorem is equivalent to AC. We will include not only successful results, but also pose open questions, make conjectures, and analyze attempted proofs.

6 A Finite Number of Teams

It turns out that the results from Chapter 3 can be easily extended to any finite number of teams. We present modified proofs of those results here.

6.1 Finitely Many Agents

Theorem 6.1.1. Suppose we have a finite set K of colors where |K| = k. Let there be r teams of agents, where r is finite: let team r have (k − 1)^(r−1) agents, and for 1 ≤ s ≤ r − 1, let team s have as many agents as there are guessing strategies based on the coloring of team s + 1. Then, in the hat problem where team r can only see all of team 1, and for 1 ≤ s ≤ r − 1 team s can only see all of team s + 1, there is a predictor ensuring at least one correct guess.

Proof. For 1 ≤ s ≤ r − 1, let each member of team s guess by a different strategy, so that each possible guessing strategy for that team is followed by someone (recall that team s has exactly enough people for this to be possible). We will show that this leaves very few possibilities for coloring team r such that everyone else guesses wrong.

Take some 1 ≤ s ≤ r − 1, and let team s have some arbitrary coloring f. Now take any k distinct colorings of team s + 1, call them g1, . . . , gk. Since every possible strategy is represented in team s, in particular there is someone who guesses differently for each gi. Since we have k colorings gi and k colors, this agent’s guess must agree with f for one of the gi’s, i.e., it must be correct for at least one of the chosen colorings. Thus, we have shown that, given a coloring of team s, there are at most k − 1 colorings of team s + 1 where everyone on team s guesses wrong.

Now, suppose team 1 has some coloring h. Then there are at most k − 1 colorings of team 2 such that everyone on team 1 guesses wrong. For each of those colorings of team 2, there are at most k − 1 colorings of team 3 such that everyone on team 2 guesses wrong, so there are at most (k − 1)^2 colorings of teams 2 and 3 together where everyone on teams 1 and 2 guesses wrong. Proceeding inductively, we see that, given a coloring h of team 1, there are at most (k − 1)^(r−1) colorings of teams 2 through r where everyone on teams 1 through r − 1 guesses wrong. Since there are (k − 1)^(r−1) people on team r, we let each person on team r guess according to one of those colorings. Then, either someone in teams 1 through r − 1 will guess correctly, or nobody will, in which case teams 2 through r must be colored by one of those colorings, so someone on team r will be guessing according to it, and thus will guess correctly.
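The induction in the proof can be compressed into one line. Writing B_s for the number of colorings of teams 2 through s + 1 under which everyone on teams 1 through s guesses wrong (our notation, not the thesis's):

```latex
B_1 \le k-1, \qquad B_{s+1} \le (k-1)\,B_s
\quad\Longrightarrow\quad B_{r-1} \le (k-1)^{r-1},
```

which is exactly the number of agents placed on team r, one per surviving coloring.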

6.2 Finitely Many Infinite Teams

Theorem 6.2.1. Suppose that we have a finite set of colors K with |K| = k, and a set of agents split into r copies of ω, where r is finite. Call these copies ωs for 1 ≤ s ≤ r. Suppose that ωr can only see agents in ω1, and for 1 ≤ s ≤ r − 1, ωs can only see agents in ωs+1. Suppose also that each agent can see only those agents in the next copy of ω which are a higher number than it. Then there is a predictor ensuring infinitely many correct guesses.

Proof. For 1 ≤ s ≤ r, let Ns be the number of agents on team s in the finite case for r teams. For each such s, partition ωs into teams of size Ns. We will inductively match the teams up so they can almost guess according to the finite case. Start with the lowest team on ω1 and match it with a team on ω2 which everyone on the first team can see. Continue doing this for each subsequent copy of ω, until we have a “string” of teams starting in ω1 and ending in ωr. Now, for each subsequent team in ω1, similarly match it up with a string of teams on the other copies, not using any teams which are already a part of another string. We now have infinitely many strings of teams where each team can see the next, except for the last team of each string, on ωr, which cannot see the team on ω1 that is on the same string. We have each team on each string which can see the next team guess as they would in the finite case, and then for the teams on ωr, we have them look at which team colorings show up infinitely often in ω1 and guess the least of those. Then for the infinitely many teams in ω1 which are indeed colored by that team coloring, the string will be guessing exactly as in the finite case, so there will be at least one correct guess in that string.

7 A Plane of Agents

For most of this chapter, the set of agents will be A = Z × Z, and each agent will be facing a certain direction and be able to see an infinite cone in front of it. (See picture.)

There are many different variables for this scenario: whether the number of directions the agents can be facing is limited; what information the agents have about their position; what information the agents have about the directions of the other agents; the angles of the cones of sight. If a predictor does not exist for a certain formulation, we can also ask whether, if we “fill in” the plane with more agents (i.e., let A be Q × Q or R × R), a predictor then exists. In this paper we prove the existence of a minimal predictor for only some of these variations, make some conjectures, and leave the rest as open questions.


For a given configuration, we could use the language of graphs (where aV b iff b is in a’s cone). So we see that this is just a special case of the generic type of hat problem from Part I, where you have a set of agents and a directed visibility graph on them. However, the problem fundamentally changes if the configuration is not fixed beforehand and the agents are required to guess that as well (which is the case where each agent only knows the directions of the agents in its cone). We will see shortly that, with a fixed, known configuration, there is a relatively easy answer.

7.1 A Preliminary Lemma

This lemma covers a special case that will arise in many of the variations of this problem.

Lemma 7.1.1. Let k < ℵ0 be the number of colors, and let the set of agents be Z, where each agent can either see everything to its right or everything to its left. (Left is negative, right is positive.) Suppose also that the set of agents looking outward is bounded, meaning that for some M, N ≥ 1, every agent m < −M is looking right and every agent n > N is looking left. Suppose that each agent does not know its position, including whether it is facing right or left, and can see another agent’s direction iff it can see the other’s hat color. Then there is a predictor ensuring infinitely many correct guesses.


Proof. Let n be the number of agents on team 1 as in Theorem 3.1.1. Given an agent a, without loss of generality, let a be facing right. First, we have a assume that everyone less than it is also facing right. Let M be the least agent a sees that is facing left, and N the largest agent which is facing right. Divide the agents less than M into groups of n + k − 1, and similarly for the agents greater than N (see picture). Pair up each group on the left with a group on the right, starting with the two innermost groups. In each group, we identify the innermost n agents as team 1 from Theorem 3.1.1, and the other agents as team 2. (By innermost we mean the closest to M and N, not necessarily to the origin; keep in mind that a does not necessarily know where the origin is.)

Figure 3: Paired groups are highlighted with the same color.

Since a can see how far it is from M, it knows which group and team it is in, and also which member of that team it is. Then it plays according to the two finite-team case as in Theorem 3.1.1 with the opposite team in the corresponding group on the right.
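The bookkeeping by which a locates itself can be made explicit. Below is a small Python sketch (the name `group_and_role` is ours); positions are integers, a sits at pos < M facing right, and groups of size n + k − 1 are counted from M inward, per the proof.

```python
def group_and_role(pos, M, n, k):
    """Which group (0 = innermost, i.e. nearest M) an agent at position
    pos < M belongs to, and its role there: the innermost n agents of each
    group form team 1, the remaining k - 1 form team 2."""
    d = (M - 1) - pos                  # 0 for the agent immediately left of M
    group, within = divmod(d, n + k - 1)
    role = ("team 1", within) if within < n else ("team 2", within - n)
    return group, role

# With k = 2 colors and n = 4 team-1 agents, groups have size 5:
assert group_and_role(-1, 0, 4, 2) == (0, ("team 1", 0))
assert group_and_role(-5, 0, 4, 2) == (0, ("team 2", 0))
assert group_and_role(-6, 0, 4, 2) == (1, ("team 1", 0))
```

Note that only the distance to M enters the computation, matching the point that a need not know where the origin is.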

This method works because all but finitely many agents in the middle will be right in assuming that everyone behind them is facing the same direction. Thus we will have infinitely many pairs of teams guessing as in the finite case (Theorem 3.1.1), guaranteeing a correct guess for each pair of teams.

7.2 Full Knowledge of Directions

We start by considering the simple case with four directions and the parameters that allow each agent the most information (full knowledge of every agent’s position and direction).

Theorem 7.2.1. Let the set of agents A = Z × Z, where the agents are limited to four directions (N/S/E/W), and each agent has full knowledge of its position and the directions of every other agent. (That is, while a given agent cannot see the hat colors of the agents outside its cone, it does know what direction they are facing.) The cone angles are allowed to vary for different agents. Let the set of colors K be finite. Then for every configuration (of directions) there is a predictor ensuring infinitely many correct guesses.

Proof. For this proof, we will ignore the agents not on the axes and the agent at the origin, leaving just a cross of agents consisting of four distinct “branches.”


Figure 4: The “cross” of agents.

We will work through every type of configuration of directions and show that there is a predictor in every case. (In the pictures, we will draw a single wedge on a branch to signify that there are infinitely many agents on that branch facing in that direction.)

Figure 5: Case 1.

Case 1: On at least one branch, there are infinitely many agents looking outward (e.g., looking north on the north branch). Then take one of those branches and call the set of people looking outward on that branch D. Let everyone outside D guess the least color. Then we simply let D play according to the case of a complete visibility graph with one-way visibility from Chapter 2, guaranteeing infinitely many correct guesses in D.

Case 2: We do not have infinitely many looking outward on any branch, but if all agents looking sideways (not inward or outward) are ignored, two opposing branches are as described in Lemma 7.1.1. Put another way, on two opposing branches, there are infinitely many agents looking inward. Then we simply let the agents looking inward/outward on those branches guess according to the strategy given in Lemma 7.1.1. Everyone else can guess the least color.

Case 3: We do not have the scenario for case 1 or case 2, but we do have infinitely many looking inward on at least one branch. See the pictures for all possible configurations where this occurs (up to rotation). If one of the branches is looking at an inward branch, then they can break themselves up and play infinitely many instances of the finite team game. If none of them is, then two of those branches must be looking at each other. Then they can play similarly to the infinite team case, except that they create teams inductively: starting with the first team on one branch, pick a team on the other branch they can all see, and in turn pick the next team on the first branch that the second team can all see, and so on.

Figure 6: Case 2.

Figure 7: Case 3. The two wedges on the same branch mean that we have infinitely many looking one way or the other way.

Case 4: There are only finitely many agents looking inward or outward. Then either we have two branches looking at each other, all branches looking clockwise, or all looking counterclockwise. If two are looking at each other, they can play the two infinite team game, and if they are all looking clockwise/counterclockwise, they can play the four infinite team game. (Again, in both of those cases the teams will have to be chosen slightly differently than in the simple case.)

This theorem shows that every configuration effectively reduces to a case we have already seen before. Thus, it is more of a graph theory result, showing that these types of directed graphs always have certain structures lurking in them, than it is a truly new hat problem.


Figure 8: Case 3.

Figure 9: Case 4.

7.3 Limited Knowledge of Directions

We now consider the same problem as the previous section, except that each agent no longer knows the directions of the agents outside its cone, and we require that every agent have the same cone angle. It is an open question whether there is a minimal predictor in this case. We will begin by describing an attempted proof that a minimal predictor exists, and show where we run into problems.

Since every agent still knows where it is, we can focus on the cross again and try to emulate the previous proof. However, the agents’ guessing strategies cannot depend on knowing the full configuration, so we need to describe how each agent will guess based only on what it can see and where it is.

If an agent is facing outward, it can see whether there are infinitely many facing outward on that same branch, and guess with them. If there are not, it can see if there are infinitely many looking inward, and guess with them according to Lemma 7.1.1. If it does not see either of those, then it is hopeless and we have it guess the least color.

If an agent a is facing inward, we have it first check if there are infinitely many looking inward on the opposite branch, and guess with them according to Lemma 7.1.1. If not, then if the agent is odd we have it assume there are infinitely many from the branch to its left looking in its direction and play with them as described in case 3 of the previous theorem, and similarly with the branch to its right if it is even. Since it is looking inward, it can see how many agents below it are also looking inward, and it can see finitely many of the agents on the branches to its left/right; thus it can determine which team it is on and which team to guess with.

If an agent a is facing sideways, it can only see the agents on the branch in the direction it is looking. If there are infinitely many looking inward, we have it guess with them. If not, and there are infinitely many looking in the direction of its branch, we have it guess with them. If neither of those is the case, and there are infinitely many looking in the direction of the branch opposite a’s, then we have a assume the agents are arranged in a “spiral,” that is, all are clockwise or all are counterclockwise.

This is where we run into our problem. Since the sideways-facing agents cannot see any of the other agents on their branch, they have no apparent way of figuring out which player from the finite team case to guess as. They know their position, but they no longer know what direction everyone else on the branch is facing, so the mth agent on a branch does not know if it is the mth or the first agent facing that direction.

We can make a little progress by having each sideways-facing agent assume that a cofinite set of agents on its branch are also facing sideways. They can make this assumption because an agent on another branch will only play with them if there are not infinitely many inward-facing agents to choose first. Now, if n is the number of players on team 1 from the finite team game, then we can split the branch up into groups of 2(n + k − 1). Then there must either be infinitely many groups where at least half of the agents in the group are facing the same direction as a is, or infinitely many where at least half are facing the opposite direction. This gives us a way for agents to place themselves on teams big enough for the finite team game, but it does not help us determine which strategy to use. (Recall that nearly all of the agents guess differently in the finite team game.) We still need a way for an agent to determine where it is on its team.

One hope is to look for some help from infinity, as has often been the trick with these hat problems. The issue is that the infinite tail we’d like to look at is that of the branch these agents are on, but the agents cannot see their own branch. The infinitely many agents looking at the same branch can see the same infinite tail on that branch, but it isn’t helpful because that branch has nothing to do with theirs.

We could try looking at the agents outside the cross. For example, if a is an agent on the east branch of the cross, looking north, then not only can a see the infinite tail of the north branch of the cross, but it can see all of the north tails. Then there is an east and a west “tail of tails” for the north tails! Could this tell us anything about the west tails? Well, no, as the whirlpool example here shows. What happens infinitely far to the west has no relationship to what the north-facing agents can see.

We did not go through and check every step of this proof sketch in detail, but every part of it can be made to work, except for this one hiccup. Unfortunately, it is a rather persistent hiccup, and the prospects for this proof look rather bleak.

A question now is whether we can find a way, given a predictor, to configure and color everyone so that they all guess wrong. This could be challenging because it is difficult to get a grasp on what an arbitrary predictor for this problem would look like. We have nothing more to say about that, except that it could be a good avenue to explore for further research.

7.4 Other Variations/Open Questions

In this section we explore how we can tweak the problem further, posing a number of mostly unexplored questions. They would be good starting points for further research.

First, we point out that none of the steps in the previous two sections required that an agent know what direction they are facing. That is, no agent's guess is changed if the entire plane is rotated by π/2. This suggests that such information might never be necessary and/or helpful.

What if we allowed every agent to know the full configuration, but they did not know where they were? Could we come up with a way to have them guess an origin, so that enough people would be playing according to the game in Theorem 7.2.1? A proof would probably require a description of how an agent decides which group of agents to play with, and a careful analysis of all possible cases to see if that method works. This seems like it would be much more complicated than with the cross.

If we had a successful proof in Section 7.3, could that be generalized to the case where the agents do not know their position? Is this essentially the same question as in the previous paragraph?

Is there still a successful strategy if we change the number of allowed directions? The answer is obviously yes with 2 directions (Lemma 7.1.1); what about 3 directions? 8 directions?

What if we allow countably many directions? For example, only angles of the form (n/2^(m−1))π for n, m ∈ ω, or only angles of the form qπ for q ∈ Q. What if we allow any angle (so, uncountably many directions)? Are all of those cases equivalent? Is any case equivalent to these if the set of angles is dense in [0, 2π]?

What if we make the agents dense in R²? This could make all of these problems much easier, since there are not just infinite "tails" on the infinite edges of the plane, but all over the place. Does it help if we make the set of agents not only dense but uncountably dense (by which we mean that every open subset of R² has uncountably many agents in it)? There is an added variable in this type of problem: if the agents do not know where they are, do they at least agree on a "grid"? That is, they know which agents have at least one integer coordinate, but they do not know what the coordinates are. Without any information like that, the agents would be very lost. However, with so many agents, that may not be a problem. It is also worth seeing whether a variable cone angle would be an issue here.

It might also be worth stepping back and reflecting on the significance of these results. Are these all just complicated versions of problems that were discussed in Part I, or do some of them have fundamentally new characteristics? Limiting an agent's knowledge of the visibility graph might be a new concept, but is there any new mathematics in it? If the answer is no, then there might at least be an interesting theorem which shows that.

8 The Gabay-O’Connor Theorem and AC

One of the open questions posed in Hardin and Taylor is whether the Gabay-O'Connor theorem is equivalent to the axiom of choice [1, p. 30]. In this section we give an attempted proof that Gabay-O'Connor plus the assertion that a choice function exists for every set of finite sets implies the axiom of choice. Such a theorem, if it were true, would not quite answer the open question posed in Hardin and Taylor, but analyzing what goes wrong in this proof may be helpful for understanding the subtleties of the question.

We want to prove AC using Gabay-O'Connor and AC for sets of finite sets. We will try to do this by using Gabay-O'Connor to choose a finite subset of every set in our set of sets. Given a set of nonempty sets S, for each X ∈ S, define X′ = {(x, X) | x ∈ X}, and let S′ be the set of all X′. If we can find a choice function for S′, then we can clearly modify it to work as a choice function for S. Thus, we can assume without loss of generality that S is disjoint to begin with.
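The disjointification step is easy to carry out concretely. A minimal Python sketch as a sanity check; the use of frozensets to stand in for the members of S is an assumption of the example.

```python
# Disjointify S: replace each X in S by X' = {(x, X) : x in X}.
# The copies X' are pairwise disjoint even when the original sets
# overlap, and a choice function on S' projects to one on S.

def disjointify(S):
    """Given a family S of sets, return {X' : X in S} with X' = {(x, X)}."""
    return {frozenset((x, X) for x in X) for X in S}

S = {frozenset({1, 2}), frozenset({2, 3})}   # overlapping sets
S_prime = disjointify(S)

# The copies are pairwise disjoint (second coordinates differ):
copies = list(S_prime)
assert all(c1.isdisjoint(c2) for i, c1 in enumerate(copies)
           for c2 in copies[i + 1:])

# A choice function on S' (here: pick any element) yields a choice
# function on S by projecting each chosen pair (x, X) back to x in X.
for X_prime in S_prime:
    x, X = next(iter(X_prime))
    assert x in X
```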

Case 1: X is finite for all but finitely many X ∈ S. Let T be the set of infinite sets in S. Then T is finite, so there is a choice function on T without AC, and there is a choice function on S ∖ T by choice for sets of finite sets, so the union of those two functions is a choice function on S.

Case 2: X is infinite for infinitely many X ∈ S. Given A ∈ S and a ∈ A, define h_a : ∪S → S ∪ 2 by

    h_a(x) = 1   if x = a,
             0   if x ≠ a and x ∈ A,
             A   if x ∉ A.

This is a sort of characteristic function for a. Notice that, given a, b ∈ ∪S, h_a ∆ h_b is finite iff a, b ∈ A for some A ∈ S.
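On a small finite universe, the shape of h_a and of these difference sets can be checked directly. A toy Python sketch, assuming S is disjoint as arranged above; note that the "only if" direction of the finiteness claim is only meaningful when ∪S is infinite, so the snippet merely exhibits the two shapes of h_a ∆ h_b.

```python
# h_a as a dictionary on a finite toy universe ∪S, taking the values
# 1, 0, or the set A containing a.  delta computes the set of points
# where two such functions disagree.

def h(a, S):
    A = next(X for X in S if a in X)  # the unique set containing a (S disjoint)
    union = set().union(*S)
    return {x: ((1 if x == a else 0) if x in A else A) for x in union}

def delta(f, g):
    """The set where f and g differ (they share the same domain)."""
    return {x for x in f if f[x] != g[x]}

A = frozenset({1, 2})
B = frozenset({3, 4, 5})
S = {A, B}

# Same set: h_1 and h_2 differ exactly at {1, 2}.
assert delta(h(1, S), h(2, S)) == {1, 2}

# Different sets: h_1 and h_3 disagree at every point of this universe,
# since the values 0, 1, A, B all clash.
print(sorted(delta(h(1, S), h(3, S))))  # → [1, 2, 3, 4, 5]
```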

Recall that Gabay-O'Connor gives us a function Φ on ^(∪S)(S ∪ 2), the set of all functions from ∪S to S ∪ 2, such that f ∆ Φ(f) is finite for every f, and Φ(f) = Φ(g) whenever f ∆ g is finite. This means that, given a ∈ ∪S with a ∈ A for some A ∈ S, Φ(h_a)^(−1)(1) ∩ A is a finite subset of A. In fact, given any A ∈ S and a, b ∈ A, we have Φ(h_a) = Φ(h_b), so there is a well-defined function Ψ : S → P(∪S) with Ψ(A) = Φ(h_a)^(−1)(1) ∩ A, where a is any element of A. Then Ψ is a function choosing a finite subset from every set in S.

There is a big problem: the finite subset Ψ gives us might be ∅. There is no obvious way around this. This has literally been much ado for nothing. We could have proved the same theorem, that there is a function which chooses a finite subset of every set in S, simply by defining the function F : S → {∅}. F is basically as useful to us as Ψ is.
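The failure can even be exhibited concretely. On a finite universe every two functions differ finitely, so the constant map sending every function to the all-zero function is a legitimate Φ, and the resulting Ψ chooses ∅ from every set. A toy Python sketch (h_a is inlined so the snippet is self-contained):

```python
# On a finite universe, ALL functions differ finitely, so a legitimate
# Gabay-O'Connor Φ may collapse every function to a single
# representative -- say the all-zero function.  Then
# Ψ(A) = Φ(h_a)^(-1)(1) ∩ A = ∅ for every A.

def h(a, S):
    A = next(X for X in S if a in X)  # the unique set containing a
    union = set().union(*S)
    return {x: ((1 if x == a else 0) if x in A else A) for x in union}

def phi(f):
    """A valid Φ on a finite domain: send every (finite-∆) class
    to the all-zero function."""
    return {x: 0 for x in f}

def psi(A, S):
    a = next(iter(A))                   # any a in A
    g = phi(h(a, S))
    return {x for x in A if g[x] == 1}  # Φ(h_a)^(-1)(1) ∩ A

S = {frozenset({1, 2}), frozenset({3, 4, 5})}
print([psi(A, S) for A in S])  # → [set(), set()]
```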

This attempt shows the frustrating apparent limitation of the Gabay-O'Connor theorem as a source of choice: all it knows how to do is collapse one kind of equivalence class of functions. A more direct attempt to positively answer the open question might be to map ∪S into a set of functions (where members of the same set are sent to the same equivalence class), apply Gabay-O'Connor, and then map back to ∪S. The problem is that the cardinalities of the equivalence classes (where two functions are equivalent if their ∆ is finite) are all equal for any given function space ^X Y, while the cardinalities of the sets in S can vary wildly, making it impossible to find, for arbitrary S, a bijection from ∪S to some ^X Y under which two elements lie in the same set iff the ∆ of their images is finite.

There is even an issue with the proposition that every set of sets of equal cardinality has a choice function: if we try to prove it by mapping each set to an equivalence class of some ^X Y, we need to choose a mapping for each set! And if we had a way to do that, we would probably already have a choice function to begin with. In fact, we are probably unable to use just any artificial set of functions ^X Y, since any useful mapping to or from it would likely require choice. ^X Y would probably need to be constructed from S somehow, but, again, that construction apparently requires the axiom of choice to be useful.

And, of course, for each A ∈ S, one can use Gabay-O'Connor to show there is a Φ_A and subsequently a successful choice function Ψ_A : A → A on A. The problem is that we would need to choose infinitely many of these Ψ_A's in order to construct a choice function on S. We also do not need the Gabay-O'Connor theorem to choose an element from a single A in the first place; we just use the fact that A is nonempty. The difficulty in the axiom of choice is not in doing that once, but in doing it infinitely many times, and in that respect the Gabay-O'Connor theorem, used in this way, does not help us at all.

Hardin and Taylor discuss how Gabay-O'Connor can be used to contradict the property of Baire, which is independent of the axiom of choice [1, p. 26]. That shows that Gabay-O'Connor is at least a weak form of choice. But from what we have just seen, it seems completely unhelpful for proving AC.

References

[1] Christopher S. Hardin and Alan D. Taylor. The Mathematics of Coordinated Inference: A Study of Generalized Hat Problems. Developments in Mathematics, Vol. 33. Springer International Publishing, Switzerland, 2013.

[2] Alexander George. A proof of induction? Philosophers' Imprint, 7(2):1–5, March 2007.
