
University of Connecticut
DigitalCommons@UConn

Master's Theses, University of Connecticut Graduate School

5-5-2012

Computable Linear Orders and Turing Reductions

Whitney P. Turner
[email protected], [email protected]

This work is brought to you for free and open access by the University of Connecticut Graduate School at DigitalCommons@UConn. It has been accepted for inclusion in Master's Theses by an authorized administrator of DigitalCommons@UConn. For more information, please contact [email protected].

Recommended Citation
Turner, Whitney P., "Computable Linear Orders and Turing Reductions" (2012). Master's Theses. 246.
http://digitalcommons.uconn.edu/gs_theses/246

Computable Linear Orders and Turing Reductions

Whitney Patton Turner

B.A. Mathematics, Albion College, Albion, Michigan, 2009

A Thesis
Submitted in Partial Fulfillment of the
Requirements for the Degree of
Master of Science

at the
University of Connecticut

2012

APPROVAL PAGE

Master of Science in Mathematics Thesis

Computable Linear Orders and Turing Reductions

Presented by
Whitney Patton Turner, B.A.

Major Advisor
David Reed Solomon

Associate Advisor
Henry Towsner

Associate Advisor
Johanna N.Y. Franklin

University of Connecticut
2012


ACKNOWLEDGEMENTS

I would like to thank my advisor Reed Solomon. I would not have been able to

complete this without his direction and support. For the past two years, Reed

has been an outstanding mentor and teacher. He has always had time to

help and work with me through the many revisions and corrections. I could not

have asked for a more exceptional advisor. I would also like to thank

Johanna Franklin and Henry Towsner for being on my advisory committee.

I would like to thank my past and current office-mates for being my friends

and making life entertaining in Storrs. Thank you to the Math Department’s

administrative assistants Tammy Prentice and Monique Roy for always

knowing the answer to any question or problem that came up, and to David

Gross and Reed Solomon for their advice and helping me with the many

student or administrative related issues.

I would like to thank my parents, Rita and Lee Patton, for everything they have

done to help me throughout my life. I love you very much Mom and Dad! I

would also like to thank my siblings: Courtney, Bryan, Bree, Chelsea, Dack,

and Bree, and my in-laws, the Turners. I love you all! Finally, I would like

to thank my wonderful husband, Dustin Turner for being an all around

awesome guy. Without you, I would never have even gone into math!


TABLE OF CONTENTS

Introduction . . . 1
0.1. Computability Theory . . . 1
0.2. Linear Orders . . . 3

Ch. 1: Completeness . . . 10
1.1. Dense . . . 10
1.2. Discrete . . . 13
1.3. Finite Block . . . 17

Ch. 2: Discrete . . . 21
2.1. Construction I . . . 21
2.2. Construction II . . . 32

Ch. 3: Dense . . . 40
3.1. Construction I . . . 40
3.2. Construction II . . . 51

Bibliography . . . 24


Introduction

Introduction

0.1 Computability Theory

The main focus of this thesis is to measure the complexity of a variety of rela-

tions on computable linear orders. To do this measurement, we will use two

reducibilities. A set A ⊂ N is Turing reducible to a set B if A = φ_e^B for some e, meaning that

there is an oracle machine that computes the characteristic function of A using

oracle B. We denote this as A ≤T B. During our proofs, this basically means

that we can ask our set B questions, specifically whether or not a number is in

B. A is many-one reducible or m-reducible to B if there is a computable function f

such that x ∈ A if and only if f (x) ∈ B. We denote this as A ≤m B. Recall that

if A ≤m B, then A ≤T B, but not conversely. We will give a concrete example of

when the converse fails in this thesis.
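As a quick illustration of the easy direction, an m-reduction immediately yields a Turing reduction: to decide whether x ∈ A, compute f(x) and ask the oracle a single membership question about B. The sketch below uses stand-in sets and a stand-in reduction function (A the even numbers, B the multiples of 4, f(x) = 2x); none of these come from the thesis.

```python
# Sketch: why A <=_m B implies A <=_T B. The sets A, B and the reduction
# f here are illustrative stand-ins, not objects from the thesis.

def f(x):
    # A computable reduction function: x is in A iff f(x) is in B.
    return 2 * x

def oracle_B(y):
    # Oracle for B = multiples of 4; one call = one membership question.
    return y % 4 == 0

def in_A(x):
    # Turing reduction: a single query to the oracle at the point f(x).
    return oracle_B(f(x))
```

With these stand-ins, A works out to the even numbers: in_A(6) is True and in_A(7) is False. The converse direction can fail because a Turing reduction may ask many questions and negate the answers it receives, which an m-reduction cannot do.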

Throughout this thesis, we will use standard notation from computability theory as found in Robert I. Soare's Recursively Enumerable Sets and Degrees or

Hartley Rogers, Jr.’s Theory of Recursive Functions and Effective Computability.

We use φ_0, φ_1, φ_2, ... to denote the standard list of partial computable functions and W_0, W_1, W_2, ... to denote their domains. Recall that the sets W_e are called computably enumerable or c.e. sets. We utilize several familiar index sets: K, Fin, and Inf, formally defined as:

(i) K = {x | φ_x(x) converges}

(ii) Fin = {x | W_x is finite}

(iii) Inf = {x | W_x is infinite}

These sets live in a hierarchy which is defined by quantifier complexity. We define the Σ^0_n and Π^0_n sets in the following way.

Definition: Let A be a set.

(i) A is in Σ^0_0 or Π^0_0 if and only if A is computable.

(ii) For n ≥ 1, A is in Σ^0_n if there is a computable R(x, y_1, y_2, ..., y_n) such that x ∈ A if and only if (∃y_1)(∀y_2)(∃y_3)...(Qy_n) R(x, y_1, y_2, ..., y_n).

(iii) For n ≥ 1, A is in Π^0_n if there is a computable R(x, y_1, y_2, ..., y_n) such that x ∈ A if and only if (∀y_1)(∃y_2)(∀y_3)...(Qy_n) R(x, y_1, y_2, ..., y_n).

By fully writing out their definitions, we have that K ∈ Σ^0_1, Fin ∈ Σ^0_2, and Inf ∈ Π^0_2. A set A ∈ Σ^0_n (respectively Π^0_n) is Σ^0_n-complete (respectively Π^0_n-complete) if for any set B ∈ Σ^0_n (respectively Π^0_n), we have B ≤m A. We will use the facts that K is Σ^0_1-complete, Fin is Σ^0_2-complete, and Inf is Π^0_2-complete.
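As an illustration of these quantifier counts (not a formula from the thesis), the membership conditions for Fin and Inf can be written out using the standard finite approximations W_{e,s}; the matrix inside each bracket is computable:

```latex
e \in \mathrm{Fin} \iff (\exists s)(\forall t)\,\bigl[\, t > s \rightarrow W_{e,t} = W_{e,s} \,\bigr]
\qquad
e \in \mathrm{Inf} \iff (\forall s)(\exists t)\,\bigl[\, t > s \wedge W_{e,t} \neq W_{e,s} \,\bigr]
```

This exhibits Fin as Σ^0_2 and Inf as Π^0_2; completeness at those levels is the nontrivial direction and is the content of the facts quoted above.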


0.2 Linear Orders

A linear order is a pair (D,≤D) where D is a set and ≤D is a binary relation on D

which is reflexive, transitive, anti-symmetric, and total. For this thesis, we will work

with countable (and often computable) linear orders and will typically assume

that D = N. Our standard notation for a linear order will be L = (N,≤L).

We say that L = (N,≤L) is computable if and only if the binary relation ≤L is

computable.
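Concretely, a computable linear order is nothing more than a decidable comparison relation on N. As a small illustrative example (not taken from the thesis), here is a computable copy of the order type of the integers Z, coded on N:

```python
# A computable linear order L = (N, <=_L) of order type Z (the integers):
# even codes 2k stand for the integer k, odd codes 2k+1 stand for -(k+1).

def decode(n):
    # The coding function from N onto Z.
    return n // 2 if n % 2 == 0 else -(n // 2 + 1)

def leq_L(n, m):
    # <=_L is computable because decode and integer comparison are.
    return decode(n) <= decode(m)
```

For instance, leq_L(1, 0) holds since code 1 stands for −1, while leq_L(0, 1) fails. In this particular order every element lies in one infinite block and no element is a limit point, which is exactly the kind of structural property whose complexity the thesis measures.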

We will use several classical notions from the theory of linear orders. Specifi-

cally, we need the following definitions.

• A linear order L is discrete if every element a ∈ L has an immediate

successor and an immediate predecessor unless a is the least or greatest

element. If a is the least element of L, we require a to have an immediate

successor and if a is the greatest element of L, then we require a to have

an immediate predecessor. Note that every finite linear order is discrete.

An interval (a, b) ⊂ L is discrete if the ordering given by ≤L restricted to

(a, b) is discrete.

• A linear order L is dense if L is isomorphic to the usual order on Q.

(Recall that we assume our orderings are countable.) As above, we say

an interval (a, b) in L is dense if the order given by ≤L restricted to (a, b)

is dense.


• Let L be a linear order and let a ∈ L. We say b is in the same block as a if

the interval [a, b] (if a ≤L b) or [b, a] (if b ≤L a) is finite. The block of a is

the set of elements b such that b is in the same block as a.

Let L be a computable linear order. The following are ordering relations we

will examine:

(i) FinBlL = {c | c is in a finite block in L}

(ii) DenL = {〈b, c〉 | (b, c) is dense in L}

(iii) DisL = {〈b, c〉 | (b, c) is discrete in L}

We can calculate the complexity of each of these relations as follows. DenL ∈ Π^0_2

because 〈b, c〉 ∈ DenL if and only if

b <L c ∧ ∀x, y ∈ (b, c) [x <L y→ ∃ z (x <L z <L y)]

∧ ∀ x ∈ (b, c)[∃ z(b <L z <L x) ∧∃ u(x <L u <L c)].

To analyze the complexity of FinBlL and DisL, we first consider the imme-

diate predecessor relation PredL(x, y) and the immediate successor relation

SuccL(x, y). PredL(x, y) holds if and only if

x <L y ∧ ¬∃ z(x <L z <L y)

and hence is Π^0_1. SuccL(x, y) holds if and only if

y <L x ∧ ¬∃ z(y <L z <L x)


and hence is also Π^0_1. We can now show that DisL ∈ Π^0_3 because 〈b, c〉 ∈ DisL if and only if

b <L c ∧ ∀x ∈ (b, c) ∃ u, v(PredL(u, x) ∧ SuccL(v, x))

Finally, to analyze FinBlL, we also need the complexity of the limit from be-

low relation, LimBelowL(x) and the limit from above relation, LimAboveL(x).

LimBelowL(x) holds if and only if

∀ y (y <L x→ ∃ z(y <L z <L x))

and hence is Π^0_2. LimAboveL(x) holds if and only if

∀ y (x <L y→ ∃ z(x <L z <L y))

and hence is also Π^0_2. Note the following subtlety of these definitions. If L

has a least element a, then LimBelowL(a) holds since we do not require that

there is a y <L x in the definition of LimBelowL(x). Similarly, if L has a greatest

element a, then LimAboveL(a) holds. This aspect of these definitions will make

the definition of FinBlL(x) more compact.

Now, we have that FinBlL ∈ Σ^0_3 because FinBlL(x) holds if and only if

∃ y (y = 〈x_1, ..., x_n〉 ∧ ∃ i ≤ n (x_i = x) ∧ ∀ i < n (SuccL(x_{i+1}, x_i)) ∧ LimBelowL(x_1) ∧ LimAboveL(x_n))

In Chapter 1, we will show that each of these relations is complete for some

computable L.


Theorem: There are computable linear orders L1, L2, and L3 such that DenL1 is Π^0_2-complete, DisL2 is Π^0_3-complete, and FinBlL3 is Σ^0_3-complete.

In Chapters 2 and 3, we will consider the complexity of DenL, DisL, and BlockL

when we fix the element b to be the least element in L. Specifically, if L is a

computable linear order and b ∈ L, then we define

(i) DenL(b) = {c | (b, c) is dense in L}

(ii) DisL(b) = {c | (b, c) is discrete in L}

(iii) BlockL(b) = {c | c is in the same block as b }

We show that we can code both 0′ and 0′′ into these relations by a Turing re-

duction when b is the least element of L.

Theorem: There are computable linear orders L1, L2, and L3 with b denot-

ing the least element in each order such that 0′′ ≤T DenL1(b), 0′′ ≤T DisL2(b),

and 0′′ ≤T BlockL3(b).

Before starting the main results of this thesis, we present a theorem originally

due to Carl Jockusch which has not appeared in print. This theorem shows

that for the BlockL(b) relation, we cannot improve our Turing reduction in the


previous theorem to an m-reduction.

Theorem: If L is a computable linear order and B is a block in L with least

element b ∈ B, then K ≰m B.

Proof : Let L be a computable linear order and let B be a block in L with least

element b ∈ B. We want to show that K ≰m B.

By way of contradiction, assume that K ≤m B and fix a computable function f

such that n ∈ K if and only if f (n) ∈ B. Without loss of generality, we assume

that for all n, b <L f (n).

We define two computable functions g(x, y) and h(x, y) using the s-m-n theorem:

φ_{g(x,y)}(u) = { ↑  if f(x) <L f(y)
                { 0  if f(x) ≥L f(y)

φ_{h(x,y)}(u) = { 0  if f(x) <L f(y)
                { ↑  if f(x) ≥L f(y)

We want to apply the following theorem which is an adjustment to the regular


recursion theorem found in Hartley Rogers, Jr.’s Theory of Recursive Functions

and Effective Computability.

Double Recursion Theorem: For any recursive functions g and h, there exist m

and n such that φm = φg(m,n) and φn = φh(m,n).

By this theorem, fix some n and m such that φn = φg(n,m) and φm = φh(n,m). Using

our function f , the relationship between point placement in our linear order

can be broken down into three possibilities. We are using the placement of

points in L and in particular, whether the points are included in the block B, to

create a contradiction.

(i) f (n) = f (m)

If f (n) = f (m), then we know that

φn(n) = φg(n,m)(n) ↓= 0⇒ φn(n) ↓= 0

So, n ∈ K which implies that f (n) ∈ B. On the other hand, we also know

that

φm(m) = φh(n,m)(m) ↑ ⇒ φm(m) ↑.

So, m ∉ K which implies that f(m) ∉ B. Thus, we have a contradiction

since f (n) = f (m), but f (n) is in the block and f (m) is not in the block.


(ii) f (n) <L f (m)

If f (n) <L f (m), then we know that

φn(n) = φg(n,m)(n) ↑ ⇒ φn(n) ↑

So, n ∉ K which implies that f(n) ∉ B. On the other hand, we also know

that

φm(m) = φh(n,m)(m) ↓= 0⇒ φm(m) = 0.

So, m ∈ K which implies that f (m) ∈ B. Thus, we have a contradiction

since b <L f (n) <L f (m), but f (n) is not in the block and f (m) is in the

block.

(iii) f(m) <L f(n)

If f(m) <L f(n), then we know that

φn(n) = φg(n,m)(n) ↓= 0⇒ φn(n) = 0

So, n ∈ K which implies that f (n) ∈ B. On the other hand, we also know

that

φm(m) = φh(n,m)(m) ↑ ⇒ φm(m) ↑.

So, m ∉ K which implies that f(m) ∉ B. Thus, we have a contradiction

since b <L f (m) <L f (n), but f (n) is in the block and f (m) is not in the

block.

Thus, the relationship between the computable function f and our linear order yields a contradiction in each case. Therefore, K ≰m B.


Chapter 1

Completeness

In this chapter, we prove the completeness results stated in the Introduction.

Recall that if L is a computable linear order, then DenL ∈ Π^0_2, DisL ∈ Π^0_3, and FinBlL ∈ Σ^0_3. We show that in each case, we can construct a computable L for

which the relation is complete at the given level of the arithmetic hierarchy.

1.1 Dense

Theorem: There is a computable linear order L for which

DenL = {〈b, c〉 | (b, c) is dense in L} is Π^0_2-complete.

Proof: Since Inf = {e | W_e is infinite} is Π^0_2-complete, it suffices to build a computable linear order L such that Inf ≤m DenL. To accomplish this reduction,

we use pairs of witness points bn and cn, and we make the interval (bn, cn) dense

if and only if Wn is infinite. The requirements are the following:

Rn: n ∈ Inf if and only if (bn, cn) is dense in L.

Construction:

Stage 0: Set down the set of even numbers in their usual order and label the


numbers in pairs as bn and cn. Each pair bi and ci is associated to the domain

Wi.

b0 <L c0 <L b1 <L c1 <L ... <L bn <L cn <L ...

Stage s+1: At stage s + 1, we examine each Wn, for n ≤ s, to see if it receives a

new element at stage s + 1. Since the requirements for each Wn are applied to

each separate interval, we can treat each such requirement individually. Note

that this means that there is no injury in this construction.

Case I: Assume a new element enters Wn. We need to make progress towards

making the interval (bn, cn) dense. To accomplish this, suppose the interval

(bn, cn) currently contains m many points and appears as

bn <L zm <L zm−1 <L ... <L z1 <L cn

Let y_n^1, ..., y_n^{m+1} be the m + 1 least unused odd numbers. Place these odd numbers into the interval (bn, cn) between each current pair of successor points as follows:

bn <L y_n^{m+1} <L z_m <L y_n^m <L z_{m−1} <L ... <L y_n^2 <L z_1 <L y_n^1 <L cn

In later constructions, we will describe this process of adding a new point be-

tween each pair of current successors in [bn, cn] as partially densifying the interval

(bn, cn).

Case II: Assume no new elements are enumerated into Wn. We leave the

interval (bn, cn) as it is and do not add points towards densifying the interval.
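The stage action for Rn can be sketched as follows. This is an illustrative model, not the literal construction: the odd-number bookkeeping is abstracted into a `fresh` iterator, and we track only the interior points of (bn, cn) in increasing <L-order.

```python
# One application of "partially densifying" the interval (b_n, c_n):
# a fresh point is inserted below each current interior point, plus one
# more just below c_n, so m interior points become 2m + 1.

def partially_densify(interior, fresh):
    # interior: current points of (b_n, c_n) listed in increasing <_L order.
    # fresh: iterator handing out unused labels (odd numbers in the thesis).
    result = []
    for z in interior:
        result.append(next(fresh))  # new point between z's old neighbour and z
        result.append(z)
    result.append(next(fresh))      # new point between the old top and c_n
    return result
```

Applying this once per element enumerated into Wn gives interior sizes 1, 3, 7, 15, ...; in the limit of infinitely many applications, every pair of points in (bn, cn) has a point strictly between them, so the interval is dense.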


Verification:

(i) n ∈ Inf if and only if (bn, cn) is dense

We know that n ∈ Inf if and only if Wn is infinite. This is

true if and only if we enumerate infinitely many points into Wn. When

a point is enumerated into Wn, we place new points into (bn, cn). This process,

when done infinitely often, creates a dense interval. Thus, if there are

infinitely many points in Wn, then (bn, cn) is dense.

We know that n ∉ Inf if and only if Wn is finite. This is true if and only if

only a finite number of points are enumerated in Wn which implies that

only finitely many points were placed into (bn, cn). Thus, if n ∉ Inf, then

(bn, cn) is finite and specifically, is not dense.

(ii) Effective Construction

The construction is effective because there are only a finite number of

things done at each stage for each Wn. The points we place into each

[bn, cn] are all odd and thus, there will never be a lack of available numbers

as there are only finitely many odds used at each stage. Also, since there

are indices n for which Wn is infinite, the construction will use all of the

odd numbers. Hence the domain of L is N, which is computable. This

implies that to compare i and j in the order, we need only run the construction

until both elements appear and compare where they land in L.


1.2 Discrete

Theorem: There is a computable linear order L such that

DisL = {〈b, c〉 | (b, c) is discrete in L} is Π^0_3-complete.

Proof: Let X be a Π^0_3-complete set. We need to build a computable linear order L such that X ≤m DisL. Since Fin = {e | W_e is finite} is Σ^0_2-complete, we can fix a computable function f(x, n) such that

n ∈ X if and only if ∀x (W_{f(x,n)} is finite).

To accomplish this, we will use pairs of witness points bn and cn to meet the

following requirements.

Rn: ∀x (W_{f(x,n)} is finite) if and only if (bn, cn) is discrete in L.

Construction:

Stage 0: Effectively partition the even numbers into infinitely many infinite sets

Y, P0, P1, P2, ... We will use these sets of numbers to put down a basic structure

for our computable order L that will be filled in at later stages.

To define this basic structure, first place the numbers in Y in L in their usual

order and label them as follows:

b0 <L c0 <L b1 <L c1 <L b2 <L c2 <L ...

For each n, we will place the numbers in Pn into the interval (bn, cn) in their

usual order so our ordering L at stage 0 looks like:


b0 <L points in P0 <L c0 <L b1 <L points in P1 <L c1 <L ...

Label the points in each Pn in groups of three as follows:

u_n^0 <L d_n^0 <L v_n^0 <L u_n^1 <L d_n^1 <L v_n^1 <L ...

Therefore, our order L at stage 0 looks like:

b_0 <L u_0^0 <L d_0^0 <L v_0^0 <L u_0^1 <L d_0^1 <L v_0^1 <L ... <L c_0 <L b_1 <L u_1^0 <L d_1^0 <L v_1^0 <L ... <L c_1 <L b_2 <L ...

Stage s+1: At stage s + 1, we let each requirement Rn for n ≤ s act in turn. Since

Rn will work only in the interval (bn, cn), we can treat each requirement indi-

vidually and there is no injury in this construction.

Action for Rn: For each x ≤ s, we check if W_{f(x,n)} has received a new element.

Case I: Assume W_{f(x,n)} has received a new element. We need to make progress

towards making the interval (bn, cn) not discrete. Let y and z be the least unused

odd numbers. We add y and z to L as the immediate predecessor and successor

of d_n^x as follows:

bn <L ... <L u_n^x <L finite <L y <L d_n^x <L z <L finite <L v_n^x <L ... <L cn

Notice that if we add a new predecessor and successor for d_n^x infinitely often, then d_n^x becomes a limit point and (bn, cn) is not discrete. However, if we add only finitely many such points, the interval (u_n^x, v_n^x) will be finite.

Case II: Assume W_{f(x,n)} has not received a new element. We leave the interval

(bn, cn) looking discrete, so we do not add any new points and we move on to

x + 1.
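The action for a single x can be modeled in the same list-based spirit: each time W_{f(x,n)} grows, the point d (standing for d_n^x) receives a fresh immediate predecessor and a fresh immediate successor. Again this is an illustrative sketch, not the literal construction.

```python
# One action for R_n at parameter x: surround d with two fresh points, so
# the previous neighbours of d are no longer adjacent to it. Done infinitely
# often, d ends with no immediate predecessor or successor, i.e. d becomes
# a limit point and the surrounding interval is not discrete.

def act_on_interval(interval, d, fresh):
    # interval: the points of (u_n^x, v_n^x) in increasing <_L order.
    i = interval.index(d)
    y, z = next(fresh), next(fresh)  # two least unused labels (odd numbers)
    return interval[:i] + [y, d, z] + interval[i + 1:]
```

After two actions on ["d"], the interval reads [1, 5, "d", 7, 3]: each round pushes the earlier neighbours of d one step further away, which is exactly how d loses any would-be immediate predecessor or successor in the limit.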

Verification:

(i) n ∈ X if and only if (bn, cn) is discrete

First, suppose n ∉ X. In this case, we can fix an x such that W_{f(x,n)} is infinite. Since W_{f(x,n)} receives a new element at infinitely many stages, we add a new successor and predecessor to d_n^x infinitely often. Therefore, d_n^x is a limit point and has neither an immediate predecessor nor an immediate successor in L. Since d_n^x ∈ (bn, cn), the interval (bn, cn) is not discrete in L.

On the other hand, suppose n ∈ X. In this case, each set W_{f(x,n)} is finite and hence each interval (u_n^x, v_n^x) is finite. Thus the interval (bn, cn) in L looks like

u_n^0 <L finite <L v_n^0 <L u_n^1 <L finite <L v_n^1 <L ...

Since (bn, cn) has order type N, it is discrete.


(ii) Effective Construction

The construction is effective because there are only a finite number of

things done at each stage for each Wn. The points we place into each (bn, cn)

are all odd and thus, there will never be a lack of available numbers as

there are only finitely many odds used at each stage. Also, since there are

numbers n ∉ X, and hence sets W_{f(x,n)} which are infinite, the construction

will use all of the odd numbers. Hence the domain of L is N, which is

computable. This implies that to compare i and j in the order, we need only

run the construction until both elements appear and compare where they

land in L.


1.3 Finite Block

Theorem: There is a computable linear order L such that

FinBlL = {c | c is in a finite block in L} is Σ^0_3-complete.

Proof: Let X be a Σ^0_3-complete set. We need to build a computable linear order L such that X ≤m FinBlL. Since Inf is Π^0_2-complete, we can fix a computable function f(x, n) such that

n ∈ X if and only if ∃x (W_{f(x,n)} is infinite).

To build L, we will use witness points cn and meet the requirements:

Rn: ∃x (W_{f(x,n)} is infinite) if and only if cn ∈ FinBlL.

Construction:

Stage 0: Effectively partition the even numbers into infinitely many infinite sets

Y, P0, P1, P2, ... We will use these sets of numbers to put down a basic structure

for our computable order L that will be filled in at later stages.

To define this basic structure, first place the numbers in Y in L in their usual

order and label them as follows:

c0 <L c1 <L c2 <L ...

For each n, we will place the numbers in Pn around cn and order them in order type Z with labels as follows:

... <L b_0^1 <L b_0^0 <L c_0 <L d_0^0 <L d_0^1 <L ... <L b_1^1 <L b_1^0 <L c_1 <L d_1^0 <L d_1^1 <L ...

Stage s+1: At stage s+1, we let each requirement Rn for n ≤ s act in turn. Since

Rn will act within the part of L defined by the Pn points, we can treat each

requirement individually and there is no injury in this construction.

Action for Rn: For each x ≤ s, we check if W_{f(x,n)} has received a new element.

Case I: Assume W_{f(x,n)} receives a new element. We need to make progress

towards making cn a member of a finite block. So, let z1 and z2 be the two least

unused odd numbers. Place z1 into L as the immediate predecessor of b_n^{x−1} (or cn if x = 0) and z2 into L as the immediate successor of d_n^{x−1} (or cn if x = 0). The order looks like:

... <L b_n^x <L finite <L z1 <L b_n^{x−1} <L ... <L cn <L ... <L d_n^{x−1} <L z2 <L finite <L d_n^x <L ...

Notice that if b_n^{x−1} and d_n^{x−1} receive new predecessors and successors infinitely often, then they become limit points from below and above respectively, and the block containing cn cannot extend beyond [b_n^{x−1}, d_n^{x−1}].

Case II: Assume W_{f(x,n)} did not receive a new element. We do nothing in this

case and do not add any new points to L. Proceed to x + 1.


Verification:

(i) n ∈ X if and only if cn is in a finite block

First, suppose n ∈ X. We can fix x such that W_{f(x,n)} is infinite. Assume we have fixed the least such x. Since b_n^{x−1} receives infinitely many new predecessors, it is a limit point from below. Similarly, d_n^{x−1} is a limit point from above. (If x = 0, then cn is a limit point from below and above, and hence is in a block of size 1. We continue assuming x ≠ 0.) Thus, our order around cn looks like

... <L b_n^{x−1} <L finite <L b_n^0 <L finite <L cn <L finite <L d_n^0 <L finite <L d_n^{x−1} <L ...

The interval [b_n^{x−1}, d_n^{x−1}] is finite and constitutes the block containing cn. Therefore, cn is in a finite block.

On the other hand, suppose n ∉ X. In this case, W_{f(x,n)} is finite for all x and hence each interval of the form [b_n^x, b_n^{x−1}], [b_n^0, cn], [cn, d_n^0], and [d_n^{x−1}, d_n^x] is finite. Therefore, the block containing cn has order type Z and is infinite.

(ii) Effective Construction

The construction is effective because there are only a finite number of


things done at each stage. The points we place around each cn are all odd and thus, there will never be a lack of

available numbers as there are only finitely many odds used at each

stage. Also, since there are numbers n ∈ X, and hence sets W_{f(x,n)} which are infinite, the

construction will use all of the odd numbers. Hence the domain of L is

N, which is computable. This implies that to compare i and j in the order, we need only run the construction until both elements appear and compare

where they land in L.


Chapter 2

Discrete

The main goal of this chapter is to construct a computable linear order L with

a least element b such that

0′′ ≤T DisL(b) = {c | (b, c) is discrete in L}

and 0′′ ≤T BlockL(b) = {c | [b, c] is finite in L}

We first give a simpler construction coding 0′ instead of 0′′ and then we show

how to modify this construction to code 0′′.

2.1 Construction I

Recall: We define an interval as discrete if every element has a successor and

predecessor except if the interval has a least or greatest element. If the interval

has a least element, the least element will not have a predecessor and if the

interval has a greatest element, then the greatest element will not have a suc-

cessor. In particular, finite intervals are discrete and we will utilize that part of

the definition in this proof.

Theorem: There is a computable linear order L with least element b such that

0′ ≤T DisL(b) = {c | (b, c) is discrete in L}.


Proof : In order to prove this theorem, we want to build a computable linear

order L around a least element b such that the interval (b, xn) is discrete if and

only if n ∈ K. We have the following requirements:

Rn : n ∈ K if and only if xn ∈ DisL(b)

with ordering R0 < R1 < R2 < ...

The basic strategy for a single requirement R0 is to put down a pair of points l0

and x0 such that

b <L l0 <L x0.

Our goal is to do one of two things in the interval (l0, x0) depending on whether

0 ∈ K or not. If 0 ∉ K, then we want to make the open interval (l0, x0) isomorphic to ω∗. This action makes l0 into a limit point from above and hence makes (b, x0)

not discrete because l0 has no successor. If 0 ∈ K, then we want to make [l0, x0]

finite which makes (l0, x0) discrete. In the context of a single requirement, this

also makes (b, x0) finite and thus, discrete.

To accomplish this goal, at each stage s, we check whether 0 ∈ Ks. If not, then

we add a new least point in the interval (l0, x0).

b <L l0 <L new point <L zk <L ... <L z1 <L z0 <L x0

In this case, we regard R0 as a building state requirement and in the general


construction, we will be taking the B outcome (for building).

On the other hand, if 0 ∈ Ks, then we want to stop building our ω∗-chain and

restrain the interval [l0, x0] from ever growing again. We regard R0 as a re-

straining state requirement. In the general construction, we will be taking the

R outcome (for restraining).

To handle a second requirement R1, we need a second pair of witness points

l1 <L x1. The placement of these points depends on the action of R0. As long

as R0 is in the building state, we are working under the assumption that [l0, x0]

will not be discrete in the limit and therefore we can put any points we want

into the interval (b, l0). Thus, we place the points l1 and x1 as follows:

b <L l1 <L x1 <L l0 <L x0

The requirement R1 now works exactly as R0 did. As long as 1 ∉ Ks, R1 contin-

ues to add points to (l1, x1) towards making this interval isomorphic to ω∗. If

1 ∈ Ks, then R1 restrains [l1, x1] by not allowing any additional points to enter

this interval.

However, consider what happens if R0 changes to the restraining state. In this

case, R0 freezes the finite size of [l0, x0] and wants to also make sure that (b, x0)

is finite. Therefore, R1 needs to stop adding points in its current interval (l1, x1)


since these points are added into the interval [b, l0].

In this situation, R1 adds new witness points l_1^* and x_1^* and places them such that

b <L l1 <L finite <L x1 <L l0 <L finite <L x0 <L l_1^* <L x_1^*.

R1 can now proceed as before using the interval (l_1^*, x_1^*). Notice that if 0 ∈ K and 1 ∈ K, then R0 makes [l0, x0] finite and makes [b, l0] finite (by forcing R1 to stop using witnesses l1 and x1). R1 also makes [l_1^*, x_1^*] finite. Thus, (b, x0) and (b, x_1^*) are both finite (and hence discrete), winning R0 and R1.

Notice that with two requirements, we need to know the outcome at R0 in order

to know which interval in L codes information about whether 1 ∈ K. To use

{c | (b, c) is discrete in L} to compute K, we proceed as follows. First, we need

to ask if (b, x0) is discrete. If the interval is not discrete, then we know that 0 ∉ K and that the witness pair for R1 is (l1, x1). So, we ask if (b, x1) is discrete. If so, then 1 ∈ K and if not, then 1 ∉ K.

On the other hand, if (b, x0) is discrete, then we know that 0 ∈ K. So, at some

finite point in the construction, we switched our witness pair for R1 to (l_1^*, x_1^*). Therefore, to determine if 1 ∈ K, we need to ask if (b, x_1^*) is discrete. If it is discrete, then 1 ∈ K and if it is not discrete, then 1 ∉ K.


The witness x2 is set down based upon the restrictions of the higher priority

requirements R0 and R1.

• If 0 ∈ K and 1 ∈ K, then x2 is set down such that b <L x0 <L x1 <L x2.

• If 0 ∈ K and 1 ∉ K, then x2 is set down such that b <L x0 <L x2 <L x1.

• If 0 ∉ K and 1 ∈ K, then x2 is set down such that b <L x1 <L x2 <L x0.

• If 0 ∉ K and 1 ∉ K, then x2 is set down such that b <L x2 <L x1 <L x0.

The rest of the witnesses are set down based upon the higher priority require-

ments.

Notice that, as described above for R0 and R1, in order to determine which

interval in L codes information about whether 2 ∈ K, we need to know the

outcomes for R0 and R1. The answer to the question of whether (b, x0) is dis-

crete tells us which witness pair for R1 codes the information about whether

1 ∈ K. Once we know which witness pair codes this information, we can ask

a discreteness question to determine which witness pair for R2 codes informa-

tion about whether 2 ∈ K. In general, to determine which witness pair codes

information about whether n ∈ K, we will have to use discreteness questions

to determine the correct witness pairs for 0, 1, ...,n − 1. This process illustrates

why our reduction is a Turing reduction as opposed to an m-reduction.
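This adaptive questioning can be sketched abstractly. Both names below are stand-ins: `witness_pair` models the map from the outcomes of R_0, ..., R_{k−1} to the witness pair actually used by R_k, and `discrete` models the oracle DisL(b). The point of the sketch is that each query depends on the answers to the previous ones, which is exactly what a Turing reduction permits and an m-reduction forbids.

```python
# Sketch of the adaptive reduction: to decide whether n is in K from the
# discreteness oracle, settle the outcomes of R_0, ..., R_n in order, since
# the witness pair used by R_k depends on the outcomes of R_0, ..., R_{k-1}.

def decide_K(n, witness_pair, discrete):
    outcomes = []
    for k in range(n + 1):
        x = witness_pair(k, tuple(outcomes))  # pair determined by prior answers
        outcomes.append(discrete(x))          # one adaptive oracle query
    # By requirement R_n, (b, x) is discrete iff n entered K.
    return outcomes[-1]
```

In a toy model where `discrete((k, _))` returns whether k lies in {0, 2}, the procedure reports exactly membership in {0, 2}, one adaptive query per requirement.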


We will be setting up a tree of strategies T = {R, B}^{<ω} such that R <_T B. Here <_T denotes the left-to-right order determined by the tree, as opposed to <L, which refers to the actual linear order. The basic universal

strategy is to stay in a build state until n is enumerated into K. During this

time, we will be building an ω∗-chain between the witness pair ln and xn. When

n is enumerated into K, we want to switch to the restrain strategy which will

not allow any new points to be introduced in the interval [ln, xn].

It remains to describe where a strategy α ∈ T places its witness points lα and

xα when it is first eligible to act. To describe this placement, we treat the pair lα

and xα as a single entity wα and write

wα <L wβ

as an abbreviation for lα <L xα <L lβ <L xβ. Our method of adding points (as

described below) will ensure that the intervals (lα, xα) and (lβ, xβ) are always

disjoint. For distinct strategies α and β, we place wα <L wβ if and only if either

• β <L α (β is to the left of α in the tree of strategies)

• or α ⊆ β and β(|α|) = R (β extends α ∗ R)

• or β ⊆ α and α(|β|) = B (α extends β ∗ B).
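These placement rules can be transcribed directly. The following sketch (Python; the names w_before, left_of, and is_prefix are ours, and strategies are modeled as tuples over {"R","B"}) checks whether wα is declared to the left of wβ under the three conditions above:

```python
def is_prefix(alpha, beta):
    """alpha ⊆ beta: alpha is an initial segment of beta."""
    return beta[:len(alpha)] == alpha

def left_of(beta, alpha):
    """beta <_L alpha: at the first disagreement, beta takes the
    <_L-smaller outcome (R <_L B in this construction)."""
    for b, a in zip(beta, alpha):
        if b != a:
            return b == "R"
    return False  # comparable nodes are not left of one another

def w_before(alpha, beta):
    """The three conditions under which w_alpha <_L w_beta is declared."""
    if left_of(beta, alpha):                     # beta is left of alpha
        return True
    if len(beta) > len(alpha) and is_prefix(alpha, beta) and beta[len(alpha)] == "R":
        return True                              # beta extends alpha * R
    if len(alpha) > len(beta) and is_prefix(beta, alpha) and alpha[len(beta)] == "B":
        return True                              # alpha extends beta * B
    return False
```

For example, w_before(("B",), ("R",)) holds because ("R",) lies to the left of ("B",) in the tree, so its witness pair is placed to the right in L.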


Local Action for α for Re :

(i) When Rα is first eligible to act, place a new pair of witnesses lα <L xα in

L as described above.

(ii) If e ∉ Ks, then add a new least point into the interval (lα, xα) and take

outcome α ∗ B.

(iii) If e ∈ Ks, do not add any points to (lα, xα) and take outcome α ∗ R.

Note two properties of the placement of points in our linear order L. First,

only α is allowed to put points in the interval (lα, xα). This protects our interval

against other witnesses encroaching on its territory. Second, when lα and xα are

placed, the interval contains no lβ and xβ points. This serves the same purpose

as the previous restriction in that it preserves the previous intervals.

Construction

Stage 0: We begin with the empty set. So, we need to set down point b.

Stage s+1: Follow the path down the tree of strategies to level s as directed by

the action of the strategies eligible to act. When we reach level s + 1, end the

stage.
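Since the Re strategy on the current path takes outcome R exactly when e has appeared in Ks, the path followed at a stage can be modeled by a one-line sketch (path_at_stage is our illustrative name; a finite set stands in for the enumeration Ks):

```python
def path_at_stage(K_s, s):
    """Path of length s followed at stage s + 1: the level-e strategy on
    the path takes outcome R iff e has been enumerated into K by this
    stage, and outcome B otherwise."""
    return tuple("R" if e in K_s else "B" for e in range(s))

# As K grows, an outcome can only switch from B to R, so (with R to the
# left of B) the path only ever moves leftward, as the verification notes.
```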


Verification:

(i) True Path

The true path in our tree of strategies is the leftmost path visited infinitely

often. Notice that if α is on the true path, then either α always takes

outcome α ∗ B or at some stage s, α switches to outcome α ∗R and always

takes α ∗ R at all future stages. Therefore, as the construction proceeds,

the paths taken only move left and a node α at level n is on the true path

if and only if α is eventually on the path at every stage past some stage s.

(ii) Lemma 2.1.1: Let α be on the true path. If α ∗ R is on the true path, then

(b, xα) is finite and hence discrete.

Proof : Fix t such that for all s ≥ t, α ∗ R is on the path at stage s. To prove

this lemma, we need to consider the ways that a point could possibly

enter the interval (b, xα) after stage t. There are two possibilities:

(a) α places points into [lα, xα] after stage t.

If we have taken the outcome R at stage t, then we know that e entered

K by stage t. By the construction, there is no possibility for points to

enter [lα, xα] because we cannot return to the building outcome.

(b) Another strategy β places points into (b, xα). In this case, we must have

wβ <L wα. Consider the ways in which this could happen.

• If β ⊆ α, then wβ <L wα means α(|β|) = R and hence β ∗ R ⊆ α.

Therefore, β ∗ R is on the true path and is on the current path at all


stages s ≥ t. By the action at β, β does not add any points to [lβ, xβ].

• If α ⊆ β, then wβ <L wα means β(|α|) = B and hence α ∗ B ⊆ β.

However, α ∗B is never on the path after state t, so β is never eligible

to act after stage t. Therefore, β cannot add new points to [lβ, xβ].

• If α and β are incomparable, then wβ <L wα means α <L β. Since our

path only moves left and α is on the true path, β is never eligible to

act after stage t and never adds any new points to L after stage t.

In all cases, we see that no strategy β ≠ α can add new points to (b, xα)

after stage t.

(iii) Lemma 2.1.2: Let α be on the true path. If α ∗ B is on the true path, then

(b, xα) is not discrete.

Proof : If α ∗ B is on the true path, then it is eligible to act infinitely often

and it adds points to make (lα, xα) isomorphic to ω∗. Therefore, lα ∈ (b, xα)

and lα has no immediate successor. Hence, (b, xα) is not discrete.

(iv) Lemma 2.1.3: Let α be the Re strategy on the true path. Then, the following

are equivalent:

• (b, xα) is discrete.

• (b, xα) is finite.

• e ∈ K

• α ∗ R is on the true path.


Proof : If e ∈ K, then by our local action for α, α ∗ R is on the true path. By

Lemma 2.1.1, (b, xα) is discrete and finite. If e ∉ K, then by our local action

for α, α ∗ B is on the true path. By Lemma 2.1.2, (b, xα) is not discrete and

hence infinite.

(v) Lemma 2.1.4: 0′ ≤T DisL(b) = {c | (b, c) is discrete in L}

Proof : We define a function f : N → T by setting f (e) = α if α is the Re

strategy on the true path. Notice that f is computable from DisL(b) since

f (0) is the unique R0 strategy and by Lemma 2.1.3,

f(e + 1) = f(e) ∗ B if (b, x_f(e)) is not discrete

f(e + 1) = f(e) ∗ R if (b, x_f(e)) is discrete

We can compute 0′ from f using Lemma 2.1.3 since

n ∈ K if and only if (b, x_f(n)) is discrete

and thus n ∈ K if and only if f(n + 1) = f(n) ∗ R. Therefore, we have K ≤T f

and f ≤T DisL(b), so K ≤T DisL(b).
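The oracle computation in this lemma can be sketched as a simple loop. In the sketch below (our names), discrete is a hypothetical stand-in for the oracle DisL(b), queried on the witness of the strategy currently believed to lie on the true path, and the finite set K_fake only illustrates the behavior:

```python
def decide_K(discrete, n):
    """Follow f along the true path: f(e+1) = f(e)*R if (b, x_f(e)) is
    discrete and f(e)*B otherwise; then n ∈ K iff the final interval is
    discrete (Lemma 2.1.3)."""
    alpha = ()                                  # f(0): the root R_0 strategy
    for _ in range(n):
        alpha += ("R",) if discrete(alpha) else ("B",)
    return discrete(alpha)

# Illustration: a fake oracle induced by pretending K = {0, 2}; since
# f(e) has length e, discreteness of (b, x_f(e)) corresponds to e ∈ K.
K_fake = {0, 2}
oracle = lambda alpha: len(alpha) in K_fake
```

Note how each answer to a discreteness question is needed before the next question can even be posed; this is the Turing (rather than m-) character of the reduction.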

(vi) Effective Construction

The construction is effective because there are only a finite number of

things done at each stage for each requirement Rn. Also, since there is some

n ∉ K, we will build at least one infinite ω∗-chain, so the construction will

use all of the natural numbers. Hence the domain of L is N, which is


computable. This implies that to compare i and j in the order, we just need to

run the construction until both elements appear and compare where they

land in L.

Corollary: 0′ ≤T BlockL(b) = {c | [b, c] is finite in L}

Proof : By Lemma 2.1.3, if α is an Re strategy on the true path, then (b, xα) is

discrete if and only if (b, xα) is finite. Therefore, we could equivalently define

the function f in Lemma 2.1.4 by

f(e + 1) = f(e) ∗ B if (b, x_f(e)) is infinite

f(e + 1) = f(e) ∗ R if (b, x_f(e)) is finite

Thus, f ≤T BlockL(b) and hence K ≤T BlockL(b).


2.2 Construction II

Theorem: There is a computable linear order L with least element b such that

0′′ ≤T DisL(b) = {c | (b, c) is discrete in L}.

Proof : In order to prove this theorem, we want to build a computable linear

order L around a least element b such that the interval (b, xn) is not discrete

if and only if n ∈ Inf where Inf = {e | We is infinite}. We have the following

requirements:

Rn : n ∈ Inf if and only if xn ∉ DisL(b)

with ordering R0 < R1 < R2 < ...

The basic strategy for a single requirement R0 is similar to the strategy in the 0′

construction. We want to put down a pair of points l0 and x0 such that

b <L l0 <L x0.

Our goal is to do one of two things in the interval [l0, x0] depending on whether

0 ∈ Inf or not. If 0 ∈ Inf, then we want to make the open interval (l0, x0)

isomorphic to ω∗. This action makes l0 into a limit point from above and hence,

makes (b, x0) not discrete because l0 has no successor. If 0 ∉ Inf, then we want

to make [l0, x0] finite which makes [l0, x0] discrete. In the context of a single

requirement, this also makes (b, x0) finite and thus, discrete.


To accomplish this goal, at each stage s, we check whether W0 has received a

new point. If so, then we add a new least point in the interval (l0, x0).

b <L l0 <L new point <L zk <L ... <L z1 <L z0 <L x0

In this case, we regard R0 as a building state requirement and in the general

construction, we will be taking the B outcome (for building).

On the other hand, if W0 has not received a new point, then we want to stop (at

least temporarily) building our ω∗-chain and restrain the interval [l0, x0] from

growing. We regard R0 as a restraining state requirement. In the general con-

struction, we will be taking the R outcome (for restraining).

We will be setting up a tree of strategies T = {B,R}<ω such that B <L R. Notice

that we have switched from R <L B in the 0′ construction to B <L R in the 0′′

construction. In the 0′′ construction, it is possible to take both the B and R

outcomes infinitely often. For example, we would do this if there are infinitely

many stages at which Wn gets a new element and infinitely many stages at

which Wn does not get a new element. In this case, the true outcome is the B

outcome since Wn is infinite. In order to have the true path be the leftmost path

visited infinitely often, we need B <L R for the 0′′ construction.

The basic universal strategy is to stay in a restrain state until Wn adds a new


point. While restrained, we will not allow any new points to be introduced

in the interval [ln, xn]. When Wn grows, we want to switch to the build strategy

and add a single point towards building a copy of ω∗ in (ln, xn). If we take the

outcome B infinitely often, then (ln, xn) will grow to a copy of ω∗.
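As a toy illustration of this build/restrain alternation, the sketch below (our names; growth_stages is a finite stand-in for the stages at which Wn receives a new element) records the outcome taken at each stage:

```python
def outcomes(growth_stages, horizon):
    """Outcome at stages 1..horizon for a single strategy: B (build one
    more point of the omega*-chain) when W_n has just grown, and
    R (restrain [l_n, x_n]) otherwise."""
    return ["B" if s in growth_stages else "R" for s in range(1, horizon + 1)]
```

If Wn is infinite, B is taken at infinitely many stages even though R may also recur infinitely often; this is exactly why the 0′′ construction needs B <L R so that the leftmost path visited infinitely often records the true outcome.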

It remains to describe where a strategy α ∈ T places its witness points lα and

xα when it is first eligible to act. To describe this placement, we treat the pair lα

and xα as a single entity wα and write

wα <L wβ

as an abbreviation for lα <L xα <L lβ <L xβ. Our method of adding points (as

described below) will ensure that the intervals (lα, xα) and (lβ, xβ) are always

disjoint. For distinct strategies α and β, we place wα <L wβ if and only if either

• α <L β (α is to the left of β in the tree of strategies)

• or α ⊆ β and β(|α|) = R (β extends α ∗ R)

• or β ⊆ α and α(|β|) = B (α extends β ∗ B).

Local Action for α for Re :

(i) When Rα is first eligible to act, place a new pair of witnesses lα <L xα in

L.

Let s be the last stage at which α was eligible to act (with s = 0 if this is

the first time α is eligible to act).


(ii) If We,t ≠ We,s, where t is the current stage, then add a new least point into

the interval (lα, xα) and take outcome α ∗ B.

(iii) If We,t = We,s, do not add any points to (lα, xα) and take outcome α ∗ R.

Note two properties of the placement of points in our linear order L. First,

only α is allowed to put points in the interval (lα, xα). This protects our interval

against other witnesses encroaching on its territory. Second, when lα and xα are

placed, the interval contains no lβ and xβ points. This serves the same purpose

as the previous restriction in that it preserves the previous intervals.

Construction

Stage 0: We begin with the empty set. So, we need to set down point b.

Stage s+1: Follow the path down the tree of strategies to level s as directed by

the action of the strategies eligible to act. When we reach level s + 1, end the

stage.

Verification:

(i) True Path

First let s be an α-stage if α is eligible to act at stage s. The true path in

our tree of strategies is the leftmost path visited infinitely often. Assume


α is on the true path. If there are infinitely many α-stages when we take

outcome α ∗ B, then α ∗ B is on the true path. Otherwise, there exists a

stage t such that for all α-stages after t, we take α ∗ R and α ∗ R is on the

true path. Note that if α is on the true path, then there exist only finitely

many stages s when the true path is to the left of α.

(ii) Lemma 2.2.1: Let α be on the true path. If α ∗ R is on the true path, then

(b, xα) is finite and hence discrete.

Proof : Fix a stage t such that for all s ≥ t, the path is not to the left of α ∗R.

To prove this lemma, we need to consider the ways that a point could

enter the interval (b, xα) after stage t.

(a) α places points in [lα, xα] after stage t.

If α places a point in [lα, xα], then it takes outcome α ∗ B. Since α ∗ B <L α ∗ R

and the path is never left of α ∗ R after stage t, α cannot place any points

in [lα, xα] after stage t.

(b) Another strategy β places points in [b, xα]. In this case, we must have

wβ <L wα. Consider the ways this could happen.

(i) If β ⊆ α, then wβ <L wα means α(|β|) = R and hence β ∗ R ⊆ α. If

β adds points to L, it takes outcome β ∗ B which is left of α ∗ R.

Therefore, β adds no more points after stage t.

(ii) If α ⊆ β, then wβ <L wα means β(|α|) = B and hence α ∗B ⊆ β. Since α

takes outcome R at all α-stages after t, β is never eligible to act after


stage t and hence does not add any points after stage t.

(iii) If α and β are incomparable, then wβ <L wα means β <L α. However,

the path is never to the left of α ∗R after stage t and therefore, never

to the left of α after stage t. Hence, β cannot add points after stage t.

In all cases, we see that no strategy β ≠ α can add new points to (b, xα)

after stage t.

(iii) Lemma 2.2.2: Let α be on the true path. If α ∗ B is on the true path, then

(b, xα) is not discrete.

Proof : If α ∗ B is on the true path, then it is eligible to act infinitely often

and it adds points to make (lα, xα) isomorphic to ω∗. Therefore, lα ∈ (b, xα)

and lα has no immediate successor. Hence, (b, xα) is not discrete.

(iv) Lemma 2.2.3: Let α be the Re strategy on the true path. Then the following

are equivalent:

• (b, xα) is discrete.

• (b, xα) is finite.

• e ∉ Inf

• α ∗ R is on the true path.

Proof : If e ∉ Inf, then by our local action for α, α ∗ R is on the true path.

By Lemma 2.2.1, (b, xα) is discrete and finite. If e ∈ Inf, then by our local


action for α, α∗B is on the true path. By Lemma 2.2.2, (b, xα) is not discrete

and hence infinite.

(v) Lemma 2.2.4: 0′′ ≤T DisL(b) = {c | (b, c) is discrete in L}

Proof : We define a function f : N → T by setting f (e) = α if α is the Re

strategy on the true path. Notice that f is computable from DisL(b) since

f (0) is the unique R0 strategy and by Lemma 2.2.3,

f(e + 1) = f(e) ∗ B if (b, x_f(e)) is not discrete

f(e + 1) = f(e) ∗ R if (b, x_f(e)) is discrete

We can compute 0′′ from f using Lemma 2.2.3 since

n ∈ Inf if and only if (b, x_f(n)) is not discrete

and hence n ∈ Inf if and only if f(n + 1) = f(n) ∗ B. Therefore, we have

Inf ≤T f and f ≤T DisL(b), so Inf ≤T DisL(b).

(vi) Effective Construction

The construction is effective because there are only a finite number of

things done at each stage for each requirement Rn. Also, since there is some

n ∈ Inf, we will build at least one infinite ω∗-chain, so the construction will

use all of the natural numbers. Hence the domain of L is N, which is

computable. This implies that to compare i and j in the order, we just need to

run the construction until both elements appear and compare where they

land in L.


Corollary: 0′′ ≤T BlockL(b)

Proof : By Lemma 2.2.3, if α is an Re strategy on the true path, then (b, xα) is

discrete if and only if (b, xα) is finite. Therefore, we could equivalently define

the function f in Lemma 2.2.4 by

f(e + 1) = f(e) ∗ B if (b, x_f(e)) is infinite

f(e + 1) = f(e) ∗ R if (b, x_f(e)) is finite

Thus, f ≤T BlockL(b) and hence 0′′ ≤T BlockL(b).


Chapter 3

Dense

In this chapter, we construct a computable linear order L with a least element

b such that

0′′ ≤T DenL(b) = {c | (b, c) is dense in L}.

We first give a simpler construction coding 0′ instead of 0′′.

3.1 Construction I

Recall: We define an interval as dense if it is isomorphic to Q.

Theorem: There is a computable linear order L with least element b such that

0′ ≤T DenL(b) = {c | (b, c) is dense in L}.

Proof : In order to prove this theorem, we want to build a computable linear

order L around a least element b such that the interval (b, xn) is dense if and

only if n ∉ K. We have the following requirements:

Rn : n ∉ K if and only if xn ∈ DenL(b)


with ordering R0 < R1 < R2 < ...

The basic strategy for a single requirement R0 is to put down a point x0 such

that

b <L x0.

Our goal is to do one of two things in the interval (b, x0) depending on whether

0 ∈ K or not. If 0 ∉ K, then we want to make the open interval (b, x0) isomorphic

to Q. This action makes (b, x0) dense. If 0 ∈ K, then we want to make (b, x0) not

dense.

To accomplish this goal, at each stage s, we check whether 0 ∈ Ks. If not, then

we add a new point between each pair of adjacent points in the interval [b, x0].

b <L new <L wk <L new <L ... <L new <L w1 <L new <L w0 <L new <L x0

In this case, we regard R0 as a building state requirement and in the general

construction, we will be taking the B outcome (for building). Since we will

repeat this process many times, we introduce the following terminology. Let

(u, v) be a finite interval in L at stage s. To partially densify (u, v) means to add

a new least element and a new greatest element to this open interval and to

add one new point between each pair of points in (u, v) which are currently

successors. Notice that if a fixed interval (u, v) is partially densified infinitely


often, then (u, v) has order type Q.
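The partial-densification step can be sketched as an operation on the finite list of points currently strictly between u and v (a toy model with our names; labels are fresh integers and only their order matters):

```python
import itertools

_fresh = itertools.count()  # supply of new point labels

def partially_densify(interval):
    """One round: add a new least point, a new greatest point, and one
    new point between each pair of current successors in (u, v)."""
    if not interval:
        return [next(_fresh), next(_fresh)]   # just a new least and greatest
    out = [next(_fresh)]                      # new least element
    for p in interval:
        out.append(p)
        out.append(next(_fresh))              # after p; the last one is the new greatest
    return out
```

Each round takes a list of length n ≥ 1 to one of length 2n + 1, and every old pair of successors is separated, so iterating forever yields a dense order without endpoints inside (u, v), i.e. a copy of Q.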

On the other hand, if 0 ∈ Ks, then we want to stop building our copy of Q and restrain

the interval (b, x0) from becoming dense. To do this, we want to add two points

b < z0 < y0 < x0 as immediate predecessors of x0 and not allow any points to

enter the interval (z0, x0). If we maintain this restraint, then z0 will be an

immediate predecessor of y0 and hence (b, x0) will not be dense. We regard R0 as a

restraining state requirement. In the general construction, we will be taking

the R outcome (for restraining).

To handle a second requirement R1, we need a witness point x1. The placement

of this point depends on the action of R0. As long as R0 is in the building state,

we are working under the assumption that (b, x0) will be dense in the limit and

therefore, we want to protect the interval (b, x0). Thus, we place the point x1 as

follows:

b <L x0 <L x1

The requirement R1 now works exactly as R0 did. As long as 1 ∉ Ks, R1

continues to partially densify (b, x1), making this interval isomorphic to Q. Notice

that if 0 ∉ K and 1 ∉ K, then the action of R1 towards making (b, x1) isomorphic

to Q does not injure the action of R0 towards making (b, x0) isomorphic to Q. If

1 ∈ Ks, then R1 restrains (b, x1) by not allowing this interval to become dense by


placing two points x0 <L z1 <L y1 <L x1 as immediate predecessors of x1 and

not allowing any points to enter the interval [z1, x1]. Since we have x0 <L z1 <L y1,

the requirement R0 can make (b, x0) dense while the requirement R1 can make

(b, x1) not dense by making z1 an immediate predecessor of y1.

However, consider what happens if R0 changes to the restraining state. In this

case, R0 adds two points b < z0 < y0 < x0 as immediate predecessors of x0 and

does not allow any points to enter the interval (z0, y0), which makes sure that

(b, x0) is not dense. Therefore, R1 needs to stop partially densifying its current

interval (b, x1) since this action adds points in the interval (z0, y0).

In this situation, R1 adds a new witness point x∗1 (or chooses one in the interval

(b, z0)) and places it such that

b <L x∗1 <L z0 <L y0 <L x0 <L (finitely many points) <L x1.

R1 can now proceed as before using the interval (b, x∗1). Notice that if 0 ∈ K and

1 ∈ K, then R0 makes (b, x0) not dense with the witnesses z0 <L y0 and R1 makes

(b, x1) not dense with witnesses z∗1 <L y∗1. Thus, (b, x0) and (b, x∗1) are both not

dense, winning R0 and R1.

Notice that with two requirements, we need to know the outcome of R0 in order

to know which interval in L codes information about whether 1 ∈ K. To use


{c | (b, c) is dense} to compute K, we proceed as follows. First, we need to ask if

(b, x0) is dense. If the interval is dense, then we know that 0 ∉ K and that the

witness for R1 is x1. So, we ask if (b, x1) is dense. If so, then 1 ∉ K and if not,

then 1 ∈ K.

On the other hand, if (b, x0) is not dense, then we know that 0 ∈ K. So, at some

finite point in the construction, we switched our witness for R1 to x∗1. Therefore,

to determine if 1 ∈ K, we need to ask if (b, x∗1) is dense. If it is dense, then 1 ∉ K

and if it is not dense, 1 ∈ K.

The witness x2 is set down based upon the restrictions of the higher priority

requirements R0 and R1.

• If 0 ∉ K and 1 ∉ K, then x2 is set down such that b <L x0 <L x1 <L x2.

• If 0 ∉ K and 1 ∈ K, then x2 is set down such that b <L x0 <L x2 <L x1.

• If 0 ∈ K and 1 ∉ K, then x2 is set down such that b <L x1 <L x2 <L x0.

• If 0 ∈ K and 1 ∈ K, then x2 is set down such that b <L x2 <L x1 <L x0.

The rest of the witnesses are set down based upon the higher priority require-

ments.

Notice that, as described above for R0 and R1, in order to determine which

interval in L codes information about whether 2 ∈ K, we need to know the out-

comes for R0 and R1. The answer to the question of whether (b, x0) is dense tells


us which witness for R1 codes the information about whether 1 ∈ K. Once we

know which witness codes this information, we can ask a denseness question

to determine which witness for R2 codes information about whether 2 ∈ K. In

general, to determine which witness codes information about whether n ∈ K,

we will have to use denseness questions to determine the correct witness for

0, 1, ...,n − 1. This process illustrates why our reduction is a Turing reduction

as opposed to an m-reduction.

We will be setting up a tree of strategies T = {R,B}<ω such that R <L B. The

basic universal strategy is to stay in a build state until n is enumerated into K.

During this time, we will be building Q between the witness xn and b. When

n is enumerated into K, we want to switch to the restrain strategy which will

protect the interval (b, xn) from being dense by inserting zn < yn < xn as imme-

diate predecessors of xn and restraining any new elements from entering (zn, yn).

It remains to describe where α places its witness point xα when it is first eligible

to act. To describe this placement, we treat the point xα as an entity wα (to keep

a similar notation as in the Discrete Construction Proofs) and write

wα <L wβ

as a notation for xα <L xβ. For distinct strategies α and β, we place wα <L wβ if

and only if either


• α <L β (α is to the left of β in the tree of strategies)

• or α ⊆ β and β(|α|) = B (β extends α ∗ B)

• or β ⊆ α and α(|β|) = R (α extends β ∗ R).

Local Action for α for Re :

(i) When Rα is first eligible to act, place a new witness xα in L or choose an

existing point that satisfies the ordering conditions above.

(ii) If e ∉ Ks, then partially densify the interval (b, xα) and take outcome α ∗ B.

(iii) If s is the least stage such that e ∈ Ks, add zα and yα as immediate pre-

decessors of xα and take outcome α ∗ R. If zα and yα have already been

added, just take outcome α ∗ R.

Note a property of the placement of points in our linear order L. Only α is

allowed to place zα and yα into (b, xα). This protects our interval against other

witnesses encroaching on its territory and making it dense.

Construction

Stage 0: We begin with the empty set. So, we need to set down point b.

Stage s+1: Follow the path down the tree of strategies to level s as directed by

the action of the strategies eligible to act. When we reach level s + 1, end the


stage.

Verification:

(i) True Path

The true path in our tree of strategies is the leftmost path visited infinitely

often. Notice that if α is on the true path, then either α always takes

outcome α ∗ B or at some stage s, α switches to outcome α ∗R and always

takes α ∗ R at all future stages. Therefore, as the construction proceeds,

the paths taken only move left and a node α at level n is on the true path

if and only if α is eventually on the path at every stage past some stage s.

(ii) Lemma 3.1.1: Let α be on the true path. If α ∗ R is on the true path, then

(b, xα) is not dense.

Proof : Fix t such that α ∗ R is first on the path at stage t and hence is

on the path at stage s for all s ≥ t. At stage t, α places the points yα and

zα such that zα <L yα <L xα and (zα, xα) = {yα}. To show that (b, xα) is not

dense, it suffices to show that no strategy can add points to [zα, xα] after

stage t. There are two possibilities:

(a) α places points into [zα, xα] after stage t.

If we have taken the outcome R at stage t, then we know that e entered

K by stage t. By the construction, there is no possibility for points to

enter [zα, xα] because we cannot return to the building outcome.


(b) Another strategy β places points into [zα, xα]. In this case, we must

have wα <L wβ. Consider the ways in which this could happen.

• If β ⊆ α, then wα <L wβ means α(|β|) = R and hence β ∗ R ⊆ α.

Therefore, β ∗ R is on the true path and is on the current path at all

stages s ≥ t. By the action at β, β does not add any new points to L.

• If α ⊆ β, then wα <L wβ means β(|α|) = B and hence α ∗ B ⊆ β.

However, α ∗ B is never on the path after stage t, so β is never

eligible to act after stage t. Therefore, β cannot add new points to

L.

• If α and β are incomparable, then wα <L wβ means α <L β. Since our

path only moves left and α is on the true path, β is never eligible to

act after stage t and never adds any new points to L after stage t.

In all cases, we see that no strategy β ≠ α can add new points to [zα, xα]

after stage t.

(iii) Lemma 3.1.2: Let α be on the true path. If α ∗ B is on the true path, then

(b, xα) is dense.

Proof : If α ∗ B is on the true path, then it is eligible to act infinitely often

and it adds points to make (b, xα) isomorphic to Q. Hence, (b, xα) is dense.


(iv) Lemma 3.1.3: Let α be an Re strategy on the true path. Then, (b, xα) is

dense if and only if e ∉ K if and only if α ∗ B is on the true path.

Proof : If e ∉ K, then by our local action for α, α ∗ B is on the true path. By

Lemma 3.1.2, (b, xα) is dense. If e ∈ K, then α ∗ R is on the true path

and by Lemma 3.1.1, (b, xα) is not dense.

(v) Lemma 3.1.4: 0′ ≤T DenL(b) = {c | (b, c) is dense in L}

Proof : We define a function f : N → T by setting f (e) = α if α is the Re

strategy on the true path. Notice that f is computable from DenL(b) since

f (0) is the unique R0 strategy and by Lemma 3.1.3,

f(e + 1) = f(e) ∗ B if (b, x_f(e)) is dense

f(e + 1) = f(e) ∗ R if (b, x_f(e)) is not dense

We can compute 0′ from f using Lemma 3.1.3 since

n ∈ K if and only if (b, x_f(n)) is not dense

and hence n ∈ K if and only if f(n + 1) = f(n) ∗ R. Therefore, we have

K ≤T f and f ≤T DenL(b), so K ≤T DenL(b).

(vi) Effective Construction

The construction is effective because there are only a finite number of

things done at each stage for each requirement Rn. Also, since there

is some n ∉ K, we will build at least one copy of Q, so the construction will


use all of the natural numbers. Hence the domain of L is N, which is

computable. This implies that to compare i and j in the order, we just need to

run the construction until both elements appear and compare where they

land in L.


3.2 Construction II

Theorem: There is a computable linear order L with least element b such that

0′′ ≤T DenL(b) = {c | (b, c) is dense in L}.

Proof : In order to prove this theorem, we want to build a computable linear

order L around a least element b such that the interval (b, xn) is dense if and only

if n ∈ Inf where Inf = {e | We is infinite}. We have the following requirements:

Rn : n ∈ Inf if and only if xn ∈ DenL(b)

with ordering R0 < R1 < R2 < ...

The basic strategy for a single requirement R0 is similar to the strategy in the 0′

construction. We want to put down an x0 such that

b <L x0.

Our goal is to do one of two things in the interval (b, x0) depending on whether

0 ∈ Inf or not. If 0 ∈ Inf, then we want to make the open interval (b, x0)

isomorphic to Q. This action makes (b, x0) dense. If 0 ∉ Inf, then we want to

make (b, x0) not dense.

To accomplish this goal, at each stage s, we check whether W0 adds a new point.

If so, then we partially densify (b, x0) by adding new points in the interval (b, x0).


b <L new <L wk <L new <L ... <L new <L w1 <L new <L w0 <L new <L x0

In this case, we regard R0 as a building state requirement and in the general

construction, we will be taking the B outcome (for building).

On the other hand, if W0 does not add a new point, then we want to stop

building (at least temporarily) our copy of Q and restrain the interval (b, x0)

from densifying. To do this, we want to add two points b < z0 < y0 < x0 as

immediate predecessors of x0 and not allow any points to enter the interval [z0, x0].

We regard R0 as a restraining state requirement. In the general construction,

we will be taking the R outcome (for restraining). However, if we see W0 get a

new element at a later stage, we initialize z0 and y0 in the sense that we forget

that these points had any special significance and we regard the parameters

y0 and z0 as undefined. When we partially densify (b, x0), we treat the points

formerly labeled by y0 and z0 as any other points in (b, x0) and add a new point

between them.

We will be setting up a tree of strategies T = {B,R}<ω such that B <L R. The

basic universal strategy is to stay in a restrain state until Wn adds a new point.

While restrained, we will not allow any new points to be introduced in

the interval [zn, xn]. When Wn grows, we want to switch to the build strategy.

In this strategy, we will forget about any yn and zn designation and continue to

build a copy of Q between b and xn.


It remains to describe where α places its witness point xα when it is first eligible

to act. To describe this placement, we treat xα as an entity wα (to keep with

previous notation) and write

wα <L wβ

as notation for xα <L xβ. For distinct strategies α and β, we place wα <L wβ if

and only if either

• β <L α (β is to the left of α in the tree of strategies)

• or α ⊆ β and β(|α|) = B (β extends α ∗ B)

• or β ⊆ α and α(|β|) = R (α extends β ∗ R).

Local Action for α for Re :

(i) When Rα is first eligible to act, place a new witness xα in L (or choose a

point xα in L satisfying the order conditions above).

Let s be the last stage at which α was eligible to act (with s = 0 if this is

the first time α is eligible to act).

(ii) If We,t ≠ We,s, where t is the current stage, then initialize yα and zα (if they

are defined), partially densify (b, xα), and take outcome α ∗ B.

(iii) If We,t = We,s, add points zα and yα as immediate predecessors of xα (unless

they are already defined) and take outcome α ∗ R.


Note a property of the placement of points in our linear order L: only α is

allowed to place zα and yα into the interval (b, xα). This protects our interval

against other witnesses encroaching on its territory.

Construction

Stage 0: We begin with the empty set. So, we need to set down point b.

Stage s+1: Follow the path down the tree of strategies to level s as directed by

the action of the strategies eligible to act. When we reach level s + 1, end the

stage.

Verification:

(i) True Path

First let s be an α-stage if α is eligible to act at stage s. The true path in

our tree of strategies is the leftmost path visited infinitely often. If α is

on the true path, then either there are infinitely many α-stages when we

take α ∗ B and α ∗ B is on the true path, or there exists a stage t such that

for all α-stages after t, we take α ∗ R and α ∗ R is on the true path. Note

that if α is on the true path, then there exist only finitely many stages s

when the true path is to the left of α.


(ii) Lemma 3.2.1: Let α be on the true path. If α ∗ R is on the true path, then

(b, xα) is not dense.

Proof : Fix the least stage t such that α ∗ R is on the path at stage t and the

path is never to the left of α ∗R after t. At stage t, α defines yα and zα and

places them so that zα <L yα <L xα and (zα, xα) = {yα}. Since α never takes

outcome B after stage t, these witnesses yα and zα are never initialized

by α. Therefore they remain defined forever. To prove that (b, xα) is not

dense, it suffices to show that no strategy can add points to [zα, xα] after

stage t. There are two possibilities.

(a) α places points in [zα, xα] after stage t.

If α places a point in [zα, xα], then it takes outcome α ∗ B. Since α ∗ B <L

α ∗R and the path is never left of α ∗R after stage t, α cannot place any

points in [zα, xα] after stage t.

(b) Another strategy β places points in [zα, xα]. In this case, we must have

wα <L wβ. Consider the ways this could happen.

(i) If β ⊆ α, then wα <L wβ means α(|β|) = R and hence β ∗ R ⊆ α. If

β adds points to L, it takes outcome β ∗ B which is left of α ∗ R.

Therefore, β adds no more points after stage t.

(ii) If α ⊆ β, then wα <L wβ means β(|α|) = B and hence α ∗B ⊆ β. Since α

takes outcome R at all α-stages after t, β is never eligible to act after

stage t and hence does not add any points after stage t.


(iii) If α and β are incomparable, then wα <L wβ means β <L α. However,

the path is never to the left of α ∗ R after stage t and therefore never

to the left of α after stage t. Hence, β cannot add points after stage t.

In all cases, we see that no strategy β ≠ α can add new points to [zα, xα]

after stage t.

(iii) Lemma 3.2.2: Let α be on the true path. If α ∗ B is on the true path, then

(b, xα) is dense.

Proof: If α ∗ B is on the true path, then it is eligible to act infinitely often and it adds points to make (b, xα) isomorphic to Q. Hence, (b, xα) is dense.

(iv) Lemma 3.2.3: Let α be an Re strategy on the true path. Then, (b, xα) is not

dense if and only if e ∈ Inf if and only if α ∗ R is on the true path.

Proof: If e ∉ Inf, then by our local action for α, α ∗ R is on the true path. By Lemma 3.2.1, (b, xα) is not dense. If e ∈ Inf, then α ∗ B is on the true path and by Lemma 3.2.2, (b, xα) is dense.

(v) Lemma 3.2.4: 0′′ ≤T DenL(b), where DenL(b) = {c | (b, c) is dense in L}.

Proof: We define a function f : N → T by setting f(e) = α if α is the Re strategy on the true path. Notice that f is computable from DenL(b) since f(0) is the unique R0 strategy and, by Lemma 3.2.3,

f(e + 1) = f(e) ∗ B if (b, x_{f(e)}) is dense, and f(e + 1) = f(e) ∗ R if (b, x_{f(e)}) is not dense.


We can compute 0′′ from f using Lemma 3.2.3 since

n ∈ Inf if and only if (b, x_{f(n)}) is dense,

and hence n ∈ Inf if and only if f(n + 1) = f(n) ∗ B. Therefore, we have Inf ≤T f and f ≤T DenL(b), so Inf ≤T DenL(b).
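The two reductions in this proof compose mechanically. Below is a minimal sketch under stated assumptions: is_dense(x) is a hypothetical oracle answering whether (b, x) is dense in L, and x_of(α) is a hypothetical map giving the witness xα of the strategy at node α. With those in hand, f and membership in Inf are computed exactly as in the proof.

```python
# Sketch of the reduction Inf <=_T f <=_T DenL(b) from Lemma 3.2.4.
# Nodes of the strategy tree are tuples of outcomes 'B' and 'R'.

def true_path_strategy(e, is_dense, x_of):
    """f(e): the R_e strategy on the true path, as a tuple of outcomes."""
    alpha = ()                       # f(0) is the unique R_0 strategy (the root)
    for _ in range(e):
        # f(k+1) = f(k)*B if (b, x_{f(k)}) is dense, else f(k)*R
        alpha = alpha + ('B' if is_dense(x_of(alpha)) else 'R',)
    return alpha

def in_Inf(n, is_dense, x_of):
    """n is in Inf  iff  (b, x_{f(n)}) is dense."""
    return is_dense(x_of(true_path_strategy(n, is_dense, x_of)))

# Toy instance: pretend Inf is the set of even numbers, so the oracle
# answers according to the parity of the level of the node.
x_of = lambda alpha: alpha           # use the node itself as its witness
is_dense = lambda alpha: len(alpha) % 2 == 0
```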

(vi) Effective Construction

The construction is effective because only a finite number of things are done at each stage for each requirement Rn. Also, since there is at least one n ∈ Inf, we build at least one copy of Q, so the construction uses all of the natural numbers. Hence the domain of L is N, which is computable. This implies that to compare i and j in the order, we need only run the construction until both elements appear and compare where they land in L.
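The comparison procedure just described can be sketched as follows. Here stage_order(s) is a hypothetical stand-in for the finite linear order built by stage s of the construction, returned as a list of natural numbers in L-order; the toy_stage below is only an illustrative example, not the order L of the theorem.

```python
# Sketch of deciding i <_L j: run the construction stage by stage until both
# elements have appeared, then read off their relative position.

def less_in_L(i, j, stage_order):
    """Return True iff i <_L j, where stage_order(s) is the finite
    linear order built by stage s (a list in L-order)."""
    s = 0
    while True:
        order = stage_order(s)
        if i in order and j in order:
            return order.index(i) < order.index(j)
        s += 1

# Toy construction: by stage s, the points 0..s have been laid down,
# odd numbers going to the left of 0 and even numbers to the right.
def toy_stage(s):
    order = [0]
    for n in range(1, s + 1):
        if n % 2:
            order.insert(0, n)
        else:
            order.append(n)
    return order
```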

