MAPS WITH HOLES
A thesis submitted to the University of Manchester
for the degree of Doctor of Philosophy
in the Faculty of Science and Engineering
2016
Lyndsey Rachel Clark
School of Mathematics
Contents
Abstract 5
Declaration 6
Copyright Statement 7
Acknowledgements 8
1 Introduction 9
I Holes in one dimensional expanding maps 12
2 Combinatorics on words 13
2.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
2.1.1 Finite words . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
2.1.2 Infinite words . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
2.1.3 Entropy and Hausdorff dimension . . . . . . . . . . . . . . . . . 16
2.2 Balanced words . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
2.3 Extremality and trees . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
3 β-expansions 26
3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
3.2 Greedy and lazy expansions . . . . . . . . . . . . . . . . . . . . . . . . 27
3.2.1 Greedy expansions . . . . . . . . . . . . . . . . . . . . . . . . . 28
3.2.2 Lazy expansions . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
3.3 Intermediate expansions . . . . . . . . . . . . . . . . . . . . . . . . . . 32
3.3.1 Link to Lorenz maps . . . . . . . . . . . . . . . . . . . . . . . . 36
4 Admissible balanced words 37
4.1 Balanced words and intermediate β-transformations . . . . . . . . . . . 37
4.1.1 Descendants of balanced words . . . . . . . . . . . . . . . . . . 40
4.2 Transitivity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
5 Tβ,α with a hole 47
5.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
5.2 Extremal pairs and maximality . . . . . . . . . . . . . . . . . . . . . . 50
5.3 Small b and large a for D0 and D2 . . . . . . . . . . . . . . . . . . . . . 57
5.4 Balanced pairs and their preimages . . . . . . . . . . . . . . . . . . . . 60
6 Rγ and Rδ 63
6.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
6.2 Primary trees . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
6.3 Secondary trees . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
6.3.1 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
7 Small b and large a 76
7.1 b1(β, α) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
7.2 Trees for small b and large a . . . . . . . . . . . . . . . . . . . . . . . . 81
8 Discussion of the non-transitive case 85
9 Summary and worked examples 93
9.1 The easiest case that is not the doubling map . . . . . . . . . . . . . . 94
9.2 A greedy case with primary trees . . . . . . . . . . . . . . . . . . . . . 95
9.3 Two intermediate cases . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
9.3.1 An easy case . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
9.3.2 A difficult case . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
9.4 A non-transitive case . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
10 Large holes for Di 112
II Holes for the baker’s map 114
11 Introduction 115
11.1 The baker’s map . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
11.2 Interior holes are avoidable . . . . . . . . . . . . . . . . . . . . . . . . . 118
12 Corner holes for the baker’s map 119
12.1 The symmetric case . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
12.2 The asymmetric case . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122
12.2.1 D3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122
12.2.2 D0 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 124
13 Convex traps 128
13.1 The kite K . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128
13.2 The parallelogram P . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
14 Conclusion 134
Bibliography 136
Word count 33,795
The University of Manchester
Lyndsey Rachel Clark
Doctor of Philosophy
Maps with Holes
November 2, 2016
Abstract
Consider an expanding map T on a compact metric space X. Let H be a simply connected subset of X. We call H a hole. Then consider the set of points whose orbits under T never fall into the hole. We call this the avoidance set J of H.
In the first part of this thesis, we study avoidance sets when X = [0, 1] and the map T is an intermediate β-transformation:
T (x) = Tβ,α(x) = βx+ α mod 1.
One would expect that when the hole H = (a, b) ⊂ [0, 1] is large, every orbit should fall into the hole, and so the avoidance set should be empty. Similarly, when the hole is small, the avoidance set should be in some sense large. In particular, we wish to understand the following sets:
D0(β, α) := {(a, b) ∈ [0, 1]2 : Jβ,α(a, b) ≠ ∅},
D1(β, α) := {(a, b) ∈ [0, 1]2 : dimH Jβ,α(a, b) > 0},
D2(β, α) := {(a, b) ∈ [0, 1]2 : ∃N ∈ N : for all n > N, Jβ,α(a, b) contains an orbit of period n}.
These were studied for the doubling map by Glendinning and Sidorov [17] and by Hare and Sidorov [20]. Extending to the intermediate case causes substantial additional complications. This thesis describes these sets Di almost completely, and provides conjectures for the small remaining regions.
The second part of this thesis studies holes for the baker's map on X = [0, 1]2. The higher dimensional case is shown to behave very differently to one dimensional maps. We present some conjectures for holes in the corners of the square, and some interesting results regarding convex holes with empty avoidance sets.
Declaration
No portion of the work referred to in the thesis has been
submitted in support of an application for another degree
or qualification of this or any other university or other
institute of learning.
Copyright Statement
i. The author of this thesis (including any appendices and/or schedules to this thesis)
owns certain copyright or related rights in it (the “Copyright”) and s/he has given
The University of Manchester certain rights to use such Copyright, including for
administrative purposes.
ii. Copies of this thesis, either in full or in extracts and whether in hard or electronic
copy, may be made only in accordance with the Copyright, Designs and Patents
Act 1988 (as amended) and regulations issued under it or, where appropriate, in
accordance with licensing agreements which the University has from time to time.
This page must form part of any such copies made.
iii. The ownership of certain Copyright, patents, designs, trade marks and other intel-
lectual property (the “Intellectual Property”) and any reproductions of copyright
works in the thesis, for example graphs and tables (“Reproductions”), which may
be described in this thesis, may not be owned by the author and may be owned by
third parties. Such Intellectual Property and Reproductions cannot and must not
be made available for use without the prior written permission of the owner(s) of
the relevant Intellectual Property and/or Reproductions.
iv. Further information on the conditions under which disclosure, publication and com-
mercialisation of this thesis, the Copyright and any Intellectual Property and/or
Reproductions described in it may take place is available in the University IP Policy
(see http://documents.manchester.ac.uk/DocuInfo.aspx?DocID=487), in any rele-
vant Thesis restriction declarations deposited in the University Library, The Univer-
sity Library’s regulations (see http://www.manchester.ac.uk/library/aboutus/regulations)
and in The University’s Policy on Presentation of Theses.
Acknowledgements
There are many people that I would like to thank for their help and support throughout
my PhD. Firstly, I must of course thank my supervisor Nikita Sidorov who introduced
me to the topic of maps with holes. I am also grateful to the dynamics group, in
particular to Paul Glendinning, for many interesting discussions over the course of
my PhD, to Nige Ray for his timely advice, and of course my two mathematical
brothers and good friends Simon Baker and Rafael Alcaraz-Barrera, whose help and
encouragement was invaluable.
I would also like to thank Charles Walkden for his impressive hat juggling, words
of wisdom, sarcasm, and above all friendship, without which I would never have stayed
sane through my final year.
For their friendship and support I would also like to thank Georgia Kime (hang in
there, you can do it) and the entirety of Ultraviolet.
I must also apologise to my sister for failing to include the word “zebra”, to Charles
for the lack of “contrafibularities”, and to Vandita for the absence of “bespectacled”
— I have been reliably informed that including them in these acknowledgements does
not count! I can but offer up to Vandita the presence of innumerable trees as a poor
substitute, along with the defence to Nikki that her thesis did not include the word
“camel”, so I feel the lack of zebras is fair.
Finally, my eternal thanks go to my parents, my sister, and Mike, who have col-
lectively encouraged me, supported me, and put up with me throughout the entire
process. I cannot thank them enough.
Chapter 1
Introduction
Consider an expanding map T on a compact metric space X. Let H be a simply
connected subset of X. We call H a hole. Then consider the set of points whose orbits
under T never fall into the hole. We call this the avoidance set J of H:
J (H) = {x ∈ X : T n(x) /∈ H for all n ∈ Z},
if T is invertible, or
J (H) = {x ∈ X : T n(x) /∈ H for all n ∈ N},
otherwise. This is the topic of maps with holes, also referred to as open maps or open
dynamical systems. Common objects of study include the restriction of the map T to
J (H), the avoidance sets (also called survivor sets) themselves, and so-called “escape
rates” — the rate at which points fall into a hole. These have been studied mostly for
hyperbolic systems: see for example the book by Bahsoun et al. [2], along with various
papers including Bundfuss et al. [6] and Chernov and Markarian [7, 8].
In this thesis, we study the relationship between the size and placement of the hole
H and the size of the avoidance set J (H).
In Part 1, we consider avoidance sets for intermediate β-expansions, where X =
[0, 1] and
T (x) = Tβ,α(x) = βx+ α mod 1,
with β ∈ (1, 2) and α ∈ (0, 2 − β). Intermediate β-expansions are very well studied.
Originally, β-expansions with α = 0 arose as a way of writing a number in a non-integer
base:
x = Σ_{i=1}^∞ xi/β^i,

with xi ∈ {0, 1, . . . , ⌊β⌋}. These were first studied by Rényi [30] in 1957, and then by
Parry [27] in 1960, and offered up a world of interesting number theoretic questions. In
particular, intermediate expansions turned out to be very useful to the study of Lorenz
maps: piecewise monotonic expanding maps of the interval with one discontinuity.
Lorenz maps are semi-conjugate to intermediate β-transformations [15]. This makes
intermediate β-transformations a good class of maps to study: there is a large body
of literature already available on them, and results transfer through conjugacy to a
much wider class of maps.
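As an aside, the map Tβ,α and the membership test behind the avoidance set can be sketched in a few lines of Python. This is an illustration only, not part of the thesis: the function names are my own, the iteration count is finite, and floating-point arithmetic can only suggest that an orbit avoids a hole, never prove it.

```python
# Hypothetical sketch: iterate T_{beta,alpha}(x) = beta*x + alpha mod 1 and
# test whether the orbit of x enters an open hole (a, b) within n_iter steps.

def T(x, beta, alpha):
    """One step of the intermediate beta-transformation."""
    return (beta * x + alpha) % 1.0

def falls_in_hole(x, beta, alpha, a, b, n_iter=30):
    """True if some T^n(x), 0 <= n < n_iter, lies in the open hole (a, b)."""
    for _ in range(n_iter):
        if a < x < b:
            return True
        x = T(x, beta, alpha)
    return False

# Doubling map (beta = 2, alpha = 0): the 2-cycle {1/3, 2/3} avoids the hole
# (0.35, 0.65), whereas the orbit of 0.1 reaches 0.4 and falls in.
print(falls_in_hole(1/3, 2.0, 0.0, 0.35, 0.65))  # False
print(falls_in_hole(0.1, 2.0, 0.0, 0.35, 0.65))  # True
```

The iteration count is kept small deliberately: under repeated doubling a float loses one bit of precision per step, so long orbits are numerically meaningless.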
The main tool we use to study avoidance sets is symbolic dynamics. To this end,
Chapter 2 contains the necessary background material on symbolic dynamics and
combinatorics on words, whilst Chapter 3 covers the background for intermediate β-
expansions. Chapter 4 begins to combine the two and contains a new, more convenient
way of describing the set of non-transitive intermediate β-expansions.
After these three chapters, we come to the main problem. One would expect that
when the hole H = (a, b) ⊂ [0, 1] is large, every orbit should fall into the hole, and so
the avoidance set should be empty. Similarly, when the hole is small, the avoidance
set should be in some sense large. In particular, we wish to understand the following
sets:
D0(β, α) := {(a, b) ∈ [0, 1]2 : Jβ,α(a, b) ≠ ∅},
D1(β, α) := {(a, b) ∈ [0, 1]2 : dimH Jβ,α(a, b) > 0},
D2(β, α) := {(a, b) ∈ [0, 1]2 : ∃N ∈ N : for all n > N,
Jβ,α(a, b) contains an orbit of period n}.
These were studied for the doubling map by Glendinning and Sidorov [17] and by Hare
and Sidorov [20]. The main results used in these papers are collated, altered slightly
and reformulated for general intermediate β-expansions in Chapter 5. Unfortunately
intermediate β-expansions are significantly more complicated than the doubling map.
Greedy expansions (that is, α = 0) are slightly easier and were studied in my paper
[9]. Much of this work is repeated in Chapters 5 and 6, and further adapted for the
intermediate case. This completes the majority of D0, D1 and D2, but still leaves some
gaps in the intermediate case. Chapter 6 ends with conjectures for these remaining
regions, which, being very small, are very difficult to investigate.
In Chapter 7, we study holes (a, b) where b is small or a is large. In these cases,
D0 is simple, but D1 and D2 are complicated. My paper [9] has a mistake in that it
omits these cases; Chapter 7 corrects this issue in part, and conjecturally in full.
There are some intermediate β-expansions which are non-transitive. This causes
all previous methods to break down somewhat, and so Chapter 8 addresses these cases.
In Chapter 9, we present a number of full worked examples of D0, D1 and D2,
along with pictures.
Finally Chapter 10 contains some results regarding the largest possible holes on
the boundary of each Di. This completes Part 1.
In Part 2 of this thesis, we study holes for the baker’s map on [0, 1)2:
B(x, y) = (2x, y/2)            if 0 ≤ x < 1/2,
          (2x − 1, (y + 1)/2)  if 1/2 ≤ x < 1.
The baker’s map is the higher dimensional analogue of the doubling map, and so is
a good place to start. It will be seen immediately in the first chapter that higher
dimensional maps behave very differently to the one-dimensional case: any interior
hole, that is H ⊂ (0, 1)2, has an avoidance set with positive Hausdorff dimension.
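The two-branch definition above translates directly into code. The following is a minimal sketch of my own, not taken from the thesis; it only evaluates the map in floating point.

```python
# Hypothetical sketch of the baker's map B on the half-open unit square:
# the left half-column is flattened onto the bottom half of the square,
# the right half-column onto the top half.

def baker(x, y):
    """One step of the baker's map on [0, 1)^2."""
    if x < 0.5:
        return 2 * x, y / 2
    return 2 * x - 1, (y + 1) / 2

print(baker(0.25, 0.5))  # (0.5, 0.25): left branch, y is halved
print(baker(0.75, 0.5))  # (0.5, 0.75): right branch, y moves to the top half
```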
In Chapter 12, we investigate corner holes [0, a)2 ∪ (b, 1)2. These are slightly analogous
to holes (a, b) in one dimension, but still offer enough differences to be interesting.
Finally, Chapter 13 presents some results regarding convex traps: convex holes where
J (H) is empty.
Part I
Holes in one dimensional expanding
maps
Chapter 2
Combinatorics on words
2.1 Introduction
One of the main tools in studying expanding maps is symbolic dynamics. Expanding
maps tend to be defined on a simple space, but are often very complicated:
topologically mixing, ergodic, and generally quite difficult to approach. Symbolic
dynamics offers a trade-off: instead of studying the original complicated map on an
easy space, we convert to some symbolic space and instead study an easy map on
a complicated space. This may open up a different way of approaching the original
problem. Of course, the symbolic space may be so complicated as to be next to use-
less. However, any results one can retrieve should apply to any map that corresponds
sensibly to the symbolic space, giving potentially a large volume of results almost for
free.
This section therefore introduces the tools needed for symbolic dynamics and for
maps with holes. These tools primarily come from the field of combinatorics on words.
As with much of mathematics, this is a far broader area than first appearances might
suggest. A thorough discussion of the field may be found in the books by Lothaire:
Combinatorics on Words, Algebraic Combinatorics on Words, and Applied
Combinatorics on Words [23, 24, 25]. As mentioned in the prefaces of each book, Lothaire
is in fact a collection of mathematicians, with each authoring different chapters and
most chapters having multiple authors. This thesis will mainly draw upon Chapter
2 of Algebraic Combinatorics on Words [24, §2], written by Jean Berstel and Patrice
Seebold, which is devoted to the topic of balanced words. For the more general topic
of symbolic dynamics as a whole we recommend Lind and Marcus An introduction to
Symbolic Dynamics and Coding [22].
2.1.1 Finite words
Let A be a set, called the alphabet, with its elements being called letters. In this thesis
we will only need to consider finite alphabets, so let us assume A to be finite from
here on in. The set of finite words over A is defined to be
A+ = {a1 . . . an | n ≥ 1, ai ∈ A}.
We denote the empty word by ε and define A∗ to be A+ ∪ {ε}. Given two finite words
u = u1 . . . un and v = v1 . . . vm, we denote by uv their concatenation u1 . . . unv1 . . . vm.
In particular uk = u . . . u (k times). Concatenation with the empty word is defined by
wε = εw = w for all w ∈ A+.
Concatenation is clearly associative. Recall that a semigroup is a set closed under
an associative binary operation, and a monoid is a semigroup with an identity element.
This means that A+ is a semigroup and A∗ is a monoid, commonly called the free
monoid over A.
The length of a word w is denoted |w|. The number of occurrences of the letter a
in w is denoted |w|a.
A word u is said to be a factor of w if there exist words x, y such that w = xuy.
If x (respectively y) is the empty word then u is called a prefix (respectively suffix) of
w.
To compare words we use the lexicographic order : given an order on the alphabet,
a word u is lexicographically smaller than a word v (that is, u ≺ v) if either u1 < v1
or there exists k > 1 with ui = vi for 1 ≤ i < k and uk < vk. This is the standard
order in a dictionary.
Two words x and y are said to be conjugate if there exist u, v with x = uv and
y = vu. Equivalently, x is a cyclic permutation of y.
We also introduce the following notation. Given a word w and a factor u of w,
we denote by u-max(w) the lexicographically maximal conjugate of w that begins
with the word u. Similarly we denote by u-min(w) the lexicographically minimal
conjugate of w that begins with the word u. For example, given w = 10100, we have
0-max(w) = 01010 and 1-min(w) = 10010.
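This notation is easy to make concrete. The following is a small sketch of my own (the helper names are not from the thesis): enumerate the cyclic conjugates of w and take the lexicographic extremes among those beginning with u.

```python
# Hypothetical helpers for the u-max / u-min notation on finite 0-1 words.

def conjugates(w):
    """All cyclic permutations of the finite word w."""
    return {w[i:] + w[:i] for i in range(len(w))}

def u_max(u, w):
    """Lexicographically maximal conjugate of w beginning with u."""
    return max(c for c in conjugates(w) if c.startswith(u))

def u_min(u, w):
    """Lexicographically minimal conjugate of w beginning with u."""
    return min(c for c in conjugates(w) if c.startswith(u))

# Reproduces the example in the text for w = 10100:
print(u_max("0", "10100"))  # 01010
print(u_min("1", "10100"))  # 10010
```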
2.1.2 Infinite words
A (right) infinite word over A is defined as being an element of AN; that is a sequence
of letters indexed by the natural numbers. Factors, prefixes and the lexicographic
order can be defined on infinite words in the same way as for finite words.
We define a metric on AN as follows. Given u, v ∈ AN, set d(u, v) = 0 if u = v and
d(u, v) = 2^{−n} otherwise, where

n = min{i ≥ 1 : ui ≠ vi}.
This gives a topology on AN and enables us to define limits in a sensible way: a
sequence u(n) of infinite words converges to a word v if for every i ≥ 1, u(n)i = vi for
all sufficiently large n.
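For intuition, the metric can be computed on (sufficiently long) finite truncations of two infinite words. This sketch and its name are my own; it assumes the truncations are long enough that, if the words differ, they differ within the compared prefix.

```python
# Hypothetical helper: the metric d(u, v) = 2^{-n} on finite truncations,
# where n is the first (1-based) position at which u and v differ.

def d(u, v):
    """Distance between two equal-length truncations of infinite words."""
    if u == v:
        return 0.0
    n = next(i + 1 for i, (a, b) in enumerate(zip(u, v)) if a != b)
    return 2.0 ** (-n)

print(d("0110", "0100"))  # 0.125, since the first difference is at position 3
print(d("0101", "0101"))  # 0.0
```

Words agreeing on a long prefix are close, which is exactly why the convergence notion above (eventual agreement in every coordinate) matches convergence in this metric.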
Using this topology we extend the notation of uk = u . . . u (k times) to write
u∞ = limk→∞ uk. Such a word w = u∞ is called periodic. The length of the shortest
u such that w = u∞ is called the period of w. A word w ∈ AN such that w = uv∞ is
called eventually periodic. A word that is not eventually periodic is called aperiodic.
The shift map σ : AN → AN is defined on infinite words by
σ(w1w2w3 . . . ) = w2w3w4 . . . .
For an infinite word w = w1w2w3 . . . , define max(w) and min(w) to be the
lexicographically largest and smallest shifts of w respectively. If one of these does not exist,
then take max(w) = limn→∞max(w1w2 . . . wn) or min(w) = limn→∞min(w1w2 . . . wn)
instead. In this way we get the maximal and minimal points on the closure of the
orbit of w under σ.
A shift space is a set Σ ⊆ AN such that i) Σ is closed with respect to the topology
on AN defined above, and ii) Σ is invariant under the shift map σ: that is, for all
w ∈ Σ, we have σ(w) ∈ Σ also. We will refer to AN itself as being the full shift on A,
and any other shift space Σ ⊂ AN as being a subshift of AN.
Given two finite words u, v ∈ A+, we write the full shift on these two words as
{u, v}N = {σnw : w is composed of words u and v}.
2.1.3 Entropy and Hausdorff dimension
One of the key things we wish to study regarding maps with holes is for what holes the
avoidance set has positive Hausdorff dimension. As we are using symbolic dynamics,
it is helpful to be able to identify when a shift space has positive Hausdorff dimension.
To do this, we use the entropy. Given a shift space Σ, let Bn(Σ) denote the set of
words (blocks) of length n appearing in Σ. We then define the entropy of Σ as
h(Σ) = lim_{n→∞} (1/n) log |Bn(Σ)|.
It is well known that entropy is invariant under topological conjugacy.
The definition of Hausdorff dimension is more complex. Let S be a subset of a
metric space Σ. We say a collection of balls {Bi(ri)} is a δ-cover of S if S ⊂ ∪iBi and
the radii ri of the Bi satisfy ri < δ. Then given d ∈ [0, ∞), define

C^d_δ(S) = inf { Σ_{i=1}^∞ r_i^d : {Bi(ri)} is a δ-cover of S },
where the infimum is taken over all possible δ-covers of S. Then the d-dimensional
Hausdorff content of S is given by
C^d(S) = lim_{δ→0} C^d_δ(S).
Finally the Hausdorff dimension of S is given by
dimH(S) = inf{d ≥ 0 : C^d(S) = 0}.
This definition and a wealth of information on Hausdorff dimension can be found
in many textbooks, for example Falconer’s Fractal Geometry [12]. We do not actu-
ally need this level of detail: we are only interested in whether or not the Hausdorff
dimension is greater than 0. Recall the following:
Lemma 2.1.1. Any countable set has Hausdorff dimension 0.
It is also well known that positive entropy implies positive Hausdorff dimension
(see Young [36]). This gives us the following useful result:
Lemma 2.1.2. Let u, v ∈ A+ be two finite words such that u ≠ v^n and v ≠ u^n for all
n ∈ N. Then Σ = {u, v}N has positive Hausdorff dimension.
Proof. We show that Σ has positive entropy. To see this, write ℓ = lcm(|u|, |v|) and
consider n = kℓ. For such an n, we may form words composed of blocks u^(ℓ/|u|) and
v^(ℓ/|v|). These blocks, each of length ℓ, may be freely concatenated. Thus there are at
least 2^k words in Σ of length n.
This means that for n = kℓ we have |Bn(Σ)| ≥ 2^k = 2^(n/ℓ). As |Bn| is increasing, we have

h(Σ) ≥ lim_{n→∞} (1/n) log 2^(n/ℓ) = (log 2)/ℓ > 0.
Thus Σ has positive entropy and therefore positive Hausdorff dimension.
This is all we need regarding Hausdorff dimension.
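The counting step in the proof of Lemma 2.1.2 can be checked directly. The sketch below is my own illustration (not from the thesis): it pads u and v to blocks of common length ℓ = lcm(|u|, |v|) and enumerates the concatenations of k blocks, giving at least 2^k distinct words of length kℓ.

```python
# Hypothetical helper illustrating the 2^k lower bound behind Lemma 2.1.2.

from itertools import product
from math import gcd

def block_words(u, v, k):
    """All concatenations of k blocks, each block u^(l/|u|) or v^(l/|v|)."""
    l = len(u) * len(v) // gcd(len(u), len(v))  # l = lcm(|u|, |v|)
    A, B = u * (l // len(u)), v * (l // len(v))  # the two length-l blocks
    return {"".join(choice) for choice in product((A, B), repeat=k)}

# With u = 1, v = 10 the blocks are 11 and 10, so the count is exactly 2^k:
print(len(block_words("1", "10", 5)))  # 32
```

The hypotheses u ≠ v^n and v ≠ u^n are what guarantee the two padded blocks differ, so that distinct block choices really give distinct words.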
2.2 Balanced words
One of the key topics needed in this thesis is the topic of balanced and Sturmian words.
A good intuition for these words and the tree structure inherent in their formation
will be of great help in understanding the later results. From here on in we take our
alphabet to be A = {0, 1}.
There are multiple equivalent definitions of balanced words, and they have many
nice properties. These words are very well studied, so we will state the properties we
need without proof, but direct the interested reader variously to the books by Lothaire
[24, §2], Allouche and Shallit [1, §9, §10.5], or Fogg [14, §6], any one of which contains
a thorough exposition of the topic. There is also a survey paper by Vuillon [35] and a
useful paper investigating balanced words as rotation maps by Bullett and Sentenac
[5].
Definition 2.2.1. A finite or infinite word w is said to be balanced if for any two
factors u and v of w of equal length, we have that ||u|1 − |v|1| ≤ 1. A finite word w
is called cyclically balanced if w2 is balanced. Infinite aperiodic balanced words are
commonly called Sturmian words.
Example 2.2.2. A balanced word cannot contain both 11 and 00, because ||11|1 − |00|1| = 2.
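Definition 2.2.1 admits a direct brute-force check. The helper below is my own sketch, not part of the thesis; it compares the number of 1s across all factors of each length.

```python
# Hypothetical checker for Definition 2.2.1: w is balanced if, for every
# factor length n, the count of 1s across length-n factors varies by at most 1.

def is_balanced(w):
    for n in range(1, len(w)):
        ones = [w[i:i + n].count("1") for i in range(len(w) - n + 1)]
        if max(ones) - min(ones) > 1:
            return False
    return True

print(is_balanced("00100100"))  # True  (the cutting sequence of Figure 2.1)
print(is_balanced("1100"))      # False (contains both 11 and 00)
```

Cyclic balancedness of a finite word w can then be tested as `is_balanced(w * 2)`, matching the definition via w².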
Figure 2.1: A cutting sequence, 00100100.
The following equivalent definition gives a clear visual idea of this property, as
shown in Figure 2.1.
Definition 2.2.3 (Cutting sequences). Draw a line on an integer grid. Where the line
cuts a vertical, write a 0. Where the line cuts a horizontal, write a 1.
The word obtained is clearly affected by the slope of the line. More precisely, for
an infinite cutting sequence w, define its slope to be
r = lim_{n→∞} |w1 . . . wn|_1 / n.
This limit always exists.
It turns out that a balanced word of a given slope is unique up to shifting. In this
way balanced words are wholly determined by their slope. An irrational slope gives
an infinite aperiodic balanced word. A rational slope p/q instead gives a periodic word,
formed from the unique (up to conjugacy) cyclically balanced word of length q containing
p 1s.
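There is a standard arithmetic shortcut for generating a cutting sequence of a given rational slope, the so-called mechanical word; the helper below is my own sketch and agrees with the cutting-sequence construction up to a shift.

```python
# Hypothetical helper: the (lower) mechanical word of slope p/q, whose
# i-th digit is floor((i+1)p/q) - floor(ip/q). This has exactly slope p/q.

def mechanical(p, q, n):
    """First n digits of the slope-p/q mechanical (cutting) word."""
    return "".join(str((i + 1) * p // q - i * p // q) for i in range(n))

print(mechanical(1, 3, 8))  # 00100100, matching Figure 2.1
```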
It would hence be convenient to be able to go between the value r and the
corresponding balanced word easily without drawing lines and cutting sequences. This can
be done using the Farey tree.
Definition 2.2.4 (Farey tree). We construct the Farey tree inductively, defining words
vp/q for all p/q ∈ [0, 1]. Take v0/1 = 0 and v1/1 = 1 as initial words with associated
fractions 0/1 and 1/1 respectively. Two fractions a/b and c/d with a/b < c/d are said
to be neighbours if bc− ad = 1, and two words va/b and vc/d are said to be neighbours
if their associated fractions are neighbours.
Given two neighbouring words vr1 and vr2 such that v∞r1 < v∞r2, combine the
associated fractions r1 = a1/b1 and r2 = a2/b2 to make r1 ⊕ r2 = (a1 + a2)/(b1 + b2). Then
we define vr1⊕r2 = vr2vr1 .
Figure 2.2: The Farey tree and corresponding balanced words. The fractions
0/1, 1/1, 1/2, 1/3, 2/3, 1/4, 3/4, 1/5, 2/5, 3/5, 4/5 in the tree carry the words
0, 1, 10, 100, 110, 1000, 1110, 10000, 10100, 11010, 11110 respectively.
The beginning of the trees for both fractions and words are depicted in Figure 2.2.
Given r = r1 ⊕ r2 with r1 < r2 we will refer to r1 and r2 as the left and right
Farey parents respectively, and r will be referred to as the child of r1 and r2. We will
also use these terms for the associated words where appropriate. Whenever we write
r = r1 ⊕ r2 we will always assume that r1 < r2.
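Definition 2.2.4 can be implemented by descending the tree of mediants towards a target fraction. The function below is my own sketch (not from the thesis); it combines words by the rule v_{r1⊕r2} = v_{r2} v_{r1}, writing the right parent's word first, which is consistent since r1 < r2 gives v∞r1 < v∞r2.

```python
# Hypothetical helper: build v_{p/q} by walking the Farey (mediant) tree.

from math import gcd

def farey_word(p, q):
    """The balanced word v_{p/q} from the Farey tree, for 0 < p/q < 1."""
    assert 0 < p < q and gcd(p, q) == 1
    a, b, va = 0, 1, "0"   # left neighbour 0/1 and its word
    c, d, vc = 1, 1, "1"   # right neighbour 1/1 and its word
    while True:
        m, n = a + c, b + d      # mediant of the two neighbours
        vm = vc + va             # child word: right parent then left parent
        if (m, n) == (p, q):
            return vm
        if p * n < m * q:        # target lies left of the mediant
            c, d, vc = m, n, vm
        else:                    # target lies right of the mediant
            a, b, va = m, n, vm

# Matches the words in Figure 2.2:
print(farey_word(1, 2))  # 10
print(farey_word(2, 5))  # 10100
print(farey_word(3, 5))  # 11010
```

Swapping the concatenation order (left parent first) would instead produce the minimal shifts ur, as remarked in the text.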
We can see that for each r this tree gives us the maximal shift of the balanced word
corresponding to r. In general, we will always denote the maximal shift by vr and the
minimal shift by ur. Notice that if we instead defined our tree by concatenating the
left parent followed by the right, this would give the minimal shifts ur instead of vr.
For rational r, define Xr to be the finite set
Xr = {σn(v∞r ) : n ∈ N}.
These sets Xr have been well studied in [5].
The limit points of the Farey tree correspond to irrational r, and give rise to the
infinite aperiodic balanced words known as Sturmian sequences. In this case we define
Xr as follows:
Xr = {σnvr : n ∈ N}.
Here the set Xr is a Cantor set with Hausdorff dimension 0 and vr is the supremum
of Xr ([5]).
We also mention the nice interaction between the Farey tree and continued fractions.
Let r be given by a continued fraction denoted [0; a1, . . . , an], that is to say we have

r = 1/(a1 + 1/(a2 + 1/(· · · + 1/an))).
In general we will always assume an > 1. The Farey parents of r are then easy to find.
Proposition 2.2.5. Given a rational r = [0; a1, . . . , an] with an > 1, the Farey parents
of r are given by [0; a1, . . . , an−1] and [0; a1, . . . , an−1, an − 1]. If n is even then the
former is the right Farey parent; if n is odd then the latter is the right Farey parent.
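Proposition 2.2.5 can be verified with exact rational arithmetic. The sketch below is my own (helper names are not from the thesis): it computes the partial quotients of p/q, evaluates the two claimed parents, and checks that their mediant recovers r.

```python
# Hypothetical check of Proposition 2.2.5 using exact fractions.

from fractions import Fraction

def cf_digits(p, q):
    """Partial quotients [a1, ..., an] of p/q in (0, 1); an > 1 automatically."""
    digits = []
    p, q = q, p          # start the Euclidean algorithm from q/p,
    while q:             # skipping the leading 0 term of [0; a1, ..., an]
        digits.append(p // q)
        p, q = q, p % q
    return digits

def cf_value(digits):
    """Evaluate [0; a1, ..., an] as a Fraction (the empty list gives 0/1)."""
    x = Fraction(0)
    for a in reversed(digits):
        x = Fraction(1, a + x)
    return x

r = Fraction(3, 8)                     # 3/8 = [0; 2, 1, 2]
D = cf_digits(3, 8)
p1 = cf_value(D[:-1])                  # [0; 2, 1]    = 1/3
p2 = cf_value(D[:-1] + [D[-1] - 1])    # [0; 2, 1, 1] = 2/5
mediant = Fraction(p1.numerator + p2.numerator,
                   p1.denominator + p2.denominator)
print(D, p1, p2, mediant == r)  # [2, 1, 2] 1/3 2/5 True
```

Here n = 3 is odd, so by the proposition the latter parent (2/5) is the right Farey parent, consistent with 1/3 < 2/5.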
Finally, we include some miscellaneous properties of balanced words that we will
make use of later.
Proposition 2.2.6. For rational r = r1 ⊕ r2, we have vr = 1wr0 and ur = 0wr1 for
some word wr.1 Furthermore, the word wr is given by wr = wr201wr1 = wr110wr2.
Proof. By induction. The statement holds for r = 1/n (take w1/n = 0n−2) and for
r = (n − 1)/n (take w(n−1)/n = 1n−2). Suppose the result holds for all suitably short
balanced words.
Consider r = r1 ⊕ r2. Then we have
vr = vr2vr1 = 1wr201wr10,
ur = ur1ur2 = 0wr110wr21.
This means the result will instantly follow from showing that wr = wr201wr1 =
wr110wr2 . To see this, notice that for r1 and r2 to be neighbours, we must have
that one is the Farey parent of the other. Suppose that r2 is the right Farey parent of
r1. Then we have that r1 = r0 ⊕ r2. This means that wr1 = wr201wr0 = wr010wr2 by
1The word wr is also actually a palindrome, but we will not prove this here.
our inductive assumption. Then
wr201wr1 = wr201(wr010wr2)
= (wr210wr0)10wr2
= wr110wr2 ,
as required. The case where r1 is the left Farey parent of r2 is similar.
Proposition 2.2.7. If r1 < r2, then v∞r1 < v∞r2 and u∞r1 < u∞r2 .
Proof. Easy induction. The result holds for the first level (that is r = 1/n, (n− 1)/n).
Then when we combine neighbours as r1⊕ r2, for vr we write the bigger word first and
for ur we write the smaller word first.
Proposition 2.2.8. For each rational r = r1 ⊕ r2, the words vr2v∞r and ur1u∞r are
also balanced.
Proof. The easiest way to see the result is to use Proposition 2.2.6. Consider:
vr2v∞r = 1wr20(1wr0)∞,
= 1wr20(1wr110wr20)∞,
= (1wr201wr11)(0wr201wr11)∞,
= 1wr1(0wr1)∞.
For balancedness, the beginning is the only possible problem. Let r = p/q. Then we
know vr is of length q and contains p 1s. By definition of balanced, we hence expect
any factor of length q to contain either p, p+ 1 or p− 1 1s. It is clear to see that the
first q digits contain p+ 1 1s and any other factor of length q contains p 1s. Therefore
the word is balanced. The case of ur1u∞r is similar.
Proposition 2.2.9. Let r = r1 ⊕ r2 = p/q. Suppose for some word x we have
v∞r ≺ max(x) ≺ vr2v∞r. Then min(x) ≺ u∞r. Similarly, if we have ur1u∞r ≺ min(x) ≺ u∞r,
then max(x) ≻ v∞r.
Proof. Recall that we have (v∞r , vr2v∞r ) = ((1w0)∞, 1w1(0w1)∞), with w = wr as per
Proposition 2.2.6. Suppose the maximal shift of x is in this range. This places severe
restrictions on the form of x. Begin from the left endpoint (1w0)∞. In order to be in
the correct range, our max(x) is an increase on this sequence. To increase (1w0)∞, we
must somewhere replace a 0 with a 1. If the change is inside a w, then the maximal
shift is instantly greater than 1w1 and so is outside our range. Therefore the change
must be the written 0 rather than inside a w, and so our maximal shift begins 1w1.
This means the word is now almost too big, and to remain in the range we must now
decrease the following (0w1)∞. Therefore the only possibility is that max(x) = 1w1u
where u < (0w1)∞. But (0w1)∞ = u∞r . Hence by shifting, we can see that the
minimal shift of x satisfies min(x) ≤ u < u∞r as required. The case beginning with
ur1u∞r ≺ min(x) ≺ u∞r is similar.
2.3 Extremality and trees
We will make extensive use of the idea of extremal pairs, closely linked to the study
of Lorenz maps through kneading invariants — see [21] and [18]. Unfortunately this
terminology is used for several related ideas. The original idea as used in the context
of Lorenz maps is to take the extreme points of an orbit, the maximum and minimum.
This is not the variant used in this thesis. Instead, for our context the essential idea
is that given some orbit, we take two neighbouring points of that orbit, meaning that
the rest of the orbit does not fall between these two points.
Definition 2.3.1 (Extremal pairs2). Let (s, t) be a pair of finite {0, 1} words with t
a cyclic permutation of s and s∞ ≺ t∞. Then (s, t) is said to be an extremal pair if
for every k ∈ N, either σks∞ ⪯ s∞ or σks∞ ⪰ t∞.
We also define a version for infinite aperiodic words.
Definition 2.3.2 (Extremal pairs (infinite aperiodic words)). Let (s, t) be a pair of
infinite aperiodic {0, 1} words belonging to the same orbit. Then (s, t) is said to be an
extremal pair if for every k ∈ N, either σks ⪯ s or σks ⪰ t.
Notice that we do not require that s1 = 0 and t1 = 1 as in [17]. However as an
immediate consequence of the definition we have the following:
2There are more general versions of this concept, as seen in [21] and [18]. These cover cases when t is not a cyclic permutation of s. However, these situations are not needed for our context and so we omit a more general definition.
Proposition 2.3.3. For every extremal pair (s, t) there exist words w and u such that
u0 and u1 are factors of w and s = u0-max(w) and t = u1-min(w).
Proof. To see this, take u to be the longest common prefix of s and t. By assumption
s∞ ≺ t∞, so s begins u0 and t begins u1. Then s and t must be the maximal
and minimal words with these prefixes respectively because if not, we would have
(u0-max(w))∞ or (u1-min(w))∞ in (s∞, t∞), which would contradict the definition of
extremal.
This is where our definition relates to the original meaning of extremal: our pairs
use preimages of the maximum and minimum.
In general, we will normally use pairs of finite words, and therefore whenever an
extremal pair (s, t) is mentioned, assume these words to be finite unless told otherwise.
Example 2.3.4. Consider the periodic point (1100)∞. Extremal pairs arising from
this orbit are (0011, 0110), (0110, 1001), and (1001, 1100). However the pair (s, t) =
(0110, 1100) is not extremal because s∞ ≺ (1001)∞ ≺ t∞.
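Definition 2.3.1 can be checked mechanically for finite words: since s^∞ is periodic, only the first |s| shifts need testing, and lexicographic comparison of sufficiently long prefixes decides ⪯. The following Python sketch illustrates this on Example 2.3.4 (the helper names are ours, not notation from the thesis):

```python
def per(w, n):
    """First n symbols of the periodic word w^inf."""
    return (w * (n // len(w) + 1))[:n]

def is_extremal(s, t, N=60):
    """Check Definition 2.3.1: every shift of s^inf is <= s^inf or >= t^inf.

    0-1 strings compare lexicographically exactly as the words do."""
    w = per(s, 2 * N)                      # long window of s^inf
    s_ref, t_ref = per(s, N), per(t, N)
    return all(w[k:k + N] <= s_ref or w[k:k + N] >= t_ref
               for k in range(len(s)))

# Example 2.3.4: pairs from the orbit of (1100)^inf
print(is_extremal("0011", "0110"))  # extremal
print(is_extremal("0110", "1100"))  # not extremal: (1001)^inf lies strictly between
```

Comparing prefixes of length N is exact here because both words are periodic with period dividing N, so any difference between the infinite words appears within the window.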
Given an extremal pair (S, T), we have S, T ∈ {0, 1}^n, and so we may write the pair
more fully as (S(0, 1), T(0, 1)). Then one can take another extremal pair (s, t) and use
this pair as an alphabet to gain the pair (S(s, t), T(s, t)). These words then belong to
{s, t}^n. We call such a pair (S(s, t), T(s, t)) a descendant of (s, t). It is shown in [17,
Proposition 2.1] that all such descendants of an extremal pair are themselves extremal
pairs.
Definition 2.3.5 (Maximal extremal pairs). An extremal pair (s, t) is said to be
maximal if firstly there does not exist any point x such that the orbit of x is contained
in one of either [0, s^∞) or (t^∞, 1), and secondly there does not exist a distinct (finite
or infinite) extremal pair (s′, t′) such that (s^∞, t^∞) ⊂ (s′^∞, t′^∞) (or (s^∞, t^∞) ⊂ (s′, t′) if
this pair is infinite).
Remark 2.3.6. This is essentially the only place where we will mention infinite extremal
pairs. As a matter of fact, it seems not to be strictly necessary: it appears to be
impossible to have a finite extremal pair (s, t) where simultaneously we have that a)
there does not exist another finite extremal pair (s_1, t_1) with (s^∞, t^∞) ⊂ (s_1^∞, t_1^∞) and
b) there does exist an infinite extremal pair (s_2, t_2) with (s^∞, t^∞) ⊂ (s_2, t_2). However,
it is not at all clear how to actually show this, and hence we simply include infinite
pairs in the definition of maximal extremal instead.
Example 2.3.7. For the doubling map, the extremal pair (0110, 1001) is not a maximal
extremal pair because (01, 10) is an extremal pair with
(01)∞ ≺ (0110)∞ ≺ (1001)∞ ≺ (10)∞.
Lemma 2.3.8. Given a balanced word vr, the pair (sr, tr) = (0-max(vr), 1-min(vr)) is
maximal extremal for the shift map on {0, 1}N.
Proof. This is shown in [34, Lemma 2.5]. Our result is in a slightly different format,
so we include a slightly altered proof here for completeness.
Firstly, notice that these pairs can be formed using a tree, shown in Figure 2.3.
The first level pairs are given by (s_r, t_r) = (010^{n−2}, 10^{n−1}) for r = 1/n and (s_r, t_r) =
(01^{n−1}, 101^{n−2}) for r = (n − 1)/n. Two neighbouring pairs (s_1, t_1) and (s_2, t_2) combine
to give the child (s_2 s_1, t_1 t_2). The proof that these pairs are indeed (0-max(v_r), 1-min(v_r))
is another easy induction using Proposition 2.2.6, which we thus omit.

Furthermore, note that for balanced words we have s_r = s_{r_1} t_{r_2} and t_r = t_{r_2} s_{r_1},
where r_1 and r_2 are the left and right Farey parents of r. This is shown in [20, Lemma 3.2].
The result is easy to see for the first level pairs, because for these pairs we have
σ s_r^∞ = t_r^∞. This immediately implies that for any s^∞ < s_r^∞, we have σ(s^∞) < t_r^∞.
Thus in order to make an extremal pair from the orbit of s, we instantly must have
t^∞ < t_r^∞. Hence there cannot exist an extremal pair (s, t) with (s_r^∞, t_r^∞) ⊂ (s^∞, t^∞),
and so the first level balanced pairs are maximal extremal.
We now use induction on q = |s_r|, r = p/q. Suppose the result holds for all q′ < q.
Then the result holds for the Farey parents r_1 and r_2 of r. Thus there are no extremal
pairs avoiding (s_{r_1}^∞, t_{r_1}^∞). Then for an extremal pair (s, t) to avoid (s_r^∞, t_r^∞), we must
have

s_{r_1}^∞ < s^∞ < s_r^∞ = (s_{r_1} t_{r_2})^∞.

This means s begins with s_{r_1}. But then σ^{q_1} s^∞ ∈ (s_{r_1}^∞, t_{r_1}^∞), where q_1 = |s_{r_1}|.
This implies that t^∞ < t_r^∞. Thus we cannot have a distinct extremal pair with
(s_r^∞, t_r^∞) ⊂ (s^∞, t^∞), and so (s_r, t_r) is maximal extremal.
Figure 2.3: Balanced pairs (s_r, t_r) = (0-max(w_r), 1-min(w_r)). [The tree has roots 0 and 1; its first levels contain the pairs (01, 10); (010, 100), (011, 101); (0100, 1000), (0111, 1011); (01000, 10000), (01010, 10010), (01101, 10101), (01111, 10111).]

We also set forth a standard way of creating trees. This is exactly the same as
for balanced words, but instead of having 0 and 1 as the roots we will have two finite
words w1 and w2. We will always have the left root less than the right root, that is
w∞1 < w∞2 . Then we define the first child pair to be (w1w2, w2w1). When combining
a pair (s, t) with either root wi, we take the child to be (swi, twi). When combining
two neighbouring pairs (s1, t1) and (s2, t2), we take their child to be (s2s1, t1t2).
In this way, a general tree is exactly the same as the standard Farey tree after
applying the substitution 0→ w1 and 1→ w2. For this reason we can use the balanced
tree’s p/q almost as a co-ordinate system, and this is how we define neighbouring for
a more general tree: two pairs are neighbouring if their corresponding Farey pairs are
neighbouring.
Sometimes, we will need to create a tree from two root pairs instead of two root
words. This is actually easier as combining with a root is just the same as combining
two neighbouring pairs: we always take the child of (s1, t1) and (s2, t2) to be (s2s1, t1t2).
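The tree-building rules above translate directly into code: a node is either a root word or a pair, and each new level inserts one child between every two neighbouring entries. In the Python sketch below (helper names ours), growing the tree from roots 0 and 1 reproduces the balanced pairs of Figure 2.3:

```python
def child(left, right):
    """Combine two neighbouring tree entries; each is a root word (str) or a pair (s, t)."""
    if isinstance(left, str) and isinstance(right, str):
        return (left + right, right + left)   # two roots: (w1w2, w2w1)
    if isinstance(right, str):                # pair combined with a root wi: (s wi, t wi)
        s, t = left
        return (s + right, t + right)
    if isinstance(left, str):
        s, t = right
        return (s + left, t + left)
    (s1, t1), (s2, t2) = left, right          # two neighbouring pairs: (s2 s1, t1 t2)
    return (s2 + s1, t1 + t2)

def tree_levels(w1, w2, depth):
    """Levels of the pair tree grown between the roots w1 < w2."""
    row, levels = [w1, w2], []
    for _ in range(depth):
        new_row, level = [row[0]], []
        for a, b in zip(row, row[1:]):
            c = child(a, b)
            level.append(c)
            new_row += [c, b]
        row = new_row
        levels.append(level)
    return levels

levels = tree_levels("0", "1", 3)
print(levels[0])   # [('01', '10')]
print(levels[1])   # [('010', '100'), ('011', '101')]
```

The same function applied to two root pairs (with the pair–pair rule throughout) gives the general trees described above.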
Chapter 3
β-expansions
3.1 Introduction
Let β > 1 and x ∈ [0, (⌈β⌉ − 1)/(β − 1)]. Then a representation of the form

x = Σ_{i=1}^∞ x_i β^{−i},   x_i ∈ {0, 1, . . . , ⌈β⌉ − 1},
is called a β-expansion of x. These were first introduced by Rényi in 1957 [30], then
investigated by Parry in 1960 [27], and have been studied ever since. This chapter will
draw upon the many good introductions to this topic, including Dajani and Kraaikamp
[10] and Sidorov [32, 33].
Firstly, recall that if β is an integer, then almost every x has a unique expansion,
and the exceptional set is simply numbers of the form a/βk for some a, k, which have
precisely two expansions. The situation is very different for non-integer values of β.
In this thesis we will only consider β ∈ (1, 2). This means that for all our expan-
sions, xi ∈ {0, 1}.
Consider the following multivalued map τ_β : [0, 1/(β − 1)] → [0, 1/(β − 1)]:

τ_β(x) = βx for x ∈ [0, 1/β),
τ_β(x) = βx or βx − 1 for x ∈ [1/β, 1/(β(β − 1))],
τ_β(x) = βx − 1 for x ∈ (1/(β(β − 1)), 1/(β − 1)].
This has an overlap where there are two possible values of τβ, as shown in Figure
3.1. To generate a β-expansion for some x ∈ [0, 1/(β − 1)], consider the orbit of x.
Then take

x_i = 0 if τ_β^{i−1}(x) ∈ [0, 1/β),
x_i = 0 or 1 if τ_β^{i−1}(x) ∈ [1/β, 1/(β(β − 1))],
x_i = 1 if τ_β^{i−1}(x) ∈ (1/(β(β − 1)), 1/(β − 1)].
In the overlapping (“switch”) region, one has a choice of xi = 0 or xi = 1. One then
takes the branch corresponding to this choice of xi. The existence of different choices
gives rise to multiple expansions of x.
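This branching can be made concrete: recursively follow both branches whenever the orbit point allows them — digit 0 is possible while x ≤ 1/(β(β − 1)), digit 1 while x ≥ 1/β. A small Python sketch under our own naming, for a β below the golden ratio:

```python
def expansion_prefixes(x, beta, n):
    """All length-n digit strings that begin some beta-expansion of x."""
    if n == 0:
        return [""]
    out = []
    if x <= 1 / (beta * (beta - 1)):   # digit 0 keeps beta*x inside [0, 1/(beta-1)]
        out += ["0" + w for w in expansion_prefixes(beta * x, beta, n - 1)]
    if x >= 1 / beta:                  # digit 1 keeps beta*x - 1 nonnegative
        out += ["1" + w for w in expansion_prefixes(beta * x - 1, beta, n - 1)]
    return out

# beta = 1.4 < golden ratio: several branch points occur within 12 steps
print(len(expansion_prefixes(0.5, 1.4, 12)))   # number of distinct length-12 prefixes
```

The count of distinct prefixes grows as the depth increases, reflecting the infinitely branching tree of expansions described in the proof sketch below.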
Theorem 3.1.1 (Erdős, Joó and Komornik, [11]). If 1 < β < ϕ = (1 + √5)/2 then every
x ∈ (0, 1/(β − 1)) has a continuum of β-expansions.
Sketch proof. If 1 < β < ϕ, then taking the left-hand branch we have τ_β(1/β) = 1 <
1/(β(β − 1)). This means that for any x ∈ (0, 1/β) we have that τ_β^n(x) is in the switch
region for some finite n. Similarly, for any x ∈ (1/(β(β − 1)), 1/(β − 1)) we have that
τ_β^m(x) is in the switch region for some finite m. In effect, this means it is impossible
to “jump over” the switch region. Every x can only avoid the switch region for a finite
amount of time before there is another choice of branch and thus choice of expansions.
As this must continue to occur forever, this gives us an infinitely branching tree of
expansions for x, and thus a continuum of expansions.
Theorem 3.1.2 (Sidorov, [31]). If β ∈ (ϕ, 2) then almost every x ∈ (0, 1/(β− 1)) has
a continuum of β-expansions.
This is slightly more complex, but essentially relies on the fact that almost no x
can avoid the switch region forever. A nice depiction of this can be found in [32].
3.2 Greedy and lazy expansions
Various people have studied β-expansions by considering for example the set of x with
a unique expansion, or the set of x with countably many expansions. Other topics
of research include the growth rate of the number of expansions, the approximation
properties of β-expansions, or extensions to when β is negative. However, for the
purposes of this thesis, we do not wish to deal with multiple expansions at once. The
chief way to avoid this is simply to pick one particular expansion and just use that
one.
Figure 3.1: τ_β, β = ϕ = (1 + √5)/2.
3.2.1 Greedy expansions
One particular expansion of x is given by choosing the right-hand βx − 1 branch
whenever possible and thus writing down a 1 whenever possible. This gives the lexi-
cographically largest β-expansion of x and is hence called the greedy expansion. We
know that the right-hand branch of τβ satisfies τβ(x) < x for x > 1, therefore orbits
leaving the region x > 1 will never return. This means we can study the greedy
expansion by restricting to [0, 1] and considering the map Tβ : [0, 1]→ [0, 1] given by
Tβ(x) = βx mod 1,
and as described taking the right-hand branch at 1/β to give Tβ(1/β) = 0. Again this
can be done for any β, but we will restrict ourselves to β ∈ (1, 2). The case β = 2 is
of course the doubling map.
In order to use combinatorics on words in the context of greedy β-transformations,
we wish to make Tβ conjugate to the shift map on some subshift of Σ = {0, 1}N. To
do this we write as before

x = Σ_{i=1}^∞ x_i β^{−i},

with x_i = ⌊β T_β^{i−1}(x)⌋ for all i ≥ 1. We denote the set of possible (“admissible”) greedy
sequences (x_i)_{i=1}^∞ by X_β. In a moment we will describe this set symbolically.
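Computing the greedy digits via x_i = ⌊βT_β^{i−1}(x)⌋ is a two-line loop. A Python sketch with our own naming; with floating point the digits are reliable only for moderately many iterations, since the error grows by a factor β each step:

```python
def greedy_digits(x, beta, n):
    """First n greedy digits of x in base beta (1 < beta < 2, 0 <= x <= 1)."""
    digits = []
    for _ in range(n):
        x *= beta
        d = int(x)          # floor: 0 or 1; takes the right branch when beta*x hits 1
        digits.append(d)
        x -= d
    return digits

phi = (1 + 5 ** 0.5) / 2
print(greedy_digits(0.5, phi, 6))   # [0, 1, 0, 0, 1, 0]
```

For the golden ratio this produces the periodic expansion 0.5 = (010)^∞, which indeed contains no factor 11, consistent with Example 3.2.2 below.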
Figure 3.2: T_β, β = ϕ = (1 + √5)/2.

Before describing X_β, notice that we have actually defined a map π_β : X_β → [0, 1],
given by

π_β((x_i)) = x = Σ_{i=1}^∞ x_i β^{−i},

with its inverse defined by taking x_i = ⌊β T_β^{i−1}(x)⌋ as described. This gives us a
conjugacy between T_β on [0, 1] and the shift map on X_β: we have T_β ◦ π_β = π_β ◦ σ, as
shown:

      X_β  --σ-->  X_β
       |            |
      π_β          π_β
       v            v
    [0, 1] --T_β--> [0, 1]

Note that as π_β^{−1} is not continuous, this is not a topological conjugacy.
The exact structure of the set of admissible greedy sequences Xβ is well studied,
originally by Parry in [27]. We state this result later in Theorem 3.2.1, but beforehand
we attempt to motivate what is otherwise a rather opaque description.
To begin with, notice that the sequence 1∞ would project to 1/(β − 1) under πβ,
which is outside the space [0, 1]. Similarly any sequence beginning with too many 1s
will be too large. Xβ must therefore be something interesting, rather than just the full
shift space {0, 1}N.
Consider the expansion of 1 given by d_i = ⌊β T_β^{i−1}(1)⌋. We want to have only x ≤ 1,
and in fact we want the entire orbit of x to be contained in [0, 1]. This suggests the
description of X_β could be something like the following:

{(x_i)_{i=1}^∞ : x_j x_{j+1} x_{j+2} · · · ⪯ d_1 d_2 d_3 . . . for all j ∈ N}. (3.1)
This is very nearly the correct set, but unfortunately there are two slight complications.
Firstly, at all times we have 10^∞ = 0d_1d_2d_3 . . . , and indeed u10^∞ = u0d_1d_2d_3 . . .
for any finite word u. This is the same as the way that in base 10, we have 0.999 · · · =
1.000 . . . . We deal with this by defining an equivalence relation ∼, setting 10^∞ ∼
0d_1d_2d_3 . . . , and then considering the set (3.1) as above modulo this relation. Generally
speaking throughout this thesis we will not concern ourselves overly with this techni-
cality and will use whichever of these sequences is the most convenient for whatever
we are trying to do.
The second complication is more subtle. We originally defined Tβ with Tβ(1/β) = 0
and wrote down a 1 in our expansion at this point, to have the lexicographically largest
expansion possible. However, if we instead take Tβ(1/β) = 1, this gives another
expansion of 1/β. This is the so-called quasi-greedy expansion. This affects any
preimages of 1/β as well.
If the expansion of 1 is infinite (that is, not ending in 0^∞), then it remains unaffected
and so the greedy and quasi-greedy expansions of 1 are the same. However, if the
greedy expansion of 1 is finite and given by d_1 . . . d_n 1 0^∞, then the quasi-greedy expansion
of 1 is given by (d_1 . . . d_n 0)^∞. This is lexicographically smaller than d_1 . . . d_n 1 0^∞.
For such a β, the same problem will affect any x with a finite greedy expansion; that is,
any x of the form x = Σ_{i=1}^n x_i β^{−i} with x_i ∈ {0, 1}. This means these x will have multiple
expansions that are allowed (“admissible”) under our first attempt (3.1) at describing
X_β. This would be somewhat dubious for our nicely defined conjugacy π_β and its
inverse.
Fortunately this has two possible simple solutions. The first is to define an equiv-
alence relation setting the quasi-greedy and greedy expansions of any point x to be
equivalent. Then as previously Xβ can be given by Equation 3.1 modulo this equiv-
alence relation. Alternatively, one can simply use the quasi-greedy expansion of 1 in
the definition. This is what we shall do, leading us to arrive at our final description:
Theorem 3.2.1 (Parry, [27]). The set of quasi-greedy β-expansions of x ∈ [0, 1] is
given by

X_β = {(x_i)_{i=1}^∞ : x_j x_{j+1} x_{j+2} · · · ⪯ d_1 d_2 d_3 . . . for all j ∈ N}/ ∼,

where d_1 d_2 d_3 . . . denotes the quasi-greedy expansion of 1 in base β and ∼ is the equivalence
relation found by setting 0d_1d_2d_3 · · · = 10^∞.
We will denote the quasi-greedy expansion of 1 for a given β by 1·. Note that we
originally stated we would use admissible greedy expansions and have now defined Xβ
to be admissible quasi-greedy expansions. This is not concerning: the point is that we
could use whichever happens to be the most convenient because they are for all intents
and purposes the same, and either way the important factor is the relevant expansion
of 1. In this thesis, we have made the choice to use the quasi-greedy expansion.
Example 3.2.2. Consider β given by the golden ratio ϕ = (1 + √5)/2. Then β² = β + 1,
or equivalently 1 = 1/β + 1/β². Hence the greedy expansion of 1 is 110^∞ and the quasi-greedy
expansion of 1 is 1· = (10)^∞. Therefore for this value of β, the set X_β is the set
of all 0-1 sequences that do not contain 11 as a factor, subject to 10^∞ = 01· = (01)^∞.
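Parry’s condition is again mechanical to check for periodic candidates: every shift must be lexicographically at most the quasi-greedy expansion of 1. A Python sketch for the golden-ratio case (helper names ours):

```python
def per(w, n):
    """First n symbols of the periodic word w^inf."""
    return (w * (n // len(w) + 1))[:n]

def parry_admissible(cycle, one_expansion, N=60):
    """Check x_j x_{j+1} ... <= d_1 d_2 ... for every shift of cycle^inf."""
    ref = per(one_expansion, N)
    w = per(cycle, 2 * N)
    return all(w[k:k + N] <= ref for k in range(len(cycle)))

# golden ratio: quasi-greedy expansion of 1 is (10)^inf, so the factor 11 is forbidden
print(parry_admissible("0100", "10"))   # True: no factor 11
print(parry_admissible("0110", "10"))   # False: some shift starts 11...
```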
It was shown by Parry in [27] that as β increases, 1· increases lexicographically.
For example, large β close to 2 will have 1· = 1^n 0 . . . for some large n. Small β close
to 1 will have 1· = 10^n 1 . . . for some large n.
Rather than beginning with a β and from this finding its quasi-greedy expansion,
we will usually go about this in reverse: examples will be given in terms of 1·, which
then defines a β.
Throughout this thesis, whenever dealing with the map Tβ, we will refer to a point
x ∈ (0, 1) and its expansion (xi)∞i=1 ∈ Xβ interchangeably. Furthermore, as mentioned
above we have 10∞ = 01· and will use whichever of these is the most convenient for
the situation.
3.2.2 Lazy expansions
The lazy expansion of x is the opposite of the greedy expansion. Whereas the greedy
expansion writes a 1 whenever possible, the lazy expansion writes a 0 whenever
possible.

The lazy expansion is given by the map L_β : [1/β, 1/(β − 1)] → [1/β, 1/(β − 1)],

L_β(x) = βx for x ∈ [1/β, 1/(β(β − 1))],
L_β(x) = βx − 1 for x ∈ (1/(β(β − 1)), 1/(β − 1)].

This is depicted in Figure 3.3.
The expansion of 1/(β − 1) is always 1∞, but the expansion of 1/β varies with β.
This plays the same role as the expansion of 1 for the greedy transformation.
In fact, lazy expansions work in exactly the same way as greedy expansions, but
Figure 3.3: L_β, β = ϕ = (1 + √5)/2.
with the digits flipped. To be admissible, a sequence and all its shifts must be lexico-
graphically greater than the expansion of 1/β. There are the same problems regarding
some points having two expansions, which we again simply declare to be equivalent.
We may also define the quasi-lazy expansion by taking the right-hand branch at the
switch point 1/(β(β − 1)) instead of the left-hand branch.
Notice that we could, if we wanted, move our map to be defined on [0, 1] instead of
[1/β, 1/(β − 1)]. Then we would have L_β(y) = βy + (2 − β) mod 1, y ∈ [0, 1], with the switch
point between the branches given by 1 − 1/β. This will be made clearer in the next
section. Ultimately it is a difference of notation and the actual map is the same with
the same dynamics, so it is not of great importance.
Generally speaking, because lazy expansions are essentially the same as greedy
expansions, we choose to use greedy expansions, as these are the more intuitive and
most results are in terms of greedy expansions rather than lazy expansions.
3.3 Intermediate expansions
An intermediate β-expansion is, sensibly, in between the lazy and greedy expansions.
Intermediate β-expansions can be described in two ways.
We introduce first the description of Dajani and Kraaikamp in [10]. This has the
advantage of providing a clear intuition for how these expansions are intermediate.
Essentially, we choose some point in the switch region, above which we take the right-
hand branch and below which we take the left-hand branch. Formally, for each γ ∈
[0, (2 − β)/(β − 1)], define S_{β,γ} : [γ, γ + 1] → [γ, γ + 1] by:

S_{β,γ}(x) = βx for x ∈ [γ, (1 + γ)/β),
S_{β,γ}(x) = βx − 1 for x ∈ [(1 + γ)/β, 1 + γ].

Figure 3.4: Inset within τ_β, β = ϕ, we have (a) the greedy transformation, (b) an intermediate transformation S_{β,γ} with γ = 0.4, and (c) the lazy transformation.
This is illustrated in Figure 3.4, which instantly demonstrates how these transformations
are ‘intermediate’.
Note that the choice of γ ∈ [0, (2 − β)/(β − 1)] ensures that (1 + γ)/β is in the
switch region.
The second way of describing intermediate β transformations is by considering the
map T_{β,α} : [0, 1] → [0, 1] given by

T_{β,α}(x) = βx + α mod 1,

where β ∈ (1, 2) and α ∈ (0, 2 − β). We denote the set of possible (β, α) by ∆. The
greedy expansions are thus the case α = 0, and lazy expansions have α = 2 − β.
The switch point between the two branches is c = (1 − α)/β, and the definition
of T_{β,α} is ambiguous at this point. Therefore, we define two functions T_{β,α}^± to have
T_{β,α}^+(c) = 0, T_{β,α}^−(c) = 1, and T_{β,α}^±(x) = T_{β,α}(x) for x ≠ c.
The two descriptions of intermediate β transformations are isomorphic: take α =
(β− 1)γ to switch between the two. It is therefore of little consequence which is used.
We will use Tβ,α because it is easier to picture and is more widely used, having first
been studied by Parry in 1964 [28].
Like the greedy transformation, Tβ,α is conjugate to the shift map on a subset of
Σ = {0, 1}N. Given β ∈ (1, 2) and α ∈ (0, 2− β), we therefore define the intermediate
β-expansions of x ∈ [0, 1] as follows:

w_i^±(x) = 0 if (T_{β,α}^±)^{i−1}(x) ∈ [0, (1 − α)/β),
w_i^±(x) = 1 if (T_{β,α}^±)^{i−1}(x) ∈ ((1 − α)/β, 1].
Just like the greedy and quasi-greedy expansions, the difference in the expansions w^+
and w^− occurs at c = (1 − α)/β: if (T_{β,α}^±)^{i−1}(x) = c we take w_i^+(x) = 1 and w_i^−(x) = 0.
This explains the notation of + and − in that w^+ writes down a 1 whenever possible
and is thus the larger of the two expansions.
To return from sequences to x ∈ [0, 1] we define a projection:

π_{β,α}(w) = α/(1 − β) + Σ_{i=1}^∞ w_i β^{−i}.
Notice that πβ,α(w+(x)) = πβ,α(w−(x)) = x; that is to say both expansions do project
sensibly to the same value x.
We define the shift map σ on the sequence space {0, 1}^N by σ(w)_i = w_{i+1}. Then
with the above projection the maps T_{β,α}^± are conjugate to the shift map on suitable
sequence spaces. As shown in [21], these sequence spaces are given by:

X_{β,α}^+ = {w ∈ {0, 1}^N | w^+(0) ⪯ σ^n(w) ≺ w^−(c) or w^+(c) ⪯ σ^n(w) ⪯ w^−(1) ∀n ≥ 0},
X_{β,α}^− = {w ∈ {0, 1}^N | w^+(0) ⪯ σ^n(w) ⪯ w^−(c) or w^+(c) ≺ σ^n(w) ⪯ w^−(1) ∀n ≥ 0},
X_{β,α} = X_{β,α}^+ ∪ X_{β,α}^−.
We will denote w^+(0) = 0_{β,α} and w^−(1) = 1_{β,α}; however, we will generally suppress
subscripts and instead write 0· and 1·, akin to a decimal point. Then w^−(c) = 01· and
w^+(c) = 10·. Thus our entire sequence space can be easily envisaged from just the
expansions 0· and 1· of 0 and 1:
X_{β,α}^+ = {w ∈ {0, 1}^N | 0· ⪯ σ^n(w) ≺ 01· or 10· ⪯ σ^n(w) ⪯ 1· ∀n ≥ 0},
X_{β,α}^− = {w ∈ {0, 1}^N | 0· ⪯ σ^n(w) ⪯ 01· or 10· ≺ σ^n(w) ⪯ 1· ∀n ≥ 0},
X_{β,α} = X_{β,α}^+ ∪ X_{β,α}^−.
The expansions 1· and 0· are closely dependent on (β, α). Because we work primarily
with the sequence space, we will commonly view this in the reverse: examples will be
given in terms of 1· and 0·, which then defines our pair (β, α).
Figure 3.5: The transformation T_{β,α} with 1· = (10)^∞ and 0· = (001)^∞, giving (β, α) ≈ (1.3247, 0.2451).
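For periodic 0· and 1· the pair (β, α) of Figure 3.5 can be recovered numerically: π_{β,α}(1·) = 1 and π_{β,α}(0·) = 0, and subtracting the two equations eliminates α, leaving a one-variable equation solvable by bisection. A Python sketch under our own naming (the monotonicity assumed in the bisection holds for this example but is an assumption in general):

```python
def periodic_value(word, beta):
    """Value of sum_{i>=1} w_i beta^{-i} for the periodic word word^inf."""
    k = len(word)
    head = sum(int(d) * beta ** (k - 1 - i) for i, d in enumerate(word))
    return head / (beta ** k - 1)

def solve_beta_alpha(one_dot, zero_dot, lo=1.0001, hi=1.9999, tol=1e-12):
    """Solve pi(1.) = 1 and pi(0.) = 0: S1(beta) - S0(beta) = 1, alpha = (beta-1) S0(beta)."""
    f = lambda b: periodic_value(one_dot, b) - periodic_value(zero_dot, b) - 1
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(mid) > 0:          # assumed decreasing in beta for this pair of words
            lo = mid
        else:
            hi = mid
    beta = (lo + hi) / 2
    return beta, (beta - 1) * periodic_value(zero_dot, beta)

beta, alpha = solve_beta_alpha("10", "001")
print(round(beta, 4), round(alpha, 4))   # 1.3247 0.2451
```

Here β turns out to satisfy β³ = β + 1, the plastic number, matching the caption of Figure 3.5.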
Throughout this thesis we will refer to a point x and its intermediate β-expansion
w(x) ∈ Xβ,α interchangeably, with the note that there may of course be two expansions
if w+(x) 6= w−(x). However, this will only occur if x is a preimage of c, in which case
its two expansions will be u01· and u10· for some admissible word u and we will specify
which expansion to use as befits the situation.
We also will need to consider certain inadmissible sequences and make them ad-
missible. If a sequence is too large, we wish to take the largest admissible sequence
with the same prefix, and if a sequence is too small, we take the smallest admissible
sequence with the same prefix. We will refer to this as truncating a sequence, using
the notation ⇝ (“truncates to”) as follows:
Example 3.3.1. Suppose 1· = (10)^∞ and 0· = (001)^∞. Then we have

X_{β,α} = {w | (001)^∞ ⪯ σ^n(w) ⪯ 0(10)^∞ or 1(001)^∞ ⪯ σ^n(w) ⪯ (10)^∞ for all n}.
Consider the sequence w_1 = (0101011)^∞. This is inadmissible because 11 is inadmissible.
Then we say w_1 truncates to 010101· = 01010(10)^∞ = (01)^∞. Similarly
consider w_2 = (101000010)^∞. This is inadmissible because 0⁴ is inadmissible. Then
w_2 ⇝ 1010· = 101(001)^∞.
This will be particularly prevalent in the examples in Chapter 9.
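The admissibility test behind Example 3.3.1 can be checked the same way as before: every shift must lie in [0·, 01·] or [10·, 1·]. A Python sketch (helper names ours) comparing long prefixes of the periodic sequences:

```python
def per(w, n):
    """First n symbols of the periodic word w^inf."""
    return (w * (n // len(w) + 1))[:n]

def admissible(cycle, zero_dot, one_dot, N=60):
    """Every shift of cycle^inf lies in [0., 01.] or [10., 1.] (closed version)."""
    lo0, hi0 = per(zero_dot, N), "0" + per(one_dot, N - 1)
    lo1, hi1 = "1" + per(zero_dot, N - 1), per(one_dot, N)
    w = per(cycle, 2 * N)
    return all(lo0 <= w[k:k + N] <= hi0 or lo1 <= w[k:k + N] <= hi1
               for k in range(len(cycle)))

# Example 3.3.1: 1. = (10)^inf, 0. = (001)^inf
print(admissible("0101011", "001", "10"))    # False: contains 11
print(admissible("101000010", "001", "10"))  # False: contains 0000
print(admissible("01", "001", "10"))         # True
```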
Remark 3.3.2. We only actually need to study half of intermediate expansions: we
can then flip 0s and 1s and any results will transfer to the remaining intermediate
expansions. For example, if we understand greedy expansions, then by flipping 0s and
1s we also understand lazy expansions. Note that in doing this maxima will change to
minima and vice versa, but the essence of any result will transfer.
3.3.1 Link to Lorenz maps
By studying intermediate β-expansions, we actually manage to study a far wider class
of maps. A map f : [0, 1] → [0, 1] is said to be a Lorenz map if f satisfies the following:
1. f is differentiable and monotonic on [0, 1] \ {c} for some c ∈ (0, 1);

2. lim_{x↗c} f(x) = 1 and lim_{x↘c} f(x) = 0;

3. f is expanding; that is, there exists ε > 0 such that f′(x) ≥ 1 + ε for all x ≠ c.
Intermediate β-transformations are therefore simple examples of Lorenz maps. To
begin with, it is known that every Lorenz map is semi-conjugate to an intermediate
β-transformation [15], and that every transitive Lorenz map is conjugate to an in-
termediate β-transformation [29]. The exact necessary and sufficient conditions for a
Lorenz map to be conjugate to an intermediate β-transformation were described by
Glendinning in [15].
We do not mention this to discuss the precise details of Lorenz maps, but rather
to draw attention to the fact that the results in this thesis are more widely applicable
than just β-transformations. In essence, now that we have the sequence space Xβ,α,
results will be given involving this sequence space. Therefore, the same results will
apply to any map that can be made conjugate to the same sequence space. The papers
above show that “most” Lorenz maps fall into this category.
Chapter 4
Admissible balanced words
4.1 Balanced words and intermediate
β-transformations
It is shown in [17] that for the doubling map, the maximal extremal pairs are formed
from balanced words. Hence, we want to establish which balanced sequences are
admissible for which (β, α).
We define two functions as follows:
Definition 4.1.1 (γ(β, α) and δ(β, α)). Define γ : ∆ → (0, 1) to be the slope of the
maximal admissible balanced word for a given (β, α) ∈ ∆. Similarly define δ : ∆ →
(0, 1) to be the slope of the minimal admissible balanced word for a given (β, α) ∈ ∆.
For example, for the doubling map we have γ(2, 0) = 1 and δ(2, 0) = 0.
We showed in Proposition 2.2.7 that if r_1 < r_2 then v_{r_1}^∞ ≺ v_{r_2}^∞ and u_{r_1}^∞ ≺ u_{r_2}^∞.
Therefore γ(β, α) and δ(β, α) are non-decreasing in the following sense: for a fixed
0·, γ(β, α) is non-decreasing as 1· increases lexicographically. Similarly for a fixed
1·, δ(β, α) is non-decreasing as 0· increases lexicographically. We also trivially have
δ(β, α) ≤ γ(β, α). This all gives the effect that for a given (β, α) ∈ ∆, v_r^∞ and u_r^∞ are
admissible if and only if δ(β, α) ≤ r ≤ γ(β, α).
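The defining property of γ(β, α) can be explored directly for small denominators: generate the balanced (mechanical) word of slope p/q, take its maximal cyclic shift, and test admissibility against 1·. A Python sketch with our own naming, using the 1· that appears later in Example 4.1.8:

```python
from math import gcd

def mechanical(p, q):
    """Lower mechanical (balanced) word of slope p/q."""
    return "".join(str((i + 1) * p // q - i * p // q) for i in range(q))

def max_shift(w):
    """Lexicographically largest cyclic shift of w."""
    return max(w[i:] + w[:i] for i in range(len(w)))

def per(w, n):
    return (w * (n // len(w) + 1))[:n]

def gamma_scan(one_dot, qmax=8, N=64):
    """Largest slope p/q (q <= qmax) whose balanced word is admissible under 1."""
    ref, best = per(one_dot, N), (0, 1)
    for q in range(2, qmax + 1):
        for p in range(1, q):
            if gcd(p, q) == 1 and per(max_shift(mechanical(p, q)), N) <= ref:
                if p * best[1] > best[0] * q:
                    best = (p, q)
    return best

print(gamma_scan("10010000"))   # (1, 4)
print(gamma_scan("10"))         # (1, 2)
```

This is only a finite scan up to denominator qmax, not the full devil's-staircase function, but it reproduces the plateau values discussed below.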
We can describe these functions using the following lemma:
Lemma 4.1.2 (Admissible balanced words). We have γ(β, α) = r ∈ Q for every
(β, α) such that 1· ∈ [v_r^∞, v_{r_2} v_r^∞], where r_2 is the right Farey parent of r. We have
δ(β, α) = r ∈ Q for every (β, α) such that 0· ∈ [u_{r_1} u_r^∞, u_r^∞], where r_1 is the left Farey
parent of r.
Proof. We show only the γ(β, α) case as the result for δ(β, α) is entirely analogous.
Firstly note again that as v_r^∞ is the maximal element of X_r, it is clear that v_r^∞ is
admissible if and only if 1· ⪰ v_r^∞, and so γ(β, α) ≥ r if and only if 1· ⪰ v_r^∞.
To see the result for the right endpoint, consider the sequence of rationals p_n =
r_2 ⊕ r ⊕ · · · ⊕ r (n times). Then p_n ↘ r and v_{p_n}^∞ = (v_{r_2} v_r^n)^∞ ↘ v_{r_2} v_r^∞. Therefore,
if 1· ≻ v_{r_2} v_r^∞ then there exists some n such that v_{p_n}^∞ is admissible, meaning that
γ(β, α) ≥ p_n > r. Similarly if γ(β, α) > r, then γ(β, α) > p_n for some n, and
v_{p_n}^∞ ≻ v_{r_2} v_r^∞, meaning we must have 1· ≻ v_{r_2} v_r^∞.
By a similar argument it is easy to see that γ(β, α) or δ(β, α) take an irrational
value r precisely when 1· = v_r or 0· = u_r respectively. The functions can be seen in
Figures 4.1 and 4.2.¹ Generally speaking we will write u_γ, v_γ, u_δ, and v_δ for balanced
words with slope γ(β, α) or δ(β, α) respectively, and will use r for balanced words with
δ(β, α) < r < γ(β, α).
Example 4.1.3. For example, we have γ(β, α) = 1/2 for every (β, α) such that 1· ∈
[(10)∞, 1(10)∞]. If α = 0 this corresponds to β ∈ [ϕ, 1.8019 . . . ] where ϕ denotes the
golden ratio. This is the largest plateau visible in Figure 4.3.
One consequence of Lemma 4.1.2 is that the intervals [v_r^∞, v_{r_2} v_r^∞] are disjoint for
distinct r ∈ Q. As can be seen in Figure 4.3, γ(β, 0) is a devil’s staircase: it is
continuous, non-decreasing, and has zero derivative almost everywhere. The same
function arises when considering digit frequencies for β-expansions, as described by
Boyland, de Carvalho and Hall in [4]. In fact γ(β, α) and δ(β, α) are higher dimensional
versions of a devil’s staircase, and fixing one variable will give a devil’s staircase.
It was shown in Lemma 2.3.8 that we can associate an extremal pair with each rational
r. This was given by (s_r, t_r) = (0-max(v_r), 1-min(v_r)). For example, (s_{2/5}, t_{2/5}) =
(01010, 10010).
¹On the production of figures: Figures 4.1 and 4.2 are approximations drawn to pass through specific known points. The curves defining the edge of each “tongue” are implicit functions where as one approaches β = 1 one must attempt to divide by 0. For this reason it was beyond my skill to persuade a computer to draw them accurately.
Figure 4.1: γ(β, α) (approximate). The function takes the value p/q on the ‘tongue’ based at p/q. The boundaries of each tongue are given by lines of constant 1· as labelled ((10)^∞, 1(10)^∞, (100)^∞, 10(100)^∞, (110)^∞, 1(110)^∞).
Figure 4.2: δ(β, α) (approximate). The function takes the value p/q on the ‘tongue’ based at p/q. The boundaries of each tongue are given by lines of constant 0· as labelled ((01)^∞, 0(01)^∞, (011)^∞, 01(011)^∞, (001)^∞, 0(001)^∞).
Figure 4.3: γ(β, 0), a devil’s staircase.
Remark 4.1.4. Given r = r_1 ⊕ r_2, we have that balanced pairs satisfy the following:

s_{r_1} t_{r_1}^∞ < (s_{r_2} s_{r_1})^∞,
s_{r_2} s_{r_1} (t_{r_1} t_{r_2})^∞ < s_{r_2}^∞.

This means that the intervals [s_r^∞, s_r t_r^∞] do not overlap.
This is shown for balanced words in [20, Lemma 3.2], and may also be seen as
a corollary of Lemma 4.1.2. We remark that the result is actually a property of the
tree construction method, not specifically of the words themselves: to obtain the same
result for a different tree, we simply take the tree for balanced words and map 0 and
1 to the left and right roots of our alternative tree. Then provided the left root is less
than the right root, the result will hold.
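The two inequalities of Remark 4.1.4 can be spot-checked with string comparison; below we use r_1 = 1/3 and r_2 = 1/2, whose pairs (010, 100) and (01, 10) appear in Figure 2.3 (helper names ours):

```python
def per(w, n):
    """First n symbols of the periodic word w^inf."""
    return (w * (n // len(w) + 1))[:n]

def prefix(head, cycle, n):
    """First n symbols of head followed by cycle^inf."""
    return (head + per(cycle, n))[:n]

s1, t1 = "010", "100"   # pair for r1 = 1/3
s2, t2 = "01", "10"     # pair for r2 = 1/2
N = 60

# s_{r1} t_{r1}^inf < (s_{r2} s_{r1})^inf
assert prefix(s1, t1, N) < per(s2 + s1, N)
# s_{r2} s_{r1} (t_{r1} t_{r2})^inf < s_{r2}^inf
assert prefix(s2 + s1, t1 + t2, N) < per(s2, N)
print("Remark 4.1.4 holds for r = 1/3 + 1/2 (Farey sum 2/5)")
```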
4.1.1 Descendants of balanced words
In Section 2.3 the idea of the descendants of an extremal pair was mentioned. In order
to fully understand β-transformations with a hole, we will need to use the descendants
of balanced pairs. These are defined in [17] and we repeat the discussion here. These
words are not themselves balanced but are derived from balanced words, and once
again we need to know which are admissible for which intermediate β-transformations.
Consider p/q ∈ (0, 1). Define a function ρ_{p/q} : {0, 1} → {0, 1}^q by

ρ_{p/q}(0) = s_{p/q},
ρ_{p/q}(1) = t_{p/q},

where s_{p/q} = 0-max(v_{p/q}) and t_{p/q} = 1-min(v_{p/q}) are the balanced extremal pair associated
to p/q as defined in Section 2.3. We extend the definition of ρ to finite words by
concatenation: given some word u = u_1 . . . u_n define ρ_{p/q}(u_1 . . . u_n) = ρ_{p/q}(u_1) . . . ρ_{p/q}(u_n).
Then for r ∈ (Q ∩ (0, 1))^n we may define as follows:

s_r = ρ_{r_1} ρ_{r_2} . . . ρ_{r_n}(0),
t_r = ρ_{r_1} ρ_{r_2} . . . ρ_{r_n}(1).

Example 4.1.5. We have s_{(2/5,1/2)} = ρ_{2/5} ρ_{1/2}(0) = ρ_{2/5}(01) = 0101010010 and t_{(2/5,1/2)} =
ρ_{2/5} ρ_{1/2}(1) = ρ_{2/5}(10) = 1001001010.
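The substitution ρ is a one-liner; the Python sketch below reproduces Example 4.1.5, with the balanced pairs for 2/5 and 1/2 hard-coded from Figure 2.3 (naming ours):

```python
def rho(s, t, word):
    """Apply the substitution 0 -> s, 1 -> t to a 0-1 word."""
    return "".join(s if ch == "0" else t for ch in word)

s25, t25 = "01010", "10010"   # balanced pair for 2/5
s12, t12 = "01", "10"         # balanced pair for 1/2

s_r = rho(s25, t25, rho(s12, t12, "0"))   # rho_{2/5} rho_{1/2}(0)
t_r = rho(s25, t25, rho(s12, t12, "1"))
print(s_r, t_r)   # 0101010010 1001001010
```

The same two-line composition, seeded with any maximal extremal pair in place of 0 and 1, produces the Farey descendants of Definition 4.1.9 below.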
By taking limits this definition may be extended to r ∈ (Q ∩ (0, 1))^N and to
r ∈ (Q^{n−1} × R) ∩ (0, 1)^n.
Remark 4.1.6. In [17, Lemma 2.10] it is shown that:

1. Given r = (r_1, . . . , r_n) and r′ = (r′_1, . . . , r′_n) with r ≠ r′, we have that
[s_r^∞, s_r t_r^∞] ∩ [s_{r′}^∞, s_{r′} t_{r′}^∞] = ∅;

2. Given r = (r_1, . . . , r_n) and r′ = (r_1, . . . , r_{n−1}), we have that
[s_r^∞, s_r t_r^∞] ⊂ [s_{r′} t_{r′} s_{r′}^∞, s_{r′} t_{r′}^∞].
This enables us to make the following definitions:
Definition 4.1.7 (γ(β, α) and δ(β, α)). We define γ(β, α) to be the (finite or infinite)
sequence of numbers corresponding to the maximal admissible balanced descendant.
Similarly, we define δ(β, α) to be the (finite or infinite) sequence of numbers corresponding
to the minimal admissible balanced descendant. We write these as vectors:

γ(β, α), δ(β, α) ∈ (Q ∩ (0, 1))^N ∪ ( ⋃_{n=1}^∞ (Q ∩ (0, 1))^n ) ∪ ( ⋃_{n=1}^∞ (Q^{n−1} × R) ∩ (0, 1)^n ).
Remark 4.1.6 enables us to note that the function γ(β, α) defined in the previous
section corresponds to (γ(β, α))_1. Essentially on each plateau of γ(β, α), we define
a new devil’s staircase giving (γ(β, α))_2. Each plateau of this will then give rise to
a further devil’s staircase for (γ(β, α))_3, and so the process continues. The vector
δ(β, α) relates to the function δ(β, α) in the same way.
Example 4.1.8. Consider the case 1· = (10010000)^∞, β ≈ 1.427. Here (1000)^∞ is
admissible but 100(1000)^∞ is inadmissible, therefore γ(β, α) = (γ(β, α))_1 = 1/4. Then
consider r = (1/4, 1/2). This gives (s_r, t_r) = (01001000, 10000100). Therefore s_r^∞ is
admissible and indeed σ s_r^∞ = 1·, meaning that s_r^∞ = 1/β. It follows that s_r t_r^∞
is inadmissible. Therefore the maximal admissible descendant of balanced words is
given by γ(β, α) = (1/4, 1/2).
We can use the above definitions in the more general setting of maximal extremal
pairs.
Definition 4.1.9 (Farey descendants). Let (s, t) be a maximal extremal pair. Then
we say the pairs (sr, tr) given by
sr = ρr1ρr2 . . . ρrn(s),
tr = ρr1ρr2 . . . ρrn(t).
are the Farey descendants of (s, t).
Because of the tree construction, the results stated above specifically for balanced
descendants then hold for the Farey descendants of any maximal extremal pair, by
simply repeating all proofs with s in place of 0 and t in place of 1. No further
modification is needed.
4.2 Transitivity
A map f from the unit interval to itself is said to be transitive if there exists x ∈ [0, 1]
whose orbit is dense in [0, 1], or equivalently² if for any open interval I ⊂ [0, 1], there
exists an n such that ⋃_{i=0}^n f^i(I) = (0, 1).
²In this context of [0, 1] these are equivalent definitions; they are not always.
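The covering formulation can be explored numerically by pushing an interval through Tβ,α and tracking the union of its images. The following sketch assumes parameters with β + α ≤ 2, so each branch maps [0, 1] back into itself; the function names, starting interval and tolerances are illustrative choices, not anything fixed by the text.

```python
def merge(intervals):
    """Merge an unsorted list of intervals into disjoint sorted ones."""
    intervals = sorted(intervals)
    out = [list(intervals[0])]
    for a, b in intervals[1:]:
        if a <= out[-1][1]:
            out[-1][1] = max(out[-1][1], b)
        else:
            out.append([a, b])
    return [tuple(i) for i in out]

def step(intervals, beta, alpha):
    """Image of a union of subintervals of [0,1] under T(x) = beta*x + alpha
    mod 1 (assumes beta + alpha <= 2 so each branch stays inside [0,1])."""
    c = (1.0 - alpha) / beta                       # discontinuity of T
    out = []
    for a, b in intervals:
        if b <= c:                                 # left branch
            out.append((beta * a + alpha, beta * b + alpha))
        elif a >= c:                               # right branch
            out.append((beta * a + alpha - 1.0, beta * b + alpha - 1.0))
        else:                                      # straddles the cut: two pieces
            out += [(beta * a + alpha, 1.0), (0.0, beta * b + alpha - 1.0)]
    return merge(out)

def covers(beta, alpha, I=(0.4, 0.41), n=60, tol=1e-6):
    """Do the first n images of I together cover essentially all of (0,1)?"""
    cur, total = [I], [I]
    for _ in range(n):
        cur = step(cur, beta, alpha)
        total = merge(total + cur)
    return len(total) == 1 and total[0][0] < tol and total[0][1] > 1 - tol

print(covers(2.0, 0.0))      # doubling map: the images cover quickly
print(covers(1.0001, 0.5))   # nearly a rotation by 1/2: images grow far too slowly
```

For the doubling map the interval doubles in length each step and covers (0, 1) within a handful of iterations; for β barely above 1 the images grow by a factor β per step and cannot cover, in line with the rotation heuristic below.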
[Figure 4.4: a plot over the parameter space, with β on one axis (marked at 2^{1/5}, 2^{1/4}, 2^{1/3}, 2^{1/2} and 2) and α on the other (marked at 0, 1/5, 1/4, 1/3, 2/5, 1/2, 3/5, 2/3, 3/4, 4/5 and 1), showing the non-transitive “bubbles”.]
Figure 4.4: Any intermediate β-transformation that is inside the “bubbles” is not transitive. This image is an approximation: in full detail there is a bubble for every rational p/q.
One might expect that the map Tβ,α(x) = βx + α mod 1 should always be tran-
sitive for (β, α) ∈ Δ: it is an expanding map, it is not an especially pathological one, and its
friendliest case, the doubling map, is certainly transitive. However, as β ↘ 1, these
maps approach circle rotations, and rational rotations are not transitive. It turns out
to be the case that not all intermediate β-transformations are transitive: this property
is lost once the map becomes “too close” to being a rational rotation.
A precise description of which intermediate β-transformations are transitive and
which are not has already been completed. Originally this appears in the thesis of
Palmer [26] as a description of which transformations are weak-Bernoulli (a compli-
cated condition which we will not explain here). This turns out to be equivalent to
non-transitivity, as discussed in Glendinning [15] and Glendinning and Sparrow [18],
and further summarised in [13]. The set is shown in Figure 4.4.
The point of this section is to provide a new way of describing this set using γ(β, α)
and δ(β, α). This is entirely equivalent to the previous descriptions.
Lemma 4.2.1. The map Tβ,α(x) = βx + α mod 1 is not transitive if and only if
γ(β, α) = δ(β, α) = p/q for some rational p/q.
Figure 4.4 can now be seen as taking the graphs of γ(β, α) and δ(β, α) as in Figures
4.1 and 4.2 and overlapping the two. The largest bubble, based at β = 1, α = 1/2 is
where γ(β, α) = δ(β, α) = 1/2. Similarly the bubble based at β = 1, α = p/q is where
γ(β, α) = δ(β, α) = p/q. Notice that the only cases where γ(β, α) = δ(β, α) = r ∉ Q
are precisely irrational circle rotations, with 1· = v_r and 0· = u_r.
We will show this lemma to be true by showing it to be equivalent to the original
description found in Palmer [26], which is also available in Section 3 of [15]. This
original description, taken almost verbatim from these two sources, is as follows.
Definition 4.2.2. A periodic orbit of minimal period q is called a q(p) cycle if its
points {z1, . . . , zq} are ordered such that
z1 < z2 < · · · < zq−p < c = (1− α)/β < zq−p+1 < · · · < zq,
where c = (1− α)/β is the point of discontinuity of Tβ,α.
Definition 4.2.3. A q(p) cycle is called a primary q(p) cycle if it satisfies the fol-
lowing:
1. p and q are coprime,
2. Tβ,α(zi) = zp+i for 1 ≤ i ≤ q − p and Tβ,α(zq−p+i) = zi for 1 ≤ i ≤ p,
3. Tβ,α(1) ≤ zp+1 and Tβ,α(0) ≥ zp.
Notice that the first two conditions imply that the points zi are ordered as for a
rational rotation R(x) = x+ p/q mod 1.
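This ordering can be checked directly for the model rotation R(x) = x + p/q mod 1 itself, using exact rational arithmetic; the helper names below are illustrative, and the indices are 0-based so that z_0 < z_1 < · · · < z_{q−1}.

```python
from fractions import Fraction

def rotation_orbit(p, q):
    """Sorted orbit {0, 1/q, ..., (q-1)/q} of the rotation R(x) = x + p/q mod 1."""
    return [Fraction(i, q) for i in range(q)]

def check_cycle_ordering(p, q):
    """Condition 2 of a primary q(p) cycle, verified for the rotation itself:
    R(z_i) = z_(i+p) for the first q-p points, and the last p points wrap
    around to z_0, ..., z_(p-1)."""
    z = rotation_orbit(p, q)
    R = lambda x: (x + Fraction(p, q)) % 1
    ok_low = all(R(z[i]) == z[i + p] for i in range(q - p))
    ok_high = all(R(z[q - p + i]) == z[i] for i in range(p))
    return ok_low and ok_high

print(check_cycle_ordering(2, 5))   # True
```

Any coprime pair p < q passes, which is exactly the sense in which the z_i of a primary q(p) cycle are ordered like a rational rotation.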
Theorem 4.2.4 (Palmer, [26]). An intermediate β-transformation Tβ,α is not transi-
tive if and only if Tβ,α has a primary q(p) cycle.
We now prove Lemma 4.2.1.
Proof. Suppose γ(β, α) = δ(β, α) = p/q. We claim that the balanced orbit Xp/q is a
primary q(p) cycle for Tβ,α. By definition p and q are co-prime, and it is well known,
as shown in [3], that balanced words are ordered orbits in exactly the sense we need
for a primary q(p) cycle. Thus taking z_1 = u_{p/q}^∞ and z_q = v_{p/q}^∞ we know that parts 1
and 2 of the definition of a primary q(p) cycle are satisfied.
We must now show that T (1) ≤ zp+1 = T (z1) and T (0) ≥ zp = T (zq). To see this,
recall that in order to have γ(β, α) = δ(β, α) = r = p/q, we have that
1· ≤ v_{r_2} v_r^∞,   0· ≥ u_{r_1} u_r^∞.
Recall also that as per Proposition 2.2.6, we can write v_r = 1 w_r 0 and u_r = 0 w_r 1 for
some central word w_r, and we have w_r = w_{r_2} 0 1 w_{r_1} = w_{r_1} 1 0 w_{r_2}, where r = r_1 ⊕ r_2 as
usual.
This means we have
T(1) ≤ w_{r_2} 0 (1 w_{r_1} 1 0 w_{r_2} 0)^∞ = (w_r 1 0)^∞ = T(u_{p/q}^∞) = z_{p+1}.
Similarly we have
T(0) ≥ w_{r_1} 1 (0 w_{r_2} 0 1 w_{r_1} 1)^∞ = (w_r 0 1)^∞ = T(v_{p/q}^∞) = z_p.
Thus the admissible balanced orbit Xp/q is a primary q(p) cycle for Tβ,α and so Tβ,α is
not transitive.
We now show the reverse. Suppose Tβ,α has a primary q(p) cycle. Again by [3], the
fact that this cycle is ordered as per part 2 of the definition means that this primary
q(p) cycle must be given by the balanced word with slope p/q = r, so we have z_1 = u_{p/q}^∞
and z_q = v_{p/q}^∞. This instantly implies δ(β, α) ≤ p/q and γ(β, α) ≥ p/q because X_{p/q}
must be admissible.
We now apply essentially the same trick as used above, with p/q = r, v_r = 1 w_r 0
and u_r = 0 w_r 1. By the existence of the primary q(p) cycle, we have
T(1) ≤ z_{p+1} = T(z_1) = (w_r 1 0)^∞.
This means we must have
1· ≤ 1(w_r 1 0)^∞ = 1(w_{r_2} 0 1 w_{r_1} 1 0)^∞ = 1 w_{r_2} 0 (1 w_{r_1} 1 0 w_{r_2} 0)^∞ = v_{r_2} v_r^∞.
This is precisely the condition to have γ(β, α) ≤ r = p/q. Hence combining with the
admissibility of Xp/q, we have γ(β, α) = p/q.
Similarly, we have
T(0) ≥ z_p = T(z_q) = (w_r 0 1)^∞.
This means we must have
0· ≥ 0(w_r 0 1)^∞ = 0(w_{r_1} 1 0 w_{r_2} 0 1)^∞ = 0 w_{r_1} 1 (0 w_{r_2} 0 1 w_{r_1} 1)^∞ = u_{r_1} u_r^∞.
This is precisely the condition to have δ(β, α) ≥ r = p/q. Hence combining with the
admissibility of Xp/q, we have δ(β, α) = p/q.
Thus if Tβ,α has a primary q(p) cycle, then we have γ(β, α) = δ(β, α) = p/q.
Therefore Tβ,α is not transitive if and only if γ(β, α) = δ(β, α) = p/q.
The place where the proof falls down if γ(β, α) ≠ δ(β, α) is in part 3 of the definition
of a primary q(p) cycle: the jump condition. What happens is that the condition on
T(1) can be satisfied if γ(β, α) = p/q, and the condition on T(0) can be satisfied if
δ(β, α) = p/q, but if γ(β, α) ≠ δ(β, α) then we cannot find a p/q that will satisfy both
conditions at once. This is the heart of this lemma.
Chapter 5
Tβ,α with a hole
5.1 Introduction
The topic of this thesis is maps with holes, and now that we have the background
material needed, we can begin to explore this problem. Consider an open interval
(a, b) ⊂ [0, 1]. We call this a hole. Our object of study is the set of points whose orbits
under the map Tβ,α never fall in the hole:
J_{β,α}(a, b) := {x ∈ [0, 1] : T_{β,α}^n(x) ∉ (a, b) for all n ∈ N}.
We will refer to this as the avoidance set of (a, b); the set of points avoiding (a, b).
This set is also sometimes called the survivor set. We will occasionally use closed or
half-open intervals instead of open intervals; in this case one simply replaces (a, b) in
the definition of Jβ,α with [a, b], [a, b) or (a, b] as appropriate.
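The avoidance set can be explored numerically in the simplest case, the doubling map Tx = 2x mod 1, where binary cylinders are dyadic intervals. The sketch below counts, to a finite depth, the cylinders that could still meet J(a, b); the helper names are illustrative, and the containment test is conservative, so the counts are only upper bounds on the number of surviving cylinders.

```python
def inside_hole(w, a, b):
    """Is the dyadic cylinder [m/2^k, (m+1)/2^k] of the 0-1 word w contained
    in the hole (a, b)? (Endpoints are treated conservatively.)"""
    k, m = len(w), int(w, 2)
    return a <= m / 2**k and (m + 1) / 2**k <= b

def survivor_count(a, b, depth):
    """Upper bound on the number of depth-`depth` cylinders of the doubling
    map that can meet J(a, b): a word is pruned once one of its suffixes
    (= a shifted orbit cylinder) lies entirely inside the hole."""
    words = ['']
    for _ in range(depth):
        words = [w + d for w in words for d in '01'
                 if not any(inside_hole((w + d)[j:], a, b)
                            for j in range(len(w) + 1))]
    return len(words)

# a bigger hole prunes strictly more cylinders
print(survivor_count(0.6, 0.9, 10) > survivor_count(0.1, 0.9, 10))  # True
```

This already displays the dichotomy described above: widening the hole shrinks the surviving tree, and for large holes the count collapses.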
We expect that if the hole (a, b) is large, then roughly speaking everything should
fall in and the avoidance set should be empty. Similarly if the hole is small, then some
points should avoid the hole and the avoidance set should be non-empty.
There are several seemingly basic questions we wish to answer about avoidance sets.
Firstly, when is Jβ,α(a, b) non-empty? Secondly, when does Jβ,α(a, b) have positive
Hausdorff dimension? Finally, when are there periodic orbits of any suitably large
period that avoid (a, b)? We define as follows:
D_0(β, α) := {(a, b) ∈ [0, 1]^2 : J_{β,α}(a, b) ≠ ∅},
D_1(β, α) := {(a, b) ∈ [0, 1]^2 : dim_H J_{β,α}(a, b) > 0},
D_2(β, α) := {(a, b) ∈ [0, 1]^2 : ∃N ∈ N such that for all n > N,
J_{β,α}(a, b) contains an orbit of period n}.
For notational simplicity we will quite often drop the (β, α) and just write D0, D1 and
D2. Again we will also occasionally refer to closed or half-open intervals as being in
some Di: as one would expect one simply applies the definition of Di using Jβ,α[a, b]
or similar instead of Jβ,α(a, b) as appropriate.
These questions have been answered for the case of the doubling map Tx = 2x
mod 1 by Glendinning and Sidorov [17] and by Hare and Sidorov [20], who also consid-
ered a set D3 where there exist orbits of every possible length avoiding a hole. In the
course of my PhD I extended this work to greedy transformations, and this thesis has
a large overlap with the resultant paper [9]. The version present in the paper is more
condensed with less supplementary explanation. We will refrain from continual cita-
tions of this paper, but please note that any result applying to greedy transformations
can also be found in this paper. The exception is that the paper contains a mistake,
which I noticed only after publication. We will later explain and correct this mistake.
This thesis extends the study of maps with holes not just to the greedy transforma-
tion, but also to intermediate β-transformations. For greedy transformations we have
a full description of D0, and almost a complete description of D1 and D2. For interme-
diate transformations, there are additional complications, meaning we offer detailed
conjectures for all three. In all cases, the regions where the results are conjectural are
tiny: indeed, this is half the problem as it means the needed sequences are very long
and hence difficult to work with. As mentioned in the preceding chapter, this will
also give us results for Lorenz maps: the results are actually for the shift map on the
sequence space, and so transfer to any suitably conjugate map.
We will begin in this chapter with the main theorem showing the significance of
maximal extremal pairs. This theorem collects and extends multiple smaller lemmas
used in [17] and [20] to study the doubling map. In their setting, the lemmas were
shown only for balanced words and balanced words were used simply because they
work. We now collect these results together and extend them to show that the key
idea behind the lemmas is actually that of maximal extremal pairs, not the balanced
property.
Next, we will show that when either b is small enough or a is large enough, the hole
(a, b) is very easy to deal with. This allows us to significantly restrict our search for
maximal extremal pairs. Given this restriction, the remainder of the chapter explores
the extent to which we can use balanced words and their descendants for intermediate
β-transformations. This means that by the end of Chapter 5 we will have covered
all the material needed for the case of the doubling map, meaning that the following
chapters are new.
Chapter 6 explains why balanced words are insufficient for intermediate β-
transformations; it has a large overlap with my paper, with Section 6.2 in particular
covering the needed methods for greedy and lazy transformations. However, there
are still further complications for intermediate transformations. Section 6.3 shows
why the methods previously used are inadequate for intermediate transformations and
offers detailed conjectures for filling in the remaining gaps. These gaps, whilst tiny,
essentially involve approximating 1· and 0·.
The mistake in my paper was related to the descriptions of D1 and D2 for small b
and large a. This is explained in detail in Chapter 7. The problem is fully corrected
for some examples, partially corrected for all (β, α), and the remaining issues also have
detailed conjectures.
Finally, in the non-transitive case, the majority of the above solutions become
impossible and we must again look for alternative methods. This is the topic of
Chapter 8.
Examples of Di for given (β, α) make by far the most sense when seen completely
as a whole, rather than doled out piecemeal as each new technique is covered. For this
reason, Chapter 9 contains a number of complete worked examples. It is recommended
to jump to this chapter and attempt to follow the examples along whenever confused.
Chapter 10 then contains some extra results regarding the largest holes on the
boundary of D0, D1 and D2.
5.2 Extremal pairs and maximality
Recall the definition of a maximal extremal pair, as given in Section 2.3.
Definition 5.2.1 (Extremal pairs). Let (s, t) be a pair of finite {0, 1} words with t a
cyclic permutation of s and s^∞ ≺ t^∞. Then (s, t) is said to be an extremal pair if for
every k ∈ N, either σ^k s^∞ ≼ s^∞ or σ^k s^∞ ≽ t^∞.
Definition 5.2.2 (Maximal extremal pairs). An extremal pair (s, t) is said to be
maximal if firstly there does not exist any point x such that the orbit of x is contained
in one of either [0, s^∞) or (t^∞, 1), and secondly there does not exist a distinct (finite
or infinite) extremal pair (s̄, t̄) such that (s^∞, t^∞) ⊂ (s̄^∞, t̄^∞) (or (s^∞, t^∞) ⊂ (s̄, t̄) if
this pair is infinite).
We make one further definition pertaining to avoidance sets:
Definition 5.2.3. We say an avoidance set J_{β,α}(a, b) is essentially equal to an invariant
set X, written J_{β,α}(a, b) ≐ X, if for every y ∈ J_{β,α}(a, b), there exists x ∈ X and
w ∈ {0, 1}^n with y = wx.
The idea of this definition is that we would like to describe an avoidance set up to
preimages (expressed by the word w), but we cannot possibly mean every preimage,
because some preimages will themselves fall into (a, b). Therefore this definition im-
plicitly comes with substantial restrictions on what words w will be possible, because
we cannot have y = wx ∈ (a, b) or indeed σny ∈ (a, b) for any n.
We now begin with an easy observation:
Remark 5.2.4. For each i ∈ {0, 1, 2}, if (a, b) ∈ D_i(β, α) then (a′, b′) ∈ D_i(β, α) when-
ever a′ ≥ a, b′ ≤ b and a′ < b′. This is simply because the latter hole is contained in the
former, thus J_{β,α}(a, b) ⊆ J_{β,α}(a′, b′). This can be seen in Figure 5.1: for each vertex
(a, b) ∈ D_0(β, α), we see that the triangle with vertices (a, b), (a, a) and (b, b) is in
D_0(β, α) also.
For a hole (a, b) to be in D0, all we need is one orbit that avoids (a, b). Our first
tactic is thus to consider just one orbit and see what holes this one orbit will avoid.
Noting the above remark, this is shown in Figure 5.1 and gives us a subset of D0. The
standout corners of this set are given by the extremal pairs derived from the orbit we
Figure 5.1: A subset of D0 shown in grey formed from a single periodic orbit.
are studying, which is why we are interested in extremal pairs. Then, what we want
to do is look at the union of these sets over all possible orbits. This will give us all of
D0. It is worth noting that looking at the picture, a dense orbit is clearly useless as it
will give us just the diagonal and no D0. If anything, we are actually interested in the
opposite: orbits that have the largest gaps to give us the largest bits of D0. This is
exactly the intention—and, when expressed more formally, the definition—of maximal
extremal pairs. This is why our chief approach is to find these maximal extremal pairs,
and once we have found them all we should be done and have a full description of D0.
This concept is the key extension from the work of Glendinning and Sidorov in [17]
and Hare and Sidorov in [20] for the doubling map. In these papers, it is shown that
balanced pairs are extremal, and extremality is then used as a tool to make the proofs
easier. Here, we show that actually the key is maximal extremality, and that this is
the reason why we are interested in balanced pairs. In essence, balanced pairs are not
important because they are balanced, but rather because they are maximal extremal.
For D1, we need holes that are small enough to have positive Hausdorff dimension
escaping. Our tactic here is to take two admissible words w_1 and w_2 and freely
concatenate them, so we wish to have {w_1, w_2}^N avoiding the hole. This gives positive
entropy and thus positive Hausdorff dimension. However, again this should work for
any w_1 and w_2, but we wish to choose the ‘best’ such words w_1 and w_2, so that the
set {w_1, w_2}^N will avoid large holes (a, b). This is the difficulty. It will turn out that
the ‘best’ words are maximal extremal pairs and their descendants.
For D2, we proceed exactly as for D1, but with an additional criterion. If the
lengths of w1 and w2 are coprime, then freely concatenating the two will give us orbits
of any suitably large length. This is what we want. This also suggests that D2 ⊆ D1,
which is not wholly intuitive by first appearances but does indeed turn out to be the
case. The requirement of coprimality turns out to mean we cannot use the descendants
of maximal extremal pairs, but rather just the pairs themselves.
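The coprimality requirement is just arithmetic: once the two block lengths j and q are coprime, every sufficiently large total length kj + Mq (at least one block of each kind) is achievable. A quick sketch with illustrative lengths (in the text j = |u| and q = |s|):

```python
from math import gcd

def representable(n, j, q):
    """Can n be written as k*j + M*q with k, M >= 1? (k copies of a length-j
    block and M copies of a length-q block, freely concatenated.)"""
    return any((n - k * j) >= q and (n - k * j) % q == 0
               for k in range(1, n // j + 1))

j, q = 3, 5                      # illustrative coprime block lengths
assert gcd(j, q) == 1
missing = [n for n in range(1, 200) if not representable(n, j, q)]
print(missing)                   # finite: [1, 2, 3, 4, 5, 6, 7, 9, 10, 12, 15]
```

The list of unattainable lengths is finite (this is the classical Frobenius/Chicken McNugget phenomenon), which is exactly why coprime lengths give periodic orbits of every sufficiently large period, and why D2 asks only for periods beyond some N.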
We now collect together the results that are shown in [17] and [20] for balanced
words, and restate these using maximal extremal pairs. The proofs are essentially
the same and need only very minor alterations, and are included here for the sake of
completeness, but the idea of maximality and its importance is new.
Theorem 5.2.5. Let (β, α) ∈ Δ. Suppose (s, t) is an extremal pair such that {s, t}^N ⊂
X_{β,α}. Let u and v be words such that s = uv and t = vu. Then for any ε > 0, we have
1. J_{β,α}(s^∞, t^∞) ⊇ {σ^n s^∞ : n ≥ 0}, that is (s^∞, t^∞) ∈ D0,
2. J_{β,α}(s^∞, ts^∞ − ε) and J_{β,α}(st^∞ + ε, t^∞) have positive Hausdorff dimension, that
is (s^∞, ts^∞ − ε), (st^∞ + ε, t^∞) ∈ D1.
If (s, t) is an extremal pair and additionally j = |u| and q = |s| are coprime, then
3. J_{β,α}(s^∞, ts^∞ − ε), J_{β,α}(st^∞ + ε, t^∞) and J_{β,α}(st^∞ + ε, ts^∞ − ε) contain periodic
points of any sufficiently long period, that is these holes belong to D2.
If (s, t) is a maximal extremal pair, then
4. J_{β,α}(s^∞, t^∞) ≐ {σ^n s^∞ : n ≥ 0}, that is [s^∞, t^∞] ∉ D0.
If (s, t) is a Farey descendant of a maximal extremal pair, then
5. J_{β,α}(s^∞, t^∞) is countable, that is (s^∞, t^∞) ∉ D1,
6. J_{β,α}(sts^∞, ts^∞) and J_{β,α}(st^∞, tst^∞) are countable, that is these holes are not in
D1.
This theorem, when combined with Remark 5.2.4, is illustrated in Figure 5.2.
Proof. These results are shown for the case of balanced pairs and their descendants in
[17] and [20]. We collect the results together here in a bid to make clearer precisely
what combinatorial property of words each result is relying upon, and so for the sake
of clarity we repeat the arguments here and alter them as necessary to encompass
general extremal pairs.
[Figure 5.2: the (a, b) plane, with s^∞, sts^∞, st^∞ marked on the a-axis and ts^∞, tst^∞, t^∞ on the b-axis.]
Figure 5.2: Theorem 5.2.5 for a maximal extremal pair (s, t). The dark grey shows points in D2(β, α), the light grey shows points in D0(β, α), and the white region shows where the Farey descendants of (s, t) will lie, so these points are in D0(β, α) and may or may not be in D1(β, α).
Let (s, t) be an extremal pair. Item (1) – that {σns∞ : n ≥ 0} ⊆ Jβ,α(s∞, t∞) –
follows immediately from the definition.
For item (2), to show that (s∞, ts∞ − ε) is in D1, we follow [17, Lemma 2.2]. Let
N ∈ N and define
W_N = {σ^i w : w is composed of blocks u t^m with m > N}.
Because {s, t}^N is admissible, we know that W_N is admissible for all N. Furthermore
W_N is shift invariant and has positive entropy (and therefore positive Hausdorff di-
mension). For any ε > 0, there exists an N such that ts^∞ − ε < ts^N. Then we claim
W_N ⊂ J_{β,α}(s^∞, ts^∞ − ε). To see this, notice that u t^m = s^m u. By extremality, the
only shifts we need be concerned about are those beginning with s or t. Any shift
beginning s will be of the form s^i u . . . ≺ s^∞, so avoids the hole. Any shift beginning t
either has multiple ts and so avoids the hole, or begins t s^m u . . . for some m > N. This
therefore also avoids the hole for large enough N.
The case J_{β,α}(st^∞ + ε, t^∞) is similar, using shifts with t^m v = v s^m. Either of these
shifts will then avoid (st^∞ + ε, ts^∞ − ε). This leads immediately to item (3), following
[20, Theorem 3.6]. Notice that the orbit (u t^m)^∞ has period mq + j. If j and q are
coprime, then for every ℓ there exists k such that ℓ ≡ kj mod q. So by considering
points of the form
w = (u t^{m_1} u t^{m_2} . . . u t^{m_k})^∞,
for sufficiently large m_i > N, we can create orbits of any sufficiently large period which
avoid the hole. Thus whenever |u| and |s| are coprime, we have that (s^∞, ts^∞ − ε),
(st^∞ + ε, t^∞) and (st^∞ + ε, ts^∞ − ε) are in D2.
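The block identity u t^m = s^m u invoked in this proof is pure word combinatorics, and it is easy to sanity-check for any concrete pair; the words u and v below are illustrative choices (they happen to be the pair of Example 5.2.9).

```python
# With s = uv and t = vu, the block u t^m used in W_N really is the word s^m u:
u, v = "01", "0110"
s, t = u + v, v + u
assert (s, t) == ("010110", "011001")
for m in range(8):
    assert u + t * m == s * m + u       # u t^m = s^m u as words
print("u t^m = s^m u verified for m = 0,...,7")
```

The identity holds for every u, v because u(vu)^m and (uv)^m u are the same concatenation read with different bracketing.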
Item (4) is by definition of maximal extremal.
For (5), we follow [17, Lemma 2.12] and use induction to show that J_{β,α}(s^∞, t^∞) is
countable for Farey descendants of maximal extremal pairs. The result clearly holds for
the maximal extremal pair itself. Assume the claim holds for all kth level descendants,
meaning pairs (s_k, t_k) = (s_{(r_1,...,r_k)}, t_{(r_1,...,r_k)}). We show it must then hold for the (k+1)st
level. Write r_{k+1} = p_{k+1}/q_{k+1}. Note firstly that as J_{β,α}(s_k^∞, t_k^∞) is countable, we wish
to show that all but countably many points of (s_k^∞, t_k^∞) must fall into (s_{k+1}^∞, t_{k+1}^∞).
Any word (s_{k+1}, t_{k+1}) is by definition a balanced word on the alphabet {s_k, t_k},
with length q_{k+1} in this alphabet. The only shifts of s_{k+1} that fall into (s_{k+1}^∞, t_{k+1}^∞) are
those beginning with s_k or t_k. Label these (in order) as x_1, . . . , x_{q_{k+1}}. Balanced words
correspond to ordered orbits as discussed in [5] and [19]. This means that any interval
[x_i, x_{i+1}] will be mapped by σ^{q_{k+1}} to some other interval [x_j, x_{j+1}], and by repeatedly
applying σ^{q_{k+1}} we will cycle through all possible j ∈ {1, . . . , q_{k+1} − 1}. One of these
intervals is [s_{k+1}^∞, t_{k+1}^∞]. Therefore all but countably many points in (x_1, x_{q_{k+1}}) will fall
into (s_{k+1}^∞, t_{k+1}^∞).
The only remaining possibilities are points in (s_k^∞, x_1) and points in (x_{q_{k+1}}, t_k^∞).
Applying σ^{q_{k+1}} to these intervals maps them to (s_k^∞, x_i) and (x_j, t_k^∞) respectively for
some i > 1 and j < q_{k+1}. Thus again by applying σ^{q_{k+1}} repeatedly we see that all but
countably many points must fall into (s_{k+1}^∞, t_{k+1}^∞).
Therefore item (5) holds and Jβ,α(s∞, t∞) is countable for Farey descendants of
maximal extremal pairs.
The final item (6) is more complex, with a degree of subtlety as to why the result
holds only for Farey descendants, not for either all extremal pairs or only maximal
extremal pairs.¹ We follow [17, Theorem 2.13]. Suppose (s, t) is a Farey descendant
of a maximal extremal pair (u, v). Consider J_{β,α}(s^∞, t^∞). This is a countable subset
of {u, v}^N.
Then take any point in [ts∞, t∞]. By applying σq repeatedly we can see that all but
¹Note we consider maximal extremal pairs to be Farey descendants of themselves, so this result does hold for maximal extremal pairs.
countably many points in this interval must fall into (s∞, ts∞). Hence Jβ,α(s∞, ts∞) \
Jβ,α(s∞, t∞) is countable. Therefore Jβ,α(s∞, ts∞) is countable.
Similarly, consider any point in [s∞, sts∞]. Apply σq repeatedly, and we see that
all but countably many points must fall into (sts∞, ts∞). Therefore we have that
Jβ,α(sts∞, ts∞)\Jβ,α(s∞, ts∞) must be countable. Hence Jβ,α(sts∞, ts∞) is countable.
The cases with s and t reversed are similar.
Remark 5.2.6. It is key to note that item (6) relies very strongly on Jβ,α(s∞, t∞)
being countable, which holds only by item (5). This is where the proof fails if the
pair (s, t) is a general extremal pair rather than specifically Farey descendant of a
maximal extremal pair. Notice also that for Farey descendants, we never have |u| and
|s| coprime, meaning item (2) applies but item (3) does not.
The theorem, noting the above remark, gives the following corollary, as can be seen
in Figure 5.2.
Corollary 5.2.7. D2(β, α) ⊂ D1(β, α) ⊂ D0(β, α).
Now that we have this theorem, we notice that there is another problem. Whenever
we have a maximal extremal pair (s, t) with s^∞ and t^∞ admissible, we know parts
1 and 4 of the theorem apply. However, it could be the case that one or both of st^∞
and ts^∞ are not admissible, which brings the other parts into question. Dealing with
and ts∞ are not admissible, which brings the other parts into question. Dealing with
this problem is a large part of the extension from the case of the doubling map: for
the doubling map, every sequence is admissible, so this problem cannot occur.
Fortunately if only one of st∞ and ts∞ is inadmissible then the situation is easily
recoverable. This is the case if the transformation is greedy or lazy.
Lemma 5.2.8. Suppose that (s, t) is an extremal pair for Tβ,α with s∞, t∞ admissible.
Then
1. if α = 0 (that is, Tβ,α is greedy), then ts∞ is admissible.
2. if α = 2− β (that is, Tβ,α is lazy), then st∞ is admissible.
Proof. Recall Proposition 2.3.3: any extremal pair can be written as (s, t) = (uv, vu) =
(w0-max, w1-min). Suppose then that s = w0xw1y and t = w1yw0x, so u = w0x and
v = w1y. In this case, maximal shifts begin at x and minimal shifts begin at y.
Consider
ts∞ = vu(uv)∞ = w1yw0x(w0xw1y)∞.
In the greedy case, admissibility is only dependent on the maximal shift. The maximal
shift must arise at a shift beginning x. Thus our options for the maximal shift of ts∞
are x(w0xw1y)∞ or (xw1yw0)∞. The latter is clearly larger, and is simply a shift
of s∞. Therefore for a greedy expansion if s∞ is admissible, we have that ts∞ is
admissible.
The argument for the admissibility of st∞ under a lazy transformation is the same.
Example 5.2.9. Consider (s, t) = (010110, 011001). Then w = 01, x = 001, y =
110, u = 01, v = 0110.
Then st∞ = 010110(011001)∞. This has minimum shift (001011)∞ which is a
shift of s∞. Therefore under a lazy transformation, if s∞ is admissible then st∞ is
admissible. Note that for a lazy transformation we have 1· = 1∞, so it is impossible
for the maximal shift to be too large.
Similarly ts∞ = 011001(010110)∞ has maximum shift (110010)∞ which is also a
shift of s∞. Therefore under a greedy transformation, if s∞ is admissible then ts∞ is
admissible. Note that for a greedy transformation we have 0· = 0∞, so it is impossible
for the minimal shift to be too small.
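The shift computations in this example can be verified directly. The helper below is an illustrative sketch that compares length-n prefixes of all shifts of an eventually periodic word; n = 60 is ample here, since differences between shifts show up within a couple of periods.

```python
def extreme_shift(pre, per, n=60, largest=True):
    """Length-n prefix of the lexicographically largest (or smallest) shift of
    the eventually periodic word pre (per)^inf. Only the first
    len(pre) + len(per) shifts are distinct, so those are the ones compared."""
    reps = (n + len(pre)) // len(per) + 2
    x = pre + per * reps
    shifts = [x[k:k + n] for k in range(len(pre) + len(per))]
    return max(shifts) if largest else min(shifts)

s, t = "010110", "011001"
n = 60
# minimal shift of s t^inf is (001011)^inf, a cyclic shift of s^inf ...
assert extreme_shift(s, t, n, largest=False) == ("001011" * n)[:n]
# ... and maximal shift of t s^inf is (110010)^inf, also a cyclic shift of s^inf
assert extreme_shift(t, s, n, largest=True) == ("110010" * n)[:n]
print("extreme shifts match the cyclic shifts of s^inf claimed in the example")
```

Both extreme shifts land on rotations of s, which is the content of the example: under a lazy (respectively greedy) transformation, admissibility of s^∞ forces admissibility of st^∞ (respectively ts^∞).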
If Tβ,α is neither greedy nor lazy, then we could for example have 1· = (1100)∞, 0· =
(001011)∞. In this case both ts∞ and st∞ are inadmissible: the minimal shift of ts∞
is too small and the maximal shift of st∞ is too big.
The various parts of Theorem 5.2.5 hold true whenever the sequences necessary are
admissible. This means that if, for example, only ts^∞ is admissible, then the results
involving ts^∞ will still hold, that is, for holes (s^∞, ts^∞ − ε). This is because, looking
closely at the proof of the relevant part, only the sequences actually used there need to be
admissible. Therefore, provided at least one of st^∞ and ts^∞ is admissible, the situation is
simple: the theorem tells us the details of D0, D1 and D2. This hence covers the lazy
and greedy cases.
However, suppose we have a maximal extremal pair (s, t) with s∞ admissible but
both of st∞ and ts∞ inadmissible. This then presents a problem. In this situation the
theorem tells us the boundary of D0, but leaves a complete mystery as to D1 and D2.
What to do in this situation will be covered later in Section 6.3.
Recall Remark 4.1.6: given a maximal extremal pair (s, t), we have – up to a set
of measure zero given by the limit points – that
[s^∞, st^∞] = ⋃_{(s_r, t_r)} [s_r^∞, s_r t_r s_r^∞],
where (s_r, t_r) are the Farey descendants of (s, t).
This ensures that once we have a maximal pair with st∞ and ts∞ admissible, we
can completely describe D1(β, α): the white part in Figure 5.2 gets neatly filled in by
Farey descendants.
5.3 Small b and large a for D0 and D2
Before beginning the search for maximal extremal pairs, in this section we wish to
answer the following question: what is the largest b such that the hole (0, b) ∈ D0?
Similarly, what is the largest b such that the hole (0, b) ∈ D2? We ask the same
questions for holes (a, 1).
To this end, let b_i(β, α) be such that (a, b) ∈ D_i(β, α) whenever b < b_i(β, α)
and (0, b_i] ∉ D_i(β, α), i = 0, 1, 2. Similarly, let a_i(β, α) denote the value such that
(a, b) ∈ D_i(β, α) whenever a > a_i(β, α) and [a_i, 1) ∉ D_i(β, α), i = 0, 1, 2. As per usual,
these are all functions of (β, α) and we will for ease of notation drop this (β, α) unless
it is strictly necessary.
As indicated by these definitions, we also wish to ask these questions for D1, but
this proves to be substantially more complicated. We will return to study b1 and a1
in Chapter 7.
The reason for asking these questions is firstly that they are somewhat interesting
in their own right, but also that it is an easy way of simplifying our search for maximal
extremal pairs. We know that once we reach b0, any b below this is already in D0,
so we needn’t look here. Similarly once we reach a0 we already know the answer.
This is hence an easy way of cutting off large swathes of the problem for (one hopes)
comparatively little effort. In the worst case scenario, perhaps b0 = 0 or a0 = 1, but
this would imply that every orbit must come arbitrarily close to 0 or 1 respectively,
which seems somewhat unlikely. We hence expect b0 > 0 and a0 < 1 and thus this
should actually be a useful thing to study.
In the case of the doubling map, it was shown in [34, Proposition 1.5] that we
have (a, b) ∈ D0 ∩ D1 ∩ D2 whenever either a > 1/2 or b < 1/2, that is to say
a_i(2, 0) = b_i(2, 0) = 1/2. To see this, consider the following subshifts:
A_k = {w ∈ {0, 1}^N : w_i = 1 =⇒ w_{i+j} = 0 for j = 1, . . . , k + 1},
B_k = {w ∈ {0, 1}^N : w_i = 0 =⇒ w_{i+j} = 1 for j = 1, . . . , k + 1}.
Recall that for the doubling map, 1/2 = 01^∞ = 10^∞. Thus for any a > 1/2, we
can write a = 10^k 1 . . . for some k, and then A_k will avoid the hole (a, b). Similarly any
b < 1/2 can be written as b = 01^k 0 . . . for some k, meaning B_k will avoid the hole (a, b).
The subshifts Ak and Bk allow free concatenation of multiple words and therefore have
positive entropy and so positive Hausdorff dimension. They also contain the periodic
points (10n)∞ and (01n)∞ respectively for any suitably large n, implying these subshifts
are large enough to give D2. Hence we have ai = bi = 1/2.
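The role of the periodic points (10^n)^∞ can be checked exactly. The sketch below uses rational arithmetic (iterating the doubling map in floating point collapses the orbit); the helper name is illustrative.

```python
from fractions import Fraction

def periodic_orbit(word):
    """Exact orbit under the doubling map of the point with purely periodic
    binary expansion (word)^inf, computed with rationals."""
    q = len(word)
    x = Fraction(int(word, 2), 2**q - 1)   # value of the expansion 0.(word)^inf
    orbit = []
    for _ in range(q):
        orbit.append(x)
        x = (2 * x) % 1
    return orbit

# The A_k-style periodic point (1 0^n)^inf: its largest orbit point is
# 2^n / (2^(n+1) - 1), which decreases to 1/2 as n grows, so for any a > 1/2
# some such orbit avoids the hole (a, b) entirely.
orb = periodic_orbit("1" + "0" * 5)
print(max(orb))                             # 32/63, just above 1/2
assert Fraction(1, 2) < max(orb) < Fraction(55, 100)
```

For a = 0.55, say, the whole orbit of (100000)^∞ lies in [1/63, 32/63] and so never enters (a, b), matching the claim that a_i(2, 0) = 1/2.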
For greedy or lazy expansions, one of these subshifts will still be admissible.
Lemma 5.3.1 (Large a for greedy expansions; small b for lazy expansions). We have,
for i = 0, 1, 2,
a_i(β, 0) = 1/β, and b_i(β, 2 − β) = (β − 1)/β.
Proof. For greedy expansions, the subshift Ak given above is admissible for suitably
large k, and will avoid any hole (a, b) with a > 10∞ = 1/β. Similarly for lazy expan-
sions, the subshift Bk given above is admissible for suitably large k and will avoid any
hole (a, b) with b < 01∞ = (β − 1)/β.
As described above, these subshifts contain periodic orbits of any suitably large
length, and hence we have that for greedy expansions, ai = 1/β and for lazy expansions,
bi = (β − 1)/β.
Unfortunately, for intermediate β expansions, the situation is immediately more
complicated than the doubling map. Anything involving arbitrarily long strings of 0s
or 1s is forbidden, and hence both of the subshifts described are inadmissible. Even
for greedy or lazy expansions, we only have one of the two subshifts admissible. This
presents a problem, as it is in no way clear what to do instead.
Recall the functions γ(β, α) and δ(β, α) defined in Chapter 4 to give the largest and
smallest admissible balanced words respectively. Recall also that we use uγ to denote
the minimal balanced word given by γ(β, α), and likewise vδ to denote the maximal
balanced word given by δ(β, α).
We will commonly need the sequences u_γ^∞ and v_δ^∞. However, this does not make
sense in the case that γ(β, α) or δ(β, α) are irrational. Since almost every transfor-
mation has γ(β, α) and δ(β, α) rational, we will give results in terms of u_γ^∞ and v_δ^∞,
but for the irrational cases, instances of u_γ^∞ and v_δ^∞ should generally be replaced by
u_γ = inf X_{γ(β,α)} and v_δ = sup X_{δ(β,α)}. Recall also that as γ(β, α) and δ(β, α) are
continuous, any sequences involving these functions can simply be found by taking
limits.
Proposition 5.3.2. For rational γ(β, α) and δ(β, α) we have
b_0 = u_γ^∞, and a_0 = v_δ^∞.
Proof. We show the result for b0; a0 follows by the same method.
By definition, Xγ(β,α) ∈ Jβ,α(a, b) whenever b < u∞γ , thus b0 ≥ u∞γ .
To show that b0 = u∞γ , we need to show that (0, u∞γ ] is a trap, that is to say
Jβ,α(0, u∞γ ] = ∅. This is immediate from Remark 2.2.9, which states that for any r,
if max(x) ≤ vr2v∞r then min(x) ≤ u∞r . From the description of γ(β, α), any admissible
point x satisfies max(x) ≤ vγ2v∞γ , and therefore must fall into (0, u∞γ ] as required.
Any point x ∈ [10·, 1u∞γ ] shifts immediately into [0, u∞γ ], so this region is not a
problem. We therefore need to consider the image of [0v∞γ , 01·], that is x ∈ [v∞γ , 1·].
Proposition 5.3.3. For rational γ(β, α) and δ(β, α), we have
b2 = uγ1u∞γ , and a2 = vδ2v∞δ .
Proof. We show the result for b2; a2 follows by the same method.
Consider the subshift BK = {w ∈ Xβ : w is made of blocks uγ1uγ^k for any k > K}.
Because the lengths of uγ1 and uγ are coprime, this subshift contains periodic points
of every suitably large period. By taking sufficiently large K, the minimal element of
BK is arbitrarily close to uγ1u∞γ . Therefore, b2 ≥ uγ1u∞γ .
To show that (0, uγ1u∞γ ] /∈ D2 is somewhat more difficult: we need to show that
any admissible subshift whose minimal element is greater than uγ1u∞γ cannot contain
periodic points of any suitably long length. To do this we show that any periodic orbit
whose minimal shift lies in (uγ1u∞γ , u∞γ ) must be of period n|uγ| for some n.
Suppose w∞ = min(w∞) is a periodic point in (uγ1u∞γ , u∞γ ). Then w must be
composed of blocks of the form
uγ1(uγ1uγ2)^n uγ2(uγ1uγ2)^m,
for some n and m. This is because whenever we have uγ2uγ2 , this has maximal shift
vγ2vγ2 . Therefore in order to be admissible, uγ2uγ2 must be followed by uγ^m uγ1^2 for
some m. This uγ1^2 begins another block of the above form. Essentially this is a careful
balancing act between enough of uγ1 to be in the correct interval and not too many
uγ2 lest the sequence be inadmissible.
The important point is that the block given above has length (n+m+1)|uγ|. Hence
the only available periodic points cannot have arbitrary period. This means we cannot
find a subshift for D2 that is better than BK . Consequently, b2 = uγ1u∞γ as
claimed.
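The coprimality step used above is the classical numerical-semigroup (Frobenius) fact: blocks of two coprime lengths can be concatenated to realise every sufficiently large total length. A minimal sketch, with hypothetical block lengths 5 and 3 standing in for |uγ1| and |uγ|:

```python
from math import gcd

def representable(p, q, n):
    """Can n be written as a*p + b*q with integers a, b >= 0?"""
    return any((n - a * p) % q == 0 for a in range(n // p + 1))

# Hypothetical coprime block lengths, say |uγ1| = 5 and |uγ| = 3.
p, q = 5, 3
assert gcd(p, q) == 1
frobenius = p * q - p - q                       # = 7, the largest gap
gaps = [n for n in range(1, 40) if not representable(p, q, n)]
# Every period beyond p*q - p - q is realisable:
assert gaps == [1, 2, 4, 7]
```

Any period larger than the Frobenius number pq − p − q is therefore available, which is exactly the "every suitably large period" claim.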
Note that if γ(β, α) is irrational, then by taking limits it is easy to see that b2 =
b0 = inf Xγ(β,α). Note also that if either a2 or b2 are inadmissible, then we have
γ(β, α) = δ(β, α). Thus in this case we have from Lemma 4.2.1 that Tβ,α is non-
transitive. Holes for non-transitive Tβ,α are covered in Chapter 8, therefore for the
time being we will assume that we have both a2 and b2 admissible.
We must have b2 ≤ b1 ≤ b0, so we now have a range for b1 and we know what
the possible periodic points are in this region. Precisely which periodic points are
admissible for which (β, α) and then what possible subshifts we can make from these
is not at all obvious and turns out to be exceedingly complicated. We will thus return
to this question later in Chapter 7.
5.4 Balanced pairs and their preimages
It was shown by Glendinning and Sidorov that balanced pairs are maximal extremal for
the doubling map. This was repeated in Lemma 2.3.8.
Lemma 5.4.1. Suppose (s, t) is a maximal extremal pair for some (β0, α0) ∈ ∆ with
expansions 1·β0,α0 and 0·β0,α0. Then (s, t) is maximal extremal for any (β, α) ∈ ∆ such
that firstly s∞ is admissible, and secondly the expansions 1·β,α and 0·β,α satisfy
1·β,α ≤ 1·β0,α0 and 0·β,α ≥ 0·β0,α0 .
Proof. This is immediate from the fact that if (β, α) satisfies the above conditions,
then the sets of admissible sequences satisfy
Xβ,α ⊂ Xβ0,α0 .
This means that for any hole (a, b) given by admissible sequences a and b, we have
Jβ,α(a, b) ⊆ Jβ0,α0(a, b).
Notice that it is important here that we mean the same hole in terms of se-
quences, not in terms of points in [0, 1].
This means that once we have shown some pair is maximal extremal for some
particular (β0, α0), we know that it is in fact maximal extremal for any (β, α) with
smaller sequence space as long as it is still admissible. This makes things much easier.
To begin with, we now know that balanced pairs are maximal extremal whenever
admissible, and from Chapter 4 we know precisely which balanced words are admissible
for which transformations.
Corollary 5.4.2. Balanced pairs (s, t) = (0-max(vr), 1-min(vr)) are maximal extremal
for (β, α) ∈ ∆ whenever δ(β, α) ≤ r ≤ γ(β, α).
In the case of the full shift (β = 2, α = 0) the holes formed from balanced pairs
will customarily have two distinct preimages, formed by prepending either a 0 or a 1
as a prefix to both endpoints of the hole. As 1· decreases or 0· increases, the preimage
formed by prepending a 1 or a 0 may become inadmissible and so a particular hole
may have a unique preimage. Because Jβ,α(a, b) is invariant under Tβ,α, this means
that all results pertaining to the original hole will also apply to its unique preimage.
Lemma 5.4.3. Suppose γ(β, α) ∈ [1/(n+1), 1/n) and 0 < k ≤ n. Let (s, t) =
(0-max(vr), 1-min(vr)) be the central maximal extremal balanced pair that corresponds
to some r ∈ (δ(β, α), γ(β, α)). Then the pairs (0^k-max(vr), 0^(k−1)1-min(vr)) are also
maximal extremal. Similarly, suppose δ(β, α) ∈ [(m−1)/m, m/(m+1)) and 0 < k ≤ m.
Let (s, t) = (0-max(vr), 1-min(vr)) be the maximal extremal balanced pair corresponding
to some r ∈ (δ(β, α), γ(β, α)). Then the pairs given by
(1^(k−1)0-max(vr), 01^k-min(vr)),
are also maximal extremal.
Proof. We show the first of these results; the second is much the same. Firstly, note
that if γ(β, α) ∈ [1/(n+1), 1/n), then we have that u∞γ begins 0^n1. This means the
above pairs do satisfy b > b0. Secondly, note that vγ is composed of blocks 10^(n−1) and
10^n. Therefore, each pair (0^k-max(vr), 0^(k−1)1-min(vr)) has a unique preimage given
by prepending a 0 until we reach k = n. It is only then that prepending a 1 would
be possible, giving two distinct preimages. Thus when k ≤ n, because Jβ,α(a, b) is
invariant under Tβ,α, the result holds.
We will generally refer to the pair (s, t) = (0-max(vr), 1-min(vr)) as the central bal-
anced pair corresponding to r, or (sr, tr). This is because it contains the discontinuity
and so is in this sense “central”.
Chapter 6
Rγ and Rδ
6.1 Introduction
At this point, we have used all the possible balanced pairs. For D0, we know the
situation below u∞γ and above v∞δ . Unfortunately, this leaves gaps. Consider the
following example.
Example 6.1.1. Consider 1· = (1100)∞, 0· = (00011)∞. Then γ(β, α) = 1/2 and
δ(β, α) = 1/3.
The smallest balanced pair is (010, 100), and we know (a, b) ∈ D0 whenever b <
u∞γ = (01)∞. Therefore, what happens when b ∈ ((01)∞, (100)∞)?
The largest balanced pair is (01, 10), and we know (a, b) ∈ D0 whenever a > v∞δ =
(100)∞. Therefore, what happens when a ∈ ((01)∞, (100)∞)?
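The words in this example can be checked with the standard mechanical-word formula (the helper names below are mine, and the formula itself is not quoted in this excerpt, so treat the construction as an assumption):

```python
def balanced_word(p, q):
    """Lower mechanical word of slope p/q: q letters, of which p are 1s."""
    return "".join(str((i + 1) * p // q - i * p // q) for i in range(q))

def rotations(w):
    """All cyclic shifts of w, sorted lexicographically."""
    return sorted(w[i:] + w[:i] for i in range(len(w)))

u_gamma = balanced_word(1, 2)                  # '01', so u∞γ = (01)∞
v_delta = rotations(balanced_word(1, 3))[-1]   # '100', so v∞δ = (100)∞
# The problem interval is the same for a and for b: (01)∞ < (100)∞.
assert (u_gamma * 6) < (v_delta * 4)
```

Comparing repeated prefixes of equal length is enough here, since the two periodic sequences differ in their first letter.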
It is no coincidence that this is the same problem interval for a and for b. This
will occur whenever we have simply central balanced pairs and no preimages. Should
preimages give maximal balanced pairs as described in the previous section, then we
will have distinct problem intervals for a and for b with one a preimage of the other.
In general, the regions that are neither too large, too small, nor covered by balanced
pairs are
Rγ(β,α) = (0u∞γ , 0^n10·) × (u∞γ , 0^(n−1)10·), and
Rδ(β,α) = (1^(m−1)01·, v∞δ ) × (1^m01·, 1v∞δ ),
where n and m are chosen so that γ(β, α) ∈ [1/(n+1), 1/n) and δ(β, α) ∈
[(m−1)/m, m/(m+1)). This
also means that in the greedy case, Rδ is empty, and in the lazy case, Rγ is empty.
There are also some exceptional cases where 1· or 0· happens to be particularly nice
which can result in these regions being empty — see the example of Section 9.1.
For Rγ, the region for b must always be admissible, and similarly for Rδ the region
for a must always be admissible. However, it could be the case that either of 0u∞γ or
1v∞δ is inadmissible. For now, we will simply assume that this is not the case and
these two sequences are admissible. In fact, in the greedy or lazy cases respectively
these sequences must be admissible and so we know there is no problem.
There is an intuitive explanation for why these regions are difficult. For a central
balanced pair, if we are sufficiently close to the discontinuity, then we are already in
the hole. This means that we only need to deal with one branch of Tβ,α at a time,
never both. However, for Rγ and Rδ we are forced to study the interval (u∞γ , v∞δ ) which
must contain the discontinuity (or preimage thereof). We cannot get away with just
considering one branch of Tβ,α at a time and have no option but to deal with both.
Naturally this means part of the interval maps very close to 0 and part maps very
close to 1, so we are dealing with widely separated intervals. This is the essential
difficulty.
For the doubling map (and the lazy and greedy cases in part) we are able to
approach the discontinuity using balanced words and thus avoid this issue, but this is
no longer an option: there are no balanced words arbitrarily close to the discontinuity.
We have used all the possible balanced words we can already and there are no more
to be found. For this reason the solution cannot involve balanced words.
In the tree of balanced words, those in between δ(β, α) and γ(β, α) are admissible
with no problems. However, for those below δ(β, α) or above γ(β, α), we may
try adding 1s or 0s respectively to make them bigger or smaller and thus hopefully
admissible. Consider the following example.
Example 6.1.2. Let 1· = (1100)∞ and 0· = (0001001)∞. Then γ(β, α) = 1/2 and
δ(β, α) = 2/7. Consider rationals outside of this range. The balanced word corre-
sponding to, say, 3/4, is 0111. This has three ones, and therefore trying to add 0s
on the end will not help and will not make the sequence admissible. However, con-
sider 3/5, with balanced word 01011. This is inadmissible. However, adding a 0 gives
010110 which is admissible. Likewise 4/7 gives 0101011, and 01010110 is admissible.
Indeed we can continue adding 0s until the sequence is too small. On the other side,
the word corresponding to 1/5 is 10000. This has four zeros, and so adding 1s to
the end will still fail to make this admissible. However, 3/11 gives 10010001000, and
100100010001 is admissible. This is as many 1s as is possible to add lest the sequence
become too large.
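The admissibility checks in this example are easy to mechanise: a periodic word is admissible when every cyclic shift of it lies lexicographically between 0· and 1·. A finite-horizon sketch (function names are mine; comparing only a fixed prefix is an assumption that differences appear early, which holds for these short words):

```python
def periodic(w, n):
    """First n letters of the periodic sequence w∞."""
    return (w * (n // len(w) + 2))[:n]

def admissible(w, zero_exp, one_exp, horizon=60):
    """Every cyclic shift of w∞ must lie between 0· and 1· lexicographically;
    zero_exp and one_exp are the periods of 0· and 1·."""
    lo, hi = periodic(zero_exp, horizon), periodic(one_exp, horizon)
    return all(lo <= periodic(w[i:] + w[:i], horizon) <= hi for i in range(len(w)))

one_exp, zero_exp = "1100", "0001001"    # 1· = (1100)∞, 0· = (0001001)∞
assert not admissible("0111", zero_exp, one_exp)    # shift 111... exceeds 1·
assert not admissible("01011", zero_exp, one_exp)   # shift 11010... exceeds 1·
assert admissible("010110", zero_exp, one_exp)      # appending a 0 repairs it
assert admissible("01010110", zero_exp, one_exp)    # likewise for 4/7's word
```

This reproduces the example: 0111 cannot be repaired, while 01011 and 0101011 become admissible once a 0 is appended.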
These will turn out to be the useful sequences. However, as demonstrated, this
will not work for all rationals. We therefore need to establish which rationals work,
what precisely we want to do to these rationals and their balanced words, and then
quite why the resulting words are indeed the maximal extremal pairs we are seeking.
Finally, we obviously wish to cover the entirety of Rγ and Rδ, and so need to establish
whether these sequences are sufficient to do this.
6.2 Primary trees
As described in Remark 3.3.2, we need only consider either Rγ or Rδ. The other has
effectively the same solution with maxima and minima switched as appropriate. We
choose to describe Rγ in this section, with a summary including Rδ at the end. We
also refer the reader to Chapter 9 for some complete examples which may be helpful
in understanding these results.
We begin by approximating γ(β, α) from above. The idea is to take the sequence
of right Farey parents1, and the approximation will be this sequence in reverse. When
γ(β, α) is rational, this sequence is finite, so to create an infinite sequence we take
γ ⊕ · · · ⊕ γ ⊕ γ2, with k γs. As k increases, this approximates γ(β, α) from above.
More formally, the sequence of right Farey parents can be found using the continued
fraction expansion of γ(β, α). Recall Proposition 2.2.5: given r = [0; a1, . . . , an] with
an > 1, the Farey parents of r are [0; a1, . . . , an−1] and [0; a1, . . . , an − 1]. If n is even
then the former is the right Farey parent; if n is odd then the latter is the right Farey
parent.
Thus, suppose γ(β, α) = [0; a1, . . . , aℓ] with aℓ > 1. If ℓ is odd, we take the sequence
[0; a1, . . . , aℓ−1, k] for 1 ≤ k < aℓ. At k = 1 we rewrite as [0; a1, . . . , aℓ−1 + 1]. ℓ − 1
is even, so the next right Farey parent is [0; a1, . . . , aℓ−2]. We then continue by taking
[0; a1, . . . , aℓ−3, k] with 1 ≤ k < aℓ−2. When we reach k = 1, we rewrite again and
1Meaning the right Farey parent γ2 of γ, then the right Farey parent of γ2, then the right Farey parent of this, and so on.
continue. Eventually we reach (n − 1)/n or 1/n for some n, at which point we stop.
The case with ℓ even is similar.
Essentially, we want [0; a1, . . . , a2n, k] for every 2n < ℓ and every 1 ≤ k < a2n+1.
This is the sequence of right Farey parents. If γ(β, α) is irrational then this is an
infinite sequence and we are done.
If γ(β, α) = [0; a1, . . . , aℓ] with aℓ > 1, then the sequence γ ⊕ · · · ⊕ γ ⊕ γ2 is given
by [0; a1, . . . , aℓ − 1, 1, k] if ℓ is odd and [0; a1, . . . , aℓ, k] if ℓ is even.
This gives our sequence of rationals r ↘ γ(β, α). Whenever we talk about rationals
ri approaching γ(β, α) from above, this is the sequence we will mean.
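The chain of right Farey parents can be computed directly. The sketch below uses the Stern–Brocot identity that the right parent c/d of p/q satisfies cq − dp = 1 (equivalent to the continued-fraction description above); for simplicity it runs the chain all the way to denominator 1 rather than stopping at 1/n or (n − 1)/n:

```python
from fractions import Fraction

def right_farey_parent(r):
    """The Farey parent c/d > r: the unique 0 < d < q with c*q - d*p = 1."""
    p, q = r.numerator, r.denominator
    d = next(d for d in range(1, q) if (d * p + 1) % q == 0)
    return Fraction((d * p + 1) // q, d)

def right_parent_chain(r):
    """r's right Farey parent, then that fraction's right parent, and so on."""
    chain = []
    while r.denominator > 1:
        r = right_farey_parent(r)
        chain.append(r)
    return chain

# Reversed, the chain approaches r from above: 1 > 1/2 > 2/5 > 3/8.
assert right_parent_chain(Fraction(3, 8)) == [Fraction(2, 5), Fraction(1, 2), Fraction(1, 1)]
```

Reading the computed chain in reverse gives exactly the approximating sequence r ↘ γ(β, α) described in the text.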
Given these ri, we create trees with left root 0 and right root uri . We call these
primary trees. Most of the words in these trees will be inadmissible. We claim that
admissible pairs from these trees are maximal extremal, and that with these specific
ri for greedy expansions we must have that these trees contain some admissible pairs.
Lemma 6.2.1. Suppose Tβ,α is greedy with 1· > v∞γ . Then we have that (ur0^k)∞ is
admissible for some k, where r ↘ γ(β, α).
Proof. Recall from Lemma 4.1.2 that in order to have γ(β, α) = γ, we have that
1· ∈ [v∞γ , vγ2v∞γ ]. For the case 1· = v∞γ , this is the quasi-greedy expansion. The
corresponding greedy expansion is given by 1· = vγ2^2 0∞. This means that whenever
γ(β, α) = γ and 1· > v∞γ , we have that 1· begins vγ2^2 0^k for some k ≥ 0.
By construction, we have that ur = uγ^n uγ2 for some n. The largest shift of this
will thus be at the end where we have uγ2^2 , which shifts to vγ2^2 . Thus by adding
enough 0s, we can make this word small enough to be admissible.
Notice that this also shows that for greedy transformations, the primary trees are
left infinite as we can concatenate infinitely many 0s. In fact, these primary trees
should be admissible for the majority of cases where 1· = v∞γ also, with the only
exceptions being the cases where Rγ is empty. As 1· = v∞γ almost nowhere, we will
not worry about this case.
Lemma 6.2.2. For any minimal cyclically balanced ur associated to r = p/q, consider
the pair (s, t) = (0ur0^(k−1), ur0^k), k ≥ 1. If we have (s∞, t∞) ∈ Rγ, then
Jβ,α[s∞, t∞] = ∅: that is to say, (s, t) is a maximal extremal pair.
Proof. Consider a point x ∈ (0, 1). We know by Proposition 5.3.2 that Jβ,α[0u∞γ , u∞γ ]
is empty, so we may restrict to x in this region. This region overlaps the hole under
consideration, so restrict x to [0u∞γ , s∞) = [0u∞γ , (0ur0^(k−1))∞). By applying the
shift map we map to [u∞γ , (ur0^k)∞) = [u∞γ , t∞). Thus every point has fallen in the
hole, with the exception of s∞ itself. Therefore the pair (s, t) is maximal extremal.
With a little extra effort we may prove that pairs (0ur^2, ur0ur) are also maximal
extremal, with caution needed only because here σs∞ ≠ t∞. This has the effect
that after shifting once we may restrict to [t∞, σs∞), whence applying σ^q has us back
on track. However, this is only needed rarely when r = γ2, because usually by the
construction of r ↘ γ(β, α) we have that ur^2 is inadmissible.
Lemma 6.2.3. The admissible pairs (s, t) from the primary trees as defined above are
maximal extremal.
Proof. The previous lemma shows that the first level of pairs in a tree are maximal
extremal whenever admissible. Then we can use the tree structure to extend this
to the more complex pairs. This is the same method used for balanced words in
Lemma 2.3.8. With this in mind, suppose two neighbouring pairs (s1, t1) and (s2, t2) are
admissible and maximal extremal. We show that their child (s, t) = (s2s1, t1t2) is
maximal extremal. To show maximality of (s, t), we aim to show that Jβ,α[s∞, t∞] = ∅.
Consider x ∈ (0, 1). We know by maximal extremality that Jβ,α[s∞1 , t∞1 ] = ∅, so we
may restrict to x ∈ [s∞1 , t∞1 ]. We know by the tree construction that s∞1 < s∞ and
t∞1 < t∞, so we may restrict to x ∈ [s∞1 , s∞). Then σ^|s1|(x) ∈ [s∞1 , t∞], so restrict
again. Continuing this process, we see that the only possible point avoiding [s∞, t∞]
must be s∞1 . But then this shifts to t∞1 ∈ [s∞, t∞].
We know trees make good devil’s staircases and within one individual tree we cover
all possibilities and leave no gaps. Thus the remaining question is whether these trees
as defined are enough: are there any gaps in between distinct trees?
We claim that in the case of the greedy map, the answer is no: the largest pair
from one tree butts up neatly against the smallest pair from the next tree.
Lemma 6.2.4. Suppose Tβ,α is greedy. Suppose r ↘ γ(β, α). Let (s, t) be the pair
from the tree with roots 0 and ur such that s∞ is admissible and st∞ is inadmissible.
Then st∞ truncates to 0ur2.
Proof. We know that s∞ is admissible so clearly the sequence st∞ becomes inadmissible
with the very first t. Therefore, to be admissible we should truncate at the maximal
shift of s and replace the preceding 0 with a 1. We know that s begins 0ur and must be
the maximal shift beginning this way, so consider ur = u1 . . . uq = ur1ur2 . There exists
k such that σkur begins vr2 , which will be the point at which to truncate the sequence.
Then we claim that we have u1 . . . uk−2 1 = ur2 . To see this, recall Proposition 2.2.6:
we may write ur = 0wr1 and vr = 1wr0 for some central word wr. Then we have
ur = 0wr1 10wr2 1 = 0wr2 01wr1 1,
vr = 1wr2 01wr1 0 = 1wr1 10wr2 0.
Therefore we have k = |0wr2 0|, giving u1 . . . uk−2 1 = 0wr2 1 = ur2 as required. This
proves the lemma.
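The factorisations used in this proof are the standard-word identities wr = wr1 10 wr2 = wr2 01 wr1, and they can be checked numerically for any rational. A sketch for r = 2/5 (helper names are mine; it uses the mechanical-word formula for the minimal word ur = 0wr1 and the Farey-parent identities pb − aq = 1, cq − dp = 1):

```python
from fractions import Fraction

def christoffel(r):
    """Lower Christoffel word of slope r = p/q (the minimal balanced word ur)."""
    p, q = r.numerator, r.denominator
    return "".join(str((i + 1) * p // q - i * p // q) for i in range(q))

def farey_parents(r):
    """Left and right Farey parents a/b < r < c/d of a reduced fraction r."""
    p, q = r.numerator, r.denominator
    b = next(b for b in range(1, q) if (p * b - 1) % q == 0)
    d = next(d for d in range(1, q) if (p * d + 1) % q == 0)
    return Fraction((p * b - 1) // q, b), Fraction((p * d + 1) // q, d)

r = Fraction(2, 5)
left, right = farey_parents(r)      # 1/3 and 1/2
w  = christoffel(r)[1:-1]           # wr  = '010'
w1 = christoffel(left)[1:-1]        # wr1 = '0'
w2 = christoffel(right)[1:-1]       # wr2 = '' (empty word)
assert w == w1 + "10" + w2 == w2 + "01" + w1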
Recall that in the construction of the sequence r ↘ γ(β, α), we have that each r
is the right Farey parent of the next. This means that the tree above the one formed
from a particular r is formed by r2. As the expansion is greedy, every tree is left
infinite. Therefore, the tree formed from 0 and ur2 covers down to 0ur2 0∞, which
is exactly the largest possible point from the ur tree. This means that in the greedy
case we have covered everything above b0 with no gaps. This concludes the material
presented in my paper [9].
6.3 Secondary trees
In the general intermediate case, Lemma 6.2.4 still works, just ending with 0· instead of
0∞: for the largest pair from the tree with roots 0 and ur, we have that st∞ truncates
to 0ur2 0·. Similarly, it is easy to see that for the smallest pair from the tree with roots
0 and ur2 , we have that ts∞ truncates to ur20·. This thus looks promising: one is the
shift of the other, which is exactly what we should expect for one tree to finish neatly
adjacent to the next.
At this point we need some better notation. We are dealing with the interaction
between two neighbouring trees. Let the largest admissible pair from the lower tree be
given by (s1, t1). This is the tree with roots 0 and ur, and we have s1t∞1 inadmissible
and truncating to 0ur20·. Let the smallest admissible pair from the upper tree be given
Figure 6.1: (a) depicts the greedy case, where there are no gaps as the top primary
tree is left infinite. (b) depicts the gaps left from primary trees. Here (s1, t1) is the
largest pair from the tree of 0 and ur1 and (s2, t2) is the smallest pair from the tree of
0 and ur2 . The rest of these primary trees, although not shown, would be respectively
at the bottom left and the top right.
by (s2, t2). This is the tree with roots 0 and ur2 , and we have t2s∞2 inadmissible and
truncating to ur20·.
The reader may infer from the introduction of additional notation that despite the
promising appearance, there is in fact an issue here. This would be correct. Even
though the truncation of t2s∞2 is the shift of the truncation of s1t∞1 , the problem is
that we now have no knowledge of what happens to a ∈ (s1t∞1 = 0ur2 0·, s∞2 ) and
b ∈ (t∞1 , ur2 0· = t2s∞2 ). This is depicted in Figure 6.1. In the greedy case the situation
is avoided simply by having s2 = 0ur20· due to the left infinite tree, and thus there is
no gap for a, leaving just a jump for b.
Having established that for intermediate β-transformations there are still more
gaps to fill, we look for maximal extremal pairs in these gaps.
An initial approach following previous methods may be to combine the balanced
words from the neighbouring trees in question, and then create a tree with this com-
bination as the right root and 0 as the left root. Unfortunately this does not work
because the resultant words are inadmissible. The balanced words r ↘ γ(β, α) in
the limit are of the form rn = uγ^n uγ2 , and if initially they are not of this form then
they must be larger. Combining neighbouring such words gives uγ^n uγ2 uγ^(n−1) uγ2 . This
has maximal shift greater than vγ2 vγ^(n−1) vγ2 . However, we know that 1· ≤ vγ2 v∞γ . This
means the words suggested are inadmissible and this idea will not work.
The solution is instead to define what we will call secondary trees. As described
above, given r ↘ γ, consider rn and rn−1. Take the largest admissible pair from
Tree(0, urn) and denote this pair (s1, t1). Take the smallest admissible pair from
Tree(0, urn−1) and denote this pair (s2, t2). Then make a tree with these two pairs
as roots, combining in the usual form so that the first child pair is (s2s1, t1t2). This
notation will be used throughout this section. For an example of creating secondary
trees, we refer the reader to Section 9.3.1.
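The combination rule is the same Stern–Brocot-style insertion used throughout: between two neighbouring pairs, insert their concatenated child. A minimal sketch with placeholder strings standing in for the four root words (the real s1, t1, s2, t2 are long admissible words from the primary trees):

```python
def tree_level(pairs):
    """Insert between each pair of neighbours (s1, t1), (s2, t2) their child
    (s2 + s1, t1 + t2), as in the secondary-tree construction."""
    out = [pairs[0]]
    for (s1, t1), (s2, t2) in zip(pairs, pairs[1:]):
        out.append((s2 + s1, t1 + t2))
        out.append((s2, t2))
    return out

roots = [("s1", "t1"), ("s2", "t2")]      # placeholder words
level1 = tree_level(roots)                # middle entry: ('s2s1', 't1t2')
level2 = tree_level(level1)               # inserts ('s2s1s1', 't1t1t2'), etc.
```

Iterating `tree_level` generates the whole tree below the two root pairs, exactly as for the balanced-word and primary trees.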
Conjecture 1. If s2s1 and t1t2 as defined above are both admissible, then these are
cyclic permutations of one another and form a maximal extremal pair. Moreover,
should this first pair be admissible, we have that the entire tree is admissible and
covers all of the gap between the two root pairs.
We now explain why this is a conjecture, both in terms of why I believe it to be
correct and why it is as yet unproven.
The first thing to note is the assumption that s2s1 and t1t2 are both admissible.
This is not always the case. It is not even clear precisely when this is the case. Some
examples and possibilities for when one or both are inadmissible will be discussed
later.
There are very few possible words that fall into the gap at all, and so to make
periodic points, combining the bounding words is the most obvious thing to do. Tree
structures are a very normal feature, as seen in the previous sections, and behave so
sensibly with maximal extremal pairs that it would be far more surprising not to find
a tree.
The primary reason to make this conjecture is simply that for every example I have
tried where the assumption holds, the conjecture is correct. One creates an example,
writes down the relevant pairs, and then tries to find any words avoiding the hole.
With a great deal of care, one tries every possibility and finds that all words must fall
in.
The examples themselves are by this point somewhat difficult, as exhibited in Section 9.3.1.
To create an example, we need the expansions of 0 and 1, then γ(β, α) and δ(β, α).
From this we approximate γ(β, α) from above, write down the primary trees, and then
take the largest and smallest admissible pairs from neighbouring trees. By the time
we have got to the stage of secondary trees, the words are long: length 15 was the
shortest non-trivial example I could find. This means it is very easy to make mistakes.
Writing a 0 instead of a 1 causes metaphorical chaos. Getting the approximation of
γ(β, α) wrong causes chaos. Accidentally marking something as admissible when it
isn’t causes chaos. The shortest examples are naturally the ones with γ(β, α) = 1/2
and δ(β, α) = 1/3 or similar, but these are likely to be simpler, non-general examples
because things involving 1/n are usually simpler. Using anything other than 1/n
makes the words longer, which makes the examples more fiddly, which makes mistakes
even more likely. The one saving grace is mainly that practice makes perfect: enough
examples and the entire procedure becomes routine.
This brings us to the main difficulty in constructing a potential proof. In essence,
writing down a generic s1 and s2 is not practical. We know these words involve urn
and urn−1 . We know that the ri are derived from γ(β, α). We know that one is the
right Farey parent of the other, and so can write urn in terms of urn−1 . We know there
are miscellaneous 0s. However, we do not know precisely what s1 and s2 are. These
pairs are incredibly close to (shifts of) the discontinuity, and so approximate the
expansions of 0 and 1 so closely that the only way I am aware of to work out what
s1 and s2 are is simply by example. There is just no convenient method of writing
down “the largest admissible pair of the nth primary tree” for a generic 1· and 0·.
It should be evident that it is also not feasible to simply glance at 1·
and 0· and know what these words should be. Furthermore, due to the difficulty of
constructing examples I have not completely ascertained when the assumption of the
conjecture holds, which adds an additional complication.
Setting the previous paragraph aside, it is nevertheless still clear what the method
of proof should be—mainly because the method of proof for these types of results is
always broadly the same. The tactic is that we know (s∞1 , t∞1 ) and (s∞2 , t∞2 ) are traps.
Therefore we consider x in one of these traps – the left usually is easier. If x is already
in the hole, we are finished. If not, we aim to shift x by some suitable amount k so
that σkx is once again in the original trap. Then we simply iterate this process. The
idea is that the quantity of xs left should shrink by a factor of β each time, meaning
that the only possible x that needs this process iterating forever must be s2s1 itself.
Unfortunately as mentioned in the beginning of this chapter, Rγ is precisely the
inconvenient region where applying a shift requires both branches of the map at once.
For primary trees, we got around this mainly by the base cases being easy: the words
have a nice structure, and often σs∞ = t∞. Here, as stated we do not know the
structure. This means we don’t know what k to use. Using |s1| or |s2| seems a safe
bet, but it is not clear why this should work, and indeed it does not work all the time.
This is the problem.
Needless to say, as stated the conjecture holds for every example I have tested.
Furthermore, if this result is not true, then the other possibilities must primarily
be to combine s1 and s2 in some alternative way: there simply are not many other
periodic points in the correct place, and we are still strongly in favour of tree structures.
Essentially, if this conjecture is not correct, then the alternative must be truly strange.
This conjecture, if proven, would solve the problem for D0. However, there is the
assumption that s2s1 and t1t2 are both admissible. This is not immediately obvious,
and turns out to not always be true, so currently we have only conjecturally solved D0
part of the time. To make matters worse, it turns out that these pairs cannot give D1
or D2.
Conjecture 2. If (s, t) = (s2s1, t1t2) formed from a secondary tree is a maximal
extremal pair, then we have that both st∞ and ts∞ are inadmissible.
Proof. This conjecture applies to every maximal extremal pair formed from a sec-
ondary tree. However, we can actually prove this result for the simple cases where we
have (s1, t1) = (0uri^n , uri 0uri^(n−1)) and (s2, t2) = (0uri−1 0^(m−1), uri−1 0^m) for some
n, m. By assumption, s1t∞1 and t2s∞2 are inadmissible. Then notice that s1t1 begins
0uri^(n+1), which by assumption must be inadmissible (or (s1, t1) would not be the
largest admissible pair from its tree). Similarly, t2s2 would begin uri−1 0^(m+1), which
must be inadmissible, otherwise (s2, t2) would not be the smallest pair from its tree.
This means that st and ts must both be inadmissible.
This is a serious problem. Because we can never have st or ts, there is no possibility
for any Farey descendants either. This is the first time it has happened that we have
an entire tree which is perfectly valid for D0 but offers no possibilities for D1 or D2.
This situation is wholly new and will hence need a new solution.
Conjecture 3. If (s, t) = (s2s1, t1t2) formed from a secondary tree is a maximal
extremal pair, then {s1, s2}N avoids (s∞2 , t∞1 ).
Proof. We can prove this result subject to the previous Conjecture 1: that the entire
tree with roots (s1, t1) and (s2, t2) is admissible. This implies that s1s2^n and s2s1^n are
both admissible for any n, and so the shift {s1, s2}N is admissible.
The only shifts we need to worry about are those beginning with an s or a t, as
anything else is clearly too large or too small to be near the hole. Then as we have
s2 > s1, the largest word beginning s must be s∞2 . Notice also that if s2s1 and t1t2 are
cyclic permutations then we have {s1, s2}N = {t1, t2}N. Thus similarly the smallest
word beginning with a t must be t∞1 .
This subshift {s1, s2}N will have positive Hausdorff dimension and so will give D1.
However, the lengths of s1 and s2 are not necessarily coprime. In previous cases
we would write s1 = uv and s2 = vu for some words u, v, but here s1 is not a
cyclic permutation of s2, so we cannot do this. Thus this still may not complete the
description of D2.
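The positive-dimension claim can be made concrete. If concatenations of s1 and s2 parse uniquely, the number of words in {s1, s2}N of length n satisfies a_n = a_{n−|s1|} + a_{n−|s2|}, so the exponential growth rate is the root λ > 1 of λ^(−|s1|) + λ^(−|s2|) = 1, giving entropy log λ and Hausdorff dimension log λ / log β. A sketch with hypothetical lengths 5 and 7 (the unique-parsing assumption is mine):

```python
import math

def coded_growth_rate(p, q, tol=1e-12):
    """Largest λ in (1, 2) with λ**-p + λ**-q == 1, found by bisection."""
    lo, hi = 1.0, 2.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if mid ** -p + mid ** -q > 1:   # too much mass: the root is larger
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

lam = coded_growth_rate(5, 7)     # hypothetical |s1| = 5, |s2| = 7
entropy = math.log(lam)           # positive, so the avoidance set is 'large'
```

Any coprime or non-coprime pair of lengths gives λ > 1, which is why {s1, s2}N always yields D1 even when it fails for D2.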
To find D2, as we cannot follow the usual methods of combining s1 and t1 or s2 and
t2 due to any such combinations being inadmissible, we instead mix and match these
words in such a way as to avoid these forbidden incidences. The simplest options are
thus to combine as (s2t1, t1s2) or as (s1t2, t2s1). Recalling that we want the smallest
a and largest b, the first of these should work better as s1 < s2. This is hence where
we start. As per usual, we expect there to be trees, and hence consider the entire tree
with roots s2 and t1. We claim this is the correct thing to do.
Conjecture 4. Let (s, t) be a pair from the tree with roots s2 and t1. Then the
avoidance set of [s∞, t∞] is {s1, s2}N. Furthermore, both st∞ and ts∞ are admissible.
This would thus give D2, and experimentally appears to work. However, this does
not fill the entire region from s∞1 to s∞2 .
Conjecture 5. The pairs from the trees with roots s1t1^n, t2 and the trees with roots
s1, t2s2^n, n ∈ N are also admissible, have st∞ and ts∞ admissible, and give the boundary
of D2.
Once again, these appear to work and give D2, but still do not fill all of the gaps.
The first set of trees with roots s1t1^n and t2 appears to be left infinite; the second set
appears to be right infinite. We have no further conjectures regarding this situation:
the secondary trees themselves were already complicated, and this is more so. From
a limited number of examples, it seems sensible to simply fill in the gaps with more
trees made from the very few words available, but this appears never to fill in all of
the gaps. It is possible that infinite layers of trees could be needed, but this is mostly
speculation.
We return now to the “basic” secondary trees. We make the following conjecture
regarding the situation where one of s2s1 and t1t2 is not admissible.
Conjecture 6. Suppose s2s1 is admissible and t1t2 is inadmissible. Then consider the
orbit of (s2s1)∞. Choose the point w∞ in this orbit that is greater than and closest
to (s2s1)∞. We claim that (s2s1, w) is a maximal extremal pair. Similarly if t1t2 is
admissible and s2s1 is inadmissible, take the lower neighbouring point w∞ on the orbit
of (t1t2)∞, and we claim that (w, t1t2) is a maximal extremal pair.
Once again, this works experimentally, but is immensely difficult to even write
down in full generality. This scenario is illustrated in Section 9.3.2.
Conjecture 7. In this case the resultant pair will give D2, but only on one side. This
leaves potentially infinitely many gaps.
These gaps are possibly filled in a similar way to above, by combining the words
s1, t1, s2, and t2, into trees in the most obvious way possible.
We fortunately do have one more positive conjecture:
Conjecture 8. At least one of s2s1 and t1t2 must be admissible.
This appears in examples to be correct, meaning that we now have conjectures for
every possible situation, but as previously discussed the only practical examples may
be unusually nice. It could be possible that both words are inadmissible: if this is
possible, then the solution is not at all clear.
6.3.1 Summary
As has been described in this section, the jump from greedy to intermediate transfor-
mations with a hole is decidedly non-trivial, and these secondary trees are the reason
why. The problem and its provisional solution in the form of secondary trees are a new
phenomenon not seen in greedy transformations, and even primary trees are absent
for the doubling map.
Although this section is largely unproven, I remain confident that the solution is, if
not exact, certainly along these lines. We must also emphasise that the regions where
secondary trees are needed are actually tiny. In any pictures showing Di in [0, 1]2,
even the primary trees are very small, and the secondary trees are far too small to see
without zooming in several times. It remains immensely odd that such tiny regions
should exhibit such strange behaviour. Despite this confusion, the method of using
trees by now is clear. This means that although there may still be gaps within these
regions, it should be possible to simply make trees from the surrounding words to fill
the gaps. This may have to be repeated infinitely, but in general, one simply makes
trees wherever possible.
Furthermore, it is interesting to note that in these regions the words are very
close approximations to (shifts of) the expansions 0· and 1·. This also illustrates why
examples are so difficult: whilst still continuous, a very small variation in 0· or 1·
makes a comparatively large variation in the admissible words for secondary trees.
This makes patterns difficult to see, and proofs difficult to write down. It also raises
questions as to the significance of these words: why are these particular approximations
to 0· and 1· the correct ones? Are these approximations the best in some sense? It is
easy to say that these words are important because they are maximal extremal pairs,
but it feels as though this still somewhat misses the point: why these words? They are
the ones that appear to work, but why? Whether this question actually has a sensible
answer remains to be seen.
Chapter 7
Small b and large a
Recall that we defined bi(β, α) to be the unique value such that [0, bi) ∈ Di(β, α)
and [0, bi] /∈ Di(β, α). Similarly we set ai(β, α) to be the unique value such that
(ai, 1] ∈ Di(β, α) and [ai, 1] /∈ Di(β, α). Then we know that for any hole (a, b) with
either b < bi or a > ai, we have (a, b) ∈ Di(β, α).
In Section 5.3, we established that when γ(β, α) and δ(β, α) are rational – so almost
every case – we have
b0 = u∞γ , a0 = v∞δ ,
b2 = uγ1u∞γ , a2 = vδ2v∞δ .
This enabled us to deal with the central region and know that then the description of
D0 would be complete. However, it was mentioned that b1 is more complicated. In
this chapter we return to this question. As ever we need only study the bi as the ai
are simply the same thing with 0s and 1s and maxima and minima switched.
This topic is the source of the error in my paper [9]. This paper covered only the greedy case,
meaning only the bi are an issue because a0 = a1 = a2 = 10· = 10∞. The problem was
that I did not realise that it was possible for b1 and b2 to be distinct. Furthermore, I
thought the border of D2 and D1 was just flat between b2 and b0, as opposed to there
being for example (a, b), b ∈ (b2, b0) with (a, b) ∈ D2. This is depicted in Figure 7.1.
Figure 7.1: (a) depicts the incomplete situation as described in my paper [9]. (b) depicts the actual situation. Here D0 is light grey, D2 is dark grey, and D1 is the white in between. The trees from Rγ are visible in the top right of each figure. Note that in the trees, there will be a white part of D1 in each corner between D0 and D2, but this is too small to see.
7.1 b1(β, α)
We are looking for b1 such that dimHJβ,α[0, b1] = 0 and dimHJβ,α[0, b1 − ε) > 0. In
other words, for any b < b1, we have (a, b) ∈ D1(β, α). Essentially b1 is a function of
1·, and to a lesser extent 0·. For the time being we shall take 0· = 0∞ as this is quite
complicated enough. As mentioned in Section 5.3, any case where b2 is not admissible
must in fact be non-transitive, and so will be discussed in Chapter 8.
Recall that γ(β, α) = γ whenever 1· ∈ [v∞γ , vγ2v∞γ ], where γ1 < γ2 are the left and
right Farey parents of γ, satisfying γ1⊕ γ2 = γ. Recall that the minimal and maximal
cyclic permutations of the balanced word wγ satisfy uγ = uγ1uγ2 and vγ = vγ2vγ1
respectively. It was also shown in Section 5.3 that given a periodic point w∞ =
min(w∞) ∈ (b2, b0), we have that w must be composed of blocks of the form
uγ1(uγ1uγ2)nuγ2(uγ1uγ2)m,
for some (possibly infinite) n and m.
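The Farey mediant ⊕ pairing the parents γ1 and γ2 is simple integer arithmetic, and small cases are easy to check by machine. A minimal illustrative sketch (standard arithmetic, not part of the thesis machinery; the function name `mediant` is my own):

```python
from fractions import Fraction

def mediant(p: Fraction, q: Fraction) -> Fraction:
    """Farey mediant: a/b (+) c/d = (a + c)/(b + d), for fractions in lowest terms."""
    return Fraction(p.numerator + q.numerator, p.denominator + q.denominator)

# The Farey parents 1/2 and 2/3 combine to gamma = 3/5, as in Example 7.1.5.
print(mediant(Fraction(1, 2), Fraction(2, 3)))  # -> 3/5
```

Note that `Fraction` keeps fractions in lowest terms automatically, so the mediant is well defined here.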
As has been said, b1 turns out to be surprisingly unpleasant. Whereas b0 and b2
depend only upon γ(β, α) and are fixed for a given γ(β, α), b1 is far worse and is not
constant for a fixed γ(β, α): in fact it varies in a very complicated way. We begin with
the easiest case.
Lemma 7.1.1. We have b1 = uγ1u∞γ whenever 1· ∈ [v∞γ , vγ2vγvγ1v∞γ ].
Proof. Consider subshifts of the form
{uγ1unγ | n large}.
These are admissible whenever 1· ≥ v∞γ , therefore b1 ≥ uγ1u∞γ whenever 1· ≥ v∞γ .
We know that Jβ,α[0, u∞γ ] .= {u∞γ }. Hence consider x ∈ (uγ1u∞γ , u∞γ ).
Then either x contains too many 0s in a row and so falls into (0, uγ1u∞γ ), or x is of
the form
x = unγuγ1umγ uγ2uγ1u∞γ ,
with possibly n = ∞ or m = ∞. These are the only possibilities that do not involve
more 0s: the first uγ1 forces the following umγ uγ2 as any other possibility is too small.
This uγ2 then forces the following uγ1 because anything else would be inadmissible.
Then we must have umγ lest the point is too small, but we cannot repeat the previous
trick of having a uγ2 to escape because this would again be inadmissible.
This thus shows that if 1· ≤ vγ2vγvγ1v∞γ then we have Jβ,α[0, uγ1u∞γ ] .= {u∞γ }, and so b1 ≤ uγ1u∞γ as required.
Define the following words:
u0 = uγ1uγuγ2 ,
un = uγ1uγuγ2unγ ,
u−n = uγ1un+1γ uγ2 .
These are all minimal shifts. With these definitions we have
uγ1u∞γ ↙ . . . < u−2 < u−1 < u0 < u1 < u2 < . . . ↗ uγ1uγuγ2u∞γ .
The corresponding maximal shifts are given by
v0 = vγ2vγvγ1 ,
vn = vγ2vn+1γ vγ1 ,
v−n = vγ2vγvγ1vnγ .
Similarly these satisfy
vγ2vγvγ1v∞γ ↙ . . . < v−2 < v−1 < v0 < v1 < v2 < . . . ↗ vγ2v∞γ .
This is precisely the region we need to cover.
Lemma 7.1.2. For i ≠ 0, we have b1 = ui−1u∞i whenever 1· ∈ [v∞i , vi+1vi−1v∞i ).
Proof. Similar to the above. Firstly notice that whenever 1· ≥ v∞i , we have that words
of the form
{uiuni+1 | n large},
are admissible and for suitable n and ε > 0 will avoid [0, ui−1u∞i − ε). Therefore
b1 ≥ ui−1u∞i for all 1· ≥ v∞i .
Suppose 1· = vi+1vi−1v∞i . Consider x ∈ (ui−1u∞i , u∞γ ). Either x contains too many
0s in a row and so falls in the hole, or x is one of the following:
x = unγu∞i ,
x = unγui−1umi ui+1ui−1u∞i ,
x = unγumi ui+1ui−1u∞i ,
x = unγumi ui−1uℓiui+1ui−1u∞i .
The key here is that the first instance of ui+1 forces the remainder of the sequence.
Essentially ui−1 forces the following sequence to be large, whereas ui+1 forces the
following sequence to be small. Between the value for b1 and the maximum for 1· this
takes away all the options, leaving Jβ,α[0, ui−1u∞i ) .= {u∞γ } ∪ {u∞i }.
Lemma 7.1.3. For i = 0, we have b1 = u−1u∞0 whenever 1· ∈ [v∞0 , v1vγv∞0 ).
Proof. Similar to the above. The difference is that given the sequence u1u0u∞−1, the maximal shift of this is actually v1vγv∞0 rather than v1v−1v∞0 .
The next step is to create a tree as per usual with roots ui and ui+1. We are
wanting the minimal shifts, so we concatenate the smaller word first: the first child is
thus uiui+1.
Lemma 7.1.4. Consider u from a tree with roots ui and ui+1 for some i. Suppose u has left parent uℓ and right parent ur. Let the corresponding maximal shifts of these words be denoted v, vℓ and vr respectively. Then b1 = uℓu∞ whenever 1· ∈ [v∞, vrvℓv∞).
Proof. The same as those above; replace ui−1 with uℓ, ui with u, and ui+1 with ur.
The result has been shown for the root words, that is for the actual ui, giving us the
base case, and so by induction the lemma is proved.
The final step is the most interesting. The issue is that there may still be gaps.
We must check that we have covered all possible 1·, and if our result is to be remotely
sensible then we expect b1 to be a devil’s staircase as per usual. We will describe the
i 6= 0 case: the i = 0 is much the same just using uγ instead of u−1 as in Lemma 7.1.3.
Similarly the following can also be done using a word u from a tree, with parents ur
and uℓ as in Lemma 7.1.4.
Consider b1 = ui−1u∞i , i ≠ 0, so 1· ∈ [v∞i , vi+1vi−1v∞i ). Then consider sequences of
b1 from above and below. We want that the corresponding sequences of 1· intervals
tend to the endpoints of the interval [v∞i , vi+1vi−1v∞i ): this would mean that all possible
1· were covered.
From above, we have tree elements of the form uni ui+1. This gives
b1 = ui(uni ui+1)∞ ↘ u∞i .
The corresponding intervals for 1· have left endpoints 1· = (vi+1vni )∞ ↘ vi+1v∞i .
From below, we have tree elements of the form ui−1uni . This gives
b1 = ui−1un−1i (ui−1uni )∞ ↗ ui−1u∞i .
The corresponding intervals for 1· have left endpoints 1· = (vni vi−1)∞ ↗ v∞i .
This means that there is a problem. From below, the intervals line up nicely,
but from above, we do not know what happens when 1· ∈ [vi+1vi−1v∞i , vi+1v∞i ) or b1 ∈ [ui−1u∞i , u∞i ).
To fill this gap, notice that using ui and ui+1, we want to cover, and have partially
covered, 1· ∈ [v∞i , vi+1v∞i ) and b1 ∈ [ui−1u∞i , u∞i ].
Recall that the original problem was to establish the details of when 1· ∈ [v∞γ , vγ2v∞γ ]
and b1 ∈ [uγ1u∞γ , u∞γ ]. To solve this we defined the ui and trees between them.
This smaller problem is now exactly a microcosm of our original problem. There-
fore, we repeat the solution inductively, and define “(ui)j” by using ui instead of uγ,
ui−1 for uγ1 and ui+1 for uγ2 . This makes our previous work with the ui a base case
for induction. Repeat all proofs for the (ui)j using the substitutions described.
Note that the same problem will arise again for every (ui)j, and the same problem
will also arise for every word u formed from a ui, ui+1 tree. This process will hence
continue infinitely, giving a surprisingly complicated structure.
Example 7.1.5. Consider:
1· ∈ [(11010)∞, 110(11010)∞], 0· = (000011)∞,
γ(β, α) = 3/5 = 1/2⊕ 2/3, δ(β, α) = 1/4.
Then we have
γ1 = 2/3, γ2 = 1/2,
b0 = u∞γ = (01011)∞, b2 = uγ1u∞γ = 01(01011)∞.
The ui for γ(β, α) = 3/5 are given by
· · · < 010101101011011 < u0 = 0101011011 < 010101101101011 < . . . ,
with corresponding vi given by
· · · < 110110101011010 < v0 = 1101101010 < 110110101101010 < . . . .
Thus, for example, b1 = u−1u∞0 = 010101101011011(0101011011)∞ whenever
1· ∈ [v∞0 , v1vγv∞0 ] = [(1101101010)∞, 110110101011010(1101101010)∞].
This concludes the description of b1.
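The words in Example 7.1.5 can be spot-checked mechanically: u0 and v0 should be the lexicographically minimal and maximal cyclic shifts of the same cyclic word, and likewise uγ = 01011 and vγ = 11010 for γ = 3/5. A small illustrative sketch (the helper names are mine):

```python
def cyclic_shifts(w: str) -> list[str]:
    """All rotations of the finite word w."""
    return [w[i:] + w[:i] for i in range(len(w))]

def min_shift(w: str) -> str:
    """Lexicographically minimal cyclic shift of w."""
    return min(cyclic_shifts(w))

def max_shift(w: str) -> str:
    """Lexicographically maximal cyclic shift of w."""
    return max(cyclic_shifts(w))

u0 = "0101011011"
print(min_shift(u0))       # 0101011011  (u0 is already minimal)
print(max_shift(u0))       # 1101101010  (this is v0)
print(min_shift("11010"))  # 01011       (u_gamma for gamma = 3/5)
print(max_shift("01011"))  # 11010       (v_gamma)
```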
7.2 Trees for small b and large a
The final question is what happens for b ∈ (b2, b0) and a ∈ (a0, a2). Whilst we know
any such points must be in D0, it still remains to be seen whether for example, there
exist (a, b) ∈ D2 with b2 < b < b0. This is depicted in Figure 7.1, and was the key
mistake in my paper: I optimistically assumed that the point (0b2, b0) was on the boundary of D2, and did not realise that it was possible to have b1 > b2. This section therefore studies holes (a, b) with b2 < b < b0 or
a0 < a < a2. As per usual, we will study the former scenario and note that the latter
works in the same way.
Unfortunately this is immediately different to the scenarios in previous chapters.
We can no longer simply look for maximal extremal pairs: any holes with b < b0 are
already in D0 and are hence by definition not maximal.
Instead, we look for pairs (s, t) such that Jβ,α[s∞, t∞].= {u∞γ }. Then the results
of Theorem 5.2.5 involving D1 and D2 will still hold: the same useful subshifts will
be admissible. This should be possible for b1 < b < b0. For b2 < b < b1 this becomes
more difficult again: we must search for pairs (s, t) = (uv, vu) such that |u| and |s|
are coprime, which is what is lacking in the shifts used for b1 itself. This makes proofs
somewhat more difficult, essentially because it is easier to show that nothing avoids a
hole than it is to show that some things avoid a hole but not the things we want. The
situation is also complicated by the fact that, just as for secondary trees, the sequences
we have in this region tend to have fairly long periods. This means it is very easy to
make mistakes in examples, and very difficult to spot those mistakes: even a correct
example becomes very fiddly to follow along. Due to these issues and a lack of time,
parts of this section are therefore conjectural and unconfirmed, and there may be some cases missing.
Recall again that it was shown in Section 5.3 that given a periodic point w∞ =
min(w∞) ∈ (b2, b0), we have that w must be composed of blocks of the form
uγ1(uγ1uγ2)nuγ2(uγ1uγ2)m,
for some n and m. Note that these words have period k|uγ| for some k, and are thus
not enough to give D2. In finding b1 we have also in effect shown which of these words
are admissible for which 1·. We will also assume that 0b2 is admissible. There is no
particular reason that this should be the case, but the situation is complicated enough
even with this assumption.
With this in mind, we claim that the solution is to take periodic points that are
inadmissible, and tree these with 0 to make admissible words. This is a similar idea to
the methods used for Rγ and Rδ in Chapter 6.
Lemma 7.2.1. The pairs (s, t) = (0ui, ui0) with ui as defined in Section 7.1 satisfy
Jβ,α(s∞, t∞).= {σns∞} ∪ {σnu∞γ } whenever admissible with t∞ > b1.
Proof. Notice that σs∞ = t∞, and b2 < t∞ < b0 as required. We know that
Jβ,α(0b1, b1).= {σnu∞γ }. Therefore every other point falls into (0b1, b1). Then, for
any x ∈ (0b1, s∞), we have that σx ∈ (b1, t∞) ⊂ (s∞, t∞) as needed.
This also holds for pairs (s, t) = (0ui0k−1, ui0k) provided they are admissible, and
can be easily altered to work for admissible (s, t) = (0uki , ui0uk−1i ) as well. This gives
the base of the tree, and therefore as per usual the rest of the tree easily follows.
If b2 < t∞ < b1 then we have that Jβ,α already contains periodic points of period
k|uγ| as mentioned above. The proof is the same excepting these points.
We also use the tree formed from 0 and uγ.
We now need to show that at least some of these trees will have admissible pairs.
Ideally we would like that these trees are all that is needed, but unfortunately this is
not always the case.
Proposition 7.2.2. Suppose b1 = b2, with 1· ∈ (v∞γ , vγ2vγvγ1v∞γ ), and suppose 0· ≤
0b2. Then the pair (s, t) = (0ui0k−1, ui0k) is admissible for some k for every i ≤ 0.
For i > 0, all pairs from the tree with roots 0 and ui are inadmissible.
Proof. For i ≤ 0, the maximal cyclic shift of ui0k is v2γ20kv2γ1viγ. This must be admissible
for suitably large k (which may be necessary for small 1·). In the case where 0k would
be inadmissible for large k, we have that k = 1 must work by the assumption that
0· ≤ 0b2. Similarly for i > 0 we have the maximal cyclic shift of ui0k is vγ2viγ0kv2γ1 .
This is inadmissible for 1· as described.
Notice also that when we write (s, t) = (uv, vu), treeing with 0 gives |u| = |0| = 1
and so |u| and |s| are clearly coprime. Therefore, provided that st∞ and ts∞ are
admissible, these pairs do give D2.
Unfortunately this is not guaranteed, and indeed is not always the case (see the
examples in Chapter 9). Furthermore, there may still be gaps between these trees.
Additionally, the case where b1 = b2 is the easy case: if b1 > b2 then these trees are
definitely insufficient and leave gaps. From this point forth I have no concrete solution,
but rather can describe approximately what appears to happen in the examples I have
tried. The idea seems to be to create some form of secondary trees as for Rγ, but then these
once again do not give D2. This means D2 would need creating from more complicated
concatenations as described in Section 6.3.
There are also further questions about the interaction with b1. We have seen that
b1 is incredibly complicated: should we thus expect a similar level of complication in
the solution here? Is there also some change in behaviour at b = b1, or does this not
make a difference? Finally, what happens when 0b2 is inadmissible? In this case even
the trees with roots 0 and ui may be inadmissible and the solution is not at all clear.
There are some positives. Firstly, in suitably nice cases such as the first example
in Chapter 9, the above trees are sufficient. Secondly, the entire region is once again
tiny: the vast majority of D1 and D2 is already completed using balanced words and
primary trees. It is also wholly fascinating that such a simple problem — when does
the avoidance set of a hole have positive Hausdorff dimension — could give rise to
such richly intricate structures as b1. Finally, any holes with b < b0 have to be in D0,
and so the problem of this chapter does not affect D0 at all. Whilst we would like to
be able to fully solve this problem, we hope that the methods and conjectures shown
give a good idea of the types of solutions that are likely needed and the surprising
difficulty of the problem.
Chapter 8
Discussion of the non-transitive
case
It was shown in Lemma 4.2.1 that Tβ,α is non-transitive precisely when γ(β, α) = δ(β, α) =
r = p/q. The non-transitive cases are very close to being just a rational rotation and
when furnished with a hole exhibit some interesting behaviour.
Example 8.0.3 (Some examples of non-transitive Tβ,α). Consider:
a) 1· = (1100)∞ and 0· = (00101011)∞, γ(β, α) = δ(β, α) = 1/2.
b) 1· = (110110101101010)∞ and 0· = (0101011011)∞, γ(β, α) = δ(β, α) = 3/5.
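The balanced word of slope p/q underlying each of these examples can be generated directly. A minimal sketch using the standard lower mechanical word formula (this generator is my own illustration; the thesis itself builds balanced words via the Farey tree of Chapter 2):

```python
def balanced_word(p: int, q: int) -> str:
    """Lower mechanical word of slope p/q: letter k is floor((k+1)p/q) - floor(kp/q)."""
    return "".join(str((k + 1) * p // q - k * p // q) for k in range(q))

print(balanced_word(1, 2))  # 01     (slope 1/2, as in example a)
print(balanced_word(3, 5))  # 01011  (slope 3/5, as in example b)
```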
There are several things we must think about here.
Firstly, it is no longer possible to have periodic orbits of every large period. In fact,
we have only periodic points whose period is a multiple of |vr| = q. This means that
D2 is either empty or needs redefining, taking care to retain a meaningful distinction
between D1 and D2. We choose the latter option and say that (a, b) ∈ D2 if there
exists some N such that there is a periodic orbit of every possible length greater than N
avoiding (a, b). This can be used as a definition throughout; whenever Tβ,α is transitive
every suitably large period is possible and so this fits with the previous definition.
Next, it is important to think of the actual meaning of a lack of transitivity. Pe-
riodic points are no longer dense. This means there now exist non-recurrent regions
where points never return and where there are no periodic points. This has serious
consequences when adding a hole: if there are no periodic points that fall into some
region, then that region is instantly in D2 when taken as a hole.
Essentially our work in the previous sections begins to break down. We only have
one possible balanced word, but it certainly has both st∞ and ts∞ inadmissible. But,
with our redefinition of D2, the first level of admissible Farey descendants will now
give D2 not just D1. However when it comes to primary trees we may have absolutely
no pairs admissible. To add to this, b2 and a2 are definitely inadmissible, but in fact
b1 and a1 now give D2 with our redefinition, but then these may or may not also be
inadmissible. To summarise: we should expect trouble.
In these non-transitive cases, we are able to take advantage of kneading theory
and in particular renormalisation. Previously, this theory was neither necessary nor
helpful, and so was deliberately avoided as being overly complicated for too little gain.
Now, the benefits begin to outweigh the difficulties. This theory as it applies to
Lorenz maps was discussed variously by Glendinning [15], Glendinning and Sparrow
[18], and Glendinning and Hall [16]. We need only a little bit of this theory: namely,
an approximate idea of what renormalisation is. Unfortunately the precise results we
need, whilst definitely contained in some form within the literature, are not explicitly
stated: in fact the non-transitive case is dismissed as trivial!
Renormalisation, as it pertains to us, is the idea of performing a substitution that
somehow retains the essential elements of our system. For example, given two words
w and v of length n, the shift σn on {w, v}N is essentially the same as the full shift
on {0, 1}. This is the basic idea: to use some substitution to study the essence of a
system in a way that looks simpler. This is a decidedly approximate description, and
a far fuller discussion can be found in any of the above papers. We will give a more
precise version later once it becomes clear why we wish to use this.
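The substitution idea can be made concrete on finite prefixes: recoding 0 ↦ w and 1 ↦ v intertwines the shift σ on {0, 1}N with σn on the recoded sequences, where n = |w| = |v|. A minimal sketch (the words w = 01, v = 10 are illustrative choices of mine, not taken from the papers cited):

```python
def substitute(x: str, w: str, v: str) -> str:
    """Apply the substitution 0 -> w, 1 -> v letter by letter."""
    return "".join(w if c == "0" else v for c in x)

def shift(x: str, n: int = 1) -> str:
    """The shift map sigma^n on a finite prefix."""
    return x[n:]

w, v = "01", "10"    # two illustrative words of common length n = 2
x = "0110100110"     # a finite prefix of a 0-1 sequence
n = len(w)

# sigma^n after recoding agrees with recoding after sigma (on prefixes):
print(shift(substitute(x, w, v), n) == substitute(shift(x), w, v))  # True
```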
Consider the one balanced periodic orbit available.
Lemma 8.0.4. Every extremal pair formed from the balanced periodic orbit w∞γ is
maximal.
Proof. The only points that have two preimages are those in [Tβ,α(0), Tβ,α(1)]. Consider the extreme case where 1· = vr2v∞r and 0· = ur1u∞r for some r = γ(β, α) = δ(β, α). Then [Tβ,α(0), Tβ,α(1)] = [σ2(0 max(wr)), σ2(1 min(wr))]. This is in itself a
balanced extremal pair, hence none of the others are in this interval. This means
every other pair has only one preimage and is thus maximal. Given that we start with
Figure 8.1: Both figures show Tβ,α (dashed) and T qβ,α (solid), where γ(β, α) = δ(β, α) = p/q. Figure (a): 1· = (11010100)∞, 0· = (0011)∞, γ(β, α) = δ(β, α) = 1/2. Figure (b): 1· = (1101101010)∞, 0· = (010101101011011)∞, γ(β, α) = δ(β, α) = 3/5.
(0 max(wr), 1 min(wr)), the extremal pair (σ2(0 max(wr)), σ2(1 min(wr))) is actually
the last we arrive at by taking preimages.
If we increase 0· and decrease 1·, then the interval [Tβ,α(0), Tβ,α(1)] shrinks and so
even this last extremal pair is strictly outside of it and has only one preimage. Thus
for any strictly non-transitive map, this pair too must be maximal extremal.
Remark 8.0.5. As we have only one balanced admissible word, the pairs (s, t) arising
from this have both st∞ and ts∞ inadmissible. However, in contrast to the issue of
Section 6.3 with the secondary trees, provided Tβ,α is not a rotation, there must be
admissible Farey descendants.
From this point on the results become more conjectural, and we refer the reader
to the example of Section 9.4. The sequences 1· and 0· are so thoroughly restricted
that it becomes very difficult to think of examples at all, and anything with γ(β, α) 6=
1/n, (n − 1)/n starts to involve very long sequences very quickly. This makes these
maps somewhat challenging to work with. It is entirely possible that some subcase
has been missed and that some further strange behaviour can occur.
Conversely, working in such a restricted sequence space can be an advantage. It is
somewhat easier to establish which pairs are maximal extremal: there simply are not
many options. Merely writing down some admissible sequences in whatever interval is
under consideration can be a challenge in itself.
It turns out that the situation for a non-transitive map with γ(β, α) = δ(β, α) = p/q
Figure 8.2: 1· = (11010100)∞, 0· = (0011)∞. The dashed line is Tβ,α, the solid is T 2β,α.
Notice how T 2β,α maps R1 to itself and R0 = R00 ∪ R01 to itself. Notice also that the
period 2 (balanced) orbit has one point in G1 and one point in G2.
is as follows. There are q distinct ‘gaps’ Gi, each of which contains an iterate of v∞p/q
and no other periodic points. In between the gaps are q − 1 internal regions Ri such
that T qβ,α is an automorphism on each region. Furthermore we can consider there to
be a qth region R0 split into two halves R00 = [0, x1) and R01 = (x2, 1) for some x1, x2.
Once an orbit enters ∪Ri it cannot escape: Tβ,α maps ∪Ri to itself and not to ∪Gi.
We show a simple example for p/q = 1/2 in Figure 8.2.
This description can all be extracted from the papers on Lorenz maps cited above.
Unfortunately it is not, as far as I am aware, stated explicitly, and certainly the
methods of proof require a depth of theory that is effectively using a sledgehammer to
crack a nut. We therefore present a more explicit, less sophisticated version.
Lemma 8.0.6. A non-transitive intermediate β-expansion with γ(β, α) = δ(β, α) =
p/q has q distinct gaps G1, . . . , Gq such that every orbit other than the balanced orbit
s∞p/q avoids these gaps.
Proof. For ease of reading, we write Tβ,α = T .
We show this somewhat in reverse: we begin by defining the regions Ri and showing
that points cannot escape ∪Ri.
Recall the results of Palmer [26] that were discussed in Section 4.2. The map T is
non-transitive with γ(β, α) = δ(β, α) = p/q if and only if it has a primary q(p) cycle.
This is given by the sole admissible balanced word, which has slope p/q. Write the
orbit of this balanced word thus:
z1 < z2 < · · · < zq−p < c = (1− α)/β < zq−p+1 < · · · < zq.
Then given that this is a primary q(p) cycle, we have that zp ≤ T (0) < T (1) ≤ zp+1.
Furthermore, we have the ordering of the zi under T : recall T (zi) = zi+p for 1 ≤ i ≤ q − p and T (zq−p+i) = zi for 1 ≤ i ≤ p. This implies that zq−p < T q−1(0) < c <
T q−1(1) < zq−p+1.
This instantly defines the regions Ri = (T i(0), T i(1)) for i = 1, . . . , q − 1, and the
external split region as R0 = R00∪R01 = [0, T q(1))∪(T q(0), 1). We then take the gaps
as those between the regions: we know these must exist and be non-empty because as
a minimum they each contain a point zi. In fact, the limiting case where the gaps are
each a single point zi is precisely when we have 1· = vr2v∞r and 0· = ur1u∞r .
We now show that the gaps are non-recurrent and contain no other periodic points.
To do this, we consider the recurring regions Ri. Most of them by definition map neatly
from one to the other under T . The only ones we need be concerned about are the
central region (T q−1(0), T q−1(1)) containing c = (1−α)/β and the external split region
[0, T q(1)) ∪ (T q(0), 1). We need to show that these map inside of ∪Ri.
For the central region, this is easy. The first half (T q−1(0), c) takes the left branch
and maps up to (T q(0), 1) and the second half [c, T q−1(1)) takes the right branch and
maps down to [0, T q(1)). So the central region maps directly to the split external
region.
We now consider the external split region. Recall that one of the internal regions
is (T (0), T (1)). Clearly 0 maps to T (0), so we now need T q+1(1) ≤ T (1). To see this,
note that for any strictly non-transitive¹ map we have
1· = 1w1(0w1)n0w01w1 . . . ,
for some n, where vp/q = 1w0 and up/q = 0w1. Then we have
T (1) = (w10)n+1w01w1 . . . ,
T q+1(1) = (w10)nw01w1 . . . .
¹That is, strictly inside the bubbles of Figure 4.4.
Thus the point of difference is at the (n + 1)st w10 of T (1), where T q+1(1) has w01.
This shows that T q+1(1) < T (1) as required.
The same method holds for the orbit of 0: we have
T (0) = (w01)m+1w10w0 . . . ,
T q+1(0) = (w01)mw10w0 . . . .
Thus T q+1(0) > T (0). This shows that the two outer regions map into the (T (0), T (1))
region.
These results prove that orbits cannot escape ∪Ri. In order to show that the gaps
contain no periodic points other than the balanced one, we need to show that points
must escape the gaps. This is already shown by the above results as the endpoints
defining the gaps and the regions are the same, but it is still useful to think in terms
of just the gaps to fully appreciate the situation. In fact, most of the gaps also map
neatly from one to the next. It is only the first and last gaps that do not. The first
is given by (T q(1), T q−p(0)). The right endpoint maps neatly to T q−p(0) which is the
right endpoint of a gap. However, we have just shown that T q(1) maps to below T (1).
This gives an overlap with the first region (T (0), T (1)). Similarly, the last gap also
maps to overlap this region.
As every gap must eventually wind up at these two, this demonstrates that as we
iterate T , all non-balanced points must fall out of the gaps and into the regions, from
whence they cannot return.
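The lexicographic comparisons in the proof above are decided by a single letter, and can be sanity-checked on a concrete case. In the sketch below, w1 = 1, w0 = 0 and n = 2 are illustrative choices of mine (corresponding to p/q = 1/2); the prefixes of T(1) and T q+1(1) first differ where one continues with w1 and the other with w0, so comparing the finite prefixes already settles the comparison of the infinite sequences:

```python
w1, w0, n = "1", "0", 2   # illustrative choice corresponding to p/q = 1/2

# Prefixes of T(1) = (w1 0)^{n+1} w0 1 w1 ... and T^{q+1}(1) = (w1 0)^n w0 1 w1 ...
t1 = (w1 + "0") * (n + 1) + w0 + "1" + w1
tq1 = (w1 + "0") * n + w0 + "1" + w1

# They agree up to position 2n, where t1 continues with w1 = 1 and tq1 with w0 = 0.
print(tq1 < t1)  # True, i.e. T^{q+1}(1) < T(1) in this case
```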
This shows that the gaps are contained in D2. Further to this, the non-gap regions
are contained in D0 due to the sole balanced word. In fact, we can improve D0 from
this slightly: because u∞p/q and v∞p/q are in the gaps and the gaps have no periodic
points, we know that in fact b0 must be given by the right endpoint of the first gap,
and a0 must be given by the left endpoint of the last gap. Hence, we currently have the
situation shown in Figure 8.3. We need to sort out D1 and D2 for inside the regions
and D0 and D1 for outside the regions.
For the first, this is where we use renormalisation. Notice that each Ri maps to
itself under T q (with the split external regions now functioning as one region R0).
Furthermore, restricting to any particular Ri we see that T q looks suspiciously like an
intermediate β-transformation, as is visible in Figure 8.2. This is exactly the case. For
Figure 8.3: 1· = (11010100)∞, 0· = (0011)∞. D2 in dark grey, formed from the gaps; D0 in light grey, formed from the gaps and from the period-two balanced orbit.
each Ri, consider the balanced pair (s, t) with Ri ⊂ (s∞, t∞). Then consider T q as
an intermediate β-transformation on some subset of {s, t}N. From this, one can find
maximal extremal pairs (in terms of the alphabet {s, t}) and use these to find D1 and
D2 within the regions. In fact, the admissible balanced words on alphabet {s, t} are
precisely the admissible Farey descendants that we would expect to have used.
For the external split region, some caution is required as we have no immediate
(s, t) pair. To renormalise, here we need to take u and v, but take care that in
this alphabet we must then have u > v, because the lefthand region close to 0 is
actually playing the role of the second branch of an intermediate β-transformation.
The central discontinuity in this framework will actually be 0 and 1. The idea still
works, but requires a degree of care to not get mixed up.
Furthermore, it could be that we use this method, renormalise, and the resultant
intermediate transformation is again non-transitive. This is not a particular problem:
we just continue and do the same thing again.
The hazard with this method is that it has some issues when it comes to small b
and large a: a small b for T q in a particular Ri is not a small b for the entire map T
on the whole space, which means this still needs to be negotiated. This is particularly
necessary for the external split region. Essentially, in each renormalised Ri we have to
pretend that there are no bi and ai and carry on finding extremal pairs.
For D0, we need to establish what happens around the gaps. We conjecture that
the solution is to take the “best Farey” descendants from the surrounding regions and
form a tree as per usual. By “best”, we are meaning smallest from the region above
and largest from the region below, but also shortest: the highest levels of descendants.
These should give maximal extremal pairs that contain the gap. Experimentally, this
appears to work, and we direct the reader to the example of Section 8. As has been
discussed multiple times, the sequences become very long, making everything highly
fiddly and making it very difficult to elucidate more precise statements and proofs.
For this reason, some examples and sketch solutions are for the time being the best
we can offer.
In summary: for a non-transitive map that is close to a rotation of p/q, we have
q gaps that every orbit apart from the balanced orbit avoids, and then q regions (one
split at the edges) where everything interesting happens. T q when restricted to one of
these regions is itself an intermediate β-transformation, and by considering this map
we can complete the description of D1 and D2. For D0, we make trees using Farey
descendants across the gaps. For a full example, see Section 9.4. This completes the
non-transitive case.
Chapter 9
Summary and worked examples
For each example, we work through the following steps:
1. Begin with 1· and 0·.
2. Find γ(β, α) and δ(β, α).
3. Find b0, b2, a0, a2.
4. Work out where central balanced pairs cover.
5. Consider preimages of central balanced pairs.
6. Work out Rγ and Rδ.
7. Approximate γ(β, α) from above.
8. Create primary trees.
9. Create secondary trees.
10. Find b1.
11. Create trees for b2 < b < b0.
12. Approximate δ(β, α) from below.
13. Create primary trees.
14. Create secondary trees.
15. Find a1.
16. Create trees for a0 < a < a2.
In depictions of trees, dashed lines from a root indicate limits, dashed lines to a pair
give extra information, and crosses indicate where a pair is inadmissible.
We will not detail the process of finding Farey descendants. This should be done for
every tree where D1 is needed. Nor will we show the Farey tree with balanced pairs
as this is needed for every example and is shown in full in Chapter 2.
9.1 The easiest case that is not the doubling map
1· = (10)∞, 0· = 0∞, (β, α) = ((√5 + 1)/2, 0).
γ(β, α) = 1/2, δ(β, α) = 0.
This is a greedy transformation and is the most commonly seen example of a β-
transformation.
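As a small aside (our own sketch, not part of the thesis), the quasi-greedy expansion of 1 can be computed numerically, and for β = (√5 + 1)/2 it does return the periodic word (10)∞ used here. The function name is an assumption of ours.

```python
import math

def quasi_greedy_expansion_of_one(beta, n_digits):
    """Digits d_1 d_2 ... with 1 = sum d_i beta^{-i}, chosen quasi-greedily:
    at each step take the largest digit that leaves a strictly positive
    remainder.  (Helper name is ours, not the thesis's notation.)"""
    digits = []
    x = 1.0
    for _ in range(n_digits):
        y = beta * x
        d = math.ceil(y - 1e-12) - 1  # largest d with y - d in (0, 1]
        digits.append(d)
        x = y - d
    return digits

beta = (1 + math.sqrt(5)) / 2
print(quasi_greedy_expansion_of_one(beta, 10))  # [1, 0, 1, 0, 1, 0, 1, 0, 1, 0]
```

The digit pattern (10)∞ reflects the identity φ(φ − 1) = 1: after the digit 1 is emitted, the remainder returns to 1 every second step.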
We have b0 = (01)∞, b2 = 0(01)∞, a0 = a2 = 10·.
Central balanced pairs cover:
(010·, 10·) to ((01)∞, (10)∞),
=(001·, 01·) to (01·, 1·),
=(0(01)∞, (01)∞) to ((01)∞, (10)∞).
The 1 preimage is inadmissible, and the 0 preimage has b < b0. Therefore, no balanced
preimages are used.
Because balanced pairs cover down to u∞γ = b0, we have Rγ = ∅. Because the transfor-
mation is greedy, we have Rδ = ∅. Therefore, no primary or secondary trees needed.
Because 1· = v∞γ , we know instantly that b1 = 0(01)∞ = b2.
For D1, D2, we need to cover a ∈ (00(01)∞, 0(01)∞) = (0b2, 0b0), b ∈ (0(01)∞, (01)∞) =
(b2, b0).
Begin with Tree(0, uγ), with roots 0 and 01 and root pair (001, 010):
[Tree: (001, 010) branches to (0010, 0100) and (00101, 01001), with limits (0(01)∞, 010(01)∞) = (0b0, 01001·) and (0010·, 010·) = (0b2, b2).]
We have a0 = a1 = a2 so nothing needs doing on this side.
As can be seen, the sole tree covers the entirety of a ∈ (0b2, 0b0), so we are done.
Figure 9.1: D2(β, α) (dark grey) and D0(β, α) (light grey + dark grey) for 1· = (10)∞, β = ϕ.
9.2 A greedy case with primary trees
1· = (10010000)∞, 0· = 0∞, (β, α) ≈ (1.427, 0).
γ(β, α) = 1/4, δ(β, α) = 0.
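For reference, the quoted β ≈ 1.427 can be recovered from the expansion 1· = (10010000)∞, since β must satisfy 1 = (β^−1 + β^−4)/(1 − β^−8). A bisection sketch (helper names are ours):

```python
def value_of_periodic(word, beta):
    """Value of (word)^infinity read as a base-beta expansion."""
    head = sum(int(d) * beta ** -(i + 1) for i, d in enumerate(word))
    return head / (1 - beta ** -len(word))

def solve_beta(word, lo=1.01, hi=2.0, iters=60):
    """Bisect for the beta at which (word)^infinity has value exactly 1.
    The value is decreasing in beta, so value > 1 means beta is too small."""
    for _ in range(iters):
        mid = (lo + hi) / 2
        if value_of_periodic(word, mid) > 1:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

print(solve_beta("10010000"))  # close to the quoted beta of 1.427
```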
We have b0 = (0001)∞, b2 = 0(0001)∞, a0 = a2 = 10·.
Central balanced pairs cover:
(010·, 10·) to ((0100)∞, (1000)∞).
Note the largest pair here has s1/4t∞1/4 01· = 10·, so we cover a up to this value.
γ(β, α) ∈ [1/(n+ 1), 1/n) for n = 3, so we use two 0 preimages. These cover:
(0010·, 010·) to ((0010)∞, (0100)∞), st∞ 001· = 010·,
(00010·, 0010·) to ((0001)∞, (0010)∞), st∞ 0001· = 0010 · .
The effect of s1/4t∞1/4 and its preimages being truncated can be seen in the figure. Every
pair gives a light grey square of D0 (with D1 too small to see), except for these ones
which give a rectangle: the central pair (s∞1/4, t∞1/4) is at approximately (0.645, 0.92), and
s1/4t∞1/4 truncates to 1/β ≈ 0.7. This is also what causes the “jumps” at approximately
0.7, 0.48 and 0.345.
The remaining gap between b0 and balanced pairs gives us
Rγ = (0(0001)∞, 00010·)× ((0001)∞, 0010·).
Because the transformation is greedy, we have Rδ = ∅.
Approximate γ(β, α) = 1/4 from above: 1/3, 2/7, 3/11, . . . , k/(4k + 3), . . .
Make primary trees:
1/3: Just gives balanced pairs, discard.
2/7:0 0001001
(00001001, 00010010)
(000010010, 000100100)
×
s∞ = 00001·(000010010·,
00010010·)
3/11:0 00010001001
(000010001001, 000100010010)
(0000100010010, 0001000100100)
×
st∞ = 000010001·
(0000100010010·,000100010010·)
These primary trees continue, and in the limit cover down to (0(0001)∞, (0001)∞) =
(0b0, b0).
The map is greedy, so each primary tree is left infinite. The left limit of one tree is
the truncated st∞ from the next. Thus the whole of Rγ is covered and secondary trees
are not needed.
To find b1, note γ1 = 0, γ2 = 1/3. Then we have
vγ2vγvγ1v∞γ = 10010000(1000)∞ > 1·,
therefore b1 = b2 = 0(0001)∞.
For D1, D2, we need to cover
a ∈ (00(0001)∞, 0(0001)∞) = (0b2, 0b0),
b ∈ (0(0001)∞, (0001)∞) = (b2, b0).
Begin with Tree(0, uγ):
0 0001(00001, 00010)
(00010, 00100) (000010001, 000100001)
(0(0001)∞ = 0b0,00010(0001)∞)
(000010·,00010·)
Then try roots u−i, i ≥ 0. Recall u−i = uγ1(uγ)^{i+1}uγ2.
u0:0 00001001 = u0
(000001001, 000010010)
(0u00, u000) (0u20, u00u0)
(0000010010· = 0u−1u∞0 ,
000010010· = u00·)(0u∞0 = 000001·,u00u
∞0 )
u−1:
0 00001001 = u−1
(0u−1, u−10)
(0u−10, u−100) (0u2−1, u−10u−1)
(0u−10·,u−10·)
(0u∞−1,u−10u
∞−1)
This leaves gaps between the trees: compare the largest a from the u−1 tree and the
smallest a from the u0 tree.
Points in this gap are, for example, (0u−1u0)∞. The gap is thus probably filled using
combinations of u−1, u0, and 0s.
We have a0 = a1 = a2 so nothing needs doing on this side.
This is as complete as we can make this example. Notice that the sequences delineating
the remaining gaps differ at the 21st place, giving a gap size of around β^−21 ≈ 10^−4,
so very small.
Figure 9.2 shows the balanced pairs and their descendants giving Di(β), with
β ≈ 1.427. Note that D1(β, α) is shown by the dark grey and the white areas "between"
the light and dark grey. These white areas do exist but are so small as to be
barely visible, therefore the inset image shows a magnification as indicated. Notice
how the overall image has the same section repeated three times at different scales.
This corresponds to the shifting of the balanced words as in Lemma 5.4.3 above. Fur-
thermore there are vertical intervals that appear to be jumps, at a = 1/βk, such that
∂D2(β, α) = ∂D1(β, α) = ∂D0(β, α). This corresponds to where s∞ is admissible but
st∞ is inadmissible.
Figure 9.2: D2(β, α) (dark grey), D1(β, α) (white + dark grey) and D0(β, α) (light grey + white + dark grey), balanced pairs only, with 1· = (10010000)∞, β ≈ 1.427, γ(β, α) = (1/4, 1/2), with magnified area inset to clearly show D1(β, α).
9.3 Two intermediate cases
9.3.1 An easy case
1· = (11100)∞, 0· = (0010011)∞, (β, α) ≈ (1.64, 0.2044).
γ(β, α) = 2/3, δ(β, α) = 2/5.
We have b0 = (011)∞, b2 = 01(011)∞, a0 = (10100)∞, a2 = 10(10100)∞.
Central balanced pairs cover:
((01010)∞, (10010)∞) to ((011)∞, (101)∞).
Note that t2/5s∞2/5 10· and s2/3t∞2/3 01·.
For every admissible balanced word, both its preimages are admissible, so we do not
use any preimages.
The gaps between b0 and balanced pairs and balanced pairs and a0 give us
Rγ = (0(011)∞, (01010)∞)× ((011)∞, 10·),
Rδ = (01·, (10100)∞)× ((101)∞, 1(10100)∞).
Begin with Rγ. Approximate γ(β, α) = 2/3 from above: 3/4, 5/7, 7/10, . . . , (2k + 1)/(3k + 1), . . .
Make primary trees:
3/4:0 0111
(00111, 01110)
× ×s∞ = 001·t∞ = 01·, ts∞ 01110· = 01101·
5/7:0 0110111
(00110111, 01101110
× ×st∞ 001101·ts∞ 01101110·
7/10:0 0110110111
(00110110111, 01101101110)
× ×
st∞ 001101101·ts∞ 01101101110·
Each primary tree gives only one admissible pair (so no D1 or D2), and there are then
gaps between the trees.
Make secondary trees:
First, notice that we have
Jβ,α((01010)∞, (01110)∞ = 10·) = {(00111)∞, (01010)∞}.
This means we don't need a secondary tree in between the 3/4 primary tree and balanced
words. This is because the expansion of 1 happens to coincide with the pair
from the 3/4 primary tree, which is not a usual situation.
5/7 and 3/4:
(00110111, 01101110) (00111, 01110)
(0011100110111,0110111001110)
(001110011011100110111,011011100110111001110)
(001110011100110111,011011100111001110)
(0(01110011)∞,(01101110)∞)
((00111)∞,011(01110)∞)
This tree is infinite and it is easy to see that for every pair, st∞ and ts∞ are inadmissible. Notice also that the right limit gives a = (00111)∞ which is the smallest a from
the 3/4 primary tree, and the left limit gives b = (01101110)∞ which is the largest b
from the 5/7 primary tree.
As the words are now very long, we will not include further secondary trees.
To find b1, note γ1 = 1/2, γ2 = 1. Then we have
vγ2vγvγ1v∞γ = 111010(110)∞ > 1·,
therefore b1 = b2 = 01(011)∞.
For D1, D2, we need to cover
a ∈ (001(011)∞, 0(011)∞) = (0b2, 0b0),
b ∈ (01(011)∞, (011)∞) = (b2, b0).
Begin with Tree(0, uγ):
0 011(0011, 0110)
(00110, 01100) (0011011, 0110011)
(0(011)∞ = 0b0,0110(011)∞)
(00110·,0110·)
Then try roots u−i, i ≥ 0. Recall u−i = uγ1ui+1γ uγ2 .
u0:0 010111 = u0
(0010111, 0101110)
× ×st∞ 00101· = 00110·ts∞ 0101110·
u−1:
0 010110111 = u−1
(0010110111, 0101101110)
× ×st∞ 00101101·ts∞ 0101101110·
Neither of these trees gives D1 or D2, and there are still gaps between them. Creating
secondary trees here does seem to give admissible pairs, but none of them have ts∞ or
st∞ admissible either. This means none of these trees except the uγ tree has given
any part of D1 or D2, which was the point. These sets thus remain a mystery here.
Potentially this could be solved in the same way as D1 and D2 are found for secondary
trees, but this cannot be said with certainty.
We now need to repeat the process for Rδ.
Approximate δ(β, α) = 2/5 from below: 1/3, 3/8, 5/13, . . . , (2k + 1)/(5k + 3), . . .
Make primary trees:
1/3:100 1
(1001, 1100)
(10011, 11001) (1001100, 1100100)
× ×s∞ = 1001·t∞ = 11001·
s∞ = 100110·t∞ = 110·
3/8:10100100 1
(101001001, 110100100)
× ×st∞ 101001001·ts∞ 11010·
There is clearly a gap between balanced words and the 1/3 tree. However, making a
secondary tree using the (1001100, 1100100) pair does not work because these are shifts
of 0·, so any resultant pairs are inadmissible. Instead, we use the following:
(011, 101) (1001, 1100)
(1001011, 1011100)
(1001011011,1011011100)
(10011001011,×10111001100)
((1001)∞,101(1100)∞)
(1001(011)∞,(101)∞)
However, problematically, the right-hand part of this tree is actually already covered
by the 1/3 primary tree! Furthermore, these pairs all have st∞ and ts∞ inadmissible,
and so do not give D1 and D2. This is an example of the strange behaviour that can
occur — although, this is an exceptional example because the limits of the primary
tree are precisely equal to shifts of 0·. I conjecture that one simply takes the parts of
this tree which do not overlap with the 1/3 primary tree: one can test that these do
seem to be maximal extremal.
The next secondary tree, between the 1/3 and 3/8 primary trees, is also problematic. The pairs created are inadmissible because the largest pair from the 1/3 primary
tree is a shift of 1·. However, by testing one finds that
Jβ,α((101001001)∞, (11001)∞)
is countable, containing sequences of the form (110100100)n(11001)∞ and shifts thereof.
Therefore, there are no more maximal extremal pairs here so this is not a problem.
Fortunately the remaining secondary trees proceed precisely as expected, so we will
not show them here.
We now find a1. Recall that a1 = a2 = vδ2v∞δ whenever we have u∞δ > 0· >
uδ1uδuδ2u∞δ . In this example,
uδ1uδuδ2u∞δ = 0010010101(00101)∞ < 0·,
thus a1 = a2 = 10(10100)∞.
D0 for this example is shown in Figure 9.3.
9.3.2 A difficult case
1· = (110110011011000)∞, 0· = (00011001)∞, (β, α) ≈ (1.6359, 0.1586).
γ(β, α) = 3/5, δ(β, α) = 1/3.
We have b0 = (01011)∞, b2 = 01(01011)∞, a0 = (100)∞, a2 = 10(100)∞.
Central balanced pairs cover:
((010)∞, (100)∞) to ((01101)∞, (10101)∞).
Note that t1/3s∞1/3 10· and s3/5t∞3/5 01·.
Figure 9.3: An approximation of D0, 1· = (11100)∞, 0· = (0010011)∞. Balanced words are from a = (01010)∞ ≈ 0.238 to b = (101)∞ ≈ 0.762. The largest visible corner at (0.272, 0.651) is from the pair (01, 10).
For every admissible balanced word, both its preimages are admissible, so we do not
use any preimages.
The gaps between b0 and balanced pairs and balanced pairs and a0 give us
Rγ = (0(01011)∞, (010)∞)× ((01011)∞, 10·),
Rδ = (01·, (100)∞)× ((10101)∞, 1(100)∞).
Begin with Rγ. Approximate γ(β, α) = 3/5 from above: 2/3, 5/8, 8/13, . . . , (3k + 2)/(5k + 3), . . .
Make primary trees:
2/3 (note the pairs marked × are inadmissible):0 011
(0011, 0110)
(00110, 01100)× (0011011, 0110011)×
(001100110,011000110)
(00110110011,01100110011) st∞ 001·ts∞ 0110·
5/8:0 01011011
(001011011, 010110110)
× ×st∞ 00101·ts∞ 010110110·
Now we try to create secondary trees.
Firstly, we need a tree between the 2/3 primary tree and the balanced words.
2/3: (s1, t1) = (00110110011, 01100110011). Balanced: (s2, t2) = (010, 100).
This creates a problem: t1t2 = 01100110011100 is inadmissible. However, s2s1 =
01000110110011 is admissible.
Therefore use the extremal pair found from s2s1: take t to be the closest point on this
orbit above s2s1, giving (s, t) = (01000110110011, 01100110100011). Continue the rest
of the tree using extremal pairs formed from the s in this manner.
(001100110, 011000110) (010, 100)
(0100011011001,
01100110100011)
(0100011011001100110110011,
0110011010001101100110011)
(01001000110110011,
01100110100100011)
These pairs all have ts∞ admissible but st∞ inadmissible. Furthermore, as can be seen,
these pairs do not fill down to the 2/3 primary tree on the left. Therefore, there may
be more maximal extremal pairs still to find. Also, these pairs do not give D2 due to
the inadmissibility of st∞.
Next, a tree between the 5/8 and 2/3 primary trees.
5/8: (s1, t1) = (001011011, 010110110). 2/3: (s2, t2) = (001100110, 011000110). This
causes a problem: s2s1 = 001100110001011011 is inadmissible. However, t1t2 =
010110110011000110 is admissible.
Therefore use the extremal pair found from t1t2: take s to be the closest point on this
orbit below t1t2, giving (s, t) = (001100101101100110, 010110110011000110). Continue
the rest of tree using extremal pairs formed from the t in this manner.
(001011011, 01101110) (001100110, 011000110)
(001100101101100110,
010110110011000110)
(001100101101100101101100110,
010110110010110110011000110)
(001100110001100101101100110,
010110110011000110011000110)
These pairs have st∞ admissible but ts∞ inadmissible, and conjecturally appear to
work. However, because ts∞ is inadmissible for every pair, this causes problems for
D2, and the limits on the right do not reach the 2/3 primary tree, so there may still
be maximal extremal pairs to find.
The remaining secondary trees should work with no difficulty: the problems here are
caused by the fact that the maximal and minimal admissible pairs from the 2/3 tree
correspond to the 3/5 and 2/5 pairs respectively, as opposed to 1/n or (n− 1)/n.
To find b1, note γ1 = 1/2, γ2 = 2/3. Then we have
vγ2vγvγ1v∞γ = 1101101010(11010)∞ > 1·,
therefore b1 = b2 = 01(01011)∞.
For D1, D2, we need to cover
a ∈ (001(01011)∞, 0(01011)∞) = (0b2, 0b0),
b ∈ (01(01011)∞, (01011)∞) = (b2, b0).
Begin with Tree(0, uγ):
0 01011(001011, 010110)
× (00101101011, 01011001011)
(0(01011)∞ = 0b0,010110(01011)∞)
ts∞ 010110·
Then try roots u−i, i ≥ 0. Recall u−i = uγ1(uγ)^{i+1}uγ2.
u0:0 0101011011 = u0
(00101011011, 01010110110)
× ×
st∞ 0010101·ts∞ 010110110·
This continues with further u−i trees. As they do not give us any D2, there may be
further secondary tree type constructions needed here.
Now return to Rδ = (01·, (100)∞)× ((10101)∞, 1(100)∞).
Approximate δ(β, α) = 1/3 from below: 1/4, 2/7, 3/10, . . . , (k + 1)/(3k + 4), . . .
Make primary trees:
1/4: nothing admissible.
2/7:1001000 1
(10010001, 11001000)
× ×st∞ 10010001·t∞ = 110010·
3/10:1001001000 1
(10010010001, 11001001000)
× ×st∞ 10010010001·ts∞ 110010010·
Create secondary trees. There is a gap to fill between balanced words and the 2/7 tree.
However, creating a secondary tree from roots (01101, 10101) and (10010001, 11001000)
gives only inadmissible words. This is further complicated by the inadmissible 1/4 tree.
What appears to work is the following tree, with roots 10001 and 101:
(10001101, 10110001)
× ×st∞ 10001·ts∞ 10110·
This covers up to the 2/7 tree but does not cover down to balanced words.
To cover this gap, it appears we use pairs (10001(10101)n101, (10101)n10110001).
There are various tree-related ways in which we could get these pairs, but it is not
clear which is the most logical. There may be more pairs in between these ones.
The remaining secondary trees between later primary trees fortunately proceed as
expected and (s2s1, t1t2) pairs cause no problems.
We now find a1. Recall that a1 = a2 = vδ2v∞δ whenever we have u∞δ > 0· > uδ1uδuδ2u∞δ.
In this example,
uδ1uδuδ2u∞δ = 000101(001)∞ < 0·,
thus a1 = a2 = 10(100)∞.
We need to cover a ∈ (a0, a2), b ∈ (1a0, 1a2). Begin by using vδ, with roots 100 and 1:
(1001, 1100)
(1001100,1100100)
×st∞ 1001·
(1001(100)∞,1(100)∞ = 1a0)
Then try v−i = vδ2(vδ)^{i+1}vδ1, i ≥ 0:
v0:101000 1
(1010001, 1101000)
× ×st∞ 1010001·ts∞ 11010·
This then continues with further v−i. As these pairs give no D2 and the point
is that we are currently definitely in D0 and need to find D1 and D2, there may be
further secondary tree type constructions needed. This is as far as we can take this
example.
We hope this example gives some idea of quite how unpleasant intermediate β-
transformations can on occasion be.
9.4 A non-transitive case
1· = (11010100)∞, 0· = (0011)∞, (β, α) ≈ (1.2106, 0.4055).
γ(β, α) = δ(β, α) = 1/2.
We begin with D0. We have the sole balanced pair (01, 10).
Then b0 = (01)∞, a0 = (10)∞. As this is non-transitive, we know b1, b2, a1 and a2 will
be problematic, so we leave them for now.
We would have
Rγ(β,α) = (0(01)∞, 010·)× ((01)∞, 10·), and
Rδ(β,α) = (01·, (10)∞)× (101·, 1(10)∞),
but this has admissibility problems. Thus instead we have
Rγ(β,α) = (0·, 010·)× ((01)∞, 10·), and
Rδ(β,α) = (01·, (10)∞)× (101·, 1·).
Attempt to fill Rγ with primary trees. Approximate γ(β, α) = 1/2 from above: 2/3,
3/5, 4/7, . . . .
2/3:0 011
(0011, 0110)
× ×st∞ 001·ts∞ 0110·
All other primary trees do not give anything admissible. (For example, 3/5 gives
(001011, 010110) which is too small.)
Notice also that (0011)∞ = 0·, so in fact we have b0 = (0110)∞.
Then, make a secondary tree:
CHAPTER 9. SUMMARY AND WORKED EXAMPLES 109
(0011, 0110) (01, 10)
(010011, 011010)
(0100110011,0110011010)
(01010011,01101010)
×
s∞ = 0101001·t∞ = 01·
(01(0110)∞,(0110)∞)
Notice that Jβ,α((01)∞, 01·) is empty except for (01)∞, 0· and 1·. This is the smallest
a from balanced words and the largest b from secondary trees, and so as it is on the
boundary of D0 this means we have filled all of Rγ and have completed the boundary
here.
For Rδ, approximate δ(β, α) = 1/2 from below: 1/3, 2/5, 3/7, . . . .
Create primary trees.
1/3:100 1
(1001, 1100)
× ×st∞ 1001·ts∞ 110·
2/5:10100 1
(101001, 110100)
× ×st∞ 101001·ts∞ 11010·
3/7:1010100 1
(10101001, 11010100)
× ×st∞ 10101001·ts∞ 1101010·
However, all further primary trees give no admissible words. Note that (11010100)∞ =
1·, meaning we have a0 = (10101001)∞ from the last primary tree.
Make secondary trees between the primary trees.
1/3 and 2/5:
(1001, 1100) (101001, 110100)
(1010011001, 1100110100)
(10100110011001,11001100110100)
(1010011010011001,1100110100110100)
(1010·,110·)
((101001)∞,1100(110100)∞)
We make a similar tree from roots (101001, 110100) and (10101001, 11010100).
This completes the description of D0. For D1 and D2, we begin with the two gaps.
These are given by
(T 2(1), T (0)) = ((01010011)∞, (0110)∞), and
(T (1), T 2(0)) = ((10101001)∞, (1100)∞).
Then, we have Farey descendants of (01, 10). The admissible descendants are from
r = (1/2, 1/2) to r = (1/2, 3/4), that is from (0110, 1001) to (01101010, 10011010).
For the remainder, we essentially take shifts of these Farey descendants of (01, 10),
as these are the only possible admissible words.
For example, for large (a, b) we can make trees with roots 1100 and 110100, and
then one with roots 110100 and 11010100. However, we can also take the preimages
of these and use trees with roots 1001 and 101001 or 101001 and 10101001.
For smaller (a, b) we can use minimal shifts of Farey descendants, creating trees
from roots 0011 and 001101, or from 001101 and 00110101.
We conjecture that one should take every extremal pair from every admissible Farey
descendant, provided the pair does give D2. Indeed, this is more the challenge: which
pairs have st∞ and ts∞ admissible. This does immediately imply some strange be-
haviour: the boundary of D2 should contain arbitrarily small holes. This does not
happen in the transitive case.
This example is shown in Figure 9.4.
Figure 9.4: 1· = (11010100)∞, 0· = (0011)∞. This figure shows D0(β, α) (light grey) and an approximation to D2(β, α) (dark grey). Notice how the boundaries of the two do not intersect, which also does not happen in the transitive case. Parts of D2 are missing: each staircase section arises from a different tree, and we have not found every tree.
Chapter 10
Large holes for Di
An interesting question to consider about Di is what are the largest holes on the
boundary of Di. To this end, we denote by Ci(β, α) the value such that if (a, b) ∈
Di(β, α) then b − a < Ci(β, α). These values were studied for the doubling map in
[17] and [20]. We extend these methods to as many (β, α) as possible. Note that we
only want to consider the “interesting” portion of each Di, therefore for each Ci we
consider only (a, b) with b > bi and a < ai.
Lemma 10.0.1. For every (β, α) such that Tβ,α is transitive, we have
C1(β, α) = C2(β, α) = (β − 1)/β².
Proof. Each Ci(β, α) is given by
sup{b− a : (a, b) ∈ Di} = max{b− a : (a, b) is a corner of Di}.
For D1 and D2 this is equal to
max{ts∞ − s∞ or t∞ − st∞ : (s, t) is maximal extremal for Tβ,α
and st∞ or ts∞ is admissible}.
We have that, given admissibility, ts∞ − s∞ = t∞ − st∞ = t − s. Recall that t = u1-min(w) and s = u0-max(w) for some word w with u1 and u0 factors of w. Therefore
t − s will be maximised when u is the empty word. The only maximal extremal
pairs with s beginning 0 and t beginning 1 are central balanced pairs, and provided
Tβ,α is transitive there must exist such balanced pairs with ts∞ and st∞ admissible.
For any balanced pair, we have t = 10w and s = 01w for some word w. Thus
t − s = 10 − 01 = 1/β − 1/β², giving the required result for C1 and C2.
Lemma 10.0.2. C0(β, α) is given by t∞r − s∞r where (sr, tr) is the balanced pair corresponding to r = p/q ∈ [δ(β, α), γ(β, α)] where q is minimised. This gives
C0(β, α) = β^{q−2}(β − 1)/(β^q − 1),
for the smallest possible such q.
Proof. As for C1 and C2, the maximal value of b − a must be found at a corner of
D0. The corners are given by (s∞, t∞) where (s, t) is a maximal extremal pair. As
before, we have s = u0-max(w) and t = u1-min(w) for some words u and w. Therefore
the difference will be maximised with u empty, hence (s, t) balanced. Then t∞ − s∞
is maximised when w is as short as possible. This minimises the length of the word,
which is given by q where r = p/q, therefore we wish to minimise q. Then we have
t∞ − s∞ = (10w)∞ − (01w)∞
= (10^{q−1})∞ − (010^{q−2})∞
= (1/β − 1/β²) ∑_{i=0}^{∞} β^{−iq}
= β^{q−2}(β − 1)/(β^q − 1).
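The final identity can be checked numerically by evaluating t∞ and s∞ directly as periodic base-β expansions (a sketch of ours; the test value of β is arbitrary, since the identity is purely algebraic):

```python
def periodic_value(word, beta):
    """Value of (word)^infinity as a base-beta expansion."""
    head = sum(int(d) * beta ** -(i + 1) for i, d in enumerate(word))
    return head / (1 - beta ** -len(word))

beta = 1.8  # arbitrary test value
for q in range(2, 7):
    t = "1" + "0" * (q - 1)   # t = 1 0^{q-1}
    s = "01" + "0" * (q - 2)  # s = 0 1 0^{q-2}
    lhs = periodic_value(t, beta) - periodic_value(s, beta)
    rhs = beta ** (q - 2) * (beta - 1) / (beta ** q - 1)
    assert abs(lhs - rhs) < 1e-12
print("closed form verified for q = 2, ..., 6")
```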
Notice that for C0 we do not need to worry about the admissibility of st∞ and ts∞,
therefore this works for non-transitive maps as well, but as ever D1 and D2 are more
difficult and so we leave the question of C1 and C2 open for non-transitive intermediate
transformations.
An extension of this would be to let ci(β, α) be the value such that if b − a < ci(β, α)
then (a, b) ∈ Di(β, α); that is, the smallest holes on the boundary of each Di(β, α). In
this case we need the inverted corners. For c2, this involves minimising ts∞ − st∞. For
c0, we need to minimise ts∞ − s∞ or t∞ − st∞. For c1, we need to minimise ts∞ − sts∞
(over Farey descendants of maximal pairs). Unfortunately this is before considering
the pairs used for small b and also before considering the interesting situation for D1
and D2 when secondary trees are involved. We therefore leave this as an open question
to be studied in the future.
Part II
Holes for the baker’s map
Chapter 11
Introduction
In this part of the thesis, we will present some preliminary results in the setting of the
baker’s map on [0, 1)2. In this chapter we demonstrate that this higher dimensional
setting exhibits some interesting behaviour which is not seen in the one-dimensional
case. We will then present some results for corner holes, that is holes [0, a)2 ∪ (b, 1)2,
and finally we will consider some examples of traps: that is, holes with empty avoidance
set. Further work is being done on this topic by Nikita Sidorov and Kevin Hare.
11.1 The baker’s map
The baker’s map is the natural extension of the doubling map, conjugate to the shift
map on the set of bi-infinite sequences {0, 1}Z.
Definition 11.1.1 (Baker’s map). Given (x, y) ∈ [0, 1)2, we define the baker’s map
as follows:
B(x, y) =
(2x, y/2)            if 0 ≤ x < 1/2,
(2x − 1, (y + 1)/2)  if 1/2 ≤ x < 1.
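Definition 11.1.1 translates directly into a couple of lines of code (a sketch; the function name is ours):

```python
def baker(x, y):
    """One step of the baker's map on [0, 1)^2 (Definition 11.1.1)."""
    if x < 0.5:
        return 2 * x, y / 2
    return 2 * x - 1, (y + 1) / 2

# The period-2 orbit {(2/3, 1/3), (1/3, 2/3)}, which reappears in Chapter 12:
p = (2 / 3, 1 / 3)
p = baker(*p)  # approximately (1/3, 2/3)
p = baker(*p)  # back to approximately (2/3, 1/3)
print(p)
```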
We repeat here some of the basic combinatorics on words notation from Chapter 2,
this time in the bi-infinite setting. This is included for the reader who is beginning with
Part 2 and may be skipped otherwise as it is substantially the same as the beginning
of Chapter 2.
Let A be a set, called the alphabet, with its elements being called letters. In this
115
part we will always have A = {0, 1}. The set of finite words over A is defined to be
A+ = {a1 . . . an | n ≥ 1, ai ∈ A}.
We denote the empty word by ε and define A∗ to be A+ ∪ {ε}.
Given two finite words u = u1 . . . un and v = v1 . . . vm, we denote by uv their
concatenation u1 . . . unv1 . . . vm. In particular uk = u . . . u (k times). Concatenation
with the empty word is defined by wε = εw = w for all w ∈ A+.
Concatenation is clearly associative. Recall that a semigroup is a set closed under
an associative binary operation, and a monoid is a semigroup with an identity element.
This means that A+ is a semigroup and A∗ is a monoid, commonly called the free
monoid over A.
The length of a word w is denoted |w|. The number of occurrences of the letter a
in w is denoted |w|a.
A word u is said to be a factor of w if there exist words x, y such that w = xuy.
If x (respectively y) is the empty word then u is called a prefix (respectively suffix ) of
w.
To compare words we use the lexicographic order : given an order on the alphabet,
a word u is lexicographically smaller than a word v (that is, u ≺ v) if either u1 < v1
or there exists k > 1 with ui = vi for 1 ≤ i < k and uk < vk. This is the standard
order in a dictionary.
Two words x and y are said to be conjugate if there exist u, v with x = uv and
y = vu. Equivalently, x is a cyclic permutation of y.
We also introduce the following notation. Given a word w and a factor u of w,
we denote by u-max(w) the lexicographically maximal conjugate of w that begins
with the word u. Similarly we denote by u-min(w) the lexicographically minimal
conjugate of w that begins with the word u. For example, given w = 10100, we have
0-max(w) = 01010 and 1-min(w) = 10010.
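These operations are easy to compute by brute force over all conjugates; the following sketch (helper names ours) reproduces the example w = 10100:

```python
def conjugates(w):
    """All cyclic permutations of the finite word w."""
    return [w[i:] + w[:i] for i in range(len(w))]

def u_max(u, w):
    """Lexicographically maximal conjugate of w beginning with u."""
    return max(c for c in conjugates(w) if c.startswith(u))

def u_min(u, w):
    """Lexicographically minimal conjugate of w beginning with u."""
    return min(c for c in conjugates(w) if c.startswith(u))

print(u_max("0", "10100"))  # 01010
print(u_min("1", "10100"))  # 10010
```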
A bi-infinite word over A is defined as being an element of AZ; that is a sequence
of letters indexed by the integers. Prefixes and the lexicographic order can be defined
on one-sided infinite words in the same way as for finite words, and factors can be
defined for bi-infinite words in the same way as for finite words.
The shift map σ : AZ → AZ is defined on bi-infinite words by
σ(. . . w−2w−1 · w0w1w2w3 . . . ) = . . . w−2w−1w0 · w1w2w3w4 . . . .
This then has a well defined inverse, given by shifting in the other direction:
σ−1(. . . w−2w−1 · w0w1w2w3 . . . ) = . . . w−2 · w−1w0w1w2w3w4 . . . .
A bi-infinite word w is called periodic if σn(w) = w for some n ≥ 1. The smallest
such n is called the period of w. We write a bi-infinite periodic word as w = ∞(u)∞
for some finite word u.
To obtain the correspondence between the baker’s map and bi-infinite sequences,
set
R0 = {(x, y) : 0 ≤ x < 1/2},
R1 = {(x, y) : 1/2 ≤ x < 1}.
We associate each point (x, y) ∈ [0, 1) × [0, 1) with a bi-infinite sequence w ∈ {0, 1}Z
as follows:
wi = k ⇐⇒ Bi(x, y) ∈ Rk.
This gives us the projection as follows:
(x, y) = ( ∑_{i=0}^{∞} wi/2^{i+1}, ∑_{i=−1}^{−∞} wi/2^{−i} ).
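In code, the projection can be approximated by truncating the two one-sided binary expansions (a sketch with our own names; exact only up to the truncation error):

```python
def project(left, right):
    """Approximate (x, y) for a bi-infinite 0-1 word: right = w_0 w_1 ...
    and left = w_{-1} w_{-2} ..., both read outward from the central dot."""
    x = sum(int(d) / 2 ** (i + 1) for i, d in enumerate(right))
    y = sum(int(d) / 2 ** (i + 1) for i, d in enumerate(left))
    return x, y

# The period-2 word ...0101.0101... projects to approximately (1/3, 2/3):
print(project("10" * 20, "01" * 20))
```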
At times we will need to be able to distinguish where the middle of the word is; that
is which digit is w0. To facilitate this, we place a point, akin to a decimal point, in the
middle of the word: . . . w−2w−1 · w0w1w2 . . . . The x coordinate is to the right of the
dot in binary and the y coordinate is to the left of the dot, also in binary but reading
in reverse. Given a word w, we denote by w the word obtained from w by swapping
1s and 0s.
The final piece of notation is that we will use for example (0a, b) to denote a word
b · 0a, that is any word with w0 = 0. Similarly (1b, a) would denote a word b1 · a, with
w−1 = 1.
Given a hole H ⊂ [0, 1)2, we define the avoidance set of H to be
J (H) = {(x, y) ∈ [0, 1)2 \ {(0, 0)} : Bn(x, y) /∈ H for all n ∈ Z}.
We have excluded the fixed point (0, 0) because in the context of avoidance sets, fixed
points are not interesting and not useful to study.
11.2 Interior holes are avoidable
Lemma 11.2.1. Suppose H is an interior hole; that is to say H ⊂ (0, 1) × (0, 1).
Then dimHJ (H) > 0.
Proof. Define
An = {σkw|w consists of combinations of 0n1 and 0n+11}.
We claim that for every interior hole H, there exists an n such that An avoids H. To
see this, consider
limn→∞ ∞(0n1)∞.
This consists of countably many values depending on the location of the 0th position.
For example, put a 1 in the 0th position. Then the x coordinate begins with this 1
and the y coordinate begins with n 0s. This gives us:
limn→∞ ∞(0n1)∞ = limn→∞ ((10n)∞, (0n1)∞) → (1/2, 0).
Extending this, we have
limn→∞ An = {w = limn→∞ ∞(0n1)∞} = {(0, 2−k), (2−k, 0) : k ≥ 0}.
None of these points are in the interior of the square. Thus we can choose n such that
An avoids H. This gives positive Hausdorff dimension as required.
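The mechanism of the proof can also be seen numerically (our own sketch, not from the thesis): for the periodic word with period 0^n 1, every point of the corresponding orbit lies very close to the boundary of the square once n is large, so the orbit misses any fixed interior hole.

```python
def project(left, right):
    """Approximate projection of a bi-infinite 0-1 word into the square."""
    x = sum(int(d) / 2 ** (i + 1) for i, d in enumerate(right))
    y = sum(int(d) / 2 ** (i + 1) for i, d in enumerate(left))
    return x, y

def orbit_of_period(u, digits=60):
    """All shifts of the bi-infinite periodic word ...uuu.uuu...,
    projected (approximately) into the square."""
    n = len(u)
    pts = []
    for k in range(n):
        right = "".join(u[(k + i) % n] for i in range(digits))
        left = "".join(u[(k - 1 - i) % n] for i in range(digits))
        pts.append(project(left, right))
    return pts

for pt in orbit_of_period("0" * 20 + "1"):  # period 0^20 1
    assert min(pt) < 1e-3  # each point is within 1e-3 of an edge
print("all 21 orbit points lie near the boundary")
```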
Remark 11.2.2. The subshifts An lie towards the bottom left of the square [0, 1]2.
There is also another collection of subshifts given by
Bn = {σkw|w consists of combinations of 1n0 and 1n+10}.
These subshifts lie in the bottom right of the square and by switching 0s and 1s in the
above proof we can see that for suitably large n these subshifts will also avoid interior
holes.
This means that, in startling contrast to the one-dimensional case, there are holes
of size arbitrarily close to full measure which are avoided by a set of positive Hausdorff
dimension.
Chapter 12
Corner holes for the baker’s map
As we have found subshifts in the bottom left and top right corners, we want to consider
some form of hole which immediately contains these subshifts. The simplest way of
doing this is to take a square from the bottom left and the top right and consider this
a hole. We begin with the symmetric version.
12.1 The symmetric case
Define the hole Hs to be
Hs = [0, s)2 ∪ (1− s, 1]2.
Notice that if s > 1/2 then this hole has an overlap.
Lemma 12.1.1. We have that J (Hs) is empty whenever s > 2/3 and non-empty for
s ≤ 2/3.
Figure 12.1: The hole Hs
Proof. There is a period 2 orbit given by ∞(10)∞ which is {(2/3, 1/3), (1/3, 2/3)}. If
s ≤ 2/3 then this orbit avoids Hs.
The easiest way to see that Hs is a trap for s > 2/3 is actually to reduce to the
one dimensional case. Notice that H2/3 contains [1/3, 2/3] × [0, 1]. Considering just
the value of x, recall that [1/3, 2/3] = [(01)∞, (10)∞] is a trap for the doubling map.
Therefore, this strip is a trap for the baker’s map, and thus Hs is a trap for s > 2/3.
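Both halves of the lemma are easy to confirm with exact arithmetic. The following sketch (added here for illustration) encodes the period 2 orbit and the half-open hole Hs = [0, s)^2 ∪ (1 − s, 1]^2 as in the definition above.

```python
from fractions import Fraction

def val(s):
    """Value of the purely periodic binary expansion .(s)(s)..."""
    return Fraction(int(s, 2), 2 ** len(s) - 1)

def orbit(w):
    """Orbit of the periodic point ...(w).(w)... under the baker's map."""
    rots = [w[j:] + w[:j] for j in range(len(w))]
    return [(val(r), val(r[::-1])) for r in rots]

def in_hole(p, s):
    """Membership of Hs = [0, s)^2 ∪ (1 - s, 1]^2 (half-open, as defined)."""
    x, y = p
    return (x < s and y < s) or (x > 1 - s and y > 1 - s)

two_cycle = orbit("10")                      # {(2/3, 1/3), (1/3, 2/3)}
assert not any(in_hole(p, Fraction(2, 3)) for p in two_cycle)   # survives s = 2/3
assert all(in_hole(p, Fraction(7, 10)) for p in two_cycle)      # trapped for s > 2/3
```

The 2-cycle just touches the corners of the two squares at s = 2/3 and falls inside them for any larger s.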
The Hausdorff dimension of the avoidance set of Hs has an interesting tipping
point. We define a substitution on four letters as follows:
θ : {a, b, c, d} → {a, b, c, d}*
a ↦ b
b ↦ bda
c ↦ cad
d ↦ c
Set a = 000, b = 00100, c = 11011 and d = 111. Then define s∗ to be a fixed point
for θ:
s* = lim_{n→∞} θ^n(da)
= cadbcbdacadbdacbcadbcbdacbcadbdacadbcbdacb . . .
= 11011000111001001101100100111000110110001110010011100011011 . . . .
This is equal to lim_{n→∞} θ^n(c), but the sequences given by θ^n(da) are actually more
useful, so we will think of it in this way.
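The substitution and its fixed point are straightforward to iterate by machine. The short sketch below (an added illustration) checks the prefix displayed above and evaluates the number 0s*, i.e. the binary expansion 0.0s_1s_2 . . . .

```python
from fractions import Fraction

theta = {"a": "b", "b": "bda", "c": "cad", "d": "c"}
binary = {"a": "000", "b": "00100", "c": "11011", "d": "111"}

def sub(word):
    # apply theta letter by letter
    return "".join(theta[ch] for ch in word)

w = "da"
for _ in range(8):
    w = sub(w)                        # theta^8(da): a long prefix of s*

assert w.startswith("cadbcbdacadbdacb")
bits = "".join(binary[ch] for ch in w)
assert bits.startswith("11011000111001001101100100111")

# the number 0s*, read as the binary expansion 0.0 s_1 s_2 ...
zero_s_star = sum(Fraction(int(b), 2 ** (i + 2)) for i, b in enumerate(bits[:60]))
assert abs(float(zero_s_star) - 0.4236) < 5e-4
```

Sixty bits already pin the value down to roughly 0.42362, matching the constant quoted in Lemma 12.1.2.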
Lemma 12.1.2. The Hausdorff dimension of J (Hs) is positive for s < 0s∗ and zero
for s > 0s∗ ≈ 0.4236.
Proof. Notice that as the baker’s map is invertible and the hole is symmetrical, in
order to avoid the hole a word must in fact avoid the hole when read both forwards
and backwards. In the word s∗, a and b are never adjacent and c and d are never
adjacent. For this reason questions involving the lexicographic order on 0s and 1s are
equivalent to questions involving the lexicographic order on a, b, c, d. Swapping 0s and
1s corresponds to swapping a with d and b with c.
To prove that dimH J(Hs) > 0 for s < 0s*, define s_n for n ≥ 2 to be the word
θ^n(da) with the last two letters removed. Then 0(s_n)^∞ → 0s* from below,
and H_{0(s_n)^∞} is avoided by the following subshift:
A_n = {bi-infinite sequences composed of the words s_n and θ^n(da)}.
On the other hand, notice that 0θ^n(da) ↘ 0s* from above. As s ↘ 0s*, we
gradually get the periodic points given by the sequences ∞(θ^n(da))^∞ appearing in J(Hs).
These cycles together with ∞(01)^∞ are the only points avoiding Hs as s → 0s* from above. To
see this, consider that the following words fall into the hole:
00 · 00, 00 · 010, 010 · 010, 00 · 011010, 001 · 010.
The same words with 0s and 1s switched are also in the hole. This immediately restricts
the possible words avoiding the hole to being contained in {a, b, c, d}^Z. Furthermore,
a and c must be followed by b or d: we can never have ac, ca, bd or db. The forthcoming
induction now becomes rather fiddly and is best followed with pen and paper close at
hand.
Consider the first step, s = 0(da)^∞ = 0(111000)^∞. To avoid Hs, a point must
either be (01)^∞ or it must contain 01111 or 10000, but both of these shift to fall in
the hole. Thus for n = 1, J is just the 2-cycle as required.
To continue by induction, notice that θ^n(da) ends in either cb or da, alternating
between the two. Take this ending and change it to cad or c respectively. This word
is actually θ^n(c). Then
θ^{n+1}(da) = θ^n(c)θ^n(b).
Suppose we are in the case where θ^n(da) ends in cb. We try to construct a periodic
point in between ∞(θ^n(da))^∞ and ∞(θ^{n+1}(da))^∞. To decrease θ^n(da), we have to change the
final cb to cad and take θ^n(c), as this is the only option. Then to remain above θ^{n+1}(da)
we have to continue with the flip of θ^n(c), namely θ^n(b), because this is the maximal
sequence beginning with b by design, just as θ^n(c) is minimal by design. This means we
are then already at θ^{n+1}(da) with no options to increase the sequence. The other case is similar.
This means there are no possible periodic points between θ^n(da) and θ^{n+1}(da).
Thus, at each stage J must consist of only the sequences described.
The value of s* is approximately 0.4236. The lengths of the words θ^n(da), and thus
the periods of the cycles as they appear, are given by the following recurrence, subject
to ℓ_0 = 6:
ℓ_n = 2(ℓ_{n−1} + (−1)^n),
where ℓ_n gives the length of θ^n(da) in binary or the length of θ^{n+2}(da) in the alphabet
{a, b, c, d}. As mentioned, the 2-cycle {(1/3, 2/3), (2/3, 1/3)} given by the sequence
∞(01)∞ also avoids Hs for any s < 2/3.
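The recurrence for ℓ_n can be confirmed directly against the substitution. The following added sketch recomputes the binary lengths of θ^n(da) and checks them against the formula.

```python
theta = {"a": "b", "b": "bda", "c": "cad", "d": "c"}
bits_per_letter = {"a": 3, "b": 5, "c": 5, "d": 3}   # |000|, |00100|, |11011|, |111|

def sub(word):
    return "".join(theta[ch] for ch in word)

lengths = []
w = "da"
for _ in range(9):
    # binary length of theta^n(da)
    lengths.append(sum(bits_per_letter[ch] for ch in w))
    w = sub(w)

assert lengths[0] == 6                       # da = 111000
assert all(lengths[n] == 2 * (lengths[n - 1] + (-1) ** n) for n in range(1, 9))
```

The first few lengths are 6, 10, 22, 42, 86, . . . , which are exactly the periods at which the new cycles appear.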
The substitution on {a, b, c, d} does not appear in any other setting of which I am
aware, and was found purely by trial and error: I regret that I cannot offer any easy
motivation for its appearance.
12.2 The asymmetric case
We now consider the hole H_{a,b} = [0, a)^2 ∪ (b, 1]^2. Ideally, we would like to describe the
sets Di(a, b) as we did for one dimensional maps. In this section we fully describe the
set D3, defined below, and make a conjecture as to D0.
12.2.1 D3
As in Hare and Sidorov [20], we define D3 to be the set
D3 = {(a, b) ∈ (0, 1)^2 : J(H_{a,b}) contains a periodic orbit of period n for every n ≥ 3}.
We exclude n = 2 because there is only one period 2 orbit, making it somewhat less
interesting.
Remark 12.2.1. Observe that for all ε, δ > 0, we have that
(i) if (a, b) ∈ D3 then (a− ε, b+ δ) ∈ D3 also;
(ii) if (a, b) /∈ D3 then (a+ ε, b− δ) /∈ D3 also.
We also define corners and anti-corners:
Definition 12.2.2. We say that (a, b) is a corner of D3 if for (a′, b′) sufficiently close
to (a, b) we have
Figure 12.2: A corner (left) and an anti-corner (right)
(i) if a′ < a and b′ > b then (a′, b′) ∈ D3;¹
(ii) if a′ > a or b′ < b then (a′, b′) /∈ D3.
Definition 12.2.3. We say that (a, b) is an anti-corner of D3 if for (a′, b′) sufficiently
close to (a, b) we have
(i) if a′ < a or b′ > b then (a′, b′) ∈ D3;
(ii) if a′ > a and b′ < b then (a′, b′) /∈ D3.
Theorem 12.2.4. D3 is made up of 6 corners and 7 anti-corners. The corners are:
(a1, b1) = (4/31, 2/7)    (a2, b2) = (4/15, 10/31)    (a3, b3) = (2/7, 3/5)
(a4, b4) = (2/5, 5/7)     (a5, b5) = (21/31, 11/15)   (a6, b6) = (5/7, 27/31)
Taking a0 = 0 and b7 = 1, the anti-corners are the points (ai, bi+1) for i = 0, . . . , 6.
Proof. The set D3 is depicted in Figure 12.3. We prove this result by showing firstly
that (ai, bi) ∈ D3. This is done by explicitly constructing an n-cycle avoiding H(ai,bi)
for each n ≥ 3. These n-cycles are given for the first three corners in Table 12.1:
by symmetry one simply swaps the 1s and 0s to obtain the result for the remaining
corners.
Secondly we show that (a_i + ε, b_{i+1} − δ) /∈ D3 for all ε, δ > 0 by finding a bad n.
This is shown in Table 12.2: for each cycle an element contained in the hole is given.
Again the results for the remaining anti-corners follow by symmetry.
¹Note that this is not strictly necessary as it is precisely part (i) of Remark 12.2.1 and is hence true regardless of whether or not we include it in this definition.
Corner          n-cycles
(4/31, 2/7)     (010)^k 0,   (010)^k 00,   (010)^k 001
(4/15, 10/31)   (010)^k 0,   (010)^k 01,   (010)^k 001
(2/7, 3/5)      (010)^k 1,   (010)^k 01,   (010)^k 001
Table 12.1: n-cycles avoiding the corners.
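The families in Table 12.1 can be verified mechanically. The added sketch below checks, with exact rationals and the half-open hole convention [0, a)^2 ∪ (b, 1]^2 carried over from the symmetric case, that each listed cycle avoids the hole at its corner (small k shown; the theorem asserts this for every k).

```python
from fractions import Fraction

def val(s):
    return Fraction(int(s, 2), 2 ** len(s) - 1)

def orbit(w):
    rots = [w[j:] + w[:j] for j in range(len(w))]
    return [(val(r), val(r[::-1])) for r in rots]

def avoids(w, a, b):
    """True if the cycle of w misses [0, a)^2 ∪ (b, 1]^2."""
    return all(not (x < a and y < a) and not (x > b and y > b)
               for x, y in orbit(w))

families = {
    (Fraction(4, 31), Fraction(2, 7)):   ["0", "00", "001"],
    (Fraction(4, 15), Fraction(10, 31)): ["0", "01", "001"],
    (Fraction(2, 7),  Fraction(3, 5)):   ["1", "01", "001"],
}
for (a, b), tails in families.items():
    for k in range(1, 3):
        for tail in tails:
            assert avoids("010" * k + tail, a, b)
```

The margins are razor thin (for instance the cycle (010)00 has the point (4/31, 4/31), exactly on the corner of the half-open square), which is why exact `Fraction` arithmetic is used rather than floats.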
Anti-corner      n    n-cycle   Bad element
(0, 2/7)         3    100       (2/7, 2/7)
                      110       all
(4/31, 10/31)    5    10000     (4/31, 4/31)
                      11000     (17/31, 17/31)
                      11100     (14/31, 14/31), (25/31, 19/31), (19/31, 25/31)
                      11110     all
                      10100     (10/31, 10/31)
                      11010     all
(4/15, 3/5)      4    1000      (2/15, 4/15), (4/15, 2/15)
                      1100      (3/5, 3/5)
                      1110      (11/15, 13/15), (13/15, 11/15)
(2/7, 5/7)       3    100       (2/7, 2/7)
                      110       (5/7, 5/7)
Table 12.2: Bad n for anti-corners.
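Table 12.2 can be checked in the same way. The added sketch below verifies two of the anti-corners: just inside each one, every cycle of the bad period meets the hole.

```python
from fractions import Fraction

def val(s):
    return Fraction(int(s, 2), 2 ** len(s) - 1)

def orbit(w):
    rots = [w[j:] + w[:j] for j in range(len(w))]
    return [(val(r), val(r[::-1])) for r in rots]

def in_hole(p, a, b):
    x, y = p
    return (x < a and y < a) or (x > b and y > b)

eps = Fraction(1, 100)

# anti-corner (0, 2/7): just inside it, every 3-cycle meets the hole
a, b = eps, Fraction(2, 7) - eps
assert (Fraction(2, 7), Fraction(2, 7)) in orbit("100")        # the bad element
assert all(any(in_hole(p, a, b) for p in orbit(w)) for w in ["100", "110"])

# anti-corner (4/15, 3/5): the three period-4 cycles all fail as well
a, b = Fraction(4, 15) + eps, Fraction(3, 5) - eps
assert all(any(in_hole(p, a, b) for p in orbit(w)) for w in ["1000", "1100", "1110"])
```

Up to rotation, 100 and 110 are the only cycles of prime period 3, and 1000, 1100, 1110 the only ones of prime period 4, so these checks cover every cycle of the bad period.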
12.2.2 D0
Recall that D0 is given by:
D0 = {(a, b) ∈ [0, 1)^2 : J(H_{a,b}) ≠ ∅}.
We make a conjecture as to the description of D0 for the baker’s map. This has been
found by experimentation and appears to be correct, but this section is unproven.
It is useful to observe that periodic orbits for the baker’s map are symmetric about
the line x = y. To get the boundary of D0, we need the equivalent of extremal pairs.
For a particular periodic orbit, we want to choose the a and b that means the orbit
just falls into the hole: that is, the smallest possible a and the largest possible b. We
do not want to include extraneous points of the orbit in the middle of the hole.
Figure 12.3: D3 (shaded), with the corners (4/15, 10/31), (2/7, 3/5) and (5/7, 27/31) marked.
Figure 12.4: Possible orbits and corresponding H(a,b). In each of these cases we have chosen the smallest a and largest b such that the orbit just touches the edge of the hole.
Suppose we have a periodic orbit P . Then to get the equivalent of an extremal
pair we take (a, b) as follows:
a = min{max{x, y} : (x, y) ∈ P},
b = max{min{x, y} : (x, y) ∈ P}.
This is shown in Figure 12.4. In the one dimensional case, each orbit gave multiple
extremal pairs, but here each orbit only offers one pair that could reasonably be called
extremal. Because of this, given a periodic point w^∞, we will denote its corresponding
a and b by w(a) and w(b).
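The formulas for a and b above translate directly into code. The added sketch below computes (w(a), w(b)) for two of the words appearing in Figure 12.5.

```python
from fractions import Fraction

def val(s):
    return Fraction(int(s, 2), 2 ** len(s) - 1)

def orbit(w):
    rots = [w[j:] + w[:j] for j in range(len(w))]
    return [(val(r), val(r[::-1])) for r in rots]

def extremal(word):
    """The pair (w(a), w(b)) attached to the cycle of a periodic word."""
    pts = orbit(word)
    return (min(max(x, y) for x, y in pts),
            max(min(x, y) for x, y in pts))

assert extremal("10") == (Fraction(2, 3), Fraction(1, 3))
assert extremal("1000") == (Fraction(4, 15), Fraction(2, 15))
```

For the 2-cycle this recovers the familiar pair (2/3, 1/3); note how the symmetry of the orbit about x = y means a and b come from mirror-image points.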
We claim the correct orbits are as follows. For each orbit, we write the maximal cyclic
shift. Begin with 10^n and 01^n for odd n only. We make a tree as in the one dimensional
Figure 12.5: Corner words for D0, a tree with root 10 and children 1000 and 1110, continuing down through words such as 101000, 111010, 1010001000 and 11011010. Plain lines indicate regular concatenation; dashed lines indicate irregular concatenation.
case. Two words L and R (left and right) with L(a) < R(a) are said to be neighbours
if there does not exist any word w with L(a) < w(a) < R(a) and |w| < min{|L|, |R|}.
Then we combine two such neighbouring words to give RL. This gives a new corner
of D0. We will call any word derived in this fashion a regular word.
The key difference from the one dimensional case is that we can also combine in a
different way. Suppose L = L1 . . . Ln and R = R1 . . . Rm. Then the word
L1 . . . Ln−1R1LnR2 . . . Rm,
is also a new corner provided it is of prime period n + m. We will call these words
irregular. Some words can arise from both regular and irregular concatenation.
Conjecture 9. The words formed from regular and irregular concatenation as de-
scribed give the corners of D0. Together with their limit points this fully describes
D0.
Figure 12.6: Conjectured D0 for the baker’s map.
Chapter 13
Convex traps
13.1 The kite K
Denote by K the closed convex polygon with corners (0, 1), (1/2, 1), (1/3, 2/3) and
(0, 1/2). Similarly let Kε be the closed convex polygon with corners (0, 1), (1/2+ ε, 1),
(1/3, 2/3) and (0, 1/2− ε). The results about this kite also hold for its reflection in the
line y = x by swapping maxima and minima as needed. These two kites are depicted
in Figure 13.1.
Lemma 13.1.1. Any point whose orbit has a minimum x or a maximum y, excluding
fixed points, must fall into K.
Proof. We can define K by the inequalities y ≥ 2x and y ≥ x/2 + 1/2. We rewrite this
lexicographically:
(0a, b) =⇒ b ⪰ a,
(a, 1b) =⇒ b ⪰ a.
Consider a point shifted so that its x-coordinate is minimal and assume that it avoids
K:
(x, y) = . . . y3y2y1 · x1x2x3 . . .
Then because this has minimal x-coordinate we must have y1 = 1 and either x = 0
or x begins 0^n1 for some n. If x = 0 and y1 = 1 then this is instantly in K, so
suppose x begins 0^n1. If x is minimal and n = 1 then we must have either y = 1 or
(x, y) = ∞(01)^∞, both of which are in K. Hence suppose y1 = 1 and x begins 0^n1 for
some n > 1. Then as this point avoids K, one of the following inequalities must hold:
y1y2y3 . . . ≺ x2x3x4 . . . ,
y2y3y4 . . . ≺ x1x2x3 . . . .
Given we know y1 = 1 and x2 = 0, the first of these inequalities does not hold, and
so the second must. Then this implies that y2 y3 . . . y_{n+1} = 0^n. Thus y begins 10^n.
Due to our assumption that x is minimal, we know y cannot contain 0^{n+1}. Then by
repeatedly applying this method we can see that y must be (10^n)^∞.
Then by repeatedly shifting (n + 1) places to the left, we can see that our minimal
x must be no larger than (0^n1)^∞. Similarly x cannot be smaller than this, because then
the minimal shift would begin 0^{n+1}, contradicting our assumption that it starts 0^n1.
Therefore the only possibility is that our point is (x, y) = ∞(0^n1)^∞. But this is on
the boundary of K, and we defined K to be closed. Hence any point whose orbit
has a minimal x-coordinate must fall into K. The argument for maximal y follows by
symmetry.
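The boundary points ∞(0^n1)^∞ identified in the proof can be checked against the defining inequalities of K. The added sketch below verifies, exactly, that each such point sits on the edge y = x/2 + 1/2 and satisfies y ≥ 2x.

```python
from fractions import Fraction

for n in range(1, 10):
    d = 2 ** (n + 1) - 1
    x = Fraction(1, d)            # x = (0^n 1)^∞, the minimal shift
    y = Fraction(2 ** n, d)       # y = (1 0^n)^∞
    assert y == x / 2 + Fraction(1, 2)   # on the edge y = x/2 + 1/2
    assert y >= 2 * x                    # and inside the other constraint
```

For n = 1 this is the vertex (1/3, 2/3) of K itself, where both inequalities hold with equality.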
Lemma 13.1.2. Any point that does not fall into K is either a fixed point or comes
arbitrarily close to the boundary of K.
Proof. The only points that do not fall into K have neither a minimal x-shift nor a
maximal y-shift.
We know that the cylinder [11 · 00] ⊂ K, thus multiple 1s cannot be followed by
multiple 0s. Furthermore the cylinders [10^n1 · 0^{n+1}] and [1^{m+1} · 01^m0] are also contained
in K for any n, m > 0, so these words are also forbidden. This restricts the possible
words that can avoid K.
Any point that is not the period 2 orbit must contain 0^{a_0} or 1^{b_0} for some a_0, b_0 > 1.
Let us start from here and attempt to continue the word given the above forbidden
words.
If the point contains 1^{b_0}, then attempt to continue the word to the right. The
possibilities are:
1^∞, forbidden as it has a maximal y-shift,
1^{b_0} 0^c, forbidden for c > 1,
1^{b_0} (01)^n 0^c, forbidden for c > 1,
1^{b_0} 0 1^{b_1}, forbidden for b_1 < b_0.
So the only viable options are 1^{b_0}(01)^∞ or 1^{b_0} 0 1^{b_1} with b_1 ≥ b_0.
Similarly, if the point contains 0^{a_0}, then attempt to continue the word to the left.
The possibilities are:
∞0, forbidden as it has a minimal x-shift,
1^c 0^{a_0}, forbidden for c > 1,
1^c 0 1 0^{a_0}, forbidden for c > 1,
0^{a_1} 1 0^{a_0}, forbidden for a_1 < a_0.
So the only viable options are ∞(01) 0^{a_0} or 0^{a_1} 1 0^{a_0} with a_1 ≥ a_0.
Combining these, we see that the only bi-infinite words that could avoid K are in
one of the following forms:
∞(10) 1^{b_0} 0 1^{b_1} 0 1^{b_2} . . . ,
. . . 0^{a_2} 1 0^{a_1} 1 0^{a_0} (10)^∞,
. . . 0^{a_2} 1 0^{a_1} 1 0^{a_0} (10)^n 1^{b_0} 0 1^{b_1} 0 1^{b_2} . . . ,
. . . 0^{a_1} 1 0^{a_0} (10)^n 1^{b_0} 0 1^{b_1} . . . ,
where n ≥ 0, a_i, b_i > 1 for all i, and the sequences a_i and b_i are both (not necessarily
strictly) increasing.
Thus we have a_i ≤ a_{i+1} and b_i ≤ b_{i+1} for all i. We discount the option of having
a_i = ∞ or b_i = ∞ for some i because this could be rewritten to end with trailing 1s
instead of 0s or vice versa; the orbit would then fall into K.
If either sequence a_i or b_i tends to infinity, then the resultant point must come
arbitrarily close to (0, 1/2) and (1/2, 1) because we have shifts
. . . 1 0^{a_n} 1 · 0^{a_{n−1}} 1 . . . ,
. . . 0 1^{b_{m−1}} · 0 1^{b_m} 0 . . . ,
Figure 13.1: K (top left) and its mirror.
with a_n, b_m arbitrarily large.
If not, then the sequence must be eventually periodic on both the left and the right.
However, this means the point falls arbitrarily close to (10^n)^∞ or (1^n0)^∞ for some n.
These periodic points are on the boundary of K.
This covers all the possibilities and thus the lemma is proved.
Corollary 13.1.3. Kε is a trap for the baker’s map for every ε > 0.
13.2 The parallelogram P
The results in this section are joint work with Kevin Hare and Nikita Sidorov, and
work is ongoing on this topic.
Let P be the parallelogram with vertices (1/3, 2/3), (1/2, 1), (2/3, 1/3) and (1/2, 0),
shown in Figure 13.2. This parallelogram may also be defined by the inequalities
2x− 1 < y < 2x and 2− 4x < y < 3− 4x.
These inequalities convert to symbolic space in a reasonable way. For instance, let
x1 = 1; then the inequality y < 2x − 1 means x0 x−1 · · · ≺ x2 x3 . . . , and y > 3 − 4x
means x0 x−1 x−2 · · · ≻ x̄3 x̄4 . . . , where the bar denotes swapping 0s and 1s.
Figure 13.2: The parallelogram P .
Lemma 13.2.1. No element of J(P) can contain the factors
1 0^k 1^n 0,   n > k ≥ 2,
0 1^k 0^n 1,   n > k ≥ 2,
1 0^k 1 0^n 1,   n > k ≥ 1,
0 1^k 0 1^n 0,   n > k ≥ 1.
Proof. Consider a word w ∈ J(P).
Case 1 0^k 1^n 0 with n > k ≥ 2:
Assume that this word contains the factor 1 0^k 1^n 0 with n > k ≥ 2. We can shift this
factor so that x = ·0 1^n 0 . . . and y = ·0^{k−1} 1 . . . . Then notice that
2x = ·1^n 0 . . .
4x − 1 = ·1^{n−1} 0 . . .
2 − 4x = ·0^{n−1} 1 . . .
From this it is easy to see that 2x − 1 < y < 2x and 2 − 4x < y < 3 − 4x. Hence the
orbit of this word will lie in P.
Case 0 1^k 0^n 1 with n > k ≥ 2:
Assume that this word contains the factor 0 1^k 0^n 1 with n > k ≥ 2. We can shift this
factor so that x = ·1 0^n 1 . . . and y = ·1^{k−1} 0 . . . . Then notice that
2x − 1 = ·0^n 1 . . .
4x − 2 = ·0^{n−1} 1 . . .
3 − 4x = ·1^{n−1} 0 . . .
From this it is easy to see that 2x − 1 < y < 2x and 2 − 4x < y < 3 − 4x. Hence the
orbit of this word will lie in P.
Case 1 0^k 1 0^n 1 with n > k ≥ 1:
Assume that we have a factor 1 0^k 1 0^n 1 with n > k ≥ 1. Then we shift this factor so
that x = ·1 0^n 1 . . . and y = ·0^k 1 . . . . Then notice that
2x − 1 = ·0^n 1 . . .
4x − 2 = ·0^{n−1} 1 . . .
3 − 4x = ·1^{n−1} 0 . . .
From this it is easy to see that 2x − 1 < y < 2x and 2 − 4x < y < 3 − 4x. Hence the
orbit of this word will lie in P.
Case 0 1^k 0 1^n 0 with n > k ≥ 1:
Assume that we have a factor 0 1^k 0 1^n 0 with n > k ≥ 1. Then we shift this factor so
that x = ·0 1^n 0 . . . and y = ·1^k 0 . . . . Then notice that
2x = ·1^n 0 . . .
4x − 1 = ·1^{n−1} 0 . . .
2 − 4x = ·0^{n−1} 1 . . .
From this it is easy to see that 2x − 1 < y < 2x and 2 − 4x < y < 3 − 4x. Hence the
orbit of this word will lie in P.
Corollary 13.2.2. The only periodic points of the baker's map which could avoid
P are of the form (10^n)^∞, (01^n)^∞ or (1^n0^n)^∞. All of these intersect P. Hence J(P)
contains no periodic points.
Proof. Assume that the periodic point contains 1^{n_k} for some n_k ≥ 2. Consider the
point (0^{m_1}1^{n_1} . . . 0^{m_k}1^{n_k})^∞. By the previous result, either m_k = 1 or m_k ≥ n_k. In
either case, we have n_{k−1} ≥ n_k. Similarly, we have n_1 ≥ n_2 ≥ · · · ≥ n_k ≥ n_1. Thus all the n_i
must be the same. In a similar way, all the m_i are the same. Hence we can write this
sequence as (0^m1^n)^∞. If m, n ≥ 2 then we have m ≥ n ≥ m and hence it is of the
form (0^n1^n)^∞. Otherwise it is of the form (10^n)^∞ or (01^n)^∞, as required.
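The remaining claim, that every orbit of these three forms does meet P, can be spot-checked. In the added sketch below P is taken closed (boundary contact counts as intersecting; the 2-cycle, for instance, lies exactly on the edge y = 2x − 1).

```python
from fractions import Fraction

def val(s):
    return Fraction(int(s, 2), 2 ** len(s) - 1)

def orbit(w):
    rots = [w[j:] + w[:j] for j in range(len(w))]
    return [(val(r), val(r[::-1])) for r in rots]

def in_closed_P(x, y):
    """The closure of the parallelogram P."""
    return 2 * x - 1 <= y <= 2 * x and 2 - 4 * x <= y <= 3 - 4 * x

for n in range(1, 5):
    for w in ["1" + "0" * n, "0" + "1" * n, "1" * n + "0" * n]:
        assert any(in_closed_P(x, y) for x, y in orbit(w))
```

In each checked case the orbit touches the boundary of P rather than its interior, so these periodic points are the extreme survivors ruled out only by closing the hole.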
As stated, work on this topic is ongoing with Kevin Hare and Nikita Sidorov.
Chapter 14
Conclusion
In the first part of this thesis, we studied avoidance sets for intermediate β-expansions.
The primary method of doing this is to find maximal extremal pairs. The easiest such
pairs are formed from balanced words, but whilst for the doubling map these are
sufficient, there are many more maximal extremal pairs for the intermediate case.
In Chapter 6, we saw that the next step is to approximate the maximal admissible
balanced word by other inadmissible balanced words from above, and then combine
these with 0 to create admissible words. This is repeated by approximating the minimal
admissible balanced word from below and combining with 1. These are the primary
trees.
Primary trees are sufficient for greedy or lazy maps, but for truly intermediate
expansions there are still gaps remaining. By this point these remaining gaps are tiny.
We conjecture that we fill these gaps with so-called secondary trees, created from
the surrounding primary trees. However, sometimes the creation of these becomes
complicated, and as demonstrated in the example of Section 9.3.2 this description
may still be incomplete. This is as far as we can describe the boundary of the set D0:
the set of holes (a, b) where Jβ,α(a, b) is non-empty.
For D1 and D2, it usually suffices to use the same maximal extremal pairs. However,
this is impossible for the pairs derived from secondary trees, hence for the gaps we also
have conjectures. The other problem with these sets is when b is small or a is large,
as covered in Chapter 7. Here the boundary of D1 is particularly fiendish, and more
trees are required.
In Section 4.2 we presented an easy way of describing the set of non-transitive inter-
mediate β-expansions. The non-transitive case also exhibits some strange behaviour,
covered in Chapter 8. The final short chapter of part 1 contains some results on the
largest holes on the boundary of the sets Di.
Future work to be done would include firming up the conjectures regarding sec-
ondary trees and the situation for small b and large a, and answering the question
of ci(β, α). Beyond this, there are many more one dimensional maps that could be
studied: maps with more than two branches such as the tripling map, or maps such as
the tent map or negative β-expansions which use the alternating lexicographic order
instead of the standard lexicographic order. These are still broadly sensible maps, yet
given the difficulties presented by β-transformations alone, one dreads to think what
could occur for something more complex. However, in some ways these maps should
be similar. The idea of extremal pairs can easily be tweaked to use a different order
or a larger alphabet. Then for D0 at least maximal extremal pairs are by definition
exactly what is needed. In this way, the problem would be no different: identify the
maximal extremal pairs. Of course as illustrated already this can be highly nontriv-
ial for a complicated symbol space, and in particular there is no obvious analog of
balanced words to provide a friendly starting point, but the idea is there.
In the second part, we presented some preliminary results on the baker’s map.
There is also obviously much further work to be done here, including a proof of Section
12.2.2. There are also many questions to be asked following on from Chapter 13: what
is the smallest convex trap? What is the smallest convex hole such that J (H) has
positive Hausdorff dimension? Further to this, for Pisot numbers in particular there
is a sensible well studied higher dimensional analogue of greedy β-transformations.
What then can be said about these? In general, higher dimensional maps present a
wealth of possibilities for maps with holes that are yet to be studied.
Bibliography
[1] J. Allouche and J. Shallit. Automatic Sequences: Theory, Applications, Generalizations. Cambridge University Press, 2003.
[2] W. Bahsoun, C. Bose, and G. Froyland. Ergodic Theory, Open Dynamics, and
Coherent Structures. Springer Proceedings in Mathematics & Statistics. Springer
New York, 2014.
[3] A. Bertrand-Mathis. Développement en base θ, répartition modulo un de la suite
(xθ^n), n ≥ 0, langages codés et θ-shift. Bulletin de la S. M. F., 114:271–323,
1986.
[4] P. Boyland, A. de Carvalho, and T. Hall. On digit frequencies in β-expansions.
Transactions of the American Mathematical Society, 2016.
[5] S. Bullett and P. Sentenac. Ordered orbits of the shift, square roots, and the
devil’s staircase. Math. Proc. Camb. Phil. Soc., 115(03):451–481, 1994.
[6] S. Bundfuss, T. Krüger, and S. Troubetzkoy. Topological and symbolic dynamics for
hyperbolic systems with holes. Ergodic Theory and Dynamical Systems, 31:1305–
1323, 2011.
[7] N. Chernov and R. Markarian. Anosov maps with rectangular holes. Nonergodic
cases. Boletim da Sociedade Brasileira de Matemática - Bulletin/Brazilian Math-
ematical Society, 28(2):315–342, 1997.
[8] N. Chernov and R. Markarian. Ergodic properties of Anosov maps with rectan-
gular holes. Boletim da Sociedade Brasileira de Matemática - Bulletin/Brazilian
Mathematical Society, 28(2):271–314, 1997.
[9] L. Clark. The β-transformation with a hole. Discrete and Continuous Dynamical
Systems, 36(3):1249–1269, 2016.
[10] K. Dajani and C. Kraaikamp. From greedy to lazy expansions and their driving
dynamics. Expositiones Mathematicae, 20(4):315–327, 2002.
[11] P. Erdős, I. Joó, and V. Komornik. Characterization of the unique expansions
1 = ∑_{i=1}^∞ q^{−n_i} and related problems. Bulletin de la S. M. F., 118(3):377–390,
1990.
[12] K. Falconer. Fractal geometry: mathematical foundations and applications. John
Wiley & Sons, 1990.
[13] L. Flatto and J. C. Lagarias. The lap-counting function for linear mod one trans-
formations I: explicit formulas and renormalizability. Ergodic Theory and Dy-
namical Systems, 16(03):451–491, 1996.
[14] N. P. Fogg. Substitutions in Dynamics, Arithmetics and Combinatorics, volume
1794 of Lecture Notes in Mathematics. Springer, 2002.
[15] P. Glendinning. Topological conjugation of Lorenz maps by β-transformations.
Mathematical Proceedings of the Cambridge Philosophical Society, 107(02):401–
413, 1990.
[16] P. Glendinning and T. Hall. Zeros of the kneading invariant and topological
entropy for Lorenz maps. Nonlinearity, 9(4):999, 1996.
[17] P. Glendinning and N. Sidorov. The doubling map with asymmetrical holes.
Ergodic Theory and Dynamical Systems, 35:1208–1228, 2015.
[18] P. Glendinning and C. T. Sparrow. Prime and renormalisable kneading invariants
and the dynamics of expanding Lorenz maps. Phys. D, 62:22–50, 1993.
[19] L. Goldberg and C. Tresser. Rotation orbits and the Farey tree. Ergodic Theory
and Dynamical Systems, 16:1011–1029, 1996.
[20] K. G. Hare and N. Sidorov. On cycles for the doubling map which are disjoint
from an interval. Monatsh. Math., 175:347–365, 2014.
[21] J. H. Hubbard and C. T. Sparrow. The classification of topologically expansive
Lorenz maps. Comm. Pure Appl. Math., 43:431–443, 1990.
[22] D. Lind and B. Marcus. An introduction to symbolic dynamics and coding. Cam-
bridge University Press, 1995.
[23] M. Lothaire. Combinatorics on words, volume 17 of Encyclopedia of Mathematics
and its Applications. Addison-Wesley, 1983.
[24] M. Lothaire. Algebraic combinatorics on words, volume 90 of Encyclopedia of
Mathematics and its Applications. Cambridge University Press, 2002.
[25] M. Lothaire. Applied combinatorics on words, volume 105 of Encyclopedia of
mathematics and its Applications. Cambridge University Press, 2005.
[26] M. R. Palmer. On the classification of measure preserving transformations of
Lebesgue spaces. PhD thesis, University of Warwick, 1979.
[27] W. Parry. On the β-expansions of real numbers. Acta Math. Acad. Sci. Hung.,
11:401–416, 1960.
[28] W. Parry. Representations for real numbers. Acta Math. Acad. Sci. Hung., 15:95–
105, 1964.
[29] W. Parry. Symbolic dynamics and transformations of the unit interval. Transac-
tions of the American Mathematical Society, 122(2):368–378, 1966.
[30] A. Rényi. Representations for real numbers and their ergodic properties. Acta
Math. Acad. Sci. Hung., 8:477–493, 1957.
[31] N. Sidorov. Almost every number has a continuum of β-expansions. The American
Mathematical Monthly, 110(9):838–842, 2003.
[32] N. Sidorov. Arithmetic dynamics. In S. Bezuglyi and S. Kolyada, editors, Topics
in Dynamics and Ergodic Theory, number 310 in LMS Lecture Notes Series, pages
145–189. 2003.
[33] N. Sidorov. Expansions in noninteger bases. Lecture notes of a graduate course
at the summer school at Queen Mary, University of London, July 2010.
[34] N. Sidorov. Supercritical holes for the doubling map. Acta Mathematica Hungar-
ica, 143:298–312, 2014.
[35] L. Vuillon. Balanced words. Bull. Belg. Math. Soc. Simon Stevin, 10(5):787–805,
2003.
[36] L. Young. Entropy, Lyapunov exponents, and Hausdorff dimension in differentiable
dynamical systems. IEEE Transactions on Circuits and Systems, 30(8):599–607,
1983.