Hacking the Master Switch? The Role of Infrastructure in Google’s Network
Neutrality Strategy in the 2000s
by
John Harris Stevenson
A thesis submitted in conformity with the requirements for the degree of Doctor of Philosophy
Faculty of Information University of Toronto
© Copyright by John Harris Stevenson 2017
Hacking the Master Switch? The Role of Infrastructure in Google’s Network Neutrality
Strategy in the 2000s
John Harris Stevenson
Doctor of Philosophy
Faculty of Information University of Toronto
2017
Abstract
During most of the decade of the 2000s, global Internet company Google Inc. was one of the
most prominent public champions of the notion of network neutrality, the network design
principle conceived by Tim Wu that all Internet traffic should be treated equally by network
operators. However, in 2010, following a series of joint policy statements on network
neutrality with telecommunications giant Verizon, Google fell nearly silent on the issue,
despite Wu arguing that a neutral Internet was vital to Google’s survival.
During this period, Google engaged in a massive expansion of its services and technical
infrastructure. My research examines the influence of Google’s systems and service offerings
on the company’s approach to network neutrality policy making. Drawing on documentary
evidence and network analysis data, I identify Google’s global proprietary networks and
server locations worldwide, including over 1500 Google edge caching servers located at
Internet service providers.
I argue that the affordances provided by its systems allowed Google to mitigate potential
retail and transit ISP gatekeeping. Drawing on the work of Latour and Callon in Actor–
network theory, I posit the existence of at least one actor-network formed among Google and
ISPs, centred on an interest in the utility of Google’s edge caching servers and the success of
the Android operating system. I suggest that this actor-network was a manifestation of the
transformation of Google from a content provider as conceived by Wu into a new kind of
Internet governance policy and network actor: the platform hybrid. Drawing on the work of
Ciborra, I suggest a number of processes of technological and organisational change that
were central to this transformation, allowing Google to at least partially “hack” Wu’s master
switch. In conclusion, I describe the characteristics of Google as a platform hybrid and its
implications for network neutrality policy making and Internet governance.
Acknowledgments
Undertaking this work has been a profoundly transformative experience, and it would not
have been possible without the guidance and support that I received from many good people.
I would like to express my sincere gratitude to my advisor Prof. Andrew Clement for his
continuous support of my PhD studies. He was both very patient and genuinely enthusiastic
about my research, and his immense knowledge was exceedingly valuable. I have learned a
great deal from Andrew about the work of a public intellectual.
I am thankful for the members of my committee, Prof. David Phillips and Prof. Leslie Regan
Shade, who have challenged and supported me over the past five years as I moved from a
collection of questions to a completed study. Leslie has been, for more years than I dare
count, as wise and supportive a mentor and friend as I could hope for.
I wish to thank the members of the iSchool community who provided guidance and support
over the years, particularly Prof. Lynne Howarth, a role model as an engaged researcher and
teacher, and the iSchool staff, especially Ms. Laura Jantek.
I am very appreciative of Mr. Matt Calder and Dr. Aemen Lodhi who shared with me data
that proved to be invaluable to my research.
It would not have been possible to complete my research as a part-time student without the
truly exceptional support and understanding of two of my professional colleagues, Mr. Teras
Gavin and Ms. Valerie Morin.
I will be forever thankful to my mentor at Dalhousie University, the late Prof. Robert Merritt,
who started me on this journey more than twenty years ago.
Lastly, without the support and understanding of my family and friends, this work would
have been simply impossible. My friends and colleagues Caroline Côté, Mark MacLeod,
Christy Conte, Shelley Robinson, Iain Cook, and Monica Auer intervened at crucial times.
My mother Mary, my father David, and my aunt Edith all supported my graduate work for
very many years, in very many ways. My wife, Natasha Gauthier, helped me immeasurably
through a mix of encouragement and loving impatience that made completion of this work an
absolute necessity. Finally, the intellectual passion of my son Joshua inspired me to keep
going, along with a legitimate fear that he might complete a PhD before I did.
Table of Contents
Acknowledgments .............................................................................................................. iv
Table of Contents ............................................................................................................... vi
List of Tables .......................................................................................................................x
List of Figures .................................................................................................................... xi
List of Appendices ............................................................................................................ xii
1 Google and network neutrality: A contested Internet .....................................................1
1.1 The day Google became “evil” ................................................................................1
1.2 Wu’s network neutrality and Google .......................................................................3
1.3 An alternative Internet? ............................................................................................5
1.4 Research goal and questions ....................................................................................9
1.5 Overview of argument ...........................................................................................10
1.5.1 Chapter Two: Approaches to Internet infrastructure, network neutrality, and Google studies .....................................................................................10
1.5.2 Chapter Three: Researching the burgeoning network giant ......................12
1.5.3 Chapter Four: Extending search .................................................................13
1.5.4 Chapter Five: The policy-active hyper giant .............................................14
1.5.5 Chapter Six: The platform hybrid ..............................................................15
1.5.6 Chapter Seven: Google, beyond good and evil ..........................................16
1.6 Location in contemporary research and policy debates .........................................17
1.7 Chapter summary ...................................................................................................18
2 Approaches to Internet infrastructure, network neutrality, and Google studies ...........19
2.1 Relevant theoretical and methodological approaches ............................................19
2.1.1 Foundational work: Internet governance ...................................................19
2.1.2 (Infra)structure follows strategy, and vice versa ........................................23
2.1.3 Star to Sandvig: Infrastructure studies .......................................................25
2.1.4 Theory of affordances ................................................................................27
2.1.5 Actor–network theory ................................................................................28
2.1.6 Ciborra: Hosting, drift, hacking, and the platform organisation ................31
2.1.7 The political economy of Google and critical Google studies ...................40
2.2 Understanding network neutrality ..........................................................................45
2.2.1 Common carriage .......................................................................................46
2.2.2 The end-to-end principle and Internet traffic management .......................48
2.2.3 Net neutrality controversies in North America ..........................................50
2.3 Chapter summary ...................................................................................................51
3 Researching the burgeoning network giant ...................................................................52
3.1 Research Design .....................................................................................................54
3.1.1 Case study: Ciborra and Actor–network theory .........................................54
3.1.2 Infrastructure studies ..................................................................................55
3.1.3 Propositions ...............................................................................................56
3.2 Research process ....................................................................................................61
3.2.1 Textual analysis .........................................................................................62
3.2.2 Discovering infrastructure .........................................................................66
3.3 Mapping Google: Process and impact ...................................................................80
3.3.1 Software .....................................................................................................81
3.3.2 Data sets .....................................................................................................81
3.4 Chapter summary ...................................................................................................86
4 Extending search ...........................................................................................................87
4.1 Origin in academic practice ...................................................................................88
4.2 Advertising .............................................................................................................91
4.3 Building the “innovation machine” .......................................................................94
4.4 First infrastructures ................................................................................................99
4.5 Search extends into new domains ........................................................................102
4.6 Google participation in the policy process ...........................................................104
4.7 Chapter summary .................................................................................................105
5 The policy-active hyper giant .....................................................................................106
5.1 Google apps influence infrastructure ...................................................................106
5.1.1 Google Video and YouTube ....................................................................107
5.1.2 Google web applications ..........................................................................109
5.1.3 Voice ........................................................................................................111
5.2 Google and Internet governance ..........................................................................112
5.2.1 Net neutrality controversies .....................................................................113
5.2.2 The open Internet .....................................................................................115
5.2.3 Versus telcos ............................................................................................117
5.2.4 Other jurisdictions ....................................................................................118
5.3 Google, mobile and Android ................................................................................120
5.4 Network neutrality in the late-2000s ....................................................................123
5.4.1 Comcast and BitTorrent throttling ...........................................................123
5.4.2 The Google-Verizon statements ..............................................................124
5.4.3 Tepid support for network neutrality .......................................................133
5.5 Google’s infrastructure ........................................................................................138
5.5.1 Data Centres .............................................................................................139
5.5.2 Wide area Networks .................................................................................146
5.5.3 Peering and Caching Servers ...................................................................148
5.6 Google as hyper giant ..........................................................................................155
5.7 Chapter summary .................................................................................................155
6 The platform hybrid ....................................................................................................157
6.1 Extending Wu and unpacking The Cloud ............................................................157
6.2 Ciborra: Google’s technological transformations ................................................162
6.2.1 Drift, embedded bricolage, and platform organisation ............................162
6.2.2 The “pasted-up” infrastructure .................................................................165
6.2.3 Technological stages ................................................................................167
6.3 Forming, reforming actor-networks .....................................................................169
6.3.1 Actor-networks and infrastructure ...........................................................174
6.3.2 Neutrality-focused actor-networks ...........................................................175
6.3.3 An affordance-focused actor-network .....................................................183
6.4 What is Google? ...................................................................................................188
6.4.1 The business platform ..............................................................................188
6.4.2 Bigger than a hyper giant .........................................................................192
6.5 The challenge to net neutrality and Internet policy .............................................199
6.5.1 Network neutrality discourse ...................................................................200
6.5.2 Implications for policy and regulation .....................................................202
6.6 Chapter summary .................................................................................................206
7 Google, beyond good and evil ....................................................................................207
7.1 Major findings ......................................................................................................209
7.1.1 What was Google’s policy position on network neutrality, and how did it change? ....................................................................................................210
7.1.2 How did Google’s infrastructure and systems, and the affordances they provided, change during the period of its network neutrality engagement? ............................................................................................212
7.1.3 In what ways could infrastructure and systems influence Google’s policy approach to network neutrality? ...............................................................215
7.1.4 How can Google be characterised as a network and policy actor during this period, in relation to Wu’s network neutrality models? ....................215
7.1.5 Google, hacking the master switch ..........................................................217
7.2 Contributions and limitations of research ............................................................221
7.2.1 Policy contributions: extending Wu .........................................................221
7.2.2 Contributions to practice: extending Internet governance .......................225
7.2.3 Contributions to research methods ...........................................................226
7.2.4 Limitations ...............................................................................................231
7.2.5 Directions for future work .......................................................................232
7.3 Final thoughts ......................................................................................................235
References ........................................................................................................................238
Appendix A: List of acronyms .........................................................................................257
Appendix B: Google Peering Locations, October 2013 ..................................................259
Appendix C: Timeline of Google’s history .....................................................................264
Copyright Acknowledgements .........................................................................................266
List of Tables
Table 3.1: Google infrastructure data set ................................................................................ 84
Table 3.2: Data set metadata ................................................................................................... 85
Table 4.1: Google acquisitions, 2001 to 2004 ........................................................................ 98
Table 5.1: Annual lobbying by Google to 2014 ................................................................... 137
Table 5.2: Google large data centres, October 28, 2013 ....................................................... 145
Table 6.1: Development of Google infrastructure, 1998 to 2013 ......................................... 169
Table 6.2: Google’s Network Neutrality strategies .............................................................. 187
List of Figures
Figure 1.1: Wu’s 2010 model of how Google reaches customers ....................................... 3
Figure 3.1: Google’s infrastructure elements, 2013 .......................................................... 59
Figure 3.2: PeeringDB website, 2016 ................................................................................ 79
Figure 5.1: Google data centre, 2014 .............................................................................. 141
Figure 5.2: Google data centres and data centre (G-scale) network, 2013. ..................... 143
Figure 5.3: Google Data Centre locations in North America .......................................... 144
Figure 5.4: Google server locations worldwide, October 13 2013 .................................. 153
Figure 5.5: North American Google server locations, October 28, 2013 ........................ 154
Figure 6.1: Wu’s 2010 Model: How Google reaches customers, circa 2003 .................. 158
Figure 6.2: How Google reaches users, 2003 .................................................................. 159
Figure 6.3: How Google reaches users, 2013 .................................................................. 161
Figure 6.4: Google’s participation in late-2000s neutrality-focused actor-network ....... 182
Figure 7.1: Google's Infrastructure, October 28 2013 ..................................................... 229
Figure 7.2: Google's infrastructure, coast of Brazil, October 28 2013 ............................ 230
List of Appendices
Appendix A: List of acronyms .........................................................................................257
Appendix B: Google Peering Locations, October 2013 ..................................................259
Appendix C: Timeline of Google’s history .....................................................................264
1 Google and network neutrality: A contested Internet
In this chapter, I introduce my research into Google, its infrastructure, and its role as a network
neutrality policy actor in the 2000s. I begin by presenting an illuminating moment in Google’s
history, when the company’s support for network neutrality seemed to waver, and then fade,
revealing tensions around the identity of the company that will be a theme in this dissertation.
1.1 The day Google became “evil”
It was a shock the day Google became “evil”. On August 4th, 2010, the New York Times was the
first media outlet to report the bad news. Google, the world’s most used search engine, had
purportedly made a secret deal that would see its Internet traffic prioritized on the network of
America’s largest mobile telecommunications provider, Verizon (Wyatt, 2010b). If true, the
agreement would be, by all appearances, a rejection of Google’s long-held position in support of
network neutrality, the notion that all Internet traffic should be managed more or less the same,
regardless of origin or destination. Since at least 2007, when Google had begun to engage
matters of governmental policy and regulatory process in earnest, the company had become
perhaps the most vocal and effective critic of the “closed Internet” that appeared to be attractive
to many Internet service providers.
The reaction to the report was swift and critical. Gigi B. Sohn, president and co-founder of media
advocacy organisation Public Knowledge (and now special counsel to the U.S. Federal
Communications Commission) called the reported deal “deeply regrettable”, yet also argued that
it “should be considered meaningless” (Public Knowledge, 2010). Other American network
neutrality proponents soon followed suit, with some suggesting that Google’s seeming rejection
of an “open Internet” was a repudiation of the company’s well known though unofficial motto,
“Don’t be evil” (Aaron, 2010).
However, it soon became apparent that the Times’ story of Google’s abandonment of network
neutrality was not wholly accurate. When Google and Verizon made their official joint policy
statement later in the week, on August 9th 2010, the specifics were somewhat less dramatic. The
companies had not agreed to prioritise Google traffic on the Verizon network. Quite the contrary:
the leaderships of Google and Verizon had committed their companies to network neutrality on
the physical, wireline Internet, though not on wireless networks (Verizon’s primary business),
nor on future “new information services” (Davidson & Tauke, 2010b). Google’s then-CEO, Eric
Schmidt, felt compelled by criticism of both the real and imagined proposals to state publicly
that Google remained strongly committed to network neutrality (Goldman, 2010; Tady, 2010).
But then Google, one of network neutrality’s most vocal proponents, became nearly silent on the
issue. From 2007 to 2010, the Google Public Policy Blog featured dozens of posts arguing
various points of Internet traffic management policy. In 2011, however, the number of posts
mentioning the topic fell by 83%. Public statements and lobbying on the issue in the United States
appear to have become rare. Unlike the outcry over the company’s seeming alliance with
Verizon, Google’s silence on network neutrality generated little reaction in the popular or
technology press until 2014, when network neutrality again became a topic of significant public
interest.
1.2 Wu’s network neutrality and Google
Figure 1.1: Wu’s 2010 model of how Google reaches customers
From Wu (2010), The Master Switch: The Rise and Fall of Information Empires, page 284. Copyright © 2010 by Tim Wu. Used with permission.
In his 2010 book The Master Switch, Columbia Law School professor Tim Wu argued strongly
that Google had much to lose if network neutrality rules and practices changed significantly, and
retail ISPs were able to block or degrade third party services. Wu’s concern was straightforward:
Internet service providers could easily control Google’s access to its customers over last mile
connections, and at least one retail ISP CEO, SBC Communications’ Ed Whitacre, had stated in
2005 that they would do so (O’Connell, 2005). In The Master Switch, Wu argued that Google
represented a commitment to the “open” aspects of the Internet, in contrast to companies such as
Apple and Facebook, who preferred to build “walled gardens” of content using closed technical
ecosystems. Wu’s argument was succinctly summarised with a diagram (Figure 1.1 above)
showing cable and telephone companies sitting between Google and its customers, and hence
acting as a “master switch” that could control any content provider’s access to users.
Wu admitted that the Google-Verizon statements, made just as he was completing his book in
2010, flew in the face of some of Google’s past commitments to network neutrality and an open
Internet. Google’s public silence on network neutrality that began in 2010 was not a matter of the
company returning to its policy isolation of the early-2000s when it put few resources into
lobbying policy makers and regulators in any jurisdiction. In fact, Google’s spending on
lobbying in the United States increased dramatically after 2010, growing by approximately 350%
in 2012 (The Center for Responsive Politics, 2015). Google appeared to be more active than
ever, but significantly less vocal within the public sphere.
Nor were the policy and regulatory questions surrounding network neutrality settled in most
jurisdictions, including the United States. In fact, anti-network neutrality positions became de
rigueur among many Washington politicians in the mid-2010s, to say nothing of the
telecommunications industry’s opposition to network neutrality regulation and legislation. If
anything, the regulatory environment became more hostile to Google than it had been prior to
2010. Wu’s master switch seemed to be as much a danger to Google as ever.
As network neutrality and other Internet governance issues became increasingly important in
public discourse in the mid-2010s, there was speculation in the popular technology press
concerning Google’s behaviour, much of it couched in outrage and confusion. Google had an
unofficial motto, “Don’t be evil”, and a formal mission, “to organize the world’s information
and make it universally accessible and useful”, and enjoyed an uncharacteristically positive
image for a global multinational corporation.
But the reality of Google’s business and behaviour was significantly more problematic and
complex. Unlike other Internet entities that had evolved into key information management roles
as non-profit entities, such as Wikipedia and the Internet Archive, Google was clearly a
commercial entity. As Vaidhyanathan so pointedly stated, “Google is not a free-speech engine: it
is an advertising company” (2011, p. 130). In The Googlization of Everything (2011) he argues
that while Google lobbied to preserve network neutrality, this support was hollow. Writes
Vaidhyanathan:
Many of Google’s positions correspond roughly with the public interest (such as giving
empty support to a network neutrality policy and “safe-harbor” exemptions from
copyright liability). Others, such as fighting against stronger privacy laws in the United
States, do not. (2011, p. 18)
In a 2014 interview concerning Google’s absence from the network neutrality debate, Wu
himself recognised that Google had changed. “Net neutrality got them where they are,”
suggested Wu, “There’s a danger that they, having climbed the ladder, might pull it up after
them” (quoted in Shields, 2014) and abandon the cause of the open Internet.
It is Google’s climb up Wu’s metaphorical ladder that is the focus of my research.
1.3 An alternative Internet?
McMillan (2014) suggests that Google remained committed to network neutrality and that its
withdrawal from public discourse was simply strategic. Yet Google’s evident retreat coincides
with a number of events in the company’s history which may have driven or influenced a change
in approach, including changes to the company’s leadership. The most significant changes to
Google during this period were the parallel expansions of its consumer and enterprise service
offerings—including new web applications and the introduction of the Android operating
system—and of the company’s infrastructure.
Throughout the 2000s, Google deployed a number of new services that drove expansion and
modification of the company’s technical infrastructure. In the early-2000s, Google Maps, Gmail
and AdWords required Google to increase the capacity of its high-speed networks to ensure low
network latency, as well as invest significantly in data storage. The 2006 acquisition of video-
sharing site YouTube also led Google to begin creation of a massive content delivery network,
including the development of an edge caching program with last-mile retail ISPs and numerous
peering agreements with various network entities (R. Miller, 2010). It had already been reported
in 2005 that Google was purchasing “dark fibre”, possibly to create its own Internet backbone
networks (Hansen, 2005). In 2007, the same year that Google began to aggressively engage in
the public policy process, the company announced the construction of four massive data centres
in the United States (Nurmi, 2008). By 2011, Google had acknowledged the existence of six
large data centres in North America, all connected by fibre owned by Google (Google, 2013).
As early as 2005, the technology press had reported on Google’s efforts to create what some
called an “alternative Internet” (Hedger, 2005). Some in the telecommunications industry
suggested the potential for Google to create its own network resources and circumvent possible
retail ISP gatekeeping. During a January 2007 speech at Fordham Law School, William Barr,
Verizon’s General Counsel, made several statements concerning network neutrality in general
and Google’s position in particular. Among many comments on the then current regulatory
environment, Barr is paraphrased as stating that “Google is unhappy with the public Internet”
and is “creating a virtual network” that “bypasses the Internet” (Isenberg, 2007).
I believe examining Google’s infrastructure is important to addressing questions about the
company’s approach to network neutrality. Wu (2010) describes network neutrality as a
“network design principle”, a set of values that arise from the network design of the past, but can
and should be woven into the creation of current and future infrastructure. Seemingly less
concerned about the rights of citizens or consumers, Wu’s network neutrality explicitly focuses
on technological affordances, those aspects of a technology that allow people to perform certain
tasks or access certain information. Wu’s concept of network neutrality prompts us to explore
how the affordances of technical infrastructures linked by network and business connections to
the public Internet, including Google’s, might influence the behaviour of actors in the network
neutrality policy process, and transform the policy debate itself.
A challenge arising from this perspective is to a fundamental aspect of Wu’s analysis: the notion
of Google as “content provider”, an entity that connects to the Internet and is dependent on
network providers to access users. As I examine in more detail in the following chapters,
Google’s services and infrastructure changed significantly in the 2000s. Google ended the 1990s
as an indexer of the web and a source of search results, very much aligned with Wu’s notion of
the content provider. But by 2010, Google had launched numerous new services, some extending
search to new domains, others leveraging Google’s systems to provide new functionalities, but
all dependent on user labour and the leveraging of Google’s massive user base to create a
platform for advertising. The company became essential to millions of users and generated
billions of dollars of profits.
While still a “content provider”—in fact, more of a content provider than ever, distributing vast
quantities of video, music, software, and other media—Google had also become something else.
Wu took a relatively favourable view of Google in the 2000s as a more open alternative to other
giant technology companies of the time. Other writers, often in the popular technology and
business presses, similarly emphasised its business and technological innovations and the extent
to which Google was admired as a company. But there were other, more critical views of Google
during this period. As we will see in Chapter 2, writers such as Vaidhyanathan (2011), Stalder
and Mayer (2009), and Pariser (2011) characterize Google as a platform for universal
surveillance and a location for the exploitation of labour, criticizing the company for its impact
on intellectual discourse, its alleged abuse of intellectual property rights, and its misplaced faith
in technological solutions. As Vaidhyanathan writes, we must study Google “realising that we are
not Google’s customers: we are its product” (2011, p. 3).
What motivates my research is the desire to understand, on some level, what Google is. In doing
so, I ask how we can describe and understand the relationships among aspects of Google’s
organisation (its services, infrastructure, and strategies on policy), and its interactions with actors
external to it (policy makers, other network entities, and retail and transit ISPs). I also explore
the company’s complex relationships with the various types of actors I see as Google’s users1,
including consumers, advertisers, and enterprise customers, all of whom use Google’s content
and services and provide labour for its operations. Such an understanding will be critical for
regulators and policy makers as they engage Google and other large Internet companies.
1 I find the term “user” highly problematic, especially as it is used by Wu primarily to describe ISP customers and retail consumers. User-centered design advocate Don Norman (2006), among others, has criticized the term, preferring that we consider “people” and their specific roles. The term has some utility in identifying individuals interacting with technological systems, though I have attempted to define specific roles (viewer, author, advertiser, etc.) where useful.
1.4 Research goal and questions
My research questions arise from the realisation that while Wu provides an interesting and useful
analysis of Google within the context of late-2000s network neutrality discourse, a more
comprehensive and historically situated examination of the company as a network and policy
actor during this period is now possible. What Google became in the 2000s and how it can be
characterised after 2010 are key components of my analysis.
Given this context, my principal research goal is to better understand the role that Google’s
infrastructure played in the company’s retreat from public support for Wu’s network neutrality
after 2010, an influence that may well have been bi-directional.
In order to understand the influence and role of infrastructure in Google’s policy-making
activities, I have worked to better understand that infrastructure itself: documenting its
functionalities and affordances, and detailing its scope and reach through time and space.
Google went to some lengths in the 2000s to obscure its infrastructure; glimpses of it nearly
always served the interests of the company, whether recruiting last-mile ISPs to host edge
caches or promoting Google as a powerful, innovative, and friendly technology brand.
Four supporting questions guide my research, as follows:
1. What was Google’s policy position on network neutrality, and how did it change?
2. How did Google’s infrastructure and systems, and the affordances they provided, change
during the period of its network neutrality engagement?
3. In what ways could infrastructure and systems influence Google’s policy approach to
network neutrality?
4. How can Google be characterised as a network and policy actor during this period, in
relation to Wu’s network neutrality models?
I respond to these questions in the following chapters, as outlined in the next section.
1.5 Overview of argument
In the following subsections I detail how this research unfolds through the seven chapters of this
dissertation.
1.5.1 Chapter Two: Approaches to Internet infrastructure, network
neutrality, and Google studies
In Chapter Two I survey the existing scholarly approaches and works that are key to
understanding Google’s history, infrastructure, and approaches to network neutrality, drawing on
past work on business strategy, enterprise architecture, infrastructure studies, and Internet
governance.
I place my research within the emerging field of Internet governance. By its nature
interdisciplinary, Internet governance spans a number of existing fields, including law,
technology studies, information studies, computer science, sociology, and political science.
Network neutrality has been subject to substantial study within this area, and there have been a
number of theoretical perspectives and methods appropriate to this work. Mueller (2002)
argues that control of the Internet may be complex and difficult to trace, but it most certainly exists,
and that it takes the form of institutions: organizations of various sorts that create and enforce
rules. I also briefly examine recent work by DeNardis (2009) on the formation of Internet
protocols and standards. One key to this work is the notion that the creation of technologies and
the standards that shape them is, to quote Abbate, “politics by other means” (1999, p. 180).
I draw on the work of business historian Alfred Chandler, who argued (1962) that the structure
of a corporation reflects and is created in response to business strategy. Extending his thesis is
the work of Bower (1970), Burgelman (1983), and others, in their suggestion that not only does
“structure follow strategy”, but “strategy follows structure”. Applying these notions to
organisational technology, I identify the concept of enterprise architecture, the process of
defining operational structures and activities that are conceptualised as delivering business vision
and strategy (Zachman, 1987).
I review the field of infrastructure studies as defined by Star (1999), Bowker (1994), and
Sandvig (2013), and identify Gibson’s (1977) theory of affordances as central to my analysis.
Finally, I place my work within the context of critical Google studies, including the political
economy of Google (Zuboff, 2015).
With this foundation established, I describe the theoretical contexts in which I situate data
concerning Google’s systems, the company’s history of service development, and its role as a
policy actor. To explore the processes by which Google’s systems and its various actors interact
with network entities and policy stakeholders in greater depth, I draw upon actor-network theory
(ANT) developed by Latour (1996) and Callon (1991).
Regarding the specific relationships among various policy and network actors, technologies and
“strategies”, I call on the work of Ciborra (1996, 1997, 2002). I introduce several useful concepts
to describe the interaction and influence between human and technological actors described by
Ciborra. These processes include xenia (the process by which new technology is “hosted” by
organizations and users), dérive (the drift in function and impact of a technical system over
time), and, most profoundly, bricolage (the “hacking” of technical systems by users and others
that makes systems useful and functional).
I survey the history of the notion of network neutrality, beginning with the end-to-end principle,
a core precept of the design of the Internet. I explain the function of quality of service rules that
prioritise certain types of Internet content over others. I describe fundamental changes to the
operation and ownership of the Internet in the 2000s that led to the emergence of network
neutrality as a concept in the mid-2000s through the work of Tim Wu and others.
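The prioritisation that quality of service rules enable can be illustrated with a toy sketch. The traffic classes and priority weights below are invented for illustration; real QoS mechanisms (such as DiffServ) operate on packet markings at routers rather than on application labels, so this is a conceptual model only.

```python
# Toy illustration of quality-of-service prioritisation versus "neutral"
# first-come-first-served forwarding. Traffic classes and their weights
# are invented for illustration.
import heapq
from itertools import count

PRIORITY = {"voice": 0, "video": 1, "bulk": 2}  # lower value = forwarded first

def forward_with_qos(packets):
    """Drain packets in priority order; ties preserve arrival order."""
    order = count()
    heap = [(PRIORITY[kind], next(order), kind) for kind in packets]
    heapq.heapify(heap)
    return [heapq.heappop(heap)[2] for _ in range(len(heap))]

def forward_neutral(packets):
    """A 'neutral' router: strict arrival order, no inspection of type."""
    return list(packets)

arrivals = ["bulk", "voice", "bulk", "video", "voice"]
print(forward_with_qos(arrivals))  # voice and video jump ahead of bulk
print(forward_neutral(arrivals))   # arrival order preserved
```

Under the prioritised discipline, latency-sensitive classes are served first at the expense of bulk traffic; under the neutral discipline, no class receives preference. The policy debate turns on which discipline network operators should be permitted to apply.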
1.5.2 Chapter Three: Researching the burgeoning network giant
In Chapter Three I present the methods used to generate data concerning Google’s past and
current infrastructure, as well as its history as a service provider and Internet governance actor. A
combination of techniques is described, including documentary evidence, network diagnostic
tools, and large-scale network analysis tools. I describe the difficulties in collecting consistent
and accurate information concerning Google’s systems; like an iceberg, only certain aspects of
Google’s infrastructure are visible. I also describe the methods used to create interactive maps of
Google’s infrastructure.
I present my early, exploratory attempts to use common network diagnostic tools to probe
Google’s network for evidence of specific technical components. I then present sources of
documentary evidence which describe specific components and aspects of Google’s networks,
data centres, peering points, and edge caches. These sources are drawn from academic work as well as from
the popular and technical presses. I describe a specific effort to utilise a large-scale network
analysis tool to identify elements of Google’s systems; I present the work of Calder et al. (2013),
which has mapped Google servers in several thousand discrete locations. I then discuss the
research of Lodhi et al. (2014), which provides a historical analysis of the PeeringDB database of
peering relationships among networking entities (including Google) over time.
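The exploratory probing and large-scale mapping described here can be sketched in miniature. The address blocks below are illustrative placeholders rather than an authoritative list of Google prefixes; studies such as Calder et al. (2013) work from far larger datasets, resolving Google hostnames from many network vantage points.

```python
# Illustrative sketch: classify the IP addresses seen along a path toward
# Google against a set of address blocks announced by Google's network.
# The prefixes below are examples only, not an authoritative list.
import ipaddress

GOOGLE_PREFIXES = [
    ipaddress.ip_network("8.8.8.0/24"),      # example: Google Public DNS
    ipaddress.ip_network("172.217.0.0/16"),  # example: Google services
]

def is_google_address(addr: str) -> bool:
    """Return True if addr falls inside any of the sample Google prefixes."""
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in GOOGLE_PREFIXES)

def classify_hops(hops):
    """Label each hop of a traceroute-style path as Google or non-Google."""
    return [(h, "google" if is_google_address(h) else "other") for h in hops]

if __name__ == "__main__":
    path = ["192.168.1.1", "203.0.113.7", "172.217.14.99", "8.8.8.8"]
    for hop, label in classify_hops(path):
        print(hop, label)
```

The point at which a path first enters Google-announced address space hints at where Google's infrastructure meets the public Internet; repeated from many vantage points, this is the kind of evidence that supports the maps presented later.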
I explain the methods for creating and working with data sets drawn from documentary evidence
and technical analysis, detailing infrastructure elements, and specific steps used to create the
map. I then describe the methods used to create interactive maps as a means of presenting this
data, using the mapping platform Google My Maps.
1.5.3 Chapter Four: Extending search
In Chapter Four I present the first part of a detailed narrative of the interactions and influences
that can be reliably imputed among Google’s business strategies, the development of its services
and infrastructure, and the company’s approach to Internet governance issues.
I describe the development of Google’s first search services, and the single server farm that
hosted them. I explore the early commitment of Google’s leadership to a technical environment
focused on hardware and network scalability, and the importance of speed and low latency from
the beginning of the company’s history. I describe Google’s monetization of its content and
search users with advertising. I discuss Google’s first web-based applications, beginning with
Gmail and Google Maps, and how these applications were created on Google’s existing
infrastructure while at the same time influencing the development of large-scale data centres and
Google’s first purchases of “dark fibre” transit networks. Finally, I discuss Google’s early
participation in the policy process.
1.5.4 Chapter Five: The policy-active hyper giant
In Chapter Five I discuss the maturation of Google’s services and infrastructure in the second
half of the 2000s. I examine the building of new infrastructural elements in response to numerous
new services with ambitious storage, computing, and networking requirements. I examine the
importance of Google’s purchase of YouTube in 2006 and the company’s subsequent creation of
a large-scale content delivery network.
I present a detailed snapshot of Google’s infrastructure in October 2013. I describe aspects of
Google’s systems, describing the company’s transition from what could be considered a content
provider to an entity that mixed computing, content, and network services. Infrastructural
elements that are described include data centres, internal and external-facing networks, peering
connections with other networking entities, undersea cable consortia of which Google was a
member, and edge caching servers.
I also discuss the importance of Google’s commitment to mobile technologies, most prominently
the Android operating system for mobile devices, and the company’s approaches to both
infrastructure and policy. Android, unlike the bulk of Google’s other products, required the
creation of a variety of alliances with other entities in order to be successful.
Similarly, I discuss the relationships between Google and Internet service providers in the
provisioning of Google’s content delivery network, which resulted in Google directly peering
with a large number of retail ISPs while also providing them with edge caching technology that
was installed inside retail ISP physical plants. I enumerate the specific affordances provided by
Google’s infrastructure, including the company’s lack of dependence on large Internet transit
providers (such as Level 3), its symbiotic relationships with retail ISPs worldwide due to
integration of the company’s infrastructure into theirs, and its creation of a “walled garden” of
services through the combination of local edge caching and its separate networks.
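The affordance described above can be illustrated with a deliberately simplified model. The cache behaviour, latency figures, and names below are invented for illustration; the sketch shows why a request served from an edge cache inside a retail ISP never touches a third-party transit provider.

```python
# Conceptual sketch: content requested by an ISP's subscriber is served from
# a Google edge cache hosted inside the ISP when possible, avoiding transit
# networks entirely. Latency figures are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class EdgeCache:
    """A Google cache node hosted inside a retail ISP's physical plant."""
    contents: set = field(default_factory=set)

    def serve(self, item: str):
        if item in self.contents:
            return ("edge_cache", 5)     # ~5 ms: served from inside the ISP
        self.contents.add(item)          # cache fill for subsequent requests
        return ("google_backbone", 40)   # ~40 ms: fetched over Google's own fibre

cache = EdgeCache()
first = cache.serve("popular_video")   # miss: travels Google's private network
second = cache.serve("popular_video")  # hit: the request never leaves the ISP
print(first, second)
```

Note that neither path in this model involves a transit provider such as Level 3: the miss is served over Google's own backbone, and the hit is served inside the ISP itself, which is precisely the gatekeeping-mitigating affordance at issue.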
During this period Google competed with retail Internet service providers and wireless carriers in
content provision and other areas, resulting in a sometimes-schizophrenic relationship between
Google and various ISPs. I discuss Google’s efforts to promote the ideals of network neutrality
in the late-2000s, heralded by the launch of the Google Public Policy Blog in 2007. I explore
Google’s efforts at lobbying, its support for pro-neutrality public interest groups, and early
attempts to create a legislative consensus around network neutrality. I detail the 2009/2010
Google-Verizon joint statements on network neutrality, and discuss their substance and impacts.
1.5.5 Chapter Six: The platform hybrid
In Chapter Six, I argue that it is no longer adequate to describe Google and other large Internet
entities as “content providers” within the context of network neutrality and other issues of
Internet governance. Rather, I characterise Google as a platform hybrid, an entity significantly
less impacted by any possible erosion of network neutrality practice. I suggest three
technological stages of Google’s history, as the company evolved from a content provider, to
hyper giant, and then to platform hybrid.
Using ANT and concepts from Ciborra, I describe processes of infrastructural development that
influenced Google’s approach to network neutrality. I examine Google’s alliances with policy
and network actors, their utility and inherent tensions, in terms of Latour’s concepts of enrolment
and alignment within the formation and maintenance of actor-networks. I posit Google’s
participation in actor-networks formed during this period: network neutrality-focused actor-
networks, comprising network neutrality policy stakeholders, and an actor-network focused on
the affordances of Google’s systems, with retail ISPs and wireless carriers. I draw on the work of
Ciborra to describe Google as a platform organisation, able to maintain processes for both
innovation and product lifecycle management by institutionalising hacking (bricolage), drift
(dérive), and other processes.
In addition to characterising the platform hybrid, I discuss its impact on network neutrality
discourse.
1.5.6 Chapter Seven: Google, beyond good and evil
In the final chapter, I discuss my major finding in light of my principal research goal and
supporting research questions. I conclude that Google has successfully “hacked” Wu’s master
switch, but not by using technical circumvention of ISP last mile connections with consumers.
Instead, I argue that symbiotic relationships among Google, retail ISPs and wireless carriers
fundamentally changed these relationships, making gatekeeping more difficult, though not
impossible. I suggest that this “hack” is a manifestation of Google’s embrace, as a matter of
both strategy and tactics, of what Ciborra described as bricolage, applied to technology and
policy strategy alike. I further explore how Google’s infrastructure has influenced the
transformation of the company from a content provider to a platform hybrid.
I briefly discuss the contributions and limitations of my work, and conclude by suggesting that
network neutrality remains a critically important principle of network design, but one that needs
to be reassessed in light of the platform hybrid.
1.6 Location in contemporary research and policy debates
Internet governance studies have typically focused on the formation of Internet policy and
regulation involving a number of what I would consider to be established entities, including state
actors (governments, regulators, and supra-national entities, such as the European Union),
Internet service providers (retail and transit), content and cloud services providers, public interest
groups, and some others. Many retail ISPs are both network operators and content providers
through traditional broadcasting technologies. This potential conflict in roles has led to some
predictions that ISPs will provide preferential treatment for content they control, a central
concern of network neutrality.
Less attention has been paid to those policy actors who were considered to be primarily content
providers, but have other roles within the traffic management process, creating their own
network infrastructure, committing to technological platforms for content consumption, and
exerting significant influence on the policy process. Craig Labovitz, then of Arbor Networks,
labelled these large content/network entities hyper giants (Silbey, 2012). Such entities—
including Google—appeared to have transcended the traditional “content versus carrier”
dichotomy of the 2000s’ net neutrality debate.
This class of Internet governance actor—what I characterise as the platform hybrid—deserves
significant study. Google’s behaviour during the early-2010s network neutrality debate was
puzzling to other stakeholders and researchers alike. Only by applying a new frame with which
to examine Google and similar entities can we get a clearer picture of how network neutrality
policy and other Internet governance matters will evolve in the future.
This is to say nothing of the need to study Google more generally, and its approach to business
strategy and Internet governance more specifically. Google has a substantial impact on
contemporary life, and its activities would benefit from more serious scholarly attention.
Much of the material concerning Google’s operations, business strategies, and impact is
found in the popular and technological media, with relatively little in the scholarly
literature.
My research, which examines Google’s business strategies, technical operations, and its role
as a policy actor, has the potential to make a significant contribution to the inter-disciplinary
fields of Internet Governance and Science and Technology Studies. Specifically, this research
provides an opportunity to map Google’s unprecedented technological infrastructure, which may
well be the most powerful known grouping of computing technology in human history, setting
aside the even less well understood American military-intelligence infrastructure. No effort has
yet been made to map Google’s technical infrastructure in any detail.
1.7 Chapter summary
In this chapter I have introduced my research, identifying my principal research goal and
supporting questions, as well as the location of my research in contemporary research and policy
debates. In the next chapter, I detail several theoretical and methodological approaches to
studying Google that are central to my research.
2 Approaches to Internet infrastructure, network neutrality,
and Google studies
In this chapter I present an overview of theoretical and methodological approaches that inform
my research on Google, its infrastructure, and their complex relationships with the formation of
network neutrality policy. I also identify and discuss scholarly work on Google itself, which I
draw on later in this dissertation. Finally, I discuss network neutrality and its recent history.
2.1 Relevant theoretical and methodological approaches
I begin by placing this work within the tradition of Internet governance research, an
interdisciplinary approach to the formation, control, and policy of the Internet (Number Resource
Organization, 2015). I then discuss this work in light of infrastructure studies as posited by Star
(1999) and Bowker (1994). In order to describe and analyse the relationships among Google, its
infrastructure and Internet policy formation, I draw upon complementary approaches from
Latour’s (1996) Actor–network theory and Ciborra’s (2002) conceptions of the processes of
organisational technological change, as well as of the platform organisation that can adapt
successfully to them.
2.1.1 Foundational work: Internet governance
I place my research most squarely within the discipline of Internet governance, a field that
Ziewitz & Pentzold (2014) suggest has been “emerging” or “under construction” since the turn
of the century. By its nature interdisciplinary, Internet governance spans a number of existing
fields, including computer science, law, technology studies, information studies, sociology, and
political science. Network neutrality has been subject to substantial study within Internet
Governance, which has developed a variety of theoretical perspectives and methods potentially
appropriate to my research.
As Mueller (2002) points out, popular notions of Internet governance have typically fallen into
one of two extremes. On one hand, one might be told that the Internet is controlled by a small
cabal of some type, characterised as international governments, government bodies, or large
corporations, who might centrally control the functioning of the Internet. The other extreme
position, exemplified by Electronic Frontier Foundation founder John Perry
Barlow’s A Declaration of the Independence of Cyberspace (1996), posits that the Internet itself is
inherently ungovernable due to its decentralised nature. Mueller argues that within Internet
governance discourse, both positions should be considered inaccurate and unrealistic; control of
the Internet most certainly exists, but is manifested in complex ways
that can be difficult to untangle and study. As I discuss in coming chapters, this complexity can
in fact obscure the roles played by Google and other large Internet companies in the management
and control of the Internet.
While Internet governance work on network neutrality in the past 15 years took a broadly
interdisciplinary approach, in the early-2000s traditional economic analysis was often central to
examinations of network neutrality policy and regulation. Marsden (2010) claims this is because
most discussions of the subject of network neutrality arose within the context of traditional
telecommunications policy, and therefore use an analysis that he only half-jokingly describes as
“neo-classical price-oriented competition-based … which has been prevalent in telecoms policy
in the past decade…” (2010, p. 1). Marsden himself relies primarily on extensive legal analysis,
which is quite appropriate to his examination of policy and regulation.
However, neither Marsden’s legal analysis approach nor traditional economic analysis is
suitable for the bulk of my research. Legal analysis is helpful when examining specific aspects of
network neutrality policy making, such as the formation and substance of regulation. Economic
analysis is also useful when discussing the functioning of the market for retail and transit Internet
services, and models of the behaviour of various economic actors. However, both are limited
when discussing the process of organisational policy formation, or the progression of
technological and strategic change within an institution. As I argue below, these are better
understood by looking at the specific interplay among various heterogeneous actors.
Milton Mueller, first notably in 2002’s Ruling the Root: Internet Governance and the Taming of
Cyberspace, positions himself within a different sort of economic analysis: the institutional
economics of Ostrom (2007) and other noted theorists. Mueller applies a version of institutional
economic analysis to examine what he considers to be the commodification of aspects of the
Internet that had not previously been considered commercially valuable.
Mueller argues that the governance of the Internet manifests itself primarily in the form of
institutions: organisations of various sorts that create and enforce rules and standards on how the
Internet operates. He is clear that control in these contexts is by no means absolute or easily
facilitated, but that contending parties have a tendency to work out rules and procedures that
make their interactions less costly and more stable and predictable. Mueller argues most
famously that “the root”—the top of the domain name and Internet address hierarchies—has
enormous social and economic value, but was created in such a way as to place it outside of
normal economic systems, and therefore outside of the purview of then-existing institutions. The
challenge of governing this system could only be solved through the development of new
institutional arrangements. Mueller argues that institutionalisation is what happened to the
Internet between 1996 and his writing in 2002, and predicts that the process will continue. As I
suggest in coming chapters, Google represents a type of institution not originally conceived by
Mueller, but nonetheless very much in his mould as a rule-making entity.
In part building on Mueller’s work, Laura DeNardis’ Protocol Politics: The Globalization of
Internet Governance (2009) tackles a somewhat broader subject—the formation of Internet
standards and protocols—using a wider framework of study. While Mueller’s Internet Protocol
address space can be seen as a common-pool resource, and therefore is well-suited for
institutional economic analysis, DeNardis looks specifically at the creation of technological
platform standards and norms. She implicitly suggests the examination of property rights is not
sufficient to fully account for Internet Protocol development and standardisation.
DeNardis also takes a more self-evidently interdisciplinary approach in her work. In addition to
Mueller’s institutional economics, she also draws on work in Science and Technology Studies,
the research of Janet Abbate (1999) and other technology historians, and legal scholarship from
Lessig (1999), Benkler (2006), and others. DeNardis’ (2009) key theoretical argument, one that
is central to my work, is that politics is not external to technical architectures. Quoting Abbate,
DeNardis describes the technical standards that are the centre of her research as “politics by
other means” (2009, p. 149). She states that the creation of technical standards often embodies
various unspoken conflicts of interest. Just as the creation of protocols is political, protocols and
standards in turn have significant political consequences and, in some cases, create largely
unspoken conflicts. DeNardis extends Lessig’s (1999) notion of “code as law” by stating that
underlying protocols are a form of invisible and embedded “legal architecture able to constrain
behaviour, establish public policy, or restrict or expand online liberty” (2009, p. 11). She states
that her project focuses on “generalizing the discussion into a framework for understanding the
political and economic implications of technical protocols” (2009, p. 190).
While Mueller is primarily interested in non-governmental and state institutions and their role in
Internet governance, in her work DeNardis includes private institutions which also establish
standards. Specifically, DeNardis is concerned with the “institutional characteristics and
principles necessary to maximise the legitimacy of private institutions to establish global
knowledge policy” (2009, p. 13). She must also, by necessity, examine the role of individuals in
protocol formation. She suggests that “[s]tandards can serve as a form of public policy
established primarily by private institutions” (2009, p. 190).
The theoretical frameworks and methods of Mueller and DeNardis, while useful, are not to be
applied wholesale to my research. The notion that Google’s infrastructures embody various
political and other positions is important to my discussion of the processes by which
infrastructure influences policy formation. However, I do not argue that Google’s infrastructure
efforts are contained easily within the institution-building narrative of institutional economics,
concerned as it is with common-pool resources or the emergence of newly-commoditised
technologies.
2.1.2 (Infra)structure follows strategy, and vice versa
Business historian Alfred Chandler (1962) famously argued that the structure of a corporation
reflects and is created in response to business strategy. In studying the giants of American
industry in the mid-20th century—Du Pont, General Motors, Standard Oil of New Jersey and
Sears Roebuck—Chandler concluded that changes to transportation and communication
technologies had both forced and enabled large firms to establish multi-division operations
organised around geography, product line, or both. These divisions as described by Chandler
were overseen by some sort of central office or authority, one that managed business strategy for
the corporation as a whole.
While Chandler did acknowledge that structure influences an organisation’s business strategy
and growth, Bower (1970) and Burgelman (1983) have extended Chandler’s thesis, suggesting
that not only does “structure follow strategy”, but also that “strategy follows structure”.
Burgelman argues that the relationship between structure and strategy is, in fact, “interactive”.
He further suggests that the notion of “structure” must encompass both formal and informal
elements that make up an organisation, rather than simply formal, “official” elements that might
appear in a corporate brochure or organisational chart.
In their multi-year study of 262 major firms, Amburgey & Dacin (1994) concluded that
Chandler’s original thesis—that structure follows strategy—was essentially correct. They also
detailed the influences of organisational structure on strategy.
For the purposes of my research, I apply the approaches of Chandler, Bower, Burgelman and
others to high-level organisational technological infrastructures which can be identified with the
term enterprise architecture. Arising from large-scale information technology practice in the
1980s, enterprise architecture is the process and practice of defining operational structures and
activities that will help to realise business vision and strategy (Federation of Enterprise
Architecture Professional Organizations, 2014). While concerned with business processes,
enterprise architecture is primarily associated with the creation of technical infrastructures that
support all aspects of an organisation’s operations.
What is generally considered to be the first enterprise architecture framework, developed by John
Zachman in 1987, was modelled in part on classical notions of architecture. Zachman’s
framework conceptualised architectures existing on multiple levels and reflecting a variety of
perspectives. As laid out in his original framework, Zachman (1987) included a variety of
models that reflect differing perspectives on the enterprise, including technology and business
models. Enterprise architecture is therefore seen as having a number of requirements, meanings,
structures, and functions, depending on the perspective of the actor within the organisation.
It seems useful to consider enterprise architecture as at least a component of business structure,
and possibly as a way to re-conceive business structure in the context of the organisation in a
highly technical milieu. We can at least consider extending Chandler’s notion of structure in a
manner similar to Burgelman and explore the suggestion that, to adapt Chandler, enterprise
architecture follows business strategy, and conversely, business strategy is influenced by and
follows enterprise architecture. In coming chapters, I describe several examples of new Google’s
services both utilising the company’s existing infrastructure, and driving changes to the nature,
reach, and scope of Google’s systems.
2.1.3 Star to Sandvig: Infrastructure studies
I also place this research within the tradition of infrastructure studies as suggested by scholars
such as Star, Bowker and Sandvig.
Bowker et al. (2009) define infrastructure studies as an extension of understanding
infrastructures from a mechanistic or engineering perspective, to study them within relationships
with human actors and other technological entities. Star defines infrastructure as “the set of
organisational practices, technical infrastructure, and social norms that collectively provide for
the smooth operation of scientific work at a distance” (1999, p. 102).
Edwards et al. (2009) argue that although the notion of infrastructure has been a constant focus
of scholars of information systems going back to the 1960s, what constitutes the subject of study
has changed to reflect a networked, multimedia environment. They suggest that various “e-
infrastructures” are emerging on top of and around the established structure of the public
Internet. As I discuss in coming chapters, this is exactly the process by which Google has
established its own infrastructures, surrounding some networks and replacing others,
constructing a place both within and without the “public” Internet.
Sandvig (2013) suggests that within the context of Internet studies,
Infrastructure studies… refers to the multidisciplinary body of scholarship that is
increasingly directed toward understanding the co-evolution of the Internet and society,
and it does so by considering the Internet as infrastructure. (p. 90)
Sandvig (2013) argues that contemporary infrastructure studies have two strains, both
multidisciplinary and both engaged in a project to break down the dichotomy that suggests that
content and infrastructure are separate. The starting point for such work may be the materialist or
social, but the core project objectives—“unpacking the Internet’s complexity” (2013, p. 102) and
“the co-evolution of the Internet and society” (2013, p. 90)—remain the same.
As I will demonstrate in coming chapters, I consider my research to arise from an overlap of
these materialist and social approaches to technology. I cast Google’s systems in a new light,
rejecting the analogy of the cloud often used to represent complex technical systems, and expose
the components (technological, human and otherwise) that make up the company’s
infrastructure. This is very much the “unpacking” exercise that Sandvig suggests. A key concern
in that unpacking is the identification of relationships that exist within infrastructure, and among
infrastructure and other actors, an approach that Sandvig describes as social and “relationalist”.
2.1.4 Theory of affordances
Central to my research is the notion of technological affordance as presented by Gibson (1977)
and Norman (1988). Affordance is a quality of a technology or artefact that allows an individual
or entity to take a certain action. Writes Norman:
[T]he term affordance refers to the perceived and actual properties of the thing, primarily
those fundamental properties that determine just how the thing could possibly be used.
[...] Affordances provide strong clues to the operations of things. Plates are for pushing.
Knobs are for turning. Slots are for inserting things into. Balls are for throwing or
bouncing. When affordances are taken advantage of, the user knows what to do just by
looking: no picture, label, or instruction needed. (1988, p. 9)
In coming chapters, I explore the idea that Google’s infrastructure has technical characteristics
that may have the effect of mitigating the risk of transit and retail ISP gatekeeping. In other
words, I argue that Google's infrastructure to some extent affords circumvention of possible
retail ISP gatekeeping. Google’s intention in the design of its infrastructure is certainly of
interest, but we must also account for affordances that were unintended. Gaver (1991)
conceptualises affordances as both perceptible and hidden; that is, some affordances might not at
first be seen or understood as affordances. In the case of Google, I suggest that the company may
not have understood all of the affordances inherent in their infrastructure until well after it was
designed.
2.1.5 Actor–network theory
To explore in greater depth the processes by which technologies and other actors interact, I also
draw upon actor-network theory (ANT), proposed by Bruno Latour (1996), Michel Callon
(1986b, 1991), and John Law (Law & Lodge, 1984). ANT helps us to understand the interplay
between technical and non-technical actors in designing and utilising information
technology (Hanseth, 1996), making few distinctions between the social and the technological,
or between human and non-human elements. All are theorised to participate in the formation of a
network that can be understood as an integrated whole (Walsham, 1997). Latour (1991) argues
forcefully that the technological and social are not separate; he dismisses the act of separating
these elements as “purification”, instead describing indivisible interrelations among non-human
and human actors, which he calls “hybridization”.
Latour posits that the actor-network formed by various heterogeneous actors is a web of
connection that links them together, each dependent on the network as a whole. Latour writes of
varying requirements and interests that are “translated” into constructed common meanings, co-
opting one another in the creation of new states of interaction. For example, we might see
Google’s requirement for decreased latency and faster serving of content to its consumers, a
commercial advantage that allows greater user exposure to advertising, translated into various
common meanings which result in new hardware and software (the establishing of data centres
and edge caching servers), as well as changes to business practices (negotiating with ISPs to host
Google’s content). I explore these translations in greater detail in Chapter 6.
Hughes (1994) suggests that technology can and should be seen as having forms of agency
autonomous from humans. Within ANT, both humans and non-humans are seen as having
agency, one of ANT’s more controversial positions. In his well-known 1986 paper “Some
elements of a sociology of translation: domestication of the scallops and the fishermen of St
Brieuc Bay”, Callon describes this notion as generalized symmetry. Describing the elements of
his study of a fishery, he writes:
The second principle (generalized symmetry) compelled us not to change the grid of
analysis in order to study controversies in connection with Nature and those in
connection with Society. We have carefully followed this requirement by using the same
vocabulary throughout. Problematization, interessement, enrolment, mobilization and
dissidence (controversy-betrayal) are used for fishermen, for the scallops and for
scientific colleagues. These terms are applied to all the actors without discrimination.
(1986a, p. 213)
Callon (1991) argues that the stability of an actor-network flows from the alignment of the
discrete interests of various actors. Further, the stability of an actor-network is by no means
guaranteed. At the beginning of network formation translations may be difficult, and actors may
resist engagement with other actors as part of the emerging network. Transaction and interplay
among actors are required to build an actor-network. Conflicts, latent or open, may arise and are
in fact quite common. ANT accepts that actors act in self-interest, or are at least self-motivated,
drawing upon varying intentions.
Law (1992) describes the process of constructing the network from various discrete elements as
heterogeneous engineering. Once translation has taken place, it is impossible to undo the
process, even if networks themselves are unstable. As we will see later with Google’s
participation in some actor-networks, established networks can fail, or transform into new actor-
networks.
A successful translation process will result in certain patterns of behaviour and characteristics
being inscribed in technological artefacts; these may be evident when an artefact is
analysed separately from its actor-network (Cordella & Shaikh, 2006). Technological artefacts
can therefore be seen as influencing human (and other) actors, empowered by their position in
the network to “act”. The technology can thus be understood as an actor that influences human
actors.
Networks surround themselves with what Latour (1996) calls their own “frame of reference”,
which defines the network and its characteristics. Writes Latour:
One does not jump outside a network to add an explanation—a cause, a factor, a set of
factors, a series of co-occurrences; one simply extends the network further. Every
network surrounds itself with its own frame of reference, its own definition of growth, of
referring, of framing, of explaining overflowing the frames constructed to contain them…
there is no way to provide an explanation if the network does not extend itself. (1996, p.
376)
I further discuss the application of ANT to this research in Chapter 6.
2.1.6 Ciborra: Hosting, drift, hacking, and the platform organisation
The work of Claudio Ciborra is of particular importance to my research. Ciborra’s work not only
provides several useful concepts for understanding organisational and technological change; it is
also inspirational in its insistence on digging beyond surface technological appearances and
institutional self-claims. In this section I describe the concepts of technological change and
organisational operation that I use to explore Google’s development as a new sort of Internet
policy actor in later chapters. I start here by describing Ciborra’s key concepts: xenia, dérive, and
bricolage.
In his 2002 book The labyrinths of information: Challenging the wisdom of systems, Ciborra
presents a number of concepts that he uses to explore the relationship between technologies and
users. Ciborra engaged in neolexia to associate certain specific terms from classical Greek, Latin
and Chinese with the characteristics and processes he wished to illuminate. In so doing, Ciborra
begins with the meanings identified with the original term and, as appropriate, extends that
meaning by placing it in a new context. In Labyrinths, Ciborra (2002) suggests
[O]ne way to get closer to the obvious which permeates the everyday chores is first to put
aside all our concerns for methods and scientific modelling and encounter the multiple
apparitions through which strategizing, knowing, organizing, and implementing offer
themselves to our relentless, mood-affected caring for, and dealing with, the world. As a
help in this direction the reader will find each of the chapters introduced by a title, a non-
English word, aimed at creating an uncanny dislocation of perspective, suspending, if
only for a brief instant, his or her usual attitude and expectations. (2002, p. 6)
2.1.6.1 Xenia
The first of Ciborra’s key concepts is xenia, the ancient Greek word for the notion of hospitality,
which plays a significant role in various ancient Greek myths (Louden, 2011). Ciborra writes
(2002):
Since ancient times, hospitality has been an important (even sacred) institution able to
establish a much needed bridge between the nomads, the pilgrims, the strangers, and the
settlers of the cities; more generally, between the inside and the outside of a settlement, a
house, or a persona. Hospitality has worked over the centuries as a time-economizing
institution: it is an institutional device to cut down the time needed to merge cultures, and
to integrate alien mindsets and costumes. Hospitality can precipitate the turning of an
ephemeral contact into a relationship that has the look (and the feel) of long
acquaintance. (2002, p. 103)
Ciborra argues that a similar relationship can be seen between new technologies (the guests) and
organisations and individual users (the hosts), suggesting that his approach reflects then-contemporary
research in the social studies of technology (2002, p. 104). Drawing on the work of
Latour, he identifies technological systems as non-human actors. Ciborra also
contrasts examining the process of technological adaptation with a paradigm that he believed
was prevalent in organisation theory: economic exchanges taking place through markets.
In The labyrinths of information, Ciborra writes that “[h]ospitality describes the phenomenon of
dealing with new technology as an ambiguous stranger” (2002, p. 110). By imagining technology
as a non-human “other”, Ciborra suggests that many aspects of the host-guest relation apply.
Tellingly, writes Ciborra:
Effective hospitality creates a (partial and temporary) symmetry between the
host/subject/lord/owner and the (weaker) guest. This is achieved by introducing a new
asymmetry and adopting culturally dependent rituals by which the host becomes the
server of the guest. The latter can behave as if she were in her own home. (2002, p. 112)
Ciborra challenges us to imagine the encounter with a new technology as a process in which we
host that technology, just as we might host a little-known human actor in our home or place of
work. Ciborra argues that conceptualising the encounter with technology this way “introduces a
universe of discourse closer to human existence and its basic institutions” (2002, p. 116).
Ciborra further argues that formal system development and deployment methods popular within
information technology practice are simply manifestations of more complex rituals imposed on
the deployment process by human hosts. No amount of planning will dispel the inherent mystery
and unpredictability of the encounter with the guest technology. It is inherent in such a
relationship that identities must be redefined to accommodate the alien technology. Attempting
to fully control the technology will not be successful, just as fully controlling the behaviour of a
guest is unlikely. The technology is not simply unproblematically accepted. Hosting
organisations may be obliged to have a new technology visit, but not to stay permanently. Each
organisational culture is unique, and the guest technology must also adapt.
As we will explore in coming chapters, xenia is key to understanding the relationships between
Google and retail ISPs. ISPs might be willing to extend a kind of courtesy to Google to allow
them access to their connections, their internal networks, and their customers. In doing so, ISPs
might have power over Google and its success in serving consumers within the ISP’s domain,
but to engender trust, must take on the role of servant (at least temporarily) to Google. This
process might ultimately result in a kind of symmetry between hosted and host; both are changed
(or translated in ANT terminology), with each assimilating and absorbing technologies and new
ways of working that are mutually advantageous.
2.1.6.2 Dérive and bricolage
I now turn to two of Ciborra’s other key and related concepts: dérive and bricolage, both terms
taken from French. For Ciborra, dérive (meaning drift) and bricolage (meaning craft) are
“ubiquitous, puzzling processes of tinkering, hacking, and improvisation around the
implementation and use of new technology” (2002, p. 84). While dérive describes a
characteristically passive process of change in how a technology is used, bricolage denotes the
active modification of the technology. These concepts are predicated on the idea that, invariably, “the
technology does not seem to work completely according to plan” (2002, p. 84). Ciborra argues
that technologies are at once à la dérive (adrift) and the subject of constant attempts to modify
them to meet various and sometimes conflicting needs: patched-up, in flux, and hacked
(bricolage).
Dérive is a central concept for Ciborra: the notion that technologies will experience a “shift of
the role and function in concrete situations of usage, compared to the planned, pre-defined, and
assigned objectives and requirements” of the technology (2002, p. 85). In Labyrinths Ciborra
cites a number of specific examples from his past research on the adoption of groupware
platforms, suggesting such examples of drift as “use as a group focusing support” of Group
Decision Support System by the World Bank, and “bypassing existing applications routines” for
Lotus Notes at Unilever. In a number of the cases described by Ciborra, technological platforms
were heavily used, but not as they had been originally intended.
In some of the cases Ciborra describes, he attributes drift to user bricolage. While dérive is less
problematically translated as “drift”, the meaning of bricolage is more complex. In French,
bricolage has several connotations that Ciborra wishes to invoke: do-it-yourself repair, home
improvement, crafts, and even the English term hacking. As I show in later chapters, all of these
meanings are relevant to Ciborra’s understanding of the relationship between technologies and
their users.
Bricolage is an essential concept for Ciborra, one that represents part of his critique of the ideal
of a perfectly rational process of strategic technology planning. The relationship between
strategy and structure is not one-way (Ciborra, 2002, p. 37), and Ciborra is generally highly
critical of simplistic notions of the strategic process within technology firms, and by
extrapolation within the enterprise generally. In fact, he argues that the greatest competitive
advantage results from “the exploitation of unique, intangible characteristics of the firm
(including its networks of relations) and the unleashing of its innovative capabilities” (Ciborra,
2002, p. 39). Ciborra draws on a number of organizational examples of strategic technology
development—including the development of American Airlines’ SABRE, and the Internet from
its origins in 1968 through its rapid development in the 1990s—to argue that “chance,
serendipity, trial and error, or even gross negligence seem to play a major role in shaping
systems that will become of strategic importance” (2002, p. 39).
Ciborra’s best example of bricolage and dérive is Minitel, the French videotex system that was
perhaps the most successful of its kind during the 1980s and 1990s. While it has often been
argued that the decision of the Direction Générale des Télécommunications to provide free
Minitel terminals was a key factor in the platform’s success, Ciborra suggests that it was an act
of hacking that drove public interest in the platform. The incident, in which an unknown hacker
responded to classified ads from a Strasbourg newspaper, changed the perception of Minitel from
a “dumb terminal” to a vital and interesting communications and messaging platform.
For Ciborra, the history of the development of the Internet is also filled with hacks, too numerous
to fully enumerate here. As an example, the network, first created to facilitate sharing of
mainframe computing resources among universities, was soon used as a platform for many other
applications such as electronic mail, which by 1973 represented three-quarters of the traffic on
the then-ARPANET (2002, p. 43).
Ciborra further posits that an “innovation process” is central to the creation of successful
strategic information technologies. For Ciborra, a key part of this process is “allowing and even
encouraging tinkering by people close to the operational level, combining and applying known
tools and routines to solve new problems” (2002, p. 45).
As I explore in coming chapters, Google has attempted in various ways to institutionalise
bricolage while understanding (perhaps less successfully) that drift is inevitable. In fact, I
suggest that Google successfully mitigated the danger of retail ISP gatekeeping by hacking its
technological platform in new and surprising ways.
2.1.6.3 The platform organisation
Ciborra (1996) further explores the notions of xenia, dérive, and bricolage in his conception of
the successful technology institution that can be advantaged by these processes: the platform
organisation. Ciborra links his notion of the platform organisation with the Chinese word shih,
which he identifies with Sun-Tzu’s The Art of War. Writes Ciborra:
Waging a war effectively relies, according to the Chinese strategist, on the exploitation of
the contours (configuration) of the resources at hand. Shih, then, captures the strategic
disposition for action of things organizational. (2002, p. 122)
Ciborra argues that a perception of stability in the business landscape is an illusion. “[A]t best”
he writes, “one can only settle for a shapeless organisation that keeps generating new forms
through frequent recombination” (1996, p. 104). This is the platform organisation or, more
accurately, the conception of the organisation as a platform, or “metaorganization”, one that
provides affordances for novel functionings while being volatile and changing radically in
response to its environment and its emerging capacities. The value of the platform organisation
lies in its ability to take on whatever organisational structure and identity is required by
circumstances, even if this makes the organisation seem chaotic and unfocused at times.
Ciborra describes the European computer manufacturer Olivetti from 1977 to 1990 as an
example of a platform organisation. As Ciborra points out, in the early 1970s Olivetti was not a
computer manufacturer, but focused on typewriters and calculators. The company “crossed at
least two technological discontinuities” (2002, p. 105) in its history, first from mechanical to
electronic calculators and typewriters, and then from electronic products to products utilising
microprocessors, such as PCs and workstations. As product lines changed, work practises and
management approaches changed radically, with an increasing emphasis on research and
development.
Ciborra identified product life cycle as a particularly important part of Olivetti’s operations. On
one hand, company leadership argued that the creation of new products and product lines was the
result of top-down strategic processes, while at the same time admitting that such processes were
limited and could not account for much of the real-world product development within the
company.
Ciborra draws several conclusions about Olivetti’s success. First, Olivetti evolved
through a series of technological stages paralleled by a construction of a company identity, such
as “office equipment manufacturer” or “computer manufacturer”, as appropriate to its activities.
These identities set the stage for product life cycle development, which aligned with that identity.
However, as a technology company, each identity, and hence each set of product lines, was
quite short-lived. Maturity in a declining product segment—for Olivetti, mechanical calculators
in the 1970s—meant little in the face of rapid technological change. Tensions necessarily arise
within such an organisation as product development, production and marketing cycles mature,
only to be abandoned or transformed by parallel product development processes that
fundamentally change the company.
Drawing on the work of Burgelman (1983), Ciborra argues that
The organizational structures which support the technology strategy must be able to cope
simultaneously with the management of discontinuities and incremental innovation. This
has put, over time, a premium on the firm’s ability to develop multiple, often inconsistent
competencies, to deal with the emerging, divergent technological and organizational
requirements. (1996, p. 108)
Just as the platform organisation forms and reforms internally, shedding old identities for new
and more useful ones, Ciborra (1997) argues that it also engages in a series of acquisitions and
alliances with other organisations. He writes that partnerships might be pursued for strategic
reasons—in the case of Olivetti, principally access to capital and to technologies, both with the
objective of accelerating growth. However, other dynamics arise, leading to unexpected benefits
and challenges. In fact, the original objectives of the partnership might soon be forgotten as new
benefits are exploited, such as a newfound ability to actively influence standards-making, or to
accelerate internal learning. Ciborra argues that acquisitions and alliances succeed or fail more
because of bricolage and dérive than because of top-down business strategy. Platforms are characterised by
surprises.
Ciborra adopts the term “platform” in his description of the functioning technological firm
because of the similarities he saw between platform organisation functioning and the design and
functioning of what he called in 1996 the “computer platform,” specific to the creation in that era
of desktop personal computers. He argues that the platform organisation utilises concepts for
both management practise and technological design from outside the company, including other
firms, and that management practises are always mediated by the characteristics of technologies
they intend to coordinate. Writes Ciborra:
The platform, being easily reconfigurable, is particularly suited to supporting the practice
of betting and what it entails, i.e., high flexibility in exiting when one is losing or moving
in rapidly to reap the ephemeral benefits, or adapting to the new circumstances that
require a commitment to a new risky move. (1996, p. 114)
As I discuss in coming chapters, Google’s history in the 2000s can be seen as an increasing
embrace of characteristics of Ciborra’s platform organisation. I examine the strategic approach
of Google in more detail, analysing the company’s founding through the early-2010s to describe
the transformation of the company as a network neutrality policy actor.
2.1.7 The political economy of Google and critical Google studies
Both popular and academic writings specifically about Google are relevant to my research. This
work tends to fall into two reasonably discrete categories: coverage in the popular technology
press of Google’s products and activities, and critical academic examinations of Google’s
influence on society and culture. The former I describe in Chapter 3; the latter, below.
Much scholarly research critical of Google’s operations and impacts has been conducted since
the company came to prominence in the 2000s. One benefit of examining Google through the
lens of political economy is that it tends to avoid the sometimes-celebratory tone of the popular
and business press.
Fuchs (2011), Vaidhyanathan (2011), and others criticise what they describe as the affirmative
and uncritical coverage of Google in the business and technology press, coverage which ignores
the role played by the company’s staff, users, and the venture capitalists who initially funded the
company. Van Couvering (2008), Van Hoboken (2009), and Maurer et al. (2007) all examine
Google’s market position and suggest the company has monopoly power in the search
marketplace. This dominant market position raises numerous concerns.
Zook and Graham (2007), Halavais (2011), Vaidhyanathan (2011), and Hinman (2005) raise
concerns around Google’s role in state-mandated censorship and surveillance. Stalder and Meyer
(2009), Becker (2009), Darnton (2009), and others criticise Google’s role as an arbiter of what
constitutes valid knowledge. Lobet-Maris (2009) has identified that Google’s PageRank
algorithm, which determines which sources of information are most prominent, is a trade secret
and therefore difficult to fully understand.
Weber (2007) and Carr (2008) suggest that Google may be reducing its consumers’ cognitive
capacities. Pariser (2011) argues that Google and similar personalised web platforms create a
“filter bubble” that limits the range of opinion and information available to searchers.
Various analyses of the political economy of Google have attempted to identify the specific
process of commodification central to Google’s accumulation of wealth. As argued by Wasko
and Erickson (2009), Kang (2009), and Vaidhyanathan (2011), it would seem unproblematic to
suggest that Google’s business model is centred on delivering audiences to advertisers through a
process of commodification very similar to Smythe’s (1981) account of traditional mass
media advertising practice, where the audience is sold as a commodity to businesses.
In his critique of commercial media forms of the late-20th century, Smythe (1977) suggested that
media studies was too focused on cultural aspects of media, rather than core business models.
Bermejo argues that “the consideration of audiences as the main commodity produced by
advertiser-supported communication media” (2009, p. 136) transformed the media studies debate. Bermejo
further suggests that while there is some disagreement within media studies as to what, exactly,
is being commodified in commercial media, it is some combination of audiences’ attention and
time. Smythe further argues that audiences themselves provide labour as part of the media
system:
[T]he work which audience members perform for the advertiser to whom they have been
sold is to learn to buy particular ‘brands’ of consumer goods, and to spend their income
accordingly. In short, they work to create the demand for advertised goods. (1977, p. 6)
Bermejo argues that the audience becomes a commodity for the media firm through the process
of systematised audience behaviour measurement. In 20th century broadcasting, these
measurements took the form of “ratings”. Bermejo suggests that “all participants in the trading of
audiences are interested in the measurement taking place, but they have conflicting interests over
the results of the measurement” (2009, p. 137).
Bermejo further argues that the online advertising industry had conformed to a structure similar
to that of 20th century broadcasting, reliant on delivering an audience to advertisers and
measuring this delivery accurately enough to set prices for audiences.
Google and other search engines had to define a somewhat different business model. As I detail
in Chapter 4, early Internet companies attempted to keep visitors on their websites in order to
expose them to more ads. Google was rejected by some potential suitors during this period
because Google was “too good” at returning relevant results and sending visitors to other
websites. Other business models soon emerged, including pay-for-placement. Google rejected
these approaches, clearly distinguishing between search results and ads, though they appeared
side-by-side. Google’s initial model was exposure-based; it switched to a performance-based
model in 2002. Online advertising spending in the United States in 2016 was estimated to be $62 billion
(Interactive Advertising Bureau, 2016).
Zuboff (2015) analyzes Google’s operations in the context of what she calls surveillance
capitalism, wherein Google and Facebook “exploited a lag in social evolution as the rapid
development of their abilities to surveil for profit outrun public understanding and the eventual
development of law and regulation that it produces” (2015, p. 83).
There are other useful perspectives on the commodification process. Pasquinelli (2009) focuses
on the Marxian notion of rent, and specifically a form of “cognitive rent” with value created by
Google’s PageRank algorithm. That rent is realised by what Lee (2011) and Bermejo (2009)
describe as Google’s commodification of search keywords that are sold through automated
auctions to advertisers, in addition to creating a commodity of users’ attention. Halavais (2008)
and Petersen (2008) argue that Google and other web 2.0 platforms are based on the exploitation
of free user labour. Fuchs (2011) agrees, suggesting that a “prosumer commodity” (drawing on
the work of Toffler (1980)) is created by Google through exploiting knowledge labour.
Fuchs (2011) extends his analysis to many “web 2.0” platforms, describing a process of
searchers and content consumers contributing various forms of unpaid work in exchange for
“free” services and content. He argues that this relationship differs from traditional mass media
in that the audience is also producing content (as “prosumers”), and that this content, along with
user data which captures patterns of content and service consumption, constitutes an “audience
commodity” that is sold to advertisers. Google indexes user-generated content, and consumers
utilise Google services; both acts contribute unpaid labour toward the accumulation of surplus
value. Fuchs draws on Marx’s notion that the analysis of the political economy of capitalism
should begin with “the analysis of the commodity”; political economic discourse of Google
therefore focuses on Google’s commodity production, distribution and consumption.
Siva Vaidhyanathan’s book The Googlization of Everything — and Why We Should Worry
(2011) provides a particularly useful cultural critique of Google’s operations and impact. The
author argues that the popular discourse surrounding the rise of social media focuses on
liberating its users from traditional global mass media. Vaidhyanathan suggests instead that this
is an illusion, and that what is taking place is something he calls Googlization, “the process of
harvesting and analysing information about all of us” (2011, p. 83). Google exploits society’s
desire to connect: we trade small amounts of our personal information for this connection, at a
heightened level.
Vaidhyanathan further likens Google’s activities to a form of “universal surveillance” over
which we have inadequate control.
Vaidhyanathan further suggests that Google is instrumental in a process of what he calls
infrastructural imperialism. Because its content and services are so ubiquitous on a global scale,
it is Google’s methods and means of organising information that are prevalent. Writes
Vaidhyanathan:
If there is a dominant form of cultural imperialism, it concerns the pipelines and
protocols of culture, not its products—the formats of distribution of information and the
terms of access and use… What flows from North to South does not matter as much as
how it flows, how much revenue the flows generate, and who uses and reuses them. In
this way, the Googlization of us has profound consequences. It’s not so much the
ubiquity of Google’s brand that is troubling, dangerous, or even interesting: it’s that
Google’s defaults and ways of doing spread and structure ways of seeking, finding,
exploring, buying, and presenting that influence (though they do not control) habits of
thought and action. These default settings, these nudges, are expressions of an ideology.
(2011, pp. 109–110)
Vaidhyanathan argues forcefully for a recognition that the company’s users are not its customers,
but its product, in a way not dissimilar to advertiser-based commercial media. He argues for a
sort of nationalisation of Google’s search and knowledge management functions, suggesting that
society “should influence—even regulate” search systems (2011, p. xii).
As I describe in coming chapters, Google's technical imperatives—particularly low latency, high
bandwidth, and system reliability—were direct results of a business model that commodified
users and their labour.
Winseck (2012) argues for a balanced approach to Google’s impacts. Many of Google’s interests
might align with those of its various users, including the notion of an “open”, index-able and
searchable Internet. However, what some might consider positive such as mitigating the “babble
effect”—the inability to process or categorise millions of chunks of content—(Benkler, 2006),
may also reinforce hierarchies of power (Shirky, 2003). Winseck suggests that a dogmatic
commitment to “open” data or an “open” Internet creates numerous challenges.
Having described a number of useful theoretical and methodological approaches that I utilise in
my research on Google, I now discuss network neutrality and its history in North America in the
2000s.
2.2 Understanding network neutrality
During the 2000s, network neutrality emerged as perhaps the most prominent technology
regulatory issue in telecommunications. Legal scholar Tim Wu, who originated the term
“network neutrality” in 2003, described it as follows:
Network neutrality is best defined as a network design principle. The idea is that a
maximally useful public information network aspires to treat all content, sites, and
platforms equally. This allows the network to carry every form of information and
support every kind of application. The principle suggests that information networks are
often more valuable when they are less specialized – when they are a platform for
multiple uses, present and future. (For people who know more about network design,
what is just described is similar to the “end-to-end” design principle). (Wu, 2006)
In 2007, network neutrality became an issue of broad public interest. Comcast, America’s largest
cable company, became the target of complaints when it was discovered to be restricting its
broadband Internet customers’ use of BitTorrent, a popular file-sharing protocol
(Ernesto, 2007). These complaints began a long series of regulatory and judicial processes during
which the American telecommunication regulator, the Federal Communications Commission,
played a central role.
Concerns about network neutrality and Internet traffic management practices (ITMPs) have
mirrored past discussions about the management, ownership, and use of telephone networks that
have existed since the technology was created. More ancient still are common law principles of
common carriage that require owners of transportation infrastructures to provide services to the
general public without unreasonable discrimination.
2.2.1 Common carriage
In his writing on network neutrality, Wu tends to under-emphasize the legal principle of common
carriage. The term is not used in “Network Neutrality, Broadband Discrimination” (2003), and
while common carrier is discussed in Wu’s The Master Switch (2010) in the context of the
evolution of the telephone and radio industries, he does not make the connection between the
legal principle applied to these industries and his own concept of network neutrality.
Yet by situating his notion of network design in an historical context, Wu does place network
neutrality in the context of various open access principles beyond network design approaches,
principally the end-to-end principle. I further argue that common carriage principles provide an
important context for the understanding and acceptance of network neutrality principles and rules
in the 2000s.
As Wu (2003) states, telephony in North America operated under legislation and regulation that
treated the monopoly networks as common carriers. The Federal Communications Commission
examined computer communications in the 1960s in a series of three interrelated processes that
became known as the FCC Computer Inquiries. It was during these processes that the FCC made
its first distinctions between data processing and communications uses of networks. In 1966,
in a process that came to be called Computer I, the FCC approached regulation of these uses
based not on the technologies involved, but on their markets; “pure data processing” was viewed
as an open, competitive market, while “communications” was seen as closed and monopolistic,
dominated by legacy telephony (Cannon, 2003). As such, “data” was left unregulated, while
several rules were applied to “communications” uses, including regulations against cross-
subsidization of services by the monopoly and structural separation of data processing from
communications.
In a second process (Computer II) in the 1970s, the FCC further refined its categorization of
computer-based communications services, defining basic services as those offered for “pure
transmission” only, and enhanced services as “everything else” (Cannon, 2001, p. 54). The bright
line test developed by the FCC categorized any user interaction with stored information as
making the service ‘enhanced’ rather than ‘basic’. Again, basic services were subject to many
regulations, including an expectation of common carriage, while enhanced services were not.
Computer III, which followed in the 1980s, retained these distinctions and addressed regulations
to ensure open network architectures by telephone carriers (Cannon, 2001).
During the late-1990s, Internet service provider America Online, together with public interest
groups, lobbied the FCC and state legislatures in an effort to force incumbent cable television
companies to share local infrastructure. When AOL purchased Time Warner Inc. and its cable
television operations, this lobbying ceased nearly immediately (Goodman & Timberg, 2000).
The FCC imposed several conditions on the merger, including making the AOL Instant
Messenger app, at that time a very popular communications platform, interoperable with other
messaging services (Labaton, 2001). The Federal Trade Commission also required the new entity
to open access to its high-speed Internet cable infrastructure to resellers (Carroll, 2001).
During this period, the FCC continued to classify Internet service provision differently from
basic telephony services, and not as common carriage. A 2002 decision classified cable Internet
provision differently from Internet provision over traditional telephone lines (twisted-pair
copper), and in 2005 also classified wireline broadband over telephone lines as information
services subject to Title I regulation. Again, common carriage did not apply.
Wu did not draw on common carriage in his notion of network neutrality in the early-2000s, as it
had been largely absent from regulation of American online services. However, notions of
common carriage became increasingly important to the discourse of Internet regulation as
Internet use became more prevalent and the market for Internet access more concentrated.
2.2.2 The end-to-end principle and Internet traffic management
The notion of network neutrality is tied closely to the end-to-end principle, a core precept of the
design of the Internet. The end-to-end principle was first articulated in the 1960s by Paul Baran
and Donald Davies, the inventors of packet switching (Baran, 1964; Davies, Bartlett,
Scantlebury, & Wilkinson, 1967), and further detailed in a 1981 conference paper by Saltzer,
Reed, and Clark (1984). The end-to-end principle suggests that application-specific functions
should not reside in the network itself (what Saltzer et al. call intermediary nodes), but in the
hosts at the edges of the network. For example, the only means by which two hosts exchanging
data over a large network might achieve “perfect reliability” is by the hosts communicating and
checking the data themselves, rather than relying on other network elements, outside their
control, to do so (Saltzer et al., 1984). It is from the end-to-end principle that the notion of a
“dumb” or neutral Internet arose, a network that only needs to pass along data without altering it
or managing it.
The end-to-end principle aligned well with the operation of the early Internet. The Internet’s
design was initially fairly straightforward, with often similar hardware deployed across networks
and excess bandwidth available (Nagle, 1984). Rapid usage growth and interconnection in the
mid-1980s, however, revealed limitations to the Internet’s foundational protocols, resulting in
concerns that the Internet would face “congestion collapse” and cease to function (Jacobson,
1988). Network protocols were therefore modified to enhance the abilities of Internet nodes and
links to control traffic flows. During periods of congestion, all traffic was “backed off”
regardless of its source; this general principle of fairness created an environment of
“equitable sharing of bandwidth” (Floyd, 2000).
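The “backing off” behaviour described above can be illustrated with a short sketch of additive increase/multiplicative decrease (AIMD), the congestion-control pattern associated with Jacobson's modifications. The function name and parameters here are illustrative, not drawn from any particular protocol implementation:

```python
def aimd_window(loss_events, cwnd=1.0, increase=1.0, decrease=0.5):
    """Track a congestion window over a series of round trips.

    loss_events: iterable of booleans, True where congestion
    (packet loss) was detected during that round trip.
    """
    history = []
    for loss in loss_events:
        if loss:
            # multiplicative decrease: back off sharply on congestion
            cwnd = max(1.0, cwnd * decrease)
        else:
            # additive increase: probe gently for spare bandwidth
            cwnd += increase
        history.append(cwnd)
    return history

# Every sender reduces its rate when congestion appears, which over
# time tends toward an equitable sharing of the bottleneck link.
print(aimd_window([False, False, False, True, False]))
# → [2.0, 3.0, 4.0, 2.0, 3.0]
```

Because every well-behaved sender applies the same rule, no single flow can monopolise a congested link for long, which is the fairness property Floyd describes.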
During the Internet’s early history, most traffic was of one type: text. The Internet of the 2000s,
however, became the medium for numerous classes of content, some of which could only be
functional if traffic was prioritised, or shaped. For example, the packets of data making up a
voice-over-IP (VoIP) call were most useful if they could flow between participants in as timely a
manner as possible. Network service providers would therefore establish technical control
mechanisms that would reserve and prioritise network resources depending on network use
(Evans & Filsfils, 2007). VoIP might typically be prioritised over electronic mail, which was less
time-sensitive; these control mechanisms were called quality of service (QoS).
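The prioritisation just described can be sketched as a strict-priority packet scheduler. The traffic classes and priority values below are illustrative assumptions, not a description of any specific vendor's QoS implementation:

```python
import heapq

# Illustrative traffic classes: lower number = higher priority.
PRIORITY = {"voip": 0, "web": 1, "email": 2}

def schedule(packets):
    """Drain queued packets in strict priority order, preserving
    arrival order within each class (a FIFO per priority level)."""
    queue = []
    for seq, (traffic_class, payload) in enumerate(packets):
        heapq.heappush(queue, (PRIORITY[traffic_class], seq, payload))
    return [heapq.heappop(queue)[2] for _ in range(len(queue))]

# VoIP packets leave the queue first, electronic mail last.
print(schedule([("email", "e1"), ("voip", "v1"), ("web", "w1"), ("voip", "v2")]))
# → ['v1', 'v2', 'w1', 'e1']
```

A strict-priority discipline like this is the simplest QoS mechanism; it also illustrates the policy stakes, since under sustained load the lowest class may be starved entirely.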
Network congestion results from Internet traffic exceeding the capacity of network components
to manage it. Congestion would in turn result in increased packet delay variation (which
users may experience as “jitter” when streaming media is being received) and network latency
(the measure of time delay experienced when using the network).
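Packet delay variation is conventionally estimated from the variation in packet transit times. The sketch below computes a smoothed interarrival jitter in the style of the RTP estimator defined in RFC 3550; the timestamps are illustrative values in milliseconds:

```python
def interarrival_jitter(send_times, recv_times):
    """Smoothed interarrival jitter estimate in the style of RFC 3550:
    J = J + (|D| - J) / 16, where D is the change in packet transit
    time between consecutive packets."""
    jitter = 0.0
    prev_transit = None
    for sent, received in zip(send_times, recv_times):
        transit = received - sent
        if prev_transit is not None:
            d = abs(transit - prev_transit)
            jitter += (d - jitter) / 16.0
        prev_transit = transit
    return jitter

# Perfectly regular delivery (constant 5 ms transit): no jitter.
print(interarrival_jitter([0, 20, 40], [5, 25, 45]))   # → 0.0
# One delayed packet (transit times 5, 10, 5 ms): jitter rises.
print(interarrival_jitter([0, 20, 40], [5, 30, 45]))   # → 0.60546875
```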
The question of when it was appropriate to implement QoS, and in what way, was a key network
policy question in the 2000s (Stevenson & Clement, 2010). Using QoS, service providers had the
ability to limit severely (or even block) certain types of Internet traffic and applications. This
practice was typically called bandwidth throttling (Reisman, 2007). In some instances,
applications were capable of obscuring their use of the network to avoid throttling, endeavouring
to make their traffic indistinguishable from that of other, less bandwidth-intensive applications.
Service providers had therefore turned to the use of specialised network surveillance
technologies, called deep packet inspection (DPI), that allowed them to analyse the contents of
the data flowing through their network (Abelson, Ledeen, & Lewis, 2009).
2.2.3 Net neutrality controversies in North America
The Internet grew rapidly in the 1990s. While large incumbent telecommunications companies
focused on the potential of centrally provisioned network information services—often labelled
the “information superhighway”—the Internet grew increasingly popular, provisioned primarily
by relatively small and medium-sized dial-up Internet service providers (Besser, 1995). From
1995 through the 2000s, Internet traffic increased by 2400% (Cisco Systems, 2012), driven by
the distribution of rich media such as music and video.
In Chapter 5 I describe the network neutrality controversies that began during the early-2000s, as
Google was establishing itself as a profitable search engine and Internet brand. I detail the first
conflicts around Internet traffic management, and various public interest and regulatory
responses. In Chapter 6 I describe Google’s approach to network neutrality as its position on the
issue became nuanced and problematic.
2.3 Chapter summary
In this chapter I described relevant theoretical and methodological approaches to studying
Google, identifying foundational work in Internet governance, business strategy theory,
infrastructure studies, the theory of affordances, actor-network theory, the work of Ciborra on
technological and organisational change, and political economy approaches to Google. I also
defined network neutrality as a concept, and described the technical characteristics of the Internet
from its early history to contextualise controversies around the issue that arose in the 2000s.
In Chapter 3, I detail my approach to generating data about Google, including the use of
documentary evidence, network diagnostic tools, and large-scale network research projects. I
also outline my techniques for creating a web-based map of Google’s systems in 2013.
3 Researching the burgeoning network giant
In the previous chapter, I identified several frameworks for exploring questions in Internet
governance generally, and the studies of infrastructure, organisational change, and business
strategy specifically. I identified a number of potentially useful approaches, all of which can
include textual analysis and empirical studies (including case studies) of organisational
behaviour. The work of Ciborra, for example, makes significant use of case studies of
technology firms. Work in Actor–network theory focuses on processes and relationships that
have been identified in real-world governance.
My approach to studying the influence of Google’s infrastructure on the company’s strategic
decision-making on network neutrality policy has been informed by my technical knowledge, my
familiarity with regulatory discourse, and my theoretical understandings. I use a number of
methods to generate useful and relevant data, drawing on Infrastructure Studies, traditions within
ANT of studying technologies, and techniques of network design and analysis. In this chapter I
describe my research process with an emphasis on technical discovery, and discuss the design of
my research in relation to my principal research goal—to better understand the role
that Google’s infrastructure played in the company’s retreat from public support for Wu’s
network neutrality after 2010—and my guiding research questions, as follows:
1. What was Google’s policy position on network neutrality, and how did it change?
2. How did Google’s infrastructure and systems, and the affordances they provided, change
during the period of its network neutrality engagement?
3. In what ways could infrastructure and systems influence Google’s policy approach to
network neutrality?
4. How can Google be characterised as a network and policy actor during this period, in
relation to Wu’s network neutrality models?
The process of generating useful data about Google’s systems has presented several significant
challenges. Not since its early history has the company publicly described its infrastructure in
any detail to those outside the organisation. In fact, Google preferred to treat
much of the detail of its systems as trade secrets. For example, Levy (2011) describes Google
initially purchasing land for its cyclopean data centres using shell companies; the existence of
many data centres was, at best, an open secret in the mid-2000s. Google also created in-house
much of the underlying technology of its infrastructure, such as its custom server operating and
file systems, in secret.
As I discuss in the next three chapters, Google uses information concerning its systems
strategically: to promote the company and its services, to influence competitors and policy
makers, and as institutional advertising. Google promotes its infrastructure to strengthen its
brand identity as powerful, useful and (through colourful photos of a data centre’s cooling
system) perhaps friendly and fun as well.
In this chapter I describe textual analysis of writings by and about Google, and the technical
means I used to explore Google’s infrastructure. I specifically describe the process of generating
data concerning Google and its operations, which I use later in this dissertation to determine the
affordances associated with Google’s infrastructure, a key step in understanding Google’s
business strategies and policy-making behaviours.
3.1 Research design
3.1.1 Case study: Ciborra and Actor–network theory
Like Ciborra, my research draws on a foundation of empirical evidence. Ciborra’s work relies
heavily on case studies of technology firms, in particular Italian technology manufacturer
Olivetti.
My case study analysis is guided by the conceptual framework of ANT. It is central to actor-
network theory to explore the process of interest alignment that leads to actor-network
development (Monteiro & Hanseth, 1996). This dissertation thus presents a case study of
Google’s development over time, from the company’s founding to the early-2010s. This requires
capturing contextual changes to the company’s services, strategic approaches to policy, systems
and infrastructure.
I produce a narrative account of Google’s history, emphasising cycles of service development
and infrastructure change, along with descriptions of the company’s engagement with the policy
process. To analyse this narrative, I then draw on actor-network theory to reframe interests and
alignments both between Google and among other actors, and within the company itself.
My narrative is akin to Ciborra’s notion of an “empirical case” of Google, rather than simply a
history of the company and its activities. I am concerned with the shape of the organisation (to
the extent that it had one) and its constructed networks of relationships with other actors. As
Ciborra might suggest, there is no “pure alignment” to be discovered between Google and other
entities, and we must be careful to avoid too much abstraction or idealisation. As Ciborra wrote
in Labyrinths, success is to “accept coexistence with the messiness of the worldly routines and
surprises without panicking” (2002, p. 26).
Methodological approaches associated with ANT, utilising both ethnographic research and
documentary analysis (Van House, 2001), align well with the recommendations of Star (1999)
concerning the study of infrastructure, which I discuss in the next section.
3.1.2 Infrastructure studies
Sandvig (2013) argues that the study of Internet infrastructures requires a change from looking at
how people utilise a network to its underlying structure, from infrastructures as “what people say
with it” to “how it works” (2013, p. 89). Bowker et al. suggest “better forms of multi-modal
research” (2009, p. 113) are required to study infrastructure. Star identifies a program of research
when studying infrastructures that might include “a combination of historical and literary
analysis, traditional tools like interviews and observations, systems analysis, and usability
studies” (1999, p. 382). For Star these methods will surface both “master” and other narratives of
infrastructure and systems that embody values.
Bowker et al. (2009) argue that no one methodology is applicable in all infrastructure studies.
They discuss a variety of specific methods, including “infrastructure inversion” (Bowker, 1994),
observation during “moments of breakdown”, and reading of texts and databases, all to lead to an
“integrative view” (Bowker et al., 2009, p. 113). As I describe in the next chapter, my approach
to studying Google’s infrastructure draws on these techniques and others, with infrastructural
inversion—seeing the real work of knowledge production and politics that underlies the
interdependence of technical networks and rules—being a necessary approach to explore a set of
systems that are intentionally obscured.
Drawing on the work of Star (1999), I have examined Google’s infrastructure both as a human-
constructed artefact that has influences and relationships with people and other systems, and as a
record of Google’s activities, an information-collecting device of sorts that traces the company’s
strategies and history. As Star states, these two perspectives are not mutually exclusive. But they
do require investigation on both fronts.
Similarly, I have found it useful to focus on specific characteristics of Google’s systems,
mirroring several of Star’s characteristics of infrastructure. For example, it is important to
consider the reach and scope of Google’s infrastructure, as I am concerned with the geographic
locations in which Google’s servers and networks can be found, and their proximity to people
and other systems. As well, embeddedness, the extent to which Google’s infrastructure is inside
or overlapping other structures and systems, is also important for the question of the extent to
which Google can bypass retail ISP gatekeeping, or create alliances with ISPs that make
technical gatekeeping unlikely.
Finally, I am also concerned, as was Star, with narratives, both the meta-narrative of Google’s
infrastructure as created by the company itself, and the narrative of the infrastructure’s creation,
which is for the most part invisible and must be surfaced. This links to the project of constructing
a narrative of the growth and changes to Google’s services and policy activities—also obscured.
3.1.3 Propositions
Following Yin (2013) and Baxter and Jack (2008), I created a number of distinct propositions in
order to guide my work of data generation. These propositions, described below, were fashioned
based on initial reading of popular accounts of Google’s infrastructure in 2010 and my own
technical knowledge. In order to accomplish this, preliminary work was completed to
conceptualise Google’s infrastructure, and thus define a site of study.
3.1.3.1 Conceptualising infrastructure elements
As I indicated in my first chapter, this research began with a specific question arising from
Google’s withdrawal from the network neutrality policy debate in 2010: had Google found a way
to circumvent retail ISP gatekeeping? Under scrutiny, this question requires a nuanced
interrogation, and (not surprisingly) begets other queries.
My starting point is knowing in general terms, based on a very incomplete depiction of the
company in the public record, that Google maintains a very large, globe-spanning technical
infrastructure, with significant investments in backbone networks, peering, data centres, and
caching servers. This picture of Google conveys scale, but only implies specific capacities and
affordances.
But this information is far too general to be of direct use. As well, it is quite likely misleading in
whole or in part, as it clearly responds to Google’s objectives in building the global public
perception of its brand. Without investigation we cannot know the affordances of these systems
in any detail whatsoever, and perhaps more importantly, we cannot know enough about the
history of these systems to understand their relationship to Google’s strategies and operations,
explicit or tacit.
As a first step in shaping this information generation, I began to organise the aspects of
Google’s infrastructure that were known, and to imagine what was missing. Google is
information technology and information management on a nearly unprecedented scale, but it is
information technology and information management nonetheless. Servers connect to one
another through networks, and require software to run. All must be designed, deployed, and
renewed by people, and were created in response to strategic, operational, and other imperatives.
For the purposes of this work, I have therefore chosen to conceptualise various aspects of
Google’s technological infrastructure as consisting of interacting components distinct from one
another. This arises from approaches established in information technology service management
(ITSM) (Hochstein, Zarnekow, & Brenner, 2005) and embodied in the ITIL standards
(Strahonja, 2009), which imagine technological services that provide organisational affordances
as being made up of a number of elements, including systems (such as software platforms or
hardware devices) which are in turn made up of smaller and less complex components. An
example of such an approach is an enterprise content management system for document
management; the service is conceptualised as a business service containing people and
technologies, which afford certain abilities to human workers and other systems and services.
I have therefore created a preliminary organisation of Google’s various systems in a manner
similar to information technology service practice, one that also aligns well with how the
technology press, the company itself, and IT practitioners conceptualise them. I identify
Google’s systems as follows: wide area networks (WAN), distinguishing between internal
Google networks that connect only Google servers, and external networks that connect to non-
Google elements, including peering connections between Google and other network entities; and
server and server clusters of various types, including public data centres, Internet exchange point
presences, and retail ISP-hosted caching servers.
As stated above, these distinctions may at times be somewhat arbitrary, but are useful for this
research as a way of organising my initial specific inquiries into Google’s systems.
Figure 3.1: Google’s infrastructure elements, 2013. Figure by John Harris Stevenson. [Diagram
showing users connecting through an ISP network hosting a Google Global Cache, linked to the
Google network, Google IXP servers, and Google data centres.]
3.1.3.2 Guiding propositions
The site of technical data generation established, I identified a number of propositions to be used
to guide data generation. These propositions were not designed to be exhaustive, but did provide
direction and kept the scope of my enquiry manageable. All of these propositions arise primarily
from my second guiding research question: how did Google’s infrastructure and systems, and the
affordances they provided, change during the period of its network neutrality engagement?
My propositions were as follows:
1. Google’s infrastructure of the early-2010s had technical characteristics that could have the
effect of mitigating the risk of retail and transit ISPs gatekeeping Google’s content. As I
suggested above and will discuss further in coming chapters, Google maintained servers that
appear to be used for content caching and service provision at Internet exchange points, other
peering points, within retail ISP networks, and at other locations. This approach to
distributing content was not novel—it was a common practice to place servers closer to
consumers—but the scale of Google’s content delivery network was impressive. I knew in
general terms that Google caches were likely located at ISPs, and at additional locations. But
how prevalent was this caching? Where, specifically, were caches located? What were the
capacities of the caches? And how did these caches impact the relationships among Google,
the cache’s hosts, and their users?
2. Google peered very extensively with other network entities. Some information in this area
was public. But many, perhaps most, peering arrangements were private, and their existence
could only be inferred after looking at other evidence. Could I draw conclusions concerning
private peering by understanding the extent of public peering? What details of peering
agreements between Google and other network entities could we know? What do these
peering relationships tell us about the relationships among various network entities and
Google?
3. Google’s backbone networks allowed the company to circumvent most third-party transit
ISPs. This proposition responds to questions about the extent to which Google relies on third-
party transit providers (such as Level3) for its connections to other network entities. How do
networks change Google’s relationships with other network actors and users? Which traffic
might third parties interfere with, and which not?
4. Google developed its infrastructure based on engineering requirements driven by Google’s
business models; the mitigation of retail or transit ISP gatekeeping was not necessarily
initially a design objective of Google’s systems. Technical objectives of importance to
Google included increasing the efficiency of Google’s network operations by decreasing the
latency of web applications and improving access to online video.
3.2 Research process
In this section, I detail two principal means of generating data to answer my principal research
goal and supporting research questions. First, I conducted textual analysis of popular and
academic writing on Google, and examined some of Google’s own internal technical
documentation. Second, I performed technical analysis of Google’s infrastructure using network
diagnostic tools and large-scale network analysis platforms.
3.2.1 Textual analysis
As detailed in the narrative below, textual analysis is a core component of my data generation in
this research. I began by engaging with popular coverage of network neutrality issues and
Google’s activities, beginning in the late 2000s. I was also able to access a
limited number of internal Google documents, which provided significant insight into the
company’s activities. Finally, I collected over fifty maps of terrestrial and submarine Internet
routes.
3.2.1.1 Popular, advocacy, technical press
One of the sparks for my research was popular coverage of network neutrality policy-making
beginning in 2008, and Google’s involvement in this process starting in 2009. In the chapters to
come, I draw on many of the same publications for their coverage of Google’s activities,
services, and stances on various issues, as well as descriptions of Google’s history and
management practices; this coverage was very often laudatory or intended to be edifying.
Dozens of different popular sources were drawn upon. They included the popular press (New
York Times, The Guardian, The Washington Post, The Globe and Mail), the popular technology
press (Wired, Ars Technica, CNET News, Gizmodo, PC World, Recode, The Verge, The Register,
TechCrunch), and the popular business press (Forbes).
In 2010 I turned to institutional material on Google and network neutrality, such as the Google
Public Policy Blog and content from advocacy groups such as the Electronic Frontier
Foundation, Free Press, and Public Knowledge. Since the primary nexus of activity during this
period appeared to be the Google-Verizon agreements (which I discuss in chapter 5), the bulk of
these sources were American. I also draw on the analysis of Google’s stance from various
opinion-makers, including American professor of Internet law Jonathan Zittrain (2010).
Chief among popular accounts of Google’s history is Steven Levy’s In The Plex: How Google
Thinks, Works, and Shapes Our Lives (2011). A Google executive whom I interviewed in 2012
recommended Levy’s work as being generally accurate concerning Google’s internal operations
and strategies when compared to other coverage of the company. Levy is a respected American
technology journalist, well-known for his writing about the early PC industry in the 1980s and
the history of Apple Inc. In the Plex provides a wealth of information concerning Google’s
operations, drawing on sources within the company which no other accounts can match. In the
Plex describes the history of the company from its founding to the late-2000s, with substantial
information concerning the development of the company’s infrastructure.
Also of great utility was John Battelle’s 2005 The Search: How Google and Its Rivals Rewrote
the Rules of Business and Transformed Our Culture, and Bernard Girard’s The Google Way:
How One Company Is Revolutionizing Management as We Know It (2009), both of which detail
much of the company’s early technical history.
Other popular accounts were also somewhat useful: How Google Works (2014) by Google’s then
executive chairman, Eric Schmidt, with Jonathan Rosenberg; Wikileaks founder Julian
Assange’s self-serving When Google Met Wikileaks (2014); Jeff Jarvis’ What Would Google
Do?: Reverse-Engineering the Fastest Growing Company in the History of the World (2009);
Googled: The End of the World As We Know It (2010) by Ken Auletta; and, The Google Story:
For Google's 10th Birthday (2008) by David A. Vise and Mark Malseed.
3.2.1.2 Specialised technical knowledge
It became evident early in my research that acquiring adequate data concerning Google’s
systems would be challenging. I made several attempts to speak with Google staff concerning the
company’s infrastructure and policy activities, either formally through company representatives
or informally through social media connections and referrals from friends and colleagues.
Official requests were met with silence. My informal requests were also for the most part
unsuccessful. My only success was an off-the-record conversation with a Canadian Google
executive in May 2012. This executive worked in government relations and policy for Google
Canada, and though not involved in the network neutrality policy formation process I was
researching, seemed well aware of events and the personalities involved. The conversation was
wide-ranging and candid, and proved to be useful in challenging simplistic notions I had begun
to develop concerning the ability of Google’s infrastructure to circumvent retail ISP gatekeeping.
The executive believed that “tech did not influence policy”, claiming that policy decisions within
Google on network neutrality had been “personality-driven” and framed by individual
ideologies, including a strong commitment to the growth and vitality of the World Wide Web,
rather than simply in the best interests of Google as a company.
As my research progressed, I found pockets of technical information, often in specialised blogs
and websites focused on wide area networking, that provided detailed information about network
infrastructures and content hosting. These included Data Center Knowledge, DSLReports,
Netmanias Tech-Blog, Submarine Cable Networks, Speedtest Blog, Rayburn’s
StreamingMediaBlog, Search Engine Land, DataCenterDynamics, and Norton’s Ask Dr.
Peering. I drew heavily on Norton’s various publications on peering, the value of which was
substantial. I was also able to conduct limited interviews with networking experts, such as Jon
Nistor, director of TorIX (2013).
3.2.1.3 Google technical material
Key to the success of my research was my ability to access a limited number of key Google
internal documents concerning infrastructure, as well as descriptions of Google systems from
third parties. All of this data was available in some form on the open Internet, though much of it
was obscure, and was gathered as the result of careful and sometimes creative search techniques
using Google and other search engines.
Principal among this material was the 2011 Google Global Cache Beta Installation and
Operations Guide, which provided important details on the operation of Google Global Cache,
and Mike Axelrod’s 2008 presentation, The Value of Content Distribution Networks, delivered to
the African Network Operators Group (AfNOG), which detailed Google’s strategy on content
delivery prior to the launch of GGC.
3.2.1.4 Maps of network infrastructure
Beginning in 2010, I also collected as many maps of wide area networks (WAN) as possible
from public sources, in order to more easily estimate the locations of Google’s network and
server resources. These maps range in detail from abstract representations of approximate
network routes useful primarily for promotional purposes, to interactive maps which show detail
down to the kilometre. Most maps, unfortunately, are in the former category. Nonetheless, these
maps taken on aggregate show what appear to be common fibre highway routes, and locations
for server resources. These maps provided a basis for more detailed exploration of technical
infrastructure, suggesting further work that would provide greater detail concerning technical
capacities.
3.2.2 Discovering infrastructure
The initial analysis of textual descriptions of Google’s operations and infrastructure described
above led to further data generation in a number of areas. In the section below I detail efforts to
gather data concerning the two conceptual categories of Google systems identified earlier:
Google-managed server capacity and Google wide area networks.
3.2.2.1 Public data centres
Google’s public data centres were the most prominent and promoted aspects of Google’s
infrastructure during the time of my study. Their importance to this research is clear: they
both represent significant technical capacity and embody Google’s practices and values. While
the company made few public statements about its data centre projects prior to 2007, Google
found it impossible to work effectively with local and state governments on planning matters
without public disclosure. As well, building large-scale data centres was a difficult secret to
keep, and secrecy was not advantageous when dealing with local governments, often in rural
areas (R. Miller, 2012).
While for most of the first decade of the 2000s Google made few public statements about its
infrastructure, by 2010 this had changed. Google began announcing their intention to build new
data centres in 2007, specifying location and implying capacity. Google also published a full list
of its data centres and locations in September 2011 on their public website (Google, 2013). Most
of these locations were situated near known fibre highway routes, although several other factors
were considered when choosing their location. All of Google’s large, stand-alone data centres
were known by location during the period of my research.
In 2012 Google further engaged in a public relations campaign concerning its data centres,
providing writer Steven Levy with access to some locations, which resulted in an article in Wired
magazine.
3.2.2.2 Google’s networks
Understanding Google’s networks is also central to this research. It is critical to grasp the scope
and reach of Google’s networks to determine the extent to which they allow the company to
circumvent ISP gatekeeping at both the retail and network transit levels.
In order to determine network routeing, I depended nearly exclusively on documentary evidence
and conjectures based on that evidence. As I described above, I collected over fifty network
maps published primarily by national and global level network operators from 2003 to 2013.
Many of these maps are abstract, promotional images, used to indicate network reach but lacking
in clear detail as to routes and connections. Some were more detailed, showing rough routes in
various locations. Only a few maps provided a level of detail down to the street level, with an
online interactive map created by Level 3 Communications showing both land-based and
submarine fibre routes with a relatively high degree of granularity.
As I discuss in coming chapters, all maps studied provided confirmation that major network
routes followed existing infrastructure rights-of-way, such as railroad lines and major highways;
this fact was also confirmed by a networking expert (Duncan, 2013). I determined that although
these routes could be mapped in detail after further research, such work was beyond the scope of
this dissertation.
3.2.2.3 Google edge caches and peering
The foundational question of my research is the extent to which Google’s infrastructure allows
the company to mitigate the possible impact of ISP gatekeeping at any point beyond Google’s
network. The technical means by which this might be accomplished included the provisioning of
Google edge caching hardware within retail ISP networks, a possibility that led me to this
research in 2010. Edge caching is the practice of Internet content providers placing content
servers closer to consumers, usually hosting their caching servers at third-party locations.
As I describe in Chapters 4 and 5, Google, like other web content providers, had placed content
distribution closer to content consumers using a content delivery network. A CDN is made up of
network capacity that is purchased or leased, and is used to send content to multiple servers
located closer to major clusters of consumers. A provider of live streaming video, for example
YouTube, will benefit from streaming once to a server located in a major city, and then
streaming from that caching server to customers in that region, rather than streaming individually
to all customers worldwide. This one-to-many pattern of distribution is called multicasting.
A content provider can also use a CDN to distribute textual data, such as search results. Data that
is requested most often—for example, the most popular local search results—can be cached on a
local server, greatly decreasing latency.
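The latency benefit of caching frequently requested data can be illustrated with a generic least-recently-used (LRU) cache, the simplest policy for keeping popular items close to consumers. This is a sketch only: Google’s actual caching logic is not public, and the queries and results below are invented.

```python
from collections import OrderedDict

# Generic LRU cache sketch: frequently requested items (for example,
# popular local search results) are answered from a nearby cache,
# avoiding a round trip to a distant data centre. Not Google's code;
# the keys and values below are invented for illustration.

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()

    def get(self, key):
        if key not in self.data:
            return None                    # cache miss: fetch from origin
        self.data.move_to_end(key)         # mark as recently used
        return self.data[key]

    def put(self, key, value):
        self.data[key] = value
        self.data.move_to_end(key)
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)  # evict least recently used

cache = LRUCache(2)
cache.put("restaurants toronto", "result-page-1")
cache.put("weather toronto", "result-page-2")
cache.get("restaurants toronto")               # refreshes this entry
cache.put("traffic toronto", "result-page-3")  # evicts "weather toronto"
print(cache.get("weather toronto"))
```

A popular query stays resident and is served locally; an unpopular one is evicted and must travel to the origin again, which is exactly the latency trade-off edge caching exploits.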
Determining the locations and capacities of potential Google caching servers was of particular
interest. The first mention of caching that I discovered was a posting on Erik Hersman’s blog
(2008); Hersman was a network administrator who lived and worked in Kenya. Hersman
provided significant information about the deployment of Google’s caching hardware in Africa,
identifying the program as Google Global Cache (GGC).
As with data centres in the past, Google provided limited information on GGC prior to 2013,
even though subsequent investigation showed that the program existed from at least 2008. Using
the information provided by Hersman, it was possible to unearth additional documentary
evidence on GGC. A 2008 presentation deck prepared by Mike Axelrod of Google for two wide
area networking conferences outlined Google’s approach to content delivery networking,
describing the GGC of 2008 as a beta program. A variety of anecdotal information also existed
from network administrators at various retail ISPs, primarily outside North America and Europe,
who have deployed the caching servers, or seen the impact of these deployments first hand.
Google did not publicly disclose the program until creating a GGC web page in August 2013.
The most significant of my discoveries was of a Google Global Cache Beta Installation and
Operations Guide, dated June 2011. The guide contains information concerning the server
hardware itself, and more detailed information on its functionalities and operation.
Determining where these caches are located beyond a very few specific geographic locations
proved challenging. By the time the program began to publicise itself to retail ISPs in the global
south in 2011, I concluded that caching servers had already been available to southern retail ISPs
for many years, with more current deployments to North American and European ISPs.
Determining the scope of the GGC deployments globally became an important objective for my
research.
3.2.2.4 Initial efforts: Google hides its IP addresses
An early and ultimately key question in my research concerned the locations and capacities of
Google’s caching servers. Documentary evidence of the existence and function of the caches
indicated that North America and Europe were likely to have many installed at various retail
ISPs. However, I could find no significant indication from ISPs in the global north concerning
the specific locations of caches.
Documentation concerning GGC indicated that Google provided caching servers to retail ISPs
for placement within their networks. Pictures of these servers indicated a rack of servers (Turner,
2009), rather than a stand-alone enclosure such as Google’s larger server containers. My initial
efforts to identify these caching servers centred on using standard network diagnostic tools. The
Google Global Cache Beta Installation and Operations Guide provided a basis for technical
exploration, listing a domain name, cache.google.com, as a DNS server address. One hypothesis
I wished to test was whether Google used a third level domain name for many or perhaps most of
its caching servers, and if the servers connected directly to Google’s network. My tools for this
phase were traceroute and ping.
Traceroute is a network utility used to approximately estimate the route a network request takes
over the Internet. Traceroute was developed in 1987 by Van Jacobson, a lecturer in Computer
Science at the University of California Berkeley (Jacobson, 1997). Traceroute is implemented in
some form on nearly every computer operating system.
A typical traceroute request results in a list of routers between the client and the traced server. To
accomplish this, traceroute sends a series of requests to the remote server, each with an
increasing time-to-live (TTL) value. For example, to conduct a traceroute for utoronto.ca, an
initial request would be sent with a TTL of one, returning the first router that handles the request.
A second request would have a TTL of two, showing the next router on the route. This process
continues until no further responses to requests are received.
An example of the traceroutes conducted during this research is as follows:
tracert google.com

Tracing route to google.com [74.125.226.96]
over a maximum of 30 hops:

  1    <1 ms    <1 ms    <1 ms  10.10.10.1
  2     1 ms    <1 ms    <1 ms  209.217.82.225
  3     9 ms     9 ms     8 ms  10.201.122.201
  4     9 ms     8 ms     8 ms  10.201.120.197
  5    10 ms    10 ms    10 ms  10.201.120.78
  6   184 ms   218 ms     9 ms  bb1.tor.primus.ca [216.254.132.165]
  7    22 ms    38 ms     9 ms  gw-primus.torontointernetxchange.net [206.108.34.22]
  8     8 ms     7 ms     8 ms  gw-google.torontointernetxchange.net [206.108.34.6]
  9     9 ms     9 ms     9 ms  216.239.47.114
 10     8 ms     8 ms     8 ms  209.85.250.207
 11     7 ms     7 ms     8 ms  yyz08s13-in-f0.1e100.net [74.125.226.96]
In the example above, we can see the request to google.com travelling over the Primus network
(identified by the primus domain at step 6), through peering at the Toronto Internet Exchange at
151 Front Street, and then to a Google server in Toronto (based on the yyz08s13 prefix and the
standard Google server domain 1e100.net).
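The TTL mechanism underlying such traces can be illustrated with a short simulation. This is a sketch only: a real traceroute sends UDP or ICMP probes and requires raw-socket privileges; the router names are borrowed from the example trace above purely for flavour.

```python
# Toy illustration of traceroute's TTL mechanism. A real traceroute
# sends probes and reads ICMP "time exceeded" replies from each
# router on the path; here the path is simply a Python list.

def send_probe(path, ttl):
    """Simulate a probe with a given TTL along a router path.

    Each router decrements the TTL; the router at which it reaches
    zero reports itself, mimicking an ICMP "time exceeded" message.
    The destination answers directly once the probe reaches it.
    """
    if ttl >= len(path):
        return path[-1], True       # probe reached the destination
    return path[ttl - 1], False     # an intermediate router replied

def traceroute_sim(path, max_hops=30):
    """Collect one reply per TTL value until the destination answers."""
    hops = []
    for ttl in range(1, max_hops + 1):
        hop, done = send_probe(path, ttl)
        hops.append(hop)
        if done:
            break
    return hops

route = ["10.10.10.1", "bb1.tor.primus.ca",
         "gw-google.torontointernetxchange.net", "yyz08s13-in-f0.1e100.net"]
print(traceroute_sim(route))
```

Incrementing the TTL by one per probe is what turns a stream of error messages into an ordered list of routers, which is why the technique reveals each network boundary a request crosses.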
In addition to traceroute, I drew to a lesser extent on the ping utility. Ping is used to assess the
“reachability” of a host, as well as returning the round-trip time for a request.
In addition to cache.google.com, I gathered a number of other promising domain names,
including cache.l.google.com, static.cache.l.google.com, and lscache4.c.youtube.com,
among others. From March 2012 onward I conducted a series of pings and traceroutes on these
and additional domain names, with particular focus on cache.google.com.
Initial results performing a traceroute from Ottawa were promising. I was a Rogers broadband
customer, and could see the route remain within the Rogers network until it terminated,
seemingly in Toronto. This appeared to indicate Google caching servers housed at Rogers in
Toronto, but did not align with the notion that Google would likely maintain caches locally in
Ottawa, Canada’s fourth largest metropolitan area.
Using VPN technology, I then ran the same traceroute tests from locations in the United States.
The results of these tests were similarly confusing, typically showing the route end in a major
metropolitan centre, rather than locally. I also conducted traces to the IP address associated with
the cache.google.com domain name. These provided similar results to the traces to
cache.google.com. Subsequent research showed that in nearly all cases, the traces transited
through locations that could reasonably be Internet exchange points (IXPs).
I concluded that my series of traceroutes did not indicate that cache.google.com was a catch-all
domain name that resolved to servers inside retail ISP networks, but was in fact an address
mapped to elements of Google’s network. The traces ran until they found Google’s private
network, which terminated at dozens of IXPs and other points; after that point, no meaningful
information could be determined by network diagnostic tools. I concluded that Google’s network
was essentially a black box if being examined by traditional network diagnostic tools in this way.
IP addresses controlled by Google are a matter of public record, accessible through records at
Regional Internet Registries (RIR). However, beginning in the late 2000s the company began to
obscure these addresses from those outside the company. The emerging search engine
optimisation (SEO) industry had spent considerable time and resources in the early-2000s tuning
web pages and servers to perform well in Google searches for specific keywords, as required by
their clients. To accomplish this, they had used standard network diagnostic tools to determine
the IP addresses of Google’s data centres. Using this information, SEO companies had been able
to determine which data centres returned which search results, with the differences among the
various servers showing changes in Google’s ranking algorithms. However, Google began to
obscure their data centre IP numbers in the late-2000s, and by 2011 no useful mapping of IP
number to data centre could be easily accomplished.
Google does not publish a list of caching locations.
3.2.2.5 Calder on server locations
While I did not completely abandon the use of network diagnostic tools such as traceroute and
ping to explore aspects of Google’s network, I did conclude that I did not have sufficient
technical resources to use them to identify Google server locations. The breadth of Google’s
technical obfuscations, motivated by a desire to limit access to what the company considered to
be trade secrets, also made it highly unlikely that a single researcher with limited access to
custom technical interrogation technologies would be successful.
I turned then to academic research on the shape and function of the Internet being conducted by
relatively large-scale projects and institutions. Such research institutions include Citizen Lab at
the University of Toronto and PlanetLab. In August 2013, I contacted Matt Calder, who was
leading a team of researchers at the University of Southern California Information Sciences
Institute studying various aspects of global network infrastructure, and specifically caching
servers. Calder and his group (2013) had a specific interest in identifying Google network
elements. Calder provided me access to the data generated by his team from 2012 through 2013.
Calder’s work was useful because it focused on identifying geographic locations of Google
servers, as well as providing IP numbers of host networks for these servers.
Implicit in the functioning of a content delivery network, like that controlled by Google, is that it
is a distributed infrastructure in which the content provider manages traffic requests based on
geographic location. Typically, a client request for content is handled first by a web front end
server, with the request then redirected to a proxy caching server. This depends on the client
being able to resolve a domain name request (for example, www.google.com) to a
specific Internet Protocol (IP) address (for example, 207.126.144.1) using the Domain Name
System (DNS). Google controls a large range of server IP addresses, each of which represents a
group of geographically located servers. Google’s front end can resolve the request for
“Google.com” to an appropriate IP address, one of which will be able to respond to the request
most efficiently. A request for Google.com in Toronto might, for instance, resolve to a data
centre in the Midwest of the United States, or to a server at the 151 Front Street West Internet
exchange point in the city itself.
As I had determined performing rudimentary network diagnostics, understanding the basic
mechanics of this system does not necessarily provide significant detail as to the specific
locations of Google caching servers. However, Calder et al.’s approach to determining Google’s
server locations leveraged existing methods used by Google’s CDN to serve content. Calder and
his team determined that an extension of DNS called EDNS-client-subnet, which Google used,
would have some utility in determining geographic server locations. The EDNS-client-subnet
extension of the domain name system allows requesting information such as originating IP
number to be attached to a DNS request, presenting more accurate information on the location of
the requestor. This allowed the server to respond more quickly and accurately to client requests
for caching.
The first step for Calder’s team was to enumerate Google’s front end servers. To do so, they sent
queries to Google from an effectively vast number of vantage points: every active IP prefix in
the IP address space. Calder queried from approximately 10 million IP prefixes, with each set of
queries taking about a day to run (Calder et al., 2013). Using EDNS-client-subnet, Calder was
able to make Google DNS believe it was responding to requests from a large number of separate
IP addresses, each in a different geographic location. Calder et al. sent the requests using a
version of the Unix tool dig (domain information groper). Calder issued the queries using dig
through Google’s public DNS servers; this returned a set of front ends appropriate to the
supposed client’s geographic location.
Calder et al. worked to identify the geographic locations of the Google front ends using a
technique they called client-centric geolocation (CCG). This technique assumed that identified
front ends were geographically close to the supposed locations of the querying IP addresses. The
team used traceroutes to gather 102,604 prefixes, and BitTorrent logs to gather an additional two
million prefixes.
Calder’s team relied heavily on geolocation databases, primarily MaxMind, to identify the
locations of servers. Calder attempted to improve the accuracy of geolocation by pruning clearly
incorrect locations and reconciling large numbers of conflicting ones, using weighted averages to
determine which location appeared to be the best fit. The team also pruned front ends based on
ping response times: if a server was very distant from a client, it could not provide useful
location information and was dropped. Together, these techniques pruned 40% of server prefixes.
The team further clustered servers in large metropolitan areas, where the same IP address might
be served from multiple locations, and utilised mapping algorithms that created performance
maps designed to improve network efficiency. Finally, Calder and his team utilised PlanetLab to
estimate the locations of clusters of front end servers based on round-trip times from numerous
IP vantage points.
In summary, Calder’s approach was to first enumerate Google front-end servers by repeating
millions of queries. They then estimated the locations of these front ends based on the known
locations of clients that were directed to them. Calder et al. claim that their CCG technique is
accurate to within 10s of kilometres. Using these novel methods, Calder recorded the growth of
Google’s infrastructure beginning in October 2012 and continuing through April 2016.
Calder and his team provided me access to their raw data, which contained millions of records of
their queries. As I detail in coming sections, the data required substantial work to produce
indications of Google’s edge caching servers within retail ISP networks. Before detailing this
work, I discuss generating data concerning Google’s presence at IXPs.
3.2.2.6 Internet exchange points
Google connects with other networks through a process called peering, which I discuss in more
detail in coming chapters. It is well known that Google connects to other network entities at
locations called Internet Exchange Points (IXPs), which are typically found in large metropolitan
areas.
During most of the Internet’s history, locations for peering were not generally public knowledge.
Lists of peering locations were traditionally maintained by various network entities themselves
for internal use. However, as peering arrangements became more complex, it was difficult for
any single network entity to understand peering arrangements, as details were spread among a
number of organisations. Richard Steenbergen created a central database of peering
arrangements, called PeeringDB, in 2004 (Lodhi et al., 2014). As of September 2015, PeeringDB
contained peering information for over 8000 network entities across 608 peering locations.
PeeringDB is an invaluable resource, as it shows many public and private peering connections at
locations worldwide. Google has an open peering policy—it will connect to any other network
entity above a minimum capacity—so the locations that are identified appear to be representative
of Google’s presence. PeeringDB data proved extremely valuable, detailing street addresses and
connections between Google and other network entities. PeeringDB is not, however, an
exhaustive source in the area of private peering. Reporting peering is voluntary, and the database
only indicates when private connections take place at an IXP. This is understandable, as private
connections are, of course, private.
PeeringDB data has other limitations as well. Historical changes are not presented on the
website, meaning that changes must be tracked over time. A team led by Aemen Lodhi at the
School of Computer Science at the Georgia Institute of Technology was able to provide me with
access to their unpublished historical database of PeeringDB data, which they had generated in
cooperation with the PeeringDB operators. This data set begins in July 2010 and continued to be
updated daily at the time of this writing.
Another limitation of the PeeringDB data is the lack of information concerning the capacities of
entities hosted at IXPs. Public peering points list network connection capacity; for example,
Google peers publicly at 6000 Mbit/sec at the Toronto Internet Exchange. However, we have
no idea as to what equipment Google locates at the exchange. We are also not certain as to
connection type, though a subsequent interview with TorIX director Jon Nistor (2013) did
provide some useful information concerning connections.
Figure 3.2: PeeringDB website, 2016 This image was captured July 7, 2016 from https://www.peeringdb.com/net/433.
3.3 Mapping Google: Process and impact
My research is primarily focused on the creation of Google’s policy position on network
neutrality, with technology, organisation management and other areas of study central to the
work. With a policy focus, I concluded early in my research that I needed to present my findings
in a manner that would be accessible to policy-makers. Using the technical terminology and
detailed data of Calder, Lodhi, and similar researchers, and presenting data as they had, would do
little to illuminate my findings to my intended readers. Even experts in networking technology
would likely find data tables made up of thousands of rows unilluminating. I therefore decided
early in my research to make an interactive map of Google’s infrastructure (Stevenson, 2016) a
central contribution of my research, one that would show the reach and scope of Google’s systems
at a glance, while easily providing more detailed information.
As I have noted above, network maps are by no means uncommon. However, many
network maps made public in the past several years have been for promotional purposes, to sell
the network and server services of cloud service and transit providers to potential clients. They
are, as a whole, only vaguely accurate, filled with gaps, and lacking in detail.
There were several reasons to create a map to present my research. It could contain as much
detail as was useful concerning Google’s infrastructure, while presenting the scale of Google’s
network and servers, and allowing adequate detail of specific elements. The contemporary
practice of map-making relies heavily on online mapping services, which provide numerous
advantages over paper-based cartography, including the ability to combine disparate data sources
in a dynamic display of information.
3.3.1 Software
To create my map of Google’s infrastructure, I utilised a suite of software. Central to the work
was (perhaps ironically) Google My Maps. This was a web-based application that allowed
importation of mapping data and the plotting of this data on an interactive Google-hosted map,
with reasonable options for displaying data in layers and customising its appearance. When I started
research in 2010, Google My Maps was a premium Google product called Google Map Engine
Pro. However, Pro was rolled into the Google My Maps offering in September 2014 and
combined with the free version of Map Engine. I evaluated several mapping software platforms
in 2011 and 2012 prior to selecting Google My Maps. In 2016 this application was a standard
part of Google Apps for Work, a package of Google enterprise and productivity applications, and
it was this version that I used to create my Google map. I hereafter refer to any use of Google
Map Engine Pro as Google My Maps.
3.3.2 Data sets
When my research began I assumed incorrectly that Google server locations were relatively
limited—in the hundreds rather than thousands—and that I would therefore have a limited
number of data points to manipulate and display in an interactive map. My early data generation
resulted in a data set of roughly 100 elements, primarily IXPs and public data centres, which
could easily be edited by hand. However, access to data from Calder et al. (2013) and others
made handling of data sets much more challenging. My early-2013 set of server IP addresses and
locations contained 14,000 data points. When I revisited the data with Calder’s help, the data set
had grown significantly. I chose October 28, 2013 as the date for mapping, as I had already
generated other historical data for Google for that day. Calder’s data from this day had
approximately 25,000 data points.
Calder's data contained Internet Protocol (IP) addresses, an Autonomous System Number (ASN), a domain name, a geolocated longitude and latitude, and a country code. ASNs typically represent unique network entities that control a collection of IP routeing prefixes, and are critical to routeing among Internet entities. Calder shared this data in comma-separated value (CSV) format, with each IP address representing a server or cluster of servers, many of which were identified as being at the same geographic location.
Combined with additional data for network routes, known IXPs, and data centres, this data would easily overwhelm My Maps, which had a limit of 10,000 data points per map. Moreover, because each geographic location might host dozens of IP addresses (Google assigned unique IP addresses to the various servers or clusters of servers at a single location), mapping each IP address separately would be inefficient: stacking points on the map for each IP address would obscure the IP information rather than illuminate Google's capacity at that location.
Because of the structure of Calder's data, I devised a data consolidation approach that allowed me to identify and combine IP addresses at the same location. Importing the CSV data into Microsoft Excel, I created a non-unique key value for each IP address by multiplying the autonomous system number (ASN) of the IP address by the server's longitude and latitude. This value would be identical for every IP address at the same location. I then wrote an Excel macro to combine rows that shared a key value, with the objective of gathering each unique IP address and associating it with a unique location (geographic location plus ASN). I then wrote a
second macro to concatenate the IP addresses and delete duplicate rows of data. This took the number of data points from over 25,000 to 1,631, well within the display limits of Google My Maps, while retaining all the IP information from Calder's data.
Calder's data contained location information, but no detail as to the identity of the host of each Google server. I assumed that locations identified with a Google ASN were Google facilities. For other locations I conducted an IP Whois query using the public databases of the regional Internet registries ARIN, APNIC, AFRINIC, LACNIC, and RIPE NCC. In 96% of queries this identified the host entity of the Google server. Each entity was then categorised manually as a type of Internet service provider (retail or WAN) or another type of entity (educational, IXP, or other).
I further used Google Fusion Tables, a web-based set of data visualisation tools for processing large data sets, to refine geographic information for other elements of Google's infrastructure, particularly IXP and data centre locations. Fusion Tables can both import and export Keyhole Markup Language (KML), an XML schema for geographic annotation that can be used in Google My Maps. I also had additional address information for IXPs and data centres, but these addresses lacked corresponding latitude and longitude coordinates; I used Google Fusion Tables to geocode them.
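The KML exchanged between Fusion Tables and My Maps can be illustrated with a minimal, hand-rolled serialiser. The element names follow the public KML 2.2 schema; the input field names (`name`, `desc`, `lat`, `lon`) are my own assumption for illustration, not the schema of my actual data sets.

```python
from xml.sax.saxutils import escape

def to_kml(placemarks):
    """Serialise point records to a minimal KML document of the kind
    that can be imported as a layer in Google My Maps."""
    body = "".join(
        "<Placemark>"
        f"<name>{escape(p['name'])}</name>"
        f"<description>{escape(p.get('desc', ''))}</description>"
        # KML lists coordinates as longitude,latitude (note the order)
        f"<Point><coordinates>{p['lon']},{p['lat']}</coordinates></Point>"
        "</Placemark>"
        for p in placemarks
    )
    return ('<?xml version="1.0" encoding="UTF-8"?>'
            '<kml xmlns="http://www.opengis.net/kml/2.2">'
            f'<Document>{body}</Document></kml>')
```

Each consolidated server location becomes one `Placemark`; the longitude-before-latitude ordering is a common source of error when moving between mapping tools.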
This work resulted in the creation of a series of data sets, detailed in Table 3.1. Each data set
corresponds to a layer of the Google Infrastructure Map. Layering the data allows greater
flexibility in contrasting various data points and identifying patterns. Table 3.2 details the metadata used in the data sets that specify server locations.
Table 3.1: Google infrastructure data sets

Data set name | Description | Principal sources
Server Locations | Known edge caches and points of presence, including Google Global Cache | Calder et al. (2013), ARIN, APNIC, AFRINIC, LACNIC, RIPE NCC
Peer Participants Private | Private peering locations | PeeringDB, Lodhi et al. (2014)
Peer Participants Public | Public peering locations | PeeringDB, Lodhi et al. (2014)
Public Data Centres | High capacity, publicly-known data centres | Google (2013)
Data Centre Network | Google's G-Scale network that links its global data centres | Hölzle (2012)
South-East Asia Japan Cable System (2013) | Submarine communications cable | TeleGeography (2008), Qiu (2013)
Unity Submarine Cable (2010) | Trans-Pacific submarine communications cable | TeleGeography (2008), Qiu (2013)
Table 3.2: Data set metadata

The four data sets (Public Peering, Private Peering, Large Data Centres, Server Locations) draw on the following fields, though not every field appears in every data set:

Name | Short name of the entity.
Description | Longer name and additional information for the entity.
Address | Street address of the entity, including city and country.
City | City in which the entity is located.
Country | ISO 3166-1 Alpha-2 country code of the entity.
Facility IP Range | Internet Protocol version 4 (IPv4) number range associated with this facility.
Host Name | Host name, if known, in the format name.name.name.
Google ASN? | Flagged "Yes" if the node is listed under a Google ASN.
IP Addresses | Google IP addresses identified with this location, comma separated.
CLLI Code | Common Language Location Identifier, used by the North American telecommunications industry to identify the use or location of telecommunications equipment.
RENcode | Server naming element for location.
NPA-NXX | North American area code/exchange associated with a telecommunications facility.
ASN | Autonomous System Number(s) associated with this entity or location.
Year Announced | Year the data centre was publicly announced, if known.
Year Operational | Year the data centre was announced as operational.
Latitude | North–south geographic coordinate.
Longitude | East–west geographic coordinate.
Once the data sets were properly organised, I converted the Excel spreadsheets back to CSV files and imported each as a separate layer in Google My Maps. My Maps preserved my data set structures, allowing categories of data points to be identified graphically in a unique manner. For example, I used colour coding to distinguish between Google servers identified with a Google ASN and those with the ASN of a different network entity. As well, each data point, when clicked, presents more information concerning that location, all drawn from the original data set.
The completed map of Google’s infrastructure as of October 28th, 2013, is located at the
following URL:
https://drive.google.com/open?id=1nXSNhvDo5jaSS1h9gFuqQnRNIqg
3.4 Chapter summary
In this chapter I have detailed the work of data generation required to address questions
concerning the scope and reach of Google’s infrastructure, and the functionality of these
systems. In the following two chapters I describe this infrastructure further in the historical context of Google's growth from its founding to the early-2010s.
4 Extending search
In the previous chapter I discussed the methods used to generate data concerning Google’s
technical infrastructure from the company’s founding to the early-2010s. In this chapter, I draw
upon this data to describe the beginnings of Google, from its founding at Stanford University in
the late-1990s through the launch of its first web-based, search-centred applications to roughly
2005. I describe the origins of the company’s search engine technology in academic practice, its
culture of innovation, and the boundaries of the company’s activities, focused on search and
technical efficiencies. Google’s foundational period established several cultural and operational
norms for the company, many of which Google retained to the time of this writing, while others were put aside.
This is a period, roughly nine years from 1996 to 2005, characterised by the rapid growth of
Google’s search engine traffic and revenue, and the expansion of its search business to new
domains, including maps, Usenet, and other areas. It is during this time that Google began to
transition from a company focused firmly on monetizing content provision (as suggested by Wu)
to what Ciborra would describe as a platform organisation, an entity that will change many of its
best core competencies and its business identity in order to remain profitable. As I explore
further in coming chapters, this transformation is key to understanding Google’s approach to
network neutrality during the decade of the 2000s.
4.1 Origin in academic practice
It is quite possible that Google could not have been created by anyone other than academics, nor anywhere other than at a university. It is well-known that Google was initially a project arising
from company co-founder Larry Page’s graduate research at Stanford University in the late-
1990s. Working with friend and fellow graduate student Sergey Brin, Page’s initial research
interest was not search per se, but the mathematical properties of the link structure of the World
Wide Web (Page, 2008). This interest became more specific as Page turned to creating a crawler
that could index the Web. An interest in search arose when Page and Brin sought a means to analyse what they had captured.
Full-text web search engines of the late-1990s depended on techniques and approaches that were
common to the search engines for proprietary databases and collections in the 1980s and 1990s.
These search engines depended on information provided by the websites themselves, primarily
the page’s textual content and metadata. This approach worked well for curated text collections,
such as legal or technical databases. The web, however, was not a walled garden. It was routine
for website owners and webmasters seeking better search engine ranking for their sites to
manipulate search engines by using misleading metadata, hiding page text, and other tricks. Late-
1990s search engines “trusted” web page content and metadata, a trust that proved to be
misplaced. For example, a search for the term “flower” might return a page or site in which the
term “flower” appeared most frequently, and might rank a page more highly if the term appeared
in the title of the page, as well as the body. Page and Brin understood the weakness of this
approach: results were often shallow, and could easily be controlled by webmasters.
To address this, Page and Brin took a quite different approach, one that understood an essential
fact about the Internet that their late-1990s competitors did not. The pair turned to what Page
would describe as the “other signal” that could be extracted from their newly-built index: the
links between and among web pages. As Battelle points out, Page's approach appears to have been drawn from the practice of academic citation counting (2005, p. 74). In
essence, Page’s system ranked the value of a web page in part based on the extent to which other
pages hyperlinked to it. Girard characterises this approach as within the context of an emerging
“Internet economy of distributed intelligence” (2009, p. 2).
Just as Tim Berners-Lee had created the World Wide Web to address weaknesses he saw in the
contemporary system of academic publishing—such as speed of sharing research results—Brin
and Page created Google as a means to explore the relationships among web content (Battelle,
2005). Page's early experiments in web page linking even allowed links to be followed from the linked work back to the linking page, an aspect of hypertext that has never been fully deployed on the World Wide Web.
The first version of what would become the Google engine was called "Backrub", and was launched in 1996. Brin and Page developed Backrub through a rapid cycle of iteration and release, examining search results and tuning search algorithms until they received the results they
desired. Girard states that this new approach to search result ranking, which would be dubbed
PageRank, “requires highly complex mathematics and involves the integration of several classes
of problems” (2009, p. 16). From the beginning, analysing the relationships among countless
web pages required a great deal of computing power, and could not be implemented without a
mix of programming theory and practical knowledge of network sociology.
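The core intuition of link-based ranking can be illustrated with a short power-iteration sketch. This is the textbook simplification of the idea, offered for illustration only; it is not Google's production algorithm, which, as Girard notes, involved far more complex mathematics at web scale.

```python
def pagerank(links, d=0.85, iters=50):
    """Rank pages by the links pointing to them, in the spirit of PageRank.
    `links` maps each page to the list of pages it links to (use an empty
    list for a page with no outbound links)."""
    pages = sorted(set(links) | {t for ts in links.values() for t in ts})
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iters):
        # Each page keeps a baseline share (the "random surfer" teleport).
        new = {p: (1 - d) / n for p in pages}
        for src in pages:
            targets = links.get(src, [])
            if targets:
                # A page passes its rank, damped, to the pages it links to.
                share = d * rank[src] / len(targets)
                for t in targets:
                    new[t] += share
            else:
                # A dangling page spreads its rank evenly across all pages.
                for t in pages:
                    new[t] += d * rank[src] / n
        rank = new
    return rank
```

In a tiny web where pages "a" and "b" both link to "c", page "c" accumulates the highest rank, just as a heavily cited paper accumulates scholarly authority, which is the analogy Battelle draws.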
The technical requirements of their project necessitated an infrastructure that could scale
effectively, and this was important to Page and Brin from the beginning of their work together.
Battelle writes that the partners understood that the usefulness of their search tool would only
increase as the web grew, and in fact this principle explains in part Google’s desire to index and
rank increasingly large amounts of data over the company's history. The name "Google" was in fact inspired by "googol", the term for the number 1 followed by 100 zeros, reflecting the founders' appreciation for the sheer scope of their project (Hanley, 2003).
Google’s first servers were scraped together from various sources. Battelle describes how Brin
and Page “begged and borrowed” Google’s first server farm into existence. Like many websites,
Google came to rely on the open source operating system Linux (2005, p. 355). The project used
a considerable amount of Stanford University’s network bandwidth, at one point peaking at
around 50% of all network usage on campus (Battelle, 2005, p. 78). Google’s crawling spider
also alarmed the owners of the pages it indexed, with one art museum contacting Stanford to
object to what it thought was site scraping, a process of essentially stealing the content of a target
site.
Brin and Page progressed with their research, understanding that the practice of annotation and citation was key to Page's concept for Internet search. This approach to search
would prove to be more useful to searchers than the indexing methods of the then-leading web
search engines, AltaVista and Excite. Writes Lawrence Lessig in his book Free Culture: How
Big Media Uses Technology and the Law to Lock Down Culture and Control Creativity (2004):
Search engines are a measure of a network’s intimacy. Google brought the Internet much
closer to all of us by fantastically improving the quality of search on the network.
Specialty search engines can do this even better. The idea of “intranet” search engines,
search engines that search within the network of a particular institution, is to provide
users of that institution with better access to material from that institution. Businesses do
this all the time, enabling employees to have access to material that people outside the
business can’t get. Universities do it as well. (2004, p. 48)
Even at this early point in the company’s history, Google had already established aspects of its
culture that would remain central to the company through the 2000s: a foundation in and reliance
on research and engineering, an obsession with scalability of both systems and functionality, an
iterative approach to software design, and the utilisation of consumer-grade components that
were easily interchangeable. The Google platform had begun to be defined.
4.2 Advertising
Ceruzzi (2003) suggests that Google’s success illustrated something fundamental about the Web:
that it had emerged at the end of the 1990s as something incomplete and flawed. While Internet
use rose dramatically in the late-1990s and websites proliferated, attempts to monetize the new
environment were, at best, mixed. Money flowed from investors into new dot-com start-ups, and
stock valuations increased dramatically, but very few Internet companies were profitable. It was
quite common during this period for business models from print and broadcasting to be applied
to the Internet, with mixed results. As we have seen, pre-Google search engines such as
AltaVista and Excite relied on information management paradigms from analogue media,
ranking pages based on the information found on the pages themselves. In the late-1990s Yahoo!
employed hundreds of indexers to catalogue the Web into subject-based hierarchies in a manner
ultimately not too dissimilar from a printed Yellow Pages telephone directory (Gauch, Chaffee,
& Pretschner, 2003).
Search in the late-1990s was also dominated by the portal paradigm, with a business model to
attract visitors to a website and keep them there for as long as possible, interacting with a variety
of content, all framed by various sorts of advertising. The principal advertising format was the banner, a form of display advertising that was graphics-heavy and, by the turn of the century,
often animated. Site owners attempted to maximise page views within the site, and display as
many ads as possible to a visitor.
While Excite and Yahoo! attracted millions of visitors a day, the search engine business in the
late-1990s was not profitable (Levy, 2011). Users visited sites and viewed content, but they
rarely clicked on the prominent banner ads that stretched across the top of pages. In fact, by the
time Google launched in the late-1990s, click-through rates were declining, leading some to suggest that advertising would never be an effective business model for the web, and that other
methods of monetization, such as subscriptions and micropayments, would have to emerge for
the web to remain viable (Oberoi, 2013).
Page and Brin looked to monetize the Google search technology from early in their work
together, in 1996 and 1997. They initially met with Yahoo!, AltaVista and Excite, but could not
strike a deal with any of the companies to license Page and Brin's search technology. Levy (2011) notes that Excite's leadership had thought Backrub was "too good", as relevant search results would impact the "stickiness" of Excite and reduce advertising views. With little
commercial interest from existing companies in purchasing their search engine technology, Page
and Brin incorporated Google as a for-profit business in 1998.
Google broke decisively, and sometimes inadvertently, with the search engine business models
of the late-1990s. For example, Brin’s minimalistic Google home page—containing the
company’s logo, a search box, and little else—was not designed as a counterpoint to the busy
portal landing pages of Excite and Yahoo! Rather, Brin had limited knowledge of HTML and
web design, and was not able to create anything more complex.
Google was seeing significant traffic for its search engine, but had no revenue to speak of. The
company did attempt to sell its search technology to the enterprise market, with little success
(Battelle, 2005). Google’s leadership considered selling advertising space to DoubleClick, an
advertising network focusing on banner advertising. Instead, the company decided to explore a
strategy for advertising that extended its approach to search results, allowing advertisers to
purchase ads to appear in the results pages for specific keyword searches. As well, these ads
were plain text, and limited in length. It was an approach very different from the banner-based
ads of the existing search services. Brin and Page were aware that the approach might not work,
and decided that their fall-back would be DoubleClick-style banner ads (Levy, 2011).
Google initially sold ads in a conventional fashion, on a cost-per-thousand-views (CPM) basis through traditional sales channels. While there was some interest, as Battelle writes, "they didn't
scale” (2005, p. 124). Google considered moving to DoubleClick, but the online advertising
market collapsed in 2000 with the end of the dot-com bubble. Google therefore looked to the
automated, paid placement model of GoTo.com. Unlike GoTo, Google did not mix paid
advertising with search results; in fact, ads were clearly distinguishable from search content.
However, Google did use an automated, self-service system for advertisers to buy ads.
Google's new advertising system, called AdWords, launched in October 2000. Advertisers initially paid for impressions (displays of their ads), but Google's engineers discovered one of
the best-kept secrets of the advertising business: advertisers generally cannot usefully evaluate
the effectiveness of an advertising campaign (Girard, 2009), and in 2001 Google started charging
by click-through. By this time Google was the largest search engine on the web, with 60 million
searches a day (Battelle, 2005).
Google’s embrace of paid advertising was a key moment in the company’s development. Once it
had chosen a commercial path, attempts to monetize its users and their data were inevitable. As
we will see below, Google’s advertising business became extremely lucrative. Most important
for my research, the embrace of advertising strongly influenced Google’s engineering strategy.
While providing revenue that allowed technical innovation and expansion, advertising also drove
the substance of the company’s infrastructural changes, constantly reaching out to enrol new
users and their data, retaining greater and more nuanced information, and providing services
more quickly. As I will argue in Chapter 6, Google’s advertising underpins the company as a
platform, one that utilises massive computing power to harness user labour through automation.
4.3 Building the “innovation machine”
Google became emblematic of changes to the technology product development cycle throughout
Silicon Valley in the 2000s. In his 2009 book The Google Way, Girard describes the Google of
that decade as an “innovation machine” (2009, p. 75), its generation’s example of “reinventing
management methods” (2009, p. 1). He argues that Google “was created to buck the system” and
that Page and Brin drew on management methods that ran contrary to what they were told were
best practice by their investors and many other Silicon Valley professionals (2009, p. 24). Girard
described Google as an "environment for innovation" (2009, p. 16) that encouraged serial
entrepreneurship, one in which success bred further success through the creation of
commercially viable products. Girard argues that three aspects of Google's management practice were key: streamlining product life-cycles, systematising innovation, and aggressively acquiring technologies and talent.
Girard writes of the development of Apple's iPhone, and the modifications to Google's Android operating system in response to it, as examples of how much "time to competition" shrank during the 2000s. Google rejected what it considered to be traditional technology management practices early in its history, with the company's management team choosing to simplify the
product development process as much as possible. In September 2001, Page and Brin fired or re-
assigned all of the company’s engineering managers, replacing them with a system of small
engineering teams that reported to the founders or other senior managers on a regular basis.
Projects competed with one another for resources. Writes Battelle:
The idea of company founders being unwilling—or unable—to give up power is not new.
In fact, it’s so common in Silicon Valley that it’s got a name: entrepreneur’s syndrome.
But while Page and Brin’s unique approach to management angered some, others
blossomed under it, and the company certainly continued to innovate. (2005, Chapter 6).
Page and Brin put in place a system of project peer review, which dramatically sped up project
approvals and reduced paperwork (Girard, 2009, p. 78). In evaluating projects, two criteria were
emphasised over any other: user interest (which could be monetized with advertising) and
technical feasibility. If it was possible to accomplish, and consumers wanted it, it was prioritised.
Compliance with a business plan or strategy was a minor consideration (Girard, 2009, p. 78).
Google's product lifecycle was also reflective of Agile practices in software development. In the 2000s, Google became well known for releasing products to the public with an extended "beta" cycle, a method that became popularly known as "release early and often" (Girard, 2009, p. 84). It appealed to those Internet users enamoured of the newest web-based technologies, who would happily use Google's "beta" platforms and provide the company with valuable feedback. Girard compares this release method to the notion of
bootstrapping, which originated in the Augmentation Research Center at the Stanford Research Institute (Lenoir, 1997). Google tested new products with increasingly large circles of users: first the
development team itself, then a broader circle of internal users, and then with the participants of
the Trusted Tester Program, a sort of private club of Google employees.
Girard compared Google's products to a Swiss Army knife: each product went through a separate development cycle and was released incrementally as features became ready. The release of a new version of Gmail, for example, would not replace the Gmail that users were already using.
Page and Brin also put in place the now well-known personal-time allotment: Google staff could
work on their own projects during 20 percent of working hours. According to former Google
executive Marissa Mayer, this made innovation within the company everyone’s business in an
exceedingly simple way (Girard, 2009). The personal time allotment had the effect of
systematising improvisation, tinkering, and hacking with existing and new technologies.
In 2001, Google began to form alliances with other companies, while acquiring start-ups of various sizes, as detailed in Table 4.1. These acquisitions allowed Google to immediately move into new areas of search, and
would later form the basis for new web applications. Google acquired 13 companies between
2001 and 2004, and another ten in 2005 alone. In all, Google acquired over 50 companies in the
2000s (Girard, 2009).
Google’s interest in partnership did not extend to leadership of the company, however. Google
approached outside investment differently than many Internet start-ups, placing a heavy
emphasis on the founders retaining control over the company. Google received venture capital
from two different companies, rather than a single principal investor, with each firm taking an
equal share in the company (Girard, 2009). This gave Google management significantly more
influence over the company’s strategic direction.
For its 2004 initial public offering (IPO), Google again took an unconventional route, with the company's founders claiming to look to the long term. They agreed with then-CEO Eric Schmidt to work together running the company for twenty years. They also established a two-tier voting system, one that would make it impossible for shareholders to remove the management triumvirate from power (Girard, 2009). The IPO itself was managed in a way that marginalised the investment banks that typically profited the most from IPOs at the expense of small investors. Using a method for allocating shares called OpenIPO, all investors were able to bid on stock at a price they considered fair, with the objective of attracting investors who were committed to Google's long-term success.
As I explore in detail in Chapters 6 and 7, Google’s product design methods, embrace of alliance
and acquisition, and emphasis on innovation and tinkering (diverge and bricolage) can be seen as
the beginning of an attempt to institutionalize aspects of what Ciborra had earlier described as
essential characteristics of the platform organization: that a technology institution must
constantly reinvent itself in order to generate profits and survive.
Table 4.1: Google acquisitions, 2001 to 2004

Date | Firm | Business focus
February 12, 2001 | Deja | Usenet
September 20, 2001 | Outride | Web search engine
February 2003 | Pyra Labs (Blogger) | Weblog software
April 2003 | Neotonic Software | Customer relationship management
April 2003 | Applied Semantics | Online advertising
September 30, 2003 | Kaltix | Web search engine
October 2003 | Sprinks | Online advertising
October 2003 | Genius Labs | Blogging
May 10, 2004 | Ignite Logic | HTML editor
July 13, 2004 | Picasa | Image organizer
September 2004 | ZipDash | Traffic analysis
October 2004 | Where2 | Map analysis
October 27, 2004 | Keyhole, Inc | Map analysis
4.4 First infrastructures
Girard, Levy and other authors have identified several characteristics of Google’s early
infrastructure as central to the company's continuing ability to grow and adapt as it added new
services and capacities. Girard (2009) argues that the limitations of this first infrastructure
became, over the long term, critically important assets for Google. The principal characteristics
of this infrastructure appear to have been redundancy, reusability, and scalability. Battelle writes
of the importance of Google’s distributed infrastructure in the long-term success of the company:
[D]istributed computing… would soon become all the rage in corporate environments.
Even IBM realized its value, introducing a line of cheap servers it called blades in early
2002. But Google took it many steps further, developing its own operating system on top
of its servers, and even customizing and patenting its approach to designing, cooling, and
stacking its components. While nobody was paying much attention to Google’s approach
to computing back in 2000, this approach would become the company’s core defensible
asset by the time it was ready to go public in 2004. (2005, p. 130).
As discussed in the previous section, Google’s first servers were cobbled together by Page and
Brin in 1998. In retrospect, that system seems like a somewhat unlikely technical model for what
would go on to become one of the largest commercial network infrastructures in the world. Levy
(2011) suggests that if the two founders had had more money, they might have purchased
commercial servers and stored them neatly in expensive racks. Instead, Page and Brin were
forced to build their systems using both consumer components and surplus enterprise servers,
quickly and inexpensively. Page and Brin could not rely on a large number of standard, high
capacity commercial servers. Because Google had to cobble together its first infrastructure, the
founders came to use lowest common denominator equipment: off-the-shelf components,
including a variety of commercial servers and low-cost consumer boards and hard drives
(Battelle, 2005). The founders' office at Stanford was filled with surplus machines, begged, borrowed, or purchased on sale. Short of space, the founders acquired nothing superfluous.
Google maintained this approach to its systems long after its first server farm, and this strategy
had a long-term benefit for the company: since components were low-cost and interchangeable,
if one of them failed, it could be easily and inexpensively removed and replaced. Space was
limited at Stanford, so Brin devised a means of storing as many servers as possible in a small
space that would also facilitate the equipment being moved easily as new equipment was added.
Brin’s solution was to use rack cabinets with casters, allowing the servers to be moved from
room to room easily (Girard, 2009).
These characteristics also meant that it was easier to scale the emerging system, placing Google
clearly in the realm of distributed computing, which became a popular model for large-scale web
application design in the early 2000s. To take advantage of a large number of interchangeable,
networked components, the company developed its own Linux-based operating system and file
system. Google’s MapReduce programming model was created to handle large data sets in
parallel across server clusters, perfect for processing large-scale web indexing and large numbers
of search results (Girard, 2009). BigFiles, the first version of what would become the Google
File System, was a distributed and high-performance file system. Google Web Server (GWS)
was a web server based on Linux that supported most of the company’s online services.
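The programming model described above can be illustrated with a minimal, single-machine sketch of the MapReduce pattern, using the canonical word-counting example. This is an illustration of the idea only, not Google’s distributed implementation: in Google’s system, the map, shuffle, and reduce phases ran in parallel across thousands of machines.

```python
from collections import defaultdict

# Minimal single-machine sketch of the MapReduce pattern (illustrative only):
# map emits key-value pairs, a shuffle groups them by key, and reduce
# aggregates each group.

def map_fn(doc):
    # Emit (word, 1) for every word in a document.
    for word in doc.split():
        yield word.lower(), 1

def reduce_fn(word, counts):
    # Sum the partial counts for one word.
    return word, sum(counts)

def mapreduce(docs):
    groups = defaultdict(list)
    for doc in docs:                       # map phase
        for key, value in map_fn(doc):
            groups[key].append(value)      # shuffle: group values by key
    return dict(reduce_fn(k, v) for k, v in groups.items())  # reduce phase

counts = mapreduce(["the web the index", "index the web"])
# → {'the': 3, 'web': 2, 'index': 2}
```

In the distributed setting, the same decomposition is what made Google’s fleet of interchangeable commodity machines effective: any map or reduce task could be rerun on another node if a cheap component failed.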
For its network design, Google drew on the work of Jim Reese, a neurosurgeon whose career
included years in medical computer science. He designed Google’s network to use computational
power for “what they do best, repetitive tasks”, allowing a network that “quickly reconstructs
itself” (Girard, 2009, p. 125).
In 1999, Google had about 300 servers, all located in a colocation facility in Santa Clara,
California (Levy, 2011), with a second colocation centre established in 2000. In 2002, author Steven Levy was given a tour of what was then Google’s single data centre, co-located at Exodus in San Jose, California. As Levy later wrote, Google would soon become highly
secretive concerning its infrastructure, but in 2002 Jim Reese, tasked by Google to oversee the
facility, was happy to explain that Google had approximately 10,000 servers (Levy, 2011). By
this point, Google was handling 150 million searches a day. Levy describes seeing the servers of
Google competitors eBay and Yahoo! neatly organised in racks; Google’s servers were without
cases. Reese confirmed that unlike the rest of the industry, Google used “white box” (unbranded)
servers running Linux.
Levy suggests that from the company’s beginning, exponentially scaling up Google’s
infrastructure was a priority. We know that between 2001 and Google’s initial public offering in
2004, the company expanded extremely rapidly. Google’s revenue was US$347 million in 2002,
but had grown to nearly US$2 billion by 2004 (Levy, 2011). Levy suggests that profit
fluctuations during this period indicate significant investments in human resources and
infrastructure; Google had clearly decided in this period to use the high margins of their search-
centred advertising business to improve their core services and create new ones, further
extending the reach of their advertising platform while creating new revenue streams.
When Brin and Page interviewed Eric Schmidt for the CEO position at Google in 2001, they were critical of Schmidt’s decision at Novell to build caching proxies for Internet traffic. Battelle
quotes Schmidt as saying that “They argued that this was the stupidest thing they’d ever heard
of—you wouldn’t need it. I was just floored” (2005, Chapter 6). Schmidt would later admit that
“everything [Page and Brin] said was right” (quoted in Battelle, 2005, Chapter 6).
The company also started leasing space at a colocation facility in Atlanta in 2003. It was during
this period that Google developed a strategy to bundle servers into intermodal (shipping)
containers, further indicating the company’s interest in rapidly scaling up its server infrastructure
on a continental basis. By 2005, CEO Schmidt considered Google’s infrastructure—its data
centre capacity and bandwidth—to be the company’s core asset (Battelle, 2005).
4.5 Search extends into new domains
With search advertising established as a highly profitable and growing business, the new products Google developed and launched in the first half of the 2000s—some created in-house, others the result of acquisitions—exemplified various aspects of Google’s internal culture of
innovation and hacking. During the first half of the 2000s, Google’s new services typically began
as engineering projects that did or could provide new domains for search and advertising. Google
described these projects internally as Googlettes, new businesses within the company that would
be “start-up[s] within the start-up” (Kottke, 2003).
Gmail was perhaps the most prominent of these internal “start-ups”. Google employee Paul
Buchheit initially conceived of Gmail as a web-based email client that used JavaScript
extensively to create a user interface much faster than other web-based email systems, such as
the popular Hotmail. Begun in August 2001 by Buchheit, Gmail was soon championed by
Google founders Larry Page and Sergey Brin, who saw “email as a search problem” (Levy, 2011,
p. 169) that could be used as a platform for advertising.
In February 2001, Google acquired DejaNews, an archive of current and past Usenet messages,
with approximately 500 million postings. While little noticed at the time, Battelle (2005)
suggests that the purchase of DejaNews was significant as the first expansion of Google’s search
technology beyond third party web pages to a repository of content internal to Google. In 2001
Google also added public telephone directory information to its index, along with image search.
By the end of 2001, Google’s index encompassed over 3 billion content objects (Battelle, 2005).
Between 2001 and 2004, Google launched other services that expanded the domain of the
company’s search offerings. Blogging platform Blogger was acquired with Pyra Labs in 2003
and used as a test bed for Google’s new AdSense program of targeted content advertising. Image
host Picasa was purchased in 2004. Gmail was made public in a limited beta in April 2004, and
in October Google acquired Where 2 Technologies, the start-up that provided the basis for
Google Maps, launched in February 2005. Google had acquired another important component for
Google Maps (and Google Earth) in 2004 with the purchase of Keyhole Inc., which created
geospatial data visualisation applications.
Battelle argues that the demand for news and other information following the September 11th
2001 terrorist attacks demonstrated to Google the flexibility of the infrastructure it had been
building. Many news websites crashed under the demand for information during the attacks, but
Google did not, making available cached copies of otherwise unavailable news sites. This led
directly to the creation of news aggregation site Google News, which launched in September
2002.
Google Maps and Gmail extended Google’s search into new areas as high-performance, low-
latency web-based applications, running in a browser and using scripting languages such as
JavaScript. Maps and Gmail also depended on connections to software processes and databases
running on remote servers. The success of these applications depended not just on the speed of
the web application itself running in the browser, but also on the speed with which data flowed
between Google’s remote servers and a user’s browser. Levy writes that Page was obsessed with
latency and application speed, important for the success of web-based applications that I discuss
in Chapter 5.
As Google expanded its search offerings, it also devoted significant resources to growing and
diversifying all aspects of its technical infrastructure, moving data and server-side applications
closer to users through acquisition of fibre optic backbone and the creation of data centres.
Google’s extension of search to new domains indicated that the company, though highly
profitable in the search advertising segment, understood that it must become a more varied
platform, able to provide additional services and accompanying revenue streams in order to
remain competitive. The company’s product development approach embraced individual
innovation and tinkering, a theme which we will return to later as essential to the notion of
Ciborra’s platform organisation.
4.6 Google participation in the policy process
As Google’s revenue and services grew, the company dealt with an increasing number of
lawsuits, a fact that the company leadership saw as a consequence of Google’s importance. Levy
(2011) quotes Google policy leader Mike Jones who stated that, “It’s as if Google took over the
water supply for the entire United States… It’s only fair that society slaps us around a little bit to
make sure we’re doing the right thing” (2011, p. 328). Google retained legal counsel early in its
existence: the company’s current Senior Vice President of Corporate Development and Chief
Legal Officer, David C. Drummond, was Google’s first outside legal counsel.
While expanding its legal resources in the first half of the 2000s, Google appears to have largely
avoided the broader realms of public policy and regulation during this time. OpenSecrets.org, a
website run by The Center for Responsive Politics (2015) that attempts to track lobbying
expenditures by American corporations, estimates Google’s spending on lobbying to have been
only $80,000 in 2003, $180,000 in 2004, and $260,000 in 2005. These are startlingly low
amounts of expenditure for a company with over $6 billion in revenue in 2005. As we will see in
Chapter 5, Google’s lobbying activities expanded greatly after 2005 (Molla, 2014).
4.7 Chapter summary
In this chapter I have discussed the early history of Google to approximately 2005, describing an
infrastructure designed to index the web, deliver search results, and support Google’s growing
advertising business. As I explore in coming chapters, these systems also provided additional
affordances that Google may not have anticipated, and most certainly did not explicitly plan for
during its early technical build out. As we see in the next chapter, in addition to expanding
content offerings that would support advertising, Google also laid the technical foundation for
additional advertising-supported services and content platforms that would be quite different
from the extension of search that characterised the first half of the 2000s.
5 The policy-active hyper giant
In the previous chapter I presented Google’s early history to the mid-2000s, a time when the
company’s focus was on developing, extending, and monetizing its search engine functionality
through advertising. Google then began to broaden the domains of its search advertising services,
first by presenting a full archive of Usenet, then through maps and the Blogger content platform.
In this chapter I address the changes to infrastructure and systems, and the affordances they
provided, during the period of its network neutrality engagement. I discuss the influence that new
Google services in the second half of the 2000s—specifically video storage and streaming and
web-based applications—had on the company’s infrastructural requirements. I describe Google’s
parallel participation during the same period in the network neutrality policy debate, with a
specific emphasis on the Google-Verizon joint policy statements of 2010. I describe the rise of
the Android mobile operating system, which necessitated new relationships with the
telecommunications industry. Finally, I discuss Google’s infrastructure as it had emerged by the
early 2010s in light of Labovitz’s (2010b) notion of the hyper giant.
5.1 Google apps influence infrastructure
In this section I explore the expansion of Google’s services in three areas—online video, web-
based productivity applications, and voice communications—that influenced the company’s
infrastructure development to further limit latency, increase bandwidth, and establish edge
caching. As I discuss further in this chapter and the next, these services were important to the
transformation of the company as it engaged with the network neutrality policy process in North
America.
5.1.1 Google Video and YouTube
A key influence on Google’s infrastructure development in the late-2000s and early-2010s was
online video storage and streaming. Infrastructure was a driver of both the sale of YouTube to
Google in 2006, and on the build out of Google’s content delivery network beginning in the
years after.
Google’s first effort in online video was Google Video, begun in 2003 as an extension of the
company’s existing search engine that indexed mainstream TV programs, news clips and movies
(Levy, 2011). Google Video also licensed high-quality video content that could be presented
through the site, supported by advertising, or rented or sold to consumers. Google did not
conceive of Google Video as a site for user-created content.
Google Video was challenged in many key respects by the launch of YouTube in February 2005,
the brainchild of former PayPal employees. Unlike Google Video, which required a separate
video player application, YouTube displayed video on a web page as an embedded Flash object.
While Google Video emphasised licensed and mainstream video content, YouTube focused on
user-created content, allowing the easy upload of video as well as the embedding of video on
third-party web pages. And YouTube also had what could be described as a tolerant approach to
the sharing of copyrighted content. Rather than receiving permission from copyright holders
prior to allowing content to be uploaded, YouTube took the stance that they would respond to
complaints, but not proactively block copyrighted content. Levy writes that one of YouTube’s
founders, Steve Chen, had a “canny interpretation of the Digital Millennium Copyright Act”
(2011, p. 244). YouTube became a destination for short-form video (contributions needed to be
ten minutes or less), both professional and amateur.
Google Video began allowing uploads from users in mid-2005, policing them for copyrighted
content much more stringently than YouTube. By late-2005, Google was also ready to launch the
Google Video Store, with a small variety of videos available for purchase. But by the spring of 2006, YouTube was streaming 25 million videos a day, three times what was being presented by
Google Video (Levy, 2011).
Quoted in Levy, Google’s counsel David Drummond stated that the company had expected the
scope and reach of Google’s infrastructure to make Google Video a success, but that YouTube
“had beaten us—we had underestimated the power of user-generated content… We imagined
that if you put [YouTube] on the Google platform, and, you know, with Google distribution,
Google machines, and everything… you’d really, really accelerate.” (2011, p. 247).
By the fall of 2006, YouTube was the global leader in online video. But the service was
becoming overwhelmed by traffic, and did not have the capacity to scale out its content delivery
network, despite receiving new rounds of venture capital exceeding $8 million in March 2006
(Crunchbase, 2016). YouTube’s founders wanted to sell the company, and both Yahoo! and
Google showed interest, with Google being the successful suitor. Eric Schmidt was swayed to
some extent to consider the purchase by the YouTube founders’ insistence that their objective
was to “democratize the video experience online”, something that aligned with Google’s vision for the World Wide Web. Google paid $1.65 billion for YouTube. Schmidt stated at the time,
“This is the next step in the evolution of the Internet” (Levy, 2011, p. 248), although it was easier
to see the purchase as Google taking leadership of the potentially highly lucrative video
advertising segment. The YouTube acquisition caused Google to reconsider some aspects of its
emerging corporate culture. Google Video had been a top-down initiative, run by a lawyer
concerned with copyright. YouTube had been a start-up. After the acquisition, Google tried to
minimise its interference with the existing culture of YouTube.
Levy writes that the YouTube founders began working with Google’s engineering group soon
after the purchase to utilise Google’s data centres and fibre networks to optimise streaming
video. I describe the results of this work later in the chapter. Both adequate bandwidth and low
latency are equally important to the delivery of high-definition video over the Internet (Hruska,
2015). In a 2007 study of YouTube’s infrastructure, Gill, Arlitt, Li, and Mahanti (2007) argue
that improved caching would be beneficial for YouTube, improving both scalability through the
distribution of content, and end-user experience, as caching would decrease latency.
5.1.2 Google web applications
A second important driver of Google’s infrastructure development during this time was what
eventually became known as Google Apps, a suite of web-based “productivity” applications
including a word processor (Google Docs), a spreadsheet (Google Sheets) and a presentation
program (Google Slides), along with Gmail, Google Calendar and other applications. The
Google Apps suite was similar to Microsoft’s bundled Office offering, long the industry leader.
However, unlike the Microsoft Office suite of the time, the Google Apps applications were
entirely web-based, operating within a web browser while storing documents in Google’s remote
servers. This allowed documents to be shared with other Google Apps users and worked on
collaboratively.
This first Google suite of web-based office applications originated from two products: Google Spreadsheets, based on technology acquired with the purchase of 2Web Technologies in 2005, and Writely, an acquired start-up (Chang, 2005). Both applications were asynchronous web applications
(Maryka, 2009), written in JavaScript and HTML that ran locally on the client side in a web
browser, while retrieving and using data asynchronously with remote server applications. The
programming technique used to create this sort of web application was called Ajax, short for
Asynchronous JavaScript and XML, although a number of different technologies can be used to
create dynamic web applications (w3schools.com, 2007). Ajax applications allowed web pages
to be updated very quickly, permitting browser-based applications to present similar
functionality to a desktop application.
The Google Apps suite changed significantly throughout the second half of the 2000s. By late-
2010, the suite included Google Docs, Sheets, and Presentation, and other Ajax applications such
as Gmail, Google Calendar, Google Sites (a website editor), and Google Talk (Milo, 2010).
Google positioned Google Apps as an alternative to Microsoft Office (Perez, 2009).
While their asynchronous nature meant that connection quality could vary somewhat between the
client web browser and the application server, Ajax applications of this period performed
optimally when a number of network characteristics were in place. Technical experts of the mid-2000s (Almaer, 2005; Castelein, 2005a; Rotem-Gal-Oz, 2006) suggested that latency was the key constraint on the usability of web applications. Castelein suggests that in the network environment of the mid-2000s, latency accounted for 80% of Ajax application downtime and speed (bandwidth) for 20%. Almaer (2005) and Castelein (2005b) both suggest that Ajax
applications would benefit from utilising content delivery networks to reduce latency of web
applications. As we will see later in this chapter, caching was a key component in the
infrastructure Google built in the late-2000s and early-2010s.
5.1.3 Voice
A third key driver of Google’s infrastructure build-out was voice communications.
Seeing the popularity of Skype in 2007, Google product manager Wesley Chan decided to
explore purchasing a similar telecom start-up, GrandCentral (Levy, 2011). Levy speculates that
the purchase was attractive primarily because it would drive usage of the web, and therefore
indirectly increase Google’s search traffic, but it seems more likely that the purchase aligned
well with an emerging strategy for a suite of integrated telephony-related products and services.
Larry Page initially objected to the purchase, in part because it would worsen relations with
carriers. According to Levy, Chan argued that “they [telcos] already hate us—what’s the
downside?” (quoted in Levy, 2011, p. 234), referring to Google’s participation in the FCC’s 700
MHz auction that I detail in the next section. GrandCentral was acquired by Google on July 2,
2007, for US$95 million.
Google’s plans for GrandCentral, to be redubbed Google Voice, soon became clear. Page
suggested that it become an Android application for voice communications, essentially allowing
anyone with an Internet connection and an Android device to make free phone calls,
circumventing mobile carrier voice channels. During this period Google turned down an
opportunity to purchase Skype from eBay, in part because Skype’s peer-to-peer technology
would be unnecessary given Google’s substantial network and server infrastructure. When
Google Voice launched in March 2009, it featured integration with Gmail and Google Calendar.
Park (2008) indicates that voice over Internet platforms require several favourable network
characteristics in order to meet user expectation for call quality, including minimal packet loss,
low latency, and limited jitter (packet delay variation).
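Jitter, the last of these characteristics, can be illustrated with a simplified calculation: the average variation between the delays of successive packets. The delay values below are hypothetical, and production VoIP stacks typically use the smoothed interarrival-jitter estimator defined in RFC 3550 rather than this plain average:

```python
# Simplified jitter estimate: the mean absolute variation between successive
# one-way packet delays. (Real VoIP implementations typically use the
# smoothed interarrival-jitter estimator of RFC 3550 instead.)

def mean_jitter_ms(delays_ms):
    diffs = [abs(b - a) for a, b in zip(delays_ms, delays_ms[1:])]
    return sum(diffs) / len(diffs)

# Hypothetical one-way delays (ms) for six consecutive voice packets:
jitter = mean_jitter_ms([40, 42, 39, 45, 41, 40])
# → 3.2 ms
```

A network can have low average latency yet still deliver poor call quality if this variation is large, since voice packets arriving unevenly must be buffered or dropped; this is why Park treats jitter as a distinct requirement alongside latency and packet loss.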
Video streaming, web applications and voice communication all required a low latency, high
bandwidth network, along with edge caching to function optimally. At the conclusion of this
chapter I present a description of the Google infrastructure created in the late-2000s and early-
2010s, in large part influenced by the requirements of YouTube, Google Ajax, and VoIP
applications.
5.2 Google and Internet governance
The new service offerings from Google described above required not only access to high-bandwidth, low-latency networks, but also access to consumers reachable only through last-mile retail ISP networks. During the second half of the 2000s, as Google was expanding its service offerings in
web applications, online video and other areas, the company also became an active participant in
policy and regulatory processes, and made consistent and relatively frequent statements in
support of the concept of an open Internet.
Tim Wu contextualised Google’s support of network neutrality during this period in his 2010
book The Master Switch. Wu argues that communications and technological systems tend to
follow a long cycle during which open systems invariably become closed. Wu describes how
the Bell system grew to dominate the North American telephone market in the early-20th century,
followed by the increasing concentration and centralization of ownership of the broadcasting
system. The Internet, Wu advises, could see a similar fate.
Wu describes the danger of Google’s business appearing to depend almost entirely on what he
described as “a set of ideas” expressed in open Internet protocols (2010, p. 282). He warns that
Google’s lack of vertical integration—controlling both content creation and content
distribution—made the company vulnerable to gatekeeping from Internet service providers. A retail ISP might simply choose not to make Google available to its customers, perhaps striking
an exclusive carriage agreement with a competing search engine. ISPs could act as what Wu
calls a “master switch”, blocking Google from its users.
5.2.1 Net neutrality controversies
By the mid-2000s, the bulk of the smaller Internet service providers had been purchased by large
incumbent telecommunications and cable companies, resulting in functional monopolies or
duopolies in most North American communities (Malonis, 2002). By 2005 it was clear that
incumbent American cable and telecommunications ISPs were examining ways to monetize their
new market positions. AT&T CEO Edward Whitacre stated this very clearly in 2005 when he
said:
How do you think they’re [Internet content providers] going to get to customers?
Through a broadband pipe. Cable companies have them. We have them. Now what they
would like to do is use my pipes free, but I ain’t going to let them do that because we
have spent this capital and we have to have a return on it. So there’s going to have to be
some mechanism for these people who use these pipes to pay for the portion they’re
using. Why should they be allowed to use my pipes? (Quoted in O’Connell, 2005)
The “they” referred to by Whitacre were large Internet content providers, including Yahoo! and
Google; his assertion that they were using network resources for free was contentious.
Whitacre’s declaration caused significant concern, most notably at the Federal Communications
Commission and among Internet content providers. The FCC imposed network neutrality rules
on AT&T as a condition of its merger with SBC, and also adopted a Broadband Policy Statement
that applied to cable, DSL, and other broadband providers in August 2005 (Federal
Communications Commission, 2005). While the Commission indicated that it would incorporate the statement into future policymaking, it did not have the weight of an FCC rule. Stating that the
“Commission has a duty to preserve and promote the vibrant and open character of the Internet
as the telecommunications marketplace enters the broadband age” (Federal Communications
Commission, 2005) the FCC adopted the following four principles:
1. To encourage broadband deployment and preserve and promote the open and
interconnected nature of the public Internet, consumers are entitled to access the lawful
Internet content of their choice.
2. To encourage broadband deployment and preserve and promote the open and
interconnected nature of the public Internet, consumers are entitled to run applications
and use services of their choice, subject to the needs of law enforcement.
3. To encourage broadband deployment and preserve and promote the open and
interconnected nature of the public Internet, consumers are entitled to connect their
choice of legal devices that do not harm the network.
4. To encourage broadband deployment and preserve and promote the open and
interconnected nature of the public Internet, consumers are entitled to competition among
network providers, application and service providers, and content providers. (Federal
Communications Commission, 2005)
The FCC offered the qualification that “all of these principles are subject to reasonable network
management”. While the FCC did not draft rules at this time that reflected the Broadband Policy
Statement, it did transform its principles into an enforceable standard through an adjudicatory process
involving Comcast Corporation, the second largest retail ISP in the United States, which I
discuss below (Ross, 2008).
5.2.2 The open Internet
Beginning in 2005, Google became more focused on policy issues than it had been in the first
half of the decade. It is unlikely that a single event triggered Google’s engagement with the
policy process. Levy only identifies the “increasingly hostile Washington environment” that
“required some concerted action” (2011, p. 328). It would be convenient to identify the trigger as the 2005 published statements of then AT&T CEO Edward Whitacre, but it is likely that a confluence of concerns about the company’s activities prompted Google’s engagement with policy.
In addition to worries about network neutrality, Google faced considerable criticism for its
approach to user privacy following the April 2004 launch of Gmail (Ott, 2004). Gmail served ads
to users based on the content of their email. Algorithms, not Google’s human employees, parsed
the mail and selected the ads, but there was considerable concern voiced about this practice, even
though nearly all email travelled through the public Internet unencrypted, where it could easily be read by network operators if they so desired. However, Google had also been collecting and retaining a
great deal of personal information from users since its inception, with little oversight (Thierer,
2011). Levy writes that “Google had been fortunate in postponing the inevitable privacy
showdown until Gmail’s arrival” (2011, p. 173). It is quite likely that concerns about potential
changes to privacy legislation specifically aimed at Google’s practices, along with the limited
interest of Google’s chief legal counsel in policy issues, were another key driver for the
company’s push into policy.
By January 2004 Andrew McLaughlin, who had worked as a lawyer with ICANN, had already
been hired as the first member of Google’s policy team, and the company had also recruited its
first Washington lobbyist, former associate director of the Center for Democracy & Technology
Alan Davidson, in May (Mills, 2006). Levy also indicates that Elliot Schrage was in charge of
Google’s communications and policy beginning in 2005.
Google made several public statements in support of network neutrality, beginning with the open
letter to Google users from Google’s CEO Eric Schmidt in 2006, urging them to “take action to
protect Internet freedom” (Schmidt, 2006). Numerous posts on the Google Public Policy Blog
from 2007 onward were Google’s primary means of communicating its policy positions,
suggesting the company strongly supported network neutrality principles.
Google hired its first telecommunications lobbyist in 2007, former Verizon lawyer Richard
Whitt, whose principal focus was network neutrality (Levy, 2011), not long after the
emergence of network neutrality as an issue of public concern. Whitt authored four blog posts in
June 2007 detailing Google’s position on network neutrality; the first was “What Do We Mean
By ‘Net Neutrality’?” (Whitt, 2007). Whitt identified a number of principles that Google
subsequently promoted during various regulatory and legislative processes, including its 2007
filing with the FCC:
In our filing with the FCC, we explained our strong support for the adoption of a national
broadband strategy. That strategy should include (1) some incremental fixes (like
requiring carriers to submit semiannual reports with broadband deployment data, and
mandating that carriers provide clear and conspicuous terms of service to customers); (2)
structural changes (various forms of network-based competition, such as interconnection,
open access, municipal networks, and spectrum-based platforms); (3) a ban on most
forms of packet discrimination; and (4) an effective enforcement regime. We also urged
the FCC to take the next step in its oversight on net neutrality, by instituting a formal
rulemaking proceeding to consider these ideas. (Whitt, 2007)
5.2.3 Versus telcos
Whitt led Google’s efforts to extend some form of network neutrality to wireless networks
through influencing the FCC’s regulation of the newly available 700 MHz wireless spectrum. The FCC’s 2008 spectrum auction had been necessitated by the abandonment of
the band by UHF television broadcasters, and saw Google in competition with several incumbent
telecommunications companies (Lasar, 2008). A number of public interest groups, including the
Media Access Project and Public Knowledge, urged the FCC to reserve a portion of the
auctioned spectrum for “open” applications, devices, services and networks (Brodsky, 2007).
Google pressed for four conditions around “openness”, with the two most significant being
device neutrality (any phone should work on the new network) and service neutrality (any
software could run on the network) (Levy, 2011). Whitt then suggested that Google actually
participate in the auction itself, on the condition that the FCC impose some version of the
openness requirements on carriers. Two of these requests (open devices and applications) were successful, and in return Google made a minimum bid of $4.6 billion for the spectrum
(Albanesius, 2007). Google expected to be outbid by Verizon and other incumbents, and indeed
this transpired (Levy, 2011).
Google committed to bidding at least $4.6 billion in the auction in order to trigger the provisions,
but also realised that winning the auction would be a “disaster” (Levy, 2011, p. 222), creating a
“massive distraction” from the company’s core business. Nonetheless, Google participated in the
auction, “losing” the bidding but driving up initial prices, and the FCC set aside some spectrum
with the openness provisions which Google had requested. As Whitt had predicted, “There was
no way in hell that Verizon was going to let us walk away with spectrum that would destroy its
business model” (quoted in Levy, 2011, p. 223).
These openness provisions were subsequently challenged by Verizon through a lawsuit against
the FCC; Verizon dropped the lawsuit in October of 2007 (Ali, 2007). Marsden (2010) argues
that “it remains to be seen whether the commitments secured from … Verizon, will prove to be
another AOL-style ‘Kingsbury commitment’ [an agreement to interconnect] – or a one-off sop to
net neutrality advocates that is rapidly forgotten as the industry attempts to erect further walled
garden barrier” (2010, p. 198).
5.2.4 Other jurisdictions
Google formally and informally advocated for network neutrality in a number of jurisdictions
during this time. In Canada, network neutrality had first become an issue of public concern in
2005 when, during a labour dispute, telecommunications company Telus had blocked Internet
users’ access to a union website, along with many other websites hosted on the same server
(OpenNet Initiative, 2005). But it was not until 2008 that the issue of network neutrality came
before Canada’s broadcasting and telecommunications regulator, the Canadian Radio-television
and Telecommunications Commission. An organisation of Internet service providers who resold
wholesale bandwidth from Bell Canada to retail customers, the Canadian Association of Internet
Providers, requested the CRTC order Bell to end the shaping of Internet traffic. As in the
Comcast case, Bell was throttling BitTorrent peer-to-peer traffic (Canadian Association of
Internet Providers, 2008).
Section 27 of the Canadian Telecommunications Act gives the CRTC powers relating to undue
preference and unjust discrimination on telecommunications networks. CAIP claimed that Bell
violated several sections of the Telecommunications Act: subsection 27(2) concerning unjust
discrimination; section 24 and subsection 25(1) concerning setting of tariffs; and, section 36
concerning control of content (Canadian Radio-television and Telecommunications Commission,
2008). CAIP also claimed that Bell violated CRTC rules requiring advance notice of network
changes, and privacy rules.
In July 2008, Google appeared before the CRTC as part of its process on Bell’s throttling of
peer-to-peer Internet traffic. Google argued at the time that “[n]etwork management does not
include Canadian carriers’ blocking or degrading lawful applications that consumers wish to use”
(quoted in Nowak, 2008).
While its November 20th, 2008 decision denied CAIP’s claims, the CRTC announced a
comprehensive review of the Internet traffic management practices of Canadian retail Internet
service providers (Stevenson & Clement, 2010). In Telecom Regulatory Policy CRTC 2009-657,
the CRTC established a complaints-driven process for adjudicating potential network neutrality
violations by ISPs, stating that ITMPs by ISPs cannot be “unjustly discriminatory nor unduly
preferential” (Canadian Radio-television and Telecommunications Commission, 2009).
As we will see in a coming section, Google’s success with the 700 MHz spectrum was perhaps
its high-water mark as a strong supporter of network neutrality. Google’s engagement with the
telecommunications industry around Android was a factor in transforming Google from a strong
neutrality supporter to something much more problematic.
5.3 Google, mobile and Android
At the same time that Schmidt and others at Google were jousting with the telecommunications
industry over spectrum neutrality, other stakeholders within the company were beginning to
engage with telecoms in order to create new alliances around the Android operating system,
extending Google’s search offerings to new mobile platforms.
Levy (2011) argues that the roots of Google’s interest in mobile technology can be traced to a
2002 demo that Andy Rubin gave at a Stanford class attended by Larry Page, showing a new
instant messaging appliance called the Sidekick. Rubin visited Google in 2004, seeking support
for his next start-up, a company named Android that would create an open source operating
system for smartphones and then give the OS away to mobile carriers. Levy suggests that carriers
typically paid about 20% of per-phone cost for an operating system, and that Android’s business
model was that the OS company would monetize back-end services, including support, security
and storage (Levy, 2011, p. 213).
Android’s business model faced resistance from carriers and handset manufacturers. In 2005,
Rubin and Android co-founder Rich Miner presented Android to Page and other members of
Google leadership, looking to include Google services as part of the platform’s service offering.
Page showed interest in purchasing Android, with Levy suggesting that Google’s founders had
been considering a move into mobile for some time, even though CEO Schmidt had stated
emphatically that there would be no “Google phone” as recently as nine months before Google
acquired Android (Schwartz, 2007).
Levy notes that such a substantial strategic commitment appeared to be outside of Google’s
already ambitious vision to “access and organise the world’s information”. Google, argues Levy,
was concerned that the tight controls exhibited by mobile carriers over the services available to
their customers might limit Google’s availability on popular mobile networks. Google would
benefit most from an open network.
Google acquired Android for approximately $50 million in July 2005. The acquisition changed
the Android business model. The operating system would still be freely available to carriers and
manufacturers, but the platform would be what Levy describes as a “Trojan horse” for Google
consumer services, and specifically mobile search (2011, p. 217). The change in ownership also
changed manufacturer and carrier attitudes: Samsung quickly signed on.
Android developed two different prototype systems. Most attention was initially paid to Sooner,
a platform with a physical keyboard that was designed to get to market quickly. Levy repeats
Google’s claim that the company also worked on a touchscreen-based platform, codenamed
Dream, prior to the announcement of Apple’s iPhone in January 2007. The iPhone meant the end
of Sooner’s development, and likely supported the notion within Google’s leadership that mobile
platforms would soon emerge as significantly more important to Google’s search business than
desktop computers.
Prior to the Android project, Google had in place what Levy describes as a “bustling” mobile
division, focused on producing Google applications for existing mobile platforms, including the
mobile web (2011, p. 219). This team worked with Apple to ensure that the first iPhone, at that
point a closed application ecosystem, launched with two Google apps: Google Maps and
YouTube. Google’s mobile team soon revamped mobile search as well. But as Android was
developed, Google’s mobile efforts shifted somewhat toward the platform and away from the
iPhone. Google’s mobile platform strategy drifted it into more direct competition (and eventual
conflict) with Apple. Schmidt served on Apple’s board, and appears to have been privy to early
discussions of the iPhone platform (Cleland, 2012).
In November 2007 Google publicly announced the creation of a mobile telephone platform
(though not a phone product), and a formal group of mobile carriers and manufacturers called the
Open Handset Alliance (Open Handset Alliance, 2007). The Alliance did not include the two
largest American mobile providers, AT&T and Verizon.
The first “Google phone” was the G1, manufactured by HTC and available on the T-Mobile
network. Android head Rubin had long-term business relationships with the two companies.
Levy quotes Rubin as stating, “There was trust” (2011, p. 226). The phone needed to launch by
October 2008 to be available as a holiday gift. The platform emphasised cloud storage of users’
content, such as contacts, email, and music files.
Google saw the “openness” of the Android platform as a distinct advantage over Apple’s iPhone.
While Apple exercised tight control over its ecosystem of third-party applications, Google’s
controls were relatively lax.
The breakthrough Android device was the Motorola Droid, which was marketed by Verizon in
late-2009, the result of an agreement between the companies to jointly develop wireless devices
(Reed, 2009). As Levy notes, Verizon had been a fierce opponent of Google on Internet
governance issues, and the company had not joined the Open Handset Alliance. A partnership
with Google represented a thaw in relations between the companies, one that would extend to the
joint Google-Verizon statements that I discuss in a coming section. Levy speculates that
Verizon’s partnerships with Google resulted from “Verizon’s need to market a competitor to
AT&T’s iPhone” (2011, p. 228).
By mid-2010, 200,000 Android devices were being sold each day, outpacing the iPhone (Levy,
2011).
Google’s Android mobile operating system influenced the company to seek a variety of business
partnerships, particularly with mobile carriers who had not aligned with Google on the issue of
network neutrality in the 2000s. Android was one of only a very small number of potential
commercial and technological alternatives to Apple’s iOS mobile operating system platform,
providing Google considerable leverage in its relationships with mobile providers, the bulk of
whom were also ISPs.
5.4 Network neutrality in the late-2000s
Conflict over Internet traffic management was substantial during the late-2000s, as Comcast
attempted to throttle peer-to-peer traffic on its network, and Google and Verizon proposed
several policy and regulatory changes to address concerns about network neutrality.
5.4.1 Comcast and BitTorrent throttling
In 2007, several media outlets reported that Comcast had been preventing subscribers from using
peer-to-peer technology to legally share files online (Svensson, 2007). Media reform
organisation Free Press filed a complaint with the FCC against Comcast in November 2007,
requesting the Commission to determine “that an Internet service provider violates the FCC’s
Internet Policy Statement when it intentionally degrades a targeted Internet application” (Public
Knowledge, 2007). In August 2008, the Commission ruled that the traffic management
techniques the retail ISP had used were unreasonable. As for reasonable and alternative
remedies, the FCC suggested that Comcast use per-user bandwidth caps and fees for high levels
of traffic.
The FCC also announced its intention in the Comcast ruling to address future traffic management
issues on a case-by-case basis, drafting no detailed regulation concerning traffic management. A
set of “protocol-agnostic” traffic management techniques were subsequently implemented by
Comcast in December 2008 (Fisher, 2008). Although Comcast modified its network
management practices, it also appealed the FCC’s ruling in Federal Court on a variety of
grounds. The U.S. Court of Appeals rejected the FCC’s 2008 cease and desist order against
Comcast in April 2010. The court ruled that the FCC had no statutory power either to regulate
an ISP’s network or to be responsible for the management of such networks. This ruling also
prompted legislative responses: some American lawmakers proposed legislation that would
require the FCC to demonstrate market failure before new network neutrality rules could be
enacted (Corbin, 2010), while others began a review of applicable legislation, the
Communications Act (Wyatt, 2010a).
5.4.2 The Google-Verizon statements
It was against this backdrop of unsettled issues around network neutrality that Google and
Verizon released their joint policy statements on Internet governance in 2009 and 2010. The
statements presented an interesting set of artefacts reflecting aspects of Google’s approach not
only to public policy discourse, but its commercial relationships with other business entities. In
this section I discuss them in some depth, emphasising the statements as reflections of Google’s
emerging business relationship with a leading telecom entity, and its core concerns for network
neutrality.
The statements and proposal can be seen as encompassing two principal areas of concern: the
regulation of the Internet in a limited number of key areas, and ground rules for Internet service
providers in the management of their networks (Davidson & Tauke, 2010a, 2010b).
5.4.2.1 Procedural matters
Reacting in large part to the then on-going conflict between Comcast and the FCC, the Google-
Verizon proposals suggest maintaining the FCC’s current case-by-case approach to network
neutrality issues, in which the regulator deals with traffic management issues when a content
provider or user makes a complaint. However, Google-Verizon recommends that adjudications
should be based on clear legislative rules—which did not yet exist—rather than the FCC’s own
rule-making process. These rules would also respond to the principles outlined by Google and
Verizon (detailed below).
As regulatory gaps may result between what the United States Congress mandates and what ISPs
are “bound to do” in self-interest, Google-Verizon suggested filling this gap with private rule-
making, creating “non-governmental dispute resolution processes established by independent,
widely-recognized Internet community governance initiatives” (Verizon & Google, 2010).
Google and Verizon’s mistrust of government decision-making processes may have been the
source of this approach. Idealistically, it proposes that stakeholders could negotiate a more
effective set of rules, perhaps in a manner comparable to the negotiations between Verizon and
Google themselves.
5.4.2.2 Network neutrality rules
Zittrain (2010) describes Google and Verizon’s proposals on Internet traffic management as
network neutrality “with plenty of exceptions”. The proposals are noticeably vague in several
areas.
First, Internet users are not prohibited from any “lawful” activities on the Internet, but “lawful”
is not clearly defined, nor is it clear if retail ISPs have a right or an obligation to determine what
constitutes “lawful” activities. Of some concern is the notion that ISPs might be at some point
obligated to determine whether a user is engaged in “lawful” conduct. Second, ISPs cannot
discriminate against types of traffic, with “pay for priority” specifically forbidden. ISPs cannot
manage their networks in ways that cause meaningful harm to competition or to users.
Third, network management policies, practices and “capabilities” must be disclosed to users.
ISPs should behave in a transparent manner concerning Internet traffic management.
However, Google and Verizon proposed that these provisions would apply only to wireline
Internet services. There are two significant exceptions to these network neutrality rules. Only
transparency principles would apply to wireless Internet services; the other neutrality provisions
would not. Google and Verizon instead suggest that the “U.S. Government Accountability Office
would report to Congress annually on the continued development and robustness of wireless
broadband Internet access services”. Perhaps most concerning, the principles would not apply to
what Google and Verizon call “additional online services”. Such services might include online
gaming networks and video distribution.
5.4.2.3 Analysis of Google-Verizon statements
As of this writing, there has been little academic analysis of the Google-Verizon joint statements
and proposals, and very little substantive research on Google as a policy actor within an Internet
governance context. The most prominent immediate academic analysis of the statements came
from Jonathan Zittrain, who unlike the bulk of the popular media or blog-based commentators on
the proposals, does not take a strong advocacy position.
It is important to first consider that the joint statements may not have represented an actual
substantive agreement between Google and Verizon. As Zittrain points out in his 2010 blog post
on the subject, while the statements may represent a “meeting of the minds”, they are not, in any
formal sense, legal agreements, or at least do not represent agreements of which we are aware.
Zittrain cites Sunstein's (2007) work concerning parties who may disagree on underlying
constitutional theories, but can still agree on constitutional practices. Specifically, parties can
agree on abstract principles without agreeing on the meaning of such abstractions. Zittrain
suggests that parties who disagree in this way may develop agreements that are intentionally
vague so that they can settle on some important issue and move on to the task at hand, whatever
that might be.
Zittrain suggests that there are many vague statements within the Google-Verizon statements that
indicate that Sunstein's process of vague agreement is taking place, citing such legally-unclear
terms and phrases as “reasonable”, “undue discrimination” and “at this time”. The documents are
not fleshed out to the extent that they can have much impact.
Zittrain also suggests, rightly, that much of the criticism of the proposals comes from an
idealistic perspective on network neutrality, rather than recognition that the agreements represent
the result of negotiation between two commercial actors that likely required substantial and
difficult horse-trading.
In associated statements, Google suggested that the proposals responded to the “political
realities” of network neutrality policy in Washington, where policy actors had been “intractable”
for some time (Whitt, 2010). Google referenced the difficulties which arose from the FCC’s
2005 policy statement on network neutrality that was applied to its action against Comcast,
which had been restricting peer-to-peer traffic without notifying its users. Google attempted to
address protestations raised by Comcast in that action, who argued (successfully) that the FCC
went beyond the authority granted it by Congress. Google also argued that its joint statements
with Verizon were an attempt to move past the process of FCC rule-making on neutrality, which
was also begun in 2010 and with which many were disappointed (Kessler, 2010).
Also of particular interest in the proposals was the notion that network neutrality principles
would not apply to “additional online services”, which might include specialised services for
video delivery, online gaming, or other uses (Davidson & Tauke, 2010b). While reaction to this
provision was negative, Zittrain does not find this provision unreasonable and argues that walled
gardens and open networks can exist side by side, as the Internet remains the “main attraction” to
users. He uses the analogy of the smartphone application ecosystem. Both Google Android and
Apple curate (to differing degrees) which applications are available for their respective
smartphone platforms. Some applications might be rejected due to security and privacy concerns,
others because content or functionality is deemed inappropriate. However, both platforms
provide unfiltered access to the Internet through a standards-compliant browser, which can
access any content.
Further, this proposal did in fact reflect the reality that North American Internet service providers
have repurposed existing wireline infrastructure—coax cable and twisted-pair copper—for
TCP/IP networking. As legacy wiring was being repurposed, different rules were applied to
different uses, rather than to different infrastructures. When incumbent cable and
telecommunication companies added Internet services, they were different enough from existing
set-top-box-based services to be considered distinct and separate.
The Google-Verizon distinction between wireline and wireless Internet also raised concerns, as
the FCC did not originally distinguish between the two in traffic management policy. Zittrain is
sceptical of rules that distinguish between the two, but doesn’t see an “evil plan” lurking in the
differentiation. He suggests that perhaps there are technical necessities at this point which require
differing rules, but is concerned that hopes for greater and more robust competition in future may
not be realised.
5.4.2.4 Public interest and regulator reactions
Reaction to the statements and proposal by public interest organisations were generally negative.
This response may have been coloured by press reports prior to the August 2010 release of the
legislative proposal that indicated that Google had fundamentally changed its position on
network neutrality. On August 4th 2010, the New York Times reported that “Google and
Verizon Near Deal on Web Pay Tiers”. The article reported some details of the yet-to-be-
released statement, suggesting that the agreement would “allow Verizon to speed some online
content to Internet users more quickly if the content’s creators are willing to pay for the
privilege” (Wyatt, 2010b). On August 5th 2010, in advance of the policy proposal, Josh Silver,
president of US media advocacy organisation Free Press, warned that the Google-Verizon deal
was a “doomsday scenario” and “the beginning of the end of the Internet as you know it” (2010).
However, Silver's Huffington Post op-ed referenced only a New York Times story which
reported that Google and Verizon had agreed to paid traffic prioritisation, which was not part of
the eventual proposal.
As well, prior to the statement's publication, PCWorld's Ian Paul (2010) suggested that Google
had already moved away from adherence to a strict definition of network neutrality, reporting
that Google CEO Eric Schmidt had told the London Telegraph that he favoured traffic
prioritization by type, with video and other time-sensitive content being prioritized over less
time-sensitive content. Paul suggested that this position appeared to “contradict previous Google
statements about net neutrality”, highlighting specific posts by Google (Paul, 2010).
As stated above, reaction from Internet public interest advocacy groups to the proposal was
generally critical. Cindy Cohn, legal director of the Electronic Frontier Foundation (EFF), wrote
a short review of the Google-Verizon proposal that appeared one day after the proposals were
published (Cohn, 2010). Cohn acknowledged the negative reaction to the proposal and reiterated
EFF's intellectual property director Corynne McSherry's warning (McSherry, 2009) that FCC net
neutrality policies may be a “trojan horse” designed to extend FCC authority into areas not
legislatively mandated. The EFF appeared to support only limited FCC authority over the
Internet, a “narrow grant of power to the FCC to enforce neutrality within carefully specified
parameters” (Cohn, 2010). Cohn indicated that EFF specifically opposes FCC jurisdiction over
Internet content or software. Cohn also indicated that EFF opposed several aspects of the
legislative proposal, specifically the many exemptions from neutrality requirements for wireless
services and unlawful content. Cohn also suggested that the concepts of “additional online
services” and
“reasonable network management” were both too vaguely defined. Cohn writes that exemptions
for “additional online services” were overly broad, allowing virtually any new networking
service to be defined as such and thus allowed to discriminate.
In an August 11th posting on its website, the Center for Democracy & Technology's Andrew
McDiarmid (2010) suggested that the Google-Verizon proposals “fall short”, reiterating
common public interest group arguments against the proposal: supporting greater FCC authority
over broadband, but criticizing the approach to wireless and “additional or differentiated
services”.
August 13th 2010 saw protests outside the Googleplex, organised by MoveOn, Free Press, and
the Progressive Change Campaign Committee (Levy, 2011, p. 384). Levy writes that the protest indicated
“true disenchantment” by former allies of Google.
The initial reaction from the US federal telecommunications regulator was limited to a statement
from Commissioner Michael Copps, appointed as a Democrat to the FCC in 2001. In a very brief
statement issued on the date of the proposal's publication, Copps was critical of the statement,
and suggested that the FCC's “authority over broadband telecommunications” be reasserted in
order to guarantee “an open Internet now and forever” (Federal Communications Commission,
2010).
5.4.2.5 Media and blog reactions
Reaction to the statement in the blogosphere and popular press was also generally negative. In
an August 9th 2010 editorial entitled “FCC needs to get tough on network neutrality”, the San
Francisco Chronicle (2010) argued that the Google-Verizon legislative proposal “doesn’t look
right” and that a broader range of stakeholders should have influence on any congressional
neutrality legislation. Given likely congressional inaction, the Chronicle suggested the FCC act
alone to reclassify broadband in such a way that common carrier rules would apply to it.
There was some speculation within the blogosphere that the announcement was part of a broader
business agreement between the two companies. In an August 11 2010 posting, O’Reilly Radar
blogger Marc Hedlund (2010) argued that the statement was “out of character” for Google and
speculated that the deal was to “keep Verizon from making a deal with Apple for the iPhone”.
Wired’s Eliot Van Buskirk (2010) describes the statement as “A Tale of Two Internets”,
highlighting provisions that allow for additional, specialised online services that would not fall
under neutrality provisions. Van Buskirk quoted most of the principal Open Internet advocates
indicating their opposition to the statement: Media Access Project’s Andrew Jay
Schwartzman, Public Knowledge president and co-founder Gigi B. Sohn, Free Press political
adviser Joel Kelsey, and the SavetheInternet.com coalition.
Blogger Nelson Minar, a Google employee, described Google’s statement as throwing “wireless
network neutrality under the bus” (Minar, 2010). Minar speculates that the Google-Verizon deal
was “the best [they] could do”, and that Google gave up “the principle of network neutrality just
to get a temporary advantage for the next couple of years”.
Long-time O’Reilly Media editor Andy Oram (2010) was also confused by the statement. He
wrote that “the language of the agreement didn’t match any Internet activity” he could recognise,
with most provisions too general and lacking in “meaningful or enforceable rules”. The
provision calling for non-discrimination on wireline Internet, he argued, is directly contradicted
by the allowance of “additional online services” to which neutrality would not apply, rendering
both sections “effectively unusable”. However, Oram takes a contrary view to some other
commentators in a number of areas. Wireless Internet, Oram suggests, will never be as fast as
fibre wireline, and its bandwidth limitations call for Internet traffic management.
In an article in the New York Times on the day of the statement, Miller and Helft (2010) report on
the various reactions to the statement, highlighting the controversies surrounding it.
In a September 3rd article in the New York Times, business columnist Joe Nocera (2010) wrote
that the Google-Verizon agreement was “well-meaning” but resulted in the companies being
vilified by public interest groups.
Writing in IEEE Internet Computing, Stephen Ruth suggests that the Google-Verizon agreement
may raise the prospect of Google making traffic prioritisation agreements with retail ISPs in
future, and suggests that “constructive negotiations” among stakeholders would be preferable to
congressional action (2010, p. 63). In IEEE Network, editor Thomas M. Chen wrote that the
Google-Verizon agreement “put the FCC in an awkward position” as the agreement exempted
wireless and allowed for tiered pricing, potentially derailing the FCC’s attempts at reaching a
consensus among all industry actors (2010, p. 3).
5.4.3 Tepid support for network neutrality
The Google-Verizon agreements and subsequent policy proposals were promoted at the time as a
“principled compromise” in neutrality policy in the United States, and also as a significant step
forward by the participating companies. But the proposals appeared to have little immediate
impact on policy making around network neutrality in the United States (Greenberg & Veytsal,
2010; Gustin, 2010).
On December 22, 2010 the FCC introduced its Open Internet Order, which created rules for two
classes of Internet provision, fixed-line and wireless.
The rules required providers to disclose their network management practices, forbidding them
from blocking lawful content, applications or services, or from engaging in “unreasonable
discrimination”. The rules reflected the language of the neutrality guidelines of 2005, but used
specific legal language.
The United States Congress did not introduce any clear neutrality legislation in 2011. In fact,
House of Representatives leadership appeared to oppose all forms of Internet regulation, and
attempted to limit the FCC’s rule-making ability in this area. On January 20th 2011, Verizon filed
a lawsuit with the District of Columbia United States Court of Appeals attempting to overturn
FCC rules concerning network neutrality (Higginbotham, 2011). The suit was filed in a court that
had previously thrown out the FCC's order to Comcast concerning neutrality. In a media release
on the day of the filing, Verizon senior vice president and deputy general counsel Michael E.
Glover stated that while “Verizon is fully committed to an open Internet” it believed that the
proposed regulation was “inconsistent with the statute” and “unneeded”.
Google’s statements with Verizon on network neutrality were the company’s last significant
public efforts in support of network neutrality for many years. Articles concerning network
neutrality, a mainstay on the Google Public Policy Blog from 2007 on, fell 83% in 2011. This
was not indicative of a withdrawal from lobbying or advocacy, however. As we see in Table 5.1,
Google’s lobbying expenditures more than tripled from $5.16 million in 2010 to $16.83 million
in 2014 (The Center for Responsive Politics, 2015). Google’s meetings with the FCC and
members of the United States Congress also increased substantially.
Google publicly supported an “open Internet” in a 2014 letter to Federal Communications
Commission from 150 technology companies regarding network neutrality (Engine & New
America Foundation, 2014). However, many (Shields, 2014; Singel, 2013; Worstall, 2014) noted
at the time Google’s lack of strong engagement on the issue. Sasso (2014) reports that according
to FCC lobbying records, Google had “rarely discussed” network neutrality at the Commission.
McMillan (2015) speculates that the silence from Google’s and other large technology
companies was a strategic attempt to avoid having network neutrality too strongly associated
with large corporations.
During the renewed American debate on network neutrality in the mid-2010s, Tim Wu suggested
that some change to Google’s circumstances allowed the company to show significantly less
concern for network neutrality. Wu stated that “There’s a danger that [Google], having climbed
the ladder, might pull it up after them”, now that it was no longer endangered by retail ISP
gatekeeping (quoted in Shields, 2014).
Various events coincided with Google’s pull-back on network neutrality. Eric Schmidt, Google’s
CEO since 2001, was replaced by co-founder Larry Page in mid-2011 (Singel, 2011). Schmidt
was identified with support for network neutrality principles, speaking in public often on the
topic (Goldman, 2010). A change in leadership at the company may have softened Google’s
stance on net neutrality. As well, by 2010 Google may have concluded that it would not be
advantageous for retail ISPs to limit their customers’ access to very popular websites, including
Google search and YouTube. In the late-2000s retail ISPs may have had limited leverage when
dealing with large content providers; for example, in 2007 Verizon served only 3% of Google’s
total customers (Isenberg, 2007). Even in a typical North American market that is served by a
retail ISP duopoly, a provider’s failure to offer popular websites such as YouTube and Google
would likely have been unacceptable to prospective and current customers.
However, perhaps the most significant changes at Google in the late-2000s were to the
company’s infrastructure and services, including the launch of the Android operating system and
the growth in Google’s infrastructure. In the next section I discuss that infrastructure, and the
affordances it provided Google and its partners.
Table 5.1: Annual lobbying by Google to 2014
                                             Number of Reports Listing Lobbying
Year    Annual Lobbying Expenditure (USD)    US Senate    House    FCC
2003    $80,000                                  2          2       0
2004    $180,000                                 2          2       0
2005    $260,000                                 3          3       0
2006    $800,000                                 7          7       0
2007    $1,520,000                               8          7       2
2008    $2,840,000                              23         22       4
2009    $4,030,000                              26         25       4
2010    $5,160,000                              24         24       4
2011    $9,680,000                              66         61       8
2012    $18,220,000                            101         94       7
2013    $15,800,000                            108        101       8
2014    $16,830,000                             75         68       3
Source: The Center for Responsive Politics (2015).
5.5 Google’s infrastructure
While building out its service offerings and engaging in the policy process, Google was also
making significant changes to its infrastructure, expanding the company’s scope and reach. In
this section, and in Chapter 6, I discuss the development of Google’s systems through the late-
2000s, and provide a snapshot of those services in 2013.
As I wrote in Chapter 3, I have conceptualised Google’s infrastructure as comprising the following
elements:
• Google-managed server capacity, including
o Private peering facilities, including points of presence,
o Public peering facilities, including points of presence,
o Public data centres, and
o Edge caching servers, located at third-party facilities
• Google wide area networks, including
o Google data centre network, and
o Submarine cable
In the following sections I detail each of these infrastructure elements. I describe the
characteristics and capacities (as far as known) of these systems as of October 28th, 2013, and
indicate what affordances they provided Google in relation to possible retail and transit ISP
gatekeeping.
5.5.1 Data Centres
By 2010 Google had built seven large-scale data centres, the most publicly visible aspects of its
systems: six in North America and one in Europe. By 2013, Google had announced or opened
additional data centres in Europe and North America, with two new data facilities announced for
Asia (Google, 2013). These data centres existed in addition to Google’s server capacity at the
company’s headquarters in Mountain View, California, the peering and edge caching facilities I
describe below, and other co-location facilities that performed similar functions, about which we
have limited information.
In 2013, Google spent more than $7.35 billion on capital expenditures, driven primarily by the
expansion of Google’s data centre network (R. Miller, 2014). This included a $600 million
expansion to the Dalles data centre, $350 million of new construction at St. Ghislain, Belgium,
and the purchase of 1 million square feet for future expansion of the Mayes County, Oklahoma
data centre.
While it is impossible to precisely determine the total number of individual Google data centre
servers in the early-2010s using available data, there is no question that the number was
exceedingly large. Estimates based on Google’s power usage at public data centres placed the
count as low as 900,000 total servers (R. Miller, 2011), while other estimates based on space
available at data facilities placed the estimate as high as 2,376,640 (Pearn, 2012). A somewhat
tongue-in-cheek 2013 estimate by Randall Munroe suggests Google’s data centre active storage
was 10 exabytes (or 10,000,000 terabytes) attached to running clusters, with another 5 exabytes
for backups and cold storage (Munroe, 2013). This number is by no means unreasonable.
These data centres provided platforms for Google’s core services, including Search, YouTube,
Google Apps/Drive, Gmail, and Maps. Data centres provide flexible platforms for distributed
computing; one of the means by which Google allocates data centre server resources to various
business units and services within Google is by means of an auction system (Levy, 2009).
Google interconnect and content distribution manager Thomas Volmer (2015) indicated that data
centres handle 40% of user requests.
Google’s large data centres were the public face of Google’s infrastructure in the early-2010s. In
2012, Google invited press into its data centres, and released a series of images of data centre
elements (Levy, 2012), such as Figure 5.1.
Figure 5.1: Google data centre, 2014 Copyright © 2014 Google Inc. The source URL for the image is https://www.google.com/intl/en/about/datacenters/gallery/#/
5.5.1.1 Relevant data centre affordances
Stand-alone data centres—not co-locating computing resources with other companies—provided
Google with a number of significant affordances. Perhaps most importantly, the data centres
allowed Google to shift computing resources relatively quickly among clusters of servers. The
use of standard, low-cost server equipment provided Google a flexible and uniform platform
for application development, testing, and deployment, one that made application and storage
scaling easier.
Google’s data centres were physical plants completely separate from any third party hosting
facility; Google controlled many aspects of the facilities’ operations, including connectivity,
location, and security. Connectivity among data centres was provided by Google’s G-Scale network.
While Google’s large data centres did not directly mitigate ISP gatekeeping as described by Wu,
they could be seen as limiting opportunities for third party control or influence on Google’s
technical operations through Google-managed physical and network security.
Figure 5.2: Google data centres and data centre (G-scale) network, 2013. This image was captured October 12, 2016 from https://www.google.com/maps/d/viewer?mid=1nXSNhvDo5jaSS1h9gFuqQnRNIqg .
Figure by John Harris Stevenson. Copyright © 2016 John Harris Stevenson. Base map data Copyright © 2016 Google.
Figure 5.3: Google Data Centre locations in North America This image was captured October 12, 2016 from https://www.google.com/maps/d/viewer?mid=1nXSNhvDo5jaSS1h9gFuqQnRNIqg .
Figure by John Harris Stevenson. Copyright © 2016 John Harris Stevenson. Base map data Copyright © 2016 Google.
Table 5.2: Google large data centres, October 28, 2013
Location Country Year Operational
The Dalles, Oregon US 2006
Lenoir, North Carolina US 2008
Douglas County, Georgia US 2008
Berkeley County, South Carolina US 2009
Council Bluffs, Iowa US 2009
St Ghislain Belgium 2010
Mayes County, Oklahoma US 2011
Hamina Finland 2011
Dublin Ireland 2012
Changhua County Taiwan 2013 (not operational)
Singapore Republic of Singapore 2013 (not operational)
5.5.2 Wide Area Networks
In the mid-2000s, reports appeared in the technology press that Google was buying hundreds of
kilometres of “dark” fibre optic cable, relics of the 1990s Internet boom’s over-build of network
capacity (Hansen, 2005). Fibre purchases continued through the late-2000s, although specific
routes and capacities were unknown. In 2008 Google joined a consortium of telecommunications
companies that had agreed to build a high-bandwidth fibre optic submarine cable between the
United States and Japan, estimated to cost approximately US$300 million (Williams, 2009). By
2013, Google was operating the Unity Japan-to-United States submarine cable and was connected
to the South-East Asia Japan Cable System, participating in both as part of consortia (Qiu,
2013).
There existed no detailed maps of Google’s network infrastructure at the beginning of this
research. A stylised map displayed in Hölzle’s (2012) OpenFlow presentation indicated
backbone connections between all of Google’s large data centres. This was Google’s “G-Scale”
network (see Figure 5.2), the internal backbone that was used only to carry traffic among Google
data facilities worldwide. A 2010 Arbor Networks study (Labovitz, 2010a) identified Google’s
internal network as the third largest known network in the world. Determining the total traffic
handled by Google’s backbone networks is challenging, but a public talk from Google senior
vice president of technical infrastructure Hölzle indicated that Google’s network was nearly fully
utilised at all times, and could scale to approximately one terabit per second throughput (Levy,
2012). In 2012 the G-Scale network was reported to be operating near 100% capacity at 10
gigabits per second (Hölzle, 2012).
Google also operated an “I-Scale” network which was Internet facing, carried users’ traffic, and
might have utilised third party transit networks. This network was also used to index the world’s
websites (Crabbe & Vytautas, 2012).
The 2009 ATLAS Internet Observatory Report, created by researchers at Merit Network, the
University of Michigan, and Arbor Network, stated that Google was handling 5.2% of all
Internet traffic, and had become the third largest Internet service provider in the world, behind
Level 3 and Global Crossing (Labovitz et al., 2009). The 2007 Internet Observatory report had
not listed Google as a top ten transit ISP, but by 2010, Google was reported to carry 6.4% of
Internet traffic and be the second largest transit ISP in the world (Labovitz, 2010a). The Wall
Street Journal, citing “one person familiar with its assets”, claimed in December 2013 that
Google controlled more than 100,000 miles of fibre optic routes worldwide (Fitzgerald & Ante,
2013). It was estimated that as of July 2013, Google carried 25% of North American Internet
traffic (McMillan, 2013).
5.5.2.1 Relevant network affordances
In addition to significant technical advantages afforded by Google’s high-capacity networks,
Google’s extensive WAN also provided the company with semi-independence from third-party
transit providers. While there is no evidence that by 2013 Google had stopped using transit ISPs
and their networks altogether, by this point in the company’s history Google was much less
dependent on them.
The danger of tier 1 and other transit providers gatekeeping traffic was not highlighted by Wu,
but became a matter of public concern in network neutrality discourse during the dispute
between Netflix and Comcast in 2014. Google’s 2013 network topology allowed the company to
be significantly less dependent on the public Internet and the transit providers at its core than it
had been in its early history.
5.5.3 Peering and Caching Servers
Google operated and controlled a large number of servers residing outside its own facilities. These
servers can be divided into point of presence (PoP) servers, located at Internet exchange points
(IXPs), and edge caching servers, located within network entity facilities, such as retail ISP
networks. Volmer (2015) indicates that caching servers handled 60% of Google’s traffic as of
2015.
Point of presence servers were housed at locations where Google connects to other network
entities, at Internet exchange points and co-location facilities around the globe. These were
typically called peering connections; many peering locations are published in the PeeringDB
online database, as I discussed in Chapter 3. In its 2013 published policy on peering,
Google indicates that it had an “open” peering policy and would connect to a wide range of
network entities that have access to facilities at which Google peers. Google connected to
network entities through both private and public peering; Google peered privately when its
network connected directly with the network of another network entity. As of October 2013,
Google peered privately at at least 79 locations worldwide, and publicly at 72 IXP locations,
more than any other network entity listed by PeeringDB. The specific technical specifications of
these IXP points of presence are not known, but were likely substantial given the role these
servers play in caching Google content. A complete list of Google’s public and private peering locations
is included as Appendix B.
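The way such peering counts can be derived from PeeringDB data can be sketched briefly. The code below is a hedged illustration: the endpoint and field names reflect my understanding of the public PeeringDB REST API and should be treated as assumptions, and the sample records are invented. Google's autonomous system number, AS15169, is the one fixed point.

```python
import json

# Trimmed, invented sample in the shape of a PeeringDB response (e.g. from
# https://www.peeringdb.com/api/netixlan?asn=15169, which returns one record
# per exchange-point port). Field names are assumptions, not verified output.
sample = json.loads("""
{"data": [
  {"ix_id": 1, "name": "Equinix Ashburn",  "asn": 15169},
  {"ix_id": 1, "name": "Equinix Ashburn",  "asn": 15169},
  {"ix_id": 2, "name": "DE-CIX Frankfurt", "asn": 15169}
]}
""")

def count_public_ixps(records):
    """Count distinct exchange points; a network may hold several
    ports (records) at the same IXP, so deduplicate on ix_id."""
    return len({r["ix_id"] for r in records})

print(count_public_ixps(sample["data"]))  # 3 port records, 2 distinct IXPs -> 2
```

Deduplicating on the exchange identifier rather than counting raw records matters because large networks typically hold multiple ports at a single exchange.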
Google also had a large number of edge caching servers which were housed in the facilities of
network entities, predominantly retail Internet service providers. These caching servers make up
a content delivery network (CDN), designed to serve video and other content more efficiently to
Google’s users, regardless of location.
The evolution of Google’s content delivery network in the late-2000s was linked closely with the
growth of YouTube. Adhikari, Jain and Zhang (2010) present a mapping of YouTube traffic in
2008, prior to what they describe as Google’s “restructuring” of the YouTube CDN. At the time
of that study, traffic originated from a limited number of data centres, served without
consideration of a user’s geographic location, and without edge caching per se.
Evidence of a new Google technology, Google Global Cache (GGC), was first reported by tech
bloggers in 2008. GGC placed Google hardware inside retail ISP networks worldwide (R.
Miller, 2010). GGC was also discussed publicly with WAN administrators by Google’s Mike
Axelrod at the African Network Operators Group meeting in 2008. Axelrod stated that YouTube
was consuming a great deal of bandwidth and that each request had to be filled by a large
Google data centre of the type described above. Describing the caching program, Axelrod
termed the Google Global Cache technology “beta” (Axelrod, 2008).
In a 2012 follow-up to their 2010 study, Adhikari, Jain, Chen and Zhang (2012) describe this
content delivery network in some detail, identifying a number of cache locations and a complex
organisation of video servers and three-tiered physical cache hierarchy. They also suggest that by
2012, Google had integrated the YouTube CDN into its own, greatly expanding it.
According to the Google Global Cache website,
Google Global Cache (GGC) enables your company to optimise network infrastructure
costs associated with delivering Google and YouTube content to your users by serving
this content from inside your network… GGC is implemented as a set of servers
deployed in your datacenter, remotely managed by Google. The number of servers
deployed will depend on the bandwidth demands of your users and the number of
locations at which you chose to install GGC nodes. (“Google Global Cache Beta,” 2011)
Google’s caching technology allowed retail ISPs to more easily and cheaply serve Google
content (including YouTube video) and search results to their subscribers without content having
to travel beyond the ISP’s network. A request for Google content is first sent to a Google front
end server, which determines whether the request can be handled locally by a Google Global
Cache server located at an ISP, or must be handled by a Google data server in another location.
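The routing decision described above can be sketched in a few lines. This is an illustrative toy, not Google's implementation, which is proprietary and far more complex; all names and data structures here are hypothetical.

```python
def route_request(client_isp, ggc_contents, content_id):
    """Decide where a content request is served.

    ggc_contents maps an ISP to the set of content IDs held by the
    Google Global Cache node inside that ISP's network (a hypothetical
    model of the front end's view of cache state).
    """
    if content_id in ggc_contents.get(client_isp, set()):
        # Cache hit: served from inside the ISP's own network, so the
        # request never crosses a transit provider.
        return ("ggc", client_isp)
    # Cache miss: fall back to a Google data centre over the wider Internet.
    return ("datacentre", "nearest")

caches = {"ExampleISP": {"yt-video-123"}}
print(route_request("ExampleISP", caches, "yt-video-123"))  # ('ggc', 'ExampleISP')
print(route_request("ExampleISP", caches, "yt-video-999"))  # ('datacentre', 'nearest')
```

The sketch makes the affordance concrete: whenever the first branch is taken, the request is resolved without touching a transit network, which is precisely the gatekeeping-mitigation effect discussed in this chapter.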
Google first began promoting the GGC program in 2008 to Latin American retail ISPs (Guzmán,
2008); by late-2010, caching servers were being offered to retail ISPs in Kenya and Uganda
(Hersman, 2011b). It appears caching servers were made available in regions more distant from
Google’s large data centres in 2010, prior to large-scale growth of caching in North America
and Europe (Cowan, 2013).
GGC could improve retail ISP network efficiency significantly. An indication of the GGC’s
attractiveness to retail ISPs is that, as mentioned above, GGCs handled roughly 60% of all
Google traffic by 2015 (Volmer, 2015). While advantageous to retail ISPs from both a bandwidth and
quality of service perspective, caching also allowed Google to serve its users without, in many
cases, utilising Internet backbone transit providers.
Due to Google’s confidentially agreements with ISPs, it is challenging to determine the exact
number or location of GGC servers. However, Calder et al. (2013) at the University of Southern
California Networked Systems Laboratory engaged in an extensive study of Google’s server
infrastructure from 2012 to 2016, revealing the locations of several thousand Google servers at
various locations. Calder et al. analysed and filtered millions of IP addresses, and utilising a
technique called client-centric geolocation (CCG), made reasonably accurate identifications of
each IP address’ geographic location. Calder et al. had identified over 25,000 unique IP
addresses associated with Google front end servers, across 1632 unique geographic locations, as
of October 2013.
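The intuition behind client-centric geolocation can be illustrated with a toy sketch: estimate a front-end server's position from the positions of the clients directed to it, using a per-axis median so that a handful of mislocated clients do not drag the estimate away. This is my simplification for illustration only; Calder et al.'s actual method involves far more careful measurement and filtering.

```python
from statistics import median

def ccg_estimate(client_locations):
    """Estimate a server's (lat, lon) as the per-axis median of the
    (lat, lon) positions of the clients directed to that server."""
    lats = [lat for lat, _ in client_locations]
    lons = [lon for _, lon in client_locations]
    return (median(lats), median(lons))

# Clients clustered around one city, plus one badly geolocated outlier
# that the median largely ignores.
clients = [(43.7, -79.4), (43.6, -79.5), (43.8, -79.3), (43.7, -79.4), (10.0, 100.0)]
print(ccg_estimate(clients))  # (43.7, -79.4)
```

A mean would be pulled far off course by the single outlier; the median's robustness is what makes this kind of aggregate estimate "reasonably accurate" in the sense described above.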
Calder et al. published an analysis of their server mapping beginning in November 2012, along
with raw data for the entire period of their study. From November 2012 to August 2013, the
number of Google’s in-service IPs increased by approximately 700% (Calder et al., 2013).
Ninety-five percent of this growth was outside of Google-controlled networks. This increase
represented both new capacity in some areas that previously had none, such as Vietnam and
Thailand, and greater server capacity at existing locations.
My analysis of Google’s server footprint, as described in Chapter 3, identified 25,043 discrete IP
addresses used by Google servers as of October 28 2013, located in at least 1631 discrete
geographic locations worldwide. Of these locations, 82 were within private peering sites, and 69
within public peering locations; in nearly all instances, the private and public sites were the same. Of
the 1631 identified discrete locations, I have identified 1511 as being located at retail Internet
service providers. The servers located at retail ISPs were, I concluded, part of the Google Global
Cache program, which provided edge caching servers to ISPs. The prevalence of ISP server
locations indicates the importance to the company of placing significant server resources much
closer to users and within ISP networks.
5.5.3.1 Relevant affordances of peering and edge caching
The Google Global Cache program, which placed Google servers inside over 1000 retail ISP
networks worldwide, created several affordances for the company. From a technical perspective,
the location of Google servers with direct access to retail ISP networks allowed Google content
and services to be more immediately available to end users, increasing advertising views and
therefore revenue and profit.
GGC had direct benefits for retail ISPs as well. First, bandwidth costs for ISPs were reduced, as
video content from YouTube, for instance, could more frequently be served from within an ISP
network without the need to utilise the public Internet. As well, quality of service for Google
services, popular with ISP subscribers, was also increased, again at no cost to the ISP. In many
cases, traffic from users accessing Google services might never leave local ISP networks, and
never transit the global Internet.
The hosting of GGC by retail ISPs thus created a series of symbiotic relationships. As I argue in
greater detail in Chapter 6, these relationships were the basis for a network of shared interest, one
that made ISP gatekeeping of Google content and services significantly less likely.
Figure 5.4: Google server locations worldwide, October 28, 2013 Some locations such as those in Northern Canada appear inaccurate. This image was captured October 12, 2016 from https://www.google.com/maps/d/viewer?mid=1nXSNhvDo5jaSS1h9gFuqQnRNIqg .
Figure by John Harris Stevenson. Copyright © 2016 John Harris Stevenson. Base map data Copyright © 2016 Google.
Figure 5.5: North American Google server locations, October 28, 2013 This image was captured October 12, 2016 from https://www.google.com/maps/d/viewer?mid=1nXSNhvDo5jaSS1h9gFuqQnRNIqg .
Figure by John Harris Stevenson. Copyright © 2016 John Harris Stevenson. Base map data Copyright © 2016 Google.
5.6 Google as hyper giant
Craig Labovitz, then of Arbor Networks, defined a new type of network entity in 2010 that he
suggested transcended the traditional “carrier versus content” dichotomy of the net neutrality
debate: the hyper giant (Silbey, 2012). Labovitz identified the hyper giant as a content provider
that made massive investments in bandwidth, storage, and computing capacity to maximise
efficiencies and performance. Writes Labovitz (2010b):
[T]he future of the Internet is being decided today by billions of dollars of investments in
data centers, backbone infrastructure and alliances / contracts with other content owners
and last-mile providers. And increasingly, Hyper Giant strategies are coalescing around
similar infrastructure investments as the giants compete on content, capacity (bandwidth,
storage, compute), cost and performance. In other words, Google is not unique in their
infrastructure ambitions.
5.7 Chapter summary
In this chapter I have discussed Google’s history in the latter half of the 2000s, a period that saw
substantial diversification of Google’s service offerings and growth in its infrastructure. I
discussed the principal influences on infrastructure expansion as Google Video, YouTube, VoIP,
and Google web applications were deployed and monetised, and the growth and adoption of the
Android operating system. I discussed Google’s efforts to promote the notion of network
neutrality during this period, its joint policy statement with Verizon, and the company’s
subsequent silence on the issue. Finally, I detailed Google’s infrastructure as it existed in
October 2013, illustrated using an interactive map of my own creation, and identified Google
with Labovitz’s notion of the hyper giant.
In Chapter 6, I further explore the notion of Labovitz’s hyper giant, a useful label for Google and
other network entities, but one that I argue is also inadequate. In the next chapter I draw on actor-
network theory and the work of Ciborra to discuss the transformation of Google in the 2000s
from a content provider, to a hyper giant, and then into a new class of network and policy entity:
the platform hybrid, characterised by its “chameleonic” business practices.
6 The platform hybrid
In Chapters 4 and 5, I presented a history of Google from its founding through the early-2010s,
describing the growth of its services and infrastructure and the company’s engagement with the
policy process, specifically the formation of network neutrality policy in the United States.
In this chapter, I explore the trajectory of the company’s development in the context of the
analytical frameworks that I discussed in Chapter Two. My objective is to illuminate the process
by which Google transformed from what Wu described as an Internet content provider into what
I call the platform hybrid, an entity to which a weakening of network neutrality was much less of
a concern. To accomplish this, I map the process of Google’s organisational transformation and
its engagement with various network neutrality policy stakeholders using actor–network theory
and drawing on the work of Claudio Ciborra.
I begin by illustrating the changes to Google’s infrastructure during the period of my study
through an extension of Wu’s 2010 model of how Google reaches its users.
6.1 Extending Wu and unpacking The Cloud
Before analysing the changes to Google’s infrastructure and operations in the light of Ciborra
and actor-networks, in this section I describe these changes through an extension of Wu’s 2010
model of how Google connects to its users. Here I identify technological elements and network
entities in two moments of Google’s history, 2004 and 2013. I believe Wu’s model is useful for
understanding potential ISP gatekeeping, providing a graphical illustration of relationships
among Google and other network and commercial entities. I briefly highlight key changes to
Google’s infrastructure between these years, with a focus on detailing the affordances various
elements provided Google within the context of mitigation of potential ISP gatekeeping.
Figure 6.1: Wu’s 2010 Model: How Google reaches customers, circa 2003 From Wu (2010), The Master Switch: The Rise and Fall of Information Empires, page 284. Copyright © 2010 by Tim Wu. Used with permission.
Wu’s 2010 model of how Google reaches customers (Figure 6.1) is clear and useful, but it was
not intended to provide a detailed description of Google’s infrastructure in the early-2010s which
is required by my research. It is necessary to unbundle Wu’s network components and subject
them to a more detailed process of identification, untangling ideas of cables and fibre lines,
and deconstructing the Internet cloud (no symbol has likely obscured more detail in the history
of network design).
Although Wu presented his model in 2010, it is most accurately a description of the company’s
infrastructure of some years earlier, prior to the construction of the company’s content delivery
network after the acquisition of YouTube in 2006. Google’s connections to users in the mid-
2000s depended on connections through both transit ISPs (the Internet cloud of Figure 6.1) and
retail ISPs (the “cable or telephone carrier” line in Figure 6.1).
Figure 6.2: How Google reaches users, 2003 Figure by John Harris Stevenson.
A number of other network entities, not indicated in Wu’s graphical model, are also important to
consider. As I described in Chapter 5, beginning in the early-2000s Google peered with
numerous other network entities at various Internet exchange points worldwide. It is at these
locations that Google connected to both transit and retail ISPs. Figure 6.2 depicts these other
connections between Google and ISPs. Google had various means to connect to ISPs, providing
additional potential locations for gatekeeping, particularly at IXP connections (R. Miller,
2015).
As I detailed in the last chapter, by 2013 Google’s relationships with various network entities
had become more complex and nuanced. In many cases, new connections placed Google servers
much closer to users. In Figure 6.3, we can see the location of Google points of presence at IXPs,
as well as Google Global Cache servers at retail ISPs. Google had expanded the number and size
of its data centres, and had connected them with a proprietary network (indicated by the heavy
black lines in the figure). Finally, Google had created its own retail ISP offering, Google Fiber,
directly reaching a relatively limited number of customers entirely through its own network
resources.
As with Wu’s model, my extended model for 2013 indicates a number of locations for potential
ISP gatekeeping. However, as Figure 6.3 illustrates, we can also clearly see the integration of
Google into other network entities—IXPs, transit ISPs, and retail ISPs—through POPs and edge
caching.
In the next section I begin an exploration of this process of change through the work of Claudio
Ciborra.
Figure 6.3: How Google reaches users, 2013
Figure by John Harris Stevenson.
6.2 Ciborra: Google’s technological transformations
Central to this discussion will be the role of infrastructure in Google’s evolution. Here I turn to
the work of Ciborra, whose writing on organisation and technological change reflects a
recognition that infrastructure, once built, does not remain static. It is critical, Ciborra argues,
that infrastructure can and is modified; it is built upon, upgraded, broken down, and evolved.
And even without a change in its physical or technological composition or components,
infrastructure can and is used in different ways over time, by many different actors. Ciborra
dismisses static notions of infrastructure as impossible idealisations; a technical system can only
embody conscious strategy partially, temporarily, and imperfectly.
In Chapter 2 I presented several of Ciborra’s central concepts of change and adaptation, and in
this chapter I argue that they provide powerful insights into understanding Google’s history.
6.2.1 Drift, embedded bricolage, and platform organisation
Ciborra’s notion of dérive (drift), which I presented in Chapter 2, is the process by which a
technology or system is built for one purpose but used for another, or has some effect or
influence unplanned for and perhaps even unknown. Ciborra states that drift may be slight or
profound in its impact. We are surrounded by examples of drift; one example is the change in use
of wireless mobile telephones in the 2000s, a technology developed for voice communications
which came to be used by many primarily for text communication (Richmond, 2012). With various
forms of texting being lower cost and often more useful, texting has superseded voice as the
most utilised service for this class of device for many users (Lenhart, 2012). We can therefore
think of mobile phones as having drifted into use as textual communicators.
Ciborra’s concept of dérive is related to his notion of bricolage, which I also described in
Chapter Two. Bricolage is the process by which those interacting with technology and systems
will modify them opportunistically in order for the platform to meet their needs. Such
modification is nearly always accomplished outside of a formal product development lifecycle,
and without the cooperation or approval of technical experts or authorities who nominally
control the platform in question. Ciborra argues that whatever the basis for the construction and
deployment of a technology, its use typically changes over time. Users find ways to exploit the
technology that were not originally intended, and its creators and maintainers are then forced to
respond to these changes.
The impact of technological development can only be partially predicted. A more complete view
of infrastructure development as described by Ciborra is one that accepts that technological
development is not only a top-down process in response to strategic planning, but a dynamic
interaction among various stakeholders, as well as social and technological contexts. As Ciborra
suggests, technology change is “almost outside anybody’s control” (Ciborra, 1997, p. 76).
The behaviour of Google’s leadership and management during the 2000s as described in the past
two chapters seems to reflect both tacit and explicit acceptance of Ciborra’s processes of dérive
and bricolage. Ciborra argues that technology companies survive when they are platform
organisations, able to change their products and identity with agility in response to their
environments. Google’s management style as modelled by the company’s founders and senior
leadership was clearly informed by their experiences with product and software development
lifecycles and organisational management in both university and corporate settings, drawing
them into alignment with many of Ciborra’s conclusions concerning the characteristics required
for a technology company to both survive and thrive.
Eric Schmidt, the CEO of Google from 2001 to 2011, started his career at Sun Microsystems as
a software manager and director of software engineering, and then became the CEO of Novell in the
1990s (Walker, 2012). Coming to Google in 2001, Schmidt found a corporate culture more akin
to a start-up than the more rigid technological bureaucracies of Sun and Novell. Perhaps
informed by the failure of Novell and its 2001 merger with Cambridge Technology Partners,
Schmidt seemed to reject much of the structured planning common in corporate environments
once he joined Google. In their 2014 book How Google Works, Schmidt and former Google
senior vice-president of products Jonathan Rosenberg describe how they responded to the request
by their board to create a “traditional, MBA-style business plan” in 2003.
We knew that the Google patient would reject a formal, regimented plan as if it were an
alien organ transplanted into its body, which in many respects it would be. As
experienced business executives, we had joined Google with the idea of bringing “adult
supervision” to a chaotic place. But by the summer of 2003 we had been at the company
long enough to realize that it was run differently than most any other place, with
employees who were uniquely empowered, and operating in a new, rapidly evolving
industry. We understood the dynamics of our new industry enough to get that the way to
fend off Microsoft was continuous product excellence, yet we also understood that the
best way to achieve that excellence was not via a prescribed business plan, but rather by
hiring the very best engineers we could and then getting out of the way. We understood
that our founders intuitively grasped how to lead in this new era, but they—by their own
admission—didn’t know how to build a company to the scale where it could achieve their
ambitious vision. They were great leaders of computer scientists, but we needed more
than computer scientists to create a great company. (2014, sec. 116)
Much of Schmidt and Rosenberg’s account may well be corporate
mythmaking, but it is clear that many product development decisions at Google during the 2000s
were made relatively quickly, without the rigorous analysis typical at established technology
companies. Schmidt and Rosenberg state that they intended to build a company that eschewed
process and hierarchy, and that a “traditional” business plan would not “address the strategic
dynamics of this brand new industry” (2014, sec. 138). “When things are running perfectly
smoothly, with people and boxes on charts enjoying a one-to-one relationship, then the processes
and infrastructure have caught up to the business. This is a bad thing” (Schmidt et al., 2014, sec.
2605). Citing his experience as the CEO of Novell, Schmidt suggests that when a company is
running like a “well-oiled machine”, the “new-great-product cupboard” is bare; “The business
should always be outrunning the processes, so chaos is right where you want to be” (2014, sec.
2605).
This sort of discourse aligns strongly with Ciborra’s notion of the platform organisation, as does
Schmidt’s own definition of “platform” as “a base of technologies or infrastructure on which
additional technologies, processes, or services can be built” (2014, sec. 3587).
6.2.2 The “pasted-up” infrastructure
Google’s operations in the 2000s align with Ciborra’s notions in another important way. As we
saw in my earlier descriptions of the history of Google’s infrastructure and services, Google’s
technical platforms were constructed in such a way that new elements were layered on and
integrated with old infrastructure. As I discussed in Chapter Two, Bowker (1994), Star and
Ruhleder (1996), Ciborra (1996), and others have explored this characteristic of infrastructure
development, what Ciborra calls “pasted-up”.
While the purpose and functioning of Google’s infrastructure have changed significantly over time, most notably in scope and capacity, there does not seem to have been an attempt at a
complete replacement of Google’s systems. Rather, Google’s initial infrastructure, designed in
the late-1990s to support the indexing of websites and the delivery of search results, provided the
basis for a sufficiently powerful and robust platform for more complex and technically
demanding web applications, such as Gmail (2004) and Google Maps (2005). As we saw in
Chapter 5, in response to the technical requirements of both existing and emerging web
applications, Google expanded its infrastructure in the mid-2000s to extend technical
specifications—low latency, high speed, high reliability—and create new capacities, particularly
in data storage. This expanded set of systems provided a minimum basis for the delivery of high
bandwidth video, first for Google Video and then YouTube, when the start-up was acquired in
late-2006. Video storage and distribution, in turn, required that Google expand its content
delivery network to include content caching closer to end users (edge caching) and asymmetrical
peering with retail ISPs in the late-2000s, providing another set of affordances I explore later in
this chapter.
At no point during this process was Google’s infrastructure replaced wholesale. Instead, it was
built upon, with the preceding system serving as the platform and basis for the next. Ciborra argued
that technologies change as they interact with existing technological and social contexts. In the
case of Google’s systems, the environments in which they existed provided extensive feedback,
influencing the technological adaptation to respond to changing circumstances.
As I have discussed in the past two chapters, Google leadership not only allowed the company’s
infrastructure to drift into new uses, it appears to have encouraged it. The company’s early
decision to build systems using consumer-grade, off-the-shelf server components (called “white
box” components in the technology industry), and the use of storage containers to create a
standard server collection, created an extremely flexible toolset for system design. This toolset
was one in which various components could be easily interchanged, reused, and repurposed in
response to changing requirements.
The drifting of Google’s infrastructure provided the company with various affordances. Some
were planned, some not, but they changed the relationship between Google and other network
entities. Specifically, in the latter half of the 2000s Google found itself crafting business
partnerships with many (if not most) of the world’s Internet service providers, some of whom
intervened in network neutrality policy processes in their respective jurisdictions. As I discussed
briefly in Chapters Two and Five, we can recognize Google’s infrastructure as an artefact on which are inscribed the affordances Google provided to retail ISPs to serve the company’s content and services more efficiently and at lower cost, resulting in a beneficial partnership between entities on opposite sides of the network neutrality debate. I explore the formation of these actor-networks in later sections.
6.2.3 Technological stages
In previous chapters I identified three broad periods of Google’s history, in the style of Ciborra’s various case studies of technology firms, most notably Olivetti. Ciborra (1996) describes a
process of “identity building across discontinuities”, characterised by the company managing
various technologies’ life cycles over time while building and rebuilding the company’s
identity—mission, culture, products and so on—in each “technological stage”. Ciborra
characterises each of these “stages” as having a prevailing technology or approach.
I think it is useful to examine Google’s history through a similar set of technological stages:
Google as a content provider (1998 to 2005), Google transforming into a multi-service content
and network provider (2005-2010), and Google’s maturation as what I call a platform hybrid
(beginning in 2010). These stages are presented in Table 6.1.
Each of these stages can also be characterised by a different approach to Internet governance
issues, a differing style of engagement with the policy process, and varying attempts to create
actor-networks of shared interest around these policies. Of course, boundaries between these
stages are by no means clearly delineated, nor can they be; it is more helpful to imagine Google
engaged in a series of overlapping and sometimes conflicting activities, approaches and
conceptions which can and do collide. Drawing on Ciborra’s concept of shih (as described in
Chapter Two) we can also see the differing notions of Google’s own core identity as a company
changing through these stages.
A meta-narrative implicit in my analysis is the transformation of the policy space around issues
of network neutrality. In this arena, the public, policy makers, technology industry actors, ISPs
and telecommunications companies struggled to shape network neutrality policy and regulation.
My research focus is on Google’s specific strategies on network neutrality, but these actors
influenced Google’s approach, and the resulting strategy inscribes Google’s interests in network
neutrality policy, particularly in the United States, but in other jurisdictions as well.
Drawing on the work of Latour, the following sections will posit two actor-networks, protean
and overlapping, through three stages of their development: emergence, development, and
stabilisation. The first actor-network I suggest aligned around an interest in network neutrality,
while the second aligned actors around infrastructure affordances and shared business objectives.
Table 6.1: Development of Google infrastructure, 1998 to 2013

1998 to 2005: Google as content provider
+ Data storage: co-location facilities; 2003: first data centre (Douglas County, Georgia).
+ Network, peering and caching: leased transit network access.

2005 to 2010: Google as hyper giant
+ Data storage: six large-scale data centres.
+ Network, peering and caching: dark fibre networks purchased; 2008: Google Global Cache deployed; 2009: peering with retail ISPs.

2010 to 2013: Google as platform hybrid
+ Data storage: additional large-scale data centres announced or deployed.
+ Network, peering and caching: expansion of network reach and capacity; 2012: internal network completely re-designed to run under OpenFlow, with substantial efficiency improvements; 2012: submarine cable consortium to Asia; peering with additional network entities; 2014: Google peers with more entities worldwide than any other network entity; asymmetrical peering with retail ISPs.
6.3 Forming, reforming actor-networks
In this chapter I have discussed numerous aspects of the evolution of Google’s services,
infrastructure and approach to policy, from the company’s founding to the early-2010s. I believe
that the company can be thought of as a platform organisation as described by Ciborra, one that
has seen a series of organisational transformations, as some services remain relatively static
while others are renewed. A central concern of my research is the processes of these
transformations, the understanding of which will provide important insights into the formation of
Google’s strategy on network neutrality.
Complementing Ciborra’s work on the ever-changing reinvention of the platform organisation is
an approach to technological change that can help us to describe the actors that we can consider
to be influencing such changes, as well as the specific processes that bring competing actors in
alignment with one another. It is here that I turn to Latour (1991, 1996, 2005), Callon (1986b),
and Law’s (1992) actor–network theory to explore aspects of Google’s technical transformation,
its processes of strategy formation, and its relationships with other companies and organizations that engaged in the network neutrality policy process. An ANT perspective is useful in studying
the process by which various actors, human and non-human, align to form a network of shared
interests around an issue. In the following section I draw on ANT to examine Google’s activities
in the 2000s as attempts to create, manage and sustain actor-networks around network neutrality
policy formation in ways that were both planned and unplanned.
Actor-network theory sees actors as active, defined by their actions. Networks are relationships
among actors. Important to ANT is the notion that networks are constituted by both human and non-human actors: people as well as things, including technological artefacts, which can participate in networks of common interest with other actors.
While it may initially seem problematic to imagine non-human actors (such as aspects of
Google’s infrastructure) having agency, it is easier to understand their place in a network when
considering the requirements that they might demand in order to function, and the affordances
they might provide to other actors (Callon, 1991; Latour, 1996).
The actor-networks I describe in the following sections pass through the stages of emergence,
development, and stabilisation (Stalder, 1997). Actor-networks emerge from other actor-
networks; they are established by actors, and not cut from whole cloth. In fact, an actor cannot
exist without an existing network to which it belongs. Identifying the birth of a network is
necessary, but also arbitrary, since networks emerge out of other, similar alignments. The
impetus for such an emergence may be a sudden change in circumstances, or a subtle shift over
time.
Networks emerge as intermediaries align actors to the network’s interests. Intermediaries are put
into play by network actors, circulating among actors and transporting meaning, reflecting the
attempts by network actors to grow and change (Callon, 1981; Latour, 1991). This is the process
of translation, as problems are defined, actors are enrolled, roles defined, and primary actors
begin to represent more passive actors (Callon, 1986a). Networks are shaped as more and more
actors are aligned. Translation is a one-way process, with one actor acting on another.
Latour and Callon argue that actors’ interests are flexible, and networks can take one (or several)
of many forms. Alignment is influenced by, and results in, the creation of artefacts that ANT
terms inscriptions (Walsham, 1997).
The development of a network will take one of two paths, as its actors either diverge or converge.
As networks develop, the process of translation can become more challenging, as each new actor
is already aligned with the differing goals of other actor-networks. Networks develop through a
process of mutual shaping among new actors.
The strength of a network depends on the coordination among its actors. Actor-networks
stabilise when they are heterogeneous and successful, and the actors who constitute the network
could not exist in their current form without the network. When network actors diverge, the
network is weakened and may form the basis for a new actor-network.
Inscriptions are both the result of the creation of a network, and serve to reinforce and sustain the
network’s existence. Inscriptions capture and present certain values, reflecting
characteristics that actors wish to advance through the network. Translation implies that the
network incorporates a variety of interests from various actors and becomes a structure of
shared—though also changed and changing—values.
In the arena of network neutrality policy formation during the 2000s, we can identify numerous
actors, both human and non-human. Many human and institutional actors can be identified:
policy makers and legislators, including the Federal Communications Commission and members
of the United States Congress; public interest groups, such as the Electronic Frontier Foundation
and Free Press; Internet service providers, including Verizon, Comcast, and other
telecommunications companies; and Internet companies, most prominently Google. Members of
the general public were also human actors. It may also be useful at some points in this analysis to
consider the policy environment as a whole—a black box of citizens, policy makers, legislation,
regulators, and regulation—as a single actor, influencing human actors (policy makers, the
public, and public interest groups).
Google’s infrastructure, along with other technological systems (including those of retail ISPs
and network transit providers) were also non-human actors, creating affordances for other actors,
requesting compliance on some matters, and influencing and interacting with actors within
Google itself.
As with the policy environment, it may in some cases be useful to conceptualise Google as a
black box actor from an ANT perspective, the internal processes of which are for the most part
unknown and unknowable. Google acting as a single actor in the network neutrality policy arena
can be seen as a contingent achievement, through alignment of its constitutive elements.
However, we can certainly reliably describe at least some of the various actors within the
company itself—human actors such as the corporation’s strategic leadership, lobbyists, and
technologists—each with distinct interests, as well as non-human actors, principally Google’s
infrastructure and services.
From the perspective of actor–network theory, strategy enacted by a policy actor (or actors)
within Google is determined by many factors, including their political influence, the strength of
their interests, their position within the Internet industry, and their efforts to influence change.
Paraphrasing Bijker et al. (1989), Gao writes that “[t]he human interpretation of the interests
embedded in telecommunications technology and market is flexible, just as human actor interests
are adaptable” (2005, p. 258). I would suggest that the same can be said specifically of Google’s
infrastructure in relation to Google’s strategy during the 2000s. Gao draws on various writers to
argue that social and technological contexts are key to allowing for “multiple inscriptions and
representations of contexts” (2005, p. 258). Given this, we can easily assume multiple
inscriptions and representations of contexts from Google, and other key policy actors in the
network neutrality policy process.
As I discuss below, I consider changes to Google’s network neutrality strategy as a process,
taking place over several stages of the company’s history, with varying foci reflecting changing
interests and alignments of both human and non-human actors. Actor–network theory requires
that we examine the process of interest alignment which results in the formation of a network. In
the following sections I present the narrative of the development of Google’s network neutrality
strategy as a process of actor-network formation and dissolution, identifying the various actors
and their interests, and how they struggled to inscribe their interests into the policy on network
neutrality.
As Holmström and Stalder suggest, “there is always more than one actor-network” (2001, p.
202). In the following sections I suggest the existence of two fairly distinct actor-networks,
among others that may have existed and could certainly be conceptualised in an exploration of
Google’s transformation in the 2000s.
Figure 6.4 shows a simplified, high-level model of some of the relationships within the actor-
networks I describe. It presents only a few of the several actors I have identified, and suggests
that the creation of Google’s network neutrality strategy is the result of ongoing tension among a
number of human and non-human actors, even within Google itself. I argue that over time, the
interests of various actors shifted and varied, and with them the approach to network neutrality
strategy.
6.3.1 Actor-networks and infrastructure
In this chapter I have begun to discuss the influence of Google’s infrastructure on network
neutrality policy-making through three technological periods. In the sections that follow, I flesh
out this narrative, exploring the formation of the actor-networks that are useful to our
understanding of Google’s behaviour in the 2000s.
I argue that Google’s infrastructure is not a stable entity or set of entities, but something formed
by dynamic relationships with other actors, internal and external to Google. Cordella (2010)
describes these processes of infrastructure change as “performative”, shaped by relationships.
Google’s infrastructure was shaped by countless other entities. Writes Cordella:
[T]he dynamic interplay between organizations and information technology is the
condition that has to be analysed in order to gain a better understanding of the effects of
information technology adoption in organizational settings… Information infrastructures
are embedded in, and defined by, this interplay and lead to the exploration here of the
concept of information infrastructure in action… [T]he core proposition is that the role,
effects, and implications of information technologies cannot be defined if they are not
considered in terms of the emergent phenomenon, the outcome of the contingent and
contextual interplay between information technology and its users in the organization.
(2010, p. 3)
I would suggest that the narrative of Google’s infrastructure within the context of the actor-
networks I describe is a cycle during which we can see development, affordance, bricolage, and
drift at each technological stage of Google’s history.
6.3.2 Neutrality-focused actor-networks
In Chapter 5, I discussed the efforts of Google to promote network neutrality in the policy
debates around Internet traffic management in the 2000s. Here I argue that it is useful to think of
Google’s activities during this period as manifestations of the company’s relationship with actor-
networks of shared interest in an “open Internet”, and the processes of development, translation,
enrollment, and stabilisation, the success of which is somewhat problematic.
I label the actor-networks I describe in this section as neutrality-focused, knowing full well that
actor-networks shift and change their objectives over time, and that any labelling is questionable.
However, the notion of neutrality-focused actor-networks is a useful model in understanding
Google’s attempts to form alliances with public interest groups, policy makers, Google users,
advertisers, and other Internet companies around issues related to Internet traffic management
and retail ISP gatekeeping.
The mid-2000s saw an emergence of interest in network neutrality as an issue of public concern;
Wu first discussed it in 2003, and American media advocacy organisation Free Press had raised
the issue in 2005. I suggest that although Google would become a prominent actor in a public
neutrality-focused actor-network, it did not originate this actor-network, and in fact it was other
actors—academics and public interest groups—who likely translated Google to the network
neutrality position.
I suggest that a neutrality-focused actor-network formed within Google. Google’s commitment
to network neutrality appears to have also been driven at the senior leadership level of the
company, with then-CEO Eric Schmidt personally directing Google’s position. We have no
specific knowledge of the translation process around network neutrality within Google, but
alignment arose among senior company leadership and government relations staff, including
former Verizon lawyer Richard Whitt, who joined Google in 2007.
Google’s infrastructure was also an actor in this network formation. I have written above
detailing the affordances presented by this infrastructure, and drawn on Ciborra’s work to
describe various processes by which that infrastructure might shift and change over time in
response to various factors. Google’s infrastructure provided the company important affordances,
but also required a sustaining policy and network governance environment in which it could be
developed and successful: an open Internet in which Google could connect to both transit and
retail ISP networks, and ISPs would not or could not gatekeep content providers’ access to
users. Google’s infrastructure during the mid-2000s included a number of centralised data
centres connected to one another and to retail ISP networks primarily through transit network
providers. With so much reliance on third-party networks, Google’s infrastructure restricted the
company’s strategic options. Google’s infrastructure, as a non-human actor, therefore enrolled
company leadership and lobbyists as mediators to support policy objectives around network
neutrality and peering.
It is at this point that the description of Google’s policy objectives included for the first time an
“open Internet”, an artefact indicating an actor-network formed within the company. It was this
actor-network that was the basis for Google’s participation in another larger and emerging actor-
network, this one network neutrality-focused, in which other actors were already enrolled outside
the company. Google’s actor-network converged with the “public” neutrality-focused actor-
network, as illustrated in Figure 6.4.
A process of enrollment unfolded, one of both public and private efforts to recruit other actors
who might share the network’s objectives on network neutrality, some of whom were already
aligned in other actor-networks of somewhat differing orientation. A key to translation was likely
Google’s relationships with public interest groups, some of which the company funded, that had
taken up network neutrality and issues similar to it and actively lobbied policy makers. As the
network developed, Google circulated intermediaries inside and outside the network, publicising
its position and lobbying on network neutrality, beginning in 2007.
During much of this period, Google’s position on network neutrality aligned strongly with that of
Wu, and the company’s identity was centred on the notion of the content provider as defined by
Wu (2010), in contrast with Internet service providers who controlled and provisioned network
services. As we saw in Chapter 2, Wu conceived of network neutrality not first as a matter of
human rights, concerned with free and open communications, but as a “network design
principle” that would make arbitrary discrimination against content provider data impossible.
Google’s culture, grounded in academia and engineering, appeared to find Wu’s definition
attractive. But as I discussed in the preceding chapters, and as Wu identified in 2010, Google’s
participation in network neutrality policy debates in the 2000s was clearly self-interested. With
no direct means to monetize users or access other content providers except through the public
Internet, the company would have been significantly disadvantaged by two-sided pricing or
preferential bandwidth arrangements imposed by then-concentrated ISPs. Wu’s emphasis on the
“design principle” also aligned well with related work by Lessig during the same period; code
might embody legal principles (Lessig, 1999), as could hardware (Wu, 2010).
It is important to reiterate that Google did not originate the public neutrality-focused actor-
network; rather, it arose as policy makers, public interest and consumer groups, and other
Internet companies aligned on aspects of network neutrality policy. Some American legislators
supported network neutrality regulation or legislation, though many were opposed, making
actual passage of legislation impossible. Some Internet companies took a strong stance in favour
of net neutrality, while others, including Facebook, did not.
As we saw in Chapter 5, Google’s participation in the neutrality-focused actor-network created
several artefacts in the late-2000s, including public statements, blog posts, and appearances
before regulatory bodies in several jurisdictions.
North American retail ISPs, not surprisingly, did not initially enrol in the neutrality-focused
actor-network in the mid-2000s. Fresh from acquisitions and mergers in the 1990s and 2000s that
created a consolidated market in the United States for Internet access, the ISPs were concerned
with maximising rents, providing competitive services, and retaining an environment of
advantageous regulation. As AT&T CEO Edward Whitacre indicated in 2005, some retail ISPs
wished Internet access to be a two-sided market in which rents were applied to both the retail and
the transit side of the ISP network, in a way not dissimilar to the market for cable television
programming.
This neutrality-focused actor-network did not remain stable. First, the success of the network in
the late-2000s depended on the translation of network neutrality positions to policy makers and
regulators. While the FCC began a process of positive rule-making and regulation, legislative
encoding of network neutrality principles was not realised. Despite Democratic majorities in
both the House and Senate, and the introduction of pro-network neutrality legislation, none of the
bills were passed into law.
Second, while the notion of “network neutrality” appears to have become stable for most of the
actor-network’s actors, Google’s interest in network neutrality had become fluid by 2010. We
saw in Chapter 5 that Google’s position on neutrality shifted between 2005 and 2010, moving
toward what other actors might consider a position of compromise with retail ISPs. It is with the
Google-Verizon statements on network neutrality that the extent of the divergence among
Google and other actors becomes most apparent.
I discuss the creation of another actor-network, this one focused on the technical affordances
provided by Google’s systems, in the next section. As for the neutrality-focused actor-network, it
would weaken as Google’s interest in neutrality shifted while the company scaled up its infrastructure and began business partnerships with retail ISPs (edge caching) and wireless carriers (Android). It would appear that Google’s leadership at this time desired to both develop
the affordance-focused actor-network while maintaining the neutrality-focused actor-network,
attempting to enrol retail ISPs in both networks, Verizon being the most prominent example.
This appears to have been impossible.
Other participants in the neutrality-focused actor-network, unburdened by complex and
developing business relationships with retail ISPs and carriers, and less motivated to
compromise, were unmoved. Ironically, within the neutrality-focused actor-network Google had
worked so hard to sustain, the company suddenly became marginal with the release of the Google-
Verizon policy statements. As I discuss later in this chapter, Google was transforming into a
policy and network actor no longer seriously threatened by potential retail ISP gatekeeping.
I do not argue that the company’s participation in neutrality-focused actor-networks was a
“failure” for Google. In fact, Google’s interest in the neutrality-focused actor-network is not fully clear, and we cannot be certain of all of the strategic imperatives behind its involvement. A
simplistic reading of Google’s motivations during this period is that Google desired only strong
network neutrality regulation or legislation. However, as I have discussed above, a number of
other outcomes would also be beneficial to the company.
Google would, for instance, gain from increased negative public perception of Internet service
providers, including notions that retail ISPs were anti-consumer, failed to innovate, or did not
provide value for cost. Google might specifically benefit in the latter half of the 2000s from
negative perceptions of retail ISPs as wireless telephone providers, as Google would soon
become a participant in this market through its Android operating system, which would
necessitate Google making business arrangements with ISPs.
Google also benefited from public and political support arising from the network neutrality
discourse that was strongly critical of the market position of retail ISPs; this could provide
Google with leverage in commercial negotiations, while damaging various ISP service offerings (video-on-demand, VoIP, storage, and so on) that were in direct competition with many of Google’s
products and services. As well, the various relationships between Google and other policy actors
created in the pursuit of network neutrality might well have proven helpful to the company.
Figure 6.4: Google’s participation in late-2000s neutrality-focused actor-network
Figure by John Harris Stevenson.
6.3.3 An affordance-focused actor-network
During the emergence of the neutrality-focused actor-network I have described above, we can
discern that another actor-network was also emerging, one centred on the technical and business
affordances that Google could provide retail ISPs and wireless providers. I call this actor-
network affordance-focused.
As Google worked with actors external to the company in the neutrality-focused actor-network,
Google’s infrastructure continued to evolve. Network asset purchases and construction, increased
peering, and an expanded server infrastructure provided new capacities.
Google’s launch of an edge caching program in 2008 was advantageous to most Google services
and content, and allowed even more efficient delivery of the company’s services, making them
geographically closer to end users. Edge caching, peering and the deployment of the Android
operating system required business relationships with ISPs, telecommunications companies and
other network entities; infrastructure therefore enrolled Google leadership in forming these
partnerships, even though they required cooperation with entities that were clearly adversaries on
the issue of network neutrality. During this period, the company’s infrastructure also lessened
Google’s reliance on third-party network transit providers.
The affordance-focused actor-network arose out of at least one other network, one centred on the
concerns of a number of American wireless carriers about the potential dominance of Apple’s
newly-announced iPhone. In 2007, Apple launched the iPhone with AT&T (then called
Cingular) as the phone’s exclusive carrier in the United States. No competing hardware
manufacturer in early-2007 had the ability to ship a device of comparable functionality to the
iPhone. Wireless manufacturers and carriers were strongly concerned about Apple’s potential
control of the mobile marketplace, expecting that demand for the new phone would be high.
During this period, Google found itself in some form of opposition to telecom companies that, as
retail Internet service providers, opposed network neutrality regulation. But concurrent with the
company’s efforts to establish network neutrality practice and regulation, Google was expanding
its business offerings. Google acquired Android in 2005, and partnerships with equipment
manufacturers and wireless carriers would be critical to its success. It is difficult to determine
who translated whom, the extent to which carriers enrolled Google, or vice versa; perhaps
Android enrolled them all. Certainly, many parties desired the establishment of a robust actor-
network that would address the diverse needs of various actors, each market sector—
manufacturing, carriage, Internet services, applications—driven by differing business models.
Regardless, Google appears to have emerged during this network development as its primary
actor and an obligatory point of passage, as the Android OS, which Google controlled, was
functionally indispensable to the network. During the second half of the 2000s, Google created a
series of strategic alliances with a large number of wireless stakeholders focused on advancing
the Android mobile telephone operating system to the benefit of all actors.
As I have noted, Google was prompted to partner with many entities that had been its adversaries
in other markets. This network found an early, concrete manifestation in the form of the
Open Handset Alliance, a group founded by Google in 2007 and which included T-Mobile,
HTC, Qualcomm, Motorola, and nearly 30 other companies. The members of the alliance, all
participants in the mobile telephone marketplace, responded to the announcement of Apple’s
proprietary iPhone platform in early 2007 by supporting Google’s open Android platform.
As Google was building alliances around Android, Google was expanding its offerings in
another area: online video. As I discussed in Chapter 5, Google had acquired YouTube in 2006
as the online video provider was struggling to maintain its infrastructure. For online video to be
successful, it must be available quickly and smoothly to users, regardless of geographic location.
Google thus focused on extending its content delivery network to provide video as well as search
services more efficiently to users. As Google grew its content delivery network, driven primarily
by video distribution opportunities created in the wake of the 2006 YouTube acquisition, Google
engaged in business relationships with retail ISPs worldwide. As I have described in previous
chapters, at least by 2008 Google had begun offering edge caching servers to retail ISPs for
installation within their networks.
As I discussed in Chapter Five, Google also offered to peer (interconnect) with many large retail
ISPs. We have limited knowledge of these peering agreements, which given their commercial
nature are typically accompanied by non-disclosure agreements, but know that they were
asymmetrical, with much more network traffic flowing from Google to retail ISPs than vice
versa, and with Google therefore likely compensating the retail ISPs to take this traffic. Here we
see Ciborra’s concept of xenia, or hospitality, which we discussed in Chapter Two. Rather than
seeing these agreements between Google and the ISPs as simply a question of technical
installation, Ciborra would argue that the act of a retail ISP accepting Google’s edge caching
servers into its network is actually a process of social negotiation, a practice that he stresses can
be transformative to both the host and hosted.
The objectives for these two sets of business alliances were in some ways quite different, but both
were centred on Google’s technical affordances, and the utility of those affordances to other
actors. I suggest that, given that so many of the retail ISPs that benefited from edge caching were also
wireless carriers dependent on Android, a single actor-network centred on Google’s affordances
was formed. I would further argue that it is the very size and heterogeneity of this network that
made it stable, and that it remains so at the time of this writing.
It is out of the tensions between competing notions of Google’s identity embodied in the
objectives of the neutrality-focused and the affordance-focused actor-networks that we see the
emergence of Google as Ciborra’s platform organization, in which “strategy, action and structure
coalesce as an entity designed for coping with surprises” (2002, p. 122). Interests aligned among
disparate entities: Google’s infrastructure and leadership, retail ISPs and wireless carriers.
Network neutrality was still an issue of public concern, but Google and its infrastructure were
now integrated into the operations of numerous ISPs worldwide, making the disruption of
Google services and content to end users considerably less likely, if not unthinkable, to all
members of the affordance-focused actor-network. Google had become an obligatory point of
passage for this second actor-network, functionally indispensable to it.
The 2010 Google-Verizon statements on network neutrality are interesting artefacts of these
processes. Expansion of its businesses pushed Google to attempt to create alliances with
telecommunication companies that held opposing views to Google on network neutrality matters.
Translation was required, not just with Google and the telcos, but within the neutrality-focused
actor-network in which Google was an important actor. The statements can therefore be seen as
an attempt by Google to translate its business-necessary network management approach to the
neutrality-focused actor-network, an attempt which ultimately failed. Google’s approach to
network neutrality is summarised in relation to its technological stages in Table 6.2.
Table 6.2: Google’s Network Neutrality strategies

1998 to 2005: Google as content provider
- ISPs: ISP consolidation increases market power.
- Public interest groups: Neutrality not an issue of significant public concern.
- Google: Google search, Gmail, Google Maps.
- Google network neutrality strategy: Not engaged with the policy process; treats Internet governance issues as engineering problems.

2005 to 2010: Google transforms to hyper giant
- ISPs: Some instances of ISPs violating neutrality; public statements oppose neutrality.
- Public interest groups: Network neutrality becomes an issue of public concern; advocacy groups raise awareness; support for versions of Wu’s neutrality.
- Policymakers: 2008: FCC rulings support neutrality principles; some legislative interest, but no action.
- Google: 2006: YouTube purchased. 2008: peering and CDN efforts expanded; Android launches; Google engages with wireless carriers.
- Google network neutrality strategy: Engages with the policy process; hires lobbyists, presents public statements on neutrality.

2010 to 2013: Google as platform hybrid
- ISPs: Oppose neutrality regulations and legislation, but publicly support some version of neutrality; oppose classification of ISP services as Title II.
- Public interest groups: Continued support for Wu-style network neutrality.
- Policymakers: FCC considers reclassifying Internet services to Title II.
- Google: 2011: Google+ launched.
- Google network neutrality strategy: Google-Verizon joint statements on net neutrality, followed by public silence on the neutrality issue; disengages from the publicly visible policy process around neutrality; behind-the-scenes lobbying.
6.4 What is Google?
Earlier in this chapter I introduced the notion of Google as a new sort of network and policy
actor, the platform hybrid. I believe how we identify Google and similar entities matters within
the policy discourse; suggesting that an entity of Google’s scope and power is a content provider in
the mould Wu described is no longer adequate within policy discourse. While Google and similar
companies share some characteristics with the large and often monopolistic communications
entities of the past, in many ways the network behemoths of today have far outstripped them in
scale and power.
In this section I will discuss the rationale for my identification of Google as a platform hybrid,
the etymology of the term, and characterize the platform hybrid within the context of network
neutrality discourse.
6.4.1 The business platform
In the previous two chapters, I described the development of Google's business from its founding
to the early-2010s. It is useful here to further explore some of the characteristics of Google as a
commercial entity and business platform, and its relationships with its many sorts of “users”.
While Wu’s project in The Master Switch is inherently historical, his positioning of Google in
one historical context (open versus closed systems), though not in another (commodification
versus non-commercial models), is surprising. A focus on Google’s approaches to technology,
and its roles as a policy and network actor, might lead to what Smythe (1977) would suggest is a
“blindspot” in our study of the company.
As I discussed in Chapter 4, even as early as 2001 most of the characteristics of Google’s
commercial operation had been put in place, to be scaled into the platform we see during the
period of my study. Google began in academic research, in a project focused on studying web
indexing and search that was to be the basis for dissertations by Google's founders Page and
Brin. The founders’ decision to incorporate Google as a commercial enterprise might seem
natural in hindsight, but other approaches were possible. Google could have taken many forms:
perhaps a non-profit foundation like Wikipedia, or a set of freely licenced technologies and
standards like the Web itself. Brin and Page chose to monetize their technology and, unable to
find licensees for their search engine, incorporated as a for-profit business.
Advertising was the chosen business model for Google, as it was for other search engines and
content providers of the time. What set Google apart from other search engines was the
company's relationship with the consumers of its services. Yahoo! and similar platforms
modelled their businesses on the print and broadcasting giants of commercial mass media, seeing
searchers as mostly passive consumers of content delivered to advertisers.
Google saw searchers somewhat differently. Google's services not only brought users to
advertisers, but searchers were also active participants in Google's business. Page's PageRank
search ranking algorithm was so successful because it relied on millions of links among the
Internet’s web pages, links that were created by website authors. Write Tapscott and Williams:
Google is the runaway leader in search because it harnesses the collective judgments of
Web surfers. Its PageRank technology is based on the idea that the best way to find
relevant information is to prioritize search results not by the characteristics of a
document, but by the number of sites that are linking to it. (2008, p. 41)
A few moments of curation and knowledge sharing were multiplied by millions of contributors,
resulting in tens of millions of dollars in value provided to Google at no cost. Users added value
to Google's advertising business directly. User behaviour interacting with advertising was
tracked and analysed, and used as a basis for setting advertising prices and determining
advertisement effectiveness.
Following the enrollment of searchers in Google's advertising business, users were also enrolled in
new services as they were developed and deployed. Users of Gmail had their email analysed by
algorithms which presented them with personalised advertising. Users of the Blogger
weblogging platform published content without hosting costs, but agreed to have their content
parsed by Google's systems and framed by ads. Google Maps users benefited from the rich
information provided by the service, but were encouraged to improve it by identifying new
locations and reviewing businesses.
Every Google service presented a similar bargain to consumers: provide Google with some
amount of your labour, your personal information, your habits and your time, and Google would
provide you with a service. This applied to other types of users as well, as advertiser interaction
with Google’s AdWords and AdSense platforms set ad rates and prioritised keyword-based ad
placements.
It perhaps goes without saying that while many would agree on the specifics of Google's
relationship with its users, the implications of that relationship were and are contested. As we saw
in Chapter 2, political economy analyses exemplified by Fuchs (2011), Halavais (2008), and
Petersen (2008) see Google’s relationship with users as an exploitation of free labour. Pasquinelli
(2009) writes of Google’s exacting “cognitive rent” from users, specifically to power Google’s PageRank
system. Zuboff (2015) identifies Google’s operation as “surveillance capitalism” and the
company itself as the “Big Other”. Writes Zuboff,
Google understood that were it to capture more [personal] data, store them, and analyze
them, they could substantially affect the value of advertising. As Google’s capabilities in
this arena developed and attracted historic levels of profit, it produced successively
ambitious practices that expand the data lens from past virtual behavior to current and
future actual behavior. New monetization opportunities are thus associated with a new
global architecture of data capture and analysis that produces rewards and punishments
aimed at modifying and commoditizing behavior for profit. (2015, p. 85)
Others, particularly in the business and technology press, see the relationship more positively.
Popular writers such as Jarvis (2009) and Tapscott and Williams (2008) prefer to see the
relationship among Google and its users as a collaboration.
In much of this work I have written of technical requirements or engineering imperatives
influencing Google's infrastructure development. However, we should remember that these
requirements are driven by Google’s business objectives (profit-making) and business strategy
(user enrollment in technology systems that capture, analyse and monetize their information).
While a classic information technology tiered stack architecture—made up of business strategy,
services, systems, and components—is an incomplete model when applied to Google, it does
have some utility to better understand these influences. Business objectives (profit making)
influence business strategy (the commodification of user labour and user enrollment in an
advertising platform). These in turn influence service design (search, Maps, Gmail, Apps) which
in turn influence the design and deployment of technical systems and infrastructure. Stack
architecture may be overly simplistic in this case (as Ciborra might suggest) and it can be
dismantled, both conceptually flattened and revealed in its complexity, to be seen in the more
useful form of an actor-network, as I have described above.
There should be no doubt as to the nature of the technological platform Google built from 2001
on. It was one that was designed to facilitate the involvement of various sorts of users in
Google's systems. As I will discuss in the next section, Google aligns with some aspects of
Latour's notion of the hybrid, and one way of thinking of Google's system is not as a set of
technologies, but as a platform that constitutes a human-technological hybrid.
6.4.2 Bigger than a hyper giant
As I have argued in earlier chapters, Wu’s 2010 conception of Google was primarily that of a
content provider, dependent on retail ISPs for connection to its customers. Yet Wu also thought
of Google as something more. In The Master Switch he asks, “what exactly is Google?” (2010, p.
279), placing the company in the context of the Bell telephone system that is the subject of much
of his book. Wu describes Google as “the Internet’s switch”, the principal conduit through which
most users connect to people and information. Wu suggests that Google’s power as that switch is
in some ways greater and more far-reaching than the media monopolies of old, but in other ways
weaker, with its strong dependencies on other content providers and network entities.
In the past several chapters I have described Google’s transformation into something else,
focusing on the company’s history in the late-2000s and early-2010s, a transformation that saw
an evolution of both the substance of Google’s operations and its material infrastructure. I have
examined parallels between these developments and the company’s engagement with network
neutrality policy debate and the transformation of Google from a content provider to a hybrid
entity providing a range of services while building an increasingly massive and powerful
technical infrastructure.
In Chapter 5 I discussed some of the early attempts to characterise and classify Google within
this context. Network neutrality discourse typically sees Google as a “content provider”, lumping
the company in with providers of published text and rich content. As Wu indicates, this
conception of Google is limiting and insufficient. Technically, the notion of “content provider”
has been defined within the discourse of distributed content delivery technologies (Dilley et al.,
2002) and mobile platform development (Google, 2009); Google itself describes a content
provider as an entity that will “manage access to a structured set of data”.
Within the discourse of Internet peering arrangements, Norton (2014) describes content provider
as a class of peering network entity, as distinct from two other types focused on network
transport: Tier 1 ISPs and Tier 2 ISPs. Writes Norton:
Content Providers are companies that operate an Internet-based service but do not sell
transit within the Internet Peering Ecosystem… The Content Provider’s core competence
is the creation and managing of content and the relationships with those who use,
enhance, or support the content. They are generally happy to purchase transit because
operating a network is not strategic or core to their mission… Most Content Providers do
not peer – they have a “No Peering” policy. Their focus is on their core competence, and
they pay others to handle the rest. Content providers I interviewed uniformly said that the
most important thing was the end-user experience, although they also are motivated to
reduce their Internet Transit costs, among others. (2014, p. 123)
Norton does distinguish among types of content providers in his writing, using various terms for
the larger content providers who approach networking as a strategic activity, and build their own
backbone networks with which to peer with other entities. Norton calls this class of network
entity the Large-Scale Network-Savvy Content Provider, or LSNSCP. He identifies these
entities, including Netflix, Yahoo!, Google, Walmart, Apple, Electronic Arts, and Sony Online,
with three characteristics:
- do not sell transit
- focus on content creation, and if they do operate a network it is for exactly one
customer: themselves
- have visibility into the end-to-end performance characteristics (Tier 2 ISPs see packets,
while LSNSCPs can see packets and flows) (2014, p. 131)
Norton further describes this sort of entity in terms of network connections through peering, with
content providers peering directly to retail ISPs, shifting significant traffic away from Internet
transit providers to content delivery networks, controlled either by content providers or third
parties.
In Chapter 5 I briefly discussed the identification of Google by Labovitz as a “hyper giant”,
something nearly identical to Norton’s LSNSCP, which saw increased content and service
performance through significant investments in network and server capacity. Labovitz identifies
a number of hyper giants, including Akamai, Microsoft, Limelight, Yahoo!, and GigaNews
(Roush, 2009). As Labovitz (2010b) suggests, a key distinction between hyper giants and other
content providers is the size of their technical infrastructures. That is, his hyper giants balance
the enormity of their content and service provision with a network and server infrastructure
substantial enough to distribute key aspects of it. They thus exhibit not just hyper-giant size, but
also hybrid content/carrier characteristics.
These identifications are useful, but technically-centred and limited when looking at Google as a
policy actor. Labovitz focuses on the dual content-infrastructure nature of Google-like entities,
but as I have touched on throughout this work, these companies are hybrids in other ways as
well. We can identify Google (and several other network neutrality policy actors) as, first, some
sort of content provider, and then, as we appreciate the complexity and strategic actions of
certain of these content providers, as something more specific and powerful. As Wu states,
Google is a conduit for vast knowledge sharing and communications, a new “master switch” in
its own right, a platform of broad functionality.
The meaning of platform when describing Google is two-fold, encompassing both technical
capacity and organisational identity. Google is a platform both in a broad, generally understood
technical sense and in a narrower, more theoretical sense.
Google is a hyper giant, but it is also a technical platform of massive development, software, and
infrastructure functionality. The company provides an incredibly wide range of computing
services to both internal and external clients, including office and enterprise applications, virtual
machines and parallel computing capabilities, an app development platform, massive distributed
storage, virtual and content delivery networks, machine learning, data warehousing, and hosting
for built-to-purpose applications. Computing power, storage, and network capacity can be made
available to countless entities, creating hybrid hosting and application relationships that make
partners of competitors.
Google is also a platform organisation as conceptualised by Ciborra, able to modify its
organisational form and business models as required by its environment. Writes Ciborra:
A platform is a meta-organization, a formative context that molds structures, and routines
shaping them into well-known forms, such as the hierarchy, the matrix and even the
network, but on a highly volatile basis. Hence, the platform organization may appear to
be confused and inefficient but its value lies in its readiness to sport whatever
organizational form is required under the circumstances. Platforms are characterized by
surprises, and organization members, no matter how they see themselves after the fact,
are busy improvising and tinkering. Drawing on similar studies carried out in Silicon
Valley, one can draw the conclusion that high-tech firms can survive if they are smart at
… bricolage. (1996, p. 103)
Ciborra’s platform organisation shares some characteristics with Benkler’s notion of the
“networked organisation”, an entity that is a key participant in the networked information
economy (NIE). In The Wealth of Networks, Benkler (2006) draws on the work of Charles Sabel
to describe organizations managing themselves in a networked manner, “loosening the
managerial bonds, locating more of the conception and execution of problem solving away from
the managerial core of the firm, and implementing these through social, as well as monetary,
motivations” (2006, p. 112). While Google’s management practices appear to emphasise
innovation and individual or team initiative, similar to what is commonly thought of as start-up
culture, power within Google’s corporate structure remains highly centralised and limited to the
company’s two founders along with executive chairman Eric Schmidt.
In Chapters 4 and 5, I described the history of Google through three technical periods, beginning
from its founding in the 1990s and extending to the early-2010s. This history is characterised by the changes to
the company’s services and organisation as it adapted to changing technological, business, and
political circumstances. Google survived the collapse of the dot-com bubble in the early-2000s by
embracing a protean identity, a position that has allowed the company to continue to grow into
new markets, enrol new users in the company’s platform, and generate substantial profits.
What should we call this thing, Google, when speaking of it as a policy actor? Law (1999)
describes the challenges of naming actors (and of naming actor-network theory itself) when he
writes:
How to talk about something, how to name it, without reducing it to the fixity of
singularity? Or imagining, as if we were talking of the Roman Empire in the sixth
century, that something that used to be coherent has simply fallen apart? How to talk
about objects (like theories) that are more than one and less than many? How to talk
about complexity, to appreciate complexity, and to practice complexity? (1999, p. 10)
Law suggests the metaphor of the fractal—more a process than a set of stable characteristics—as
a useful model for naming, as no single identifier can encompass all the characteristics of an
actor. Borrowing from Stalder (1997), I suggest we limit ourselves to a category which is in
some ways self-formed. It is useful to note that Google’s leadership during the period of my
research did not see the company as simply a content provider. In 2011 Google’s Eric Schmidt
described a group of four companies—Google, Apple, Amazon, Facebook— “behind the
consumer revolution on the Internet today” and “growing at incredible rates” as a “Gang of
Four” (quoted in Schonfeld, 2011). Writes Schonfeld:
Schmidt notes that all four are together worth about half a trillion dollars, they are all
platforms in their own right [emphasis added], and they are all basically spreading their
power where before there was only one company who had such influence: namely,
Microsoft. But “Microsoft is not driving the consumer revolution,” Schmidt notes
(although they still do well in the enterprise)... The Gang of Four compete and cooperate
in various ways, but each has its own strengths: search (Google), social (Facebook),
commerce (Amazon), and devices (Apple). Although relations with Apple are not as cozy
as when he sat on its board, he notes that Google just renewed it maps and search
partnership with Apple. (Schonfeld, 2011)
In this research I have explored various labels that could be applied to Google to describe an
entity more than a content provider. Labovitz’s notion of the hyper giant was initially attractive,
but failed to describe Google as a network neutrality policy actor.
I was tempted by Ciborra’s example to turn to classical languages to find a term that is both
descriptive of Google and similar entities in its original meaning, and also open to semantic
expansion, to create a neologism with which to label Google. However, I believe that something
less ambiguous would be the best fit for broad policy discourse. I therefore propose a term that
encompasses five characteristics of Google and the other “Gang of Four” entities that Schmidt
identified in 2011: the platform hybrid.
As I argued above, Google is a platform in at least two senses, first as a technical platform of
substantial capacity, and second as a platform organisation as described by Ciborra. Writes
Ciborra of the platform organisation:
It is chameleonic: thus, for example, if Olivetti were facing a threat by NCR, rather than
Compaq, it could rearrange its internal resources in order to sport the appropriate,
competitive attitude, and stage an attack against the specific rival firm or class of firms.
Indeed, one of the striking characteristics of the platform consists in being programmed
for perpetual transformation, for generating new organizational arrangements and
cognitive frames, and for constantly branching out to other, radically different businesses,
identities and industries. (1996, p. 115)
However, Google is different from other platform organisations that Ciborra might identify,
including the company he studied with the greatest interest, Olivetti. Google is also a hybrid in
regards to its activities as a network entity—a network/content provider hybrid—a characteristic
I have explored in some depth in this dissertation.
I am also drawn to Latour’s notion of the hybrid as presented in Nous n’avons jamais été modernes
(We Have Never Been Modern, 1991). Latour uses the term (in French, hybride) to describe the construction of systems that
combine technology, politics, and people. We might be tempted to approach Google only as a
business entity, or only as a set of technologies; the popular press is guilty on both counts. Latour
might suggest that we examine Google as a hybrid, a connecting point between the technological
and the human.
6.5 The challenge to net neutrality and Internet policy
The rise of Google and the other platform hybrids represents a significant challenge to existing
paradigms of communications policy and regulation. In this section I discuss some of these
challenges, including the impact of large Internet companies on network neutrality discourse.
6.5.1 Network neutrality discourse
Platform hybrids suggest several new scenarios that may emerge in network neutrality discourse.
Network neutrality discourse has long assumed an imbalance of power between retail ISPs and
content providers, with ISPs purportedly holding the upper hand in their relations with content
providers. However, something more akin to symmetry may now exist between retail ISPs and
platform hybrids. Although more content and services are available online than ever before, clear
leaders with large market shares have emerged for most services. Without Facebook, Google,
Amazon, Netflix, Apple, and other platform hybrids, it is questionable whether a North American
ISP could confidently present a set of service offerings that would be acceptable to existing or
new subscribers. As well, large retail ISPs are often content providers in their own right,
distributing traditional broadcasting content, along with controlling print holdings, film
properties, and sports and creative brands and entities. ISP gatekeeping of platform hybrids could
have serious consequences, prompting a potential response from regulators to behaviour that
might be considered anti-competitive.
The importance of the platform hybrid to retail ISPs also raises the possibility of these entities
extracting rents from ISPs to access their services, a model common to cable television. This
possibility is not lost on the American NCTA, an organisation of retail ISPs and cable television
providers. In a 2014 submission to the FCC, the NCTA argued that ISPs have little leverage to
demand payments from giant Internet companies to access their customers, and that in fact these
companies are more likely to charge ISPs for access to their content and services (Brodkin,
2014).
However, retail ISPs could still exact other rents from content and service providers in various
two-sided pricing scenarios that are not typically thought of as being subject to network
neutrality rules. Of significant interest is sponsored data, which allows a content provider to pay
for either an increase in maximum bandwidth available to a customer, or for the bandwidth the
customer uses to connect to a service. In the US, AT&T has promoted sponsored data options for
content providers for its wireless network since January 2014 (Lowensohn, 2014).
Platform hybrids have also created an Internet that is far different from what it was in 2005, with
streaming video now a central part of any Internet service offering. With the rise in popularity of
Netflix and Google’s YouTube, video made up half of all downstream Internet traffic in 2013
(Holpuch, 2013), with that proportion increasing significantly through 2015 (Sandvine, 2015).
While video and video advertising may be the main drivers for increasing throughput and
decreasing network latency, these are also requirements for web applications provided by
Salesforce.com, Google, Microsoft, and others.
The combination of increased video traffic, the demand for low latency web applications, and the
substantial network and financial resources of the platform hybrid, have all altered peering
arrangements among content and service providers, retail ISPs, and backbone transit providers.
As noted by Norton (2014) in The Internet Peering Playbook, it is now advantageous for the
originator of content or services, particularly large distributors of video such as Google and
Netflix, to connect directly to ISP networks at Internet exchange points. Unlike the peering
arrangements of the past, these connections are typically asymmetrical; much more data flows
from the content originator to the ISP than the reverse, making settlement-free peering arrangements less likely. The 2014 Netflix-Comcast dispute raised the question of the extent to which network neutrality principles should apply to private peering agreements.
As the 2014 disagreement between Netflix and Comcast showed, while retail ISPs may be
reluctant to engage in explicit gatekeeping, limiting quality of service may be another matter.
During the Comcast-Netflix dispute, Comcast subscribers could access Netflix content, but
streaming video quality varied considerably (Rayburn, 2014). Rather than restricting content
outright, the ISP was able to collect rents from Netflix through asymmetrical peering
agreements. Retail ISPs remain in many cases natural monopolies, and competition is limited.
We should also consider whether the platform hybrid and retail ISPs are now, in fact, different
classes of entities. Platform hybrids and ISPs compete in many of the same areas, most notably
the provisioning of video programming. Google is also a retail ISP, operates an international
fibre network, and provides video and audio content to millions of viewers through YouTube.
Comcast and Verizon also provide retail Internet service, operate national fibre networks, and
provide video programming. While these entities can be distinguished by identifying their
original core businesses and areas of greatest revenues, such distinctions are increasingly problematic. This raises additional concerns around concentration of media ownership, as the
hybrid platforms control increasingly large swathes of services and content. Noam (2016)
suggests that network neutrality and media concentration are both part of a larger debate around
societal and economic inequality, what he calls a “mobilization that has been taking place over
the control of information resources” (2016, p. 5).
6.5.2 Implications for policy and regulation
The platform hybrid also presents a number of challenges to communication policy and
regulation, which in most jurisdictions is struggling to apply 20th-century ideas to the new
environment of internetworking.
In various national jurisdictions, a set of paradigms emerged around telecommunications and
broadcasting policy regulation in the last century that typically dealt with such matters as the
management of bandwidth scarcity, the public interest, approaches to commercialization, and
essential services. Noam (2006) suggests that the resulting telecommunications regulation was, in many cases, byzantine, but that this complexity was required to deal with a great many matters of importance.
He writes:
A major reason for the complexity and sophistication has been the large number of goals
that telecom regulation tries to accomplish. In America, these range from general
coverage and affordability (universal service); to openness to users (common carriage); to
control of market power (price and return regulation); to integration of networks
(interconnection); to international collaboration (accounting rates); to encouragement of
competitors (wholesale retail pricing); to consumer protection (quality regulation); to the
protection against interference of transmissions (spectrum licensing); to innovation
(information services); to vertical protections (divestitures and fully separated
subsidiaries); to national security (CALEA); to personal safety (911); to consumer choice
(number portability); to federalism (state and federal jurisdiction); to rural-metropolitan
equity (high cost fund); to social equity (lifeline); to promotion of the internet (e-rate) -
and quite a few more. The result has been a highly complex set of rules which try to
balance the multiple objectives and accommodate the various political forces behind
them. (2006)
Marsden (2010), Noam (2006), and others have identified the significant challenge of
harmonising already complex telecommunications and media policy and regulation for
application to the Internet. Marsden’s (2010) solution is what he describes as medium law, a
bringing together of elements of telecommunications and broadcasting law that reflects the
emerging dominance of internetworks and Internet protocols in communications and the carriage
of audio, video and other media. Security, e-commerce, child protection, content regulation, and
many aspects of telecommunications law and regulation are all included in Marsden’s medium
law.
Noam (2006) makes a somewhat similar argument, proposing that various existing elements of
law and regulation—criminal law, commercial law, broadcasting, telecommunications, and other
areas—be selectively brought to bear on the Internet as required. Further, he suggests that each
national jurisdiction will have its own particular laws and regulations that will have to be
(mostly) respected by content providers. One possibility suggested by Sandvig (2013) is the
potential for harmonisation of content regulation across jurisdictions, a sort of lowest common
denominator that attempts to distribute content that violates no national laws. Another
possibility, suggested by Noam (2014), is that cloud content platforms such as YouTube,
Facebook, and Netflix will tailor their content to respond to national laws and regulations,
provided the platforms are “sophisticated enough to deal with the multiplicity of national rules”
(2013, p. 688).
We have already seen this sort of sophistication from several content platforms, including
Amazon, Netflix, and Google. Like Google, Netflix maintains a substantial network of data
centres and caching servers globally, presenting a distinct set of content offerings to each media
rights jurisdiction based on local licensing. Google’s YouTube similarly presents different
content to different jurisdictions, based on the national criminal law, content regulation, and local
rights, varying YouTube service offerings (such as YouTube Red), and other factors. While
some content harmonisation is taking place, as Noam suggests, platform hybrids are also able to
maintain one set of infrastructure which can span dozens of content jurisdictions.
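This jurisdiction-by-jurisdiction tailoring can be sketched as a filter applied per request over a single global catalogue. The titles, country codes, and licensing rules below are invented purely for illustration and do not reflect any platform's actual catalogue or implementation:

```python
# A hypothetical sketch of jurisdiction-aware content curation: one global
# catalogue, filtered by the viewer's country against (invented) licensing
# and national content rules.
CATALOGUE = [
    {"title": "Documentary A", "licensed_in": {"CA", "US"}, "restricted_in": set()},
    {"title": "Series B", "licensed_in": {"CA", "US", "DE"}, "restricted_in": {"DE"}},
]

def offerings_for(country):
    """Return titles both licensed and not legally restricted in a country."""
    return [
        item["title"]
        for item in CATALOGUE
        if country in item["licensed_in"] and country not in item["restricted_in"]
    ]

print(offerings_for("CA"))  # ['Documentary A', 'Series B']
print(offerings_for("DE"))  # []
```

The design point is that the infrastructure and catalogue are global and singular, while the legal logic is local and plural, applied at the moment of presentation.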
This type of content curation is possible because of the characteristics that are central to platform hybrids: the utilisation of user labour, ubiquitous and geographically diverse infrastructure, and the application of massive amounts of computing power to machine learning.
Underpinning a key characteristic of the platform hybrid—the enrollment of users in the generation of revenue—is the use of automation and pattern recognition. Technologist, security writer, and
cryptographer Bruce Schneier (2016) has suggested that the “computerization of everything in
our lives” is leading to the “building [of] a world-sized robot”. This entity (or entities), what he
calls the World-Sized Web, is made up of cloud servers which provide data analytics and
machine learning, sensors that collect data, and actuators that can modify environments.
Schneier’s WSW is not monolithic, but a catch-all to describe pools of coordinated algorithmic
capacity configured into a variety of sizes and for numerous purposes, interacting through the
Internet. The WSW is mobile and ubiquitous, made up of various components controlled by
different entities that interact with one another. Writes Schneier (2016), “You might be able to
turn off small pieces of it here and there, but in the main the WSW will always be on, and always
be there.”
The technological capacities of Google and other platform hybrids fit well within Schneier’s
conception of the WSW. Google has made substantial investments in Internet-of-things
technologies, most notably with its 2014 purchase of the “learning” thermostat company Nest (Wohlsen, 2014), and through its research on autonomous cars through its X subsidiary
beginning in 2010 (Markoff, 2010). Google also acquired DeepMind, an artificial intelligence
company, in 2014 (Shu, 2014). When asked in June 2016 which company’s efforts in the area of
automation and pattern recognition most concerned him from a technological and political
perspective, Tesla CEO Elon Musk all but named Google (McCormick, 2016).
An effective policy and regulatory response to the platform hybrids will be one that creates a
hybrid of appropriate media and telecommunications policy and regulation, while at the same
time facing the new challenges arising from these entities’ significant affordances in pattern
recognition and automation. Marsden (2010) suggests that such a regime would have to rely
much more heavily on co-regulation, defining clear regulatory parameters through legislation but
providing the regulated with significant autonomy.
6.6 Chapter summary
In this chapter I have described the changes to Google’s infrastructure by extending Wu’s model
of Google’s connection to its users, analysed these changes in the context of Ciborra’s notions of
technological and organisational transformations, and identified two actor-networks that are key to
understanding Google’s approach to network neutrality in the 2000s. I describe Google as a
platform hybrid, a new class of large network and policy actor, and discuss its implications for
network neutrality discourse.
In the next chapter I summarise my principal conclusions in relation to my research questions,
identify some of the major contributions and limitations of the work, and suggest avenues for
future work.
7 Google, beyond good and evil
I began this research in 2010 with a substantial and long-held interest in the process of public
policy making for both media and telecommunications. I was interested in the specific issue of
network neutrality from the perspective of an advocate of democratic communications and
technological openness, and I believed my own views very much aligned with those of Wu in The
Master Switch. I thought of Wu’s network neutrality as relatively unproblematic and a positive
characteristic of the Internet, one that should be codified by regulators in Canada, the United
States, Europe, and other jurisdictions.
My research was motivated in part by a wariness of North American Internet service providers,
most of whom enjoyed monopoly or duopoly markets that arose through a combination of
natural monopoly, the creation and privatization of government monopoly telecom suppliers,
regulatory forbearance, and consolidation in the telecom marketplace, a circumstance that some
would argue is inevitable under neo-liberal policy approaches to telecommunications
infrastructures. As an Internet researcher, activist, and entrepreneur since the early-1990s, I
admittedly thrilled at the notion of disruptive Internet companies, particularly Google,
challenging the stifling telecom monopolies with agility and innovation.
The struggle to secure network neutrality seemed central to the continuing vitality of the Internet.
Wu’s idea of Google as an innovative and agile content provider was an attractive, if somewhat
simplistic, reframing of Internet design principles that had allowed the network to grow as a
platform for innovation for forty years. I was particularly interested in the role that academics
and public interest organisations—actors whom I perceived to be motivated by empirical
evidence and concepts of the public good—might have in the creation of communications policy
and regulation.
The 2009 and 2010 Google-Verizon joint policy statements provided a challenge to these rather
simplistic assumptions. In fact, the Google-Verizon relationship seemed, for a time, to adhere to
another well-known trope in telecommunications history, that of business goliaths aligning in
their best interests to the detriment of the public good. Google’s motivation seemed questionable,
but network neutrality, as a principle defined by Wu, still seemed to remain as valid and useful
as ever.
After 2010, network neutrality in North America remained a contested matter, as it still does
at the time of this writing, and promises to remain so for the foreseeable future. While Google’s
alliance with Verizon could be seen as strategic and perhaps opportunistic, I found the
company’s subsequent silence on network neutrality more puzzling. Wu had argued that Google
absolutely depended on an open Internet for its survival. The company’s seeming withdrawal
from the net neutrality policy arena appeared to be a mystery worth solving.
The early focus of my research shifted from confirming Wu’s model of network neutrality and
examining the policy making process as a reflection of it, to determining to what extent and by
what means Wu’s model must be extended in order to answer the question of what accounted for
Google’s actions. I centred my work on challenging my own (and others’) earlier, somewhat
superficial conclusions concerning Google’s character and the formation of network neutrality
policy in North America. I explored its changes in leadership, its relationships with other policy
actors, and the growth of the company’s services and infrastructure, all of which changed in
nearly revolutionary ways in the short life of the company.
The main contribution of this research was to chart the transformation of Google from a content
provider as conceived by Wu into a new kind of network entity and policy actor, one perhaps
unprecedented in reach and power, that has for the most part neutralised the dangers of retail ISP
gatekeeping. I identify Google and similar entities with the label platform hybrid, characterising
Google as a policy actor aligning with and coming to dominate other policy actors in various
networks of shared interest in a milieu of network-spanning behemoths.
In this concluding chapter, I retrace the path of my research. I answer my principal and
supporting research questions by summarising the findings of my work, and synthesise my key
conclusions by relating them to the impact of Google and similar entities as policy actors. I
conclude by reviewing this work’s major contributions, illuminating some of the limitations of
my research, and identifying directions for possible future research.
7.1 Major findings
In this section, I review the major findings of my research in relation to my principal research
goal and my four supporting questions. As stated above, I began this research in response to the
seeming incongruity of Google unexpectedly aligning in several ways with telecom company
Verizon on network neutrality issues, and subsequently seeming to disengage from the public
policy discourse on the issue. An early proposition of my research was that the development of
Google’s technical infrastructure in the 2000s, in parallel with the expansion of the company’s
products and services were significant influences on the development of Google’s approach to
network neutrality policy. Wu’s conception of network neutrality in technical terms—as a
network design principle rather than a matter of the rights of the network’s users—indicated that
an emphasis on the technical aspects of Google’s systems would prove to be valuable.
In the early stages of my work, I was primarily concerned with the technical potential of
infrastructure to directly circumvent ISP last-mile connections to consumers, and was much less
concerned with the process of policy formation itself. For this reason, I was uncertain what work
within Science and Technology Studies, actor–network theory, or the writing of Ciborra and
similar theorists would be appropriate in examining what appeared to be a more technical
exploration of Google’s capacities. As my research progressed, the relationship between
technologies and policy actors, rather than simply the affordances provided by Google’s
technologies, became central to my work. As I examine my research questions as initially stated
in Chapter 1, I therefore interweave my findings with contributions from Ciborra and actor–
network theory.
7.1.1 What was Google’s policy position on network neutrality, and how did
it change?
It is clear from my research that, beginning in 2006 with Eric Schmidt’s public plea for network
neutrality, until roughly 2010, Google supported network neutrality as a matter of public policy.
Google made consistent and relatively frequent statements in support of the concept of an open
Internet in the latter half of the 2000s. Numerous posts on the Google Public Policy Blog, the
company’s principal means of communicating information concerning its policy positions,
suggested the company supported network neutrality principles. In 2007, Google’s first telecom
lobbyist was former Verizon lawyer Richard Whitt (Levy, 2011), with his main focus being
fighting for network neutrality. Whitt would also go on to lead Google’s efforts to influence the
FCC’s regulation of newly available wireless spectrum. In June of 2007, Whitt authored and
published four blog posts under the title “What Do We Mean By ‘Net Neutrality’?” that detailed
Google’s position.
Google’s position on network neutrality appeared to shift, beginning in 2009. That year Google
and Verizon began a series of joint statements and policy proposals on Internet regulation,
culminating in the August 9th, 2010 publication of a two-page “Verizon-Google Legislative
Framework Proposal”. On the same day, the companies published a blog posting (actually longer
than the legislative framework) that attempted to explain the proposal, entitled “A joint policy
proposal for an open Internet”. It was this statement that appeared to contain the greatest
deviations from Google’s past proclamations on network neutrality. Contrary to the fears of
public interest groups and technology bloggers, the statements were by no means a refutation of
all network neutrality principles. However, while the proposal still supported neutrality on the wireline Internet, it exempted the wireless network, as well as “additional online services”, specialised networks for such things as gaming and telemedicine.
The Google-Verizon statements, for all the sound and fury they generated in the summer of
2010, were followed by near silence from Google on network neutrality issues. From 2007 to
2009, network neutrality had been a popular topic on the Google Public Policy Blog, with terms such as “open Internet” and “network neutrality” mentioned in 37 posts. In 2010 alone, the
year of the statements’ release, network neutrality was the subject of 18 posts. But from the end
of 2010 through 2012, “network neutrality” was mentioned only once on the Google Public Policy
Blog. As well, Google was notably absent from discussions of network neutrality from late 2010
to mid-2012. Sasso (2014) indicates that Google rarely lobbied the FCC on network neutrality in
the years prior to 2014.
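The blog-post counts above come from a straightforward corpus analysis: searching each post for neutrality-related terms and aggregating by year. A minimal sketch of that kind of count, using invented sample posts rather than the actual Google Public Policy Blog archive, might look like:

```python
import re
from collections import Counter

# Hypothetical (date, text) pairs standing in for blog posts; the real
# corpus would be gathered from the Google Public Policy Blog archive.
posts = [
    ("2008-06-12", "Why network neutrality matters to the open Internet."),
    ("2010-08-09", "A joint policy proposal for an open Internet."),
    ("2011-03-01", "Spectrum policy and wireless broadband."),
]

NEUTRALITY_TERMS = re.compile(r"network neutrality|open internet", re.IGNORECASE)

def mentions_by_year(posts):
    """Count posts per year containing at least one neutrality-related term."""
    counts = Counter()
    for date, text in posts:
        if NEUTRALITY_TERMS.search(text):
            counts[date[:4]] += 1
    return dict(counts)

print(mentions_by_year(posts))  # {'2008': 1, '2010': 1}
```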
7.1.2 How did Google’s infrastructure and systems, and the affordances
they provided, change during the period of its network neutrality
engagement?
My research found that Google’s infrastructure and systems of the late-2000s and early-2010s
comprised several different technical elements, that they grew and changed substantially during
the period I studied, and that they created a number of planned and unplanned affordances for
Google and other network actors. In chapters 4 and 5 I described the various elements of
Google’s systems in detail.
In the early 2000s, Google maintained servers at multiple locations at third-party colocation
facilities (Levy, 2011). Beginning in 2006, Google began constructing its own large-scale data
centres. By the end of 2013, Google operated eleven such data centres, seven in the United
States, two in Europe, and two in Asia. These data centres were connected to one another by a
backbone network used exclusively for that purpose. Google also operated a public-facing
network.
While Google’s data centres were the most public manifestation of Google’s infrastructure, the
company operated and controlled a large number of other servers during the period of my study.
Beginning in 2008, Google began to promote the Google Global Cache program to Internet
service providers. GGC placed edge caching servers inside retail ISP networks worldwide. By
2014, Google reported that 60% of Google traffic was being handled by GGC. As of October 28th, 2013, I identified 1433 ISP locations containing at least one GGC instance. At this time, Google
also placed servers at various third party facilities, including Internet exchange points (IXPs). I
have identified 19 locations for these servers.
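Identifying cache locations from network measurement data amounts to grouping the IP addresses that resolve for Google hostnames into candidate sites, for instance by hosting network (ASN) and address prefix. A simplified sketch of that aggregation step, over invented observation records rather than my actual dataset, might be:

```python
from ipaddress import ip_network
from collections import defaultdict

# Hypothetical measurement records: (resolver_asn, resolved_ip) pairs, of
# the sort produced by resolving a Google cache hostname from many vantage
# points (as in the EDNS-client-subnet approach of Calder et al.).
observations = [
    ("AS577",  "203.0.113.10"),
    ("AS577",  "203.0.113.12"),
    ("AS6327", "198.51.100.7"),
]

def cache_sites(observations, prefix_len=24):
    """Group resolved cache IPs into candidate sites by ASN and IP prefix."""
    sites = defaultdict(set)
    for asn, ip in observations:
        net = ip_network(f"{ip}/{prefix_len}", strict=False)
        sites[asn].add(net)
    return {asn: sorted(str(n) for n in nets) for asn, nets in sites.items()}

print(cache_sites(observations))
# {'AS577': ['203.0.113.0/24'], 'AS6327': ['198.51.100.0/24']}
```

Here the two addresses seen inside AS577 collapse into a single /24, suggesting one cache deployment rather than two.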
Calder’s earliest server location data is from October 2012; he and his research team saw an
approximately 700% increase in the number of Google-utilized IP addresses between that date
and mid-August 2013. While this expansion is impressive, textual evidence indicates that GGC
had been promoted to retail ISPs beginning in 2008, when it was deployed outside North
America (Hersman, 2011a).
As well, Google’s strategy for the Android mobile operating system, launched in 2008, required
the company to create alliances with mobile carriers and equipment manufacturers. This is most
evident in the founding of the Open Handset Alliance, formed in 2007 and made up of Google
and numerous carriers, software companies, and component and device manufacturers.
Google’s 2013 systems, which I have detailed in an interactive map (Stevenson, 2016) provided
the company with a number of affordances, some more purely technical, others organisational
and strategic. Some affordances were planned, some not. Ciborra describes the process of using a
technology differently than what was intended as bricolage, or hacking. Some technologies drift
(dérive) into other uses or roles over time. Below I identify the three main affordances most
relevant to my research questions:
First, Google’s 2013 network topology allowed the company to be significantly less dependent
on the public Internet—that is, tier 1 and other transit networks controlled by third parties—
making transit ISP gatekeeping less likely.
Second, the Google Global Cache program, which placed Google servers inside over 1400 retail
ISP networks worldwide, created symbiotic relationships and made gatekeeping or rent-seeking
less attractive to retail ISPs. GGC had been created to increase Google’s network efficiency, but
was also a hack (bricolage) by Google that fundamentally changed Google’s relationships with
many retail ISPs.
Third, the requirements of Google’s Android mobile operating system encouraged the company
to establish partnerships with mobile carriers who had not aligned with Google on the issue of
network neutrality in the 2000s.
During the period of my study, Google also began development of systems to provide Internet
access directly to consumers through Google Fibre and “blue sky” projects such as Project Loon.
2015 estimates (Baumgartner, 2015) placed the number of paid subscribers to Google Fibre at
only 100,000 to 120,000 households. However, Google’s interest in retail Internet
service provision may have been designed in part to indicate to ISPs Google’s ability and
potential willingness to physically circumvent ISP last-mile connections if need be, a latent stick that could be developed if ISPs were not cooperative with Google.
My research indicates that Google was not able to circumvent retail ISP gatekeeping using its
technical systems in the late-2000s and early-2010s. However, ISPs received several benefits in
their relationship with Google, including lower network costs, better access to Google services
for their subscribers, and access to the Android operating system. Google’s infrastructure and
systems thus provided affordances that allowed Google to move away from dependence on other
networking entities, both transit and retail, and other affordances that created symbiotic
relationships among Google and these same entities.
7.1.3 In what ways could infrastructure and systems influence Google’s
policy approach to network neutrality?
My research identified a process by which Google infrastructure and systems could influence the
company’s position on network neutrality.
Drawing on the actor–network theory (ANT) developed by Latour (1996), Callon (1991) and
Law (1999), I suggest a model for how Google technical systems could have influenced the
development of the company’s approach to network neutrality. ANT posits an interplay between
both technical and non-technical actors in the creation of networks of aligned interests, including
the design and utilisation of information technology (Hanseth, 1996); all actors are theorised to
participate in the formation of a network that can be understood as an integrated whole
(Walsham, 1997). Google’s infrastructure is such an actor, with requirements and affordances,
influencing company leadership on a variety of issues. Infrastructure benefits from connection
and extension; it is successful when it delivers content quickly to users, requiring that it be closer
to those users. I discuss the substance of that influence in section 7.1.5 below.
7.1.4 How can Google be characterised as a network and policy actor
during this period, in relation to Wu’s network neutrality models?
Google and other large Internet-centred companies that emerged into dominance in the late-
2000s constituted a new class of Internet policy actor, what I call the platform hybrid. Some
platform hybrids focused on specific domains that they came to dominate, such as Google for
search, Facebook for social media, and Amazon for retail books. Others, like Apple and
Microsoft, were founded during the beginnings of the personal computer industry in the 1970s,
the survivors of a once rich field of hardware and software innovation. All became what Ciborra
described as platform organisations, able to manage and exploit technology product lifecycles
while institutionalising innovation that spawned new products and services. Each found
substantial revenue by maximising the impact of the network effect, monetizing the work of
billions of users worldwide, extending their services, and coming to dominate several markets on
a global scale, often driving down prices and creating barriers to entry for competitors.
Like the hyper giant described by Labovitz, the platform hybrid is a very large provider of
products and services, provisioned using substantial technical infrastructure it owns and controls
to the extent that it can influence business practises and network policy. But a platform hybrid is
also a platform organisation as described by Ciborra: in many ways mature but remaining agile,
able and willing to shift and drift into new markets to capture and monetize users of all sorts
while holding competitors at bay and exploiting lucrative new revenue opportunities.
The implications of the platform hybrid for Internet governance are substantial. The platform
hybrid extends Wu’s conceptual model of the importance of mitigating retail ISP gatekeeping, as
it identifies an entity—one with significant technological power and accompanying market
dominance—to which typical concerns about network neutrality simply do not apply. The
platform hybrid is a third actor in the content-carrier dichotomy, relatively immune to ISP
gatekeeping, and a potent “master switch” in its own right.
The danger of the non-neutral Internet remains, and is in fact multiplied, as the platform hybrid
has become a potential site for content and service gatekeeping on an unprecedented scale. The
platform hybrids host trillions of pieces of third-party content of every conceivable kind, both
public and private. Any concern about openness and non-discrimination must be further
extended to the platform hybrids, who can, in a non-neutral Internet, effectively discriminate
against less well-resourced content providers. Policy makers and researchers must begin to
engage with the platform hybrid as a commercial, network, and policy actor.
7.1.5 Google, hacking the master switch
My research indicates that Google’s infrastructure exerted significant influence on the
company’s retreat from public support for Wu’s network neutrality. As I detail below, I describe
this process as a hack—Ciborra’s bricolage—of the potential gatekeeping by retail ISPs as
defined by Wu.
Google’s infrastructure was a key component in at least two ways in the development of
Google’s approach to network neutrality.
First, the requirements of infrastructure as a lucrative medium of commerce—to be connected,
speedy, closer—influenced Google leadership to embrace network neutrality as a policy position.
Other actors internal and external to Google were critical to this position as well—leaders,
services, lobbyists, customers—aligned around the utility of network neutrality. This led to
Google aligning with policy actors outside the company that supported similar positions. I argue
that this actor-network, which I described as neutrality-focused in chapter 6, was key to
understanding Google’s behaviour on the issue in the late-2000s.
Second, other infrastructure and system requirements and affordances influenced movement
away from network neutrality. Another actor-network formed around the utility of Google’s
systems—principally edge caching and Android—as these systems benefited from various sorts
of alliances with carriers and device manufacturers in order to succeed. YouTube could not
function and grow without being closer to users and delivering more content, more quickly.
Retail ISPs and carriers benefited from both systems, but had to change and be receptive to
Google integrating into key areas of their business.
With the deployment of the Google Global Cache program, Google and many retail ISPs entered
into what Ciborra describes as xenia, the hosting relationship. Google’s servers were “strangers”
to the ISPs, promising benefits for the companies, but also potential challenges and disruptions.
Effective hosting necessitated a symmetry between host and hosted. In the case of Google and
retail ISPs, hosting GGC servers created a symbiotic relationship that benefited both parties,
but it also redefined them, producing a new social relationship between Google and retail ISPs.
The systems benefited from these ISP and carrier alliances in tangible ways, and how likely
was an ISP to block a service that was provisioned from within its own infrastructure?
Network neutrality retreated as a concern, and Google all but stopped speaking about it
publicly.
Xenia was a mechanism for bricolage, Google’s hack that mitigated the retail ISP master switch.
Google planned and established Google Global Cache systematically, with the objective of
decreasing bandwidth costs and improving service, leading to higher revenues. Android required
alliances to prosper. Both technologies drifted (dérive) beyond the boundaries prescribed for
them, and were used in unplanned ways to create synergies in new areas. Google's systems were
reinterpreted to create new political solutions. As Ciborra (2002) writes:
The power of bricolage, improvisation, and hacking is that these activities are highly
situated; they exploit, in full, the local context and resources at hand, while often
pre-planned ways of operating appear to be derooted, and less effective because they do not
fit the contingencies of the moment. Also, almost by definition, these activities are highly
idiosyncratic, and tend to be invisible both because they are marginalized and because
they unfold in a way that is small in scope… [T]he smart bricolage or the good hack
cannot be easily replicated outside the cultural bed from which it has emerged. (p. 50)
Wu’s 2010 model of retail ISP gatekeeping presents Google as reliant on aspects of the public
Internet to connect to ISP networks, and through those networks, to users. Google Global Cache
changed this relationship fundamentally, placing Google services within ISP networks, to the
benefit of both entities. Google hacked Wu's master switch by utilising technology to transform
its relationship with other network and policy actors, making ISP gatekeeping significantly less
likely.
The actor-network in which Google and retail ISPs were enrolled was a network of shared
interest in the commercial success of its members, one with the potential to bypass debates
over the public interest. The creation of the Google-ISP actor-network was opaque in its details
but clearer in its influence. We do not know many of the details of Google's interactions with
ISPs in negotiating Android adoption or deploying Google Global Cache. However, the
sequence of events creates a persuasive narrative of network creation. I have described Google's
contentious relationships with telecommunications companies in the late 2000s following the
introduction of Google Voice. Parallel to these events was Apple's creation of the contemporary
smartphone paradigm in 2007, in alliance with a single wireless carrier, AT&T. Through the
acquisition and development of its Android operating system as the best and lowest-cost
competitor to Apple, Google established Android as an obligatory passage point for other
carriers and manufacturers seeking to offer smartphones and mobile services affordably.
Google was thereby able to constitute an emerging actor-network centred on mobile telephony
with itself as the focal actor.
Google further enrolled the now familiar and receptive retail ISP actors with the deployment of
Google Global Cache, another obligatory passage point that virtually no other content provider
could exploit, one that reduced costs for ISPs while improving quality of service for consumers
accessing YouTube and other popular Google services.
The most prominent textual artefacts of the actor-networks I identified in Chapter 6 were the
series of 2009 and 2010 Google-Verizon joint policy statements on network neutrality.
Somewhat mysterious at the time, their significance was misinterpreted. In retrospect, they
show Google attempting to enrol Verizon in both of the actor-networks I have identified. But
only one enrolment was truly successful. Because Google's compromises with Verizon on
network neutrality were considered so fundamental by other members of the neutrality-focused
actor-network, Google fell out of alignment with that actor-network. Google did enrol Verizon in
the affordance-focused actor-network centred on the shared success of the free and open source
Android operating system as an alternative to Apple’s proprietary iOS, and the shared benefits of
the Google Global Cache.
It is Google’s alignment with the affordance-focused actor-network—centred on caching,
peering, and Android—that endured into the 2010s. While the regulatory and policy processes
around network neutrality continued to unfold, Google appeared to have all but abandoned its
traditional advocacy for network neutrality, comfortable in its transformation into an entity to
which network neutrality is a much less significant concern.
7.2 Contributions and limitations of research
The principal contribution of my research has been to reframe the discourse of network neutrality
through the identification of a third type of network and policy actor, the platform hybrid. I have
accomplished this not by replacing Wu's model of network neutrality but by extending it,
exploring new areas of Internet governance study, and contributing to research methods.
7.2.1 Policy contributions: extending Wu
My research is built on Wu's foundational work. Without Wu's seminal formulation of
the notion of net neutrality, and his subsequent model of Google's relationship with retail ISPs
and its dependence on neutrality principles as presented in The Master Switch, this research
would not have been possible. Wu did much to establish the contemporary discourse of Internet
traffic management and broadband non-discrimination, presenting the case for net neutrality in
contrast to similar approaches such as open access, and providing an alternative to the work of
Saltzer (1999) and Lessig and Lemley (2001).
The thesis that Wu developed first in 2003 and matured in 2010’s The Master Switch reflects the
Internet of that period. Large, consolidated Internet service providers had emerged from the
competition and innovation of the 1990s retail ISP marketplace within a policy context that
marginalised public sector actors and public interest considerations, threatening to replicate the
monopoly and duopoly environments of the earlier telephone and the current television
distribution markets. As with Wu’s historical narratives, the distinction between carrier and
content in the Internet of the 2000s appeared initially quite clear: AT&T, Comcast and Verizon
were carriers in the old mould of the Bell systems, while Google, Yahoo!, Vonage and other new
Internet companies were vulnerable content providers. The master switch of Wu’s title was
clearly controlled by the retail ISPs, as it had been owned by Bell in the past.
However, as Wu himself wrote in 2003, the notion of what constitutes network neutrality can
change over time:
Neutrality, as a concept, is finicky, and depends entirely on what set of subjects you
choose to be neutral among. A policy that appears neutral in a certain time period… may
lose its neutrality in a later time period, when the range of subjects is enlarged. (2003, p.
149)
Wu’s work in the 2000s reflected the realities of the Internet and Google as he understood them,
with understandably limited awareness of Google’s strategic direction, the details of its
infrastructure, and the mechanics of its lobbying. Non-discrimination was Wu’s primary concern,
and it remains central to my research. There is every reason to believe that Google remained
acutely aware of the potential for retail ISP and transit provider gatekeeping. But the platform
transformation of Google in the 2000s, one that saw the company come to dominate information
provisioning globally by building out its own network infrastructure and creating alignment with
telecoms and retail ISPs, resulted in relationships that made discrimination against its own data
streams much less likely at both the retail ISP and transit network levels.
The evolution of a very few powerful content providers into a new category of network entity is
reflected in Norton’s (2014) work on the practical realities of network peering. Norton describes
Google and some other entities as “large-scale content companies” and highlights their
increasing impact with each edition of his Internet Peering Playbook. My extension of Wu’s
model in the identification of the platform hybrid encompasses Norton’s category, a class of
network entity and Internet governance policy actor that includes Google, Amazon, Facebook,
Apple, and Microsoft.
The identification of the platform hybrid, which I argue constitutes a “third class” of network
policy actor, is critical to engaging with the challenges of Internet regulation and non-
discrimination in the 2010s and beyond. Regulators in North America (Brodkin, 2016b) and
Europe (Belli & Marsden, 2016) are still grappling with the relative positions of content
providers and retail ISPs, and still grouping Google and the other platform hybrids with smaller,
less powerful content providers of all types. This is likely a dangerous error. The platform hybrid, as
a network policy actor, may seem similar to the large media conglomerates of the past, but it
differs in several key ways. For all the market-spanning power of the Bell system, the US
television networks, and others that Wu analyses historically in The Master Switch, they did not
constitute platforms as defined by Ciborra. When Time-Warner merged with AOL in 2000, it
was the corporate culture and practices of Time-Warner that pushed out the agility and
innovation of the Internet-focused AOL. It is possible that Verizon’s recent proposed purchase of
Yahoo!, which had long attempted to model itself after large traditional media companies rather
than agile disruptors like Google and Netflix, may also result in a more traditional corporate
culture that makes innovation and agility difficult.
The platform hybrid is a policy actor that challenges normative notions of regulatory and policy
jurisdiction. North American and European regulators sometimes fall into relationships with
telecommunications and media entities that embody aspects of regulatory capture. Disruptive to
traditional business models and overwhelmingly effective in the delivery of its services and
content, the platform hybrid both attracts and repulses media and telecom regulators. But the
platform hybrids, perhaps to a greater extent than the traditional media and telecom
conglomerates, also appear to reject traditional regulation.
This dynamic was on full display at the September 2014 CRTC Let’s Talk TV hearing
concerning the future of Canadian television regulation. The chair of the CRTC, Jean-Pierre
Blais, lauded the disruptive nature of Netflix and Google’s YouTube on the first day of the
hearing, stating that
While the current regulatory model was appropriate to achieve the objectives set out in
the Broadcasting Act, based on past technology and past viewing habits it has grown into
a complex and at times unwieldy framework. How Canadians interact with television has
changed. Broadcasting has changed. It’s time the regulatory model also changed. (Blais,
2014)
But Google and Netflix, during their appearances before the commission, did not accommodate
the chairman’s vision, declining to provide the CRTC with information about their viewership in
Canada, and repeatedly denying the Commission’s jurisdiction over them, something the CRTC
had claimed since the 1990s. Angered and embarrassed by the companies’ reluctance to
cooperate with the commission, Blais ordered the Google and Netflix testimony redacted from
the hearing transcript (Geist, 2014). The shift in power the chair had identified days earlier had
been greater than even he could acknowledge.
But regulators and communication policy makers cannot ignore Netflix, Google, and other
platform hybrids, nor can the arguments of any of the platform hybrids concerning jurisdiction
be accepted unproblematically. One of the clearest contributions of my research has been to
identify these new and powerful actors, explain how they do not fit into the carrier/content
dichotomy that has for decades dominated broadcasting and telecommunications policy
discourse, and argue that there appears to be limited middle ground within traditional regulatory
structures on which to deal with them.
7.2.2 Contributions to practice: extending Internet governance
In addition to reframing the network neutrality discourse, a matter of central interest to Internet
governance studies, my research extends the domain of research in other ways.
Mueller (2004) has suggested that one of the principal means of Internet governance is through
the creation and operation of institutions. Both Mueller (2002) and DeNardis (2009) have
focused their work on public and non-governmental institutions, while allowing that other
organisations and businesses play similar roles in setting standards and establishing rules for the
Internet’s operations. My research extends these concepts by studying a business entity, Google,
and describing its influence on policy and regulatory processes.
My work also presents an approach to the influence of infrastructure on policy formation that
differs from work of the past. Recent work on network neutrality tends to see infrastructure as
something built, regulated, and used, but rarely as a policy actor itself. For example, Marsden
(2010), in his book Net Neutrality: Towards a Co-regulatory Solution, exhaustively addresses
problems of Internet regulation in a number of jurisdictions from a legal perspective. He sees
Google's edge caching efforts as the expected behaviour of a content provider, not as an aspect of
the transformation of Google and an important factor in the reframing of the neutrality discourse.
7.2.3 Contributions to research methods
I was required to innovate methodologically in order to map Google’s infrastructure and its
strategies. My approach to research was informed by Star's (1999) notion of unearthing
infrastructure and those who created it, by Ciborra's descriptions of the complex relationships
between technologies and users, and by actor–network theory's tradition of conceptualising
networks of aligned policy and other interests. My exploration of Google's
infrastructure was inspired in no small part by my interest in system and network exploration
embodied in the hacker ethos of the 1980s and 1990s (Bennahum, 1996; Himanen, 2001; Levy,
1984). My examination of relationships between technologies and users involved examining
Google relationships with its technologies and other entities in critical and sometimes creative
ways, looking beyond the instrumentality of technologies to see more complex influences. For
relationships among policy actors, I challenged my own assumptions concerning the interests of
Google and other actors.
The methodological contribution of this dissertation was to develop a means of using detailed
infrastructural topology as data with which to identify essential elements of my proposed actor-
networks. I developed an approach to mapping the network and server elements of a large
Internet company, Google, drawing on network investigative projects and extensive documentary
evidence. The data generated from this work was used to create an interactive map of Google’s
systems (Stevenson, 2016) (Figure 7.1 and Figure 7.2) detailing geographic locations, hosting
organisation, and IP address. This map demonstrated one of my central arguments
concerning the symbiotic relationships between Google and retail ISPs, leading directly to the
conception and description of actor-networks that capture the shared interests amongst Google
and other entities. This map is publicly available on the Google My Maps website.
I also innovated in response to the challenges of data collection. Detailed information on
Google’s infrastructure (outside of Google) is extremely restricted. As I describe in Chapter 3,
Google had begun to obscure elements of its network from technologically savvy outsiders in the
mid-2000s, in part to limit the ability of the search engine optimisation industry to manipulate
search rankings. I was told by a WAN-level engineer that the world population of experts like
him was very small, perhaps as low as one thousand, and that much of their knowledge was tacit.
Some information received was useful, though anecdotal. For example, one engineer told me of
the installation of a Google server container at a Southern Ontario retail ISP in the early-2010s,
but was not comfortable telling me which city and ISP had received it, nor exactly when.
I therefore relied on an iterative process of documentary research, technical exploration, and
analysis of data from network investigative projects. Each bit of substantial evidence concerning
Google’s systems from one type of source often led to a reconceptualization of data from other
sources. Even as I began to create my interactive map of Google’s infrastructure, new
information would challenge assumptions I had made concerning the function or location of
some already identified element.
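The core of this iterative approach can be illustrated with a short sketch. The records and field names below are hypothetical, not drawn from my dataset; the sketch simply shows how individual server observations, once attributed to a hosting organisation, can be aggregated to surface the Google-ISP relationships discussed above.

```python
from collections import Counter

# Hypothetical server observations of the kind assembled from documentary
# evidence and network measurement projects. All values are illustrative.
observations = [
    {"ip": "203.0.113.10", "host": "cache1.example-isp.net", "org": "Example ISP A", "country": "CA"},
    {"ip": "203.0.113.11", "host": "cache2.example-isp.net", "org": "Example ISP A", "country": "CA"},
    {"ip": "198.51.100.7", "host": "ggc.example-isp-b.com", "org": "Example ISP B", "country": "BR"},
    {"ip": "192.0.2.44",   "host": "edge.google.example",   "org": "Google",        "country": "US"},
]

def servers_by_org(records):
    """Count observed edge servers per hosting organisation."""
    return Counter(r["org"] for r in records)

def third_party_hosted(records):
    """Records hosted outside Google's own network: candidate GGC nodes."""
    return [r for r in records if r["org"] != "Google"]

counts = servers_by_org(observations)
print(counts["Example ISP A"])                # 2
print(len(third_party_hosted(observations)))  # 3
```

Grouping observations this way is what turns raw topology into evidence about relationships: a cluster of Google-operated servers inside a particular ISP's address space is a trace of the hosting arrangement itself.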
I have not been alone in creating maps of wide area networks—the commercial
telecommunications research firm TeleGeography in particular does similar work for commercial
clients—but my work appears to be the first attempt, from a policy perspective, at such complete
mapping of a single network entity’s systems. A participant at my 2014 TPRC talk on Google
and network neutrality policy tweeted that this was a “citizen mapping” of a major
telecommunication infrastructure, something novel that had not been attempted on this scale
previously.
Researchers focused on questions about the role of infrastructures on Internet policy-making
would do well to build upon my methodological approach, relying more heavily on large-scale
network investigative platforms and institutions to generate data. During the eight months of
Calder's initial study of Google's server growth, the number of locations discovered by his team
grew by approximately 700%, suggesting that it is simply impossible to rely on fragmentary
documentary evidence alone to determine the scope, reach, and affordances provided by
technical systems to platform hybrids.
Figure 7.1: Google's Infrastructure, October 28, 2013. This image was captured October 12, 2016 from https://www.google.com/maps/d/viewer?mid=1nXSNhvDo5jaSS1h9gFuqQnRNIqg. Figure by John Harris Stevenson. Copyright © 2016 John Harris Stevenson. Base map data Copyright © 2016 Google.
Figure 7.2: Google's infrastructure, coast of Brazil, October 28, 2013. This image was captured October 12, 2016 from https://www.google.com/maps/d/viewer?mid=1nXSNhvDo5jaSS1h9gFuqQnRNIqg. Figure by John Harris Stevenson. Copyright © 2016 John Harris Stevenson. Base map data Copyright © 2016 Google.
7.2.4 Limitations
In discussing the contributions of this research, it is important to also acknowledge its limitations
of scope and resources.
I was not able to gather significant or attributable information from Google leadership and staff.
As I wrote in Chapter 3, I did conduct a wide-ranging, off-the-record interview with a Canadian
Google executive in 2012, who provided valuable feedback, though no technical details. Other
formal and informal requests for information were simply ignored.
While my research does not treat Google as a complete black box (Walsham, 1997), I have
identified black boxes within Google itself, conceptualising groups of distinct actors including
the company’s leadership, lobbyists, and Google’s infrastructure. None of these are monolithic
by any means; we can know of specific actions and statements of Page, Schmidt, Whitt and so
on, and the differing agencies and affordances of individual aspects of infrastructure, particularly
the characteristics of the Google Global Cache program.
However, the intention of this research is not to provide a precisely detailed historical narrative
of Google’s growth and policy-making activities. Rather, my objective has been to provide a
framework for understanding why Google as an entity appears to have taken the policy positions
that it has on network neutrality.
My original plan of research did not include any work on the importance of the Android
operating system to the formation of actor-networks among Google and retail ISPs. As my
research progressed, however, it was clear that Android was a key element in Google’s systems
that influenced network creation with ISPs. There are opportunities for further research in this
area.
7.2.5 Directions for future work
While I am satisfied that the results of this work address the thesis at hand, it is difficult not to be
left with the feeling that the surface of this research arena has only been skimmed. Much work
remains to be done on Internet companies' influence on policy making within the context of
large-scale technical systems. I foresee that the following issues will prove
important factors in the development of future research in this area: the further mapping of
Google and other platform hybrids, greater work on the impact of platform hybrids on network
neutrality policy, and even closer examinations of platform hybrid business strategies and
operations.
7.2.5.1 Mapping Google and beyond
My research examined Google during a defined and limited historical period. Because of time
and resource constraints, my map of Google is a snapshot of one day—October 28, 2013—rather
than presenting Google’s systems in a more complete historical context. As Calder et al. have
shown, there are substantial opportunities for identifying more elements in Google’s
infrastructure. Along with Google, there are opportunities to map other infrastructures.
Facebook, for instance, which my research showed was also actively scaling up its infrastructure
during the same period of my research, is often little discussed in the popular press as a major
networking entity. We have very little idea of Facebook's systems, or of those of Apple, Microsoft
and (to a lesser extent) Amazon and Netflix. As with Google, the network and server
infrastructures of other large Internet companies are profoundly important to their behaviour as
policy actors.
Examining their systems in light of their alignments with other policy actors would be quite
beneficial. This will be necessary to understand the affordances new Google systems provide, as
well as fully explore jurisdictional issues that arise from the globe-spanning nature of the
platform hybrid infrastructure.
7.2.5.2 Network neutrality policy and infrastructure in specific jurisdictions
My research focused on the network neutrality policy process in the United States and Canada.
Although I mapped Google’s systems globally, I focused primarily on the interest alignment
around Internet governance issues in North America. Europe, Asia, Africa and other global
regions each have WAN network topologies that are distinct.
There seem to be numerous opportunities to explore the relationship between infrastructure and
network neutrality policies in several other jurisdictions. Early in 2016, Indian
telecommunications regulators rejected Facebook's Free Basics program, which provided access
free of bandwidth costs to Facebook-approved content (Agrawal, 2016). An examination of
Google’s business models in the global south, the network and server infrastructure that supports
its business, and the networks of interest between Google and other stakeholders, is one of many
studies that could be conducted fruitfully in this area.
7.2.5.3 Google and platform hybrid studies
As I wrote in Chapter 2, it is possible to see the emergence of an interdisciplinary field of study
of Google. The company and its activities are the focus of much ongoing research in political
economy, computer science, science and technology studies, sociology, information studies,
business research, and other areas. I would suggest, however, that the focus of this work not be
Google alone, but on Google and other, similar entities that I have called platform hybrids. These
organisations share significant characteristics and, most critically, thrive and grow through
interaction and competition with one another.
I am reminded of Benjamin Bratton’s The Stack: On Software and Sovereignty (2016), which
attempts to create a conceptual model for planetary-scale computation, identifying various
elements of the fragmented platform as layers in a solution stack. One of Bratton’s layers is
Cloud, and in his review of The Stack, Whitson (2016) writes:
Cloud and platform poleis, like Facebook, Apple, Amazon, and Google are emerging to
contest state and global sovereignty. To limit the activities of such platforms to market-
driven intentions really underestimates their emerging impact on planetary computation.
Perhaps the most radical of these described by Bratton is Google, whose stated intention
is the “organization of the world’s information,” and often offers their services for free.
We must question whether monetization is Google’s only ambition, or whether forms of
mapping and organizing information might not only lead to new ways of governing.
Although this research has focused specifically on Google’s systems, future work should include
examinations of systems created by other platform hybrids such as Facebook and Amazon. I
believe that the study of Internet Governance will benefit greatly from further in-depth
examinations of both the technical infrastructure controlled by commercial policy actors, and the
strategic affordances such infrastructures allow.
7.3 Final thoughts
In this dissertation, I have brought together disparate disciplines of study in an attempt to answer
what at first appeared to be, early in my research, a rather simple question about the power of
new technologies to defeat the old. I have argued that the emergence of Google and the other
Internet platform hybrids fundamentally challenge some of the assumptions of the network
neutrality debate. I described the components that make up Google’s global infrastructure, and
some of the affordances that they provide the company and its partners. I argued that, although
Google’s network cannot completely circumvent the retail ISP gatekeeping that is a core concern
of network neutrality advocates, the company's connections with and integration into ISPs'
networks make gatekeeping less likely. Finally, I identified some of the challenges to network
neutrality discourse created by platform hybrids and their interactions with ISPs. I believe that
the study of Internet Governance will benefit from further in-depth examinations of both the
technical infrastructure controlled by platform hybrids, and the strategic affordances such
infrastructures allow.
At the conclusion of this research, I am struck by the notion that Google has grown up and is
now less concerned with the simplistic notions of evil that were never well defined when its
unofficial motto was coined. We might be thankful for its utility and marvel at its infrastructure,
but Google is a for-profit multinational corporation with an unprecedented grip on the world’s
knowledge architectures that can only survive if those architectures are monetized.
Google clearly understands that the rules of the network neutrality game, and the Internet itself,
have changed since the 2000s. Wu’s neutrality might have been critical to Google when it was a
small start-up, but it is not nearly as important now to a company that is perhaps the largest non-
military, non-security networking entity in the world. In fact, it is difficult to imagine that there
are not cases where Google would prefer that network neutrality not be applied or extended, such
as to currently private peering agreements, or to the edge caching within retail ISP networks.
Google’s leadership may or may not support network neutrality as a general principle, but it is
fair to conclude that the company takes a much more complex approach to the issue in 2016 than
it did in 2005 or 2010.
As I argued in Chapter 6, while discrimination may still be a concern to the platform hybrids, it
is one that most have significantly mitigated, to the point that an entity like Google may pay it
little practical heed. After all, what retail ISP could imagine increasing its bandwidth costs and
providing a poorer QoS for Google’s many services in the hopes of generating additional
revenue from the company? It seems difficult to envision today’s ISPs playing Microsoft off
against Google, attempting to charge rents for one company’s search engine to operate more
speedily than the other. The platform hybrids dominate their markets in a manner that no
communications or media company has ever done in the past. Blocking or degrading YouTube,
Facebook or Netflix would generate a public outcry that ISPs would likely not be able to
withstand.
Dealings among the retail ISPs and the platform hybrids are by no means without friction, as the
2014 disagreement between Netflix and Comcast illustrated. But symbiotic relationships that
now exist between ISPs and the hybrids make the danger of ISP gatekeeping of Google's content
and services significantly less likely than Wu suggested in 2010. The success of one
increasingly depends on the success of the other, and the power of the platform hybrids is
growing.
As of this writing, network neutrality still seems tenuous, a contested matter in North America
and elsewhere. In the United States, majorities in Congress remain hostile to the notion (Brodkin,
2016a), while zero-rating schemes by Facebook and AT&T (Hong, 2016), Bell's Mobile TV
(Wright, 2016), and T-Mobile's creative use of video throttling (Swanner, 2016) challenge the
open Internet in practice.
What should concern us is the fate of new and emerging Internet content providers and services
that are now appearing to challenge the retail ISPs and the platform hybrids, and must struggle in
their shadows. The platform hybrids may enjoy the luxury of a kind of ambivalence about
network neutrality, but some form of open Internet seems critical if the new, the innovative, and
the marginal are to emerge and thrive. Recognising the transformation of network neutrality
discourse brought about by the platform hybrids provides a useful opportunity to revisit what we
mean by an “open” and “neutral” Internet.
References
Aaron, C. (2010, August 9). Google-Verizon Pact: It Gets Worse. Huffington Post. Retrieved from http://www.huffingtonpost.com/craig-aaron/google-verizon-pact-it-ge_b_676194.html
Abbate, J. (1999). Inventing the Internet. Cambridge: MIT Press. Abelson, H., Ledeen, K., & Lewis, H. (2009). Just Deliver the Packets (Essays on Deep Packet
Inspection). Office of the Privacy Commissioner of Canada. Retrieved from https://webcache.googleusercontent.com/search?q=cache:btd1HvPRwPIJ:https://www.priv.gc.ca/information/research-recherche/2009/ledeen-lewis_200903_e.asp+&cd=1&hl=en&ct=clnk&gl=ca
Adhikari, V. K., Jain, S., Chen, Y., & Zhang, Z.-L. (2012). Vivisecting YouTube: An Active Measurement Study. In INFOCOM, 2012 Proceedings IEEE (pp. 2521–2525). IEEE.
Adhikari, V. K., Jain, S., & Zhang, Z.-L. (2010). YouTube Traffic Dynamics and Its Interplay with a Tier-1 ISP: An ISP Perspective. In Proceedings of the 10th ACM SIGCOMM conference on Internet measurement (pp. 431–443). ACM.
Agrawal, R. (2016, February 9). Why India rejected Facebook’s Free Basics. Mashable. Retrieved from http://mashable.com/2016/02/09/why-facebook-free-basics-failed-india/
Albanesius, C. (2007, July 31). New FCC Spectrum Rules Win Google’s Nod. PCMAG. Retrieved from http://www.pcmag.com/article2/0,2817,2164661,00.asp
Ali, R. (2007, October 23). Verizon Wireless Drops Lawsuit Against FCC on Spectrum Auction Rules. Gigaom. Retrieved from https://gigaom.com/2007/10/23/419-verizon-wireless-drops-lawsuit-against-ffc-on-spectrum-auction-rules/
Almaer, D. (2005, September 13). Ajax Latency: Myth, Reality, and Solutions. Retrieved from http://ajaxian.com/archives/ajax-latency-myth-reality-and-solutions
Amburgey, T. L., & Dacin, T. (1994). As the left foot follows the right? The dynamics of strategic and structural change. Academy of Management Journal, 37(6), 1427–1452.
Assange, J. (2014). When Google Met WikiLeaks. London: OR Books.
Auletta, K. (2010). Googled: The End of the World As We Know It (Reprint). New York City: Penguin. Retrieved from https://www.amazon.ca/dp/B002UZ5JR2/
Axelrod, M. (2008). The Value of Content Distribution Networks. Presented at the African Network Operators Group. Retrieved from http://www.afnog.org/afnog2008/conference/talks/Google-AFNOG-presentation-public.pdf
Baran, P. (1964). On distributed communications networks. IEEE Transactions on Communications Systems, 12(1), 1–9.
Barlow, J. P. (1996, February 8). A Declaration of the Independence of Cyberspace. Retrieved October 9, 2016, from https://www.eff.org/cyberspace-independence
Battelle, J. (2005). The Search: How Google and Its Rivals Rewrote the Rules of Business and Transformed Our Culture. Penguin.
Baumgartner, J. (2015, October 17). Study: Market “Too Dismissive” of Google Fiber. Multichannel News. Retrieved from http://www.multichannel.com/news/distribution/study-market-too-dismissive-google-fiber-s-potential/394356
Baxter, P., & Jack, S. (2008). Qualitative case study methodology: Study design and implementation for novice researchers. The Qualitative Report, 13(4), 544–559.
Becker, K. (2009). The Power of Classification: Culture, Context, Command, Control, Communications, Computing. In K. Becker & F. Stalder (Eds.), Deep Search: The Politics of Search beyond Google (pp. 163–172). StudienVerlag.
Belli, L., & Marsden, C. T. (2016, October 4). European net neutrality, at last? Retrieved October 12, 2016, from https://www.opendemocracy.net/luca-belli-christopher-t-marsden/european-net-neutrality-at-last
Benkler, Y. (2006). The Wealth of Networks: How Social Production Transforms Markets and Freedom. New Haven: Yale University Press. Retrieved from http://books.google.com/
Bennahum, D. (1996). MEME 2.04 [Interview with Richard Stallman]. MEME. Retrieved from http://memex.org/meme2-04.html
Bermejo, F. (2009). Audience manufacture in historical perspective: from broadcasting to Google. New Media & Society, 11(1–2), 133–154.
Besser, H. (1995). The Information SuperHighway: Social and Cultural Impact. In J. Brook & I. Boal (Eds.), Resisting the Virtual Life: The Culture and Politics of Information. San Francisco: City Lights Press. Retrieved from http://web.archive.org/web/20090509075945/http://www.gseis.ucla.edu/~howard/Papers/brook-book.html
Bijker, W., Hughes, T. P., & Pinch, T. (Eds.). (1989). The Social Construction of Technological Systems: New Directions in the Sociology and History of Technology. Cambridge: MIT Press.
Blais, J.-P. (2014, September 8). Opening remarks by Jean-Pierre Blais at the public hearing on Let’s Talk TV. Retrieved September 26, 2016, from http://news.gc.ca/web/article-en.do?nid=882009
Bower, J. L. (1970). Managing the Resource Allocation Process: A Study of Corporate Planning and Investment. Boston: Harvard Business School Press.
Bowker, G. C. (1994). Information mythology and infrastructure. In L. Bud (Ed.), Information Acumen: The Understanding and use of Knowledge in Modern Business (pp. 231–247). London: Routledge.
Bowker, G. C., Baker, K., Millerand, F., & Ribes, D. (2009). Toward information infrastructure studies: Ways of knowing in a networked environment. In International Handbook of Internet Research (pp. 97–117). Springer.
Bratton, B. H. (2016). The Stack: On Software and Sovereignty. MIT Press.
Brodkin, J. (2014, July 25). Cable companies: We’re afraid Netflix will demand payment from ISPs. Ars Technica. Retrieved from http://arstechnica.com/business/2014/07/cable-companies-were-afraid-netflix-will-demand-payment-from-isps/
Brodkin, J. (2016a, February 2). One year later, net neutrality still faces attacks in court and Congress. Ars Technica. Retrieved from http://arstechnica.com/business/2016/02/one-year-later-net-neutrality-still-faces-attacks-in-court-and-congress/
Brodkin, J. (2016b, October 10). Hillary Clinton vs. Donald Trump on broadband: She has a plan, he doesn’t. Ars Technica. Retrieved from http://arstechnica.com/tech-policy/2016/10/hillary-clinton-vs-donald-trump-on-broadband-she-has-a-plan-he-doesnt/
Brodsky, A. (2007, April 5). Save Our Spectrum Coalition Asks FCC To Create Wireless Broadband Competition [Media Release]. Retrieved September 1, 2016, from https://www.publicknowledge.org/press-release/save-our-spectrum-coalition-asks-fcc-create-wirele
Burgelman, R. A. (1983). A Model of the Interaction of Strategic Behavior, Corporate Context, and the Concept of Strategy. Academy of Management Review, 8(1), 61–70.
Calder, M., Fan, X., Hu, Z., Katz-Bassett, E., Heidemann, J., & Govindan, R. (2013). Mapping the expansion of Google’s serving infrastructure. In Proceedings of the 2013 conference on Internet measurement conference (pp. 313–326). Barcelona, Spain: ACM. Retrieved from http://www.isi.edu/~xunfan/research/Calder13a.pdf
Callon, M. (1981). Pour une sociologie des controverses technologiques. Fundamenta Scientiae, 2(3/4), 381–399.
Callon, M. (1986a). Some elements of a sociology of translation: domestication of the scallops and the fishermen of St Brieuc Bay. In Power, action and belief: a new sociology of knowledge? (pp. 196–223). London: Routledge.
Callon, M. (1986b). The sociology of an actor-network: The case of the electric vehicle. In M. Callon, J. Law, & A. Rip (Eds.), Mapping the dynamics of science and technology: Sociology of science in the real world (pp. 19–34). London: Macmillan Press.
Callon, M. (1991). Techno-economic networks and irreversibility. In J. Law (Ed.), A sociology of monsters. Essays on power, technology and domination (pp. 132–161). Routledge.
Canadian Association of Internet Providers. (2008). Reply of the Canadian Association Of Internet Providers: Application to the Canadian Radio-television and Telecommunications Commission. Retrieved from http://www.crtc.gc.ca/partvii/eng/2008/8622/c51_200805153.htm
Canadian Radio-television and Telecommunications Commission. (2008). Telecom Decision CRTC 2008-108: The Canadian Association of Internet Providers’ application regarding Bell Canada’s traffic shaping of its wholesale Gateway Access Service. Retrieved from http://www.crtc.gc.ca/eng/archive/2008/dt2008-108.htm
Canadian Radio-television and Telecommunications Commission. (2009). Telecom Regulatory Policy CRTC 2009-657 (No. 8646-C12-200815400). Retrieved from http://www.crtc.gc.ca/eng/archive/2009/2009-657.htm
Cannon, R. (2001). Where Internet Service Providers and Telephone Companies Compete: A Guide to the Computer Inquiries, Enhanced Service Providers and Information Service Providers. CommLaw Conspectus: Journal of Communications Law and Technology Policy, 9(1), 49.
Cannon, R. (2003). Legacy of the Federal Communications Commission’s Computer Inquiries, The. Federal Communications Law Journal, 55(2), 167.
Carr, N. (2008). Is Google Making Us Stupid? What the Internet is doing to our brains. The Atlantic, (July/August). Retrieved from http://www.theatlantic.com/magazine/archive/2008/07/is-google-making-us-stupid/306868/
Carroll, J. (2001, January 13). FCC Approves AOL-Time Warner Deal, Imposes Measures to Protect Competition. Wall Street Journal. Retrieved from http://www.wsj.com/articles/SB979256638823720242
Castelein, J. (2005a, September 4). AJAX Latency problems: myth or reality? Retrieved from https://richui.blogspot.ca/2005/09/ajax-latency-problems-myth-or-reality.html
Castelein, J. (2005b, September 12). AJAX: reducing latency with a CDN [Blog]. Retrieved from https://richui.blogspot.ca/2005/09/ajax-reducing-latency-with-cdn.html
Ceruzzi, P. E. (2003). A History of Modern Computing. The MIT Press. Retrieved from http://books.google.com/
Chandler, A. (1962). Strategy and Structure: Chapters in the History of the American Industrial Enterprise. Cambridge: MIT Press.
Chang, E. (2005, October 5). eHub Interviews: Writely [Sam Shillace, October 2, 2005]. Retrieved from https://web.archive.org/web/20110722190058/http://emilychang.com/ehub/app/ehub-interviews-writely/
Chen, T. M. (2010). The fight for control of Internet traffic [Editor’s Note]. IEEE Network, 24(5), 2–3.
Ciborra, C. (1996). The Platform Organization: Recombining Strategies, Structures, and Surprises. Organization Science, 7(2), 103–118.
Ciborra, C. (1997). De profundis? Deconstructing the concept of strategic alignment. Scandinavian Journal of Information Systems, 9(1), 2.
Ciborra, C. (2002). The labyrinths of information: Challenging the wisdom of systems. Oxford University Press. Retrieved from http://books.google.ca
Cisco Systems. (2012). Cisco Systems Visual Networking Index (White Paper). Retrieved from http://www.cisco.com/go/vni
Cleland, S. (2012, September 10). What Really Made Steve Jobs So Angry at Google? Gizmodo. Retrieved from http://gizmodo.com/5941817/what-really-made-steve-jobs-so-angry-about-google
Cohn, C. (2010, August 10). A Review of Verizon and Google’s Net Neutrality Proposal. Retrieved from https://www.eff.org/deeplinks/2010/08/google-verizon-netneutrality
Corbin, K. (2010, July 22). GOP Senators Aim to Sink FCC Broadband Plans. InternetNews.com. Retrieved from http://www.internetnews.com/government/article.php/3894551
Cordella, A. (2010). Information Infrastructure: an Actor-Network Perspective. International Journal of Actor-Network Theory and Technological Innovation, 2(1), 27–53.
Cordella, A., & Shaikh, M. (2006). From Epistemology to Ontology: Challenging the Constructed "Truth" of ANT. Department of Information Systems, London School of Economics and Political Science.
Cowan, J. (2013, July 23). Google Drives 25 Percent of North American Internet Use. Retrieved from http://www.sitepronews.com/2013/07/23/google-drives-25-percent-of-north-american-internet-use/
Crabbe, E., & Vytautas, V. (2012, August). SDN at Google: Opportunities for WAN Optimization. Presented at the IETF 84, Vancouver. Retrieved from https://www.ietf.org/proceedings/84/slides/slides-84-sdnrg-4.pdf
Crunchbase. (2016, September 1). YouTube. Retrieved September 1, 2016, from https://www.crunchbase.com/organization/youtube#/entity
Darnton, R. (2009). The Library in the Information Age. In K. Becker & F. Stalder (Eds.), Deep Search: The Politics of Search beyond Google (pp. 32–44). StudienVerlag.
Davidson, A., & Tauke, T. J. (2010a). Google and Verizon Joint Submission on the Open Internet, GN Docket No. 09-191; WC Docket No. 07-52. Retrieved from http://www.scribd.com/doc/25258470/Google-and-Verizon-Joint-Submission-on-the-Open-Internet
Davidson, A., & Tauke, T. J. (2010b, August 9). A joint policy proposal for an open Internet. Retrieved from http://googlepublicpolicy.blogspot.ca/2010/08/joint-policy-proposal-for-open-internet.html
Davies, D. W., Bartlett, K. A., Scantlebury, R. A., & Wilkinson, P. T. (1967). A digital communication network for computers giving rapid response at remote terminals. In SOSP ’67: Proceedings of the first ACM symposium on Operating System Principles (p. 2.1-2.17). ACM.
DeNardis, L. (2009). Protocol Politics: The Globalization of Internet Governance. Cambridge: MIT Press. Retrieved from http://books.google.com/
Dilley, J., Maggs, B., Parikh, J., Prokop, H., Sitaraman, R., & Weihl, B. (2002). Globally distributed content delivery. IEEE Internet Computing, 6(5), 50–58.
Duncan, I. (2013, June 2). Personal interview.
Edwards, P. N., Bowker, G. C., Jackson, S. J., & Williams, R. (2009). Introduction: an agenda for infrastructure studies. Journal of the Association for Information Systems, 10(5), 6.
Engine, & New America Foundation. (2014, May 7). Untitled letter to Federal Communications Commission from 150 technology companies regarding network neutrality. Retrieved from https://static1.squarespace.com/static/571681753c44d835a440c8b5/57323e0ad9fd5607a3d9f66b/57323e10d9fd5607a3d9f91a/1462910480032/Company_Sign_On_Letter_051414.pdf?format=original
Ernesto. (2007, August 17). Comcast Throttles BitTorrent Traffic, Seeding Impossible. Retrieved from http://torrentfreak.com/comcast-throttles-bittorrent-traffic-seeding-impossible/
Evans, J., & Filsfils, C. (2007). Deploying IP and MPLS QoS for multiservice networks. London: Morgan Kaufmann. Retrieved from http://books.google.com/
Federal Communications Commission. In the Matters of: Appropriate Framework for Broadband Access to the Internet over Wireline Facilities; Review of Regulatory Requirements for Incumbent LEC Broadband Telecommunications Services; Computer III Further Remand Proceedings: Bell Operating Company Provision of Enhanced Services; 1998 Biennial Regulatory Review – Review of Computer III and ONA Safeguards and Requirements; Inquiry Concerning High-Speed Access to the Internet Over Cable and Other Facilities; Internet Over Cable Declaratory Ruling; and Appropriate Regulatory Treatment for Broadband Access to the Internet Over Cable Facilities, No. CC Docket No. 02-33; CC Docket No. 01-337; 15 CC Docket Nos. 95-20, 98-10; GN Docket No. 00-185; CS Docket No. 02-52 (August 5, 2005). Retrieved from http://fjallfoss.fcc.gov/edocs_public/attachmatch/FCC-05-151A1.pdf
Federal Communications Commission. (2010, August 9). Statement of Commissioner Michael J. Copps on Verizon-Google Announcement - DOC-300754A1 [Media Release]. Retrieved from https://apps.fcc.gov/edocs_public/attachmatch/DOC-300754A1.pdf
Federation of Enterprise Architecture Professional Organizations. (2014). A Common Perspective on Enterprise Architecture. Federation of Enterprise Architecture Professional Organizations. Retrieved from http://feapo.org/wp-content/uploads/2013/11/Common-Perspectives-on-Enterprise-Architecture-v15.pdf
Fisher, S. (2008). Comcast finalizes its network management strategy. Betanews. Retrieved from http://www.betanews.com/article/Comcast_finalizes_its_network_management_strategy/1222122139
Fitzgerald, D., & Ante, S. (2013, December 16). Google, Facebook Push to Control Web’s Pipes - WSJ. Wall Street Journal. Retrieved from http://online.wsj.com/news/articles/SB10001424052702304173704579262361885883936
Floyd, S. (2000). Congestion Control Principles (Request for Comments No. 2914). Network Working Group. Retrieved from https://tools.ietf.org/html/rfc2914
Fuchs, C. (2011). A Contribution to the Critique of the Political Economy of Google. Fast Capitalism, 8(1), 1–24.
Gao, P. (2005). Using actor-network theory to analyse strategy formulation. Information Systems Journal, 15(3), 255–275. https://doi.org/10.1111/j.1365-2575.2005.00197.x
Gauch, S., Chaffee, J., & Pretschner, A. (2003). Ontology-based personalized search and browsing. Web Intelligence and Agent Systems: An International Journal, 1(3, 4), 219–234.
Gaver, W. W. (1991). Technology affordances. In Proceedings of the SIGCHI conference on Human factors in computing systems (pp. 79–84). ACM.
Geist, M. (2014, September 30). The CRTC’s Right to Forget: Regulator to Ignore Netflix and Google TalkTV Submissions. Retrieved from http://www.michaelgeist.ca/2014/09/crtcs-right-forget-regulator-ignore-netflix-google-talktv-submissions/
Gibson, J. J. (1977). The Theory of Affordances. In R. Shaw & J. Bransford (Eds.), Perceiving, Acting, and Knowing: Toward an Ecological Psychology. Hillsdale, NJ: Lawrence Erlbaum.
Gill, P., Arlitt, M., Li, Z., & Mahanti, A. (2007). YouTube traffic characterization: a view from the edge. In Proceedings of the 7th ACM SIGCOMM conference on Internet measurement (pp. 15–28). ACM.
Girard, B. (2009). The Google Way: How One Company Is Revolutionizing Management as We Know It. San Francisco: No Starch Press.
Goldman, D. (2010, August 5). Why Google and Verizon’s Net neutrality deal affects you. Retrieved January 20, 2016, from http://money.cnn.com/2010/08/05/technology/google_verizon_net_neutrality_rules/index.htm
Goodman, P. S., & Timberg, C. (2000, February 12). AOL Reverses Its Position On "Open Access" Internet. Washington Post. Retrieved from http://www.commondreams.org/headlines/021200-04.htm
Google. (2009, February 13). Content Providers. Retrieved July 4, 2016, from https://developer.android.com/guide/topics/providers/content-providers.html
Google. (2013, November 8). Data Centers – Google. Retrieved January 20, 2016, from http://www.google.com/about/datacenters/
Google Global Cache Beta. (2011, January 25). Retrieved January 20, 2016, from http://ggcadmin.google.com/ggc
Greenberg, T., & Veytsal, T. (2010, August 11). The Google/Verizon Walled Garden Plan: No Substantive Impact on Net Neutrality. The Huffington Post. Retrieved from http://www.huffingtonpost.com/tony-greenberg/the-googleverizon-walled_b_677908.html
Gustin, S. (2010, September 8). Google, Verizon and the FCC: Inside the War Over the Internet’s Future. Retrieved August 15, 2014, from http://www.dailyfinance.com/2010/09/08/google-verizon-fcc-war-over-internets-future/
Guzmán, J. M. (2008, May). Google Peering Policy - Latin America 2008. Presented at the LACNIC XI, Salvador, Brazil. Retrieved from http://lacnic.net/documentos/lacnicxi/presentaciones/Google-LACNIC-final-short.pdf
Halavais, A. (2008). Search Engine Society. Polity.
Hanley, R. (2003, February 12). From Googol to Google: Co-founder returns. The Stanford Daily. Retrieved from http://stanforddailyarchive.com/cgi-bin/stanford?a=d&d=stanford20030212-01.2.31&e=-------en-20--1--txt-txIN-------
Hansen, E. (2005, January 19). Google wants “dark fiber.” Retrieved July 20, 2014, from http://news.cnet.com/Google-wants-dark-fiber/2100-1034_3-5537392.html
Hanseth, O. (1996). Information technology as infrastructure (Doctoral dissertation). Goteborg University, Goteborg.
Hedger, J. (2005, September 22). Is Google Building Alternative Internet? Search Engine Guide. Retrieved from http://www.searchengineguide.com/jim-hedger/is-google-bui.php
Hedlund, M. (2010, August 11). Wacky Google/Verizon net neutrality theory. O’Reilly Radar. Retrieved from http://radar.oreilly.com/2010/08/wacky-googleverizon-net-neutra.html
Hersman, E. (2008, July 4). Google Kenya and the Google Global Cache. Retrieved from http://whiteafrican.com/2008/07/04/google-kenya-and-the-google-global-cache/
Hersman, E. (2011a, January 17). Local Web Cache Lessons: Uganda [Blog]. Retrieved from http://whiteafrican.com/2011/01/17/local-web-cache-lessons-uganda/
Hersman, E. (2011b, April 13). The Google Global Cache hits Kenya [Personal blog]. Retrieved from http://whiteafrican.com/tag/google-global-cache/
Higginbotham, S. (2011, January 20). Here’s What’s Hiding in Verizon’s Net Neutrality Suit. Gigaom. Retrieved from https://gigaom.com/2011/01/20/heres-whats-hiding-behind-verizons-net-neutrality-suit/
Himanen, P. (2001). The Hacker Ethic and the Spirit of the Information Age. New York: Random House.
Hinman, L. M. (2005). Esse est indicato in Google: Ethical and political issues in search engines. International Review of Information Ethics, 3(6), 19–25.
Hochstein, A., Zarnekow, R., & Brenner, W. (2005). ITIL as common practice reference model for IT service management: formal assessment and implications for practice. In 2005 IEEE International Conference on e-Technology, e-Commerce and e-Service (pp. 704–710). IEEE.
Holmström, J., & Stalder, F. (2001). Drifting technologies and multi-purpose networks: the case of the Swedish cashcard. Information and Organization, 11(3), 187–206.
Holpuch, A. (2013, November 11). Netflix and YouTube make up majority of US Internet traffic, new report shows. Theguardian.com. Retrieved from http://www.theguardian.com/technology/2013/nov/11/netflix-youtube-dominate-us-internet-traffic
Hölzle, U. (2012, April). OpenFlow @ Google. Presented at the Open Networking Summit 2012, Santa Clara. Retrieved from http://www.opennetsummit.org/archives/apr12/hoelzle-tue-openflow.pdf
Hong, E. (2016, February 8). Zero-Rating, Explained. Slate. Retrieved from http://www.slate.com/articles/technology/future_tense/2016/02/why_activists_are_fighting_facebook_t_mobile_over_zero_rating.html
Hruska, J. (2015, April 6). What is Ping and Latency? Retrieved from http://www.speedtest.net/articles/what-is-ping-what-is-latency/
Hughes, T. P. (1994). Technological Momentum. In L. Marx (Ed.), Does technology drive history?: The dilemma of technological determinism. MIT Press.
Interactive Advertising Bureau. (2016, December 28). Q3 2016 Internet Ad Revenues Hit $17.6 Billion, Climbing 20% Year-Over-Year, According to IAB. Retrieved December 30, 2016, from https://www.iab.com/news/q3-2016-internet-ad-revenues-hit-17-6-billion-climbing-20-year-year-according-iab
Isenberg, D. S. (2007, January 17). Notes on Verizon Official Position on Net Neutrality. Retrieved from http://isen.com/blog/2007/01/notes-on-verizon-official-position-on.html
Jacobson, V. (1988). Congestion avoidance and control. In Proceedings of SIGCOMM ’88 (Vol. 18, pp. 314–329). Stanford: ACM.
Jacobson, V. (1997, April). pathchar — a tool to infer characteristics of Internet paths. Mathematical Sciences Research Institute, Berkeley. Retrieved from ftp://ftp.kfki.hu/pub/packages/security/COAST/netutils/pathchar/msri-talk.pdf
Jarvis, J. (2009). What Would Google Do?: Reverse-Engineering the Fastest Growing Company in the History of the World. New York City: HarperBusiness. Retrieved from http://books.google.com/
Kang, H. (2009). You as a commodity of Google: Examining audience commodification of Google. In Annual meeting of the International Communication Association, Marriott, Chicago, IL (Vol. 20). Marriott, Chicago, IL. Retrieved from http://citation.allacademic.com/meta/p_mla_apa_research_citation/3/0/1/0/1/pages301019/p301019-1.php
Kessler, S. (2010, December 21). FCC’s Net Neutrality Order Finally Passes, Many Disappointed. Mashable. Retrieved from http://mashable.com/2010/12/21/fcc-passes-net-neutrality/
Free Press, & Public Knowledge. (2007). Formal Complaint of Free Press and Public Knowledge against Comcast Corporation for Secretly Degrading Peer-to-Peer Applications.
Kottke, J. (2003, August 8). Google and the Fabulous Googlettes. Retrieved from http://kottke.org/03/08/google-and-the-fabulous-googlettes
Labaton, S. (2001, January 12). F.C.C. Approves AOL-Time Warner Deal, With Conditions. The New York Times. Retrieved from http://www.nytimes.com/2001/01/12/business/fcc-approves-aol-time-warner-deal-with-conditions.html
Labovitz, C. (2010a, March 16). How Big is Google? Retrieved from http://www.arbornetworks.com/asert/2010/03/how-big-is-google/
Labovitz, C. (2010b, April 27). The Battle of the Hyper Giants (Part I). Retrieved from http://www.arbornetworks.com/asert/2010/04/the-battle-of-the-hyper-giants-part-i-2/
Labovitz, C., Iekel-Johnson, S., McPherson, D., Jahanian, F., Oberheide, J., & Karir, M. (2009, November). ATLAS Internet Observatory 2009 Annual Report. Retrieved from https://www.nanog.org/meetings/nanog47/presentations/Monday/Labovitz_ObserveReport_N47_Mon.pdf
Lasar, M. (2008, May 6). Google holds Verizon’s feet to fire on 700MHz open access. Ars Technica. Retrieved from http://arstechnica.com/uncategorized/2008/05/google-holds-verizons-feet-to-fire-on-700mhz-open-access/
Latour, B. (1991). Nous n’avons jamais été modernes : Essai d’anthropologie symétrique. La Découverte.
Latour, B. (1996). On actor-network theory: a few clarifications. Soziale Welt, 47(4), 369–381.
Latour, B. (2005). Reassembling the Social: An Introduction to Actor-network Theory. Oxford: Oxford University Press.
Law, J. (1992). Notes on the theory of the actor-network: Ordering, strategy, and heterogeneity. Systems Practice, 5(4), 379–393.
Law, J. (1999). After ANT: complexity, naming and topology. The Sociological Review, 47(S1), 1–14.
Law, J., & Lodge, P. (1984). Science for social scientists. Palgrave Macmillan.
Lee, M. (2011). Google ads and the blindspot debate. Media, Culture & Society, 33(3), 433–447.
Lemley, M. A., & Lessig, L. (2001). The End of End-to-End: Preserving the Architecture of the Internet in the Broadband Era. UCLA Law Review, 48(4), 925–972.
Lenhart, A. (2012). Teens, smartphones & texting (Pew Research Center's Internet & American Life Project) (pp. 1–34). Pew Research Center. Retrieved from http://www.unav.edu/matrimonioyfamilia/observatorio/uploads/29710_Pew-Internet-Lenhart_Teens-smartphones-2012.pdf
Lenoir, T. (1997). Doug Engelbart 1968 Demo. Retrieved August 23, 2016, from http://web.stanford.edu/dept/SUL/library/extra4/sloan/mousesite/1968Demo.html
Lessig, L. (1999). Code and Other Laws of Cyberspace. Basic Books.
Lessig, L. (2004). Free Culture: How Big Media Uses Technology and the Law to Lock Down Culture and Control Creativity. Penguin. Retrieved from http://books.google.com/
Levy, S. (1984). Hackers: Heroes of the Computer Revolution. New York City: Nerraw Manijaime/Doubleday.
Levy, S. (2009, May 22). Secret of Googlenomics: Data-Fueled Recipe Brews Profitability. Wired. Retrieved from http://www.wired.com/2009/05/nep-googlenomics/
Levy, S. (2011). In The Plex: How Google Thinks, Works, and Shapes Our Lives. Simon & Schuster.
Levy, S. (2012, October 17). Google Throws Open Doors to Its Top-Secret Data Center. Wired. Retrieved from http://www.wired.com/2012/10/ff-inside-google-data-center/
Lobet-Maris, C. (2009). From Trust to Tracks. A Technology Assessment Perspective Revisited. In K. Becker & F. Stalder (Eds.), Deep Search: The Politics of Search beyond Google (pp. 73–84). StudienVerlag.
Lodhi, A., Larson, N., Dhamdhere, A., Dovrolis, C., & claffy, kc. (2014). Using peeringDB to understand the peering ecosystem. ACM SIGCOMM Computer Communication Review, 44(2), 20–27.
Louden, B. (2011). Homer's Odyssey and the Near East. Cambridge University Press.
Lowensohn, J. (2014, July 10). New app uses AT&T's sponsored data to sidestep monthly limits on apps, games, and surfing. The Verge. Retrieved from http://www.theverge.com/2014/7/10/5889031/new-app-uses-at-ts-sponsored-data-to-sidestep-monthly-limits-on-apps
Malonis, J. A. (Ed.). (2002). Internet Service Provider (ISP). In Gale Encyclopedia of E-commerce (Vol. 1). Detroit: Gale Group. Retrieved from http://www.encyclopedia.com/topic/Internet_service_provider.aspx
Markoff, J. (2010, October 9). Google Cars Drive Themselves, in Traffic. The New York Times. Retrieved from http://www.nytimes.com/2010/10/10/science/10google.html
Marsden, C. T. (2010). Net Neutrality: Towards a Co-Regulatory Solution. London: Bloomsbury Academic. Retrieved from http://www.amazon.com/Net-Neutrality-Towards-Co-Regulatory-Solution/dp/1849660069%3FSubscriptionId%3D1V7VTJ4HA4MFT9XBJ1R2%26tag%3Dmekentosjcom-20%26linkCode%3Dxm2%26camp%3D2025%26creative%3D165953%26creativeASIN%3D1849660069
Maryka, S. (2009, April 1). What is the Asynchronous Web, and How is it Revolutionary? Retrieved August 28, 2016, from http://www.theserverside.com/news/1363576/What-is-the-Asynchronous-Web-and-How-is-it-Revolutionary
Maurer, H., Balke, T., Kappe, F., Kulathuramaiyer, N., Weber, S., & Zaka, B. (2007). Report on dangers and opportunities posed by large search engines, particularly Google (p. 187). Graz: Institute for Information Systems and Computer Media, Graz University of Technology. Retrieved from http://www.iicm.tugraz.at/iicm_papers/dangers_google.pdf
McCormick, R. (2016, June 2). Elon Musk: There’s only one AI company that worries me. Retrieved July 19, 2016, from http://www.theverge.com/2016/6/2/11837566/elon-musk-one-ai-company-that-worries-me
McDiarmid, A. (2010, August 11). Why the Google-Verizon Proposal Falls Short. Retrieved from https://cdt.org/blog/why-the-google-verizon-proposal-falls-short/
McMillan, R. (2013, July 7). Google Serves 25 Percent of North American Internet Traffic. Wired. Retrieved from http://www.wired.com/2013/07/google-internet-traffic/
McMillan, R. (2014, June 23). What Everyone Gets Wrong in the Debate Over Net Neutrality. Wired. Retrieved from http://www.wired.com/2014/06/net_neutrality_missing/
McMillan, R. (2015, February 26). How Google’s Silence Helped Net Neutrality Win. Wired. Retrieved from https://www.wired.com/2015/02/google-net-neutrality/
McSherry, C. (2009, October 21). Is Net Neutrality a FCC Trojan Horse? Retrieved from https://www.eff.org/deeplinks/2009/09/net-neutrality-fcc-perils-and-promise
Miller, C. C., & Helft, M. (2010, August 9). Google and Verizon Offer a Vision for Managing Internet Traffic. The New York Times. Retrieved from http://www.nytimes.com/2010/08/10/technology/10net.html
Miller, R. (2010, March 18). Is Google’s Network Morphing Into a CDN? Retrieved July 21, 2014, from http://www.datacenterknowledge.com/archives/2010/03/18/google-boosts-peering-to-save-on-bandwidth/
Miller, R. (2011, August 1). Report: Google Uses About 900,000 Servers. Retrieved from http://www.datacenterknowledge.com/archives/2011/08/01/report-google-uses-about-900000-servers/
Miller, R. (2012, May 15). Google Data Center FAQ & Locations. Retrieved July 20, 2014, from http://www.datacenterknowledge.com/archives/2012/05/15/google-data-center-faq/
Miller, R. (2014, February 3). Google Spent $7.3 Billion on its Data Centers in 2013. Data Center Knowledge. Retrieved from http://www.datacenterknowledge.com/archives/2014/02/03/google-spent-7-3-billion-data-centers-2013/
Miller, R. (2015, December 14). Carrier Hotels Are Sexy Again. Retrieved December 15, 2015, from https://datacenterfrontier.com/netrality-carrier-hotels/
Mills, E. (2006, February 27). Who’s who of Google hires. ZDNet. Retrieved from http://www.zdnet.com/article/whos-who-of-google-hires/
Milo, J. (2010, July 2). Google Apps highlights. Retrieved from https://googleblog.blogspot.com/2010/07/google-apps-highlights-722010.html
Minar, N. (2010, August 12). Why would Google give up on net neutrality? Some Bits: Nelson’s Weblog. Retrieved from http://www.somebits.com/weblog/tech/google-verizon-abandoning-net-neutrality.html
Molla, R. (2014, August 18). A Decade in Google Lobbying - The Numbers [Blog]. Retrieved from http://blogs.wsj.com/numbers/a-decade-in-google-lobbying-1713/
Monteiro, E., & Hanseth, O. (1996). Social shaping of information infrastructure: on being specific about the technology. In Information Technology and Changes in Organizational Work (pp. 325–343). Springer.
Mueller, M. (2004). Making Sense of “Internet Governance:” Defining Principles and Norms in a Policy Context. Retrieved from http://www.wgig.org/docs/ig-project5.pdf
Mueller, M. L. (2002). Ruling the Root: Internet Governance and the Taming of Cyberspace. Cambridge: MIT Press. Retrieved from http://books.google.ca
Munroe, R. (2013, September 17). Google’s Datacenters on Punch Cards. What If? Retrieved from https://what-if.xkcd.com/63/
Nagle, J. (1984). Congestion Control in IP/TCP Internetworks (Request For Comments No. 896). Retrieved from https://tools.ietf.org/html/rfc896
Nistor, J. (2013, July 24). Email interview with Jon Nistor, Director, Toronto Internet Exchange (TorIX) [Email].
Noam, E. (2014). Cloud TV: Toward the next generation of network policy debates. Telecommunications Policy, 38(8), 684–692.
Noam, E. M. (2006). Why TV regulation will become telecom regulation. Richards, Foster, and Kiedrowski, 67–72.
Noam, E. M. (Ed.). (2016). Who owns the world’s media?: Media concentration and ownership around the world. Oxford University Press. Retrieved from http://books2.scholarsportal.info.myaccess.library.utoronto.ca/viewdoc.html?id=/ebooks/ebooks3/oso/2016-01-02/1/9780199987238-Noam
Nocera, J. (2010, September 3). The Struggle for What We Already Have. The New York Times. Retrieved from http://www.nytimes.com/2010/09/04/business/04nocera.html
Norman, D. A. (1988). The Design of Everyday Things. New York: Basic Books.
Norman, D. A. (2006). Words Matter. Talk About People: Not Customers, Not Consumers, Not Users. Retrieved from http://www.jnd.org/dn.mss/words_matter_talk_ab.html
Norton, W. B. (2014). The 2014 Internet Peering Playbook: Connecting to the Core of the Internet (2014 edition). DrPeering Press.
Nowak, P. (2008, July 7). Bell’s internet throttling illegal, Google says. Retrieved January 20, 2016, from http://www.cbc.ca/news/technology/bell-s-internet-throttling-illegal-google-says-1.727851
Number Resource Organization. (2015, June 2). Internet Governance. Retrieved September 4, 2016, from https://www.nro.net/nro-and-internet-governance
Nurmi, S. (2008, April 11). Map of all Google data center locations. Retrieved from http://royal.pingdom.com/2008/04/11/map-of-all-google-data-center-locations/
Oberoi, A. (2013, July 3). The History of Online Advertising. Retrieved from http://www.adpushup.com/blog/the-history-of-online-advertising/
O’Connell, P. (2005, November 6). Online Extra: At SBC, It’s All About “Scale and Scope.” BusinessWeekOnline. Retrieved from http://www.bloomberg.com/bw/stories/2005-11-06/online-extra-at-sbc-its-all-about-scale-and-scope
Open Handset Alliance. (2007, November 5). Industry Leaders Announce Open Platform for Mobile Devices [Media Release]. Retrieved September 1, 2016, from http://www.openhandsetalliance.com/press_110507.html
OpenNet Initiative. (2005). Telus Blocks Consumer Access to Labour Union Web Site and Filters an Additional 766 Unrelated Sites. OpenNet Initiative Bulletin, (10), 2/9/2010.
Oram, A. (2010, August 11). What I get and don’t get about the Google/Verizon proposal. O’Reilly Radar. Retrieved from http://radar.oreilly.com/2010/08/what-i-get-and-dont-get-about.html
Ostrom, E. (2007). Institutional Rational Choice: An Assessment of the Institutional Analysis and Development Framework. In P. A. Sabatier (Ed.), Theories of the Policy Process (Second Edition). Cambridge: Westview Press.
Ott, S. A. (2004, May 29). Google’s GMail - Privacy concerns. Retrieved from http://www.linksandlaw.com/gmail-google-privacy-concerns.htm
Page, L. (2008, April 30). The best advice I ever got - Larry Page. Fortune. Retrieved from http://archive.fortune.com/galleries/2008/fortune/0804/gallery.bestadvice.fortune/2.html
Pariser, E. (2011). The Filter Bubble: How the New Personalized Web Is Changing What We Read and How We Think. New York: Penguin.
Park, P. (2008). Voice over IP Security. Cisco Press. Retrieved from http://www.networkworld.com/article/2272296/lan-wan/chapter-1--working-with-voip.html
Pasquinelli, M. (2009). Google’s PageRank Algorithm: A Diagram of Cognitive Capitalism and the Rentier of the Common Intellect. In K. Becker & F. Stalder (Eds.), Deep Search: The Politics of Search beyond Google (pp. 152–162). StudienVerlag.
Paul, I. (2010, August 6). Net Neutrality: Are Google and Verizon Waffling? PCWorld. Retrieved from http://www.pcworld.com/article/202741/net_neutrality_are_google_and_verizon_waffling.html
Pearn, J. (2012, January 25). How many servers does Google have? [Google+]. Retrieved from https://plus.google.com/+JamesPearn/posts/VaQu9sNxJuY
PeeringDB. (2013, October 28). [Database]. Retrieved August 15, 2014, from https://www.peeringdb.com/
Perez, J. C. (2009, July 10). Outlook Separation Anxiety Holds Back Google Apps. PCWorld. Retrieved from http://www.pcworld.com/article/168248/article.html
Petersen, S. M. (2008). Loser generated content: From participation to exploitation. First Monday, 13(3). Retrieved from http://firstmonday.org/article/view/2141/1948
Public Knowledge. (2010, August 4). Public Knowledge Calls Verizon-Google Deal “Regrettable” [Press Release]. Retrieved January 20, 2016, from http://test.publicknowledge.org/press-release/public-knowledge-calls-verizon-google-deal-regret
Qiu, W. (2013). South-East Asia Japan Cable (SJC) System Overview. Retrieved July 18, 2014, from http://submarinenetworks.com/systems/intra-asia/sjc/sjc-cable-system
Rayburn, D. (2014, February 23). Inside The Netflix/Comcast Deal and What The Media Is Getting Very Wrong [Blog]. Retrieved from http://blog.streamingmedia.com/2014/02/media-botching-coverage-netflix-comcast-deal-getting-basics-wrong.html
Reed, B. (2009, October 9). Google, Verizon teaming to develop Android devices. Network World. Retrieved from https://web.archive.org/web/20091009001427/http://www.networkworld.com/news/2009/100609-google-verizon-android.html
Reisman, A. (2007, July 5). The White Lies ISPs Tell About Broadband Speeds. PCMag.com. Retrieved from http://www.pcmag.com/article2/0,2817,2155140,00.asp
Richmond, S. (2012, June 29). Smartphones hardly used for calls. The Telegraph. Retrieved from http://www.telegraph.co.uk/technology/mobile-phones/9365085/Smartphones-hardly-used-for-calls.html
Ross, J. (2008, August 1). FCC Rules Against Comcast. NPR.org. Retrieved from http://www.npr.org/templates/story/story.php?storyId=93194962
Rotem-Gal-Oz, A. (2006). Fallacies of distributed computing explained. Retrieved from https://pages.cs.wisc.edu/~zuyu/files/fallacies.pdf
Roush, W. (2009, October 20). Arbor Networks Reports on the Rise of the Internet “Hyper Giants.” Xconomy. Retrieved from http://www.xconomy.com/boston/2009/10/20/arbor-networks-reports-on-the-rise-of-the-internet-hyper-giants/#
Ruth, S. (2010). Bumps on the Road to the National Broadband Plan. IEEE Internet Computing, 14(6), 59–63.
Saltzer, J. H. (1999, October 22). “Open Access” is Just the Tip of the Iceberg. Retrieved September 26, 2016, from http://web.mit.edu/Saltzer/www/publications/openaccess.html
Saltzer, J. H., Reed, D. P., & Clark, D. D. (1984). End-To-End Arguments in System Design. ACM Transactions on Computer Systems (TOCS), 2(4), 277–288.
San Francisco Chronicle. (2010, August 9). FCC needs to get tough on network neutrality. SFGate. Retrieved from http://www.sfgate.com/opinion/editorials/article/FCC-needs-to-get-tough-on-network-neutrality-3256563.php
Sandvig, C. (2013). The Internet as Infrastructure. In W. H. Dutton (Ed.), The Oxford Handbook of Internet Studies (pp. 86–108). Oxford: Oxford University Press.
Sandvine. (2015, December 7). Sandvine: Over 70% Of North American Traffic Is Now Streaming Video And Audio [Media Release]. Retrieved September 7, 2016, from https://www.sandvine.com/pr/2015/12/7/sandvine-over-70-of-north-american-traffic-is-now-streaming-video-and-audio.html
Sasso, B. (2014, September 15). Netflix Has Replaced Google as the Face of Net Neutrality. National Journal. Retrieved from http://www.nationaljournal.com/tech/netflix-has-replaced-google-as-the-face-of-net-neutrality-20140915
Schmidt, E. (2006, July 19). A Note to Google Users on Net Neutrality. Retrieved from http://www.google.com/help/netneutrality_letter.html
Schmidt, E., Rosenberg, J., & Eagle, A. (2014). How Google Works (Kindle Edition). New York: Grand Central Publishing.
Schneier, B. (2016, February 4). The Internet of Things Will Be the World’s Biggest Robot [Blog]. Retrieved from https://www.schneier.com/blog/archives/2016/02/the_internet_of_1.html
Schonfeld, E. (2011, May 31). Eric Schmidt’s Gang Of Four: Google, Apple, Amazon, and Facebook. TechCrunch. Retrieved from http://techcrunch.com/2011/05/31/schmidt-gang-four-google-apple-amazon-facebook/
Schwartz, B. (2007, March 21). No Google Phone But Instead Mobile Software, Says Google. Search Engine Land. Retrieved from http://searchengineland.com/no-google-phone-but-instead-mobile-software-says-google-10781
Shields, T. (2014, July 8). Google Waning on Net Neturality Leaves Fight to Startups. Retrieved January 20, 2016, from http://www.bloomberg.com/news/articles/2014-07-08/google-waning-on-net-neturality-leaves-fight-to-startups
Shirky, C. (2003, February 10). Power Laws, Weblogs, and Inequality [Blog]. Retrieved from http://www.shirky.com/writings/powerlaw_weblog.html
Shu, C. (2014, January 26). Google Acquires Artificial Intelligence Startup DeepMind For More Than $500M. TechCrunch. Retrieved from http://social.techcrunch.com/2014/01/26/google-deepmind/
Silbey, M. (2012, February 27). Discovering who controls the other half of the Internet. Retrieved from http://www.zdnet.com/article/discovering-who-controls-the-other-half-of-the-internet/
Silver, J. S. (2010, 05). Google-Verizon Deal: The End of The Internet as We Know It. The Huffington Post. Retrieved from http://www.huffingtonpost.com/josh-silver/google-verizon-deal-the-e_b_671617.html
Singel, R. (2011, January 20). Google Co-Founder Larry Page Takes Over From CEO Eric Schmidt. Wired. Retrieved from https://www.wired.com/2011/01/schmidt-page-google-ceo/
Singel, R. (2013, July 30). Now That It’s in the Broadband Game, Google Flip-Flops on Network Neutrality. Wired Threat Level. Retrieved from http://www.wired.com/2013/07/google-neutrality/
Smythe, D. W. (1977). Communications: blindspot of western Marxism. Canadian Journal of Political and Social Theory, 1(3), 1–27.
Smythe, D. W. (1981). On the audience commodity and its work. In M. G. Durham & D. Kellner (Eds.), Media and Cultural Studies (pp. 230–56). Malden: Blackwell.
Stalder, F. (1997, September). Actor-Network-Theory and Communication Networks. Retrieved August 18, 2016, from http://felix.openflows.com/html/Network_Theory.html
Stalder, F., & Mayer, C. (2009). The Second Index: Search Engines, Personalization and Surveillance. In K. Becker & F. Stalder (Eds.), Deep Search: The Politics of Search beyond Google (pp. 98–115). StudienVerlag.
Star, S. L. (1999). The Ethnography of Infrastructure. American Behavioral Scientist, 43(3), 377–391. https://doi.org/10.1177/00027649921955326
Star, S. L., & Ruhleder, K. (1996). Steps Toward an Ecology of Infrastructure: Design and Access for Large Information Spaces. Information Systems Research, 7(1), 111–134.
Stevenson, J. H. (2014). The Master Switch and the Hyper Giant: Google’s Infrastructure and Network Neutrality Strategy in the 2000s. In TPRC Conference Paper: 42nd Research Conference on Communication, Information and Internet Policy. George Mason University School of Law, Arlington, Virginia.
Stevenson, J. H. (2016, October 1). Google’s Infrastructure, October 28 2013 [Google My Map]. Retrieved from https://drive.google.com/open?id=1nXSNhvDo5jaSS1h9gFuqQnRNIqg&usp=sharing
Stevenson, J. H., & Clement, A. (2010). Regulatory Lessons for Internet Traffic Management from Japan, the European Union, and the United States: Toward Equity, Neutrality and Transparency. Global Media Journal, 3(1), 9–29.
Strahonja, V. (2009). Definition Metamodel of ITIL. In W. Wojtkowski, G. Wojtkowski, C. Barry, M. Lang, & K. Conboy (Eds.), Information Systems Development: Challenges in Practice, Theory, and Education (Vol. 2, pp. 1081–1092). Springer.
Sunstein, C. R. (2007). Incompletely theorized agreements in constitutional law (Public Law & Legal Theory Working Paper No. 147) (pp. 1–24). University of Chicago Law School.
Svensson, P. (2007, October 19). Comcast blocks some Internet traffic: Tests confirm data discrimination by number 2 U.S. service provider. NBC News. Retrieved from http://www.msnbc.msn.com/id/21376597/
Swanner, N. (2016, August 21). T-Mobile One is garbage, and the EFF says it violates net neutrality. Retrieved October 12, 2016, from http://thenextweb.com/insider/2016/08/21/t-mobile-one-eff-net-neutrality/
Tady, M. (2010, August 10). What Google Still Isn’t Saying. The Huffington Post. Retrieved from http://www.huffingtonpost.com/megan-tady/what-google-still-isnt-sa_b_672341.html
Tapscott, D., & Williams, A. D. (2008). Wikinomics: How Mass Collaboration Changes Everything. Penguin.
TeleGeography. (2008, August 26). Google’s subsea ambitions expand. Retrieved September 9, 2016, from https://www.telegeography.com/products/commsupdate/articles/2008/08/26/googles-subsea-ambitions-expand/
The Center for Responsive Politics. (2015). Lobbying Spending Database - Google Inc, 2015. Retrieved June 29, 2015, from https://www.opensecrets.org/lobby/clientsum.php?id=D000022008
Thierer, A. (2011, March 25). Lessons from the Gmail Privacy Scare of 2004. Retrieved from https://techliberation.com/2011/03/25/lessons-from-the-gmail-privacy-scare-of-2004/
Toffler, A. (1980). The Third Wave. New York: Bantam Books.
Turner, B. (2009, April 16). Google’s Peering and Caching Strategy [Blog]. Retrieved from http://blogs.broughturner.com/2009/04/googles-peering-and-caching-strategy.html
Vaidhyanathan, S. (2011). The Googlization of Everything (And Why We Should Worry). Berkeley: University of California Press. Retrieved from http://books.google.com/
Van Buskirk, E. (2010, August 9). Here’s The Real Google/Verizon Story: A Tale of Two Internets. Wired. Retrieved from https://www.wired.com/2010/08/google-verizon-propose-open-vs-paid-internets/
Van Couvering, E. (2008). The history of the Internet search engine: Navigational media and the traffic commodity. In A. Spink & M. Zimmer (Eds.), Web Search: Multidisciplinary Perspectives (pp. 177–206). Berlin: Springer.
Van Hoboken, J. (2009). Search Engine Law and Freedom of Expression: A European Perspective. In K. Becker & F. Stalder (Eds.), Deep Search: The Politics of Search beyond Google (pp. 85–97). StudienVerlag.
Van House, N. (2001). Actor-Network Theory, Knowledge Work, and Digital Libraries. School of Information Management and Systems. Retrieved from http://people.ischool.berkeley.edu/~vanhouse/bridge.html
Verizon, & Google. (2010). Verizon-Google Legislative Framework Proposal, August 9, 2010. Retrieved from https://static.googleusercontent.com/media/www.google.com/en//googleblogs/pdfs/verizon_google_legislative_framework_proposal_081010.pdf
Vise, D. A., & Malseed, M. (2008). The Google Story: For Google’s 10th Birthday. Random House Digital.
Volmer, T. (2015, June). Growing local content & peering. Presented at the AfNOG 2015 [African Network Operators Group], Tunis. Retrieved from http://www.slideshare.net/AfriNIC/2-ais-tunis-2015-thomas-volmer
w3schools.com. (2007). AJAX Introduction. Retrieved September 1, 2016, from http://www.w3schools.com/ajax/ajax_intro.asp
Walker, T. (2012, December 14). Eric Schmidt: Is the executive chairman of Google really the arrogant defender of tax avoidance that his critics claim? The Independent. Retrieved from http://www.independent.co.uk/news/people/profiles/eric-schmidt-is-the-executive-chairman-of-google-really-the-arrogant-defender-of-tax-avoidance-that-8418153.html
Walsham, G. (1997). Actor-network theory and IS research: current status and future prospects. In Information Systems and Qualitative Research (pp. 466–480). Springer.
Wasko, J., & Erickson, M. (2009). The political economy of YouTube. In P. Snickars & P. Vonderau (Eds.), The YouTube Reader (pp. 372–386). Stockholm: National Library of Sweden.
Weber, S. (2007). Das Google-Copy-Paste-Syndrom: Wie Netzplagiate Ausbildung und Wissen gefährden. Hannover: Heise Zeitschriften Verlag.
Whitson, R. (2016, July 19). Review of Benjamin Bratton’s The Stack: On Software and Sovereignty. Roger Whitson. Retrieved from http://www.rogerwhitson.net/?p=3501
Whitt, R. (2007, June 16). Google Public Policy Blog: What Do We Mean By “Net Neutrality”? Retrieved July 21, 2014, from http://googlepublicpolicy.blogspot.ca/2007/06/what-do-we-mean-by-net-neutrality.html
Whitt, R. (2010, August 12). Facts about our network neutrality policy proposal. Google Public Policy Blog. Retrieved from https://publicpolicy.googleblog.com/2010/08/facts-about-our-network-neutrality.html
Williams, M. (2009, November 2). Google-backed Unity Cable Lands in Japan | PCWorld. PC World. Retrieved from http://www.pcworld.com/article/181258/article.html
Winseck, D. (2012, April 25). Open Data and Open Internet Dogma: Sergey Brin’s Guardian Interview and the Political Economy of Google. Retrieved from https://dwmw.wordpress.com/2012/04/25/open-data-and-open-internet-dogma-sergey-brins-guardian-interview-and-the-political-economy-of-google/
Wohlsen, M. (2014, January 14). What Google Really Gets Out of Buying Nest for $3.2 Billion. Wired. Retrieved from https://www.wired.com/2014/01/googles-3-billion-nest-buy-finally-make-internet-things-real-us/
Worstall, T. (2014, July 15). Why Google, Facebook, The Internet Giants, Are Arguing For Net Neutrality. Forbes. Retrieved from http://www.forbes.com/sites/timworstall/2014/07/15/why-google-facebook-the-internet-giants-are-arguing-for-net-neutrality/
Wright, L. (2016, May 30). Why “unlimited streaming” plans could be bad for consumers. Retrieved October 12, 2016, from http://www.cbc.ca/news/technology/crtc-review-differential-pricing-zero-rating-1.3603807
Wu, T. (2003). Network neutrality, broadband discrimination. Journal on Telecommunications and High Technology Law, 2, 141–176.
Wu, T. (2006). Network Neutrality FAQ. Retrieved December 11, 2015, from http://www.timwu.org/network_neutrality.html
Wu, T. (2010). The Master Switch: The Rise and Fall of Information Empires. New York City: Knopf.
Wyatt, E. (2010a, May 24). Congress to Review Telecommunications Law. The New York Times. Retrieved from http://www.nytimes.com/2010/05/25/technology/25broadband.html
Wyatt, E. (2010b, August 4). Google and Verizon Near Deal on Pay Tiers for Web. The New York Times. Retrieved from http://www.nytimes.com/2010/08/05/technology/05secret.html?pagewanted=all&_r=1&
Yin, R. K. (2013). Case study research: Design and methods. Sage Publications Ltd.
Zachman, J. A. (1987). A framework for information systems architecture. IBM Systems Journal, 26(3), 276–292.
Ziewitz, M., & Pentzold, C. (2014). In search of Internet governance: Performing order in digitally networked environments. New Media & Society, 16(2), 306–322.
Zittrain, J. (2010, August 16). The Google/Verizon framework [Blog]. Retrieved from http://blogs.law.harvard.edu/futureoftheinternet/2010/08/16/the-googleverizon-framework/
Zook, M. A., & Graham, M. (2007). The creative reconstruction of the Internet: Google and the privatization of cyberspace and DigiPlace. Geoforum, 38(6), 1322–1343.
Zuboff, S. (2015). Big Other: Surveillance Capitalism and the Prospects of an Information Civilization. Journal of Information Technology, 30(1), 75–89.
Appendix A: List of acronyms
AfNOG African Network Operators Group
AFRINIC African Network Information Center
ANT Actor-Network Theory
APNIC Asia-Pacific Network Information Centre
ARIN American Registry for Internet Numbers
ASN Autonomous System Number
CAIP Canadian Association of Internet Providers
CCG Client-Centric Geolocation
CDN Content Delivery Network
CRTC Canadian Radio-television and Telecommunications Commission
CSV Comma-separated value format
DNS Domain Name System
DPI Deep packet inspection
FCC Federal Communications Commission
GGC Google Global Cache
HTML HyperText Markup Language
IP Internet Protocol
ISP Internet Service Provider
IXP Internet Exchange Point
IX Internet Exchange
KML Keyhole Markup Language
LACNIC Latin America and Caribbean Network Information Centre
LSNSCP Large-Scale Network-Savvy Content Provider
Mbit Megabit
NCTA National Cable and Telecommunications Association
NIE Networked Information Economy
OPP Obligatory Passage Point
PoP Point of presence
QoS Quality of Service
RIPE NCC Réseaux IP Européens Network Coordination Centre
RIR Regional Internet Registry
SEO Search Engine Optimization
TorIX Toronto Internet Exchange
TTL Time-To-Live
UHF Ultra high frequency
VoIP Voice Over IP
VPN Virtual Private Network
WAN Wide Area Network
WSW World-Sized Web
XML Extensible Markup Language
Appendix B: Google Peering Locations, October 2013
Location Peering Type Country
151 Front Street West Toronto Private CA
AIMS Kuala Lumpur Private MY
Amsterdam Internet Exchange Public NL
Berlin Commercial Internet Exchange Public DE
Blue City Private OM
Budapest Internet Exchange Public HU
Buffalo Niagara International Internet Exchange Public US
Cable & Wireless Munich Private DE
Chief LY Building Taipei Private TW
Consorzio Top-IX Public IT
CoreSite - Any2 California Public US
CoreSite - Any2 Denver / Formerly RMIX Public US
CoreSite - LA1 - One Wilshire Private US
CSF CX1 Cyberjaya Private MY
Dataline Borovaya Private RU
Dataplex Budapest Private HU
DE-CIX, the Hamburg Internet Exchange Public DE
Deutscher Commercial Internet Exchange Public DE
Distributed IX in EDO (former NSPIXP2) Public JP
Equinix Ashburn (DC1-DC11) Private US
Equinix Ashburn Exchange Public US
Equinix Atlanta (AT2/3) Private US
Equinix Chicago (CH1/CH2) Private US
Equinix Chicago Exchange Public US
Equinix Dallas (DA1) Private US
Equinix Dallas Exchange Public US
Equinix Frankfurt KleyerStrasse (FR5) Private DE
Equinix Hong Kong Public HK
Equinix Internet Exchange Atlanta Public US
Equinix Internet Exchange New York Public US
Equinix Internet Exchange Palo Alto Public US
Equinix London Slough (LD4) Private UK
Equinix London Slough (LD5) Private UK
Equinix Los Angeles (LA1) Private US
Equinix Los Angeles Exchange Public US
Equinix New York (111 8th) Private US
Equinix Newark (NY1) Private US
Equinix Palo Alto (SV8) Private US
Equinix San Jose (SV1/5) Private US
Equinix San Jose / Bay Area Exchange Public US
Equinix Seattle (SE2/3) Private US
Equinix Singapore Private SG
Equinix Singapore Exchange Public SG
Equinix Sydney Private AU
Equinix Sydney Exchange Public AU
Equinix Tokyo Public JP
Equinix Tokyo (TY2) Private JP
Equinix Toronto Private CA
Equinix Zurich (ZH1) Private CH
Equinix Zurich, formerly TIX Public CH
Espana Internet Exchange Public ES
euNetworks (Global Voice) Private IE
European Commercial Exchange Berlin Public DE
European Commercial Exchange Duesseldorf Public DE
European Commercial Exchange Hamburg Public DE
France-IX Marseilles Public FR
FranceIX Public FR
GIGAbit Portuguese Internet eXchange Public PT
Global Crossing Sao Paulo Private BR
Global Switch (London 2) Private UK
Global Switch Singapore Private SG
Global Switch Sydney Private AU
Greater Toronto International Internet Exchange Public CA
Hong Kong Internet Exchange Public HK
iAdvantage Internet Exchange Public HK
Infomart Private US
Internet Exchange Point of Nigeria Public NG
Internet Exchange Service Public DE
Internet Multifeed Company Public JP
Internet Multifeed JPNAP Osaka Public JP
Internet Neutral Exchange Association Ltd. Public IE
InterXion Frankfurt 1 Private DE
InterXion Frankfurt 3 Private DE
InterXion Madrid Private ES
Itenos Frankfurt Private DE
Jacksonville Internet Exchange Public US
Japan Internet Exchange Public JP
LDCOM Netcenter Paris (Courbevoie) Private FR
Level(3) Chicago Private US
Level(3) Denver Private US
Level(3) London Braham Street Private UK
Level(3) Sunnyvale Private US
London Internet Exchange Ltd. Public UK
London Internet Exchange Ltd. Public UK
London Network Access Point Public UK
Los Angeles International Internet eXchange Public US
Malaysia Internet Exchange Public MY
Marubeni Access Solutions Inc. ComSpace I Private JP
Medallion Communications Lagos Private NG
MEGA iAdvantage Hong Kong Private HK
Milano Internet eXchange Public IT
Moscow Internet Exchange Public RU
Moscow Internet Exchange Public RU
Moscow M9 Private RU
NAP Africa Public ZA
NAP Of The Americas Public US
Netscalibur Milan Private IT
Neutral Internet Exchange Public CZ
New South Wales Internet Exchange (NSW-IX) Public AU
New York International Internet eXchange Public US
Northwest Access Exchange, Inc. Public US
Pacific Wave Exchange in Los Angeles and Seattle Public US
Pacific Wave Exchange in Los Angeles and Seattle Public US
Pacific Wave Exchange in Los Angeles and Seattle Public US
PacketExchange - eXpress Public GB
panap.fr - France - Bouygues Telecom ISP Public FR
Pipe Networks MLPA Sydney Public AU
Pirix Internet Exchange Public RU
PTT Rio de Janeiro Public BR
Saint-Petersburg Internet Exchange Public RU
SARA Amsterdam Private NL
SEACOM Mombasa Cable Landing Station Private KE
Seattle Internet Exchange Public US
SFR Netcenter Marseille Private FR
Shin Nikko Bldg Private JP
Singapore Open Exchange Public SG
Sitel Prague / CE Colo Prague Private CZ
SmartHub Fujairah Private AE
Speedbone Berlin Private DE
Stockholm Open Local Internet Exchange Public SE
Swiss Internet Exchange Public CH
TATA Communications Ltd Private IN
Tata Mumbai IDC Private IN
TelecityGroup Amsterdam 2 (South East) Private NL
TelecityGroup Dublin CityWest Private IE
TelecityGroup London (Harbour Exchange) Private UK
TelecityGroup London (Meridian Gate) Private UK
TelecityGroup London (Sovereign House) Private UK
TelecityGroup London 1 (Bonnington House) Private UK
TelecityGroup London 2 (Harbour Exchange) Private UK
TelecityGroup Stockholm 1 Private SE
Telehouse Europe London (Docklands East) Private UK
Telehouse Europe London (Docklands North) Private UK
Telehouse Paris 2 (Voltaire) Private FR
Telehouse Tokyo Private JP
TELEPOINT Private BG
Telvent Carrierhouse 2 Madrid Private ES
Telvent Carrierhouse Lisbon Private PT
Telx Atlanta Private US
Telx New York (60 Hudson) Private US
Teraco House Johannesburg JB1 Private ZA
Terremark - NAP do Brasil Public BR
Terremark Brazil Private BR
Terremark Miami Private US
The Big Apple Peering Exchange Public US
TIE:Atlanta, Atlanta Internet Exchange, AtlantaIX Public US
TIE:New York, Telx Internet Exchange New York Public US
Toronto Internet Exchange Community Public CA
Ucomline (Digital Generation) Kiev Private UA
VIBO Private TW
Westin Building Seattle Private US
Source: PeeringDB (2013). Retrieved October 28, 2013, from https://www.peeringdb.com/net/433
Appendix C: Timeline of Google’s history
Year Month Day Event
1996 Page and Brin begin to research web indexing and search in a project called BackRub. The project is hosted on the Stanford University campus until it begins to use too much bandwidth.
1998 09 04 Google files for incorporation in California.
2000 10 AdWords, Google’s first advertising platform, launches. The platform is self-service, and features performance feedback and keyword targeting.
2001 02 12 Google purchases Usenet archive Deja.com. Deja hosts over 500 million Usenet discussions dating back to 1995.
2001 08 Eric Schmidt becomes Google CEO.
2002 09 Google News launches.
2003 02 Google acquires Pyra Labs, creators of Blogger.
2003 03 Google launches AdSense, a content-targeted advertising service that enables publishers to access Google's network of advertisers.
2003 10 Android founded to create advanced OS for digital cameras.
2004 04 01 Gmail launches.
2004 08 Google initial public offering.
2004 10 Google acquires Where 2 Technologies, basis for Google Maps.
2004 Apple begins development of iPhone/Project Purple.
2005 01 25 Google launches Google Video, a video-sharing site.
2005 02 Google Maps launches.
2005 04 YouTube launches.
2005 06 Google Mobile Web Search is released.
2005 06 Google acquires XL2Web, a product of 2Web Technologies, the basis for Google Sheets.
2005 08 17 Google acquires Android Inc.
2005 08 Instant messaging platform Google Talk released.
2006 03 09 Google acquires Upstartle, creators of a web-based word processor Writely, the basis for Google Docs.
2006 06 01 Google acquires 2Web Technologies, the basis for Google Spreadsheet.
2006 06 06 Google launches Google Spreadsheets as a test app in Google Labs.
2006 08 28 Eric Schmidt elected to Apple Inc.'s board of directors.
2006 08 A suite of web-based apps for enterprise/small business, Google Apps for Your Domain (now G-Suite), is released.
2006 10 09 Google purchases YouTube for USD$1.65 billion.
2007 01 09 Apple announces first iPhone.
2007 04 13 Google acquires web display advertising company DoubleClick.
2007 07 02 Google purchases GrandCentral, the basis for Google’s VoIP service.
2007 06 16 First post about network neutrality from Whitt on Google Public Policy Blog, called “What Do We Mean By Net Neutrality?”
2007 07 20 Google indicates it intends to bid in the FCC spectrum auction if the FCC adopts consumer choice and competition requirements.
2007 11 05 Open Handset Alliance is announced, made up of Google, HTC, Sony, Samsung, Sprint Nextel, T-Mobile, Qualcomm, and Texas Instruments; its goal is to develop open standards for mobile devices, with Android as the first product.
2008 06 Google begins to deploy Google Global Cache technology. Google promotes caching to Latin American ISPs.
2008 10 22 First Android smartphone released, the HTC Dream, on T-Mobile.
2009 03 11 Google launches Google Voice.
2009 08 03 Schmidt leaves Apple board of directors.
2010 02 Google announces an intention to offer high-speed retail ISP services.
2010 02 Percentage of Google traffic using direct peering now over 60%. Most large Internet service providers in Europe and North America now host GGC servers.
2010 08 09 Google and Verizon issue “A joint policy proposal for an open Internet”
2010 12 Caching servers offered to ISPs in Kenya and Uganda.
2011 01 11 Google announces that Schmidt will step down as CEO.
2011 04 04 Page replaces Schmidt as CEO.
2012 01 Google takes a public stand against two legislative proposals in the U.S. (SOPA and PIPA) that Google claims would have censored the Internet and "impeded innovation". The bills are set aside. Google mobilises its users, hosting a "take action" web page, something it has not done for network neutrality.
2012 10 Google releases images of the interiors of its data centres, something it calls "unprecedented".
2012 11 Google Fiber begins installation in Kansas City, Kansas, and Kansas City, Missouri.
2013 09 Android passes one billion device activations.
Copyright Acknowledgements
Figures Figure 1.1 &
Figure 6.1
“How Google reaches customers” reproduced with permission
from Wu (2010). Copyright © 2010 by Tim Wu.
Figures Figure 5.2,
Figure 5.3, Figure 5.4,
Figure 5.5, Figure 7.1 &
Figure 7.2
Base map data Copyright © 2016 Google.
Figure 5.1 Copyright © 2014 Google.
Chapters 1, 4, 5, 6 & 7 Some passages adapted with permission from Stevenson
(2014). Copyright © 2014 John Harris Stevenson.