© 2015 451 RESEARCH, LLC AND/OR ITS AFFILIATES. ALL RIGHTS RESERVED.
Interconnection 101
KEY FINDINGS
• Network-dense, interconnection-oriented facilities are not easy to replicate and are typically able to charge higher prices for colocation, as well as charging for cross-connects and, in some cases, access to public Internet exchange platforms and cloud platforms.
• Competition is increasing, however, and competitors are starting the long process of creating network-dense sites. At the same time, these sites are valuable and are being acquired, so the sector is consolidating. Having facilities in multiple markets does seem to provide some competitive advantage, particularly if the facilities are similar in look and feel and customers can monitor them all from a single portal and have them on the same contract.
• Mobility, the Internet of Things, services such as SaaS and IaaS (cloud), and content delivery all depend on network performance. In many cases, a key way to improve network performance is to push content, processing and peering closer to the edge of the Internet. This is likely to drive demand for facilities in smaller markets that offer interconnection options. We also see these trends continuing to drive demand for interconnection facilities in the larger markets as well.
As cloud usage takes off, data production grows exponentially, content pushes closer to the edge, and end
users demand data and applications at all hours from all locations, the ability to connect with a wide variety
of players becomes ever more important. This report introduces interconnection, its key players and busi-
ness models, and trends that could affect interconnection going forward.
AUG 2015
ABOUT 451 RESEARCH
451 Research is a preeminent information technology research and advisory company.
With a core focus on technology innovation and market disruption, we provide essential
insight for leaders of the digital economy. More than 100 analysts and consultants
deliver that insight via syndicated research, advisory services and live events to over
1,000 client organizations in North America, Europe and around the world. Founded in
2000 and headquartered in New York, 451 Research is a division of The 451 Group.
© 2015 451 Research, LLC and/or its Affiliates. All Rights Reserved. Reproduction and distribution of
this publication, in whole or in part, in any form without prior written permission is forbidden. The
terms of use regarding distribution, both internally and externally, shall be governed by the terms
laid out in your Service Agreement with 451 Research and/or its Affiliates. The information contained
herein has been obtained from sources believed to be reliable. 451 Research disclaims all warranties as
to the accuracy, completeness or adequacy of such information. Although 451 Research may discuss
legal issues related to the information technology business, 451 Research does not provide legal
advice or services, and its research should not be construed or used as such. 451 Research shall have
no liability for errors, omissions or inadequacies in the information contained herein or for interpreta-
tions thereof. The reader assumes sole responsibility for the selection of these materials to achieve its
intended results. The opinions expressed herein are subject to change without notice.
New York
20 West 37th Street, 6th Floor
New York, NY 10018
Phone: 212.505.3030
Fax: 212.505.2630
San Francisco
140 Geary Street, 9th Floor
San Francisco, CA 94108
Phone: 415.989.1555
Fax: 415.989.1558
London
Paxton House (5th floor), 30 Artillery Lane
London, E1 7LS, UK
Phone: +44 (0) 207 426 0219
Fax: +44 (0) 207 426 4698
Boston
1 Liberty Square, 5th Floor
Boston, MA 02109
Phone: 617.275.8818
Fax: 617.261.0688
TABLE OF CONTENTS

SECTION 1: EXECUTIVE SUMMARY
1.1 INTRODUCTION
1.2 KEY FINDINGS
1.3 METHODOLOGY

SECTION 2: WHAT IS INTERCONNECTION, AND WHERE DOES IT COME FROM?
2.1 CARRIER-NEUTRAL DATACENTER VS MEET-ME ROOM
Figure 1: Carrier-Neutral Datacenter Compared with Meet-Me Room
2.2 INTERCONNECTING THE INTERNET
2.2.1 Private Interconnection
Figure 2: Internet Transit
Figure 3: Private Peering
Figure 4: Internet Transit Plus Peering
2.2.2 Public Interconnection or Public Peering
Figure 5: Public Peering Platform
Figure 6: Public Peering in the US vs. Europe

SECTION 3: INTERCONNECTION AS A BUSINESS
3.1 COMPONENTS
3.1.1 The Building
3.1.2 Bandwidth
3.1.3 Cross-Connects
3.1.4 Public Peering Platform
3.1.5 Access to Other Customers in the Facility, Particularly Cloud Providers
3.1.6 Additional Services
3.2 SUPPLY AND DEMAND
3.2.1 Supply
3.2.2 Demand
3.3 CUSTOMERS
Figure 7: Customers of Interconnection Facilities
Figure 8: Drivers of Facility Selection

SECTION 4: INTERCONNECTION PROVIDERS
Figure 9: 451 Research Interconnect Market Map™
Figure 10: Interconnection Provider Segments
Figure 11: Summary Chart: Market Challenges and Innovations

SECTION 5: EVOLUTION OF INTERCONNECTION: TRENDS AND DISRUPTORS
5.1 CONTINUED GROWTH OF INTERNET TRAFFIC AND THE NEED FOR INTERCONNECTION
5.2 INCREASE IN THE NUMBER OF FIRMS INTERCONNECTING
5.3 GROWING REQUIREMENT FOR INTERNET CONNECTIVITY AT THE EDGE
5.4 CLOUD'S IMPACT ON INTERCONNECTION
5.5 NET NEUTRALITY
5.6 'PRIVATIZATION' OF THE INTERNET
5.7 COMPETITIVE CHANGES
5.7.1 Open-IX
5.7.2 European Exchanges in the US
5.7.3 Additional Competition
5.8 TECHNOLOGY TRENDS

SECTION 6: THE 451 TAKE

APPENDIX A: GLOSSARY
APPENDIX B: KEY CARRIER HOTELS IN NORTH AMERICAN MARKETS
APPENDIX C: LOCATIONS FOR DIRECT CONNECTIONS TO CLOUD PROVIDERS
AWS Direct Connect Locations
Microsoft Azure ExpressRoute Locations
APPENDIX D: OPEN-IX CERTIFIED PROVIDERS

INDEX OF COMPANIES
SECTION 1
Executive Summary
1.1 INTRODUCTION
Interconnection has come a long way since telecommunications providers connected their
networks in order to exchange voice traffic. Now, in addition to carriers, many other kinds of
firms need to connect with each other to exchange data traffic, and interconnection itself
has become a business. Facilities where the largest number of firms can meet have become
extremely valuable. This report looks at the business of interconnection and discusses trends
that are likely to impact it going forward.
1.2 KEY FINDINGS
• Network-dense, interconnection-oriented facilities are not easy to replicate and are typically
able to charge higher prices for colocation, as well as charging for cross-connects and, in
some cases, access to public Internet exchange platforms and cloud platforms.
• Competition is increasing, however, and competitors are starting the long process of creating
network-dense sites. At the same time, these sites are valuable and are being acquired, so
the sector is consolidating. Having facilities in multiple markets does seem to provide some
competitive advantage, particularly if the facilities are similar in look and feel and customers
can monitor them all from a single portal and have them on the same contract.
• Mobility, the Internet of Things, services such as SaaS and IaaS (cloud), and content
delivery all depend on network performance. In many cases, a key way to improve network
performance is to push content, processing and peering closer to the edge of the Internet.
This is likely to drive demand for facilities in smaller markets that offer interconnection
options. We also see these trends continuing to drive demand for interconnection facilities in
the larger markets as well.
1.3 METHODOLOGY
This report on interconnection services is based on a series of in-depth interviews with a variety of stakeholders in the industry – including technology vendors, datacenter service providers and providers of connectivity services – as well as surveys and interviews of IT managers at end-user organizations across multiple sectors. This research was supplemented by additional primary research, including attendance at trade shows and industry events.
Please note that the names of vendors and service providers are meant to serve as illustrative examples of trends and competitive strategies; company lists are not intended to be exhaustive. The inclusion (or absence) of a company name in the report does not necessarily constitute endorsement.
Reports such as this one represent a holistic perspective on key emerging markets in
the enterprise IT space. These markets evolve quickly, so 451 Research offers additional
services that provide critical marketplace updates. These updated reports and perspec-
tives are presented on a daily basis via the company’s core intelligence service, 451
Research Market Insight. Forward-looking M&A analysis and perspectives on strategic
acquisitions and the liquidity environment for technology companies are also updated
regularly via 451 Market Insight, which is backed by the industry-leading 451 Research
M&A KnowledgeBase.
Emerging technologies and markets are also covered in additional 451 Research chan-
nels, including Datacenter Technology; Enterprise Storage; Systems and Systems Manage-
ment; Enterprise Networking; Enterprise Security; Data Platforms & Analytics; Dev, Devops
& Middleware; Business Apps (Social Business); Managed Services and Hosting; Cloud
Services; MTDC; Enterprise Mobility; and Mobile Telecom.
Beyond that, 451 Research has a robust set of quantitative insights covered in products
such as ChangeWave, TheInfoPro, Market Monitor, the M&A KnowledgeBase and the
Datacenter KnowledgeBase.
All of these 451 Research services, which are accessible via the Web, provide critical and
timely analysis specifically focused on the business of enterprise IT innovation.
This report was written by Jim Davis, Senior Analyst, Service Providers, and Kelly Morgan,
Research Director, Datacenters. Any questions about the methodology should be
addressed to Jim Davis or Kelly Morgan at: jim.davis@451research.com or
kelly.morgan@451research.com.
For more information about 451 Research, please go to: www.451research.com.
SECTION 2
What Is Interconnection, and Where Does It Come From?
The very essence of the Internet is interconnection; the word is a shortened version of 'internetworking,' because the Internet is a system of millions of networks that have been linked together by the use of standard protocols for communication. Beyond the technical standards, however, interconnection has become a business in its own right. In this report, we focus on interconnection services, key players and business models – particularly within and between datacenters.
Many interconnect locations got their start as 'carrier hotels.' National telecom providers have always needed to hand off international traffic to carriers in other countries. They connected with each other at key locations to make this handoff, often near the landing points of undersea cables. As national carriers have been deregulated and competition within the US has grown, competing carriers have had to connect their networks to exchange national as well as international traffic. As a result, the number of carrier hotels and the locations where they are needed have multiplied. Due to the concentration of carriers, these carrier hotels have also become key locations for Internet connectivity.
The original buildings where carriers connected their networks belonged to the carriers themselves,
to the incumbents and/or the long-haul network providers. These tended to be central offices (COs),
where the owner had telco equipment but leased out extra space to other carriers. Often, the owner
provided the only means of network connectivity to the facility. However, there was not necessarily
much incentive for the carrier-owner to maintain, expand or upgrade the CO to add capacity for
potential competitors.
Local carriers sought locations that were more 'neutral.' These were often office buildings in the center of cities, to which several providers already had fiber connectivity. The carriers paid rent to the building owner, and the connections were made in a central location in the building that came to be called the 'meet-me room.' Facilities where participants had multiple network options to access the building became known as 'carrier-neutral.' The facilities usually are not owned by carriers, but sometimes can be if the carrier offers interconnection without requiring that participants use its network. For example, NAP of the Americas in Miami is a carrier-neutral facility owned by Verizon.
Some carrier hotels grew up after market deregulation. In the US, One Wilshire's status as a carrier hotel began when then-regional telco PacBell refused to allow competing telecom service provider MCI (which at the time was focused on long-distance calling) to place its switches and circuits inside PacBell's central switching facility at 400 South Grand in Los Angeles. MCI chose a building nearby that had a sightline for its microwave transmission equipment. Over time, other telecom providers began bringing fiber into the building, eventually turning it into one of the most interconnected hubs for Internet and telecom services in the world.
Similar examples can be found in Europe. In Frankfurt, datacenter and IT services provider ITENOS started by building out a former bakery for a telecom client in 1995 and, over the next decade, added space for carriers in several nearby buildings, including Kleyerstrasse 90. Kleyer 90's list of carrier tenants meant it was considered a carrier hotel by the time Equinix acquired it in 2013.
Other carrier hotels, such as 60 Hudson Street in New York City, had a longer historical link to
network interconnection. The building was originally the headquarters of the Western Union
Company, the provider of telegraph communication services founded in 1851. The building
served as a point of connection for the firm’s telegraph network; now the building houses more
than 100 companies from around the globe that interconnect at the building’s meet-me room.
2.1 CARRIER-NEUTRAL DATACENTER VS MEET-ME ROOM
In original carrier hotels, the meet-me room was where the physical interconnections were made.
Now, however, the term carrier-neutral datacenter may be used to describe an interconnection
location. Figure 1 notes some of the differences between the two, but there can also be some
overlap between the terms. For example, a Telx facility within a larger building can be considered
a carrier-neutral datacenter on its own and can also be the building’s meet-me room. Perhaps the
main difference is that today’s carrier-neutral datacenters often have more power and cooling
available than the older carrier hotels or carrier points of presence (POPs).
FIGURE 1: CARRIER-NEUTRAL DATACENTER COMPARED WITH MEET-ME ROOM
Source: 451 Research, 2015

Size
• Carrier-neutral datacenter: Any size, but usually >10,000 sq ft
• Meet-me room: Almost always smaller than a carrier-neutral datacenter; often 1,000-5,000 sq ft

Power and cooling
• Carrier-neutral datacenter: Typically built to densities that accommodate servers and edge routers rather than less power-hungry switches
• Meet-me room: Originally built for telecom equipment, they typically offer DC power and relatively low density, though many have been upgraded to handle servers and larger routers

Stand-alone building
• Carrier-neutral datacenter: Yes or no
• Meet-me room: No

Ownership
• Carrier-neutral datacenter: Owned by the datacenter operator, or in space leased by the operator
• Meet-me room: Owned by the owner of the building

Operator
• Carrier-neutral datacenter: Datacenter owner
• Meet-me room: Building management, or an operator that has a contract with the building owner

Purpose
• Carrier-neutral datacenter: Can be interconnection-focused, or focused on providing space and power with the ability to connect to multiple carriers
• Meet-me room: Interconnection

Policies on interconnection
• Carrier-neutral datacenter: Typically only allows interconnection with other tenants in the datacenter
• Meet-me room: Typically, any building tenant can interconnect, whether leasing space in the MMR or not

Size of deployment
• Carrier-neutral datacenter: Typically a minimum deployment is required – e.g., 5-10 racks – with smaller amounts provided by tenants
• Meet-me room: Full racks, half racks, quarter racks

Examples
• Carrier-neutral datacenter: Equinix, KDDI/Telehouse, Interxion facilities
• Meet-me room: Telx in Digital Realty facilities, 151 Front Street meet-me room operated by Allied Fiber in Toronto, CoreSite in Denver
2.2 INTERCONNECTING THE INTERNET
In the early days of computer networking, there existed many incompatible and disjointed
networks (e.g., enterprise networks and government-run networks that used different propri-
etary networking technologies). Not only were the networks incompatible, they were created
with different purposes and were not expected to interoperate. The US Department of
Defense, for instance, had ARPANET, which connected different research sites, while CSNET
was created for the academic and commercial community of computer scientists. Eventu-
ally, users on one network wanted access to data or wanted to exchange email with users on
other networks. In the early 1980s a commercial 'multi-protocol' router was created, as were
a number of exchanges where networks could interconnect and transfer traffic between
different networks. These facilities were initially run by government agencies and nonprofits,
and they became known as network access points, or NAPs (e.g., MAE-East in 1992). The
management of these was eventually moved to commercial entities – mainly telecom
providers such as Sprint and some of the Regional Bell Operating Companies (RBOCs). After
the original sites became too crowded, particularly as data and content moved beyond the
telcos to firms such as AOL and Yahoo, other exchanges were created. This drove the growth
of commercial Internet exchanges (IXs) that we see today, in the multi-tenant datacenter
(MTDC) landscape.
Currently, there are different methods and business arrangements for transferring data
between networks at interconnection points.
2.2.1 PRIVATE INTERCONNECTION
Private interconnection (or 'peering') is when networks are interconnected directly between edge routers on each network. This is typically done using a pair of fiber-optic cables (one for transmitting, one for receiving), called 'cross-connects,' and may involve running these cables from one party's equipment directly to the other's, or both parties running cables to the central meet-me room.
Examples of private interconnection include transit and private peering.
Internet Transit
Internet transit or IP transit refers to when an ISP sells global access to the Internet. In
practice, this usually means a network, or autonomous system (AS), is paying for the ISP
to announce Internet routes to it and to let the rest of the Internet know that the AS or
network and its customers are on the Internet (see Figure 2).
FIGURE 2: INTERNET TRANSIT
Source: 451 Research, 2015
Transit traffic is Ethernet and is typically exchanged at 10Gbps or, increasingly, 100Gbps.
The most common way to bill for transit is the 95/5 model. Every five minutes, the
amount of traffic passing over the link is sampled. Every month, the readings are sorted
from lowest to highest and the 95th percentile (of traffic either in or out, whichever is
highest) is used to calculate what the customer pays, so the top 5% of spikes in traffic
are not included. Thus, overall, the more transit used, the higher the costs. Transit costs
vary widely but have been declining steadily for years. Current estimates range from $3/
Mbps (and up) to as low as $0.50/Mbps. Although the prices have been steadily declining,
contracts tend to be for a year or so, and traffic per customer generally is rising, so the
cost curve looks like the following:
[Figure 2 diagram: networks A and B, each with their own customers, exchange traffic through transit provider C, paying C for transit. Inline chart: transit cost rises with the number of Mbps used.]
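The 95/5 billing rule described above can be sketched in a few lines of code. This is our own illustration, not from the report; the function name and the flat per-Mbps rate are assumptions.

```python
def transit_bill_95_5(samples_mbps, rate_per_mbps):
    """Monthly transit charge under the 95/5 model.

    samples_mbps: the month's 5-minute readings (max of in/out, in Mbps)
    rate_per_mbps: contracted price per Mbps
    """
    ordered = sorted(samples_mbps)
    # Discard the top 5% of samples; bill on the 95th-percentile reading.
    idx = max(0, int(len(ordered) * 0.95) - 1)
    return ordered[idx] * rate_per_mbps

# Simplified month of samples ramping from 1 to 100 Mbps:
# the 95th-percentile sample is 95 Mbps, so at $2/Mbps the bill is $190.
bill = transit_bill_95_5(list(range(1, 101)), rate_per_mbps=2.0)
```

Note that the top 5% of spikes never reach the invoice, which is why bursty networks can see bills well below their peak traffic.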
Internet Private Peering
Peering is when two parties provide access to each other’s network endpoints by inter-
connecting and exchanging routing information (see Figure 3). Peering is not used for
traffic going to end users on networks other than the peers’. It is referred to as private
peering, because the two parties connect directly. Peering can help optimize traffic flow
and latency. It is typically settlement-free, meaning that no money changes hands,
since the two parties exchange roughly the same amounts of traffic. If there is an imbal-
ance in traffic (e.g., one party receives more traffic than it sends), one party will pay the
other for access to its customers; this is called paid peering.
FIGURE 3: PRIVATE PEERING
Source: 451 Research, 2015
It is not always cost-effective to peer. Setup costs for peering are typically higher than for
transit, so peering is cost-effective once there is a high enough volume of traffic. There
may be some setup costs for transit; for example, if the transit connection is made in
a colocation facility there will be costs for renting space in the facility and possibly for
network connectivity between the facility and the customer’s office. For peering, there
will be the same costs to be at a meeting point (often a colocation facility), plus typically
the cost of a router (rather than just a switch), the setup fee for a cross-connect to the
peer(s) and in some cases a monthly fee for the cross-connect(s) as well. However, once
they have a cross-connect, peers can exchange as much traffic as the size of the cross-
connect (well, up to 70-80% of the cross-connect size, to be safe). There are higher fixed
costs, but once enough traffic is passed over the cross-connect, the cost is lower. Typical
costs are anywhere from $100 to $350/month per fiber cross-connect. So if sending or
receiving 500Mbps per month (95th percentile) at a transit cost of $2/Mbps, the transit
cost would be $1,000 per month, while the same traffic over a cross-connect would cost
$350 plus the setup costs, for a cost curve that looks more like this:
[Figure 3 diagram: networks A and B, each with their own customers, exchange traffic directly over a peering link. Inline chart: the cross-connect cost stays flat as traffic grows, while transit cost rises with the number of Mbps.]
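The worked example above can be expressed as a small break-even sketch. The figures are the report's illustrative ones ($2/Mbps transit, $350/month cross-connect); the function names are ours.

```python
def monthly_transit_cost(p95_mbps, rate_per_mbps):
    # Transit scales with the 95th-percentile traffic level.
    return p95_mbps * rate_per_mbps

def monthly_peering_cost(cross_connect_fee_monthly):
    # A cross-connect is a flat monthly fee regardless of traffic
    # (usable up to roughly 70-80% of its capacity).
    return cross_connect_fee_monthly

def peering_breakeven_mbps(rate_per_mbps, cross_connect_fee_monthly):
    # Traffic level above which peering is cheaper than transit.
    return cross_connect_fee_monthly / rate_per_mbps

# 500 Mbps at $2/Mbps transit costs $1,000/month vs. $350 for the
# cross-connect; the flat fee wins above 175 Mbps.
transit = monthly_transit_cost(500, 2.0)
peering = monthly_peering_cost(350.0)
breakeven = peering_breakeven_mbps(2.0, 350.0)
```

This ignores the one-time setup costs (router, cross-connect installation, colocation space), which push the practical break-even point somewhat higher.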
However, even if it does not necessarily save money to peer versus using transit, some firms prefer to peer in order to have traffic go directly to the peer's end users, avoiding the hops that a transit provider might send the traffic through. In other words, networks may prefer peering to gain more control over traffic routes (see Figure 4).
No single ISP is physically connected to every other network on the planet; most have a
customer base in a particular region. So an ISP that sells transit also has to connect with
network providers via peering arrangements, IXs or by buying transit as well. Through this
series of business relationships and network connections, each network can reach the
entirety of other websites on the Internet, and vice versa.
FIGURE 4: INTERNET TRANSIT PLUS PEERING
Source: 451 Research, 2015
2.2.2 PUBLIC INTERCONNECTION OR PUBLIC PEERING
Public peering refers to the practice of multiple parties connecting to each other via an IX
that operates a shared switching fabric, typically an Ethernet switch, which enables one-
to-many connections. The location and switch used to connect multiple firms is called an
Internet exchange point (IXP). The Ethernet switches can provide 100Mb connections (or
ports), up through 100Gb ports in some cases (see Figure 5).
Public peering is more scalable and often less expensive than setting up a large number of
individual private peering arrangements/connections. Once connected to the main platform,
there is relatively little cost to add interconnection partners that are also on the platform.
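The scalability advantage of the one-to-many model is easy to quantify: a full mesh of private peering sessions needs n(n-1)/2 cross-connects, while a shared fabric needs one port per participant. A quick sketch (our own illustration, not from the report):

```python
def private_mesh_cross_connects(n_networks):
    # Every pair of networks needs its own physical cross-connect.
    return n_networks * (n_networks - 1) // 2

def ix_ports_needed(n_networks):
    # On a shared switching fabric, each network needs just one port
    # to reach every other participant.
    return n_networks

# 20 networks: 190 private cross-connects vs. 20 exchange ports.
mesh = private_mesh_cross_connects(20)
ports = ix_ports_needed(20)
```

The gap widens quadratically as the exchange grows, which is why large IXs can host hundreds of members at modest per-member cost.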
[Figure 4 diagram: networks A and B, each with their own customers, reach the Internet through transit provider C (paying transit fees) while also peering directly with network D.]
FIGURE 5: PUBLIC PEERING PLATFORM
Source: 451 Research, 2015
In North America, in general, there is one major public peering exchange per market, typi-
cally available via one or two datacenters. The owner(s) of those datacenters typically run
the exchange. The reverse is true in Europe, with most public peering fabrics operated on
behalf of their members either as nonprofits or as cooperatives and available in multiple
datacenters in the market. Their members are the firms connected to the exchange. This
model has slightly different economics: Since the exchanges are in multiple sites, there
are costs for equipment in each site and network connectivity between them (e.g., the
cost to lease dark fiber and the cost for equipment to light the fiber at each end).
In North America, private peering is more common; public peering has generally been
used for lower bandwidth requirements and/or as a backup for private peering traffic.
Public peering is more popular in Europe than in North America for historic reasons, since
it arrived in Europe later, when the technology was better developed (see Figure 6).
[Figure 5 diagram: routers at the POPs of ISPs A, B, C and D connect into a shared Ethernet switch at an Internet exchange point (public peering across the shared switch), while two of the ISPs also link their routers directly over a cross-connect (private peering).]
FIGURE 6: PUBLIC PEERING IN THE US VS. EUROPE
Source: 451 Research, 2015

IXP business model
• US: For-profit
• Europe: Cooperative or nonprofit

IXP operator
• US: The colocation provider
• Europe: A committee selected by members, or an association

IXP location
• US: The IXP is located in the facility(-ies) of its colocation provider.
• Europe: The IXP has equipment in multiple datacenters belonging to a variety of operators.

Interconnection price model
• US: Installation fee for connection to the IXP based on number and bandwidth of ports provided, plus monthly recurring fee.
• Europe: Installation fee for connection to the IXP based on number and bandwidth of ports provided, plus monthly recurring fee. There is also an annual membership fee not related to the quantity of ports or traffic.

Cross-connect price model
• US: Installation fee plus, often, a monthly recurring fee paid to the colocation provider per cross-connect.
• Europe: Installation fee – and typically no monthly recurring fee – paid to the colocation provider per cross-connect.
SECTION 3
Interconnection as a Business
Originally, connections were made by physically patching (connecting) two customers
together via a fiber-optic or copper cable. Every carrier in a facility was connected indi-
vidually to others. Over time this generated enormous quantities of cables that were hard
to keep track of and became quite complex to manage. (Physical network connections –
when done wrong – are believed to be a major source of network errors.)
In the early carrier-neutral sites, the building owner sometimes made the physical
connections, i.e., ran the meet-me room. Sometimes the carriers ran the meet-me room
themselves, e.g., as a cooperative. As complexity grew, firms sprang up that specialized
in operating interconnection spaces. They worked out arrangements with the building
owners and earned their keep by charging for their services. When the original carrier
hotels filled up, these operators sometimes built and ran expansion space nearby. This
launched the business of interconnection and also led to the automation of the process,
when the interconnect operators began to provide switching services (as well as the
physical cabling services).
3.1 COMPONENTS
There are several components to the business of interconnection:
• The building where the connections are made
• In some cases, bandwidth services to or within the building where the connections
are made
• Physical cross-connects
• Often, a public peering platform
• Access to other customers of the facility, such as cloud providers, either directly or
through a cloud exchange platform
• Additional services provided to customers
3.1.1 THE BUILDING
In the early days of carrier-to-carrier connections, the facility where connections were
made mainly housed telecom equipment – which generally requires relatively little
power but uses direct current (DC). Thus, when these facilities were set up in office build-
ings, they did not normally require extra power and cooling – they just required DC plant.
Through the years, as more firms sought to connect, Internet traffic grew, and customers
signed on that required AC plant and more power and cooling, the facilities had to be
upgraded. The owners had an incentive to do that because as the number of customers
and connections grew, the facilities became more valuable.
3.1.2 BANDWIDTH
Customers of the facility typically need to pay for bandwidth to their offices or other sites
and for transit to access Internet customers that are not on the networks of firms the
customer peers with. In general, the customer sets up a direct relationship for bandwidth
and/or transit with carriers in the facility. Sometimes, however, the owner of the datacenter
also provides bandwidth services and can charge separately for those. In addition, some
facilities are connected to others to provide access to customers in those other facilities,
and bandwidth is required between the datacenters. This can be provided, on a separate
contract, by a dark fiber or network service provider (NSP). Or, sometimes, the datacenter owner/operator provides the connectivity to other datacenters – either as a separate charge or rolled into one of the other fees.
3.1.3 CROSS-CONNECTS
A customer pays to be in a datacenter but also needs to connect to other firms in the datacenter. In the early days, carriers ran cables themselves, but as the number of cables grew, this became unwieldy, and a third party took over managing the physical cables for a fee. That fee remains in place today and typically takes the form of an installation charge that pays for a technician to physically run and connect the cables (it also covers the cost of the cable and equipment). Some providers also charge a monthly recurring fee for the cross-connect.
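The fee structure described above can be sketched in a couple of lines; the dollar figures below are hypothetical placeholders, not published list prices.

```python
# Illustrative cross-connect cost model: a one-time installation fee
# plus an optional monthly recurring charge (MRC). All dollar figures
# are hypothetical, for illustration only.

def cross_connect_cost(install_fee, monthly_fee, months):
    """Total cost of one cross-connect over a contract term."""
    return install_fee + monthly_fee * months

# A provider that charges only for installation:
install_only = cross_connect_cost(install_fee=500, monthly_fee=0, months=36)

# A provider that also bills a monthly recurring fee:
install_plus_mrc = cross_connect_cost(install_fee=500, monthly_fee=300, months=36)

print(install_only)      # 500
print(install_plus_mrc)  # 11300
```

Over a multi-year term, the recurring fee dwarfs the installation charge, which is why the MRC model is commercially attractive to providers.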
3.1.4 PUBLIC PEERING PLATFORM
As mentioned above, an IXP allows a customer to connect to one platform and, through
that platform, to other members of the exchange without having to run separate cables
each time. There is a fee for this service – typically an installation fee plus a monthly maintenance fee. The fee is generally based on the size of the port (e.g., 1Gbps), though some providers (e.g., IIX) charge based on the amount of bits actually transferred.
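The two pricing models can be compared with a quick sketch; the fee levels here are illustrative assumptions, not real exchange prices.

```python
# Flat per-port IXP pricing versus metered (per-bit) pricing, as
# described above. All prices are hypothetical.

def flat_port_price(port_mrc):
    """Flat monthly charge for a port, regardless of utilization."""
    return port_mrc

def metered_price(avg_mbps, price_per_mbps):
    """Monthly charge based on average traffic actually exchanged."""
    return avg_mbps * price_per_mbps

# A lightly used 1Gbps port under each model:
flat = flat_port_price(port_mrc=1000)                    # $1,000/month regardless
metered = metered_price(avg_mbps=150, price_per_mbps=2)  # $300/month at 150Mbps

# Utilization at which the two models cost the same:
break_even_mbps = 1000 / 2

print(flat, metered, break_even_mbps)  # 1000 300 500.0
```

Metered billing favors members with low utilization; a flat port fee rewards members who can fill the port.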
3.1.5 ACCESS TO OTHER CUSTOMERS IN THE FACILITY, PARTICULARLY
CLOUD PROVIDERS
Some interconnect providers offer ways to connect to other customers in the facility. These may include a portal that allows customers to see and contact each other, or a cloud exchange – in theory, a platform that lets customers connect to multiple cloud providers easily by incorporating each provider's APIs and specific access requirements into one platform. These are at various stages of development, depending on the provider, but can certainly be an additional source of revenue.
3.1.6 ADDITIONAL SERVICES
Customers may require consulting, network management, remote hands and other services
that are billed separately.
CLOUD EXCHANGE EXAMPLE: EQUINIX
Cloud exchanges are still relatively new. Equinix launched its Cloud Exchange in spring of 2014.
The idea is to take the IX concept and expand it beyond NSPs to connect to other infrastructure
service providers. Ideally, this would allow a customer to connect to multiple IaaS providers
available on the exchange – such as Amazon Web Services (AWS), Microsoft Azure, Google Cloud
Platform and SoftLayer (IBM) – through one interface or portal. This has been relatively complex
to set up; cloud providers have different requirements for accessing their clouds, so a portal has
to provide the correct information to each provider.
The Equinix Cloud Exchange does this using a co-developed version of Cisco’s InterCloud
orchestration tool coupled with SDN technology developed in Equinix labs, as well as components
from Ciena and Juniper for layers 1-3 and the software platform Apigee. The Cloud Exchange
provides a range of services, including automatic provisioning and policy setting. A customer
can connect to Cloud Exchange participants via a port on an Equinix switch. Instead of taking
out a dedicated fiber connection to each cloud provider, the customer can open many smaller
virtual circuits to various cloud suppliers. Equinix prices the service as a utility to encourage end users to connect with multiple cloud providers. Equinix, in turn, makes money from both customer and supplier for colocation and the cross-connect to the platform, as well as a nominal fee for joining the platform.
Cloud Exchange VLANs target enterprise users consuming smaller amounts of traffic
for smaller time frames (200Mbps, 500Mbps, 1Gbps and other speeds up to 10Gbps are
available). Those customers with higher bandwidth consumption rates over a long-term
contract, including those looking at Amazon’s Direct Connect service, will buy 1Gbps or
10Gbps ports.
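The sizing guidance above can be restated as a tiny helper routine; the logic and the 1Gbps threshold are illustrative assumptions, not Equinix rules.

```python
# Hypothetical selection helper reflecting the segmentation described
# above: virtual circuits (VLANs) for smaller, shorter-term needs;
# dedicated ports for sustained high-bandwidth use such as AWS Direct
# Connect. The threshold is an assumption for illustration.

def connection_type(required_mbps, long_term_contract):
    """Pick a Cloud Exchange connection style for a workload."""
    if long_term_contract and required_mbps >= 1000:
        return "dedicated 1Gbps/10Gbps port"
    return "virtual circuit (VLAN)"

print(connection_type(500, long_term_contract=False))   # virtual circuit (VLAN)
print(connection_type(5000, long_term_contract=True))   # dedicated 1Gbps/10Gbps port
```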
3.2 SUPPLY AND DEMAND
3.2.1 SUPPLY
Carrier hotels today, particularly those with the most customers, generally remain more valuable than other datacenters. Typically, several carriers have laid fiber to the building
while others have installed equipment inside, so there is a sunk cost to being in the
facility and it can be expensive to move to another one. Customers usually need to be in
the facility because they need to connect to the maximum number of carriers, ISPs, cloud
providers and others. It is difficult to start a competing exchange nearby because each
exchange has a tipping point before which there are not enough customers in the facility
to make it worth paying for equipment and colocation fees to have a presence there.
Another option is to build a datacenter nearby and connect it to the original carrier hotel.
However, there is then a cost for the connectivity between the two buildings (currently
around $1,000 per month for 10 Gigabit metro Ethernet connectivity, although this varies
widely). The owners/managers of the original carrier hotel meet-me room have an
advantage, as they can pay for dark fiber to connect the two buildings and arrange for
interconnection of customers from the second facility. However, if a competitor sets up
a building nearby, the competitor needs to work with the original carrier hotel owner to
determine how to connect customers of the second building. Typically, the customers of
the second building would need to pay for network transport to the carrier hotel. Note
that the original carrier hotel owner can simply charge a lower price for space in the carrier
hotel than the cost of that network transport and try to win the customer away. Or the owner
of the second building can pay for network connectivity to the first building, but then someone
has to pay for space at the first building to house the equipment for interconnection.
As a result, with captive customers for which moving may be expensive, and with some barriers
to entry that keep competitors from easily recreating the ecosystem of customers at a facility,
the carrier hotel owner/operator can have strong pricing power. The fees for colocation in the
building (just for the space and power for equipment) are typically at least 20% higher than
those for nearby facilities that are less network-dense. Sometimes, for a very desirable location – e.g., where the matching engine for financial trades sits – the fees can be much higher.
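The trade-off between staying in the carrier hotel and taking cheaper space nearby can be sketched numerically. The 20% premium and the roughly $1,000/month 10GigE transport figure come from the text; the base colocation rates are hypothetical inputs.

```python
# Back-of-the-envelope comparison: network-dense carrier hotel at a
# colocation premium vs. cheaper space nearby plus metro transport
# back to the hotel. Base rates are hypothetical.

def carrier_hotel_monthly(base_colo, premium=0.20):
    """Monthly cost of space in the network-dense facility."""
    return base_colo * (1 + premium)

def nearby_monthly(base_colo, transport=1000.0):
    """Monthly cost of nearby space plus 10GigE metro transport."""
    return base_colo + transport

for base in (2000, 5000, 10000):
    print(base, carrier_hotel_monthly(base), nearby_monthly(base))
```

Under these assumptions the break-even base rate is $5,000/month: below it, the 20% premium costs less than the transport, so the carrier hotel wins; above it, the nearby building is cheaper – provided one metro link covers the customer's connectivity needs.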
If there is not enough capacity for growth at a particular network-dense facility, we have
seen ecosystem participants move to a different location. This has happened, for example,
with financial exchanges where the trading engine moved from downtown (in Manhattan or
Chicago) out to the suburbs and brought its trading ecosystem participants with it.
3.2.2 DEMAND
Beyond acting as hubs where network providers connect to each other, interconnection facilities have given rise to a variety of models for interconnection between enterprises, NSPs and cloud service providers.
The business model of the multi-tenant datacenter (MTDC) operator is one component of that value – whether it can attract a large number of bandwidth providers (ISPs, carriers and the like), and customers that need connectivity to the public Internet as well as to other customers in a particular datacenter facility. A more recent factor in the equation is the presence of cloud compute service providers such as AWS, Microsoft Azure, Google Compute Engine and IBM SoftLayer, as well as SaaS companies such as salesforce.com. In addition, enterprises are looking to place applications closer to an ever more mobile customer and employee base. Locating in facilities with more peering points to mobile carriers can greatly improve application performance. This extends to enterprise partners providing services such as marketing and integration, like HubSpot, Eloqua and Marketo. Providing connectivity to these services for customers already colocated in a datacenter is an area of significant commercial activity.
Another area of growing interest is secure, private connections between cloud providers and
customers outside the facility.
The value of an interconnection ‘ecosystem’ is growing and is already very large for companies in particular sectors, e.g., where groups of companies need to share large data sets (oil and gas, movie production, pharmaceuticals and genomics) or need to trade information (financial services trading ecosystems). As more firms start to compute on and share large data sets, demand for these communal meeting points (datacenters) will continue to grow.
3.3 CUSTOMERS
With the rise of the Internet, firms besides carriers have sought to connect with carriers
and with each other, so the list of customers/participants at interconnection facilities has
grown (see Figure 7).
FIGURE 7: CUSTOMERS OF INTERCONNECTION FACILITIES
Source: 451 Research, 2015
Network service provider or carrier
Provides network access and very high-volume bandwidth access to the Internet ‘backbone.’ NSPs sell bandwidth to ISPs, which in turn connect and sell access to consumers and enterprises; some carriers also sell directly to enterprises.
• To make the maximum number of connections for buying and selling Internet transit, peering and VoIP interconnection. The cost savings from being in an interconnection facility usually make up for the equipment and rental costs to be there.
• Carrier equipment in these facilities typically requires less power than servers. The footprint is relatively fixed.

ISP
Provides businesses and consumers access to the Internet. May offer other services, e.g., email, website hosting.
• To gain access to all other destinations on the Internet that they are not connected to, ISPs buy transit from network providers or peer with them or with other ISPs. They want to be in an interconnection facility with the maximum number of small networks and ISPs present in order to peer with them, as well as with top-tier network providers present in order to buy transit from them.

Content provider
Usually a large-scale provider that stores video, Web pages or other files that consumers want to access.
• Content providers need to interconnect with networks via Internet peering or transit to serve their content to end users. They tend to have large server and storage deployments. Often, interconnect facilities do not have enough contiguous space available for the full content deployment, so much of it ends up in a building close by, connected to the interconnect facility via dark fiber or wavelength services.
• Facility quality and reliability are of great importance to most content providers, as they lack carriers’ geographic redundancy and, in some cases, will serve a specific product out of a single datacenter.

Content delivery network
A set of distributed servers and software used to deliver content.
• Like content providers, CDNs will place servers at interconnection facilities in order to gain access to end users, through transit and ideally peering agreements. The direct billing model of CDNs makes them highly price-sensitive.
• Many CDNs are less concerned about the ability to expand within a single facility, and prefer to spread their footprint out to cover many facilities, thus improving CDN performance.
Web hoster
A service provider that offers space on servers for websites, and enables those sites to be available to the Internet.
• Margins for the hosting business tend to be lower than those of content providers, so hosters are often more concerned about price and less about facility quality. Web hosters are also less concerned with network density, as their content tends to be less attractive for Internet peering. They are mainly looking for lower-cost providers for Internet transit services. Similarly, network services such as dark fiber and wavelengths are of less interest to Web hosters, which typically have less traffic than many content providers.

Cloud and/or hosting provider
A cloud provider is a service provider offering IaaS, SaaS or PaaS in an on-demand, multi-tenant environment. Examples include Amazon Web Services, Microsoft Azure, IBM SoftLayer and salesforce.com.
• SaaS and IaaS providers tend to have larger footprints than network providers, often with relatively high-density architecture. Their growth is relatively unpredictable as well, so they often seek facilities with the capability to provide relatively large amounts of power in small footprints and space available for expansion.

Systems integrator
A company whose business is building complete compute systems from disparate hardware and software components.
• The major systems integrators (SIs) are interested in the use of interconnection facilities as a cost-saving measure. Customers of SIs – enterprises of varying sizes – will use many different network providers for connecting with their SI vendors. Placing SI infrastructure in interconnection facilities enables easy interconnection with carrier Internet, ATM and MPLS networks.

Enterprise
‘Enterprise’ can refer to any business entity. For the purposes of this report, an enterprise is typically a company with 500 employees or more; it may have extensive WANs and its own datacenters, but buy network and compute resources from third parties in order to conduct business over (and on) the Internet.
• Large enterprises are becoming more interested in interconnection facilities to increase their network options and access cloud and other service providers. Enterprises also sometimes use interconnection facilities as disaster-recovery hubs.
• In general, enterprises prefer higher-quality facilities with highly redundant cooling and power and a high level of security. They tend to be less cost-conscious than some other customers, as their footprints are smaller and facility quality is so important to them.
We summarize the drivers behind selection of interconnect facilities in Figure 8.
FIGURE 8: DRIVERS OF FACILITY SELECTION
Source: 451 Research, 2015
Criteria and level of importance: Carriers | Content Providers | Web Hosting Providers | Content Delivery Networks | Large Enterprises, System Integrators
Network density: High | High | Medium | High | Medium
Telecom services available: Medium | Medium | Low | Medium | High
Cost: Low | Medium | High | High | Low
Power and cooling: Low | High | High | High | Medium
Expansion capacity: Medium | High | High | Medium | Low
Managed services: Low | Low | Low | Low | High
Facility quality, reliability: Medium | High | Medium | High | High
Examples: Verizon, Level 3, Zayo | Google, Netflix | Rackspace | Akamai, Limelight | EDS, IBM, Morgan Stanley
SECTION 4
Interconnection Providers
Originally, providers were known for particular locations where they had facilities, and in each
market there were only one or two interconnection options. This has changed somewhat, as
larger providers have acquired the single-site carrier hotels in various cities and/or have built
competing facilities in the top markets. In smaller markets there are often no public IXs, but
there are still locations where carriers, ISPs, content providers, etc., meet to exchange traffic.
These exchange points are typically owned by a carrier or ISP.
The Market Map in Figure 9 shows key interconnect providers and some of the characteristics that differentiate them. Geographic reach and focus is one characteristic: Some firms are in multiple countries; some are in a single region, typically a top market; some are in markets at the ‘edge’ of the Internet, in cities outside the traditional top 10 datacenter/interconnection locations. When it comes to service offerings, there are providers that offer interconnection but also provide their own network services. There are firms that offer interconnection but also larger suites, in a combination of interconnection and a more wholesale-like offer. There are firms that offer connections through a public peering platform. Finally, there are firms that offer direct connectivity to cloud providers, through a cloud exchange platform or through direct connections to well-known public cloud providers such as AWS and Microsoft Azure.
FIGURE 9: 451 RESEARCH INTERCONNECT MARKET MAPTM
Source: 451 Research, 2015
[Market Map graphic: providers plotted across eight segments – Focus on Single Market; Focus on Markets Outside Top 10; Geographic Reach (Multiple Countries); Interconnection Plus Larger Suites; Network Services & Cross-Connects; Hosts or Operates Public Peering Platform; Cloud Exchange Platform; Direct Connections to Public Cloud Providers. Per-provider segment assignments are listed in Figure 10.]
Identification and placement of companies into these segments is based on analysis,
both published and unpublished, performed by 451 Research. This analysis includes
interviews, reports and advisory work with several thousand enterprises, vendors, service
providers and investors annually. 451 Research Market Maps™ are not intended to repre-
sent a comprehensive list of every vendor operating in this market. Inclusion on 451
Research Market Maps™ does not imply that a given vendor will be specifically featured in
one or more 451 Research reports.
FIGURE 10: INTERCONNECTION PROVIDER SEGMENTS
Source: 451 Research, 2015
PROVIDER | FOCUS ON SINGLE MARKET | FOCUS ON MARKETS OUTSIDE TOP 10 | GEOGRAPHIC REACH (MULTIPLE COUNTRIES) | INTERCONNECTION PLUS LARGER SUITES | NETWORK SERVICES AND CROSS-CONNECTS | HOSTS OR OPERATES PUBLIC PEERING PLATFORM | CLOUD EXCHANGE PLATFORM | DIRECT CONNECTIONS TO PUBLIC CLOUD PROVIDERS
365 Data Centers ✓ ✓
AMS-IX ✓ ✓
CenturyLink ✓ ✓ ✓ ✓
CityNAP ✓ ✓
Colo Atl ✓ ✓
Cologix ✓ ✓ ✓
Colt ✓ ✓ ✓
CoreSite ✓ ✓ ✓ ✓
CyrusOne ✓ ✓ ✓
DE-CIX ✓ ✓
Digital Realty ✓ ✓
DuPont Fabros ✓ ✓
EdgeConneX ✓
Equinix ✓ ✓ ✓ ✓ ✓
Evoswitch ✓ ✓ ✓
Expedient Data Centers ✓
Global Net Access (GNAX) ✓
Global Switch ✓ ✓ ✓ ✓
IIX ✓ ✓
Interxion ✓ ✓ ✓
Involta ✓
KDDI/Telehouse ✓ ✓ ✓
KIO Networks ✓ ✓ ✓
KOMO Plaza ✓
Level 3 Communications ✓ ✓ ✓ ✓
LINX ✓ ✓
Markley Group ✓ ✓ ✓
Miami-Connect ✓
Morgan Reed Group ✓
Netrality Properties ✓
NextDC ✓ ✓ ✓
NTT Communications ✓ ✓ ✓
PCCW ✓ ✓ ✓
Phoenix NAP ✓ ✓
PTT Metro ✓ ✓
QTS Realty Trust ✓
Sabey Data Centers ✓ ✓ ✓
Sierra Data Centers ✓
SunGard AS ✓
Switch SUPERNAP ✓ ✓ ✓ ✓ ✓
Tata Communications ✓ ✓ ✓ ✓
Telstra ✓ ✓ ✓ ✓
Telx ✓ ✓ ✓
TierPoint ✓
Verizon Terremark ✓ ✓ ✓ ✓
vXchnge ✓
Westin Building Exchange ✓ ✓
Zayo/zColo ✓ ✓ ✓ ✓
FIGURE 11: SUMMARY CHART: MARKET CHALLENGES AND INNOVATIONS
Source: 451 Research, 2015
MARKET SEGMENT KEY CHALLENGES INNOVATIONS
Focus on Single Market
• Expanding the facility in a key
market – accessing capital,
working within geographic
constraints.
• Offering specific high-density
rooms; tethering nearby buildings
to the main site; trying various
pricing models.
Focus on Markets Outside
Top 10
• Gaining enough customers at
each site.
• Encouraging customers to
deploy in multiple markets.
• Offering similar look and feel in
each facility.
• Allowing customers to manage
facilities in multiple markets on a
single contract and with a single
portal.
Geographic Reach
• Encouraging customers to
deploy in multiple markets.
• Determining where to add
facilities.
• Offering similar look and feel in
each facility.
• Forming partnerships to offer
facilities in other countries without
having to build there.
Interconnection Plus
Larger Suites
• Finding space and power near
interconnect facilities to provide
larger suites.
• Targeting customers that need
both large blocks of space and
interconnection options.
• Providing dark fiber, wavelength
or other network services to
encourage customers to deploy in
a building separate from the main
interconnect facility.
Network Services and
Cross-Connects
• Convincing customers that the
facility offers interconnection
without requiring the use of the
provider’s network.
• At the same time, encouraging
customers to use the provider’s
network.
• Creative bandwidth options and
pricing by the network provider,
particularly for customer access to
cloud services.
• Stressing the benefits of having ‘one throat to choke,’ or pricing in such a way that using one provider is a benefit.
Hosts or Operates Public
Peering Platform
• Security; attracting customers;
enabling those on the platform
to know who else is connected.
• Providing portals that show
who is available to peer with on
the platform and enable those
connections to happen rapidly
(with only a couple of clicks).
Cloud Exchange Platform
• Attracting customers,
particularly enterprises, but also
attracting cloud providers to the
platform.
• Making the connections simple
to the end user despite the lack
of standards for accessing cloud
providers.
• Providing flexible bandwidth
options to encourage uptake by
end users.
• Developing orchestration platforms
that connect to all the cloud
providers using APIs to simplify
access to each.
Direct Connections to
Public Cloud Providers
• Convincing public cloud
providers to offer direct
connections from a particular
facility.
• Helping customers access the
direct connections for multiple
providers easily.
• Creating a ‘dashboard’ using APIs that allows customers to connect to various public cloud providers directly through a single pane of glass.
SECTION 5
Evolution of Interconnection: Trends and Disruptors
We see a variety of factors impacting interconnection, including: the continued growth
of Internet traffic; the increasing number of firms that want to interconnect with others;
the migration of content and Web applications closer to the network edge; the need to
interconnect with cloud providers; mobility and the Internet of Things (IoT); the need for
a variety of firms to work on the same data sets; and developments in networking and
datacenter technology that could further accelerate decentralization. The number of
interconnections will continue to increase dramatically.
5.1 CONTINUED GROWTH OF INTERNET TRAFFIC AND THE
NEED FOR INTERCONNECTION
Global IP-based Internet traffic will continue to grow roughly threefold over the next three years, according to Cisco's Visual Networking Index report for 2015. The continued growth in traffic is driven by several factors:
• The popularity of streaming media, music/radio services, TV/video on demand and
Internet video sites such as YouTube, and the massive bandwidth requirements of
video compared with those of static Web pages.
• The popularity of the Web for distributing major media events such as the World Cup
and the Olympics.
• Growing traffic from mobile devices, which is estimated to increase tenfold by 2019.
The typical effective speed for Internet traffic exchange is 7Gbps, or 70% of an OC-192 or 10 Gigabit Ethernet link. While that may seem an extraordinary amount of traffic, it is small compared with the total volume seen on the Internet today. To deal with that volume and keep up with its significant growth, networks are peering over a greater number of interconnections at any given location, and they are interconnecting in more locations. While the spread of 100 Gigabit Ethernet technology has the potential to check this growth, the new technology is likely only to keep up with, rather than lead, demand.
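The 70% effective-utilization figure above implies a simple sizing rule for peering capacity. The peak traffic level in this sketch is a hypothetical input, not a figure from the report.

```python
import math

# Derate each peering link to its effective speed (70% of line rate,
# per the rule of thumb above), then count how many links a given peak
# traffic level requires.

def links_needed(peak_gbps, link_gbps=10.0, effective_utilization=0.70):
    """Number of links once each is derated to its effective speed."""
    effective = link_gbps * effective_utilization  # 7Gbps per 10GbE link
    return math.ceil(peak_gbps / effective)

print(links_needed(peak_gbps=50))                   # 8 x 10GbE links
print(links_needed(peak_gbps=50, link_gbps=100.0))  # 1 x 100GbE link
```

This is why 100GbE absorbs interconnect growth: one derated 100GbE link replaces several 10GbE links at the same exchange point.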
5.2 INCREASE IN THE NUMBER OF FIRMS INTERCONNECTING
The number of websites and Internet content sources has grown considerably over the
last five years as successive waves of social networking, picture-sharing and video-sharing
websites have come online. Although there has been some consolidation in the number
of such players, the increasing number of entrants in the fast-growing Internet space has
boosted interconnection requirements. More networks and providers are discovering the
cost-saving benefits of carrier hotels and carrier-neutral datacenters.
Due to the high traffic volumes that are now the norm, networks are driven to place
content in or near interconnection facilities to cut down on pricey local loops and gain
access to inexpensive Internet transit and peering. In years past, such a strategy would
be foreign to the majority of network engineers; the typical strategy would have been to
build their own datacenter and order costly local loops from RBOCs and other metro fiber
providers. However, the growing availability of interconnection facilities, combined with
the enormous costs of multi-gigabit local loops, has forced a sea change in this behavior.
Even those networks that are too large to place their servers in direct proximity to interconnection facilities, such as Google and Yahoo, maintain large network nodes there to enable low-cost interconnection with other networks. That capacity is then delivered to Google and Yahoo datacenters via the lower-cost metro fiber available in an interconnection facility.
The increasing number of connections is a simple matter of economics. Local loops are
simply too expensive per bit to support modern Web and Internet media properties.
Another key factor is Internet transit and peering pricing. Compare the transit/access pricing a network can get at its own datacenter with what is available in an interconnection facility: the difference is enormous, thanks to the strong marketplace that evolves in most interconnection facilities. Finally, Internet peering is
generally available only to networks at interconnection facilities, and is an increasingly
popular way of cutting costs as bandwidth requirements continue to increase.
5.3 GROWING REQUIREMENT FOR INTERNET CONNECTIVITY
AT THE EDGE
One solution to the ongoing increases in traffic on core networks is storing content (also referred to as ‘caching’) on servers closer to end users, at the edge of the Internet. The need for data storage and interconnection at the edge is expected to explode over the next few years due to the growth in video and other application content delivered to mobile devices.
In addition, the growth of devices that produce streams of data – wearables, automobiles, machines, houses; in other words, the IoT – is expected to affect the flow of data traffic, shifting it from mostly downstream today (video to end devices) to upstream (end-user devices sending data back to central repositories) and potentially vastly increasing total traffic as well. The challenge here isn't so much the amount of data from a device as the frequency with which it communicates with a central server, how many sessions the server can handle simultaneously, and the latency between the mobile device and the server.
In addition, the enterprise customer segment is evolving. In the past, most enterprises
opted to keep their datacenter requirements in-house. However, several recent trends,
including globalization, ongoing proliferation of Internet-facing applications, ongoing
growth of bandwidth-intensive rich media content, the rise of virtualization and cloud
computing, and changing business continuity and disaster-recovery needs in light of data sovereignty, have led more and more enterprise CIOs to consider outsourcing some or all of their datacenter requirements. Meanwhile, one of the biggest challenges for datacenter and operations managers is maintaining enough datacenter space and power. With the typical in-house datacenter ranging from 2,000 to 40,000 square feet, and with very limited optical fiber availability, many CIOs struggle to virtualize and squeeze their applications into their current datacenters while also trying to justify the capital needed to connect their existing facilities and/or build new ones. Colocation is an option in many places, with that market growing roughly 10-20% a year, depending on the location. 451 Research forecasts global colocation market annualized revenue to reach $36bn by 2017.
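As a back-of-the-envelope check on those growth figures, a compound-growth projection at the midpoint rate can be sketched; the implied 2015 base revenue is our own derivation for illustration, not a 451 Research data point.

```python
# Simple compound-growth projection. At 15% annual growth (midpoint of
# the 10-20% range above), a market of roughly $27bn in 2015 reaches
# about $36bn two years later. The $27.2bn base is back-solved and
# illustrative only.

def project(revenue_bn, annual_growth, years):
    """Project revenue forward at a constant annual growth rate."""
    return revenue_bn * (1 + annual_growth) ** years

print(round(project(27.2, 0.15, 2), 1))  # 36.0
```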
5.4 CLOUD’S IMPACT ON INTERCONNECTION
The growth of cloud computing itself continues to drive the need for interconnect services, and the need for performance and security will push enterprises toward using interconnection services even more in the future.
From the origins of interconnection, the path of evolution has given rise to a variety of models for
interconnection between enterprises, NSPs and cloud service providers; datacenters are a valuable
place for these parties to meet, as described in Section 3.2.2 above.
There are strong underlying reasons that enterprises need to evaluate interconnection services as
part of an overall networking strategy. The first is that hybrid cloud computing will eventually be a
reality. Companies want to respond to customers’ needs more quickly; doing so requires a digital
infrastructure that can quickly ramp up to meet demand. While CEOs are recognizing that cloud
computing is one way to help businesses adapt, there are business requirements for performance
and security that may get in the way of that goal. Private cloud is one answer to that problem, but
meanwhile, a gap is opening up between demand for cloud and the ability of the enterprise datacenter to meet that demand cost-effectively.
The delineation between public cloud and hosted private cloud workloads highlights what one
would intuit: that despite survey after survey stating that security is the top reason not to move
to cloud, enterprises will move applications to the cloud given the right assurances. Our research
indicates that a hosted private cloud option is able to meet the security and performance require-
ments of enterprises.
It’s not just hosted private cloud that will be under consideration for enterprises. A complex hybrid
cloud strategy will eventually emerge where enterprises will use a mix of on-premises private
cloud, hosted private cloud and public cloud resources to achieve their business goals. There
are already signs of this occurring: According to our Voice of the Enterprise: Cloud Computing
Customer Insight Survey Results and Analysis (Q4 2014), over the next two years, executives
expect that 15% of workloads will run in a hosted private cloud environment, 28% of workloads
will run in a mix of hybrid and public cloud venues, and 58% of workloads will remain
on-premises.
© 2015 451 RESEARCH, LLC AND/OR ITS AFFILIATES. ALL RIGHTS RESERVED.
25 451 RESEARCH
This shift is underway because enterprises must move toward building a complete digital
infrastructure strategy – meaning a strategy that includes orchestrating the use of compute
capacity, data storage and applications with a policy-based approach. In the longer term, enter-
prises will create services and products by dynamically matching and placing workloads at the
best execution venue for a job based on cost, performance, legal and other requirements.
Interconnection services within the datacenter environment will play a large part in this vision
becoming a reality.
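The "best execution venue" idea above reduces to a policy filter plus a cost ranking. The sketch below is purely illustrative (not a 451 model): the venue names, prices, latencies and the `in_jurisdiction` flag are made-up stand-ins for the cost, performance and legal requirements the text describes.

```python
# Illustrative sketch: placing a workload at the cheapest venue that
# satisfies latency and jurisdiction (legal/sovereignty) policy.
# All venue data below is hypothetical.

from dataclasses import dataclass

@dataclass
class Venue:
    name: str
    cost_per_hour: float
    latency_ms: float
    in_jurisdiction: bool  # stands in for legal/data-sovereignty requirements

def best_venue(venues, max_latency_ms, require_jurisdiction):
    """Filter venues by policy, then pick the cheapest survivor (or None)."""
    candidates = [v for v in venues
                  if v.latency_ms <= max_latency_ms
                  and (v.in_jurisdiction or not require_jurisdiction)]
    return min(candidates, key=lambda v: v.cost_per_hour) if candidates else None

venues = [
    Venue("on-prem private cloud", 1.20, 2.0, True),
    Venue("hosted private cloud", 0.80, 8.0, True),
    Venue("public cloud", 0.40, 25.0, False),
]

# A latency- and jurisdiction-constrained workload lands on hosted private cloud:
print(best_venue(venues, max_latency_ms=10.0, require_jurisdiction=True).name)
```

Relaxing the constraints (say, `max_latency_ms=30.0`, `require_jurisdiction=False`) shifts the same workload to the cheaper public cloud, which is the dynamic-placement behavior the paragraph envisions.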
A secure, high-speed link between cloud provider and enterprise is critical to a successful cloud
strategy. To facilitate these connections, cloud providers have been busy building up partner
programs with NSPs and MTDC providers. How exactly do MTDC service providers fit in the
mix, especially those that bill themselves as carrier-neutral? They can play a key role, both by
offering a breadth of NSPs at a facility and by providing interconnection services to the major
cloud providers, either via a cross-connect or a cloud exchange platform.
Enterprises have long been accustomed to using private connections to hosting environments,
but the same hasn’t always been the case with public cloud offerings. Cloud providers have
been responding to customer demand for better connectivity options by letting customers use
a dedicated physical connection to a nearby point of presence. The providers
have also been setting up programs that help pair NSPs with enterprise customers.
There are a number of advantages to using direct connections to a cloud provider:
• Security – A dedicated, direct link to the cloud provider offers an inherently more secure
transport path as compared to traversal over the public Internet. Some providers also support
multiple IPsec VPN connections over a single dedicated link, letting multiple branch/remote
office locations use cloud resources.
• Cost – The use of a private connection can sometimes save money because the traffic
doesn’t have to be routed over the ISP’s connection to the Internet – it’s sent directly to the
cloud provider. Cloud providers such as Amazon will also charge a lower outbound data
transfer rate as compared to transfer over public Internet links.
• Performance – Latency and bandwidth are more consistent with deterministic routing.
Depending on the point of interconnection, performance may be suitable for latency-
sensitive workloads that could not be run over a public Internet link.
• Service agility – A variety of hybrid service models can be implemented, including a mix of
public and hosted private cloud services, over the same secure, dedicated link. This allows for
more flexibility in placing different workloads on resources that have an appropriate price/
performance profile.
Amazon, Microsoft (ExpressRoute), SoftLayer and Google (Cloud Interconnect) are among
the cloud providers offering interconnect options. Amazon’s Direct Connect product has been
around the longest; it is a dedicated physical connection from a customer’s network into one of
Amazon’s Direct Connect locations. For an hourly fee, Amazon will provide its customers with a
1Gbps or 10Gbps port into its S3 and EC2 (as well as VPC) environments within any of its Direct
Connect locations. Depending on the amount of data to be transferred, a direct connection can
be less expensive as well – as an example, uploading data to AWS is free but downloading using
Internet bandwidth on US-East is $0.09 per GB, while downloading using Direct Connect is $0.02
to $0.03 per GB plus the relatively small port charge of $0.30/hour.
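The break-even point behind that example is worth making explicit. A minimal sketch, using the 2015 rates quoted above ($0.09/GB public Internet egress on US-East, $0.03/GB at the upper Direct Connect rate, $0.30/hour port) and an assumed 730-hour month; current AWS pricing will differ.

```python
# Hedged sketch of the cost comparison in the text: public-Internet egress
# vs. Direct Connect egress plus a flat port-hour charge. Rates are the
# 2015 figures quoted in the report, not current AWS pricing.

HOURS_PER_MONTH = 730  # assumption: average month

def internet_cost(gb_out: float, rate_per_gb: float = 0.09) -> float:
    """Monthly cost of gb_out of egress over the public Internet."""
    return gb_out * rate_per_gb

def direct_connect_cost(gb_out: float, rate_per_gb: float = 0.03,
                        port_per_hour: float = 0.30) -> float:
    """Monthly cost of the same egress over Direct Connect, port fee included."""
    return gb_out * rate_per_gb + port_per_hour * HOURS_PER_MONTH

# Break-even: 0.09x = 0.03x + 219  =>  x = 3,650 GB (~3.65TB) of monthly egress.
for gb in (1_000, 3_650, 10_000):
    print(f"{gb:>6} GB: Internet ${internet_cost(gb):.2f} "
          f"vs Direct Connect ${direct_connect_cost(gb):.2f}")
```

Below roughly 3.65TB of monthly egress the $219 port fee dominates and the public Internet path is cheaper; above it, Direct Connect wins, and the gap widens at the $0.02/GB rate.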
5.5 NET NEUTRALITY
Network neutrality is the idea that all traffic running over a network should be treated equally
and that content providers or customers cannot have their traffic prioritized, e.g., by paying a
higher rate. Network operators have argued that they should be able to charge more to prior-
itize some content and that otherwise, essentially, they will not earn enough to expand their
networks and services. Critics argue that this would allow the largest content providers (or at
least those with the largest budgets) to push their content, putting smaller or newer (or poten-
tially more innovative) providers at a disadvantage. Currently, regulatory bodies in various coun-
tries are determining what level of Net neutrality they want and what legislation/enforcement is
required to achieve it. It is possible that if network neutrality is not regulated and enforced, the
number of content providers could shrink. This would reduce the number of potential customers
for interconnect providers. However, it is also possible that additional regulation could hurt
Internet performance and reduce the adoption of new Internet services – also possibly reducing
the number of new service providers and potential customers for interconnect sites.
In the meantime, neutrality is the general rule, and this has affected some peering relationships.
Many ‘eyeball networks’ (i.e., big broadband providers such as Verizon in the US) argue that they
are carrying too much traffic for particular partners (e.g., content providers) via settlement-free
peering relationships. This first became a problem with the growth of file sharing, but the imbal-
ance of traffic flows from content providers (YouTube, Dailymotion, Netflix) has led more of the
networks to charge for peering. As these arrangements start to look less network-neutral,
regulatory agencies are taking notice; in the US, the FCC has stipulated that
AT&T provide detailed reports on its interconnection agreements as part of its $49bn acquisition
of satellite-TV service provider DirecTV, for example. In addition, partly in response to this poten-
tial requirement that content providers pay to prioritize their traffic, some of the larger content
providers are setting up their own networks, which we discuss next.
5.6 ‘PRIVATIZATION’ OF THE INTERNET
A rapidly growing amount of network traffic is ‘private,’ i.e., coming from mega-scale cloud
providers. For years these large providers have been buying up dark fiber capacity and using
it to bypass the Internet to get better end-to-end performance and/or to prioritize traffic. By
some estimates, 50% of traffic on undersea cables crossing the Pacific is private. The Internet
therefore seems to be fragmenting into a set of mega-scale controlled private networks, with
the traditional Internet available for everyone else. This may lead to strong incentives for using
a particular cloud provider’s services, particularly if partners/customers are with the same
cloud provider, in order to get the best connectivity and/or the best rates. Net neutrality
regulation could significantly accelerate this network privatization. In addition, the ability to move
mission-critical workloads close to where the customer or clients are – boosting overall
performance and availability of services without incurring higher costs – may provide a
strong competitive advantage, adding to the appeal of investing in private networks. The
growth of these wholly private networks outside of the Internet will have an impact on
where cloud providers will need to interconnect to reach end customers, possibly boosting
their need for interconnection locations and services. However, it may also impact the
number of content provider competitors, since smaller, newer firms will not be able to afford
their own networks. As discussed above, this could then reduce the number of potential
customers for interconnection providers.
5.7 COMPETITIVE CHANGES
As we noted above, in the US, public peering exchanges have been run by datacenter opera-
tors as private businesses, which means they have been located in the facilities of the owner-
operator. In Europe, by contrast, public peering exchanges have generally been cooperatives
or nonprofits separate from the datacenter facilities where they are located. They tend to be
housed in multiple facilities in a market, belonging to multiple providers. In the US, there are
efforts underway to create interconnect systems similar to those in Europe. These include the
launch of the Open-IX Association (OIX) and the arrival of European exchanges in the US.
5.7.1 OPEN-IX
OIX is a nonprofit industry group that arose as part of an effort to counter the current US
interconnect approach in which one or two datacenter owners in each market typically have
a monopoly/duopoly on public peering there. OIX is not a provider of IX services; rather, it
is an association formed by a number of datacenter providers, CDNs, network operators,
content providers and others. To build a more resilient peering architecture in North America
and boost competition for interconnection services, the idea is to promote a model similar
to that found in Europe, in which public peering exchanges are spread across multiple data-
centers in a market. OIX has developed a set of interconnection standards to encourage the
growth and spread of these public exchanges.
Certification by OIX signifies that a company has adopted the OIX standards and can be
identified as an OIX datacenter. The OIX Data Center Standards (OIX-2) define a broad range
of requirements, including for security, concurrent maintainability, connectivity, and oper-
ational and maintenance procedures. The OIX-1 standards define requirements for public
peering exchanges. Detailed requirements are available on the Open-IX website. The entities
that have been certified so far are listed in Appendix D.
It is difficult to tell what the impact of OIX has been so far. The effort has brought publicity
to peering in the US and possible alternatives to the current system. The major European
exchanges have launched in the US and new public peering exchanges have been launched
in several markets as well. There is some pressure on providers to lower the monthly cost of
cross-connects or not charge monthly cross-connect fees at all, as is more typically the case in
Europe. It is unclear to what extent OIX certification has been the catalyst for all this or whether
it is due to overall interest in having more peering options.
5.7.2 EUROPEAN EXCHANGES IN THE US
Several European exchanges have launched operations in the US over the past two years.
The Amsterdam Internet Exchange (AMS-IX) launched in New York/New Jersey in November
2013. It is available at 111 8th Avenue (x2), 375 Pearl St and 325 Hudson in Manhattan, and 101
Possumtown Rd in New Jersey. Unlike the other two European exchanges, it is expanding into
multiple US markets: it will launch in the Bay Area in September 2015 and in Chicago in October 2015.
The German Internet Exchange (DE-CIX) entered the US in November 2013 as well and has
installed nine switches in eight buildings in New York/New Jersey: 60 Hudson (x2), 111 8th
Avenue, 32 Avenue of the Americas, 325 Hudson St, 85 10th Ave and 375 Pearl St in Manhattan;
165 Halsey St in Newark; and 2 Emerson Lane in Secaucus, New Jersey. The exchange claims that its traffic
has doubled since early 2015 and was at 36.08Gbps in April.
In 2013, the London Internet Exchange (LINX) launched in three sites in Virginia: EvoSwitch
(Manassas), CoreSite (Reston) and DuPont Fabros (Ashburn).
5.7.3 ADDITIONAL COMPETITION
For many years interconnection has been a very local business, with a few providers offering
national and international footprints. Competition is increasing, however.
Wholesale providers with deep pockets, such as CyrusOne, Digital Realty and DuPont Fabros,
are increasingly interested in interconnection to differentiate their facilities and provide
a service that is becoming ever more important to their customers. In the past, wholesale
providers ensured that at least two network providers were available for service at a building
and then let their customers negotiate with those providers or, if they were customers of
another network provider elsewhere, encourage that provider to connect to the datacenter as
well. A large choice of network providers was not typically available. Now, however, customers
often prefer to have a choice of several network providers at a facility and also like to access
SaaS and IaaS providers. They increasingly seek facilities that offer those choices or that at least
connect to facilities that offer those choices. Digital Realty recently made a big strategic move,
purchasing interconnect-oriented provider Telx in response to its customers’ demands for inter-
connection options and a connectivity story.
There is also a growing number of datacenter operators building/buying/expanding facilities
to provide interconnection and peering options closer to the edge of the Internet. They build
or buy interconnection space close to end users, in cities outside the top datacenter markets.
Examples include Cologix, EdgeConneX and 365 Data Centers. These firms are expanding
quickly, in some cases through acquisition. Some competitors are fiber providers such as
Allied Fiber and Zayo. Allied Fiber, for example, is building dark fiber networks and providing
small datacenters along the route, currently available in the Southeast. This has been partic-
ularly useful for mobile operators. ZenFi is doing something similar in Manhattan. Zayo is
adding datacenter space along its dark fiber routes across the country.
Consolidation has been a key way for interconnect players to expand, since network-dense
assets are relatively hard to replicate. Cologix is an example of a firm that has grown through
acquisition, as has 365 Data Centers. Equinix is in the process of buying Telecity to grow its
network-dense footprint in Europe. As mentioned before, Digital Realty has acquired Telx.
These are desirable assets and do not come up for sale very often – we believe consolidation
will continue, but in many edge markets, firms will need to build and develop interconnection
assets rather than acquire them.
5.8 TECHNOLOGY TRENDS
Some technology trends could potentially impact interconnection. While the hosting industry
has been transformed by cloud computing, change has been slower for network services. Just
as virtualization of servers was key to igniting the cloud computing revolution, virtualization
at the network layer is allowing enterprise networking to move from a focus on appliances
and communications links to cloud-delivered services. We see some possible interconnec-
tion impacts from network providers using SDN and NFV to provide more innovative network
services. Beyond a rather basic vision of bandwidth on demand, some network providers,
for example, are looking to provide some of the benefits of interconnection for enterprises
(particularly interconnection to cloud providers) through programmable (i.e., API-driven)
network services rather than through interconnection facilities. The idea is to encourage
enterprises to use one network provider for network, cloud and datacenter requirements
rather than multiple providers by pitching ease of use and better visibility into performance
of the whole IT stack. Using one provider for most network and datacenter needs would
make it less helpful for enterprises to lease space in network-dense facilities, assuming that
AT&T or Verizon, for example, could be price-competitive. Such a trend, over the longer term,
could potentially result in fewer cloud and SaaS providers overall, which could also reduce the
number of customers for network-dense facilities.
SECTION 6
The 451 Take
In the MTDC industry, network-dense carrier hotels are the hardest facilities to replicate
and interconnect-oriented providers therefore often have relatively few competitors in
any one location. This is changing, particularly in the US, as investors back new builds
with interconnect-focused business plans and providers previously relatively less inter-
ested in interconnection, such as some of the wholesale firms, work to develop their own
interconnect ecosystems.
With the rise of services that depend on network speed and reliability, we believe the
demand for interconnection facilities will continue to grow, both globally and in US markets
outside the top 10, as content pushes further to the edge of the
Internet. There may be some shifts in business models, particularly as the European inter-
connection model expands in the US, but overall we believe interconnect providers will
continue to grow and obtain a premium for their datacenter space.
APPENDIX A
Glossary
Cloud exchange or cloud connect: A cloud exchange platform is essentially a variation
on the virtual cross-connect service. Where an IX platform facilitates the movement of
data across the public Internet, a cloud exchange facilitates a private, secure connection
between a party and a cloud service provider, bypassing the public Internet.
Like an IX, a single port enables access to multiple providers that are colocated in a carrier-
neutral datacenter.
Carrier hotels: A carrier hotel is also a colocation facility, but the name typically connotes
a facility that has a very high concentration of networks, carriers and service providers. The
term also reflects the fact that many of the famous carrier hotels are not single-purpose
datacenters, but mixed-use buildings such as One Wilshire in Los Angeles and 60 Hudson
Street in New York City. They are often located in the heart of a city’s business district,
have office space rented to third parties, and weren’t built specifically to house computer
networks and servers.
Datacenter interconnection: The networking of two or more datacenters for a
common business purpose. The datacenters have a physical connection between at least
two facilities, and are connected at a designated space within a building.
Direct connections to cloud providers: A type of interconnection that connects a cloud
service provider to a customer via a ‘direct’ connection, with connectivity provided by a
carrier partner that links a customer with a fiber or other high-speed connection to the
cloud provider’s node at a datacenter facility. Examples include Amazon’s Direct Connect
and Microsoft’s ExpressRoute. There are different deployment scenarios. For example, in one,
the network interfaces with the cloud provider’s compute and storage resources at a third-
party datacenter. In another, the network interfaces with the cloud provider at the connec-
tion node in a meet-me room, but the node/switch is itself linked to the cloud provider’s
own datacenter – which in some cases may be off-site relative to the network node.
IX providers: An IX provider is an entity that manages the infrastructure used by organi-
zations such as carriers, ISPs, hosting companies and CDN service providers to exchange
Internet traffic. Peering agreements form the basis for the exchange of traffic. Some IXs are
operated as nonprofit, member-based associations. Characteristics of this type of provider
include operating a peering fabric, and pricing services in line with the costs to provide the
service to its members. The nonprofit IXs don’t run or sell colocation services; instead, the
peering fabric is installed in a facility managed by a third-party colocation provider – some-
times in the facilities of multiple providers in a given region.
In the US, a more common model is for the IX to be run as a for-profit service that is
managed by the colocation provider, which is of course also managing the facility and
selling space along with the opportunity to participate in the IX peering fabric. The
members of the IX in this case are customers of the colocation provider.
As suggested by the above definition, the commercial IX model is the dominant model in
the North American market, while the nonprofit, member-based IXs are more commonly
found in Europe.
Physical cross-connect: A cross-connect is a means of physically patching (connecting)
two customers together via a fiber-optic or copper cable at a patch panel. This initially was
used to connect telecom networks together but now can connect ISPs, content providers,
cloud providers or enterprise networks together.
Virtual cross-connect: A virtual cross-connect is a service that allows a customer to
connect to a single port to gain access to multiple other parties via a common switch. While
a standard physical cross-connect has no electronics involved, being a physical connec-
tion of cables, a virtual cross-connect has a switch in the path; the switch is what enables
customers to access a wider range of partners than would be physically possible (given
space and power constraints) if they were to connect on a 1:1 basis with each partner.
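The space-and-power constraint behind the virtual cross-connect is simple combinatorics: fully meshing N parties with 1:1 physical cross-connects takes N(N-1)/2 cables, while a switched fabric needs only one port per party. An illustrative sketch:

```python
# Illustrative sketch of why a switched fabric scales where 1:1 physical
# cross-connects do not: cable count grows quadratically with full mesh,
# linearly with a shared switch.

def physical_links(n: int) -> int:
    """Cables needed to fully mesh n parties with 1:1 physical cross-connects."""
    return n * (n - 1) // 2

def switched_ports(n: int) -> int:
    """Ports needed when every party connects once to a shared switch."""
    return n

for n in (10, 100, 500):
    print(f"{n:>4} parties: {physical_links(n):>7} physical links "
          f"vs {switched_ports(n):>4} switch ports")
```

At 100 parties the full mesh already needs 4,950 cables against 100 ports, which is why the switch, not the patch panel, is what makes broad many-to-many connectivity physically feasible.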
APPENDIX B
Key Carrier Hotels in North American Markets
MARKET KEY CARRIER HOTEL ADDRESSES
Atlanta 55 Marietta, 34 Peachtree
Boston One Summer, 230 Congress
Charlotte 3100 Intl Airport Drive, 1960 Cross Beam Drive
Chicago 350 Cermak
Dallas Infomart, 2323 Bryan
Denver 910 15th St, 1500 Champa
Houston 1301 Fannin
Kansas City 1102 Grand
Las Vegas Switch SuperNAP
Los Angeles One Wilshire
Madison 222 W Washington Ave
Manhattan 60 Hudson St., 111 8th Ave, 32 Ave of the Americas
Miami 50 NE 9th St
Minneapolis 511 11th Avenue
Montreal 1250 Boulevard René-Lévesque
New Jersey Equinix Secaucus, CenturyLink Weehawken
Northern Virginia 21715 Filigree Court (Equinix Ashburn)
Philadelphia 401 North Broad St.
Phoenix 3402 E. University Dr.
Pittsburgh Allegheny Center Mall, 322 Fourth Avenue
San Antonio 415 N. Main Ave
San Francisco 200 Paul St, 365 Main St
Seattle Westin Building
Silicon Valley 9-11 Great Oaks, 55 South Market
Toronto 151 Front St
Vancouver Harbour Centre - West Hastings St
APPENDIX C
Locations for Direct Connections to Cloud Providers
AWS DIRECT CONNECT LOCATIONS
LOCATION AWS REGION
CoreSite 32 Avenue of the Americas, NY US East (Virginia)
CoreSite One Wilshire & 900 North Alameda, CA US West (Northern California)
Equinix DC1 - DC6 & DC10 US East (Virginia)
Equinix FR5 EU (Frankfurt)
Equinix SV1 & SV5 US West (Northern California)
Equinix SE2 & SE3 US West (Oregon)
Equinix SG2 Asia Pacific (Singapore)
Equinix SY3 Asia Pacific (Sydney)
Equinix TY2 Asia Pacific (Tokyo)
Eircom Clonshaugh EU (Ireland)
Global Switch SY6 Asia Pacific (Sydney)
Sinnet Jiuxianqiao IDC China (Beijing)
Switch SUPERNAP 8 US West (Oregon)
TelecityGroup, London Docklands EU (Ireland)
Terremark NAP do Brasil South America (Sao Paulo)
MICROSOFT AZURE EXPRESSROUTE LOCATIONS
PROVIDER LOCATIONS
Aryaka Networks Silicon Valley, Singapore, Washington DC
AT&T Amsterdam (coming soon), London (coming soon), Dallas, Silicon Valley, Washington DC
British Telecom Amsterdam, London, Silicon Valley (coming soon), Washington DC
China Global Telecom Hong Kong (coming soon)
Colt Amsterdam, London
Comcast Silicon Valley, Washington DC
Equinix Amsterdam, Atlanta, Chicago, Dallas, Hong Kong, London, Los Angeles, Melbourne, New York, Sao Paulo, Seattle, Silicon Valley, Singapore, Sydney, Tokyo, Washington DC
InterCloud Systems Amsterdam, London, Singapore, Washington DC
Internet Initiative Japan Tokyo
Internet Solutions – CloudConnect Amsterdam, London
Interxion Amsterdam
Level 3 Communications Chicago, Dallas, London, Seattle, Silicon Valley, Washington DC
NEXTDC Melbourne, Sydney (coming soon)
NTT Communications Tokyo (coming soon)
Orange Amsterdam, London, Silicon Valley, Washington DC
PCCW Global Hong Kong
SingTel Singapore
Tata Communications Amsterdam, Chennai (coming soon), Hong Kong, London, Mumbai (coming soon), Singapore
TelecityGroup Amsterdam, London
Telstra Melbourne (coming soon), Sydney
Verizon London, Hong Kong, Silicon Valley, Washington DC
Zayo Group Washington DC
APPENDIX D
Open-IX Certified Providers
OIX-1 CERTIFIED ENTITIES LOCATION
LINX NoVA Ashburn
AMS-IX Bay Area San Francisco
DE-CIX NY New York
AMS-IX Amsterdam Amsterdam (Netherlands)
Florida Internet Exchange Miami
OIX-2 CERTIFIED ENTITIES LOCATION
CyrusOne Austin, Cincinnati (2), Dallas, Houston, Phoenix
Continuum Chicago
DataBank (pending) Richardson
DataGryd New York
Digital Realty Dallas, NY (111 8th Ave), Los Angeles, San Francisco
DuPont Fabros Ashburn, Piscataway
EdgeConneX Houston
EvoSwitch Ashburn
Expiris Middletown
Jaguar Network Marseille (France)
PhoenixNAP (pending) Phoenix
QTS Atlanta, Richmond, Suwanee (Atlanta)
Sentinel Durham, Somerset
Vantage Santa Clara
Zayo Atlanta, Miami
INDEX OF COMPANIES
365 Data Centers 19, 29
Allied Fiber 4, 29
Amazon Web Services 13, 14, 16, 18, 25,
26, 31, 34
AOL 5
Apigee 13
AT&T 26, 29, 34
AWS 13, 14, 18, 26, 34
Ciena 13
Cisco 13, 22
Cologix 19, 29
CoreSite 4, 19, 28, 34
CyrusOne 19, 28, 35
Dailymotion 26
Digital Realty 4, 19, 28, 29, 35
DirecTV 26
DuPont Fabros 19, 28, 35
EdgeConneX 19, 29, 35
Eloqua 14
Equinix 3, 4, 13, 19, 29, 33, 34
EvoSwitch 28, 35
Google 13, 14, 17, 23, 25
HubSpot 14
IBM 13, 14, 16, 17
ITENOS 3
Juniper 13
Marketo 14
MCI 3
Microsoft 13, 14, 16, 18, 25, 31, 34
Netflix 17, 26
PacBell 3
Sprint 5
Telecity 29
Telx 4, 20, 28, 29
Verizon 3, 17, 20, 26, 29, 34
Yahoo 5, 23
YouTube 22, 26
Zayo 17, 20, 29, 34, 35
ZenFi 29
Optimizing Oracle Cloud Infrastructure through Interconnection
The IDC and Equinix Webinar - 2018 - The Year of the Intelligence Ready Digit...

Recently uploaded (20)

PPTX
Benefits of Physical activity for teenagers.pptx
DOCX
search engine optimization ppt fir known well about this
PDF
ENT215_Completing-a-large-scale-migration-and-modernization-with-AWS.pdf
PDF
CloudStack 4.21: First Look Webinar slides
PPTX
Web Crawler for Trend Tracking Gen Z Insights.pptx
PDF
DASA ADMISSION 2024_FirstRound_FirstRank_LastRank.pdf
PDF
Developing a website for English-speaking practice to English as a foreign la...
PDF
WOOl fibre morphology and structure.pdf for textiles
PDF
Architecture types and enterprise applications.pdf
PDF
DP Operators-handbook-extract for the Mautical Institute
PDF
A contest of sentiment analysis: k-nearest neighbor versus neural network
PDF
Getting Started with Data Integration: FME Form 101
PDF
Hybrid model detection and classification of lung cancer
PPTX
Final SEM Unit 1 for mit wpu at pune .pptx
PDF
Getting started with AI Agents and Multi-Agent Systems
PDF
Transform Your ITIL® 4 & ITSM Strategy with AI in 2025.pdf
PPTX
Group 1 Presentation -Planning and Decision Making .pptx
PDF
Assigned Numbers - 2025 - Bluetooth® Document
PPT
Geologic Time for studying geology for geologist
PDF
NewMind AI Weekly Chronicles – August ’25 Week III
Benefits of Physical activity for teenagers.pptx
search engine optimization ppt fir known well about this
ENT215_Completing-a-large-scale-migration-and-modernization-with-AWS.pdf
CloudStack 4.21: First Look Webinar slides
Web Crawler for Trend Tracking Gen Z Insights.pptx
DASA ADMISSION 2024_FirstRound_FirstRank_LastRank.pdf
Developing a website for English-speaking practice to English as a foreign la...
WOOl fibre morphology and structure.pdf for textiles
Architecture types and enterprise applications.pdf
DP Operators-handbook-extract for the Mautical Institute
A contest of sentiment analysis: k-nearest neighbor versus neural network
Getting Started with Data Integration: FME Form 101
Hybrid model detection and classification of lung cancer
Final SEM Unit 1 for mit wpu at pune .pptx
Getting started with AI Agents and Multi-Agent Systems
Transform Your ITIL® 4 & ITSM Strategy with AI in 2025.pdf
Group 1 Presentation -Planning and Decision Making .pptx
Assigned Numbers - 2025 - Bluetooth® Document
Geologic Time for studying geology for geologist
NewMind AI Weekly Chronicles – August ’25 Week III

Interconnection 101

© 2015 451 RESEARCH, LLC AND/OR ITS AFFILIATES. ALL RIGHTS RESERVED.

KEY FINDINGS

• Network-dense, interconnection-oriented facilities are not easy to replicate and are typically able to charge higher prices for colocation, as well as charging for cross-connects and, in some cases, access to public Internet exchange platforms and cloud platforms.

• Competition is increasing, however, and competitors are starting the long process of creating network-dense sites. At the same time, these sites are valuable and are being acquired, so the sector is consolidating. Having facilities in multiple markets does seem to provide some competitive advantage, particularly if the facilities are similar in look and feel and customers can monitor them all from a single portal and have them on the same contract.

• Mobility, the Internet of Things, services such as SaaS and IaaS (cloud), and content delivery all depend on network performance. In many cases, a key way to improve network performance is to push content, processing and peering closer to the edge of the Internet. This is likely to drive demand for facilities in smaller markets that offer interconnection options. We also see these trends continuing to drive demand for interconnection facilities in the larger markets.

As cloud usage takes off, data production grows exponentially, content pushes closer to the edge, and end users demand data and applications at all hours from all locations, the ability to connect with a wide variety of players becomes ever more important. This report introduces interconnection, its key players and business models, and trends that could affect interconnection going forward.

AUG 2015
ABOUT 451 RESEARCH

451 Research is a preeminent information technology research and advisory company. With a core focus on technology innovation and market disruption, we provide essential insight for leaders of the digital economy. More than 100 analysts and consultants deliver that insight via syndicated research, advisory services and live events to over 1,000 client organizations in North America, Europe and around the world. Founded in 2000 and headquartered in New York, 451 Research is a division of The 451 Group.

© 2015 451 Research, LLC and/or its Affiliates. All Rights Reserved. Reproduction and distribution of this publication, in whole or in part, in any form without prior written permission is forbidden. The terms of use regarding distribution, both internally and externally, shall be governed by the terms laid out in your Service Agreement with 451 Research and/or its Affiliates. The information contained herein has been obtained from sources believed to be reliable. 451 Research disclaims all warranties as to the accuracy, completeness or adequacy of such information. Although 451 Research may discuss legal issues related to the information technology business, 451 Research does not provide legal advice or services and its research should not be construed or used as such. 451 Research shall have no liability for errors, omissions or inadequacies in the information contained herein or for interpretations thereof. The reader assumes sole responsibility for the selection of these materials to achieve its intended results. The opinions expressed herein are subject to change without notice.
New York: 20 West 37th Street, 6th Floor, New York, NY 10018. Phone: 212.505.3030. Fax: 212.505.2630
San Francisco: 140 Geary Street, 9th Floor, San Francisco, CA 94108. Phone: 415.989.1555. Fax: 415.989.1558
London: Paxton House (5th Floor), 30 Artillery Lane, London, E1 7LS, UK. Phone: +44 (0) 207 426 0219. Fax: +44 (0) 207 426 4698
Boston: 1 Liberty Square, 5th Floor, Boston, MA 02109. Phone: 617.275.8818. Fax: 617.261.0688
TABLE OF CONTENTS

SECTION 1: EXECUTIVE SUMMARY 1
1.1 INTRODUCTION 1
1.2 KEY FINDINGS 1
1.3 METHODOLOGY 1

SECTION 2: WHAT IS INTERCONNECTION, AND WHERE DOES IT COME FROM? 3
2.1 CARRIER-NEUTRAL DATACENTER VS MEET-ME ROOM 4
Figure 1: Carrier-Neutral Datacenter Compared with Meet-Me Room 4
2.2 INTERCONNECTING THE INTERNET 5
2.2.1 Private Interconnection 5
Figure 2: Internet Transit 6
Figure 3: Private Peering 7
Figure 4: Internet Transit Plus Peering 9
2.2.2 Public Interconnection or Public Peering 9
Figure 5: Public Peering Platform 10
Figure 6: Public Peering in the US vs. Europe 10

SECTION 3: INTERCONNECTION AS A BUSINESS 11
3.1 COMPONENTS 11
3.1.1 The Building 11
3.1.2 Bandwidth 12
3.1.3 Cross-Connects 12
3.1.4 Public Peering Platform 12
3.1.5 Access to Other Customers in the Facility, Particularly Cloud Providers 12
3.1.6 Additional Services 12
3.2 SUPPLY AND DEMAND 13
3.2.1 Supply 13
3.2.2 Demand 14
3.3 CUSTOMERS 15
Figure 7: Customers of Interconnection Facilities 15
Figure 8: Drivers of Facility Selection 17

SECTION 4: INTERCONNECTION PROVIDERS 18
Figure 9: 451 Research Interconnect Market Map™ 18
Figure 10: Interconnection Provider Segments 19
Figure 11: Summary Chart: Market Challenges and Innovations 21

SECTION 5: EVOLUTION OF INTERCONNECTION: TRENDS AND DISRUPTORS 22
5.1 CONTINUED GROWTH OF INTERNET TRAFFIC AND THE NEED FOR INTERCONNECTION 22
5.2 INCREASE IN THE NUMBER OF FIRMS INTERCONNECTING 22
5.3 GROWING REQUIREMENT FOR INTERNET CONNECTIVITY AT THE EDGE 23
5.4 CLOUD'S IMPACT ON INTERCONNECTION 24
5.5 NET NEUTRALITY 26
5.6 'PRIVATIZATION' OF THE INTERNET 26
5.7 COMPETITIVE CHANGES 27
5.7.1 Open-IX 27
5.7.2 European Exchanges in the US 28
5.7.3 Additional Competition 28
5.8 TECHNOLOGY TRENDS 29

SECTION 6: THE 451 TAKE 30
APPENDIX A: GLOSSARY 31
APPENDIX B: KEY CARRIER HOTELS IN NORTH AMERICAN MARKETS 33
APPENDIX C: LOCATIONS FOR DIRECT CONNECTIONS TO CLOUD PROVIDERS 34
AWS Direct Connect Locations 34
Microsoft Azure ExpressRoute Locations 34
APPENDIX D: OPEN-IX CERTIFIED PROVIDERS 35
INDEX OF COMPANIES 36
SECTION 1
Executive Summary

1.1 INTRODUCTION

Interconnection has come a long way since telecommunications providers connected their networks in order to exchange voice traffic. Now, in addition to carriers, many other kinds of firms need to connect with each other to exchange data traffic, and interconnection itself has become a business. Facilities where the largest number of firms can meet have become extremely valuable. This report looks at the business of interconnection and discusses trends that are likely to impact it going forward.

1.2 KEY FINDINGS

• Network-dense, interconnection-oriented facilities are not easy to replicate and are typically able to charge higher prices for colocation, as well as charging for cross-connects and, in some cases, access to public Internet exchange platforms and cloud platforms.

• Competition is increasing, however, and competitors are starting the long process of creating network-dense sites. At the same time, these sites are valuable and are being acquired, so the sector is consolidating. Having facilities in multiple markets does seem to provide some competitive advantage, particularly if the facilities are similar in look and feel and customers can monitor them all from a single portal and have them on the same contract.

• Mobility, the Internet of Things, services such as SaaS and IaaS (cloud), and content delivery all depend on network performance. In many cases, a key way to improve network performance is to push content, processing and peering closer to the edge of the Internet. This is likely to drive demand for facilities in smaller markets that offer interconnection options. We also see these trends continuing to drive demand for interconnection facilities in the larger markets.
1.3 METHODOLOGY

This report on interconnection services is based on a series of in-depth interviews with a variety of stakeholders in the industry, including technology vendors, datacenter service providers and providers of connectivity services, as well as surveys and interviews of IT managers at end-user organizations across multiple sectors. This research was supplemented by additional primary research, including attendance at trade shows and industry events. Please note that the names of vendors and service providers are meant to serve as illustrative examples of trends and competitive strategies; company lists are representative, but are not intended to be exhaustive. The inclusion (or absence) of a company name in the report does not necessarily constitute endorsement.
Reports such as this one represent a holistic perspective on key emerging markets in the enterprise IT space. These markets evolve quickly, so 451 Research offers additional services that provide critical marketplace updates. These updated reports and perspectives are presented on a daily basis via the company's core intelligence service, 451 Research Market Insight. Forward-looking M&A analysis and perspectives on strategic acquisitions and the liquidity environment for technology companies are also updated regularly via 451 Market Insight, which is backed by the industry-leading 451 Research M&A KnowledgeBase.

Emerging technologies and markets are also covered in additional 451 Research channels, including Datacenter Technology; Enterprise Storage; Systems and Systems Management; Enterprise Networking; Enterprise Security; Data Platforms & Analytics; Dev, DevOps & Middleware; Business Apps (Social Business); Managed Services and Hosting; Cloud Services; MTDC; Enterprise Mobility; and Mobile Telecom. Beyond that, 451 Research has a robust set of quantitative insights covered in products such as ChangeWave, TheInfoPro, Market Monitor, the M&A KnowledgeBase and the Datacenter KnowledgeBase. All of these 451 Research services, which are accessible via the Web, provide critical and timely analysis specifically focused on the business of enterprise IT innovation.

This report was written by Jim Davis, Senior Analyst, Service Providers, and Kelly Morgan, Research Director, Datacenters. Any questions about the methodology should be addressed to Jim Davis or Kelly Morgan at: jim.davis@451research.com or kelly.morgan@451research.com. For more information about 451 Research, please go to: www.451research.com.
SECTION 2
What Is Interconnection, and Where Does It Come From?

The very essence of the Internet is interconnection; the word is a shortened version of 'internetworking,' because the Internet is a system of millions of networks that have been linked together by the use of standard protocols for communication. Beyond the technical standards, however, interconnection has become a business in its own right. In this report, we focus on interconnection services, key players and business models – particularly within and between datacenters.

Many interconnect locations got their start as 'carrier hotels.' National telecom providers have always needed to hand off international traffic to carriers in other countries. They connected with each other at key locations to make this handoff, often near the landing points of undersea cables. As national carriers have been deregulated and competition within the US has grown, competing carriers have had to connect their networks to exchange national as well as international traffic. As a result, the number of carrier hotels and the locations where they are needed have multiplied. Due to the concentration of carriers, these carrier hotels have also become key locations for Internet connectivity.

The original buildings where carriers connected their networks belonged to the carriers themselves, to the incumbents and/or the long-haul network providers. These tended to be central offices (COs), where the owner had telco equipment but leased out extra space to other carriers. Often, the owner provided the only means of network connectivity to the facility. However, there was not necessarily much incentive for the carrier-owner to maintain, expand or upgrade the CO to add capacity for potential competitors.
Local carriers sought locations that were more 'neutral.' These were often office buildings in the center of cities, to which several providers already had fiber connectivity. The carriers paid rent to the building owner, and the connections were made in a central location in the building that came to be called the 'meet-me room.' Facilities where participants had multiple network options to access the building became known as 'carrier-neutral.' The facilities usually are not owned by carriers, but sometimes can be if the carrier offers interconnection without requiring that participants use its network. For example, NAP of the Americas in Miami is a carrier-neutral facility owned by Verizon.

Some carrier hotels grew up after market deregulation. In the US, One Wilshire's status as a carrier hotel began when then-regional telco PacBell refused to allow competing telecom service provider MCI (which at the time was focused on long-distance calling) to place its switches and circuits inside PacBell's central switching facility at 400 South Grand in Los Angeles. MCI chose a building nearby that had a sightline for its microwave transmission equipment. Over time, other telecom providers began bringing fiber into the building, eventually turning it into one of the most interconnected hubs for Internet and telecom services in the world.

Similar examples can be found in Europe. In Frankfurt, datacenter and IT services provider ITENOS started by building out a former bakery for a telecom client in 1995 and over the next decade added space for carriers in several nearby buildings, including Kleyerstrasse 90. Kleyer 90's list of carrier tenants meant it was considered a carrier hotel by the time Equinix acquired it in 2013.
Other carrier hotels, such as 60 Hudson Street in New York City, had a longer historical link to network interconnection. The building was originally the headquarters of the Western Union Company, the provider of telegraph communication services founded in 1851. The building served as a point of connection for the firm's telegraph network; now the building houses more than 100 companies from around the globe that interconnect at the building's meet-me room.

2.1 CARRIER-NEUTRAL DATACENTER VS MEET-ME ROOM

In original carrier hotels, the meet-me room was where the physical interconnections were made. Now, however, the term carrier-neutral datacenter may be used to describe an interconnection location. Figure 1 notes some of the differences between the two, but there can also be some overlap between the terms. For example, a Telx facility within a larger building can be considered a carrier-neutral datacenter on its own and can also be the building's meet-me room. Perhaps the main difference is that today's carrier-neutral datacenters often have more power and cooling available than the older carrier hotels or carrier points of presence (POPs).

FIGURE 1: CARRIER-NEUTRAL DATACENTER COMPARED WITH MEET-ME ROOM
Source: 451 Research, 2015

Characteristic | Carrier-Neutral Datacenter | Meet-Me Room
Size | Any size, but usually >10,000 sq ft | Almost always smaller than a carrier-neutral datacenter; often 1,000-5,000 sq ft
Power and cooling | Typically built to densities that accommodate servers and edge routers rather than less power-hungry switches | Originally built for telecom equipment; typically offers DC power and relatively low density, though many have been upgraded to handle servers and larger routers
Stand-alone building | Yes or no | No
Ownership | Owned by the datacenter operator, or in space leased by the operator | Owned by the owner of the building
Operator | Datacenter owner | Building management, or an operator that has a contract with the building owner
Purpose | Can be interconnection-focused, or focused on providing space and power with the ability to connect to multiple carriers | Interconnection
Policies on interconnection | Typically only allows interconnection with other tenants in the datacenter | Typically, any building tenant can interconnect, whether leasing space in the MMR or not
Size of deployment | Typically a minimum deployment is required (e.g., 5-10 racks), with smaller amounts provided by tenants | Full racks, half racks, quarter racks
Examples | Equinix, KDDI/Telehouse, Interxion facilities | Telx in Digital Realty facilities; 151 Front Street meet-me room operated by Allied Fiber in Toronto; CoreSite in Denver
2.2 INTERCONNECTING THE INTERNET

In the early days of computer networking, there existed many incompatible and disjointed networks (e.g., enterprise networks and government-run networks that used different proprietary networking technologies). Not only were the networks incompatible, they were created with different purposes and were not expected to interoperate. The US Department of Defense, for instance, had ARPANET, which connected different research sites, while CSNET was created for the academic and commercial community of computer scientists. Eventually, users on one network wanted access to data or wanted to exchange email with users on other networks.

In the early 1980s a commercial 'multi-protocol' router was created, as were a number of exchanges where networks could interconnect and transfer traffic between different networks. These facilities were initially run by government agencies and nonprofits, and they became known as network access points, or NAPs (e.g., MAE-East in 1992). The management of these was eventually moved to commercial entities – mainly telecom providers such as Sprint and some of the Regional Bell Operating Companies (RBOCs). After the original sites became too crowded, particularly as data and content moved beyond the telcos to firms such as AOL and Yahoo, other exchanges were created. This drove the growth of the commercial Internet exchanges (IXs) that we see today in the multi-tenant datacenter (MTDC) landscape. Currently, there are different methods and business arrangements for transferring data between networks at interconnection points.

2.2.1 PRIVATE INTERCONNECTION

Private interconnection (or 'peering') is when networks are interconnected directly between edge routers on each network.
This is typically done using a pair of fiber-optic cables (one for transmitting, one for receiving), called 'cross-connects,' and may involve running these cables from one party's equipment directly to the other's, or both parties running cables to the central meet-me room. Examples of private interconnection include transit and private peering.
Internet Transit

Internet transit or IP transit refers to when an ISP sells global access to the Internet. In practice, this usually means a network, or autonomous system (AS), is paying for the ISP to announce Internet routes to it and to let the rest of the Internet know that the AS or network and its customers are on the Internet (see Figure 2).

FIGURE 2: INTERNET TRANSIT
Source: 451 Research, 2015
[Diagram: networks A and B, each with customers, pay transit provider C to exchange traffic; labels: Traffic Flow, Transit $.]

Transit traffic is Ethernet and is typically exchanged at 10Gbps or, increasingly, 100Gbps. The most common way to bill for transit is the 95/5 model. Every five minutes, the amount of traffic passing over the link is sampled. Every month, the readings are sorted from lowest to highest and the 95th percentile (of traffic either in or out, whichever is highest) is used to calculate what the customer pays, so the top 5% of spikes in traffic are not included. Thus, overall, the more transit used, the higher the costs.

Transit costs vary widely but have been declining steadily for years. Current estimates range from $3/Mbps (and up) to as low as $0.50/Mbps. Although prices have been steadily declining, contracts tend to be for a year or so, and traffic per customer generally is rising, so the cost curve looks like the following:

[Chart: transit cost rising with the number of Mbps used.]
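The 95/5 billing arithmetic described above can be sketched in a few lines of code. This is a minimal illustration, not a billing implementation; the function names and sample values are ours, and it assumes the common variant in which inbound and outbound are ranked separately and the higher 95th-percentile reading is billed.

```python
# Sketch of 95th-percentile ("95/5") transit billing. Traffic is sampled
# every five minutes; readings are sorted and the top 5% of spikes are
# discarded, so the customer pays for the 95th-percentile rate.

def billable_mbps(in_samples, out_samples):
    """Return the billable rate (Mbps) for a month of 5-minute samples."""
    def p95(samples):
        ranked = sorted(samples)
        # Index of the 95th-percentile reading: the top 5% are excluded.
        idx = int(len(ranked) * 0.95) - 1
        return ranked[max(idx, 0)]
    # Bill on whichever direction is higher, in or out.
    return max(p95(in_samples), p95(out_samples))

def monthly_transit_cost(in_samples, out_samples, price_per_mbps=2.0):
    # price_per_mbps is illustrative; the report cites $0.50-$3/Mbps.
    return billable_mbps(in_samples, out_samples) * price_per_mbps

# Example: a month of readings at ~500 Mbps where 5% of samples spike to 2 Gbps.
samples = [500] * 95 + [2000] * 5
print(billable_mbps(samples, samples))        # 500 -- spikes fall in the discarded 5%
print(monthly_transit_cost(samples, samples))  # 1000.0 at $2/Mbps
```

The spikes do not affect the bill precisely because they sit in the top 5% of readings, which is the point of the model from the customer's perspective.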
Internet Private Peering

Peering is when two parties provide access to each other's network endpoints by interconnecting and exchanging routing information (see Figure 3). Peering is not used for traffic going to end users on networks other than the peers'. It is referred to as private peering because the two parties connect directly. Peering can help optimize traffic flow and latency. It is typically settlement-free, meaning that no payments exchange hands, since the two parties exchange roughly the same amounts of traffic. If there is an imbalance in traffic (e.g., one party receives more traffic than it sends), one party will pay the other for access to its customers; this is called paid peering.

FIGURE 3: PRIVATE PEERING
Source: 451 Research, 2015

It is not always cost-effective to peer. Setup costs for peering are typically higher than for transit, so peering is cost-effective once there is a high enough volume of traffic. There may be some setup costs for transit; for example, if the transit connection is made in a colocation facility, there will be costs for renting space in the facility and possibly for network connectivity between the facility and the customer's office. For peering, there will be the same costs to be at a meeting point (often a colocation facility), plus typically the cost of a router (rather than just a switch), the setup fee for a cross-connect to the peer(s) and, in some cases, a monthly fee for the cross-connect(s) as well. However, once they have a cross-connect, peers can exchange as much traffic as the size of the cross-connect allows (up to 70-80% of the cross-connect's capacity, to be safe). There are higher fixed costs, but once enough traffic is passed over the cross-connect, the cost is lower. Typical costs are anywhere from $100 to $350/month per fiber cross-connect.
So if sending or receiving 500Mbps per month (95th percentile) at a transit cost of $2/Mbps, the transit cost would be $1,000 per month, while the same traffic over a cross-connect would cost $350 plus the setup costs, for a cost curve that looks more like this:

[Diagram: networks A and B, each with customers, peering directly. Chart: cost vs. no. of Mbps, with transit cost rising linearly while the cross-connect cost stays flat.]
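The $1,000-vs-$350 arithmetic above can be made concrete with a short sketch. The function names are ours, the prices are the illustrative figures from the example ($2/Mbps transit, $350/month cross-connect), and setup costs are deliberately ignored, as the text notes they vary by facility.

```python
# Illustrative transit-vs-peering cost comparison, using the figures from
# the worked example: $2/Mbps transit and a $350/month cross-connect fee.

def transit_cost(mbps, price_per_mbps=2.0):
    """Monthly transit cost: scales linearly with the 95th-percentile rate."""
    return mbps * price_per_mbps

def peering_cost(cross_connect_fee=350.0):
    """Monthly peering cost: a flat cross-connect fee, regardless of volume
    (up to ~70-80% of the cross-connect's capacity, per the rule of thumb
    in the text). Router and setup costs are ignored here."""
    return cross_connect_fee

def breakeven_mbps(price_per_mbps=2.0, cross_connect_fee=350.0):
    """Traffic level above which the flat peering fee beats transit."""
    return cross_connect_fee / price_per_mbps

print(transit_cost(500))    # 1000.0 -- the report's 500 Mbps transit example
print(peering_cost())       # 350.0  -- same traffic over a cross-connect
print(breakeven_mbps())     # 175.0  -- peering is cheaper above 175 Mbps
```

This is the shape of the two cost curves in the figure: transit rises linearly with traffic, while the cross-connect line is flat, so the lines cross at the breakeven volume.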
However, even if it does not necessarily save money to peer versus using transit, some firms prefer to peer in order to have traffic go directly to the peer's end users, avoiding the hops that a transit provider might send the traffic through. In other words, networks may prefer peering to gain more control over traffic routes (see Figure 4).

No single ISP is physically connected to every other network on the planet; most have a customer base in a particular region. So an ISP that sells transit also has to connect with other network providers via peering arrangements, IXs or by buying transit as well. Through this series of business relationships and network connections, each network can reach the entirety of other websites on the Internet, and vice versa.

FIGURE 4: INTERNET TRANSIT PLUS PEERING
Source: 451 Research, 2015
[Diagram: networks A and B, each with customers, buy transit from C while also peering directly with D; labels: Traffic Flow, Transit $, Peering.]

2.2.2 PUBLIC INTERCONNECTION OR PUBLIC PEERING

Public peering refers to the practice of multiple parties connecting to each other via an IX that operates a shared switching fabric, typically an Ethernet switch, which enables one-to-many connections. The location and switch used to connect multiple firms is called an Internet exchange point (IXP). The Ethernet switches can provide 100Mb connections (or ports), up through 100Gb ports in some cases (see Figure 5). Public peering is more scalable and often less expensive than setting up a large number of individual private peering arrangements/connections. Once connected to the main platform, there is relatively little cost to add interconnection partners that are also on the platform.
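The one-to-many scaling argument can be sketched numerically. The prices below are hypothetical, not quoted from the report: a per-peer cross-connect fee at the top of the report's $100-$350 range versus a single flat exchange-port fee we assume for illustration.

```python
# Sketch of why public peering scales: n private peers need n cross-connects,
# while a single IX port reaches every peer on the shared fabric.
# Both prices are hypothetical, chosen only to show the crossover.

def private_peering_monthly(n_peers, cross_connect_fee=350.0):
    # One dedicated cross-connect per peer: cost grows linearly with peers.
    return n_peers * cross_connect_fee

def public_peering_monthly(port_fee=1000.0):
    # One exchange port, regardless of how many peers are on the fabric.
    return port_fee

for n in (2, 5, 20):
    print(n, private_peering_monthly(n), public_peering_monthly())
# With these assumed prices, private peering wins at 2 peers but the flat
# IX port becomes cheaper somewhere before 5 peers.
```

The same logic explains the report's observation that, once connected to the platform, adding another interconnection partner costs relatively little: the marginal cost of peer n+1 on the fabric is near zero, versus another full cross-connect privately.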
FIGURE 5: PUBLIC PEERING PLATFORM
Source: 451 Research, 2015
[Diagram: routers at ISP POPs A through D connect to a shared Ethernet switch at the Internet exchange point (public peering across the shared switch), and directly to one another via cross-connects (private peering).]

In North America, in general, there is one major public peering exchange per market, typically available via one or two datacenters. The owner(s) of those datacenters typically run the exchange. The reverse is true in Europe, with most public peering fabrics operated on behalf of their members, either as nonprofits or as cooperatives, and available in multiple datacenters in the market. Their members are the firms connected to the exchange. This model has slightly different economics: since the exchanges are in multiple sites, there are costs for equipment in each site and network connectivity between them (e.g., the cost to lease dark fiber and the cost for equipment to light the fiber at each end).

In North America, private peering is more common; public peering has generally been used for lower bandwidth requirements and/or as a backup for private peering traffic. Public peering is more popular in Europe than in North America for historic reasons, since it arrived in Europe later, when the technology was better developed (see Figure 6).
FIGURE 6: PUBLIC PEERING IN THE US VS. EUROPE
Source: 451 Research, 2015

IXP business model
• US: For-profit
• Europe: Cooperative or nonprofit

IXP operator
• US: The colocation provider
• Europe: A committee selected by members, or an association

IXP location
• US: The IXP is located in the facility (or facilities) of its colocation provider.
• Europe: The IXP has equipment in multiple datacenters belonging to a variety of operators.

Interconnection price model
• US: Installation fee for connection to the IXP based on number and bandwidth of ports provided, plus a monthly recurring fee.
• Europe: Installation fee for connection to the IXP based on number and bandwidth of ports provided, plus a monthly recurring fee. There is also an annual membership fee not related to the quantity of ports or traffic.

Cross-connect price model
• US: Installation fee plus, often, a monthly recurring fee paid to the colocation provider per cross-connect.
• Europe: Installation fee – and typically no monthly recurring fee – paid to the colocation provider per cross-connect.
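Figure 6's two price models differ mainly in the recurring cross-connect fee (US) versus the annual membership fee (Europe); a small first-year cost sketch, with all fees illustrative:

```python
# Hedged sketch of Figure 6's pricing models. Every number is an
# illustrative assumption, not an actual exchange price list.

def us_ix_first_year(port_install, port_mrc, xc_install, xc_mrc):
    """US model: port fees plus a cross-connect that often carries its own
    monthly recurring charge, paid to the colocation provider/IX owner."""
    return port_install + 12 * port_mrc + xc_install + 12 * xc_mrc

def eu_ix_first_year(port_install, port_mrc, membership, xc_install):
    """European model: similar port fees plus a flat annual membership fee;
    cross-connects typically have no monthly recurring charge."""
    return port_install + 12 * port_mrc + membership + xc_install

# With comparable port pricing, the totals differ only in the recurring
# cross-connect fee (US) versus the flat membership fee (Europe).
assert us_ix_first_year(1000, 500, 300, 150) == 9100
assert eu_ix_first_year(1000, 500, 2000, 300) == 9300
```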
SECTION 3
Interconnection as a Business

Originally, connections were made by physically patching (connecting) two customers together via a fiber-optic or copper cable. Every carrier in a facility was connected individually to others. Over time this generated enormous quantities of cables that were hard to keep track of and became quite complex to manage. (Physical network connections – when done wrong – are believed to be a major source of network errors.) In the early carrier-neutral sites, the building owner sometimes made the physical connections, i.e., ran the meet-me room. Sometimes the carriers ran the meet-me room themselves, e.g., as a cooperative. As complexity grew, firms sprang up that specialized in operating interconnection spaces. They worked out arrangements with the building owners and earned their keep by charging for their services. When the original carrier hotels filled up, these operators sometimes built and ran expansion space nearby. This launched the business of interconnection and also led to the automation of the process, when the interconnect operators began to provide switching services (as well as the physical cabling services).

3.1 COMPONENTS

There are several components to the business of interconnection:
• The building where the connections are made
• In some cases, bandwidth services to or within the building where the connections are made
• Physical cross-connects
• Often, a public peering platform
• Access to other customers of the facility, such as cloud providers, either directly or through a cloud exchange platform
• Additional services provided to customers

3.1.1 THE BUILDING

In the early days of carrier-to-carrier connections, the facility where connections were made mainly housed telecom equipment – which generally requires relatively little power but uses direct current (DC).
Thus, when these facilities were set up in office buildings, they did not normally require extra power and cooling – they just required DC plant. Through the years, as more firms sought to connect, Internet traffic grew, and customers signed on that required AC plant and more power and cooling, the facilities had to be upgraded. The owners had an incentive to do that because as the number of customers and connections grew, the facilities became more valuable.
3.1.2 BANDWIDTH

Customers of the facility typically need to pay for bandwidth to their offices or other sites, and for transit to access Internet customers that are not on the networks of firms the customer peers with. In general, the customer sets up a direct relationship for bandwidth and/or transit with carriers in the facility. Sometimes, however, the owner of the datacenter also provides bandwidth services and can charge separately for those. In addition, some facilities are connected to others to provide access to customers in those other facilities, and bandwidth is required between the datacenters. This can be provided, on a separate contract, by a dark fiber or network service provider (NSP). Or sometimes, again, the datacenter owner/operator provides the connectivity to other datacenters – either as a separate charge or rolled into one of the other fees.

3.1.3 CROSS-CONNECTS

A customer pays to be in a datacenter, but also needs to be connected to other firms in the datacenter. In the early days, carriers ran cables themselves, but as the number of cables grew, this became unwieldy and a third party took over managing the physical cables. The third party charged a fee for this service. This fee remains in place today and typically is an installation charge to pay for a technician to physically run the cables and connect them (it also covers the cost of the cable and equipment). In addition, some providers also charge a monthly recurring fee for the cross-connect.

3.1.4 PUBLIC PEERING PLATFORM

As mentioned above, an IXP allows a customer to connect to one platform and, through that platform, to other members of the exchange without having to run separate cables each time. There is a fee for this service – typically an installation fee and a monthly maintenance fee as well.
It is generally based on the size of the port (e.g., 1Gb per second), though some providers (e.g., IIX) charge a fee based on the amount of bits actually transferred.

3.1.5 ACCESS TO OTHER CUSTOMERS IN THE FACILITY, PARTICULARLY CLOUD PROVIDERS

Some interconnect providers offer ways to connect to other customers in the facility. These may include a portal that allows customers to see and contact each other, or a cloud exchange, which in theory is a platform that allows customers to connect to multiple cloud providers easily by incorporating the APIs and specific requirements for access to each cloud provider into one platform. These are at various stages of development, depending on the provider, but can certainly be an additional source of revenue.

3.1.6 ADDITIONAL SERVICES

Customers may require consulting, network management, remote hands and other services that are billed separately.
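The two IXP billing approaches described in Section 3.1.4 (a flat fee by port size, versus a metered fee on bits actually transferred) can be contrasted in a short sketch; the rates are illustrative, not any provider's price list:

```python
# Hedged sketch contrasting the two IXP billing approaches: a flat monthly
# fee by port size versus a metered fee on bits actually transferred.
# All rates are illustrative assumptions.

PORT_FEE_BY_GBPS = {1: 400, 10: 1200, 100: 6000}  # flat monthly fee per port size

def flat_monthly(port_gbps: int) -> float:
    return PORT_FEE_BY_GBPS[port_gbps]

def metered_monthly(tb_transferred: float, price_per_tb: float = 2.0) -> float:
    return tb_transferred * price_per_tb

# A lightly used 10Gb port is cheaper under metered billing; a heavily
# used one is cheaper at a flat rate.
assert metered_monthly(100) < flat_monthly(10)    # 100TB: $200 vs $1,200
assert metered_monthly(1000) > flat_monthly(10)   # 1,000TB: $2,000 vs $1,200
```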
CLOUD EXCHANGE EXAMPLE: EQUINIX

Cloud exchanges are still relatively new. Equinix launched its Cloud Exchange in spring of 2014. The idea is to take the IX concept and expand it beyond NSPs to connect to other infrastructure service providers. Ideally, this would allow a customer to connect to multiple IaaS providers available on the exchange – such as Amazon Web Services (AWS), Microsoft Azure, Google Cloud Platform and SoftLayer (IBM) – through one interface or portal. This has been relatively complex to set up; cloud providers have different requirements for accessing their clouds, so a portal has to provide the correct information to each provider. The Equinix Cloud Exchange does this using a co-developed version of Cisco's InterCloud orchestration tool coupled with SDN technology developed in Equinix labs, as well as components from Ciena and Juniper for layers 1-3 and the software platform Apigee. The Cloud Exchange provides a range of services, including automatic provisioning and policy setting. A customer can connect to Cloud Exchange participants via a port on an Equinix switch. Instead of taking out a dedicated fiber connection to each cloud provider, the customer can open many smaller virtual circuits to various cloud suppliers. Equinix is aiming to encourage end users to connect with providers, pricing the service as a utility to help spur connections between a customer and multiple cloud providers. Equinix, in turn, makes money from the customer and supplier for both colocation and the cross-connect to the platform, as well as a nominal fee for joining the platform. Cloud Exchange VLANs target enterprise users consuming smaller amounts of traffic for smaller time frames (200Mbps, 500Mbps, 1Gbps and other speeds up to 10Gbps are available).
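The virtual-circuit model described above can be sketched as a toy data structure: one physical port carrying many smaller VLAN-based circuits to different cloud providers. The class, the speed tiers and the capacity check are illustrative assumptions, not Equinix's actual Cloud Exchange API:

```python
# Hedged sketch of the virtual-circuit idea behind a cloud exchange: one
# physical port carries many smaller circuits to different cloud providers.
# This is a toy model, not any provider's real API.
from dataclasses import dataclass, field

ALLOWED_MBPS = {200, 500, 1000, 10000}  # illustrative circuit speed tiers

@dataclass
class ExchangePort:
    capacity_mbps: int
    circuits: dict = field(default_factory=dict)  # provider -> circuit speed

    def provision(self, provider: str, mbps: int) -> bool:
        """Add a virtual circuit if the speed is offered and fits on the port."""
        if mbps not in ALLOWED_MBPS:
            return False
        if sum(self.circuits.values()) + mbps > self.capacity_mbps:
            return False
        self.circuits[provider] = mbps
        return True

port = ExchangePort(capacity_mbps=10000)
assert port.provision("AWS", 1000)       # one port, several providers
assert port.provision("Azure", 500)
assert not port.provision("GCP", 123)    # not an offered speed tier
```

The design point is the one in the text: the customer buys a single physical port and subdivides it, instead of buying a dedicated fiber connection per cloud provider.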
Those customers with higher bandwidth consumption rates over a long-term contract, including those looking at Amazon's Direct Connect service, will buy 1Gbps or 10Gbps ports.

3.2 SUPPLY AND DEMAND

3.2.1 SUPPLY

Carrier hotels today, particularly those with the most customers, in general remain more valuable than other datacenters. Typically, several carriers have laid fiber to the building while others have installed equipment inside, so there is a sunk cost to being in the facility and it can be expensive to move to another one. Customers usually need to be in the facility because they need to connect to the maximum number of carriers, ISPs, cloud providers and others. It is difficult to start a competing exchange nearby because each exchange has a tipping point, before which there are not enough customers in the facility to make it worth paying for equipment and colocation fees to have a presence there. Another option is to build a datacenter nearby and connect it to the original carrier hotel. However, there is then a cost for the connectivity between the two buildings (currently around $1,000 per month for 10 Gigabit metro Ethernet connectivity, although this varies widely). The owners/managers of the original carrier hotel meet-me room have an advantage, as they can pay for dark fiber to connect the two buildings and arrange for interconnection of customers from the second facility. However, if a competitor sets up a building nearby, the competitor needs to work with the original carrier hotel owner to determine how to connect customers of the second building. Typically, the customers of the second building would need to pay for network transport to the carrier hotel. Note
that the original carrier hotel owner can simply charge a lower price for space in the carrier hotel than the cost of that network transport and try to win the customer away. Or the owner of the second building can pay for network connectivity to the first building, but then someone has to pay for space at the first building to house the equipment for interconnection. As a result, with captive customers for which moving may be expensive, and with some barriers to entry that keep competitors from easily recreating the ecosystem of customers at a facility, the carrier hotel owner/operator can have strong pricing power. The fees for colocation in the building (just for the space and power for equipment) are typically at least 20% higher than those for nearby facilities that are less network-dense. Sometimes, for a very desirable location (e.g., where the matching engine for financial trades sits), the fees can be much higher than that. If there is not enough capacity for growth at a particular network-dense facility, we have seen ecosystem participants move to a different location. This has happened, for example, with financial exchanges where the trading engine moved from downtown (in Manhattan or Chicago) out to the suburbs and brought its trading ecosystem participants with it.

3.2.2 DEMAND

In addition to acting as hubs where network providers can connect to each other, a variety of models for interconnection have arisen between enterprises, NSPs and cloud service providers. The business model of the MTDC operator is one component of that value – whether or not it can attract a large number of bandwidth providers (ISPs, carriers and such), and customers that need connectivity to the public Internet as well as to other customers in a particular datacenter facility.
A more recent factor in the equation is the presence of cloud compute service providers such as AWS, Microsoft Azure, Google Compute Engine and IBM SoftLayer, as well as SaaS companies such as salesforce.com. In addition, enterprises are looking to place applications closer to an ever more mobile customer and employee base. Locating in facilities with peering points with mobile carriers can greatly improve application performance. This extends to enterprise partners providing services such as marketing and integration, like HubSpot, Eloqua and Marketo. Providing connectivity to these services for customers already colocated in a datacenter is an area of significant commercial activity. Another area of growing interest is secure, private connections between cloud providers and customers outside the facility.

The value of an interconnection ‘ecosystem’ is growing and is already very large for companies in particular sectors, e.g., where groups of companies need to share large data sets (oil and gas, movie production, pharmaceuticals and genomics) or need to trade information (financial services trading ecosystems). As more firms start to compute and share large data sets, demand for these communal meeting points (datacenters) will continue to grow.
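The supply-side arithmetic in Section 3.2.1 (a roughly 20% colocation premium at the network-dense site, versus paying for metro transport from a cheaper building nearby) can be sketched with illustrative figures:

```python
# Hedged sketch of the tenant's choice described in Section 3.2.1: stay in
# the network-dense carrier hotel at a premium, or colocate nearby and pay
# for metro transport back to it. All figures are illustrative.

def carrier_hotel_monthly(base_colo: float) -> float:
    # Network-dense sites often charge at least ~20% more for space/power.
    return base_colo * 1.20

def nearby_monthly(base_colo: float, transport: float = 1000) -> float:
    # e.g., ~$1,000/month for 10 Gigabit metro Ethernet back to the hotel.
    return base_colo + transport

# For a small footprint, the premium is less than the transport cost, so
# staying in the carrier hotel is cheaper.
assert carrier_hotel_monthly(3000) < nearby_monthly(3000)    # $3,600 vs $4,000
# For a large footprint, the 20% premium exceeds a fixed transport charge.
assert carrier_hotel_monthly(20000) > nearby_monthly(20000)  # $24,000 vs $21,000
```

This is the tipping-point dynamic the text describes: the hotel owner can price just under the tenant's transport alternative and still win.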
3.3 CUSTOMERS

With the rise of the Internet, firms besides carriers have sought to connect with carriers and with each other, so the list of customers/participants at interconnection facilities has grown (see Figure 7).

FIGURE 7: CUSTOMERS OF INTERCONNECTION FACILITIES
Source: 451 Research, 2015

Network service provider or carrier
Description: Provides network access and very high-volume bandwidth access to the Internet ‘backbone.’ NSPs sell bandwidth to ISPs, which in turn connect and sell access to consumers and enterprises; some carriers also sell directly to enterprises.
Reasons for interconnecting:
• To make the maximum number of connections for buying and selling Internet transit, peering and VoIP interconnection. The cost savings from being in an interconnection facility usually make up for the equipment and rental costs to be there.
• Carrier equipment in these facilities typically requires less power than servers. The footprint is relatively fixed.

ISP
Description: Provides businesses and consumers access to the Internet. May offer other services, e.g., email, website hosting.
Reasons for interconnecting:
• To gain access to all other destinations on the Internet that they are not connected to, ISPs buy transit from network providers or peer with them or with other ISPs. They want to be in an interconnection facility with the maximum number of small networks and ISPs present in order to peer with them, as well as with top-tier network providers present in order to buy transit from them.

Content provider
Description: Usually a large-scale provider that stores video, Web pages or other files that consumers want to access.
Reasons for interconnecting:
• Content providers need to interconnect with networks via Internet peering or transit to serve their content to end users. They tend to have large server and storage deployments.
Often, interconnect facilities do not have enough contiguous space available for the full content deployment, so much of it ends up in a building close by, connected to the interconnect facility via dark fiber or wavelength services.
• Facility quality and reliability are of great importance to most content providers, as they lack carriers’ geographic redundancy and, in some cases, will serve a specific product out of a single datacenter.

Content delivery network
Description: A set of distributed servers and software used to deliver content.
Reasons for interconnecting:
• Like content providers, CDNs will place servers at interconnection facilities in order to gain access to end users, through transit and ideally peering agreements. The direct billing model of CDNs makes them highly price-sensitive.
• Many CDNs are less concerned about the ability to expand within a single facility, and prefer to spread their footprint out to cover many facilities, thus improving CDN performance.
Web hoster
Description: A service provider that offers space on servers for websites, and enables those sites to be available to the Internet.
Reasons for interconnecting:
• Margins for the hosting business tend to be lower than those of content providers, so hosters are often more concerned about price and less about facility quality. Web hosters are also less concerned with network density, as their content tends to be less attractive for Internet peering. They are mainly looking for lower-cost providers of Internet transit services. Similarly, network services such as dark fiber and wavelengths are of less interest to Web hosters, which typically have less traffic than many content providers.

Cloud and/or hosting provider
Description: A cloud provider is a service provider offering IaaS, SaaS or PaaS in an on-demand, multi-tenant environment. Examples include Amazon Web Services, Microsoft Azure, IBM SoftLayer and salesforce.com.
Reasons for interconnecting:
• SaaS and IaaS providers tend to have larger footprints than network providers, often with relatively high-density architecture. Their growth is relatively unpredictable as well, so they often seek facilities with the capability to provide relatively large amounts of power in small footprints, and space available for expansion.

Systems integrator
Description: A company whose business is building complete compute systems from disparate hardware and software components.
Reasons for interconnecting:
• The major systems integrators (SIs) are interested in the use of interconnection facilities as a cost-saving measure. Customers of SIs – enterprises of varying sizes – will use many different network providers for connecting with their SI vendors. Placing SI infrastructure in interconnection facilities enables easy interconnection with carrier Internet, ATM and MPLS networks.

Enterprise
Description: ‘Enterprise’ can refer to any business entity.
For the purposes of this report, an enterprise is typically a company with 500 employees or more; it may have extensive WANs and its own datacenters, but buys network and compute resources from third parties in order to conduct business over (and on) the Internet.
Reasons for interconnecting:
• Large enterprises are becoming more interested in interconnection facilities to increase their network options and access cloud and other service providers. Enterprises also sometimes use interconnection facilities as disaster-recovery hubs.
• In general, enterprises prefer higher-quality facilities with highly redundant cooling and power and a high level of security. They tend to be less cost-conscious than some other customers, as their footprints are smaller and facility quality is so important to them.
We summarize the drivers behind selection of interconnect facilities in Figure 8.

FIGURE 8: DRIVERS OF FACILITY SELECTION – CRITERIA AND LEVEL OF IMPORTANCE
Source: 451 Research, 2015

Carriers – Network density: High; Telecom services available: Medium; Cost: Low; Power and cooling: Low; Expansion capacity: Medium; Managed services: Low; Facility quality/reliability: Medium. Examples: Verizon, Level 3, Zayo.

Content providers – Network density: High; Telecom services available: Medium; Cost: Medium; Power and cooling: High; Expansion capacity: High; Managed services: Low; Facility quality/reliability: High. Examples: Google, Netflix.

Web hosting providers – Network density: Medium; Telecom services available: Low; Cost: High; Power and cooling: High; Expansion capacity: High; Managed services: Low; Facility quality/reliability: Medium. Example: Rackspace.

Content delivery networks – Network density: High; Telecom services available: Medium; Cost: High; Power and cooling: High; Expansion capacity: Medium; Managed services: Low; Facility quality/reliability: High. Examples: Akamai, Limelight.

Large enterprises, system integrators – Network density: Medium; Telecom services available: High; Cost: Low; Power and cooling: Medium; Expansion capacity: Low; Managed services: High; Facility quality/reliability: High. Examples: EDS, IBM, Morgan Stanley.
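Figure 8's qualitative priorities can be turned into a simple weighted score for comparing candidate facilities; the weights and the two example facility ratings below are illustrative assumptions:

```python
# Hedged sketch: converting Figure 8's Low/Medium/High priorities into a
# weighted facility score. Weights and facility ratings are illustrative.

WEIGHT = {"Low": 1, "Medium": 2, "High": 3}

CDN_PRIORITIES = {  # from Figure 8, content delivery network column
    "network_density": "High", "telecom_services": "Medium", "cost": "High",
    "power_cooling": "High", "expansion": "Medium",
    "managed_services": "Low", "quality": "High",
}

def score(facility_ratings: dict, priorities: dict) -> int:
    """Sum facility ratings (1-5 scale) weighted by the buyer's priorities."""
    return sum(WEIGHT[priorities[k]] * facility_ratings[k] for k in priorities)

dense_site = {"network_density": 5, "telecom_services": 4, "cost": 2,
              "power_cooling": 4, "expansion": 2, "managed_services": 2, "quality": 4}
cheap_site = {"network_density": 2, "telecom_services": 2, "cost": 5,
              "power_cooling": 3, "expansion": 4, "managed_services": 2, "quality": 3}

# With CDN priorities, the network-dense site edges out the cheaper one.
assert score(dense_site, CDN_PRIORITIES) > score(cheap_site, CDN_PRIORITIES)
```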
SECTION 4
Interconnection Providers

Originally, providers were known for particular locations where they had facilities, and in each market there were only one or two interconnection options. This has changed somewhat, as larger providers have acquired the single-site carrier hotels in various cities and/or have built competing facilities in the top markets. In smaller markets there are often no public IXs, but there are still locations where carriers, ISPs, content providers, etc., meet to exchange traffic. These exchange points are typically owned by a carrier or ISP.

The Market Map in Figure 9 shows key interconnect providers and some of the characteristics that differentiate them. Geographic reach and focus is one characteristic: Some firms are in multiple countries; some are in a single region, typically a top market; some are in markets at the ‘edge’ of the Internet, in cities outside the traditional top 10 datacenter/interconnection locations. When it comes to service offerings, there are providers that offer interconnection but also provide their own network services. There are firms that offer interconnection but also larger suites, in a combination of interconnection and a more wholesale-like offer. There are firms that offer connections through a public peering platform. Finally, there are firms that offer direct connectivity to cloud providers, through a cloud exchange platform or through direct connections to well-known public cloud providers such as AWS and Microsoft Azure.
FIGURE 9: 451 RESEARCH INTERCONNECT MARKET MAP™
Source: 451 Research, 2015

[Market Map graphic: providers grouped into eight segments – Focus on Single Market; Focus on Markets Outside Top 10; Geographic Reach (Multiple Countries); Interconnection Plus Larger Suites; Network Services & Cross-Connects; Hosts or Operates Public Peering Platform; Cloud Exchange Platform; Direct Connections to Public Cloud Providers. The provider-to-segment mapping appears in tabular form in Figure 10.]
Identification and placement of companies into these segments is based on analysis, both published and unpublished, performed by 451 Research. This analysis includes interviews, reports and advisory work with several thousand enterprises, vendors, service providers and investors annually. 451 Research Market Maps™ are not intended to represent a comprehensive list of every vendor operating in this market. Inclusion on 451 Research Market Maps™ does not imply that a given vendor will be specifically featured in one or more 451 Research reports.

FIGURE 10: INTERCONNECTION PROVIDER SEGMENTS
Source: 451 Research, 2015

[Table mapping each provider to the eight segments: Focus on Single Market; Focus on Markets Outside Top 10; Geographic Reach (Multiple Countries); Interconnection Plus Larger Suites; Network Services and Cross-Connects; Hosts or Operates Public Peering Platform; Cloud Exchange Platform; Direct Connections to Public Cloud Providers. Providers covered on this page: 365 Data Centers, AMS-IX, CenturyLink, CityNAP, Colo Atl, Cologix, Colt, CoreSite, CyrusOne, DE-CIX, Digital Realty, DuPont Fabros, EdgeConneX, Equinix, Evoswitch, Expedient Data Centers, Global Net Access (GNAX), Global Switch, IIX, Interxion, Involta, KDDI/Telehouse, KIO Networks, KOMO Plaza, Level 3 Communications, LINX, Markley Group.]
[Figure 10, continued. Providers covered on this page: Miami-Connect, Morgan Reed Group, Netrality Properties, NextDC, NTT Communications, PCCW, Phoenix NAP, PTT Metro, QTS Realty Trust, Sabey Data Centers, Sierra Data Centers, SunGard AS, Switch SUPERNAP, Tata Communications, Telstra, Telx, TierPoint, Verizon Terremark, vXchnge, Westin Building Exchange, Zayo/zColo.]
FIGURE 11: SUMMARY CHART: MARKET CHALLENGES AND INNOVATIONS
Source: 451 Research, 2015

Focus on Single Market
Key challenges:
• Expanding the facility in a key market – accessing capital, working within geographic constraints.
Innovations:
• Offering specific high-density rooms; tethering nearby buildings to the main site; trying various pricing models.

Focus on Markets Outside Top 10
Key challenges:
• Gaining enough customers at each site.
• Encouraging customers to deploy in multiple markets.
Innovations:
• Offering a similar look and feel in each facility.
• Allowing customers to manage facilities in multiple markets on a single contract and with a single portal.

Geographic Reach
Key challenges:
• Encouraging customers to deploy in multiple markets.
• Determining where to add facilities.
Innovations:
• Offering a similar look and feel in each facility.
• Forming partnerships to offer facilities in other countries without having to build there.

Interconnection Plus Larger Suites
Key challenges:
• Finding space and power near interconnect facilities to provide larger suites.
• Targeting customers that need both large blocks of space and interconnection options.
Innovations:
• Providing dark fiber, wavelength or other network services to encourage customers to deploy in a building separate from the main interconnect facility.

Network Services and Cross-Connects
Key challenges:
• Convincing customers that the facility offers interconnection without requiring the use of the provider’s network.
• At the same time, encouraging customers to use the provider’s network.
Innovations:
• Creative bandwidth options and pricing by the network provider, particularly for customer access to cloud services.
• Stressing the benefits of having ‘one throat to choke,’ or pricing in such a way that using one provider is a benefit.

Hosts or Operates Public Peering Platform
Key challenges:
• Security; attracting customers; enabling those on the platform to know who else is connected.
Innovations:
• Providing portals that show who is available to peer with on the platform and enable those connections to happen rapidly (with only a couple of clicks).

Cloud Exchange Platform
Key challenges:
• Attracting customers, particularly enterprises, but also attracting cloud providers to the platform.
• Making the connections simple for the end user despite the lack of standards for accessing cloud providers.
Innovations:
• Providing flexible bandwidth options to encourage uptake by end users.
• Developing orchestration platforms that connect to all the cloud providers using APIs to simplify access to each.

Direct Connections to Public Cloud Providers
Key challenges:
• Convincing public cloud providers to offer direct connections from a particular facility.
• Helping customers access the direct connections for multiple providers easily.
Innovations:
• Creating a ‘dashboard’ using APIs to allow customers to connect to various public cloud providers directly with a single pane of glass.
SECTION 5
Evolution of Interconnection: Trends and Disruptors

We see a variety of factors affecting interconnection, including: the continued growth of Internet traffic; the increasing number of firms that want to interconnect with others; the migration of content and Web applications closer to the network edge; the need to interconnect with cloud providers; mobility and the Internet of Things (IoT); the need for a variety of firms to work on the same data sets; and developments in networking and datacenter technology that could further accelerate decentralization. The number of interconnections will continue to increase dramatically.

5.1 CONTINUED GROWTH OF INTERNET TRAFFIC AND THE NEED FOR INTERCONNECTION

Global IP-based Internet traffic will nearly triple over the next five years, according to Cisco’s Visual Networking Index report for 2015. The continued growth in traffic is driven by several factors:
• The popularity of streaming media, music/radio services, TV/video on demand and Internet video sites such as YouTube, and the massive bandwidth requirements of video compared with those of static Web pages.
• The popularity of the Web for distributing major media events such as the World Cup and the Olympics.
• Growing traffic from mobile devices, which is estimated to increase tenfold by 2019.

The typical effective speed for Internet network traffic exchange is 7Gbps, or 70% of an OC-192 or 10 Gigabit Ethernet link. While that seems to be an extraordinary amount of traffic, it is small compared with the large traffic volume seen on the Internet today. To deal with the large amount of Internet traffic and keep up with the significant growth, networks are peering across a greater number of interconnections at any specific location, and they are interconnecting in more locations.
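The figures above (a 7Gbps effective rate on a 10 Gigabit link, plus compounding traffic growth) imply a simple capacity-planning calculation, sketched here with illustrative growth rates:

```python
# Hedged sketch of the capacity arithmetic behind the paragraph above:
# links are typically run at ~70% of line rate before operators must add
# ports or locations, and traffic growth compounds quickly. The growth
# rate used below is illustrative.

def effective_capacity(line_rate_gbps: float, utilization: float = 0.70) -> float:
    """Usable throughput before a link is considered full."""
    return line_rate_gbps * utilization

assert abs(effective_capacity(10) - 7.0) < 1e-9  # the 7Gbps figure cited above

def years_until_upgrade(current_gbps: float, annual_growth: float,
                        line_rate_gbps: float) -> int:
    """Years until traffic exceeds the link's effective capacity."""
    years = 0
    while current_gbps <= effective_capacity(line_rate_gbps):
        current_gbps *= 1 + annual_growth
        years += 1
    return years

# 3Gbps of traffic growing 25% a year fills a 10GbE link in a few years.
assert years_until_upgrade(3, 0.25, 10) == 4
```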
While the spread of 100 Gigabit Ethernet technology has the potential to control this growth, it is likely that the new technology will only keep up with, rather than lead, demand.

5.2 INCREASE IN THE NUMBER OF FIRMS INTERCONNECTING

The number of websites and Internet content sources has grown considerably over the last five years as successive waves of social networking, picture-sharing and video-sharing websites have come online. Although there has been some consolidation in the number of such players, the increasing number of entrants in the fast-growing Internet space has boosted interconnection requirements. More networks and providers are discovering the cost-saving benefits of carrier hotels and carrier-neutral datacenters.
Due to the high traffic volumes that are now the norm, networks are driven to place content in or near interconnection facilities to cut down on pricey local loops and gain access to inexpensive Internet transit and peering. In years past, such a strategy would have been foreign to the majority of network engineers; the typical strategy would have been to build their own datacenter and order costly local loops from RBOCs and other metro fiber providers. However, the growing availability of interconnection facilities, combined with the enormous costs of multi-gigabit local loops, has forced a sea change in this behavior.

Even those networks that are too large to place their servers in direct proximity to interconnection facilities, such as Google and Yahoo, maintain large network nodes to enable low-cost interconnection to other networks. That capacity is then provided to Google and Yahoo datacenters via the lower-cost metro fiber available in an interconnection facility.

The increasing number of connections is a simple matter of economics. Local loops are simply too expensive per bit to support modern Web and Internet media properties. Another key factor is Internet transit and peering pricing. Comparing the Internet transit/access pricing that a network can receive at its own datacenter to that which is available in an interconnection facility, there is an enormous difference, due to the strong marketplace that evolves in most interconnection facilities. Finally, Internet peering is generally available only to networks at interconnection facilities, and is an increasingly popular way of cutting costs as bandwidth requirements continue to increase.
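The economics just described can be made concrete with a hedged sketch: all prices are illustrative, but the structure follows the text, with a fixed local-loop cost plus pricier transit at one's own datacenter, versus colocation fees plus marketplace-priced transit at an interconnection facility:

```python
# Hedged sketch of the local-loop economics described above. Every price
# here is an illustrative assumption, not a quoted market rate.

def own_dc_monthly(mbps: float, loop_fee: float = 8000,
                   transit_per_mbps: float = 3.0) -> float:
    """Serve from your own datacenter: a multi-gigabit metro local loop
    plus transit delivered to the building at higher, non-marketplace rates."""
    return loop_fee + mbps * transit_per_mbps

def interconnect_monthly(mbps: float, colo_fee: float = 2500,
                         transit_per_mbps: float = 0.8) -> float:
    """Colocate at an interconnection facility: pay colo fees, but buy
    transit in the competitive on-site marketplace (and skip the loop)."""
    return colo_fee + mbps * transit_per_mbps

# At modern traffic levels, the interconnection facility wins comfortably.
assert interconnect_monthly(5000) < own_dc_monthly(5000)  # $6,500 vs $23,000
```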
5.3 GROWING REQUIREMENT FOR INTERNET CONNECTIVITY AT THE EDGE

One solution to the ongoing increases in traffic in core networks is storing content (also referred to as 'caching') in servers closer to end users, or at the edge of the Internet. The need for data storage and interconnection at the edge of the Internet is expected to explode over the next few years due to the growth in video and other application content delivered to mobile devices. In addition, the growth of devices that provide streams of data, such as wearable devices, automobiles, machines and houses – in other words, the IoT – is expected to affect the flow of data traffic, shifting it from mostly downstream today (video to end devices) to upstream (end-user devices sending data back to central repositories) and potentially vastly increasing the traffic as well. The challenge here isn't so much the amount of data from a device, but the frequency with which it communicates with a central server, how many sessions the server can handle simultaneously, and the latency between the mobile device and server. In addition, the enterprise customer segment is evolving. In the past, most enterprises opted to keep their datacenter requirements in-house. However, several recent trends, including globalization, ongoing proliferation of Internet-facing applications, ongoing growth of bandwidth-intensive rich media content, the rise of virtualization and cloud
computing, and changing business continuity and disaster recovery needs in light of data sovereignty have led more and more enterprise CIOs to consider and/or choose to outsource some or all of their datacenter requirements. Meanwhile, one of the biggest challenges for datacenter and operations managers is maintaining enough datacenter space and power. With the typical in-house datacenter ranging in size from 2,000 to 40,000 square feet, and with very limited optical fiber availability, many CIOs struggle to virtualize and squeeze their applications into their current datacenters while also trying to justify the necessary capital to connect their existing facilities and/or build new ones. Colocation is an option in many places, with that market growing roughly 10-20% a year, depending on the location. 451 Research forecasts global colocation market annualized revenue to reach $36bn by 2017.

5.4 CLOUD'S IMPACT ON INTERCONNECTION

The growth of cloud computing itself continues to drive the need for interconnect services, and the need for performance and security will also push enterprises toward using interconnection services more in the future. From the origins of interconnection, the path of evolution has given rise to a variety of models for interconnection between enterprises, NSPs and cloud service providers; datacenters are a valuable place for these parties to meet, as described in Section 3.2.2 above. There are strong underlying reasons for enterprises to evaluate interconnection services as part of an overall networking strategy. The first is that hybrid cloud computing will eventually be a reality. Companies want to respond to customers' needs more quickly; doing so requires a digital infrastructure that can quickly ramp up to meet demand.
While CEOs are recognizing that cloud computing is one way to help businesses adapt, business requirements for performance and security may get in the way of that goal. Private cloud is one answer to that problem, but meanwhile, a gap is opening up between demand for cloud and the ability of the enterprise datacenter to meet that demand in a cost-effective fashion. The delineation between public cloud and hosted private cloud workloads highlights what one would intuit: that despite survey after survey stating that security is the top reason not to move to cloud, enterprises will move applications to the cloud given the right assurances. Our research indicates that a hosted private cloud option is able to meet the security and performance requirements of enterprises. It's not just hosted private cloud that will be under consideration for enterprises. A complex hybrid cloud strategy will eventually emerge in which enterprises use a mix of on-premises private cloud, hosted private cloud and public cloud resources to achieve their business goals. There are already signs of this occurring: According to our Voice of the Enterprise: Cloud Computing Customer Insight Survey Results and Analysis (Q4 2014), over the next two years, executives expect that 15% of workloads will run in a hosted private cloud environment, 28% of workloads will run in a mix of hybrid and public cloud venues, and the remaining 58% of workloads will run on-premises.
This shift is underway because enterprises must move toward building a complete digital infrastructure strategy – meaning a strategy that includes orchestrating the use of compute capacity, data storage and applications with a policy-based approach. In the longer term, enterprises will create services and products by dynamically matching and placing workloads at the best execution venue for a job based on cost, performance, legal and other requirements. Interconnection services within the datacenter environment will play a large part in this vision becoming a reality. A secure, high-speed link between cloud provider and enterprise is critical to a successful cloud strategy. To facilitate these connections, cloud providers have been busy building up partner programs with NSPs and MTDC providers. How exactly do MTDC service providers fit in the mix, especially those that bill themselves as carrier-neutral? They can play a key role, both by offering a breadth of NSPs at a facility and by providing interconnection services to the major cloud providers, either via a cross-connect or a cloud exchange platform. Enterprises have long been accustomed to using private connections to hosting environments, but the same hasn't always been the case with public cloud offerings. Cloud providers have been responding to customer demand for better connectivity options by letting customers use a dedicated physical connection to a nearby point of presence. The providers have also been setting up programs that help pair NSPs with enterprise customers. There are a number of advantages to using direct connections to a cloud provider:

• Security – A dedicated, direct link to the cloud provider offers an inherently more secure transport path compared to traversal over the public Internet.
Some providers tout the ability to allow multiple IPsec VPN connections to connect through a dedicated link, allowing multiple branch/remote office locations to use cloud resources.

• Cost – The use of a private connection can sometimes save money because the traffic doesn't have to be routed over the ISP's connection to the Internet – it's sent directly to the cloud provider. Cloud providers such as Amazon will also charge a lower outbound data transfer rate compared to transfer over public Internet links.

• Performance – Latency and bandwidth are more consistent with deterministic routing. Depending on the point of interconnection, performance may be suitable for latency-sensitive workloads that could not be run over a public Internet link.

• Service agility – A variety of hybrid service models can be implemented, including a mix of public and hosted private cloud services, over the same secure, dedicated link. This allows for more flexibility in placing different workloads on resources that have an appropriate price/performance profile.

Amazon, Microsoft (ExpressRoute), SoftLayer and Google (Cloud Interconnect) are among the cloud providers offering interconnect options. Amazon's Direct Connect product has been around the longest; it is a dedicated physical connection from a customer's network into one of Amazon's Direct Connect locations. For an hourly fee, Amazon will provide its customers with a
1Gbps or 10Gbps port into its S3 and EC2 (as well as VPC) environments within any of its Direct Connect locations. Depending on the amount of data to be transferred, a direct connection can be less expensive as well – as an example, uploading data to AWS is free, but downloading using Internet bandwidth on US-East is $0.09 per GB, while downloading using Direct Connect is $0.02 to $0.03 per GB plus the relatively small port charge of $0.30/hour.

5.5 NET NEUTRALITY

Network neutrality is the idea that all traffic running over a network should be treated equally and that content providers or customers cannot have their traffic prioritized, e.g., by paying a higher rate. Network operators have argued that they should be able to charge more to prioritize some content and that otherwise, essentially, they will not earn enough to expand their networks and services. Critics argue that this would allow the largest content providers (or at least those with the largest budgets) to push their content, putting smaller or newer (or potentially more innovative) providers at a disadvantage. Currently, regulatory bodies in various countries are determining what level of Net neutrality they want and what legislation/enforcement is required to achieve it. It is possible that if network neutrality is not regulated and enforced, the number of content providers could shrink. This would reduce the number of potential customers for interconnect providers. However, it is also possible that additional regulation could hurt Internet performance and reduce the adoption of new Internet services – also possibly reducing the number of new service providers and potential customers for interconnect sites. In the meantime, neutrality is the general rule, and this has affected some peering relationships.
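The Direct Connect pricing quoted in Section 5.4 above implies a simple break-even calculation between public Internet egress and a dedicated port. A minimal sketch using only the 2015 US-East rates cited there; the function names and the 730-hour month are illustrative assumptions, and actual AWS pricing varies by region and has changed since:

```python
# Rough monthly cost comparison for data egress from AWS US-East,
# using the 2015 rates quoted in the text (illustrative only).

def internet_egress_cost(gb: float, rate_per_gb: float = 0.09) -> float:
    """Egress billed over public Internet links at $0.09/GB."""
    return gb * rate_per_gb

def direct_connect_cost(gb: float, rate_per_gb: float = 0.03,
                        port_hourly: float = 0.30, hours: float = 730) -> float:
    """Egress over Direct Connect: $0.03/GB (upper end of the quoted
    range) plus the $0.30/hour port charge; 730 hours ~= one month."""
    return gb * rate_per_gb + port_hourly * hours

# Break-even: 0.09g = 0.03g + (0.30 * 730), i.e. roughly 3,650 GB/month.
for tb in (1, 5, 20):
    gb = tb * 1000
    print(f"{tb:>2} TB/month: Internet ${internet_egress_cost(gb):>8.2f}"
          f" vs Direct Connect ${direct_connect_cost(gb):>8.2f}")
```

At low volumes the fixed port charge dominates and Internet egress is cheaper; past a few terabytes per month the per-GB savings win, which is why high-traffic customers are the natural audience for direct connections.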
Many‘eyeball networks’(i.e., big broadband providers such as Verizon in the US) argue that they are carrying too much traffic for particular partners (e.g., content providers) via settlement-free peering relationships. This first became a problem with the growth of file sharing, but the imbal- ance of traffic flows from content providers (YouTube, Dailymotion, Netflix) has led more of the networks to charge for peering. As these arrangements start to look less network-neutral, regu- latory agencies are keeping an eye on these arrangements; in the US, the FCC has stipulated that AT&T provide detailed reports on its interconnection agreements as part of its $49bn acquisition of satellite-TV service provider DirecTV, for example. In addition, partly in response to this poten- tial requirement that content providers pay to prioritize their traffic, some of the larger content providers are setting up their own networks, which we discuss next. 5.6 ‘PRIVATIZATION’ OF THE INTERNET A rapidly growing amount of network traffic is‘private,’i.e., coming from mega-scale cloud providers. For years these large providers have been buying up dark fiber capacity and using it to bypass the Internet to get better end-to-end performance and/or to prioritize traffic. By some estimates, 50% of traffic on undersea cables crossing the Pacific is private. The Internet therefore seems to be fragmenting into a set of mega-scale controlled private networks, with the traditional Internet available for everyone else. This may lead to strong incentives for using
a particular cloud provider's services, particularly if partners/customers are with the same cloud provider, in order to get the best connectivity and/or the best rates. Net neutrality could significantly accelerate this network privatization. In addition, the ability to move mission-critical workloads close to where the customer or clients are – boosting overall performance and availability of services without incurring higher costs – may provide a strong competitive advantage, adding to the appeal of investing in private networks. The growth of these wholly private networks outside of the Internet will have an impact on where cloud providers will need to interconnect to reach end customers, possibly boosting their need for interconnection locations and services. However, it may also reduce the number of content provider competitors, since smaller, newer firms will not be able to afford their own networks. As discussed above, this could then reduce the number of potential customers for interconnection providers.

5.7 COMPETITIVE CHANGES

As we noted above, in the US, public peering exchanges have been run by datacenter operators as private businesses, which means they have been located in the facilities of the owner-operator. In Europe, by contrast, public peering exchanges have generally been cooperatives or nonprofits separate from the datacenter facilities where they are located. They tend to be housed in multiple facilities in a market, belonging to multiple providers. In the US, there are efforts underway to create interconnect systems similar to those in Europe. These include the launch of the Open-IX Association (OIX) and the arrival of European exchanges in the US.
5.7.1 OPEN-IX

OIX is a nonprofit industry group that arose as part of an effort to counter the current US interconnect approach, in which one or two datacenter owners in each market typically have a monopoly/duopoly on public peering there. OIX is not a provider of IX services; rather, it is an association formed by a number of datacenter providers, CDNs, network operators, content providers and others. To build a more resilient peering architecture in North America and boost competition for interconnection services, the idea is to promote a model similar to that found in Europe, in which public peering exchanges are spread across multiple datacenters in a market. OIX has developed a set of interconnection standards to encourage the growth and spread of these public exchanges. Certification by OIX signifies that a company has adopted the OIX standards and can be identified as an OIX datacenter. The OIX Data Center Standards (OIX-2) define a broad range of requirements, including for security, concurrent maintainability, connectivity, and operational and maintenance procedures. The OIX-1 standards define requirements for public peering exchanges. Detailed requirements are available on the Open-IX website. The entities that have been certified so far are listed in Appendix D.
It is difficult to tell what the impact of OIX has been so far. The effort has brought publicity to peering in the US and to possible alternatives to the current system. The major European exchanges have launched in the US, and new public peering exchanges have been launched in several markets as well. There is some pressure on providers to lower the monthly cost of cross-connects or not charge monthly cross-connect fees at all, as is more typically the case in Europe. It is unclear to what extent OIX certification has been the catalyst for all this or whether it is due to overall interest in having more peering options.

5.7.2 EUROPEAN EXCHANGES IN THE US

Several European exchanges have launched operations in the US over the past two years. The Amsterdam Internet Exchange (AMS-IX) launched in New York/New Jersey in November 2013. It is available at 111 8th Avenue (x2), 375 Pearl St and 325 Hudson in Manhattan, and 101 Possumtown Rd in New Jersey. Unlike the other two European exchanges, it is in multiple US markets; it will launch in the Bay Area in September 2015 and in Chicago in October 2015. The German Internet Exchange (DE-CIX) entered the US in November 2013 as well and has installed nine switches in eight buildings in New York/New Jersey: 60 Hudson (x2), 111 8th Avenue, 165 Halsey St, 32 Avenue of the Americas, 325 Hudson St, 85 10th Ave, and 375 Pearl St in Manhattan, and 2 Emerson Lane in Secaucus, New Jersey. The exchange claims that its traffic has doubled since early 2015 and was at 36.08Gbps in April. In 2013, the London Internet Exchange (LINX) launched in three sites in Virginia: EvoSwitch (Manassas), CoreSite (Reston) and DuPont Fabros (Ashburn).

5.7.3 ADDITIONAL COMPETITION

For many years interconnection has been a very local business, with a few providers offering national and international footprints. Competition is increasing, however.
Wholesale providers with deep pockets, such as CyrusOne, Digital Realty and DuPont Fabros, are increasingly interested in interconnection as a way to differentiate their facilities and provide a service that is becoming ever more important to their customers. In the past, wholesale providers ensured that at least two network providers were available for service at a building and then let their customers negotiate with those providers or, if they were customers of another network provider elsewhere, encourage that provider to connect to the datacenter as well. A large choice of network providers was not typically available. Now, however, customers often prefer to have a choice of several network providers at a facility and also like to access SaaS and IaaS providers. They increasingly seek facilities that offer those choices, or that at least connect to facilities that do. Digital Realty recently made a big strategic move, purchasing interconnect-oriented provider Telx in response to its customers' demands for interconnection options and a connectivity story.
There is also a growing number of datacenter operators building, buying or expanding facilities to provide interconnection and peering options closer to the edge of the Internet. They build or buy interconnection space close to end users, in cities outside the top datacenter markets. Examples include Cologix, EdgeConneX and 365 Data Centers. These firms are expanding quickly, in some cases through acquisition. Some competitors are fiber providers such as Allied Fiber and Zayo. Allied Fiber, for example, is building dark fiber networks and providing small datacenters along the route, currently available in the Southeast. This has been particularly useful for mobile operators. ZenFi is doing something similar in Manhattan. Zayo is adding datacenter space along its dark fiber routes across the country. Consolidation has been a key way for interconnect players to expand, since network-dense assets are relatively hard to replicate. Cologix is an example of a firm that has grown through acquisition, as has 365 Data Centers. Equinix is in the process of buying Telecity to grow its network-dense footprint in Europe. As mentioned before, Digital Realty has acquired Telx. These are desirable assets and do not come up for sale very often – we believe consolidation will continue, but in many edge markets, firms will need to build and develop interconnection assets rather than acquire them.

5.8 TECHNOLOGY TRENDS

Some technology trends could potentially impact interconnection. While the hosting industry has been transformed by cloud computing, change has been slower for network services. Just as virtualization of servers was key to igniting the cloud computing revolution, virtualization at the network layer is allowing enterprise networking to move from a focus on appliances and communications links to cloud-delivered services.
We see some possible interconnection impacts from network providers using software-defined networking (SDN) and network functions virtualization (NFV) to provide more innovative network services. Beyond a rather basic vision of bandwidth on demand, some network providers, for example, are looking to provide some of the benefits of interconnection for enterprises (particularly interconnection to cloud providers) through programmable (i.e., API-driven) network services rather than through interconnection facilities. The idea is to encourage enterprises to use one network provider for network, cloud and datacenter requirements rather than multiple providers by pitching ease of use and better visibility into performance of the whole IT stack. Using one provider for most network and datacenter needs would make it less helpful for enterprises to lease space in network-dense facilities, assuming that AT&T or Verizon, for example, could be price-competitive. Such a trend, over the longer term, could potentially result in fewer cloud and SaaS providers overall, which could also reduce the number of customers for network-dense facilities.
SECTION 6
The 451 Take

In the MTDC industry, network-dense carrier hotels are the hardest facilities to replicate, and interconnect-oriented providers therefore often have relatively few competitors in any one location. This is changing, particularly in the US, as investors back new builds with interconnect-focused business plans and providers previously less interested in interconnection, such as some of the wholesale firms, work to develop their own interconnect ecosystems. With the rise of services that depend on network speed and reliability, we believe the demand for interconnection facilities will continue to grow, particularly globally and in markets outside the top 10 in the US as content pushes further to the edge of the Internet. There may be some shifts in business models, particularly as the European interconnection model expands in the US, but overall we believe interconnect providers will continue to grow and obtain a premium for their datacenter space.
APPENDIX A
Glossary

Cloud exchange or cloud connect: A cloud exchange platform is essentially a variation on the virtual cross-connect service. Where an IX platform facilitates the movement of data across the public Internet, a cloud exchange facilitates the connection of a party to a cloud service provider in a private, secure manner rather than via the public Internet. Like an IX, a single port enables access to multiple providers that are colocated in a carrier-neutral datacenter.

Carrier hotels: A carrier hotel is also a colocation facility, but the name typically connotes a facility that has a very high concentration of networks, carriers and service providers. The term also reflects the fact that many of the famous carrier hotels are not single-purpose datacenters, but mixed-use buildings such as One Wilshire in Los Angeles and 60 Hudson Street in New York City. They are often located in the heart of a city's business district, have office space rented to third parties, and weren't built specifically to house computer networks and servers.

Datacenter interconnection: The networking of two or more datacenters for a common business purpose. The datacenters have a physical connection between at least two facilities, and are connected at a designated space within a building.

Direct connections to cloud providers: A type of interconnection that connects a cloud service provider to a customer via a 'direct' connection, with connectivity provided by a carrier partner that links a customer with a fiber or other high-speed connection to the cloud provider's node at a datacenter facility. Examples include Amazon's Direct Connect or Microsoft's ExpressRoute. There are different deployment scenarios. For example, in one, the network interfaces with the cloud provider's compute and storage resources at a third-party datacenter.
In another, the network interfaces with the cloud provider at the connection node in a meet-me room, but the node/switch is itself linked to the cloud provider's own datacenter – which in some cases may be off-site relative to the network node.

IX providers: An IX provider is an entity that manages the infrastructure used by organizations such as carriers, ISPs, hosting companies and CDN service providers to exchange Internet traffic. Peering agreements form the basis for the exchange of traffic. Some IXs are operated as nonprofit, member-based associations. Characteristics of this type of provider include operating a peering fabric, and pricing services in line with the costs of providing the service to its members. The nonprofit IXs don't run or sell colocation services; instead, the peering fabric is installed in a facility managed by a third-party colocation provider – sometimes in multiple providers' facilities in a given region. In the US, a more common model is for the IX to be run as a for-profit service managed by the colocation provider, which is of course also managing the facility and selling space along with the opportunity to participate in the IX peering fabric. The members of the IX in this case are customers of the colocation provider.
As suggested by the above definition, the commercial IX model is the dominant model in the North American market, while the nonprofit, member-based IXs are more commonly found in Europe.

Physical cross-connect: A cross-connect is a means of physically patching (connecting) two customers together via a fiber-optic or copper cable at a patch panel. This was initially used to connect telecom networks together, but can now connect ISPs, content providers, cloud providers or enterprise networks together.

Virtual cross-connect: A virtual cross-connect is a service that allows a customer to connect to a single port to gain access to multiple other parties via a common switch. While a standard physical cross-connect has no electronics involved, being a physical connection of cables, a virtual cross-connect has a switch in the path; the switch is what enables customers to access a wider range of partners than would be physically possible (given space and power constraints) if they were to connect on a 1:1 basis with each partner.
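The space-and-power argument in the virtual cross-connect definition is just combinatorics: patching n parties together pairwise requires n(n-1)/2 physical cables, while a switch-based fabric needs only one port per party. A small illustrative sketch (the party counts are hypothetical, not from the report):

```python
# Illustrative comparison: physical cross-connects needed for a full
# pairwise mesh vs. ports needed on a shared exchange switch.

def full_mesh_links(n: int) -> int:
    """Cross-connects required for every party to patch to every other:
    n choose 2 = n * (n - 1) / 2."""
    return n * (n - 1) // 2

def exchange_ports(n: int) -> int:
    """One port per party on a virtual cross-connect / IX fabric."""
    return n

for n in (10, 50, 200):
    print(f"{n:>3} parties: {full_mesh_links(n):>6} physical cross-connects"
          f" vs {exchange_ports(n):>3} exchange ports")
```

The quadratic growth of the full mesh is why dense peering communities converge on shared fabrics rather than pure 1:1 patching.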
APPENDIX B
Key Carrier Hotels in North American Markets

MARKET: KEY CARRIER HOTEL ADDRESSES
Atlanta: 55 Marietta, 34 Peachtree
Boston: One Summer, 230 Congress
Charlotte: 3100 Intl Airport Drive, 1960 Cross Beam Drive
Chicago: 350 Cermak
Dallas: Infomart, 2323 Bryan
Denver: 910 15th St, 1500 Champa
Houston: 1301 Fannin
Kansas City: 1102 Grand
Las Vegas: Switch SuperNAP
Los Angeles: One Wilshire
Madison: 222 W Washington Ave
Manhattan: 60 Hudson St., 111 8th Ave, 32 Ave of the Americas
Miami: 50 NE 9th St (NAP of the Americas)
Minneapolis: 511 11th Avenue
Montreal: 1250 Boulevard René-Lévesque
New Jersey: Equinix Secaucus, CenturyLink Weehawken
Northern Virginia: 21715 Filigree Court (Equinix Ashburn)
Philadelphia: 401 North Broad St.
Phoenix: 3402 E. University Dr.
Pittsburgh: Allegheny Center Mall, 322 Fourth Avenue
San Antonio: 415 N. Main Ave
San Francisco: 200 Paul St, 365 Main St
Seattle: Westin Building
Silicon Valley: 9-11 Great Oaks, 55 South Market
Toronto: 151 Front St
Vancouver: Harbour Centre - West Hastings St
APPENDIX C
Locations for Direct Connections to Cloud Providers

AWS DIRECT CONNECT LOCATIONS (LOCATION: AWS REGION)
CoreSite 32 Avenue of the Americas, NY: US East (Virginia)
CoreSite One Wilshire & 900 North Alameda, CA: US West (Northern California)
Equinix DC1 - DC6 & DC10: US East (Virginia)
Equinix FR5: EU (Frankfurt)
Equinix SV1 & SV5: US West (Northern California)
Equinix SE2 & SE3: US West (Oregon)
Equinix SG2: Asia Pacific (Singapore)
Equinix SY3: Asia Pacific (Sydney)
Equinix TY2: Asia Pacific (Tokyo)
Eircom Clonshaugh: EU (Ireland)
Global Switch SY6: Asia Pacific (Sydney)
Sinnet Jiuxianqiao IDC: China (Beijing)
Switch SUPERNAP 8: US West (Oregon)
TelecityGroup, London Docklands: EU (Ireland)
Terremark NAP do Brasil: South America (Sao Paulo)

MICROSOFT AZURE EXPRESSROUTE LOCATIONS (PROVIDER: LOCATIONS)
Aryaka Networks: Silicon Valley, Singapore, Washington DC
AT&T: Amsterdam (coming soon), London (coming soon), Dallas, Silicon Valley, Washington DC
British Telecom: Amsterdam, London, Silicon Valley (coming soon), Washington DC
China Global Telecom: Hong Kong (coming soon)
Colt: Amsterdam, London
Comcast: Silicon Valley, Washington DC
Equinix: Amsterdam, Atlanta, Chicago, Dallas, Hong Kong, London, Los Angeles, Melbourne, New York, Sao Paulo, Seattle, Silicon Valley, Singapore, Sydney, Tokyo, Washington DC
InterCloud Systems: Amsterdam, London, Singapore, Washington DC
Internet Initiative Japan: Tokyo
Internet Solutions – CloudConnect: Amsterdam, London
Interxion: Amsterdam
Level 3 Communications: Chicago, Dallas, London, Seattle, Silicon Valley, Washington DC
NEXTDC: Melbourne, Sydney (coming soon)
NTT Communications: Tokyo (coming soon)
Orange: Amsterdam, London, Silicon Valley, Washington DC
PCCW Global: Hong Kong
SingTel: Singapore
Tata Communications: Amsterdam, Chennai (coming soon), Hong Kong, London, Mumbai (coming soon), Singapore
TelecityGroup: Amsterdam, London
Telstra: Melbourne (coming soon), Sydney
Verizon: London, Hong Kong, Silicon Valley, Washington DC
Zayo Group: Washington DC
APPENDIX D
Open-IX Certified Providers

OIX-1 CERTIFIED ENTITIES (ENTITY: LOCATION)
LINX NoVA: Ashburn
AMS-IX Bay Area: San Francisco
DE-CIX NY: New York
AMS-IX Amsterdam: Amsterdam (Netherlands)
Florida Internet Exchange: Miami

OIX-2 CERTIFIED ENTITIES (ENTITY: LOCATION)
CyrusOne: Austin, Cincinnati (2), Dallas, Houston, Phoenix
Continuum: Chicago
DataBank (pending): Richardson
DataGryd: New York
Digital Realty: Dallas, NY (111 8th Ave), Los Angeles, San Francisco
DuPont Fabros: Ashburn, Piscataway
EdgeConneX: Houston
EvoSwitch: Ashburn
Expiris: Middletown
Jaguar Network: Marseille (France)
PhoenixNAP (pending): Phoenix
QTS: Atlanta, Richmond, Suwanee (Atlanta)
Sentinel: Durham, Somerset
Vantage: Santa Clara
Zayo: Atlanta, Miami
INDEX OF COMPANIES
365 Data Centers: 19, 29
Allied Fiber: 4, 29
Amazon Web Services: 13, 14, 16, 18, 25, 26, 31, 34
AOL: 5
Apigee: 13
AT&T: 26, 29, 34
AWS: 13, 14, 18, 26, 34
Ciena: 13
Cisco: 13, 22
Cologix: 19, 29
CoreSite: 4, 19, 28, 34
CyrusOne: 19, 28, 35
Dailymotion: 26
Digital Realty: 4, 19, 28, 29, 35
DirecTV: 26
DuPont Fabros: 19, 28, 35
EdgeConneX: 19, 29, 35
Eloqua: 14
Equinix: 3, 4, 13, 19, 29, 33, 34
EvoSwitch: 28, 35
Google: 13, 14, 17, 23, 25
HubSpot: 14
IBM: 13, 14, 16, 17
ITENOS: 3
Juniper: 13
Marketo: 14
MCI: 3
Microsoft: 13, 14, 16, 18, 25, 31, 34
Netflix: 17, 26
PacBell: 3
Sprint: 5
Telecity: 29
Telx: 4, 20, 28, 29
Verizon: 3, 17, 20, 26, 29, 34
Yahoo: 5, 23
YouTube: 22, 26
Zayo: 17, 20, 29, 34, 35
ZenFi: 29