Fibre Channel Over Ethernet For Beginners
EMC Proven Professional Knowledge Sharing 2011
Bruce Yellin
Advisory Technology Consultant
EMC [email protected]
Table of Contents
Introduction............................................................................................................................ 3
Basic Concepts ...................................................................................................................... 4
Do You Need to Change Cables? ........................................................................................... 7
Costs ..................................................................................................................................... 8
How CEE and FCoE Work ................................................................................... 12
FAQs ................................................................................................................................... 16
Conclusion........................................................................................................................... 18
Footnotes ............................................................................................................................ 22
Disclaimer: The views, processes or methodologies published in this compilation are those of the author. They do not
necessarily reflect EMC Corporation's views, processes, or methodologies.
Method              Speed (bits per second)    Seconds to send a 5 MB iTune
Dial-up modem       56,000                     714
Ethernet 3 Mbps     3,000,000                  13
Ethernet 10 Gbps    10,000,000,000             0.0039
Introduction
We live in a dynamic world where facts change. In grade school, I learned that there were nine
planets in the solar system. As Pluto was demoted in 2006, kids today are taught the fact that
there are eight planets in the sky.
Technology helps create and alter facts. The caveman discovered fire after an act of God
during a lightning storm, and then learned to use it for cooking, lighting, and warmth. The facts
around the creation of fire were eventually altered as mankind learned to create it on demand by
rubbing sticks together or from sparks created by striking a flint. Chemist John Walker's
technological breakthrough invention of the match in 1827, along with other discoveries, taught
us that fire is the combustive oxidation of certain molecules and can be created in many ways.
Technology has also revolutionized computing facts. The ancient mariners would tell you for a
fact that reliable navigation was possible by looking at the stars. This was later refined when the
Antikythera mechanical computer[1], a forerunner to the sextant, brought precision to navigation.
Computing technology continued to evolve leading to modern day marvels such as the GPS.
The fact is that I don't travel very far without my TomTom even though I could still navigate by
the stars; however, I might not end up in the right place!
Just like the flint revolutionized fire, Robert Metcalfe forever changed computer communication
in 1973 when he introduced Ethernet as a way to link computers together. Prior to that, inter-
computer exchanges were difficult and a dial-up phone
modem would need 12 minutes to send a 5 MB iTune.
Ethernet, the first industry standard local network using
thick coaxial cable and vampire taps, reached speeds of 3 Mbps or 13 seconds to send that
iTune. Used by millions of computers, Ethernet has survived numerous changes, readjusting its
definition and speed along the way. Today, Wi-Fi is ubiquitous and fiber-optic networks bring
HDTV to our homes, all based on Ethernet. In the business world, 10 Gbps Ethernet is available
allowing a song to be sent in 4/1000 of a second.
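The arithmetic behind these transfer times is simple; here is a rough sketch (treating the 5 MB song as 40,000,000 bits and ignoring protocol overhead):

# Rough transfer-time arithmetic for a 5 MB song (treated as 40,000,000 bits),
# ignoring protocol overhead and encoding - illustration only.
SONG_BITS = 5 * 1_000_000 * 8

links_bps = {
    "56 Kbps dial-up modem": 56_000,
    "3 Mbps Ethernet": 3_000_000,
    "10 Gbps Ethernet": 10_000_000_000,
}

for name, bps in links_bps.items():
    print(f"{name:22s} {SONG_BITS / bps:10.4f} seconds")
# ~714 seconds (about 12 minutes), ~13 seconds, and ~0.004 seconds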
Until recently, a pressing data center issue was the need for two different networks, one for
Ethernet and one for Fibre Channel (FC). Commercially introduced in 1997 along with point-to-
point and hub connections, FC replaced bulky 50-wire cables that could only attach a small
number of SCSI devices over a very limited distance. While the underlying concept of Ethernet
packets is part of FC, the two are not compatible. As a result, it was impossible to send FC
packets unaltered through an Ethernet system[2] - until now.
Technology has evolved to the point where Fibre Channel over Ethernet (FCoE) using Converged Enhanced Ethernet (CEE[3]) can now send native FC packets through an Ethernet system. Like John Walker's match and fire, FCoE represents a quantum leap in Ethernet technology. The premise of FCoE is that it allows FC storage networking to be transmitted simultaneously with InfiniBand[4], CIFS[5], NFS[6], and iSCSI[7] data over a CEE network on a single cable. The arrival of CEE and FCoE couldn't have come at a better time, as bandwidth- and storage-challenged data centers are consolidated to leverage multi-core processors, virtualization, cloud computing, and networked storage. This paper examines FCoE, its practicality, its applicability in the data center, and future directions.
Basic Concepts
Ethernet and FC were designed with unique goals, evolved independently, and use different
cables and adapters. Ethernet provides multi-user shared file access over a local area network
(LAN) using various protocols and mostly copper cable. It is used for home directories and
VMware VMDKs on file servers, and it transports phone calls with Voice over Internet Protocol
(VoIP) and video with IPTV. Ethernet supports different topologies such as bus and star at
speeds from 10 Mbit/s to 10 Gbits/s using a network interface card (NIC) or motherboard-based
LAN circuitry. There is more Ethernet by far in your data center than any other network.
FC's mission is to support the SCSI[8] protocol to connect single or clustered servers with block mode storage disks and other devices, usually over fiber optic cable, at the fastest speeds with the lowest latencies. As part of your storage area network (SAN), it supports mission-critical databases and decision support systems. FC uses a point-to-point connection scheme (FC-AL[9]) or a switching scheme (FC-SW[10]) at speeds from 1-8 Gbit/s using a host bus adapter (HBA) that
occupies a card slot in a server. Most companies deploy FC in dual fabric configurations to
avoid single points of failure along with multi-pathing automatic failover software.
[Figure: Server before and after FCoE. Before: a server with NICs and an HBA connects to separate Ethernet (LAN) and Fibre Channel (SAN) switches. After: the same server uses CNAs connected to a Fibre Channel over Ethernet switch, and all LAN, SAN, and IPC traffic (CIFS, NFS, iSCSI, backup, management, FC SAN) goes over 10GbE CNAs.]
While Ethernet and FC are often used concurrently in a server, CEE replaces multiple NICs and
HBAs and their unique cables with
a pair of converged network
adapters (CNA) and cables. It allows
all of the data traffic to flow
through the same adapter and
wire, allowing for slimmer
servers, power savings, and
fewer adapters, cables, and
switch ports. It offers substantial operational expense (OPEX) and capital expense (CAPEX)
savings since a single network is deployed in the rack. The diagram illustrates the before and
after FCoE server footprint.
A server's operating system works with the CNA as though it
were a NIC and an HBA rolled into one. From an imaginary
standpoint, think of NICs and an HBA duct-taped together and
put into a single card slot. With FCoE and CNAs, you still have
FC zones and MAC addresses, so the business application is not
altered and in fact is unaware of any changes.
The reason a server has so many NIC ports in this age of virtualization is due to the success
of consolidation. With excess CPU and
memory resources, multiple guests
run on a single ESX server. When
we focus the amount of I/O that ten
or twenty individual servers
generated onto a single server, it
may need extra Ethernet and FC
horsepower, i.e. more NICs, HBAs,
and cables. In addition, you still
need a management NIC port and a dedicated VMotion port. As such, it is not that unusual for
an ESX server to have eight NIC ports. A server with two NIC cards and two HBAs, or even five
NICs, two HBAs, and an InfiniBand HCA can be consolidated into just two CNAs.
[Figure: Rack comparison. A traditional rack of 20 servers, each with eight NIC ports and two FC HBA ports, cabled to four 48-port Ethernet top-of-rack switches and two 24-port FC top-of-rack switches, shown next to a CEE/FCoE rack of 20 servers, each with two CNA ports, cabled to two 20-port FCoE top-of-rack switches with Ethernet and FC uplinks.]
Rack inventory comparison (20 servers per rack)

                    CEE/FCoE Rack Inventory        Traditional Rack Inventory
                    CNA   Ethernet   FC   Total    NIC    FC    Total
Server adapters      40          -    -      40    160    40      200
Top-of-rack          40          -    8      48    192    48      240
Ports                80          8    8      96    352    88      440
Cables               40          8    8      56    192    48      240
In this example, each rack has 20 servers. On the right, each server
has an Ethernet NIC and an FC HBA. These traditional racked
servers have eight NIC ports and two HBA ports. So there are 160
Ethernet cables connecting into 4 x 48 port 1 Gbps Ethernet top-
of-rack switches and there are 40 FC cables connected into 2 x 24
port 4 Gbps FC top-of-rack switches with
small form-factor pluggable (SFP) or
small form-factor plus (SFP+) transceivers.
In the CEE/FCoE rack to the left, each server
has just two CNA ports. So there are 40
CNAs connecting into 2 x 20 port 10 Gbps
FCoE top-of-rack switches using twinax
cables (more about twinax later on). Coming
out of each FCoE switch are 4 Ethernet uplink
connections and 4 FC uplink connections.
The simplicity achieved through FCoE yields a tremendous savings in hardware and labor
costs. By converging LAN and SAN traffic, a
traditional rack of servers needing 200 cables and
440 rack ports is transformed into smaller servers
with 56 cables and 96 ports with CEE/FCoE 72% fewer cables and 78% fewer ports.
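The reduction arithmetic behind those percentages, using the cable and port counts from this rack example:

# Cable and port reduction for the 20-server rack example above.
traditional = {"cables": 200, "ports": 440}   # eight NIC + two HBA ports per server
converged = {"cables": 56, "ports": 96}       # two CNA ports per server (CEE/FCoE)

for item in ("cables", "ports"):
    saved = 1 - converged[item] / traditional[item]
    print(f"{item}: {traditional[item]} -> {converged[item]} ({saved:.0%} fewer)")
# cables: 200 -> 56 (72% fewer)
# ports: 440 -> 96 (78% fewer)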
FCoE works alongside existing Ethernet and FC networks. As a result, it can be implemented in
phases rather than replacing working fabrics. For example, deploying a rack of VMware ESX
servers can be done with NICs and HBAs, or with FCoE CNAs since the underlying protocols
are unchanged. The top-of-rack FCoE switches can connect seamlessly into existing LANs and
SANs, and even connect directly into other devices such as FCoE storage arrays.
You might be wondering, "It sounds simple. Why did this take so long to invent?" The technology required two building blocks to be in place: CEE lossless Ethernet and the new FCoE protocol to natively transport FC packets unaltered over 10 Gbps Ethernet.
1. The 10 Gbps Ethernet standard came out in 2002 and is just now becoming affordable.
2. Ethernet is lossy, meaning when an Ethernet switch gets congested, data packets can
get dropped or delivered out of order (TCP reassembles the packets in the right order).
FC is lossless, so data is sent only if a buffer is available to receive it. CEE is lossless.
To support FCoE, vendors such as Brocade and Cisco have introduced families of switches
such as the Brocade 8000 and Cisco Nexus, and
Emulex, QLogic, and Brocade sell CNAs. Storage
vendors such as EMC, IBM, HP, Dell, NetApp, and others also support CEE/FCoE. Many of
these products and services are discussed in later chapters. CEE is not directly compatible with
older Ethernet implementations; it needs a switch or line card that understands CEE.
Do You Need to Change Cables?
When a data center has been around a long time, its cabling plant may resemble a computer
museum. While equipment may have a useful life of 3-5 years, many shops have ancient 10
and 100 Mbps RJ45[11] Category 5 (Cat 5) Ethernet cables that could have been installed in 1992. In 2002, backwards-compatible RJ45 Cat 6 was just being installed. The latest aqua-colored OM3 fiber cables with Lucent Connectors (LC) fit into SFP+ transceivers, while copper twinaxial cables, called twinax, have the SFP+ transceiver permanently attached to the end of the cable.
CEE can utilize SFP+ fiber, SFP+ twinax, and RJ45 Cat 7/6/6A/6e/5e (all backward compatible
with Cat 5). While Cat 5 is not supported, you might find it works with short distances. Each
cable has its pros and cons. For example, to span a large distance, you want to use OM3 fiber
and LC/SFP+ connectors. If you are wiring up a rack, twinax is very green. If you need to
redeploy Cat 5e or 6, existing RJ45 cables may do the trick. Conversely, fiber can be tricky to
run and active twinax is limited to about 60 feet. 10GBASE-T products that accept the RJ45 Cat
5e (
Cisco Health Care Study
                              LAN & SAN     Unified Fabric    Savings
Power Consumption             147 KW        63 KW             57%
Power & Cooling Costs         $909,000      $390,000          57%
Number of Host Adapters       8,000         4,000             50%
Number of Cables              10,484        5,200             50%
Number of Access Ports        10,000        4,000             60%
While low cost would normally make Cat 6/6A the preferred CNA cable choice, 10GBASE-T's high power consumption has prevented its widespread introduction. In power-challenged data centers, it is not practical to connect racks of servers with RJ45 10GBASE-T Cat 6/6A. Twinax cable uses 60-80X less power (0.2 watts versus 12-16 watts), so multiply that by the number of server racks in your data center and the power/cooling savings really add up! That is why most CNAs come with twinax or LC fiber (SFP+ connectors). If power is not an issue, 10GBASE-T CNA products such as Mellanox's MNPH28C-XTR are available for $585[13]. Other vendors should follow suit with their RJ45-based CNA choices as the circuitry is further miniaturized to reduce the power usage.
To span distances, SFP+ twinax copper's 20m maximum length is too limiting. For inter-rack applications, use RJ45 10GBASE-T for up to 100m or SFP+ optical fiber, which can reach 300m. To decide which to use for end-of-row racks, consider the noise of RJ45 copper versus SFP+ fiber. Noise is measured by a Bit Error Rate (BER). 10GBASE-T's BER is 10⁻¹² (one error for every 116 GB sent) while SFP+ fiber's is 10⁻¹⁸ (one error for every 113,690 TB sent). If you have ever experienced a noisy cable, its BER may have been too high, perhaps resulting in dropped Ethernet packets and a slowdown of data transmissions.
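To put those bit error rates in perspective, here is a small sketch converting a BER into the average amount of data sent per bit error (binary gigabytes and terabytes assumed):

# Convert a bit error rate (BER) into the average volume of data per bit error.
def data_per_error_bytes(ber: float) -> float:
    return (1 / ber) / 8          # one error every 1/BER bits; 8 bits per byte

GiB, TiB = 1024 ** 3, 1024 ** 4
print(f"10GBASE-T, BER 1e-12: ~{data_per_error_bytes(1e-12) / GiB:,.0f} GB per error")
print(f"SFP+ fiber, BER 1e-18: ~{data_per_error_bytes(1e-18) / TiB:,.0f} TB per error")
# ~116 GB versus ~113,687 TB - roughly the figures quoted above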
Costs
Successful companies keep an eye on bottom-line profitability. Your CIO understands that no
matter how great the technology, it has to be affordable, and have a favorable TCO. The
principal components of TCO are CAPEX and OPEX. CAPEX items are typically one-time purchases
such as CNAs, top-of-rack switches, expenses of acquiring assets, and installation labor. OPEX
encompasses long-term items such as salary, electricity costs, maintenance costs, and so on.
Technology costs tend to drop over time as more and more competitors sell similar products or
as manufacturing ramps up production, while OPEX tends to increase over time.
A basic premise of FCoE is that it saves money. A 2009
Cisco case study showed major CAPEX and OPEX
savings with a unified fabric. From a CAPEX standpoint:
1. Requires fewer adapters
2. Needs fewer switches
3. Reduces the number of cables
4. Needs fewer PCIe server slots and permits the use of smaller servers
5. Takes less rack space
Model assumptions per rack
Component            Qty
Servers              20
Ethernet/mgmt        2
Ethernet/data        2
FC                   2
CNA/CEE              2

Component cost basis
Component                                                          Cost
Ethernet 4-port 10/100/1000 adapter                                $400
FC 1-port adapter                                                  $1,000
CNA                                                                $1,600
Ethernet Cat 6/6A 3m cable                                         $10
FC LC-LC 3m cable                                                  $30
Twinax SFP-H10GB-CU3M 3m cable                                     $60
Ethernet switch, 24 ports (Catalyst 3560E-24TD)                    $4,000
Ethernet switch, 48 ports (Catalyst 3560E-48TD)                    $8,000
Ethernet switch, 96 ports (2 x Catalyst 3560E-48TD)                $16,000
FC switch (Cisco 9124, 24 x 4Gb)                                   $10,600
FCoE switch (Cisco Nexus 5010, 20 ports, 4x10GbE, 4x4Gb FC, FMS)   $27,000
Power & cooling, per kWh                                           $0.15
Labor, per cable                                                   $100
From an OPEX standpoint, it:
1. Reduces management and provisioning time
2. Reduces power and cooling requirements
3. Lowers staff costs through unified management
Emulex[14] and Cisco[15] have calculators that analyze the CAPEX and OPEX of CNAs, cables, switches, and other items. To demonstrate FCoE's affordability, let's leverage these calculators
to create a price comparison model for traditional Ethernet/FC server racks versus pure FCoE.
Assume a rack had 20 traditional or CEE/FCoE-based servers. A traditional server might have
four NIC ports: two for Ethernet data traffic, one for management, and one for
VMotion. It also has two FC HBA ports for SCSI storage access. In contrast, an
FCoE server has just two CNA ports. As you would imagine, adding NICs to
the traditional server improves the business case for FCoE since increased connections have
higher initial costs, installation labor, and management effort. Adding NICs also increases power
and cooling requirements, and raises risk; risk is proportional to the number of components.
Conversely, simplification can save time and money.
Now assign a cost to the equipment, keeping in
mind that the price you would pay depends on
your supplier, the quantity you purchase, and so
forth. This table summarizes the cost basis for
each component used in this model. Where I live,
electricity costs about $0.15 a kWh[16].
A quick calculation shows the traditional rack with a 24-port Ethernet and a 24-port FC switch is
$14,600. If a 48-port Ethernet switch is needed, the cost rises to $18,600. The cost is $26,600
when the server has eight NICs. Meanwhile, CEE/FCoE gear costs $27,000, i.e. FCoE
hardware is more expensive than traditional Ethernet and FC hardware. Therefore, a compelling
FCoE TCO has to come from other CAPEX and OPEX areas.
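For reference, the switch-cost arithmetic above works out as follows, using this model's assumed list prices:

# Top-of-rack switch cost per rack, using this model's assumed prices.
eth_24, eth_48, eth_96 = 4_000, 8_000, 16_000   # Catalyst 3560E configurations
fc_24 = 10_600                                  # Cisco 9124 FC switch
fcoe_20 = 27_000                                # Cisco Nexus 5010 FCoE switch

print("Traditional, 24-port Ethernet + FC:", eth_24 + fc_24)    # $14,600
print("Traditional, 48-port Ethernet + FC:", eth_48 + fc_24)    # $18,600
print("Traditional, 96 Ethernet ports + FC:", eth_96 + fc_24)   # $26,600
print("CEE/FCoE, Nexus 5010:", fcoe_20)                         # $27,000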
Anecdotally, labor costs consume 60-80% of the total operational cost of a data center[17], so it is
no surprise that rack wiring is important for the TCO analysis. Labor estimates are hard to pin
down since some businesses use existing staff to wire a rack while others contract out this
Power Factor                                    Value
Ethernet 4-port adapter                         6.0 watts
FC 1-port adapter                               14.4 watts
CNA                                             9.0 watts
Ethernet cable                                  16.0 watts
FC cable                                        2.0 watts
Twinax cable                                    0.2 watts
Ethernet switch, 24 ports                       330 watts
Ethernet switch, 48 ports (same as 24-port)     330 watts
Ethernet switch, 96 ports                       660 watts
FC switch                                       300 watts
FCoE switch                                     350 watts
EER of A/C unit                                 12
[Chart: Energy savings per server with CNAs and cables. Watts of power consumed (0-200 scale) by 2 CNAs and cables, by 2 NICs + 2 HBAs and cables, and by 8 NICs + 2 HBAs and cables.]
Traditional rack: 20 servers, each with two data NICs, two management NICs, and two FC HBAs

             Ethernet   FC   Total    CAPEX Cost   3YR Power/Cool   Total Cost
Adapters           40   40      80      $56,000          $4,132       $60,132
Switches            2    2       4      $37,200          $6,380       $43,580
Cables             80   40     120       $2,000          $6,886        $8,886
Uplinks             4    4       8         $160            $365          $525
Labor                                   $12,800                        $12,800
Total             126   86     212     $108,160         $17,763      $125,923
work, but it is not unheard of to find $20,000 fixed-price rack wiring proposals. Cisco says "Cable installation within a rack can run up to US$200 per cable, and running cabling to patch panels can cost more than US$600 per cable, depending on labor rates and the state of cable infrastructure"[18]. The University of Maryland internally charges $90 per network port[19] and $100 for labor per cable. With this labor charge, an eight NIC/two HBA server costs $1,000 to install. In contrast, the same server with two CNAs costs just $200 to install, i.e. FCoE racks have major CAPEX advantages[20] over traditional racks. It also goes without saying that more ports equal higher ongoing maintenance costs.
Another table provides the power usage for each device[21]. The cost to cool the equipment is based on the estimated efficiency (EER) of the data center air conditioning. It assumes add-in adapters are used for Ethernet server support. Two four-port NICs consume 12 watts of power (6 watts per adapter) and two FC HBAs use 28.8 watts (14.4 watts per adapter), while a pair of CNAs needs just 18 watts, a 56% savings. Hyper-consolidated servers might require more NICs, so the savings from swapping NICs and HBAs for CNAs is significant!
Even more dramatic is the power savings from 0.2 watt twinax cables versus 12-16 watt Cat 6/6A cables[22]. As discussed earlier, the power difference is attributed to the complexity of 10GBASE-T circuitry. In highly virtualized servers, eight Cat 6/6A Ethernet cables and two FC cables collectively use 132 watts while a pair of twinax cables uses just 0.4 watts, a 99.7% power savings. Clearly, servers with CNAs can save a lot of electricity.
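A short sketch of the adapter and cable power arithmetic, using the wattage assumptions from the power table above:

# Per-server adapter and cable power, using the wattage estimates in the table.
W_NIC_4PORT, W_HBA, W_CNA = 6.0, 14.4, 9.0              # watts per adapter
W_CAT6_CABLE, W_FC_CABLE, W_TWINAX = 16.0, 2.0, 0.2     # watts per cable

traditional_adapters = 2 * W_NIC_4PORT + 2 * W_HBA      # 40.8 W (2 NICs + 2 HBAs)
cna_adapters = 2 * W_CNA                                # 18.0 W (2 CNAs)
print(f"adapters: {1 - cna_adapters / traditional_adapters:.0%} savings")   # ~56%

traditional_cables = 8 * W_CAT6_CABLE + 2 * W_FC_CABLE  # 132.0 W (8 Cat 6/6A + 2 FC)
twinax_cables = 2 * W_TWINAX                            # 0.4 W (2 twinax)
print(f"cables: {1 - twinax_cables / traditional_cables:.1%} savings")      # 99.7%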
Tying this all together, a data center rack using a
server with two data NICs, two management NICs,
and necessary adapters, switches, cables, and
labor has a three-year cost of $126,000.
[Chart: Traditional versus FCoE costs. Three-year costs (roughly $100,000-$180,000) plotted against the number of NICs (4, 6, 8) for Traditional, Today's FCoE, and Future FCoE.]
Traditional rack: 20 servers, each with four data NICs, two management NICs, and two FC HBAs

             Ethernet   FC   Total    CAPEX Cost   3YR Power/Cool   Total Cost
Adapters           40   40      80      $56,000          $4,132       $60,132
Switches            2    2       4      $53,200          $9,722       $62,922
Cables            120   40     160       $2,400         $10,127       $12,527
Uplinks             4    4       8         $160            $365          $525
Labor                                   $16,800                        $16,800
Total             166   86     252     $128,560         $24,346      $152,906
Traditional rack: 20 servers, each with six data NICs, two management NICs, and two FC HBAs

             Ethernet   FC   Total    CAPEX Cost   3YR Power/Cool   Total Cost
Adapters           40   40      80      $56,000          $4,132       $60,132
Switches            2    2       4      $53,200          $9,722       $62,922
Cables            160   40     200       $2,800         $13,368       $16,168
Uplinks             4    4       8         $160            $365          $525
Labor                                   $20,800                        $20,800
Total             206   86     292     $132,960         $27,587      $160,547
CEE/FCoE rack: 20 servers, each with two CNAs

             CNA   Total    CAPEX Cost   3YR Power/Cool   Total Cost
Adapters      40      40      $64,000          $1,823       $65,823
Switches       2       2      $54,000          $3,544       $57,544
Cables        40      40       $2,400             $41        $2,441
Uplinks        8       8         $160            $365          $525
Labor                          $4,800                         $4,800
Total         90      90     $125,360          $5,773      $131,133
Should there be four data NICs, perhaps due to
server virtualization, six NIC cables are needed for
each server. Add additional components and the
cost jumps to nearly $153,000 for three years.
In highly virtualized environments, you could easily
have six data NICs and two management NICs in
the server. At that point, the analysis shows the
three-year cost exceeds $160,000.
FCoE saves $21,773-$29,414 when consolidating servers with six or eight NICs. Cable count is
reduced 80%, making management and
troubleshooting easier. While adapters, switches,
and cables are currently more expensive than
traditional Ethernet and FC components, the labor
savings and lower power and cooling profile already make FCoE the clear winner! Keep in mind
that OPEX tends to increase over the lifespan of equipment as labor and power costs rise every
year. And if you advocate green technology, you will love the 80% FCoE power and cooling
savings!
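The three-year comparison can be summarized from the tables above with a few lines (totals are this model's figures, not quotes):

# Three-year totals per rack (CAPEX plus three years of power and cooling),
# taken from the tables above.
three_year_cost = {
    "traditional, 4 NIC + 2 HBA ports": 125_923,
    "traditional, 6 NIC + 2 HBA ports": 152_906,
    "traditional, 8 NIC + 2 HBA ports": 160_547,
    "CEE/FCoE, 2 CNA ports":            131_133,
}

fcoe = three_year_cost["CEE/FCoE, 2 CNA ports"]
for config, cost in three_year_cost.items():
    print(f"{config:34s} ${cost:,}  (vs FCoE: {cost - fcoe:+,})")
# FCoE saves $21,773 against the six-NIC rack and $29,414 against the
# eight-NIC rack; the lightly configured four-NIC rack is still a bit cheaper.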
As this graph shows, the TCO of FCoE gets
better over time. CNAs should reach HBA
price points as market demand increases
and more advanced components become
available. Meanwhile, higher CNA costs are
offset by a reduced adapter count, smaller
servers, fewer cables and switches, lower
power and cooling requirements, and
reduced management costs. Switch prices from Brocade and Cisco will likely decrease.
Running out of space or power in your data center? If so, then equipment CAPEX may not be
critical to the TCO. FCoE offers a reduction in floor space by achieving high server density per
rack. For example, it is possible to fit 32 1U servers in a rack with CNAs because they need
fewer PCIe slots compared to multiple NICs and HBAs. This may be further spurred on by FCoE
integrated motherboard designs, perhaps with ASICs like the Chelsio Terminator 4 chipset[23].
Even without integrated designs, the fewer PCIe slots a server contains, the lower the power
impact since every PCIe adapter has a power budget of up to 25 watts. Since a pair of CNAs
replaces all the Ethernet and FC adapters in the server, it needs only two PCIe slots. That
means FCoE can save 50 watts per server.
How CEE and FCoE Work
The goal of CEE and FCoE is to simplify disparate parallel networks by supporting them over a
modified Ethernet 10 GbE network. Looking in your data center, you see multiple networks:
- The traditional multi-purpose Ethernet LAN, which typically transports small amounts of end-user data, IP storage traffic locally and remotely, device and cluster management, dedicated VMware VMotion links, VoIP, iSCSI, and other LAN traffic.
- High-speed FC SANs that support mission-critical databases, email, important systems, and other highly available, high bandwidth, low overhead lossless systems.
- InfiniBand for clustered servers.
CEE/FCoE combines these networks and unique adapters into a single common network with
single- or dual-port CNAs for efficient I/O consolidation. Drivers
exist for Windows, Linux, VMware ESX, AIX, and Solaris. Check
your vendors support list for the latest information.
From an application perspective, the CNA is indistinguishable from an HBA or a NIC since the server has no idea it is communicating with a CNA; nothing changes. It has both an Ethernet
driver and a FC driver rolled into one. If this sounds far-fetched, then perhaps a telephone
analogy will help. When you use a super-advanced IP telephone, an iPhone, or even a
BlackBerry to call someone who answers with an old rotary dial phone, the call goes through
without any issues and neither side knows the other side is using different phone equipment.
[Figure: FCoE frame format. A jumbo Ethernet packet of up to 2180 bytes carries an Ethernet header (16 bytes, type = FCoE), an FCoE header (16 bytes, with SOF/EOF), the FC header (24 bytes), the variable-size FC payload (0-2112 bytes), a CRC (4 bytes), an EOF (4 bytes), and the FCS (4 bytes). A full Fibre Channel frame is fully contained in one Ethernet frame, so jumbo frames are used.]
[Photos: active twinax (SFF-8461) and passive twinax (SFF-8431) cable connectors.]
The CNAs can use either optical or copper cables, but aqua-colored optical OM3 cables require the card to have add-on optical SFP+ GBICs, which tend to be expensive. Copper twinax comes with copper SFP+ GBICs built in and costs less than optical. And as we've discussed, RJ45 copper is re-emerging as a viable solution in 2011. Optical cables come in longer lengths and are useful for attaching top-of-rack switches to other core switches at distances up to 300 meters, but are not as power efficient as twinax.
There are actually two types of twinax that can be used with CEE/FCoE: passive and active. Passive twinax (SFF-8431) is used within a rack
because it is limited to 5-7 meter lengths. Active twinax (SFF-8461)
can be used both in the rack and between racks as it can span 20
meters. In the close-up to the right, you will see the active connector
contains an extra finger on the connection that is absent on the
passive cable. Active cables are built to provide transmit and
receive equalization in the SFP+ connector instead of the circuit
board, enabling the greater distance. Active cables tend to be
thinner, and thinner cables allow for better cable management and even greater power
efficiency. It turns out that 80% of cables in the data center are less than 30m in length[24], so it
is very possible that active twinax with some additional improvements will someday be prevalent
in the data center. Before deciding on active or passive twinax, make sure your solution
supports them. For example, the Brocade 1010 and 1020 CNAs expect active twinax cables.
From an Ethernet packet standpoint, a jumbo packet completely holds a
standard FC packet without modification; that is what
preserves the FC protocol. As the jumbo packet
is processed by the top-of-rack CEE/FCoE
switch, it determines if the payload is for another
FC switch (i.e. a core-edge design), an FCoE storage
device, or a NAS storage unit.
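As a rough illustration of that encapsulation, the sketch below packs a hypothetical FC frame into an Ethernet frame using the byte counts from the frame diagram; it is not a wire-accurate encoder:

# Illustrative sketch: an entire FC frame rides, unmodified, inside the payload
# of a (jumbo) Ethernet frame. Byte counts follow the frame diagram above;
# the headers here are zero-filled placeholders, not a wire-accurate encoder.
def build_fc_frame(payload: bytes) -> bytes:
    assert len(payload) <= 2112, "FC payload is limited to 2112 bytes"
    fc_header = b"\x00" * 24      # FC header (routing, source/destination IDs, ...)
    crc = b"\x00" * 4             # FC CRC
    eof = b"\x00" * 4             # FC end-of-frame
    return fc_header + payload + crc + eof

def encapsulate_fcoe(fc_frame: bytes) -> bytes:
    ethernet_header = b"\x00" * 16    # MACs, VLAN tag, EtherType = FCoE
    fcoe_header = b"\x00" * 16        # version, reserved bits, SOF
    fcs = b"\x00" * 4                 # Ethernet frame check sequence
    return ethernet_header + fcoe_header + fc_frame + fcs

frame = encapsulate_fcoe(build_fc_frame(b"SCSI data block " * 64))
print(len(frame), "bytes - fits within one ~2180-byte jumbo Ethernet frame")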
There are many differences in how Ethernet and FC handle data. To begin with, FC is lossless
in terms of packet arrival guarantees. This is accomplished through buffer-to-buffer credits, e.g. with buffer credits set to 5, only 5 FC data packets are sent before an acknowledgement must be received. Ethernet has no concept of buffer credits and in fact is called lossy, i.e. packets
can be lost when the network is congested, requiring them to be retransmitted. So for Ethernet to carry FC traffic, it needed to be made lossless as well.

[Figure: Priority flow control. Transmit queues on one end of an Ethernet link and receive buffers on the other, one per Class of Service (CoS) lane 0-7; the email lane (CoS 3) issues a PAUSE/STOP when its receive buffer is nearly full.]
CEE overcame the lossy limitation through a selective PAUSE command implemented within
the priority flow control (PFC) mechanism. PFC, based on a setting of eight Class of Service
(CoS) virtual lanes, can issue a PAUSE equal to the time (quanta) it takes to send 512 bits of
data at the current network speed. An application can be associated with a CoS priority; for example, SAN and LAN traffic can have higher priorities than email traffic. This illustration shows that the receive buffer for the email application, assigned to CoS #3, has room for two more 512-byte frames before it drops a frame (blue solid oval), so a PAUSE is sent to the transmit queue on the left to stop sending frames (red dashed oval) until there is room to accommodate them. Hence, it becomes lossless.
The PAUSE is selective because there is no need to halt the other 7 CoS lanes of traffic, similar to a traffic light controlling car congestion on a road. At any point in time, a particular
virtual lane of traffic can be issued a specific red light, or in the absence of any congestion, a
green light. As shown by the colors, each virtual lane can represent separate traffic flows. With
the email example in lane #3, a SCSI storage device might have needed extra time to process
inbound FC data, so the PAUSE is removed when it is ready to receive more data.
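A minimal sketch of the selective PAUSE idea, using a hypothetical per-lane buffer count and threshold (real PFC works on hardware buffer occupancy and pause quanta):

# Minimal sketch of priority flow control (PFC): pause only the congested CoS
# lane and leave the other seven running. Buffer counts and the threshold are
# made-up illustration values.
NUM_LANES = 8
PAUSE_THRESHOLD = 2                              # frames of free space left
free_frames = {lane: 10 for lane in range(NUM_LANES)}
free_frames[3] = 2                               # email lane (CoS 3) nearly full

def lanes_to_pause():
    """CoS lanes the receiver asks the transmitter to PAUSE."""
    return {lane for lane, free in free_frames.items() if free <= PAUSE_THRESHOLD}

print("pause:", lanes_to_pause())                # {3} - only email traffic stops
# Once frames drain and free space rises above the threshold, lane 3 drops out
# of the PAUSE set and its transmit queue resumes sending.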
If network bandwidth were unlimited, there clearly would not be a need to assign priorities or
regulate traffic flow. Given that bandwidth is finite, another CEE enhancement allows the
administrator to prioritize traffic over a single link. The Enhanced Transmission Selection (ETS)
mechanism is used to set the minimum and maximum portion of the link each particular traffic
slice should have. It was structured so that if the full bandwidth of one slice was not needed, it
could be doled out to other needy slices as long as it did not go below any specified minimums.
Both PFC and ETS parameters are communicated between switches and CNAs using a
mechanism called Data Center Bridging eXchange (DCBX).
CoS traffic class    Min   Max
InfiniBand            2     3
SAN                   3     3
LAN                   3     6
Email                 0     3
For example, if there were four CoS traffic slices going over a 10 Gbps link, it might look like the
Offered Traffic in the figure below. At 1PM, InfiniBand might have 3 Gbps of data to transmit
reducing to 2 Gbps by 3PM. SAN traffic is steady at 3 Gbps of traffic from 1-3PM. LAN traffic
increases from 3 Gbps to 6 Gbps over the next 2 hours. Email has 3 Gbps reducing to 1 Gbps
at 3PM.
At 1PM, the link is already at full utilization so InfiniBand needing 3 Gbps only gets its minimum
of 2 Gbps. SAN and LAN traffic get their minimums of 3 Gbps. Email, like InfiniBand, needs 3
Gbps of the bandwidth but only gets 2 Gbps. At 2PM, LAN traffic
increases to 4 Gbps and gets 1 Gbps more bandwidth at the expense of
email, because email has a minimum of 0 Gbps while the other traffic classes are already at their CoS minimums. By 3PM, LAN traffic increases again to 6 Gbps but can only
negotiate another 1 Gbps at the expense of email traffic. With email having a minimum of 0
Gbps, it gets no link bandwidth, stopping all email traffic. This dynamic is easily changed over
time by the policy administrator. Clearly, should this condition continue, the administrator may
want to change the CoS min:max settings or add more bandwidth in the form of another link.
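A simplified sketch of ETS-style allocation over the 10 Gbps link, mirroring the min/max logic above; how leftover bandwidth is shared among classes that want more than their minimum is a policy choice, so this sketch reproduces the 2PM and 3PM allocations but may differ at 1PM:

# Simplified ETS-style allocation of a 10 Gbps link. Each class first receives
# min(demand, guaranteed minimum); leftover bandwidth is then handed out up to
# each class's maximum, in dictionary order. Illustration only.
LINK_GBPS = 10
classes = {                 # (min, max) per the CoS table above
    "InfiniBand": (2, 3),
    "SAN":        (3, 3),
    "LAN":        (3, 6),
    "Email":      (0, 3),
}

def allocate(demand):
    alloc = {c: min(demand[c], mn) for c, (mn, mx) in classes.items()}
    leftover = LINK_GBPS - sum(alloc.values())
    for c, (mn, mx) in classes.items():
        extra = min(leftover, demand[c] - alloc[c], mx - alloc[c])
        if extra > 0:
            alloc[c] += extra
            leftover -= extra
    return alloc

print(allocate({"InfiniBand": 2, "SAN": 3, "LAN": 4, "Email": 3}))  # 2PM case
print(allocate({"InfiniBand": 2, "SAN": 3, "LAN": 6, "Email": 1}))  # 3PM case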
[Figure: Offered traffic versus realized utilization of a 10 GE link at 1PM, 2PM, and 3PM for InfiniBand/HPC traffic (min 2, max 3), SAN/storage traffic (min 3, max 3), other LAN traffic (min 3, max 6), and email traffic (min 0, max 3).]

When a CEE switch becomes congested, it can throttle back the offending CNA. CEE monitors queue lengths and, when a switch is getting low on frame buffers, it uses Congestion Notification (CN) rather than PAUSE to instruct specific CNAs to slow their transmissions. By reducing traffic on the edges, the traffic impact on the switch is eased.

[Figure: Spanning Tree Protocol versus TRILL. Six switches connect Server A and Server B; with STP only the spanning-tree path is usable, while with TRILL traffic takes the shortest path and can be load balanced across multiple paths.]
In the world of traditional Ethernet, with a large number of devices communicating with each
other, transmission loops can form that slow down data traffic. The Spanning Tree Protocol
(STP) is a static routing approach created to protect against looping and to provide for optimal
link flow by assigning a primary path between two Ethernet devices. Alternate traffic paths are
not used unless there is a primary path failure, at which point the network re-converges using
the new path. Re-convergence adds overhead to the network, and if the network is very large
the delay can be very noticeable. The other STP weakness is that these redundant paths sit idle when they could be put to good use to help speed traffic if organized properly.
CEE introduces Transparent
Interconnection of Lots of Links
(TRILL) to provide intelligent,
dynamic shortest-path logic, something STP did not
accomplish. TRILL understands
all the paths and traffic loads,
and uses multi-path load
balancing between them. If a
path failure occurs, the traffic continues as the network re-converges. In this example, while
STP on the left provides a path between server A and B, the route it takes is not very
efficient. On the right, TRILL routes the traffic through the optimal path, and could even use
multi-pathing if there were a lot of traffic between A and B.
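Conceptually, the difference is a single fixed tree versus shortest-path forwarding over the full topology. Here is a toy sketch with a hypothetical six-switch network (an illustration of the path-selection idea, not the TRILL protocol itself):

# Toy comparison of a fixed spanning-tree path with a shortest path computed
# over the full topology (the idea behind TRILL). Hypothetical six-switch
# network; this is not the TRILL protocol itself.
from collections import deque

links = {1: [2, 4], 2: [1, 3], 3: [2, 6], 4: [1, 5], 5: [4, 6], 6: [3, 5]}

def shortest_path(src, dst):
    """Breadth-first search - every physical link is usable."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in links[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])

# Suppose STP blocked the 5-6 link to break the loop, forcing a long detour
# between Server A (on switch 4) and Server B (on switch 6):
stp_path = [4, 1, 2, 3, 6]
print("STP path  :", stp_path)                 # four hops over tree links only
print("TRILL path:", shortest_path(4, 6))      # [4, 5, 6] - two hops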
One of the issues with FCoE occurs not with the technology per se, but organizationally. Since
Ethernet and FC are being brought together, you may find FCoE falling under the domain of
your networking group rather than your storage group. So you might have network
administrators learning more about storage or storage administrators learning more about
network administration. Either way, teamwork will be required in the short run.
FAQs
Q. Does FCoE need 10 Gbps or will it run at 1 Gbps?
A. FCoE requires 10 Gbps and does not work at lower Ethernet speeds. CEE does not run on
traditional gigabit Ethernet networks.
Q. Why do we need a new Ethernet?
A. CEE is an enhanced Ethernet designed to be lossless. It supports FCoE and allows
multiple networks to be consolidated into one network. CEE also supports multi-pathing, classes
of service, and other features.
Q. What are the benefits of FCoE?
A. Combining Ethernet with FC provides for significant OPEX and CAPEX savings by using
fewer adapters and switches. It also uses less power, fewer cables, and less time to manage.
Q. What is a SFP+?
A. An SFP+ is a small, low-power, low-cost transceiver
providing 10 Gbps support. When used with OM3 (aqua)
fiber and LC connectors, the SFP+ transceiver is built into the
device. With twinax, the SFP+
transceiver is part of the cable.
Q. With FCoE, are zoning and LUN masking still needed?
A. Yes you are still configuring and administering native FC.
Q. What is the major difference between FCoE and iSCSI?
A. While both use Ethernet, FCoE is designed to be lossless, whereas iSCSI requires dropped
frames to be retransmitted.
Q. Can FCoE and traditional Ethernet and FC coexist?
A. Yes the initial rollout of FCoE includes the ability for top-of-rack CEE switches to attach to
traditional Ethernet and FC switches. The underlying Ethernet and FC remains the same.
Q. Is there a bandwidth difference between Ethernet and FC?
A. Theoretically, Ethernet is 97% efficient using 64b/66b encoding or 66 bits to send 64 bits of
data while FC at 1/2/4/8 Gbps is 80% efficient using 8b/10b encoding.
From its origins, 1 Gbps FC sends 1.0625 Gbps, so 8 Gbps FC is 8 x 1.0625 = 8.5 Gbps. At
80% efficiency, 8.5 Gbps x 0.8 = 6.8 Gbps of usable data. 10 Gbps FCoE yields 9.7 Gbps of
usable data. Starting with 10 Gbps, FC uses 64b/66b encoding to become as efficient as
Ethernet.
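That usable-bandwidth arithmetic can be checked in a couple of lines:

# Usable data rate after encoding overhead.
fc_line_rate = 8 * 1.0625                 # 8 Gbps FC signals at 8.5 Gbps
fc_usable = fc_line_rate * (8 / 10)       # 8b/10b encoding, 80% efficient
fcoe_usable = 10 * (64 / 66)              # 64b/66b encoding, ~97% efficient
print(f"8 Gbps FC   : {fc_usable:.1f} Gbps usable")    # 6.8 Gbps
print(f"10 Gbps FCoE: {fcoe_usable:.1f} Gbps usable")  # 9.7 Gbps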
Q. What does a speed rating of 10 Gbps mean?
A. One of the most misunderstood concepts is quoted speed. For example, 2 Gbps is not twice as fast as 1 Gbps; they are both the same speed because they both transfer at near the speed of light. What is different is the bandwidth; think of 10 Gbps as a ten-lane highway where all the cars travel at the same speed. By the way, light in a fiber cable travels at about 70% of the speed of light in a vacuum.
Q. Which is more efficient, FCoE or FC?
A. In terms of efficiency, 10 Gbps FC is about 1-2% more efficient than 10 Gbps FCoE because
FCoE encapsulates the entire FC frame and still needs a few more bytes for Ethernet overhead.
Q. Does FCoE support VLANs?
A. Yes. CNAs running FCoE appear as genuine NICs.
Q. What speed is likely after 10 Gbps CEE/FCoE?
A. The industry is moving towards 40 Gbps and 100 Gbps CEE/FCoE.
Q. Can I use a CNA with a standard FC switch?
A. No. A CEE switch such as a Cisco 5020 or Brocade 8000 is needed since a standard switch
cannot decode the enhanced Ethernet protocol.
Q. While storage vendors are moving to support FCoE natively, will I still be able to use a FC
tape drive?
A. Yes. The FC protocol is not changing. You will need a currently shipping CEE switch that
supports FCoE and FC.
Conclusion
Ethernet and FC have been on independent journeys until recently and required separate
infrastructures, but with the introduction of CEE the facts are about to change. CEE and FCoE,
conceived in early 2007 and an ANSI standard by mid-2009, are delivering on the original vision[25] to provide a robust architecture uniting separate but interdependent physical networks.

[Figures: Cisco/EMC deployment diagrams. Rack servers with 10Gb CEE CNAs connect to Cisco Nexus 5010 top-of-rack switches; other servers attach over 1Gb or 10Gb Ethernet. In the first diagram the Nexus switches feed a Cisco Catalyst 6513 Ethernet LAN (1/10Gb IP) and a Cisco MDS 9513 FC SAN (8Gb FC) in front of an EMC Celerra unified storage system; in the second, 10Gb CEE runs from the Nexus switches through a Cisco Catalyst or Nexus switch to the Celerra over 1/10Gb IP and 8Gb FC.]
At the end of 2008, Cisco introduced the Nexus 5000 and Brocade followed nine months later
with its 8000, the first CEE/FCoE switches. This Cisco/EMC diagram shows how the
technology started with top-of-rack
Nexus switches connecting into a
standard Catalyst Ethernet LAN
or MDS FC SAN. Rack servers
would have their CEE traffic broken into either
Ethernet traffic or FCoE FC traffic to access Celerra file
systems or SCSI block devices. With this
implementation, neither device supported hops, i.e. spanning the link between two CEE switches. A similar diagram could also be constructed using
Brocade equipment.
CEE/FCoE's development continued in mid-2010 with a second-generation design that allowed
FCoE traffic to flow (or hop) between top-of-rack and core FCoE switches with the enhanced
Ethernet frame intact. This was necessary to allow for practical large data center deployments.
FCoE now appears in core switches and in a blade form-factor for existing Cisco MDS and Brocade
DCX SAN directors. Optical OM3 is used to connect SAN switches to FC storage ports.
Today's version of CEE/FCoE provides a unified fabric. With an end-to-end FCoE server
attachment, native FCoE SCSI storage devices can now be accessed without changing the
protocol and without the need for a SAN
switch, or even with LANs attached
through a Cisco Catalyst or Nexus
CEE switch. Storage frames offer
FCoE natively with either a twinax or
SFP+ optical interface. Servers can now access both file
systems and SCSI blocks over a single cable. With this ability,
the industry expects to see CEE/FCoE shipments really take off.
As standard RJ45 cables[26] are used in place of twinax for even
lower cost rack wiring, albeit with a higher power usage profile, CEE/FCoE will become even
more popular.
While CEE/FCoE will continue to make rapid advances, it will not signal the end of either
traditional Ethernet or FC since there is still an important need for the native topologies.
40 Gbps and 100 Gbps Ethernet were ratified in June 2010, so shipments of these products are expected this year. FC bandwidth is also expected to double to 16 Gbps this year, and 32 Gbps is possible after that. CEE/FCoE will also benefit from this FC innovation. 40GBASE-T over Cat 7A+ cabling, forecast for 2012, will support 40 Gbps. Projections from both IDC and the Dell'Oro Group show that by 2013, FCoE revenue will exceed FC revenue.
On the CNA front, organizations such as Open FCoE (open-fcoe.org) are working on low-cost
software implementations of the CNA function that use a standard 10 GbE NIC. They currently
have a driver that is part of the 2.6.29 Linux kernel.
To summarize, what we have is a robust set of products that saves money and time and is
positioned for the future. If you are excited about using CEE/FCoE in your company, these
areas could be a great place to start:
Opportunity #1
Add a rack of FCoE servers to your existing Ethernet and FC environment. Coexistence
of FCoE and FC make this server consolidation a low risk/high gain opportunity to prove
the technology in your environment.
Opportunity #2
Server consolidation - committing to server consolidation with products such as VMware
and Hyper-V tends to need servers with a great amount of Ethernet and FC bandwidth at
a low cost point. The driving force could be limited space or environmentals.
Opportunity #3
Next-generation data center - as exponential data growth continues, disaster
recovery/business continuity protection is even more important. If there is a new data
center on your roadmap, rolling out dense racks with a single cable infrastructure makes
sense. It saves on administrative costs and allows for denser, smaller servers.
Opportunity #4
Blade servers - the number of PCIe bus slots is limited, so reducing the number of
disparate networks is very important.
Opportunity #5
The push for green and ease of deployment - power and cooling savings seem to be at the top of everyone's list, and rack cable management continues to be problematic.
CEE/FCoE can easily be phased in to existing environments.
We have seen that CEE/FCoE can lower your acquisition and operating costs by reducing
componentry today with additional savings around the corner as production costs decrease and
competition increases. At the same time, the risk of switching to a new approach is minimized
since your underlying applications still work the way they do today. Not a bad value proposition: faster, cheaper, and better!
Footnotes
[1] We are still learning about this device, but it is speculated that it could calculate the position of the Sun, Moon, or other astronomical information such as the location of other planets. http://en.wikipedia.org/wiki/Antikythera_mechanism
[2] iSCSI has been around for about a decade and is somewhat popular with small-to-medium businesses, but it is fundamentally different from FC in how it encapsulates SCSI commands. It also tends to have higher overhead and to be slower than FC. If the Ethernet network was over-subscribed, iSCSI provided less predictability than FC.
[3] Converged Enhanced Ethernet (CEE) is the name used by Brocade, IBM, EMC, Emulex, QLogic, and many others. Convergence Enhanced Ethernet is IBM's trademark name for CEE. Cisco used to call it Data Center Ethernet (DCE), but now trademarks their offering as Data Center Bridging, the same name used by the IEEE.
[4] InfiniBand uses a Host Channel Adapter (HCA) to provide high-performance switched protocol services.
[5] Common Internet File System (CIFS) - an example is your MS-Windows N:\ drive assignment.
[6] Network File System (NFS) - examples are mount points such as \home.
[7] Internet SCSI (iSCSI) uses the IP protocol to carry SCSI commands to block mode devices.
[8] Small Computer System Interface (SCSI) - a set of physical and logical standards for connecting servers and storage devices.
[9] Fibre Channel Arbitrated Loop (FC-AL) - either point-to-point direct connections between servers and storage, or a loop device that allows multiple servers to be attached to SCSI devices.
[10] Fibre Channel Switched Protocol (FC-SW) - connections between servers and SCSI devices that flow through intelligent switches.
[11] RJ45 stands for Registered Jack 45. Telephone companies designed it to support 8 positions, 8 contacts/connectors, or 8p8c.
[12] The designation 10GBASE-T stands for 10 Gb speed, baseband, twisted-pair.
[13] http://www.colfaxdirect.com/store/pc/viewPrd.asp?idproduct=219
[14] http://www.emulex.com/files/tools/FCoE-Calculator.html
[15] http://www.cisco.com/assets/cdc_content_elements/flash/dataCenter/nexus5k_tco_calc/cisco.html
[16] http://www.eia.doe.gov/electricity/epm/table5_6_b.html
[17] http://public.dhe.ibm.com/software/solutions/soa/pdfs/WebSphere_Cloudburst_ImpactonLabor.pdf
[18] http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9670/white_paper_c11-468838.html
[19] The University of Maryland (http://www.oit.umd.edu/ops/rdc/costs.html) charges $90 per network port to install the connection and $19/month to maintain the connection.
[20] Assumes your company counts wiring installation as a capital expense.
[21] Cisco documents that the wattage for the 48-port Ethernet switch is the same as their 24-port switch.
[22] http://www.ciscosystems.com.ro/web/DK/assets/docs/presentations/Aurelie-Fonteny-N5K2K1K-external.pdf
[23] http://www.chelsio.com/assetlibrary/whitepapers/Chelsio%20T4%20Architecture%20White%20Paper.pdf
[24] http://www.phyworks-ic.com/enabling_10g_copper.php
[25] Fibre Channel over Ethernet was proposed to the ANSI committee on April 4, 2007 by Emulex, Brocade, Cisco, Nuova, EMC, IBM, Intel, QLogic, and Sun. It became an ANSI standard on June 4, 2009. On August 11, 2009, NetApp announced a native FCoE storage product.
[26] Cat 6 UTP (Unshielded Twisted Pair), which is limited to less than 55 meters, and Cat 6A UTP with a 100 meter limitation, are part of a 10GBASE-T initiative under consideration for FCoE. To use Cat 6/6A, a different CNA will be needed.