Converged Networks: FCoE, iSCSI and the Future of Storage Networking
DESCRIPTION
Presentation from EMC World 2010 (Boston)
TRANSCRIPT
Slide 1 – © Copyright 2010 EMC Corporation. All rights reserved.
Converged Data Center: FCoE, iSCSI, and the Future of Storage Networking
Stuart Miniman, Technologist, Office of the CTO, EMC Corporation
Slide 2
Agenda
The Journey to Convergence
Protocols & Standards Update
Solution Evolution
Conclusion and Summary
Slide 3
Rack Server Environment Today
Servers connect to LAN, NAS and iSCSI SAN with NICs
Servers connect to FC SAN with HBAs
Many environments today are still 1 Gigabit Ethernet
Multiple server adapters, multiple cables, and added power and cooling costs
Storage is a separate network (including iSCSI)
[Diagram: rack-mounted servers connect over 1 Gigabit Ethernet NICs to the Ethernet LAN and to the iSCSI SAN, and over Fibre Channel HBAs to the Fibre Channel SAN and its storage]
Today, less than 30% of servers in the data center are SAN-attached to storage
Slide 4
The iSCSI Story
Transports storage traffic (SCSI) over standard Ethernet
Reliability through TCP
SCSI itself has limited distance; iSCSI offers even more flexibility than FC because it can be routed over IP
1Gb iSCSI has good performance
iSCSI has thrived, especially where the server, storage, and network administrators are the same person
[Diagram: the initiator's iSCSI protocol stack, SCSI over iSCSI over TCP over IP (optionally with IPsec) over the link layer, carried across an IP network]
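To make that layering concrete, here is a minimal Python sketch of the stack on this slide. Port 3260 and the SCSI Command opcode 0x01 are the standard iSCSI values, but the target address and the toy 8-byte PDU header are illustrative assumptions; a real iSCSI Basic Header Segment is 48 bytes (RFC 3720).

```python
# Sketch only: shows the layering (SCSI inside an iSCSI PDU inside a
# TCP/IP stream), not a real iSCSI implementation.
import socket
import struct

ISCSI_TARGET = ("192.0.2.10", 3260)   # hypothetical target; 3260 is the IANA iSCSI port

def build_pdu(opcode: int, payload: bytes) -> bytes:
    """Toy PDU: an 8-byte header (opcode + length) plus payload.
    A real iSCSI Basic Header Segment is 48 bytes with many more fields."""
    header = struct.pack("!BxxxI", opcode, len(payload))
    return header + payload

def send_scsi_over_ip(scsi_cdb: bytes) -> None:
    # TCP is the reliability layer the slide mentions: drops are
    # retransmitted, so iSCSI runs fine on ordinary (lossy) Ethernet.
    with socket.create_connection(ISCSI_TARGET) as sock:
        sock.sendall(build_pdu(0x01, scsi_cdb))  # 0x01 = SCSI Command opcode

if __name__ == "__main__":
    # Example: a 6-byte TEST UNIT READY CDB (all zeros) wrapped for transport.
    send_scsi_over_ip(bytes(6))
```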
Slide 5
Why a New Option for FC Customers?
FC has a large and well-managed install base
– Want a solution that is attractive for customers with FC expertise and investment
– Previous convergence options did not allow for incremental adoption
Requirement for a data center solution that can provide I/O consolidation
Leveraging the Ethernet infrastructure and skill set has always been attractive
FCoE allows an Ethernet-based SAN to be introduced into the FC-based data center without breaking existing administrative tools and workflows
Slide 6
Non-Ethernet Convergence Options
InfiniBand
– Used broadly in High Performance Computing (HPC) environments
– Low cost and ultra-low latency, geared for server-to-server clusters
– Kept separate from the general network (Ethernet) and storage (FC or Ethernet)
PCIe I/O aggregation
– Extends the server's PCI bus to an appliance; the appliance connects to the existing infrastructure (FC, Ethernet)
– Single network from server to top of rack, similar to early FCoE deployments
– Not a standard (a parallel effort to the MR-IOV standard); small players
– Still uses Ethernet and FC for network and storage beyond the aggregation box
Slide 7
10Gb Ethernet Allows for a Converged Data Center
Maturation of 10 Gigabit Ethernet
– 10 Gigabit Ethernet allows replacement of n x 1Gb adapters with a much smaller number (starting with 2) of 10Gb adapters
– A single network allows easier mobility for virtualization/cloud deployments
10 Gigabit Ethernet simplifies server, network, and storage infrastructure
– Reduces the number of cables and server adapters
– Lowers capital expenditures and administrative costs
– Reduces server power and cooling costs
– Blade servers and server virtualization drive consolidated bandwidth
10 Gigabit Ethernet is the answer! iSCSI and FCoE both leverage this inflection point
[Diagram: a single 10 GbE wire carries both LAN and SAN traffic]
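As a rough illustration of the cable-count claim, the sketch below compares per-rack cabling before and after consolidation. The per-server adapter counts are assumptions for the example, not figures from the deck.

```python
# Back-of-the-envelope sketch of the consolidation claim: replacing many
# 1Gb NICs/HBAs per server with a redundant pair of 10Gb converged adapters.
ONE_GB_ADAPTERS = 6      # e.g., 4 x 1GbE NICs + 2 x FC HBAs (assumed mix)
TEN_GB_ADAPTERS = 2      # redundant pair of 10GbE CNAs

servers_per_rack = 20
cables_before = servers_per_rack * ONE_GB_ADAPTERS
cables_after = servers_per_rack * TEN_GB_ADAPTERS
print(f"Cables per rack: {cables_before} -> {cables_after} "
      f"({cables_before - cables_after} fewer)")
# Aggregate bandwidth per server still rises: 6 x 1Gb = 6Gb vs 2 x 10Gb = 20Gb.
```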
Slide 8
FCoE Extends FC on a Single Network
[Diagram: two server options connect over Lossless Ethernet links to an FCoE switch. Option 1: a Converged Network Adapter (CNA) with separate network and FC drivers. Option 2: an FCoE software stack on a standard 10G NIC. The server sees its storage traffic as FC, and the SAN sees the host as FC; the FCoE switch splits traffic between the Ethernet network and the FC network/FC storage]
Slide 9
Time To Widespread Adoption
[Timeline, 1980-2010:]
Ethernet: defined 1973, standard 1983, widespread 1993
Fibre Channel: defined 1985, standard 1994, widespread 2003
iSCSI: defined 2000, standard 2002, widespread 2008
10 Gigabit Ethernet: standard 2002, widespread 2009
FCoE: defined 2007, standard 2009, widespread ??
Slide 10
Future
[Roadmap: 16 Gb FC, 32 Gb FC, 40/100 Gb Ethernet]
40 & 100 Gb Ethernet (IEEE) standards will be completed in June 2010
The 16Gb FC (T11) standard is targeted for completion at the end of 2010
Server adoption takes ~3+ years for FC and ~5+ years for Ethernet
– The backbone typically sees faster adoption
Slide 11
Ethernet Cabling
Copper (10GBase-T), RJ-45 connector, Cat6 or Cat6a cable
– 1Gb: > 99% of existing cabling (lots of Cat 5e)
– 10Gb: Cat6 to 55m; Cat6a to 100m; some products on the market, but not for FCoE yet
– 40/100Gb: not supported in the initial standard
Optical (multimode), LC connector, OM2 (orange), OM3 (aqua), or OM4* (aqua) cable
– 1Gb: rare
– 10Gb: < 1% of Ethernet, but standard for FC; most backbone deployments are optical; OM2 to 82m, OM3 to 300m
– 40/100Gb: expect a shift to optical with 40/100Gb; OM3 to 100m, OM4 to 125m
Copper, SFP+ DA (direct attach), Twinax cable
– 1Gb: N/A
– 10Gb: low power; 5-10m distance (rack solution)
– 40/100Gb: a different short-distance option (QSFP)
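For quick planning against the 10Gb distances in this table, here is a tiny lookup helper. The values are the slide's figures; the function itself is just an illustration, and real limits depend on the specific standard and optics.

```python
# Distance limits for 10Gb links, taken from the cabling table above.
MAX_METERS_10GB = {
    "cat6": 55,       # 10GBase-T
    "cat6a": 100,     # 10GBase-T
    "om2": 82,        # multimode optical
    "om3": 300,       # multimode optical
    "twinax": 10,     # SFP+ direct attach (rack-scale)
}

def link_ok(cable: str, run_meters: float) -> bool:
    """Return True if a 10Gb run of `run_meters` fits the cable's limit."""
    return run_meters <= MAX_METERS_10GB[cable.lower()]

print(link_ok("OM3", 250))    # True: within the 300m multimode budget
print(link_ok("twinax", 25))  # False: too long for direct attach
```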
Slide 12
Agenda
The Journey to Convergence
Protocols & Standards Update
Solution Evolution
Conclusion and Summary
Slide 13
Standards for Next Generation Data Center
Fibre Channel over Ethernet (FCoE) protocol
– Developed by International Committee for Information Technology Standards (INCITS) T11 Fibre Channel Interfaces Technical Committee
– Fibre Channel over Ethernet allows native Fibre Channel to travel unaltered over Ethernet
– FC-BB-5 standard ratified in June 2009
– FC-BB-6 in process to expand solution
Converged Enhanced Ethernet (CEE)
– Developed by IEEE Data Center Bridging (DCB) Task Group
– DCB/CEE creates an Ethernet environment that drops frames as rarely as Fibre Channel
– Technology commonly referred to as Lossless Ethernet
– IEEE standards targeting ratification in mid 2010
– Requirement for FCoE; Enhancement for iSCSI
Two parallel industry standards efforts seek to drive I/O consolidation in large data centers over time.
Companies working on the standards committees include these key participants: Brocade, Cisco, EMC, Emulex, HP, IBM, Intel, QLogic, Oracle (Sun), and others
Slide 14
iSCSI and FCoE Framing
iSCSI is SCSI functionality transported using TCP/IP for delivery and routing in a standard Ethernet/IP environment
– iSCSI frame: Ethernet Header | IP | TCP | iSCSI | Data | CRC
FCoE is FC frames encapsulated in Layer 2 Ethernet frames, designed to utilize a Lossless Ethernet environment
– The large maximum frame size of FC requires Ethernet jumbo frames
– No TCP, so a lossless environment is required
– No IP routing
– FCoE frame: Ethernet Header | FCoE Header | FC Frame (FC Header | FC Payload | CRC) | EOF | FCS
Slide 15
FCoE Frame Formats
FCoE frame layout (32 bits per row):
– Destination MAC Address
– Source MAC Address
– IEEE 802.1Q Tag
– EtherType = FCoE; Version; Reserved
– Reserved
– Reserved; SOF
– Encapsulated FC Frame (including FC-CRC)
– EOF; Reserved
– FCS
Ethernet frames give a 1:1 encapsulation of FC frames
– No segmenting of FC frames across multiple Ethernet frames
– FCoE flow control is Ethernet based: BB Credit/R_RDY is replaced by the PAUSE/PFC mechanism
FC frames are large and require jumbo frames
– Max FC payload size is 2112 bytes
– Max FCoE frame size is 2240 bytes
FCoE Initialization Protocol (FIP) is used for discovery and login
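As a sketch of this 1:1 encapsulation, the Python below assembles an FCoE frame following the field order shown. The EtherType 0x8906 is the registered FCoE value; the MAC addresses, SOF/EOF code points, and FC frame body are illustrative placeholders.

```python
# Minimal sketch of FC-BB-5 style FCoE encapsulation, per the layout above.
import struct

ETHERTYPE_FCOE = 0x8906  # registered EtherType for FCoE

def fcoe_frame(dst_mac: bytes, src_mac: bytes, fc_frame: bytes,
               sof: int = 0x2E, eof: int = 0x41) -> bytes:
    """One FC frame (header + payload + FC-CRC) in one Ethernet frame, 1:1."""
    eth_hdr = dst_mac + src_mac + struct.pack("!H", ETHERTYPE_FCOE)
    fcoe_hdr = struct.pack("!B12xB", 0, sof)   # version + reserved ... SOF
    fcoe_trl = struct.pack("!B3x", eof)        # EOF + reserved (FCS added by NIC)
    return eth_hdr + fcoe_hdr + fc_frame + fcoe_trl

# Max case from the slide: 24-byte FC header + 2112-byte payload + 4-byte CRC.
# Destination here mimics an FPMA-style address (FC-MAP prefix 0E-FC-00).
frame = fcoe_frame(b"\x0e\xfc\x00\x00\x00\x01", b"\x00\x1b\x21\x00\x00\x02",
                   bytes(24 + 2112 + 4))
assert len(frame) > 1518  # exceeds standard Ethernet: why jumbo frames are needed
```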
Slide 16
Storage Drivers and Server Virtualization
[Diagram: each VM sees vNIC and vSCSI devices. LAN and iSCSI traffic flows through the hypervisor's virtual switch and driver to the NICs, while vSCSI traffic follows the FC path through the hypervisor FC driver to the FC HBAs; with a CNA, FCoE follows the FC path]
*iSCSI initiator can also be in the VM
Slide 17
Storage Drivers and Server Virtualization
FCoE software in the guest would send traffic through the vSwitch to the vNIC, but there is no FCoE access on that path currently
– vSwitches from ESX (including the Cisco 1000v option) and Hyper-V are not lossless
[Diagram: a software FCoE (SW FCoE) stack in the guest would traverse the vNIC and the hypervisor's virtual switch to a standard NIC, a path that today lacks lossless support]
Slide 18
FC-BB-6
Not required for multi-hop FCoE or other current deployments
Currently in the “herding cats” phase of defining goals
Likely to support a point-to-point configuration that allows two FCoE devices to communicate without going through an FCF (or switch)
For more, see Erik Smith of EMC E-Lab's presentation, “FCoE - Topologies, Protocol, and Limitations” (Tues 8am and Thurs 8:30am)
Slide 19
Lossless Ethernet
IEEE 802.1 Data Center Bridging (DCB) is the standards task group
Converged Enhanced Ethernet (CEE) is an industry consensus term
Link level enhancements (Priority Flow Control, Enhanced Transmission Selection, Data Center Bridging Exchange Protocol) are shipping in products today
– Standards expected to be ratified in June ‘10
The “CEE cloud” or DCB-enabled LAN is only for the portion of your network that requires lossless functionality
– Currently limited to multimode (300m) distances per link (no singlemode)– Limit the environment to the Data Center; Layer 2 (no routing)
Enhanced Ethernet provides the Lossless Infrastructure which enables FCoE
Slide 20
PAUSE and Priority Flow Control
PAUSE transforms Ethernet into a lossless fabric
Classical 802.3x PAUSE is rarely implemented since it stops all traffic on the link
A new PAUSE, known as Priority Flow Control (PFC), can halt traffic according to its priority tag while allowing traffic at other priority levels to continue
– Creates lossless virtual lanes
– PFC will be limited to the Data Center
Per-priority, link-level flow control
– Only affects traffic that needs it
– Can be enabled per priority
– Not simply 8 x 802.3x PAUSE
[Diagram: PFC PAUSE exchanged hop-by-hop between Switch A and Switch B]
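Here is a minimal sketch of the per-priority PAUSE frame described above, assuming the 802.1Qbb layout: MAC Control EtherType 0x8808, PFC opcode 0x0101, a priority-enable vector, then eight per-priority pause timers. The source address and timer values are illustrative.

```python
# Sketch of a PFC (802.1Qbb) MAC control frame.
import struct

MAC_CONTROL_ETHERTYPE = 0x8808
PFC_OPCODE = 0x0101
PAUSE_DST = bytes.fromhex("0180C2000001")  # reserved MAC-control multicast address

def pfc_frame(src_mac: bytes, pause_quanta: list[int]) -> bytes:
    """Pause exactly the priorities with a nonzero quantum; others keep flowing."""
    assert len(pause_quanta) == 8
    enable_vector = sum(1 << i for i, q in enumerate(pause_quanta) if q)
    body = struct.pack("!HHH8H", MAC_CONTROL_ETHERTYPE, PFC_OPCODE,
                       enable_vector, *pause_quanta)
    return PAUSE_DST + src_mac + body

# Pause only priority 3 (a common choice for the FCoE class) for 0xFFFF quanta;
# LAN traffic on the other seven priorities is unaffected: a lossless virtual lane.
frame = pfc_frame(bytes(6), [0, 0, 0, 0xFFFF, 0, 0, 0, 0])
```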
Slide 21
Enhanced Transmission Selection and Data Center Bridging Exchange Protocol (DCBX)
Enhanced Transmission Selection (ETS) provides a common management framework for bandwidth allocation
– Allows HPC and storage traffic to be configured with appropriately higher priority
– When a given load in a class does not fully utilize its allocated bandwidth, ETS allows other traffic classes to use the available bandwidth
– Maintains low-latency treatment of certain traffic classes
[Chart: offered vs. realized traffic on a 10 GE link across three intervals. The HPC and storage classes each hold 3G/s allocations while LAN traffic expands into whatever bandwidth the other classes leave unused]
Data Center Bridging Exchange Protocol (DCBX) is responsible for configuring link parameters for DCB functions and determines which devices support Enhanced Ethernet functions
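The toy sketch below illustrates the ETS sharing policy just described: each class is guaranteed its configured share of a 10G link, and bandwidth a class does not use is redistributed to classes that want more. The class names and guarantees are the slide's example values; this is an illustration of the policy, not the 802.1Qaz scheduler itself.

```python
# Illustrative ETS-style bandwidth allocation on a 10G link.
LINK_GBPS = 10.0
GUARANTEE = {"hpc": 3.0, "storage": 3.0, "lan": 4.0}  # shares sum to the link

def realized(offered: dict[str, float]) -> dict[str, float]:
    # First pass: every class gets min(offered, guarantee).
    out = {c: min(offered[c], GUARANTEE[c]) for c in offered}
    spare = LINK_GBPS - sum(out.values())
    # Second pass: hand spare bandwidth to classes still wanting more.
    for c in out:
        want = offered[c] - out[c]
        take = min(want, spare)
        out[c] += take
        spare -= take
    return out

# Storage keeps its 3G/s even while LAN bursts; LAN absorbs HPC's slack.
print(realized({"hpc": 2.0, "storage": 3.0, "lan": 6.0}))
# -> {'hpc': 2.0, 'storage': 3.0, 'lan': 5.0}
```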
Slide 22
Beyond Link Level
Congestion notification
– IEEE 802.1Qau ratified
– Allows a switch to notify attached ports to slow transmission under heavy traffic, reducing the chance of packet drops or network deadlocks
– Moves the management of congestion back to the edge, which helps alleviate network-wide bottlenecks
Layer 2 multipathing
– IETF TRILL (TRansparent Interconnection of Lots of Links)
– Replaces the Spanning Tree Protocol to provide more efficient bridging and bandwidth aggregation
– Focuses on a bridging capability that increases bandwidth by allowing and aggregating multiple network paths
Standards are stable; products are coming soon
[Diagram: a congested switch's receive buffer signals the upstream switch's transmit queue to throttle]
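The sketch below captures the edge-throttling idea behind 802.1Qau (QCN): a sender's rate limiter backs off multiplicatively when the congestion point sends feedback, then probes back up during quiet periods. The constants and method names are illustrative assumptions; the real QCN algorithm is considerably more involved.

```python
# Toy model of a QCN-style reaction point (the throttle in the diagram above).
LINE_RATE_GBPS = 10.0

class ReactionPoint:
    def __init__(self) -> None:
        self.rate = LINE_RATE_GBPS      # current sending rate
        self.target = LINE_RATE_GBPS    # rate to recover toward

    def on_congestion_feedback(self, severity: float) -> None:
        """severity in (0, 1]; larger means a more congested queue."""
        self.target = self.rate                # remember where we were
        self.rate *= (1.0 - 0.5 * severity)    # multiplicative decrease

    def on_quiet_interval(self) -> None:
        # Fast recovery: move halfway back toward the pre-congestion rate.
        self.rate = (self.rate + self.target) / 2.0

rp = ReactionPoint()
rp.on_congestion_feedback(0.8)   # heavy congestion -> rate drops to 6.0
rp.on_quiet_interval()           # recovers toward 10.0 -> 8.0
print(rp.rate)
```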
Slide 23
Agenda
The Journey to Convergence
Protocols & Standards Update
Solution Evolution
Conclusion and Summary
Slide 24
iSCSI Deployment
iSCSI was > 15% of revenue ($1.8B in '09) and > 20% of capacity in the SAN market in 2009*
10 Gb iSCSI solutions are available
– Can work in both a Traditional Ethernet environment (recovering from dropped packets using TCP) and a Lossless Ethernet (DCB) environment
iSCSI is natively routable (IP)
iSCSI solutions are much smaller scale than FC
– A single FC director is larger than most iSCSI environments
[Diagram: servers attached over Ethernet to an iSCSI SAN]
* According to IDC, 2009
Slide 25
FCoE Solutions in 2009
FCoE with direct attach of the server to a Converged Network Switch at the top of rack or end of row
Tightly controlled solution
Server 10 GE adapters may be CNAs or NICs
Storage is still a separate network
[Diagram: rack-mounted servers with 10 GbE CNAs connect to a Converged Network Switch, which splits traffic to the Ethernet LAN and, via FC attach, to the Fibre Channel SAN and storage]
Slide 26
Expansion of FCoE beyond a single switch
First solutions use an FCoE-aware Ethernet switch (also known as FIP snooping)
Enables rack bundle solutions
A Rack Area Network (RAN) allows the rack to serve as a unit of design and a unit of management
[Diagram: an FCoE-enabled blade server with embedded CNAs and an embedded FCoE-aware Ethernet switch connects to an FCoE switch, which attaches to the FC switch and storage]
Slide 27
Cisco & VCE FCoE Rack Area Network (RAN)
Cisco UCS
– Optimized compute environment utilizing FCoE technology
– Includes an embedded FCoE adapter and a ToR-switch equivalent, plus the option for an embedded FCoE switch
Vblock
– Integrated Virtual Computing Environment combining best-in-class networking, compute, storage, security, and management solutions
– Cisco UCS + EMC storage + VMware vSphere/ESX
Vblock 1 example in 3 floor tiles:
– CX4-480 or Celerra NS-960
– 2 x Nexus 6140 (FCoE, Ethernet, and FC ports)
– 8 x UCS 5100 chassis (includes CNA blade, 2 x FCoE)
– vSphere/ESX 4.0; 16:1 consolidation; 1024 VMs
Slide 28
IBM & HP FCoE Offerings
IBM BladeServers with an embedded FCoE adapter use either a pass-thru switch module or an embedded FCoE switch module that attaches to a ToR switch
[Diagram: blades with a 2-port 10Gb CNA card (1 CNA card per blade) connect through 2 x 10Gb blade switches (a 10-port 10Gb switch or a 14-port IBM 10Gb pass-thru); the ToR switch splits LAN & SAN traffic]
CNAs converge traffic at the server; the converged edge will evolve from the Top of Rack / End of Row to the aggregation and core layers
Slide 29
Challenges for FCoE
FCoE solution development will take time to expand and mature, just as with other technologies; customers are looking to create topologies similar to existing FC configurations:
– Director support
– Edge-core support (multi-hop)
Organizational domain overlap between the storage and network teams
Slide 30
EMC and Ethernet
Best Practices
– Google “FCoE Tech Book” (FCoE & Ethernet)
Services
– Design, implementation, performance, and security offerings for networks
Products
– Ethernet equipment for creating Converged Network Environments
Slide 31
FCoE Timeline
Supported in 2010:
– FCoE top-of-rack switches
– 2nd-generation CNAs
– Windows, Linux, VMware
– Cisco UCS and Vblock
– IBM BladeCenter
– Native FCoE storage
– FCoE blades for FC and Ethernet directors
Future:
– UNIX support
– Open FCoE
– Expanded multi-hop solutions
– More embedded/server solutions
– 10Gb DCB LOMs
– 10G-BaseT w/ FCoE
– TRILL solutions
– Direct Connect (FC-BB-6)
– 40GbE/100GbE
Slide 32
Agenda
The Journey to Convergence
Protocols & Standards Update
Solution Evolution
Conclusion and Summary
Slide 33
Next Generation Data Center
[Diagram: 10 Gigabit Ethernet and Fibre Channel, combined through virtualization, common infrastructure, and common management, lead to the Virtualized Data Center, Private Cloud, and Cloud Computing]
EMC is working with the standards communities and partners to deliver the same reliability and robustness in the next generation virtual data center that we deliver today
The Converged Data Center sets the operational and capital efficiency foundations for the virtual data center and private clouds
Slide 34
Summary
A converged data center environment can be built using 10Gb Ethernet
Achieving a converged network requires consideration of technology, processes/best practices and organizational dynamics
10 Gigabit Ethernet solutions are maturing
– Active industry participation is creating standards that allow solutions to integrate into existing data centers
– Continued use of FC and adoption of FCoE can be flexible thanks to shared management
– FCoE and iSCSI will follow the Ethernet roadmap to 40 and 100 Gigabit in the future
Slide 35
Related References
Full collection of FCoE references (blog, video, whitepaper, presentations, and other EMC links): http://blogstu.wordpress.com/tag/fcoe
Industry site with consolidated information: http://www.fcoe.com/
T11 FCoE activity: http://www.t11.org/fcoe
IEEE 802.1 Data Center Bridging task group page: http://www.ieee802.org/1/pages/dcbridges.html
Other EMC bloggers covering these technologies:
– Chad Sakac: http://virtualgeek.typepad.com/
– Chuck Hollis: http://chucksblog.typepad.com/
– David Graham: http://flickerdown.com/