Server-to-network edge technologies: converged networks and virtual I/O

    Technology brief

    Table of contents

Executive Summary
What is the server-to-network edge?
Physical infrastructure challenges at the server-to-network edge
Solving server-to-network complexity with converged networks
    iSCSI
    Fibre Channel over Ethernet
Challenges at the server-to-network edge from virtual machines
    Virtual Ethernet Bridges
Solving complexity at the virtualized server-to-network edge with Edge Virtual Bridging
    Common EVB capabilities and benefits
    Virtual Ethernet Port Aggregator (VEPA)
    Multichannel technology
    VN-Tag
    Status of EVB and related standards
Summary
Appendix: Understanding SR-IOV and Virtual Connect Flex-10
For more information
Call to action


    Executive Summary

This technology brief describes the server-to-network edge in a data center architecture and the technologies that affect the server-to-network edge in both the physical and virtual infrastructure.

The server-to-network edge is the link between the server and the first tier of the network infrastructure, and it is becoming an increasingly important area of the infrastructure. Within the data center, the server-to-network edge is the point within the topology that contains the most connections, switches, and ports. The most common types of networks used in enterprise data centers are Ethernet for LANs and Fibre Channel for SANs. Typically, these have different topologies, administrators, security, performance requirements, and management structures.

Converged networking technologies such as iSCSI, Fibre Channel over Ethernet (FCoE), and Converged Enhanced Ethernet (CEE) have the potential to simplify the networking infrastructure. CEE is the Ethernet implementation of the Data Center Bridging (DCB) protocols that are being developed by the IEEE for any 802 MAC layer network. CEE and DCB are commonly used interchangeably. For this technology brief, the term CEE refers to Ethernet protocols and Ethernet products that are DCB compliant. With 10 Gb Ethernet becoming more common, combining multiple network streams over a single fabric becomes more feasible. HP recommends that customers implement converged networking first at the server-to-network edge. This area contains the most complexity and thus offers the most opportunity for cost benefit by simplifying and reducing the amount of infrastructure. By implementing first at the server-to-network edge, customers will also be able to maintain existing administrator roles, network topologies, and security requirements for their LAN and SAN infrastructure. Customers who want to implement FCoE as a converged networking fabric should be aware that some of the underlying standards are not finalized. By implementing first at the server edge, customers will be able to take advantage of the standards that are in place while avoiding the risk of implementing technology that will have to be replaced or modified as standards solidify.

The server-to-network edge is also becoming increasingly complex due to the sprawl of virtual machines. Virtual machines add a new layer of virtual machine software and virtual networking that dramatically impacts the associated network. Challenges from virtual machines include the performance loss and management complexity of integrating software-based virtual switches, also referred to as Virtual Ethernet Bridges (VEBs), into existing network management. In the near future, hardware NICs will be available that use Single Root I/O Virtualization (SR-IOV) technology to move software-based virtual switch functionality into hardware.

While hardware-based SR-IOV NICs will improve performance compared to traditional software-only virtual NICs, they do not solve the other challenges of management integration and limited management visibility into network traffic flows. To solve these challenges, the IEEE 802.1 Work Group is creating a standard called Edge Virtual Bridging (EVB). The EVB standard is based on a technology known as Virtual Ethernet Port Aggregator (VEPA). Using VEPA, VM traffic within a physical server is always forwarded to an external switch and directed back when necessary to the same host. This process is known as a hairpin turn, as the traffic does a 180-degree turn. The hairpin turn provides external switch visibility into, and policy control over, the virtual machine traffic flows, and it can be enabled in most existing switches with firmware upgrades and no hardware changes.

The HP approach to technologies at the server-to-network edge is based on using industry standards. This ensures that new technologies will work within existing customer environments and organizational roles, yet will preserve customer choice. The HP goal for customers is to enable a simple migration to advanced technologies at the server-to-network edge without requiring an entire overhaul strategy for the data center.


This paper assumes that the reader is relatively familiar with Ethernet and Fibre Channel networks, and with existing HP ProLiant offerings such as BladeSystem and Virtual Connect. Readers can see the "For more information" section and the Appendix for additional information about HP products.

    What is the server-to-network edge?

In this technology brief, the server-to-network edge refers to the connection points between servers and the first layer of both local area network (LAN) and storage area network (SAN) switches. The most common networks used in enterprise data centers are Ethernet for LANs and Fibre Channel for SANs.

Figure 1 illustrates a typical multi-tiered Ethernet infrastructure. Ethernet network architecture is often highly tree-like, can be over-subscribed, and relies on a structure in this manner:

• Centralized (core) switches at the top layer
• Aggregation (distribution) switches in the middle layer
• Access switches at the bottom, or server-to-network edge, layer

    Figure 1. Typical multi-tier Ethernet data center architecture (physical infrastructure)

The dotted line around the Blade server/Top of rack (ToR) switch indicates an optional layer, depending on whether the interconnect modules replace the ToR or add a tier.


Figure 1 illustrates an architecture using both rack-based servers (such as HP ProLiant DL servers) and blade enclosures (such as HP BladeSystem c-Class enclosures) along with a variety of switches such as top-of-rack (ToR), end-of-row (EoR), distribution, and core switches:

• A single rack of servers may connect to top-of-rack switches (typically deployed in pairs for redundancy) that uplink to a redundant aggregation network layer.
• Several racks of servers may connect to end-of-row switches that consolidate the network connections at the edge before going into the aggregation network layer.
• Blade-based architectures may include server-to-network interconnect modules that are embedded within the enclosure. If the interconnect modules are switches, they may either replace the existing access switch or may add an extra tier. If the interconnect module is a pass-through, the infrastructure usually maintains the ToR switch.

Figure 2 illustrates the two most common SAN architectures:

• SAN core architecture, also referred to as a SAN island, in which many servers connect directly to a large SAN director switch
• Edge-core architecture, which uses SAN fabric or director switches before connecting to the SAN director core switch

    Figure 2. Typical single or two-layer architecture of a Fibre Channel SAN.

The figure shows the SAN island layout on one side and the edge-core layout on the other. The dotted line around the Blade server/FC switch indicates an optional layer, depending on whether the blade enclosure uses interconnects that are embedded switches or pass-through modules.


Customers who want to maintain a SAN island approach with blade enclosures can use Fibre Channel pass-through modules to connect directly to the director switch.

With an edge-core SAN architecture, SAN designers that have blade enclosures can use one of two approaches:

• Use pass-through modules in the enclosure while maintaining an external edge switch
• Replace an existing external Fibre Channel edge switch with an embedded blade switch

Typically, LAN and SAN architectures have different topologies: LAN architectures tend to have multi-layered, tree-like topologies while SAN architectures tend to remain flat. Often, a SAN architecture differs from a LAN architecture in oversubscription ratios and cross-sectional bandwidth: a SAN is typically lightly oversubscribed by 2:1 to 8:1 (at the server edge), while a LAN architecture may be heavily oversubscribed[1] by 10:1 or 20:1. Different topologies are one of the reasons that HP is focusing on the server-to-network edge. Administrators should be allowed to maintain similar processes, procedures, data center structures, and organizational governance roles while reaping the benefits of reduced infrastructure at the server-to-network edge.
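As an illustration of how these ratios are computed, the edge oversubscription of a single access switch is simply its server-facing bandwidth divided by its uplink bandwidth. The short Python sketch below uses hypothetical port counts and speeds (not taken from this brief) to show the arithmetic.

```python
# Hypothetical example: oversubscription ratio at one edge switch.
# Port counts and speeds are illustrative only.

def oversubscription_ratio(downlink_ports, downlink_gbps, uplink_ports, uplink_gbps):
    """Ratio of server-facing bandwidth to uplink bandwidth."""
    return (downlink_ports * downlink_gbps) / (uplink_ports * uplink_gbps)

# A SAN edge switch with 32 x 8Gb FC server ports and 8 x 8Gb uplinks: 4:1
print(oversubscription_ratio(32, 8, 8, 8))    # 4.0

# A LAN access switch with 48 x 10GbE server ports and 4 x 10GbE uplinks: 12:1
print(oversubscription_ratio(48, 10, 4, 10))  # 12.0
```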

Physical infrastructure challenges at the server-to-network edge

As illustrated in Figures 1 and 2, the server edge is the most physically complex part of the data center networks; the server edge has the most connections, switches, and ports (especially in environments using only rack-based servers). For most enterprise data centers that use Fibre Channel SANs and Ethernet LANs, each network type has its own requirements:

• Unique network adapters for servers
• Unique switches
• Network management systems designed for each network
• Different organizations to manage the networks
• Unique security requirements
• Different topologies
• Different performance requirements

Each of these differences adds complexity and cost to the data center, especially for enterprise data centers that have large installed Fibre Channel and Ethernet networks.

For some customers that can use non-Fibre Channel storage, iSCSI solves these challenges without requiring new Ethernet infrastructure. However, iSCSI technology is still evolving and has experienced a slow rate of adoption, especially for enterprise data centers. The iSCSI section included later in the paper discusses iSCSI in more detail.

[1] Non-oversubscribed LAN topologies with large cross-sectional bandwidths are certainly possible, just not as common.


Solving server-to-network complexity with converged networks

Data center administrators can address complexity at the server-to-network edge by consolidating server I/O in various ways:

• Combining multiple lower-bandwidth connections into a single, higher-bandwidth connection
• Converging (combining) different network types (LAN and SAN) to reduce the amount of physical infrastructure required at the server edge

Administrators can already use HP Virtual Connect Flex-10 technology to aggregate multiple independent server traffic streams into a single 10 Gb Ethernet (10GbE) connection. For instance, administrators can partition a 10GbE connection and replace multiple lower-bandwidth physical NIC ports with a single Flex-10 port. This reduces management requirements, reduces the amount of physical infrastructure (the number of NICs and interconnect modules), simplifies the cross-connect in the network, and reduces power and operational costs.
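The partitioning idea can be pictured as a simple allocation check. The sketch below is conceptual and assumes, for illustration, that one 10GbE port is split into at most four partitions whose combined bandwidth cannot exceed the physical link; it is not the actual Virtual Connect configuration interface.

```python
# Conceptual sketch of carving one 10GbE uplink into several server NIC partitions.
# The four-partition limit and 10 Gb budget are illustrative assumptions, not a
# description of the real Virtual Connect Flex-10 management interface.

MAX_PARTITIONS = 4
PORT_BANDWIDTH_GB = 10.0

def validate_partitions(allocations_gb):
    """allocations_gb: per-partition bandwidth in Gb/s for one physical port."""
    if len(allocations_gb) > MAX_PARTITIONS:
        raise ValueError("too many partitions for one physical port")
    if sum(allocations_gb) > PORT_BANDWIDTH_GB:
        raise ValueError("allocations exceed the 10 Gb physical link")
    return allocations_gb

# Example: management, production LAN, backup, and migration traffic on one port
print(validate_partitions([0.5, 5.0, 2.5, 2.0]))
```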

The more common definition of converged networks refers to combining LAN and SAN protocols onto a single fabric. The possibility of converging, or combining, data center networks has been proposed for some time. InfiniBand, iSCSI, and other protocols have been introduced with the promise of providing efficient transport on a single fabric for multiple traffic classes and data streams. One of the goals of a converged network is to simplify and flatten the typical Ethernet topology by reducing the number of physical components, leading to other opportunities to simplify management and improve quality of service (QoS). The benefits of a converged network include reduction of host/server adapters, cabling, switch ports, and costs; simply stated, it enables simpler, more centralized management.

Two technologies provide the most promise for use on a converged network: iSCSI and Fibre Channel over Ethernet (FCoE).

    iSCSI

One of the reasons that the Fibre Channel protocol became prevalent is that it is a very lightweight, high-performance protocol that maps locally to the SCSI initiators and targets (Figure 3, far left).

But Fibre Channel protocols are not routable and can be used only within a relatively limited area, such as within a single data center.

The iSCSI protocol sought to improve that by moving the SCSI packets along a typical Ethernet network using TCP/IP (Figure 3, far right). The iSCSI protocol serves the same purpose as Fibre Channel in building SANs, but iSCSI runs over the existing Ethernet infrastructure and thus avoids the cost, additional complexity, and compatibility issues associated with Fibre Channel SANs.


    Figure 3. Comparison of storage protocol stacks

(The figure compares three protocol stacks beneath operating systems and applications and the SCSI layer: native Fibre Channel using FCP over FC, FCoE carrying FCP over CEE Ethernet, and iSCSI carried over TCP/IP on standard Ethernet.)

An iSCSI SAN typically comprises software or hardware initiators on the host server connected to an Ethernet network and some number of storage resources (targets). The iSCSI stack at both ends of the path is used to encapsulate SCSI block commands into Ethernet packets for transmission over IP networks (Figure 4). Initiators include both software-based initiators and hardware-based initiators incorporated on host bus adapters (HBAs) and NICs.

    Figure 4. iSCSI is SCSI over TCP/IP
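Conceptually, the layering in Figure 4 means a SCSI command simply becomes the payload of a TCP connection (iSCSI targets listen on TCP port 3260 by default). The Python sketch below illustrates only that wrapping: the header shown is a toy placeholder rather than a real iSCSI PDU, and production initiators, whether software-based or offloaded to HBAs/NICs, implement the full protocol.

```python
import socket
import struct

ISCSI_TCP_PORT = 3260   # default TCP port for iSCSI targets
SCSI_READ_10 = 0x28     # SCSI READ(10) opcode

def build_toy_pdu(scsi_cdb: bytes) -> bytes:
    """Wrap a SCSI command block in a toy header to show the layering.

    Real iSCSI uses a 48-byte Basic Header Segment with many more fields;
    this 4-byte header exists only to illustrate encapsulation."""
    header = struct.pack("!BBH", 0x01, 0x00, len(scsi_cdb))  # opcode, flags, length
    return header + scsi_cdb

# A zero-filled 10-byte READ(10) command descriptor block:
cdb = struct.pack("!B9x", SCSI_READ_10)
pdu = build_toy_pdu(cdb)

# The OS adds the TCP, IP, and Ethernet layers when the bytes are sent:
# sock = socket.create_connection(("target.example.com", ISCSI_TCP_PORT))
# sock.sendall(pdu)
```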


The first generations of iSCSI had some limitations that minimized its adoption:

• Traditional software-based iSCSI initiators require the server CPU to process the TCP/IP protocols added onto the SCSI stack.
• iSCSI did not have a storage management system equivalent to that of Fibre Channel SANs.
• Configuring iSCSI boot was a manual and cumbersome process; therefore, it was not scalable to large numbers of servers.

The newer generations of iSCSI technology solve these issues:

• High-performance adapters are now available that fully offload the protocol management to a hardware-based iSCSI HBA or NIC. This is called full iSCSI offload.
• Using 10Gb networks and 10Gb NICs with iSCSI SANs generates performance comparable to a Fibre Channel SAN operating at 8 Gb.
• New centralized iSCSI boot management tools provide mechanisms for greater scalability when deploying large numbers of servers.

With 10Gb-based iSCSI products, iSCSI becomes a more viable solution for converged networks for small and medium businesses as well as enterprise data centers. For customers with iSCSI-based storage targets and initiators, iSCSI can be incorporated today with their existing Ethernet infrastructure.

Unlike FCoE, iSCSI does not require the new Converged Enhanced Ethernet (CEE) infrastructure (see the Fibre Channel over Ethernet section for more information). But if CEE is present, iSCSI will benefit from the QoS and bandwidth management it offers. Because iSCSI administration requires the same skill set as a TCP/IP network, using an iSCSI network minimizes the SAN administration skills required, making iSCSI a good choice for green-field deployments or when there are smaller IT teams and limited budgets. iSCSI storage can also play a role in lower, non-business-critical tiers in enterprise data centers.

    Fibre Channel over Ethernet

FCoE takes advantage of 10Gb Ethernet's performance while maintaining compatibility with existing Fibre Channel protocols. Typical legacy Ethernet networks allow Ethernet frames to be dropped, typically under congestion situations, and they rely on upper layer protocols such as TCP to provide end-to-end data recovery.[2] Because FCoE is a lightweight encapsulation protocol and lacks the reliable data transport of the TCP layer, it must operate on Converged Enhanced Ethernet (CEE) to eliminate Ethernet frame loss under congestion conditions.

Because FCoE was designed with minimal changes to the Fibre Channel protocol, FCoE is a layer 2 (non-routable) protocol just like Fibre Channel, and can only be used for short-haul communication within a data center. FCoE encapsulates Fibre Channel frames inside Ethernet frames (Figure 5).

[2] It is possible to create a lossless Ethernet network using existing 802.3x mechanisms. However, if the network is carrying multiple traffic classes, the existing mechanisms can cause QoS issues, limit the ability to scale a network, and impact performance.


    Figure 5. Illustration of an FCoE packet
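The encapsulation in Figure 5 can be sketched in a few lines of code. The sketch assumes the registered FCoE EtherType 0x8906 and abbreviates the FCoE header; the real frame format (T11 FC-BB-5) defines specific reserved bits and SOF/EOF code points, so the values marked as placeholders below are illustrative only.

```python
import struct

ETHERTYPE_FCOE = 0x8906   # registered EtherType for FCoE data frames

def encapsulate_fc_frame(dst_mac: bytes, src_mac: bytes, fc_frame: bytes) -> bytes:
    """Wrap a Fibre Channel frame in an Ethernet frame, FCoE-style (simplified).

    The 14-byte FCoE header (version + reserved + SOF) and the EOF trailer use
    placeholder values here; real code points are defined by FC-BB-5."""
    eth_header = dst_mac + src_mac + struct.pack("!H", ETHERTYPE_FCOE)
    fcoe_header = struct.pack("!B12xB", 0x00, 0xAA)   # version, reserved, SOF placeholder
    trailer = struct.pack("!B3x", 0xBB)               # EOF placeholder + reserved
    return eth_header + fcoe_header + fc_frame + trailer

# Dummy MAC addresses and a dummy 36-byte FC frame:
frame = encapsulate_fc_frame(b"\x0e\xfc\x00\x00\x00\x01",
                             b"\x02\x00\x00\x00\x00\x02",
                             b"\x00" * 36)
print(len(frame))   # 14 (Ethernet) + 14 (FCoE header) + 36 (FC frame) + 4 (trailer) = 68
```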

The traditional data center model uses multiple HBAs and NICs in each server to communicate with various networks. In a converged network, converged network adapters (CNAs) can be deployed in servers to handle both FC and CEE traffic, replacing a significant amount of the NIC, HBA, and cable infrastructure (Figure 6).

    Figure 6. Converged network adapter (CNA) architecture

FCoE uses a gateway device (an Ethernet switch with CEE, legacy Ethernet, and legacy FC ports) to pass the encapsulated Fibre Channel frames between the server's Converged Network Adapter and the Fibre Channel-attached storage targets.


    There are several advantages to FCoE:

• FCoE uses existing OS device drivers.
• FCoE uses the existing Fibre Channel security and management model, with minor extensions for the FCoE gateway and the Ethernet attributes used by FCoE.
• Storage targets that are provisioned and managed on a native FC SAN can be accessed transparently through an FCoE gateway.

    However, there are also some challenges with FCoE:

• Must be deployed using a CEE network
• Requires converged network adapters and new Ethernet hardware between the servers and storage targets (to accommodate CEE)
• Is a non-routable protocol and can only be used within the data center
• Requires an FCoE gateway device to connect the CEE network to the legacy Fibre Channel SANs and storage
• Requires validation of a new fabric infrastructure that includes both Ethernet and Fibre Channel

Converged Enhanced Ethernet (CEE)

An informal consortium of network vendors originally defined a set of enhancements to Ethernet to provide enhanced traffic management and lossless operation. This consortium originally coined the term Converged Enhanced Ethernet to describe this new technology. These proposals have now become a suite of proposed standards from the Data Center Bridging (DCB) task group within the IEEE 802.1 Work Group. The IEEE is defining the DCB standards so that they could apply to any IEEE 802 MAC layer network type, not just Ethernet. So administrators can consider CEE as the application of the DCB draft standards to the Ethernet protocol. Quite commonly, the terms DCB and CEE are used interchangeably. For this technology brief, the term CEE refers to Ethernet protocols and Ethernet products that are DCB compliant.

    There are four new technologies defined in the DCB draft standards:

• Priority-based Flow Control (PFC), IEEE 802.1Qbb: Ethernet flow control that discriminates between traffic types (such as LAN and SAN traffic) and allows the network to selectively pause different traffic classes.
• Enhanced Transmission Selection (ETS), IEEE 802.1Qaz: formalizes the scheduling behavior of multiple traffic classes, including strict priority and minimum guaranteed bandwidth capabilities. This formal transmission handling should enable fair sharing of the link, better performance, and metering (a configuration sketch follows this list).
• Quantized Congestion Notification (QCN), IEEE 802.1Qau: supports end-to-end flow control in a switched LAN infrastructure and helps eliminate sustained, heavy congestion in an Ethernet fabric. QCN must be implemented in all components in the CEE data path (CNAs, switches, and so on) before the network can use QCN. QCN must be used in conjunction with PFC to completely avoid dropping packets and guarantee a lossless environment.
• Data Center Bridging Exchange Protocol (DCBX), IEEE 802.1Qaz: supports discovery and configuration of network devices that support the technologies described above (PFC, ETS, and QCN).
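To make the ETS concept concrete (as noted in the list above), the sketch below models the kind of bandwidth-allocation table ETS formalizes: each traffic class either gets strict priority or a guaranteed share, and the guaranteed shares must total 100 percent. This is a conceptual model, not the 802.1Qaz TLV format or any vendor's configuration syntax; the class names are illustrative.

```python
# Conceptual ETS-style allocation: strict-priority classes are serviced first,
# and the remaining classes share the link by guaranteed percentage.

ets_config = {
    "fcoe_storage": {"strict_priority": False, "bandwidth_pct": 60},
    "lan_data":     {"strict_priority": False, "bandwidth_pct": 30},
    "management":   {"strict_priority": False, "bandwidth_pct": 10},
    "low_latency":  {"strict_priority": True,  "bandwidth_pct": 0},
}

def validate_ets(config):
    """Guaranteed (non-strict) shares must sum to exactly 100 percent."""
    shared = [c["bandwidth_pct"] for c in config.values() if not c["strict_priority"]]
    if sum(shared) != 100:
        raise ValueError("guaranteed bandwidth shares must sum to 100 percent")
    return config

validate_ets(ets_config)
```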

While three of the standards that define the protocols are nearing completion, the QCN standard is the most complex and least understood, and it requires the most hardware support. The QCN standard will be critical in allowing IT architects to converge networks deeper into the distribution and core network layers. Also, it is important to note that QCN will require a hardware upgrade of almost all existing CEE components before it can be implemented in data centers.


Transition to FCoE

HP advises that the transition to FCoE can be a graceful implementation with little disruption to existing network infrastructures if it is deployed first at the server-to-network edge and migrated further into the network over time. As noted in the previous section, the lack of fully finalized CEE standards is another reason HP views the server-to-network edge as the best opportunity for taking advantage of converged network benefits.

Administrators could also start by implementing FCoE only with those servers requiring access to FC SAN targets. Not all servers need access to FC SANs. In general, more of a data center's assets use only LAN attach than use both LAN and SAN. CNAs should be used only with the servers that actually benefit from them, rather than needlessly changing the entire infrastructure.

If administrators transition the server-to-network edge first to accommodate FCoE/CEE, this will maintain the existing architecture structure and management roles, retaining the existing SAN and LAN topologies. Updating the server-to-network edge offers the greatest benefit and simplification without disrupting the data center's architectural paradigm.

Challenges at the server-to-network edge from virtual machines

While converged network technology addresses the complexity issues of proliferating physical server-to-network edge infrastructure, a different, but complementary, technology is required to solve the complexity of the server-to-network edge resulting from the growing use of virtual machines. This new layer of virtual machines and virtual switches within each virtualized physical server introduces new complexity at the server edge and dramatically impacts the associated network.

Challenges from virtual machines include the issues of managing virtual machine sprawl (and the associated virtual networking), and the performance loss and management complexity of integrating software-based virtual switches with existing network management. These are significant challenges that are not fully addressed by any vendor today. HP is working with other industry leaders to develop standards that will simplify and solve these challenges.

The following sections explain the types of virtualized infrastructure available today (or in the near future), the terms used to describe these infrastructure components, and the related standardization efforts.

    Virtual Ethernet Bridges

The term Virtual Ethernet Bridge (VEB) is used by industry groups to describe network switches that are implemented within a virtualized server environment and support communication between virtual machines, the hypervisor, and external network switches.[3] In other words, a VEB is a virtual Ethernet switch. It can serve as an internal, private virtual network between virtual machines (VMs) within a single physical server, or it can be used to connect VMs to the external network.

Today, the most common implementations of VEBs are software-based virtual switches, or vSwitches, that are incorporated into all modern hypervisors. However, in the near future, VEB implementations will include hardware-based switch functionality built into NICs that implement the PCI Single Root I/O Virtualization (SR-IOV) standard. Hardware-based switch functionality will improve performance compared to software-based VEBs.

[3] The term bridge is used because IEEE 802.1 standards use the generic term bridge to describe what are commonly known as switches. A more formal definition: A Virtual Ethernet Bridge (VEB) is a frame relay service within a physical end station that supports local bridging between multiple virtual end stations and (optionally) the external bridging environment.


Software-based VEBs: virtual switches

In a virtualized server, the hypervisor abstracts and shares physical NICs among multiple virtual machines, creating virtual NICs for each virtual machine. For the vSwitch, the physical NIC acts as the uplink to the external network. The hypervisor implements one or more software-based virtual switches that connect the virtual NICs to the physical NICs.

Data traffic received by a physical NIC is passed to a virtual switch that uses hypervisor-based VM/virtual NIC configuration information to forward traffic to the correct VMs.

When a virtual machine transmits traffic from its virtual NIC, a virtual switch uses its hypervisor-based VM/virtual NIC configuration information to forward the traffic in one of two ways (see Figure 7 and the sketch after this list):

• If the destination is external to the physical server or to a different vSwitch, the virtual switch forwards traffic to the physical NIC.
• If the destination is internal to the physical server on the same virtual switch, the virtual switch uses its hypervisor-based VM/virtual NIC configuration information to forward the traffic directly back to another virtual machine.
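The forwarding decision just described reduces to a few lines. The sketch below is a conceptual model of a VEB's layer 2 choice, using a hypothetical table of local virtual-NIC MAC addresses; it is not the code of any particular hypervisor.

```python
# Conceptual VEB (vSwitch) forwarding: deliver locally if the destination MAC
# belongs to a VM on the same virtual switch, otherwise send to the uplink NIC.
# The MAC-to-VM table is hypothetical.

LOCAL_VNIC_MACS = {
    "00:50:56:aa:00:01": "vm-web",
    "00:50:56:aa:00:02": "vm-db",
}

def veb_forward(dst_mac: str) -> str:
    if dst_mac in LOCAL_VNIC_MACS:
        # VM-to-VM traffic never leaves the physical server
        return f"deliver locally to {LOCAL_VNIC_MACS[dst_mac]}"
    return "forward to physical NIC (uplink to the external switch)"

print(veb_forward("00:50:56:aa:00:02"))   # internal destination
print(veb_forward("00:1b:21:33:44:55"))   # external destination
```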

Figure 7. Data flow through a Virtual Ethernet Bridge (VEB) implemented as a software-based virtual switch (vSwitch)

    Using software-based virtual switches offers a number of advantages:

• Good performance between VMs: A software-based virtual switch typically uses only layer 2 switching and can forward internal VM-to-VM traffic directly, with bandwidth limited only by available CPU cycles, memory bus bandwidth, or user/hypervisor-configured bandwidth limits.
• Can be deployed without an external switch: Administrators can configure an internal network with no external connectivity, for example, to run a local network between a web server and a firewall running on separate VMs within the same physical server.
• Support for a wide variety of external network environments: Software-based virtual switches are standards compliant and can work with any external network infrastructure.


Conversely, software-based virtual switches also have several disadvantages:

• Consume valuable CPU bandwidth: The higher the traffic load, the greater the number of CPU cycles required to move traffic through a software-based virtual switch, reducing the ability to support larger numbers of VMs in a physical server. This is especially true for traffic that flows between VMs and the external network.
• Lack network-based visibility: Virtual switches lack the standard monitoring capabilities, such as flow analysis, advanced statistics, and remote diagnostics, of external network switches. When network outages or problems occur, identifying the root cause can be difficult in a virtual machine environment. In addition, VM-to-VM traffic within a physical server is not exposed to the external network, and the management of virtual systems is not integrated into the management system for the external network, which again makes problem resolution difficult.
• Lack network policy enforcement: Modern external switches have many advanced features that are being used in the data center: port security, quality of service (QoS), access control lists (ACLs), and so on. Software-based virtual switches often do not have, or have limited support for, these advanced features. Even if virtual switches support some of these features, their management and configuration is often inconsistent or incompatible with the features enabled on external networks. This limits the ability to create end-to-end network policies within a data center.
• Lack management scalability: As the number of virtualized servers begins to dramatically expand in a data center, the number of software-based virtual switches experiences the same dramatic expansion. Standard virtual switches must each be managed separately. A new technology called distributed virtual switches was recently introduced that allows up to 64 virtual switches to be managed as a single device, but this only alleviates the symptoms of the management scalability problem without solving the problem of lack of management visibility outside of the virtualized server.

Hardware VEBs: SR-IOV enabled NICs

Moving VEB functionality into the NIC hardware addresses the performance issues associated with software-based virtual switches. Single Root I/O Virtualization (SR-IOV) is the technology that allows a VEB to be deployed in the NIC hardware.

The PCI Special Interest Group (SIG) has developed SR-IOV and Multi-Root I/O Virtualization standards that allow multiple VMs running within one or more physical servers to natively share and directly access and control PCIe devices (commonly called direct I/O). The SR-IOV standard provides native I/O virtualization for PCIe devices that are shared on a single physical server. (Multi-Root I/O Virtualization refers to direct sharing of PCI devices between guest operating systems on multiple servers.)

SR-IOV supports one or more physical functions (full-feature PCI functions) and one or more virtual functions (lightweight PCI functions focused primarily on data movement). A capability known as Alternative Routing-ID Interpretation (ARI) allows expansion to up to 256 PCIe physical or virtual functions within a single physical PCIe device.

NICs that implement SR-IOV allow the VM's virtual NICs to bypass the hypervisor vSwitch by exposing the PCIe virtual (NIC) functions directly to the guest OS. By exposing the registers directly to the VM, the NIC reduces latency significantly from the VM to the external port. The hypervisor continues to allocate resources and handle exception conditions, but it is no longer required to perform routine data processing for traffic between the virtual machines and the NIC. Figure 8 illustrates the architecture of an SR-IOV enabled NIC.


    Figure 8. Virtual Ethernet Bridge (VEB) implemented as an SR-IOV enabled NIC
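On a Linux host whose NIC and driver support SR-IOV, virtual functions are typically created through sysfs. The sketch below assumes that environment (it is not an HP-specific procedure), uses a placeholder interface name, and must run with root privileges; hypervisor support is still needed to hand the resulting virtual functions to guests.

```python
# Enable SR-IOV virtual functions on a Linux host via sysfs (run as root).
# "eth0" is a placeholder interface; the NIC driver and kernel must support SR-IOV.
from pathlib import Path

iface = "eth0"
device = Path(f"/sys/class/net/{iface}/device")

total_vfs = int((device / "sriov_totalvfs").read_text())   # VFs the hardware can expose
requested = min(8, total_vfs)                              # request up to 8, capped by hardware

(device / "sriov_numvfs").write_text(str(requested))       # create the virtual functions
print(f"{iface}: enabled {requested} of {total_vfs} virtual functions")
```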

    There are benefits to deploying VEBs as hardware-based SR-IOV enabled NICs:

• Reduces CPU utilization relative to software-based virtual switch implementations. With direct I/O, soft virtual switches are no longer part of the data path.
• Increases network performance due to the direct I/O between a guest OS and the NIC. This is especially true for traffic flowing between virtual machines and the external networks.
• Supports up to 256 functions per NIC, significantly increasing the number of virtual networking functions for a single physical server.

While SR-IOV brings improvements over traditional software-only virtual NICs, there are still challenges with hardware-based SR-IOV NICs:

• Lack of network-based visibility: SR-IOV NICs with direct I/O do not solve the network visibility problem. In fact, because of the limited resources of cost-effective NIC silicon, they often have even fewer capabilities than software-based virtual switches.
• Lack of network policy enforcement: Data flow patterns in common with software-based virtual switches and the limited silicon resources of cost-effective SR-IOV NICs result in no advanced policy enforcement features.
• Increased flooding onto external networks: SR-IOV NICs often have small address tables and do not learn, so they may increase the amount of flooding, especially if there is a significant amount of VM-to-VM traffic.
• Lack of management scalability: There are still a large number of VEBs (typically one per NIC port) that will need to be managed independently from the external network infrastructure. Furthermore, these devices are limited because they typically have one VEB per port, whereas software-based virtual switches can operate multiple NICs and NIC ports per VEB. Thus, the proliferation of VEBs to manage is even worse with VEBs implemented in SR-IOV enabled NICs.
• SR-IOV requires a guest OS to have a paravirtualized driver to support direct I/O with the PCI virtual functions, along with operating system support for SR-IOV, which is not available in market-leading hypervisors as of this writing.


In summary, to obtain higher performance, the hypervisor-based software virtual switch is moving into hardware on the NIC using SR-IOV technology. Using NICs enabled with SR-IOV solves many of the performance issues, but does not solve the issues of management integration and limited visibility into network traffic flows.[4] Next-generation SR-IOV implementations that use Edge Virtual Bridging will provide a better solution for the problem of management visibility, as described in the following section.

Solving complexity at the virtualized server-to-network edge with Edge Virtual Bridging

Realizing the common challenges with both software-based and hardware-based VEBs, the IEEE 802.1 Work Group is creating a new set of standards called Edge Virtual Bridging (EVB), IEEE 802.1Qbg. These standards aim to resolve issues with network visibility, end-to-end policy enforcement, and management scalability at the virtualized server-to-network edge. EVB changes the way virtual bridging or switching is done to take advantage of the advanced features and scalable management of external LAN switch devices.[5]

    Two primary candidates have been forwarded as EVB proposals:

• Virtual Ethernet Port Aggregator (VEPA) and Multichannel technology, created by an industry coalition led by HP
• VN-Tag technology, created by Cisco

Both of these proposals solve similar problems and have many common characteristics, but the implementation details and the associated impact on data center infrastructure are quite different between the two.

    Common EVB capabilities and benefits

The primary goals of EVB are to solve the network visibility, policy management, and management scalability issues faced by virtualized servers at the server-to-network edge.

EVB augments the methods used by traditional VEBs to forward and process traffic so that the most advanced packet processing occurs in the external switches. Just like a VEB, an EVB can be implemented as a software-based virtual switch or in hardware-based SR-IOV NIC devices. Software- or hardware-based EVB operations can be optimized to make them simple and inexpensive to implement. At the same time, the virtual NIC and virtual switch information is exposed to the external switches to achieve better visibility and manageability.

With an EVB, traffic from a VM to an external destination is handled just like in a VEB (Figure 9, top traffic flow). Traffic between VMs within a virtualized server travels to the external switch and back through a 180-degree turn, or hairpin turn (Figure 9, bottom traffic flow). Traffic is not sent directly between the virtual machines, as it would be with a VEB.
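Compared with the VEB sketch shown earlier, a VEPA's transmit-side rule is even simpler: every frame goes to the uplink, and the external switch decides whether to hairpin it back. The sketch below is conceptual and is not drawn from the draft standard text.

```python
# Conceptual VEPA transmit rule: regardless of whether the destination is another
# VM on the same server, the frame is sent out the physical NIC. The adjacent
# switch applies its policies and hairpins local traffic back when required.

def vepa_forward(dst_mac: str) -> str:
    # dst_mac is not consulted on transmit; the external switch makes the decision
    return "forward to physical NIC (uplink); the external switch may hairpin it back"

# Contrast with veb_forward() above: an internal destination no longer short-circuits.
print(vepa_forward("00:50:56:aa:00:02"))
```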

This capability provides greater management visibility, control, and scalability. However, there is a performance cost associated with EVB because the traffic is moved deeper into the network, and the VM-to-VM traffic uses twice the bandwidth for a single packet: one transmission for outgoing and one for incoming traffic. EVB is not necessarily a replacement for VEB modes of operation, but will be offered as an alternative for applications and deployments that require the management benefits.

[4] SR-IOV is an important technology for the future. For existing servers, HP has implemented a similar method for aggregating virtual I/O with the HP Flex-10 technology. For more information about how they differ, see the Appendix.

[5] EVB is defined as the environment where physical end stations, containing multiple virtual end stations, all require the services of adjacent bridges forming a local area network. See the EVB tutorial on the 802.1 website at www.ieee802.org/1/files/public/docs2009/new-evb-thaler-evb-tutorial-0709-v09.pdf


Thus, IT architects will have a choice, VEB or EVB, to match the needs of their application requirements with the design of their network infrastructure.

    Figure 9. Basic architecture and traffic flow of an Edge Virtual Bridge (EVB)

Most switches are not designed to forward traffic received on one port back out the same port (doing so breaks normal spanning tree rules). Therefore, external switches must be modified to allow the hairpin turn if a port is attached to an EVB-enabled server. Most switches already support the hairpin turn using a mechanism similar to that used for wireless access points (WAPs). But rather than using the WAP negotiation mode, an EVB-specific negotiation mode will need to be added into the switch firmware. Importantly, no ASIC development or other new hardware is required for a network switch to support these hairpin turns. The hairpin turn allows an external switch to apply all of its advanced features to the VM-to-VM traffic flows, fully enforcing network policies.

Another change required by EVBs is the way in which multicast or broadcast traffic received from the external network is handled. Because traffic received by the EVB from the physical NIC can come from the hairpin turn within the external switch (that is, it was sourced from a VM within a virtualized server), traffic must be filtered to forbid a VM from receiving multicast or broadcast packets that it sent onto the network. Receiving its own packets back would violate standard NIC and driver operation in many operating systems.
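The required filter is straightforward: when a broadcast or multicast frame arrives from the uplink, it must not be delivered to the virtual NIC whose MAC address originated it. A conceptual sketch, with hypothetical names, follows.

```python
# Conceptual source-pruning filter for reflected broadcast/multicast traffic:
# a frame hairpinned back by the external switch is delivered to every local
# virtual NIC except the one that originally sent it.

def deliver_reflected_frame(src_mac: str, local_vnics: dict) -> list:
    """local_vnics maps MAC address -> VM name (hypothetical table)."""
    return [vm for mac, vm in local_vnics.items() if mac != src_mac]

vnics = {"00:50:56:aa:00:01": "vm-web", "00:50:56:aa:00:02": "vm-db"}
print(deliver_reflected_frame("00:50:56:aa:00:01", vnics))   # ['vm-db']
```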

Using EVB-enabled ports addresses the goals of network visibility and management scalability. The external switch can detect an EVB-enabled port on a server and can detect the exchange of management information between the MAC addresses and VMs being deployed in the network. This brings management visibility and control to the VM level rather than just to the physical NIC level as with today's networks. Managing the connectivity to numerous virtual servers from the external switch and network eases the issues with management scalability.

    There are many benefits to using an EVB:

• Improves virtualized network I/O performance and eliminates the need for complex features in software-based virtual switches
• Allows the adjacent switch to incorporate the advanced management functions, thereby allowing the NIC to maintain low-cost circuitry
• Enables a consistent level of network policy enforcement by routing all network traffic through the adjacent switch with its more complete policy-enforcement capabilities


• Provides visibility of VM-to-VM traffic to network management tools designed for physical adjacent switches
• Enables administrators to gain visibility and access to external switch features from the guest OS, such as packet processing (access control lists) and security features such as Dynamic Host Configuration Protocol (DHCP) guard, Address Resolution Protocol (ARP) monitoring, source port filtering, and dynamic ARP protection and inspection
• Allows administrators to configure and manage ports in the same manner for VEB and VEPA ports
• Enhances monitoring capabilities, including access to statistics like NetFlow, sFlow, RMON, port mirroring, and so on

The disadvantage of EVB is that VM-to-VM traffic must flow to the external switch and then back again, thus consuming twice the communication bandwidth. Furthermore, this additional bandwidth is consumed on the network connections between the physical server and the external switches.

    Virtual Ethernet Port Aggregator (VEPA)

The IEEE 802.1 Work Group has agreed to base the IEEE 802.1Qbg EVB standard on VEPA technology because of its minimal impact and minimal changes to NICs, bridges, existing standards, and frame formats (which require no changes).

VEPA is designed to incorporate and modify existing IEEE standards so that most existing NIC and switch products could implement VEPA with only a software upgrade. VEPA does not require new tags and involves only slight modifications to VEB operation, primarily in frame relay support. VEPA continues to use MAC addresses and standard IEEE 802.1Q VLAN tags as the basis for frame forwarding, but changes the forwarding rules slightly according to the base EVB requirements. In doing so, VEPA is able to achieve most of the goals envisioned for EVB without the excessive burden of a disruptive new architecture such as VN-Tags.

Software-based VEPA solutions can be implemented as simple upgrades to existing software virtual switches in hypervisors. As a proof point, HP Labs, in conjunction with the University of California, developed a software prototype of a VEPA-enabled software switch and the adjacent external switch that implements the hairpin turn. The prototype solution required only a few lines of code within the Linux bridge module. The prototype performance, even though it was not fully optimized, showed the VEPA to be 12% more efficient than the traditional software virtual switch in environments with advanced network features enabled.[6]

In addition to software-based VEPA solutions, SR-IOV NICs can easily be updated to support the VEPA mode of operation. Wherever VEBs can be implemented, VEPAs can be implemented as well.

VEPA enables a discovery protocol, allowing external switches to discover ports that are operating in VEPA mode and exchange information related to VEPA operation. This allows the full benefits of network visibility and management of the virtualized server environment.

    There are many benefits from using VEPA:

• A completely open (industry-standard) architecture without proprietary attributes or formats
• A tag-less architecture that achieves better bandwidth than software-based virtual switches, with less overhead and lower latency (especially for small packet sizes)
• Easy to implement, often as a software upgrade
• Minimizes changes to NICs, software switches, and external switches, thereby promoting low-cost solutions

[6] Paul Congdon, Anna Fischer, and Prasant Mohapatra, "A Case for VEPA: Virtual Ethernet Port Aggregator," submitted for conference publication and under review as of this writing.


    Multichannel technology

During the EVB standards development process, scenarios were identified in which VEPA could be enhanced with some form of standard tagging mechanism. To address these scenarios, an optional Multichannel technology, complementary to VEPA, was proposed by HP and accepted by the IEEE 802.1 Work Group for inclusion in the IEEE 802.1Qbg EVB standard. Multichannel allows the traffic on a physical network connection or port (like a NIC device) to be logically separated into multiple channels as if they were independent, parallel connections to the external network. Each of the logical channels can be assigned to any type of virtual switch (VEB, VEPA, and so on) or directly mapped to any virtual machine within the server. Each logical channel operates as an independent connection to the external network.

Multichannel uses the existing Service VLAN tags (S-Tags) that were standardized in IEEE 802.1ad, commonly referred to as the Provider Bridge or Q-in-Q standard. Multichannel technology uses the extra S-Tag and incorporates VLAN IDs in these tags to represent the logical channels of the physical network connection (a tagging sketch follows the list below). This mechanism provides varied support:

• Multiple VEB and/or EVB (VEPA) virtual switches sharing the same physical network connection to external networks. Administrators may need certain virtualized applications to use VEB switches for their performance and may need other virtualized applications to use EVB (VEPA) switches for their network manageability, all in the same physical server. For these cases, the Multichannel capability lets administrators establish multiple virtual switch types that share a physical network connection to the external networks.
• Directly mapping a virtual machine to a physical network connection or port while allowing that connection to be shared by different types of virtual switches. Many hypervisors support directly mapping a physical NIC to a virtual machine, but then only that virtual machine may make use of the network connection/port, thus consuming valuable network connection resources. Multichannel technology allows external physical switches to identify which virtual switch, or direct-mapped virtual machine, traffic is coming from, and vice versa.
• Directly mapping a virtual machine that requires promiscuous mode operation (such as traffic monitors, firewalls, and virus detection software) to a logical channel on a network connection/port. Promiscuous mode lets a NIC forward all packets to the application, regardless of destination MAC addresses or tags. Allowing virtualized applications to operate in promiscuous mode lets the administrator use the external switch's more powerful promiscuous and mirroring capabilities to send the appropriate traffic to the applications, without overburdening the resources in the physical server.
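The tagging behind Multichannel can be sketched with the two standard tag types. The example below builds a doubly tagged Ethernet header using the 802.1ad S-Tag EtherType 0x88a8 and the 802.1Q C-Tag EtherType 0x8100; the MAC addresses and VLAN IDs are placeholders, and in a real deployment the channel assignment is negotiated by the EVB protocols rather than hand-built like this.

```python
import struct

TPID_S_TAG = 0x88A8   # IEEE 802.1ad Service VLAN tag (outer, Q-in-Q)
TPID_C_TAG = 0x8100   # IEEE 802.1Q Customer VLAN tag (inner)

def vlan_tci(priority: int, vlan_id: int) -> int:
    """Pack the 3-bit priority, DEI = 0, and 12-bit VLAN ID into a 16-bit TCI."""
    return (priority << 13) | (vlan_id & 0x0FFF)

def qinq_header(dst_mac: bytes, src_mac: bytes, s_vid: int, c_vid: int, ethertype: int) -> bytes:
    """Ethernet header with an outer S-Tag (the logical channel) and an inner C-Tag."""
    return (dst_mac + src_mac
            + struct.pack("!HH", TPID_S_TAG, vlan_tci(0, s_vid))   # outer tag: channel ID
            + struct.pack("!HH", TPID_C_TAG, vlan_tci(0, c_vid))   # inner tag: the VM's VLAN
            + struct.pack("!H", ethertype))

hdr = qinq_header(b"\xff" * 6, b"\x02\x00\x00\x00\x00\x01", s_vid=10, c_vid=100, ethertype=0x0800)
print(len(hdr))   # 6 + 6 + 4 + 4 + 2 = 22 bytes before the payload
```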

Figure 10 illustrates how Multichannel technology supports these capabilities within a virtualized server.


    Figure 10. Multichannel technology supports VEB, VEPA, and direct mapped VMs on a single NIC

The optional Multichannel capability requires S-Tags and Q-in-Q operation to be supported in the NICs and external switches, and, in some cases, it may require hardware upgrades, unlike the basic VEPA technology, which can be implemented in almost all current virtual and external physical switches. Multichannel does not have to be enabled to take advantage of simple VEPA operation. Multichannel merely enables more complex virtual network configurations in servers using virtual machines.

Multichannel technology allows IT architects to match the needs of their application requirements with the design of their specific network infrastructure: VEB for performance of VM-to-VM traffic; VEPA/EVB for management visibility of the VM-to-VM traffic; sharing physical NICs with direct-mapped virtual machines; and optimized support for promiscuous mode applications.

    VN-Tag

VN-Tag technology from Cisco specifies a new Ethernet frame tag format that is not leveraged from, or built on, any existing IEEE-defined tagging format. A VN-Tag specifies the virtual machine source and destination ports for the frame and identifies the frame's broadcast domain. The VN-Tag is inserted and stripped either by VN-Tag enabled software virtual switches or by a VN-Tag enabled SR-IOV NIC. External switches enabled with VN-Tag use the tag information to provide network visibility to virtual NICs or ports in the VMs and to apply network policies.

Although using VN-Tags to create an EVB solves the problems of management visibility, implementing VN-Tags has significant shortcomings:

• VN-Tags do not leverage other existing standard tagging formats (such as IEEE 802.1Q, IEEE 802.1ad, or IEEE 802.1X tags)
• Hardware changes to NIC and switch devices are required rather than simple software upgrades for existing network devices. In other words, using VN-Tags requires new network products (NICs, switches, and software) enabled with VN-Tags.


• EVB implemented with VN-Tag and traditional VEB cannot coexist in virtualized servers, as they can with VEPA and Multichannel technology

    M-Tag and port extension

One scenario is not addressed by VEPA and Multichannel technology: port extension. Port Extenders are physical switches that have limited functionality and are essentially managed as a line card of the upstream physical switch. Products such as the Cisco Nexus Fabric Extenders and Unified Computing System (UCS) Fabric Extenders are examples of Port Extenders. Generally, the terms Port Extenders and Fabric Extenders are synonymous. Port Extenders require tagged frames; they use the information in these new tags to map the physical ports on the Fabric Extenders as virtual ports on the upstream switches and to control how they forward packets to or from upstream Nexus or UCS switches and replicate broadcast or multicast traffic.

Initially, the port extension technology was considered as part of the EVB standardization effort; however, the IEEE 802.1 Work Group has decided that the EVB and port extension technologies should be developed as independent standards. Cisco VN-Tag technology has been proposed to the IEEE 802.1 Work Group not only as a solution for EVB, but also for use as the port extension tags. However, as the issues with VN-Tags have been debated in the work group, Cisco has modified its port extension proposal to use a tag format called M-Tags for the frames communicated between the Port Extenders and their upstream controlling switches. At the time of this writing, it is unclear whether these M-Tags will incorporate the VN-Tag format or other, more standard tag formats already defined in the IEEE.

    Status of EVB and related standards

As of this writing, the project authorization request (PAR) for the IEEE 802.1Qbg EVB standard has been approved and the formal standardization process is beginning. The IEEE 802.1 Work Group chose the VEPA technology proposal over the VN-Tag proposal as the base of the EVB standard because of its use of existing standards and the minimal impact and changes required to existing network products and devices. Multichannel technology is also being included as an optional capability in the EVB standard to address the cases in which a standard tagging mechanism would enhance basic VEPA implementations.

Also as of this writing, the PAR for the IEEE 802.1Qbh Bridge Port Extension standard has been approved and the formal standardization process is beginning. The IEEE 802.1 Work Group accepted the M-Tag technology proposal as the base of the Port Extension standard. Again, it is unclear whether M-Tags will resemble VN-Tags or a standard tag format.

It should be noted that there are a number of products on the market today, or that will be introduced shortly, that rely on technologies that have not been adopted by the IEEE 802.1 Work Group. Customers in the process of evaluating equipment should understand how a particular vendor's products will evolve to become compliant with the developing EVB and related standards.

Summary

The server-to-network edge is a complicated place in both the physical and virtual infrastructures. Emerging converged network (iSCSI, FCoE, CEE/DCB) and virtual I/O (EVB, VEPA, and VEB) technologies will have distinct but overlapping effects on data center infrastructure design. As the number of server deployments with virtualized I/O continues to increase, converged physical network architectures should interact seamlessly with virtualized I/O. HP expects that these two seemingly disparate technologies are going to become more and more intertwined as data center architectures move forward.


The goal of converged networks is to simplify the physical infrastructure through server I/O consolidation. HP has been working to solve the problems of complexity at the server edge for some time, with, for example, these products and technologies:

• HP BladeSystem c-Class eliminates server cables through the use of mid-plane technology between server blades and interconnect modules.
• HP Virtual Connect simplifies connection management by putting an abstraction layer between the servers and their external networks, allowing administrators to wire LAN/SAN connections once and streamline the interaction between server, network, and storage administrative groups.
• HP Virtual Connect Flex-10 consolidates Ethernet connections, allowing administrators to partition a single 10 Gb Ethernet port into multiple individual server NIC connections. By consolidating physical NICs and switch ports, Flex-10 helps organizations make more efficient use of available network resources and reduce infrastructure costs.
• HP BladeSystem Matrix implements the HP Converged Infrastructure strategy by using a shared services model to integrate pools of compute, storage, and network resources. The Matrix management console, built on HP Insight Dynamics, combines automated provisioning, capacity planning, disaster recovery, and a self-service portal to simplify the way data center resources are managed. HP BladeSystem Matrix embodies HP's vision for data center management: management that is application-centric, with the ability to provision and manage the infrastructure to support business-critical applications across an enterprise.

FCoE, CEE, and current-generation iSCSI are standards to achieve the long-unrealized potential of a single unified fabric. With respect to FCoE in particular, there are important benefits to implementing a converged fabric first at the edge of the server network. By implementing converged fabrics at the server-to-network edge first, customers gain benefits from reducing the physical number of cables, adapters, and switches at the edge, where there is the most traffic congestion and the most associated network equipment requirements and costs. In the future, HP plans to broaden Virtual Connect Flex-10 technology to provide solutions for converging different network protocols with its HP FlexFabric technology. HP plans to deliver the FlexFabric vision by converging the technology, management tools, and partner ecosystems of the HP ProCurve and Virtual Connect network portfolios into a virtualized fabric for the data center.

Virtual networking technologies are becoming increasingly important, and increasingly complex, because of the growing virtual networking infrastructure needed to deploy and manage virtual machines. Virtual I/O can be implemented in software by sharing physical I/O among multiple virtual servers (vSwitches), or in hardware by abstracting and partitioning I/O among one or more physical servers. New virtual I/O technologies should be based on industry standards such as VEPA and VEB, and should work within existing customer frameworks and organizational roles rather than disrupt them. Customers can then choose an implementation optimized for performance (VEB) or for management ease (VEPA/EVB).
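The performance-versus-management trade-off comes down to where frames between co-resident virtual machines are switched. The sketch below models the two forwarding behaviors in simplified form; the VM names and frame representation are illustrative, not taken from any product or from the 802.1Qbg text.

# Minimal sketch contrasting the two forwarding models discussed above.
# A VEB (host vSwitch) switches VM-to-VM frames locally; a VEPA sends all VM
# frames to the adjacent external bridge, which "hairpins" local traffic back
# so it is subject to the external switch's policies.

def forward(frame, mode, local_vms):
    src, dst = frame
    if mode == "VEB" and dst in local_vms:
        return f"{src} -> {dst}: switched locally in the host (hypervisor data path)"
    # VEPA, or destination off-host: frame leaves on the uplink.
    hairpin = " (external bridge hairpins it back to this host)" if dst in local_vms else ""
    return f"{src} -> {dst}: forwarded to external bridge{hairpin}"

local_vms = {"vm1", "vm2"}
print(forward(("vm1", "vm2"), "VEB", local_vms))
print(forward(("vm1", "vm2"), "VEPA", local_vms))
print(forward(("vm1", "server9"), "VEPA", local_vms))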

Customers should understand how these standards and technologies intertwine at the edge of the network. HP is not only involved in developing standards to improve the data center infrastructure; HP has already developed products that simplify the server-to-network edge by reducing cables and physical infrastructure (BladeSystem), making the management of network connectivity easier (Virtual Connect), and aggregating network connectivity (VC Flex-10). HP FlexFabric technology will let an IT organization exploit the benefits of server, storage, and network virtualization going forward. As the number of server deployments using virtual machines continues to increase, the nature of I/O buses and adapters will continue to change. HP is well positioned to navigate these changes because of the skill set and intellectual property the company holds in servers, compute blades, networking, storage, virtualized I/O, and management.

As the data center infrastructure changes with converged networking and virtual I/O, HP provides the mechanisms to manage a converged infrastructure by simplifying configuration and operations.

Through management technologies such as those built into the HP BladeSystem Matrix, customers can use consistent processes to configure, provision, and manage the infrastructure, regardless of the type of technologies at the server-to-network edge: FCoE, iSCSI, VEB virtual switches, or EVB virtual switch technologies. Customers can choose the converged infrastructure and virtualization technologies that best suit their needs, and manage them with consistent software tools across the data center.

    Appendix: Understanding SR-IOV and Virtual Connect Flex-10

The following information is taken from "HP Flex-10 and SR-IOV: What is the difference?", ISS Technology Update, Volume 8, Number 3, April 2009: www.hp.com/servers/technology

    Improving I/O performance

As virtual machine software enables higher efficiencies in CPU use, these same efficiency enablers place more overhead on physical assets. HP Virtual Connect Flex-10 technology and Single Root I/O Virtualization (SR-IOV) share the goal of improving I/O efficiency without increasing the overhead burden on CPUs and network hardware, but they accomplish this goal through different approaches. This appendix explores the differences in architecture and implementation between Flex-10 and SR-IOV.

    HP Virtual Connect Flex-10 Technology

As described in the body of this technology brief, HP Virtual Connect Flex-10 technology is a hardware-based solution that enables users to partition a 10 gigabit Ethernet (10GbE) connection and control the bandwidth of each partition in increments of 100 Mb.

Administrators can configure a single BladeSystem 10 Gb network port to represent multiple physical network interface controllers (NICs), also called FlexNICs, with a total bandwidth of 10 Gbps. These FlexNICs appear to the operating system (OS) as discrete NICs, each with its own driver. While the FlexNICs share the same physical port, traffic flow for each one is isolated with its own MAC address and virtual local area network (VLAN) tags between the FlexNIC and the VC Flex-10 interconnect module. Using the VC interface, an administrator can set and control the transmit bandwidth available to each FlexNIC.
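As a way to reason about those partitioning rules, the following sketch checks a proposed set of FlexNIC transmit-bandwidth allocations against the constraints described above (100 Mb increments, at most four FlexNICs per 10 Gb port, total not exceeding the physical port). It is a model of the rules only, not the Virtual Connect configuration interface, and the example allocation values are hypothetical.

# Illustrative model of Flex-10 partitioning constraints on one 10 Gb port.
PORT_CAPACITY_MB = 10_000   # 10 Gb physical port
INCREMENT_MB = 100          # transmit bandwidth is assigned in 100 Mb steps
MAX_FLEXNICS = 4            # FlexNICs per 10 Gb port in the initial offering

def validate_flexnics(allocations_mb):
    """Check a proposed set of FlexNIC transmit-bandwidth allocations (in Mb)."""
    if len(allocations_mb) > MAX_FLEXNICS:
        raise ValueError("at most %d FlexNICs per port" % MAX_FLEXNICS)
    for bw in allocations_mb:
        if bw <= 0 or bw % INCREMENT_MB != 0:
            raise ValueError("bandwidth must be a positive multiple of 100 Mb")
    if sum(allocations_mb) > PORT_CAPACITY_MB:
        raise ValueError("allocations exceed the 10 Gb physical port")
    return True

# Hypothetical example: 4 Gb for VM traffic, 2.5 Gb for iSCSI, 2 Gb for
# management, 1.5 Gb spare -- totals exactly 10 Gb, so validation passes.
print(validate_flexnics([4_000, 2_500, 2_000, 1_500]))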

    Single Root I/O Virtualization (SR-IOV)

The ability of SR-IOV to scale the number of functions is a major advantage. The initial Flex-10 offering is based on the original PCIe definition, which is limited to 8 PCI functions per device (4 FlexNICs per 10 Gb port on a dual-port device). SR-IOV uses a capability called Alternative Routing-ID Interpretation (ARI) that allows expansion to up to 256 PCIe functions. The scalability inherent in the SR-IOV architecture has the potential to increase server consolidation and performance.

Another SR-IOV advantage is the prospect of performance gains achieved by removing the hypervisor's vSwitch from the main data path. Hypervisors are essentially another operating system providing CPU, memory, and I/O virtualization capabilities to accomplish resource management and data processing functions. The data processing functionality places the hypervisor squarely in the main data path. In current I/O architecture, data communicated to and from a guest OS is routinely processed by the hypervisor: both outgoing and incoming data are transformed into a format that the physical device driver can understand, the data destination is determined, and the appropriate send or receive buffers are posted. All of this processing requires a great deal of data copying and creates serious performance overhead. With SR-IOV, the hypervisor is no longer required to process, route, and buffer outgoing and incoming packets. Instead, the SR-IOV architecture exposes the underlying hardware to the guest OS, allowing its virtual NIC to transfer data directly between the SR-IOV NIC hardware and guest OS memory. This removes the hypervisor from the main data path and eliminates a great deal of performance overhead. The hypervisor continues to allocate resources and handle exception conditions, but it no longer performs routine data processing. Because SR-IOV is a hardware I/O implementation, it also uses hardware-based security and quality of service (QoS) features incorporated into the physical host server.
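The function-count difference comes from how the PCIe routing ID is interpreted. The sketch below models that limit: a conventional endpoint has a 3-bit function number (8 functions per device), while an ARI-capable device reuses the device-number bits to form an 8-bit function number (256 functions). It is a simplified illustration of the addressing limit only; actual virtual-function counts also depend on the device's SR-IOV capability.

# Why ARI raises the per-device function limit from 8 to 256.
CONVENTIONAL_FUNCTION_BITS = 3      # 2^3 = 8 functions per device
ARI_FUNCTION_BITS = 8               # 2^8 = 256 functions per device

def max_virtual_functions(physical_functions, ari_enabled):
    """Upper bound on virtual functions left after the physical functions."""
    bits = ARI_FUNCTION_BITS if ari_enabled else CONVENTIONAL_FUNCTION_BITS
    return (1 << bits) - physical_functions

# A dual-port SR-IOV NIC (2 physical functions) with and without ARI:
print(max_virtual_functions(physical_functions=2, ari_enabled=False))  # 6
print(max_virtual_functions(physical_functions=2, ari_enabled=True))   # 254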

    Advantage of using Virtual Connect Flex-10

Flex-10 technology offers significant advantages over other 10 Gb devices that provide large bandwidth but do not provide segmentation. The ability to adjust transmit bandwidth by partitioning data flow makes 10GbE more cost effective and easier to manage, and it becomes easier to aggregate multiple 1 Gb data flows and fully utilize 10 Gb bandwidth. Because Flex-10 is hardware based, multiple FlexNICs are added without the additional processor overhead or latency associated with server virtualization (virtual machines). Significant infrastructure savings are also realized because additional server NIC mezzanine cards and associated interconnect modules may not be needed. See http://h20000.www2.hp.com/bc/docs/support/SupportManual/c01608922/c01608922.pdf for more detailed information on Flex-10.

It is important to note that Flex-10 is an available HP technology for ProLiant BladeSystem servers, while SR-IOV is a released PCI-SIG specification at the beginning of its execution and adoption cycle. As such, SR-IOV cannot operate in current environments without significant changes to the I/O infrastructure and the introduction of new management software. Once accomplished, these changes in infrastructure and software would let SR-IOV handle data in a much more native and direct manner, reducing processing overhead and enabling highly scalable PCI functionality.

For more information on the SR-IOV standard and industry support for the standard, go to the Peripheral Component Interconnect Special Interest Group (PCI-SIG) site: www.pcisig.com.

    For more information

    For additional information, refer to the resources listed below.

HP Industry Standard Server technology communications: www.hp.com/servers/technology

HP Virtual Connect technology: www.hp.com/go/virtualconnect

IEEE 802.1 Work Group website: www.ieee802.org/1/

IEEE Data Center Bridging standards:
  802.1Qbb Priority-based Flow Control: www.ieee802.org/1/pages/802.1bb.html
  802.1Qau Quantized Congestion Notification: www.ieee802.org/1/pages/802.1au.html
  802.1Qaz Data Center Bridging Exchange Protocol and Enhanced Transmission Selection: www.ieee802.org/1/pages/802.1az.html

IEEE Edge Virtual Bridging standard 802.1Qbg: www.ieee802.org/1/pages/802.1bg.html

INCITS T11 home page (FCoE standard): www.t11.org

Call to action

Send comments about this paper to [email protected].

    Follow us on Twitter: http://www.twitter.com/HPISSTechComm

© 2010 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein.
