Optimal Price/Performance
ConnectX EN 10 Gigabit Ethernet removes the I/O bottlenecks in mainstream servers that limit application performance. Servers supporting PCI Express 2.0 with 5GT/s will be able to fully utilize both 10Gb/s ports, balancing the I/O requirements of these high-end servers. Hardware-based stateless offload engines handle the TCP/UDP/IP segmentation, reassembly, and checksum calculations that would otherwise burden the host processor. These offload technologies are fully compatible with Microsoft RSS and NetDMA. Total cost of ownership is optimized by maintaining an end-to-end Ethernet network on existing operating systems and applications.
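The checksum work these engines offload is the standard Internet checksum (RFC 1071) used by IP, TCP, and UDP. As an illustration of the per-packet computation being moved off the host CPU, a minimal Python version of that calculation:

```python
def internet_checksum(data: bytes) -> int:
    """One's-complement sum of 16-bit words (RFC 1071), as used by IP/TCP/UDP."""
    if len(data) % 2:
        data += b"\x00"  # pad odd-length input with a zero byte
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)  # fold the carry back in
    return ~total & 0xFFFF

# Example: checksum over the first words of an IPv4 header
print(hex(internet_checksum(b"\x45\x00\x00\x3c")))
```

In software this loop runs once per packet per protocol layer; stateless offload lets the adapter compute and verify it at line rate instead.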
Mellanox provides 10GigE adapters suitable for all network environments. The dual-port SFP+ adapter supports 10GBASE-SR, 10GBASE-LR, and direct attached copper cable, providing the flexibility to connect over short, medium, and long distances. Single- and dual-port 10GBASE-T adapters provide easy connections up to 100m over familiar UTP wiring. The dual-port 10GBASE-CX4 adapter, with its powered connectors, can utilize active copper and fiber cables as well as passive copper.
Investment Protection
New enhancements such as Data Center Ethernet (DCE) are being added to Ethernet for better I/O consolidation. ConnectX EN enables servers and storage systems to utilize Low Latency Ethernet (LLE), which provides efficient RDMA transport over Layer 2 Ethernet with IEEE 802.1Q VLAN tagging and Ethernet Priority-based Flow Control (PFC). The LLE software stack maintains existing and future application compatibility for bandwidth- and latency-sensitive clustering applications. With link-level interoperability in existing Ethernet infrastructure, network administrators can leverage existing data center fabric management solutions.
ConnectX EN protects investments by providing hardware support today for DCE and Fibre Channel over Ethernet (FCoE), as well as technologies such as SR-IOV that enhance virtual machine performance on virtualized servers.
I/O Virtualization
ConnectX support for hardware-based I/O virtualization provides dedicated adapter resources and guaranteed isolation and protection for virtual machines (VMs) within the server. I/O virtualization with ConnectX gives data center managers better server utilization and LAN and SAN unification while reducing cost, power, and cable complexity.
Quality of Service
Resource allocation per application or per VM is provided and protected by the advanced QoS supported by ConnectX EN. Service levels for multiple traffic types can be based on IETF DiffServ or IEEE 802.1p/Q, along with the Data Center Ethernet enhancements, allowing system administrators to prioritize traffic by application, virtual machine, or protocol. This combination of QoS and prioritization provides fine-grained control of traffic, ensuring that applications run smoothly in today's complex environments.
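DiffServ prioritization of this kind is carried as DSCP bits in each packet's IP header, which an application can set through standard sockets. A minimal sketch using generic Linux/POSIX socket options (not a Mellanox-specific API):

```python
import socket

# DSCP values occupy the top 6 bits of the IP TOS byte.
DSCP_EF = 46  # Expedited Forwarding, commonly used for latency-sensitive traffic

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, DSCP_EF << 2)

# Outgoing datagrams from this socket now carry DSCP 46; switches and
# adapters that honor DiffServ can map it to a priority queue.
tos = sock.getsockopt(socket.IPPROTO_IP, socket.IP_TOS)
print(tos >> 2)
```

The adapter and switches then map the marking to a service level; 802.1p works the same way at Layer 2 via the priority field in the VLAN tag.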
Software Support
ConnectX EN is supported by a full suite of software drivers for Microsoft Windows, Linux distributions, VMware, and Citrix XenServer. ConnectX EN supports stateless offload and is fully interoperable with standard TCP/UDP/IP stacks. Stateless offload connections are also easy to scale using multiple adapters to reach the desired level of performance and fault tolerance. ConnectX EN supports various management interfaces and has a rich set of configuration and management tools across operating systems.
ETHERNET
– IEEE Std 802.3ae 10 Gigabit Ethernet
– IEEE Std 802.3ak 10GBASE-CX4
– IEEE Std 802.3an 10GBASE-T
– IEEE Std 802.3ad Link Aggregation and Failover
– IEEE Std 802.3x Pause
– IEEE Std 802.1Q VLAN tags
– IEEE Std 802.1p Priorities
– Multicast
– Jumbo frame support (10KB)
– 128 MAC/VLAN addresses per port
TCP/UDP/IP STATELESS OFFLOAD
– TCP/UDP/IP checksum offload
– TCP Large Send (< 64KB) or Giant Send (64KB-16MB) offload for segmentation
– Receive Side Scaling (RSS) up to 32 queues
– Line rate packet filtering
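Receive Side Scaling keeps every packet of a flow on one queue, and hence one core, by hashing the flow's addressing tuple to a queue index. Real adapters use a keyed Toeplitz hash; the sketch below substitutes SHA-1 purely to illustrate the flow-to-queue mapping property:

```python
import hashlib

NUM_QUEUES = 32  # ConnectX EN supports up to 32 RSS queues

def rss_queue(src_ip: str, dst_ip: str, src_port: int, dst_port: int,
              proto: str = "tcp") -> int:
    """Map a flow's 5-tuple to a receive queue. Hardware uses a keyed
    Toeplitz hash; SHA-1 here only illustrates the deterministic mapping."""
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    digest = hashlib.sha1(key).digest()
    return int.from_bytes(digest[:4], "big") % NUM_QUEUES

# All packets of one flow hash to the same queue (and therefore core):
q1 = rss_queue("10.0.0.1", "10.0.0.2", 50000, 80)
q2 = rss_queue("10.0.0.1", "10.0.0.2", 50000, 80)
assert q1 == q2
```

Because the mapping is deterministic per flow, each core sees packets in order for its flows while unrelated flows spread across all queues.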
ADDITIONAL CPU OFFLOADS
– Traffic steering across multiple cores
– Intelligent interrupt coalescence
– Full support for Intel I/OAT
– Compliant with Microsoft RSS and NetDMA
HARDWARE-BASED I/O VIRTUALIZATION
– Single Root IOV
– Address translation and protection
– Multiple queues per virtual machine
– VMware NetQueue support
– PCISIG IOV compliant
CPU
– AMD X86, X86_64
– Intel X86, EM64T, IA-32, IA-64
– SPARC
– PowerPC, MIPS, and Cell
PCI EXPRESS INTERFACE
– PCIe Base 2.0 compliant, 1.1 compatible
– 2.5GT/s or 5.0GT/s link rate x8 (20+20Gb/s or 40+40Gb/s bidirectional bandwidth)
– Fits x8 or x16 slots
– Support for MSI/MSI-X mechanisms
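The 20+20 and 40+40 figures are raw signaling rates per direction (GT/s times lanes); with the 8b/10b line encoding used by PCIe 1.x/2.0, 80% of that carries data. A quick check of the arithmetic:

```python
LANES = 8

def raw_gbps(gt_per_s: float) -> float:
    # Raw signaling rate per direction: the brochure's "20+20" / "40+40"
    return gt_per_s * LANES

def data_gbps(gt_per_s: float) -> float:
    # After 8b/10b line encoding (PCIe 1.x/2.0), 8 of every 10 bits are data
    return raw_gbps(gt_per_s) * 8 / 10

print(raw_gbps(2.5), data_gbps(2.5))  # Gen1 x8: 20 Gb/s raw, 16 Gb/s data
print(raw_gbps(5.0), data_gbps(5.0))  # Gen2 x8: 40 Gb/s raw, 32 Gb/s data
```

This is why a Gen1 x8 slot cannot feed two 10Gb/s ports at full rate, while a Gen2 x8 slot can.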
CONNECTIVITY
– Interoperable with 10GigE switches and routers
– 20m+ of copper CX4 cable, with powered connectors supporting active copper or fiber cables (MNEH28/9)
– 100m of Cat6a and Cat7 UTP, 55m of Cat5e and Cat6 (MNTH18/28/29)
– 100m (OM-2) or 300m (OM-3) of multimode fiber cable, duplex LC connector from SFP+ optics module (MNPH28B/29B)
– 10m direct attached copper cable through SFP+ connector (MNPH28B/29B)
– Consult the factory for 10GBASE-LR availability
SAFETY
– USA/Canada: cTUVus UL
– EU: IEC60950
– Germany: TUV/GS
– International: CB Scheme
EMC (EMISSIONS)
– USA: FCC, Class A
– Canada: ICES, Class A
– EU: EN55022, Class A
– EU: EN55024, Class A
– EU: EN61000-3-2, Class A
– EU: EN61000-3-3, Class A
– Japan: VCCI, Class A
– Taiwan: BSMI, Class A
ENVIRONMENTAL
– EU: IEC 60068-2-64: Random Vibration
– EU: IEC 60068-2-29: Shocks, Type I / II
– EU: IEC 60068-2-32: Fall Test
OPERATING CONDITIONS
– Operating temperature: 0 to 55°C
– Air flow: 200LFM @ 55°C
– Requires 3.3V, 12V supplies
350 Oakmead Parkway, Suite 100, Sunnyvale, CA 94085
Tel: 408-970-3400  Fax: 408-970-3403
www.mellanox.com
© Copyright 2009. Mellanox Technologies. All rights reserved. Mellanox, ConnectX, InfiniBlast, InfiniBridge, InfiniHost, InfiniRISC, InfiniScale, and InfiniPCI are registered trademarks of Mellanox Technologies, Ltd. Virtual Protocol Interconnect and BridgeX are trademarks of Mellanox Technologies, Ltd. All other trademarks are property of their respective owners.
FLEXIBLE I/O FOR THE DYNAMIC DATA CENTER
Mellanox 10 Gigabit Ethernet Converged Network Adapters
VALUE PROPOSITIONS
– As customers deploy VMware and implement server virtualization, the need for bandwidth and availability increases significantly. ConnectX EN supports all networking features available in VMware ESX Server 3.5, including NetQueue for boosting I/O performance.
– Mellanox dual-port ConnectX EN adapters support PCIe 2.0 x8 (5GT/s), enabling high availability and high performance.
Mellanox continues its leadership in providing high-performance networking technologies by delivering superior utilization and scaling in ConnectX® EN 10 Gigabit Ethernet Adapters, enabling data centers to do more with less.
BENEFITS
■ Industry-leading throughput and latency performance
■ Enables I/O consolidation by supporting TCP/IP, FC over Ethernet, and RDMA over Ethernet transport protocols on a single adapter
■ Improves productivity and efficiency by delivering VM scaling and superior server utilization
■ Supports industry-standard SR-IOV technology and delivers VM protection and granular levels of I/O services to applications
■ High availability and high performance for data center networking
■ Software compatible with standard TCP/UDP/IP and iSCSI stacks
■ A high level of silicon integration and a no-external-memory design provide low power, low cost, and high reliability
TARGET APPLICATIONS
■ Data center virtualization using VMware® ESX Server or Citrix XenServer™
■ Enterprise data center applications
■ I/O consolidation (single unified wire for networking, storage, and clustering)
■ Storage consolidation using FCoE
■ Future-proofing Web 2.0 data centers and cloud computing
■ Video streaming
■ Accelerating back-up and restore operations
– One of the significant challenges in a large data center is managing the number of cables connected to a server. Mellanox ConnectX dramatically reduces the number of cables per server by providing multi-protocol support, leveraging a single fabric for multiple applications.
– Virtual Protocol Interconnect™ (VPI) versions available with InfiniBand, Ethernet, Data Center Ethernet, and FCoE.
COPPER ADAPTERS
– MNEH28-XTC: 2 x 10GigE, ConnectX EN ASIC, CX4 connector, CX4 copper cabling, PCIe 2.0 x8 @ 2.5GT/s, RoHS
– MNEH29-XTC: 2 x 10GigE, ConnectX EN ASIC, CX4 connector, CX4 copper cabling, PCIe 2.0 x8 @ 5.0GT/s, RoHS

FIBER OPTIC ADAPTERS
– MNPH28B-XTC: 2 x 10GigE, ConnectX EN ASIC, SFP+ connector, twinax or direct attach copper cabling, PCIe 2.0 x8 @ 2.5GT/s, RoHS
– MNPH29B-XTC: 2 x 10GigE, ConnectX EN ASIC, SFP+ connector, twinax or direct attach copper cabling, PCIe 2.0 x8 @ 5.0GT/s, RoHS
– MNPH28B-XTC + MFM1T02A-SR: 2 x 10GigE, ConnectX EN ASIC, SFP+ connector, SR fiber optic cabling (MMF 62.5/50um up to 300m), PCIe 2.0 x8 @ 2.5GT/s, RoHS
– MNPH29B-XTC + MFM1T02A-SR: 2 x 10GigE, ConnectX EN ASIC, SFP+ connector, SR fiber optic cabling (MMF 62.5/50um up to 300m), PCIe 2.0 x8 @ 5.0GT/s, RoHS

10GBASE-T ADAPTERS
– MNTH18-XTC: 1 x 10GigE, ConnectX EN ASIC, RJ45 connector, CAT5E/CAT6 up to 55m, CAT6A up to 100m, PCIe 2.0 x8 @ 2.5GT/s, RoHS
– MNTH28B-XTC: 2 x 10GigE, ConnectX EN ASIC, RJ45 connector, CAT5E/CAT6 up to 55m, CAT6A up to 100m, PCIe 2.0 x8 @ 2.5GT/s, RoHS
– MNTH29B-XTC: 2 x 10GigE, ConnectX EN ASIC, RJ45 connector, CAT5E/CAT6 up to 55m, CAT6A up to 100m, PCIe 2.0 x8 @ 5.0GT/s, RoHS

All adapters — Features: Stateless Offload, FCoE Offload, Priority Flow Control, SR-IOV. OS Support: RHEL, SLES, W2K3, W2K8, FreeBSD, VMware ESX 3.5.

OPERATING SYSTEMS/DISTRIBUTIONS
– Novell SuSE Linux Enterprise Server (SLES), Red Hat Enterprise Linux (RHEL), and other Linux distributions
– Microsoft Windows Server 2003/2008, Windows Compute Cluster Server 2003
– VMware ESX 3.5
– Citrix XenServer 4.1

MANAGEMENT
– MIB, MIB-II, MIB-II Extensions, RMON, RMON 2
– Configuration and diagnostic tools
Mellanox ConnectX® EN Ethernet Network Interface Cards (NICs) deliver high bandwidth and industry-leading 10GigE connectivity with stateless offloads for converged fabrics in High-Performance Computing, Enterprise Data Center, and Embedded environments. Clustered databases, web infrastructure, and IP video servers are just a few example applications that will achieve significant throughput and latency improvements, resulting in faster access, real-time response, and an increased number of users per server. ConnectX EN improves network performance by increasing available bandwidth to the CPU and providing enhanced performance, especially in virtualized server environments.
(FCoE), as well as technologies such as SR-IOV that provide enhanced virtual machine performance for virtualized servers.
I/O VirtualizationConnectX support for hardware-based I/O virtualization provides dedicated adapter resources and guaranteed isolation and protection for virtual machines (VM) within the server. I/O virtualization with ConnectX gives data center managers better server utilization and LAN and SAN unification while reducing cost, power, and cable complexity.
Quality of ServiceResource allocation per application or per VM is provided and protected by the advanced QoS supported by ConnectX EN. Service levels for multiple traffic types can be based on IETF DiffServ or IEEE 802.1p/Q, along with the Data Center Ethernet enhancements, allowing system administrators to prioritize traffic by application, virtual machine, or protocol. This powerful combination of QoS and prioritization provides the ultimate fine-grain control of traffic – ensuring that applications run smoothly in today’s complex environment.
Software SupportConnectX EN is supported by a full suite of software drivers for Microsoft Windows, Linux distributions, VMware and Citrix XENServer. ConnectX EN supports stateless offload and is fully interoperable with standard TCP/UDP/IP stacks. Stateless offload connections are also easy to scale using multiple adapters to reach the desired level of performance and fault tolerance. ConnectX EN supports various management interfaces and has a rich set of configuring and management tools across operating systems.
ETHERNET– IEEE Std 802.3ae 10 Gigabit Ethernet– IEEE Std 802.3ak 10GBASE-CX4– IEEE Std 802.3an 10GBASE-T– IEEE Std 802.3ad Link Aggregation and Failover– IEEE Std 802.3x Pause– IEEE Std 802.1Q VLAN tags– IEEE Std 802.1p Priorities– Multicast– Jumbo frame support (10KB)– 128 MAC/VLAN addresses per port
TCP/UDP/IP STATELESS OFFLOAD– TCP/UDP/IP checksum offload– TCP Large Send (< 64KB) or Giant Send (64KB-16MB)
Offload for segmentation– Receive Side Scaling (RSS) up to 32 queues– Line rate packet filtering
ADDITIONAL CPU OFFLOADS– Traffic steering across multiple cores– Intelligent interrupt coalescence– Full support for Intel I/OAT– Compliant to Microsoft RSS and NetDMA
HARDWARE-BASED I/O VIRTUALIZATION– Single Root IOV– Address translation and protection– Multiple queues per virtual machine– VMware NetQueue support– PCISIG IOV compliant
CPU– AMD X86, X86_64– Intel X86, EM64T, IA-32, IA-64– SPARC– PowerPC, MIPS, and Cell
PCI EXPRESS INTERFACE– PCIe Base 2.0 compliant, 1.1 compatible– 2.5GT/s or 5.0GT/s link rate x8 (20+20Gb/s
or 40+40Gb/s bidirectional bandwidth)– Fits x8 or x16 slots– Support for MSI/MSI-X mechanisms
CONNECTIVITY– Interoperable with 10GigE switches and
routers– 20m+ of copper CX4 cable, with powered
connectors supporting active copper or fiber cables (MNEH28/9)
– 100m of Cat6a and Cat7 UTP, 55m of Cat5e and Cat6 (MNTH 18/28/29)
– 100m (OM-2) or 300m (OM-3) of multimode fiber cable, duplex LC connector from SFP+ optics module (MNPH28B/29B)
– 10m direct attached copper cable through SFP+ connector (MNPH28B/29B)
– Consult the factory for 10GBASE-LR availability
SAFETY – USA/Canada: cTUVus UL– EU: IEC60950– Germany: TUV/GS– International: CB Scheme
EMC (EMISSIONS)– USA: FCC, Class A– Canada: ICES, Class A– EU: EN55022, Class A– EU: EN55024, Class A– EU: EN61000-3-2, Class A– EU: EN61000-3-3, Class A– Japan: VCCI, Class A– Taiwan: BSMI, Class A
ENVIRONMENTAL– EU: IEC 60068-2-64: Random Vibration– EU: IEC 60068-2-29: Shocks, Type I / II– EU: IEC 60068-2-32: Fall Test
OPERATING CONDITIONS– Operating temperature: 0 to 55° C– Air flow: 200LFM @ 55° C – Requires 3.3V, 12V supplies
FEATURE SUMMARY COMPLIANCE COMPATIBILITY
350 Oakmead Parkway, Suite 100Sunnyvale, CA 94085Tel: 408-970-3400Fax: 408-970-3403www.mellanox.com
© Copyright 2009. Mellanox Technologies. All rights reserved. Mellanox, ConnectX, InfiniBlast, InfiniBridge, InfiniHost, InfiniRISC, InfiniScale, and InfiniPCI are registered trademarks of Mellanox Technologies, Ltd. Virtual Protocol Interconnect and BridgeX are trademarks of Mellanox Technologies, Ltd. All other trademarks are property of their respective owners.
FLEXIBLE I/O FOR THE DYNAMIC DATA CENTER
Mellanox 10 Gigabit Ethernet Converged Network Adapters

Mellanox continues its leadership in high-performance networking technologies by delivering superior utilization and scaling in ConnectX® EN 10 Gigabit Ethernet Adapters, enabling data centers to do more with less.

VALUE PROPOSITIONS
– As customers deploy VMware and implement server virtualization, the need for bandwidth and availability increases significantly. ConnectX EN supports all networking features available in VMware ESX Server 3.5, including NetQueue for boosting I/O performance.
– Mellanox dual-port ConnectX EN adapters support PCIe 2.0 x8 (5GT/s), enabling high availability and high performance.
– One of the significant challenges in a large data center is managing the number of cables connected to each server. Mellanox ConnectX dramatically reduces the number of cables per server by providing multi-protocol support, allowing a single fabric to serve multiple applications.
– Virtual Protocol Interconnect™ (VPI) versions are available with InfiniBand, Ethernet, Data Center Ethernet, and FCoE support.

BENEFITS
■ Industry-leading throughput and latency performance
■ I/O consolidation through support for TCP/IP, FC over Ethernet, and RDMA over Ethernet transport protocols on a single adapter
■ Improved productivity and efficiency through VM scaling and superior server utilization
■ Industry-standard SR-IOV support, delivering VM protection and granular levels of I/O service to applications
■ High availability and high performance for data center networking
■ Software compatibility with standard TCP/UDP/IP and iSCSI stacks
■ A high level of silicon integration and a no-external-memory design, providing low power, low cost, and high reliability

TARGET APPLICATIONS
■ Data center virtualization using VMware® ESX Server or Citrix XenServer™
■ Enterprise data center applications
■ I/O consolidation (single unified wire for networking, storage, and clustering)
■ Storage consolidation using FCoE
■ Future-proofing Web 2.0 data centers and cloud computing
■ Video streaming
■ Accelerating backup and restore operations
COPPER ADAPTERS
                MNEH28-XTC     MNEH29-XTC
Ports           2 x 10GigE     2 x 10GigE
ASIC            ConnectX EN    ConnectX EN
Connector       CX4            CX4
Cabling Type    CX4 copper     CX4 copper
Host Bus        PCIe 2.0, x8   PCIe 2.0, x8
Speed           2.5GT/s        5.0GT/s
Features        Stateless Offload, FCoE Offload, Priority Flow Control, SR-IOV
OS Support      RHEL, SLES, W2K3, W2K8, FreeBSD, VMware ESX 3.5
RoHS            Yes            Yes

FIBER OPTIC ADAPTERS
                MNPH28B-XTC    MNPH29B-XTC
Ports           2 x 10GigE     2 x 10GigE
ASIC            ConnectX EN    ConnectX EN
Connector       SFP+           SFP+
Cabling Type    Twinax or direct attach copper; SR fiber optic (MMF 62.5/50um, up to 300m) via MFM1T02A-SR SFP+ optics module
Host Bus        PCIe 2.0, x8   PCIe 2.0, x8
Speed           2.5GT/s        5.0GT/s
Features        Stateless Offload, FCoE Offload, Priority Flow Control, SR-IOV
OS Support      RHEL, SLES, W2K3, W2K8, FreeBSD, VMware ESX 3.5
RoHS            Yes            Yes

10GBASE-T ADAPTERS
                MNTH18-XTC     MNTH28B-XTC    MNTH29B-XTC
Ports           1 x 10GigE     2 x 10GigE     2 x 10GigE
ASIC            ConnectX EN    ConnectX EN    ConnectX EN
Connector       RJ45           RJ45           RJ45
Cabling Type    CAT5E up to 55m, CAT6 up to 55m, CAT6A up to 100m
Host Bus        PCIe 2.0, x8   PCIe 2.0, x8   PCIe 2.0, x8
Speed           2.5GT/s        2.5GT/s        5.0GT/s
Features        Stateless Offload, FCoE Offload, Priority Flow Control, SR-IOV
OS Support      RHEL, SLES, W2K3, W2K8, FreeBSD, VMware ESX 3.5
RoHS            Yes            Yes            Yes

OPERATING SYSTEMS/DISTRIBUTIONS
– Novell SuSE Linux Enterprise Server (SLES), Red Hat Enterprise Linux (RHEL), and other Linux distributions
– Microsoft Windows Server 2003/2008, Windows Compute Cluster Server 2003
– VMware ESX 3.5
– Citrix XenServer 4.1

MANAGEMENT
– MIB, MIB-II, MIB-II Extensions, RMON, RMON 2
– Configuration and diagnostic tools
Mellanox ConnectX® EN Ethernet Network Interface Cards (NICs) deliver high-bandwidth, industry-leading 10GigE connectivity with stateless offloads for converged fabrics in high-performance computing, enterprise data center, and embedded environments. Clustered databases, web infrastructure, and IP video servers are just a few example applications that achieve significant throughput and latency improvements, resulting in faster access, real-time response, and more users per server. ConnectX EN improves network performance by increasing available bandwidth to the CPU and delivering enhanced performance, especially in virtualized server environments.
Optimal Price/Performance
ConnectX EN 10GigE removes the I/O bottlenecks in mainstream servers that limit application performance. Servers supporting PCI Express 2.0 at 5GT/s can fully utilize both 10Gb/s ports, balancing the I/O requirements of these high-end servers. Hardware-based stateless offload engines handle the TCP/UDP/IP segmentation, reassembly, and checksum calculations that would otherwise burden the host processor. These offload technologies are fully compatible with Microsoft RSS and NetDMA. Total cost of ownership is optimized by maintaining an end-to-end Ethernet network with existing operating systems and applications.
Mellanox provides 10GigE adapters suitable for all network environments. The dual-port SFP+ adapter supports 10GBASE-SR, 10GBASE-LR, and direct attached copper cables, providing the flexibility to connect over short, medium, and long distances. Single- and dual-port 10GBASE-T adapters provide easy connections up to 100m over familiar UTP wiring. The dual-port 10GBASE-CX4 adapter, with its powered connectors, can utilize active copper and fiber cables as well as passive copper.
Investment Protection
New enhancements such as Data Center Ethernet (DCE) are being added to Ethernet for better I/O consolidation. ConnectX EN enables servers and storage systems to utilize Low Latency Ethernet (LLE), which provides efficient RDMA transport over Layer 2 Ethernet with IEEE 802.1Q VLAN tagging and Ethernet Priority-based Flow Control (PFC). The LLE software stack maintains existing and future application compatibility for bandwidth- and latency-sensitive clustering applications. With link-level interoperability in existing Ethernet infrastructure, network administrators can leverage existing data center fabric management solutions.
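To make the 802.1Q mechanics concrete: the VLAN tag is a 4-byte field inserted after the source MAC, consisting of the 0x8100 TPID followed by a 16-bit TCI that packs a 3-bit priority (PCP, the field PFC uses to select which traffic class to pause), a drop-eligible bit, and a 12-bit VLAN ID. A minimal sketch of building the tag:

```python
import struct

def vlan_tag(pcp: int, dei: int, vid: int) -> bytes:
    """Build the 4-byte IEEE 802.1Q tag: TPID 0x8100 + 16-bit TCI."""
    assert 0 <= pcp < 8 and dei in (0, 1) and 0 <= vid < 4096
    tci = (pcp << 13) | (dei << 12) | vid   # priority | drop-eligible | VLAN ID
    return struct.pack("!HH", 0x8100, tci)

# Priority-5 traffic on VLAN 100:
tag = vlan_tag(pcp=5, dei=0, vid=100)
print(tag.hex())  # 8100a064
```

The 3-bit PCP gives eight traffic classes per link, which is what lets PFC pause, say, storage traffic without stalling clustering traffic on the same wire.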
ConnectX EN protects investments by providing hardware support today for DCE and Fibre Channel over Ethernet (FCoE), as well as technologies such as SR-IOV that enhance virtual machine performance on virtualized servers.
I/O Virtualization
ConnectX support for hardware-based I/O virtualization provides dedicated adapter resources and guaranteed isolation and protection for virtual machines (VMs) within the server. I/O virtualization with ConnectX gives data center managers better server utilization and LAN and SAN unification while reducing cost, power, and cabling complexity.
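As an illustration of how SR-IOV is typically driven from the host side, modern Linux kernels (3.8+, well after this brochure's driver set) expose a generic sysfs knob for enabling virtual functions on a capable NIC. The interface name `eth0` below is a hypothetical placeholder, and the write requires root plus BIOS/NIC SR-IOV support; this is a generic-Linux sketch, not a description of the ConnectX EN driver itself:

```python
from pathlib import Path

def sriov_numvfs_path(ifname: str) -> Path:
    """sysfs knob controlling how many SR-IOV VFs the physical function exposes."""
    return Path("/sys/class/net") / ifname / "device" / "sriov_numvfs"

def enable_vfs(ifname: str, count: int) -> None:
    """Enable `count` virtual functions (requires root and SR-IOV support)."""
    knob = sriov_numvfs_path(ifname)
    knob.write_text("0")          # the kernel requires resetting to 0 first
    knob.write_text(str(count))   # each VF then appears as its own PCI device

print(sriov_numvfs_path("eth0"))  # /sys/class/net/eth0/device/sriov_numvfs
```

Each enabled VF shows up as an independent PCI device that can be passed through to a VM, which is what gives the guest dedicated adapter resources without hypervisor involvement in the data path.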
Quality of Service
Resource allocation per application or per VM is provided and protected by the advanced QoS supported by ConnectX EN. Service levels for multiple traffic types can be based on IETF DiffServ or IEEE 802.1p/Q, along with the Data Center Ethernet enhancements, allowing system administrators to prioritize traffic by application, virtual machine, or protocol. This powerful combination of QoS and prioritization provides fine-grained control of traffic, ensuring that applications run smoothly in today's complex environments.
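The DiffServ code points the paragraph refers to are small integer fields carried in every packet: the 6-bit DSCP occupies the upper bits of the IPv4 TOS (or IPv6 Traffic Class) byte, and well-known values such as EF (Expedited Forwarding) mark latency-sensitive traffic. A quick sketch of the arithmetic, using standard code points:

```python
# DiffServ: the 6-bit DSCP sits in the upper bits of the IPv4 TOS byte.
def dscp_to_tos(dscp: int) -> int:
    """Map a DSCP value to the full 8-bit TOS/Traffic Class byte."""
    assert 0 <= dscp < 64
    return dscp << 2   # low 2 bits are ECN, left zero here

EF = 46        # Expedited Forwarding (RFC 3246): low-latency, low-jitter traffic
CS5 = 5 << 3   # Class Selector 5: class-based priority, akin to 802.1p levels

print(hex(dscp_to_tos(EF)))  # 0xb8
print(dscp_to_tos(CS5))      # 160
```

A switch or NIC classifying on these bits can then map each class to its own queue and service level, which is the mechanism behind per-application or per-VM prioritization.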
Software Support
ConnectX EN is supported by a full suite of software drivers for Microsoft Windows, Linux distributions, VMware, and Citrix XenServer. ConnectX EN supports stateless offload and is fully interoperable with standard TCP/UDP/IP stacks. Stateless offload connections are also easy to scale using multiple adapters to reach the desired level of performance and fault tolerance. ConnectX EN supports various management interfaces and offers a rich set of configuration and management tools across operating systems.
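One of the stateless offloads listed below, Receive Side Scaling (RSS), works by hashing each packet's flow 4-tuple and using the hash to pick a receive queue, so every packet of a given flow lands on the same queue (and thus the same CPU core). Real RSS hardware uses a keyed Toeplitz hash over the packed header fields; CRC32 stands in below purely to show the mechanism:

```python
import zlib

NUM_QUEUES = 32  # ConnectX EN supports RSS with up to 32 queues

def rss_queue(src_ip: str, dst_ip: str, src_port: int, dst_port: int) -> int:
    """Pick a receive queue from the flow 4-tuple (simplified stand-in hash)."""
    key = f"{src_ip}:{src_port}-{dst_ip}:{dst_port}".encode()
    return zlib.crc32(key) % NUM_QUEUES

# All packets of one flow land on the same queue, preserving in-order delivery
# per flow while spreading distinct flows across cores:
q1 = rss_queue("10.0.0.1", "10.0.0.2", 40000, 80)
q2 = rss_queue("10.0.0.1", "10.0.0.2", 40000, 80)
print(q1 == q2)  # True
```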
ETHERNET
– IEEE Std 802.3ae 10 Gigabit Ethernet
– IEEE Std 802.3ak 10GBASE-CX4
– IEEE Std 802.3an 10GBASE-T
– IEEE Std 802.3ad Link Aggregation and Failover
– IEEE Std 802.3x Pause
– IEEE Std 802.1Q VLAN tags
– IEEE Std 802.1p Priorities
– Multicast
– Jumbo frame support (10KB)
– 128 MAC/VLAN addresses per port
TCP/UDP/IP STATELESS OFFLOAD
– TCP/UDP/IP checksum offload
– TCP Large Send (<64KB) or Giant Send (64KB–16MB) offload for segmentation
– Receive Side Scaling (RSS) up to 32 queues
– Line rate packet filtering
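The checksum calculation the NIC offloads is the standard 16-bit one's-complement Internet checksum (RFC 1071), used by IPv4, TCP, and UDP. A reference implementation of what the hardware computes, using the worked example from RFC 1071 itself:

```python
def internet_checksum(data: bytes) -> int:
    """RFC 1071 16-bit one's-complement Internet checksum."""
    if len(data) % 2:
        data += b"\x00"                      # pad odd-length input
    total = 0
    for i in range(0, len(data), 2):         # sum 16-bit big-endian words
        total += (data[i] << 8) | data[i + 1]
    while total >> 16:                       # fold carries back into 16 bits
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF                   # one's complement of the sum

# Worked example from RFC 1071, section 3:
print(hex(internet_checksum(bytes.fromhex("0001f203f4f5f6f7"))))  # 0x220d
```

Doing this per packet in software costs CPU cycles proportional to throughput, which is precisely why offloading it to the adapter matters at 10Gb/s line rate.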
ADDITIONAL CPU OFFLOADS
– Traffic steering across multiple cores
– Intelligent interrupt coalescence
– Full support for Intel I/OAT
– Compliant with Microsoft RSS and NetDMA
HARDWARE-BASED I/O VIRTUALIZATION
– Single Root IOV
– Address translation and protection
– Multiple queues per virtual machine
– VMware NetQueue support
– PCI-SIG IOV compliant
CPU
– AMD X86, X86_64
– Intel X86, EM64T, IA-32, IA-64
– SPARC
– PowerPC, MIPS, and Cell
PCI EXPRESS INTERFACE
– PCIe Base 2.0 compliant, 1.1 compatible
– 2.5GT/s or 5.0GT/s link rate x8 (20+20Gb/s or 40+40Gb/s bidirectional bandwidth)
– Fits x8 or x16 slots
– Support for MSI/MSI-X mechanisms
CONNECTIVITY
– Interoperable with 10GigE switches and routers
– 20m+ of copper CX4 cable, with powered connectors supporting active copper or fiber cables (MNEH28/9)
– 100m of Cat6a and Cat7 UTP, 55m of Cat5e and Cat6 (MNTH18/28/29)
– 100m (OM-2) or 300m (OM-3) of multimode fiber cable, duplex LC connector from SFP+ optics module (MNPH28B/29B)
– 10m direct attached copper cable through SFP+ connector (MNPH28B/29B)
– Consult the factory for 10GBASE-LR availability
SAFETY
– USA/Canada: cTUVus UL
– EU: IEC60950
– Germany: TUV/GS
– International: CB Scheme
EMC (EMISSIONS)
– USA: FCC, Class A
– Canada: ICES, Class A
– EU: EN55022, Class A
– EU: EN55024, Class A
– EU: EN61000-3-2, Class A
– EU: EN61000-3-3, Class A
– Japan: VCCI, Class A
– Taiwan: BSMI, Class A
ENVIRONMENTAL
– EU: IEC 60068-2-64: Random Vibration
– EU: IEC 60068-2-29: Shocks, Type I / II
– EU: IEC 60068-2-32: Fall Test
OPERATING CONDITIONS
– Operating temperature: 0°C to 55°C
– Air flow: 200LFM @ 55°C
– Requires 3.3V and 12V supplies
350 Oakmead Parkway, Suite 100, Sunnyvale, CA 94085 | Tel: 408-970-3400 | Fax: 408-970-3403 | www.mellanox.com
© Copyright 2009. Mellanox Technologies. All rights reserved. Mellanox, ConnectX, InfiniBlast, InfiniBridge, InfiniHost, InfiniRISC, InfiniScale, and InfiniPCI are registered trademarks of Mellanox Technologies, Ltd. Virtual Protocol Interconnect and BridgeX are trademarks of Mellanox Technologies, Ltd. All other trademarks are property of their respective owners.