
Page 1: The Case for PCI Express on the Backplane

The Case for PCI Express® on the Backplane

Miguel Rodriguez

Sr. Product Marketing Engineer

PLX Technology, Inc.

Copyright © 2010, PCI-SIG, All Rights Reserved

Page 2: The Case for PCI Express on the Backplane


Disclaimer

All opinions, judgments, recommendations, etc. that are presented herein are the opinions of the presenter of the material and do not necessarily reflect the opinions of the PCI-SIG®.

The information in this presentation refers to a specification still in the development process. All material is subject to change before the specification is released.

Page 3: The Case for PCI Express on the Backplane


Overview

Today’s backplane requirements are more demanding than ever before

The de-facto backplane technology, Gigabit Ethernet (GbE), can no longer keep up with growing demands

Other technologies (10GbE, InfiniBand, and PCIe) are in line as its successor

PCIe has evolved from supporting a simple IO model to offering the features needed to become the ideal backplane interconnect

Page 4: The Case for PCI Express on the Backplane


Backplane Requirements

Need to support inter-processor communication (IPC)

MPI and others

Access to an external local area network (LAN)

Gigabit Ethernet is the most popular interconnect in use today

Access to an external storage area network (SAN)

Such as Fibre Channel, SAS, etc.


Page 5: The Case for PCI Express on the Backplane


Backplane Requirements

Need support for a high-performance fabric

High throughput: 10 Gbps and above

Low latency: single-digit microseconds

Reliable physical interface

For deploying blade servers

– Over 20 inches of FR4

Cabling between rack-mounted servers

– 5-meter cables on average


Page 6: The Case for PCI Express on the Backplane


Traditional Backplane


[Diagram: Traditional multi-interconnect backplane. Each processing node carries three adapters: IPC (10GbE or InfiniBand adapter), FC (Fibre Channel adapter), and LAN (Gigabit adapter), connecting across the backplane to separate IPC, SAN, and LAN switch fabrics.]

Page 7: The Case for PCI Express on the Backplane


Backplane Technologies

Three candidates are ready to stake a claim as the unified backplane technology

PCI Express

10 Gigabit Ethernet

InfiniBand

Each provides a set of capabilities that addresses the backplane requirements


Page 8: The Case for PCI Express on the Backplane


10GbE on the Backplane

Ethernet is well accepted and currently used for both LAN and IPC

Gigabit for LAN

10 Gigabit for IPC

However, Ethernet is not ideal for storage traffic

It tends to drop packets under congestion (it is lossy)

Upper-layer protocols (TCP/IP) are used to improve reliability, at the cost of added protocol overhead


Page 9: The Case for PCI Express on the Backplane


10GbE on the Backplane

From a storage perspective, the iSCSI layer is built atop TCP/IP

Provides a more reliable mechanism over Ethernet

The complexity of the iSCSI protocol requires a dedicated iSCSI adapter at the server

The adapter handles the actual storage communication

Supports iSCSI-based storage arrays only

Requires a gateway device to reach other storage infrastructures

This usage model contradicts the definition of a unified IO interface


Page 10: The Case for PCI Express on the Backplane


10GbE on the Backplane


[Diagram: 10GbE backplane. Each processing node uses a 10GbE adapter for IPC over a 10GbE switch fabric; the fabric connects to the LAN, to iSCSI storage, and to the FC SAN through an iSCSI-to-FC gateway.]

Page 11: The Case for PCI Express on the Backplane


10GbE on the Backplane: Converged Enhanced Ethernet (CEE)

Recent efforts to improve Ethernet reliability (prevent packet drops)

Allows the FC protocol to run directly over Ethernet

– Fibre Channel over Ethernet (FCoE)

– The full benefits of FCoE are not realized unless it is offloaded to hardware

Both models (CEE and FCoE) are still in early stages

Not widely deployed, as most infrastructure is FC based

Requires a new type of FCoE switch capable of handling both FCoE and TCP/IP

Uses Converged Network Adapters (CNAs) to create a unified fabric

Supports both functions simultaneously (FCoE and 10GbE)


Page 12: The Case for PCI Express on the Backplane


10GbE on the Backplane


[Diagram: Converged Enhanced Ethernet backplane. Each processing node uses a single CNA (10GbE CEE adapter) over a CEE switch fabric; an FCoE switch bridges the fabric to the LAN and to the FC SAN.]

Page 13: The Case for PCI Express on the Backplane


InfiniBand on the Backplane

High-throughput, low-latency technology

Ideal for IPC and commonly deployed in High Performance Computing (HPC)

LAN connectivity requires a TCP/IP over IB (IPoIB) gateway

SAN connectivity requires a Fibre Channel (FC) to InfiniBand (IB) gateway

Additional overhead and cost to connect to the LAN and SAN


Page 14: The Case for PCI Express on the Backplane


InfiniBand on the Backplane


[Diagram: InfiniBand backplane. Each processing node uses an InfiniBand adapter for IPC over an InfiniBand switch fabric; an IPoIB gateway provides LAN connectivity and an FC-to-IB gateway provides SAN connectivity.]

Page 15: The Case for PCI Express on the Backplane


Traditional PCIe System

Parent/Child model

One CPU to many IO endpoints

CPU manages all the IO in a system

Single address domain


[Diagram: Traditional PCIe system. Within one processing node, the CPU and northbridge (NB) sit above a PCIe switch that fans out to multiple IO endpoints.]

Page 16: The Case for PCI Express on the Backplane


PCI Express on the Backplane

No longer just a CPU-to-IO interconnect

Supports more advanced models

Implements advanced mechanisms that enable IPC and LAN traffic directly on the PCIe fabric

Via address translation, doorbell, and scratchpad mechanisms

Supports a shared IO model

Only one fabric to manage

Supported natively on the CPU

No need to bridge to other protocols for IPC


Page 17: The Case for PCI Express on the Backplane


Non-Transparent Bridging

Vendor-specific feature built on top of PCIe

Implemented as an endpoint pair inside a PCIe switch

– NT-v: the endpoint seen by the host on the upstream (virtual) side

– NT-l: the endpoint seen by the host on the downstream (link) side

Presents a Type 0 configuration header to software (see the probe sketch below)

– Configuration transactions terminate at the NT endpoint


[Diagram: Software view of a three-port transparent PCIe switch (an upstream-port P2P bridge and two downstream-port P2P bridges on a virtual PCI bus), alongside the same three-port switch with NT enabled, where one port is presented to software as an NT-v/NT-l endpoint pair instead of a P2P bridge.]
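To make the Type 0 point concrete, here is a minimal sketch (Linux-specific, reading the device's configuration space through sysfs; the bus address given on the command line and the interpretation of BAR0 as an NT memory window are illustrative assumptions) of software checking whether a function presents the Type 0 endpoint header that an NT port exposes, rather than the Type 1 bridge header of a transparent switch port:

```c
/* probe_nt.c: read the standard PCI configuration header of a device
 * through Linux sysfs and report whether it presents a Type 0
 * (endpoint) or Type 1 (bridge) header.
 * Build: cc -o probe_nt probe_nt.c
 * Usage: ./probe_nt 0000:02:00.0
 */
#include <stdint.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <domain:bus:dev.fn>\n", argv[0]);
        return 1;
    }

    char path[128];
    snprintf(path, sizeof path, "/sys/bus/pci/devices/%s/config", argv[1]);

    FILE *f = fopen(path, "rb");
    if (!f) {
        perror(path);
        return 1;
    }

    uint8_t cfg[64];   /* first 64 bytes: the standard configuration header */
    if (fread(cfg, 1, sizeof cfg, f) != sizeof cfg) {
        fprintf(stderr, "short read from %s\n", path);
        fclose(f);
        return 1;
    }
    fclose(f);

    uint16_t vid  = cfg[0x00] | (cfg[0x01] << 8);
    uint16_t did  = cfg[0x02] | (cfg[0x03] << 8);
    uint8_t  htyp = cfg[0x0E] & 0x7F;   /* 0 = Type 0 endpoint, 1 = Type 1 bridge */
    uint32_t bar0 = (uint32_t)cfg[0x10] | ((uint32_t)cfg[0x11] << 8) |
                    ((uint32_t)cfg[0x12] << 16) | ((uint32_t)cfg[0x13] << 24);

    printf("%s  %04x:%04x  header type %u (%s)\n", argv[1], vid, did, htyp,
           htyp == 0 ? "Type 0 endpoint, as an NT port presents"
                     : "Type 1 P2P bridge, as a transparent port presents");
    if (htyp == 0)
        printf("BAR0 = 0x%08x (for an NT endpoint, the base of a memory window)\n",
               bar0);
    return 0;
}
```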

Page 18: The Case for PCI Express on the Backplane


PCI Express on the Backplane

Non-Transparent support enables clustering

Implements address translation mechanisms to provide address isolation between hosts

Uses doorbell registers to generate interrupts between hosts and scratchpad registers to pass small messages (see the sketch below)

Connects to an external PCIe switch fabric to which other nodes and IO are also connected

NT hardware serves as a software platform

The NT driver can expose industry-standard APIs

– TCP/IP, sockets, etc.
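The doorbell/scratchpad flow above can be sketched roughly as follows. This is illustrative only: the register offsets, bit assignments, and write-1-to-clear behavior are hypothetical placeholders rather than an actual PLX (or any other vendor's) NT register map, and a real NT driver would obtain the mapped register BAR and interrupt hookup from the OS:

```c
#include <stdint.h>

/* Hypothetical NT register layout, for illustration only. "nt_regs" is
 * assumed to point at the NT port's memory-mapped register BAR. */
#define NT_SCRATCHPAD(n)   (0x100 + 4 * (n))   /* hypothetical scratchpad array    */
#define NT_PEER_DOORBELL   0x140               /* hypothetical: write sets peer DB */
#define NT_LOCAL_DOORBELL  0x144               /* hypothetical: read/clear local DB*/

static inline void reg_write(volatile uint8_t *base, uint32_t off, uint32_t val)
{
    *(volatile uint32_t *)(base + off) = val;
}

static inline uint32_t reg_read(volatile uint8_t *base, uint32_t off)
{
    return *(volatile uint32_t *)(base + off);
}

/* Sender: leave a small message in scratchpads, then ring the peer's
 * doorbell so the remote host takes an interrupt and reads it. */
void nt_send_notification(volatile uint8_t *nt_regs, uint32_t msg_type, uint32_t msg_arg)
{
    reg_write(nt_regs, NT_SCRATCHPAD(0), msg_type);
    reg_write(nt_regs, NT_SCRATCHPAD(1), msg_arg);
    reg_write(nt_regs, NT_PEER_DOORBELL, 1u << 0);    /* ring doorbell bit 0 */
}

/* Receiver (called from the doorbell interrupt handler): read and
 * acknowledge the doorbell, then pick up the scratchpad message. */
void nt_handle_doorbell(volatile uint8_t *nt_regs)
{
    uint32_t pending = reg_read(nt_regs, NT_LOCAL_DOORBELL);
    reg_write(nt_regs, NT_LOCAL_DOORBELL, pending);   /* write-1-to-clear (assumed) */

    if (pending & (1u << 0)) {
        uint32_t msg_type = reg_read(nt_regs, NT_SCRATCHPAD(0));
        uint32_t msg_arg  = reg_read(nt_regs, NT_SCRATCHPAD(1));
        (void)msg_type; (void)msg_arg;   /* dispatch to the NT driver's protocol */
    }
}
```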


Page 19: The Case for PCI Express on the Backplane


PCI Express on the Backplane


[Diagram: PCIe backplane with direct IPC. Each processing node connects through a PLX PCIe NT adapter to a PCI Express switch fabric; 10GbE and FC adapters on the fabric provide LAN and SAN access, and IPC runs directly over PCIe. Note: PCIe NT functionality can be part of the PCI Express switch fabric.]

Page 20: The Case for PCI Express on the Backplane


PCI Express on the Backplane

Shared IO model using a Non-Transparent port

The NT port and its device driver provide the requisite support for IO sharing

Does not require MR-IOV; uses SR-IOV endpoints instead

The shared adapter appears private to each processing node

Avoids over-provisioning of IO resources

Communication to other nodes

Logically similar to using a dedicated IPC adapter

Physically, the adapter resides remotely


Page 21: The Case for PCI Express on the Backplane


PCI Express on the Backplane


[Diagram: Shared IO over the PCIe backplane. Processing nodes connect through PLX PCIe NT adapters to a PCI Express switch fabric; shared 10GbE and FC adapters on the fabric provide LAN and SAN access, and the shared IO device moves data between hosts.]

Page 22: The Case for PCI Express on the Backplane


Performance

Low latency and high throughput make for a good performance foundation

Additionally, the interconnect must offer an efficient interface in order to take advantage of the hardware's capabilities

The following compares the hardware capabilities and application usage models of:

10GbE

InfiniBand

PCI Express


Page 23: The Case for PCI Express on the Backplane


Hardware Capabilities

10GbE offers evolutionary scalability with minimal disruption

Higher throughput (gigabit to 10 gigabit)

10GbE falls short in two important areas

Latency

– A lossy interconnect can drop packets, causing unpredictable latencies

– CEE enhancements may alleviate congestion; the improvement in latency is not yet clear

– Current latency offered by InfiniBand and PCIe is under 2 µs

Jitter


Page 24: The Case for PCI Express on the Backplane


Hardware Capabilities

InfiniBand

Throughput

– Can sustain high data rates (40 Gbps QDR; 80 Gbps EDR)

However, throughput is limited by the PCIe interface's capability

– PCIe 2.0 x8 cannot sustain an IB dual-port interface at 80 Gbps EDR (see the calculation below)

Offers a low-latency interface

Serves as a PCIe-to-IB bridge
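A back-of-the-envelope check of the PCIe 2.0 x8 claim above, assuming only the 8b/10b line code that PCIe 2.0 uses and ignoring TLP/DLLP protocol overhead:

```c
#include <stdio.h>

/* Can a PCIe 2.0 x8 link feed a dual-port IB interface quoted at 80 Gbps?
 * PCIe 2.0 runs at 5 GT/s per lane with 8b/10b encoding (80% efficiency);
 * packet/protocol overhead is ignored in this rough check. */
int main(void)
{
    const double gtps_per_lane = 5.0;          /* PCIe 2.0 signaling rate */
    const double encoding      = 8.0 / 10.0;   /* 8b/10b line code        */
    const int    lanes         = 8;

    double gbps_per_dir = gtps_per_lane * encoding * lanes;   /* per direction */
    printf("PCIe 2.0 x8: %.0f Gbps per direction (~%.1f GB/s)\n",
           gbps_per_dir, gbps_per_dir / 8.0);
    printf("Quoted dual-port IB rate: 80 Gbps -> the PCIe link is the bottleneck\n");
    return 0;
}
```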


Page 25: The Case for PCI Express on the Backplane


Hardware Capabilities

PCI Express

High throughput with PCIe 3.0

– 8 GT/s per lane x16 = 128 GT/s aggregate (see the calculation below)

Low latency

– PCIe switch latency is less than 130 ns

– End-to-end latency is typically less than 1 µs

PCIe is native on the processor/chipset

– No need to terminate inside the box

– Can extend outside the system, allowing the user to take advantage of its latency and bandwidth potential

– Allows direct read/write of remote memory
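For comparison, the effective data rate behind the 8 GT/s x16 figure above, using the 128b/130b line code of PCIe 3.0 and again ignoring protocol overhead:

```c
#include <stdio.h>

/* PCIe 3.0 uses 128b/130b encoding, so nearly all of the 8 GT/s per lane
 * carries data; DLLP/TLP protocol overhead is ignored here. */
int main(void)
{
    const double gtps_per_lane = 8.0;            /* PCIe 3.0 signaling rate */
    const double encoding      = 128.0 / 130.0;  /* 128b/130b line code     */
    const int    lanes         = 16;

    double gbps_per_dir = gtps_per_lane * encoding * lanes;
    printf("PCIe 3.0 x16: %.1f Gbps (~%.2f GB/s) per direction\n",
           gbps_per_dir, gbps_per_dir / 8.0);
    return 0;
}
```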


Page 26: The Case for PCI Express on the Backplane


Application Usage Model

An application running on a server can transfer data to/from a remote server in one of two ways

Messaging model

The application writes to a common buffer on the remote node

Data is then copied locally from the common buffer into the target application's buffer (see the sketch below)

Predominantly used by Ethernet

Also supported by InfiniBand
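A schematic sketch of the messaging model described above (generic C, not tied to any particular NIC or driver API), showing the extra receive-side copy that the RDMA and direct addressing models avoid:

```c
#include <stddef.h>
#include <string.h>

#define COMMON_BUF_SIZE 4096

/* A common (shared) receive buffer on the receiving node. */
struct common_buffer {
    size_t len;                       /* bytes delivered by the fabric */
    char   data[COMMON_BUF_SIZE];
};

/* Step 1: the interconnect (or its driver) lands the incoming message
 * in the common buffer on the receiving node. */
void fabric_deliver(struct common_buffer *rx, const void *msg, size_t len)
{
    if (len > COMMON_BUF_SIZE)
        len = COMMON_BUF_SIZE;
    memcpy(rx->data, msg, len);
    rx->len = len;
}

/* Step 2: the receiving application copies the message from the common
 * buffer into its own buffer before using it -- the local copy the
 * slide refers to. */
size_t app_receive(const struct common_buffer *rx, void *app_buf, size_t app_len)
{
    size_t n = rx->len < app_len ? rx->len : app_len;
    memcpy(app_buf, rx->data, n);
    return n;
}
```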


Page 27: The Case for PCI Express on the Backplane


Application Usage Model

Remote Direct Memory Access (RDMA)

Enables an application to directly access data in the memory of a remote server

Eliminates data copies, giving low latency and high bandwidth

Natively supported by InfiniBand (see the fragment below)

Supported by Ethernet via iWARP

– RDMA at the TCP/IP level

– The complete protocol needs to be offloaded to hardware to gain any meaningful performance
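As a rough illustration of the RDMA model, a minimal libibverbs fragment that posts a one-sided RDMA write; queue-pair setup, memory registration, and the out-of-band exchange of the remote address and rkey are assumed to have been done already and are omitted:

```c
#include <infiniband/verbs.h>
#include <stdint.h>
#include <string.h>

/* Post a one-sided RDMA WRITE that places local_buf directly into the
 * remote node's memory, with no receive-side copy.  qp is an established
 * RC queue pair and mr is the registration covering local_buf. */
int post_rdma_write(struct ibv_qp *qp, struct ibv_mr *mr,
                    void *local_buf, size_t len,
                    uint64_t remote_addr, uint32_t rkey)
{
    struct ibv_sge sge = {
        .addr   = (uintptr_t)local_buf,
        .length = (uint32_t)len,
        .lkey   = mr->lkey,
    };

    struct ibv_send_wr wr;
    memset(&wr, 0, sizeof wr);
    wr.opcode              = IBV_WR_RDMA_WRITE;
    wr.sg_list             = &sge;
    wr.num_sge             = 1;
    wr.send_flags          = IBV_SEND_SIGNALED;   /* ask for a completion entry */
    wr.wr.rdma.remote_addr = remote_addr;         /* target address on the peer */
    wr.wr.rdma.rkey        = rkey;                /* peer's registered-memory key */

    struct ibv_send_wr *bad_wr = NULL;
    return ibv_post_send(qp, &wr, &bad_wr);       /* 0 on success */
}
```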


Page 28: The Case for PCI Express on the Backplane


Application Usage Model

Direct Addressing Mode

A unique mechanism supported only by PCIe

Allows a local server to access data in the memory of a remote server

The Non-Transparent approach provides the translation mechanism for remote memory access (see the sketch below)

From the sender's perspective, the memory operation targets its own address space

From the receiver's perspective, the memory operation originates within its own address space

In the context of IPC, direct addressing mode is considered the more efficient approach
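A sketch of what direct addressing can look like to the sending host, assuming the vendor's NT driver has already programmed the address translation so that an NT BAR aperture maps onto a buffer in the remote host's memory; the sysfs resource path and window size below are hypothetical:

```c
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

/* Hypothetical NT BAR aperture exposed as a mappable sysfs resource. */
#define NT_BAR_RESOURCE "/sys/bus/pci/devices/0000:02:00.1/resource2"
#define NT_WINDOW_SIZE  (1 << 20)   /* hypothetical 1 MB aperture */

int main(void)
{
    int fd = open(NT_BAR_RESOURCE, O_RDWR | O_SYNC);
    if (fd < 0) { perror(NT_BAR_RESOURCE); return 1; }

    /* Map the NT aperture into this process.  From the sender's point of
     * view this is just local address space; the NT hardware translates
     * stores into writes of the remote host's memory. */
    volatile uint8_t *win = mmap(NULL, NT_WINDOW_SIZE,
                                 PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (win == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

    const char msg[] = "hello, remote node";
    memcpy((void *)win, msg, sizeof msg);   /* lands in the remote host's buffer */

    munmap((void *)win, NT_WINDOW_SIZE);
    close(fd);
    return 0;
}
```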


Page 29: The Case for PCI Express on the Backplane


Summary

PCIe provides an attractive value proposition

Extremely affordable with exceptional performance

Integration in mainstream processor chips

Key Features that distinguish PCIe

High performance in terms of bandwidth and latency

Support for IPC using Non-Transparent shared memory

IO virtualization in multi-host environments

Page 30: The Case for PCI Express on the Backplane


Thank you for attending the PCI-SIG Developers Conference 2010.

For more information please go to www.pcisig.com