PowerEdge M-Series Blades Sales – Transcript
DELL CONFIDENTIAL. PARTNER USE ONLY: FOR PARTNER UNDER PARTNER DIRECT AGREEMENT ONLY
PowerEdge M-Series Blades
Sales & Technical Training for Channel Partners
To Blade or Not To Blade

A good time to consider Dell blades:
• Need better density – Running out of room in your data center and need to consolidate racks.
• Need lower power and cooling – High cost and/or limited power. Dell has some of the most energy-efficient blades available.1
• Want fast deployment – Dell has the fastest and easiest blades to deploy.2
• Have 6 servers or more – Choose the M805/M905 for memory, I/O & expandability, or the M600/M605 for density.
• HPCC – Low power and great density for large deployments. Dell has unrivaled I/O capabilities for HPCC needs.

When to think twice before choosing blades:
• Need lots of internal storage – 3+ local hard drives
• Require standard PCI slots – Legacy or other PCI slots
• Have 4 servers or fewer – May not be cost effective

1 Principled Technologies, SPECjbb performance and power consumption on Dell, HP, and IBM blade servers, December 2007
2 Out-of-box comparison between HP, IBM and Dell by Principled Technologies, December 2007, http://www.dell.com/downloads/global/products/pedge/en/pe_blades_outofbox_comp.pdf
Who Can Use Dell Blades? What Applications Are Suitable?

Medium to large enterprises – Customers requiring the most energy-efficient systems, customers challenged by server management and deployment, customers needing flexibility to scale effectively, and customers challenged with data center sprawl and needing cabling and management efficiencies.

Research institutions & HPCC customers – Customers needing density: more processors per rack U for the utmost in high-density, high-performance, energy-efficient computing.

Typical applications

M600 & M605
– Financial & brokerage applications
– Virtualized/consolidated workloads
– Web and network infrastructure
– Front-end SAN compute nodes (SQL, Oracle, Exchange, etc.)

M805 & M905
– High-end databases and heavy virtualization deployments
– Solutions requiring fully redundant, highly available connections
– Customers needing servers with large, affordable memory pools
– Great for Oracle, SAP, MS Exchange, MS SQL and other high-end databases
#1 Virtualization Blade (16-Core): Dell PowerEdge M905¹
USE LESS POWER: 12% more energy efficient versus the competition²
BETTER PERFORMANCE: Up to 25% better performance/watt versus the competition²
FLEXIBLE: FlexI/O & FlexAddress

DELL POWEREDGE SERVER OVERVIEW – BLADE SERVERS
• Great solution where space is at a premium
• Additional storage provided by external SCSI/SAS/SATA or Fibre Channel enclosures

¹ According to VMmark scores for 16-core, 4-socket blades as of October 1, 2008. VMmark is a product of VMware. VMmark uses SPECjbb®2005 and SPECweb®2005, which are available from the Standard Performance Evaluation Corporation (SPEC). For the latest VMmark results, visit http://www.vmware.com/products/vmmark/results.html
² Principled Technologies, SPECjbb performance and power consumption on Dell, HP, and IBM blade servers, December 2007
M1000e Chassis
PowerEdge M-Series Blades

M1000e Enclosure

Management – for simple local and remote access:
• Chassis Management Controller (CMC)
• Integrated DRAC
• Internal KVM
• Front LCD
• FlexAddress

I/O modules – designed for flexibility and throughput:
• Ethernet switches (PowerConnect™, Cisco®)
• Fibre Channel switches (Brocade®)
• InfiniBand switch (Cisco®)
• Ethernet pass-through
• FC pass-through

Blade servers – choice for the data center; scalability, flexibility and power efficiency:
• M600 – Intel 2-socket blade
• M605 – AMD 2-socket blade
• M805 – AMD full-height 2-socket blade for increased RAM & I/O
• M905 – AMD full-height 4-socket blade for increased RAM & I/O
PowerEdge M1000e 10U Chassis

Chassis front:
• Houses 16 x 2-socket blades
  – Provision for additional blade form factors
• Interactive chassis LCD/control panel
  – Deployment "wizard"
  – Chassis, blade, & I/O module information & alerts
• Front VGA and 2 USB ports for KVM

Chassis rear:
• Chassis Management Controllers (CMC)
  – Optional redundancy
• Integrated management controllers (iDRAC)
  – Each blade features remote management functionality
• Optional local KVM switch (Avocent)
  – Each blade has vMedia/vKVM standard
• 6 I/O module bays
• 9 hot-plug/redundant cooling fans
• 6 hot-plug/redundant power supplies (3+3)
  – 200+ volt only
M1000e Back of Chassis
• 6 power supplies
• Chassis Management Controller (CMC) A
• Chassis Management Controller (CMC) B
• Integrated KVM (iKVM)
• 9 chassis fans
• Switch Fabric A – fixed GigE LOMs
• Switch Fabric B – GigE, 10GbE, FCP, IB
• Switch Fabric C – GigE, 10GbE, FCP, IB
I/O Connectivity

M1000e switch bays – one blade port per fabric connects to the corresponding I/O module.

Fabric A (GbE only) – current & maximum throughput per port:
• Incoming: 1Gb/s
• Outgoing: 1Gb/s
• Total: 2Gb/s per port

Fabrics B & C – throughput per port:
• Incoming: 20Gb/s current maximum (max speed requires DDR IB); 40Gb/s future theoretical (max speed requires QDR IB)
• Outgoing: 20Gb/s current maximum (DDR IB); 40Gb/s future theoretical (QDR IB)
• Total: 40Gb/s per port current; 80Gb/s per port future theoretical
• Supported technologies (many not available at launch): GbE, 10GbE, FC 4/8, and SDR/DDR/QDR InfiniBand (IB)

M600/M605 blade I/O:
• LOM 1 and LOM 2 route to Fabric A
• Mezz 1 (ports 1 & 2) and Mezz 2 (ports 1 & 2) route to Fabrics B & C; each port routes to a separate I/O module

Maximum throughput per blade: 324Gb/s
Maximum throughput per chassis: 5.184Tb/s
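The two headline maxima can be reproduced from the per-port totals on this slide. This is just the slide's arithmetic restated as a quick check, using the future theoretical bidirectional figures:

```python
# Reproduce the slide's maximum-throughput figures from its per-port totals.
FABRIC_A_TOTAL_GBPS = 2    # 1Gb/s in + 1Gb/s out per Fabric A port
FABRIC_BC_TOTAL_GBPS = 80  # 40Gb/s in + 40Gb/s out per port (future theoretical, QDR IB)

lom_ports = 2   # LOM 1 and LOM 2 on Fabric A
mezz_ports = 4  # 2 mezzanine cards x 2 ports on Fabrics B & C

per_blade_gbps = lom_ports * FABRIC_A_TOTAL_GBPS + mezz_ports * FABRIC_BC_TOTAL_GBPS
per_chassis_tbps = per_blade_gbps * 16 / 1000  # 16 half-height blades per M1000e

print(per_blade_gbps)    # 324
print(per_chassis_tbps)  # 5.184
```

So the 324Gb/s-per-blade figure is the sum of both Fabric A ports at their 2Gb/s bidirectional total plus four high-speed ports at their 80Gb/s theoretical total, and 16 such blades give the 5.184Tb/s chassis figure.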
Comparing the Blade Servers

Feature – M600 | M605 | M805 | M905:
• Form factor: Half-height blade, 1 slot | Half-height blade, 1 slot | Full-height blade, 2 slots | Full-height blade, 2 slots
• CPU: 2S Intel Quad-Core/Dual-Core (up to 120W) | 2S AMD Quad-Core/Dual-Core (up to 105W ACP) | 2S AMD Quad-Core/Dual-Core (up to 105W ACP) | 4S AMD Quad-Core/Dual-Core (up to 75W ACP)
• Local storage: 2 x 2.5" SAS/SATA (hot-plug) | 2 x 2.5" SAS/SATA (hot-plug) | 2 x 2.5" SAS (hot-plug) | 2 x 2.5" SAS (hot-plug)
• RAID: SAS 6/iR or CERC6 on all four models
• Memory: 8 x FBD up to 64GB | 8 x DDR2 up to 64GB | 16 x DDR2 up to 128GB | 24 x DDR2 up to 192GB
• Blade I/O: 2 x LOMs + 2 dual-port mezz cards | 2 x LOMs + 2 dual-port mezz cards | 4 x LOMs + 4 dual-port mezz cards | 4 x LOMs + 4 dual-port mezz cards
• Integrated management: iDRAC on all four models
• Internal persistent storage: Yes, in CMC | Yes, in CMC | CMC, plus an SD card per blade for the embedded hypervisor | CMC, plus an SD card per blade for the embedded hypervisor

The customer choice when:
• M600 – Intel install base
• M605 – AMD install base
• M805 – Virtualization; extended I/O needs; extended RAM needs
• M905 – Virtualization; extended I/O needs; extended RAM needs
M805 / M905: Designed for Virtualization

"I want the benefits of virtualization while avoiding the risks of adoption."
"We have limited virtualization experience, and the marketplace sends mixed messages."

• Virtualization-ready from the factory
• Pre-tested solutions for your applications
• More I/O
• More DIMMs
• Integrated SD card for embedded hypervisors
• Your choice: VMware®, Citrix®, Microsoft®
M805 / M905: Deliver Database Efficiencies

"I need to support the most demanding applications in my environment."
"I need to optimize my resources to run mission-critical applications cost-effectively."

How can Dell full-height blades help me accomplish my goals?
• Highest blade DIMM counts = more available memory
• Perfect for SAP, Oracle, Microsoft Exchange 2007
• Better utilize data center space
M805 / M905: Expanded Network Connectivity – Three Highly Available Fabrics

"I want to transform my business-critical infrastructure."
"I'm being asked to improve performance and manageability while reducing costs and complexity."
"I want rack-dense capability in a fraction of the space, better performance per watt & no sacrifices."

• Three highly available I/O fabrics
• Enterprise-class feature functionality
• Flexible deployment options
• Seamless management interfaces
• One of the industry's most energy-efficient chassis1

1 Principled Technologies, SPECjbb performance and power consumption on Dell, HP, and IBM blade servers, December 2007. http://www.dell.com/downloads/global/products/pedge/en/pe_blades_specjbb2005.pdf
M805 Key Features

Exceptional 2-socket blade performance with the flexibility IT professionals demand, including:
1. Three highly available, fully redundant I/O fabrics – unequaled.¹
2. 33% more DIMMs vs. HP BL480c = enhanced RAM capacity.²
3. Strong performance in virtual environments.
4. 8 high-speed ports vs. 5 for HP in the same form factor (BL480c).³

The 2-socket M805 offers great virtualization, I/O and memory capabilities plus robust processing power.
An internal SD card for embedded hypervisors is designed to maximize virtualization performance.
The M805 has 33% more memory capacity than the HP BL480c (full-height, 2-socket), which has 12 DIMMs.²
If you are buying HP or IBM 4-socket blades for DIMM capacity, you can now get the same DIMM count in the 2-socket M805.

¹ The M905 (& M805) have 4 NIC ports and 4 x 10GbE ports. HP, per their documentation in June 2008 (http://h20000.www2.hp.com/bc/docs/support/SupportManual/c00698534/c00698534.pdf), has 4 NIC ports and 3 x 10GbE ports on the BL480c. IBM, per their documentation in June 2008 (http://www.redbooks.ibm.com/redbooks/pdfs/sg247523.pdf), has 4 NICs and 2 x 10GbE ports on the LS41 MP.
² RAM capacity compared in June 2008 to HP BL480c (http://h18004.www1.hp.com/products/blades/components/c-class-bladeservers.html)
³ Throughput capacity compared in June 2008 to HP c7000 chassis with BL480c server (http://h20000.www2.hp.com/bc/docs/support/SupportManual/c00698534/c00698534.pdf)
M905 Key Features

Four sockets, 24 DIMMs, no barriers – the M905 delivers powerful performance in virtualized environments:
1. Tremendous RAM capacity – 50% more DIMMs than HP & IBM.¹
2. Three highly available and redundant I/O fabrics – unequaled.²
3. Built for virtualization.
4. 8 high-speed ports vs. 5 for HP in the same form factor.³

Designed to increase efficiency and performance in data centers, delivering tremendous system resources in a blade form factor.
An internal SD card for embedded hypervisors is designed to maximize virtualization performance.
Compact dimensions yield a server solution that's efficient, easy to deploy and easy to manage.
If you need more RAM than HP or IBM can deliver, the M905 does it with 50% more DIMM slots. HP blades: BL680c/BL685c; IBM blade: LS41.

¹ RAM capacity compared in June 2008 to HP BL680c, HP BL685c (http://h18004.www1.hp.com/products/servers/proliant-bl/c-class/680c/comparison.html) and IBM LS41 MP (http://www-03.ibm.com/systems/bladecenter/hardware/servers/ls41/features.html)
² The M905 (& M805) have 4 NIC ports and 4 x 10GbE ports. HP, per their documentation in June 2008 (http://h20000.www2.hp.com/bc/docs/support/SupportManual/c00698534/c00698534.pdf), has 4 NIC ports and 3 x 10GbE ports on the BL480c. IBM, per their documentation in June 2008 (http://www.redbooks.ibm.com/redbooks/pdfs/sg247523.pdf), has 4 NICs and 2 x 10GbE ports on the LS41 MP.
³ Throughput capacity compared in June 2008 to HP c7000 chassis with BL680c and BL685c servers (http://h20000.www2.hp.com/bc/docs/support/SupportManual/c00698534/c00698534.pdf)
M805 / M905 Additional Specs

I/O mezzanine card options
Four total PCIe x8 mezzanine card slots per M805 blade (optional). Available options for all 4 slots:
• Dual-port Gb Ethernet w/ TOE (new Broadcom 5709)
• Dual-port FC4 QLogic QME2472
• Dual-port FC4 Emulex LPe1105-M
• Dual-port Mellanox ConnectX MDI 4x DDR InfiniBand mezzanine card
• 10Gb Ethernet
• Dual-port FC8

Storage
Internal hot-swap SAS hard drives – 2 maximum:
• 2.5-inch SAS (10K rpm): 36GB, 73GB, 146GB
• 2.5-inch SAS (15K rpm): 36GB, 73GB
Maximum internal storage: up to 300GB via two 2.5" 146GB hot-plug SAS hard drives
External storage (disk storage options): Dell EqualLogic™ PS5000 Series; PowerVault™ NX1950 Unified Storage Solution; PowerVault MD3000i; Dell/EMC Fibre Channel and/or iSCSI external storage, including Dell/EMC CX300, CX3-10c, CX3-20, CX3-40 and CX3-80; CX4-120, CX4-240, CX4-480 and CX4-960

Operating systems
Factory-installed OS:
• Microsoft Windows® Server 2008, Standard and Enterprise Edition x32
• Microsoft Windows® Server 2008, Standard and Enterprise Edition x64, including Hyper-V
• Microsoft Windows® Server 2008 x64 Datacenter, including Hyper-V
• Microsoft Windows® Server 2008 Web Edition, x32 and x64
• Microsoft Windows® Server 2003 R2, Standard and Enterprise Edition, x32 & x64
• Red Hat® Enterprise Linux® v5, x32 and x64
• Red Hat® Enterprise Linux® v4.5, AS, ES
• RHEL 5 AP
• SUSE Linux Enterprise Server 10, x86-64
Supported OS:
• Microsoft Windows® Server 2003
• SUSE Linux Enterprise Server 9
• Solaris
• VMware Infrastructure 3, Standard or Enterprise (VMware 3.0, VMware 3.5)
Embedded hypervisor via SD card (optional):
• VMware Infrastructure 3, Standard or Enterprise, with VMware ESXi 3.5
• Citrix XenServer Dell Express or Enterprise Editions
M805 / M905 Memory

• Memory modules must be installed in pairs, beginning with DIMMA1 and DIMMA2 (processor 1), and DIMMB1 and DIMMB2 (processor 2).
• The memory modules must be identical in speed and technology. The DIMMs in each pair must be the same size and speed. They must also be the same type (x4 or x8) and rank (dual-rank or single-rank).
• For best system performance, all DIMMs should be identical in memory size, speed, and technology.
• Within a DIMM group, a pair of DIMMs of one size can be mixed with a pair of DIMMs of a different size (N+3, or up to three DIMM sizes larger). Larger-capacity DIMMs must occupy the lower-numbered sockets.

DIMM layout:
• M805 – 16 DIMM slots, arranged 8 + 8
• M905 – 24 DIMM slots, arranged 8 + 8 + 4 + 4
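The population rules above boil down to "matched pairs". A hypothetical sketch of that logic (field names are illustrative, not from Dell documentation; this is not a Dell tool):

```python
# Sketch of the DIMM pairing rules: modules go in pairs, and each pair must
# match in size, speed, width (x4/x8), and rank (single/dual).
def valid_pair(dimm_a, dimm_b):
    """A pair must match in size, speed, and technology."""
    keys = ("size_gb", "speed_mhz", "width", "rank")
    return all(dimm_a[k] == dimm_b[k] for k in keys)

def check_bank(dimms):
    """DIMMs must be installed as matched pairs: slots (1,2), (3,4), ..."""
    if len(dimms) % 2 != 0:
        return False  # an odd DIMM count violates the pair rule
    return all(valid_pair(dimms[i], dimms[i + 1]) for i in range(0, len(dimms), 2))

matched = [{"size_gb": 4, "speed_mhz": 667, "width": "x4", "rank": 2}] * 2
print(check_bank(matched))  # True: identical DIMMs form a valid pair
```

Note this only models the pair-matching rule; the "N+3" size-mixing limit and lower-numbered-socket ordering would be additional checks.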
M905 Full-Height Blade Fabric Diagram

Fabric A:
• Dedicated to Ethernet LOMs; two separate dual-port NICs = four ports per blade
• NICs are physically separate & each port on a single NIC routes to a separate I/O module

Fabrics B & C:
• Customizable for Ethernet, Fibre Channel, or InfiniBand
• Four I/O cards per blade: two separate Fabric B cards and two separate Fabric C cards
• Two ports per I/O card
• Each port on a single card routes to a separate I/O module
M805 Full-Height Blade Fabric Diagram

Fabric A:
• Dedicated to Ethernet LOMs; two separate dual-port NICs = four ports per blade
• NICs are physically separate & each port on a single NIC routes to a separate I/O module

Fabrics B & C:
• Customizable for Ethernet, Fibre Channel, or InfiniBand
• Four I/O cards per blade: two separate Fabric B cards and two separate Fabric C cards
• Two ports per I/O card
• Each port on a single card routes to a separate I/O module
Mixed-Height Blades Fabric Diagram

Fabric A:
• Dedicated to Ethernet LOMs
• NICs are physically separate & each port on a single NIC routes to a separate I/O module

Fabrics B & C:
• Customizable for Ethernet, Fibre Channel, or InfiniBand
• Each port on a single card routes to a separate I/O module
Available I/O Modules

Ethernet:
• Switches: Dell PowerConnect™ M6220, Cisco® Catalyst 3032, Cisco® Catalyst 3130G, Cisco® Catalyst 3130X
• Pass-through (10/100/1000Mb capable)

Fibre Channel:
• FC switch / access gateway: Brocade® 4424
• Pass-through

InfiniBand:
• Switch: Cisco® SFS M7000e (DDR 20Gbps)
Cisco Ethernet Switch Options

Cisco Catalyst 3130X – 10G rack switch:
• 2 x 10GE uplinks (X2 – CX4, SR, LRM optics)
• 4 x GE uplinks (4 x RJ45)
• Virtual Blade Switch interconnect enabled

Cisco Catalyst 3130G – GE rack switch:
• Up to 8 x GE uplinks – 4 x RJ45 & up to 4 SFPs (copper or optical)
• Virtual Blade Switch interconnect enabled

Cisco Catalyst 3032 – entry-level GE switch:
• Up to 8 x GE uplinks – 4 x RJ45 & up to 4 SFPs (copper or optical)

Virtual Blade Switch:
• Interconnect up to 9 CBS 3130 switches to create a single logical switch
• Simplifies manageability & consolidates uplinks to lower TCO

Software:
• IP Base software stack includes advanced L2 switching + basic IP routing
• IP Services & Advanced IP Services available ONLY for CBS 3130 – adds advanced IP routing and IPv6 compatibility
Ethernet Switch Detail

Simplifying connectivity – flexible switch platforms support a multitude of connectivity options:
• Base L2/L3 switching platforms with 4 fixed 10/100/1000Mb uplinks
• Add stacking and/or 10Gb uplinks when needed
  – PowerConnect M6220: plug in optional 10Gb and/or stacking modules
  – Cisco 3130: upgrade to stacking and/or 10Gb uplinks via license keys
• Modular 10Gb uplinks support multiple interfaces (CX4, SR/LR, LRM, 10GBASE-T)

PowerConnect M6220:
• 4 x fixed 10/100/1000Mb (RJ-45)
• Two option bays for: a 48Gb stacking module; 2 x 10Gb optical XFP (SR/LR) uplinks; 2 x 10Gb copper CX-4 uplinks; or 2 x 10GBASE-T copper uplinks (Q1'08)

Cisco Blade Switch 3130:
• 4 x fixed 10/100/1000Mb (RJ-45)
• 64Gb StackWise Plus ports (enabled w/ license key)
• Two option bays for: 1 x 10Gb X2 (license key; CX4 copper, LRM or SR optical); or 2 x 1Gb SFPs (optical or copper)
Fibre Channel and InfiniBand

Brocade 4424 Fibre Channel switch – available in two configurations:
• NPIV gateway: 16 internal ports (FC4), 8 external ports (FC4); low cost; NPIV Access Gateway mode
• Full fabric switch: 16 internal ports (FC4), 8 external ports (FC4); full fabric switching; 12 ports active (8 blades), with an optional upgrade to 24 active ports (16 blades)

Cisco/TopSpin InfiniBand switch:
• Internal ports: 16
• External ports: 8
• Speed: 4x DDR (double data rate)
• Throughput: 20Gb/s
Power Management

Power management features on the M1000e:
• Real-time power tracking (aggregate chassis & individual blade)
  – High/low "watermarks"
• User-configurable chassis "power ceiling" with policies
  – Alert on ceiling – sends an alert if the ceiling is reached
  – Throttle on ceiling – lowers processor/memory frequency to reduce consumption
• Slot-based power prioritization
  – User-configurable priorities; works with the "power ceiling" feature
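The two ceiling policies above amount to a simple rule applied per power reading. A hypothetical sketch of that rule (function and action names are illustrative, not Dell's CMC interface):

```python
# Sketch of the M1000e "power ceiling" policies described above.
# The returned strings stand in for CMC actions; this is not a real Dell API.
def apply_ceiling_policy(reading_watts, ceiling_watts, policy):
    """Return the action a CMC-style controller would take for one reading."""
    if reading_watts <= ceiling_watts:
        return "ok"                   # under the ceiling: no action
    if policy == "alert":
        return "send-alert"           # Alert on Ceiling
    if policy == "throttle":
        return "reduce-cpu-mem-freq"  # Throttle on Ceiling
    raise ValueError(f"unknown policy: {policy}")

print(apply_ceiling_policy(6200, 6000, "throttle"))  # reduce-cpu-mem-freq
```

Slot-based prioritization would extend this by choosing *which* blades to throttle first when the ceiling is exceeded.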
Chassis Power – Dell Gives Four Power Options

• 20A single phase: 1 x 20A circuit per PSU supports a max configuration w/ grid redundancy; 2 x 20A circuits (4 for grid redundancy) support average configurations
• 30A single phase: 2 x 30A circuits per chassis support an average configuration w/ grid redundancy
• 60A single phase: 1 x 60A circuit per chassis (2 for redundant grids) supports a max configuration
• 30A three phase: 1 circuit per chassis (2 for redundant grids) supports a max configuration

6 x 2300W PSUs per chassis:
• Automatically engages only the supplies required to power a given configuration
• Dynamic N+N, N+1, or N+0
• Supported modes: 3+3 (grid & power supply redundancy), 3+1 (power supply redundancy), 3+0 (non-redundant)
Simple Blade Management

• CMC – Chassis Management Controller(s): management, monitoring, and alerts for chassis, blades, and I/O modules
• iDRAC – Integrated Dell Remote Access Card: one per blade allows for easy access to manage and monitor individual blades
• iKVM – in-chassis Avocent KVM: seamlessly tiers into Dell & Avocent KVM infrastructure, remotely or via a local "crash cart"
• Chassis LCD – local LCD for quick blade deployment, hardware information, and enclosure health
• FlexAddress – persistent assignment of WWN/MAC addresses to individual blade slots using a CMC SD card

Multiple remote & local management options, plus full OpenManage integration, help meet your data center's needs.
iDRAC – Integrated Dell Remote Access Card:
• One iDRAC per blade
• Integrates into the CMC or can be used to access blades directly
• Dedicated internal Ethernet connection to each CMC
• IPMI 2.0 compliant; OOB web/CLI, virtual media/KVM
• Access iDRAC directly w/ browser, Telnet/SSH, & IPMI tools

iKVM – Avocent KVM switch:
• Local KVM (front or rear)
• Seamlessly tiers into Dell & Avocent KVM infrastructure

CMC – Chassis Management Controller:
• Optional redundancy
• Central point for infrastructure monitoring, inventory, and control
• Real-time power management
• Dedicated internal Ethernet connection to each iDRAC & I/O switch

[Diagram: redundant CMCs, each with an L2+ switch, connect over two management networks to the iDRAC in every blade (1-16) and the management port of every I/O module (1-6); the iKVM carries local analog video/USB keyboard and mouse plus OSCAR; I2C links the CMCs to the power supplies (1-6), fans (1-9), LCD panel, and chassis buttons/FRU/LEDs.]
Understanding FlexAddress (Persistent WWN/MAC)

FlexAddress provides chassis-assigned WWN & MAC addresses that persist with a slot in the chassis instead of with the blade itself. FlexAddress is designed to simplify deployment and make data centers more adaptable to change. It is an optional feature on the M1000e.

FlexAddress benefits:
• Blades can be replaced without disrupting/changing SAN zoning, deployment schemes, or MAC-based licensing
• Customers can quickly obtain a list of all MACs/WWNs in the chassis by slot and can be assured these will never change
• Extremely simple and quick to implement/deploy
• Can be used with any I/O module (switch or pass-through)
• Boot-from-SAN customers get an almost no-touch blade replacement
FlexAddress Implementation

• A Chassis Management Controller (CMC) receives an SD card provisioned with a unique pool of 208 MACs and 64 WWNs – enough addresses for any I/O combination on a blade
• The SD card is inserted at the factory or at a customer location (firmware must be updated). Once enabled, the SD card is tied to that specific chassis
• The user can configure FlexAddress by I/O fabric (A/B/C) and/or by blade slot
• WWN/MAC addresses are pushed to the control panel memory (a serviceable, hot-swap component behind the chassis LCD). At this point the SD card can be removed from the CMC if desired. Only 1 FlexAddress card is required per chassis, even if redundant CMCs are present
• FlexAddress is implemented via the CMC, so it can be used with any M-Series blade and any I/O module (switch or pass-through)

(The SD slot is on the bottom of the CMC.)
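The key idea above is that addresses belong to chassis slots, not blades. A toy illustration of that property (the MAC values and function are made up for illustration; a real chassis provisions 208 MACs and 64 WWNs from the SD card):

```python
# Illustrative model of FlexAddress: the fabric-visible MAC is keyed to the
# chassis slot, so swapping the blade in a slot changes nothing on the network.
slot_macs = {1: "00:1E:C9:00:00:01", 2: "00:1E:C9:00:00:02"}  # toy slot pool

def fabric_mac(slot, blade_factory_mac, flexaddress=True):
    """Return the address the network sees for the blade in `slot`."""
    return slot_macs[slot] if flexaddress else blade_factory_mac

before = fabric_mac(1, "00:AA:00:00:00:AA")  # original blade in slot 1
after = fabric_mac(1, "00:BB:00:00:00:BB")   # replacement blade in slot 1
print(before == after)  # True: SAN zoning / MAC licensing keyed to slot 1 still works
```

With `flexaddress=False` the blade's own factory address is exposed instead, which is why a blade swap without FlexAddress forces zoning and licensing updates.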
M805 Competitive Compare vs. HP1

Feature – M805 (AMD, 8 per 10U chassis) | HP BL480c (Intel, 8 per 10U chassis):
• Processors: Up to two Quad-Core AMD Opteron 2300 Series | Up to two Quad-Core Intel Xeon 5400 or 5300 series, or up to two Dual-Core Intel Xeon 5200 or 5100 series processors
• Chipset: Dual NVIDIA MCP55 for additional I/O (extra PCIe lanes) | Intel 5000P
• Embedded hypervisor: Yes, delivered via SD card (VMware ESXi, Citrix XenServer virtualization technology) | Delivery projected to be on HDD or internal USB key (VMware ESXi, Citrix XenServer)
• Memory: 16 DIMM slots supporting up to 128GB of RAM w/ 8GB DIMMs | 12 DIMM slots supporting up to 48GB of RAM
• Disk drives: 2 x 2.5" hot-plug SAS drives (10K or 15K) | 4 x 2.5" hot-plug SAS or SATA drives
• RAID options: SAS 6/iR (H/W based) with RAID 0/1 support, or CERC6 with cache and RAID 0/1 support | Integrated HP Smart Array P400i controller with 256MB cache, RAID 0/1/5
• I/O fabrics: 3 highly available, fully redundant fabrics | 2 highly available fabrics + 1 extra connection
• PCI slots: 4 optional PCIe x8 mezzanine card slots | 3 I/O expansion mezzanine slots (two x8 and one x4 PCI Express)
• USB ports: 3 external | 1 internal
• Onboard NICs: 2 x dual-port embedded Broadcom 5709 Gigabit Ethernet LOMs w/ TOE for Windows and iSCSI offload | 4 embedded network adapter ports, plus one additional management network adapter port
• Management: Dell OpenManage; chassis management with redundant dedicated NICs; iKVM access via a dedicated NIC | HP Systems Insight Manager
• Remote management: Integrated Dell Remote Access Controller (iDRAC) | iLO2

(In the original slide, green shading marks competitive differentiators for Dell.)
1 Obtained from http://h18004.www1.hp.com/products/servers/proliant-bl/c-class/480c/specifications.html on September 5, 2008
M905 Competitive Compare vs. HP1

Feature – M905 (AMD, 8 per 10U chassis) | HP BL685c (AMD, 8 per 10U chassis) | HP BL680c (Intel, 8 per 10U chassis):
• Processors: Up to four Quad-Core AMD Opteron 8300 Series | Up to four Quad-Core AMD Opteron 8300 Series, or up to four AMD Opteron 8200 Series | Two or four Quad-Core Intel Xeon 7300 Series; Demand Based Switching (DBS) with Enhanced Intel SpeedStep® Technology on select models
• Chipset: Dual NVIDIA MCP55 for additional I/O (extra PCIe lanes) | nVidia CK8-04, IO-04 | Intel 7300
• Embedded hypervisor: Yes, delivered via SD card (VMware ESXi, Citrix XenServer virtualization technology) | Delivery projected to be on HDD or external USB key (VMware ESXi, Citrix XenServer) | Delivery projected to be on HDD or internal USB key (VMware ESXi, Citrix XenServer)
• Memory: 24 DIMM slots supporting up to 192GB of RAM w/ 8GB DIMMs | 16 DIMM slots supporting up to 64GB of RAM | 16 DIMM slots supporting up to 128GB of RAM
• Disk drives: 2 x 2.5" hot-plug SAS | 2 x 2.5" hot-plug SAS or SATA | 2 x 2.5" hot-plug SAS or SATA
• RAID options: SAS 6/iR (H/W based) with RAID 0/1 support, or CERC6 with cache and RAID 0/1 support | Integrated Smart Array E200i RAID controller with 64MB cache, RAID 0/1 | Integrated HP Smart Array P400i controller with 256MB cache, RAID 0/1
• I/O fabrics: 3 highly available, fully redundant fabrics | 2 highly available fabrics + 1 dual-port card | 2 highly available fabrics + 1 dual-port card
• PCI slots: 4 optional PCIe x8 mezzanine card slots | 3 I/O expansion mezzanine slots (all x8 PCI Express) | 3 I/O expansion mezzanine slots (two PCI Express x8 and one x4)
• USB ports: 3 external | 1 internal | 1 internal
• Onboard NICs: 2 x dual-port embedded Broadcom 5709 Gigabit Ethernet LOMs w/ TOE for Windows and iSCSI offload | 4 embedded network adapter ports, plus one additional management network adapter port | 4 embedded network adapter ports, plus one additional management network adapter port
• Management: Dell OpenManage; chassis management w/ redundant dedicated NICs; iKVM access via dedicated NIC | HP Systems Insight Manager | HP Systems Insight Manager
• Remote management: Integrated Dell Remote Access Controller (iDRAC) | iLO2 | iLO2

(In the original slide, green shading marks competitive differentiators for Dell.)
1 Obtained from http://h18004.www1.hp.com/products/servers/proliant-bl/c-class/685c/index.html and http://h18004.www1.hp.com/products/servers/proliant-bl/c-class/680c/index.html on September 5, 2008
M905 Competitive Compare vs. IBM1

Feature – M905 (AMD, 8 per 10U chassis) | IBM LS41 (AMD, 7 per 9U chassis):
• Processors: Up to four Quad-Core AMD Opteron 8300 Series | 1 to 4 AMD Opteron Model 8214HE, 8216HE, 8218, 8220 and 8222
• Chipset: Dual NVIDIA MCP55 for additional I/O (extra PCIe lanes) | ServerWorks
• Embedded hypervisor: Yes, delivered via SD card (VMware ESXi, Citrix XenServer virtualization technology) | None
• Memory: 24 DIMM slots supporting up to 192GB of RAM w/ 8GB DIMMs | Up to 16 DIMM slots per server blade, up to 64GB of VLP DDR2
• Disk drives: 2 x 2.5" hot-plug SAS | Up to two non-hot-swap 2.5" SFF SAS disks; supports up to three hot-swap SFF SAS disks in the optional Storage and I/O Expansion Unit
• RAID options: SAS 6/iR (H/W based) with RAID 0/1 support, or CERC6 with cache and RAID 0/1 support | RAID 0, 1; supports RAID 1, 1E & 5 with a fully populated Storage and I/O Expansion blade
• I/O fabrics: 3 highly available, fully redundant fabrics | 2 highly available fabrics
• PCI slots: 4 optional PCIe x8 mezzanine card slots | Three I/O expansion mezzanine slots (two PCI-X, one PCI Express)
• USB ports: 3 external | None
• Onboard NICs: 2 x dual-port embedded Broadcom 5709 Gigabit Ethernet LOMs w/ TOE for Windows and iSCSI offload | Two or four (depending on configuration) TOE-enabled NIC ports per blade; iSCSI and RDMA support planned for a later date
• Management: Dell OpenManage; chassis management w/ redundant dedicated NICs; iKVM access via dedicated NIC | IBM Director
• Remote management: Integrated Dell Remote Access Controller (iDRAC) | BMC

(In the original slide, green shading marks competitive differentiators for Dell.)
1 Obtained from http://www-03.ibm.com/systems/bladecenter/hardware/servers/ls41/specs.html on September 5, 2008
Dell PowerEdge M-Series Key Advantages versus HP C-Class

Overview & benefits:
1. ENERGY EFFICIENCY – The M1000e provides increased performance, up to 19% more energy efficiency than HP1, and up to 25% better performance/watt than HP1
2. MODULARITY – The only fully modular blade enclosure, the M1000e features Flex I/O switch modules, providing customers easy and effective scaling of their I/O infrastructure – a feature lacking from HP
3. SIMPLICITY – FlexAddress, for example, is an easy, low-cost way to simplify a network and reduce downtime. Simple to install, it delivers much greater TCO than HP Virtual Connect

1 Principled Technologies, SPECjbb performance and power consumption on Dell, HP, and IBM blade servers, December 2007
Competitive Advantages vs. HP C-Class

Chassis Flexibility
  Dell M1000e: supports adjacent half-height and full-height blades – no divider
  HP c7000: has a mechanical divider between top and bottom blades for every 4 slots1 – full-height blades cannot be installed next to half-height blades within a quadrant

[Photos: M1000e – no divider; HP c7000 – divider]
1 Based on detailed review of HP C7000 configuration/specification options from http://h18004.www1.hp.com/products/quickspecs/12810_div/12810_div.html , September 12, 2008
Dell PowerEdge M1000e Key Advantages versus HP c3000
– HP's SMB chassis can run on 110V power – you can "plug it into the wall" – but...
– The c3000 chassis supports a maximum of 4 blades (each with two 80W processors, 2GB of memory, and no hard drives) when plugged into the wall1 – and plugging it into a single circuit is risky (no redundancy or protection)
– To expand to the full capacity of the chassis (and spread the infrastructure cost across the most blades – the point at which the system actually becomes most cost effective), customers will need to move to 208V+ power
1 Based on detailed review of specification posted at HP http://h18004.www1.hp.com/products/quickspecs/12790_div/12790_div.html#Technical%20Specifications and www.hp.com/go/bladesystem/powercalculator as of September 12, 2008
RUN IT BETTER... Third-Party PowerEdge Performance/Power Test1

Fully Configured Chassis    Dell PE M1000e w/ M600    HP c-Class BladeSystem    IBM BladeCenter H
Watts per chassis           3524                      4326                      3494
Watts per blade             220                       270                       250

The M1000e vs. HP provides:
• Increased performance
• Up to 19% more energy efficiency
• Up to 25% better performance/watt

The M1000e vs. IBM provides:
• Increased performance
• Up to 12% more energy efficiency
• Up to 29% better performance/watt

What does this really mean? A rack of Dell blades vs. HP/IBM blades:
• Saves ~$2,600 annually in power consumption2
• Produces fewer tons of CO2 – equivalent to the CO2 sequestered by 4 acres of pine forest annually2

1 Principled Technologies SPECjbb performance and power consumption on Dell, HP, and IBM blade servers, December 2007
2 Based on 1250 kWh/year used on lighting in the average US home (EPA). $2,600 based on annual kWh savings and EPA data of an average rate of $.094 per kWh. CO2 claim based on kWh savings using EPA data available here: http://www.epa.gov/cleanenergy/energy-resources/calculator.html
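The savings claim can be sanity-checked with simple arithmetic. The sketch below is a rough model, not from the cited report: the enclosures-per-rack count, 24x7 duty cycle, and ~$0.094/kWh rate are my assumptions, but under them the per-blade wattage gap lands near the ~$2,600 annual figure.

```python
# Rough annual power-cost model for a rack of fully loaded blade enclosures.
# Assumptions (mine, not from the cited report): 4 enclosures per 42U rack,
# 16 half-height blades per enclosure, 24x7 operation, ~$0.094/kWh.
# Note: the IBM BladeCenter H holds 14 blades; pass blades=14 for it.
WATTS_PER_BLADE = {"Dell M600": 220, "HP c-Class": 270, "IBM BladeCenter H": 250}

def annual_cost(watts_per_blade, blades=16, enclosures=4,
                hours=8760, rate_per_kwh=0.094):
    """Annual electricity cost in dollars for one rack of blades."""
    kwh = watts_per_blade * blades * enclosures * hours / 1000
    return kwh * rate_per_kwh

dell = annual_cost(WATTS_PER_BLADE["Dell M600"])
hp = annual_cost(WATTS_PER_BLADE["HP c-Class"])
print(f"Dell rack: ${dell:,.0f}/yr, HP rack: ${hp:,.0f}/yr, "
      f"savings: ${hp - dell:,.0f}/yr")  # savings land near $2,600
```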
RUN IT BETTER... Third-Party PowerEdge Performance/Power Test1

"The blades in the Dell PowerEdge M1000e Blade System achieved better performance/watt than the HP or IBM Blade Systems at every configuration we tested."

Best Blade Performance (SPECjbb2005) + Best Blade Power Efficiency = Best Blade Performance/Watt

SPECjbb2005 bops/watt    Dell PowerEdge M600    HP BladeSystem c-Class    IBM BladeCenter H Type 8852    Dell over HP    Dell over IBM
1 blade                  464.54                 352.06                    246.89                         31.95%          88.16%
2 blades                 642.40                 502.52                    388.93                         27.83%          65.17%
10 blades                919.95                 738.40                    714.47                         24.59%          28.76%
Maximum blades           958.86 (16 blades)     764.97 (16 blades)        745.70 (14 blades)             25.35%          28.58%

Performance/watt results for each server by blade configuration. Higher numbers are better. Principled Technologies SPECjbb performance and power consumption on Dell, HP, and IBM blade servers, December 2007
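The percentage columns in the table are plain ratios and can be recomputed from the published bops/watt figures; a quick sketch:

```python
# Recompute the "Dell over HP" / "Dell over IBM" performance/watt gains
# from the per-configuration SPECjbb2005 bops/watt figures in the table.
RESULTS = {  # config: (Dell M600, HP c-Class, IBM BladeCenter H)
    "1 blade": (464.54, 352.06, 246.89),
    "2 blades": (642.40, 502.52, 388.93),
    "10 blades": (919.95, 738.40, 714.47),
    "max blades": (958.86, 764.97, 745.70),
}

def pct_gain(dell, rival):
    """Percentage by which Dell's bops/watt exceeds the rival's."""
    return (dell - rival) / rival * 100

for config, (dell, hp, ibm) in RESULTS.items():
    print(f"{config:>10}: Dell over HP {pct_gain(dell, hp):6.2f}%, "
          f"Dell over IBM {pct_gain(dell, ibm):6.2f}%")
```

Running this reproduces the table's percentage columns (e.g., 31.95% and 88.16% for the 1-blade configuration).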
Power Supply Efficiency Compare

Dell PSUs beat HP throughout the load curve1, peaking at over 91% efficiency. More importantly, we focus on the low end of the load curve, since that is where PSUs (especially in a blade enclosure with multiple supplies) spend most of their time.

[Chart: M1000e vs. HP c7000 PSU efficiency compare – efficiency (83%–92%) vs. % of load (20%–100%); the Dell curve sits above the HP curve throughout]

1 Principled Technologies SPECjbb performance and power consumption on Dell, HP, and IBM blade servers – http://www.dell.com/downloads/global/products/pedge/en/pe_blades_specjbb2005.pdf, December 2007
DELL M1000E FAN VS. HP C7000 FAN – power consumption at equal airflow

Dell fans move the same amount of air at much lower power1, and in addition:
• The HP c7000 chassis requires one additional fan2 (10 vs. 9 for Dell)
• HP's impedance is 50% higher, so its fans must spin faster to deliver the same airflow (this comparison does not take that into account)
• Dell M1000e fan efficiency: ~35%
• HP c7000 fan efficiency: ~23%

1 Principled Technologies SPECjbb performance & power consumption on Dell, HP, and IBM blade servers, http://www.dell.com/downloads/global/products/pedge/en/pe_blades_specjbb2005.pdf, December 2007
2 Based on detailed review of specification posted at HP http://h18004.www1.hp.com/products/quickspecs/12810_div/12810_div.html as of September 12, 2008
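The quoted fan efficiencies directly imply the relative electrical draw: for the same useful air-moving power, input power scales as 1/efficiency. A small illustrative sketch (the ~35%/~23% figures are the slide's; everything else is arithmetic):

```python
# Relative fan power implied by the quoted efficiencies (illustrative only).
# Fan efficiency = useful air-moving power / electrical power drawn, so for
# equal air power the electrical draw scales with 1/efficiency.
DELL_EFF = 0.35  # M1000e fan efficiency (~35%, per the slide)
HP_EFF = 0.23    # c7000 fan efficiency (~23%, per the slide)

ratio = DELL_EFF / HP_EFF  # HP electrical draw relative to Dell's
print(f"HP fans draw ~{ratio:.2f}x Dell's power for the same air power "
      f"(~{(ratio - 1) * 100:.0f}% more)")
```

Note this understates HP's disadvantage, since the slide's 50% higher impedance figure means HP's fans must also do more air-moving work for the same delivered airflow.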
Persistent WWN/MAC – Dell vs. HP Competitive Comparison

Independent of switch or pass-through module?
  Dell: YES
  HP: NO – only supported with the HP Virtual Connect (VC) switch1

Unique pool of MAC/WWNs?
  Dell: YES
  HP: NO – every VC switch has the same 64 pools of MAC/WWNs, requiring the customer to select the right pool in each chassis1

Low cost?
  Dell: YES – just the low cost of an SD card
  HP: NO – $5,000–$9,000 VC switches2

• The CMC assigns WWN/MAC values to blade slots in place of the factory-assigned blade WWN/MAC
• Allows blades to be swapped without affecting SAN zoning, iSCSI zoning, or any MAC-dependent functions
• The SD card is provisioned with a UNIQUE pool of 208 MACs and 64 WWNs
• The SD card is inserted into the CMC at the Dell factory or APOS – the SD card is then tied to that specific chassis
• User-configurable to enable iSCSI MAC, Ethernet MAC, and/or WWN persistence

1 Based on detailed review of HP website http://h20000.www2.hp.com/bc/docs/support/SupportManual/c00814156/c00814156.pdf on September 15, 2008
2 Based on review of HP website http://h30094.www3.hp.com/product.asp?sku=3742793&pagemode=ca on September 15, 2008
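The slot-persistence behavior described above can be illustrated with a toy model (class and method names are hypothetical, not Dell's CMC firmware): addresses belong to the chassis slot, so swapping the blade in a slot leaves the slot's WWN/MAC – and any SAN zoning keyed on it – unchanged.

```python
# Toy model of FlexAddress-style slot persistence (illustrative only;
# names and structure are hypothetical, not Dell CMC firmware).
class Chassis:
    def __init__(self, mac_pool):
        # The CMC assigns one MAC per slot from the chassis's unique
        # SD-card pool; blades inherit the slot's address, not their own.
        self.slot_mac = {slot: mac for slot, mac in enumerate(mac_pool)}
        self.slot_blade = {}

    def insert_blade(self, slot, blade_serial):
        self.slot_blade[slot] = blade_serial
        return self.slot_mac[slot]  # the address follows the slot

chassis = Chassis(mac_pool=["00:1e:c9:00:00:%02x" % i for i in range(16)])
mac_before = chassis.insert_blade(3, blade_serial="BLADE-A")
mac_after = chassis.insert_blade(3, blade_serial="BLADE-B")  # swap blade
assert mac_before == mac_after  # SAN zoning keyed on this MAC still works
print("Slot 3 MAC unchanged across blade swap:", mac_after)
```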
DELL FLEXADDRESS VS. HP VIRTUAL CONNECT IN CISCO SWITCHED ENVIRONMENTS

Offers worldwide name/MAC address (WWN/MAC) persistence
  Dell FlexAddress: Yes
  HP Virtual Connect: Yes

WWN/MAC persistence is independent of the switches
  Dell FlexAddress: Yes – works with any switch (e.g., Cisco) and the current management infrastructure
  HP Virtual Connect: No – requires HP Virtual Connect switch blades and proprietary management infrastructure1

Cost-effective approach to the benefits of WWN/MAC persistence
  Dell FlexAddress: Yes – the only cost is an SD card
  HP Virtual Connect: No – requires HP Virtual Connect switch blades ($5–9K each)2

No "rip & replace" of Cisco LAN switch environments required to get the benefit of WWN/MAC persistence
  Dell FlexAddress: Yes – works as advertised with Cisco
  HP Virtual Connect: No – requires purchase of a VC module that replaces the Cisco LAN switches1

1 Based on detailed review of HP website http://h20000.www2.hp.com/bc/docs/support/SupportManual/c00814156/c00814156.pdf on September 15, 2008
2 Based on review of HP website http://h30094.www3.hp.com/product.asp?sku=3742793&pagemode=ca and http://h30094.www3.hp.com/searchresults.asp?search=keyword&search_field=description&search_criteria=hp+virtual+connect on September 15, 2008
DELL CONFIDENTIAL - INTERNAL USE ONLY
The M Series

Unbeatable Power Efficiency
Innovative fans, power supplies, and low air impedance. In the same configuration, the M1000e uses up to 19% less power than HP and up to 12% less than IBM per blade.

Purely Modular
Modular components from blades to power supplies and fans, plus the world's first upgradeable blade switches, greatly improving return on investment.

Factory Tested and Configured
Dell is the only server vendor that fully factory-customizes and tests every server, greatly reducing deployment and configuration time.