
  • Dell PowerEdge C Servers Data Analytics Solution

    A Dell Deployment Guide

    Dell │ Greenplum

    Release A0

    September 2010


    © 2010 Dell Inc. All rights reserved. Reproduction of this material in any manner whatsoever without

    the express written permission of Dell Inc. is strictly forbidden. For more information, contact Dell.

Dell, the DELL logo, the DELL badge, PowerConnect, and PowerVault are trademarks of Dell Inc.

    Microsoft, Windows, Windows Server, and Active Directory are either trademarks or registered

    trademarks of Microsoft Corporation in the United States and/or other countries. Other trademarks and

    trade names may be used in this document to refer to either the entities claiming the marks and names

    or their products. Dell Inc. disclaims any proprietary interest in trademarks and trade names other than

    its own.

    September 2010


    CONTENTS

1. BILL OF MATERIALS
   1.1 RACKING
   1.2 POWER OPTIONS
   1.3 POWER JUMPERS
   1.4 KEYBOARD, MONITOR, MOUSE
   1.5 SERVERS
   1.6 NETWORKING

2. INSTALLATION
   2.1 RACKING
   2.2 NETWORK
   2.3 SERVERS

3. VALIDATION

4. LABELS
   4.1 RACKS
   4.2 SERVERS
   4.3 SWITCHES
   4.4 PATCH PANELS
   4.5 NETWORK CABLES

5. PHYSICAL INSTALLATION DETAILS
   5.1 OPTION CONCERNS
   5.2 RACKING
   5.3 NETWORK DIAGRAMS

APPENDIX A: BILL OF MATERIALS
   APPENDIX A1: MASTER NODE BILL OF MATERIALS
   APPENDIX A2: SEGMENT NODE BILL OF MATERIALS
   APPENDIX A3: ETL NODE BILL OF MATERIALS
   APPENDIX A4: 42U RACK
   APPENDIX A5: 24U RACK
   APPENDIX A6: POWERCONNECT 6248


1. Bill of Materials

The bill of materials for a cluster is dependent on the options selected. The primary option is the

    number of segment nodes. This determines the capacity, performance, and physical size of the

    solution. However, other options such as rack power requirements, component location in the racks,

    and the specific configuration of master nodes are also important considerations.

    This section does not include all of the manufacturer’s part numbers necessary to complete a cluster.

    Please contact your Dell Sales Representative for a detailed BOM, including manufacturer’s part

    numbers, for any specific cluster. This section is intended to give an overview of the elements included

    in the clusters.

    There are several general considerations common to all clusters. Under these classifications, all of the

    supported options in the standard configuration can be addressed.

    1.1 Racking

    1.1.1 Racks

Racks must be standard 19-inch, four-post racks not less than 1070 mm deep. The rack

diagrams in section 5.2 are based on 42U racks. Shorter racks can be used, but the rack count,

    weight per rack, and power requirements per rack need to be adjusted based on the new rack height.

The Dell PowerEdge 4220 42U Rack Enclosure is used for all 42U rack installations. This rack

    enclosure features:

    More power distribution options than previous generations of Dell racks

    Excellent airflow (doors are 80% perforated)

    Various cable routing options.

    The physical dimensions are: Height 78.7" (1999mm); Width 23.82" (605mm); Depth 42.15" (1071mm)

    The static load rating for this rack enclosure is 2500 lbs.

    The Dell Part Number is 42GFDS

    The rack builder needs to define the size and number of filler panels needed in the 42U rack

    configuration.

    1.2 Power Options

    1.2.1 North American Power Distribution

    Each rack requires internal power distribution units (PDUs). All PDUs are mounted vertically in the rear

    corners of the rack and do not consume rack units.

    There are two power standards available, single-phase and three-phase PDUs. This relates to the design

    of the power circuits. Single-phase circuits are more common and less expensive to install. Three-phase

    circuits are common in large data centers as they more efficiently transport large amounts of power.


    1.2.1.1 Single-Phase

    The recommended PDUs for single-phase power on North American deliveries are:

APC AP7841 (Dell Part Number: A1643442)

    More information on this part can be found at

    http://www.apc.com/products/resource/include/techspec_index.cfm?base_sku=AP7841.

    North American Single-Phase circuits should conform to the following parameters:

Input Power per Circuit: 30A / 208V
Default Plug: L6-30P twistlock, 3-prong
Number of Circuits Required: 4
Number of Supported Segment Nodes: 16 (14 on the master rack)

Each rack will require four of these circuits to support the maximum number of servers in the rack with
fully redundant power. The L6-30P plug requires an L6-30R receptacle.

(Figure: L6-30R receptacle and L6-30P plug.)

    1.2.1.2 Three-Phase

    Three-phase power uses three single-phase circuits aligned in a specific way to provide highly available

    and reliable power. This circuit design is common to larger data centers but is expensive to install if

    not already present. Three-phase should only be used in cases where the destination is already familiar

    with the costs and requirements involved in provisioning three-phase circuits.

    The recommended PDUs for three-phase power on North American deliveries are:

    APC AP7868 (Dell Part Number: A0470127)

    More information on these units is available at

    http://www.apc.com/products/resource/include/techspec_index.cfm?base_sku=AP7868.

Input Power per Circuit: 35A / 208V
Default Plug: CS8365
Number of Circuits Required: 2
Number of Supported Segment Nodes: 16 (14 on the master rack)

Each rack will require two of these PDUs to support the maximum number of servers in the rack with
fully redundant power. CS8365 plugs require CS8365 receptacles.

(Figure: CS8365 receptacle and CS8365 plug.)



    1.2.2 International Power Distribution

    1.2.2.1 Single-Phase

    The recommended PDUs for single-phase power on International deliveries are:

APC AP7853 (Dell Part Number: A1666540)

    More information on these units is available at

    http://www.apc.com/products/resource/include/techspec_index.cfm?base_sku=AP7853.

Input Power per Circuit: 30A / 208V
Default Plug: IEC60309 2P+E
Number of Circuits Required: 2
Number of Supported Segment Nodes: 16 (14 on the master rack)

Each rack will require two of these PDUs to support the maximum number of servers in the rack with
fully redundant power. IEC60309 2P+E plugs require IEC60309 2P+E receptacles.

(Figure: IEC60309 2P+E receptacle and plug.)

    1.2.2.2 Three-Phase

    Three-phase power uses three single-phase circuits aligned in a specific way to provide highly available

    and reliable power. This circuit design is common to larger data centers but is expensive to install if

    not already present. Three-phase should only be used in cases where the destination is already familiar

    with the costs and requirements involved in provisioning three-phase circuits.

    The recommended PDUs for three-phase power on International deliveries are:

    ServerTech CS-24V4-P32MA

    More information on these units is available at

    http://www.servertech.com/products/smart-pdus/smart-pdu-cs-24v-c13-c19-.

Input Power per Circuit: 32A / 400V
Default Plug: IEC 60309 3P+N+E, 5-prong
Number of Circuits Required: 2
Number of Supported Segment Nodes: 16 (14 on the master rack)

Each rack will require two of these circuits to support the maximum number of servers in the rack with
fully redundant power. The IEC 60309 3P+N+E plug requires an accompanying receptacle.

(Figure: IEC 60309 3P+N+E receptacle and plug.)



1.3 Power Jumpers

The default PDU is a base unit including a pigtail (the cord extending from the PDU to the electrical
source) but no power jumpers (cords that extend from the PDU to the backs of the servers).

Two jumpers per device (server, switch, etc.) are required. Each must be long enough to run from the
PDU, through any cable management, to the device.

The PDUs present C13 receptacles (outlets) and the servers present C14 inlets. Power jumpers need a
C14 plug on one end (shrouded, with three vertical prongs) and a C13 receptacle on the other (three
vertical holes).

(Figure: C14 plug and C13 receptacle.)

1.4 Keyboard, Monitor, Mouse

A keyboard, monitor, and mouse (KVM) drawer is included in the master rack of each cluster. It is connected to

    the secondary master node and used as the cluster console in the data center. The Dell recommended

    part is the Dell 17” T1700 KVM Drawer.

    In addition to the rack console tray, a small adapter called a VGA port saver is recommended. This

adapter fits on the end of the tray’s video cable and is the piece that plugs into the servers. This protects the

    pins in the tray’s cable, which is hard wired. Damage to the pins in the adapter from unplugging and

    plugging the video cable is resolved by replacing the adapter. Damage to the video cable’s pins can

    only be resolved by replacing the console tray itself.


1.5 Servers

Specific part numbers are not listed for servers here. For specific parts lists, please contact Dell.

    1.5.1 Master Nodes

    Both master nodes must be identically configured. The Bill of Materials for the Master Node

    configuration is listed in Appendix A1.

    1.5.2 Segment Nodes

    All segment nodes must be identically configured. The Bill of Materials for the Segment Node

    configuration is listed in Appendix A2.

    1.5.2.1 Segment Disk Size Options

    The following disk options are recommended:

Disk Size and Type       Usable Space in Segment Node
                         RAID-5        RAID-10
300GB 6Gb/s 10k SAS      1012GB        549GB
450GB 6Gb/s 10k SAS      1592GB        896GB
600GB 6Gb/s 10k SAS      2181GB        1244GB

    1.5.3 ETL Nodes

    The Bill of Materials for the ETL Node configuration is listed in Appendix A3.

    1.5.3.1 ETL Node Disk Size Options

    The following disk options are recommended:

Disk Size and Type       Usable Space in ETL Node
                         RAID-5        RAID-10
300GB 6Gb/s 10k SAS      1012GB        549GB
450GB 6Gb/s 10k SAS      1592GB        896GB
600GB 6Gb/s 10k SAS      2181GB        1244GB


    1.6 Networking

    1.6.1 Administration Network

    The administration network requires one CAT6 cable for each server in the cluster. These should be

    orange. This only accounts for the cables in the racks; cables that would span racks need to be

    addressed separately.

    All servers including master, segment, and ETL nodes must be counted to determine the right

    administration network switch count. The table shows the administration network requirements by

    server count.

Management Connection Count                                        Administration Switch
(includes all devices in the cluster with network management)      Requirements (Dell)

Up to 21 segment and 4 ETL nodes                                   1 x Dell 6248
22 to 49 segment and 5 to 8 ETL nodes                              2 x Dell 6248

    When more than one switch is used, the switches must be cross-connected to ensure console access

    through the master nodes.

    1.6.1.1 Cables

    The number of cables required for a solution varies according to the options selected. In general, each

    server and switch installed will use one cable for the administration network.

    The types of cables required are as follows:

Connection Type: Administration
Description: Connection between server NET MGT ports (BMC) and the administration switches. The connection may include a patch panel.
Cable Type: Category 6
Color: Orange

    1.6.2 Interconnect Networks

Interconnect Connection Count                                      Interconnect Switch
(includes all devices in the cluster with network management)      Requirements (Dell)

Up to 22                                                           2 x Dell 6248
23 to 45                                                           4 x Dell 6248

The 1Gb interconnect is formed by four 1Gb networks. Each node in the cluster is connected to each

network through a separate 1Gb network interface (four connections per node).

    1.6.2.1.1 Patch Panels

    The 1Gb interconnect networks use RJ-45 patch panels. An RJ-45 patch panel is a device containing 24

    pairs of RJ-45 ports organized back to back. Each pair is used to tie two network cables together. The


    back side of each pair is used to connect to servers inside the panel’s rack and the front side to

connect to switches outside the panel’s rack. Any connection between a server and a switch in the same

rack is made directly.

    If a server and switch in a connection are mounted in different racks, the server should connect

    through a patch panel mounted in its rack. The switch in the second rack connects directly to the patch

    panel in the first. If any ETL option is selected, each rack without an ETL switch requires an additional

    patch panel.

    Please consult the racking diagrams for information on how many of these panels each rack requires.

    1.6.2.1.2 Cables

Each node in the cluster requires four connections to the interconnect networks. Where the node

and the switch(es) to which it connects are in the same rack, one cable per connection is used. Where

they are in different racks, two cables are used (one on each side of the patch panel).

    The types of cables required are as follows:

Connection Type: Interconnect
Description: Connection between server PCIe NICs and interconnect switches. The connection may include a patch panel.
Cable Type: Category 6
Color: Yellow

    1.6.3 ETL Networks

    If the total number of nodes including master, segment, and ETL nodes is greater than 46, a separate

    ETL network is required. Implementing this network means adding an additional PCIe network card to

    each node and using that card to connect each node through an additional set of switches.

    Please consult Dell’s support team for assistance.

    1.6.4 Customer Network(s)

    The customer network will need one connection for each master node (two total) for connectivity to

    the database cluster. Additional connections from the customer network cannot be accommodated in

    the C2100. Each of these connections is cabled via a gray cable from the server(s) in question to the in-

    rack patch panels. Connections from the patch panel to the customer network may be any color.


2. Installation

This section of the document provides instructions for configuring the servers in the cluster.

2.1 Racking

Each configuration requires a specific rack plan. All rack plans can be found in section 5.2 of this
document.

    There are single- and multi-rack configurations determined by the number of servers present in the

    configuration. A single-rack configuration is one where all the planned equipment fits into one rack.

    Multi-rack configurations require two or more racks to accommodate all the planned equipment.

    In general, these principles need to be followed:

    2.1.1 Rules Applying to all Clusters

    The following racking rules must be followed by all configurations.

    Prior to racking any hardware, a site survey must be performed to determine what power option is desired, if power cables will be top or bottom of the rack, and whether network switches and patch panels will be top or bottom of the rack.

    No more than 16 Dell C2100s in any rack.

    The stand-by master node is racked mid-rack in the master rack to make the optical drive more accessible and to keep it close to the KVM tray.

    All computers, switches, arrays, and racks must be labeled on both the front and back.

    All computers, switches, arrays, and racks must be labeled as described in the section on labels later in this document.

    All installed devices must be connected to two or more power distribution units in the rack where the device is installed.

    2.1.2 Rules Applying to Single-Rack Clusters

    This section contains racking rules that apply to configurations that fit in a single rack. Single-rack configurations are those for which the equipment to be racked does not exceed the power, weight, and space limitations of a single rack.

    Single-rack configurations may not deploy any dedicated ETL switches. ETL nodes in the single rack configuration connect through the interconnect switches.

All single-rack configurations use top-mounted switches. If the network cables are to be run through the floor, they must be routed down through the rack.

    2.1.3 Rules Applying to Multi-Rack Clusters

    This section contains racking rules that apply to configurations requiring more than one rack.

    No more than three switches in any rack. This is to control the thickness of incoming cable bundles.

A clear document listing and diagramming where each cable connects, identified by device and port, should be prepared and delivered for each cluster built.


2.2 Network

In addition to configuring the network management capabilities of each switch, there are specific
cabling requirements. Please review the panel and rack diagrams in sections 5.2 and 5.3 for more information.

    2.2.1 General Considerations

    All cables must be run according to established cabling standards. Tight bends or crimps should be eliminated.

All cables must be clearly labeled at each end. The label on each end of the cable must trace the path the cable follows between server and switch. This includes:
- The switch name and port
- The patch panel name and port, if applicable
- The server name and port

Please see the example in the section on labels in this document.

    If both the server and switch in a connection are mounted in the same rack, they are connected directly with no patch panel in between.

    If a server and switch in a connection are mounted in different racks, the server may connect through a patch panel mounted in its rack.

    Inter-rack connections using CAT6 or fiber cables extend between the server rack’s patch panel and the switch directly.

    2.2.2 Customer LAN Connections

    Customer LAN connections are gray, CAT6 cables.

    All LAN connections are cabled from server to patch panel inside the rack. The customer must provide the cables that connect the patch panels to the customer network infrastructure unless otherwise arranged prior to shipping the configuration.

    Master nodes have two connections to the customer LAN.

    ETL nodes must have enough network bandwidth to support the load required by the customer.

    This table shows the server port to use on each server type for the customer's network.

Host Type                Physical RJ-45 Port       Network Interface Name
Primary Master Node      Built-in port 1           eth0
Secondary Master Node    Built-in port 1           eth0
ETL Nodes                Built-in ports 1 and 2    eth0 and eth1

    NOTE: Linux does not always name NICs consistently. Do not rely on the names listed in this table;

    manually verify the name the OS assigns to each port on each server.
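One way to verify the mapping (a sketch, not part of the Dell procedure; it assumes ethtool is available on the installed OS) is to blink a port's LED and watch the back of the chassis:

    # Blink the LED on the port the OS calls eth0 for 15 seconds,
    # then note which physical RJ-45 port is flashing
    ethtool -p eth0 15

    # Show driver details and link state for the same interface
    ethtool -i eth0
    ethtool eth0 | grep "Link detected"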

    Assign IP addresses according to the local requirements.


    2.2.3 Administration Network

Administration network cables are orange CAT6 cables.

    If there are 22 or fewer servers counting master nodes, segment nodes, and ETL nodes, one 48-port switch is racked in the master rack.

If there are 23 to 59 servers counting master nodes, segment nodes, and ETL nodes, two 48-port admin switches are used, with one racked in the master rack and the other racked in the second segment rack with the third and fourth interconnect switches.

    All servers connect to the administration network through the iBMC ports.

    Master servers connect to the administration network through the second on-board NIC to provide access to the administration network.

    This table shows the server port to use on each server type for the administration network where # is the server number. For example, sdw34 is given 192.168.0.34 and etl3 is given 192.168.0.203.

Host Type                Physical RJ-45 Port     Network Interface Name    IP Address
Primary Master Node      Network management      iBMC                      192.168.0.254
                         Built-in port 3         eth1                      192.168.0.241
Secondary Master Node    Network management      iBMC                      192.168.0.253
                         Built-in port 3         eth1                      192.168.0.242
Segment Nodes            Network management      iBMC                      192.168.0.#
ETL Nodes                Network management      iBMC                      192.168.0.2#

    NOTE: Linux does not always name NICs consistently. Do not rely on the names listed in this table;

    manually verify the name the OS assigns to each port on each server.

    All administration network switches present should be cross connected and all NICs attached to these switches participate in the 192.168.0.0/24 network.
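As a quick cross-check of the administration network cabling (an illustrative sketch run from a master node; the 16-node count and sdw naming are assumptions to adjust per cluster), the iBMC addresses can be swept with ping:

    # Ping each segment node iBMC at 192.168.0.1 through 192.168.0.16
    for i in $(seq 1 16); do
        if ping -c 1 -W 1 192.168.0.$i > /dev/null 2>&1; then
            echo "sdw$i iBMC (192.168.0.$i) reachable"
        else
            echo "sdw$i iBMC (192.168.0.$i) NOT reachable"
        fi
    done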

    Use the following table to determine the correct IP address for each non-server device:

Device                          IP Address       Port
First Interconnect Switch       192.168.0.211    Net Management if available; 1/48 if not
Second Interconnect Switch      192.168.0.212    Net Management if available; 1/48 if not
Third Interconnect Switch       192.168.0.213    Net Management if available; 1/48 if not
Fourth Interconnect Switch      192.168.0.214    Net Management if available; 1/48 if not
First ETL Switch                192.168.0.221    Net Management if available; 1/48 if not
Second ETL Switch               192.168.0.222    Net Management if available; 1/48 if not
First Administration Switch     192.168.0.231    Net Management if available; 1/48 if not
Second Administration Switch    192.168.0.232    Net Management if available; 1/48 if not

NOTE: Use of the 1/48 or 1/24 port for network management requires setup in the switch. Please see
the switch’s vendor-supplied documentation and configure this port properly before connecting it.
Without this configuration, connecting the port will form extraneous physical routes that affect cluster
performance.

    Use the following map to determine where to connect cables.


First Administration Switch:

(Port map diagram: each port is labeled with the final octet of the administration network address of the
connected device; M marks the switch management connection, UP the uplink, and N/A an unused port.)

The number in each port indicates which IP address the connected NIC will have on the 192.168.0.0/24

    network.

Second Administration Switch:

(Port map diagram: each port is labeled with the final octet of the administration network address of the
connected device; M marks the switch management connection, UP the uplink, and N/A an unused port.)

The number in each port indicates which IP address the connected NIC will have on the 192.168.0.0/24

    network.

    2.2.4 Interconnect Networks

    Interconnect network connections are yellow, CAT6 cables.

    The first two interconnect switches are always racked in the master rack.

    If there are four interconnect switches, the second two switches are racked in the first segment rack (rack number two for the cluster).

    Master and segment nodes connect to the interconnect through add-in NICs (not built-in NIC ports).

    When using only two gigabit interconnect switches, the first and third ports on the interconnect NIC are cabled to the first switch and the second and fourth ports cabled to the second.

    When using four gigabit switches, the first port from the interconnect NIC is cabled to the first switch, second to the second switch, third to the third switch, and fourth to the fourth switch.


    2.2.4.1 2 x Gigabit Switches

    This table shows the server port to use on each server type for the standard interconnect network where # is the server number. For example, sdw34 is given 192.168.1.34 and etl3 is given 192.168.1.203 on the first interconnect network.

Host Type           Physical RJ-45 Port      Network Interface    Destination Switch           IP Address
Primary Master      1st port on PCIe card    eth4                 1st Interconnect Switch      192.168.1.254
                    2nd port on PCIe card    eth5                 2nd Interconnect Switch      192.168.2.254
                    3rd port on PCIe card    eth6                 1st Interconnect Switch      192.168.3.254
                    4th port on PCIe card    eth7                 2nd Interconnect Switch      192.168.4.254
Secondary Master    1st port on PCIe card    eth4                 1st Interconnect Switch      192.168.1.253
                    2nd port on PCIe card    eth5                 2nd Interconnect Switch      192.168.2.253
                    3rd port on PCIe card    eth6                 1st Interconnect Switch      192.168.3.253
                    4th port on PCIe card    eth7                 2nd Interconnect Switch      192.168.4.253
Segment             1st port on PCIe card    eth4                 1st Interconnect Switch      192.168.1.#
                    2nd port on PCIe card    eth5                 2nd Interconnect Switch      192.168.2.#
                    3rd port on PCIe card    eth6                 1st Interconnect Switch      192.168.3.#
                    4th port on PCIe card    eth7                 2nd Interconnect Switch      192.168.4.#
ETL                 1st port on PCIe card    eth4                 1st Interconnect Switch      192.168.1.2#
                    2nd port on PCIe card    eth5                 2nd Interconnect Switch      192.168.2.2#
                    3rd port on PCIe card    eth6                 1st Interconnect Switch      192.168.3.2#
                    4th port on PCIe card    eth7                 2nd Interconnect Switch      192.168.4.2#

    NOTE: Linux does not always name NICs consistently. Do not rely on the names listed in this table;

    manually verify the name the OS assigns to each port on each server.

    The first interconnect switch should be attached to NICs that participate in the 192.168.1.0/24 and 192.168.3.0/24 networks. The second switch should service the 192.168.2.0/24 and 192.168.4.0/24 networks.
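To illustrate the addressing scheme (a sketch only; the sdwN-1 through sdwN-4 interface names follow the convention described in the Labels section, and the 16-node count is an assumption to adjust per cluster), /etc/hosts entries for the segment nodes' interconnect interfaces could be generated like this:

    # Print interconnect host entries for segment nodes sdw1 through sdw16
    for i in $(seq 1 16); do
        for net in 1 2 3 4; do
            echo "192.168.$net.$i  sdw$i-$net"
        done
    done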

    The following diagrams show which ports are used by which NICs.


First Interconnect Switch

(Port map diagram: this switch carries the 192.168.1.0/24 and 192.168.3.0/24 networks; each port is
labeled with the network and final octet of the connected NIC's address.)

    The number in each port indicates which IP address the connected NIC will have on its interconnect

    network. For example, the port labeled 1.6 is connected to the NIC with the 192.168.1.6 address.

Second Interconnect Switch

(Port map diagram: this switch carries the 192.168.2.0/24 and 192.168.4.0/24 networks; each port is
labeled with the network and final octet of the connected NIC's address.)

    The number in each port indicates which IP address the connected NIC will have on its interconnect

    network. For example, the port labeled 2.6 is connected to the NIC with the 192.168.2.6 address.

NOTE: No ports are shown for ETL nodes. Connect ETL nodes starting at the last port in the top
row of each switch and work backwards. If there are more connections to make than there are
ports when ETL nodes are added, increase to four switches.


    2.2.4.2 4 x Gigabit Switches

    This table shows the server port to use on each server type for the extended interconnect network where # is the server number. For example, sdw34 is given 192.168.1.34 and etl3 is given 192.168.1.203 on the first interconnect network.

Host Type           Physical RJ-45 Port      Network Interface    Destination Switch           IP Address
Primary Master      1st port on PCIe card    eth4                 1st Interconnect Switch      192.168.1.254
                    2nd port on PCIe card    eth5                 2nd Interconnect Switch      192.168.2.254
                    3rd port on PCIe card    eth6                 3rd Interconnect Switch      192.168.3.254
                    4th port on PCIe card    eth7                 4th Interconnect Switch      192.168.4.254
Secondary Master    1st port on PCIe card    eth4                 1st Interconnect Switch      192.168.1.253
                    2nd port on PCIe card    eth5                 2nd Interconnect Switch      192.168.2.253
                    3rd port on PCIe card    eth6                 3rd Interconnect Switch      192.168.3.253
                    4th port on PCIe card    eth7                 4th Interconnect Switch      192.168.4.253
Segment             1st port on PCIe card    eth4                 1st Interconnect Switch      192.168.1.#
                    2nd port on PCIe card    eth5                 2nd Interconnect Switch      192.168.2.#
                    3rd port on PCIe card    eth6                 3rd Interconnect Switch      192.168.3.#
                    4th port on PCIe card    eth7                 4th Interconnect Switch      192.168.4.#
ETL                 1st port on PCIe card    eth4                 1st Interconnect Switch      192.168.1.2#
                    2nd port on PCIe card    eth5                 2nd Interconnect Switch      192.168.2.2#
                    3rd port on PCIe card    eth6                 3rd Interconnect Switch      192.168.3.2#
                    4th port on PCIe card    eth7                 4th Interconnect Switch      192.168.4.2#

    NOTE: Linux does not always name NICs consistently. Do not rely on the names listed in this table;

    manually verify the name the OS assigns to each port on each server.

The first interconnect switch should service the 192.168.1.0/24 network, the second switch the 192.168.2.0/24 network, the third switch the 192.168.3.0/24 network, and the fourth switch the 192.168.4.0/24 network.

    The following diagrams show which ports are used by which NICs.


First Interconnect Switch

(Port map diagram: this switch carries the 192.168.1.0/24 network; each port is labeled with the final
octet of the connected NIC's address.)

    The number in each port indicates which IP address the connected NIC will have on its interconnect

    network. For example, the port labeled 1.6 is connected to the NIC with the 192.168.1.6 address.

Second Interconnect Switch

(Port map diagram: this switch carries the 192.168.2.0/24 network; each port is labeled with the final
octet of the connected NIC's address.)

    The number in each port indicates which IP address the connected NIC will have on its interconnect

    network. For example, the port labeled 2.6 is connected to the NIC with the 192.168.2.6 address.

Third Interconnect Switch

(Port map diagram: this switch carries the 192.168.3.0/24 network; each port is labeled with the final
octet of the connected NIC's address.)

    The number in each port indicates which IP address the connected NIC will have on its interconnect

    network. For example, the port labeled 3.6 is connected to the NIC with the 192.168.3.6 address.

Fourth Interconnect Switch

(Port map diagram: this switch carries the 192.168.4.0/24 network; each port is labeled with the final
octet of the connected NIC's address.)

    The number in each port indicates which IP address the connected NIC will have on its interconnect

    network. For example, the port labeled 4.6 is connected to the NIC with the 192.168.4.6 address.

NOTE: No ports are shown for ETL nodes. Connect ETL nodes starting at the last port in the top
row of each switch and work backwards. If there are more connections to make than there are
ports when ETL nodes are added, implement a dedicated ETL network.


    2.3 Servers

    2.3.1 iBMC Configuration

    The iBMC is a network-enabled device installed in each server allowing for remote control of the

    server. Console access, SNMP monitoring, and more are available through the iBMC. There are two ways

    to configure the network properties of the iBMC. Choose the one that is most convenient.

    2.3.1.1 Configuring iBMC IP Address in BIOS

    The iBMC’s network configuration can be performed in the server’s BIOS setup utility. Do the following:

    - Press F2 during the Power-On Self Test (POST).

    - Go to the Server menu and select Set BMC LAN Configuration.

    - Set the network as shown. Use a correct iBMC IP address and subnet mask

appropriate to the network in use. To change a value, use the arrow keys to highlight

the field and then press the space bar to open the field for editing.


    Press Return to save the change.

- When all changes are made, press F10 to save the changes and exit the BIOS Setup

Utility.
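Once the address is set and IPMI-over-LAN access is enabled (see the following sections), the setting can be double-checked from another host with ipmitool. This is a sketch; the address shown is an example segment node iBMC from the tables above:

    # Query the LAN configuration of a segment node iBMC over the network
    ipmitool -I lanplus -H 192.168.0.7 -U root -P root lan print 1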


    2.3.1.2 Port Redirection

    Out-of-band connectivity to the iBMC is done through ipmitool. The iBMC does not support ssh or telnet

    connections. To enable ipmitool access to the iBMC and the console, make the following changes in the

    BIOS setup utility:

    - During the POST, press F2 to access the BIOS Setup Utility.

    - Once in the utility, go to the Server menu.

- Set the fields as shown and press F10 to exit and save the changes.


    2.3.1.3 Connecting to the BMC – IPMITOOL Connection

    Connections to the BMC can be formed with the ipmitool from another server. In this way, it is possible

    to access the server console and perform tasks during the server POST without having to browse to the

    BMC.

    To make a console connection to the server’s BMC, do the following:

Log into another server where ipmitool is available. Some ipmitool implementations do not work for this purpose, but the ipmitool delivered with Red Hat or SuSE does work. If this connection is via ssh, use the -e ^ switch to change the ssh escape sequence from “~.” to “^.”. The escape sequence for ipmitool is also “~.”; if the ssh sequence is not remapped, closing the ipmitool session will drop the ssh connection rather than the ipmitool session.

Issue the following command:

    ipmitool -I lanplus -H [ip address of the target BMC] -U root -P root sol activate

The session will immediately start. If the OS is configured to present a login on ttyS1, there will be a login prompt. If the server is rebooted, the POST will appear in this session.

To leave, press ~. as the first characters in a line. This will terminate the ipmitool session. If the connection to the server running ipmitool is via ssh and the escape sequence was not remapped, this will drop the ssh session to the server and may leave the ipmitool session active with no connection to a terminal.
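Putting the pieces together, a console session opened from a jump host over ssh might look like the following sketch (the host name and iBMC address are illustrative examples taken from the addressing tables above):

    # Remap the ssh escape character so that "~." is left free for ipmitool
    ssh -e '^' root@mdw

    # From mdw, open a serial-over-LAN console to a segment node's iBMC
    ipmitool -I lanplus -H 192.168.0.7 -U root -P root sol activate

    # Interact with the console; type "~." at the start of a line to end the SOL session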

2.3.1.4 Connecting to the BMC – HTTP Connection

    Connections to the BMC are formed by browsing to the BMC address. Use ssh to the master node with

    the following ssh command:

    ssh -L 8000:[ip address or hostname of BMC desired]:443 root@mdw

    Once the ssh session is connected, open a browser and browse to the following URL:

    https://localhost:8000/login.html

    A separate ssh connection using a different port (8001, 8002, etc.) can be made if connection to more

    than one iBMC at a time is desired. Be sure the local port used is not already in use.
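For example (a sketch; the iBMC addresses are illustrative), two iBMCs could be reached at the same time through tunnels on different local ports:

    # Tunnel local port 8000 to the first segment node's iBMC and 8001 to the second
    ssh -L 8000:192.168.0.1:443 root@mdw
    ssh -L 8001:192.168.0.2:443 root@mdw

    # Then browse to https://localhost:8000/login.html and https://localhost:8001/login.html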


    2.3.2 Hard Disk Configuration – MASTER Node

    Master nodes have seven, hot swappable disks. Configure six of these into a single, RAID-5 stripe set

    with the following properties:

    128k stripe width

    Adaptive Read-ahead

    Disk Cache Disabled

    Cached I/O

    This needs to be configured in the RAID card’s option ROM. Do the following:

    - During the POST, press CTRL-H to enter the option ROM on the RAID card when

    prompted.

    - When the option ROM starts, select the correct adapter (typically there is only one)

    and press the Start button.


    - Select the Configuration Wizard link.

    NOTE: The image shown includes more disks than would be normal in the master

    node.

    - Select New Configuration.

    This stripe set will be reported to the OS as /dev/sda. During the OS install,

    partition this disk according to the relevant install guide standards.


    - Answer Yes to the confirmation question.

    - Select the Manual Configuration option.

    - Select the six drives to include in the stripe set and click Add To Array.

    - Click the Accept DG button to save the disk group.


    - Click the Next button to go to the next screen.

    - Add the current disk group to the span.

- Click Next to go to the next screen. Choose RAID5 for the RAID Level and set the stripe

width to 128 KB as shown.


    - Click Accept to save the virtual drive and answer Yes to the confirmation question.

    - Click Next to go to the next screen.


    - Click Accept to save the configuration and answer Yes to the confirmation

    questions.

    - Set VD0 to be the bootable disk and click Go.


    - Click Home.

    - Select the remaining unconfigured disk (the image shown includes more disks than

    would be present in a master node) and open its properties. Select the Make

    Dedicated HSP property to designate the disk a hot spare.

    - Click the Go button to save the change.

    - Click the Home button and exit the utility.
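For reference, an equivalent master node configuration can be sketched with the MegaCLI utility used for segment nodes in section 2.3.4. This is an assumption-laden sketch: it presumes the seven disks enumerate as slots 0 through 6 with no enclosure prefix, so confirm the enclosure and slot numbering with the enumeration commands in that section first; the disk cache setting from the property list above still needs to be applied through the controller.

    # Six-disk RAID-5 virtual disk: write-back, adaptive read-ahead, cached I/O, 128 KB stripe
    MegaCli64 -CfgLdAdd -R5[:0,:1,:2,:3,:4,:5] WB ADRA Cached NoCachedBadBBU -strpsz128 -a0

    # Designate the remaining disk (slot 6) as a dedicated hot spare for disk group 0
    MegaCli64 -PDHSP -Set -Dedicated -Array0 -PhysDrv[:6] -a0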


    2.3.3 Hard Disk Configuration – Segment and ETL Nodes (using RAID Controller BIOS)

Use this process to configure the data disks in the RAID card’s option ROM. This can be done prior to

installing an operating system:

    - During the POST, press CTRL-H to enter the option ROM on the RAID card when

    prompted.

    - When the option ROM starts, select the correct adapter (typically there is only one)

    and press the Start button.

    - Select the Configuration Wizard link.


    - Select New Configuration.

    This stripe set will be reported to the OS as /dev/sda. During the OS install,

    partition this disk according to the relevant install guide standards.

    - Answer Yes to the confirmation question.

    - Select the Manual Configuration option.


    - Select the six drives to include in the first stripe set and click Add To Array.

    - Click the Accept DG button to save the disk group.

    - Select the next six drives to include in the second stripe set and click Add To

    Array.


    - Click Accept DG and then Next.

    - Add the first disk group to the SPAN and click Next.


    - Change the properties as shown, click Accept, and Yes on the confirmation screen.


    - Click the Back button to return to the SPAN screen. Add the second disk group to

    the SPAN.

    - Click Next to go to the next screen.


- Set the virtual disk properties as shown, click Accept, then Yes on the confirmation screen.


    - Click Next to go to the next screen.

    - Click Accept and Yes on the confirmation screens.

    - Click Home and exit the utility.


    2.3.4 Hard Disk Configuration – Segment and ETL Nodes (using MegaCLI)

    Segment and ETL nodes have 12 hot-swappable disks.

    The disks should be configured into two disk groups of six disks each. One virtual disk encompassing the

    entire disk group should be configured on each.

    NOTE: Skip this procedure if you configured the hard disks using the RAID Controller BIOS

    described in the previous section.

    This process can be performed with the MegaCLI utility. This process requires a running operating

    system or the use of the RAID card’s Pre-Boot CLI environment:

    First, determine how the server identifies everything. It is important to determine the

    enclosure number and the physical disk slot numbers. These are not consistent server

    to server. Typically, internal disks do not have an enclosure number but this is not

    consistent. Use the following commands to determine this information:

MegaCli64 -EncInfo -a0 | grep -e Enclosure -e "Device ID"

    This command lists all the attached enclosures on the first card. The output

    will be something like this on a C2100 without any external disks:

    Enclosure 0:

    Device ID : 252

    In this case, the Enclosure number is either nothing or 252. The correct

    value is determined later.

MegaCli64 -PDList -a0 | grep -e ^Slot -e "^Device Id"

    This command lists the slot and device IDs for all the disks attached to the

    first card. The output will be something like this on a C2100 without any

    external disks:

    Slot Number: 8

    Device ID: 8

    Slot Number: 9

    Device ID: 9

    Slot Number: 10

    Device ID: 10

    Slot Number: 11

    Device ID: 11

    Slot Number: 12

    Device ID: 12

    Slot Number: 13

    Device ID: 13

    Slot Number: 14

    Device ID: 14

    Slot Number: 15

    Device ID: 15

    Slot Number: 16


    Device ID: 16

    Slot Number: 17

    Device ID: 17

    Slot Number: 18

    Device ID: 18

    Slot Number: 19

    Device ID: 19

This indicates that the disk slots are numbered starting at 8 and ending at 19.

    MegaCli64 -PDInfo -PhysDrv[:19] -a0 | grep Slot

    MegaCli64 -PDInfo -PhysDrv[252:19] -a0 | grep Slot

    These commands show the details for a specific physical drive (the example

    shows 19 since 19 is in the list from the previous command). The specifier

    for the drive is Enclosure:Slot. In the first step, the enclosure’s ID was

    identified as 252. These commands determine if the ID shown works in

    commands or if the correct specifier for the enclosure is no value.

    Successful output from these commands is:

    Slot Number: 19

    The command that succeeds is an example of the correct disk specification

    for the next commands.

    Next, list out the disk layouts. Typically, the first six disks will be in the first group and

    the second six disks in the next. Using the previous example and assuming the correct

    enclosure specification is no value, the raid groups would be:

    First group: [:8,:9,:10,:11,:12,:13]

    Second group: [:14,:15,:16,:17,:18,:19]

    Finally, execute the commands to create the virtual disks:

MegaCli64 -CfgLdAdd -R5[:8,:9,:10,:11,:12,:13] WB ADRA Cached NoCachedBadBBU -strpsz128 -a0

MegaCli64 -CfgLdAdd -R5[:14,:15,:16,:17,:18,:19] WB ADRA Cached NoCachedBadBBU -strpsz128 -a0

    These commands create two virtual disks. Each will be made up of six drives, use RAID-

    5, write-back when the battery is good, adaptive read ahead, and a stripe width of

    128k. These should appear in the OS as /dev/sdb and /dev/sdc.
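A quick way to confirm the result (a sketch; output formats vary between MegaCLI releases) is to list the virtual disks and check that the OS sees the new block devices:

    # List the virtual disks with their RAID level, size, and cache policy
    MegaCli64 -LDInfo -Lall -a0

    # The two new virtual disks should appear as sdb and sdc
    cat /proc/partitions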


    2.3.5 Firmware

    2.3.5.1 Server

    Apply the most recent firmware available from Dell for the C2100. This material is available from Dell

    Support.

    2.3.5.2 Disk Controller

    Make sure that the RAID card has at least firmware version 2.30.03-0775 which is distributed by LSI as

    part of FW Package 12.3.0-0022. The latest firmware is available at this URL:

http://www.lsi.com/storage_home/products_home/internal_raid/megaraid_sas/6gb_s_value_line/sas9260-8i/index.html
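To check the installed controller firmware against this requirement (a sketch; the exact label in the output may differ by MegaCLI release), query the adapter:

    # Show adapter information; look for the FW Package Build line
    MegaCli64 -AdpAllInfo -a0 | grep -i "FW Package"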

3. Validation

Most of the validation effort is performed after the OS is installed and a variety of OS-level tools are
available. A checklist covering the items raised in this section is included in the relevant OS installation
guide; it should be printed separately and signed for delivery.

    At this stage, the following items should be examined and verified:

    All devices are installed in positions consistent with the racking diagrams in this document.

    All cables are labeled according to the standards in this document.

    All servers are labeled according to the standards in this document.

    All switches and patch panels are labeled according to the standards in this document.

    All racks are labeled according to the standards in this document.

    All devices power on.

    All hot-swappable devices are properly seated.

    No devices show any warning or fault lights.

    All network management ports are accessible via the administration LAN.

    All cables are neatly dressed into the racks and have no sharp bends or crimps.

    All rack doors and covers are installed and close properly.

    All servers extend and retract without pinching or stretching cables.



    4. Labels

4.1 Racks

Each rack in a reference configuration is labeled at the top of the rack and on both the front and back.

    Racks are named Master Rack or Segment Rack # where # is a sequential number starting at 1. A rack

    label would look like this:

MASTER RACK

SEGMENT RACK 1

4.2 Servers

Each server is labeled on both the front and back of the server. The label should be the base name of

    the server’s network interfaces. In other words, if a segment node is known as sdw15-1, sdw15-2,

    sdw15-3, and sdw15-4, the label on that server would be sdw15.

    sdw15

4.3 Switches

Switches are labeled according to their purpose. Interconnect switches are i-sw, administration

    switches are a-sw, and ETL switches are e-sw. Each switch is assigned a number starting at 1. Switches

    are labeled on the front of the switch only because the back is generally not visible when racked.

    i-sw-1

4.4 Patch Panels

Each rack may have as many as three patch panels. The panels are named panel and use a letter (A, B,

    or C) to differentiate within a rack. Each panel is labeled on its face only.

    Single-rack configurations have no patch panels. Multi-rack configurations without the ETL option

    selected may have a panel-A or both panel-A and panel-B. When the ETL option is added, panel-C is

    added. In racks with only panel-A, adding the ETL option still adds panel-C so that those racks have

    panel-A and panel-C.

    panel-A

    4.5 Network Cables

    4.5.1 CAT6

    CAT6 cables are used for the 1Gb interconnect. When using this interconnect, each system has

    between three and nine connections depending on which system and which options are configured. In

    addition, the connections flow through patch panels. This means any one connection between server

    and switch may have two cables and four connection points.

    CAT6 cables are large enough around that labels should be wrapped around the cable without leaving a

    tab hanging off the cable. Tabs tend to tear off and become trapped in device fans. Also, tabs cluster

    up near the switches leaving a messy appearance. Be sure that the top and bottom of the label overlap


a small amount to ensure that the label will remain affixed to the cable. PTouch labels at least ¾-inch

high are acceptable; ½-inch PTouch labels do not stay on cables and cannot be used.

    Each network cable is labeled at each end with enough information to trace the connection both

    directions. The cable label needs the following elements:

    The server identifier

    A server is identified by the rack name where it is installed, the node name of

    the server, and the interface number the current connection uses. For

    example, the fully qualified server identifier for a connection from the net1

interface (nge1) in sdw10, a server racked in rack 2, would be:

    rack-2.sdw10-2

    The patch panel identifier

    The unique name of a patch panel includes the rack name where the panel is

installed, the panel name, and the port used for the connection. There are

    between one and three panels in each rack and each panel has 24 ports.

    Therefore, a fully qualified patch panel name for the 10th port in the second

    panel in the second rack would be:

    rack-2.panel-B.10

For connections that go straight from server to switch, leave this identifier blank

    on the label.

    Switch identifier

    A switch identifier includes the rack where the switch is installed, the switch

    name, and the port number used in the switch for the current connection. The

    switch name indicates what kind of switch is in use for the connection; “a” for

Administration, “i” for Interconnect, and “e” for ETL. For example, a fully

    qualified switch identifier for the 36th port in the second interconnect switch

    installed in the first rack would be:

    rack-1.i-sw-2.36

So, a label for a connection from sdw15’s net0 port to port 37 of the first interconnect switch

    would look as follows if sdw15 is installed in the third rack and uses the 15th port in the first panel in

    the third rack:

    rack-3.sdw15-1

rack-3.panel-A.15

    rack-1.i-sw-1.37

In the example, there are two cables connecting sdw15-1 and i-sw-1.37. These cables meet at

rack-3.panel-A.15. This means there are four labels, one at each end of each cable and all identical to the

    example.
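As an illustration of the naming rules above (not part of the reference configuration itself), a small helper can assemble the three label lines from the rack, node, panel, and switch identifiers; passing no panel leaves the middle line blank for direct server-to-switch runs.

    def cable_label(server_rack, node, iface,
                    switch_rack, switch, switch_port,
                    panel_rack=None, panel=None, panel_port=None):
        # Server identifier: rack, node name, and interface number.
        server_id = "rack-%s.%s-%s" % (server_rack, node, iface)
        # Patch panel identifier, or a blank line for direct connections.
        panel_id = "" if panel is None else "rack-%s.panel-%s.%s" % (panel_rack, panel, panel_port)
        # Switch identifier: rack, switch name, and switch port.
        switch_id = "rack-%s.%s.%s" % (switch_rack, switch, switch_port)
        return "\n".join([server_id, panel_id, switch_id])

    # Reproduces the sdw15 example: net0 (interface 1) of sdw15 in rack 3, through
    # port 15 of panel A in rack 3, to port 37 of i-sw-1 in rack 1.
    print(cable_label(3, "sdw15", 1, 1, "i-sw-1", 37,
                      panel_rack=3, panel="A", panel_port=15))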


    5. Physical Installation Details

    5.1 Option Concerns

    5.1.1 Additional Segment Nodes

    Each additional segment node has the same connections and place in the diagrams as sdw2.

When the 22nd segment node is added and the 1Gb interconnect is used, the configuration requires four

Gig-E switches, one for each Class C IP range defined for the interconnect LAN.

In addition, adding the 23rd host of any type requires an additional 48-port 10/100 switch for

the administration LAN. If two administration switches are present, they must be cross-connected

to permit full access to all connected consoles.
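The switch counts above can be expressed as a short sketch for quick what-if checks; the thresholds are simply the ones stated in this section.

    def interconnect_switches(segment_nodes):
        # Two Gig-E switches cover up to 21 segment nodes; four from the 22nd onward.
        return 2 if segment_nodes <= 21 else 4

    def administration_switches(total_hosts):
        # A second 10/100 administration switch is needed from the 23rd host of any type.
        return 1 if total_hosts <= 22 else 2

    print(interconnect_switches(21), interconnect_switches(22))      # 2 4
    print(administration_switches(22), administration_switches(23))  # 1 2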

    5.1.2 Additional ETL Nodes and ETL Network

ETL networks are not supported on single-rack configurations. A single-rack configuration can include

one to four ETL nodes, each of which connects directly to the interconnect network.

The standard configuration allows up to two ETL nodes for every 10 segment nodes. The maximum

number of supported ETL nodes is eight. Each additional ETL node has the same connections and place in the

    diagrams as etl1.

In configurations where the segment-to-ETL-node ratio is greater than 2 to 1, the segment nodes are

connected to the ETL network only once each. An ETL node can send data no faster than 500MB/s, so

when there are more than 2 segment nodes for each ETL node, no single segment node

needs more than 1Gb/s of bandwidth to receive its share of the ETL nodes’ output.
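As a rough illustration of that bandwidth argument (treating 500MB/s as roughly 4Gb/s, an assumption of this sketch), the standard ratio of two ETL nodes per ten segment nodes leaves each segment node well under its 1Gb/s link rate:

    etl_rate_gbps = 500 * 8 / 1000.0   # 500 MB/s expressed in Gb/s (decimal units)
    segments_per_etl = 10 / 2.0        # standard ratio: 2 ETL nodes per 10 segment nodes
    per_segment_gbps = etl_rate_gbps / segments_per_etl
    print(per_segment_gbps)            # 0.8 Gb/s, below the 1 Gb/s link rate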

    Care must be taken to account for ETL node network management ports on the administration LAN.

    The standard administration LAN is a 48-port, 10/100 switch. Typically, there is room to add a few ETL

    nodes to the configuration and use the same switch. However, if the number of ETL nodes plus the

    number of all the hosts in the configuration (administration, master, secondary master, and segment)

    totals 48 or more, additional switches are required.
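That capacity check can be written down directly; a minimal sketch using the 48-port figure stated above:

    def needs_extra_admin_switch(non_etl_hosts, etl_nodes, switch_ports=48):
        # True when ETL nodes plus all other hosts reach the port count of a single switch.
        return non_etl_hosts + etl_nodes >= switch_ports

    print(needs_extra_admin_switch(44, 4))   # True: 48 hosts total, so add another switch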

    5.1.3 Top- or Bottom-Mounted Options

    The rack power cables, network switches, and patch panels can be mounted in the top or bottom of

    the racks. This option does not affect the network drawings but does change the rack layouts.

    5.1.4 Interconnect Network Type

The drawings show two interconnect switches. In some cases, when using the 1Gb interconnect, more than

    two switches may be deployed. This does not change the number of logical networks used or how they

    are deployed.

    5.1.5 Segment Node Disk Type

    This option addresses the individual disk size in Dell C2100s and has no bearing whatsoever on the

    network drawings.

5.2 Racking

There are four distinct rack layouts applicable to the reference configuration. Which ones are used in

any cluster depends on the number and types of hosts included in the configuration.


    5.2.1 Single-Rack Cluster

A single-rack cluster is one with only a master rack. Only the top-mounted master rack configuration is

supported in a single-rack cluster.

NOTE: Always mount switches and patch panels rear-facing. They are shown front-facing in the rack

layout only to indicate each device’s position.

    5.2.1.1 Network Connections

    The following tables show the number of connections each host consumes in the single rack cluster

    along with the connection type (direct to switch or patch panel). To determine the total number of

    connections present in the solution, multiply the number of each host type (master and segment) by


    the column totals in the table.

    Please note that no connection information is given for dedicated ETL networks. These are custom

    designed on a case-by-case basis. Please consult with the Dell support team for more information on

    how these are implemented.

    5.2.1.1.1 1Gb Interconnect

Networks         Master Node (Primary or Secondary)      Segment Node
                 Direct to Switch    Patch Panel          Direct to Switch    Patch Panel
Customer LAN     0                   1                    0                   0
Administration   2                   0                    1                   0
Interconnect     4                   0                    4                   0
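To apply the table, multiply each column by the host counts. The sketch below is illustrative only and assumes the usual two master nodes (primary and secondary) plus a chosen number of segment nodes.

    # Per-host connection counts taken from the table above (1Gb interconnect).
    MASTER = {"direct": 2 + 4, "patch": 1}    # Administration + Interconnect; Customer LAN
    SEGMENT = {"direct": 1 + 4, "patch": 0}

    def total_connections(segment_nodes, masters=2):
        direct = masters * MASTER["direct"] + segment_nodes * SEGMENT["direct"]
        patch = masters * MASTER["patch"] + segment_nodes * SEGMENT["patch"]
        return direct, patch

    print(total_connections(14))   # (82, 2) for a 14-segment-node single rack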

    5.2.1.1.2 RJ-45 Patch Panel Maps

    1Gb Interconnect, 2 to 14 Segment Nodes

    Use this map to determine how to use the ports in the rack’s patch panel. This panel applies to 1Gb

    networks. Two panels are used to preserve room in the rack to expand to a multi-rack solution.

[Patch panel port map: only two ports are used, one for the mdw-to-LAN connection and one for the smdw-to-LAN connection; all remaining ports are N/A.]

    The connections labeled with mdw relate to the primary master and those with smdw relate to the

    secondary master.

    5.2.2 Multiple Racks

    Clusters with more than one rack or that are expected to grow to more than one rack are considered

    multi-rack configurations.

    5.2.2.1 Master Rack

    The master rack in a multi-rack configuration is the same as a single-rack configuration. However, a

master rack may use 48- or 52-port 10Gb switches when using the 10Gb interconnect. Also, master racks

    may have the switches mounted in the bottom.

    NOTE: The bottom-mount option is not shown. When using bottom mount, move all switches and

    panels to the bottom of the rack and move all the other devices up.


    5.2.2.1.1 Network Connections

The following tables show the number of connections each host consumes in the master rack

    along with the connection type (direct to switch or patch panel). To determine the total number of

    connections present in the solution, multiply the number of each host type (master and segment) by

    the column totals in the table.

    Please note that no connection information is given for dedicated ETL networks. These are custom

    designed on a case-by-case basis. Please consult with the Dell support team for more information on

    how these are implemented.


    5.2.2.1.1.1 1Gb Interconnect, 2 – 21 Segment Nodes

Networks         Master Node (Primary or Secondary)      Segment Node
                 Direct to Switch    Patch Panel          Direct to Switch    Patch Panel
Customer LAN     0                   2                    0                   0
Administration   2                   0                    1                   0
Interconnect     4                   0                    4                   0

    5.2.2.1.1.2 1Gb Interconnect, 22 - 44 Segment Nodes

Networks         Master Node (Primary or Secondary)      Segment Node
                 Direct to Switch    Patch Panel          Direct to Switch    Patch Panel
Customer LAN     0                   1                    0                   0
Administration   2                   0                    1                   0
Interconnect     2                   2                    2                   2

    5.2.2.1.1.3 RJ-45 Patch Panel Port Maps

    Use this map to determine how to use the ports in the rack’s patch panel.

    1Gb Interconnect, 2 to 21 Segment Nodes:

[Patch panel port map: only two ports are used, one for the mdw-to-LAN connection and one for the smdw-to-LAN connection; all remaining ports are N/A.]

    The connections labeled with mdw relate to the primary master and those with

    smdw relate to the secondary master.

    1Gb Interconnect, 22 to 44 Segment Nodes:

[Patch panel port map: ports carry interconnect addresses 3.1 through 3.14 plus 3.253 and 3.254 on the third interconnect network, and 4.1 through 4.14 plus 4.253 and 4.254 on the fourth, along with one port each for the mdw-to-LAN and smdw-to-LAN connections; all remaining ports are N/A.]

    The connections labeled with mdw relate to the primary master and those with

    smdw relate to the secondary master.


    The number in each port indicates which IP address the connected NIC will have on its interconnect

    network. For example, the port labeled 3.6 is connected to the NIC with the 192.168.3.6 address.
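That convention is easy to check mechanically; a one-function sketch of the mapping (illustrative only):

    def port_label_to_ip(label):
        # A port label "<network>.<host>" maps to 192.168.<network>.<host>.
        network, host = label.split(".")
        return "192.168.%d.%d" % (int(network), int(host))

    print(port_label_to_ip("3.6"))   # 192.168.3.6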

    5.2.2.2 First Segment Rack

    The first segment rack contains the second two interconnect switches when using four interconnect

    switches. These two switches are not necessary until the 22nd segment node is added to the cluster.

    The space for these switches is reserved in the second segment rack since this rack could potentially

    hold the 22nd segment node.

    When the 22nd segment node is added, the switches are installed into the first segment rack and the

    racks re-cabled to redistribute the network connections.

    In addition, the second administration switch will be placed into this rack when needed.

    5.2.2.2.1 Network Connections


The following tables show the number of connections each host consumes in the first segment rack

    along with the connection type (direct to switch or patch panel). To determine the total number of

    connections present in the solution, multiply the number of each host type (master and segment) by

    the column totals in the table.

    Please note that no connection information is given for dedicated ETL networks. These are custom

    designed on a case-by-case basis. Please consult with the Dell support team for more information on

    how these are implemented.

    5.2.2.2.1.1 1Gb Interconnect, 2 to 21 Segment Nodes

Networks         Segment Node
                 Direct to Switch    Patch Panel
Customer LAN     0                   0
Administration   0                   1
Interconnect     0                   4

    5.2.2.2.1.2 1Gb Interconnect, 22 to 44 Segment Nodes

Networks         Segment Node
                 Direct to Switch    Patch Panel
Customer LAN     0                   0
Administration   1                   0
Interconnect     2                   2

    5.2.2.2.1.3 RJ-45 Patch Panel Port Maps

    1 Gb Interconnect, 2 to 21 Segment Nodes

[Patch panel port map: ports carry administration addresses 0.15 through 0.30 and interconnect addresses 1.15 through 1.21, 2.15 through 2.21, 3.15 through 3.21, and 4.15 through 4.21; all remaining ports are N/A.]

    1 Gb Interconnect, 22 to 44 Segment Nodes

[Patch panel port map: ports carry administration addresses 0.15 through 0.30 and interconnect addresses 1.15 through 1.30 and 2.15 through 2.30.]


    5.2.2.3 Second Segment Rack

    The second segment rack contains only segment or ETL nodes; no switches or other servers.


    5.2.2.3.1.1 1Gb Interconnect, Any Number of Segment Nodes

Networks         Segment Node
                 Direct to Switch    Patch Panel
Customer LAN     0                   0
Administration   0                   1
Interconnect     0                   4

    5.2.2.3.1.2 RJ-45 Patch Panel Port Maps

    1 Gb Interconnect, Any Number of Segment Nodes:

[Patch panel port map: ports carry administration addresses 0.31 through 0.45 and interconnect addresses 1.31 through 1.45, 2.31 through 2.45, 3.31 through 3.45, and 4.31 through 4.45; all remaining ports are N/A.]


    5.2.2.4 Third Segment Rack

    This rack contains only segment nodes; no switches or other servers.

    5.2.2.4.1 Network Connections

The following tables show the number of connections each host consumes in the third segment rack

    along with the connection type (direct to switch or patch panel). To determine the total number of

    connections present in the solution, multiply the number of each host type (master and segment) by

    the column totals in the table.

    Please note that no connection information is given for dedicated ETL networks. These are custom

    designed on a case-by-case basis. Please consult with the Dell support team for more information on

    how these are implemented.


    5.2.2.4.1.1 1Gb Interconnect, Any Number of Segment Nodes

Networks         Segment Node
                 Direct to Switch    Patch Panel
Customer LAN     0                   0
Administration   0                   1
Interconnect     0                   4

    5.2.2.4.1.2 RJ-45 Patch Panel Port Maps

    10 Gb Interconnect, Any Number of Segment Nodes:

[Patch panel port map: only administration addresses 0.46 through 0.49 are used; all remaining ports are N/A.]

5.3 Network Diagrams

The following diagrams show network topologies for the Greenplum Data Warehouse cluster. These

    diagrams do not show all the potential options, only those affecting network complexity.

    5.3.1 1 Gb Interconnect

    The following diagrams show examples of clusters using 1Gb interconnect links.


    5.3.1.1 Minimum Cluster

    The minimum configuration for the Greenplum Data Warehouse appliance consists of three network

    switches, two dedicated master nodes (primary and secondary), and two segment nodes. Two of the

    switches are Gigabit Ethernet switches and the third is a 10/100 switch. Data has to be loaded through

    the master nodes in this configuration.

[Network diagram: minimum cluster]

Administration Network: 24-port, 10/100 switch, 192.168.0.0/24

Interconnect Switch #1: 48-port, 10/100/1000 switch, 192.168.1.0/24 and 192.168.3.0/24

Interconnect Switch #2: 48-port, 10/100/1000 switch, 192.168.2.0/24 and 192.168.4.0/24

MDW, Primary Master (C2100): iBMC 192.168.0.254; eth0 local IP on the site network; eth1 192.168.0.203; eth4 192.168.1.254; eth5 192.168.2.254; eth6 192.168.3.254; eth7 192.168.4.254

SMDW, Secondary Master (C2100): iBMC 192.168.0.253; eth0 local IP on the site network; eth1 192.168.0.202; eth4 192.168.1.253; eth5 192.168.2.253; eth6 192.168.3.253; eth7 192.168.4.253

SDW1, Segment (C2100): iBMC 192.168.0.1; eth4 192.168.1.1; eth5 192.168.2.1; eth6 192.168.3.1; eth7 192.168.4.1

SDW2, Segment (C2100): iBMC 192.168.0.2; eth4 192.168.1.2; eth5 192.168.2.2; eth6 192.168.3.2; eth7 192.168.4.2
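The addressing in the diagram follows a simple pattern: a node's host number is reused on every network, the primary master uses 254 and the secondary master 253, and only the masters carry an eth1 address on the administration LAN. The sketch below is illustrative only, with interface names taken from the diagram.

    def node_ip_plan(host_number, admin_nic_ip=None):
        # The iBMC always sits on the 192.168.0.0/24 administration network.
        plan = {"iBMC": "192.168.0.%d" % host_number}
        if admin_nic_ip:
            plan["eth1"] = admin_nic_ip    # masters only (.203 / .202 in the diagram)
        # eth4 through eth7 land on interconnect networks 192.168.1-4.0/24.
        for network, iface in enumerate(["eth4", "eth5", "eth6", "eth7"], start=1):
            plan[iface] = "192.168.%d.%d" % (network, host_number)
        return plan

    print(node_ip_plan(254, "192.168.0.203"))   # mdw
    print(node_ip_plan(1))                      # sdw1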


    5.3.1.2 Minimum Cluster with ETL Access

    This drawing shows the minimum configuration including ETL capacity. In this case, additional network

    interfaces are put into the segment nodes and used to connect to the customer network. Data is loaded

    from existing servers over these interfaces.

[Network diagram: minimum cluster with ETL access]

Switches and addressing are the same as in the minimum cluster diagram. In addition, SDW1 and SDW2 each have eth0 and eth1 configured with local IP addresses on the site network, which is where ETL data is loaded from.


    5.3.1.3 Minimum Configuration With ETL Server

    This drawing shows the minimum configuration including an ETL server. This example builds on the last

    by bringing the ETL server onto the interconnect network. In the previous diagram, the cluster was

    opened up to the customer LAN to provide access to source data. In this case, the server with source

    data is connected to the interconnect network. This is preferred because it is more secure and less

    complicated.

[Network diagram: minimum cluster with ETL server]

Switches and addressing are the same as in the minimum cluster diagram. In addition, a Customer Data Source server (an existing system on the customer network that is essentially an ETL node) keeps its LAN connection(s) and is attached to the interconnect networks as 192.168.1.46, 192.168.2.46, 192.168.3.46, and 192.168.4.46.

    The ETL server need not be an existing customer server; it can be a new server included in the cluster

    for this purpose. It is important to note that this configuration needs to carefully account for the

    number of network connections consumed by ETL servers. If the total number of servers on the

    interconnect, including ETL, exceeds 21, four interconnect switches are required. If the number

    exceeds 46, ETL servers need to be put on a dedicated network that includes the segment nodes from

    the cluster but is not the interconnect.
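A compact restatement of that sizing guidance, as a sketch only, using the thresholds given above:

    def interconnect_sizing(masters, segments, etl):
        # Count every server attached to the 1Gb interconnect, ETL servers included.
        total = masters + segments + etl
        if total > 46:
            return "put ETL servers on a dedicated ETL network"
        if total > 21:
            return "four interconnect switches required"
        return "two interconnect switches are sufficient"

    print(interconnect_sizing(2, 20, 2))   # four interconnect switches required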


    Appendix A: Bill of Materials

    Appendix A1: Master Node Bill of Materials

    Base Unit: PowerEdge C2100 Expander Backplane support for 3.5" Hard Drives Redundant

    Power Supplies (224-8350)

Processor: Intel Xeon X5670, 2.93GHz, 12M Cache, 5.86 GT/s QPI, Turbo, HT (317-4017)

    Processor: Thermal Heatsink,CPU,C2100 (317-3934)

    Processor: Thermal Heatsink Front,C2100 (317-3935)

Processor: Intel Xeon X5670, 2.93GHz, 12M Cache, 5.86 GT/s QPI, Turbo, HT (317-4017)

    Memory: 48GB Memory (12x4GB), 1333MHz Dual Ranked RDIMMs for 2 Processors, Optimized (317-3394)

    Operating System: No OS, No Utility Partition (420-3323)

    NIC: Intel Gigabit ET Quad Port 1GbE, PCIe x4 (430-0771)

    Documentation Diskette:

    C2100 Documentation (330-8774)

    Additional Storage Products:

    Hard Drive Carrier,3.5,1-12PCS,C2100 (342-0981)

    Additional Storage Products:

    Hard Drive Carrier,3.5,1-12PCS,C2100 (342-0981)

    Additional Storage Products:

    Hard Drive Carrier,3.5,1-12PCS,C2100 (342-0981)

    Additional Storage Products:

    Hard Drive Carrier,3.5,1-12PCS,C2100 (342-0981)

    Additional Storage Products:

    300GB 15K RPM Serial-Attach SCSI 6Gbps 3.5in Hot Plug Hard Drive (342-1542)

    Additional Storage Products:

    300GB 15K RPM Serial-Attach SCSI 6Gbps 3.5in Hot Plug Hard Drive (342-1542)

    Additional Storage Products:

    300GB 15K RPM Serial-Attach SCSI 6Gbps 3.5in Hot Plug Hard Drive (342-1542)

    Additional Storage Products:

    300GB 15K RPM Serial-Attach SCSI 6Gbps 3.5in Hot Plug Hard Drive (342-1542)

    Feature Add-in LSI 9260-8i controllers for up to 12 HP Drives total (342-0993)

    Feature LSI 9260-8i SAS/SATA Card (342-1529)

    Feature C2100 Sliding Rail Kit (330-8520)

    Service: Dell Hardware Limited Warranty Initial Year (909-1677)

    Service: Dell Hardware Limited Warranty Extended Year (909-1668)

    Service: Pro Support for IT: Next Business Day Onsite Service After Problem Diagnosis, Initial Year (926-4080)

    Service: Pro Support for IT: Next Business Day Onsite Service After Problem Diagnosis, 2Year Extended (923-2322)

    Service: ProSupport for IT: 7x24 HW / SW Tech Support and Assistance for Certified IT Staff, 3 Year (923-2362)

Service: Thank you for choosing Dell ProSupport. For tech support, visit


    http://support.dell.com/ProSupport or call 1-800-9 (989-3439)

    Installation: On-Site Installation Declined (900-9997)

    Misc: Hard Drive Carrier,3.5,1-12PCS,C2100 (342-0981)

    Misc: Hard Drive Carrier,3.5,1-12PCS,C2100 (342-0981)

    Misc: Hard Drive Carrier, 3.5, 1-12PCS, C2100 (342-0981)

    Misc: Hard Drive Carrier, 3.5,1-12PCS, C2100 (342-0981)

    Misc: 300GB 15K RPM Serial-Attach SCSI 6Gbps 3.5in Hot Plug Hard Drive (342-1542)

    Misc: 300GB 15K RPM Serial-Attach SCSI 6Gbps 3.5in Hot Plug Hard Drive (342-1542)

    Misc: 300GB 15K RPM Serial-Attach SCSI 6Gbps 3.5in Hot Plug Hard Drive (342-1542)

    Misc: 300GB 15K RPM Serial-Attach SCSI 6Gbps 3.5in Hot Plug Hard Drive (342-1542)

    Misc: Power Cord, C13 to C14, PDU Style, 12 Amps, 2 meter, Qty 1 (330-7353)

    Misc: Power Cord, C13 to C14, PDU Style, 12 Amps, 2 meter, Qty 1 (330-7353)

    Appendix A2: Segment Node Bill of Materials

    Base Unit: PowerEdge C2100 Expander Backplane support for 3.5" Hard Drives Redundant

    Power Supplies (224-8350)

Processor: Intel Xeon X5670, 2.93GHz, 12M Cache, 5.86 GT/s QPI, Turbo, HT (317-4017)

    Processor: Thermal Heatsink, CPU, C2100 (317-3934)

    Processor: Thermal Heatsink Front, C2100 (317-3935)

Processor: Intel Xeon X5670, 2.93GHz, 12M Cache, 5.86 GT/s QPI, Turbo, HT (317-4017)

    Memory: 48GB Memory (12x4GB), 1333MHz Dual Ranked RDIMMs for 2 Processors, Optimized (317-3394)

    Operating System: No OS, No Utility Partition (420-3323)

    NIC: Intel Gigabit ET Quad Port 1GbE, PCIe x4 (430-0771)

    Documentation Diskette:

    C2100 Documentation (330-8774)

    Additional Storage Products:

    Hard Drive Carrier, 3.5,1-12PCS, C2100 (342-0981)

    Additional Storage Products:

    Hard Drive Carrier, 3.5,1-12PCS, C2100 (342-0981)

    Additional Storage Products:

    Hard Drive Carrier, 3.5,1-12PCS, C2100 (342-0981)

    Additional Storage Products:

    Hard Drive Carrier, 3.5,1-12PCS, C2100 (342-0981)

    Additional Storage Products:

    600GB 15K RPM Serial-Attach SCSI 6Gbps 3.5in Hot Plug Hard Drive (342-1544)

    Additional Storage Products:

    600GB 15K RPM Serial-Attach SCSI 6Gbps 3.5in Hot Plug Hard Drive (342-1544)

    Additional Storage Products:

    600GB 15K RPM Serial-Attach SCSI 6Gbps 3.5in Hot Plug Hard Drive (342-1544)

    Additional Storage Products:

    600GB 15K RPM Serial-Attach SCSI 6Gbps 3.5in